Ollama

Ollama — run large language models locally with a simple API. Supports Llama, Mistral, Gemma, Phi, DeepSeek, and many more models with automatic GPU acceleration.
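The "simple API" refers to Ollama's local HTTP server, which by default listens on `localhost:11434` and exposes a `/api/generate` endpoint for text completion. The sketch below, using only the Python standard library, shows the shape of a non-streaming request; the model tag `llama3.2` is an example and assumes the model has already been pulled locally.

```python
import json
from urllib import request

# Ollama's local server listens on localhost:11434 by default;
# /api/generate is its text-completion endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming generate request body."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the prompt to a running Ollama server and return the response text."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server with the model pulled):
# print(generate("llama3.2", "Why is the sky blue?"))
```

With `"stream": False` the server returns a single JSON object whose `response` field holds the full completion; omitting it yields a stream of newline-delimited JSON chunks instead.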

AI / Machine Learning · Free · 92.7M · 1.5K · 14d ago

About

Ollama emerged from the need to democratize access to large language models by making them accessible on standard hardware without requiring cloud infrastructure or API dependencies. Created as an open-source project, Ollama simplifies the complexity of downloading, configuring,…
