
Running LLMs locally with Ollama
A step-by-step guide to installing Ollama on macOS and running large language models such as llama2 and Mistral entirely offline. Learn how to interact with the models through the chat interface, the local API, and even remotely via an ngrok tunnel.