
Run Powerful AI Language Models Locally Using Alpaca

No subscription fees, no prying eyes. Alpaca is a GUI for Ollama that makes it easy to run an LLM safely and privately on your own Linux PC.

Do you want to use an AI language model (often called an LLM, short for "Large Language Model") like ChatGPT as a digital assistant, but have concerns about how your data is handled? Do the features of Microsoft's Copilot appeal to you, but the potential privacy nightmare scares you away? Maybe you just don't want to fork over a subscription fee or get locked into a third-party ecosystem. Perhaps you'd prefer to embrace open-source software. Whatever your reasons, the solution might be running an AI model locally.

When I first started digging into running LLMs locally on Linux, I thought I’d have to rely on command-line tools like Ollama. While Ollama is perfectly capable, and running anything in a terminal has the potential to make you feel like a badass, it doesn’t exactly boast a beginner-friendly interface:

Ollama runs AI language models locally on your PC, but it's not exactly user-friendly.
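
For reference, here's roughly what the terminal route looks like. These are Ollama's documented install and run commands; the model tag is just an example:

```bash
# Download and run Ollama's official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model and start an interactive chat session in the terminal
ollama run llama3.1
```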

Later, I discovered LM Studio, a cross-platform solution that runs locally on Windows, macOS, and Linux. It's powerful and flexible, with a traditional GUI. But if I'm being honest, sometimes too much power and flexibility can be distracting or lead to breakage. Plus, the only part of LM Studio that's open source is its command-line tool. I'm not inherently against proprietary, closed-source software, but I appreciate that if a developer stops working on an open-source project, it can be forked and live on.

Via the Llama 3.1 language model, Alpaca explains the origin of "Tux," the mascot for Linux.

So my journey led me to Alpaca, which acts as a graphical frontend for Ollama. And when you drill down into what truly matters, the takeaway is simple: it's just easy to use.
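
Under the hood, a frontend like Alpaca is talking to Ollama's local REST API. As a rough sketch of what that traffic looks like (assuming a standalone Ollama install listening on its default port, 11434; Alpaca manages its own bundled instance, so yours may differ), a one-off request might be:

```bash
# Send a prompt to a locally running Ollama server (default port 11434);
# "stream": false returns one complete JSON response instead of chunks
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is Tux the Linux mascot?",
  "stream": false
}'
```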

Alpaca is streamlined and intuitive. It’s easy to install on any Linux distribution via Flathub, and it comes bundled with the Ollama backend. There’s no complicated configuration involved; just choose an AI model to download and start chatting.
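
If you'd rather install from the terminal, the Flatpak commands look like this (the application ID below is taken from Alpaca's Flathub listing, so double-check it against the current page):

```bash
# Install Alpaca from Flathub, then launch it
flatpak install flathub com.jeffser.Alpaca
flatpak run com.jeffser.Alpaca
```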

Alpaca does a nice job of walking you through the very few necessary steps to get up and running.

I started with Meta's newly released Llama 3.1, an openly available model that comes in 8 billion, 70 billion, and 405 billion parameter sizes. (Think of parameters as a rough measure of a language model's complexity: the higher the count, the more capable the model generally is.) Understandably, the 405 billion version is massive, requiring a 231GB download. For comparison, GPT-3 weighed in at 175 billion parameters. Running these full-fat models could bring the beefiest consumer PC to its knees, but in my experience the smaller models are perfectly capable digital assistants and chatbots.
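
For a sense of how those sizes map to downloads, Ollama's model library tags Llama 3.1 by parameter count; with the standalone CLI, pulling a specific size looks like this (Alpaca exposes the same choices in its model picker):

```bash
# Pull Llama 3.1 at different parameter sizes; larger tags mean
# dramatically larger downloads and memory requirements
ollama pull llama3.1:8b
ollama pull llama3.1:70b
ollama pull llama3.1:405b   # roughly a 231GB download
```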

Another plus: Alpaca integrates natively with your system's notifications, so you can minimize the app while it chugs away on a response and get notified when the answer is ready.

Having native notifications (here, in KDE Plasma) is helpful!

And if you decide a particular model isn't for you, deleting it and reclaiming some drive space is a snap. Just open the "Manage Models" menu, where you can search for, pull, and delete dozens of models.
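
For comparison, the same housekeeping with the standalone Ollama CLI boils down to two commands:

```bash
# List downloaded models and the disk space they occupy
ollama list

# Delete a model you no longer want
ollama rm llama3.1:8b
```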

The Alpaca project only came into existence in June 2024, and the first stable release launched just a couple of weeks ago. Improvements are landing steadily, but there is one papercut that may affect certain AMD GPU users: Alpaca taps into the CPU instead of the GPU, which makes processing slower. It appears to be a bug associated with ROCm, and the developer is aware of it and investigating a fix.
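
If you suspect you're hitting this, one way to check (with a standalone Ollama install; Alpaca's bundled instance may not expose the CLI) is Ollama's ps command, which reports where a loaded model is running:

```bash
# While a model is loaded, show where it's running;
# the PROCESSOR column reads "100% GPU", "100% CPU", or a mix
ollama ps
```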