I Bought a Used Mac Studio to Run Local LLMs

How a LinkedIn post, a Gemini chat, and an impulse buy led to the machine that started FamilyKit.

I follow this guy Victor on LinkedIn. He runs local LLMs on Mac Minis and posts about it regularly. Every time one of his posts showed up in my feed, the same thought came back: why am I planning an x86 server when this guy is doing it on a Mac?

Credit where it’s due

I need to get something off my chest first. I’m not an Apple fanboy. Never was. The fact that Command and Control are swapped still annoys me after years of using a Mac. I don’t think I’ll ever fully get over that.

But I own an M1 MacBook, and the hardware is still excellent. Bought it years ago, still fast, still quiet, still lasts a full day on battery. Apple’s move from Intel to ARM was bold, and the execution was impressive. The power efficiency of the whole package is unmatched. Nothing in the x86 world comes close right now.

So why was I insisting on building an x86 server again?

Asking the AI

I did what you do in 2025. I asked Gemini for a Mac Mini configuration that could run local LLMs.

After some back and forth it suggested 48GB RAM and pointed me to something I hadn’t fully appreciated before: unified memory.

What is unified memory?

On a regular PC, the CPU and GPU have separate memory pools. Your system might have 32GB of RAM, but your GPU only has its own 8GB or 12GB of VRAM. Running an LLM means the model weights need to fit in GPU memory, because that’s where the fast inference happens. A 70-billion-parameter model needs around 40GB even at 4-bit quantization. Good luck finding a consumer GPU with 40GB of VRAM for under €1,500.

Apple Silicon works differently. The memory sits on the same chip package as the CPU and GPU. There’s no separate VRAM. All of it is shared, and both CPU and GPU can access it directly at full bandwidth. So a Mac with 64GB unified memory can load a 40GB model and just run it. No special GPU required.
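The arithmetic behind that 40GB figure is simple: weight memory is roughly parameter count times bytes per weight. Here’s a rough sketch of that estimate (illustrative only; real runtimes like llama.cpp or Ollama add KV-cache and context overhead on top of the weights):

```python
# Back-of-envelope estimate of LLM weight memory at different
# quantization levels. Weights only; inference adds KV-cache
# and runtime overhead, which is why "35 GB" becomes "~40 GB".

def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a dense model."""
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight / 1e9

for bits in (16, 8, 4):
    gb = model_memory_gb(70, bits)
    print(f"70B model at {bits}-bit: ~{gb:.0f} GB")
# 16-bit: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB
```

So a 70B model only fits in 64GB of unified memory once it’s quantized down to 4-bit, which is exactly the regime most local-LLM setups run in.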

That’s why Apple hardware suddenly makes a lot of sense for local LLMs, despite the fact that nobody in the homelab community seems to talk about it.

And then Gemini pointed me to used Mac Studios with M1 or M2 Max chips. 64GB unified memory.

The best kept secret in the homelab world

Used Mac Studios go for €1,400 to €2,000 in Germany depending on configuration. The 128GB variants cost more, obviously. But a 64GB M1 Max with decent storage? Very reasonable for what you get.

Compare that to my abandoned x86 build from the previous post: €900 for a machine with 16GB RAM that would have been useless for LLMs. The Mac Studio costs roughly double but can actually do the thing I wanted it to do.

The impulse buy

I bought a used Mac Studio M1 Max for €1,700. 64GB unified memory. 4TB SSD.

Was it an impulse buy? Probably. But with RAM and SSD prices going through the roof, just the equivalent memory and storage for an x86 PC would run you about €1,000 at that point. The Mac Studio came with all of that plus a GPU that can run large language models. The math worked. At least that’s what I told myself while clicking “Buy” this time.

OpenClaw (formerly Clawdbot, then briefly Moltbot) had just gone mainstream. I wanted something like that running at home. Private, on my own hardware, not sending everything to someone else’s servers.

Maybe a Mac isn’t the ideal server in the traditional sense. But for what I had in mind, it was the ideal machine. And the dream of Linux on Apple Silicon isn’t dead yet either. But that’s for another day.

Two problems, one machine

While waiting for the Mac Studio to arrive (a very long week), I kept thinking about what I actually wanted to build. Beyond the LLM experiments, two problems had been bugging me for years.

The photo problem. I talked about this in the previous post. Photos buried on hard drives, a phone that’s always full, nobody else in the family can access any of it. I wanted to set up Immich, a self-hosted Google Photos replacement, with proper backups and ransomware protection. Everyone in the family should be able to browse all our photos from their phone. I also wanted a digital photo frame in the living room that pulls from our own library. And eventually, let AI curate albums automatically. But first things first.

The paper chaos. I am terrible at organizing documents. Impressively bad. I spend way too much time searching for stuff I know I have somewhere. My dream setup: photograph any letter that arrives in the mail with my phone, post it to a chat room on a local messenger, and let the server handle the rest. Archiving, indexing, backup. When I need to find something later? Same messenger, just ask. Like having a personal assistant for paperwork, except it runs in my basement.

These two problems, combined with the LLM ambitions, shaped what would eventually become FamilyKit. Not as a product at that point. Not even as a name. Just the realization that this one machine could run our family’s digital life.

The Mac Studio arrived a week later. Time to get my hands dirty.


Previous: The €900 NAS I Never Built

Next up: Local LLMs on a Mac: From Magic to Disappointment

Get notified when the repo goes live.

One mail. Promise.