Personal project - My local MCP server journey (LM Studio + Docker + MCP)
Building a private, offline ‘second brain’ that turns local models into a security-focused assistant I can rely on anywhere, even on long flights home.
After wanting a proper local MCP setup for a long time, I finally have one running on my own machine. In the past, my Radeon GPU made most “easy” local LLM setups painful, but combining Docker Desktop’s MCP integration, an Obsidian container, and LM Studio now gives me exactly what I wanted: a local-first stack that keeps everything on my hardware instead of in the cloud. Since I care a lot about privacy and security, having all my notes and models running locally is a huge win. I’ll also link the setup and the main resources that inspired this build here.
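For anyone curious what the container side of a stack like this can look like, here is a minimal docker-compose sketch. The image name, volume path, and port are all assumptions (I’m using the community `lscr.io/linuxserver/obsidian` image as an example; swap in whatever matches your actual setup):

```yaml
services:
  obsidian:
    image: lscr.io/linuxserver/obsidian   # assumed image; use the one from your own setup
    container_name: obsidian
    volumes:
      - ./config:/config                  # assumed host path; the vault lives inside this volume
    ports:
      - "3000:3000"                       # web UI port for this image; adjust if yours differs
    restart: unless-stopped
```

From here, Docker Desktop’s MCP integration and the MCP server config point at the same vault folder, so the notes the container edits are the notes the assistant reads.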
Right now, the core pipeline works: Obsidian syncs into the MCP server, LM Studio serves the models, and I can query everything from a single interface, even offline. That means I can run these models on my long flights home and actually get meaningful work done, which was basically impossible before. The next step is to clean up how my Obsidian vault is structured and tie it in more directly with my lecture notes and the main areas of my life: the design team, council, and the 3D print shop. Once that’s in place, this should become my go-to automation whenever I need to quickly compile or present what I’ve been working on lately.
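The query side of this pipeline can be sketched in a few lines. LM Studio exposes an OpenAI-compatible server locally (port 1234 by default); the helper names below, the placeholder model name, and the idea of pasting retrieved notes into the system prompt are my own simplification of the flow, not the exact MCP wiring:

```python
import json
import urllib.request

# LM Studio's local OpenAI-compatible endpoint (default port 1234;
# adjust if you changed it in LM Studio's server settings).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"


def build_chat_request(question: str, notes: list[str]) -> dict:
    """Bundle a question plus retrieved Obsidian notes into one chat payload.

    Hypothetical helper: in the real stack the MCP server supplies the
    notes; here they are passed in directly for illustration.
    """
    context = "\n\n".join(notes)
    return {
        # "model" is whatever model is loaded in LM Studio; this is a placeholder.
        "model": "local-model",
        "messages": [
            {
                "role": "system",
                "content": "Answer using only the provided notes.\n\n" + context,
            },
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
    }


def ask(question: str, notes: list[str]) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_chat_request(question, notes)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires LM Studio's server to be running):
# answer = ask("What did I note about MCP?", ["MCP = Model Context Protocol."])
```

Because everything resolves to localhost, the same call works with the network switched off, which is exactly what makes the in-flight use case possible.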