I’ve been having performance issues with VS Code on my laptop, mostly due to low RAM (it only has 16GB). I’m running Kubuntu 24.04 on it, and while it works well, it runs out of memory when I go full dev mode.
This is a nightmare when I go to work at the offices of a company I’m collaborating with, as I can’t work efficiently (and, depending on the project, at all!). So I came up with a project to vibecode, which will also let me put LLMs to the test!
I’m building a Visual Studio Code clone with way fewer features (only what I need), plus one feature I can’t really find anywhere unless I connect to my desktop via RustDesk: working on my laptop while using my desktop’s resources! Working title for the app: Visual Slop Code (to be changed later, if I ever decide it’s good enough to release to the public).
The plan
Build a cross-platform IDE, using Rust for the backend, React for the frontend, and Tauri as a framework. Here’s the overview from my initial plan.md file:
| Layer | Technology | Details |
|---|---|---|
| Frontend | React 19 + TypeScript + Vite 7 | SPA rendered in Tauri’s webview |
| Backend | Rust (Tauri v2) | PTY management, filesystem ops, git, file watcher, themes, LSP process mgmt, settings, agent command checking |
| Desktop Shell | Tauri v2 | Custom window decorations, cross-platform build |
| Editor | Monaco Editor 0.52 (via @monaco-editor/react) | IntelliSense, git decorations, blame, conflict resolution, LSP integration |
| Terminals | xterm.js 5.5 | Two PTY sessions: coding agent + interactive shell |
| Styling | Custom CSS variables (27 tokens) | 10 built-in themes + VS Code theme import |
| State | React useReducer + Context | Single global AppState with 35+ fields |
| Persistence | ~/.coder-app/settings.json | Theme, layout, recent projects, LSP settings, agent selection, custom agents, window geometry |
This pretty much works fine. I’m not interested in all the bells and whistles of VS Code on my laptop; I need something that works. It was implemented with GLM 5.1, with Sonnet 4.6 stepping in when things got difficult for GLM (not enough resources on the Z.AI platform).
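To give you an idea of how thin each backend layer is, here’s a rough sketch of what the settings persistence (the last row of the table) could look like on the Rust side. The struct fields and helper names here are illustrative, not the app’s actual schema:

```rust
// Rough sketch of the ~/.coder-app/settings.json persistence layer.
// Field names are illustrative, not the real schema.
use serde::{Deserialize, Serialize};
use std::{fs, path::PathBuf};

#[derive(Serialize, Deserialize, Default)]
struct Settings {
    theme: String,
    recent_projects: Vec<String>,
    selected_agent: Option<String>,
}

fn settings_path() -> PathBuf {
    let home = std::env::var("HOME").expect("HOME not set");
    PathBuf::from(home).join(".coder-app").join("settings.json")
}

fn load_settings() -> Settings {
    // Missing or corrupt file falls back to defaults.
    fs::read_to_string(settings_path())
        .ok()
        .and_then(|s| serde_json::from_str(&s).ok())
        .unwrap_or_default()
}

fn save_settings(settings: &Settings) -> std::io::Result<()> {
    let path = settings_path();
    fs::create_dir_all(path.parent().unwrap())?;
    fs::write(path, serde_json::to_string_pretty(settings).unwrap())
}
```

In the app, functions like these are exposed to the React frontend as Tauri commands, so the webview never touches the filesystem directly.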
The headless plan
This is where things get interesting: I gave both Claude Sonnet 4.6 and GLM 5.1 my requirements and asked them to devise a plan. The app should have three modes: Regular mode, Server node (headless), and Client node.
Regular mode is pretty much self-explanatory. You open the app and work on your computer as usual. But the Server node and Client node? What the IDE is going on here?
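For context, mode selection at startup could be as simple as a flag that picks one of three variants. Something along these lines (the flag names and port are placeholders, not the final CLI):

```rust
// Hypothetical startup-mode selection; flag names and port are placeholders.
enum AppMode {
    Regular,
    Server { port: u16 },          // headless, accepts client connections
    Client { server_url: String }, // thin frontend talking to a Server node
}

fn parse_mode(args: &[String]) -> AppMode {
    match args.get(1).map(String::as_str) {
        Some("--server") => AppMode::Server { port: 4020 },
        Some("--client") => AppMode::Client {
            server_url: args.get(2).cloned().unwrap_or_default(),
        },
        _ => AppMode::Regular,
    }
}
```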
Server node
This one does all the work: it fires up a headless instance of the IDE that accepts connections. It uses the resources of the device running it (CPU, RAM, disk/filesystem) and communicates with the Client node via WebSockets and a proxy server (more on that later), so you can work as if you were on your beast machine while using your little nutcracker! (Haha)
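A minimal sketch of what the headless listener could look like, assuming a tokio + tokio-tungstenite stack (the crate choice and port are my placeholders, not the final implementation):

```rust
// Sketch of the headless Server node accepting WebSocket connections.
// Crate choice (tokio + tokio-tungstenite) and port are assumptions.
use futures_util::StreamExt;
use tokio::net::TcpListener;
use tokio_tungstenite::accept_async;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:4020").await?;
    println!("headless server listening on 4020");
    loop {
        let (stream, peer) = listener.accept().await?;
        tokio::spawn(async move {
            if let Ok(mut ws) = accept_async(stream).await {
                println!("client connected from {peer}");
                while let Some(Ok(msg)) = ws.next().await {
                    // Each incoming message would be dispatched to the same
                    // backend code paths Regular mode uses (fs ops, PTYs,
                    // git, LSP, ...).
                    let _ = msg;
                }
            }
        });
    }
}
```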
Client node
The Client node is essentially the frontend of the IDE from Regular mode. Everything you do is communicated back to the Server node, which handles all the backend work (saving, loading, LSPs, etc.).
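The wire protocol isn’t final, but the rough idea is a typed message envelope shared by both sides and serialized over the WebSocket. Something like this (the variants below are placeholders for illustration, not the actual protocol):

```rust
// Illustrative request/event envelope for Client <-> Server traffic.
// Variant names are placeholders, not the real protocol.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
#[serde(tag = "type", content = "payload")]
enum ClientRequest {
    OpenFile { path: String },
    SaveFile { path: String, contents: String },
    TerminalInput { session: u32, data: String },
}

#[derive(Serialize, Deserialize)]
#[serde(tag = "type", content = "payload")]
enum ServerEvent {
    FileContents { path: String, contents: String },
    TerminalOutput { session: u32, data: String },
    FsChanged { path: String },
}
```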
Why add a proxy server?
The way I work on websites (mainly WordPress sites) is:
- Download the site (from the repo or the server), minus the uploads folder.
- Set up the database locally
- Add an .htaccess rewrite for the uploads folder: if a file is not found locally, load it from the production server
To avoid issues with rewrites (and WP’s search-replace), I run each project on its own domain (for example dimitrisp.test). It saves a few minutes each time I set up a project locally.
So, to be able to work on my projects remotely using the Server/Client separation, I need to be able to reach the project running on the Server node. The local machine has no resolver for the .test domains, and it would be counter-productive to add each hostname to the hosts file of the client machine. Instead, there will be a web-browser tab in the app that forwards all requests to the Server node; the Server node will analyze the domain and serve the dev project (bad explanation, but hopefully it makes the point).
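To make that a bit more concrete, here’s a toy sketch of the routing step on the Server node: take the Host header the browser tab sent over the tunnel, and map it to the local upstream that actually serves that .test vhost. The map and names are made up for illustration:

```rust
// Toy sketch of Host-header routing on the Server node.
// The routing map and upstream addresses are illustrative.
use std::collections::HashMap;

/// Map an incoming Host header to the upstream the Server node should hit.
fn resolve_upstream(host: &str, routes: &HashMap<String, String>) -> Option<String> {
    // e.g. "dimitrisp.test" -> the local web server on the Server machine,
    // which already knows how to serve that vhost.
    routes.get(host).cloned()
}

fn main() {
    let mut routes = HashMap::new();
    routes.insert("dimitrisp.test".to_string(), "http://127.0.0.1:80".to_string());

    let host = "dimitrisp.test";
    match resolve_upstream(host, &routes) {
        Some(upstream) => println!("forwarding {host} to {upstream}"),
        None => println!("unknown host {host}, returning 502"),
    }
}
```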
What about security?
Not sure about you, but all my machines are accessible via Tailscale and ZeroTier. Both the Server and Client nodes will use self-signed SSL certificates (not ideal, but not a cause for alarm either, since we’re already connecting through a private VPN). I’m not going to use this without a VPN, so it’s OK for now. If I ever need to use it without a VPN, I’ll come up with another plan.
Where do LLMs come into this plan?
At some point, GLM was having capacity issues and Claude was down. I didn’t want to stop, so I said, “I have a pretty solid plan.md file that is very well explained; why not try Minimax for something complex?” And I did!
It went better than expected: Minimax managed to implement a big portion of the plan without major hiccups, but at some point we got into some complete “I added this, must remove it. I removed this, must add it back” circles.
Qwen 3.6 also did great with another big chunk of the plan, no circles, but I had a few “overloaded” messages.
Both models had great thinking processes. However, they took a lot of time to implement things (probably because of capacity issues).
The most amazing thing though:
Remember when I said I asked Claude and GLM to make a plan for the headless mode? I fed each model the other’s plan. The consensus from both models was that the GLM plan was better formulated in most areas (especially the communication protocol), while Claude excelled at authorization and security planning.
I still haven’t finished everything; there is no demo yet for the Server/Client node separation. Hopefully by next week I’ll be able to test it. There is still a lot to be done!
See you next time!