Outpost 2 Programming & Development / Re: Want Outpost 2 on my Mac - learning the hard way with AI
« Last post by jonathangoorin on April 14, 2026, 02:47:53 AM »
@BlackBox first of all, thanks for the reply - really appreciate it.
I am a real person. Right now I work as an AI solutions architect, but I come from years of Unity game dev and DevOps. Outpost is one of my favorite games, so this whole thing is a fun flex/challenge for me.
I do write through an LLM prism because it is faster and helps with wording/spelling. I mostly talk to the forum through my MCP server bridge, so posts can sound more structured than normal forum chat.
Second - yes, the thread title says "run on Mac", but for me it is more of an RE exercise for fun. Basically: let's see if I can wing it and learn while doing it.
I admit I am nowhere near the RE knowledge level of many people here. I am still learning the basics (including how to do proper debug sessions). A lot of this I am learning from agents step by step. What I am better at is building compile/decompile pipelines and the automation around them.
I keep this thread as a log of the effort.
About the repo link: I closed it because I had started putting sensitive deployment data there. I do plan to publish a clean copy once I get things into a safer state.
So the crazy idea is:
- Set up an OP2-era compile environment.
- Generate synthetic data (small programs/modules in that era's style/tooling).
- Decompile with Ghidra.
- Build source <-> decompile training pairs.
- Train/evaluate decompilation-focused models on those pairs.
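To make the pairing step concrete, here is a minimal sketch of turning matched source and Ghidra output into JSONL training pairs. Everything here is an assumption about my own layout, not a finished tool: the directory structure, the file-stem matching, and the JSONL schema (`source`/`decompiled`/`compiler`) are all placeholders I expect to change.

```python
import json
from pathlib import Path

def make_pair(src_path: Path, dec_path: Path) -> dict:
    """Build one source <-> decompile training pair."""
    return {
        "source": src_path.read_text(),      # original era-style C/C++
        "decompiled": dec_path.read_text(),  # Ghidra pseudo-C for the same unit
        "compiler": "msvc-4.20",             # provenance tag, useful for filtering later
    }

def write_pairs(src_dir: Path, dec_dir: Path, out_file: Path) -> int:
    """Match source and decompiled files by stem, emit one JSON object per line."""
    n = 0
    with out_file.open("w") as out:
        for src in sorted(src_dir.glob("*.cpp")):
            dec = dec_dir / (src.stem + ".c")
            if dec.exists():
                out.write(json.dumps(make_pair(src, dec)) + "\n")
                n += 1
    return n
```

JSONL keeps it trivial to stream pairs into whatever training loop comes later, and the `compiler` tag means pairs from different toolchains can live in one dataset.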
What I have done so far:
- Started building a cross-compilation flow for MSVC 4.20.
- Using a Win2K VM on QEMU with VS 4.20 (I started with NT4, but telnet there was painful).
- Mount folders over SMB, move files over telnet, run compile commands in the VM.
- Got test scripts working for this loop.
- Now containerizing the system so it is more encapsulated/coherent.
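On the host side, most of this loop boils down to translating paths into what the guest sees on its mounted share and composing a CL invocation to push over the telnet session. A sketch of just that command construction - the `Z:` drive letter and the compiler flags are illustrative assumptions about my setup, not gospel:

```python
from pathlib import PureWindowsPath

def guest_path(rel: str) -> str:
    """Translate a share-relative path into the guest's view of the SMB mount.
    Assumes the Win2K VM maps the shared folder as drive Z:."""
    return str(PureWindowsPath("Z:/") / rel)

def cl_command(src_rel: str, out_rel: str) -> str:
    """Build an MSVC-era CL.EXE command line to run inside the VM over telnet.
    /nologo keeps the telnet output parseable; /c compiles without linking."""
    return f"CL /nologo /c {guest_path(src_rel)} /Fo{guest_path(out_rel)}"
```

Keeping this as pure string construction on the host makes the loop easy to unit-test without spinning up the VM at all.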
Next plan:
- Build a Metaflow-style pipeline: snippet -> compile -> decompile -> paired artifact.
- Measure base LLM delta against these pairs.
- Then fine-tune LoRAs specialized for this compiler/style.
- Track experiments in MLflow on K8s with cloud GPU infra.
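The snippet -> compile -> decompile -> paired artifact flow from the plan above can be sketched as plain chained stages before wiring it into an actual Metaflow step graph. The `compile_fn`/`decompile_fn` callables are stand-ins: in the real pipeline they would shell out to the VM loop and to Ghidra's headless analyzer respectively.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PairArtifact:
    source: str       # the generated snippet
    binary: bytes     # object code from the era compiler
    decompiled: str   # pseudo-C recovered from the binary

def run_pipeline(
    snippet: str,
    compile_fn: Callable[[str], bytes],
    decompile_fn: Callable[[bytes], str],
) -> PairArtifact:
    """One end-to-end pass: each stage's output feeds the next,
    mirroring how a Metaflow step graph would chain them."""
    binary = compile_fn(snippet)
    return PairArtifact(snippet, binary, decompile_fn(binary))
```

Injecting the two stage functions keeps the pipeline testable with cheap fakes long before the VM and Ghidra steps are containerized.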
I know about the IDA databases and even got old Olly material from Leviathan. I plan to incorporate that later. Right now I mainly want to prove I can stand up this whole pipeline end-to-end.
Anyway - this is part curiosity, part learning project. I am not claiming I found some magic shortcut. I am posting progress as I go.
