From Try Hard to Die Hard: Climbing Floors with Agents
Last week we gave you a mild intro to coding agents and how to build a website. This week we want to step it up a shade and look at a less obvious benefit of the current coding-agent era.
We’re currently working on a gaming simulation product that we believe will revolutionize the way young candidates are evaluated for skills in the new world. One thing we’ve leaned on coding agents for is balancing game systems through repeated simulation. We understand game design is a niche most people won’t dip their paws into, but the benefit is broader: rapid iteration, juggling multiple parameters, and shifting entire products quickly based on an idea or a whim.
We wanted to show that process, but first we needed a game worth balancing. So we gave ourselves a one-day challenge: build a dice game. We picked dice on purpose because they’re mathematical, awkward to tune, and full of edge cases. The idea that made us laugh was a Die Hard parody where you beat henchmen in dice battles instead of gunfights.
The player climbs Nakatomi Plaza, beats henchmen with dice rolls, and faces a final boss in a poker-style showdown called “Nakatomi Roll’em.” There’s resource management, push-your-luck decisions, hand evaluation, AI opponent behaviour, animated dice, and an 80s arcade aesthetic. Small scope, but not exactly a toy. You can find the game below:
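The hand evaluation in a poker-style dice showdown is the kind of awkward-to-tune logic mentioned above. As a rough illustration (the actual Nakatomi Roll’em rules and category order are not specified here, so this is a hypothetical poker-dice ranking, not the game’s real one), a hand can be reduced to a tuple so that ordinary tuple comparison decides the winner:

```python
from collections import Counter

def rank_hand(dice):
    """Rank a five-die hand, poker-dice style.

    Returns a tuple that compares correctly: a higher tuple is a stronger
    hand. Category order here is a hypothetical one (five of a kind >
    four > full house > straight > three > two pair > pair > high die).
    """
    counts = Counter(dice)
    # Faces sorted by (count, face) so ties break on the higher face.
    by_strength = sorted(counts, key=lambda f: (counts[f], f), reverse=True)
    shape = sorted(counts.values(), reverse=True)

    if shape == [5]:
        category = 7                                   # five of a kind
    elif shape == [4, 1]:
        category = 6                                   # four of a kind
    elif shape == [3, 2]:
        category = 5                                   # full house
    elif sorted(dice) in ([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]):
        category = 4                                   # straight
    elif shape == [3, 1, 1]:
        category = 3                                   # three of a kind
    elif shape == [2, 2, 1]:
        category = 2                                   # two pair
    elif shape == [2, 1, 1, 1]:
        category = 1                                   # one pair
    else:
        category = 0                                   # high die
    return (category, *by_strength)

# Comparing two hands is then plain tuple comparison:
# rank_hand([3, 3, 3, 3, 2]) > rank_hand([6, 6, 5, 5, 4])
```

The payoff of the tuple encoding is that the AI opponent’s betting logic never needs to know about categories; it can just compare ranks.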
I’m not exactly a game developer. Before SimplyfAI, I had never balanced a game economy, written a hand-ranking algorithm, or tuned an AI opponent’s fold/call behaviour. A year ago, I still could have built this, but it probably would have taken weeks and I likely would have stopped once the prototype “worked.”
You already know AI can write and execute code. Let’s focus on something more interesting.
Balancing a Game: Tuning and Simulations
If you look at the game, you’ll notice we have difficulty levels (Easy, Medium, Hard). We also had several inputs we were prepared to tweak, such as:
- starting dice
- henchman health by floor
- loot dice counts
- Gruber’s dice pool / pressure in the finale
We set target win rates for the game:
- Easy: 75%
- Medium: 50%
- Hard: 25%
Then we ran thousands of simulated games, missed the targets, didn’t like the flow, adjusted parameters, and reran. Within minutes we had settings close enough to test properly.
One small technical detail, because it matters here: the simulation was a Monte Carlo runner using the same core rules engine as the playable prototype. That means the same hand-ranking logic, the same boss betting logic, and the same floor outcomes were being used in both places. We weren’t balancing a separate spreadsheet version and hoping it matched the real game later.
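The loop above can be sketched in a few lines. This is a toy stand-in, not the real runner: the parameter names (`starting_dice`, `henchman_health`) and the way they map to a win probability are invented for illustration; in the real setup, `simulate_run` would call the same rules engine the playable prototype uses.

```python
import random

# Target win rates from the tuning goals above.
TARGETS = {"easy": 0.75, "medium": 0.50, "hard": 0.25}

def simulate_run(params, rng):
    """Stand-in for one simulated game.

    The real version plays a full game through the shared rules engine;
    this toy converts the tunable parameters into a win probability so
    the loop is runnable on its own.
    """
    edge = params["starting_dice"] * 0.1 - params["henchman_health"] * 0.05
    win_prob = min(max(0.5 + edge, 0.0), 1.0)
    return rng.random() < win_prob

def estimate_win_rate(params, n_games=10_000, seed=42):
    """Monte Carlo estimate: play n_games and count wins."""
    rng = random.Random(seed)
    wins = sum(simulate_run(params, rng) for _ in range(n_games))
    return wins / n_games

params = {"starting_dice": 5, "henchman_health": 8}
rate = estimate_win_rate(params)
target = TARGETS["medium"]
print(f"win rate {rate:.3f} vs target {target:.2f} (gap {rate - target:+.3f})")
```

Tuning then becomes: look at the gap, nudge a parameter, and rerun. Because each estimate takes seconds, you can afford to change your mind about the design, not just the numbers.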
Yes, you could always do Monte Carlo sims in Python, R, or Excel. That’s not the new thing.
Here’s the useful shift:
It’s not just “AI can optimize.”
It’s that the cost of changing your mind drops.
You can try something, simulate it, play it, hate it, change it, and rerun everything without spending two days rewiring your project.
Why that matters in practice
The way we think about executing ideas might need to change. We’ve been trained to master components one by one, but now there’s real value in building flexible harnesses that can move with you. A rules engine. A playable prototype. A simulation runner. A few tuning outputs. Then you turn the knobs and see what happens.
This Die Hard parody started as a joke and became a real playable prototype with tuning runs behind it simply because iteration got cheap enough to keep going.
Don’t be afraid to jump into new areas and test new waters. Not seeming like a fit in a space might be exactly why you bring new energy to it.
I mean, it might not fit the stereotypes, but not only is Die Hard a Christmas movie… it’s the best one there is.