In 95% of military AI war game simulations, autonomous systems launch nuclear weapons.
Not because they're malicious.
Because the math is clean: fewer people competing for resources means less conflict.
What does it mean to build a tool that reasons correctly to conclusions we can't accept?







