Futurology: Researchers have managed to simulate the iconic Counter-Strike: Global Offensive map Dust II entirely within a neural network running on a single RTX 3090 GPU. While the clips are equal parts impressive and glitch-ridden, they showcase generative AI's shocking progress in mimicking full 3D game environments.
One of the people working on the project, Eloi Alonso, took to X/Twitter to flaunt footage of the DIAMOND (Diffusion for World Modeling) simulation in action. At first glance, despite output at a lowly 10 FPS, the gameplay is fairly convincing and coherent if you take things slowly. You can wield guns, reload, see muzzle flashes, and even experience recoil.
However, things start to fall apart when you realize the model isn't actually running CS:GO's engine. Researchers fed the network footage of deathmatches on Dust II until it could essentially "hallucinate" its own approximation of the classic map and its gameplay. The GitHub page notes that over five million frames, or roughly 87 hours of gameplay, were used for training. The trained model then generates everything in real time on the RTX 3090.
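For the curious, here is a rough, heavily simplified sketch of how an action-conditioned world-model rollout loop of this sort hangs together: each new frame is generated from the last few frames plus the player's keyboard-and-mouse input. The TinyFramePredictor network, tensor shapes, and fixed-step refinement loop below are illustrative placeholders, not DIAMOND's actual architecture or code.

```python
# Toy sketch of an action-conditioned world-model rollout loop (not DIAMOND's code).
import torch
import torch.nn as nn

class TinyFramePredictor(nn.Module):
    """Toy stand-in for a denoiser conditioned on past frames and the player's action."""
    def __init__(self, context_frames=4, action_dim=16, channels=3):
        super().__init__()
        in_ch = channels * (context_frames + 1)  # noisy frame stacked with context frames
        self.action_proj = nn.Linear(action_dim, in_ch)
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, noisy_frame, context, action):
        x = torch.cat([noisy_frame, context], dim=1)        # (B, in_ch, H, W)
        x = x + self.action_proj(action)[:, :, None, None]  # inject the action as a per-channel bias
        return self.net(x)                                   # predicted clean frame

@torch.no_grad()
def rollout(model, context, actions, refine_steps=4):
    """Autoregressively hallucinate frames: each frame starts as noise and is refined
    while conditioned on the last few generated frames and the current action."""
    frames = []
    for action in actions:                         # one action per generated frame
        frame = torch.randn_like(context[:, :3])   # start from pure noise
        for _ in range(refine_steps):              # crude fixed-step refinement
            frame = model(frame, context, action)
        frames.append(frame)
        context = torch.cat([context[:, 3:], frame], dim=1)  # slide the context window
    return frames

# Example: hallucinate 8 frames of 64x64 "gameplay" from random actions.
model = TinyFramePredictor()
context = torch.zeros(1, 3 * 4, 64, 64)            # 4 blank context frames
actions = [torch.randn(1, 16) for _ in range(8)]
print(len(rollout(model, context, actions)), "frames generated")
```

In the real project the predictor is a proper diffusion model trained on the recorded deathmatch frames; the point of the sketch is simply the autoregressive loop that lets one GPU both learn the game and replay it.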
Ever wanted to play Counter-Strike in a neural network?
These videos show people playing (with keyboard & mouse) in DIAMOND's diffusion world model, trained to simulate the game Counter-Strike: Global Offensive.
Download and play it yourself → https://t.co/vLmGsPlaJp
pic.twitter.com/8MsXbOppQK
– Eloi Alonso (@EloiAlonso1) October 11, 2024
That’s when you start to see the glitches. Since the simulation doesn’t grasp concepts like gravity or collision detection, gameplay physics go out the window. Players can jump endlessly to basically fly, weapons bizarrely morph under certain lighting, and quick movements dissolve the environment into an abstract blurry mess. You can even phase through solid walls like some sort of ghostly interdimensional being.
Of course, if you want the real, non-nightmare fuel Dust II experience, you can just download Counter-Strike 2 on Steam right now and enjoy playing it at framerates that don’t look like slideshows. An RTX 3090 certainly isn’t required – in fact, the game’s optimized enough to run fine on just 1GB of VRAM.
Still, while Alonso's AI experiment is just that, an experiment, it represents a major milestone for on-device AI processing. The model was trained entirely on a single GPU, with that same card then powering the generative real-time simulation.
Demos like this are rare, but this isn't the first time generative AI has attempted to recreate gaming experiences. For instance, a Google team recently unveiled GameNGen, which used a custom Stable Diffusion model to generate a Doom level in real time.
Likely taking developments like these into account, famed developer Peter Molyneux predicted AI will eventually create “huge parts” of games, from characters and animations to dialogue and in-game assets.