All the world’s polygons
How real is your world? How do you know? Maybe it’s the gentle sway of leaves in the wind. Or the sound of crickets chirping at dusk. Or the softness of the light in the summer. Take a step back, blink. Turn your head to the side. Are you sure?
From the earliest 8-bit bush in The Legend of Zelda (1986) to the peatbog sublime of Death Stranding (2019), video games have long been on a quest for perfect simulation. The benefits are obvious: more convincing worlds equal more immersive gameplay; more immersive gameplay equals more profit. From real-time weather systems to 3D scanned rainforests, an economy of simulated nature has emerged to answer gaming’s demands. But is this desire for ecological simulation also a kind of capture?
This essay explores how simulation in gaming carries echoes from the past even as its implications careen towards the future. The same computing platforms powering next-gen immersive gaming are also fueling long-range climate forecasts and evaluating proposals to modify the earth’s atmosphere. In other words, the same climate simulator powering the real-time weather system of the next Grand Theft Auto is also going to tell us whether geoengineering is a good idea.
Consider the real-time digital models of the planet currently being assembled—also known as digital Earth twins or Earth Virtualization Engines—as the conceptual sibling of the natural history museum, with all the related baggage. Consider the implications of this archive that floats incandescently above us in the cloud, the energy funneled into its maintenance ironically contributing to the slow death of the real thing. What new ways of relating to the world do simulation technologies open, and what do they inevitably foreclose?
I. Two kinds of wind
In cinema, there is the wind that blows and the wind blown by a machine. In computer games there is only one kind of wind.
—Harun Farocki, Parallel I (2012)
Harun Farocki, Parallel I, 2012, HD video installation, 2 channels, color, sound, 17 min.
When the German artist and filmmaker Harun Farocki transcended this mortal plane in 2014, he left behind one of the most prescient analyses of simulation in gaming. Across four acts, Parallel I–IV (2012–2014) analyzes the perpetual feedback loop between simulation and culture. The project explores how virtual environments map the present and anticipate the future, even while largely reinforcing historical ways of knowing the world.
It is ironic that the video game industry, whose output is largely centered around violent and human-centric modes of play, has chosen environmental realism as its representational benchmark. Trees shuddering in the wind, clouds unfurling overhead, dappled sunlight on swaying leaves: these have long been the stress test of computational photorealism.
Video games must render their worlds in real time with each playthrough, which requires an immense amount of computational muscle. As the software grows more powerful, the hardware must follow, exponentially. Consider that the CRAY-1—an early supercomputer built from dairy pipes and hoses in Wisconsin in the mid-1970s—is considerably less powerful than the smartphone in your pocket. Now consider the world’s most powerful supercomputer—the exascale system El Capitan, brought online in November 2024—which runs at a speed of 1.7 exaFLOPS, almost 2 quintillion calculations a second.
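The scale of that leap resists intuition; a back-of-envelope comparison, using widely reported peak figures (order-of-magnitude estimates, not benchmarks), helps:

```python
# Back-of-envelope comparison of peak speeds, using widely reported
# figures; treat these as order-of-magnitude estimates, not benchmarks.
CRAY_1_FLOPS = 160e6         # ~160 megaFLOPS (mid-1970s)
EL_CAPITAN_FLOPS = 1.742e18  # ~1.742 exaFLOPS (Top500, November 2024)

ratio = EL_CAPITAN_FLOPS / CRAY_1_FLOPS
print(f"El Capitan is roughly {ratio:.1e} times faster than the CRAY-1")
# -> roughly 1.1e+10: about ten billion CRAY-1s' worth of arithmetic per second
```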
More real, more beautiful: with the arrival of titles like Sea of Thieves (2018), The Legend of Zelda: Breath of the Wild (2017) and Firewatch (2016), we edge ever closer to the ludic sublime. And yet, the technologies underpinning these worlds are streaked with violence. Tennis for Two (1958), largely considered the first video game, was crafted with missile-trajectory technology by a group of bored physicists killing time in the lab. Spacewar!, developed four years later on the PDP-1 computer model, was funded by the Pentagon and later used in military training.
The Pentagon-funded Spacewar! (1962), among the earliest video games, displayed on a PDP-1, the computer used to program it.
And yet, a realistic “landscape”—for this is how game environments are termed, a choice of syntax that reinforces their status as secondary and inert—matters. The believability of a body of water or a blade of grass aggregates to reinforce what the historian Johan Huizinga termed the “magic circle”, or narrative immersion, of a game.
Farocki’s death predated the Cambrian explosion of simulated ecology in the late twenty-teens, realized through megamall-like libraries of thousands of free-to-use 3D scanned assets as well as powerful photogrammetry mobile apps, which allow anyone to make a 3D model of virtually anything. One wonders what the artist would think about a photorealistic trunk, scanned in from Iceland, that now appears simultaneously in a medieval RPG, a first-person adventure game about a park ranger on the run from his dying wife, and a posthuman mycelium-zombie survival game, amongst thousands of other titles. Building lifelike game environments has never been so easy; nor has photorealism ever been so photocopied, and, in a sense, unremarkable. The question then becomes: what else might simulated environments be capable of?
II. All the world’s polygons (Rainforest)
We live to capture this world to give life to countless others. We capture the world so you can create your own.
—Quixel advertising campaign, 2018
A terrain scan of the Vasquez Rocks State Park, currently for sale on Unreal Engine’s new asset marketplace, Fab.
There is a park 40 miles northwest of Los Angeles that looks Martian, as if plucked from some science-fiction story: gigantic primordial rock formations thrust sideways into the sky like cosmic daggers. Its cinematic otherworldliness is no stranger to Hollywood: Vasquez Rocks State Park lies within the industry’s 30-mile “studio zone”, which keeps actor and crew day rates cheap. As a result, dozens of iconic movies—pre-CGI worldbuilding—were filmed here. Over the past century, this ancient landscape has been Dr. Evil’s underground complex in Austin Powers, the planet Vulcan in Star Trek, and, a little unconvincingly, Dracula’s rural Transylvania. Hollywood’s “plug-and-play” approach to the Vasquez Rocks set a precedent for its contemporary use of virtual production.
As an increasingly integral tool in the industry’s production pipelines, the game engine can be thought of as one huge stage set. Things are sculpted, painted, meticulously lit, and precisely filmed. There is a constant negotiation between environmental complexity and render efficiency that determines how virtual worlds are made; this tension is crucial to how such engines have evolved over the years.
In computer graphics, each 3D model is made up of polygons: flat surfaces defined by their vertices. A higher-poly model is more realistic, but it also requires more computational power to render, causing more lag.
To combat this, game engines render only the polygons visible on screen, adjusting each model’s level of detail according to its distance from the camera; at the far end of that spectrum, full 3D geometry is swapped out for a flat, camera-facing image, a technique known as “billboarding”. As the name implies, “billboarding” reinforces the understanding of virtual environments as mere backdrops against which the “real” action of game worlds takes place.
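The trade-off is simple enough to sketch in code. What follows is a toy illustration of distance-based level-of-detail selection, not any engine’s actual implementation; the thresholds, polygon counts, and names are invented:

```python
# A toy sketch of distance-based level-of-detail (LOD) selection.
# All thresholds, polygon counts, and names here are hypothetical.
from dataclasses import dataclass

@dataclass
class LevelOfDetail:
    name: str            # e.g. "high-poly mesh", "billboard"
    polygon_count: int   # rendering cost of this representation
    max_distance: float  # farthest camera distance at which it is used

# Coarser representations take over as the camera recedes; the last entry
# is the flat, camera-facing image known as a billboard.
TREE_LODS = [
    LevelOfDetail("high-poly mesh", 50_000, 20.0),
    LevelOfDetail("mid-poly mesh", 5_000, 80.0),
    LevelOfDetail("low-poly mesh", 500, 200.0),
    LevelOfDetail("billboard", 2, float("inf")),
]

def select_lod(camera_distance: float) -> LevelOfDetail:
    """Pick the cheapest acceptable representation for this distance."""
    for lod in TREE_LODS:
        if camera_distance <= lod.max_distance:
            return lod
    return TREE_LODS[-1]

# A forest of 10,000 trees rendered naively at 50,000 polygons each would
# cost half a billion polygons per frame; with LOD, only nearby trees pay
# full price.
for distance in (10.0, 50.0, 150.0, 500.0):
    lod = select_lod(distance)
    print(f"{distance:>6.1f} m away -> {lod.name} ({lod.polygon_count} polygons)")
```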
A cropped illustration from the German cleric and optical scholar Johann Zahn’s System der visuellen Wahrnehmung beim Menschen (1687) depicting emission theory, an ancient theory of vision proposing that visual perception is achieved through beams emitted by the eyes.
Wherever the camera’s gaze turns, reality sprouts. Curiously, this representational tactic overlaps with ancient theories of vision, in which sight was believed to emanate outwards from the eyes, a concept known as “emission theory”. In a sense, this camera-centric computation of the game engine interface also mirrors the essentialist origins of ecosystem science. In the 18th and 19th centuries, the timber economy introduced a centralized technique for tracking and monitoring forest ecologies.
This “fiscal forestry” only visualized value: any parts of the ecosystem considered nonprofitable—fauna, fungi, and non-timber flora—were simply left out of the model, as if they didn’t exist. But there is a kind of poetic undertone to the game engine’s hyper-efficient framework. It suggests that the virtual world is natively complex and will reveal itself in slivers, only materializing when the player pays close attention.
A sliver of continent-sized offerings from digital asset company Xfrog’s ecological warehouse (screen capture, https://www.xfrog.com/com/).
Despite this unprecedented efficiency, rendering nature is still a drag; animating foliage and flowers is tough work, and building biomes requires at least some specialist knowledge. Companies like Xfrog were early responses to this demand, offering a suite of hyperrealistic 3D natural assets that use procedural algorithms to mimic ecological randomness. These platforms catered principally to architectural designers who needed a believable, geolocationally accurate backdrop to cinch the deal on their speculative buildings (clients include Frank Gehry and Zaha Hadid Architects). Xfrog offers all the world’s polygons for the low price of $600; customers can home in on particular continental ecologies—Asia, Europe, “the Americas”—at $200 each.
Epic Games, the maker of Unreal Engine, deployed an ecological countermove in 2019: it bought Quixel, a company founded in Sweden in 2011. From 2011 to 2024, Quixel offered an ever-expanding library of environmental 3D assets, scanned in by hand by its 250 globe-trekking employees. In 2025, Epic is consolidating Quixel with its own marketplace and onetime rival Sketchfab, transforming these world inventories into purchasable assets. Beyond the classic bait-and-switch headache of paywalling a resource the company promised (and profited from promising) would always be free, the move represents a problematic paradigm shift for game worlds to come: it both decreases the engine’s accessibility and puts an unprecedented price tag on ecological representation. Copyright over, say, a 3D scan of a rose is a particularly thorny question—does the likeness of a rose belong to the rose’s owner, the owner of the scanner, or the owner of the platform? Scanning in a flower or fjord is an objective process; data is captured and transformed into a decipherable object. There is little leeway for artistic license. Considering this, what does Epic’s decision to privatize its ecological haul suggest about its claims over the real world as well as the game world?
With a not-so-humble mission to scan the whole world, Quixel is well on its way: its offerings range from Boston ferns to horse manure, Cambodian ruins to chicken nuggets. All in all, that’s over 16,000 textures, brushes, plants, manmade and natural objects, alongside photorealistic scans of entire real-world scenes. Occasionally, these goods are also packaged as “collections”: off-the-peg curated worlds for the taking. At the time of writing, the current collections offered by the Quixel platform are Medieval Banquet, Junkyard Vol 1, and Roadside Construction.
A photorealistic, lore-infused scan of a trash pile. This 3D scan is just one of many environmental offerings from Quixel’s free-to-use “megascans” library, which will be moved over to Epic Games’ pay-to-play Fab marketplace at the end of 2024.
Like a cyber scavenger, I sift through the digital detritus of the junkyard, which, as its name hints, will eventually be replaced with a better—junkier?—edition. Scrolling past photorealistic rusted axles, wall-mounted chains and a filthy cluster of oil containers, my eye is drawn to the humble “trash pile”. Zooming in on the cluster of nondescript garbage, I catch flickers of an engineered story: a gummy scrap of paper with the word “dentist” scrawled on it; a sealed envelope, tantalizingly face-side down, whose contents could be anything. I think about the type of person who art directs these assets—positioning them just so and then slowly scanning them in—immortalizing these worlds within worlds. Where do these objects come from? What kinds of stories will they scaffold?
The same SpeedTree asset appears in two major AAA titles. Image courtesy of the ever-omniscient anonymous Reddit crowd.
Quixel and its predecessors like Xfrog are ultimately finite world inventories. Particularly astute gamers sometimes catch the same asset appearing in different titles, triggering what’s known as an “immersion break”: when a player is violently ejected from the believability of the virtual world. Yet with the rapid leaps being made in smartphone-based 3D scanning apps, and an increasing number of budding game designers scanning in real-world assets to personalize their games, it’s possible that a “whole Earth” database, comprising millions of contributors’ scans, may come into existence in the near future. Teddy Bergsman, founder of Quixel, sees it happening in the next five years.
As dazzling as the concept of a planet-sized inventory of hi-fidelity natural assets may be, there is an irony to this digital doubling. Recalling the museological impulse to immortalize in glossy vitrines all the world’s life—a desire fueled by colonialism, the invention of modern taxonomy in the 18th century, and the conceptualization of extinction in the early 19th—this ecological doubling feels, inescapably, like a preemptive memento mori for a world in crisis.
With Quixel’s free-to-use library imminently moving behind the paywall of the pay-to-play Fab marketplace, the blatant commodification of environmental realism takes on an additionally sombre note. As I examine the 8K textural realism of a lichen-covered rock scanned in from the borderlands between boreal forest and Arctic tundra—a discounted offering in Fab’s Cyber Monday sale—I am left with the darker implications of its capture. Between the carbon burned to achieve this scan of such a fragile ecotone and the energy future gamers will expend playing the ecologically accurate levels it appears in, its position on the marketplace feels like an environmental estate sale. Does a playable replica of all the world’s polygons neutralize the desire to protect the real thing?
III. Second bodies and twin planets
For the second body, there is no stable boundary between one species and another: we’re all in the same boat.
—Daisy Hildyard, The Second Body (2017)
Concept art for NASA’s GEDI system, which is running on the ISS until 2028. GEDI is producing the first high-resolution “4D” map of the Earth’s forest systems and their carbon storage capacities.
What is the second body? Let’s start with the first. It’s fairly straightforward: the one you can semi-see in front of you when you look down at your hands or legs. It’s You: bounded. The second body is a little less easy to define. It’s the flights you take, the food you consume, the clothes you buy, the miles you drive; and simultaneously, it’s these actions acting upon a global, shared body. You can think of these bodies as a pair of nested dolls, except wired up to each other, one body sensing and influencing the other in a perpetual cycle. No player vs landscape. No subject vs backdrop. No real separation at all.
How to describe this feeling? “The language we have at the moment is weak,” Hildyard suggests, frustrated by the apparent impossibility of articulating an idea that butts up against the edges of language itself. “I want to start,” she writes, “by talking about the whole world.”
It’s planets all the way down. The Second Body hints at the idea of an interconnected biosphere that was earlier hypothesized by James Lovelock and Lynn Margulis in their Gaia hypothesis, published in 1974. Contrast this with Earthrise, the famous image of Earth rising over the lunar horizon, captured from Apollo 8 in 1968. Earthrise is often said to have triggered an awareness of the ecological co-dependency of the planet; paradoxically, it also frames Earth as a closed, bounded system that can be mapped, modeled, and represented in its entirety.
With the concept sketch for a so-called “digital Earth” captured by a NASA astronaut hovering over the moon, it is no surprise that the term “digital twin” was also coined in-house at NASA, in 2010. Digital twins are real-time virtual models of physical objects or systems. The technology is hardly new, but within the last couple of years it has increasingly intersected with everyday life: airports use digital twins to monitor passenger flow and security; cities from Singapore to Los Angeles are integrating them into urban planning processes; surgeons rehearse on them before making a first cut. But the concept of scaling up to an Earth-sized digital twin—including the planet’s atmosphere and weather system—is a newer, and more complicated, endeavor.
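Stripped of planetary scale, the underlying loop is easy to sketch: keep a virtual state synced to sensor readings, then run it forward to ask what-if questions. The following is a toy illustration with invented names and numbers, not any vendor’s API:

```python
# A toy sketch of the digital-twin loop: a virtual model is kept in sync
# with measurements from its physical counterpart, then run forward to
# test scenarios without touching the real system. All names and numbers
# are hypothetical.

class DigitalTwin:
    def __init__(self, state: dict):
        self.state = dict(state)  # the twin's current mirror of reality

    def ingest(self, sensor_reading: dict) -> None:
        """Sync the virtual state with fresh measurements."""
        self.state.update(sensor_reading)

    def simulate(self, hours: float, temp_trend_per_hour: float) -> dict:
        """Project the state forward under a simple what-if assumption."""
        projected = dict(self.state)
        projected["temperature_c"] += temp_trend_per_hour * hours
        return projected

# A toy "airport terminal" twin: live sensors feed it, planners query it.
twin = DigitalTwin({"temperature_c": 21.0, "occupancy": 1200})
twin.ingest({"occupancy": 1850})  # fresh reading from the gate sensors
print(twin.simulate(hours=4, temp_trend_per_hour=0.5))
# -> {'temperature_c': 23.0, 'occupancy': 1850}
```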
Concept art of Nvidia’s Earth-2, a digital twin of the planet that will run complex climate simulations and provide deeper immersion in video games with real-time weather systems.
In March 2024, American tech conglomerate Nvidia—whose state-of-the-art graphics cards power the gaming industry—announced Earth-2: its very own digital twin of the Earth’s climate system. Short-range forecasting will be used in gaming, generating real-time simulated weather synced to the player’s location. In the long term, Nvidia’s CEO, Jensen Huang, positions the platform as a tool for more accurately predicting extreme weather events and for mitigating the impacts of the climate crisis. However, Nvidia is simultaneously pitching this technology to major fossil fuel companies in order to streamline extraction and increase profits. Of course, Earth-2 is hardly a standalone case; dozens of digital Earths have entered the simulated fray for more, better, faster modeling—including rival systems from NASA, the European Space Agency, Microsoft, and Cesium, whose 3D geospatial map you see here.
In Berlin last summer, a group of more than 100 scientists from 93 institutions and research centers put together a proposal[1] for an internationally stewarded collective of Earth Virtualization Engines (EVEs): a network of digital Earth models stationed across the globe. The EVE initiative wants to open up the future of climate modeling to a broader public. In addition to making the impact of climate change more palpable to a global audience, this open source-ification of climate modeling would help to fill the industry’s massive data gap over global-majority countries. Speaking to a climate scientist at the National Center for Atmospheric Research in Wyoming this past July, I learned of the deeper implications of model bias.
In the arid prairies of Cheyenne, Wyoming, sandwiched between a Microsoft datacenter and a bitcoin mining farm, the supercomputer Derecho pumps out 20 quadrillion calculations a second. Its main modeling goal is to simulate a future Earth’s atmosphere under various forms of solar geoengineering, including spraying sulfur dioxide particles into the stratosphere to block out the sun. However, I am told that these $35 million calculations may well be useless.
Here’s the paradox of climate modeling: more accuracy begets more believability, but more specificity begets inaccuracy. In evaluating the accuracy of their future predictions, climate models are assessed on how well they simulate the past. But we are moving into a climate future that behaves very differently from anything we’ve seen before. Earth’s climate is a chaotic pendulum, with multiple systems acting upon one another simultaneously. Miscalculating the amount of sea ice in one square kilometer at the North Pole can cascade, domino-like, into a wrongly predicted typhoon season in Taiwan. And we are currently so devoid of climate data[2] from countries near the equator and the poles—those most susceptible both to the particle drift caused by stratospheric aerosol injection and to climate change more broadly—that the gap threatens to undermine the viability of these models altogether. This data void is a powerful reminder of this second body, this vital point of interconnection between worlds.
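The “chaotic pendulum” has a canonical demonstration: Edward Lorenz’s 1963 convection model, in which two simulations whose starting states differ by one part in a billion end up describing entirely different weather. A minimal, dependency-free sketch (a toy model, of course, not Derecho’s):

```python
# Sensitivity to initial conditions in the Lorenz (1963) system, the
# classic toy model of atmospheric convection. Two runs start a billionth
# apart; their separation grows exponentially until they disagree entirely.
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations.
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)  # a one-in-a-billion perturbation

for step in range(1, 40001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 10000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:4.0f}  separation = {gap:.2e}")
# The tiny initial error compounds exponentially with model time; small
# miscalculations eventually dominate the forecast.
```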
Towards the end of The Second Body, Hildyard—whose house has just flooded, totally wrecked by a “freak storm”, an increasingly common occurrence—is speaking with an evolutionary biologist, Paul. Paul explains how the notion of the individual bounded body is nonexistent in the colonies of bacteria he creates.
Once they hit a certain population count, the bacteria willingly atrophy their own genetic code, reducing their functionality to a single task. No longer able to function as individuals, the bacteria instead opt to act as a collective. Here, symbiosis is the only form of survival. Could it be the same for Earth one and Earth two?
Perhaps, in resisting this fetish for bigger, faster, more accurate models, this fantasy of calculating a singular future for the planet, it would be better to instead zoom in on the map, to let things get fuzzy and indeterminate. A virtual Earth that rejects the self-contained fantasy of a digital twin could become an elastic bridge between the second and first body. Using these systems to envision multiple possible futures, and embracing the parts that are chaotic and unknown, could yield a new way of seeing our world.
To emphasize this point, it may help us to return to the affective dimensions of the game environment. To be spawned here is to slow down. To move through it is to pay close attention to all the ways it behaves, changes, and evolves. Like crossing back over the magic circle of a role-play, there is a valuable bleed between Game World and World World. We have to choose what habits, what forms of knowledge, and what kinds of worlds we want to bring back to the other side.