Google is engaging in unprecedented, massive, ongoing data collection to transform intractable problems into solvable chores.
Spaces of military simulation have long been a theme of interest here, including the desert test-cities of California's Fort Irwin or the law enforcement training architecture of U.S. police departments, so this shot of Brazilian police training "in a mock favela set up in Rio de Janeiro" caught my eye as part of a recent round-up of shots looking at preparations for the 2014 World Cup.
Someone builds a surrogate or a stand-in—a kind of stage-set on which to test their most viable theories—then they control that replicant world down to every curb height and door frame. Architecture then comes along simply as ornamentation, in order to give this virtual world a physical footprint—to supply a testbed on which somebody else's spatial ideas can be verified (or violently disproven).
Finally, like the 1:1 scale model in which Google self-driving cars operate [1], techniques learned inside these proto-cities are then imposed upon the very thing those sites were meant to model, tricking the real-world favela into resembling its denigrated copy: a wild space neutered by the decoy it played no role in authorizing.
- 1. source: http://www.theatlantic.com/technology/archive/2014/05/all-the-world-a-track-the-trick-that-makes-googles-self-driving-cars-work/370871/
There is, at least in an analogical sense, a connection between how the Google cars work and how our own brains do. We tend to think of vision as a matter of taking in sensory input and acting accordingly. In reality, our brains are making predictions all the time, and those predictions guide our perception. The actual sensory input—the light falling on retinal cells—is secondary to the expectations we've built up through years of being in the world.
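The idea that a strong prior can outweigh live sensory input has a standard formal expression in Bayesian estimation. The sketch below is my own minimal illustration of that principle—it is emphatically not Google's actual perception pipeline—fusing a confident prior belief (the pre-built map) with a noisy sensor reading, each weighted by its precision:

```python
# Minimal illustrative sketch of predictive perception as Bayesian
# fusion (1-D Gaussian case). NOT Google's actual system: just the
# textbook result that the posterior mean is a precision-weighted
# average, so a confident prior dominates a noisy measurement.

def fuse(prior_mean, prior_var, sensed_mean, sensed_var):
    """Combine a prior belief with a sensor reading (1-D Gaussians)."""
    w_prior = 1.0 / prior_var    # precision of the prior (the "map")
    w_sense = 1.0 / sensed_var   # precision of the live sensor
    mean = (w_prior * prior_mean + w_sense * sensed_mean) / (w_prior + w_sense)
    var = 1.0 / (w_prior + w_sense)
    return mean, var

# A tightly pre-mapped curb position vs. a noisy laser return:
mean, var = fuse(prior_mean=2.0, prior_var=0.01,
                 sensed_mean=2.5, sensed_var=1.0)
print(mean, var)  # the estimate stays close to the prior
```

The asymmetry is the point: when the prior is two orders of magnitude more confident than the sensor, the "perceived" position barely moves, no matter what the sensor says.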
That Google's self-driving cars are using these principles is not surprising. That they are having so much success doing so is.
Peter Norvig, Google's director of research, and two of his colleagues coined the phrase "the unreasonable effectiveness of data" in a 2009 essay to describe the effect of huge amounts of data on very difficult artificial intelligence problems. And that is exactly what we're seeing here. A kind of Googley mantra concludes the Norvig essay: "Now go out and gather some data, and see what it can do."
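The essay's point—that sheer data volume can substitute for cleverness—can be seen in an almost embarrassingly simple toy. The sketch below is my own illustration (not taken from the Norvig essay): a "model" that does nothing but memorize observed input–output pairs, with its hidden rule chosen arbitrarily, whose accuracy climbs with data volume alone:

```python
import random

# Toy illustration (my own, not from the Norvig essay): a model that
# only memorizes observed pairs. The hidden rule is arbitrary; the
# model never reasons about it, yet accuracy rises with data volume.

def hidden_rule(x):
    return (3 * x + 1) % 7  # the "hard problem" the model never sees

def memorizer_accuracy(n_examples, domain_size=1000, seed=42):
    rng = random.Random(seed)
    # Memorize whatever input-output pairs happen to be observed.
    table = {x: hidden_rule(x)
             for x in (rng.randrange(domain_size) for _ in range(n_examples))}
    # Score over the whole domain; unseen inputs count as wrong.
    hits = sum(table.get(x, -1) == hidden_rule(x) for x in range(domain_size))
    return hits / domain_size

# Accuracy climbs as the data grows, with no change to the "model":
for n in (100, 1000, 10000):
    print(n, memorizer_accuracy(n))
```

Nothing about the memorizer improves between runs; only its coverage of the problem does—which is, in miniature, the essay's argument about web-scale data.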