“The climax has been reached. / Mathematical eyes look around, see patterns, compose, within their conceptions, / It all comes together.” – Mathematische Augen by Black Replica[1]

According to the philosopher and artist Daniel Rubinstein, we currently find ourselves in a ‘twilight of representation’.[2] Photography is no longer a medium of the visible, but one of making the unseen visible.[3] How do self-driving cars ‘see’ the cities they traverse? What spaces could a single autonomous vacuum cleaner measure, and how does the electronic band on the arm of the Amazon worker track their performance throughout the company’s warehouses? Using camera eyes and sensors, these ‘smart’ devices read our platformed world and increasingly communicate with one another, thereby augmenting their processuality.

Terrain is inscribed in autonomous vehicles in the form of digital maps: ‘robots which have stored their operational space’.[4] On the basis of these primary representations, the machines continue to read their environments and thereby rapidly generate their own new and expanded ‘operational space’. Hackers were able to read out the unsecured data of a simple vacuum-cleaner robot and found detailed mappings of its operational spaces on its memory chip, which enabled the adaptive mini-robot to avoid repeatedly taking the wrong route or being blocked by angled furniture.
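
How such a stored ‘operational space’ might look on the inside can be pictured with a simple occupancy grid, the data structure commonly used in robot mapping. The sketch below is illustrative only: the grid size, resolution and range-reading routine are invented for the example and are not taken from any vendor’s firmware.

```python
import math

# Hypothetical occupancy grid: 0 = unknown, 1 = free floor, 2 = obstacle.
# Each cell covers CELL_SIZE metres of the robot's surroundings.
CELL_SIZE = 0.05            # 5 cm resolution
GRID_W, GRID_H = 200, 200   # roughly 10 m x 10 m of 'operational space'

grid = [[0] * GRID_W for _ in range(GRID_H)]

def mark_range_reading(robot_x, robot_y, angle_rad, distance_m):
    """Mark one range reading: free cells up to the hit, an obstacle cell at the hit."""
    steps = int(distance_m / CELL_SIZE)
    for i in range(steps + 1):
        x = robot_x + math.cos(angle_rad) * i * CELL_SIZE
        y = robot_y + math.sin(angle_rad) * i * CELL_SIZE
        col, row = int(x / CELL_SIZE), int(y / CELL_SIZE)
        if 0 <= row < GRID_H and 0 <= col < GRID_W:
            grid[row][col] = 2 if i == steps else 1

# One simulated sweep of readings around the robot's current position:
# a wall or piece of furniture is assumed 2.5 m away in every direction.
for deg in range(0, 360, 2):
    mark_range_reading(5.0, 5.0, math.radians(deg), 2.5)
```

Each pass through a room refines such a grid, which is why the data recovered from the memory chip reads like a floor plan of the flat.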

Are we still able to follow this machine-readable world when it is only with great difficulty that its purely algorithmic pictorial logic can be translated to a human scale? Digital collecting machines, which are increasingly engaging in a coded dialogue with one another, translate their images for the human eye, images that originally do not ‘target the human eye. Although a computer can process images, it does not need real images to verify or falsify what it reads in an image. For the computer, the representation within the computer suffices.’[5]

For the chief technology officer of Amazon’s cloud service provider AWS, so-called M2M connectivity is more than a vision of the future: ‘I expect that the coming year will already see a massive increase in machine-to-machine communication on the Internet. At the moment such communications make up between three and five percent of data traffic. The networking of devices and factories will significantly increase this volume. What we are dealing with here, for example, is sensors that talk to one another.’[6] But what happens in the AI black boxes and among fleets of communicating machines if their own programmers admit that they themselves no longer understand the newly invented language of ‘smarter’ machines but can only attempt to read their effects?[7] How can we make a future machine-driven city comprehensible if automated units set its rules? Are we approaching a post-human world that will radically dissolve the Anthropocene?
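
The ‘sensors that talk to one another’ can be pictured with a minimal publish/subscribe sketch, the messaging pattern that underlies most machine-to-machine traffic. Everything below (device names, topics, thresholds) is invented for illustration; it models the pattern, not any particular AWS or smart-home service.

```python
from collections import defaultdict
import json

# A toy message broker: devices publish JSON payloads to topics,
# other devices subscribe and react - no human reader is involved.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, payload):
    message = json.dumps(payload)
    for handler in subscribers[topic]:
        handler(message)

# A hypothetical thermostat reacts to readings from a temperature sensor.
def thermostat_handler(message):
    reading = json.loads(message)
    if reading["celsius"] > 24.0:
        publish("home/ac/command", {"action": "cool", "target": 22.0})

def ac_handler(message):
    print("AC received:", message)

subscribe("home/livingroom/temperature", thermostat_handler)
subscribe("home/ac/command", ac_handler)

# The sensor 'talks' and the other machines answer.
publish("home/livingroom/temperature", {"sensor_id": "t-17", "celsius": 26.3})
```

No human reads these messages; the devices publish, subscribe and act on each other’s payloads directly.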

Always in Motion

An already somewhat tattered poster around the corner from my place caught my attention recently, informing me that ‘genuine cars in your vicinity are looking for you.’ It shows a modern sports car and the URL-style brand name oply.com along with references to the Apple and Google Play app stores. In the vernacular of a dating service, the illuminated headlight eyes seem to gaze out at future customers, the car already showing itself from its best side. This sensationally performative campaign by a start-up that went bankrupt in the spring of 2020 describes cars equipped with sensors as search machines looking for possible candidates to capture.

In the logic of this advertisement it is not people who are looking for their mobile partner, which under German law would in any case be permitted only for licensed taxi operators and not even for Uber drivers. Instead, the vehicle itself sneaks up on future passengers. In the open-top sports car there is no ‘vehicle driver’ to be seen, as if a self-driving vehicle were cruising autonomously around Berlin: Get in.[8]

So-called shared cars, like the rental bikes which turned up suddenly and disappeared just as quickly, or the e-scooters that get in everyone’s way, are loss-making operations in terms of rental income. Always in motion, equipped with sensors and initiating business contacts with customers via apps, these vehicles constantly collect data that goes beyond their function as means of transport. The value of this data, whether for daily mapping updates, localised restaurant recommendations or the sale of individual movement and consumption profiles, can hardly be overstated: data is the new oil of what is for the most part electric mobility, and the smartphone is the key to the car.

When the completely self-driving vehicle is eventually ready for market, it will – as the ‘iPhone of the street’ – be well equipped to fill its passengers’ spare time with gaming, films and other service offerings. Even the cars currently being sold offer a diverse range of add-ons (remotely activated steering wheel heating, real-time navigation systems, increased kW propulsion power, etc.).[9] Digitally broadcast from Las Vegas in 2021, the Consumer Electronics Show, which is now increasingly eclipsing the classic Detroit Motor Show, saw Mercedes present a ‘Hyperscreen’ as the ‘brain and nerve centre of the car’. Whereas until now the motor, gearbox and chassis have been the core elements of a car, in the latest Mercedes offering this role has been assumed by a structure made of curved glass that occupies the entire front area of the interior and houses three monitors.

Fluid

In order to be better able to understand the city as a data factory traversed by self-driving vehicles, it is worth looking back at the precursors of the ‘smart city’. The smart bombs of the Gulf War have found their continuation in smart cars and smart cities, and for this reason, investigations in the fields of military and automobile technology along with those in the fields of consumer electronics and art will improve our understanding of the ‘operational spaces’ we are dealing with. As always in the USA, the military-industrial complex was at the vanguard of these developments and provided Silicon Valley with its first start-up capital, highly lucrative commissions and numerous test runs.

SmartThings Inc. is the name of a smart-home enterprise based in Silicon Valley that concentrates on the development of automation software and cloud platforms for smart homes. The firm claims that its platform already has over 60 million active users. Now a subsidiary of Samsung Electronics, SmartThings provides support for around 600 entertainment and household devices produced by the Korean firm, including robot vacuum cleaners, refrigerators, televisions and washing machines.

Machines, objects and people will soon be able to communicate with one another in the ‘Internet of Things’ (IoT). The next stages will see the teaching of diverse equipment, sensors and actuators. At the same time, the transitions between shopping, gaming, battlefield and factory are becoming ever more fluid, though not always seamless: ‘Recognizing or tracking in the field is much more difficult than in the factory. A factory is a controlled space with stable light conditions and a regulated order. The location systems will have to be improved, or the entire world brought into line with factory conditions.’[10] For this reason, self-driving vehicles are preferably tested in sunny areas and on clearly defined autobahn routes, since this makes the difference between object and background clear: the view is not obscured by rain or snow and the chaotic details of the city can be shut out of the picture. In this context objects are abstracted into box models and kept in digital wire cages. Coming back to the question posed by the oply.com advertisement, how then do algorithm-driven vehicles recognise me? And what happens when they don’t look that closely?
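
What the text calls ‘box models’ and ‘digital wire cages’ corresponds to the bounding boxes into which detection software compresses the scene. A minimal sketch, with invented coordinates, of the standard overlap measure (intersection over union) used to decide whether two boxes in consecutive frames describe the same object:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlapping region, if any.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Invented detections: the same pedestrian seen in two consecutive camera frames.
frame_1 = (120, 200, 180, 340)
frame_2 = (126, 204, 188, 338)
print(f"overlap: {iou(frame_1, frame_2):.2f}")  # high IoU -> treated as one object
```

When rain, snow or urban clutter blurs the boxes, this overlap drops and the machine loses track of what, or whom, it is looking at.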

In Seventh Heaven

In a report in the New York Times, Joshua Brown stands in the driveway of his simple suburban house and leans proudly against the back of his Tesla Model S, which he has christened ‘Tessy’. A short time later, on 7 May 2016, the same self-driving vehicle failed to brake and drove the forty-year-old straight through the trailer of a truck. ‘What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.’[11]

The Tesla’s computerised cameras and radar – Tesla wantonly does without the more expensive Lidar system – confused the truck’s white tarpaulin with a cloud on the horizon and as a result Tessy’s brakes were not activated. A police diagram shows the force with which the Tesla vehicle smashed through two security barriers at the side of US 27-A in Florida, ran down a security post and then landed on its roof – or the part of it not ripped off by the trailer. Joshua Brown was returning from a family gathering at Walt Disney World in Orlando and died as probably the first victim of a self-driving car.

The ‘Autopilot’ launched by Tesla in October 2015 boasts a diverse range of capabilities that include braking, acceleration and the overtaking of other cars. But in fact the name promises more than the system can deliver, because drivers must always keep their hands close to the steering wheel and continue to pay attention to the traffic. Joshua Brown is said to have illegally taken his eyes off the traffic around him and should have had at least seven seconds to apply the brakes. To this extent, federal auto-safety regulators[12] could not identify a systemic failure and soon gave the Tesla series the all-clear again.

Covering more than 72,000 km in nine months, the adventurous bomb disposal expert (Iraq War) and (recreational) parachutist posted more than two dozen videos of his test drives for his fans and other Tesla disciples. ‘Mr. Brown posted videos on the internet showing himself riding in Autopilot mode. “The car’s doing it all itself,” he said in one, smiling as he took his hands from the wheel.’[13] Joshua Brown’s most popular video, ‘Autopilot Saves Model S’, shows him on a freeway driving from Cleveland to his home in Canton.[14] ‘A white truck cuts in front of Mr. Brown’s vehicle, and by his account, the Tesla’s Autopilot feature swerves the car to the right, avoiding a collision.’[15] When Elon Musk, the CEO of Tesla, shared the video on Twitter, it went viral. And Joshua Brown could hardly believe it: ‘@elonmusk noticed my video! With so much testing/driving/talking about it to so many people I'm in 7th heaven!’

Six months after the crash Tesla installed new software via the cloud that gives more weight to the radar system, although it is less suited than Lidar to recognising the shape and size of an object. The new ‘Autopilot’ software warns the driver far more often to keep his hands on the steering wheel and switches off the Autopilot after three warnings. The software only restarts once the car has been driven to the side of the road.
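
The escalation logic described here can be read as a small state machine: repeated warnings, a cut-off after the third, and a reset only once the car has stopped. The following sketch models that description; the class, thresholds and method names are invented and are not drawn from Tesla’s actual software.

```python
class DriverMonitor:
    """Toy model of the hands-on-wheel escalation described in the text (invented thresholds)."""
    MAX_WARNINGS = 3

    def __init__(self):
        self.warnings = 0
        self.autopilot_enabled = True

    def update(self, hands_on_wheel, parked):
        if parked:
            # As described: the system only restarts once the car has been pulled over.
            self.warnings = 0
            self.autopilot_enabled = True
        elif self.autopilot_enabled and not hands_on_wheel:
            self.warnings += 1
            if self.warnings >= self.MAX_WARNINGS:
                self.autopilot_enabled = False  # driver must take over for the rest of the trip

monitor = DriverMonitor()
for hands in (False, False, False):
    monitor.update(hands_on_wheel=hands, parked=False)
print(monitor.autopilot_enabled)  # False after three ignored warnings
```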

Anthropomorphic Test Devices

The fatal accident on the freeway near Florida’s Goethe State Forest in May 2016 demonstrated the state of the machine readability of the world. ‘As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing’ (Tesla Team). Human guinea pigs – ‘still in a public beta phase’ – are covering thousands of kilometres of road in order to train Tesla’s software, like all the other test vehicles used by car and software firms.

Crash test dummies are used in accident research to generate data sets. These human-machine dummies are also referred to as anthropomorphic test devices; Hybrid III devices are fitted with a variety of accelerometers and force sensors in the head, neck, chest, spine, pelvis and legs. According to Wikipedia, they are also usually fitted with an angle sensor for the knee and an angular velocity sensor for the head.
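
One way to see what those head accelerometers feed into is the Head Injury Criterion (HIC), a standard figure computed from the recorded acceleration pulse. The sketch below evaluates the HIC formula over a synthetic pulse; the sample values are invented, whereas real tests use calibrated recordings from the dummy’s sensors.

```python
def hic(times_s, accel_g, max_window_s=0.015):
    """Head Injury Criterion over sampled head acceleration (in g), using the HIC15 window.

    HIC = max over [t1, t2] of (t2 - t1) * (mean acceleration over [t1, t2]) ** 2.5
    """
    best = 0.0
    n = len(times_s)
    for i in range(n):
        for j in range(i + 1, n):
            dt = times_s[j] - times_s[i]
            if dt <= 0 or dt > max_window_s:
                continue
            # Trapezoidal integral of acceleration over [t_i, t_j].
            area = sum((accel_g[k] + accel_g[k + 1]) / 2 * (times_s[k + 1] - times_s[k])
                       for k in range(i, j))
            best = max(best, dt * (area / dt) ** 2.5)
    return best

# Invented 10 ms acceleration pulse sampled every millisecond.
t = [k / 1000 for k in range(11)]
a = [0, 10, 40, 80, 60, 30, 15, 8, 4, 2, 0]
print(round(hic(t, a)))
```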

In the future, such testing will increasingly be carried out using computer models, which allow for reproducible tests with finely controlled variation of the influencing parameters. However, computing capacities are not yet quite up to the task. The relevant boundary conditions – the stiffness properties of the car and the behaviour of the human body – still cannot be modelled accurately enough, which is why physical crash tests with crash test dummies are required for the legal certification of new vehicle models.
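
A drastically simplified illustration of why those stiffness properties matter: even a one-dimensional mass-on-spring model of a frontal impact shows how the assumed crush stiffness drives the peak deceleration acting on the occupant. The parameters below are invented; certified crash simulations are full finite-element models, not this toy.

```python
def peak_deceleration_g(mass_kg, speed_ms, stiffness_n_per_m, dt=1e-4):
    """Integrate a 1-D mass-on-spring crash pulse and return the peak deceleration in g."""
    x, v = 0.0, speed_ms        # crush depth and velocity
    peak = 0.0
    while v > 0:                # until the front end has stopped crushing
        a = -stiffness_n_per_m * x / mass_kg
        peak = max(peak, -a)
        v += a * dt
        x += v * dt
    return peak / 9.81

# Invented parameters: a 1,500 kg car at 50 km/h against two assumed front-end stiffnesses.
for k in (3.0e5, 1.2e6):
    print(f"stiffness {k:.0e} N/m -> about {peak_deceleration_g(1500, 50 / 3.6, k):.0f} g")
```

In this toy model the peak deceleration grows with the square root of the stiffness: quadrupling the assumed front-end stiffness doubles the load on the dummy, which is exactly the kind of sensitivity that makes inaccurate boundary conditions so costly.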

