ON LINE OPINION - Australia's e-journal of social and political debate
Staying in control

By Peter McMahon - posted Tuesday, 20 October 2015


Recently a group of high-powered researchers and businesspeople called for a rethink of our ever-growing reliance on robotic technologies. More specifically, they objected to the development of 'killer robots' that could decide for themselves whether or not to kill human beings.

In an open letter presented to the International Joint Conference on Artificial Intelligence in Buenos Aires, over a thousand of the world's experts on AI warned of a 'military artificial intelligence arms race' and called for a ban on 'offensive autonomous weapons'.

The group included Elon Musk of Tesla, Steve Wozniak, co-founder of Apple, Demis Hassabis, chief executive of Google DeepMind, and super-brain Stephen Hawking. It would be hard to find a better-informed and more intellectually capable group of people on the planet.


The most familiar forms of such autonomous weapons are the drones increasingly used around the world, but the same principles apply to ground weapons and to those used on and under the oceans. Although the proposed ban applies to military weaponry, such military advances tend to find their way quickly into broader security-related activities and general policing. The question of how much control to cede to the machine on the spot is now a very urgent one.

As for the air, where drones mostly operate, it has been mooted that the next generation of fighters will be the last flown by human pilots because of the limits on manoeuvrability imposed by human frailty. At a certain point, human pilots simply can't take the extra g-forces and black out. Given this, within a few decades the world's major air forces may be entirely robotic.

Weapons with AI capability have been called the third revolution in warfare, following gunpowder and nuclear arms, but this comparison is somewhat misleading. Gunpowder and nuclear arms are both explosives that radically increased firepower, whereas AI weaponry transforms the character of warfare itself. It removes the main constraints on the practice of war: the desire for self-preservation and basic compassion. Furthermore, unlike nuclear weapons, robotic systems are relatively cheap, removing another important constraint on their use.

There is of course another practical factor of real concern: operators of such weaponry may not be able to maintain contact and control. AI weapons are by definition complex digital systems, and such systems are inherently vulnerable to disruption. There is a growing focus on what has been called cyber-warfare, which is all about disrupting digital systems through some form of hacking. Not only could such weapons be made to malfunction; they could conceivably even be taken over and turned against the attackers.

Given enough intent, it is possible to ban specific weapons, although the record is poor, starting with the Church's attempts to ban crossbows in medieval times. The harsh reality of history is that if a weapon is decisive, it will be used, at least as a threat. A ban was recently effected on laser weapons designed to blind, another product of fast-developing new technology. But that is a marginal weapon with very limited impact, whereas AI weaponry is a game changer.

As serious as it is, the issue of autonomous weaponry is but one aspect of a much wider problem. The underlying issue here is that humanity is now facing a fundamental challenge: our technology, in its latest digital form, is on the verge of being able to take over most of our essential modes of living.


The world financial system is no longer understood by any humans and is now basically run by algorithmic programs. This is due to the basic complexity of huge numbers of constantly interacting units of money in a system many times the size of the real economy of goods and services. Algorithmic trading systems can act instantaneously according to certain criteria, shifting billions at near light speed and leaving human traders way behind. Ultimately, of course, the application of such algorithmic systems only adds to the overall complexity.

Our main industrial production systems are also increasingly run by computers, with the actual work done by robots. Most jobs that don't require an actual human body (unlike trades such as plumbing and electrical work, or perhaps hairdressing) are in the process of being automated. The shift in retail, for instance, is now well advanced: self-service in supermarkets and Internet shopping mean we can get what we need, or merely want, without interacting with a live human being at all.

Even jobs that currently require high skill levels, like journalism, law, some medical and middle managerial work, are being done by 'intelligent systems'. Some medical and counselling systems already work better than the human variety. Computer systems remember everything and don't exhibit the frailties of often over-worked or otherwise stressed human beings.

The basic technological driver of all this is the growing capacity and declining cost of digital systems, ultimately driven by Moore's Law, the observation that processing power doubles roughly every eighteen months to two years. The basic economic driver is the desire of employers to replace expensive, hard-to-control humans with machines or digital systems.
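The compounding this implies is easy to underestimate; a minimal sketch of the arithmetic, assuming a two-year doubling period (the law is stated several ways):

```python
def moores_law_factor(years, doubling_period=2.0):
    """Growth factor in processing power after `years`,
    assuming capacity doubles once per `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Doubling every two years compounds quickly:
print(round(moores_law_factor(10)))  # 32x in a decade
print(round(moores_law_factor(20)))  # 1024x in two decades
```

The same exponential logic explains why automation that looks marginal today can become pervasive within a single working lifetime.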

But as we hand over the work to these systems, at what point do we lose control over our own lives? The issue of machines deciding life and death is clearly the literal cutting edge, but there are other critical concerns as well.

For instance, as we automate jobs, all sorts of skills and knowledge are lost. It will become increasingly difficult even to figure out what is happening inside systems that are constantly developing, because no humans will retain enough expertise. The essential information of our modern culture is steadily shifting into a cybernetic substrate and away from human awareness.

Perhaps the best example is the central role of the Internet in everyday life. The Internet is an incredibly complex arrangement of hardware and software that is now vital to all sorts of activities. The Internet is also incredibly vulnerable and could suffer catastrophic failure any time. For starters, parts of the physical infrastructure are particularly vulnerable to accidental or deliberate disruption. The software is even more vulnerable to disruption by hacking.

The people who actually run the Internet live in abject fear of some genius fifteen-year-old from Manila crashing the net with a super-virus he made up for fun, or some Eastern European crime gang doing it for the money. So far this has happened only in partial ways, but the security people know how close we are to such a disaster.

One of the common concerns is that the frenetic growth of the Internet has not allowed consolidation of core systems and software. The money is in pushing capacities farther and faster, not going back and making it all sturdier and safer. The Y2K bug was a reminder of what could go wrong.

There is a basic question of competence here: can humans even maintain real control over our increasingly complex systems? In their open letter, the AI experts said that handing real-time decisions over life and death to machines is a step too far, but we are moving in the same direction in many less dramatic ways. Jet airliners can now operate without pilots, and we are about to shift to driverless car traffic systems. We are all increasingly passengers in our own lives.

And it is a two-way process: as our machines get smarter, we human beings become less capable and ever more dependent on them. The very same digital technology is generating more ways to distract ourselves, from iPads to virtual-reality systems. As we lose capability in real-world situations, we become ever more absorbed in our ever more intricate cyberspace worlds.

This basic logic, of growing technological capacity and declining human capacity, has been played out in many books, movies and TV shows because popular culture has a way of picking up the main themes of any era. For instance, right now there are plenty of movies concerned with these ideas, from the latest Terminator film to a subtle little piece called Ex Machina. These movies are of interest exactly because we fear that we are on the cusp of losing control over our own technological creations.

Eventually, probably very soon, we need to ask ourselves what life is about anyway. Is it about removing all threats, even as we create new ones, and just living easier, better entertained lives? Or is it about living in ways that nourish us physically, emotionally and intellectually, and enable us to become in some sense morally better people?

The people behind the open letter from Buenos Aires think we should not allow machines to decide to kill human beings. This would not just be the ultimate power for machines, but also the ultimate denial of responsibility by human beings.

About the Author

Dr Peter McMahon has worked in a number of jobs including in politics at local, state and federal level. He has also taught Australian studies, politics and political economy at university level, and until recently he taught sustainable development at Murdoch University. He has been published in various newspapers, journals and magazines in Australia and has written a short history of economic development and sustainability in Western Australia. His book Global Control: Information Technology and Globalisation was published in the UK in 2002. He is now an independent researcher and writer on issues related to global change.


This work is licensed under a Creative Commons License.
