Civilization is one of the most popular and enduring computer strategy games. The first version was launched on DOS-based PCs in 1991. Play began circa 4000 BCE, and the objective was to build an enduring civilisation. The latest versions are set in science-fictional galaxies.
Human players can assume the roles of historical characters such as Alexander, Bismarck, Genghis, Haroun-al-Raschid and so on. The characters can also be managed by the computer, which has an artificial intelligence (AI) designed to imitate known behaviour traits. Thus, Bismarck is a diplomat by preference, while Genghis and Alexander are more warlike.
The most notable AI failure is Mahatma Gandhi. In general, Civilization’s Gandhi is friendly and inclined towards diplomacy. But the Gandhi AI also loves nuclear weapons – it is far more inclined to use nukes than any other historical character.
There has been much speculation as to how and why this contradictory behaviour was coded. Gamers like nukes, and there is a rich vein of hilarious commentary centred on Gandhi frequently nuking competing civilisations, so the quirk has been retained in newer versions as a running joke. But the fact that such an egregious error could be hard-coded also serves as a warning.
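The most widely repeated explanation – unconfirmed, and later disputed by the developers – is an unsigned-integer underflow: Gandhi supposedly started with the game's lowest aggression rating, and adopting democracy subtracted from it, wrapping the value around to the maximum. A minimal Python sketch of that hypothesis (all names here are illustrative, not from the actual Civilization source):

```python
def wrap_u8(value):
    """Simulate storing a value in an unsigned 8-bit integer (wraps mod 256)."""
    return value % 256

# Per the legend, Gandhi's aggression starts at 1, the lowest in the game.
aggression = 1

# Adopting democracy supposedly subtracts 2 from a leader's aggression.
# For Gandhi, 1 - 2 = -1, which an unsigned byte cannot represent:
aggression = wrap_u8(aggression - 2)

print(aggression)  # 255: the underflow wraps to the maximum possible aggression
```

On this account, a single missing bounds check would turn the game's most pacifist leader into its most belligerent – exactly the kind of small, mechanical slip the rest of this article worries about.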
AI could be dangerous. Programmers may make critical mistakes. The intelligence of artificial systems will eventually exceed that of their human creators, and these AIs will be “self-learning” and self-correcting. Put those three possibilities together.
What happens if a super-intelligent, self-willed AI “wakes up” in a bad temper? Skynet in the Terminator series, HAL 9000 in 2001: A Space Odyssey – the trope of the intelligent machine that decides to exterminate its creators has been around for a very long time.
Fiction is one thing, but given rapid advances in AI, hard-headed experts have started taking the possibility of psychopathic, hyper-intelligent artificial entities seriously. Puerto Rico recently hosted a closed-door seminar, “The Future of AI: Opportunities and Challenges,” where over 100 tech luminaries signed a manifesto promising to develop AI only for good. Elon Musk, one of the delegates, donated $10 million to research into the legal and economic impact of intelligent robots.
Bill Gates held a Reddit AMA recently. He often outlines his plans and thoughts in these “Ask Me Anything” sessions, and this time he wrote: “I am concerned about super intelligence. The machines will do a lot of jobs for us – that should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk on this.”
Microsoft has put enormous resources into researching AI and other pathways to machine intelligence. Microsoft’s Project Adam, for instance, uses AI techniques to pursue an ambitious goal: developing software that can visually recognise anything and everything.
A range of tasks that traditionally required human judgement is now being managed by autonomous computers with little or no supervision. Self-driving car projects are proliferating by the dozen; many major carmakers and, of course, Google have developed self-driving cars. Aircraft are increasingly flown on autopilot, and the recent sequence of crashes and disappearances makes it likely that autopilots will be given even more responsibility.
Autonomous computer programs manage most of the money wagered in financial markets. Computers play better chess, bridge and poker than most humans. They can combine diverse strands of information efficiently enough to win at quiz shows and to rank internet search results accurately by likely relevance.
Stephen Hawking is another who has publicly expressed unease about AI. Hawking recently “reviewed” the science-fiction movie Transcendence, which has a complex plot involving a sentient computer. While he acknowledges the likely benefits of benign AI, he is also worried by the proliferation of autonomous or near-autonomous weapons systems (such as killer drones). Hawking says the arrival of super-intelligent AI could have the same impact as a more intelligent alien civilisation suddenly making contact with Earth.
It may seem an odd metaphor, but machine intelligence does not process information the way humans do, and machine thought processes will become more opaque as they grow more intelligent and more interconnected. AIs could indeed become the Frankenstein’s monsters of the near future. Rather than indulging in sensational scaremongering, we need a sensible set of guidelines to cope with the disruptive possibilities of human beings sharing their planet with more intelligent creatures.