Why AI will not end mankind, but we still need to prepare for challenges
Marco Gercke is a global thinker and writer focusing on global security, cybersecurity and AI. For more than ten years he has advised governments, international organisations and the private sector. He is Director of the Cybercrime Research Institute, an independent global think tank.
Glorious future or the end of mankind? Terminator or Knight Rider? It is surprising how often the discussion about the opportunities and risks of artificial intelligence ends up in oversimplified stereotypes, and how often the Hollywood movie Terminator is quoted in this regard. The discussion about the impact of current developments on security and society in general is important, especially regarding the future of labor, but AI ending mankind is one of the least likely scenarios.
The impact of global warming and armed conflicts poses a significantly greater risk. The equally oversimplified response to the oversimplified question is this: if AI and humans end up in a conflict over resources, computer-controlled robots with AI can "leave" planet Earth and exploit the resources of other planets, while mankind will find it significantly harder to adapt.
The discussion about the ultimate conflict distracts from more likely ones. One of them is the question of whether we are willing to trust suggestions made by artificial intelligence once it surpasses our capability to verify its "decision-making process".
The following experiment highlights the challenge. Various tech companies are currently developing AI systems that are able to negotiate. To test these abilities, twelve professional negotiators were invited to negotiate a fictitious trade agreement; it took them a full day. With this human benchmark established, two AI systems carried out the same negotiation. They completed the task in less than a second, and as both humans and machines relied largely on standard contractual clauses, the agreement reached by the machines looked almost identical to the one drafted by their human counterparts.
One clause that differed caught the attention of the reviewers. It took a team of lawyers more than 100 hours to conclude that the clause selected by the machine in less than a second was actually better suited, because the one picked by the human negotiators had been contested by two national courts in little-known decisions.
What if, after one year of calculation, an AI presents a solution to climate change: the release of a complex combination of highly toxic gases into the atmosphere? Something that, just like chemotherapy viewed from the outside, does not look like a solution and could create new problems. And what if verifying the complex climate modeling would take an estimated 100,000 scientists more than 1,000 years? Will we trust the conclusion reached by the AI, or will our desire to verify decisions prior to their execution set the limit?