AI Will Not Kill Us

It’s all too common to read an article or watch a presentation by artificial intelligence (AI) gurus who give audiences a bleak prediction of the technology’s future: robots will become sentient, and surely they’ll decide to either kill or enslave all humans. In other words, it’s only a matter of time before our robot overlords doom us all, and there’s not much we can do about it.

This is NOT our future!

These doomsday prophets always give three justifications for their predictions:

(1) AI-enabled systems are becoming more and more intelligent and have already surpassed human intelligence in some applications. For example, AI systems have defeated world champions at both chess and Go.

(2) The development of AI and machine learning has accelerated past the point of no return and will eventually enable a growing number of systems to become fully autonomous.

(3) AI models have become so complicated that even the data scientists who develop them don’t understand how these models operate and make decisions.

Let’s discuss each of these arguments in detail.

Doomsday Argument #1: AI is better than humans

AI- and machine learning-augmented systems have been around for some time now and have done far more good than harm. For example, the credit industry has used machine learning in credit scoring for at least three decades to offer loans and mortgages across most of the industrialized world. This automation made loans and credit cards accessible to a far larger segment of society than in, say, the 1960s or 1970s. It also made the process faster, more accurate, and more secure.

It may be hard to accept that even the world’s best chess or Go champion can’t defeat a computer. So yes, AI and machine learning can outperform humans in specific tasks, but that fact doesn’t have to carry negative connotations. If humans performing a task make errors that harm other people, we should encourage engineers and developers to build systems that perform that task better. We already have safeguards in place that allow humans to intervene in and modify automated systems to prevent abuse and bias and to protect people whose data makes them outliers.

Doomsday Argument #2: It’s too late

The second point, regarding the pace of development in AI and machine learning, shouldn’t be a reason for alarm but rather a reason to celebrate and support this progress. We’ve all benefited from automated systems that have been analyzing medical tests for at least two decades, especially in monitoring and screening patients for chronic conditions like high cholesterol or diabetes. Most lab work and reports today are produced automatically with the help of AI and machine learning, which alert physicians and patients to abnormal or noteworthy results. These systems are built on statistical and machine learning models that analyze large amounts of data collected over long periods and correlate test results with specific conditions. They outperform human physicians at pinpointing the areas of concern that physicians should focus on. These models make medical decisions faster and more accurately than ever before and unburden physicians from manually analyzing every test result that lands on their desk. Everyone involved in the process benefits from these technological advancements.
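To make the idea concrete, here is a minimal, purely illustrative sketch (not a clinical model) of how a screening system might learn from historical lab data and triage new results for physician review. The data, thresholds, and labels below are invented for illustration only.

```python
# Illustrative only: a toy logistic-regression screener that flags lab
# results a physician should review first. All values are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "historical" lab data: [total cholesterol mg/dL, fasting glucose mg/dL]
X = np.column_stack([
    rng.normal(200, 40, 500),   # cholesterol readings
    rng.normal(100, 25, 500),   # glucose readings
])
# Hypothetical labels: 1 = the result was flagged for follow-up in past reviews
y = ((X[:, 0] > 240) | (X[:, 1] > 126)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def triage(cholesterol: float, glucose: float) -> str:
    """Return a priority label so physicians see the riskiest results first."""
    risk = model.predict_proba([[cholesterol, glucose]])[0, 1]
    return "review first" if risk > 0.5 else "routine"

print(triage(260, 115))  # likely "review first"
print(triage(180, 90))   # likely "routine"
```

The point of the sketch is the division of labor: the model ranks results, and the physician still reviews and decides.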

In short, the accelerated adoption of AI and machine learning models, along with the requisite deployment technology, has benefited society in many fields, as long as people review and verify the decisions these intelligent systems make.

Doomsday Argument #3: AI is too complex

The last issue AI doomsayers often raise is the growing complexity of machine learning models and AI systems. Because of this complexity, they say, even the world’s finest data scientists and developers can’t make heads or tails of what goes on inside these systems.

Of course, no data scientist worth their salt would let this scenario happen. Data scientists have developed and implemented explainable AI technology, which gives users accurate insight into how models use data and reach their decisions. Vendors of AI technology, such as Altair, include a rich collection of these tools in their software offerings, such as Altair Knowledge Studio. Because of the topic’s importance, vendors like Altair are continuously investing in developing tools and interfaces that make it easy for users to understand and modify AI and machine learning models. Some of these tools go so far as to explain the decision-making rationale behind each and every data point.
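As a rough illustration of what such tools do, here is a minimal sketch of one common, openly available explainability technique, permutation importance, which scores each input by how much the model’s accuracy drops when that input is shuffled. The data and model are synthetic, and this is not a depiction of Altair Knowledge Studio or any other vendor’s implementation.

```python
# A minimal sketch of permutation importance with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for a real business dataset.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each column in turn and measure how much accuracy drops:
# the bigger the drop, the more the model relies on that input.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance = {score:.3f}")
```

Even this simple report turns a “black box” into something a reviewer can question: if an input that shouldn’t matter dominates the ranking, the model needs another look.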

Making AI a force for good

The discussion above debunks the doomsday arguments. We can distill it into three simple principles that explain why AI and machine learning are a force for good, and how we can ensure they stay that way:

(1) AI and machine learning technologies should be applied in domains where they provide better service and give more people access to services they may not have had before. Moreover, AI and machine learning should be used for the good of society, with laws and legislation in place to guarantee that data is used in fair, equitable, and transparent ways.

(2) AI and machine learning models should be explainable. The science and technology that enables explanations of AI and machine learning models exists and is well developed in open-source and commercial software. Entities that use AI and machine learning technologies should embrace explainable AI tools and techniques as mandatory steps in their data science practices.

(3) Humans should have the final say in critical decisions, and deployment systems should allow human overrides to guarantee responsible, safe, and fair outcomes. These safeguards must be mandatory in critical systems whose decisions could harm people or property. The safeguards and overrides will also need explainable AI capabilities to guide when and under what circumstances an override is invoked (a minimal sketch of such an override hook follows this list).
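As a rough illustration of point (3), here is a minimal, hypothetical sketch of a human-override hook wrapped around an automated decision. The field names, thresholds, and workflow are assumptions for illustration, not a prescribed design.

```python
# Hypothetical sketch: low-confidence cases are escalated to a human reviewer,
# and the human's decision always wins.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    outcome: str        # e.g. "approve" / "deny"
    confidence: float   # model's confidence in the outcome
    explanation: str    # human-readable rationale from an explainability tool

def decide(case_id: str,
           model_decision: Decision,
           ask_human: Callable[[str, Decision], Optional[str]],
           min_confidence: float = 0.9) -> str:
    """Return the final outcome, escalating to a human reviewer when needed."""
    if model_decision.confidence < min_confidence:
        # Escalate: the explanation helps the reviewer judge the model's logic.
        human_outcome = ask_human(case_id, model_decision)
        if human_outcome is not None:
            return human_outcome      # human override is final
    return model_decision.outcome     # otherwise accept the automated decision

# Example usage with a stand-in reviewer that approves the escalated case.
final = decide("loan-42",
               Decision("deny", confidence=0.62, explanation="income below threshold"),
               ask_human=lambda cid, d: "approve")
print(final)  # -> "approve"
```

The design choice matters more than the code: the automated path is the default, but the system is built so a person can always step in.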

AI for good!

Like any past technology that changed the relationship between people and machines, AI and machine learning will catalyze big changes, and with change comes fear of the unknown. However, with disciplined and responsible implementation, along with the safeguards of laws and best practices, we believe AI and machine learning will prove beneficial to humanity, just like the major technologies that have transformed our lives throughout history.

Mamdouh Refaat

I am the Chief Data Scientist at Altair Engineering. I have been working in ML and AI for over 25 years on business and engineering applications.