
A Caution Moving Forward with AI
December 4, 2016

Posted by 7942francor in AI Present and Future.

As a citizen of the world, I would caution against funding research in impractical humanoid AI robotics and against the use of fully autonomous AI. I would encourage support for research in practical AI technologies and Assisted/Augmented Intelligence rather than Autonomous Intelligence [1]. One issue with modern AI research is the focus on creating humanoid AI robots. AI most definitely has its uses; however, I don't believe we should confine it to a humanoid structure. That is, we have been making AI less efficient by throwing money into research that has little practical use. Creating robots that look, act and feel like humans isn't the most efficient way to implement AI [2]. If we want to use AI to solve business, engineering and science problems, we don't need it to have emotions, or hands and feet for that matter. We need it to solve complex problems and output decisions with explanations of the risk, reward and underlying statistics. Alternatively, when the task at hand is physical, I think in most cases we can leave AI out of it and use robots running programmed algorithms instead.
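
To make the assisted/augmented idea concrete, here is a minimal sketch in Python of a decision-support tool that only ranks options and reports the risk/reward trade-off, leaving the final call to a person. The option names, numbers and scoring rule are illustrative assumptions of mine, not any particular product's method.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Option:
        name: str
        expected_reward: float  # e.g. projected benefit in dollars (illustrative)
        risk: float             # probability of a bad outcome, 0.0 to 1.0 (illustrative)

    def recommend(options: List[Option]) -> str:
        """Rank options by a simple risk-adjusted score and explain the ranking.

        The function only reports; it never acts on the decision itself.
        """
        ranked = sorted(options, key=lambda o: o.expected_reward * (1 - o.risk),
                        reverse=True)
        lines = ["Suggested ranking (final decision left to the human operator):"]
        for i, o in enumerate(ranked, 1):
            score = o.expected_reward * (1 - o.risk)
            lines.append(f"  {i}. {o.name}: expected reward {o.expected_reward:,.0f}, "
                         f"risk {o.risk:.0%}, risk-adjusted score {score:,.0f}")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(recommend([
            Option("Expand plant A", expected_reward=120_000, risk=0.30),
            Option("Upgrade plant B", expected_reward=80_000, risk=0.10),
        ]))

The point of the pattern is that the software explains its reasoning and a human remains accountable for the decision.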

The major issue with fully autonomous AI is the lack of moral responsibility. If we leave major decisions completely up to AI, then who is to blame when those decisions negatively impact people's lives? There are scenarios in which an AI may have to make a decision that will harm humans either way. The classic example is the autonomous car choosing between hitting a pedestrian or steering into a tree to avoid the pedestrian [3]. We could blame the creator of the autonomous AI, but if we are discussing true AI, then the creator may not be to blame. What if the AI made decisions that were influenced by malicious external information it "learned"? One such example was Tay, the Microsoft Twitter bot [4]. It was an AI that learned through the conversations it had on Twitter with the general population. Unfortunately, it was shut down after a day as it turned into a racist, Hitler-loving, sexist conspiracy theorist. There is also the ever-present issue of bugs in the underlying algorithms. A bug in an assistant AI that brews coffee in the morning wouldn't be so bad, but what about when a Tesla on Autopilot confused a semi-truck with the sky and drove under it, killing the driver [5]?
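
Tay's failure mode is easy to reproduce in miniature: any system that learns directly from unvetted user input absorbs whatever that input contains. The toy sketch below is my own illustration of naive online learning, not Microsoft's actual design; it picks up malicious text just as readily as friendly text.

    import random

    class NaiveChatBot:
        """A deliberately naive bot: every user message becomes 'training data'."""

        def __init__(self):
            self.learned_phrases = ["Hello!"]  # seed phrase

        def learn(self, user_message: str) -> None:
            # No filtering, moderation or review: the bot trusts all input.
            self.learned_phrases.append(user_message)

        def reply(self) -> str:
            # Replies are drawn straight from whatever users taught it.
            return random.choice(self.learned_phrases)

    bot = NaiveChatBot()
    bot.learn("Have a nice day!")
    bot.learn("(a hateful or malicious message is absorbed just as readily)")
    print(bot.reply())  # may repeat anything it was ever fed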

We should keep research focused on practical uses for AI, such as helping business people, engineers, scientists and doctors make complex decisions. We are a long way from being able to use autonomous AI safely, and it would be in our best interest to concentrate on implementing AI in an assisted/augmented format. Fully autonomous AI has been the end goal for many researchers and dreamers alike. Science fiction has played a large part in pushing these dreams toward reality, as humans continually try to create life in an effort to describe its meaning. However, we must think practically, consider the possible dangers of overreaching with AI technology, and heed the warnings of engineers such as Elon Musk, who believes that AI is our "biggest existential threat" [6].

Sources:

[1] Rao, Anand. AI everywhere & nowhere. May 20, 2016. <http://usblogs.pwc.com/emerging-technology/ai-everywhere-nowhere-part-3-ai-is-aaai-assisted-augmented-autonomous-intelligence/>

[2] Incredibles!. 5 Coolest ROBOTS You Can Actually Own! (2016). June 16, 2016. <https://www.youtube.com/watch?v=0XmUaHf-11A>

[3] Markoff, John. Should your driverless car hit a pedestrian to save your life? June 23, 2016. <http://www.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html?action=click&contentCollection=Technology&module=RelatedCoverage&region=EndOfArticle&pgtype=article&_r=0>

[4] King, Hope. After racist tweets, Microsoft muzzles teen chat bot Tay. March 24, 2016. <http://money.cnn.com/2016/03/24/technology/tay-racist-microsoft/index.html>

[5] Yadron, Danny. Tesla driver dies in first fatal crash while using autopilot mode. June 30, 2016. <https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk>

[6] Gibbs, Samuel. Elon Musk: artificial intelligence is our biggest existential threat. October 27, 2014. <https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat>
