
Defining Ethics with AI

April 28, 2017

Posted by tleamon in Machine Learning.

One of my biggest takeaways from this semester's discussion of AI is how its focus is constantly evolving. AI progress has moved in peaks and valleys, each peak followed by a corresponding wave of hype about what is to come. Regardless, progress is still being made and we continue to benefit from it. The scope of potential AI applications has broadened as well, and with neural network computing [1] able to support more complex algorithms, the feasibility of those applications keeps growing.

To make these applications and algorithms more helpful and to broaden their use cases, data is being collected from a variety of sources to refine existing and future models. However, the way this data is collected has raised ethical concerns among the users of these services. Ethical questions have also been raised about how AI will make human-like decisions at important decision points. In the future, AI will eventually be able to make decisions in many fields, some of which could put a person in danger [2].

Take the advances in medicine and in driverless cars, for example. Based on a patient's symptoms, an AI system could recommend a medication, a procedure, or another course of treatment. But there are real dangers in relying solely on AI for recommendations, even if that caution seems to undercut the purpose of AI in the first place. Artificial intelligence aims to rival human intelligence, but if we can't trust its output, then what's the point of using it?
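To make that concern concrete, here is a minimal Python sketch of one way to avoid trusting a model's recommendation outright: gate it behind a confidence threshold so uncertain cases go to a human clinician. This is not any real system's method; the symptoms, recommendations, scores, and threshold are all illustrative assumptions.

    # Hypothetical sketch: gate an AI medical recommendation behind a
    # confidence threshold so a human reviews uncertain cases.
    # Symptoms, suggestions, scores, and threshold are illustrative only.

    CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for acting without review

    def recommend(symptoms):
        """Stand-in for a real model: returns (suggestion, confidence)."""
        if "chest pain" in symptoms and "shortness of breath" in symptoms:
            return ("refer to cardiology", 0.95)
        return ("over-the-counter analgesic", 0.60)

    def gated_recommendation(symptoms):
        suggestion, confidence = recommend(symptoms)
        if confidence >= CONFIDENCE_THRESHOLD:
            return suggestion  # confident enough to surface directly
        # otherwise, defer to a clinician instead of acting on the output
        return (f"HUMAN REVIEW NEEDED: model suggested '{suggestion}' "
                f"({confidence:.0%} confidence)")

    print(gated_recommendation(["chest pain", "shortness of breath"]))
    print(gated_recommendation(["headache"]))

Even a crude gate like this captures the point: the model's output becomes an input to a human decision rather than a substitute for one.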

A similar situation is unfolding with driverless cars. Several companies are developing AI to drive a car safely from point A to point B. During the drive, if the AI encounters a situation where a decision must be made and two separate parties are equally likely to be put in danger, how will it choose?

These two scenarios echo my thoughts on the current feasibility of unsupervised learning, and of AI as a whole. While many applications stand to be integrated with AI, the output these algorithms produce needs to be screened by humans at first. That screening could itself be automated against a set of ethical rules that have been agreed upon. I'm not sure what the scope of AI will be in the near future, but if these same algorithms can learn from their own input data and revise their own logic under such rules, then I would personally have more confidence in a future of less supervised learning.
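As a rough illustration of that automated screening, here is a minimal Python sketch that checks each model output against a list of ethical rules before it is acted on. The rules themselves are hypothetical placeholders; in practice the rule set would come from the agreement described above, not from one developer hard-coding it.

    # Hypothetical sketch: screen each model output against a list of
    # agreed-upon ethical rules. Any output violating a rule is held
    # for human review instead of being executed automatically.

    def no_unreviewed_dosage_changes(output):
        # example rule: dosage changes always require human sign-off
        return "dosage" not in output.lower()

    def no_protected_attribute_criteria(output):
        # crude word-level check for protected attributes (illustrative)
        words = output.lower().split()
        return not any(term in words for term in ("age", "gender", "race"))

    ETHICAL_RULES = [
        no_unreviewed_dosage_changes,
        no_protected_attribute_criteria,
    ]

    def screen(output):
        violations = [rule.__name__ for rule in ETHICAL_RULES
                      if not rule(output)]
        if violations:
            return f"HELD FOR REVIEW ({', '.join(violations)}): {output}"
        return f"APPROVED: {output}"

    print(screen("Increase dosage of medication X"))
    print(screen("Schedule a follow-up appointment"))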

References

[1] Vorhies, William. "Beyond Deep Learning – 3rd Generation Neural Nets." Data Science Central, 4 Oct. 2016. Web. 28 Apr. 2017. <http://www.datasciencecentral.com/profiles/blogs/beyond-deep-learning-3rd-generation-neural-nets>.

[2] Bossmann, Julia. "Top 9 Ethical Issues in Artificial Intelligence." World Economic Forum, 21 Oct. 2016. Web. 28 Apr. 2017. <https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/>.
