
Machine Learning applications in Healthcare May 11, 2017

Posted by 1969mathelc in Machine Learning.

The technology of machine learning has come a long way over the past decade, and it still has a long way to go. Many applications now revolve around machine learning, and they serve a wide range of purposes. Some serve as conveniences, making everyday tasks easier, while others exist purely for entertainment; one example of AI used for entertainment is IBM Watson creating a movie trailer for the sci-fi drama Morgan [4]. Other applications of machine learning serve far more critical purposes that involve human lives, and may even save them. Because of its potential to save millions of lives in the near future, I believe machine learning has found a whole new purpose that should be explored further.

An article from Science describes an algorithm developed to predict heart attacks with an accuracy that exceeds that of doctors. To be more specific, four different machine-learning algorithms were used, and each scored between 74.5% and 76.4% on approximately 83,000 records, compared to 72.8% for the ACC/AHA guidelines [2]. The algorithms trained on 295,267 records, using record data available in 2005 to predict outcomes in the data available from 2015 [2]. During this analysis, the algorithms were allowed to take into account 22 more data points than the ACC/AHA guidelines, including ethnicity, arthritis, and kidney disease [2]. On the 83,000 test records, the best of the four algorithms correctly predicted 7.6% more events and raised 1.6% fewer false alarms than the ACC/AHA method.
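
To make that kind of comparison concrete, here is a minimal sketch using scikit-learn. Everything in it is hypothetical: the data is synthetic, the 22 features are stand-ins for the clinical attributes mentioned above, and the models are generic choices rather than the ones used in the study.

```python
# Hypothetical sketch: comparing several classifiers on patient-record-style data,
# loosely mirroring the study described above. Features and labels are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for demographic, lab, and history features (ethnicity, arthritis,
# kidney disease, etc. would appear as columns in the real data).
X, y = make_classification(n_samples=5000, n_features=22, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_test, scores):.3f}")
```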

Research is also starting to revolve around interpreting DNA with machine learning and AI. One approach being explored is a system that models the molecular effects of genetic variation [3]. The eventual goal of this technology is to provide individuals with personal insights based on their own biological dispositions [3].

Medical image diagnosis is another area where machine learning can excel. Microsoft’s InnerEye project, which started as early as 2010, is already doing work in this area. The InnerEye software can recreate a scan as a 3D model while pinpointing the extent of, and any growth in, tumors [1]. Such a tool can drastically reduce the time it takes to plan treatment for a patient.

With the proven results we have gotten from machine learning and AI, the conversation should no longer be about the ‘potential’ of the technology, but about how soon the technology should be included in daily practice. The technology has already proven its capabilities; doctors’ acceptance of it seems to be the biggest hurdle that needs to be overcome. Doctors are, in general, proud of their work, which is understandable given the effort they put forth and how critical that work is. Furthermore, as in any other profession, it is not pleasant to hear that a computer is better at your job than you are. Nevertheless, the goal is not to replace doctors but to assist them. At the end of the day, computers will always be able to process and maintain information at rates that humans cannot. Taking advantage of these processing capabilities would not only make doctors’ jobs easier, but could also save millions of lives.

Computers are of course already used to process data and help doctors, but the complexity of machine learning and artificial intelligence takes the scope to a whole new level. The healthcare business is already extremely complicated, and it is reasonable to expect that adding a feature few people completely understand could make matters more complicated still. I do not believe these algorithms should be given the authority to make the final decision on patient cases, or that they ever should have that authority. I do believe, however, that it is overdue for these proven technologies to be included in healthcare standards, where they could positively affect thousands of lives.

[1] https://www.techemergence.com/machine-learning-healthcare-applications/

[2] http://www.sciencemag.org/news/2017/04/self-taught-artificial-intelligence-beats-doctors-predicting-heart-attacks

[3] https://techcrunch.com/2017/03/16/advances-in-ai-and-ml-are-reshaping-healthcare/

[4] http://www.wired.co.uk/article/ibm-watson-ai-film-trailer


Mobile Machine Learning May 11, 2017

Posted by kristenkozmary in Machine Learning.

The emergence of smartphones within the last two decades has changed the way humans interact with one another. Providing new ways to connect and access information, the smartphone has become an integral part of life. New technologies have allowed phones to complete tasks that once could only be accomplished by supercomputers. Now, smartphones have the ability to learn without being explicitly programmed, otherwise known as machine learning.

Deloitte has predicted that over 300 million smartphones will have machine-learning capability in 2017 [1]. This will allow smartphones to perform machine learning tasks without having to be connected to a network. Machine learning tasks on smartphones include indoor navigation, image classification, augmented reality, speech recognition, and language translation [1]. Traditionally, machine learning on smartphones could only be done by collecting data on the smartphone, sending it to a data center for processing and training, and then sending the results back to the smartphone. Local machine learning cuts out the “middle-man” and processes the information directly on the smartphone, which increases the speed at which machine learning tasks can be done. Local machine learning is also more secure, because private user data doesn’t need to be sent off the device.
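
As a toy illustration of that local path (plain Python rather than any actual mobile framework, with invented weights standing in for a model downloaded once from a vendor), inference can run entirely on the device with no network round trip:

```python
# Toy illustration (not an actual mobile framework): once a trained model's
# weights are on the device, predictions can run locally with no network call.
import numpy as np

# Pretend these weights were downloaded once from the vendor's servers.
weights = np.array([0.4, -1.2, 0.7])
bias = 0.1

def predict_on_device(features: np.ndarray) -> float:
    """Run a tiny logistic model entirely on the device; user data never leaves it."""
    z = float(features @ weights + bias)
    return 1.0 / (1.0 + np.exp(-z))  # probability for some on-device task

# Example: sensor readings captured on the phone, scored locally.
reading = np.array([0.9, 0.2, -0.5])
print(f"local prediction: {predict_on_device(reading):.3f}")
```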

According to Business Insider Australia, technologies such as health and life predictions and detection of user moods and emotions are going to become more prevalent in 2017 due to machine learning [2]. The data collected could aid the mental and physical health of users by giving reports on well-being. Business Insider Australia also claims that machine learning on smartphones can help protect against cyberattacks, although specific details were not provided.

Google recently published an article about the capabilities of machine learning on smartphones [3]. In this article, Google introduces a new approach to machine learning called Federated Learning, which “enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud” [3]. The approach works by first downloading the current model to the smartphone. The phone then improves the model based on user data and “summarizes the changes as a small focused update” [3]. These updates only occur when the phone is plugged in and on a wireless connection, so they won’t affect the phone’s performance. The update is then sent to the cloud and aggregated with other users’ updates to improve the shared model. The researchers at Google created a Secure Aggregation protocol for aggregating user updates. This protocol uses cryptographic techniques and only allows the server to decrypt information if there is enough user participation, which means a small update from a single smartphone cannot be decrypted and read. The Federated Learning approach makes machine learning on smartphones faster because data doesn’t need to be sent to the cloud, and it also helps ensure privacy [3].
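
Below is a minimal federated-averaging sketch of the update-and-aggregate loop described above. It is my own toy version in plain numpy, not Google's implementation, and it omits Secure Aggregation entirely: each simulated "phone" trains a small linear model locally, and only the weight deltas ever leave the client.

```python
# Minimal federated-averaging sketch: each "phone" refines a shared linear model
# on its local data, sends only the small weight update, and the server averages
# the updates into the shared model. Data and model are invented for illustration.
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One client's training pass; returns only the delta, not the raw data."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w - global_w                          # the "small focused update"

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(10):                              # ten simulated phones
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for round_ in range(20):                         # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w += np.mean(updates, axis=0)         # server-side aggregation

print("learned weights:", np.round(global_w, 3))
```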

Machine learning on smartphones has become a possibility because of better performance in processing units. In April 2017, Google announced its custom ASIC for machine learning called the Tensor Processing Unit (TPU) [4]. Google boasts that “its TPUs are 15x to 30x faster than contemporary CPUs and GPUs” [4]. With this faster processing, we’ll soon see large increases in capabilities of machine learning on smartphones.

 

References

[1] https://www2.deloitte.com/us/en/pages/technology-media-and-telecommunications/articles/tmt-predictions.html

[2] https://www.businessinsider.com.au/machine-learning-in-your-smartphone-is-the-megatrend-of-2017-2017-1

[3] https://research.googleblog.com/2017/04/federated-learning-collaborative.html

[4] http://www.silicon.co.uk/cloud/datacenter/google-custom-ai-chips-208833

AI and Machine Learning, What Does It Really Mean? May 11, 2017

Posted by Anthony Mason in Machine Learning.

Before I enrolled in the Machine Learning seminar at Marquette I often saw the buzzwords Machine Learning (ML) and Artificial Intelligence (AI) being thrown around on the web. In some cases, these terms were used irresponsibly and utilized to simply generate hype amongst the masses. In other instances, many established experts in the field have gone to great lengths to discuss the realistic expectations and current progress that is being made in the sector. Now that I have completed this seminar I can confidently say that I am walking away with a solid understanding of what is realistically achievable in today’s machine learning and AI space. In addition, I think I have a better ability to gauge hype vs. reality when it comes to future machine learning and artificial intelligence advancements. With that being said, I have enjoyed learning about the topic and would like to speak to one article I found to be quite fascinating.

In one of our weekly discussions, our class explored the notion of an interlingua used by a Google AI translation system. Google is currently developing an AI language translation tool that is demonstrating the ability to interpret and understand relationships between words in different languages. Devin Coldewey highlights the power of Google’s system in his article, Google’s AI Translation Tool Seems to Have Invented its Own Secret Internal Language, explaining that the system can translate between two languages it has never been trained to pair by leveraging training data between known language pairs.

For example, Google’s system can be trained to translate English words to Korean words and vice versa. Then the system can be trained to translate English words to Japanese words and vice versa. Yet, fascinatingly, the system is capable of translating Japanese words to Korean words, or vice versa, without ever being trained to translate between those two languages. The article credits this ability to infer the relationship between Japanese and Korean words to the system’s ability to define its own internal representation, called an interlingua.
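
The article does not spell out the mechanics, but Google's published multilingual NMT work suggests one plausible setup: a single shared model is trained on all language pairs at once, with each source sentence prefixed by a token naming the desired target language, and zero-shot pairs are requested with that same token. A toy sketch of just the data preparation, with made-up romanized examples:

```python
# Toy illustration of a multilingual training setup (assumed, based on Google's
# published multilingual NMT work, not on the article itself): every source
# sentence is prefixed with a token naming the target language, and one shared
# model is trained on all the prefixed pairs together.
train_pairs = [
    ("<2ko>", "hello",      "annyeong"),    # English  -> Korean
    ("<2en>", "annyeong",   "hello"),       # Korean   -> English
    ("<2ja>", "hello",      "konnichiwa"),  # English  -> Japanese
    ("<2en>", "konnichiwa", "hello"),       # Japanese -> English
]

# The model only ever sees (tagged source, target) strings:
training_examples = [(f"{tag} {src}", tgt) for tag, src, tgt in train_pairs]
for example in training_examples:
    print(example)

# Zero-shot request: Japanese -> Korean was never in the training data, but the
# shared encoder/decoder can still be asked for it with the same tag mechanism.
zero_shot_input = "<2ko> konnichiwa"
print("zero-shot query:", zero_shot_input)
```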

What I found most interesting was the rationale behind this line of thinking. Simply speaking, the deep neural networks in Google’s system are able to contextualize the meanings of words, to a certain degree. Using this knowledge, the system can make sound decisions when translating unpaired languages by referencing the relationships it has learned between paired languages. This capability is absolutely jaw-dropping to me and makes me really excited for what the future holds.

If there is one thing I took away from this course, it is that machine learning and artificial intelligence applications are not just on the horizon; they are here, and they are here to stay. I am excited to continue reading about future advancements in this space.

Supervised Learning and Artificial Intelligence May 10, 2017

Posted by nikhil.j.tomy in Machine Learning.

Artificial Intelligence has gained tremendous momentum in recent history, whether through autonomous cars, the emergence of Amazon’s Echo, or applications that predict consumer behavior. However, the hype generated by emerging applications has further blurred the expectations and possibilities of Artificial Intelligence, and its ethical aspects have raised additional controversy. Regardless, the insights gained over the past five months have significantly shaped my view of Artificial Intelligence. The prospect of using it to address some of the major issues in the world today, such as cancer, and even to make better decisions, whether through supervised or unsupervised learning, makes Artificial Intelligence a technology with a high ceiling.

It was barely a decade ago that the frenzy over mobile devices gained traction in the technology world as Apple and Android competed with Blackberry to break into the market. This was significant, as it was probably the genesis of the concept of the Internet of Things. As more data became available, Artificial Intelligence was reshaped through supervised learning. For example, data gathered from mobile devices allowed Google Maps to provide real-time traffic updates. This was probably one of the most revolutionary mobile applications at one point, and truly an example of how much data could potentially be gathered. It also further shaped supervised learning: based on the input gathered, advertisements became more tailored to the user, and businesses were able to study consumer behavior more closely. In summary, long gone are the days when a user had to manually enter information for a model to take shape and provide insight.

Today, supervised learning continues to evolve, and image recognition is an area of heavy investment. This is largely due to the returns it could potentially offer, whether in regard to security, image search, or tagging. As discussed in the article Machine Learning Opens Up New Ways to Help Disabled People by Simonite, it is only a matter of time before, much like subtitles, every YouTube video automatically captions speech to text. Similarly, Facebook continues to invest in image recognition to automatically tag pictures. This could benefit people with autism, providing an opportunity for active training by clarifying misconceptions for autistic patients. [1]

The future of unsupervised learning is quite intriguing. Much research has been invested in this topic, with approaches including clustering, anomaly detection, and neural networks [2]. One future lies in Artificial Intelligence helping diagnose the risk of being stricken by cancer, whether through image detection or through genetics. For example, a recent article published by the BBC was titled Artificial Intelligence “as good as doctors.” The article discusses the potential of having smartphones act like cancer scanners. This was based on software developed by Google that was originally able to differentiate between cats and dogs; the repurposed application is now able to recognize some of the major types of skin cancer, such as carcinoma and melanoma. Similar AI initiatives are taking place that will be able to detect when the heart is likely to fail [3].

Finally, it is critical that the technology be used to benefit humanity rather than eventually replace it. Several highly regarded figures in science and technology have expressed concerns about the growing dependence on technology and the emergence of Artificial Intelligence. In addition, it is important to determine whether Artificial Intelligence is merely hyped or is practically becoming a force. It has made strides in academia and research over several years, and as it continues to gain traction and momentum in the eyes of the general public, it opens more doors of opportunity. In essence, time will be the judge.

[1] https://www.technologyreview.com/s/603899/machine-learning-opens-up-new-ways-to-help-disabled-people/?utm_campaign=add_this&utm_source=email&utm_medium=post

[2] https://en.wikipedia.org/wiki/Unsupervised_learning

[3] http://www.bbc.com/news/health-38717928

Why is Machine Learning Important to Clinical Medicine? May 10, 2017

Posted by Dawn Turzinski in Machine Learning.

What is wrong with me? Why am I sick? We have all been there, sitting in our chair trying to figure out what is going on with us. We search the internet looking for answers, but we end up even more confused than we originally were, or perhaps imagining the worst-case scenario. From there we end up in the doctor’s office listing all our symptoms, and as he or she pencils them down, a diagnosis may be formed. What if we could improve our healthcare professionals’ ability to establish a diagnosis? Improvements in this area will come from machine learning.

Anywhere you turn, you see machine learning. It is in internet search, learning your queries, pulling in suggestions, and surfacing them on Facebook. It is in the Google car, learning how to drive itself. It is on Amazon, learning your choices, showing what others who bought the same item also purchased, and recommending something else. Machine learning is giving computers the capability to learn through learning algorithms without being explicitly programmed to do so. So, how will we get there with clinical medicine?

Advances in computational power, big data, and the Internet of Things will bring change to many fields, including clinical medicine. For example, imagine sensors attached to your body, like a Fitbit, that collect data about your body and store it. A medical system could then request this information from the sensor and analyze it, running a machine learning tool to figure out what is going on with you and suggest a diagnosis. As another example, you could pull data about a patient from other health and claims databases, giving the learning predictors access to many more variables on which to base a prediction. With this data available, we might be able to predict cancer sooner rather than later, improve patient diagnosis, and help plan long-term care for someone who may be terminal. What is another reason machine learning is important to clinical medicine?
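
A hypothetical sketch of that wearable-sensor idea, with simulated readings and a simulated risk label (a real system would of course need genuine clinical data and validation):

```python
# Hypothetical sketch: features pulled from a fitness-tracker-style device feed
# a simple risk classifier. The data and the "diagnosis" label are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 2000
resting_hr  = rng.normal(70, 10, n)      # beats per minute
daily_steps = rng.normal(7000, 2500, n)
sleep_hours = rng.normal(7, 1.2, n)
# Simulated label: risk rises with high resting heart rate and low activity.
risk = (resting_hr > 85) & (daily_steps < 5000)
risk = risk ^ (rng.random(n) < 0.05)     # add a little label noise

X = np.column_stack([resting_hr, daily_steps, sleep_hours])
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, risk, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```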

Have you gone to the doctor and been sent down to radiology for X-rays, MRIs, or CT scans? What if those images were scanned in and read by a learning algorithm? Advances in computer vision applied to big data will result in faster performance and the ability to see things a human eye may miss. For example, if the machine knows precisely which anomaly causes a certain type of seizure, it can help prepare the treatment plan for the patient; it knows this from learning on many scans with the same reading and their confirmed diagnoses. Computers leave far less room for error.
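
A minimal sketch of what "a learning algorithm reading scans" might look like, assuming a Keras setup; random arrays stand in for real scans, and a production system would need expert-labelled data and regulatory review:

```python
# Minimal sketch of training an image classifier on labelled "scans".
# Random arrays stand in for real medical images here.
import numpy as np
import tensorflow as tf

# 200 fake grayscale scans, 64x64 pixels, labelled anomaly / no anomaly.
images = np.random.rand(200, 64, 64, 1).astype("float32")
labels = np.random.randint(0, 2, size=200)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=2, batch_size=32, verbose=1)
```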

Now, we have all been frustrated with doctors when they don’t know what is wrong, misdiagnose us, or send us for testing that was not needed. These errors trouble the medical field, which would like to reduce them, and doing so would also reduce the risk and liability of the provider. Machine learning algorithms would be able to generate possible diagnoses, suggest the tests that are actually needed, and reduce the over-ordering of unnecessary testing. So, you might be asking: when will this happen?

This will happen slowly over the next couple of decades, for many reasons. Things will start to change in baby steps, not all at once. Change is always hard for people who have been doing these repetitive tasks for years. For example, the people who read the X-rays will need to be retrained or moved to new professional positions, since their current role will be taken over by a machine. In addition, providers will need to grow and understand these complex machine learning tools to succeed and to understand their patients’ needs. Advances will continue to happen slowly, and machine learning is here to stay.

References:

  1. http://catalyst.nejm.org/big-data-machine-learning-clinical-medicine/

Machine Learning — The Leader of the Future May 10, 2017

Posted by imrbks in Machine Learning.

Machine learning is an interdisciplinary subject, drawing on statistics, approximation theory, and algorithm complexity analysis, as well as ideas from the biological sciences. The most common example, the artificial neural network, mimics the various types of neurons and connections that make up the functioning human brain. Another example is the genetic algorithm, which simulates the process of gene mutation.
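
As a tiny illustration of that second idea (my own toy example, not from the original post), a genetic algorithm keeps a population of candidate solutions, selects the fittest, and randomly perturbs them, mimicking mutation, until a good solution emerges:

```python
# Tiny genetic-algorithm sketch: a population of candidate solutions evolves
# toward maximizing a fitness function through selection and random mutation.
import random

def fitness(x: float) -> float:
    return -(x - 3.0) ** 2          # best possible value is x = 3

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mutation: each survivor produces a slightly perturbed offspring.
    offspring = [x + random.gauss(0, 0.5) for x in survivors]
    population = survivors + offspring

print(f"best solution found: {max(population, key=fitness):.3f}")
```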

Machine learning mainly studies how a computer can model, learn, and reason the way humans do; its algorithms use feedback from new problems to refine the knowledge they have already acquired. The quality of an artificial intelligence system varies with how well its machine learning algorithms are selected and developed, which is why so many software and hardware developers are working in this field.

How machine learning is done, and to what effect, will be critical to the development of mankind in the near future; at the very least, highly efficient machine learning algorithms can replace repetitive labor in some circumstances. Machine learning combined with artificial intelligence is therefore very promising, especially in the era of big data: more and more data related to human behavior and the natural sciences is being recorded, such as language, sound, and video from social media, bioinformatics data, and weather and environmental data from organizational databases, all of which serves as material for algorithms to learn from. In addition, results produced by machine learning algorithms have already been successfully applied in many areas, including data mining and natural language processing.

A small mosquito in nature can fly freely; such efficient mechanisms are everywhere in the biological world, yet human beings cannot build a machine that efficient. Although we have made breakthroughs in machine learning research, we still have not reached true artificial intelligence, and many problems still require research and development; the future is therefore full of opportunities as well as challenges. One example is hardware architecture that boosts the speed of machine learning algorithms: NVIDIA, thanks to the unique architecture of its chips, is extremely good at handling machine learning problems in parallel, achieving speedups that have made it an industry standard for machine learning. More models and algorithms are being born to handle ever more complicated problems. I believe that with the development of machine learning, we will eventually achieve artificial intelligence that will change the world.

On the other hand, machine learning has become cheaper and easier, and it is coming to every one of us. Deep mathematics and algorithm design are no longer necessary, because the most common algorithms are readily available: Microsoft Azure, for example, contains machine learning packages that developers can use to analyze databases collected from customers. The same models also appear, simplified, as AI packages in software such as SAS and MATLAB; a few lines of code can be worth a thousand words. Machine learning engines are also coming to smaller devices, such as smartphones and smartwatches. For instance, Mito is a photo-beautification app [1]. Many smartphones can automatically beautify a user’s selfies, but the Mito M8’s smart selfie technology goes further. It is not just simple face recognition but portrait recognition: the former identifies your gender, age, and facial features, while the latter can also identify your body, contours, and the lighting of the shot, and then use your skin tone, hair, and overall shape to build a more accurate profile, producing a customized beautification program for each user, a different look for each of a thousand faces. More successful examples can be found here [2].

In sum, in the era of big data, machine learning is seeing tremendous growth, and the importance of individual intuition will decrease. Machine learning builds on large data sets that summarize regular events in the human world, and it picks out valuable information from complicated data in order to produce reliable predictions about the future.

[1] http://news.zol.com.cn/638/6387876.html

[2] http://www.ithome.com/html/next/290924.htm

Machine Learning Impacting our Future Lifestyle May 10, 2017

Posted by banabithibose in Machine Learning.

We are living in an age where many of our tasks are intelligently handled by powerful machines. We have already seen repetitive tasks replaced by machines; now we are entering the age when machines actually think. Machines are being developed not merely to follow orders but to decide which orders to follow. Machine learning gained a lot of hype during the 60s and 70s with the advent of computers, and many of our forefathers thought machines would completely take over human jobs. That hype failed to live up to expectations because of the limited processing capabilities of the time, and our knowledge of the computer was little. Today, however, we have gained knowledge and access to high-powered computing, from supercomputers and graphical processing units to cloud computing, neural networks, and smartphones, and we have reason to believe the hype raised four decades ago will come true. It is no longer just hype; it is soon to become reality.

We have already started to experience some of the benefits of machine learning. Face and voice recognition enable translation tools that are breaking down the barriers between languages. These tools are now available to the masses, which means that large-scale computing is no longer prohibitively expensive. Our lifestyles are already changing with the introduction of smart devices that not only do their job but are also able to reason about what they are doing.

Large-scale unsupervised learning, whereby a machine learns from its past experience, is now possible because graphical processing units can perform massive computation in a short period of time. In some cases, supervised or semi-supervised learning with labeled datasets enables learning much faster. Research in these fields is enabling scientists to tap the full potential of machine learning.

With machine learning in full bloom, we should see many changes in the near and far future. One of the major impacts will come in healthcare, a field where we have huge amounts of past data for diagnosing future disease. We need to use this data more efficiently to make effective decisions. Today we feed our vital data to machines to detect metabolic issues; this will extend to getting advice from machines about critical diseases.

Another foreseeable change will come in the automobile industry. With the advent of self-driving cars, we will have fewer losses on the roads and a more reliable mechanism for transportation. Future generations will have cars that drive safely without their owners ever learning to drive, reducing hazards.

Machine learning will come full circle when smart devices begin communicating with each other. We already control some of these devices from our phones; if the machines start communicating among themselves, that control can be managed centrally and efficiently. Our lifestyle will change when we no longer have to do the boring, repetitive household chores. Instead we will have more time to spend with our families and to grow.

But with all this good, we need to understand that this is a giant that must always be tamed for the benefit of mankind. If it is unleashed or falls into the wrong hands, it can create havoc. We need to take the utmost care in making it a secure mechanism, to protect our beautiful world.

Predictive Analysis in Healthcare May 9, 2017

Posted by francinearchie82 in Machine Learning.

Predictive analysis is a growing trend that deserves all the hype it receives. By leveraging predictive analysis for decision support, important decisions can be made based on previously undiscovered patterns within data sets. In healthcare, machine learning could benefit predictive analysis by helping organizations manage the populations they serve.

Consider an 18-year-old patient who has electronic health records spanning their entire life; now multiply that by all the 18-year-old patients with electronic health records in a single healthcare organization, and you can imagine what a rich data set there is to review. From birth to 18, in addition to age, you can track patient attributes such as weight and height, demographics such as address, location, and salary, frequency of visits to a primary care physician, proximity to the healthcare facility, and medical conditions such as high blood pressure and diabetes. With a data set that rich, predictive analysis can do two things. First, it can predict patient outcomes, such as medical conditions that co-occur with frequent item sets like high weight and low primary care physician visits. Second, using the confidence of such associations, it can estimate how many of those patients actually visit their primary care physician annually and, if a pattern emerges, motivate a study to promote annual doctor visits or even the building of a clinic in areas of concentrated need. A study like this is a way to collectively manage the patient population to keep it healthy, and to be proactive about healthcare rather than treating patients in facilities or departments that could have been avoided, like the emergency room or urgent care.
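
A hand-rolled sketch of the frequent-itemset idea mentioned above, with a handful of made-up patient records; libraries such as mlxtend implement Apriori at scale, and this is only meant to show how support and confidence are computed:

```python
# Find attribute combinations that co-occur often in (made-up) patient records
# and measure the confidence that one attribute implies another.
from itertools import combinations

patients = [
    {"high_weight", "low_pcp_visits", "high_blood_pressure"},
    {"high_weight", "low_pcp_visits"},
    {"high_weight", "high_blood_pressure"},
    {"low_pcp_visits", "diabetes"},
    {"high_weight", "low_pcp_visits", "diabetes"},
]

def support(itemset):
    return sum(itemset <= p for p in patients) / len(patients)

items = sorted({attr for p in patients for attr in p})
for a, b in combinations(items, 2):
    s = support({a, b})
    if s >= 0.4:                                  # frequent pair threshold
        confidence = s / support({a})
        print(f"{a} -> {b}: support={s:.2f}, confidence={confidence:.2f}")
```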

Algorithms can learn and build models to aid in making intelligent decisions without human intervention. Decision support would benefit greatly from rich data, and healthcare could then become personalized. Many real-world efforts are already underway, from clinical genomic analysis to neuroscience to drug-resistant tuberculosis, and they will all contribute to the growing field of population health and managed care. Also on the radar are projections of what could be coming, such as health insurance agencies billing based on patient attributes derived from predictive analysis.

Source:

“Cognitive Analytics and Solutions.” IBM. https://www.research.ibm.com/haifa/dept/vst/mldm.shtml

“Machine Learning and Health Care mean $6M for Prdilytics.” Harris, Derrick. 5 Sept. 2012. Gigaom. https://www.research.ibm.com/haifa/dept/vst/mldm.shtml

AI and Translation May 8, 2017

Posted by 0687peis in Machine Learning.

This seminar discussion course is my last course at Marquette University. It has been a very pleasant journey discussing topics related to machine learning and AI. Discussion in this course covered different aspects of AI, from technology to ethics. I received a lot of inspiration from my classmates’ viewpoints; with them I expanded my vision and learned to think differently.

Before I came to Marquette University, I had eight years of experience as a professional translator. In the past two years, Google and Microsoft have both released a series of new technologies applying AI to language translation. In my opinion, AI translation is a very powerful invention. It can greatly increase the accuracy and efficiency of translation, remarkably improve user experience, and support cross-cultural communication. On the other hand, AI translation can be a double-edged sword, and a typical example of AI-caused human unemployment. With more automated and precise machine translation, human translators will no longer be needed.

Language is a human-specific communication tool. Given enough computing resources, a computer can identify every word you speak, which is a historic breakthrough and a major milestone in the perception abilities of artificial intelligence. [1] Last December, Microsoft released a new feature in Microsoft Translator that supports real-time translation across more than 100 languages. People from different countries can communicate with each other using the app: say a word or enter a text, and the output is the other person’s mother tongue. [2] Google Translate uses a neural network to translate images in real time. When you point your mobile phone camera at a road sign or a restaurant menu, the software recognizes the text in the image through the neural network and covers the original text with the translated version, keeping even the font and size the same. [3]

Due to the progress of machine translation technology and the popularity of foreign language education, the value of translation has been greatly weakened. In fact, the impact of machine translation on the translation industry will be fatal, and ordinary translators who undertake simple translation tasks that do not require high accuracy will be completely replaced by machines.

AI could cost a lot of translators their jobs, but looked at from another angle, it liberates translators from heavy work and gives them more free time to enjoy life. Based on my personal experience, if I had to choose between better-paid, heavy, hard manual translation and easy, lower-paid, machine-assisted translation, I would prefer the latter, because it saves me time that I can spend learning things to advance my career, which is more promising than sticking with exhausting translation forever.

[1] http://it.sohu.com/20170421/n489975326.shtml

[2] http://news.sciencenet.cn/htmlnews/2017/4/373479.shtm

[3] http://www.sohu.com/a/132433344_361833?_f=v2-index-feeds

 

Machine Learning and Cybersecurity May 6, 2017

Posted by lacallag in Machine Learning.

“Security is an arms race, and cybercriminals are fine-tuning their methods with the help of machine learning.”  – Eric Peterson, Intel Security [1]

Like other industries, cybersecurity developers are looking for ways to capitalize on machine learning (ML) to make their tools more efficient, and cybercriminals are doing exactly the same. Whether it’s performing sophisticated target-selection analyses or monitoring a network’s traffic patterns to learn how to blend in, cybercriminals are just as busy as security experts when it comes to the next evolution of online attacks.

One strategy is to make ML algorithms misbehave. This can be accomplished in a few different ways:

  1. Cause the ML algorithm to mislabel events by feeding it customized examples, which will alter the trained model’s determinations,
  2. Find bugs in the code and attack the ML implementation, or
  3. Do a black box attack to trick the ML without knowing its architecture. [2]

Adversarial machine learning attacks such as these are hard to defend against and can create a blind spot within the trained model that can be exploited.
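
To make the general idea concrete, here is a toy adversarial-input example (my own illustration, not drawn from the cited articles): given a simple trained detector whose gradient is known, nudging an input against that gradient can push a malicious sample below the detection threshold.

```python
# Toy illustration of an adversarial (evasion) attack: nudge an input in the
# direction that increases a trained model's loss so it gets mislabeled. The
# model and data are invented; real attacks target far more complex systems.
import numpy as np

# A "trained" logistic-regression detector: w.x + b > 0 means "malicious".
w = np.array([1.5, -2.0, 0.8])
b = -0.2

def score(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([1.0, -0.5, 0.6])          # a sample currently flagged as malicious
print("before attack:", round(score(x), 3))

# Fast-gradient-style step: the gradient of the score w.r.t. the input is
# proportional to w, so stepping against sign(w) pushes the score down.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)
print("after attack: ", round(score(x_adv), 3))
```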

Business email compromise (BEC) scams are thought to use ML to target CEOs, CFOs and others who hold positions of financial responsibility within companies. An array of data from the public domain is gathered (e.g. SEC filings, press coverage, Facebook) and correlations can be assessed (e.g. between social media and employee departures, quarterly reports and travel, stock price and volume of network traffic) for inclusion into the ML model, which will determine optimal targets and when to approach them. Once targeted, social engineering is used to trick the mark into making a fund transfer to a fraudulent account. The FBI estimates that more than $3 billion has been stolen through BEC scams. [3]

Darktrace, an ML cybersecurity company, has also seen attacks where intruders are able to breach a network and then use ML to rapidly learn how the network and its users behave. Once it puts together the network’s profile it is able to use the background noise of the network as camouflage and virtually disappear. “Had we not used our own machine learning to spot it quickly, it would never have been detected.” [4]

Luckily cybersecurity firms are equally hard at work figuring out ways that ML can detect these kinds of attacks, so that responses can be deployed quickly to mitigate their effects. The ability to respond in real-time will become increasingly important as the window of detection shrinks, as will information sharing within and between industries to identify emerging threats.

References:

[1] Eric Peterson, “Machine learning accelerates social engineering attacks,” 2017 Threats Predictions, McAfee Labs, November 2016 https://www.mcafee.com/us/resources/reports/rp-threats-predictions-2017.pdf

[2]  Karen Epper Hoffman, “Machines learning evolves, and hackers stand to gain,” GCN 6 April 2017
https://gcn.com/articles/2017/04/06/adversarial-machine-learning.aspx

[3]  Tara Seals, “McAfee: Machine Learning a Key 2017 Tool for Socially Engineering Hacks,” Infosecurity Magazine 29 November 2016
https://www.infosecurity-magazine.com/news/mcafee-machine-learning-a-key-2017/

[4]  Ben Rossi, “Robot wars: the British company at the heart of the news security landscape,” Information Age 3 January 2017
http://www.information-age.com/robot-wars-british-company-heart-new-security-landscape-123463822/

The Future Ahead With ML May 6, 2017

Posted by Rani Sebastian in Machine Learning.

“Machine learning is the next internet” -Tony Tether, Former Director, DARPA

This field of Artificial Intelligence, where we get computers to program themselves, is now making the news almost daily with novel findings and advancements in everyday applications. I don’t even need to make a list of applications of ML to show how big a part it plays in our lives now.

ML is significant because it is changing the way we look at problems, giving us new insights about the problems we are trying to solve and, in the process, uncovering brand new applications as well. These applications can be extended further, either on their own or in combination, and applied to completely unrelated fields. Technological advancements in general, and in other fields, are also driving further development of ML. New strategies for ML are being researched in both hardware and software. Hardware improvements in GPUs and other processors make large-scale deployment of ML a reality, while the latest ideas in software give us more options to explore on that side.

Personally, I am not a fan of superintelligence or AGI; nothing against it, but I still feel that a human-machine combination will be much more powerful. “The repeated failure of autonomous cars has made one point clear – that even learning machines cannot surpass the natural thinking faculties bestowed by nature on human beings. If autonomous or self-guided machines have to be useful to human society, then the current Artificial Intelligence and Machine Learning research should focus on acknowledging the limits of machine power and assign tasks that are suitable for the machines and include more human interventions at necessary checkpoints to avert disasters.” [1] We may not even reach the level of AGI in the immediate future, but the results we already have show that we must invest in ML research with reliable data and wise integration in mind. I also think it is important that we channel the ultimate goals of AI/ML into broadly beneficial applications rather than concentrating on corporate profit motives. Employed for the “right” purposes, AI and ML can be powerful tools.

AI ethics is a topic in itself, but it is time we addressed some of the ethics and accountability issues, for change is coming at a rapid pace. Many well-meaning implementations have backfired, like Microsoft’s chatbot Tay, which had to be taken offline after it ‘learned’ to be racist. The dangers of AI are the other side of the coin, and by ‘danger’ I do not mean only humans losing their jobs to AI. Many prominent figures, including Stephen Hawking, have warned about the dangers of superintelligence. Autonomous cars have already proved not completely immune to infiltration; will the same happen with autonomous weapons? Will AI be abused by governments or corporations to stifle conflicts? Research communities work best when they include people with different views and different sub-interests, and AI research must stay grounded in reality. [2] My personal viewpoint is that we should focus on beneficial ML applications that are easily accessible to the common man. There is immense potential for ML in preventive health care, assistive applications, banking, and so on that could make the world a better place. That is also another reason why ML/AI should be democratized and not vested in a single corporation.

Hype aside, the road ahead for ML looks bright. With current technology trends, we can expect cheaper but more powerful hardware, memory, and storage in the coming years. Advancements in cloud, IoT, and smart apps will only pave the way for a steady increase in ML applications. And that is exactly why machine learning is important: because of the endless possibilities it holds for us.

“A breakthrough in machine learning would be worth ten Microsofts” – Bill Gates

 

References

[1] http://www.dataversity.net/2017-machine-learning-trends/

[2] https://www.prospectmagazine.co.uk/britishacademy/the-ai-debate-must-stay-grounded-in-reality

Why Machine Learning is Important May 4, 2017

Posted by kevinmea in Machine Learning.

The first effect of machine intelligence will be to lower the cost of goods and services that rely on prediction. This will affect transportation, agriculture, healthcare, energy, manufacturing, and retail.  When these costs fall precipitously, there are two other well-established economic implications. First, we will start using prediction to perform tasks where we previously didn’t. Second, the value of other things that complement prediction will rise. [1]

This excerpt from the Harvard Business Review demonstrates why machine learning is important.

First, most companies compete on cost, and removing cost from a service is one of every business’s main goals. Second, embedding predictive analytics in machines reduces uncertainty for a business. Finally, using a precise machine rather than a human to perform a task reduces risk. These are all value propositions that should lead businesses in any industry to consider implementing machine learning.

The effects of machine learning are also clearly widespread, touching multiple if not all industries. The rise of home computers touched all industries and look how that revolutionized the world. We are living in the information age because of that. When machine learning becomes entrenched in just as many industries due to the positive business benefits, it might be fair to consider that to be the next age.

Safety and opportunity are the final considerations that will complement machine learning prediction. Most of the recent media coverage the average person sees regarding machine learning concerns manufacturing and the replacement of human workers. Machine learning can now be programmed into robots to perform repetitive tasks that used to be done by a human; sorting or picking machines are one example. The fact is that manufacturing is often a dangerous job for a human: there were approximately 2.9 million nonfatal workplace injuries reported by private-industry U.S. employers in 2015. [2] Having machine learning and predictive analytics replace human workers removes the human from the risk of injury and from a dangerous environment, and offers new opportunities for humans to work on the machines and the machine learning software rather than directly on product manufacturing.

References:
[1] https://hbr.org/2016/11/the-simple-economics-of-machine-intelligence
[2] https://www.bls.gov/news.release/pdf/osh.pdf

The Near and Far Future of AI April 30, 2017

Posted by omarabed15 in Machine Learning.

We all have a preconception that forms in our minds when we hear “AI”. Numerous pop culture movies (my favorite being “I, Robot” with Will Smith) create a sensationalized idea of the role AI will play in our future. However, I believe machine learning has a long way to go before we achieve full technological consciousness (although I do believe it is somewhere on the distant horizon).

We are currently seeing the smooth and steady progression of AI in everyday applications. The rapid advances in smartphone technology over the last 10 years present a perfect use case for this technology. The immense amount of content created by billions of social media accounts gives humanistic AI a vast set of training data. In contrast to training off specific and tailored data sets, machine learning meant to imitate humanistic consciousness can use this broad data to learn and think like a real person. This, in my opinion, is the true value of the future of AI. Specialized applications will always be used by corporations to automate and improve operations. But at a consumer level, I think the real need will be for AI to satisfy a need for human interaction, rather than accomplish any specific or ‘useful’ task. Relationships between people are changing, as social media interactions alter the way we have in-person interactions. As people trade in-person interactions for online interactions, I think AI will be able to fill a social void for many people.

This thought definitely invokes ideas of the movie “Her”, but I don’t think that movie has sensationalized anything to any extremes. AOL chatbots have existed for over 15 years, and provided people a fun way of interacting online with a robotic “person”. Since then, AI has progressed to allow people to have online relationships with robotic personalities. Perhaps we are already living in the future depicted by “Her” (or are a few years away), but we simply don’t notice because we don’t see our lives sensationalized in a movie.

In order for AI to make use of the vast amounts of data online to simulate a human personality, I think neural networks will play a prominent part. If you want to train a machine to think and act like a person, give it a “brain” like a person’s and train it using human data. And though large corporations have the resources to start us down this path (think Google), I think a fundamental limitation of AI’s progress is the fact that its advances reside in the hands of large corporations.

We interact with a few large corporations online and create data for these corporations, but then that data is hoarded by the companies and used for their own purposes, perhaps to create AI algorithms that exploit the users through advertisements and such. However, the future of technology belongs to those entrepreneurs with ideas and intellect to accomplish amazing feats, but who perhaps don’t have the resources or funding of a large corporation. I think opening the AI realm to individual contributors would vastly speed up the development of AI. A good way of leveling the playing field, in my opinion, would be to open the public/non-sensitive data to the world through a series of APIs. If Google, Facebook, Twitter, and more would provide their private data to the world through open interfaces, users all over the world could produce advances in the AI field.

Unfortunately, I don’t think this is likely to happen by the decision of the data owners; the data on their users is far too valuable to give away for free. Perhaps governments could establish laws around opening data access to public consumers, or maybe a third-party contributor will enter the playing field and find a way to provide these mass amounts of data to the public. Regardless of how it happens, I think the public is in a bit of danger if the future of AI resides in the hands of large corporations like Google and Facebook. In the end, we (the consumers) rely on the services provided by Google and Facebook, but we do not pay for these services; we are the revenue stream, and that is how most companies view their users. If that is the case, AI will be used by these corporations to reinforce that relationship. Putting this power in the hands of users instead, to benefit the “common man,” is the major paradigm shift that needs to take place to push AI to the next level.

 

Defining Ethics with AI April 28, 2017

Posted by tleamon in Machine Learning.

One of the biggest takeaways I’ve had from the discussion of AI this semester is how the focus of AI is ever evolving and changing. With each change, AI progress seems to have peaks and valleys, each followed by corresponding overhype about what is to come. Regardless, progress is still being made and we continue to benefit. The scope of potential AI applications has increased as well, and with neural network computing [1] able to support more complex algorithms, the feasibility of these applications is increasing.

To make these applications and algorithms more helpful for users and to increase the number of use cases, data is being collected from a variety of sources to refine existing and future models. However, the means of collecting this data has raised ethical issues for the users of these services. Ethical issues have also been raised regarding how AI will make human-like decisions at important forks in the road. In the future, AI will have the ability to make decisions in many fields, some of which could put a person in danger [2].

AI advancements have been made in the medical field and in driverless cars, for example. Based on a patient’s symptoms, AI could have the ability to recommend a medication, a procedure, or another type of solution. As a result, there are also potential dangers in relying solely on AI for recommendations, even if that statement itself undercuts the purpose of AI. Artificial Intelligence aims to provide a level of intelligence that rivals humans, but if we can’t trust the input of AI, then what’s the point of using it?

Another similar situation is happening with driverless cars. Artificial Intelligence is being developed by several companies for controlling a car to drive from a Point A to a Point B safely. During the drive, if the AI encounters a situation where a decision must be made and two separate parties are equally likely to be put in danger, then how will the AI choose which decision to make?

These two scenarios echo my thoughts on the current feasibility of unsupervised learning and of AI as a whole. While there are many applications to be integrated with AI, the output these algorithms provide needs to be screened by humans initially. This screening could itself be automated, against a set of ethical rules that have been agreed upon. I’m not sure of the scope of AI in the near future, but if these same algorithms can be expected to learn from their own input data and revise their own logic, then I personally would have more confidence in a future of less supervised learning.

References

[1] Vorhies, William. “Beyond Deep Learning – 3rd Generation Neural Nets.” Data Science Central. N.p., 4 Oct. 2016. Web. 28 Apr. 2017. <http://www.datasciencecentral.com/profiles/blogs/beyond-deep-learning-3rd-generation-neural-nets>.

[2] Bossmann, Julia. “Top 9 Ethical Issues in Artificial Intelligence.” World Economic Forum. N.p., 21 Oct. 2016. Web. 28 Apr. 2017. <https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/>.

Machine Learning April 25, 2017

Posted by Marquette MS Computing in Machine Learning.

Various forms of Machine Learning (ML) are becoming very important. Students have been reading about ML with an emphasis on unsupervised learning.