
Machine Learning (ML), along with the Internet of Things (IoT), looks set to be the next big revolution in science and technology. AI experts are debating what makes machine learning so remarkable today, and are trying to predict how ML will evolve and shape the future.

The ability to feed a machine large amounts of data, so that it can learn the concepts and rules needed to address specific categories of problems, is a critical part of AI development.

In 1959, the term 'machine learning' was coined by Arthur Samuel, a pioneer of artificial intelligence. Since then, the concept has been regarded as a central field of AI, one that promises many changes in our society.

The main idea behind this innovation was to build a system that learns from complex data, without being explicitly programmed for a particular task. Following this principle, scientists have used advanced mathematical models to define an architecture and then fed the system large quantities of data during the training process, which, in turn, adjusts the parameters of that architecture.
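This idea of learning by adjusting parameters, rather than by hand-written rules, can be illustrated with a minimal toy sketch (not drawn from the article): a single weight is nudged by gradient descent until the model recovers a hidden rule from example data.

```python
# Minimal sketch: the model "learns" by adjusting its parameter w to fit
# the training data, instead of having the rule programmed in directly.

def train(samples, lr=0.01, epochs=200):
    """Fit y = w * x by repeatedly nudging w against the error gradient."""
    w = 0.0  # initial parameter, knows nothing about the data
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y      # prediction error on one example
            w -= lr * error * x    # gradient step on the squared error
    return w

# Training data generated by the hidden rule y = 3x; training recovers it.
data = [(x, 3 * x) for x in range(1, 6)]
w = train(data)
print(round(w, 2))  # close to 3.0
```

The same loop, scaled up to millions of parameters and examples, is the essence of how the architectures described above are trained.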

In this way, systems that exhibit intelligent behavior have been created. Such systems can be applied to problems across many domains, including healthcare, financial services, transportation, and manufacturing.

Face recognition is one example of a system developed through machine learning. Specifically, the machine is trained to recognize a person from one or more facial characteristics (e.g., nose, eyes) used as key parameters. The system is trained on millions of photographs, shaping the model so that it can categorize each new photo based on these features.
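The core idea can be sketched with a deliberately simple, hypothetical example: each face is reduced to a small feature vector (here, invented measurements such as eye distance and nose width), and a new photo is labeled by whichever person's average training features it lies closest to. Real systems learn far richer features from millions of photos.

```python
# Toy nearest-centroid recognizer over hypothetical facial measurements.

def centroid(vectors):
    """Average feature vector for one person's training photos."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def recognize(face, people):
    """Return the name whose centroid is nearest to the given face vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(people, key=lambda name: dist(face, people[name]))

# Hypothetical training photos: [eye_distance, nose_width] per photo.
training = {
    "alice": [[6.1, 2.0], [6.3, 2.1], [6.2, 1.9]],
    "bob":   [[5.0, 2.8], [5.2, 2.7], [5.1, 2.9]],
}
people = {name: centroid(photos) for name, photos in training.items()}
print(recognize([6.2, 2.0], people))  # "alice"
```

Adding more training photos per person sharpens each centroid, which mirrors how the accuracy of real recognizers improves with more labeled data.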

A typical application domain is the airport, where a system identifies every passenger based on their facial characteristics. Additional information lets the system know the individual's preferences, so that the appropriate service can be provided. This further enables personalized interaction, as the system recognizes preferences such as the tastes of individual passengers.

This is the view of Ian Massingham, global head of technical and developer evangelism at Amazon Web Services. He said, “These kinds of services end up playing a role in the decision support or in-service support. A machine can't go up to you and warmly say hello, nice to meet you and smile. So, human beings will have that stuff, but they'll be better informed because they will be using AI and machine learning as part of the customer services workflow.”

Experts weigh in on the question – how far away is artificial general intelligence? (Source: The Artificial Intelligence Channel)

What’s So Special about Machine Learning?

Other technologies where machine learning plays a decisive role include speech recognition and natural language processing. Today, smartphone users can interact with virtual assistants such as Siri or Google Assistant.

In these technologies, the first stage of processing converts audio into text. Natural language processing is then applied so that the content of the text becomes understandable to the machine. To achieve this, scientists and ML experts build complex algorithms and train them on vast amounts of data.
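The two-stage pipeline described above can be sketched as follows. This is a hedged toy illustration: the transcriber is a stub standing in for a trained speech-to-text model, and the "NLP" step is a deliberately simple keyword-to-intent lookup rather than a learned model.

```python
# Toy sketch of an assistant pipeline: audio -> text -> intent.

def transcribe(audio):
    """Stand-in for a speech-to-text model; assumes the audio is already
    transcribed and merely normalizes it."""
    return audio.lower()

def parse_intent(text):
    """Toy NLP step: map keywords to intents an assistant might act on."""
    intents = {"weather": "get_forecast", "play": "play_music", "call": "make_call"}
    for keyword, intent in intents.items():
        if keyword in text:
            return intent
    return "unknown"

def assistant(audio):
    return parse_intent(transcribe(audio))

print(assistant("What is the Weather today"))  # "get_forecast"
```

In a production assistant, both stages are replaced by models trained on large speech and text corpora, but the overall flow, transcription followed by language understanding, is the same.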

Similarly, Google researchers have, according to a report, managed to isolate the voice of a specific speaker in a large crowd. Microsoft has also worked on an application in which a user can speak in multiple languages while the virtual assistant continues to interact with them. At the same time, smartphones now carry dedicated ML chips that enable on-device AI.

Science-fiction scenarios are coming to life because AI has grown exponentially, with ever more complex algorithms being created every day!

According to the professional services company Deloitte's Technology, Media and Telecommunications Predictions 2018 report, the pace of advancement in machine learning is so rapid that, in 50 years, today's developments will be considered baby steps. The report also states that the number of ML projects will double compared with previous years, and that spending on cognitive and artificial intelligence systems will jump 54.2 percent year-on-year in 2018, to $19.1 billion.

A report from McKinsey & Company predicted that, by 2030, more than 800 million workers could be replaced by robots. One possible response would be to create new, high-quality jobs and to train individuals appropriately for such positions.

AI is the future. More broadly, this field will remain a key spending area for companies, according to the International Data Corporation. The next step is artificial general intelligence, something of a "holy grail" for many artificial intelligence researchers.

If, in the long run, a machine becomes able to perform every intellectual task that a human can, we will have met the greatest challenge of our generation.

A single AGI would be able to accomplish a variety of cognitive tasks, much as a person does. We may be close to this step, and a bright future is in our able hands.

Top Image: Robot running. Researchers predict that artificial intelligence will reach new heights by learning without being programmed. (Source: Pixabay)

References

Saheli Roy Choudhury, 2018. Machines will soon be able to learn without being programmed. [Online] Available at: https://www.cnbc.com/2018/04/17/machine-learning-investing-in-ai-next-big-thing.html

Brian King, 2018. Could a Robot be Conscious? [Online] Available at: https://philosophynow.org/issues/125/Could_a_Robot_be_Conscious

Future of Life Institute, October 23, 2017. Understanding Artificial Intelligence – An interview with Hiroshi Yamakawa. [Online] Available at: https://futureoflife.org/2017/10/23/understanding-agi-an-interview-with-hiroshi-yamakawa/

Nikos Dimitris Fakotakis

Fakotakis Nikos Dimitris received his BSc-MEng in Computer and Information Engineering from the Polytechnic School of Patras, Greece, in 2015. He is currently a PhD student in the Wireless Communication Laboratory of the same department. His research interests lie in the fields of Artificial Intelligence and Human-Computer Interaction. In parallel with his studies, he has worked as a computer and network engineer, database administrator, and software developer (Java, Python, etc.).
