Artificial Intelligence (AI)
Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
What is artificial intelligence?
Although a number of definitions of artificial intelligence (AI) have emerged over the past few decades, John McCarthy offered the following in a 2004 paper: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI need not be limited to methods that are biologically observable."
However, decades before this definition, the conversation around artificial intelligence began with Alan Turing's landmark 1950 paper, "Computing Machinery and Intelligence." In this paper, Turing, often referred to as the "father of computer science," asks the question, "Can machines think?" From there, he offers a test, now known as the "Turing test," in which a human interrogator tries to distinguish between a computer's text responses and a human's. While this test has come under much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, since it draws on ideas from linguistics.
In its simplest form, artificial intelligence is a field that combines computer science and robust data sets to enable problem solving. It also encompasses the subfields of machine learning and deep learning, which are often mentioned in conjunction with artificial intelligence. These disciplines are composed of AI algorithms that seek to create expert systems that make predictions or classifications based on input data.
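As a concrete sketch of "predictions or classifications based on input data," consider the following minimal example; scikit-learn and its bundled iris data set are illustrative choices, not tools named above:

# A minimal classification sketch: train a model on labeled examples,
# then predict the classes of unseen data (scikit-learn is assumed
# here purely for illustration).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # 150 flower measurements, 3 species
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

The same pattern, fit on known data and predict on new data, underlies most of the applications discussed later in this article.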
Today, there is still a lot of hype surrounding the development of AI, as is expected of any emerging technology. As noted in Gartner's Hype Cycle, product innovations such as autonomous vehicles and personal assistants follow "a typical progression of innovation, from over-enthusiasm, to a period of disillusionment to an eventual realization of the innovation's relevance and role in a market or domain." As Lex Fridman pointed out in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.
Deep learning vs. machine learning
Since deep learning and machine learning tend to be used interchangeably, it is worth noting their differences. As mentioned above, both are subfields of artificial intelligence, and deep learning is actually a subfield of machine learning.
Deep learning is built on neural networks. The "deep" refers to a neural network composed of more than three layers, including the input and output layers; such a network can be considered a deep learning algorithm.
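As a rough sketch of what "more than three layers" can look like in code (Keras is assumed here, and the layer sizes are arbitrary illustrations):

# A small "deep" network: an input, three hidden layers, and an
# output layer, which exceeds the three-layer threshold described above.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),                     # e.g. a flattened 28x28 image
    keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    keras.layers.Dense(64, activation="relu"),     # hidden layer 2
    keras.layers.Dense(32, activation="relu"),     # hidden layer 3
    keras.layers.Dense(10, activation="softmax"),  # output: 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()

Production networks often stack dozens or hundreds of layers, but even this small model already qualifies as "deep" under the definition above.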
The difference between deep learning and machine learning is in how each algorithm learns. Deep learning automates much of the feature extraction phase, eliminating some of the manual human intervention required and enabling the use of larger data sets. Deep learning can be thought of as "scalable machine learning," as Lex Fridman put it in the same MIT lecture mentioned above. Traditional, or "non-deep," machine learning depends more on human intervention to learn: human experts determine the hierarchy of features needed to understand the differences between data inputs, which generally requires more structured data.
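To make the role of human intervention concrete, here is a sketch of "non-deep" learning in which a person, not the algorithm, decides which features matter; the features and the tiny data set are invented purely for illustration:

# Traditional machine learning: a human expert hand-picks the features
# (text length and exclamation marks) before a classical model is trained.
import numpy as np
from sklearn.linear_model import LogisticRegression

texts = ["great product!!!", "terrible, broke in a day",
         "love it!", "waste of money"]
labels = np.array([1, 0, 1, 0])  # 1 = positive review, 0 = negative

def hand_crafted_features(text):
    # A human decided these two features capture the signal.
    return [len(text), text.count("!")]

X = np.array([hand_crafted_features(t) for t in texts])
model = LogisticRegression().fit(X, labels)
print(model.predict(np.array([hand_crafted_features("awesome!!")])))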
"Deep" machine learning can use labeled data sets, also known as supervised learning, to inform its algorithm, but does not necessarily require a labeled data set. It can ingest unstructured data in its original form (such as text or images) and can automatically determine the hierarchy of features that distinguish different categories of data. Unlike machine learning, it does not require human intervention to process data, allowing it to scale in more interesting ways.
Artificial intelligence applications
Today there are numerous practical applications of AI systems. Some of the most common examples are:
Speech recognition: also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, this capability uses natural language processing (NLP) to convert human speech into a written format. Many mobile devices incorporate speech recognition into their systems to perform voice searches (e.g., Siri) or to make texting more accessible.
Customer service: online chatbots are replacing human agents along the customer journey. They answer frequently asked questions about topics such as shipping, provide personalized advice, cross-sell products, and suggest sizes for users, changing the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites with virtual agents, messaging apps such as Slack and Facebook Messenger, and tasks usually performed by virtual assistants and voice assistants.
Computer vision: this AI technology enables computers and systems to derive meaningful information from digital images, videos, and other visual inputs, and to act on what they see. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision is applied in photo tagging on social media, radiological imaging in healthcare, and self-driving cars in the automotive industry.
Recommendation engines: using data about past consumption behavior, AI algorithms can help uncover data trends that can be used to develop more effective cross-selling strategies. Online retailers use this to make relevant add-on recommendations to customers during the checkout process; a minimal sketch of the idea follows this list.
Automated stock trading: designed to optimize stock portfolios, AI-powered high-frequency trading platforms make thousands or even millions of trades per day without human intervention.
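As promised above, here is a toy sketch of the recommendation-engine idea: items are scored by cosine similarity over a user-item purchase matrix. The matrix and item names are invented for illustration; real engines work on far larger behavioral data sets:

# Item-based recommendations: items bought by similar sets of users
# get high cosine similarity and are suggested together.
import numpy as np

# Rows = users, columns = items (1 = purchased).
purchases = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
])
items = ["laptop", "mouse", "keyboard", "monitor"]

# Cosine similarity between item columns.
norms = np.linalg.norm(purchases, axis=0)
similarity = (purchases.T @ purchases) / np.outer(norms, norms)

# Recommend items most similar to the one a user just bought.
just_bought = items.index("laptop")
ranked = np.argsort(-similarity[just_bought])
print([items[i] for i in ranked if i != just_bought])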