Monday, November 21, 2022

 Artificial Intelligence (AI)

Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.

What is artificial intelligence?

Although a number of definitions of artificial intelligence (AI) have emerged over the past few decades, John McCarthy offers the following definition in a 2004 paper: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI need not be limited to methods that are biologically observable."

However, decades before this definition, the conversation around artificial intelligence began with Alan Turing's landmark paper, "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link external to IBM), published in 1950. In this paper, Turing, often referred to as the "father of computer science," asks the question, "Can machines think?" From there, he offers a test, now known as the "Turing test," in which a human interrogator tries to distinguish between a computer's text responses and those of a human. While this test has come under much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, since it draws on ideas from linguistics.

In its simplest form, artificial intelligence is a field that combines computer science and robust data sets to enable problem solving. It also encompasses the subfields of machine learning and deep learning, which are often mentioned in conjunction with artificial intelligence. These disciplines are composed of AI algorithms that seek to create expert systems that make predictions or classifications based on input data.
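As a small illustration of that last point, here is a minimal sketch, assuming Python with scikit-learn installed, of an algorithm that learns to make classifications from input data. The tiny data set and the pass/fail task are invented purely for illustration.

```python
# A minimal sketch of "predictions or classifications based on input data",
# using scikit-learn. The toy data set here is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row is an input: [hours_studied, hours_slept]; labels: 1 = pass, 0 = fail.
X = [[8, 7], [1, 4], [6, 8], [2, 5], [9, 6], [0, 3]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                 # learn a mapping from inputs to labels

print(model.predict([[7, 6]]))  # predicted class for a new, unseen input (e.g. [1])
```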

Today, there is still a lot of hype surrounding the development of AI, which is to be expected of any emerging technology in the market. As noted in Gartner's Hype Cycle (link external to IBM), product innovations such as autonomous vehicles and personal assistants follow "a typical progression of innovation, from over-enthusiasm, to a period of disillusionment, to an eventual realization of the innovation's relevance and role in a market or domain." As Lex Fridman points out in his 2019 MIT lecture (link external to IBM), we are at the peak of inflated expectations, approaching the trough of disillusionment.

Deep learning vs. machine learning

Since the terms deep learning and machine learning tend to be used interchangeably, it is worth noting their differences. As mentioned above, both are subfields of artificial intelligence, and deep learning is in fact a subfield of machine learning.


Deep learning is built from neural networks. "Deep" refers to a neural network with more than three layers, counting the input and output layers; a network of that depth can be considered a deep learning algorithm.
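To make "more than three layers" concrete, here is a minimal sketch of a forward pass through such a network in plain NumPy. The layer sizes are arbitrary assumptions, and the weights are random rather than trained.

```python
import numpy as np

# A minimal forward pass through a small "deep" network:
# input layer -> two hidden layers -> output layer (more than three layers in total).
rng = np.random.default_rng(0)

layer_sizes = [4, 8, 8, 3]  # input, hidden, hidden, output (sizes chosen arbitrarily)
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate an input vector through every layer of the network."""
    a = x
    for W, b in zip(weights, biases):
        a = np.maximum(0, a @ W + b)  # ReLU activation at each layer
    return a

print(forward(rng.normal(size=4)))    # output of the final layer for one random input
```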


The difference between deep learning and machine learning lies in how each algorithm learns. Deep learning automates much of the feature extraction phase of the process, which eliminates some of the manual human intervention required and allows the use of larger data sets. Deep learning can be thought of as "scalable machine learning," as Lex Fridman put it in the same MIT lecture mentioned above. Traditional, or "non-deep," machine learning relies more on human intervention to learn: human experts determine the hierarchy of features in order to understand the differences between data inputs, which generally requires more structured data to learn from.

"Deep" machine learning can use labeled data sets, also known as supervised learning, to inform its algorithm, but does not necessarily require a labeled data set. It can ingest unstructured data in its original form (such as text or images) and can automatically determine the hierarchy of features that distinguish different categories of data. Unlike machine learning, it does not require human intervention to process data, allowing it to scale in more interesting ways.

Artificial intelligence applications

Today there are numerous practical applications of AI systems. Some of the most common examples are:

Speech recognition: also called automatic speech recognition (ASR), computer speech recognition, or speech-to-text, this is a capability that uses natural language processing (NLP) to convert human speech into written form. Many mobile devices incorporate speech recognition into their systems to perform voice searches (e.g., Siri) or to make text messaging more accessible.

Customer service: online chatbots are replacing human agents along the customer journey. They answer frequently asked questions on topics such as shipping, provide personalized advice, cross-sell products, and suggest sizes for users, changing the way companies interact with customers on websites and social media platforms. Examples include messaging bots on e-commerce sites with virtual agents, messaging apps such as Slack and Facebook Messenger, and tasks usually performed by virtual assistants and voice assistants.


Computer vision: this AI technology enables computers and systems to derive meaningful information from digital images, videos, and other visual inputs, and to act on it. This ability to provide recommendations distinguishes it from image recognition tasks. Driven by convolutional neural networks, computer vision is applied to photo tagging on social networks, radiological imaging in healthcare, and autonomous vehicles in the automotive industry (a minimal convolutional-network sketch follows this list).

Recommendation engines: using data on past consumer behavior, AI algorithms can help uncover trends that feed more effective cross-selling strategies. Online retailers use this to make additional relevant recommendations to customers during the buying process (a toy similarity-based sketch also follows this list).

Automated stock trading: designed to optimize stock portfolios, AI-powered high-frequency trading platforms make thousands or even millions of trades per day without human intervention.
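The computer vision entry above mentions convolutional neural networks. The sketch below, assuming Python with TensorFlow/Keras installed, defines a small convolutional classifier; the layer sizes, the 28x28 grayscale input shape, and the ten output categories are assumptions for illustration, not a reference architecture.

```python
# A minimal convolutional network of the kind behind the computer vision examples above.
# Shapes and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),           # assumed 28x28 grayscale images
    layers.Conv2D(16, 3, activation="relu"),     # learn local image filters
    layers.MaxPooling2D(),                       # downsample the feature maps
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),      # e.g. 10 image categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()  # training would call model.fit(images, labels) on a labeled image data set
```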
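For the recommendation engines entry, here is a toy sketch in NumPy of the underlying idea: find the most similar other user by purchase history and suggest what they bought. The 0/1 purchase matrix and the cosine similarity measure are illustrative assumptions; production systems use far richer models.

```python
# A toy version of the recommendation-engine idea: suggest items that a similar user bought.
import numpy as np

# Rows = users, columns = items; 1 means the user bought the item (invented data).
purchases = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
])

def recommend(user, k=2):
    """Return indices of items the most similar other user bought but `user` has not."""
    norms = np.linalg.norm(purchases, axis=1)
    sims = purchases @ purchases[user] / (norms * norms[user])  # cosine similarity to each user
    sims[user] = -1                                             # ignore the user themselves
    neighbor = int(np.argmax(sims))
    candidates = (purchases[neighbor] == 1) & (purchases[user] == 0)
    return np.flatnonzero(candidates)[:k]

print(recommend(0))  # item indices user 0 might be offered next, e.g. [3]
```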


History of artificial intelligence: key dates and names
The idea of "a machine that thinks" dates back to ancient Greece. But, since the advent of electronic computing (and in relation to some of the topics discussed in this article), there have been important events and milestones in the evolution of artificial intelligence:

1950: Alan Turing publishes Computing Machinery and Intelligence. In this article, Turing, famous for cracking the Nazis' ENIGMA code during World War II, proposes to answer the question "can machines think?" and introduces the Turing test to determine whether a computer can demonstrate the same intelligence (or the results of the same intelligence) as a person. The value of the Turing test has been debated ever since.

1956: John McCarthy coins the term "artificial intelligence" at the first AI conference at Dartmouth College (McCarthy would go on to invent the Lisp language). Later that year, Allen Newell, J.C. Shaw and Herbert Simon create Logic Theorist, the first AI software program.

1958: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network, which "learned" by trial and error. In 1969, Marvin Minsky and Seymour Papert publish a book entitled Perceptrons, which becomes both a landmark work on neural networks and, at least for a time, an argument against future neural network research projects.

1980s: Neural networks that use a backpropagation algorithm to train themselves become widely used in AI applications.

1997: IBM's Deep Blue system defeats world chess champion Garry Kasparov in a chess match (and rematch).

2011: IBM Watson beats champions Ken Jennings and Brad Rutter in Jeopardy!

2015: Baidu's Minwa supercomputer uses a special type of deep neural network, called a convolutional neural network, to identify and categorize images with higher accuracy than the average human.

2016: DeepMind's AlphaGo program, powered by a deep neural network, defeats Lee Sedol, the world Go champion, in a five-game match. The victory is significant given the enormous number of possible moves as the game progresses (over 14.5 billion after only four moves!). Google had acquired DeepMind in 2014 for a reported $400 million.


