Y Udaya Chandar
In very simple terms, artificial intelligence (AI) is defined as a branch of computer science dealing with the simulation of intelligent behaviour in computers, or the capability of a machine to imitate intelligent human behaviour. AI is sometimes lumped together with virtual reality, augmented reality or 3D printing, but these are distinct technologies.
In practice, AI is the ability of a computer-controlled robot to perform tasks commonly associated with intelligent beings, humans or animals. The term is primarily used for developing systems that are capable of performing intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalise or learn from past experience. AI-enabled computers can be programmed to carry out very complex tasks – like playing chess – with great proficiency. Another example is the highly accurate extraction of molecular and genetic information from a tumour biopsy slide in a matter of seconds. However, there are still many limitations to the application of AI. Although AI has attained proficiency in some narrow fields to the point of seemingly exceeding human capability, there are, as yet, no programs that can match human flexibility across wider domains or in tasks requiring a great deal of everyday knowledge.
Research in AI has focused chiefly on the following components of AI: learning, reasoning, problem solving, perception and language use.
There are a number of different forms of learning that can be applied to AI. The simplest is learning by trial and error, such as a simple computer program for solving mate-in-one chess problems. This simple memorisation of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. Rote learning is by now well handled in AI systems; the harder problem is generalising beyond what has been memorised.
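The idea of rote learning can be sketched in a few lines of code: the program simply memorises the answer to each problem it has solved, so the work is never repeated. The sketch below is illustrative only; the "solver" is a stand-in, not a real chess engine.

```python
# A minimal sketch of rote learning: memorise each position once solved.
memory = {}  # position -> previously found answer

def solve_with_rote_learning(position, solver):
    """Return the cached answer if the position was seen before,
    otherwise compute it once and memorise it."""
    if position not in memory:
        memory[position] = solver(position)
    return memory[position]

# Hypothetical solver that counts its calls, to show the caching at work.
calls = {"n": 0}
def toy_solver(position):
    calls["n"] += 1
    return f"best move for {position}"

solve_with_rote_learning("pos-A", toy_solver)
solve_with_rote_learning("pos-A", toy_solver)  # answered from memory
print(calls["n"])  # the underlying solver ran only once
```

The second request never reaches the solver, which is exactly what makes rote learning easy to implement and, equally, what limits it: a position it has not seen before gains nothing from the memory.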
The act of reasoning calls for drawing inferences appropriate to a situation. Inferences are classified as either deductive or inductive. The difference between these forms of reasoning is that in the deductive case the truth of the conclusion is guaranteed, whereas in the inductive case the conclusion is supported but not guaranteed. Inductive reasoning is common in science, where data are collected and tentative models are developed to predict future behaviours until the appearance of anomalous data forces a revision of the model. Deductive reasoning is common in mathematics and logic, where elaborate structures of irrefutable theorems are built from a small set of basic axioms and rules, so that assured results can be anticipated. However, true reasoning involves more than just drawing inferences; it involves drawing inferences relevant to the solution of the particular task or situation. This is one of the most challenging problems confronting AI even today.
Problem solving, particularly in AI, is a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods can be classified into special purpose and general purpose. A special-purpose method is tailored to a specific problem and often exploits very specific features of the situation in which the problem exists. In contrast, a general-purpose method is applicable to a wide variety of problems.
AI programs have already been used to solve a diverse set of problems. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs and manipulating ‘virtual objects’ in a computer-generated world.
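The idea of "systematic search through a range of possible actions" can be made concrete with a toy puzzle of my own choosing: starting from the number 1, reach a target number using only the moves "add 1" and "double". A breadth-first search tries the shortest sequences first, so the first sequence that reaches the goal is guaranteed to be a winning (shortest) one.

```python
from collections import deque

def shortest_move_sequence(target, start=1):
    """Breadth-first search over states: each state is a number,
    each action is 'add 1' or 'double'. Returns the shortest
    sequence of moves from start to target."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        value, moves = frontier.popleft()
        if value == target:
            return moves
        for name, result in (("add 1", value + 1), ("double", value * 2)):
            if result <= target and result not in seen:
                seen.add(result)
                frontier.append((result, moves + [name]))
    return None

print(shortest_move_sequence(10))  # ['add 1', 'double', 'add 1', 'double']
```

Game-playing programs work on the same principle, only the states are board positions and the branching is vastly larger, which is why practical systems add heuristics to prune the search.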
In perception, the environment is scanned by various sensory organs, real or artificial, and the scene is decomposed into a number of separate objects in various spatial relationships. Analysis is complicated by the fact that an object may appear different depending on the angle from which it is viewed, the direction and intensity of illumination of the scene and how much the object contrasts with the surrounding field.
At present, artificial perception is sufficiently advanced to enable optical sensors to identify individuals, to help autonomous vehicles to drive at moderate speeds on an open road and to send robots roaming through buildings collecting empty cans.
A language is a system of signs that have a meaning agreed by convention. In this sense, language need not be confined to the spoken word. Many day-to-day practices form mini-languages of their own, because their interpretation is a matter of convention. It is distinctive of languages that linguistic units possess meaning by convention, and linguistic meaning is very different from what is called ‘natural meaning’, exemplified in statements such as ‘Those clouds mean rain’ and ‘The fall in pressure means the valve is malfunctioning’.
It is relatively easy to write computer programs that seem able, in severely restricted contexts, to respond fluently in a human language to questions and statements.
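Just how easy this is in a severely restricted context can be shown with a toy program in the spirit of early pattern-matching chatbots: it matches the user's wording against a few hand-written rules and fills in a canned reply. The rules and replies below are invented for illustration; no understanding of language is involved.

```python
import re

# Hand-written pattern -> reply-template rules (illustrative only).
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
]

def respond(utterance):
    """Return the first matching canned reply, echoing the user's words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Tell me more."

print(respond("I am worried about exams."))
# Why do you say you are worried about exams?
```

The fluency is an illusion produced by echoing the user's own words back; step outside the patterns and the program falls back on a stock phrase.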
Symbolic vs. Connectionist Approaches
AI research follows two distinct, and to some extent competing, methods: the symbolic (or ‘top-down’) approach and the connectionist (or ‘bottom-up’) approach. The top-down approach seeks to replicate intelligence by analysing cognition independent of the biological structure of the brain, in terms of the processing of symbols. By contrast, the bottom-up approach involves creating artificial neural networks that imitate the brain’s structure, in which intelligence emerges from the connections between many simple units.
Today, both approaches are followed, and both are acknowledged as having some difficulties.
Strong AI, Applied AI and Cognitive Simulation
Employing the methods outlined above, most AI research attempts to reach one of three goals: strong AI, applied AI or cognitive simulation. Strong AI aims to build machines that think. The ambition of this area is to produce a machine whose overall intellectual ability is indistinguishable from that of a human being. To date, progress has been meagre in this, and some critics doubt whether research in the foreseeable future will even produce a system with the overall intellectual ability of an ant. Indeed, some researchers view strong AI as an area not worth pursuing.
Applied AI, also known as advanced information processing, aims to produce commercially viable ‘smart’ systems, such as ‘expert’ medical diagnosis systems and stock-trading systems. Applied AI has enjoyed considerable success.
In cognitive simulation, computers are used to test theories about how the human mind works – for example, theories about how people recognise faces or recall memories. Cognitive simulation is already a powerful tool in both neuroscience and cognitive psychology.
Applications of AI
* AI in Healthcare. The biggest bets in this sector are on improving patient outcomes and reducing costs. Today, companies are applying machine learning to make better and faster diagnoses than humans. Some systems understand natural language and are capable of responding to questions: such a system mines patient data and other available information sources to form a hypothesis, which it then presents with a confidence-scoring schema. Other AI applications include chatbots, online computer programs used to answer questions and assist customers, to help schedule follow-up appointments or to guide patients through the billing process. Virtual health assistants that provide basic medical feedback are already in use.
* AI in Business. Robotic process automation is being applied to highly repetitive tasks normally performed by humans. Machine learning algorithms are being integrated into analytics and CRM platforms to uncover information on how to serve customers better. Chatbots have been incorporated into websites to provide immediate service to customers. The automation of job positions has also become a talking point among academics and IT consultancies.
* AI in Education. AI can automate grading, giving educators more time. AI can also assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. AI could change where and how students learn, perhaps even replacing some teachers.
* AI in Finance. AI applied to personal finance applications, such as Mint or TurboTax, is upending the world’s financial institutions. Applications like these can collect personal data and provide financial advice. Other programs, including Watson, have been applied to the process of buying a home. Today, software performs much of the trading on Wall Street.
* AI in Law. The legal process of ‘discovery’, the sifting through of documents, is often overwhelming for humans. Automating this process is a better use of time and a more efficient process. Start-ups are also building question-and-answer computer assistants that can sift through a database and answer the questions posed to them by examining its associated taxonomies.
* AI in Manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. Industrial robots used to perform single tasks and were separated from human workers, but as technology has advanced, this has changed. Industrial robots are commonplace today.
* Besides the above, AI is increasingly being applied in many other fields.
We can now say that much remains to be done to draw out the full potential of AI, and AI scientists are burning the midnight oil to give all of us a world that is easier to deal with.
(The writer is a retired Colonel of the Indian Army, a passionate student of Sociology with a PhD in the subject, and the author of many books.)