Artificial intelligence (AI) is the use of computers to mimic human cognitive processes for decision-making, or to perform functions that normally require human intelligence. These functions include the ability to learn, reason, analyse, make decisions, recognise speech and perceive visually, among others. In simple terms, AI is the ability of software to develop and apply intelligence as humans do.
AI is not a new phenomenon: much of its theoretical and technological underpinning was developed over the past 70 years by computer scientists such as Alan Turing, Marvin Minsky and John McCarthy, and AI already exists to some degree in many industries and governments. Now, thanks to virtually unlimited computing power and the decreasing cost of data storage, we are on the cusp of the exponential age of AI as organisations learn to unlock the value trapped in vast volumes of data.
The scope of AI is disputed: as machines become increasingly capable, tasks once considered to require “intelligence” are often removed from the definition, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from “artificial intelligence” because it has become a routine technology.
Categories of AI
AI is categorised in different ways, and it is useful to understand the various categories, their rationale and their implications.
a) Weak AI vs. Strong AI:
Weak AI describes “simulated” thinking. That is, a system which appears to behave intelligently, but doesn’t have any kind of consciousness about what it’s doing. For example, a chatbot might appear to hold a natural conversation, but it has no sense of who it is or why it’s talking to you.
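The chatbot example can be made concrete with a minimal sketch. The keyword rules and replies below are invented for illustration; the point is that the program only matches surface patterns and has no understanding of the conversation.

```python
# Minimal rule-based chatbot sketch ("weak AI"): conversation is
# simulated by keyword matching, with no understanding of meaning.
RULES = {
    "hello": "Hello! How can I help you today?",
    "price": "Our prices are listed on the website.",
    "bye": "Goodbye, and thanks for chatting!",
}

def reply(message: str) -> str:
    lowered = message.lower()
    for keyword, response in RULES.items():
        if keyword in lowered:
            return response
    # No keyword matched: fall back to a stock deflection.
    return "I'm sorry, could you rephrase that?"

print(reply("Hello there"))   # matches the "hello" rule
print(reply("What is 2+2?"))  # falls through to the stock reply
```

However fluent the canned replies seem, the system is only looking words up in a table, which is precisely why such behaviour counts as “simulated” rather than “actual” thinking.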
Strong AI describes “actual” thinking. That is, behaving intelligently and thinking as a human does, with a conscious, subjective mind. For example, when two humans converse, they most likely know exactly who they are, what they are doing, and why.
b) Narrow AI vs. General AI:
Narrow AI describes an AI that is limited to a single task or a set number of tasks.
For example, the capabilities of IBM’s Deep Blue, the chess-playing computer that beat world champion Garry Kasparov in 1997, were limited to playing chess. It wouldn’t have been able to win a game of tic-tac-toe – or even know how to play.
General AI describes an AI which can be used to complete a wide range of tasks in a wide range of environments. As such, it’s much closer to human intelligence.
The term “superintelligence” is often used to refer to general and strong AI at the point at which it surpasses human intelligence, if it ever does.
Dimensions of Artificial Intelligence
- Machine Learning, a term coined by Arthur Samuel in 1959, refers to “the ability to learn without being explicitly programmed.” Machine Learning involves the use of algorithms to parse data, learn from it, and then make a determination or prediction as a result. Instead of hand-coding software with well-defined, specific instructions for a particular task, the machine is “trained” using large amounts of data and algorithms, and in turn gains the capability to perform specific tasks.
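The contrast between hand-coded rules and training can be shown with a deliberately tiny sketch. The data and the mean-midpoint rule below are invented for illustration; real systems use far richer models, but the principle is the same: the decision boundary comes from the data, not from the programmer.

```python
# Sketch: instead of hand-coding a rule like "positive if value > 5",
# we *learn* a decision threshold from labelled examples.

def train_threshold(examples):
    """examples: list of (value, label) pairs with label 0 or 1.
    Returns the midpoint between the two classes' means --
    a crude learned decision boundary."""
    zeros = [v for v, y in examples if y == 0]
    ones = [v for v, y in examples if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def predict(threshold, value):
    """Classify a new value using the learned threshold."""
    return 1 if value > threshold else 0

# Toy training data: small values labelled 0, large values labelled 1.
data = [(1.0, 0), (1.5, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
t = train_threshold(data)          # learned from data, not hand-coded
print(predict(t, 1.2), predict(t, 9.5))
```

Changing the training data changes the learned threshold with no change to the code, which is the essence of “trained rather than explicitly programmed.”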
- Deep Learning is a technique for implementing Machine Learning. It was inspired by the structure and function of the brain, specifically the interconnection of many neurons. Artificial Neural Networks (ANNs) are algorithms based on the biological structure of the brain. In ANNs, “neurons” are arranged in discrete layers with connections to neurons in other layers, and each layer picks out a specific feature to learn. It is this layering that gives deep learning its name: depth is created by using multiple layers rather than a single one.
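The layering idea can be sketched as a forward pass through two dense layers. The weights below are fixed toy values chosen for illustration; in a real network they would be learned from data during training.

```python
# Sketch of the "layering" idea in an artificial neural network:
# each layer transforms its input and feeds the next layer.

def relu(x):
    """A common activation function: pass positives, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One dense layer: each output neuron is an activated,
    weighted sum of all the inputs."""
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [1.0, 2.0]                                        # input "features"
h = layer(x, [[0.5, -0.2], [0.3, 0.8]], [0.0, 0.1])   # hidden layer
y = layer(h, [[1.0, -1.0]], [0.0])                    # output layer
print(y)
```

Stacking more calls to `layer` adds depth, which is exactly what distinguishes a deep network from a single-layer one.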
- Robotic Process Automation: Automation is the process of making a system or process function automatically. Software robots can be programmed to perform high-volume, repeatable tasks normally performed by humans; this differs from traditional IT automation in its agility and adaptability to changing circumstances.
- Natural Language Processing (NLP) is the processing of human language, rather than computer language, by a computer program. An example is spam detection, which looks at the subject line and text of an email and decides whether it is junk.
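The spam-detection example can be sketched as a simple word-counting classifier. The word list and threshold below are invented for illustration; production filters learn such weights statistically from large corpora of labelled mail.

```python
# Toy spam-detection sketch: score an email by counting
# spam-associated words in the subject line and body.
SPAM_WORDS = {"free", "winner", "prize", "urgent", "offer"}

def is_spam(subject: str, body: str, threshold: int = 2) -> bool:
    """Flag the email as junk if it contains at least
    `threshold` words from the spam word list."""
    words = (subject + " " + body).lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in SPAM_WORDS)
    return hits >= threshold

print(is_spam("You are a WINNER", "Claim your free prize now!"))  # flagged
print(is_spam("Meeting agenda", "Notes attached for tomorrow."))  # not flagged
```

Even this crude version captures the NLP pattern described above: the program reads human-language text and makes a decision based on its content.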
- Voice/Speech Recognition is an interdisciplinary sub-field of computational linguistics that develops methodologies and technologies enabling the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech-to-text (STT). It incorporates knowledge and research from linguistics, computer science and electrical engineering.
- Pattern recognition is a branch of machine learning that focuses on identifying patterns in data.
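A classic illustration of pattern recognition is nearest-neighbour classification: a new data point is assigned the label of the most similar example already seen. The 2-D points and labels below are invented toy data.

```python
# Nearest-neighbour sketch: classify a point by the label of its
# closest training example -- a simple form of pattern recognition.

def nearest_label(training, point):
    """training: list of ((x, y), label) pairs.
    Returns the label of the training point closest to `point`."""
    def dist2(p, q):
        # Squared Euclidean distance (no need for the square root
        # when we only compare distances).
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(training, key=lambda item: dist2(item[0], point))[1]

training = [((0, 0), "A"), ((0, 1), "A"), ((5, 5), "B"), ((6, 5), "B")]
print(nearest_label(training, (1, 0)))  # near the "A" cluster
print(nearest_label(training, (5, 6)))  # near the "B" cluster
```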
- Machine vision is the science of enabling computers to see by capturing and analysing visual information using a camera, analog-to-digital conversion and digital signal processing. It is often compared to human eyesight, but machine vision is not bound by biology and can, for example, be programmed to see through walls. It is used in applications ranging from signature identification to medical image analysis.
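One of the simplest digital-signal-processing steps in machine vision is edge detection. The sketch below, on an invented 2×4 greyscale “image” represented as lists of pixel brightness values, marks where brightness jumps sharply between neighbouring pixels.

```python
# Machine-vision sketch: detect vertical edges in a tiny greyscale
# "image" (rows of pixel brightness values, 0 = black, 255 = white)
# by differencing horizontally adjacent pixels.

def vertical_edges(image, threshold=100):
    """Return a map with 1 wherever brightness jumps sharply
    from the pixel immediately to the left, else 0."""
    return [[1 if abs(row[x] - row[x - 1]) > threshold else 0
             for x in range(1, len(row))]
            for row in image]

image = [
    [0, 0, 255, 255],   # a dark-to-bright edge between columns 1 and 2
    [0, 0, 255, 255],
]
print(vertical_edges(image))  # -> [[0, 1, 0], [0, 1, 0]]
```

Real machine-vision pipelines apply far more elaborate filters, but the principle of transforming raw pixel values into meaningful features is the same.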
- Robotics is a field of engineering focused on the design and manufacture of robots. Robots are often used to perform tasks that are difficult for humans to perform, or to perform consistently.