By Dr. Hafssa

A Complete Guide to Artificial Intelligence (AI)


Intelligence is the ability to learn and solve problems. Artificial intelligence is the intelligence exhibited by machines or software.


Artificial Intelligence is a method of making a computer, a computer-controlled robot, or software think intelligently like the human mind.


AI is developed by studying the patterns of the human brain and analyzing the cognitive process. The outcomes of these studies inform the design of intelligent software and systems.


AI is the simulation of human intelligence processes by computers.

Artificial intelligence allows machines to model, and even improve upon, the capabilities of the human mind. From the development of self-driving cars to the proliferation of smart assistants like Siri and Alexa, AI is a growing part of everyday life.


What is artificial intelligence?


Artificial intelligence is a field of science concerned with building computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyze.


AI is a broad field that encompasses many different disciplines, including computer science, data analytics and statistics, hardware and software engineering, linguistics, neuroscience, biology, and even philosophy and psychology.


Types of artificial intelligence


There are 3 types of Artificial Intelligence based on capabilities:

  1. Narrow AI

  2. General AI

  3. Super AI


Under functionalities, we have 4 types of Artificial Intelligence:

  1. Purely reactive

  2. Limited Memory

  3. Theory of Mind

  4. Self-aware


3 Types of AI based on capabilities



1. Narrow AI

Narrow AI, also called Weak AI, focuses on one narrow task and cannot perform beyond its limitations. It targets a single subset of cognitive abilities and advances in that spectrum. For example, we find:

  • Apple Siri is an example of a Narrow AI that operates with a limited pre-defined range of functions.

  • The IBM Watson supercomputer is another example of Narrow AI. It applies cognitive computing, machine learning, and natural language processing to process information and answer queries.

  • Other examples of Narrow AI include Google Translate, image recognition software, recommendation systems, spam filtering, and Google’s page-ranking algorithm.

2. General AI

General AI, also known as strong AI, can understand and learn any intellectual task that a human being can. It allows a machine to apply knowledge and skills in different contexts.


AI researchers have not yet achieved strong AI. To do so, they would need to find a way to make machines conscious and to program a full set of cognitive abilities.

  • Fujitsu’s K computer, one of the world’s fastest supercomputers, represents one of the most significant attempts at achieving strong AI. Even so, it took nearly 40 minutes to simulate a single second of neural activity, so it is difficult to say whether strong AI will be achieved anytime soon.

  • Tianhe-2, a supercomputer developed by China's National University of Defense Technology, once held the record for calculations per second at 33.86 petaflops (quadrillions of calculations per second). Although that sounds impressive, the human brain is estimated to be capable of roughly one exaflop, i.e., a quintillion calculations per second.

3. Super AI

Super AI surpasses human intelligence and can perform any task better than humans.


The concept of artificial superintelligence envisions AI that has evolved to be so akin to human feelings and experiences that it doesn't merely understand them; it also develops emotions, needs, beliefs, and desires of its own. Its existence is still hypothetical.


4 types of AI based on functionalities



1. Purely Reactive


These machines do not have any memory or data to work with, specializing in just one field of work. For example, in a chess game, the machine observes the moves and makes the best possible decision to win.

  • IBM’s Deep Blue, which defeated chess grandmaster Garry Kasparov, is a reactive machine: it sees the pieces on the chessboard and reacts to them. Deep Blue cannot refer to any prior experience or improve with practice; it can only identify the pieces and know how each moves.

  • Deep Blue can make predictions about what moves might be next for it and its opponent.
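The defining trait described above — a decision that depends only on the current observation, with nothing remembered between calls — can be sketched as a stateless function. The observation and action names below are invented for illustration:

```python
def reactive_agent(observation):
    # The decision is a pure function of the current observation;
    # no state is stored between calls, so the agent cannot learn.
    if observation == "threat":
        return "defend"
    return "advance"
```

Calling the function twice with the same observation always yields the same action, which is exactly why a purely reactive machine cannot improve with practice.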


2. Limited Memory


These machines collect previous data and continue adding it to their memory. They have enough memory or experience to make proper decisions, but that memory is minimal. For example, such a machine can suggest a restaurant based on the location data it has gathered.

  • Self-driving cars are a good example: their Limited Memory AI observes how other vehicles are moving around them, both at present and over time.

  • This ongoing, collected data gets added to the AI machine's static data, such as lane markers and traffic lights.

  • These observations are factored in when the vehicle decides when to change lanes, so that it avoids cutting off another driver or hitting a nearby vehicle.
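The "minimal memory" idea can be sketched with a fixed-size buffer that retains only the most recent observations; the observation strings below are invented for the example:

```python
from collections import deque

# A rolling log: only the 3 most recent observations are retained,
# and older ones are discarded automatically as new ones arrive.
memory = deque(maxlen=3)
for obs in ["car_left", "car_right", "lane_clear", "car_ahead"]:
    memory.append(obs)

print(list(memory))  # the oldest observation has been dropped
```

The buffer gives the system recent context to decide with, but — unlike a full learning system — everything older than the window is gone.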


3. Theory of Mind

This kind of AI can understand thoughts and emotions, as well as interact socially. However, a machine based on this type is yet to be built.

  • One real-world example of theory-of-mind AI is Kismet, a robot head built in the late 1990s by a researcher at the Massachusetts Institute of Technology. Kismet can recognize and mimic human emotions, but it cannot follow gazes or convey attention to humans.

  • Sophia from Hanson Robotics is another example of theory-of-mind AI in practice. Cameras in Sophia's eyes, combined with computer algorithms, allow her to see: she can sustain eye contact, recognize individuals, and follow faces.


4. Self-Aware


Self-aware machines are the future generation of these new technologies. They will be intelligent, sentient, and conscious. This type of AI will not only be able to understand and evoke emotions in those it interacts with, but also have emotions, needs, and beliefs of its own.


How Does Artificial Intelligence Work?


Put simply, AI systems work by merging large data sets with intelligent, iterative processing algorithms. This combination allows AI to learn from patterns and features in the data it analyzes.


Each time an Artificial Intelligence system performs a round of data processing, it tests and measures its performance and uses the results to develop additional expertise.
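That test-and-improve loop can be illustrated with a minimal gradient-descent sketch: each round measures how wrong the current guess is and uses that result to make the next guess better. The function being minimized, (x − 3)², is an arbitrary toy example:

```python
def gradient_descent(start=10.0, lr=0.1, steps=100):
    # Repeatedly measure the slope of f(x) = (x - 3)**2 at the current
    # guess and step downhill; each round builds on the last one's result.
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of (x - 3)**2
        x -= lr * grad
    return x

x_min = gradient_descent()   # converges toward the minimum at x = 3
```

After 100 rounds the guess sits essentially at the true minimum — the "additional expertise" here is simply a better estimate each iteration.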


The 3 Ways of Implementing AI



1. Machine Learning


Machine learning gives AI the ability to learn. This is done by using algorithms to discover patterns and generate insights from the data they are exposed to. It involves the use of data and training to allow a machine to recognize patterns, make decisions, and perform tasks without being explicitly programmed to do so.
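As a toy illustration of discovering a pattern from data rather than hard-coding it, the sketch below fits a straight line to made-up sample points by ordinary least squares; the data and function name are invented for the example:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b: the slope and intercept
    # are computed from the data, not programmed in advance.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # these points happen to lie on y = 2x + 1
a, b = fit_line(xs, ys)    # the "learned" pattern: slope 2, intercept 1
```

Nothing in `fit_line` knows the answer in advance; the relationship emerges entirely from the examples it is exposed to, which is the essence of learning from data.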


2. Deep Learning


Deep learning, which is a subcategory of machine learning, provides AI with the ability to mimic a human brain’s neural network. It can make sense of patterns, noise, and sources of confusion in the data.


Artificial neural networks are inspired by the structure and function of the human brain, and are made up of interconnected "neurons" that process and transmit information.


3. Artificial neural networks


Artificial neural networks are software algorithms created to mimic the billions of neurons that comprise the human brain (ANNs have unlocked much of AI's growth in healthcare). They consist of vast networks of interconnected, software-defined nodes.


Much larger networks with many layers, known as deep neural networks, provide even more powerful capabilities.
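A single artificial "neuron" of the kind these networks stack can be sketched in a few lines: it takes a weighted sum of its inputs plus a bias, then squashes the result with an activation function (a sigmoid here; the weights and inputs are made up):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through a sigmoid
    # activation that squashes the output into the range (0, 1).
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([0.5, 0.8], [0.4, -0.6], 0.1)
```

A network wires thousands of such nodes together, feeding each layer's outputs into the next; "learning" then means adjusting the weights and biases.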


The 4 Artificial intelligence training models


When businesses talk about AI, they often talk about “training data.” But what does that mean? Machine learning is a subset of artificial intelligence that uses algorithms trained on data to obtain results.

There are 4 different types of machine learning:

1. Supervised learning


Supervised learning is a machine learning model that maps a specific input to an output using labeled training data. In simple terms, to train the algorithm to recognize pictures of cats, feed it pictures labeled as cats.
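A minimal sketch of that idea with made-up labeled data: a 1-nearest-neighbour classifier that gives a new example the label of the closest training example (the coordinate pairs stand in for image features):

```python
def predict(example, training_data):
    # Supervised learning in miniature: the labeled examples define the
    # mapping, and a new input copies the label of its nearest neighbour.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    point, label = min(training_data, key=lambda item: dist(item[0], example))
    return label

# Labeled training data: each input is paired with its known output.
train = [((0, 0), "cat"), ((0, 1), "cat"), ((5, 5), "dog"), ((6, 5), "dog")]
```

A point near the "cat" cluster is classified as a cat, one near the "dog" cluster as a dog — the labels supplied with the training data are what supervise the mapping.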

2. Unsupervised learning


Unsupervised learning is a machine learning model that learns patterns based on unlabeled data. Unlike supervised learning, the result is not known ahead of time. Rather, the algorithm learns from the data, categorizing it into groups based on attributes.


For instance, unsupervised learning is good at pattern matching and descriptive modeling.
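A classic example of grouping unlabeled data by attributes is k-means clustering. The sketch below is a bare-bones pure-Python version run on made-up 2-D points; no labels are supplied, yet the two blobs are separated:

```python
import random

def kmeans(points, k, iters=10, seed=0):
    # Classic two-step loop: assign every point to its nearest centre,
    # then move each centre to the mean of the points assigned to it.
    random.seed(seed)
    centers = random.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(coord) / len(cl) for coord in zip(*cl))
    return clusters

# Two made-up blobs of 2-D points; no labels are provided.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
clusters = kmeans(points, 2)
```

The algorithm was never told which points belong together; the grouping emerges from the points' attributes alone, which is what "the result is not known ahead of time" means in practice.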

3. Semi-supervised learning


In addition to supervised and unsupervised learning, a mixed approach called semi-supervised learning is often employed, where only some data is labeled.


In semi-supervised learning, a result is known, but the algorithm must figure out how to organize and structure the data to achieve the desired results.
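One simple semi-supervised recipe is self-training (pseudo-labelling): each unlabeled point receives the label of its nearest already-labeled neighbour and then joins the labeled pool. A toy sketch with invented points:

```python
def self_train(labeled, unlabeled):
    # Pseudo-labelling: an unlabeled point gets the label of its nearest
    # already-labeled neighbour, then joins the labeled pool itself, so
    # later points can also be labeled by points labeled earlier.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    pool = list(labeled)
    for p in unlabeled:
        _, label = min(pool, key=lambda item: dist(item[0], p))
        pool.append((p, label))
    return pool

labeled = [((0, 0), "cat"), ((10, 10), "dog")]    # the few labeled examples
unlabeled = [(1, 1), (9, 9)]                      # the bulk, unlabeled
pool = self_train(labeled, unlabeled)
```

This mirrors the practical setting: a handful of expensive labels structure a much larger unlabeled collection.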

4. Reinforcement learning


Reinforcement learning is a machine learning model that can be broadly described as “learning by doing.” An “agent” learns to perform a defined task by trial and error until its performance is within a desirable range.


The agent receives positive reinforcement when it performs the task well and negative reinforcement when it performs poorly.


An example of reinforcement learning would be teaching a robotic hand to pick up a ball.
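A toy version of that trial-and-error loop is tabular Q-learning on a tiny corridor environment invented for the example: the agent starts at the left end, receives a reward only on reaching the right end, and gradually learns that stepping right is the better action in every cell:

```python
import random

def q_learn(n=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    # Corridor of n cells; actions: 0 = step left, 1 = step right.
    # Reaching the rightmost cell gives reward 1 and ends the episode.
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(n)]    # Q[state][action] value estimates
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            if random.random() < eps:              # occasionally explore
                a = random.randrange(2)
            else:                                  # otherwise exploit
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n - 1 else 0.0        # positive reinforcement
            # Temporal-difference update: nudge the estimate toward
            # the reward plus the discounted value of the next state.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learn()
```

After training, the "right" action has the higher value in every non-goal cell — the agent has learned the task purely from trial, error, and reward, with no labeled examples at all.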


Applications and use cases for artificial intelligence


  1. Speech recognition: Automatically convert spoken language into written text.

  2. Image recognition: Identify and categorize various aspects of an image.

  3. Translation: Translate written or spoken words from one language into another.

  4. Predictive modeling: Mine data to forecast specific outcomes with high degrees of granularity.

  5. Data analytics: Find patterns and relationships in data for business intelligence.

  6. Cybersecurity: Autonomously scan networks for cyber-attacks and threats.


Artificial intelligence plays a significant role in surgical innovation. It is already a primary driver of emerging technologies such as big data analytics and surgical robots, and it will continue to be a technological pioneer for the foreseeable future.

