Artificial Intelligence – a very short introduction

Alan Turing

Definitions of Artificial Intelligence

Below we highlight four definitions of Artificial Intelligence (AI).

  • “Artificial Intelligence is a discipline devoted to the simulation of human cognitive capabilities on the computer” (Rajaram, 1990).
  • “Artificial Intelligence is a new science of researching theories, methods and technologies in simulating or developing the thinking process of human beings” (Ling-fang, 2010).
  • “Artificial Intelligence is an attempt to understand the substance of intelligence, and produce a new intelligent machine that could make reactions similar to human intelligence” (Ning and Yan, 2010).
  • “The capability of a device to perform functions that are normally associated with human intelligence, such as reasoning and manipulating factual and heuristic knowledge” (Hosea, Harikrishnan and Rajkumar, 2011).

The field of Artificial Intelligence (AI) connects with other scientific fields such as information theory, cybernetics, automation, bionics, biology, psychology, mathematical logic, linguistics, medicine and philosophy (Ning and Yan, 2010).

Hosea, Harikrishnan and Rajkumar (2011) argue that a machine is truly AI if it solves certain classes of problems requiring intelligence in humans, or survives in an intellectually demanding environment. Following this, one can divide the definition into two parts: the epistemological part, that is, the real-world representation of facts, and the heuristic part, where rules applied to the facts help solve the problem. The authors identify four requirements a device must meet to be said to have artificial intelligence, and highlight the advantages and disadvantages of Artificial Intelligence.

  • Requirements: Human Emotion; Create data associations to make decisions; Self-consciousness; and Creativity and Imagination.
  • Advantages of AI: No need for pauses or sleep; Rational or pre-programmed emotions could make for better decision-making; Easy to make multiple copies.
  • Disadvantages of AI: Limited sensory input compared to humans; humans can deteriorate but still function, whereas devices and applications quickly grind to a halt when minor faults set in.

AI is generally seen as an intelligent aid. Humans regard themselves as always making rational, optimal choices; in that light, an intelligent computer will always try to find the correct medical diagnosis or try to win at a game. However, reality is more blurred. Humans can have hidden motives for losing a game, perhaps to let a child build confidence, or may prescribe different medicine based on the patient’s attitude (Waltz, 2006).


Photo: Arif Wahid

Marvin Minsky

AI revolves more around engineering and has no fixed theories or paradigms. That said, the two main paradigms to receive traction are B. J. Baars’ Global Workspace Theory from his 1988 book “A Cognitive Theory of Consciousness” (Baars, 2005), and the agent-based model independently invented and championed by R. A. Brooks (Brooks, 1990) and by Marvin Minsky in his 1986 book “The Society of Mind” (Brunette, Flemmer and Flemmer, 2009).


Baars: Global Workspace Theory uses a theatre metaphor of a spotlight shining on one area of the stage, while a lot is going on behind the scenes. Humans can focus on and complete a task while many other things go on at the same time.

Minsky: Believes that consciousness is made up of many smaller parts or agents, which collectively work together to produce intelligence.

Brooks: Builds cognition using a layered approach, where each layer can act upon or suppress input from layers below it.


Photo: Tim de Groot

Claude Shannon

The year 1956 and Dartmouth College are regarded as the birthdate and birthplace of AI, since this was the first time the phrase “Artificial Intelligence” was used. Many of the attendees (John McCarthy, Marvin Minsky, Claude Shannon, Nathaniel Rochester, Arthur Samuel, Allen Newell, and Herbert Simon) became leaders within the field of AI and went on to open departments at MIT, Stanford, Edinburgh, and Carnegie Mellon University (Brunette, Flemmer and Flemmer, 2009).

However, Alan Turing’s Turing Test from 1950 captures the idea of programming a digital computer to behave intelligently, so that its behaviour is indistinguishable from natural interactions with humans. Strachey’s slightly cynical Love Letters program from 1952 is another example of an intelligent computer (Hosea, Harikrishnan and Rajkumar, 2011), as are Vannevar Bush’s Memex concept from 1945 and “The Turk” from the eighteenth century (Buchanan, 2005).


Photo: h heyerlein

John McCarthy

1950 – 1969: The 1950s and 1960s saw a rise in methodologies and applications for problem-solving, pattern recognition and natural language processing. The programming language LISP was invented in 1960 by John McCarthy (Brunette, Flemmer and Flemmer, 2009). However, these applications had trouble scaling to take on larger problems (Singh and Gupta, 2009). In 1969 the International Joint Conferences on Artificial Intelligence (IJCAI) was formed.

1970 – 1989: The 1970s and early 1980s saw the rise of expert systems such as MYCIN, but also a dawning awareness of the complexity of AI and the understanding that it was far more complicated than first thought. The programming language PROLOG was added to the AI stack, making it possible to use logic to reason about a knowledge base. The late 1980s saw the introduction of intelligent agents that react to their environment (Brunette, Flemmer and Flemmer, 2009).

Robotic hand holding a lightbulb.
Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing.

1990 – 1999: In the 1990s, intelligent agents, robotics and embodied intelligence found their way into R&D projects, aided by improvements in computing power, sensors and the underlying theory. Applications began to focus on helping businesses and organisations. The late 1990s saw intelligent agents being connected, leading to the idea of Distributed Artificial Intelligence via the web.


2000 – present: A main focus is adding consciousness, human-like behaviour and emotions to machines (Brunette, Flemmer and Flemmer, 2009). Another area of focus is machine learning, data mining, algorithms and collective intelligence, driven by the amount of unstructured data available on the web (and in databases) and the need to make sense of it (Singh and Gupta, 2009). AI also plays a major role in the social sciences and Social Network Analysis (Ling-fang, 2010).

The future of Artificial Intelligence: Waltz (2006) predicts that the future of AI, over the next 20 years, will be determined by the interaction of three factors: financial factors (funding), technical factors (useful applications) and scientific factors (intelligent progress), with a main focus on “cognitive prosthesis” and semantic applications, i.e. converging towards a more industrial outlook of helping humans complete tasks they dislike or do poorly. Research into the underlying theory will diminish. Funding will come from private companies like Google, Yahoo and Microsoft in collaboration with academia; NASA, the National Science Foundation (NSF) and other government bodies will not be willing to continue funding AI research. Waltz identifies three areas that will thrive, namely:

  • Expert IA (machine learning and data mining)
  • Autonomous Robots (Reconnaissance, care taking, space exploration)
  • Cognitive Prosthesis (Semantic web applications)

and two other fields: AI theory and algorithms, and Turing Test AI, which Waltz regards as wildcard areas, since they cannot realistically be expected to produce practical results.

Concepts in Artificial Intelligence

Alan Turing
Mosaic portrait of Alan Turing made from the mathematical analysis used to decode the Enigma machines during World War II.

Expert Systems (Expert AI): Expert systems rely on an inference engine and a knowledge base. The engine is often rule-based (Rajaram, 1990). Expert systems are used to assist in decision-making. Usage examples: blood infection diagnostics and credit authorisation (Ling-fang, 2010).
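The rule-based structure described above can be sketched in a few lines of Python. This is a minimal illustration of forward chaining over a knowledge base; the facts and rules below are invented for the example and do not come from a real diagnostic system:

```python
# Toy knowledge base: a set of known facts plus if-then rules.
# Each rule is (conditions, conclusion): if all conditions hold, conclude.
facts = {"fever", "low_blood_pressure"}
rules = [
    ({"fever", "low_blood_pressure"}, "possible_sepsis"),
    ({"possible_sepsis"}, "recommend_blood_culture"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are all satisfied,
    adding their conclusions, until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['fever', 'low_blood_pressure', 'possible_sepsis', 'recommend_blood_culture']
```

Note that the second rule only becomes applicable after the first one fires; this chaining of rules is what lets the inference engine reach conclusions that no single rule states directly.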

Symbolic Mathematical Systems: Computer programs problem-solve using symbols instead of numbers (Rajaram, 1990).

Intelligent Communication Systems: Allows for communication between humans and machines (Rajaram, 1990).

Signal Based Systems: Signal based communication refers to input (vision and speech recognition) and output (visualisation and speech generation) (Rajaram, 1990).

Symbol Based Systems and Natural Language Processing: Symbol based communication refers to understanding natural language, i.e. semantics or reasoning about what is meant in a sentence (Rajaram, 1990). Currently this area gets a lot of attention, due to the amount of data available on social media and the web (Ling-fang, 2010).


Machine Learning: Machine-learning reasons about data by studying examples and using problem-solving and decision-making skills, rather than following a set of rules (Rajaram, 1990).
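As a contrast to the rule-based approach, learning from examples can be illustrated with one of the simplest possible learners, a one-nearest-neighbour classifier: it follows no explicit rules, only the labelled examples it has seen. The data points here are invented for illustration:

```python
def nearest_neighbour(examples, query):
    """Classify `query` with the label of the closest training example.
    `examples` is a list of ((x, y), label) pairs."""
    def dist2(a, b):
        # Squared Euclidean distance (no sqrt needed for comparison).
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    _, label = min(examples, key=lambda e: dist2(e[0], query))
    return label

# Two labelled clusters of example points.
examples = [((0, 0), "A"), ((0, 1), "A"), ((5, 5), "B"), ((6, 5), "B")]

print(nearest_neighbour(examples, (1, 1)))  # closest to the "A" cluster
```

No rule ever states what makes a point an "A"; the decision boundary emerges entirely from the stored examples, which is the essential difference from the expert-system approach above.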

Logic-Based Learning Systems: Here the computer uses logic to reason about the input, i.e. if this and this and this is true, then that is true also (Rajaram, 1990).

Biological Analog Learning Systems: Computers built to resemble the biological system of the human body and brain (Rajaram, 1990).

Robotics: The goal is to create machines that can perform tasks for humans, not only through continuous automation in the industrial-age fashion, but by intelligently analysing each step and taking action depending on the task at hand (Ling-fang, 2010).

The Asimo Robot
A robot is a mechanical or virtual artificial agent, usually an electro-mechanical machine that is guided by a computer program or electronic circuitry. Robots can be autonomous, semi-autonomous or remotely controlled and range from humanoids such as ASIMO and TOPIO to nano robots, ‘swarm’ robots, and industrial robots. By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.


  • Baars, B. J. (2005) “Global workspace theory of consciousness: toward a cognitive neuroscience of human experience?”, Progress in Brain Research, Vol. 150, pp. 45 – 52.
  • Brooks, R. A. (1990) “Elephants Don’t Play Chess”, Robotics and Autonomous Systems, Vol. 6, pp. 3 – 15.
  • Brunette, E. S., Flemmer, R. C. and Flemmer, C. L. (2009) “A Review of Artificial Intelligence”, Proceedings of the 4th International Conference on Autonomous Robots and Agents, Wellington, New Zealand, pp. 385 – 392.
  • Buchanan, B. G. (2005) “A (Very) Brief History of Artificial Intelligence”, American Association for Artificial Intelligence – 25th anniversary issue, pp. 53 – 60.
  • Hosea, S., Harikrishnan, V. H. and Rajkumar, K. (2011) “Artificial Intelligence”, 3rd International Conference on Electronics Computer Technology, Vol. 1, pp. 124 – 129.
  • Ling-fang, H. (2010) “Artificial Intelligence”, 2nd International Conference on Computer and Automation Engineering (ICCAE), Vol. 4, pp. 575 – 578.
  • Ning, S. and Yan, M. (2010) “Discussion on Research and Development of Artificial Intelligence”, IEEE International Conference on Advanced Management Science (ICAMS 2010), Vol. 1, pp. 110 – 112.
  • Rajaram, N. S. (1990) “Artificial Intelligence: A Technological Review”, ISA Transactions, Vol. 29 (1), pp. 1 – 3.
  • Singh, V. K. and Gupta, A. K. (2009) “From Artificial to Collective Intelligence: Perspectives and Implications”, 5th International Symposium on Applied Computational Intelligence and Informatics, Timisoara, Romania, pp. 545 – 549.
  • Waltz, D. A. (2006) “Evolution, Sociobiology, and the Future of Artificial Intelligence”, IEEE Intelligent Systems, pp. 66 – 69.