Artificial Intelligence

Artificial intelligence is a field that attempts to provide machines with human-like thinking.

History
Despite some significant results, the grand promises failed to materialise and the public started to see AI as failing to live up to its potential. This culminated in the "AI winter" of the 1990s, when the term AI itself fell out of favour, funding decreased and interest in the field temporarily dropped. Researchers concentrated on more focused goals, such as machine learning, robotics, and computer vision, though research in pure AI continued at reduced levels.

However, computer power has increased exponentially since the 1960s, and with every increase in power AI programs have been able to tackle new problems using old methods, often with great success. AI has contributed to the state of the art in many areas, for example speech recognition, machine translation and robotics.

Approaches
Historically there were two main approaches to AI:
 * the classical approach (designing the AI), based on symbolic reasoning - a mathematical approach in which ideas and concepts are represented by symbols such as words, phrases or sentences, which are then processed according to the rules of logic.
 * the connectionist approach (letting the AI develop), based on artificial neural networks, which imitate the way neurons work, and on genetic algorithms, which imitate inheritance and fitness in order to evolve better solutions to a problem with every generation.
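As a minimal sketch of the evolutionary half of the connectionist approach, the following genetic algorithm evolves bit strings toward a fitness function. All names, parameters and rates here are invented for illustration, not taken from any particular system:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=60):
    """Evolve bit-string genomes with elitist selection,
    one-point crossover, and occasional point mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # rank by fitness
        survivors = pop[:pop_size // 2]       # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:               # occasional mutation
                i = random.randrange(genome_len)
                child[i] ^= 1                       # flip one bit
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# "OneMax" toy problem: fitness is simply the number of 1-bits,
# so evolution should converge on the all-ones genome.
best = evolve(fitness=sum)
print(sum(best), best)
```

Because the best individuals survive unchanged each generation, the population's top fitness never decreases; crossover and mutation supply the variation that lets it climb.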

Symbolic reasoning has been successfully used in expert systems and other fields. Neural nets are used in many areas, from computer games to DNA sequencing. But both approaches have severe limitations. A human brain is neither a large inference system nor a huge homogeneous neural net, but rather a collection of specialised modules. The best way to mimic the way humans think currently appears to be to program a computer to perform individual functions (speech recognition, reconstruction of 3D environments, many domain-specific functions) and then to combine them.

Additional approaches:
 * genetics, evolution
 * Bayesian probabilistic inference
 * combinations, e.g. "evolved (genetic) neural networks that influence the probability distributions of formal expert systems"
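The Bayesian item above can be made concrete with a one-function application of Bayes' rule; the numbers are invented for the example:

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """P(H | E) by Bayes' rule:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    numerator = likelihood * prior
    evidence = numerator + false_positive_rate * (1 - prior)
    return numerator / evidence

# A detector that is 99% sensitive with a 5% false-positive rate,
# applied to a hypothesis with a 1% prior probability:
p = bayes_posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(round(p, 3))  # 0.167
```

The point of the example is that even a strong likelihood cannot overcome a very low prior: the posterior is only about 17%, which is why probabilistic AI systems weight evidence against background knowledge rather than trusting any single observation.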

Current state
By breaking up AI research into more specific problems, such as computer vision, speech recognition and automatic planning, which had more clearly definable goals, scientists managed to create a critical mass of work aimed at solving these individual problems.

Some of the fields where the technology has matured and enabled practical applications, with examples of real-world systems based on artificial intelligence, are:
 * Speech recognition.
 * Computer vision.
 * Text analysis.
 * Robot control.
 * Planning. Humans excel at creating real-world plans on a daily basis, seemingly effortlessly. A closer look at the problem, however, reveals that creating a real-world plan requires a vast amount of knowledge about the real world. Consider, for instance, that a friend asks to be met at the Tate Museum in London, 21 days hence, in time for lunch at a restaurant within walking distance of the museum. Agreeing to the meeting would require a plan that takes many things into consideration. If you were situated in North America, you would first have to realise that there are no bridges to England: it is not possible to walk there, nor to take a train. One might take an ocean liner, or one might fly. Entering England requires authentic identification papers, so a passport is required; a US citizen without one would be obliged to acquire one, and delays are associated with acquiring a passport. The US government is not driven by the profit motive, so a wait of several weeks might not be unreasonable. Britain is a sovereign nation, with its own legal system as well as its own legal tender, the British pound; one would therefore be obliged to convert US dollars into pounds before the trip or immediately upon arrival. The US dollar floats relative to other currencies, so one would need to account for exchange-rate shifts. Britain's time zone is five hours ahead of US East Coast time, so upon arriving one might find oneself psychologically and physiologically off balance; resting might be in order. One will very likely not rest in a public place but acquire accommodation, and the Tate Museum is not a hotel, so one could not rest there for the time needed. And on it goes.
To give a simple yes or no answer about making the meeting, one must factor in a huge amount of real-world knowledge. Enabling machines to create real-world ad hoc plans therefore requires that an extensive knowledge base be created, to contain the combinatorial explosion that typically arises when trying to plan forward.
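To illustrate how a machine planner searches this space, here is a minimal STRIPS-style forward planner that does breadth-first search over sets of facts. The actions and fact names are invented for the sketch and drastically simplify the trip example above:

```python
from collections import deque

# Each action: (name, preconditions, facts added, facts removed).
ACTIONS = [
    ("get_passport",  {"at_home"},                 {"has_passport"}, set()),
    ("buy_pounds",    {"at_home"},                 {"has_pounds"},   set()),
    ("fly_to_london", {"at_home", "has_passport"}, {"in_london"},    {"at_home"}),
    ("rest_at_hotel", {"in_london"},               {"rested"},       set()),
]

def plan(start, goal):
    """Breadth-first search over world states; returns a shortest plan."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                 # every goal fact is satisfied
            return steps
        for name, pre, add, rem in ACTIONS:
            if pre <= state:              # action is applicable
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:       # prune revisited states
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at_home"}, {"in_london", "rested"}))
# ['get_passport', 'fly_to_london', 'rest_at_hotel']
```

Even in this four-action toy world the planner must prune repeated states to stay tractable; with realistic numbers of actions and facts, the state space explodes combinatorially, which is exactly the problem described above.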
 * Plan recognition. A computing system capable of recognizing the plan that an actor or agent is formulating is in a position either to assist in implementing the plan or to thwart it ([Henry Kautz - Plan Recognition]). Using Kautz's system, one can create a knowledge base about a specific domain. An expert system that uses this knowledge base is then able to recognize the activity in question and make inferences about the state of completion of the plan. Sufficiently robust plan recognizers can identify the actions both of individual agents and of teams of agents, all of whom may be actively coordinating their efforts to accomplish a goal.
 * Expert systems. These information systems draw upon several areas of artificial intelligence to perform their operations. Developing an expert system requires an understanding of knowledge representation. Human knowledge can be represented as production rules, i.e. simple or complex if-then combinations of antecedent-consequent constructions; as first-order logical constructions such as likes(mary, wine); or as extended representations called frames. Frames represent human knowledge as objects that have attributes, which in turn have values. These object-attribute-value (O-A-V) combinations can be assembled into recursive structures that map very closely to their real-world counterparts, and knowledge about the functioning of these objects can then be applied to real-world situations.
Knowledge is not always exact, so a robust knowledge representation scheme needs some form of representing ambiguity. Fuzzy logic is a way of capturing ambiguity about a real-world phenomenon. For instance, one might observe that a man is "tall" because he stands 6 feet, 2 inches in height. Would a man of 6 feet, 1 inch still be tall? A man of 6 feet, 1/2 inch? Fuzzy logic enables ranges of confidence to be expressed about a particular object or rule.
Once a representation scheme is identified, an inferencing strategy is required. Humans use both forward-chaining and backward-chaining strategies, and often combine the two to solve complex problems. Problems that require identification from a few facts are a typical use of forward chaining: a forward-chainer can be provided with a small set of facts about a situation or object and reason from them. For instance, an ophthalmologist can quickly assess the type of problem a patient presents from just a few facts and observations, classifying it as caused by trauma, infection, toxicity, or a congenital or systemic condition (e.g. a detached retina as a result of high blood pressure). A backward-chaining system, by contrast, can create theories. A famous backward-chainer was Sherlock Holmes: presented with a finished result, such as a person murdered under mysterious circumstances, Holmes was able to reason backward from the presented corpse to how the person ended up in that terminal state, creating theories based upon observations and knowledge about the real world.
Rapidly achieving a goal or formulating a correct theory requires that the chainer utilize operators and meta-operators. Solving a Rubik's cube requires both, because putting a Rubik's cube back in order involves interacting subgoals, i.e. partial goals that conflict with solving the larger puzzle. Operators and meta-operators are used to navigate the search space and constrain the combinatorial explosion that results when one attempts to solve such a problem. Knowledge can also be characterized in terms of its "strength", written "f-hat", i.e. the letter "f" with a caret over it: the more effective the knowledge, the less time is required to traverse the space.
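The forward-chaining strategy described above can be sketched as a small loop over production rules. The diagnostic rules below are invented for illustration, not real medical knowledge:

```python
def forward_chain(facts, rules):
    """Fire any rule whose antecedents are all known facts,
    adding its consequent, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

# Toy production rules: (set of antecedents, consequent).
RULES = [
    ({"high_blood_pressure"}, "vascular_stress"),
    ({"vascular_stress", "sudden_vision_loss"}, "suspect_detached_retina"),
]

derived = forward_chain({"high_blood_pressure", "sudden_vision_loss"}, RULES)
print("suspect_detached_retina" in derived)  # True
```

Note how the second rule only becomes applicable after the first has fired: the engine chains forward from raw observations to a conclusion, exactly as in the ophthalmology example above. A backward-chainer would instead start from the hypothesis and search for rules and facts that could support it.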
 * Intelligence Distribution Agent (IDA), developed for the U.S. Navy, helps assign sailors new jobs at the end of their tours of duty by negotiating with them via email.
 * Systems that trade stocks and commodities without human intervention.
 * Banking software for approving bank loans and detecting credit card fraud (developed by Fair Isaac Corp.).
 * Search engines such as Brain Boost (or even Google).
 * Intelligent robots, such as ASIMO, QRIO, AIBO.
 * Intelligent help systems capable of providing context-sensitive help to software users. These systems can infer the correct level of help to provide because they a) make inferences about the skill level of the user and b) utilize deep knowledge about the software application itself. Using these areas of knowledge it is possible to identify the types of mistakes that users of varying skill levels are likely to make. Novice users with no conceptual insight into an application tend to make syntactic and semantic mistakes, naive users tend to make more semantic mistakes, whereas expert users tend to make thematic mistakes - i.e. inferring incorrectly that a way of assembling commands to solve one problem can be generalized to solve another problem using a comparable sequence of commands.
 * Intelligent help for operators of complex and potentially dangerous industrial processes such as nuclear power plants. Human operators of high-risk industrial processes have a limited attention span and typically perform poorly in situations where cascades of sequential problems can lead to an inappropriate remedy.
 * "Common sense" reasoning. An ongoing example is the project called [CYC]. CYC attempts to capture and use knowledge about the world to perform reasoning about specific topics. CYC drives its inferencing capability with an encyclopedic amount of knowledge about the world; as of July 2008 its knowledge base consisted of 300,000 concepts, 3,000,000 assertions, and 26,000 relations. CYC can further be trained by interaction with humans in the outside world. CYC's ability to reason can be illustrated with, for instance, a picture of a group of people who can be occupationally characterized by their attire. Among them is an athlete who has evidently just run a foot race for an extended period of time. Asked which person is wet, CYC can infer that people who physically exert themselves perspire, that people who perspire will momentarily be wet, and therefore that it is the athlete who is wet.

Computer vision
Things that computer vision is currently good at as of 2007 include:

1. Detecting human faces in a scene.

2. Recognizing people from non-frontal views.

3. Determining the gaze direction of someone with high accuracy.

4. Recognizing people as they age, wear a hat, shave, or grow a beard.

5. Recognizing whether a face is that of a male or female, of a young or old person, and making just about any other kind of such discrimination.

6. Compensating for camera motion in tracking objects.

7. Forming geometric models of objects.

8. Determining the rough three-dimensional structure of a scene over a distance of six meters.

Things that computer vision is still not good at as of 2007 include:

1. Recognizing what people are wearing.

2. Determining the material properties of something that is viewed.

3. Discriminating general objects from the background.

4. Recognizing general objects.

5. Lip reading.

6. Recognizing emotion.

7. Gesture recognition.

Automatic Speech Recognition Software

 * Dragon NaturallySpeaking
 * IBM ViaVoice
 * TomTom voice-controlled navigation
 * Windows Speech Recognition (built into Windows Vista and Windows 7)
 * Siri

'NEED EXPANDING'

Robotics

Ongoing projects
Cyc is a 22-year-old project based on symbolic reasoning with the aim of amassing general knowledge and acquiring common sense. Online access to Cyc will be opened in mid-2005. The volume of knowledge it has accumulated makes it able to learn new things by itself. Cyc will converse with Internet users and acquire new knowledge from them.

Mind.Forth -- shows thinking by the use of spreading activation

Open Mind and mindpixel are similar projects.

These projects are unlikely to lead directly to the creation of AI, but they can be helpful in teaching an artificial intelligence about the English language and the human-world domain.

Artificial General Intelligence (AGI) projects

 * Novamente is a project aiming for AGI.
 * Adaptive AI, a company founded in 2001 with 13 employees.
 * Other projects: Pei Wang's NARS project, John Weng's SAIL architecture, Nick Cassimatis's PolyScheme, Stan Franklin's LIDA, Jeff Hawkins's Numenta, and Stuart Shapiro's SNePS.

Collateral Effects

 * See main page: Psychology.

Future prospects
In the next 10 years technologies in narrow fields such as speech recognition will continue to improve and will reach human levels. In 10 years AI will be able to communicate with humans in unstructured English using text or voice, navigate (not perfectly) in an unprepared environment and will have some rudimentary common sense (and domain-specific intelligence).

We will recreate some parts of the human (or animal) brain in silicon. The feasibility of this is demonstrated by tentative hippocampus experiments in rats. There are two major projects aiming for human brain simulation: CCortex and IBM's Blue Brain.

There will be an increasing number of practical applications based on digitally recreated aspects of human intelligence, such as cognition, perception, rehearsal learning, or learning by repetitive practice.

Robots may take over many jobs currently done by humans.

The development of meaningful artificial intelligence will require that machines acquire some variant of human consciousness. Systems that do not possess self-awareness and sentience will at best always be very brittle. Without these uniquely human characteristics, truly useful and powerful assistants will remain a goal to achieve. To be sure, advances in hardware, storage and parallel processing architectures will enable ever greater leaps in functionality, but such systems will remain mechanistic zombies. Going forward, we will need systems that can demonstrate conclusively that they possess self-awareness, language skills, and surface, shallow and deep knowledge about the world around them and their role within it. The field of artificial consciousness remains in its infancy, though the early years of the 21st century should see dramatic strides forward in this area.

During the early 2010s, new services can be foreseen that will utilize large and very large arrays of processors. These networks of processors will be available on a lease or purchase basis, architected to form parallel processing ensembles, and will allow for reconfigurable topologies such as nearest-neighbour meshes, rings or trees, accessible via an Internet or WiFi connection. A user will have access to systems whose power rivals that available to governments in the 1980s or 1990s. Because of the nature of nearest-neighbour topology, higher-dimension hypercubes (e.g. D10 or D20) can be assembled on an ad hoc basis as necessary. A D10 ensemble, i.e. 1,024 processors, is well within the grasp of today's technology; a D20 ensemble, i.e. 1,048,576 processors, is well within the reach of an ISP or a processor provider. Enterprising concerns will make these systems available using business models comparable to contracting with an ISP for web space. Application-specific ensembles will gain early popularity because they will offer well-defined and well-understood application software that can be recursively configured onto larger and larger ensembles, allowing for increasingly fine-grained computational modelling of real-world problem domains. Over time, market awareness and sophistication will grow, and with this growth will come an increasing need for more dedicated and specific types of computing ensembles.
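For concreteness, a D-dimensional hypercube has 2^D nodes, each addressable as a D-bit number whose nearest neighbours differ in exactly one bit. A small sketch of those size and neighbour calculations (a standard construction, not specific to any vendor's hardware):

```python
def hypercube_size(d):
    """Number of nodes in a d-dimensional hypercube."""
    return 2 ** d

def neighbours(node, d):
    """Nearest neighbours of a node: flip each of its d address bits."""
    return [node ^ (1 << bit) for bit in range(d)]

print(hypercube_size(10))    # 1024 nodes in a D10 ensemble
print(hypercube_size(20))    # 1048576 nodes in a D20 ensemble
print(neighbours(0b000, 3))  # [1, 2, 4] -- node 0's neighbours in a D3 cube
```

Each node thus has exactly D direct links, which is what makes the topology attractive for routing: any node can reach any other in at most D hops by correcting one differing address bit per step.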

'NEED EXPANDING'

Timeline:


 * Invention
 * First AI laboratory
 * Chess champion
 * Speech recognition
 * Autonomous humanoid robots
 * Turing test passed (some argue this won't happen in our lifetimes, and that the Turing test itself is flawed)

Don't know what examples are good...

Links

 * Artificial Intelligence - Myth or Reality
 * Why Artificial General Intelligence may be near - this article describes what kind of work leading in this direction is being done and what can be done in the future.
 * Robotic Nation - how robots will affect our economy.
 * Artificial General Intelligence: Now Is the Time by Ben Goertzel. Why AGI can be created in 10 years if we really try.