Artificial Intelligence: History and Forecast of Future Developments

Artificial intelligence (AI) is intelligence exhibited by machines, together with the branch of computer science that aims to create it. It has been defined as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines." In other words, artificial intelligence is a field that attempts to provide machines with human-like thinking.

Artificial intelligence focuses on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and more than fifty years of research into AI programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess players, and perform countless other feats never before possible. The military is applying AI logic to its hi-tech systems, and in the near future artificial intelligence may impact our everyday lives. Before we discuss the present and future of AI, let us trace its history.


The history of artificial intelligence began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. Experiments to build artificial intelligence go back a long way: well before the modern age, people sought to build machines and endow them with intelligence. AI began with "an ancient wish to forge the gods". Realistic humanoid automatons were built by craftsmen across many civilizations, including Yan Shi, Hero of Alexandria, Al-Jazari and Wolfgang von Kempelen.

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or formal, reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by thinkers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwarizmi (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.

Classical Greek mythology is full of intelligent machines and devices. The oldest known automatons were the sacred statues of ancient Egypt and Greece. The Greek god Hephaestus is said to have built two golden robots to help him move because of his lameness, and the monster in Mary Shelley's Frankenstein popularized the idea of creating a being capable of thought back in the nineteenth century.

The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it." While men like Hero and Daedalus constructed the hardware, philosophers like Aristotle invented syllogistic logic, the first formal deductive reasoning system.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea. In the Middle Ages there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jabir ibn Hayyan's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem. By the 19th century, ideas about artificial men and thinking machines had been developed in fiction, as in Frankenstein, and in speculation such as Samuel Butler's "Darwin among the Machines". AI has continued to be an element of science fiction to the present day.


The Majorcan philosopher Ramon Llull (1232-1315) developed several logical machines devoted to the production of knowledge by logical means. Llull described his machines as mechanical entities that could combine basic and undeniable truths through simple logical operations, carried out by mechanical means, in such a way as to produce all possible knowledge. Llull's work had a great influence on Gottfried Leibniz, who developed his ideas further.

In the 17th century, Leibniz, Thomas Hobbes and others explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes famously wrote in Leviathan: "reason is nothing but reckoning." Leibniz envisioned a universal language of reasoning which would reduce argumentation to calculation so that "there would be no more need of disputation between two philosophers than between two accountants." These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on these, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica, completed in 1913. David Hilbert challenged the mathematicians of the 1920s and 30s to answer the fundamental question: "Can all of mathematical reasoning be formalized?" His question was answered by Gödel's incompleteness proof, Turing's machine and Church's lambda calculus. The answer was surprising in two ways. First, it proved that there were limits to what mathematical logic could accomplish.

But the second part of the answer is more important for AI: within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The Turing machine is a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.
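The idea is concrete enough to sketch in a few lines of code. The following is a minimal, hypothetical simulator (the transition table and tape encoding are illustrative assumptions, not Turing's original notation): a lookup table maps each (state, symbol) pair to a symbol to write, a direction to move the head, and the next state.

```python
# Minimal Turing machine simulator (an illustrative sketch).
# transitions maps (state, symbol) -> (symbol_to_write, move, next_state).

def run_turing_machine(tape, transitions, start="q0", halt="halt"):
    """Run the machine until it reaches the halting state; return the final tape."""
    state, head = start, 0
    while state != halt:
        symbol = tape.get(head, "_")              # unvisited cells read as blanks
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# Example machine: flip every bit, halting at the first blank cell.
flip_bits = {
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
final_tape = run_turing_machine({0: "1", 1: "0", 2: "1"}, flip_bits)
# final_tape now holds "0", "1", "0" in cells 0..2
```

Despite its simplicity, a table of this kind is (in principle) all that is needed to carry out any mechanical process of deduction, which is exactly the point the Church-Turing thesis makes.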


The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mathematical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940’s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The mathematician Alan Turing started writing a computer chess program as far back as 1948, even though he didn't have a computer powerful enough to run it. In 1950, Turing wrote 'Computing Machinery and Intelligence' for the journal Mind, in which he outlined the criteria for a machine to be judged genuinely intelligent.

THE TURING TEST: Alan Turing also brought us the Turing Test, still the holy grail of AI research. It states that a machine can be judged intelligent if it can comprehensively fool a human examiner into thinking the machine is human.

The Turing Test has since become the basis for some of the AI community's challenges and prizes, including the annual Loebner Prize, in which the judges quiz a computer and a human being via another computer and work out which is which. The most convincing AI system wins the prize.

Turing aside, there were also plenty of other advances in the 1950s. Professor King cites the Logic Theorist program as one of the earliest milestones. Developed between 1955 and 1956 by JC Shaw, Allen Newell and Herbert Simon, Logic Theorist introduced the idea of solving logic problems with a computer via a virtual reasoning system that used decision trees.

Not only that, but it also brought us a 'heuristics' system to disqualify branches that were unlikely to lead to a satisfactory solution. The field of artificial intelligence research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true.
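Logic Theorist's trick of discarding unpromising branches can be illustrated with a toy search (a hypothetical example, not Logic Theorist's actual rules): we look for a sequence of moves that turns 1 into a target number, and prune any branch whose value has already overshot the target, since neither available move can decrease it.

```python
# Toy heuristic tree search: find a sequence of "+3" and "*2" moves from 1 to
# the target. Branches whose value exceeds the target are pruned immediately,
# because both moves only ever increase the value.

def find_path(value, target, path=()):
    if value == target:
        return path
    if value > target:                     # heuristic: prune hopeless branches
        return None
    for name, op in (("+3", lambda v: v + 3), ("*2", lambda v: v * 2)):
        result = find_path(op(value), target, path + (name,))
        if result is not None:
            return result
    return None

# 1 +3 -> 4, +3 -> 7, *2 -> 14
print(find_path(1, 14))                    # ('+3', '+3', '*2')
```

Without the pruning test, the search tree would grow without bound; with it, whole subtrees are abandoned as soon as the heuristic shows they cannot succeed.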

Logic Theorist was demonstrated in 1956 at the Dartmouth Summer Research Conference on Artificial Intelligence, organized by computer scientist John McCarthy, which saw the first use of the term 'artificial intelligence'.

The conference bravely stated the working principle that 'every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it'.

The AI revolution had kicked off with a bang, and these impressive early breakthroughs led many to believe that fully fledged thinking machines would arrive by the turn of the millennium. In 1967, Herman Kahn and Anthony J Wiener predicted that "by the year 2000, computers are likely to match, simulate or surpass some of man's most 'human-like' intellectual abilities."

Meanwhile, Marvin Minsky, one of the organizers of the Dartmouth AI conference and winner of the Turing Award in 1969, suggested in 1967 that "within a generation the problem of creating 'artificial intelligence' will substantially be solved". In 1973, in response to the criticism of Sir James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late '80s investors became disillusioned and withdrew funding again. This cycle of boom and bust, of AI winters and summers, continues to haunt the field.

Progress in AI continued despite the rise and fall of its reputation in the eyes of governments, bureaucrats and venture capitalists. Problems that had begun to seem impossible in 1970 have been solved, and the solutions are now used in successful commercial products. However, no machine has yet been built with a human level of intelligence. "We can only see a short distance ahead," admitted Alan Turing in the famous 1950 paper that catalyzed the modern search for machines that think, "but we can see plenty there that needs to be done."

Computer power has increased exponentially since the 1960s, and with every increase in power AI programs have been able to tackle new problems using old methods with great success. AI has contributed to the state of the art in many areas, for example speech recognition, machine translation and robotics.


All being well, IBM plans to enter its Watson computer into a US quiz show in 2010. In order to win, the machine will not only have to understand the questions, but dig out the correct answers and speak them intelligibly. After all the broken promises from the over-optimistic visionaries of the '50s and '60s, are we finally moving towards a real-life HAL?

It's been 41 years since Stanley Kubrick directed 2001: A Space Odyssey, but even in 2009 the super-intelligent HAL still looks like the stuff of sci-fi. Despite masses of research into artificial intelligence, we still haven't developed a computer clever enough for a human to have a conversation with.

The Watson project isn't a million miles from the fictional HAL project: it can listen to human questions, and even respond with answers. Even so, it's taken us a long time to get here. People have been speculating about 'thinking machines' for millennia. Once computers arrived, the idea of artificial intelligence was bolstered by early advances in the field.

There are many applications of AI at present; some of them are listed here.

Banks and other financial institutions rely on intelligent software, which provides accurate analysis of data and helps make predictions based upon that data.

Stocks and commodities are being traded without any human interference, thanks to intelligent systems.

Artificial intelligence is used for weather forecasting.

It is used by airlines to keep a check on their flight systems.

Robotics is the greatest success story in the field of artificial intelligence. NASA and other space organizations send spacecraft into space that are operated entirely by robots. Some manufacturing processes are now completely undertaken by robots, and robots are used in industrial processes that are dangerous to human beings, such as in nuclear power plants.

The use of artificial intelligence is also evident in various speech recognition systems, such as IBM's ViaVoice software and the speech recognition built into Windows Vista.


Historically there were two main approaches to AI:

I. The classical approach (designing the AI), based on symbolic reasoning: a mathematical approach in which ideas and concepts are represented by symbols such as words, phrases or sentences, which are then processed according to the rules of logic.

II. The connectionist approach (letting the AI develop), based on artificial neural networks, which imitate the way neurons work, and genetic algorithms, which imitate inheritance and fitness in order to evolve better solutions to a problem with every generation.

Symbolic reasoning has been successfully used in expert systems and other fields. Neural nets are used in many areas, from computer games to DNA sequencing. The current approach appears to be to program a computer to perform individual functions (speech recognition, reconstruction of 3D environments, and many domain-specific functions) and then combine them.
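Each approach can be sketched in a few lines. Both examples below are hypothetical toy illustrations, not production techniques: a forward-chaining rule engine of the kind used in classic expert systems for approach I, and a single perceptron, the simplest neural unit, learning the logical AND function for approach II.

```python
# Approach I (symbolic): forward chaining over explicit if-then rules.
def forward_chain(facts, rules):
    """Fire rules (premises -> conclusion) until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(("is_man",), "is_mortal")]          # "all men are mortal"
derived = forward_chain({"is_man"}, rules)    # {"is_man", "is_mortal"}

# Approach II (connectionist): a one-neuron perceptron trained on logical AND.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out                # nudge weights toward the target
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(and_data)
predict = lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

Note the contrast: the symbolic engine is handed its knowledge as explicit rules, while the perceptron is given only examples and adjusts its weights until its outputs match them.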


What is the future of artificial intelligence? Can machines ever be as thoughtful, self-aware and intelligent as human beings? The answers to these questions are interrelated. Artificial intelligence in the future will produce machines and computers far more sophisticated than the ones we have today. For example, today's speech recognition systems will continue to improve and are expected to reach human performance levels. Within the next ten years, it is believed that AI will be able to communicate with human beings in unstructured English, using both text and voice, navigate (imperfectly) in an unprepared environment, and display some rudimentary common sense alongside domain-specific intelligence.

However, whether artificial intelligence will be able to create machines that are self-aware and even more intelligent than human beings is a question that nobody can answer. Even if it is possible, how much time it will take cannot be predicted at present.

It is expected that in the future such machines will be developed with basic common sense similar to that of human beings, although pertaining to specific areas only. It is also expected that human mental functions, such as learning by experience, learning by rehearsal, cognition and perception, will be performed by future intelligent machines. In fact, research and experiments are being conducted to recreate the human brain. CCortex, a project by Artificial Development Inc. of California, and the IBM-sponsored Blue Brain Project of the Swiss government are two major ventures whose goal is to simulate the human brain. Whether such a brain will have human consciousness incorporated in it, there is still no answer.

It is also expected that robots in the future will take on everybody's work. Whether it is office work or work at home, robots will accomplish it faster and more efficiently than human beings. So if somebody falls ill, they can obtain a robot nurse that will give them their medicines on schedule. How much care, concern and empathy the robot nurse will have towards the patient is anybody's guess!

There will be an increasing number of practical applications based on digitally recreated aspects of human intelligence, such as cognition, rehearsal learning, or learning by repetitive practice.

The recent invention of the first artificial kidney by a US-based Indian scientist at the University of California is among the latest uses of AI. If it passes human trials, it will be a boon to thousands of patients who suffer from chronic kidney disease and require dialysis.

The development of meaningful artificial intelligence will require that machines acquire some variant of human consciousness; systems that do not possess self-awareness and sentience will at best always be very brittle. Without these uniquely human characteristics, truly useful and powerful assistants will remain a goal to achieve. The field of artificial consciousness remains in its infancy, but the early years of the 21st century should see dramatic strides forward in this area.

Another strange forecast concerning artificial intelligence comes from David Levy. In his thesis, "Intimate Relationships with Artificial Partners," Levy conjectures that robots will become so human-like in appearance, function and personality that many people will fall in love with them, have sex with them and even marry them.


And so it has continued until today, several hundred years later, when we are coming to a point where we can see the beginnings of real artificial intelligence, brought about by our advanced knowledge of technology. Where it will lead us is uncertain. There appear to be no hard technological limits, and what we do with our intelligent machines, and to what extent and purpose we develop them, is another part of the ethical and moral issue surrounding artificial intelligence.

There are many science fiction stories, movies and television series about (android) robots being a part of human society, living and working with us in harmony. In other scenarios, intelligent machines, not remotely resembling humans, are at war with us for dominance. One of the more popular story lines concerns military equipment with a mind of its own, fighting our battles for us against another (robot) army, or at war with us, its makers. All of these robots and machines have artificial intelligence in one form or another. Would you be surprised if machines turned against us? If we make them smarter than us and more efficient, without the flaws that make us rationally weak, they are bound to see us for what we really are: soft-bodied, weak-willed, chaotic, illogical creatures that parasitize the planet until there is nothing left. Not a pretty picture, perhaps, but certainly a side of the truth most of us don't like to think about. Machines would not share our reluctance, so ridding the place of humans might seem perfectly logical to them.

Historical disasters have often been attributed to the gods taking revenge on us for our bad behavior. God made us in his image, according to Biblical history, and now we are making machines in ours. Will future history show anything other than our extinction at the hands of those we created in our own image?

Thus, artificial intelligence is still in its infancy, and its future depends upon the capability of scientists to crack the puzzle of the human mind. Will they be able to solve "the problem of the mind" and incorporate all the human mental and emotional qualities into machines? Let's wait and watch!