Consciousness is only marginally relevant to artificial intelligence (AI), because to most researchers in the field other problems seem more pressing. However, there have been proposals for how consciousness would be accounted for in a complete computational theory of the mind, from theorists such as Dennett, Hofstadter, McCarthy, McDermott, Minsky, Perlis, Sloman, and Smith. One can extract from these speculations a sketch of a theoretical synthesis, according to which consciousness is the property a system has by virtue of modelling itself as having sensations and making free decisions. Critics such as Harnad and Searle have not succeeded in demolishing a priori this or any other computational theory, but no such theory can be verified or refuted until and unless artificial intelligence is successful in finding computational solutions to difficult problems such as vision, language, and locomotion.
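The theoretical synthesis sketched above — a system that models itself as having sensations and making free decisions — can be illustrated with a deliberately simple toy program. This is only a sketch of the idea, not any of the cited theorists' actual proposals; the class and method names (`SelfModelingAgent`, `sense`, `decide`) are hypothetical choices for illustration.

```python
import random

class SelfModelingAgent:
    """Toy illustration of the synthesis sketched above: a system whose
    distinguishing feature is that it maintains a model of *itself* as the
    subject of sensations and the author of free decisions."""

    def __init__(self):
        # The agent's running description of its own states.
        self.self_model = []

    def sense(self, stimulus):
        # The agent does not merely register the stimulus; it records a
        # description of itself as having a sensation.
        self.self_model.append(f"I am sensing {stimulus}")
        return stimulus

    def decide(self, options):
        # The underlying mechanism is ordinary computation, but the
        # self-model describes the outcome as a freely made decision.
        choice = random.choice(options)
        self.self_model.append(f"I freely chose {choice}")
        return choice

agent = SelfModelingAgent()
agent.sense("a red light")
action = agent.decide(["stop", "go"])
print(agent.self_model)
```

The point of the sketch is that nothing in the mechanism is mysterious: consciousness, on this view, would consist in the system's self-attributions, not in any extra ingredient beyond the computation.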
As George Miller wrote in 1962, "Consciousness is a word worn smooth by a million tongues." The term means different things to different people today, and no universally agreed "core meaning" exists. This uncertainty about how to define consciousness is partly brought about by the way global theories about consciousness (or even about the nature of the universe) have intruded into definitions. In classical Indian writings such as the Upanishads, 'consciousness' is thought to be the essence of 'Atman', a primal, immanent self that is ultimately identified with 'Brahman', a pure, transcendental consciousness that underlies and provides the ground of being of both Man and Nature (Sen, 2008). In the classical Western tradition, "substance dualists" such as Plato and Descartes bifurcated the universe, believing it to consist of two fundamental kinds of stuff: material stuff and the stuff of consciousness (a substance associated with soul or spirit). Following the success of the brain sciences and related sciences, 20th-century theories of mind in the West became increasingly materialistic, assuming physical "stuff" to be basic and consciousness to in some way "supervene" or depend on the existence of physical forms.
Scientists and philosophers have long pondered whether we will ever create artificial intelligence that rivals human intelligence. The answer is yes: we will be able to create artificial intelligence that rivals and even surpasses human intelligence. In some areas, such as mathematical computation, machines already outpace humans by a wide margin. The next question is tougher: will we ever be able to give this artificial intelligence sentience? Time, space, and matter are the harder phenomena to explain; life and sentience are easier phenomena, as they use existing time, space, and matter to come into being.
Examples of the kinds of stimuli that may be used to determine a patient's responsiveness as a measure of consciousness include calling the patient by name, producing a sharp noise, giving simple commands, gentle shaking, pinching the biceps, and applying a blood pressure cuff. Responses to stimuli should be reported in specific terms: how the patient responded, whether the response was appropriate, and what occurred immediately after the response.

Today, only basic examples of artificial intelligence technology exist. Ranging from video games to medical research, AI has permeated various niches of society. The niche of particular interest to many scientists is the creation of an AI system that can simulate the human mind. To create a piece of AI technology that is aware of its own existence is certainly a prospect that requires deep consideration. Such technology could think independently and, more importantly, make decisions of its own will. These systems could also learn: just as a child learns to read, an intelligent machine could use its own artificial mind to simulate the same behaviour. These forms of AI, these "living" machines, cannot be readily welcomed without first examining the negative consequences that could arise.