Lecture by Graeme Lindenmayer, Atheist Society Melbourne, 11 December 2018
Some people expect that we will soon be able to make computers that will be conscious.
Our bodies, and in particular our brains, are intelligent physical structures that are conscious, so it should also be possible for other intelligent physical structures, such as computers, to be made conscious.
The idea of manufacturing machines that are conscious raises a few issues.
Why would we do it?
What kinds of things would we want machines to be conscious of?
How would machine consciousness be produced?
How would we know whether a particular machine actually was conscious?
So, why would we want to give machines consciousness?
Wanting to do it is an example of the propensity of human beings to search for new things and to try to break boundaries. Explorers and scientists have always wanted to discover the unknown. Some people are planning to put human settlements on the Moon and Mars. Information scientists and computer technologists aspire to make machines equal or superior to humans in every aspect of intelligence. Making machines conscious would be part of that challenge.
What advantages would machine consciousness be expected to provide, and who would benefit? One expected advantage would be the thrill and the kudos for the first people to do it. A more worthy advantage might be that by making them conscious, some machines could be more companionable and more humane. This might even make some humans more companionable and humane. Improving humaneness could be relevant to the institutions that care for children and aged and handicapped people, where there now seems to be a lot of abuse and neglect. Also, consciousness might help machines understand what they are doing, which might make them more efficient, more effective, and less likely to accidentally harm us. This could apply to driverless vehicles.
Armament manufacturers might think that consciousness could make
intelligent weapons more effective. And perhaps conscious machines
could be blamed for any breaches of international rules.
Most people probably think that human consciousness is very useful.
Our human experiences contribute to our wellbeing. For example,
being conscious of pain is an important function for the preservation
of the body, for humans and many other species. When people have
no sense of pain they have no indication that they need to act to
prevent or treat any damage to their body parts. This happens with people
who have leprosy and often to people with advanced diabetes.
Also, there are conditions, such as the early stages of cancer, that cause
no pain but become increasingly dangerous with untreated development.
Experiencing severe pain affects the cognitive and emotional areas of the brain in a way that would not have happened if no pain had been felt. Emotional memories are created of the pain and of the incident that caused it, which could send warning signals whenever a future situation occurs that resembles one that previously caused pain.
For most people, pain is not a major part of their consciousness. Many
conscious feelings are not painful but very pleasant or exciting.
There are many things that we enjoy in life.
Our consciousness of what we see and hear and taste, etc., gives us a feeling of what the outside world is like. We conjure up memories of these things, and put "pictures" of them into our consciousness. Most of us have many happy memories stored away that we like to bring back to consciousness from time to time.
These feelings, and the conscious reminders of these feelings, give a sense of reality to everything that we know about. They are much more than the representations of the outside world that cameras and camcorders put into the memories of the present intelligent non-conscious machines. So a similar sense of reality might occur for conscious machines, and make them more "understanding" of what they were doing, and safer and easier to work with. But each machine and kind of consciousness would need its own special treatment.
Any consequences of having conscious machines would, presumably, depend on the kind of consciousness they were given.
What kinds of consciousness might we want to give machines?
Human consciousness has very many facets, some of which have already been mentioned. We would need to choose which ones should be given to machines to suit their specific purposes, and which ones to avoid. Some kinds of consciousness would make us morally obliged to treat the machine "humanely" – or compassionately.
One kind of human consciousness relates to the outside world and what is happening in it. Another kind of consciousness relates to our inner state.
Consciousness relating to the outside world is based on inputs
from our sensory systems – sight, hearing, taste, smell, touch and
pain, etc. So this type of consciousness is our awareness, at the particular
time, of what we are observing and doing in the physical world, which includes
our own physical body. It also includes the information that we
get from reading and listening. A lot of processing in the brain
is required for us to be able to make sense of the sensory inputs, such
as the conversion of the inputs from the optic nerve into coherent pictures,
but we are conscious of only the result of this processing and not
of the processing itself.
Consciousness of our inner state is based on the memories derived from our sensory inputs, on our cognitive processing of these memories, and on memories resulting from our cognitive processes. We consciously think and create ideas about both the outside world and our inner selves.
We also have another kind of inner consciousness: the wide range and the many degrees of our emotions. They range from love through liking to hate; from disgust through dislike and ambivalence to appreciation, respect and reverence; from fear through anxiety and restiveness to calmness and confidence; and from despair through depression and equanimity to satisfaction and exuberance. And there are more shades of emotional consciousness than these that I have just listed.
There is the consciousness of wanting something, and
of ambition and the urge to take some specific action. And,
of course, we are continually consciously taking actions. Machines
that were intent on self-preservation could become dangerous, particularly
if they could take independent action, or control other machines, such
as self-driving vehicles or intelligent weapons. Ambition and other inclinations
are abstruse feelings. Providing machine algorithms for them and
linking them to the issues of the outside world would be very tricky and
could produce unintended and dangerous outcomes.
Things that enter our consciousness are initially stored in our short-term
memory. But much of what our eyes see and our ears hear is not passed
on to our consciousness. Much of what we experience is of very little
significance to us, and it disappears from memory. What is significant
is put into the long-term memory, and may be recalled later – sometimes
with difficulty. It would be necessary to decide which information
detected by a conscious machine should be kept, and which should
be discarded. The amount of such detail that was detected and stored,
and the use it was put to, would determine how much additional
memory and computing power the machine would need.
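As a rough illustration of the kind of filtering just described, here is a minimal sketch of a hypothetical machine memory that scores each detected item for significance and keeps only the significant ones in its long-term store. The scoring rule, the threshold and all the names are invented for illustration, not a real design:

```python
# Minimal sketch of significance-based retention for a machine's memory.
# The scoring rule and threshold are invented assumptions.

SIGNIFICANCE_THRESHOLD = 0.5  # items scoring below this are discarded

def significance(event):
    """Toy scoring rule: weight events by novelty and relevance."""
    return 0.7 * event["novelty"] + 0.3 * event["relevance"]

def retain(events):
    """Keep only events significant enough for 'long-term memory'."""
    return [e for e in events if significance(e) >= SIGNIFICANCE_THRESHOLD]

observed = [
    {"label": "pedestrian ahead", "novelty": 0.9, "relevance": 1.0},
    {"label": "leaf on road",     "novelty": 0.1, "relevance": 0.1},
]

long_term = retain(observed)
# Only "pedestrian ahead" is kept; "leaf on road" is discarded.
```

The point of the sketch is that someone has to choose the scoring rule and the threshold; the machine's "memory" of its experiences is only as good as those design decisions.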
Machines are given access to specific kinds of information that are
necessary for the processes of their specific purposes. They can be connected
to devices that measure aspects of their environment, such as weight and temperature
and light and sound, and the concentration of particles in the atmosphere,
etc. This does not mean that they know what weight is,
or experience its effects, or feel hot or cold. And it does not
mean that they know what different objects actually are, even when
they can detect, identify and name them.
Feeling hot is quite different from measuring one’s own temperature. Feeling the weight of something that we are lifting or holding is quite different from measurement of weight. Our consciousness of weight and temperature tells us, among other things, that something is too heavy for us to carry or that it is light enough to carry, or whether something we are touching, or our environment, is too hot or too cold for our safety. Making machines conscious of such things could give them more understanding.
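The distinction between measuring and feeling can be made concrete. A present-day machine's "sense" of weight or temperature is just a number compared against limits, with nothing that feels heavy or hot. A deliberately simple sketch (the limits and names are invented for illustration):

```python
# A non-conscious machine "senses" weight and temperature as bare numbers.
# It can act on limits without anything feeling heaviness or heat.

MAX_SAFE_LOAD_KG = 25.0  # invented limit
MAX_SAFE_TEMP_C = 60.0   # invented limit

def can_carry(load_kg):
    """Decide whether a load is light enough, by comparison alone."""
    return load_kg <= MAX_SAFE_LOAD_KG

def too_hot(surface_temp_c):
    """Flag a surface as unsafe, by comparison alone."""
    return surface_temp_c > MAX_SAFE_TEMP_C

print(can_carry(18.0))  # True  - but nothing "feels" the weight
print(too_hot(75.0))    # True  - but nothing "feels" the heat
```

Whatever giving a machine the felt quality of heaviness or heat would involve, it is clearly something over and above these comparisons.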
Machines also detect sounds and colours. They recognise patterns of all kinds, such as pictures and other shapes, including patterns of letters and words and numbers, etc. They translate spoken words into text, and text into spoken words. They translate words, numbers and patterns of any kind into commands to do something, or recognise something or someone, or decide the optimum action to take in a process or competitive game.
There is no reason to think that they have similar conscious feelings to what we get from those same sounds, pictures, patterns and words. A machine using a picture to identify someone would be recognising a pattern not a person.
Computerised processes are just the operations of established laws of physics, using the minimum amount of information needed for the particular tasks.
Some of the things that machines do are called mental tasks when
we do them. Few people would accept that the machines understand
the significance of what they are doing in any of these things. In each
case, we would need to consider what advantages and disadvantages would
result from making the machine conscious.
Some kinds of consciousness would need to be decided arbitrarily. For
example, human sensations of colour might not be the same or even similar
for all people. We may agree on how we name the particular colours of something,
but that could be explained by the fact that our eyes all have similar
mechanisms for detecting and representing the various wavelengths of visible
light. Similarly, we are not able to experience what other people
experience regarding sound, tastes and smells, or their emotions. If we
were to give machines consciousness of these we would have to provide more
than just detection and measurement: we would have to provide something
that could deliver the appropriate sensations.
Machines have thermostats that take specific action when a particular
temperature is reached, and sensors for degrees of strain when they are
bearing weight, and for humidity, etc. These, too, might be candidates for machine consciousness.
Humans don’t need to be conscious of every detail that they detect. The same would apply to machines, with their equivalent of sensory organs, such as camcorders. It would be important to determine, for example, how much of what was happening along a road a self-driving car would need to be conscious of to make it safer. It might require good recognition of human gestures, both hand and facial. The value of consciousness would depend on the purposes the machine was put to. It would be necessary to establish that being conscious of a particular kind of thing actually did serve the particular purpose.
Should we make machines feel pain or anxiety or fear, or pleasure or
confidence or happiness, or anger? Some people might think that these emotions
would make machines more companionable, or more suitable for particular
tasks, or as soldiers. They would have in their memories the details of
situations they had had with particular people, and conclude that humans
might have similar emotions and similar memories. This might make the machines
genuinely empathetic and companionable to humans. Or they might outsmart us.
As mentioned earlier, humans are conscious of a great range of emotions. Our emotions are regarded as the most significant influence on our decision-making, eclipsing our reasoning. They may give us the incentive to achieve what we might otherwise not have started. But emotions can also make us do inappropriate, or silly, or dangerous things. A good balance between emotion and rationality is important for dealing with our very complex environment. A similar balance would be desirable for machines that felt emotions.
Giving machines a range of emotions might also mean that they could develop psychiatric problems. This might be useful for research into treating these conditions in humans. But once machines could have such experiences, the same ethical issues that apply to humans and animals would have to apply to machines.
There would be no need to make machines conscious to make them obsessive:
some are already non-consciously obsessive. Perhaps they might be appropriately
tempered by consciousness.
Humans and some other organisms have memories of their lives. Throughout
their lives they are aware of the changes in their bodies and minds, and
also their continual interactions with their human and non-human environment.
They are conscious of a lot of these memories of their experiences. None
of these aspects of consciousness seem relevant to machines.
A brain learns to do tasks unconsciously. Humans and some other organisms learn through constant practice to perform complex tasks without thinking about how they are doing them. Often humans are more skilful when doing things unconsciously than consciously. Examples are manipulative tasks such as playing sport, using a keyboard or writing, and walking, and mental tasks like calculating. There is no time to think about how to hit a tennis ball that is speeding towards you, but your unconscious reaction that has developed through practice will perform the task.
Similarly, it would often be more efficient, more reliable and safer for machines to just rely on non-conscious algorithms. It would not be necessary for a machine to be conscious for it to give or be given warnings, such as the equivalent of pain. In some cases, warnings and other information would preferably be sent directly to humans. Also, it would not be necessary for a machine to be conscious in order to detect the changing moods of individual people. The present machines sometimes misjudge the situation, but we often do that too.
All these may be conundrums. But choosing
whether to give a machine consciousness, and what kinds of consciousness to
give it, will not be the biggest problem.
How could consciousness be produced in a machine?
It might be argued that to be conscious
it is necessary to be alive, so machines could not become
conscious. The only argument for this is that all the conscious entities
that we are aware of are living organisms. But we are not sure whether
every living organism is conscious. All that we know about any physical
characteristics of consciousness is that the content of consciousness
seems to be dependent on information held in the brain. And machines contain
information. But consciousness is not the same as information.
I think that for any person, or any inanimate thing, to have any conscious experiences, certain conditions must be met.
There is plenty of evidence that the processing of information in the brain is the only source of the content of the consciousness of human beings. And here are a few ideas about achieving it from the information in machines.
Some people think that when a brain has developed a certain degree of complexity it automatically becomes able to be conscious. This gives no clue to what kind of role complexity may have.
There is no apparent reason why sheer complexity in any kind of system should, of itself, automatically produce consciousness. There are no plausible suggestions of what kind of complexity, or how much would be enough, or of whether different kinds of complexity would be needed to create different kinds of consciousness, such as for pain, for seeing colour and for being happy.
There is a branch of mathematics called complexity theory. It deals
with two aspects of complexity. One is the analysis of complex and chaotic
systems, including the solving of very difficult mathematical equations.
The other examines processes by which apparently independent elements can
come together to produce coherent complex systems. But complexity theory
does not show how consciousness might occur.
One suggested idea is to "download a human mind" onto a machine that is already suitably equipped.
This might seem like a straightforward process. Every operation of a brain involves electric currents, which can be detected using wires attached to specific parts of the head. Also, structures within the brain can be detected using magnetic resonance imaging (MRI). Processes within the brain can be watched using functional MRI (fMRI). "Brain scans" using these technologies have been performed for a long time, for diagnostic and scientific purposes.
People can now control devices, including wheelchairs and prosthetic limbs, using the patterns of electric signals generated by the process of thinking specific thoughts.
All this seems to suggest that, even
though we might not know how a brain produces consciousness, we
could produce artificial consciousness by copying all the information
in a brain, and keeping that information in exactly the same structural
format as it was in the brain.
But detecting the electric currents in the brain does not provide
a picture of the structure of the neurons and their connections.
It’s like hearing the sounds of the brain’s processing. CT
scans and MRI might provide detailed 3D pictures, but there is a lot of
difference between a picture and complete knowledge of the thing
pictured. A comprehensive detailed examination of the brain would be needed.
The human brain has tens of billions of neurons and many other kinds of cells. Each neuron has multiple connections. The brain is three dimensional, so access to individual connections between neurons would have to go through other brain matter. The neurons are not idle, not even when the person is asleep or anaesthetised.
Downloading a live brain would not be feasible. So
a dead brain would be necessary. And the brain would have to have retained
all the connections and their content that it had when it was alive. But,
since brain death is the criterion for death, the brain might already have
had some damage.
The dead brain would need to be kept at a temperature that prevented any deterioration of the tissues. This would mean cooling the entire body from immediately after the death of the person. The downloaded information would then be needed to create a replica that could operate using a suitable power supply, and then be connected to the machine that was to be made conscious.
In 2015 a scientist at Harvard University completed a six-year project, completely analysing the structure of a tiny fragment of mouse brain. The volume of brain tissue was 1500 cubic microns, equivalent to a cube whose sides were slightly longer than a hundredth of a millimetre. While developments in technology will probably increase the speed of such projects, which are still ongoing, completely downloading an entire non-conscious dead human brain would take many decades to complete.
Constructing the downloaded replica suitable for attaching to a machine would take even longer. Once completed, it would need to be appropriately attached to the machine, given a power supply and switched on. But switching on might not make it start functioning.
But what if, despite these problems, all this actually were to be achieved?
A machine fitted with such a replicated brain would have the knowledge, the personality and the consciousness of the person whose brain it was copied from. And it would expect to have all the sensory inputs that that person had. So it would need to have visual inputs equivalent to those delivered to a brain by the optic nerve, otherwise it would be visually impaired or blind. It would also need to have the equivalent of the motor nerves that cause eyes to move and to focus. The same would apply to all the sensory and motor nerves so as to match those of the person whose brain had been copied.
With a person, loss of a limb often causes "phantom pains", and a similar effect would apply if such a conscious machine was not given the equivalent feelings of active arms and legs.
Lots of sensory and motor devices would need to be attached to the machine, otherwise it would suffer a continual agony and anxiety. And the machine would want to do the kinds of things the person would have wanted to do. The immorality of making machines conscious in this way without such attachments would be an important social issue. Providing all the necessary attachments would be costly.
One alternative might be to give a machine the consciousness of a dog,
or a mouse or a cockroach. That might sometimes be sufficient. The cockroach
would be easier.
There is no evidence or theory of how a brain might have
the capability of being conscious, or of how information
that is stored in the brain might be converted into its specific
content of consciousness. All this makes me think that the only way
to provide machine consciousness is to find out how organic consciousness
is produced.
Some people dismiss all this arguing about consciousness. They say it is a non-issue; we all have it, so we should just accept it. This attitude is of no help to anyone who might want to produce a conscious machine.
Some people say there is no such thing as consciousness.
Until there is some physical theory that explains how consciousness
arises from the patterns of connections in brains, we cannot begin
to work out how to produce consciousness in machines.
How would we know whether a machine was conscious?
If all the scientific and technological problems relating to producing consciousness were to be solved, how would we know whether a particular machine was actually conscious?
The only kind of consciousness that we can discuss with confidence is the consciousness that humans experience. Each person feels their own consciousness and assumes that other people have similar feelings of consciousness. These assumptions are based on the observation that other people are similar to us, and behave similarly to us, and can talk about the things that they and we are conscious of. This seems eminently reasonable, but it is not direct evidence that other people are conscious.
We also deduce similar things about the likely consciousness of other species of organisms, but we are unable to have conversations with them about it.
Most people think that some other animals are conscious, but they are
less sure about animals that are smaller and very different from humans.
Most people assume that plants, fungi and microorganisms are not conscious.
We might think that this is because their sensory systems are different
from ours and those of other mammals. There is no valid
evidence of which organisms are conscious and which are not.
Conscious machines would cost more than their non-conscious counterparts, because of the additional complex attachments and programming they would need, and because of the advantages attributed to them. So it would be important for purchasers to be able to tell that what they were buying actually was conscious; otherwise a lot of customers might not realise that they were not getting what they paid for.
How would they tell?
A machine might be conscious of only a few aspects of the outside world, and/or of a few emotions or dispositions, so separate tests would be necessary for each. For example, a test of whether a machine felt hopeful, or liked caring for children and elderly people, might have to include observing its behaviour.
Would some kind of Turing test be reliable? In the Turing test, which is named after Alan Turing, who suggested it, a person has a conversation with an unseen person or machine, and has to decide which it is. The person doing the test chooses what to talk about and what questions to ask, expecting that a machine would reply in a different way from a person. People doing the Turing test often come to the wrong conclusion.
Telling whether a machine was conscious might be similar. A machine might be asked about its experiences, such as describing them, liking or disliking them, and what made them good or bad, and what things made the machine happy or sad. It might be asked if it was conscious.
Whatever the questions, answers and discussions were, the machine could have been lying, consciously or unconsciously. Asking a machine to do tasks, like identifying or finding something, or solving a problem, would not identify whether it was conscious.
Testing how good it was at playing chess would not be very useful –
unless you thought that it might be conscious if it lost.
Just as the content of human consciousness seems to be entirely dependent on the information contained in the brain, so would we expect the content of the consciousness of a machine to be entirely dependent on the information contained in its memory. A machine that was not conscious should be equally capable, or equally incapable, of passing tests as a similar machine that was conscious.
You could program a machine to say "ouch" whenever someone hit it, but that would not mean it was conscious.
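The "ouch" machine can be written in a few lines, which is exactly the point: a scripted response reveals nothing about inner experience. A deliberately trivial sketch:

```python
# A machine scripted to respond to being "hit".
# Producing the right output says nothing about whether anything is felt.

def react(stimulus):
    """Return a canned response for each stimulus; no experience involved."""
    responses = {"hit": "ouch", "compliment": "thank you"}
    return responses.get(stimulus, "...")

print(react("hit"))  # prints "ouch" - behaviour alone cannot settle the question
```

Any behavioural test of consciousness faces this problem: the behaviour can always be produced by a program that feels nothing.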
In all cases it could have been programmed to give a false answer. Even a machine programmed to tell the truth could be programmed to "believe" it was conscious.
There seems to be no alternative test.
So how would the developers know whether they have succeeded? How would they convince the doubters?
If there are ever going to be conscious machines, there are sure to be some buyers.
I wish them luck.