Fireduck


2005.04.28.20.32.58

On Intelligence

I have a theory about the basis of intelligence: all decision-making entities are rule based. These rules dictate how an intelligent agent acts. The rules may be learned or innate. The rules for a biological decision-making entity might be:

1) Stay physically safe. Either fight or run away from anything threatening.
2) Maintain food intake. Eat.
3) Continue the species. Procreate.


These rules might apply for almost any animal with any brain to speak of. The rules have an order of precedence. An animal running away from another animal that is trying to eat it would probably not be worried about eating or procreating at the time. Lower importance rules are only handled after the higher priority ones.
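The precedence idea above can be sketched as a tiny rule-based agent: scan the rules in priority order and act on the first one that applies. This is a minimal illustration of my own; the predicates and the "world" dictionary are hypothetical, not part of the theory itself.

```python
# Minimal sketch of a priority-ordered rule agent. The rule predicates
# and the "world" dictionary keys are hypothetical illustrations.

def act(world):
    """Return the action for the highest-priority rule that applies."""
    rules = [
        ("stay_safe", lambda w: w["threat"], "fight or flee"),
        ("eat",       lambda w: w["hungry"], "eat"),
        ("procreate", lambda w: True,        "procreate"),
    ]
    for name, applies, action in rules:   # earlier rules take precedence
        if applies(world):
            return action

print(act({"threat": True, "hungry": True}))   # safety preempts eating
print(act({"threat": False, "hungry": True}))
```

A frightened animal, for example, gets "fight or flee" even while hungry, because the safety rule is checked first.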

An advanced social animal’s list of rules might start with the three above and then have additional rules:

4) Advance position in social hierarchy.
5) Improve personal prosperity.
6) Improve prosperity of society.


Number 4 here might include activities such as challenging the alpha male for dominance, running for mayor or joining a country club. Several of these goals may be met by a single action. Building a bigger house (5) might lead to higher social standing (4). These might be in different orders for different people. I’m not trying to assert anything about their orders, other than that for each entity there probably is some such ordering of rules.

I believe that intelligence can be measured by the ability of an entity to change the relative importance of the rules. For example, a person might decide that the betterment of society is more important than physical safety or eating (Gandhi, for example). To do this, I believe a certain level of introspection is required. On some level, an intelligent entity must understand the rules on which it operates and be able to change them. Maybe this is what people mean when they bandy about terms like “self-aware”.
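One way to picture that introspective step is to make the rule ordering itself data the agent can inspect and rearrange. The class and rule names below are my own hypothetical encoding of the idea:

```python
# Hypothetical sketch: an agent whose rule ordering is itself data that
# it can inspect and change (the "introspection" described above).

class Agent:
    def __init__(self, priorities):
        self.priorities = list(priorities)   # earlier = more important

    def reprioritize(self, rule, new_index):
        """Introspective step: move a rule to a new spot in the ordering."""
        self.priorities.remove(rule)
        self.priorities.insert(new_index, rule)

a = Agent(["stay_safe", "eat", "procreate", "better_society"])
a.reprioritize("better_society", 0)   # the Gandhi move: society first
print(a.priorities)
```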

On Computer Intelligence

I think the same concepts of rules and the ability to change them can apply quite nicely to computer intelligence. I purposely do not use the term artificial intelligence because it implies that the intelligence we humans create with machines is necessarily less authentic than our own.

Consider a simple computer entity in charge of electrical power distribution on a space station. It might have a set of rules like:
1) Supply power to life support.
2) Supply power to hydroponics systems.
3) Supply power to communications systems.


Suppose that there is a long-term power shortage and there is only enough power for life support. Following the rules, the agent will cut power to hydroponics and communications in order to keep life support functioning. Then all the plants in hydroponics die and the crew starves to death a month later.
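The simple agent's behavior amounts to a strict-priority, all-or-nothing allocator, sketched below. The system names come from the rules above; the power figures are hypothetical, chosen only to reproduce the failure described:

```python
# Sketch of a fixed-priority, all-or-nothing power allocator.
# The wattage figures are hypothetical.

def allocate(available, demands):
    """Give each system its full demand in priority order; cut the rest."""
    powered = {}
    for system, demand in demands:     # list is ordered by priority
        if available >= demand:
            powered[system] = demand
            available -= demand
        else:
            powered[system] = 0        # all-or-nothing cut
    return powered

demands = [("life_support", 100), ("hydroponics", 40), ("communications", 20)]
print(allocate(110, demands))  # hydroponics and communications are cut
```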

Suppose we had an intelligent agent using the same rules. This agent might decide that life support can be cut back and part of the station evacuated in order to save hydroponics and thus the crew’s lives. Or perhaps it could have set aside some power for an emergency communications broadcast asking for help from earth. One could certainly argue that the simple agent was poorly written and should have included instructions to handle this long-term power shortage scenario. However, the reason we want intelligent computer agents is to be able to handle situations that were not foreseen. It seems that we do want an agent that has the ability to change its own operational rules based on the situation.
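An agent allowed to bend its rules might instead reduce life support to a survival minimum (the partial-evacuation idea above) rather than cut lower-priority systems entirely. A sketch, with hypothetical minimum-power figures:

```python
# Sketch of a more flexible allocator: fall back to per-system survival
# minimums when full demand cannot be met. All figures are hypothetical.

def allocate_flexible(available, demands, minimums):
    """Try full demands first; if short, guarantee minimums, then spend
    any surplus in priority order."""
    if available >= sum(d for _, d in demands):
        return dict(demands)                 # no shortage at all
    if available >= sum(minimums.values()):
        powered = dict(minimums)             # everyone gets survival level
        spare = available - sum(minimums.values())
        for system, demand in demands:       # spend surplus by priority
            extra = min(spare, demand - powered[system])
            powered[system] += extra
            spare -= extra
        return powered
    return {system: 0 for system, _ in demands}

demands = [("life_support", 100), ("hydroponics", 40), ("communications", 20)]
minimums = {"life_support": 60, "hydroponics": 30, "communications": 0}
print(allocate_flexible(110, demands, minimums))  # hydroponics survives
```

With the same 110 units that doomed the crew before, life support runs at a reduced level and hydroponics stays alive.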

Suppose we have the same station. This time, the station’s automated systems are controlled by an intelligent agent that includes the abstract rules of:
1) Promote the prosperity of humans on this station.
2) Promote the prosperity of humanity in general.

Suppose the station observes an asteroid that is on a collision course with earth, and at the same time a dust cloud is blocking communications to earth. Let’s further suppose that it is possible to send a warning to earth, but it will take almost all of the station’s power and will have to be sustained for at least 10 hours to ensure that the signal gets through. If the station does this, all the crew will die due to lack of life support. In this case, the agent should put rule two above rule one and try to send the warning. The needs of the many outweigh the needs of the few.

Suppose that in a separate case, the agent controlling the space station decides that humanity would be best served if they were not so damn stupid and decides on an aggressive plan of killing off any humans who have an IQ of less than 110. In this case, the agent decides that the needs of the many outweigh the needs of the few.

Both cases are logically similar: sacrificing some for the benefit of the rest. How can we build an intelligent agent that supports one conclusion and not the other? Surely we can put in some empathy rules and some rules respecting the rights of the individual, but that might not be enough. History has examples of humans deciding that a genetic cleansing was a good idea, and in theory we all have empathy for each other. Can we build a system which is smart enough to be useful and predictable enough to be trusted?

Asimov expressed some similar ideas in I, Robot as well as in some of his other books. However, I think he neglected the ability of an intelligent entity to change its world view and its rules until any action is justified. Even if we do put in rules to avoid certain actions, intelligence is only real if it can change the rules. We have plenty of examples of this. We (humanity) set rules for ourselves and then justify breaking them. It was ok to steal because my family was starving. It was ok to kill him because he had hurt others. It was ok to kill these people because they are less than human.







©1999-2009 Joseph Gleason. Duplication of above materials prohibited without express written permission. All Rights Reserved.