Artificial Intelligence IV
The neural net approach to AI has proven very productive. To reiterate, the general idea is to train the AI with lots of examples curated by a human. For example, by being given lots of pictures of dogs, each labelled with its type, the AI "learns" and can then identify the type of dog in an unknown image. This is thought to be the way the brain works: pathways which give correct answers are reinforced, and those which do not are slimmed down.
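As a purely illustrative sketch of that idea (a toy perceptron, not a real image classifier; the data and feature values here are invented for the example), training means nudging internal weights toward the human-supplied labels:

```python
# Toy sketch of supervised training: each example carries a human-provided
# label, and the model's weights are adjusted toward the correct answer,
# loosely echoing the "reinforce correct pathways" idea in the text.

def train(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs, where label is 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Prediction: a weighted sum of the features, thresholded at zero.
            score = sum(w * x for w, x in zip(weights, features)) + bias
            predicted = 1 if score > 0 else 0
            # Strengthen the weights when wrong in one direction,
            # weaken them when wrong in the other; leave them if correct.
            error = label - predicted
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Hypothetical labelled data: feature vectors standing in for "dog" (1)
# versus "not dog" (0) images.
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.2, 1.0], 0)]
w, b = train(data)
print([predict(w, b, f) for f, _ in data])  # prints [1, 1, 0, 0]
```

The human effort lives entirely in producing the labels attached to each example, which is exactly the costly step discussed next.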
The big problem with this approach is that a great deal of training data is needed, and it must all be annotated by a human. The good thing about the approach is that it appears completely scalable, i.e. there seems to be no upper limit to the number of examples beyond the size of the computer. The practical limit is instead imposed by the size of the training set and the cost of having a human describe it: bigger training sets are better, but the amount of human effort becomes very large and expensive. Computer hardware costs are falling so fast that computing power is not a limiting factor.
A recent development is the self-referencing AI. Suppose a text is taken as input. The AI takes a word and tries to estimate what the next word will be. It can then check the actual next word and cycle round, learning as it goes, until it gets the right answer. Language contains a lot of redundancy, which makes guessing the next word much easier. The huge advantage is that the training set doesn't need the slow and costly human annotation step.
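A minimal sketch of that self-supervised idea, with the "model" reduced to a simple table of word-pair counts (a real system uses a neural network, but the training signal is the same; the corpus below is invented for the example):

```python
# Sketch of self-supervised next-word learning: the text itself supplies
# the answers, so no human annotation step is needed. The "label" for
# each word is simply the word that actually came next.

from collections import Counter, defaultdict

def train_bigrams(text):
    """Learn next-word statistics from raw text alone."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1  # record what really followed `current`
    return model

def guess_next(model, word):
    """Guess the most frequently observed next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(guess_next(model, "on"))  # prints "the"
```

The redundancy the text mentions is visible even here: "on" is always followed by "the" in the corpus, so the guess is easy.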
In the jargon of the area, the coefficient (weight) applied to different calculations is called a parameter. To experimenters' surprise, models with a large number of parameters showed improvements beyond simple scale. For example, through text analysis an AI could correctly interpret a simple addition expressed as a human might put it, "two plus two", rather than the symbolic 2+2.
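To make the jargon concrete, here is a minimal illustration of what "parameter" means in this context (a single artificial neuron with invented values; large models are, at heart, enormous numbers of such weighted calculations):

```python
# Each parameter is a coefficient (weight) applied to one input in a
# calculation; the bias term is a parameter too. Training means adjusting
# these numbers, and "billions of parameters" means billions of them.

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias.
    Every value in `weights`, and the bias, is a trainable parameter."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# Three inputs, so this tiny "model" has four parameters
# (three weights plus one bias).
print(neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], 0.2))  # roughly 0.5
```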
Because much larger models with more parameters are now feasible, it is becoming easier to combine what were specialist tasks into less specialised systems. These are known as foundation models. Previously, AIs were useful only for specific tasks. An AI trained to identify types of dog would be of little use for anything else, although excellent, and far better than a human, in its particular area. It wouldn't exhibit enough intelligence to know if it was presented with the image of a cat.
Adding many more and different parameters makes the AI far more generalist. As the number of parameters increases (and we are talking billions), the AI increasingly becomes intelligent in a human sense. This seems to be improving faster than linearly with the number of parameters.
One famous test of computer intelligence is the Turing test, named after the computer scientist Alan Turing. This imagines a situation where a human communicates with an unseen device by teletype. If the human cannot tell whether the other party is a computer or a human, the Turing test is passed. Quite how flexible AIs will become remains to be seen.
It seems clear that this point is very near. It perhaps needs to be admitted that there have been false dawns before, and this could turn out to be another. There are many issues and hurdles along the way. If there are problems with the training data, hideous issues can occur. This harks back to the old computing joke GIGO: garbage in, garbage out.
I'm adding a minor personal note. I usually take my comments on science and technology from the specialist press, and I try to keep abreast of current developments both in science generally and in the particular areas of technology in which I'm interested. In this case, however, my information comes from a longer article published in the "Economist". I'm finding that AI is not being covered very well in the specialist press. The "Economist" has an honourable tradition of specialist in-depth reporting, both in a weekly science section and in a quarterly review devoted to technology. In this case, though, the item on AI appeared as a stand-alone article outside those sections, and I have not seen it reported upon elsewhere.
A curious footnote is that an AI scientist at Google has apparently claimed that a chatbot on which he was working is sentient. His employer disclaims this, and his view is not shared by other scientists. His mistake is suggested to stem from the human tendency to anthropomorphise (attribute human characteristics to) inanimate inventions such as cartoon characters; pet owners commonly attribute human-like characteristics to their pets.