Thursday, September 10, 2009

Motivation: A requisite for useful artificial intelligence?

Edward Boyden has a fascinating essay at MIT's Technology Review website in which he describes a problem that could possibly arise from super-smart artificial intelligence. The problem, Boyden notes, is motivation: even with all of that intelligence and computational power, how does a possibly sentient computer become moved to utilize that power?
Indeed, a really advanced intelligence, improperly motivated, might realize the impermanence of all things, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence, concluding that inventing an even smarter machine is pointless. (A corollary of this thinking might explain why we haven't found extraterrestrial life yet: intelligences on the cusp of achieving interstellar travel might be prone to thinking that with the galaxies boiling away in just 10^19 years, it might be better just to stay home and watch TV.) Thus, if one is trying to build an intelligent machine capable of devising more intelligent machines, it is important to find a way to build in not only motivation, but motivation amplification--the continued desire to build in self-sustaining motivation, as intelligence amplifies. If such motivation is to be possessed by future generations of intelligence--meta-motivation, as it were--then it's important to discover these principles now.
A second possibility that Boyden theorizes is that a strong AI might simply become overwhelmed by its own decision-making process, locking up as it contemplates endless factors and uncertainties (which sounds a lot like the "rampancy" that eventually afflicts AIs in the Halo franchise).

It's a very deep and most intriguing read about what may or may not be waiting for us around the corner from the realm of computers and neuroscience. Click here and partake of the article... if you think your brains can handle it :-)

1 comment:

Lee Shelton IV said...

Interesting. I think this could be what saves us from a Matrix or Terminator-type future. If the intelligent machines we create are only half as lazy and unmotivated as we are, then we may have nothing to worry about.