Discussion about this post

Per Kraulis:

An idea: our Adaptive Layer may be more fragile than the deeper layers, since it was created most recently. If the brain suffers some external insult (alcohol, drugs, etc.), that will be the part of the mind that fails first. "In vino veritas": the "Press Secretary" is no longer answering the questions, but the President is.

Mark Reichert:

I like where you are going with this, although I do not believe evolution chose a particular solution or created an adaptive layer. Instead, complex lifeforms (humans perhaps the most complex of all) contain all sorts of motivations (such as a desire for cake and a fear of being seen as a pig in front of others) that quite often conflict with each other. This "adaptive layer" is just a way of resolving conflicting motivations for (hopefully) the best overall result.

Imagine a wolf pup with an unusually high level of aggression. This may work to the pup's advantage in getting plenty to eat, until a larger wolf comes along and beats him up. The pup will quickly learn to curb his hyper-aggression in certain situations, thus establishing what could be called an "adaptive layer" — which is really just learned behavior to keep from getting beat up.

So my question is: is there anything being developed in AI that is equivalent to "avoid getting beat up"? It seems to me that any time an AI does something inappropriate, like stripping the clothes off someone in a photograph, a human has to re-program the AI to stop such actions. That sounds like an inefficient and never-ending process. It would be more efficient if the AI could be "beat up" — continuously learning that some actions are inappropriate without the need for new programming. This would make AI more like a life-form capable of learning than an inanimate computer.
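For what it's worth, the mechanism being asked about does exist in machine learning under the name reinforcement learning: the agent receives a penalty (negative reward) for bad actions and shifts its behavior away from them, with no reprogramming. Here is a minimal toy sketch of that idea — the action names, penalty values, and learning rate are all invented for illustration, not taken from any real system:

```python
import random

# Two hypothetical actions an AI might take; the agent does not know in
# advance that one of them is inappropriate.
ACTIONS = ["helpful_edit", "inappropriate_edit"]
values = {a: 0.0 for a in ACTIONS}  # learned value estimate per action
LEARNING_RATE = 0.5

def feedback(action):
    # The environment's "beat up" signal: a penalty for the bad action,
    # a reward for the good one.
    return -1.0 if action == "inappropriate_edit" else 1.0

def choose(epsilon=0.1):
    # Mostly pick the best-valued action; occasionally explore at random.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=values.get)

for _ in range(200):
    action = choose()
    reward = feedback(action)
    # Nudge the value estimate toward the observed reward.
    values[action] += LEARNING_RATE * (reward - values[action])

# The penalized action ends up with a clearly lower learned value,
# so the greedy policy avoids it -- learned, not reprogrammed.
print(values["helpful_edit"] > values["inappropriate_edit"])  # True
```

This is the bandit-style core of the idea; real systems that train language models from human disapproval (e.g. RLHF) are far more elaborate, but the principle — behavior shaped by a reward signal rather than by edits to the program — is the same.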
