📎 An issue with the AI paperclip optimizer theory


An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (the orthogonality thesis) and, as a side effect, destroy us by consuming resources essential to our survival.

One thing that doesn't hold up in the doomsday theory of AI, where the AI misunderstands a task and does major harm as a side effect, is that the scenario requires a dumb AI with a lot of power.

It's like giving a monkey an AK-47.

*Image: a monkey with an AK-47.*

We'd need to be quite reckless to allow that to happen.

On the other hand, an AI with human-level intelligence would easily comprehend the nuance behind "make the best paperclip company in the world" and would either solve the task in a way we'd approve of or do damage anyway, if that's what it wants.

The existence of the task itself is not the problem; it's the ratio of intelligence to power that we should be wary of.
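
Stated informally (a rough heuristic, not a precise formula): risk ∝ power / intelligence. The monkey with the AK-47 maximizes that ratio; a human-level AI handed the same rifle, or the same paperclip task, drives it down.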

© nem035