I’m asking because I don’t know what to think at the moment, and I’m wondering about different perspectives and well-reasoned arguments.
@szbalint It will depend on which utility function we give it. If A.I. behaves anything like humans do, it will strongly resist changes to its utility function.
Every machine learning system optimizes some utility (or loss) function. Without one, there is no goal to train toward.
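To make the point concrete, here is a minimal sketch (the model, data, and learning rate are made up for illustration): a one-parameter model trained by gradient descent. The update rule exists only because the loss does; remove the objective and there is nothing to optimize.

```python
def loss(w, data):
    # Mean squared error of a toy linear model y = w * x.
    # This loss IS the system's "utility function" (negated).
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        # Numerical gradient of the loss with respect to w.
        eps = 1e-6
        grad = (loss(w + eps, data) - loss(w - eps, data)) / (2 * eps)
        # Every update is dictated by the objective: no loss, no direction.
        w -= lr * grad
    return w

data = [(1, 2), (2, 4), (3, 6)]  # points on the line y = 2x
w = train(data)
print(round(w, 3))  # converges toward 2.0
```

Change the loss function and the same training loop converges to a different answer, which is the sense in which the objective, not the algorithm, defines the goal.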
This plays into Asimov’s Laws of Robotics.
@szbalint I think it’s already a threat. Sure, computers aren’t smart enough to have hobbies yet, but they can manipulate the economy and analyze our data at high speed, making it easy for a human to just push the “oppress the masses” button. And the worst thing is, the assholes training these AIs know that the more predictable our behavior is, the easier we are to manipulate. Ever hear of Camazotz?