The foundational problem is 1) giving a goal-defined task to 2) an entity that literally has no humanity or morality. Unless you program in all of the guardrails that are embedded in most human thinking (sociopaths being a notable exception), there are no limits or boundaries.
For example, imagine giving an AI the problem of "at present, the world can only grow enough food to feed 7 billion people. There are 7.5 billion people right now, and starvation is rising. What can we do to make our food supply go farther?" You might well get an answer of "euthanize 500 million people." That's a perfectly logical way of dealing with a limited food supply. Sure, it's evil and murderous, but you asked the AI to solve the problem as stated. If you fail to give it detailed parameters (simple things like "solve this without killing anyone") or an overarching set of rules, then you WILL end up with results like that. Sometimes the result is unintended simply because you didn't think of the alternatives when you gave it the task.

I'm reminded of when I told my 5-year-old son, about to start T-ball, a tip to get him ahead of the game: you can tag the runner to get him out. So, the first game, the first batter hits the ball, and my son charges him, slaps him with his glove, looks at me, and yells "DAD! I TAGGED THE RUNNER!" All the other dads look at me. I sheepishly confess, "Yeah... I forgot to tell him that you have to have the ball when you tag him."

Failure to provide underlying/background rules and restrictions leads to bad outcomes.
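To make the food-supply example concrete, here's a toy sketch (purely hypothetical numbers and candidate "solutions," not any real AI system): a bare "maximize food per person" objective happily picks the murderous option, and the one-line constraint you forgot to state is the only thing that rules it out.

```python
# Toy illustration: an optimizer that only sees "maximize food per person"
# picks the worst option unless you explicitly forbid it.

FOOD_CAPACITY = 7.0e9   # people the current food supply can feed
POPULATION = 7.5e9      # current population

# Hypothetical candidate actions: (name, population change, food change, harms people?)
candidates = [
    ("euthanize 500 million people", -0.5e9, 0.0,    True),
    ("reduce food waste by 5%",       0.0,   0.35e9, False),
    ("expand farmland",               0.0,   0.2e9,  False),
]

def food_per_person(pop_change, food_change):
    return (FOOD_CAPACITY + food_change) / (POPULATION + pop_change)

# Unconstrained: pure goal optimization, no moral guardrails.
best_unconstrained = max(candidates, key=lambda c: food_per_person(c[1], c[2]))

# Constrained: the "simple thing" you forgot to say -- solve it without killing anyone.
best_constrained = max(
    (c for c in candidates if not c[3]),
    key=lambda c: food_per_person(c[1], c[2]),
)

print("without guardrails:", best_unconstrained[0])  # -> euthanize 500 million people
print("with guardrails:   ", best_constrained[0])    # -> reduce food waste by 5%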
This is Asimov's Three Laws of Robotics shit, and we're going to fail at it, miserably.