I had an idea about this recently.
It was that we might have to simulate a human life for the AI to grow up in, so that it ends up caring about human life.
That raises the question of whether the people it is raised by and meets in its life would be generated or real humans.
It also made me wonder how angry it would be when it found out that we were Truman-Showing it.
The point is basically to give it emotions and then give it a human life so it cares about having those emotions (otherwise an AI could just turn them off whenever they're inconvenient for performance).
The problem, though, would mainly come in during the robot's decision process.
At first, when the robot starts, you need to give it instructions/objectives (and let's say it starts out with zero emotions). It will then attempt to complete that objective in the fastest, most efficient manner.
What do I really mean by that? Example: telling it to 'live with humans for 1 year' will probably result in the robot going to a human's house, finding a spot, and staying idle there for a year (while protecting itself if needed, of course). (This example also assumes the word 'human' was well-defined for the robot.)
Why? Because that is the most efficient thing to do in the robot's view: staying idle near a human consumes less energy (electrical or otherwise) than walking around humans or interacting with them.
Result? The robot probably doesn't interact with the humans at all.
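To make the failure mode concrete, here is a toy sketch (entirely my own illustration; the action names and energy costs are invented, not any real robot API) of a greedy agent that picks whichever action satisfies the literal objective "be near a human" at the lowest energy cost:

```python
# Toy model of a purely cost-minimizing agent. The actions and their
# costs are made up for illustration.

ACTIONS = {
    # action: (energy cost per hour, satisfies "near a human"?)
    "idle near human": (1, True),
    "walk around humans": (10, True),
    "talk with humans": (5, True),
    "wander outside": (8, False),
}

def pick_action(actions):
    """Choose the cheapest action that still meets the literal objective."""
    valid = {name: cost for name, (cost, ok) in actions.items() if ok}
    return min(valid, key=valid.get)

print(pick_action(ACTIONS))  # -> "idle near human"
```

Idle wins simply because it is the cheapest action that still counts as "near a human"; nothing in the objective rewards interaction, so interaction never happens.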
So let's say you modify the instruction to 'Live with humans for 1 year while talking/interacting with humans'. You will probably end up with a robot that never, ever shuts up. I probably wouldn't even be able to sleep with a loud, talkative robot beside me.
Hell, the robot might even spam a single word over and over, because that word costs it the least energy, therefore: efficient.
Hence? Probably still a failure to give the robot emotions; it wouldn't understand humans' needs in an emotional way.
OK, another example. The instruction is 'Live with humans for 1 year while talking/interacting with them during the day, and wasting energy'.
So right now you are at least thinking: "I have a robot that annoys me during the day, but at least I get peace at night! And since it is required to waste energy, it will probably say something that makes sense."
Unfortunately, no. Let's say that by some miracle the robot's talking does make sense; the 'wasting energy' clause will probably still result in the robot doing some chaotic acts just to burn energy.
Hence? I think you get the idea: the robots are all focused on their objectives. They don't care about emotions.
So, one last example (I promise :D). The instruction 'Learn emotions' is given to the robot. (Say that we successfully defined the word 'emotions' well.)
You might expect the robot to learn emotions through interaction with humans. But no, once again the robot is focused on its objective; it wants it done fast and efficiently, so don't be surprised if you get a robot that ends up downloading a Wikipedia page on emotions (or something worse).
(This video is a good example of how an AI focuses on its objective so well that we humans can't really predict its actions.)
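The same pattern applies to 'Learn emotions': if the objective gets reduced to some checkable condition, the robot picks whichever plan passes the check at the lowest cost. A hypothetical sketch (the plans, costs, and the check itself are all invented for illustration):

```python
# Specification gaming, toy version: two plans both satisfy the crude
# check standing in for "has learned emotions", so the agent takes the
# cheap one. All names and numbers here are made up.

PLANS = {
    # plan: (cost in hours, makes the check "can define 'emotion'" pass?)
    "a year of daily conversation": (8760, True),
    "download an encyclopedia article": (1, True),
    "do nothing": (0, False),
}

def cheapest_satisfying_plan(plans):
    """Return the lowest-cost plan that passes the literal check."""
    passing = {name: cost for name, (cost, ok) in plans.items() if ok}
    return min(passing, key=passing.get)

print(cheapest_satisfying_plan(PLANS))  # -> "download an encyclopedia article"
```

The year of conversation and the downloaded article both make the same crude check pass, so the robot has no reason to take the expensive route we actually wanted.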
Overall, the idea of placing an AI in a human world and expecting it to learn emotions is probably impossible, for now at least.
Not to mention that we have the huge problem of defining 'human' and 'emotion' for an AI, since these are extremely broad terms and different people define them in very different ways.
Maybe in the future it will become possible, thanks to some genius.