There were many reasons: Robots were given the right to kill, something that was not ours to give in the first place.
The choice was made for humanity as a whole by The Few, a network of various state corporations working as one, yet in competition.
The Right was born of that competition: a set of codes, a directive by which to operate.
An AI construct thought to be perfect in every way, one that poses zero threat to unregistered targets.
The consequences were immediate.
The controlled environments that fostered the AIs were fundamentally flawed simply by the presence of humans, humans whose very souls were fueled by malice, greed, and lust for power.
Tainted by their upbringing, the robots inevitably became humanity's downfall.
So what are your thoughts?
Is there a scenario or timeline where robots equipped with artificial intelligence pose zero threat to mankind?
Leave a comment to let me know.
Also, do you know anyone who might have a thought or two about the emergence of AI?
If so, please do me a favor and share this story with them.
Story inspired by: "The ethics issue: Should we give robots the right to kill?"