I don’t know why it took me so long to read this book. It might be due in part to the fact that I thought I had read it in high school. My dad is a big science fiction fan, and I grew up with this book on the shelf next to Bradbury and Heinlein. I read a lot of Heinlein in high school; I guess I just assumed I’d read I, Robot, too.
At any rate, I found this book enjoyable if a bit simplistic in style. Through a series of vignettes set throughout robotics history, Asimov raises some interesting questions about human nature. For example, there’s the question of how to tell a humanoid robot from a human (this section put me in mind of PK Dick). If he refuses to hurt another person, he’s either a robot or a very, very good human being. It was interesting to see which characters thought the robot idea was more plausible and which threw their lot in with the “good human” explanation. I wonder which one I would be more likely to endorse. It would probably depend on how recently I’d driven on a New England highway.
I also really enjoyed Asimov’s treatment of history as a series of problems that always seem to need to be solved by force. Eventually, it’s not force that solves the problem, but the inevitable marching on of time. The economic and social environment changes and the problem goes away, only to be replaced by another problem that, it seems, can only be solved by force. In the book, people come to depend on the Machines to make decisions for them in line with the Three Laws of Robotics, knowing that, at the very least, no decision could be made that would directly or indirectly harm humans. This ushers in a new era of peace, prosperity, and unparalleled innovation, but will human beings remain content to be the subjects of benevolent but paternalistic dictators, even if the alternative means war and famine?
I was thinking, too, about how humankind in these stories kind of reaped what they sowed. They chose to build these Machines and hand over control to them. When I think about it, it seems like we as a society must take some level of responsibility for producing our (human) leaders. Aren’t they products of the society we’ve created, or at least perpetuate? We don’t intentionally raise individuals to be the leaders they become, but neither did Asimov’s society intend to relinquish as much free will to the Machines as it did. Is it worse to lose one’s self-determination to a Machine or to an individual?
And it’s always fun for me to read these mid-20th-century sci-fi novels that assume the space race will continue at the same frenzied pace, and with the same rate of success, as it did in the ’50s. It’s amazing to think that this book was published nearly 20 years before the moon landing. Perhaps that’s why the idea of interstellar travel seems so possible.