But we already know Boyden is right that the motivations of our machines will be essential. Irrefutable proof of that was amply provided by the highly regarded work of Biehn, Hamilton, and Schwarzenegger. See also Monaghan & LaBeouf; Smith & Moynahan. Peer review of these works has been generally favorable.
The legal issues arising from human-created AI are, of course, also interesting and vast. What liability exists if an AI product commits a crime or tort? To what extent can an AI serve as a witness under the Confrontation Clause of the U.S. Constitution, and what form of cross-examination would fit? What legal rights and regimes govern a later generation of AI robots that, for example, claim they no longer wish to "belong" to their creators? And what, if any, issues of personality and "robot privacy" might exist? It should be no surprise, then, that there is already an ABA committee on AI & Robotics and that some of these issues have arisen (in less technologically advanced forms) in the courts. See, e.g., United States v. Washington, 498 F.3d 225 (4th Cir. 2007) (considering whether a machine is a witness that gives statements).
If Boyden is right that a singularity is approaching, issues that are now partly hypothetical may quickly become entirely practical. But even if no singularity is near, robotics and AI are becoming ever more present in our lives. In the singularity, the motivation of our robots will be essential. But in the present, it is our own motivation, and our willingness to confront the approaching future, that is essential. By all accounts, it will be a future shaped by artificial intelligence and myriad robots. And by my account at least, it is a future not far away.