Wednesday, September 9, 2009

The importance of motivations in artificial intelligence

Today a piece from MIT's Technology Review found its way across my desktop. In the article, linked here, author Edward Boyden describes the idea that we may be approaching a point in history at which technology expands at a near-unimaginable pace. The concept is not so hard to believe, especially for anyone who has watched the rise of computers over the last few decades--a period that has itself felt too fast for some. Boyden goes on to argue that in this singularity, driven perhaps by artificially intelligent machines that make more intelligent machines, an essential premise of the advancement of technology by those machines will be the motivations we (presumably) program them with. Without proper motivation, the machines might simply use their immense intelligence to dissect the inadequacies of the world, or they might determine that the finite lifespan of our corner of existence is so brief that the time between now and extinction would be better spent watching YouTube or playing video games. The moral of the story, then, is that we should program machines to be motivated not only to build better, smarter machines, but to build those machines as themselves motivated to build better, smarter machines. Interesting idea.
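
To make the recursion in that moral concrete, here is a toy sketch of my own (in Python; the class, the growth factor, and everything else in it are invented for illustration and are not from Boyden's piece). The point is simply that the motivation must be part of what each machine passes to its successor:

    # Toy illustration of the recursive-motivation idea: each machine's
    # programmed goal includes building a smarter successor that inherits
    # the very same goal. All names and numbers here are hypothetical.

    class Machine:
        def __init__(self, intelligence, motivated=True):
            self.intelligence = intelligence
            self.motivated = motivated  # the programmed drive Boyden worries about

        def build_successor(self):
            # An unmotivated machine opts for YouTube instead.
            if not self.motivated:
                return None
            # Crucially, the successor is created *with* the motivation,
            # so the drive to improve propagates down the generations.
            return Machine(self.intelligence * 1.5, motivated=True)

    gen = Machine(intelligence=1.0)
    for _ in range(5):
        nxt = gen.build_successor()
        if nxt is None:
            break
        print(f"built successor with intelligence {nxt.intelligence:.2f}")
        gen = nxt

Drop the motivated=True from build_successor and the chain stops after one generation; that one line is the whole argument.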

But of course we already know that Boyden is right that the motivations of our machines will be essential. Irrefutable proof was amply provided by the highly regarded work of Biehn, Hamilton, and Schwarzenegger. See also Monaghan & LaBeouf; Smith & Moynahan. Peer review of these works has been generally favorable.

The legal issues arising from human-created AI are, of course, also interesting, and vast. What liability exists if an AI product commits a crime or a tort? To what extent can an AI serve as a witness under the Confrontation Clause of the U.S. Constitution, and what form would cross-examination take? What legal rights and regimes would govern a later generation of AI robots that, for example, claim they no longer wish to "belong" to their creators? And what, if any, issues of personality and "robot privacy" might exist? It should be no surprise, then, that there is already an ABA committee on AI & Robotics, and that some of these issues have arisen (in less technologically advanced forms) in the courts. See, e.g., United States v. Washington, 498 F.3d 225 (4th Cir. 2007) (considering whether a machine is a witness that gives statements) (Westlaw link here).

If Boyden is right that a singularity is approaching, issues that are now partially hypothetical may quickly become entirely practical. But even if no singularity is near, robotics and AI are becoming ever more present in our lives. In the singularity, the motivation of our robots will be essential. In the present, it is our own motivation, and our willingness to confront the approaching future, that is essential. By all accounts, it will be a future shaped by artificial intelligence and myriad robots. And by my account at least, it is a future not far away.
