SitNews - Stories in the News - Ketchikan, Alaska

Science - Technology

If it only had a heart...
By TOM ABATE
San Francisco Chronicle

 

February 03, 2008
Sunday


Can scientists program war-fighting robots to behave more ethically in battle than emotion-driven human soldiers? If so, what is the scientists' social responsibility for the destruction their inventions might wreak?

Stanford computer scientist Terry Winograd cautions that before academics take on such war-related research, they should ask themselves whether they support the goals and content of the studies. They should also ask whether they are free to publish their research, he says.

Another technology professor, Maarten van Veen of the Netherlands Defense Academy, echoed the sentiment, telling a group gathered at Stanford's Technology in Wartime conference last week: "We as computer professionals have a responsibility for what we make."

Organized by Computer Professionals for Social Responsibility, the conference brought together civilians, military personnel, academics and human rights workers to engage the key question: "What should socially responsible computer professionals do in a time of high-tech warfare?"

Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology, advanced the pro side of the robo-soldier debate.

Arkin argued that Pentagon planners are determined to create war-fighting machines that eventually will be able to decide -- autonomously -- whether or not to kill. Since war-bots are coming, Arkin said, computer scientists should help design their self-control programs.

Arkin, who said his work is funded in part by military sources, argued that with proper ethical controls, robotic soldiers could be more humane than human soldiers because they would be less prone to act out of rage in the heat of combat.

Citing a 2006 Mental Health Advisory Team study for the U.S. Army's Surgeon General, Arkin noted that 10 percent of soldiers and Marines reported mistreating civilians by unnecessarily hitting them or destroying property. "We could reduce man's inhumanity to man through technology," he said.

Peter Asaro, a computer scientist and sociologist with the Center for the Critical Analysis of Contemporary Culture at Rutgers University, countered that scientists should not dignify what he considers the naive notion that robots can be programmed to kill only in an ethical fashion.

Asaro said computer scientists should oppose the design of robots with the ability to make killing decisions before the technology becomes widely dispersed. "We have an opportunity with autonomy in weapons systems to think about how to control their development," he said, suggesting that society "ban them or at least restrict their use."

The discussion of robot ethics raised questions of definitions. Conference speaker Herbert Lin of the National Research Council, the nonpartisan policy study arm of the National Academy of Sciences, which was chartered under President Abraham Lincoln in 1863, asked fellow panelists to define "autonomy" and to say whether a land mine fit the definition.

Conferees generally agreed that land mines qualified as autonomous for the purposes of the weapons discussion because they are indiscriminate killers -- in short, the only "choice" a mine makes is to kill whoever triggers its mechanism, soldier or civilian.

 

On the Web:

-- To learn more about the conference: technologyinwartime.org

-- For information on the Martus database: www.martus.org

-- To learn more about Computer Professionals for Social Responsibility: www.cpsr.org

 

E-mail Tom Abate at tabate(at)sfchronicle.com
Distributed to subscribers for publication by Scripps Howard News Service, http://www.scrippsnews.com



