Horizontal scalability

In the computer business, anything that is horizontally scalable can grow to support more volume.  An example most people would be familiar with is adding more tellers in a bank branch to handle more customers.  When the volume goes down, tellers can be removed and reassigned.

Achieving scalability can be deceptively difficult.  To scale tellers, for example, not only do the tellers themselves need to be available, but their support infrastructure needs to scale as well: there must be enough network bandwidth to handle the additional communications traffic to the host systems.  These represent limits, sometimes known, otherwise unknown.
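
To make the idea concrete, here is a minimal sketch in Python of this kind of scaling rule.  All of the names and numbers are hypothetical, chosen only to illustrate how a shared infrastructure limit can cap horizontal growth:

    # Illustrative sketch: horizontal scaling capped by a shared infrastructure limit.
    # The figures are invented for illustration, not taken from any real system.
    BANDWIDTH_PER_TELLER_KBPS = 64   # traffic each teller adds toward the host systems
    TOTAL_BANDWIDTH_KBPS = 512       # the (possibly unknown) infrastructure limit
    CUSTOMERS_PER_TELLER = 10        # customers one teller can serve at a time

    def tellers_needed(customers):
        # Tellers demanded by current volume, ignoring infrastructure (ceiling division).
        return -(-customers // CUSTOMERS_PER_TELLER)

    def tellers_supported():
        # Tellers the support infrastructure can actually sustain.
        return TOTAL_BANDWIDTH_KBPS // BANDWIDTH_PER_TELLER_KBPS

    def scale(customers):
        # Scale horizontally with demand, but only up to the infrastructure cap.
        return min(tellers_needed(customers), tellers_supported())

    for volume in (25, 60, 120):
        print(volume, "customers ->", scale(volume), "tellers")

At low volume the teller count tracks demand (25 customers need only 3 tellers), but at 120 customers the bandwidth cap, not teller availability, becomes the boundary: only 8 of the 12 needed tellers can be supported.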

More broadly, limits are a fact of life and merely represent the boundaries within which we operate.  They are set by the way we currently do things; changing our method of operation can extend the boundary.  Innovation is the process of repositioning the boundaries (presumably in a better place), and technology is one of the tools of innovation.  Historically, technology has been applied to tools: stick, spear, bow-and-arrow, crossbow, rifle, etc.  Improving the tool has increased the capabilities of the individual, extending what they can do.  In his books on the subject, James Burke chronicled a number of technologies, their evolution, and their interaction (http://www.youtube.com/user/JamesBurkeWeb).

Automation is a focused application of technology that has gained a lot of momentum with computerization.  Automation is the technology of replacing people with machines.  The boundaries addressed by automation are those of the individual: a person needs to take breaks; they can’t work in extreme physical environments; they can’t move quickly enough; and so on.  There are also belief systems that place limits on what people will do, such as don’t work on religious holidays, or thou shalt not kill. In current parlance this is often packaged under the banner of work-life balance.

It is this latter group of limits that is the more interesting one.  The technology automating a process inherently has no moral boundaries; should it? If so, whose? Should it take on the moral stance of the individual who would otherwise have undertaken the task? Or the general norms of the community?  Out of this comes another question: should automation commit acts that go beyond moral boundaries? Can one delegate such acts to the machine? Human nature is such that all things left unseen are deemed acceptable; therefore delegating the dirty work to a machine becomes acceptable.

And so it comes to pass that the drone, an airborne tool equipped with instruments such as video cameras and operated by an individual, enables that person to remotely commit some act.  The acts may be benign, such as monitoring traffic flow, forest fires, etc.  By extending its capabilities with guns and missiles, the military drone can be used to kill insurgents and terrorists. Each of these is rationalized through expediency: more effective, more efficient, less risk (for the operator).

The next step in the evolution is emerging: automate the role of the operator so these drones can execute on their own [1].  Human Rights Watch characterizes the use of autonomous drones as putting us on the path to “Losing Humanity.”  They call for a series of steps and controls, including international treaties limiting the use of such technologies, perhaps not too dissimilar to what was done with nuclear weapons.

Quoted within the Human Rights Watch report is Ronald Arkin, a roboticist at the Georgia Institute of Technology, who has articulated an architecture for a compliance mechanism. Recognizing the importance of new weapons meeting legal standards, Arkin writes, “The application of lethal force as a response must be constrained by the LOW [law of war] and ROE [rules of engagement] before it can be employed by the autonomous system.” He argues that such constraints can be achieved through an “ethical governor,” and that with the ethical governor, fully autonomous weapons would be able to comply with international humanitarian law better than humans.
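
In the abstract, such a governor is a gate: a proposed response is suppressed unless every applicable constraint permits it.  The sketch below is only a schematic of that compliance-gate idea, with hypothetical placeholder constraints; it is not Arkin’s design:

    # Schematic of a compliance gate in the spirit of an "ethical governor".
    # The constraint names and checks are hypothetical placeholders.
    def governor(action, constraints):
        # Permit the response only if every constraint (LOW, ROE, ...) allows it.
        return all(check(action) for check in constraints)

    # Hypothetical stand-ins for the law of war and rules of engagement.
    def low_target_is_combatant(action):
        return action.get("target_class") == "combatant"

    def roe_engagement_authorized(action):
        return action.get("authorized", False)

    proposed = {"target_class": "vehicle", "authorized": True}
    if not governor(proposed, [low_target_is_combatant, roe_engagement_authorized]):
        print("response suppressed: constraints not satisfied")

The gate itself is trivial; the hard part is computing predicates like “is the target a combatant” from what the machine can perceive, which is where the feasibility debate below centers.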

So now the counterargument is presented: automation will better enforce that actions remain within the boundaries of morality, or at least within the policies that are intended to describe it.

However, whether the successful development of such a governor is feasible remains to be proven.  The capabilities necessary to implement the ethical governor are said to be in the realm of human cognition and extremely complex.  Human Rights Watch counts itself among the many who believe a practical implementation of an ethical governor is not currently possible, and that human supervision is therefore required.

A US Department of Defense directive sets boundaries using terminology similar to that of Human Rights Watch, describing the capabilities that may be built into the machine as well as the testing to be completed to validate successful implementation of the policies.  However, it leaves the role of the ethical governor to a human operator, who authorizes the use of the automation (how the dynamics during the mission are addressed is unclear).

Whether one believes the automation is a good thing (it will ensure better enforcement of the laws of war, etc.) or a bad thing (it will run amok), there is another factor: it increases the ease of going to war.
