War and Information Technology

Technology has always been an important part of war. Today, with information technology the defining technology of our age, it should be no surprise that IT (information technology) is a central part of war making. In fact, the US military has officially determined that information is the main "force multiplier" in battle, more important than the old standards of numerical superiority or force of fire. This is because, in the US military's view, we now have information technologies that not only make a crucial difference in battle, in terms of precision weapons, intelligence, and command and control, but that might actually allow the "fog" (uncertainty) of war to be dispelled, so that total, almost painless victories might be won against many opponents.
Computer professionals, when evaluating these information technologies, must start with two basic rules that come both from within computer science (especially information theory, systems analysis, and information management) and from the academic disciplines represented by groups such as the Society for the Social Studies of Science (4S), the Society for the History of Technology (SHOT), and Science, Technology, and Society (STS) studies in general. They are:
1) Technologies cannot be evaluated out of their context of use. Very few technologies simply "work" or "don't work". They all perform at certain levels of efficiency, but whether or not they "work" depends on what they do in their institutional and societal contexts. Military technologies, for example, have to be evaluated not just in terms of failure rates but also in terms of how the technology performs, and what it influences, in the context of the relevant doctrines, tactics, and strategies.
2) Technologies, especially information technologies, are parts of systems. They cannot be evaluated in isolation. The systems, which often include human operators and users, either work well in their context or they do not. For repairing and evaluating systems it is important to know how the various components or nodes perform, but a system that fails because of so-called human error has failed nonetheless, and that failure includes its computational elements.
So any weapon system has to be analyzed in terms of all its components (including the humans and their training), the doctrine it serves, its tactical and strategic uses, and the political context of the war in which it is used. To say B-52s are bad and so-called smart missiles are better does not make sense. The computer and other technical parts might work with a high level of reliability (or not), but that alone does not make any one weapon system better than another. The purely technical efficiencies cannot be separated out.
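The systems point above can be sketched numerically. This is a toy serial-chain reliability model with entirely hypothetical numbers, not data from any real weapon system; the node names and values are illustrative assumptions only:

```python
# Illustrative sketch (hypothetical numbers): a weapon system's end-to-end
# reliability in a serial-chain model is the product of ALL its nodes,
# including the human operator and the doctrinal fit -- something no single
# component's score captures.

def system_reliability(component_reliabilities):
    """The system works only if every node in the chain works."""
    p = 1.0
    for r in component_reliabilities:
        p *= r
    return p

# Hypothetical per-node reliabilities for one engagement:
nodes = {
    "sensor/computer": 0.99,  # "a marvel of engineering" in isolation
    "interface": 0.95,        # poor display design erodes this
    "operator": 0.80,         # stress, training gaps, scenario fulfillment
    "doctrine fit": 0.85,     # wrong doctrine for the actual situation
}

# The system as a whole is far less reliable than its best component.
print(system_reliability(nodes.values()))
```

The design point is that the weakest nodes, usually the human and institutional ones, dominate the outcome no matter how good the computing hardware is.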
As a computer science professor I teach my students that computing systems in particular must be evaluated as systems, not in isolation. A perfect case is the destruction of Iran Air Flight 655 by the USS Vincennes (deploying the Aegis, the most sophisticated computerized weapon system used in combat), which killed 290 civilians. There were technical problems (the jamming of an automatic 5-inch gun, the poor reliability of the automated Phalanx machine guns, the fact that Standard missiles need miles to accelerate, the poor interface design of the combat information center), training problems (all training was for fleet combat with the Soviets), command problems (other US ships correctly identified the flight, but the Vincennes was in charge because it had the best computer system), human error (misreading the computer information because of scenario fulfillment), doctrine (the source of the command and training problems), tactics (sending the Vincennes into Iranian territorial waters to engage small Iranian patrol boats), strategy (the illegal, stupid mission the Vincennes was on -- trying to provoke an Iranian counter-attack), and politics (supporting Saddam's Iraq in an aggressive war against Iran). So what if the Aegis is a marvel of computer engineering? Tell that to the families of the dead Iranian civilians. (For full details and documentation see: Chris Hables Gray, "AI at War: The Aegis System in Combat," in Directions and Implications of Advanced Computing 1990, Vol. III, D. Schuler, ed., Ablex, 1996, pp. 62-79.)
By the way, there is substantial evidence that the testing of the circuit cards for the Phalanx was faked, one reason its reliability was so low. Because the Captain of the Vincennes could not trust the Phalanx for close-in defense, he made the incorrect decision to take out the innocent airliner.
An undergraduate in one of my computer classes who tried to evaluate a system that uses computers by looking only at certain technical parameters of the computers, and not at all the relevant factors, would not pass. A military that does the same might win some battles, but it will lose all but the easiest wars, and even then it will often lose the peace that follows.
In Gulf War I a strong case can be made that the massive B-52 bombings shattered the main Iraqi Army, that the so-called "smart weapons" were not nearly as accurate as claimed (we found out much later that the Patriots hit nothing), and that the destruction of the Iraqi infrastructure (in part by so-called precision bombing with all kinds of bombs and missiles) led to the deaths of 300,000 Iraqi women and children (according to the UN, independent aid NGOs, and a Harvard School of Public Health study). War is complicated, weapon system performance is complicated, and accuracy (or even reliability) is hardly everything.
When force of fire was the main doctrine for winning wars in the US military, bigger explosions were almost always considered more important than smaller, more accurate ones. In some situations this was (and still is) true; in many others it is not. The US military in particular has had a tendency to ignore the strategic and political realities of war. This is why it lost Vietnam: US weapons were more accurate and more powerful than the Vietnamese weapons, but that was not enough. It didn't help the Soviets win in Afghanistan either. This, by the way, is why there is a whole debate about asymmetric war in the military today.
Some of the main issues computer professionals should look at in Gulf War II:
* To what extent doctrines such as "Shock and Awe," and US strategy in general, are based on misconceptions about what IT can and cannot do in an unpredictable and uncontrollable arena such as war.
* To what extent the existence of so-called "smart" weapons leads to the doctrinal, tactical, or strategic misuse of such weapons. Americans have a particular love of technology. That some people can unequivocally say such weapons "really work" when they have hit three of the wrong countries is an important issue to explore.
* The actual performance of such weapons as the Patriot.
* The possibility and possible impact of future weapons or military information systems such as effective identification systems.
* The absolute limits of computer technology in modeling complex systems. CPSR played a major role in articulating this crucial part of information theory during the first round of Star Wars debates, and the Spring 2001 newsletter (edited by Carl Page and Chris Hables Gray) contains a full bibliography of the key articles in this area.
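The last point, the limits of modeling complex systems, can be illustrated with a standard toy example from chaos theory (the logistic map; this is a textbook demonstration, not a claim about any particular military model): arbitrarily small uncertainty in the starting conditions is amplified until long-range prediction becomes impossible.

```python
# Toy illustration of why complex systems resist long-range prediction:
# in the chaotic logistic map, two initial conditions differing by one
# part in a billion are driven to completely different trajectories.

def logistic(x, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) for the given number of steps."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.300000000)
b = logistic(0.300000001)  # same start, perturbed by one part in a billion
print(abs(a - b))          # the tiny perturbation has been hugely amplified
```

If a model this simple, with fully known dynamics, cannot outrun its own input uncertainty, models of battle, with unknown dynamics and adversaries actively creating uncertainty, certainly cannot.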
