Vol. 16 No. 9 (September, 2006) pp. 685-688

 

TRUST AND CRIME IN INFORMATION SOCIETIES, by Robin Mansell and Brian S. Collins (eds).  Northampton, MA: Edward Elgar Publishing, 2005.  480pp. Hardback. $135.00/£79.95.  ISBN: 1845421779.

 

Reviewed by Robert G. Brookshire, Technology Support and Training Management Program, University of South Carolina.  Email: brookshire [at] sc.edu.

 

The nature of trust has been explored by thinkers beginning with the ancient Greek philosophers, but information and communication technologies, especially the Internet, have introduced new complexities to this already knotty concept.  On what basis can technology users trust each other when they have no personal interaction?  Their identities may be obscured, they might not be located where they appear to be, and they may not even be human beings.  The users’ ability to trust the technology itself could be doubtful, as communications may not be secure or accurate.  These new technologies have created new opportunities for exploitation, fraud, and theft.

 

These issues and many others are explored in TRUST AND CRIME IN INFORMATION SOCIETIES.  This volume contains ten research papers and four essays arising out of the United Kingdom’s Foresight project on Cyber Trust and Crime Prevention.  Supported by the Horizon Scanning Centre, Foresight projects examine new sciences and technologies and their impacts on society, with the aim of identifying economic opportunities and informing public policy.  The project, originally undertaken in 2004, was commissioned by the Government’s Office of Science and Technology, and the original versions of the papers and essays contained in this volume are available for download from the Foresight web site, http://www.foresight.gov.uk.

 

In their introductory essay, “Cyber Trust and Crime Prevention,” the editors, Brian S. Collins of the Department of Information Systems, Royal Military College of Science at Cranfield University, and Robin Mansell of the Department of Media and Communications at the London School of Economics, characterize the approach taken by the studies collected here as focusing on “the economic, social and political implications of cyberspace technologies.”  This contrasts with the dominant American approaches, which tend to be technological or managerial.  Because of this orientation, the studies in this volume will be accessible to most readers with backgrounds in political science or law.

 

Collins and Mansell give a broad overview of the issues of risk, trust, and ethics in information and communication technologies and how these affect our ability to control crime.  They conclude that much more needs to be learned about how people behave in cyberspace and that technology alone cannot prevent crime. 

 

Most of the content in the book on crime and crime prevention appears in the first [*686] chapter.  In spite of its title, the book is primarily concerned with issues of trust.  As such, it provides an admirable overview of the questions of trust arising out of information technologies, particularly the Internet.

 

In “Dependable Pervasive Systems,” Cliff Jones and Brian Randell, both of the School of Computing Science at the University of Newcastle upon Tyne, look at the problems that must be tackled to make large software systems functional and dependable enough for people to trust them.  They recommend that dependability requirements be included in system design specifications, something that is not always current practice, and that the formal methods of computer science be more widely used.  Further research on dependability architectures is necessary, and, most importantly, existing systems must be adapted or evolved to make them more dependable.
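
To give lay readers a sense of what writing a dependability requirement into a specification can look like in practice, consider the following small, hypothetical Python sketch (my illustration, not the authors’), in which a requirement is stated as an explicit, machine-checkable contract rather than left implicit:

    # Hypothetical illustration: a dependability requirement ("a dispensed dose
    # never exceeds the prescribed maximum") written as an explicit, checkable
    # contract on the function that implements the behaviour.
    MAX_DOSE_MG = 500

    def dispense(requested_mg: float) -> float:
        """Return the dose actually dispensed, never more than MAX_DOSE_MG."""
        assert requested_mg >= 0, "precondition: requested dose must be non-negative"
        dispensed = min(requested_mg, MAX_DOSE_MG)
        assert dispensed <= MAX_DOSE_MG, "postcondition: dose within the safe limit"
        return dispensed

    print(dispense(200))   # 200
    print(dispense(900))   # capped at 500

Formal methods in the authors’ sense go much further, using mathematical proof rather than runtime checks, but the example conveys the spirit of making requirements explicit.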

 

The chief building blocks of trust in information and communications technologies are identification and authentication: Are the users or computer systems who they claim to be?  These topics are addressed by Fred Piper, Matthew J. B. Robshaw, and Scarlet Schwiderski-Grosche of the Information Security Group, Royal Holloway, University of London.  They give an excellent overview of these concepts suitable for the lay reader, introducing cryptography, authentication schemes, and biometric methods.  They point out that we have a long way to go to control and administer computer security systems reliably, something that depends more on humans than on technology.
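
For readers who want a concrete feel for how authentication without personal contact can work, the following minimal challenge-response sketch in Python is my own illustration (not drawn from the chapter): a server verifies that a client holds a shared secret key without the secret itself ever being transmitted.

    import hmac
    import hashlib
    import secrets

    # Illustrative only: a shared secret both parties already hold.
    SHARED_KEY = b"a pre-shared secret key"

    def make_challenge() -> bytes:
        """Server side: issue an unpredictable, single-use challenge (a nonce)."""
        return secrets.token_bytes(16)

    def respond(challenge: bytes, key: bytes = SHARED_KEY) -> str:
        """Client side: prove knowledge of the key by keying a MAC over the challenge."""
        return hmac.new(key, challenge, hashlib.sha256).hexdigest()

    def verify(challenge: bytes, response: str, key: bytes = SHARED_KEY) -> bool:
        """Server side: recompute the expected response and compare in constant time."""
        expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)

    nonce = make_challenge()          # server -> client
    answer = respond(nonce)           # client -> server
    print("authenticated:", verify(nonce, answer))   # True only with the right key

Real-world schemes of the kind the authors survey add key management, certificates, and replay protection, which is where much of the human administrative burden they emphasize arises.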

 

In “Knowledge Technologies and the Semantic Web,” Kieron O’Hara and Nigel Shadbolt of the University of Southampton examine issues of trust in the next generation of the World Wide Web.  The semantic web adds machine-readable, identifying content to the structure of web pages so that software can infer more about their meaning.  To these authors, the semantic web is both a knowledge technology itself and an environment for the creation of other knowledge technologies.  O’Hara and Shadbolt discuss the critical role trust plays in all knowledge technologies and review the strategies and tactics that may be used to establish trust in this context.  They conclude by suggesting a number of contributions that the study of trust and technology could draw from a variety of disciplines, including philosophy, the social sciences, management science, and marketing.  The chapter contains a useful appendix that provides a brief introduction to the semantic web.
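
As a rough, hypothetical illustration of the idea (mine, not the chapter’s), the Python fragment below represents a few machine-readable statements of the kind semantic web markup adds to pages, and shows how software can combine them to infer a fact that no single page states:

    # Toy illustration: pages annotated with subject-predicate-object statements,
    # plus one simple inference rule applied over them.
    triples = {
        ("Royal Holloway", "isPartOf", "University of London"),
        ("University of London", "locatedIn", "United Kingdom"),
        ("Robin Mansell", "worksAt", "London School of Economics"),
    }

    def infer_located_in(facts):
        """If X isPartOf Y and Y locatedIn Z, conclude X locatedIn Z."""
        inferred = set(facts)
        for s, p, o in facts:
            if p == "isPartOf":
                for s2, p2, o2 in facts:
                    if p2 == "locatedIn" and s2 == o:
                        inferred.add((s, "locatedIn", o2))
        return inferred

    for triple in sorted(infer_located_in(triples) - triples):
        print("inferred:", triple)
    # -> inferred: ('Royal Holloway', 'locatedIn', 'United Kingdom')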

 

Sarvapali D. Ramchurn and Nicholas R. Jennings, of the School of Electronics and Computer Science at the University of Southampton, explore “Trust in Agent-Based Software.”  The use of software agents is becoming common; you use one each time you ask Expedia or Travelocity to find the best airfare to a destination or to locate hotels in a particular city.  Many software agents, particularly on the Internet, work by partnering with other software agents.  What is required for all these agents to trust each other?  The authors examine [*687] trust at the individual level, where agents evaluate the trustworthiness of each interaction partner, and at the system level, where the rules of the system enforce trust.  They develop several trust models based on sociology, machine learning, and game theory.  They find that all of these models are deficient in one or more respects, and they identify a number of areas in which further research is urgently needed.
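
To illustrate what an individual-level trust model can amount to, here is a small hypothetical sketch in the spirit of the experience-based models the chapter surveys (not the authors’ own code): an agent scores each partner by the outcomes of its past interactions, starting from a neutral prior.

    # Hypothetical, experience-based trust measure: a partner's score is the
    # smoothed proportion of past interactions with it that went well.
    from collections import defaultdict

    class TrustLedger:
        def __init__(self):
            # partner -> [successful interactions, failed interactions]
            self.history = defaultdict(lambda: [0, 0])

        def record(self, partner: str, success: bool) -> None:
            self.history[partner][0 if success else 1] += 1

        def trust(self, partner: str) -> float:
            """Expected probability of a good outcome (Laplace-smoothed),
            so unknown partners start at a neutral 0.5."""
            good, bad = self.history[partner]
            return (good + 1) / (good + bad + 2)

    ledger = TrustLedger()
    for outcome in (True, True, False, True):
        ledger.record("agent-B", outcome)
    print(round(ledger.trust("agent-B"), 2))   # 0.67
    print(ledger.trust("agent-C"))             # 0.5, no history yet

System-level mechanisms, by contrast, build such scores into the rules of the marketplace itself, for example through reputation reporting visible to all agents.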

 

William H. Dutton and Adrian Shepherd of the Oxford Internet Institute review the findings of the Oxford Internet Survey conducted in 2003 in “Confidence and Risk on the Internet.”  They find that attitudes of trust toward the Internet held by adults and teens in Great Britain follow a pattern called the “certainty trough.”  Those without much experience with the Internet tend to distrust it; those who use it more tend to trust it more; but those who have the most experience with the Internet seem to be more cognizant of the risks involved and more likely to have had bad experiences, thus increasing their distrust.  Businesses and governments wishing to provide more services over the Internet must work to establish its trustworthiness.

 

“Perceptions of Risk in Cyberspace” is the title of the contribution by Jonathan Jackson, Nick Allum, and George Gaskell.  Jackson and Gaskell are associated with the Methodology Institute at the London School of Economics and Political Science, and Allum is in the Department of Sociology at the University of Surrey.  The chapter reviews the social science literature on the public perception of risk.  The authors examine several different models of risk perception but seem most attached to the social amplification of risk framework (SARF).  They extend their discussion to perceptions of crime but spend less time than the other contributors on applications to the technological environment.

 

Charles D. Raab, a Professor of Government at the University of Edinburgh, examines “The Future of Privacy Protection.”  Raab provides a succinct overview of privacy issues, both in the European context and internationally.  He then explores “privacy impact assessment,” which evaluates the effects that activities or proposals might have on individual privacy.  He calls for policy makers to try to think more clearly about privacy as a social good, the relationship between privacy and surveillance, what kinds of laws might best protect privacy, and the connection between risk and trust.

 

Information systems security mechanisms and procedures are often compromised by their poor usability.  We have to keep track of dozens of usernames and passwords for different systems, for example, which encourages us to write them down or store them in insecure places.  M. Angela Sasse, a computer scientist at University College London, addresses the usability problems with current technologies in “Usability and Trust in Information Systems.”  She makes a number of sensible recommendations on how security systems might be improved by taking into account computer users’ psychology, attitudes, and practices. [*688]

 

James Backhouse, of the Department of Information Systems at the London School of Economics and Political Science, integrates several industry case studies in his paper, “Risk Management in Cyberspace.”  The case studies, conducted by his co-authors Ayse Bener, Narisa Chauvidul-Aw, Frederick Wamala, and Robert Willison, examine the behavioral and organizational dimensions of risk and security in several global companies that use the Internet.  These cases illustrate that the behaviors and attitudes of computer users are at least as important as technology and policy in managing risk in information systems.

 

Jonathan Cave, an economist at the University of Warwick, brings the tools of economics to bear on the analysis of trust.  The first part of his paper, “The Economics of Cyber Trust between Cyber Partners,” uses a game-theoretic approach, while the second part rests on structure-conduct-performance analysis.  The former section provides an excellent introduction to the application of game theory to the analysis of trust, and Cave succeeds in demonstrating that his discipline can make substantial contributions to the study of trust in cyberspace.
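
A stylized example (my own, not Cave’s) conveys the flavour of the game-theoretic part: in a repeated “trust game,” a trustee who abuses trust gains once but forfeits future cooperation, so trust is sustainable only when the trustee values the future enough.

    # Stylized repeated "trust game": each round, a trusted trustee either honours
    # trust (both sides get R) or abuses it for a one-off temptation payoff T > R,
    # after which the truster never trusts again.  With discount factor delta, a
    # patient trustee prefers honouring when future cooperation outweighs the grab.
    R, T = 2.0, 3.0   # per-round cooperative payoff, one-off temptation payoff

    def trustee_honours(delta: float) -> bool:
        """Honour iff the present value of cooperating forever, R / (1 - delta),
        is at least the one-off gain T from abusing trust."""
        return R / (1 - delta) >= T

    for delta in (0.1, 0.3, 0.5, 0.9):
        print(f"delta={delta:.1f}: honours trust -> {trustee_honours(delta)}")
    # Only sufficiently patient trustees (delta >= 1 - R/T, about 0.33) sustain trust.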

 

The final three essays, by Edward Steinmueller of the University of Sussex, Kieron O’Hara, and John Edwards, an international lawyer, are brief discussions covering areas not addressed by the previous lengthier chapters.  Steinmueller considers additional economic analyses of trust. O’Hara examines the ethics of trust in cyberspace, including the views of Hobbes, Hume, Smith, Kant, and Rousseau, and anchors the discussion of trust in the technological realm in the broader philosophical tradition. Edwards addresses regulatory and legislative issues.

 

Despite the variety of methodologies, analytical approaches, and definitions of trust and risk in this volume, the consensus of the authors is that much research remains to be done before technologists, policy makers, managers, and computer users can begin to understand the nature of trust in cyberspace.  In particular, technological and managerial approaches must be integrated with research grounded in philosophy and the social, psychological, behavioral, and economic sciences.  This collection does an excellent job of summarizing and introducing the contributions of this second set of disciplines.  This work must be extended, and the interdisciplinary dialogue must commence.

*************************************************

© Copyright 2006 by the author, Robert G. Brookshire.