Our societies are undergoing pervasive digitalization. It is no exaggeration to say that every facet of human endeavor is being profoundly changed by the use of computing and digital technologies. Naturally, such sweeping changes also bring forth ethical issues that computing professionals must confront. The question is: are they being equipped to deal with such issues?
Ethical concerns in computing are widely recognized. For example, the recent upsurge in the popularity of applying machine learning techniques to a variety of problems has raised several ethical questions. Biases inherent in training data can render machine-learning-based systems unfair in their decisions; identifying such sources of unfairness and making machine learning systems accountable is an active research topic. Similarly, the rise of autonomous systems has led to questions such as how to deal with the moral aspects of autonomous decision making, and how societies can respond to people whose professions may be rendered obsolete by the deployment of such systems.
Ethics in information security: The profession of information security has its own share of ethical considerations that it has been grappling with. Among them are the privacy concerns of large-scale data collection, the use of end-to-end cryptography in communication systems, wiretapping and large-scale surveillance, and the practice of weaponizing software vulnerabilities for the purpose of “offensive security”.
The Vault 7 story: The latter issue was brought to the fore in dramatic fashion earlier this month when Wikileaks published a collection of documents which they called “Vault 7”. It consisted of information on a very large number of vulnerabilities in popular software platforms like Android and iOS that can be used to compromise end systems based on those platforms. That national intelligence agencies use such vulnerabilities as offensive weapons did not come as a surprise. But the Wikileaks revelation led to a flurry of discussion on the ethics of how vulnerabilities should be handled. Over the years, the information security community has developed best practices for dealing with vulnerabilities. Timely and “responsible disclosure” of vulnerabilities to the affected vendors is a cornerstone of such practices. Using vulnerabilities for offence is at odds with responsible disclosure. As George Danezis, a well-known information security expert and professor at University College London, put it: “Not only government ‘Cyber’ doctrine corrupts directly this practice, by hoarding security bugs and feeding an industry that does not contribute to collective computer security, but it also corrupts the process indirectly.” But when a
government intelligence agency finds a new vulnerability, the decision on when
to disclose it to the vendors concerned is a complex one. As another well-known expert and academic, Matt Blaze, pointed out: on the one hand, an adversary may find the same vulnerability and use it against innocent people and institutions, which calls for immediate disclosure leading to a timely fix. On the other hand, the vulnerability can help intelligence agencies thwart adversaries who would harm innocent people, which is the rationale for delaying disclosure. Blaze reasoned that this decision should be informed by the likelihood that a vulnerability is rediscovered, but concluded that, despite several studies, there is insufficient understanding of the factors that affect how frequently a vulnerability is likely to be rediscovered.
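To make the disclose-or-stockpile trade-off concrete, here is a toy back-of-the-envelope sketch in Python. It is emphatically not Blaze's analysis: the linear harm model, the per-month rediscovery probability, and all of the numbers are purely illustrative assumptions, intended only to show how the rediscovery rate enters the calculus.

```python
# Toy model: compare the expected cost of stockpiling a vulnerability for
# `months` months against disclosing it immediately (taken as cost zero).
# All parameter values below are hypothetical, not measured.

def expected_cost_of_stockpiling(p_rediscovery_per_month: float,
                                 harm_if_exploited: float,
                                 intel_value_per_month: float,
                                 months: int) -> float:
    """Expected net cost of keeping the vulnerability secret.

    Assumes independent per-month rediscovery with the given probability,
    a one-off harm if an adversary rediscovers and exploits the bug before
    it is fixed, and a fixed intelligence benefit per month of secrecy.
    """
    p_rediscovered = 1.0 - (1.0 - p_rediscovery_per_month) ** months
    expected_harm = p_rediscovered * harm_if_exploited
    intelligence_benefit = intel_value_per_month * months
    return expected_harm - intelligence_benefit


if __name__ == "__main__":
    # Hypothetical numbers: 5% monthly rediscovery chance, harm of 100 units,
    # intelligence value of 1 unit per month, vulnerability held for a year.
    cost = expected_cost_of_stockpiling(0.05, 100.0, 1.0, 12)
    print(f"Expected net cost of stockpiling for a year: {cost:.1f} units")
    # A positive value argues for disclosure, a negative one for stockpiling;
    # the sign hinges on the rediscovery rate, which is poorly understood.
```

Under such a model, a high rediscovery probability makes the expected-harm term dominate and argues for early disclosure; the difficulty, as Blaze notes, is that the rediscovery rate itself is what we do not understand well.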
Equipping infosec professionals: That brings us back to our original question: do information security professionals have the right knowledge, tools and practices for making judgement calls when confronted with such complex ethical issues? Guidelines for computing ethics have existed for decades. For example, the IEEE Computer Society and the ACM published a code of ethics for software engineers back in 1999. But to what extent do such codes reach practitioners and inform their work? There are certainly efforts in this direction. For example, program committees of top information security conferences routinely look for a discussion of “ethical considerations” in submitted research papers that deal with privacy-sensitive data or vulnerabilities in deployed products. They frequently grapple with the issues involved in requiring authors to release datasets in the interest of promoting reproducibility of research results while balancing the interests of the people from whom the data was collected. But this needs to be done more systematically, at all levels of the profession.
Ethical considerations in information security cannot simply be outsourced to philosophers and ethicists, because such considerations will inevitably inform the very nature of the work done by information security professionals. For example, several researchers are developing techniques that allow privacy-preserving training and prediction in machine-learning-based systems. Similarly, as Matt Blaze pointed out, active research is needed to understand the dynamics of vulnerability rediscovery.
Should undergraduate computer science curricula include mandatory exposure to ethics in computing? Should computer science departments host computing ethicists among their ranks?
Silver lining: Coming back
to the Vault 7 episode, there was indeed a silver lining. The focus on amassing
weaponized vulnerabilities to attack end systems suggests that the increasing
adoption of end-to-end encryption by a wide variety of messaging applications
has indeed been successful! Passive wiretapping is likely to be much less
effective today than it was only a few years ago.