Six Lines

Teaching Software Engineering Ethics

Posted by Aaron Massey on 25 Aug 2013.

Arvind Narayanan is asking for examples of real-world ethical dilemmas that software engineers may face. This post is my contribution. I’m assuming this may become part of some teaching curriculum because he mentioned the Software Engineering Ethics module provided by the Markkula Center for Applied Ethics at Santa Clara University. I wasn’t aware of this module, but it looks pretty solid, and I may use it in the future. It may not be a bad place to start if you’re interested in software engineering ethics. And frankly, if you teach people to write software or write software yourself, then you should take an interest in software engineering ethics.

Before I get to the examples, I would be remiss if I didn’t mention the book Information Assurance and Security Ethics in Complex Systems: Interdisciplinary Perspectives. Annie Antón and I contributed a chapter on Behavioral Advertising Ethics to this book, based in part on my work as a Walter H. Wilkinson Graduate Research Ethics Fellow at North Carolina State University. The chapter contains several examples that may be of use. In addition, each chapter in the book comes with a list of 20 or so discussion questions, since the book is designed for pedagogical settings. However, the book may not exactly meet Arvind’s request because it is structured thematically rather than around individual cases.

Individual cases are often a great way to learn about engineering ethics, and other important books on the subject take this approach. For example, To Engineer is Human: The Role of Failure in Successful Design focuses each chapter on a different engineering failure (and one success story). The Immortal Life of Henrietta Lacks is one of the better books I’ve read about ethics, and it essentially covers a single case. Still, there aren’t many books that go into detail on software engineering ethics: To Engineer is Human is mostly about civil and mechanical engineering, and The Immortal Life of Henrietta Lacks focuses on the ethics of scientific research. This is a problem.

Worse, ethics and failure tend to be lumped together, at least in software engineering. When I’ve asked questions similar to Arvind’s in the past, I’ve found that prominent or noticeable failures are common answers, but these are not always the most useful for learning ethics. Consider the Therac-25 accidents, in which several patients died because of a software engineering failure. Serious as that failure was, I’m not sure it’s fair to call it a great example of an ethical dilemma. The developers weren’t tempted to introduce the bug; it was simply an accident of construction. Had they known about it beforehand, they likely would have fixed it. Similar arguments can be made for the failed first launch of the Ariane 5 or the loss of the Mars Climate Orbiter, which are also commonly mentioned. I suppose these are reasonable examples of why engineers cannot afford to be indifferent to their projects, but they aren’t great examples of ethical dilemmas.

Better examples of ethical dilemmas for software engineers might look like the following generic scenarios:

  • While using a competitor’s product, you’ve discovered a critical security flaw that may affect many of their users. Do you report the flaw to your competitor or find some way to use it to your advantage?
  • A potential client asks you to build a software system that is extremely similar to your last project. You know that you could save a lot of time by re-using some code you built for your previous client. Unfortunately, that code is owned by your previous client. What do you do?
  • Partway through your development cycle, the software platform you’re using releases an updated roadmap that makes it clear the platform will not be a solid long-term choice for your current project. You can inform your customer about the problem and recommend porting the project to another platform, which may cost several months’ worth of development time, or you can deliver the product on time and under budget, knowing that the platform’s new release a year later will eventually force extensive re-engineering. What do you do?

These examples are better, but they are, perhaps, too generic. (Though I’m sure real instances could be found with some digging.) The real world often produces seriously surprising ethical dilemmas; toy examples are sometimes too simplistic or lopsided. Instead, I would like to propose a set of ethical decisions faced by real-world engineers that may prove to be exactly the kind of example Arvind wants.

Deep Packet Inspection and Behavioral Advertising

Deep Packet Inspection (DPI) can be used by an ISP to filter Internet traffic based on information found in the data payload rather than the packet header. To employ the ubiquitous, horrible analogy: it’s like allowing the post office to look inside your letters to determine whether and how they should be delivered. DPI can be used to prevent network attacks, improve network efficiency, and protect users of the network. If your ISP could look at the data payload of packets headed for your computer to remove viruses or spam, wouldn’t you want them to do so? Of course, DPI also has potentially nefarious uses, like censorship.
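For readers who want something concrete, here’s a minimal Python sketch of the distinction. Everything in it is invented for illustration (the simplified packet structure, the blocked port, the toy signature); real DPI engines operate on reassembled streams at line rate, not on individual packets like this.

    # Toy contrast between header-based filtering and deep packet inspection.
    # The "packet" here is a simplified stand-in for a real IP packet.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        dst_port: int
        payload: bytes  # application data

    def header_filter(pkt: Packet) -> bool:
        """Classic firewall: decides using only addressing information."""
        blocked_ports = {25}  # e.g., drop outbound SMTP to curb spam bots
        return pkt.dst_port not in blocked_ports

    def deep_packet_inspection(pkt: Packet) -> bool:
        """DPI: opens the 'letter' and reads the application payload."""
        signatures = [b"EICAR-STANDARD-ANTIVIRUS-TEST"]  # toy malware signature
        return not any(sig in pkt.payload for sig in signatures)

    pkt = Packet("10.0.0.2", "93.184.216.34", 80, b"GET /ads?interest=cars HTTP/1.1")
    print(header_filter(pkt), deep_packet_inspection(pkt))  # True True

The header filter never sees the request for car ads; the DPI function does. That one difference is the entire ethical question.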

The most compelling ethical discussion around DPI involves advertising. Advertising runs the Internet. The vast majority of web-based companies make money through advertising. Google and Facebook are, fundamentally, advertising companies, and they prefer this to subscription models for a reason. The vast majority of ISPs are not advertising companies; they are subscription services. There are strong economic arguments for ISPs adding advertising to their revenue stream. However, to do that, they would have to use DPI in ways that many consider unethical. Using DPI for this purpose violates the end-to-end principle. It may also violate network neutrality, depending on which definition of network neutrality is in vogue these days. That’s not to mention the ethical arguments originating from the history of advertising ethics.1

Perhaps most important for Arvind’s purposes, DPI-based behavioral advertising was a real-world ethical dilemma faced by actual software engineers. NebuAd was an American company that attempted it; Phorm took a similar approach in the United Kingdom. Both countries ended up investigating these practices: in the U.S. there were congressional hearings and an FTC investigation, and similar proceedings took place in the U.K. NebuAd no longer exists, mostly because of the fallout from these investigations. Phorm is still around, but has been bleeding cash for years.

If the idea of an ISP making money through advertising as well as subscriptions seems far-fetched, consider Google Fiber. Advertising is Google’s bread and butter, and the company already builds profiles to target ads through its other platforms. Why not use DPI-based behavioral advertising at some point in the future? Google has used deceptive tactics against its own users in the past. How would a Google Fiber customer know if Google started using DPI for behavioral advertising? The market for ISPs is arguably bereft of serious competition in many areas, though Google Fiber may change that. What would officially sanctioned DPI-based behavioral advertising do to that competition? ISPs aren’t currently cashing in on the massive amount of money being made through online advertising. Well, other than Google Fiber.

DPI makes for a great debate topic in software engineering because it spawns so many interesting possibilities. What limits would have to be imposed on an ISP before it could implement an ethical program for behavioral advertising? Facebook collects only what you give it, but keeps it essentially indefinitely. What if your ISP collected everything but kept it for only two weeks? That would still be long enough to determine whether you were interested in buying a car, switching cable companies, or about to apply to college. Those are pretty juicy advertising targets. Other possible debates: If some topics were off-limits, like healthcare, would DPI for advertising become ethical? What sort of notice should be provided to new customers about this service? How should this new service be audited? How should ISPs protect against insider snooping? How do they do that now? DPI is almost the technology that launches a thousand ethics debates: it’s concrete, but it can be used in many contexts.
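To make two of those proposed constraints concrete, here’s a hypothetical sketch of an interest-profile store with a hard two-week retention window and an off-limits topic list. Every name in it is invented for illustration; no ISP is known to work this way, and a real system would need auditing and customer notice on top of it.

    # Hypothetical interest-profile store for DPI-based ad targeting,
    # illustrating two of the debated ethical constraints: a hard
    # retention window and a list of topics that are never recorded.

    import time

    RETENTION_SECONDS = 14 * 24 * 3600        # keep observations two weeks
    OFF_LIMITS = {"healthcare", "religion"}   # topics never recorded

    class InterestProfile:
        def __init__(self):
            self._observations = []  # (timestamp, topic) pairs

        def record(self, topic: str) -> None:
            """Record an inferred interest unless the topic is off-limits."""
            if topic in OFF_LIMITS:
                return
            self._observations.append((time.time(), topic))

        def current_interests(self) -> set:
            """Return only interests observed within the retention window."""
            cutoff = time.time() - RETENTION_SECONDS
            self._observations = [(t, x) for (t, x) in self._observations
                                  if t >= cutoff]
            return {topic for _, topic in self._observations}

    profile = InterestProfile()
    profile.record("cars")        # kept, but only for two weeks
    profile.record("healthcare")  # silently dropped
    print(profile.current_interests())  # {'cars'}

Even this toy version raises the debate questions immediately: who verifies that the retention window is enforced, and who decides what belongs on the off-limits list?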

We need to be able to teach software engineering ethics. It’s critically important now, and it’s only becoming more so. If the software engineers in the classroom today are going to be in Edward Snowden’s shoes at some point in the future, won’t they need an understanding of the ethics involved? Set aside the decision Snowden made; that’s almost beside the point. The decision itself is fundamentally an ethical one. The same will be true for engineers working on drones or financial systems or healthcare systems or any number of things we can’t dream up just yet. The worst-case scenario is facing a decision like Snowden’s in the real world as your first exposure to ethical concerns in software engineering. (Though maybe the true worst case is not recognizing it as a decision at all.) This is why we need to provide engineers with broad ethics statements like the ACM’s Software Engineering Code of Ethics. It’s also why we need collections of pedagogical cases. If you have one, get in touch with Arvind. I’m looking forward to seeing what comes of his collection.

  1. This is covered in some detail in my book chapter.