
Zoom and the Problem of Cybersecurity Moral Hazard

By Jeffrey Vagle

A great deal of ink has been spilled over the many security vulnerabilities discovered in Zoom's teleconferencing software after hundreds of millions of people began using it for meetings, classroom discussions, yoga classes, and even funerals during the COVID-19 lockdown. And while Zoom took immediate steps to shore up security on its platform, including hiring Facebook's former chief security officer as well as a widely recognized leader in establishing bug bounty programs, these steps came years after security consultants found vulnerabilities serious enough to make cloud provider Dropbox reconsider Zoom's use within its own company and to prompt New York City schools to ban it for remote learning.

Security vulnerabilities and other software flaws are not unique to Zoom. An entire industry has grown up around the fact that cybersecurity is a widespread problem with potentially serious legal, political, social, and economic costs. In their own defense, technology companies point out that commercial software is highly complex and that consumer use cases aren't always predictable in advance. While this is true, larger problems are at work here as well.

The “move fast and break things” approach to building software, in which being early to market is the chief goal and shipping a product now often means putting off bug fixes until later releases, remains the dominant business philosophy among technology companies, even as we have come to recognize the dangers of that approach as “software eats the world.” Further complicating matters, the technology industry has become commoditized and competes on price; with margins kept to a bare minimum, security engineering is often the first item cut from a project's budget.

Building secure software, a Sisyphean task if there ever was one, is difficult and time-consuming, and it requires a level of expertise most software developers lack. In other words, it's expensive. While some larger technology companies have spent years and considerable money building software that is both useful and secure, many others, some small and some not-so-small, remain wedded to the idea that building secure technology is too expensive, too time-consuming, and a hindrance to bringing products to market.

Read the full post at Just Security.