Of Interest

  • Are any encrypted messaging apps fail-safe? Subjects of Mueller’s investigation are about to find out.

Date published: June 8, 2018

    "Encryption is the best tool people have for defending against hackers, cybercriminals and government surveillance, said Riana Pfefferkorn, a cryptography fellow at the Stanford Center for Internet and Society. Still, “your communications encryption choices are only worth as much as the trustworthiness of the people you're talking to,” she said.

  • White House says its federal agencies can’t keep track of their own data

Date published: June 8, 2018

    "University of Maryland, Baltimore County's cybersecurity graduate program director Richard Forno echoed Williams' analysis and said even a simple Google search could cull results that warned about our dire state of federal cybersecurity decades earlier.

    "Government reports like this just literally say the same thing year after year: 'here are a couple of recommendations on how we can fix things' and a year goes by, and it says the exact same thing," Forno said.

  • How can we train AI to be good?

Date published: June 13, 2018

The ongoing development and ever-increasing sophistication of artificial intelligence (AI) is giving rise to some fundamental ethical questions: Will machine-made decisions always be transparent and stay within human-defined parameters? To what extent can users retain control over intelligent algorithms? Is it possible to imbue self-learning systems with a sense of morality? And who decides what moral values these systems should follow anyway?

  • The Next Frontier of Police Surveillance Is Drones

Date published: June 7, 2018

    "“Axon also makes tasers, so you could imagine drones being equipped with tasers or with tear gas, rubber bullets, and other weaponry,” said Harlan Yu, the executive director of Upturn, a policy nonprofit that works on social justice and technology issues."

  • Making Smart Machines Fair

Date published: June 6, 2018

    "A first step for the professors is to measure the cultural bias in the standard data sets that many researchers rely on to train their systems. From there, they will move to the question of how to build data sets and algorithms without that bias. “We can ask how to mitigate bias; we can ask how to have human oversight over these systems,” says Narayanan. “Does a visual corpus even represent the world? Can you create a more representative corpus?”"

  • Artificial intelligence debate flares at Google

Date published: June 5, 2018

    "“We are calling on Google not to make weapons because Google has a special relationship with the public in virtue of the kind of personal data they are collecting — through our email, through Google Maps, through Android systems, through internet searches and all sorts of things,” said Peter Asaro, an associate professor at Stanford who co-chairs the International Committee for Robot Arms Control."

  • Autonomous Vehicles: Safety, Risk, and the Law

Date published: May 31, 2018

    "The observations from Smith, who holds dual degrees in law and engineering, were particularly insightful. He started by challenging widely held perceptions about the law. Many media articles mention states or cities that have passed a law authorizing autonomous vehicles, or something of the like, as if permission is needed. “We shouldn’t assume that we need a law to do something.” Smith pointed out that many of the early tests in California were before the state passed an explicit law.

