The ongoing development and ever-increasing sophistication of artificial intelligence (AI) is giving rise to some fundamental ethical questions: Will machine-made decisions always be transparent and stay within human-defined parameters? To what extent can users retain control over intelligent algorithms? Is it possible to imbue self-learning systems with a sense of morality? And who decides what moral values these systems should follow anyway?