AI will alter society in harmful ways, as experiments in predictive policing technology have already shown: these systems reinforce existing bias and disproportionately affect poor communities, and facial recognition systems routinely fail to recognize darker skin tones. The potential for such biases to harm vulnerable populations creates an entirely new category of human rights concerns. As legislation attempting to curb these dangers moves forward, design will be integral to reflecting those changes in practice.
Amie Stepanovich, executive director of Silicon Flatirons, says, “A lot of these systems are designed by people who are coming from fairly privileged backgrounds, and they’re designing them for a specific use case based on their own understanding. That might not be the best use case for the people that these systems end up serving. If you’re not thinking about those populations in advance and doing real assessments based on them, that’s where a lot of the design decisions end up failing.”