Artificial intelligence (AI) has become a fundamental part of modern life, from web-based services to social media platforms and recommendation engines. However, the design process behind these AI systems often neglects a crucial element: human-centeredness.
According to James Landay, vice director of Stanford HAI, we still lack the knowledge of how to design AI systems that truly have a positive impact on humans. At a recent conference at Stanford University, experts proposed a new definition of human-centered AI, one that emphasizes creating systems that enhance human life and challenges the problematic incentives that currently drive AI development.
Define Human-Centered Design
So, how can we design AI systems that are constructive and fair for the people they affect? Landay suggests starting by designing and analyzing systems at three levels: user, community, and society. Take self-driving cars as an example. At the user level, designers must consider the needs and abilities of the people riding in the car. At the community level, they need to ensure that the technology works safely around cyclists, pedestrians, and other non-drivers right from the start. At the societal level, designers should consult subject-matter experts, such as transportation specialists, to understand the broader impact of these cars on the transportation ecosystem. For instance, autonomous vehicles are predicted to exacerbate traffic congestion; in that case, it is worth asking whether resources should be redirected toward improving public transportation instead of relying solely on self-driving cars.
Good technology design should prioritize supporting users’ self-advocacy, enhancing human creativity, clarifying the responsibilities of users and developers, and promoting social connectedness. Designers should aim to create AI-infused tools that are reliable, safe, and trustworthy, and they should consider the long-term societal impacts of the technology from the earliest stages of development.
Require Multiple Perspectives From the Get-Go
To address the complex challenges associated with AI development, multidisciplinary teams must come together. These teams should include not only technology and AI experts but also professionals from social sciences, humanities, medicine, law, environmental science, and other relevant domains. Having diverse perspectives and conflicting viewpoints from the beginning of a project is essential. Disagreement and discomfort can often lead to better results and innovative solutions.
Rethink AI Success Metrics
When evaluating AI systems, we must shift our focus from asking what these models can do to asking what people can do with these models. Today, AI is measured primarily by accuracy, but accuracy alone fails to capture the true value, or the true harm, of a system. Designing human-centered AI requires metrics that align with human needs and values. The consequences of narrow metrics are illustrated by the story of a Palestinian man who posted “good morning” in Arabic on Facebook; the platform’s machine translation mistakenly rendered it as “attack them” in Hebrew, and he was arrested and questioned before the error was acknowledged. The incident highlights the need for metrics that reflect AI’s real impact on people’s lives.
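To make the point concrete, here is a minimal, hypothetical sketch (not from the article) of how an evaluation could weight errors by their assumed human impact rather than counting every mistake equally. The function names, example data, and severity weights are all illustrative assumptions.

```python
# Hypothetical sketch: comparing plain accuracy with an impact-weighted error.
# All names, data, and severity weights below are illustrative assumptions,
# not metrics proposed in the article.

def plain_accuracy(predictions, references):
    """Fraction of outputs that exactly match the reference."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

def impact_weighted_error(predictions, references, impact_weights):
    """Average error where each mistake is scaled by its assumed human cost."""
    weighted_errors = sum(
        w for p, r, w in zip(predictions, references, impact_weights) if p != r
    )
    return weighted_errors / sum(impact_weights)

# One benign error and one high-stakes error look the same to plain accuracy.
preds = ["good morning", "attack them", "see you soon"]
refs = ["good morning", "good morning", "see you later"]
weights = [1.0, 50.0, 1.0]  # hypothetical severity estimates per example

print(round(plain_accuracy(preds, refs), 2))                  # 0.33
print(round(impact_weighted_error(preds, refs, weights), 2))  # 0.98
```

In this toy example, plain accuracy treats a harmless mistranslation and a high-stakes one identically, while the weighted score is dominated by the high-stakes failure, which is closer to the human-centered framing the article advocates.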
Pay Attention to Power Structures
The metrics used to evaluate AI systems must also be viewed within the broader power structures that drive them and shape the problems they are built to solve. In the United States, capitalism, with its strong focus on productivity, is often the driving force behind AI advances; it is vital to also weigh factors like individual autonomy, freedom, and happiness. For instance, both Amazon and UPS monitor their delivery drivers, but UPS uses a more human-centered process for addressing errors and grievances. The focus on productivity should not overshadow the downstream effects on workers’ well-being.
FAQ
Q: How can we design AI systems that have a positive impact on humans?
A: Designing human-centered AI requires considering the needs of users, analyzing the broader societal impact, involving multidisciplinary teams, rethinking success metrics, and paying attention to power structures that influence AI development.
Conclusion
Designing and developing human-centered AI is a complex and multifaceted task. It requires taking into account the needs of users, understanding the societal impact, involving diverse perspectives, reevaluating success metrics, and considering power structures. By prioritizing human well-being and addressing the inherent challenges of AI, we can create AI systems that truly enhance our lives. To learn more about AI and its impact on humanity, visit Virtual Tech Vision.