By Carolina Milanesi

Ethics In AI: A Conversation With Diya Wynn, AWS Responsible AI Lead

In an era where artificial intelligence (AI) is rapidly reshaping our world, the conversation around its ethical implications and inherent biases has never been more important. At the recent AWS re:Invent conference, I had the distinct opportunity to delve into these pressing issues with Diya Wynn, AWS’s Responsible AI Lead. 


Wynn opened the conversation by emphasizing the potential of AI in solving previously unsolvable challenges, such as climate change and complex health issues. However, she stressed the importance of leveraging this technology responsibly. This involves not just technological solutions but creating a culture of responsibility within organizations. Wynn articulated the need for a holistic approach to AI, and responsible AI specifically, encompassing the entire ecosystem from technology developers to end-users: “This is something that involves public and private sector, government, academia. All of us coming together and even us as individual consumers, to demand, desire, want to see responsible use, ethical use of AI implemented in the world.”


Recently, AWS worked with Morning Consult, a global intelligence company, to survey a representative sample of business leaders in the United States to understand their sentiment toward and plans for responsible AI. The survey reveals a broad awareness of responsible AI, with 77% of respondents familiar with the concept and 59% viewing it as a business imperative. However, there's a notable age gap, with younger business leaders (18-44) more familiar with responsible AI than older leaders (45+). Despite the familiarity, only a quarter have started building a strategy for responsible AI, and a significant majority lack a dedicated team for it.


Looking ahead, almost half of the respondents plan to increase their investment in responsible AI next year. This intention is more pronounced among younger leaders compared to their older counterparts. Additionally, nearly half expect their company boards to inquire about a responsible AI plan, with younger leaders more likely to anticipate this compared to older leaders. When it comes to training, there's an even split among respondents on rolling out responsible AI training in 2024, with younger leaders more inclined towards this initiative.


The survey also highlights the perceived benefits of deploying AI, such as enhancing revenue, creativity, innovation, and employee productivity. By 2028, a vast majority of organizations plan to use AI-powered solutions. There's a strong recognition of the financial risks associated with irresponsible AI use, with over a third of respondents believing it could cost their company significantly. Opinions vary on who is most responsible for the development of responsible AI, with responses split among AI service vendors, businesses using AI, and academia/researchers in the nonprofit sector.


These findings clearly point to the need for organizations providing AI tools, like AWS, to help customers not just leverage Generative AI but use it responsibly. Beyond the broad issues of fairness and ethics, Generative AI poses specific challenges, including toxicity, bias, and the ethical use of data. The complexity of defining what is ‘fair’ is magnified in Generative AI, given its broad and dynamic range of applications, Wynn explained.


AWS is actively developing solutions to address these challenges. At re:Invent, AWS introduced Guardrails for Amazon Bedrock. This feature simplifies the implementation of application-specific safeguards that are aligned with responsible AI policies and customer use cases. Guardrails help maintain consistency in managing undesirable and harmful content within applications using Amazon Bedrock. They can be applied to large language models, fine-tuned models, and in conjunction with Agents for Amazon Bedrock. The service enables the specification of topics to avoid, automatically detecting and preventing queries and responses that fall under restricted categories. This includes configuring content filter thresholds for hate speech, insults, sexualized language, and violence, allowing for control over the level of harmful content filtering. For example, an online banking application can be set to avoid providing investment advice and limit inappropriate content, ensuring compliance and enhancing user protection. 
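To make the banking example above concrete, here is a minimal sketch of how such a guardrail configuration might be assembled for the Bedrock `CreateGuardrail` API via boto3. The guardrail name, topic definition, example phrases, and messaging strings are all hypothetical, and the exact request shape and available filter strengths should be checked against the AWS documentation; this builds only the request payload, so it runs without AWS credentials.

```python
import json


def build_guardrail_config():
    """Assemble a hypothetical guardrail config for an online-banking
    assistant: deny the 'investment advice' topic and filter harmful
    content at configurable strengths."""
    return {
        "name": "banking-assistant-guardrail",  # illustrative name
        "description": "Blocks investment advice and harmful content.",
        # Denied topics: queries and responses matching these are blocked.
        "topicPolicyConfig": {
            "topicsConfig": [
                {
                    "name": "InvestmentAdvice",
                    "definition": "Recommendations about stocks, funds, "
                                  "or other financial investments.",
                    "examples": ["Which stocks should I buy?"],
                    "type": "DENY",
                }
            ]
        },
        # Content filters: per-category thresholds for inputs and outputs.
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
            ]
        },
        # Canned replies shown when an input or output is blocked.
        "blockedInputMessaging": "Sorry, I can't help with that topic.",
        "blockedOutputsMessaging": "Sorry, I can't provide that response.",
    }


if __name__ == "__main__":
    config = build_guardrail_config()
    # With AWS credentials configured, this payload could be passed to
    # boto3.client("bedrock").create_guardrail(**config).
    print(json.dumps(config, indent=2))
```

Once created, a guardrail is referenced by ID when invoking a model or an agent, so the same policy can be applied consistently across applications.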


Wynn expressed a sense of optimism mixed with caution regarding the industry’s focus on ethical AI. While there’s an increased awareness, she noted a disparity between understanding and practical implementation of responsible AI practices. The widespread attention to applications like ChatGPT has spotlighted the need for ethical considerations but also revealed gaps in implementation.


Wynn emphasized that trust is built on transparency, a commitment to ethical practices, and integrating diverse viewpoints. AWS is committed to transparency, especially as generative AI grows and evolves. Transparency in the development, testing, and use of technology is crucial for earning the trust of organizations and their customers. This is why AWS continues to focus on providing transparency resources like AI Service Cards to the community and remains open to iterating and receiving feedback on the best ways forward.

AWS introduced AI Service Cards at re:Invent 2022 as a tool to enhance transparency and help customers understand AWS AI services better. These cards serve as responsible AI documentation, providing crucial information on intended use cases, limitations, responsible AI design choices, and best practices for deployment and performance optimization. They aim to address key aspects like fairness, explainability, veracity, robustness, governance, transparency, privacy, security, safety, and controllability in AI service development.


This year at re:Invent, AWS announced a new AI Service Card for Amazon Titan Text to improve transparency in foundation models. Additionally, four new AI Service Cards were launched: Amazon Comprehend Detect PII, Amazon Transcribe Toxicity Detection, Amazon Rekognition Face Liveness, and AWS HealthScribe. 


“The commitments that we, as an organization, have made to build a strategy around responsible AI matter to us. We're building and integrating responsible AI into the entire lifecycle as we build our services and products. We are investing in the next generation of diverse leaders and ensuring that we can have the kind of representation that's necessary. We are being conscious and aware of how we are sourcing our data. All these commitments are key to us building trust,” said Wynn.


As AI continues to evolve and integrate into every aspect of our lives, the imperative to approach it with responsibility, transparency, and inclusivity becomes increasingly vital. My conversation with Wynn not only highlighted the challenges but also illuminated the path forward for ethical AI development, a journey that requires the collective effort of technologists, policymakers, and society at large.



Disclosure: The Heart of Tech is a research and consultancy firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this column. The author does not hold any equity positions with any company mentioned in this column.
