Why AI Ethics Must Address AI Literacy, Not Just Bias



Women in AI are making inroads in research, leading vital ethical discussions, and inspiring the next generation of AI professionals. We created the VentureBeat Women in AI Awards to highlight the importance of their voices, work, and experience, and to shine a light on some of these leaders. In this series, published on Fridays, we dig deeper into conversations with the winners, whom we recently honored at Transform 2021. Be sure to check out last week's interview with the winner of our AI research award.

When you hear talk of AI ethics, it's mostly about bias. But Noelle Silver, winner of VentureBeat's Women in AI responsibility and ethics award, has dedicated herself to an often overlooked part of the responsible AI equation: AI literacy.

"My vision is that we're really increasing literacy across the board," she told VentureBeat of her efforts to educate everyone from C-suites to teens on how to approach AI in a more thoughtful way.

After presenting to too many boardrooms that could see only the good in AI, Silver began to view this lack of knowledge, and of the ability to ask the important questions, as a danger. Today she is a constant champion of public understanding of AI and has also launched several initiatives supporting women and underrepresented communities.

We are delighted to present Silver with this well-deserved award. We recently caught up with her to discuss the inspiration behind her work, misconceptions about responsible AI, and how companies can ensure that AI ethics is more than a box to check.

VentureBeat: What do you think is your unique perspective on AI? What motivates your work?

Noelle Silver: I am motivated by the fact that I have a house full of people who consume AI for various reasons. There's my son with Down syndrome, and I want to make the world accessible to him. And then my dad, who is 72 and suffered a head injury, so he can't use a smartphone and doesn't have a computer. Accessibility is a big part of it, and for the products I have the opportunity to be involved in, I want to make sure I represent those perspectives.

I always joke that when we first started on Alexa, it was a pet project for Jeff Bezos. We weren't consciously thinking about what it might do for classrooms, nursing homes, or people with speech difficulties. But these are all really relevant use cases that Amazon Alexa has now invested in. I always quote Arthur C. Clarke, who said, "Any sufficiently advanced technology is indistinguishable from magic." And it's true for my father. When he uses Alexa, he says, "This is amazing!" It feels like magic to him, but the reality is that there is someone like me with their fingers on a keyboard building the model that supports that magic. And I think it's better to be transparent and let people know that there are humans making these systems do what they do, and that the more diverse and inclusive those humans are in development, the better. So I took that lesson, and now I've spoken to hundreds of executives and boards around the world to educate them on the questions they should be asking.

VentureBeat: You have created several initiatives supporting women and underrepresented communities within the AI community, including the AI Leadership Institute, Women in AI, and more. What prompted you to start these groups? And what are your plans and hopes for them in the near and long term?

Silver: I started the AI Leadership Institute six years ago because I was asked, as part of my job, to speak to leaders and boards about AI. And I was selling a product, so I was there to, you know, talk about the art of the possible and get them excited, which was easy to do. But I found there was a real lack of literacy at the highest levels. And the fact that the people with the budgets didn't have that literacy made it dangerous: someone like me could tell a good story and tap into optimistic feelings about AI, and they couldn't recognize that this isn't the only course. I was telling an honest story, but what if it were someone trying to get them to do something without being so transparent? So I created the leadership institute, with the support of AWS, Alexa, and Microsoft, to just try to train these executives.

A few years later, I realized that there was very little diversity in the conference rooms I was presenting in, and that concerned me. I met Dr. Safiya Noble, who had written Algorithms of Oppression about bias in Google's algorithms years ago. You know, you type in "CEO" and it just shows you white men, that sort of thing. It was a sign of a much larger problem, but I discovered that her work was not well known. She was not a keynote speaker at the events I attended; she was in something like a sub-session. And I just felt the work was critical. And so I started Women in AI to be a mechanism for it. I did a TikTok series highlighting 12 African American women in AI, and it turned into a blogging series, which turned into a community. I have a unique ability, I will say, to champion this work, and so I felt that was my mission.

VentureBeat: I'm glad you mentioned TikTok, because I was going to say, even outside of boardroom discussions, I've seen you talking about building better models and responsible AI everywhere, from TikTok to Clubhouse and beyond. With this, do you hope to reach the masses, get the attention of the average user, and educate decision-makers that way?

Silver: Yes, that's right. Last year I did a LinkedIn Learning course on how to spot deepfakes, and we ended up with three million learners. I think three or four of the videos went viral. And it wasn't YouTube with an elaborate recommendation model driving traffic or anything, right? So I started doing more content on artificial intelligence after that, because it showed me that people want to know more about these emerging technologies. And I have teenagers, and I know they're going to run these companies. So what better way to avoid systemic bias than to educate them on inclusive engineering principles, asking better questions, and design justice? What if we taught that in middle school or high school? And it's funny, because my executives aren't the ones I show my TikTok videos to, but I was on the phone with one recently and heard his seventh-grade daughter ask, "Oh my gosh. Is this the Noelle Silver?" And I was like, you know, that's when you've got it: when you have the seventh grader and the CEO on the same page.

VentureBeat: The idea of responsible AI and AI ethics is finally starting to get the attention it needs. But do you fear, or do you already feel, that it's becoming a buzzword? How do you make sure this work is real and not just a checkbox?

Silver: It's one of those things that businesses are realizing they need to have an answer for, which is great. So they're creating teams. What worries me is: what is the impact of these teams? When I see something ethically wrong with a model, and I know it won't serve the people it's intended for, or I know it's going to hurt someone, and I raise it up the chain as a data scientist and say, "We shouldn't be doing this," what happens then? Most of these ethics organizations have no authority to actually stop production. It's like diversity and inclusion: it's fine until you tell me it will delay commercialization and we'll lose $2 billion in revenue over five years. CEOs have told me, "I'll do anything you ask, but the second I lose money, I can't do it anymore. I have stakeholders to serve." So if we don't give these teams the power to do anything, they'll end up like many of the ethicists we've seen and either resign or be pushed out.

VentureBeat: Are there any misconceptions about promoting responsible AI that are important to dispel? Or something important that is often overlooked?

Silver: I think the most important thing is that people often think about ethical and responsible AI only in terms of bias, but it's also about how we educate the users and the communities that consume this AI. Every company is going to be data-driven, and that means everyone in the business needs to understand the impact of what that data can do and how it needs to be protected. These rules barely exist for the teams that create and store data, and they certainly don't exist for other people within an organization who might encounter that data. AI ethics isn't just for practitioners; it's much more holistic than that.

VentureBeat: What advice do you have for companies creating or deploying AI technology on how to approach it more responsibly?

Silver: The reason I went to Red Hat is that I really believe in open source communities, where different companies come together to solve common problems and build better things. What happens when healthcare meets finance? What happens when we come together, share our challenges and ethical practices, and craft a solution that reaches more people? Especially when we look at things like Kubernetes, which almost all businesses use to deploy their applications. So being part of an open source community where you can collaborate and build solutions that serve more people outside of your limited scope, I think that's a good thing.
