Ethics in artificial intelligence (AI) and technology continues to be a hot topic everywhere. And while technologies like AI have been revolutionizing our world, there are two sides to every coin: an algorithm is only as good as the data you feed it, and it is the team’s responsibility to keep it in check.
On the twelfth episode of the Decisions Now podcast, we are joined by AI ethicist Olivia Gambelin, founder of Ethical Intelligence, for a thought-provoking conversation centered on ethics in AI.
Tune in as co-hosts Rigvinath Chevala, Evalueserve’s chief technology officer, and Erin Pearson, our vice president of marketing, chat with Gambelin about ethics in AI, the challenges that come with it, building an ethical strategy, and what the future looks like!
What is Ethical AI?
Remember when Facebook and its privacy issues were creating all the buzz, or when Microsoft released a chatbot that was later shut down for posting offensive tweets on Twitter? Both are reminders that while technology can be beneficial, it can’t be left unsupervised. This is where ethics in AI comes in.
“Ethics, really if you’re boiling it down, it’s the study of right and wrong and what constitutes a good versus a bad action. What we’re talking about when it comes to ethics and technology is just the decisions around our technology on what makes a better product, what makes better technology versus technology that’s kind of eh or hitting headlines on scandals,” Gambelin said. “Ethics is that tool that’s helping you differentiate and determine those decisions.”
Ethical AI vs Explainable AI
“Do we need algorithms that are transparent, where you can explain it and then move on to making them ethical? Or do you do it the other way?” Chevala asked.
It’s essential to be able to explain the AI before launching an ethical AI plan, he added.
Gambelin agreed that explainability comes first, as it allows an ethicist to come in, work with the system, and pinpoint what needs tweaking.
It becomes harder and more time-consuming for ethicists to update technologies that aren’t explainable; it involves more digging and asking whether you’ve caught everything, since you can’t see into the model, she said.
“If AI is a black box, you can still control the inputs, like the training data, the test data, and still make it somewhat ethical to your point, but then you don’t know if the output is still integral or with integrity, I guess,” Chevala said. “So, that’s what we also arrived at and it’s an important distinction because most of AI that people understand today is really not transparent.”
AI is still a black box, making it hard to trust, he added.
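To make that distinction concrete, here is a minimal sketch, assuming scikit-learn and its bundled breast cancer dataset (neither is mentioned in the episode), contrasting a transparent model whose learned rules can be read directly with a black-box model that can only be probed from the outside:

```python
# A minimal sketch, not from the episode: an inherently interpretable
# model versus a post-hoc probe of a black-box one.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Transparent model: the decision rules can be printed and audited line by line.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# Black-box model: we can only control inputs and observe outputs, e.g. by
# permuting each feature and measuring how much performance degrades.
forest = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=5, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```

With the transparent tree, an ethicist can point at the specific rule that needs tweaking; with the black box, the permutation probe only hints at which inputs matter, which is exactly the extra digging Gambelin describes.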
A Three-Step Process for Building an Ethics Strategy
In her conversation with the team, Gambelin highlighted a three-step process, framed as three questions to ask when building an ethics strategy, to get organizations started.
1. Knowing who is building the AI
When building algorithms, companies should establish who is building the AI. That encompasses whether the team is trained, whether they understand and flag the risks, and whether they know how to look for opportunities to build a better system.
“So, you’re looking at ensuring that who is building your technology is equipped to be able to do so in a way that’s embedding ethics,” Gambelin said.
2. Knowing how they are being built
Once who is building a system is established, the focus should fall on how it is being built.
“You’re looking at frameworks, you’re looking at workflows, you’re looking at policies. These are all guiding tools,” Gambelin said. “Once you have the people in place and you understand fairness in the context of your company, you’re helping automate and be able to keep up on that solution with the process level. Really important.”
3. Knowing technology comes after
People often ask why technology doesn’t come first in this framework, Gambelin mentioned.
Technology comes last because, for it to operate successfully, you first need the right team, one that knows how to build it ethically.
“With technology as the last point, you can say we have everything in place so when we make decisions around our technology, we have the right people putting it into action and they know how to,” she added.
At this stage, organizations can start asking bigger existential questions and determine what data sets they should collect and use, how to use them, what kinds of biases to look for, and so on.
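As one illustration of the kind of bias check a team might run at this stage, here is a hypothetical sketch (the data and column names are invented, not from the episode) that compares how well each group is represented in a training set and how positive outcomes are distributed across groups:

```python
# A hypothetical sketch of a simple representation and outcome-rate check
# on training data; the data and column names are made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# How well is each group represented in the data set?
print(df["group"].value_counts(normalize=True))

# Do positive outcomes differ sharply between groups? A large gap is a
# flag to investigate before the model learns it as a pattern.
rates = df.groupby("group")["approved"].mean()
print(rates)
print(f"Demographic parity gap: {rates.max() - rates.min():.2f}")
```

A check like this doesn’t decide whether a gap is acceptable; that judgment is the ethics work the earlier steps put people and processes in place to do.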
The Future of AI Within the Ethics Realm
Gambelin discussed where ethical AI is headed and how companies can get on board with it.
Ethics in AI can be looked at in two different ways: what can go wrong, and what can go right.
On the what-can-go-wrong side, a technology is launched, it goes off the rails, and you pull it back and fix it. On the what-can-go-right side, teams must know their customers’ values and design the technology with those values in mind to earn customers’ trust in it.
Some factors to look at include data privacy and strategy: whether you’re designing your data collection with opt-in versus opt-out features, whether there is a way for your customers to give you feedback, and whether your communication is transparent, Gambelin said.
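As a small illustration of the opt-in design Gambelin mentions, here is a hypothetical sketch (the class and field names are invented for this example) of a consent record where every data-use flag defaults to off, so customers must actively opt in:

```python
# A hypothetical sketch of opt-in-by-default consent; all names are
# invented for illustration, not taken from the episode.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentPreferences:
    # Opt-in design: every flag starts False until the customer
    # explicitly enables it, and we record when that happened.
    analytics: bool = False
    marketing_emails: bool = False
    data_sharing: bool = False
    updated_at: Optional[datetime] = None

    _FLAGS = {"analytics", "marketing_emails", "data_sharing"}

    def opt_in(self, flag: str) -> None:
        if flag not in self._FLAGS:
            raise ValueError(f"Unknown consent flag: {flag}")
        setattr(self, flag, True)
        self.updated_at = datetime.now(timezone.utc)

prefs = ConsentPreferences()          # nothing shared by default
prefs.opt_in("marketing_emails")      # an explicit, recorded choice
print(prefs)
```

The opposite, opt-out, would default these flags to True, which is precisely the design choice Gambelin suggests scrutinizing.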
She advises teams to:
1. Educate themselves on ethical AI and responsible tech.
2. Find a community to engage with to help navigate the overwhelming amount of information out there, slowly becoming familiar with the discussions and chipping away at challenges one at a time.
For more advice on ethical AI from expert Olivia Gambelin, don’t forget to listen to the full episode and subscribe to the Decisions Now podcast today!