Episode 11 – Understanding AI Models with Joshua Starmer

Machine learning (ML), artificial intelligence (AI), and data science are paving the way for progress across business sectors. AI adoption is in full swing for many organizations; Gartner predicts 2022 AI software revenue will grow 21.3% over 2021.

Successfully applying these technologies to various use cases, and helping different departments succeed, depends on understanding the underlying concepts. On the eleventh episode of the Decisions Now podcast, Joshua Starmer, founder and CEO of StatQuest and Lead AI Educator at Lightning AI, dives into the nuts and bolts of ML algorithms, keeping up with the technology, educating teams on ML within organizations, and more.

Don’t miss this engaging episode as co-hosts Rigvinath Chevala, EVS chief technology officer, and Erin Pearson, VP of marketing, get the scoop from Starmer in the most musical manner. Subscribe to the Decisions Now podcast today; you can find us on Spotify, Apple Podcasts, and Amazon Music, among other platforms.

 

Verifying Your Models and Data 

 

When running different models and different interpretations of data, teams must know how to ask the right questions to verify and trust the AI and the results they land on. Starmer sheds light on what teams should know.

There are many rigorous practices from statistics, ML, and data science that practitioners no longer apply, Starmer notes.

“When in doubt we can always basically try to calculate error or error bars,” Starmer adds. “We don’t often associate errors with the output, and I think that’s something that needs to come into fashion: people need to be a little more rigorous with their machine learning.”
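One common way to put error bars on an ML metric is bootstrap resampling. The sketch below is illustrative only (the toy data, function name, and 95% interval choice are our own, not from the episode): it resamples per-example errors to get a confidence interval for mean absolute error.

```python
import random

random.seed(42)  # for reproducibility of this sketch

def bootstrap_error_bars(y_true, y_pred, n_boot=2000):
    """Bootstrap a 95% confidence interval for mean absolute error."""
    errors = [abs(t - p) for t, p in zip(y_true, y_pred)]
    n = len(errors)
    boot_means = []
    for _ in range(n_boot):
        # resample the per-example errors with replacement
        resample = [errors[random.randrange(n)] for _ in range(n)]
        boot_means.append(sum(resample) / n)
    boot_means.sort()
    # 2.5th and 97.5th percentiles of the bootstrap distribution
    return boot_means[int(0.025 * n_boot)], boot_means[int(0.975 * n_boot)]

# Toy ground truth vs. model predictions (made up for illustration)
y_true = [3.1, 2.4, 5.0, 4.2, 3.8, 2.9, 4.7, 3.3]
y_pred = [3.0, 2.8, 4.6, 4.1, 4.0, 2.5, 4.9, 3.6]
low, high = bootstrap_error_bars(y_true, y_pred)
print(f"MAE 95% interval: [{low:.3f}, {high:.3f}]")
```

Reporting the interval alongside the point estimate is exactly the kind of rigor Starmer is describing: a single accuracy number hides how much it might move on different data.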

 
He recommends and discusses SHAP values, as they give you a sense of which variables play a significant role in your models. In the past, teams used decision trees, which were helpful because of how explainable they were.

“With these relatively new SHAP values and other things, what we can do is we can apply that same interrogation to even so-called black box models, like neural networks or support vector machines and things like that. We’ve got newer tools, we’ve got old tools such as statistics, and we’ve got newer tools like these Shapley things that we could maybe use a little bit more in both cases,” Starmer said.  
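In practice teams typically reach for the `shap` library, but the underlying idea can be shown self-contained: each feature's Shapley value is its average marginal contribution over all coalitions of the other features, with "absent" features replaced by background values. The toy additive model, feature values, and background below are our own illustration, not anything from the episode; the same function works on any black-box `predict`.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, background):
    """Exact Shapley attributions for one prediction of a black-box model.

    Features outside a coalition are replaced by their background values,
    a common convention in SHAP-style explanations. Exponential in the
    number of features, so only practical for small toy examples.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for coalition in combinations(others, size):
                # model value with the coalition present, feature i absent
                z = list(background)
                for j in coalition:
                    z[j] = x[j]
                without_i = predict(z)
                # ... and with feature i added to the coalition
                z[i] = x[i]
                with_i = predict(z)
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (with_i - without_i)
    return phi

# Toy "black box" -- in reality this could be a neural net or an SVM.
def model(features):
    a, b, c = features
    return 3 * a + 2 * b - c

x = [1.0, 2.0, 0.5]
background = [0.0, 0.0, 0.0]
print(shapley_values(model, x, background))  # approximately [3.0, 4.0, -0.5]
```

A useful sanity check: the attributions always sum to the difference between the model's prediction at `x` and at the background, which is what makes this "interrogation" trustworthy even for black-box models.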

 

Knowing the background of data sets, and what the statistics are made of, is critical to understanding the output and then driving decisions from it, Pearson said.

Some questions Starmer said teams must ask are: 

- Does the sample size represent the right population?
- Where is the data coming from?
- What do you want to do with the data?
- How much data do we have?
- What do we want the models to do?