Sign up and we will send you the recording
We are proud to welcome our newest guest speaker, Peter, a recognized software strategist, evangelist, and an excellent speaker on all things software. He will present the webinar “Testing for Cognitive Bias in AI: Why Machine Learning Applications Are Like People”.
When we train AI systems using human data, the result is human bias.
We would like to think that AI-based machine learning systems always produce the right answer within their problem domain. In reality, their performance is a direct result of the data used to train them. The answers in production are only as good as that training data.
Data collected by humans, such as surveys, observations, or estimates, can carry built-in human biases. Even objective measurements can be measuring the wrong things or missing essential information about the problem domain.
The effects of biased data can be even more deceptive. AI systems often function as black boxes: technologists cannot see how the system arrived at its conclusion. That opacity makes it particularly hard to identify the inequality, bias, or discrimination feeding into a given decision.
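As a minimal sketch of the point above, the following self-contained Python example uses entirely synthetic, hypothetical hiring data (the groups, rates, and helper names are all invented for illustration). Historical approvals are skewed against one group; a naive model fit to that history then reproduces the skew, even though nothing in the code mentions the group as a factor to discriminate on.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical, synthetic "historical hiring" records in which equally
# qualified candidates from group "B" were approved far less often --
# a stand-in for human bias baked into training data.
def make_record():
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    if qualified:
        # Biased historical labels: qualified "B" candidates approved less often.
        approved = random.random() < (0.9 if group == "A" else 0.4)
    else:
        approved = random.random() < 0.05
    return group, qualified, approved

history = [make_record() for _ in range(10_000)]

# "Train" a naive model: majority vote per (group, qualified) bucket.
counts = defaultdict(lambda: [0, 0])  # (approvals, total) per bucket
for group, qualified, approved in history:
    counts[(group, qualified)][0] += approved
    counts[(group, qualified)][1] += 1

def model(group, qualified):
    approvals, total = counts[(group, qualified)]
    return approvals / total > 0.5

# The model faithfully reproduces the bias in its training data:
print(model("A", True))  # True  -- qualified "A" candidate approved
print(model("B", True))  # False -- equally qualified "B" candidate rejected
```

The point of the sketch: the code contains no explicit rule against group "B", yet the trained behavior discriminates, because the bias lives in the labels, not the algorithm.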
This webinar will explain:
Attendees will gain a deeper understanding of:
The webinar will end with a Q&A.
Speaker: Peter Varhol, Software Strategist & Evangelist
Peter Varhol is a well-known writer and speaker on software and technology topics. He has authored dozens of articles and spoken at a number of industry conferences and webcasts. Peter has advanced degrees in computer science, applied mathematics, and psychology. Currently, he has his own consulting company, Technology Strategy Research. His past roles include technology journalist, software product manager, software developer, and university professor.