Exploring the Flip Side of Explainable AI: Unexplained Expert Systems and Cultural Acceptance for the Unexplained
Current buzz about artificial intelligence tends to attribute the excitement to technological advances, such as the speed that allows real-time conversation and the improved quality of responses from chatbots and generative AI. This talk explores how cultural expectations, rather than technology, may be what recently reached a tipping point and put artificial intelligence in the public spotlight. From a non-technological perspective, explainability is a major difference between machine learning and traditional software, which encoded and applied explicit logical rules. Popular acceptance and embrace of machine learning requires a comfort level with not having an explanation for why the software does what it does. This talk examines how past developments in how we interact with traditional software accustomed the general public to not getting an explanation, even for highly explainable software. Technology tools like skip logic in forms prevent us from seeing the big picture of a logical system. Cloud computing primed us to expect updates over time and to accept constant changes, outside our control, to tools we use daily. When even eminently explainable software goes unexplained, explainability no longer matters. This opens the way for the public to embrace machine learning.
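The skip-logic point can be made concrete with a minimal sketch: the whole rule set below is a small, fully explainable decision tree, yet any one respondent only ever sees a single linear path through it. (The form, questions, and branching rules here are hypothetical illustrations, not from any real survey tool.)

```python
# Skip logic as a tiny decision tree. Each node maps a question to the
# next node chosen by the respondent's answer; None means "end of form".
FORM = {
    "start": ("Do you own a car?", {"yes": "q_fuel", "no": "q_transit"}),
    "q_fuel": ("Is it electric?", {"yes": None, "no": "q_mileage"}),
    "q_mileage": ("Do you drive over 10,000 miles a year?", {"yes": None, "no": None}),
    "q_transit": ("Do you use public transit weekly?", {"yes": None, "no": None}),
}

def run_form(answers):
    """Walk one path through the form; return only the questions actually shown."""
    shown = []
    node = "start"
    while node is not None:
        question, rules = FORM[node]
        shown.append(question)
        node = rules[answers[question]]
    return shown

# A respondent who answers "no" to car ownership never learns that the
# fuel and mileage questions exist, even though the logic is fully
# visible in the FORM table above.
path = run_form({
    "Do you own a car?": "no",
    "Do you use public transit weekly?": "yes",
})
```

The system is perfectly explainable to anyone who can read the `FORM` table, but the interface shows each user only their own branch, so no explanation of the overall logic is ever offered or expected.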