Why must artificial intelligence be explainable (XAI)?

Humans can hardly understand the algorithms behind machine learning and text-driven artificial intelligence applications: How and why are decisions made? Are the results sufficiently fair, transparent, and explainable? Are they biased in one way or another? For instance, if the language model used for training was not neutral, various biases would arise […]