In 2020, AI algorithms get their reality check
Stuart Dobbie, product owner at Callsign, predicts that next year AI algorithms will face a reality check.
A flood of businesses are looking to AI to solve business challenges, automate processes and differentiate their solutions in highly competitive markets. But with the pressure to bring these solutions to market quickly, we’ve seen gender, racial and socioeconomic bias taint public trust in some of the biggest AI projects.
In 2020, businesses will pivot quickly to ensure their AI projects live up to their promise without harming particular users or customer bases through data-driven biases. First, we’ll see companies across industries – finance, tech, education, government and beyond – investing even further in strict governance processes. This will be coupled with a culture of open communication and scrutiny. If AI projects once lived under a veil of secrecy, in the next few years more companies will be eager to involve a wide range of contributors and stakeholders in validating their AI models. This could mean sharing private and public proofs of concept, working with the academic and research communities on peer reviews, or even involving consenting members of the public in testing.
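To make that kind of validation concrete, here is a minimal sketch of one check an external reviewer might run: the demographic parity difference, i.e. how far a model's positive-decision rates diverge across groups. The column names, data and threshold below are illustrative assumptions, not anything Callsign has described.

```python
# A minimal sketch of one bias check a validation exercise might include.
# All names, data and the 0.1 threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  outcome_col: str) -> float:
    """Gap between the highest and lowest positive-outcome rates
    across the groups in group_col."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical automated decisions: 1 = approved, 0 = declined.
decisions = pd.DataFrame({
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
    "approved": [1,    0,   1,   0,   1,   1,   1,   0],
})

gap = demographic_parity_difference(decisions, "gender", "approved")
print(f"Approval-rate gap across groups: {gap:.2f}")

# Tolerances like this are a policy choice, not a technical constant.
if gap > 0.1:
    print("Flag for human review: possible data-driven bias.")
```

A single metric like this is only a starting point; a serious governance process would combine several fairness measures with qualitative review of the training data itself.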
Regulation can push companies further still to nurture this culture of algorithm scrutiny. Where companies fail to build that culture of their own accord, existing regulations such as GDPR can encourage them to scrutinise their algorithms; if that proves insufficient, new regulations will need to be developed to enforce it. Failing to have the appropriate checks in place has consequences for data-processing compliance and opens lines of recourse around automated decision-making: under GDPR, data subjects can contest automated decisions, request human intervention, and expect fairness to be reviewed and maintained. In 2020 and the years to come, companies that cannot explain a black-box score, or reach a similar conclusion when a decision is reviewed manually, will draw further scrutiny.
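As one hedged illustration of what "explaining a black-box score" can look like in practice, the sketch below fits a small, human-readable surrogate model to a black box's predictions so a reviewer can follow its approximate logic. This is a common generic technique (a global surrogate), not the author's prescribed method, and the models and data are stand-ins.

```python
# A sketch of global surrogate modelling: approximate an opaque model
# with a shallow decision tree so its logic can be reviewed by a human.
# The dataset and both models are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for real decisioning data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "black box" whose scores must be explainable on request.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: a depth-limited tree trained to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully the surrogate reproduces the black box's decisions.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.1%}")

# Human-readable rules a reviewer can check a contested decision against.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```

A shallow tree trades fidelity for reviewability; in practice a reviewer would look at both the fidelity figure and the specific rule path taken by the contested decision before deciding whether the automated outcome stands.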