
IBM adding recommended bias monitors to Watson OpenScale

These recommended bias monitors automatically identify attributes like sex, ethnicity, marital status and age and recommend they be monitored.
Written by Larry Dignan, Contributor

IBM is rolling out recommended Watson OpenScale bias monitors for artificial intelligence and machine learning models.

These recommended bias monitors automatically identify attributes like sex, ethnicity, marital status and age and recommend they be monitored. By flagging attributes up front, IBM is removing the need for manual selection of attributes to monitor.
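To make concrete what a bias monitor checks for a flagged attribute, here is a minimal, hypothetical sketch of one common fairness metric, the disparate impact ratio: the rate of favorable outcomes for the unprivileged group divided by the rate for the privileged group (ratios below roughly 0.8 are often treated as a warning sign). The data, group labels, and function name are illustrative assumptions, not Watson OpenScale's actual API.

```python
def disparate_impact(outcomes, groups, privileged, favorable=1):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    Illustrative only -- not the Watson OpenScale implementation.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    priv_rate = sum(o == favorable for o in priv) / len(priv)
    unpriv_rate = sum(o == favorable for o in unpriv) / len(unpriv)
    return unpriv_rate / priv_rate

# Hypothetical example: loan approvals (1 = approved) split by sex.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups   = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]

ratio = disparate_impact(outcomes, groups, privileged="M")
print(round(ratio, 2))  # 0.4 / 0.6 favorable rate -> 0.67
```

A monitor automating this kind of check per attribute is what removes the manual-selection step: the system picks the attributes, computes metrics like the one above on production traffic, and alerts when a threshold is crossed.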

Watson OpenScale's recommended bias monitors can be edited by users, according to an IBM blog.


The company has been building out its Watson suite of products to run on multiple clouds, make data preparation easier and scale algorithms in enterprises. Bias has become a key issue for companies as algorithms scale. One issue is that while an individual model may not exhibit bias, problems can develop when algorithms are combined. Large technology vendors are starting to address algorithmic bias via automation, software and education.

IBM added that it is working with Promontory to expand the list of monitored attributes to address regulatory requirements. IBM is trying to get ahead of algorithmic bias, which is likely to face more regulation in the future. Companies like IBM have led the charge on AI governance, job impact, transparency and bias.

