
Expert source on ethical AI in the workplace and hiring (Includes interview)

Rick Britt argues that artificial intelligence – when used ethically – is positively impacting people when it comes to hiring and the employee experience. He adds that AI should be used to enhance human capability and help workers improve and grow, not to place a “big brother” monitoring system on them.

Digital Journal spoke with CallMiner’s Britt about examples of how AI has helped companies develop and retain top talent, simplify jobs, and drastically improve the employee experience.

Digital Journal: How is artificial intelligence disrupting business?

Britt: If by disruption we mean a radical change in how a business operates, then the most disruptive aspect of AI is the need to rethink existing human-built processes as AI processes, essentially transforming a business from human-to-human to machine-to-human. These new processes are data-driven and can cut the time it takes to do mundane tasks. It is difficult to give up a process that is working for a new, AI-driven one, but companies are having to do so to keep pace with their competitors and with the world in general. The biggest challenge is that many disparate models need to be coalesced into a single strategy, so integration is the next immediate frontier after basic adoption.

DJ: Which types of businesses are most impacted?

Britt: If it’s not too cheeky: simply those that use AI. Those that aren’t adopting AI practices may be left behind, which is a different kind of impact. Companies with large, well-organized data sets have the easiest adoption path; because of their regulatory nature, healthcare and financial services companies fall into this category. So do the technology firms that have built their data in a more AI-friendly fashion, from smaller players like CallMiner to titans like Google, Amazon, Facebook and Microsoft. Companies with messy data and approaches will probably follow once they fix that. Think of what Netflix is to an MSO cable operator, or what Uber and Lyft are to a cab company. I wonder if a full streaming service with deep AI even sees a cable provider as a competitor. The MSOs and dish providers are following the disruption, and the evolution is so fast that Quibi is further revolutionizing the space: why are shows an hour long, why not a 10-minute consumable chunk?

DJ: What forms of artificial intelligence are most impactful?

Britt: My answer is boring, so it’s probably correct: whichever forms are simplest are typically the most impactful, just by scale alone. The most common by far are business analytics teams using more established statistical techniques in machine learning to get better answers to business questions. The statistical and mathematical approaches to data science are more common than the neural or more temporal ones. They’re just not as cool. The reason is that we humans understand and trust them more easily. If you can see all the features and their importance in a model, it’s easier to understand why the model is making a given prediction.

‘Explainability’ is how a model earns trust. Neural approaches are more complex, so their output is difficult to explain without first explaining differential calculus. While these models are incredibly impactful, for example classifying which product reviews show disgust or disinterest, it’s hard to make a change from a prediction alone. We need to know the “why” before we can recommend a change. As examples: models to lend money are typically based on statistical machine learning, while image processing for self-driving cars is neural-network based.
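To make the explainability point concrete, here is a minimal Python sketch of the kind of statistical lending model Britt describes. The feature names and data are hypothetical, invented purely for illustration; the point is that a model of this type exposes its feature weights directly, which is what makes the “why” behind a prediction easy to see.

```python
# A minimal sketch, assuming hypothetical lending features and synthetic data.
# Statistical models like logistic regression expose their feature weights,
# so each prediction can be traced back to the inputs that drove it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "years_employed"]

# Synthetic applicants: repayment odds rise with income and job tenure,
# and fall with existing debt.
X = rng.normal(size=(500, 3))
signal = 1.2 * X[:, 0] - 1.5 * X[:, 1] + 0.6 * X[:, 2]
y = (signal + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# The coefficients are the explanation: how strongly, and in which direction,
# each feature pushes an application toward approval or denial.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")
```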

DJ: What is meant by ‘ethical AI’ and does this matter?

Britt: Ethical AI matters more than we can even imagine. Most models are built to predict something, just like a human decision. We humans make decisions based on the data we have from experience and our interpretation of that data. A model is only as good as the data it was trained on. Feed it biased data or a biased approach and it will be biased, even if that is not your intention. There are many embarrassing examples of biased data driving great models to make bad choices: Apple’s phone face recognition not opening for people of a certain ethnicity, or a recidivism model for the rehabilitation of criminals that errs against an ethnicity. We call this bias. Understanding the biases in the data helps us understand, uncover, and finally prevent the biases in the model.

A second ethical concern is the malicious use of a good model. A bias can be intentionally introduced. In the cycle of the last US presidential election, Cambridge Analytica used bias to attempt to change the outcome of an election. Models are just equations. They don’t care who won or lost. By teaching humans how to use AI properly, we can avoid the unethical side of AI. Teaching a human that the model gives a racist result because of historically racist data — and NOT because of a causal relationship between race and anything else — gives the human a chance to intervene, stopping the cycle of history repeating itself.
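As a hedged illustration of that point (not drawn from any of the systems Britt names), the sketch below trains a model on synthetic, historically biased decisions. Group membership is never an input, but a correlated proxy feature lets the model reproduce the old disparity anyway; the group labels, features and numbers are all invented for the demonstration.

```python
# A minimal sketch, with entirely synthetic data, of how biased training
# labels resurface in a model's predictions through a correlated proxy
# feature, even though group membership itself is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)                     # 0 = group A, 1 = group B
skill = rng.normal(size=n)                        # what we actually want to measure
proxy = skill + 1.5 * group + rng.normal(size=n)  # e.g. a zip-code-like feature tied to group

# Historical decisions were biased: group B was approved less often at equal skill.
hist_approved = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

# Train only on skill and the proxy -- group is deliberately left out.
model = LogisticRegression().fit(np.c_[skill, proxy], hist_approved)
pred = model.predict(np.c_[skill, proxy])

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: predicted approval rate = {pred[group == g].mean():.0%}")
# The gap between the two rates is the historical bias, faithfully learned.
```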

DJ: How can AI be designed so that it remains ethical?

Britt: There are many steps data scientists employ to reduce the amount of unethical bias in their modeling. One of these is debiasing your data. Every data set is biased by nature; the key is understanding that bias and its effects. In CallMiner’s data, it is not uncommon for a customer to be frustrated when interacting with an off-shore agent. When training a model, it will group derogatory terms near those geographies or cultures. The data is accurate to the intention of the customer, but it is societally racist. We need our users to understand why this is before any decision is made from it. Leaving the bias in helps clients realize that the environment these agents work in can be hurtful, and then offer resources to help them cope with that.
Another real problem in text data is the “he is to she” problem.

There are word models that predict analogies. You remember them from your SATs: “Dog is to Cat as Mongoose is to Snake.” Depending on the training data set, human biases emerge. Take this example: “King is to Queen as Prince is to _____.” The model is looking to predict “Princess,” and it should. But what about “Man is to Woman as Engineer is to _____”? The answer should simply be “Engineer,” since the job does not change with gender, but a model trained on internet data will say “Homemaker.” Man : Woman, Doctor : Nurse… you get the idea. Why does the model output this sexist answer? Because the data it was trained on contained subtly sexist connotations. Humans must be in the loop and must not trust machines blindly. Machines are no less biased or accurate than the data or humans used to train them. They are no better than us in this regard.
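The analogy test Britt describes can be run against any public pretrained word embedding. The sketch below uses gensim and the GloVe vectors shipped with its downloader; the exact words returned depend on which embedding you load, so treat the output as illustrative rather than a reproduction of any specific study.

```python
# A sketch of the word-analogy test above, using gensim's downloader and a
# publicly available GloVe embedding. Results vary with the vectors chosen.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained on Wikipedia + Gigaword text

# "King is to Queen as Prince is to ...?"  -> typically "princess"
print(vectors.most_similar(positive=["prince", "queen"], negative=["king"], topn=1))

# "Man is to Woman as Engineer is to ...?"
# With web-trained vectors, the top answers often reflect gendered job
# stereotypes learned from the text rather than a neutral "engineer".
print(vectors.most_similar(positive=["engineer", "woman"], negative=["man"], topn=3))
```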

DJ: Is an international framework needed for ethical AI?

Britt: As in regulation? Lawmakers who may not have a deep grasp of what machine learning is and actually does will have a hard time imposing fair and effective regulations. The best practice is to understand the bias and ask questions. My fear is that regulations like this will limit innovation.

Another topic in ethical AI is data ownership. When a company uses data from many sources to make a profit, should the creators and owners of that data receive a share of the profit too? Or is the company that pays to store the data the owner? These are difficult questions that don’t have clear answers. While I’d love to assume that data scientists all have their hearts in the right place, using AI to automate the boring and mundane and leaving humans to the more complex problem solving, there may be some who seek to profit at the expense of the marginalized populations whose data they use. Securing proper compensation for data usage will be a need in our future.

Right now, there are a few different data regulations. I wouldn’t be surprised if an international one comes soon, but I do hope that the focus is on ethical advancement of the field over limitations and misinformed rules.

Written By

Dr. Tim Sandle is Digital Journal's Editor-at-Large for science news. Tim specializes in science, technology, environmental, business, and health journalism. He is additionally a practising microbiologist and an author. He is also interested in history, politics and current affairs.
