Artificial intelligence (AI) is a transformative force with the potential to reshape how organizations of all sizes operate. While there’s a pressing need for ethical artificial intelligence solutions to assist organizations with sourcing, interviewing, candidate selection and career movement decisions, such solutions come with risks that must be addressed. For example, an AI solution should help to ensure that every candidate receives equal consideration regardless of their race, color, national origin, religion or sex.
To mitigate the risks associated with AI solutions, organizations need to prioritize ethics as an essential component of their implementation plans. This means that AI must reflect an organization’s commitment to ethical business practices and help facilitate regulatory compliance.
Here are some guiding questions to help organizations prioritize ethics in deploying AI solutions.
How Can AI Solutions Support Ethical Business Practices?
Before you adopt an AI solution to help with talent selection and management, document how the solution will help your organization act ethically. If you’ve already deployed a solution, focus on how it has helped you do so to date. If your organization uses a third-party solution, ask the provider to explain how their technology supports ethical hiring decisions.
To get the answers you need, you’ll have to analyze your organization’s data as well. For example, according to your hiring statistics, would your organization be better positioned to meet its diversity and inclusion goals for new hires if it used AI?
“If we look at ethnicity just as one example — the mix of the available workforce in various regions of the country — it’s going to differ,” says Jack Berkowitz, ADP’s Chief Data Officer. “That doesn’t mean that you shouldn’t be able to strive for diversity that works for your company. Today’s technology can help organizations easily evaluate their metrics against those aggregated from others in their region. Think of the actions you can take if you know where your company stands and how it compares to nearby peers and competitors.”
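The comparison Berkowitz describes can be as simple as placing a company's workforce mix next to a regional benchmark, group by group. Here is a minimal sketch with entirely hypothetical category names and percentages (nothing below comes from ADP's products or data):

```python
# Sketch: compare a company's workforce mix against a regional
# benchmark. All group names and percentages are hypothetical.

company = {"group_a": 0.62, "group_b": 0.25, "group_c": 0.13}
regional = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

def gaps(company, regional):
    """Percentage-point difference per group (positive means the group
    is overrepresented relative to the regional benchmark)."""
    return {g: company[g] - regional[g] for g in regional}

for group, gap in sorted(gaps(company, regional).items()):
    print(f"{group}: {gap:+.0%} vs. region")
```

Even a simple gap report like this shows leaders where their organization stands relative to nearby peers, which is the starting point for the actions Berkowitz mentions.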
What Mechanisms Exist to Detect Biased Decision-Making?
Before its initial deployment, and throughout its operation, every AI solution requires a degree of human oversight to ensure it upholds ethical hiring practices and functions as designed. Without such oversight, errors and bias can go undetected.
For example, a solution that does not recognize degrees from foreign institutions might reject qualified candidates, regardless of their experience or command of the English language. Uncovering unintended bias in an AI solution requires a willingness to scrutinize its performance frequently. Decide how often a human will review the solution's output for errors. A monthly review could catch a mistake quickly, perhaps even while the position remains unfilled, allowing your organization to re-engage with rejected candidates.
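One concrete form such a periodic review could take is an adverse-impact check: compute the selection rate the screening tool produced for each demographic group and flag any group whose rate falls below 80% of the highest group's, a rough screen drawn from the EEOC's "four-fifths" rule of thumb. The sketch below assumes hypothetical group names and counts exported from a screening tool; a flag is a prompt for human review, not a legal determination:

```python
# Sketch of a periodic adverse-impact check on an AI screener's output,
# using the EEOC "four-fifths" rule of thumb as a rough flag for review.
# Group names and counts are hypothetical illustration data.

def selection_rates(outcomes):
    """Map each group to selected / total applicants."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return groups whose selection rate is below `threshold` times
    the highest group's rate -- candidates for human review."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical monthly export: group -> (selected, total applicants)
monthly = {
    "group_a": (45, 100),  # 45% selected (highest rate)
    "group_b": (30, 100),  # impact ratio 0.30 / 0.45 ≈ 0.67 -> flagged
    "group_c": (40, 100),  # impact ratio 0.40 / 0.45 ≈ 0.89 -> not flagged
}

for group, ratio in four_fifths_flags(monthly).items():
    print(f"{group}: impact ratio {ratio:.2f}, human review recommended")
```

Running a check like this on each month's output gives reviewers a short list of outcomes to examine while rejected candidates can still be re-engaged.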
Furthermore, is there a channel for employees or candidates to submit concerns regarding how the solution operates? A web-based form is one straightforward, practical way to collect that feedback.
“We can use the data from AI to make better decisions, but we have to remain vigilant, recognizing what’s going into the system and using it in a way that makes sense for our organizations without bias,” says Meg Ferrero, ADP’s Vice President and Assistant General Counsel.
Is There a Cross-Divisional Team To Critique and Support the Solution?
Ensuring ethical artificial intelligence solutions function as designed and do not present a compliance risk requires a multidisciplinary approach. For example, incorporating privacy by design, which prioritizes privacy at every stage of an organization’s operations, may require input from data privacy professionals. Similarly, complying with employment law will require assistance from suitably qualified employment counsel.
Operational leaders should also provide feedback on the performance of employees whom the solution identified as candidates. If your organization's hiring activity increases, the social climate changes or the regulatory environment evolves, the team should increase its oversight of the solution.
Ethical artificial intelligence can transform your organization’s hiring process, enabling human resources professionals to focus more time on nurturing potential candidates and less time on the administrative elements of their role. However, critical errors, such as mistakes in how a solution scans resumes and captures keywords, could violate your organization’s commitment to ethical hiring practices.
Ethical use of AI, therefore, requires businesses to ensure that errors and implicit or explicit bias do not result in hiring decisions that a reasonable person would view as unethical and contrary to the organization’s values. “People will not use technology they don’t trust. We need data to power AI. That’s how we gain insight,” says Jason Albert, ADP’s Global Chief Privacy Officer. “They are not going to trust the technology if they don’t have some say in how their data is being used or understand how it is being protected.”