It’s a particularly relevant question now, as governments consider more AI regulations, courts deal with AI-related cases, and society grapples with the real-world, sometimes tragic, consequences of the technology.

Sack says companies need to consider what ethical, legal, and compliance implications could arise from their AI strategies and use cases, and address them sooner rather than later.

“Ethical, legal, and compliance preparedness helps companies anticipate potential legal issues and ethical dilemmas, safeguarding the company against risks and reputational damage,” he says. “If ethical, legal, and compliance issues are unaddressed, CIOs should develop comprehensive policies and guidelines. Additionally, they should consult with legal experts to navigate regulations and establish oversight committees.”

9. What’s our risk tolerance, and what safeguards are necessary to ensure safe, secure, ethical use of AI?

Manry says such questions are top of mind at her company.

“At Vanguard, we are focused on ethical and responsible AI adoption through experimentation, training, and ideation,” she says. “Resulting from senior leader and crew [employee] perspectives, our primary generative AI experimentation thus far has focused on code creation, content creation, and searching and summarizing information.”

She advises others to take a similar approach.

“CIOs must assess risk tolerance and implement safeguards for generative AI to address safety, security, and ethical concerns. By establishing healthy safeguards like data protection protocols and ethical guardrails, CIOs ensure responsible AI use and minimize risks,” she says. “Establish an AI governance framework that defines the organization’s risk tolerance and patterns of acceptable use based on data sensitivity, allowing low-risk generative AI use cases to be fast-tracked while applying more rigorous evaluation to higher-risk applications.

“This approach enables teams to innovate safely and efficiently, while ensuring more rigorous safeguards for use cases involving sensitive data. By implementing robust security measures, bias mitigation techniques, and an ethical review process, CIOs can minimize risks and ensure responsible use of AI.”
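To make that tiering pattern concrete, here is a minimal sketch in Python of an intake gate like the one Manry describes: classify each proposed generative AI use case by data sensitivity, fast-track the low-risk ones, and route higher-risk ones to fuller review. The names and thresholds (UseCase, Sensitivity, triage) are illustrative assumptions, not Vanguard’s actual framework.

```python
# Illustrative sketch of a risk-tiered intake gate for generative AI
# use cases. All types and thresholds are hypothetical examples.
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = 1        # e.g., public docs, marketing copy
    INTERNAL = 2      # e.g., internal code, process docs
    CONFIDENTIAL = 3  # e.g., customer or financial data
    RESTRICTED = 4    # e.g., regulated or personally identifiable data


@dataclass
class UseCase:
    name: str
    sensitivity: Sensitivity
    customer_facing: bool


def triage(uc: UseCase) -> str:
    """Return the review path for a proposed generative AI use case."""
    if uc.sensitivity == Sensitivity.PUBLIC and not uc.customer_facing:
        return "fast-track"        # low-risk: approve under standing guardrails
    if uc.sensitivity in (Sensitivity.CONFIDENTIAL, Sensitivity.RESTRICTED):
        return "full-review"       # security, bias, and ethics review required
    return "standard-review"       # everything in between


print(triage(UseCase("summarize public FAQs", Sensitivity.PUBLIC, False)))
# -> fast-track
print(triage(UseCase("draft client letters", Sensitivity.CONFIDENTIAL, True)))
# -> full-review
```

The point of the sketch is the routing decision, not the specific tiers: any real framework would define its own sensitivity classes and attach concrete review steps to each path.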

Not all organizations are there yet, though: Data governance research from Lumenalta, which delivers custom digital solutions, found that only 33% of organizations have implemented proactive risk management strategies for AI governance.

10. Am I engaging with the business to answer these questions?

CIOs shouldn’t be going it alone, says Sesh Iyer, managing director, senior partner, and North America co-chair of BCG X, the tech build and design division of Boston Consulting Group.

“CIOs must ask themselves whether they are engaging with the business to deliver value with generative AI, whether there is a clear focus on gen AI with a defined pathway to achieving a meaningful return on investments within 12 months, whether they are leveraging the power of the digital ecosystem to support their gen AI agendas, [and] whether they have a clear plan to extract and use data at scale to achieve these goals,” Iyer says.

“These questions are crucial for CIOs to ensure they are delivering value, targeting spend effectively to achieve returns, and considering velocity-to-value — leveraging intellectual property and products from a broader ecosystem to reach value faster. Also, they must determine whether they have the ‘digital fuel’ (i.e., data and infrastructure) needed to achieve these AI-driven outcomes.”

He advises CIOs to “sit down with the business to devise or refine an integrated ambition agenda” and “develop clear business cases that demonstrate returns within 12 months, establish a robust ecosystem strategy, and actively engage with partners to maximize value.”
