Is AI safety corporate responsibility? Manulife shares its view

While the immense potential of AI is not lost on insurance megacorporation Manulife, the company’s global chief AI officer believes the power of this rapidly advancing technology also comes with a level of corporate responsibility.
“At Manulife, we believe AI has a positive transformative potential not just for our business, but for our customers and colleagues,” said Jodie Wallis, whom Manulife appointed as its global chief AI officer last year.
“AI can help us make better decisions, deliver more personalized experiences and unlock efficiencies that were unimaginable just a few years ago. Yet, with all of AI’s potential also comes responsibility,” Wallis said in a recent interview with InsuranceNewsNet.
The multibillion-dollar insurer has been at the forefront of AI adoption in insurance, earning global recognition for its progress and ranking among the top five in the world for AI maturity in Evident’s 2025 AI Index.
However, the company is also putting its money where its mouth is on the subject of safe, ethical AI use. Not only has it established guidelines on safe AI use, but it also sponsored the recent 2025 Hinton Lectures. That lecture series, held in Toronto, Canada, in November, brought together “leading global AI safety experts” to discuss current and future risks of AI.
“As organizations increasingly rely on AI, being committed to responsible AI use is paramount. That means investing in innovation alongside investing in AI safety research. The Hinton Lectures are one of the ways Manulife is supporting AI safety research. The work of researchers is critical to building and trusting AI solutions now and in the future,” Wallis said.
Humanity ‘not doing great’ at AI safety
One of the key challenges in not just insurance but all industries adopting AI is that humanity is “not doing a great job” of managing it safely, according to Owain Evans, a Berkeley-based AI researcher, founder of Truthful AI and the key speaker at the 2025 Hinton Lectures.
He asserted that AI has evolved faster than most expected, including even the CEOs of AI companies.
In fact, Evans estimates that AGI, or fully autonomous AI, will arrive by 2030. The question is whether humankind will be equipped to deal with what he terms the “profound implications” of that, whether from bad actors using AI for nefarious purposes or from AI itself acting out of turn.
“How is humanity doing at addressing these risks? I would say we’re not doing a great job at the moment,” Evans said during the lectures.
He identified two key issues, namely the lack of:
- Rigorous scientific understanding of how to develop AGI safely
- International coordination to govern the development of AGI and minimize its risks
“We’d like to realize the benefits of AGI but also minimize the risks at the same time. For both of these, there’s been progress, but I think we’re far from having a solution,” Evans said.
This is where Wallis’ emphasis on the need for innovation, investment and collaboration comes into sharper focus.
“Collaboration matters, and ethically grounded AI research must connect with industry to deliver real-world impact,” she said.
She believes Manulife is doing its part with “frameworks, practices, processes and tools in place to ensure AI models are developed, deployed, used and behave in ways that are aligned with company values, regulatory requirements and stakeholder expectations.”
“Manulife also empowers our workforce with the knowledge and skills to harness AI responsibly, which is crucial to our success and aligns with our commitment to a productive and ethical use of technology,” Wallis said.
Guiding principles
To this end, Manulife is one of the few insurance companies that has not only released a set of guidelines to govern the ethical use of AI, but has also made those guidelines publicly available for others to benefit from.
For Wallis, this demonstrates the company’s commitment to “living our values in all that we do.” Additionally, she noted that Manulife’s AI principles will continue to evolve as the insurance industry matures on its technological path and as new or updated regulations emerge.
“Transparency builds trust, and if we expect our customers, partners and colleagues to trust our AI models, they need to trust how we use them,” she explained. “These principles are embedded in how we evaluate, test, and deploy AI every day. From explainability in insurance underwriting decisions to optimizing our solutions to reduce environmental impacts, we are creating an environment in which responsible AI isn’t just a checkbox — it’s a mindset.”
Manulife, founded in 1887, is one of Canada’s oldest and most established insurance and financial services providers. It operates in more than 19 countries and territories around the world, including as John Hancock in the United States.
Truthful AI is a non-profit organization that researches safe AI practices and large language model usage. It was founded in 2023 and is based in Berkeley, California.
© Entire contents copyright 2026 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.

