In AI ‘Wild West,’ firms urged to adopt in-house policies
In the absence of formal regulation of generative AI, international law firm Locke Lord suggested the onus is on businesses and independent professionals, including those in financial services, to create policies ensuring its ethical use.
Jennifer Kenedy, Locke Lord’s general counsel, likened the unregulated use of AI to the “Wild West” in a webinar dissecting some of the major legal issues surrounding the emerging technology.
“It reminds me a lot of the internet. It’s sort of the Wild West right now, and when the internet came upon us, it was the Wild West because it wasn’t regulated,” she said.
Kenedy noted the Biden administration’s executive order calling for regulations, as well as continued calls for “human authority, oversight and control, and to include accountability measures if developers have not taken reasonable steps to mitigate the harm or injury caused by the system,” particularly in a legal context.
“We’ve seen directives to create regulations from the administration, but we haven’t seen those yet,” she said. “There’s definitely issues even beyond legal ethics that they’ve been told to come up with regulations for, but they haven’t handed them down yet. So, I think the next year is going to be very interesting in that regard.”
In the interim, however, she identified three core challenges of ethical AI use that those in legal, risk management or related fields should address in their own AI Acceptable Use Policies:
Confidentiality
AI hallucination
Accountability
“In most instances, generative AI at this point must only be used with significant caution and awareness,” Kenedy said.
Confidentiality concerns
How AI handles the information users feed it is one of the technology’s major challenges, Kenedy emphasized. She noted that many major AI models can store inputted information for up to a month.
While the input may primarily be used to train the AI model, it could also be shared with third parties or used for other purposes, raising additional security concerns.
“In those 30 days, people could be doing searches and getting your information,” Kenedy said. “That’s important to know about this technology. Even if the product doesn’t utilize or share inputted information, it may lack reasonable or adequate security… There’s a lot of gray area there that, frankly, the ones creating the technology are trying to address, but have not as of yet.”
AI hallucinations
Kenedy also pinpointed AI hallucination as a significant challenge, one that is critical for anyone in a field that relies on facts and accuracy.
“Generative AI is prone to hallucinations. That means whatever it creates could contain falsities, and then it tends to double down and insist those falsities are true,” she explained.
The gravity of this problem is best illustrated by a study from Stanford RegLab, which found that large language models hallucinated in response to legal queries at rates ranging from 69% to 88%, depending on the model.
“These findings raise significant concerns about the reliability of some of the large language models, underscoring the importance of careful, supervised integration of these technologies,” Kenedy said.
Human accountability
Questions of confidentiality and false AI-generated results underscore the third major aspect of ethical AI use, Kenedy added: the user’s responsibility.
She explained that “properties such as robots or algorithms” do not have legal status to sue or be sued in the US, meaning they cannot be held accountable.
Citing Resolution 604 of the American Bar Association, she emphasized the importance of “legally recognizable entities, such as humans and corporations” being held accountable for “consequences of AI systems, including any legally cognizable injury or harm that their actions or those of the AI systems or capabilities cause to others, unless they have taken reasonable measures to mitigate against that harm or injury.”
“It’s going to come down to: have you taken reasonable measures to mitigate that harm?” Kenedy said.
Policy recommendations
Despite these challenges, Kenedy granted that AI can be a powerful support tool. She suggested that firms today have two options in their approach.
“Either not use AI and fall behind competitors, and perhaps fail to meet the expectations of your clients or your companies, or use it with a dedicated policy defining the scope and means to check the work of the AI,” she said.
Such policies must encourage professionals in any given industry to “act with reasonable diligence,” she said, adding that AI cannot be used as a substitute for human research or judgment.
She suggested firms consider policies that deter overreliance on AI and require managers to supervise its use, and that they work with IT or cybersecurity experts to enhance security.
“Having a policy and enforcing it, doing the training and having the IT specialist, that’s really probably the best balanced approach you can have,” she said.
Locke Lord is a full-service law firm formed in 2007 as a merger between Locke Liddell & Sapp and Lord Bissell & Brook LLP. Its key sectors include finance & financial services, insurance & reinsurance, private equity, energy & infrastructure and pharmaceuticals.
Rayne Morgan is a content marketing manager with PolicyAdvisor.com and a freelance journalist and copywriter.
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.