A man’s recent attempt to use an AI-generated avatar in his legal appeal made an immediate impression on a New York courtroom, but probably not the one he was hoping for.
Jerome Dewald — a 74-year-old who, The Register notes, is behind a startup that says it’s “revolutionizing legal self-representation with AI” — was chewed out during an employment dispute hearing on March 26th for failing to inform judges that he had artificially generated the man presenting his oral argument. While the court had approved Dewald to submit a video for his case, Justice Sallie Manzanet-Daniels became confused when an unknown speaker, who clearly wasn’t Dewald, appeared on the screen.
“Hold on,” Manzanet-Daniels said, interrupting the video after the avatar had barely finished its first sentence. “Is that counsel for the case?”
“I generated that,” Dewald responded. “It’s not a real person.”
Dewald told The Register that the avatar — a “big, beautiful hunk of a guy” called Jim — was one of the stock options provided by an AI avatar company called Tavus. Dewald says he submitted the video because of the difficulty he has with extended speaking, but the court wasn’t told that its contents were artificially generated.
“It would have been nice to know that when you made your application. You did not tell me that, sir, I don’t appreciate being misled,” said Manzanet-Daniels, responding to Dewald’s admission. “You are not going to use this courtroom as a launch for your business.”
This is the latest of several snafus that have occurred when people try to mix legal processes with AI technology. Two attorneys and a law firm were penalized in 2023 for submitting fictitious legal research that had been made up by ChatGPT. DoNotPay, a “robot lawyer” company, was also ordered to pay $193,000 in an FTC settlement in February for advertising, without evidence, that its AI legal representation was as good as a real human lawyer.