this post was submitted on 03 Mar 2025
682 points (99.1% liked)

Technology


But the explanation and Ramirez’s promise to educate himself on the use of AI weren’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned $15,000.

Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.

[–] [email protected] 2 points 8 hours ago (1 children)

Yeah, I know how LLMs work, but still: if the definition of lying is giving false, absurd information while knowing it is absurd, then you can definitely instruct an LLM to “lie”.

[–] [email protected] 2 points 8 hours ago

A crucial part of your statement is that the liar knows the information is untrue, which an LLM is incapable of. I would agree with you if it were actually capable of understanding.