this post was submitted on 08 Nov 2023
Science
Interesting that the article ends with “The new ChatGPT catcher even performed well with introductions from journals it wasn’t trained on.” Isn’t that the whole point? If you only judge a model on the data it was trained on, you get a biased picture of its performance. I can’t remember the exact term for it, but it’s essentially over-relying on your own dataset, so of course it will get near-100% accuracy on what it was trained with. I’d be curious to see what the accuracy is on other papers.
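For reference, here’s a minimal sketch of the kind of check I mean, with a simple bag-of-words classifier standing in for the actual catcher (the `load_introductions` helper is hypothetical, and this is not the paper’s pipeline): compare accuracy on the training set against accuracy on introductions from journals the model never saw.

```python
# Minimal sketch (not the paper's actual model): a bag-of-words classifier
# trained to separate human-written vs. ChatGPT-generated introductions,
# evaluated both on its own training data and on a held-out set from
# journals it never saw.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical loaders returning (texts, labels); label 1 = AI-generated, 0 = human.
train_texts, train_labels = load_introductions(journals="seen")
unseen_texts, unseen_labels = load_introductions(journals="unseen")

vectorizer = TfidfVectorizer(max_features=20_000)
X_train = vectorizer.fit_transform(train_texts)
X_unseen = vectorizer.transform(unseen_texts)

clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

# Accuracy on the training set is inflated; accuracy on introductions
# from unseen journals is the number that actually matters.
print("train accuracy:         ", accuracy_score(train_labels, clf.predict(X_train)))
print("unseen-journal accuracy:", accuracy_score(unseen_labels, clf.predict(X_unseen)))
```

A large gap between those two numbers is exactly the effect I’m describing.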
Overfitting is the usual term.
There we go, thanks for the addition! I did a lot of ML/DL stuff about 2 years ago but just couldn’t remember the term.