If all of your time is spent correcting the answers you already know you want it to give you, what use is it to you exactly?
Like, I'll take your word for it: you can trick it into giving more correct answers.
You would only do this if you already know the correct answers.
I mean, you can use it for rubber-ducking, I suppose. I don't know if that's revolutionary, but I guess it's not useless.
I pointed out general strategies for making it more accurate without supervision. Getting LLMs to be reliable enough to use without supervision will be a matter of adding multiple layers of safeguards.
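To make that a bit more concrete, here's a rough sketch of what "multiple layers of safeguards" could look like: a self-consistency vote over several samples, followed by a separate verification pass, with refusal as the fallback. Everything here is illustrative; the `llm` callable, the sample count, the majority threshold, and the verification prompt are stand-ins I'm assuming, not anything from the comment above.

```python
# Minimal sketch of layered safeguards around an LLM call, assuming a
# generic `llm(prompt) -> str` completion function (a stand-in, not a
# specific library's API).
from collections import Counter
from typing import Callable

def answer_with_safeguards(
    llm: Callable[[str], str],  # hypothetical: prompt in, text out
    question: str,
    samples: int = 5,
) -> str | None:
    # Layer 1: self-consistency -- sample several answers and keep the
    # one the model gives most often.
    answers = [llm(question).strip() for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    if count < (samples // 2) + 1:
        return None  # no clear majority -> treat the answer as unreliable

    # Layer 2: ask the model to check the proposed answer in a separate pass.
    verdict = llm(
        f"Question: {question}\nProposed answer: {best}\n"
        "Reply with exactly YES if the answer is correct, otherwise NO."
    )
    if "YES" not in verdict.upper():
        return None  # failed verification -> refuse rather than guess

    return best
```

The point of stacking layers like this is that each one can only veto: disagreement or a failed check pushes the system toward refusing, rather than toward a confidently wrong answer. Whether that's enough to call it "reliable without supervision" is exactly what's being argued here.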