aiccount

joined 2 years ago
[–] [email protected] 20 points 6 months ago* (last edited 6 months ago) (7 children)

What could they even hand over? They don't even ask for an email, and they claim everything you browse runs entirely in RAM and is never stored or recorded.

[–] [email protected] 10 points 9 months ago (1 children)

You don't have to be delusional to make sacrifices in trying to make a difference. I'm so sick of people pretending there is nothing they could possibly do to help, so they just keep hurting others. It's just like every discussion about factory farms. At least try to help. It will make you feel better, and you can stop getting defensive when people point out things that can be done.

[–] [email protected] 1 points 9 months ago

I think there may be some confusion about how much energy it takes to respond to a single query or generate boilerplate code. I can run Llama 3 on my computer and it does those things no problem. My computer would use about 6 kWh if I ran it for 24 hours; a person, by comparison, uses about half that. If my computer spends 4 hours answering queries and writing code, that's 1 kWh, and that would buy a whole lot of code and answers. The "powers a small town" figure is a one-time cost when the model is trained, so to decide whether that is worth it, the cost needs to be amortized over everyone who ends up using the model that is produced. The math for that is a bit trickier.
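Here's a rough sketch of that arithmetic. The training-run and user-count figures are made-up placeholders just to show the amortization step; the real numbers vary a lot by model:

```python
# Back-of-envelope energy math for local inference.
# All wattage, training, and user-count figures are rough assumptions, not measurements.

PC_POWER_KW = 6 / 24              # ~0.25 kW: a PC that draws ~6 kWh over 24 hours
HUMAN_POWER_KW = PC_POWER_KW / 2  # human metabolism, roughly half the PC

hours_generating = 4
inference_energy_kwh = PC_POWER_KW * hours_generating  # 4 h * 0.25 kW = 1 kWh

# Amortizing a one-time training cost over everyone who uses the model:
training_energy_kwh = 10_000_000  # placeholder for a large training run
num_users = 50_000_000            # placeholder user count
per_user_training_kwh = training_energy_kwh / num_users

print(f"One 4-hour inference session: {inference_energy_kwh:.2f} kWh")
print(f"Training cost amortized per user: {per_user_training_kwh:.4f} kWh")
```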

Compared to the amount of energy it would take to raise and train a group of people who can answer questions and write code, I'm very certain the AI model method comes out considerably cheaper. Hopefully we don't start deciding which one to produce based on energy efficiency, though. If the people who decide the fate of the masses see us as livestock, we may end up having our numbers reduced in the name of efficiency. When cars were invented, the horses didn't all end up living in paradise; there were just a whole lot fewer of them around.

[–] [email protected] 0 points 9 months ago (1 children)

I'm sorry you've been so hurt. I hope you get better.

[–] [email protected] 0 points 9 months ago

Standing up for what you believe isn't sanctimonious. I hope you eventually learn to quit caring so much about what others think of you and start expressing your true opinions. Sure, some people will be offended and walk away, but those aren't the best kinds of people anyway. Quality matters much more than quantity when it comes to who you spend your time with.

[–] [email protected] 0 points 9 months ago

Yeah, I would be unable to respond in any meaningful way if I were arguing your side as well. I know why I'm downvoted: I point out a disgusting habit that many people have and hate to think about. That's fine, though; if I can get through to a single person, it's worth it. Think hard about which side you're on here. It's not a good side at all, and deep down you know that. Sometimes anger is the appropriate response. You'd be angry too if you developed a moral compass.

[–] [email protected] -1 points 9 months ago (2 children)

Yeah, I'm the one without regard for others here. The whole side you're arguing for is about not having regard for others; that's literally what this discussion is about. I say "have regard for others", and you say "no, lol".

[–] [email protected] 0 points 9 months ago (4 children)

I'm sorry it is so hard for you to make the connection between one abused animal and many abused animals. I don't know what else to say. This is textbook cognitive dissonance. Two things couldn't be more related.

[–] [email protected] 0 points 9 months ago (6 children)

When people show outrage about the abuse of a single animal, it is in no way "shoehorning" or a "non sequitur" to point out the massive animal abuse that many people are supporting. I understand that people hate hearing about it, but it's still true.

[–] [email protected] 1 points 9 months ago

This is an issue with many humans I've hired, though. Maybe they try to cut corners and do a shitty job, but I check occasionally; if they're bad at their job, I warn them, correct them, and maybe eventually fire them. For lots of stuff, AI can be interacted with in a very similar way.

This is so similar to many people's complaints about self-driving cars. Sure, there will still be accidents; they aren't perfect, but neither are human drivers. If we hold AI to some standard way beyond people, then no, it's not there yet. But if it just needs to be better than people, then it's there for many applications, and more importantly, it's rapidly improving. Even if it were only as good as people at something, it's still way cheaper and faster. For some things, it's worth it even if it isn't as good as people yet.

I have very few issues with hallucinations anymore. When I use an LLM for anything involving facts, I always tell it to give sources for everything, and I can have another agent independently verify those sources before I see them. Oftentimes I provide the books or papers I want it to source from specifically. Even if I'm going to check all the sources myself afterward, it's still way more efficient than doing the whole thing myself. The thing is, with the setups I use, I literally never have it make up sources anymore. I remember that kind of thing happening back when AI didn't have internet access and there really weren't agents yet. I realize some people are still back there, but in the future (that many of us are already in) it's basically solved.

There are still logic mistakes and such, so it can't be depended on 100%. But if you have a team of agents going back and forth to find an answer, then pass it to another team of agents to independently verify that answer, and cycle it back if a flaw is found, many issues just go away. Maybe some mistakes make it through this whole process, but the same thing happens sometimes with people.
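To make that concrete, here's a minimal sketch of that answer/verify/cycle-back loop. `query_model` is a hypothetical stand-in for whatever LLM client you use, and the prompts and round limit are illustrative assumptions, not any particular framework's API:

```python
# Minimal sketch of an answer -> verify -> revise loop.
# query_model is a hypothetical stand-in; plug in your own LLM client.

def query_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM API of choice")

def answer_with_verification(question: str, max_rounds: int = 3) -> str:
    # First pass: the "answering" side must cite a source for every claim.
    answer = query_model(
        f"Answer the question and cite a source for every factual claim:\n{question}"
    )
    for _ in range(max_rounds):
        # Independent "verifier" pass: re-check sources and reasoning from scratch.
        review = query_model(
            f"Question: {question}\nAnswer: {answer}\n"
            "Check every cited source and each step of reasoning. "
            "Reply OK if sound, otherwise describe the flaw."
        )
        if review.strip() == "OK":
            return answer
        # A flaw was found: cycle it back to the answering side.
        answer = query_model(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"A reviewer found this flaw: {review}\nProduce a corrected answer."
        )
    return answer  # best effort after max_rounds
```

The design point is that the verifier never treats the answerer's output as trusted; it re-checks the citations independently, which is the part that catches fabricated sources.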

I don't have the link on hand, but there have been studies showing GPT-3.5 working in agentic cycles performing as well as or better than GPT-4 out of the box. The article I saw that in argued that people are essentially already using what GPT-5 will most likely be, just by running teams of agents with the latest models.

[–] [email protected] 2 points 9 months ago (1 children)

I think that, without anything akin to extrapolation, we just have to wait and see what the future holds. In my view, most people are almost certainly going to be hit upside the head in the not-too-distant future. Many people haven't even considered what a world might look like where pretty much all the jobs people are doing now are easily automated. It's almost like, instead of considering this, they're just clinging to the idea that the 100-meter wave hanging above us couldn't possibly crash down.
