this post was submitted on 21 Mar 2025
1431 points (99.3% liked)

Technology

[–] [email protected] 11 points 1 day ago (3 children)

This is getting ridiculous. Can someone please ban AI? Or at least regulate it somehow?

[–] [email protected] 2 points 1 day ago

The problem is, how? I can set it up on my own computer using open source models and some of my own code. It’s really rough to regulate that.

[–] [email protected] 1 points 1 day ago

Like everything, it has its good and bad sides. We need to be careful and use it properly, and the same applies to the people creating this technology.

[–] [email protected] 58 points 2 days ago (3 children)

I have no idea why the makers of LLM crawlers think it's a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than "well, we just don't want you to do that". They're usually more like "why would you even do that?"

Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said "please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)". Again: why would anyone index those?
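The crawl hints described above live in a site's robots.txt, and a well-behaved crawler checks them before every request. A minimal sketch using Python's standard `urllib.robotparser`, with a hypothetical robots.txt modelled on the Wikipedia example (the paths and user-agent string are illustrative, not Wikipedia's actual rules):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules: allow canonical article pages, disallow the
# technical pages (old revisions, special pages).
robots_txt = """\
User-agent: *
Disallow: /w/index.php?
Disallow: /wiki/Special:
Allow: /wiki/
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

def may_fetch(url: str) -> bool:
    """A polite crawler asks the parsed rules before every request."""
    return rp.can_fetch("MyCrawler/1.0", url)

print(may_fetch("https://example.org/wiki/Python_(language)"))       # canonical page: allowed
print(may_fetch("https://example.org/w/index.php?title=X&oldid=1"))  # old revision: disallowed
```

The point of the ignored-rules complaint is that this check is a few lines of standard library code; crawlers that skip it are skipping it by choice, not because it's hard.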

[–] phoenixz 29 points 2 days ago

Because you are coming from the perspective of a reasonable person

These people are billionaires who expect to get everything for free. Rules are for the plebs; just take it already.

[–] [email protected] 4 points 1 day ago* (last edited 1 day ago)

Because it takes work to obey the rules, and you get less data for it. A competitor could get more by ignoring them and gain some vague advantage for it.

I wouldn't be surprised if the crawlers they used were bare-bones utilities set up to just grab everything, without worrying about rules and the like.

[–] [email protected] 2 points 1 day ago

They want everything: if it exists but isn't in their dataset, they want it.

They want their AI to answer any question you could possibly ask it. Filtering by what is and isn't useful doesn't achieve that.

[–] [email protected] 39 points 2 days ago (1 children)

I guess this is what the first iteration of the Blackwall looks like.

[–] [email protected] 17 points 2 days ago

Gotta say "AI Labyrinth" sounds almost as cool.

[–] [email protected] 34 points 2 days ago

I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.

[–] [email protected] 14 points 2 days ago (1 children)

Should have called it "Black ICE".

[–] [email protected] 307 points 3 days ago (3 children)

Imagine how much power is wasted on this unfortunate necessity.

Now imagine how much power will be wasted circumventing it.

Fucking clown world we live in

[–] [email protected] 1 points 1 day ago

From the article, it seems like they don't generate a new labyrinth every single time: "Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval."
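The quoted pipeline (generate once, sanitize, serve from storage) can be sketched in a few lines. Here a plain dict stands in for an object store like R2 and `html.escape` stands in for the sanitization step; every name is hypothetical:

```python
import html

# Stand-in for an object store such as R2; all names are hypothetical.
decoy_cache: dict[str, str] = {}

def pregenerate(page_id: str, raw_decoy_text: str) -> None:
    """Sanitize once at generation time (here: escape HTML so the
    stored text can't carry XSS), then store the result."""
    decoy_cache[page_id] = "<p>%s</p>" % html.escape(raw_decoy_text)

def serve_decoy(page_id: str) -> str:
    """Request time is just a cache lookup -- no model inference."""
    return decoy_cache[page_id]

pregenerate("maze-001", 'Fascinating "facts" about <nothing>')
print(serve_decoy("maze-001"))
```

The design choice the quote highlights is exactly this split: the expensive generation and sanitization cost is paid once per decoy page, while each trapped crawler request costs only a storage read.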

[–] [email protected] 55 points 3 days ago (14 children)

On one hand, yes. On the other... imagine the frustration of the management at companies making and selling AI services. That is such a sweet thing to imagine.

[–] [email protected] 86 points 3 days ago (2 children)

My dude, they'll literally sell services to both sides of the market.

[–] [email protected] 72 points 2 days ago (3 children)

Surprised at the level of negativity here. Having had my sites repeatedly DDOSed offline by Claudebot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.

[–] [email protected] 37 points 2 days ago

I think the negativity is around the unfortunate fact that solutions like this shouldn't be necessary.

[–] [email protected] 36 points 2 days ago (3 children)

"I used the AI to destroy the AI"

[–] [email protected] 10 points 2 days ago (1 children)

And consumed the power output of a medium country to do it.

Yeah, great job! 👍

[–] [email protected] 20 points 2 days ago* (last edited 2 days ago)

We truly are getting dumber as a species. We're facing climate change, yet we're running some of the most power-hungry processors in the world to spit out cooking recipes and homework answers for millions of people, all to better collect their data and sell them products that will distract them from the climate disaster our corporations have caused. It would be really fun to watch if it weren't so sad.

[–] [email protected] 85 points 3 days ago (2 children)

Burning 29 acres of rainforest a day to do nothing

[–] [email protected] 15 points 2 days ago
[–] [email protected] 1 points 1 day ago (1 children)

It certainly sounds like they generate the fake content once and serve it from cache every time: "Rather than creating this content on-demand (which could impact performance), we implemented a pre-generation pipeline that sanitizes the content to prevent any XSS vulnerabilities, and stores it in R2 for faster retrieval."

[–] [email protected] 1 points 1 day ago

Yeah, but you also have to add in the energy consumption of the data scrapers.

[–] [email protected] 211 points 3 days ago (8 children)

That's just BattleBots with a different name.

[–] [email protected] 4 points 1 day ago

Joke's on them. I'm going to use AI to estimate the value of content, so now I'll get the kind of content I want, fake though it is, and they will have to generate it.

[–] [email protected] 161 points 3 days ago* (last edited 2 days ago) (42 children)

This is some fucking stupid situation: we finally got somewhat faster internet, and now these bots messing with each other are hogging the bandwidth.

[–] [email protected] 107 points 3 days ago (1 children)

So the web is a corporate war zone now, and you can choose feudal protection or be attacked from all sides. What a time to be alive.

[–] [email protected] 114 points 3 days ago (1 children)

Not exactly how I expected the AI wars to go, but since we're in a cyberpunk world, I guess we take what we get.

[–] [email protected] 71 points 3 days ago (36 children)

Next step is an AI that detects AI labyrinth.

It gets trained on labyrinths generated by another AI.

So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn't get lost.

It's gonna be AI all the way down.

[–] [email protected] 1 points 1 day ago

They used AI to destroy AI

[–] [email protected] 40 points 3 days ago (2 children)

And soon, the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get any real work done. Sigh.
