this post was submitted on 11 Apr 2025
390 points (94.1% liked)

[–] [email protected] 123 points 2 days ago* (last edited 2 days ago) (3 children)

Very cool work! I read the abstract of the paper. I don't think it needs the "AI" buzzword, though; the work is already impressive and stands on its own, and it has nothing to do with LLMs.

[–] [email protected] 128 points 2 days ago (1 children)

It uses a neural net that he designed and trained, so it is AI. The public's view of "AI" seems to be mostly the generative stuff like chatbots and image gen, but deep learning is perfect for science and medical fields.

[–] [email protected] 72 points 2 days ago

Exactly. Artificial intelligence is the parent category.

[–] [email protected] 53 points 2 days ago (3 children)

AI is far more than LLMs. Why does everyone on Lemmy think AI is nothing but LLMs?!

[–] [email protected] 58 points 2 days ago (2 children)

Because that's what the buzzword has come to mean. It's not Lemmings' fault, it's the shitty capitalists pushing this slop.

[–] [email protected] 6 points 1 day ago (2 children)

Half of the user base spends its entire time complaining about AI, while the other half goes around trying to get everyone to understand that AI ≠ LLMs.

People like to get all bent out of shape and claim that it's going to take everyone's jobs, and then they completely ignore things like AlphaFold.

[–] [email protected] 4 points 1 day ago

People like to get all bent out of shape and claim that it's going to take everyone's jobs, and then they completely ignore things like AlphaFold.

So, is AI gonna take everyone's jobs or not?

[–] [email protected] 3 points 1 day ago

Hey, I do both

[–] [email protected] 14 points 1 day ago* (last edited 1 day ago)

Because actual examples of analytical and practical AI are rarely presented here. AI conducting analysis of medical imaging to catch tumors early isn’t something we discuss very much, for example.

What we do get is the marketing hype, AI images, crappy AI search results, ridiculous investment in AI to get rid of human workers, AI’s wasteful power requirements, and everything else under the sun with corporations basically trying to make a buck off AI while screwing over workers. This is the AI we see day to day. Not the ones making interesting and useful scientific discoveries.

[–] [email protected] 6 points 2 days ago (1 children)

The term “artificial intelligence” is supposed to refer to a computer simulating the actions/behavior of a human.

LLMs can mimic human communication and therefore fit the AI definition.

Generative AI for images is a much looser fit, but it still fulfills a purpose that until recently most of us thought only humans could, so some people think it counts as AI.

However, some of the earliest AIs in computer programs were just NPCs in video games, long before deep learning became a widespread thing.

Enemies in video games (typically referring to the algorithms used for their pathfinding) are AI whether they use neural networks or not.

Deep learning neural networks are predictive mathematical models that can be fitted to data, as in linear regression. This, in itself, is not AI.

Transformers are a special structure that can be built into a neural network to attenuate certain inputs. (This is how ChatGPT can act like it has object permanence or any sort of memory when it doesn’t.) Again, this kind of predictive model is no more AI than using Simpson’s rule to numerically estimate an integral from a dataset would be.
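
To make "attenuate certain inputs" concrete, here is a minimal NumPy sketch of scaled dot-product attention, the weighting mechanism at the heart of transformers (the toy shapes and data are arbitrary, not tied to any particular model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted
    mix of the rows of V, with weights (the "attenuation") derived
    from how well each query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of queries to keys
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # attenuated blend of the values

# toy example: 3 tokens, 4-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = attention(x, x, x)  # self-attention over the toy sequence
print(out.shape)          # (3, 4)
```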

Neural networks can be used to mimic human actions, and when they do, that fits the definition. But the techniques and math behind the models are not AI.

The only people who refer to non-AI things as AI are people who don’t know what they’re talking about, or people using it as a buzzword for financial gain (in the case of most corporate executives and tech-bros, it is both).

[–] adespoton 12 points 2 days ago (2 children)

The term “Artificial Intelligence” has been bandied around for over 50 years to mean all sorts of things.

These days, all sorts of machine learning are generally classified as AI.

But I used to work with Cyc and expert systems back in the 90s, and those were considered AI back then, even though they often weren’t trying to mimic human thought.

For that matter, the use of Lisp in the 1970s to perform recursive logic was considered AI all by itself.

So while you may personally prefer a more restrictive definition (just as many were up in arms over “hacker” being co-opted to refer to people doing digital burglary), AI as the term is used by the English-speaking world encompasses generative and diffusion-based creation models, along with other, less human-centric computing models that rely on machine-learning principles.

[–] [email protected] 3 points 1 day ago

According to gamedevs, even one-player Pong (that is, vs. the computer) involves AI. It's a description of a role within the game world, not of the implementation, the degree of intelligence, or the amount of power. It could be a rabbit doing little more than running away scared, a general strategising, or an outright god toying with the world, a storytelling AI. The key aspect is that it reacts to and influences the game itself, or at least has some sense of internal goals and agency that sets it apart from mere physics: it can't just follow a blind script. The computer paddle in Pong fits the bill: it reacts dynamically to the ball's position, and it wants to score points against the player. Thus, AI. The ball is also simulated, possibly even with more complex maths than the paddle, but it doesn't have that role of independent agent.
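
To make that concrete, the Pong paddle "AI" really can be a few lines that react to the ball each frame; a hypothetical sketch (names and tuning values invented):

```python
def update_paddle(paddle_y: float, ball_y: float,
                  speed: float = 4.0, deadzone: float = 2.0) -> float:
    """One frame of the classic Pong 'AI': chase the ball's vertical
    position, capped at a maximum paddle speed. The deadzone stops
    the paddle from jittering once it is roughly aligned."""
    delta = ball_y - paddle_y
    if abs(delta) <= deadzone:
        return paddle_y                    # close enough: hold still
    step = max(-speed, min(speed, delta))  # clamp to paddle speed
    return paddle_y + step
```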

[–] [email protected] 0 points 1 day ago

Valid point, though I’m surprised that Cyc was used for non-AI purposes, since, in my very, very limited knowledge of the project, I thought the whole thing was based around the ability to reason and infer from an encyclopedic data set.

Regardless, I suppose the original topic of this discussion is heading towards a prescriptivist vs descriptivist debate:

Should the term Artificial Intelligence have the more literal meaning it held when it was first discussed, like by Turing or in the sci-fi of Isaac Asimov?

OR

Should society’s use of the term, whether in reference to advances in problem-solving tech generally or, most prevalently, in reference to any neural network or learning algorithm, be the definition of Artificial Intelligence?

Should we shift the definition of a term to match popular use, regardless of its originally intended meaning, or should we try to keep the meaning of the phrase specific/direct/literal and fight the natural drift of language?

Personally, I prefer the latter, because I think keeping the meaning as close to literal as possible increases the clarity of the words, and because the term AI is thrown about so often these days as a buzzword for clicks or money, typically by people pushing lies about the capabilities or functionality of the systems they’re referring to as AI.

Lumping together models trained by scientists to solve novel problems with models that use the energy of a small country to plagiarize artwork is also not something I view fondly, as I’ve seen people assume the two are one and the same, despite the fact that one has redeeming qualities and the other is mostly bullshit.

However, it seems that many others are fine with or in support of a descriptivist definition where words have the meaning they are used for even if that meaning goes beyond their original intent or definitions.

To each their own I suppose. These preferences are opinions so there really isn’t an objectively right or wrong answer for this debate

[–] [email protected] 10 points 2 days ago

I don't know why, but reading this is hilarious to me: picturing the high schoolers logging into ChatGPT, asking it "how many unknown objects are there in space", and presenting the response as their result.

[–] [email protected] 33 points 1 day ago* (last edited 1 day ago) (3 children)

The model was run (and I think trained?) on very modest hardware:

The computer used for this paper contains an NVIDIA Quadro RTX 6000 with 22 GB of VRAM, 200 GB of RAM, and a 32-core Xeon CPU, courtesy of Caltech.

That's a double-VRAM Nvidia RTX 2080 Ti plus a Skylake-era Intel CPU, an aging circa-2018 setup. With room for a batch size of 4096, no less! Though they did run into a preprocessing bottleneck on the CPU/RAM side.

The primary concern is the clustering step. Given the sheer magnitude of data present in the catalog, without question the task will need to be spatially divided in some way, and parallelized over potentially several machines
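
To picture what that "spatially divided ... and parallelized" step could look like, here's a toy sketch (not the paper's code; DBSCAN, the eps/min_samples values, and the RA-band scheme are my assumptions): slice the catalog into right-ascension bands and cluster each band in its own process.

```python
import numpy as np
from multiprocessing import Pool
from sklearn.cluster import DBSCAN

def cluster_band(coords: np.ndarray) -> np.ndarray:
    # eps / min_samples are illustrative, not values from the paper
    return DBSCAN(eps=0.01, min_samples=5).fit_predict(coords)

def cluster_catalog(ra: np.ndarray, dec: np.ndarray, n_bands: int = 8):
    """Toy spatial division: slice the sky into right-ascension bands,
    then cluster each band in its own worker process."""
    coords = np.column_stack([ra, dec])
    edges = np.linspace(0.0, 360.0, n_bands + 1)
    bands = [coords[(ra >= lo) & (ra < hi)]
             for lo, hi in zip(edges[:-1], edges[1:])]
    bands = [b for b in bands if len(b)]  # skip empty slices
    # call this from under `if __name__ == "__main__":` on spawn platforms
    with Pool() as pool:
        return pool.map(cluster_band, bands)
```

A real pipeline would also have to handle clusters that straddle band edges, e.g., by overlapping the bands and merging labels.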

[–] [email protected] 17 points 1 day ago (2 children)

That's not modest. AI hardware requirements are just crazy.

[–] [email protected] 16 points 1 day ago

For an individual yes. But for an institution? No.

[–] [email protected] 11 points 1 day ago* (last edited 1 day ago)

I mean, "modest" may be too strong a word, but a 2080 TI-ish workstation is not particularly exorbitant in the research space. Especially considering the insane dataset size (years of noisy, raw space telescope data) they’re processing here.

Also that’s not always true. Some “AI” models, especially oldschool ones, function fine on old CPUs. There are also efforts (like bitnet) to get larger ones fast cheaply.

[–] [email protected] 7 points 1 day ago (1 children)

So a 5090, a 7950X3D, and 192 GB of RAM would run it on "consumer" hardware?

[–] [email protected] 6 points 1 day ago* (last edited 1 day ago)

That’s even overkill. A 3090 is pretty standard in the sanely priced ML research space. It’s the same architecture generation as the A100, so it’s very widely supported.

The 5090 is actually a mixed bag, because it’s too new and support for it is hit and miss. It’s also ridiculously priced for a 32 GB card.

And most CPUs with tons of RAM are fine, depending on the workload; the constraint is usually “does my dataset fit in RAM” more than core speed (since just waiting 2x or 4x longer is not that big a deal).

[–] [email protected] 5 points 1 day ago (1 children)

I've managed to run AI on hardware even older than that. The issue is that it's just painfully slow. I have no idea if it has any impact on the actual results, though. I have a very high-spec AI machine on order, so it'll be interesting to run the same tests again and see if they're any better, or just quicker.

[–] [email protected] 5 points 1 day ago* (last edited 1 day ago)

I have no idea if it has any impact on the actual results, though.

Is it a PyTorch experiment? Other than maybe different default data types on CPU, the results should be the same.
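
If it is PyTorch, a minimal reproducibility sketch looks like this (the seed and model are placeholders; CPU and GPU runs can still differ in the last few floating-point bits, since reductions happen in different orders):

```python
import os
# cuBLAS needs this set before CUDA init for deterministic GEMMs
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")

import torch

def make_reproducible(seed: int = 0) -> None:
    """Pin the usual sources of run-to-run variation in PyTorch."""
    torch.manual_seed(seed)
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.benchmark = False

make_reproducible(42)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(8, 16, device=device)
print(model(x).sum().item())  # should repeat exactly on the same device
```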

[–] [email protected] 48 points 2 days ago (2 children)

Every day a new Einstein is born, and their life and choices are dictated by the level of wealth and opportunity they are born into.

We would see stories like this every week if wealth and opportunities were equally distributed.

[–] [email protected] 10 points 2 days ago

I largely agree, except s/equally/equitably.

[–] [email protected] 36 points 2 days ago* (last edited 1 day ago) (1 children)

I was hoping the article would tell us more about the technique he developed.

The model I implemented can be used for other time domain studies in astronomy, and potentially anything else that comes in a temporal format

All I gathered from it is that it is a time-series model.

[–] [email protected] 26 points 2 days ago* (last edited 2 days ago) (1 children)

I found his paper: https://iopscience.iop.org/article/10.3847/1538-3881/ad7fe6 (no paywall 😃)

From the intro:

VARnet leverages a one-dimensional wavelet decomposition in order to minimize the impact of spurious data on the analysis, and a novel modification to the discrete Fourier transform (DFT) to quickly detect periodicity and extract features of the time series. VARnet integrates these analyses into a type prediction for the source by leveraging machine learning, primarily CNN.

They start with some good old-fashioned signal processing before feeding the result into a neural net. The NN was trained on synthetic data.

FC = fully connected layer, so they're mixing FC layers in with mostly convolutional layers in their NN. I haven't read the whole paper, so I'm happy to be corrected.
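
For a feel of that pipeline's shape, here is a hedged toy sketch (emphatically not the authors' code; the wavelet choice, thresholding, layer sizes, and class count are all guesses): wavelet-denoise the light curve, append DFT magnitudes as crude periodicity features, then classify with a small CNN + FC head.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def preprocess(flux: np.ndarray) -> torch.Tensor:
    """Wavelet-denoise a 1-D light curve, then append DFT magnitudes
    as periodicity features (a stand-in for the paper's modified DFT)."""
    coeffs = pywt.wavedec(flux, "db4", level=3)
    coeffs[1:] = [pywt.threshold(c, 0.1 * np.abs(c).max(), "soft")
                  for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")[: len(flux)]
    spectrum = np.abs(np.fft.rfft(denoised))
    feats = np.concatenate([denoised, spectrum])
    return torch.tensor(feats, dtype=torch.float32).unsqueeze(0)  # (1, L)

class TinyVarClassifier(nn.Module):
    """Conv layers plus a fully connected head, mirroring the
    CNN + FC mix the paper describes (sizes are made up)."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.fc = nn.Linear(32 * 8, n_classes)

    def forward(self, x):                 # x: (batch, 1, L)
        return self.fc(self.conv(x).flatten(1))

flux = np.random.default_rng(1).normal(size=512)   # fake light curve
logits = TinyVarClassifier()(preprocess(flux).unsqueeze(0))
print(logits.shape)  # (1, 5): one score per variable-source type
```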

[–] [email protected] 22 points 2 days ago (1 children)

My mans look like he about to be voted most likely to agent 47 a health insurance ceo

[–] [email protected] 22 points 2 days ago

Let them live in fear.

[–] [email protected] 1 points 2 days ago (3 children)
[–] [email protected] 11 points 2 days ago

Anything but the fuckin' metric system...

[–] [email protected] 8 points 2 days ago

Begging your pardon Sir but it's a bigass sky to search.

[–] [email protected] 4 points 2 days ago

Been wanting that gif and been too lazy to record it!

[–] [email protected] 0 points 1 day ago (1 children)

AI accomplishing something useful for once?!

[–] [email protected] 13 points 1 day ago (1 children)

AI has been used for tons of useful stuff for ages. Unless you were in the space, you just never heard about it until LLMs came around.

[–] [email protected] 6 points 1 day ago

Also, "AI" is a buzzword. Before, it was called machine learning (ML), and it has been in use for the past two decades.

[–] [email protected] -5 points 1 day ago

I haven't read the paper, and I'm sure he did a great job. Regardless of that, and in principle, anyone can do this in less than an hour. The trick is to get external confirmation for all the discoveries you've made.
