otter

joined 2 years ago
[–] otter 140 points 3 weeks ago (18 children)

Wireless is just a fad anyway /s

Many expressed their appreciation for Kalle's years of service to the Linux networking stack, but as of this writing no one has stepped up to take over the formal maintainer role. Thankfully, other Linux WiFi driver developers are still working on the growing number of Linux wireless drivers; there's just no immediate leader yet to take on the maintainer duties.

Good to know :)

While I didn't use Linux back then, I heard the wifi situation was difficult to deal with. I assume this maintainer is responsible for fixing that over the years?

[–] otter 9 points 3 weeks ago (1 children)

Looks good! What is each dish?

[–] otter 8 points 3 weeks ago (4 children)

Thankfully, a lot of these are accessible through the web browser if you really need to use them. I should switch to doing that for more of them.

[–] otter 3 points 3 weeks ago

Call human resources, ants are harassing Jim

[–] otter 13 points 3 weeks ago* (last edited 3 weeks ago) (7 children)

How much is the 6 million dollar dog if we account for inflation?

[–] otter 2 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Good point, the third guess doesn't make sense. I'm going to cross it out

[–] otter 5 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

It gets worse

The nitrogen even got into our DNA...

https://en.m.wikipedia.org/wiki/Cytosine

We need to cleanse and get back to all natural DA

[–] otter 3 points 3 weeks ago (4 children)

I was going to make a joke about Dihydrogen Monoxide, but this one is better.

I wonder if there's a similar joke with something from around the timeframe of when autism diagnosis criteria changed (70s-90s)

[–] otter 31 points 3 weeks ago* (last edited 3 weeks ago)

In case that link doesn't load for some users: https://framapiaf.org/@debian

[–] otter 2 points 3 weeks ago (1 children)

There have been some issues with bias in the university administration, but I wouldn't say that's the case for the university as a whole.

For example

https://www.reddit.com/r/UCalgary/comments/10lgm6a/may_2023_elections_is_our_chance_to_get_what_we/

I'm on mobile and I can't check, but is this the riding that the university falls under?

https://en.m.wikipedia.org/wiki/Calgary-Varsity

As for the author, I'm not sure. The profile only has this one article, and I'm not well versed enough in the area of research to know whether there is a bias.

https://theconversation.com/profiles/andrew-allison-1511511/articles

[–] otter 1 points 3 weeks ago

Part of why I opened the article was because I saw that spin in comment sections earlier.

Now if it comes up in discussions, I'll know enough to speak about it

4
WhenTaken #247 (whentaken.com)
submitted 3 months ago by otter to c/[email protected]
5
Emovi #838 (emovi.teuteuf.fr)
submitted 3 months ago by otter to c/[email protected]
4
Globle - 2024-10-31 (globle-game.com)
submitted 3 months ago by otter to c/[email protected]
15
Hexcodle #448 (hexcodle.com)
submitted 3 months ago by otter to c/[email protected]
11
🙂 Daily Quordle 1011 (www.merriam-webster.com)
submitted 3 months ago by otter to c/[email protected]
5
Strands October 31, 2024 (www.nytimes.com)
submitted 3 months ago by otter to c/[email protected]
 

cross-posted from: https://lemmy.ca/post/31947651

definition: https://opensource.org/ai/open-source-ai-definition

endorsements: https://opensource.org/ai/endorsements

In particular, which tools meet the requirements and which ones don't:

As part of our validation and testing of the OSAID, the volunteers checked whether the Definition could be used to evaluate if AI systems provided the freedoms expected.

  • The list of models that passed the Validation phase are: Pythia (Eleuther AI), OLMo (AI2), Amber and CrystalCoder (LLM360) and T5 (Google).
  • There are a couple of others that were analyzed and would probably pass if they changed their licenses/legal terms: BLOOM (BigScience), Starcoder2 (BigCode), Falcon (TII).
  • Those that have been analyzed and don't pass because they lack required components and/or their legal agreements are incompatible with the Open Source principles: Llama2 (Meta), Grok (X/Twitter), Phi-2 (Microsoft), Mixtral (Mistral).

These results should be seen as part of the definitional process, a learning moment; they're not certifications of any kind. OSI will continue to validate only legal documents, and will not validate or review individual AI systems, just as it does not validate or review software projects.
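The three lists above are essentially a small lookup table mapping models to their validation outcome. As a minimal sketch (the model names and categories come from the post itself; the dict layout, category keys, and function name are my own invention for illustration), it could be queried like this:

```python
# Illustrative summary of the OSAID validation results quoted above.
# The models and their groupings are from the OSI post; the category
# names and this lookup helper are just one convenient arrangement.
osaid_results = {
    "passes": ["Pythia", "OLMo", "Amber", "CrystalCoder", "T5"],
    "would_pass_with_license_change": ["BLOOM", "Starcoder2", "Falcon"],
    "fails": ["Llama2", "Grok", "Phi-2", "Mixtral"],
}

def osaid_status(model: str) -> str:
    """Return the validation category for a model, or 'not analyzed'."""
    for status, models in osaid_results.items():
        if model in models:
            return status
    return "not analyzed"
```

For example, `osaid_status("OLMo")` returns `"passes"`, while a model OSI never looked at falls through to `"not analyzed"`, which mirrors the post's caveat that these results aren't certifications covering every system.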

 

 

cross-posted from: https://lemmy.ca/post/31913012

My thoughts are summarized by this line:

Casey Fiesler, Associate Professor of Information Science at University of Colorado Boulder, told me in a call that while it’s good for physicians to be discouraged from putting patient data into the open-web version of ChatGPT, how the Northwell network implements privacy safeguards is important—as is education for users. “I would hope that if hospital staff is being encouraged to use these tools, that there is some significant education about how they work and how it's appropriate and not appropriate,” she said. “I would be uncomfortable with medical providers using this technology without understanding the limitations and risks.”

It's good to have an AI model running on the internal network to help with emails and such. A model such as Perplexity could be good for parsing research articles, as long as the user clicks through to the sources to follow up.

It's not good to use it for tasks that traditional "AI" was already handling, because traditional AI doesn't hallucinate and doesn't require as much processing power.

It absolutely should not be used for diagnosis or insurance claims.

 
