TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

851
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

I don’t really have much to say… it kind of speaks for itself. I do appreciate the table of contents so you don’t get lost in the short paragraphs though

853

archive.org | archive.is

this is almost NSFW? some choice snippets:

more than 1.5 million people have used it and it is helping build nearly half of Copilot users’ code

Individuals pay $10 a month for the AI assistant. In the first few months of this year, the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.

good thing it's so good that everyone will use it amirite

starting around $13 for the basic Microsoft 365 office-software suite for business customers—the company will charge an additional $30 a month for the AI-infused version.

Google, ..., will also be charging $30 a month on top of the regular subscription fee, which starts at $6 a month

I wonder how long they’ll keep that up before they start forcing it on everyone (and raising all prices by some n%)

855

Carole Piovesan (formerly of McCarthy Tétrault, now at INQ Law) describes this as a "step in the process to introducing some more sort of enforceable measures".

In this case the code of conduct has some fairly innocuous things. Managing risk, curating to avoid biases, safeguarding against malicious use. It's your basic industrial safety government boilerplate as applied to AI. Here, read it for yourself:

https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems

Now of course our country's captains of industry have certain reservations. One CEO of a prominent Canadian firm writes that "We don’t need more referees in Canada. We need more builders."

https://twitter.com/tobi/status/1707017494844547161

Another who you will recognize from my prior post (https://awful.systems/post/298283) is noted in the CBC article as concerned about "the ability to put a stifling growth in the industry". I am of course puzzled about this concern. Surely companies building these products are trivially capable of complying with such a basic code of conduct?

For my part I have difficulty seeing exactly how "testing methods and measures to assess and mitigate risk of biased output" and "creating safeguards against malicious use" would stifle industry and reduce building. My lack of foresight in this regard could be why I am a scrub behind a desk instead of a CEO.

Oh, and for bonus Canadian content, the name Desmarais from the photo (next to the Minister of Industry) tweaked my memory. Oh right, those Desmarais. Canada will keep on Canada'ing to the end.

https://dailynews.mcmaster.ca/articles/helene-and-paul-desmarais-change-agents-and-business-titans/

https://en.wikipedia.org/wiki/Power_Corporation_of_Canada#Politics

856

Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

857

Source: nitter, twitter

Transcribed:

Max Tegmark (@tegmark):
No, LLM's aren't mere stochastic parrots: Llama-2 contains a detailed model of the world, quite literally! We even discover a "longitude neuron"

Wes Gurnee (@wesg52):
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales?
In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2! [image with colorful dots on a map]


With this dastardly deliberate simplification of what it means to have a world model, our skepticism towards LLMs has been struck a mortal blow; we have no choice but to convert, surely!

(*) Asterisk:
Not an actual literal map. What they really mean is that they've trained "linear probes" (a separate mini-model) on the activation layers across a bunch of inputs, minimizing loss against latitude and longitude (and/or time, blah blah).

And yes, from the activations you can get a fuzzy distribution of (lat, long) points on a map, and yes, they've been able to isolate individual "neurons" whose activations seem to correlate with latitude and longitude. (Frankly, not finding one would have surprised me; this doesn't mean LLMs aren't just big statistical machines, in this case trained on data containing literal (lat, long) tuples for cities in particular.)

It's a neat visualization and result, but it sort of comically misses the point.
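For the curious, a "linear probe" really is that small. Here's a minimal sketch on random stand-in data (the shapes and names are mine; in the paper the rows would be Llama-2 activations for place-name inputs). It's just regularized least squares from hidden states to coordinates:

    # minimal linear-probe sketch; all data here is a random stand-in,
    # not actual Llama-2 activations
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, d_model = 1000, 512
    acts = rng.normal(size=(n_samples, d_model))                  # stand-in activations
    coords = rng.uniform([-90, -180], [90, 180], (n_samples, 2))  # stand-in (lat, long)

    # the probe: one ridge-regularized linear map W from activations
    # to coordinates, fit by minimizing squared error
    lam = 1e-2
    W = np.linalg.solve(acts.T @ acts + lam * np.eye(d_model), acts.T @ coords)

    preds = acts @ W
    print("probe MSE:", np.mean((preds - coords) ** 2))

If a probe like this recovers coordinates from real activations, you get the pretty map; given that the training text contains literal coordinates for cities, the fact that it works is about the least surprising result imaginable.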


Bonus sneers from @emilymbender:

  • You know what's most striking about this graphic? It's not that mentions of people/cities/etc from different continents cluster together in terms of word co-occurrences. It's just how sparse the data from the Global South are. -- Also, no, that's not what "world model" means if you're talking about the relevance of world models to language understanding. (source)
  • "We can overlay it on a map" != "world model" (source)

858

Direct link to the video

B-b-but he didn't cite his sources!!

859

After several months of reflection, I’ve come to only one conclusion: a cryptographically secure, decentralized ledger is the only solution to making AI safer.

Quelle surprise

There also needs to be an incentive to contribute training data. People should be rewarded when they choose to contribute their data (DeSo is doing this) and even more so for labeling their data.

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

All of this may sound a little ridiculous but it’s not. In fact, the work has already begun by the former CTO of OpenSea.

I dunno, that does make it sound ridiculous.

861

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanism. We’re looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

“Whoops, it’s done now, oh well, guess we’ll have to do it later”

Go fucking directly to jail

862

These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

"Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto", per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

"(Jeff) Macpherson is a director and co-founder at Xagency.AI", a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s "over 7 years in the tech sector" which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

"Illustrator Martin Deschatelets" whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

"Ottawa economist Armine Yalnizyan", per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the "we" who have to adapt here?

AI is apparently "something that can tell you how many cows are in the world" (J.M.). Detecting a lack of results validation here again.

"At the end of the day that's what it's all for. The efficiency, the productivity, to put profit in all of our pockets", from J.M.

"You now have the opportunity to become a Prompt Engineer", from J.M. to the author and illustrator. (It's worth watching the video to listen to this person.)

Me about the article:

I'm feeling that same underwhelming "is this it" bewilderment again.

Me about the video:

Critical thinking and ethics and "how software products work in practice" classes for everybody in this industry please.

863

I found this searching for information on how to program for the old Commodore Amiga’s HAM (Hold And Modify) video mode and you gotta touch and feel this one to sneer at it, cause I haven’t seen a website this aggressively shitty since Flash died. the content isn’t even worth quoting as it’s just LLM-generated bullshit meant to SEO this shit site into the top result for an existing term (which worked), but just clicking around and scrolling on this site will expose you to an incredible density of laggy, broken full screen animations that take way too long to complete and block reading content until they’re done, alongside a long list of other good design sense violations (find your favorites!)

bonus sneer: arguably I’m finally taking up Amiga programming as an escape from all this AI bullshit. well fuck me I guess, cause here’s one of the vultures in the retrocomputing space selling an enshittified (and very ugly) version of AmigaOS with a ChatGPT app and an AI art generator, cause not even operating on a 30 year old computer will spare me this bullshit:

like fuck man, all I want to do is trick a video chipset from 1985 into making pretty colors. am I seriously gonna have to barge screaming into another German demoscene IRC channel?

865

I think I giggled all the way through this one.

Pebble, a Twitter-style service formerly known as T2, today launched a new approach: Users can skip past its “What’s happening?” nudge and click on a tab labeled Ideas with a lightbulb icon, to view a list of AI-generated posts or replies inspired by their past activity. Publishing one of those suggestions after reviewing it takes a single click.

Gabor Cselle, Pebble’s CEO, says this and generative AI features to come will enable a kinder, safer, and more fun experience. “We want to make sure that you see great content, that you're posting great content, and that you're interacting with the community,” he says.

How is it "kinder, safer, and more fun"?

Cselle says he recognizes the perils of offering AI-generated text to users, and that users are free to edit or ignore the suggestions. “We don’t want a situation where bots masquerade as humans and the entire platform is just them talking to each other,” he says.

To protect the integrity of the community as it throws open the door to over 300 million people, Pebble will also be using generative AI to vet new signups. The system will use OpenAI’s GPT-3.5 model to compare the X bio and recent posts of people against Pebble’s community guidelines, which in contrast to Musk’s service ban all nudity and violent content.

Pebble CTO Mike Greer says the aim is to determine “whether someone is fundamentally toxic and treats other people poorly.” Those who are or do will be blocked and manually reviewed. Pebble intends to vet would-be users against “other sources of truth” online once it opens signups further, he says, to include people without an X account.
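For a sense of scale, the vetting system as described is roughly a prompt. A minimal sketch, assuming the stock OpenAI chat-completions API; the guidelines text, prompt wording, and function name are my stand-ins, not Pebble's actual system:

    # sketch of "use GPT-3.5 to compare a bio and recent posts against
    # community guidelines"; everything here is a stand-in
    from openai import OpenAI  # expects OPENAI_API_KEY in the environment

    client = OpenAI()
    GUIDELINES = "No nudity. No violent content. Treat other people well."

    def vet_signup(bio: str, recent_posts: list[str]) -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content":
                    "You vet social-media signups against these guidelines:\n"
                    + GUIDELINES + "\nAnswer ALLOW or BLOCK, with a one-line reason."},
                {"role": "user", "content":
                    "Bio: " + bio + "\nRecent posts:\n" + "\n".join(recent_posts)},
            ],
        )
        return resp.choices[0].message.content  # e.g. "BLOCK: ..."

Which is exactly the kind of differentiating feature an incumbent could clone in an afternoon, as noted below.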


There are too many quotable passages, so I'll stop there.

My favourite thing about these products is how they want to take on giants with these differentiating features that would be trivial plug-ins for the giants if they were to pose any threat. It's common in the enterprise blockchain world as well. It'll take SAP much less time to figure out blockchain than it will for your shitty blockchain startup to work out whatever SAP is.

866

the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @[email protected]):

Naw, I figured it out; they absolutely don't care if AI doesn't work.

They really don't. They're pot-committed; these dudes aren't tech pioneers, they're money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out, it's literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it's not real and they don't care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don't know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it's total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it's all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here, particularly wants an informed populace, after all.

the linked mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism

868

there’s an alternate universe version of this where musk’s attendant sycophants and bodyguard have to fish his electrocuted/suffocated/crushed body out from the crawlspace he wedged himself into with a pocket knife

871

by Amy Castor and me, the second in our how-to series on how you can build yourself an unfriendly AI! Here's part one from June.

872

a friend linked this to me earlier today: nitter (someone else maybe archive it? I don't know what tusky has done to birdsite and how to make wayback play nice)

in one lens/view one could see this as just more of the same (if people were already gunning for YC track shit, there's other things already implied etc), but even so: just how bad must the "belief" be for young people to feel this intensely about it?

I'm over here just watching the arc of likely events and I can barely fathom the anger and disappointment that may[0] come about in a few years after this

[0] - "may" because it seems a lot of folks have their anger redirected far too easily; remains to be seen if it can remain correctly directed in future

873
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Since the thread was flagged and marked dead, here's what it says if you don't have showdead on:

I was working late and half my company is in California so I was still at work when that rumour hit. My boss is former YC. Anyway, I don’t want to fan something that might not be true, but it doesn’t matter because the way people reacted really affected me. It’s almost 2 and I can’t stop thinking about it and just the gloating. I obviously knew who he was back when he posted here and I remembered he got some BS ban but nothing that justified people saying they hoped it was true, they hoped they “finally got him.” Maybe that’s normal in the Valley but I come from a more traditional culture where it’s not normal to root for someone to get sick or die.

I was supposed to move to the west coast in December for my company but now I’m having my doubts. I don’t think all Americans are like that, because half my family is American so I know they’re not, or even all programmers, but what I learned about venture capital tech, this community, is really bothering me.

I think I want out. I wouldn’t mind living in the States but not if I’m going to be working for people I can’t trust. And I’m having issues of conscience too because I remember when Dan and the others went after MOC and I obviously knew what they were trying to make happen and now that it seems they may have, I just don’t want to be part of this industry or community anymore.

Thoughts? Anyone in the same space about all this?

This dead HN comment says:

There’s a rumour going around Silicon Valley that MOC died today. As far as I know it’s still completely unsubstantiated, but the reactions I’ve heard have been sickening. And I feel bad because I should have stood up for him against you know who.

This dead comment links to this thread.

Here are some comments about it from the dead man himself.

874

of course he was afraid of Russian nukes. this only prompted Ukrainian engineers to bypass Starlink entirely; the current sea drones, like the one used in the second Kerch bridge strike, or those used against the SIG tanker and the Olenegorsky Gornyak landing ship, use domestic technology only

875

"Oh no! - Anyway" meme intensifies.
