Major Areas of Concern:

Restructuring: Analysis of planned changes to the nonprofit's relationship with its for-profit subsidiary

  • OpenAI plans to remove limits on investor returns: OpenAI once capped investor profits at a maximum of 100x to ensure that, if the company succeeds in building AI capable of automating all human labor, the proceeds would go to humanity. They have now announced plans to remove that cap.
  • OpenAI portrays itself as preserving nonprofit control while potentially disempowering the nonprofit: OpenAI claims to have reversed course on a decision to abandon nonprofit control, but the details suggest that the nonprofit’s board would no longer have all the authority it would need to hold OpenAI accountable to its mission.
  • Investors pressured OpenAI to make structural changes: OpenAI has admitted that it is making these changes to appease investors who have made their funding conditional on structural reforms, including allowing unlimited returns—exactly the type of investor influence OpenAI’s original structure was designed to prevent.

CEO Integrity: Concerns regarding leadership practices and misleading representations from OpenAI CEO Sam Altman

  • Senior employees have attempted to remove Altman at each of the three major companies he has run: Senior employees at Altman’s first startup twice urged the board to remove him as CEO over “deceptive and chaotic” behavior, while at Y Combinator, he was forced out and accused of absenteeism and prioritizing personal enrichment.
  • Altman claimed ignorance of a scheme to coerce employees into ultra-restrictive NDAs: However, he signed documents giving OpenAI the authority to revoke employees’ vested equity if they didn’t sign the NDAs.
  • Altman repeatedly lied to board members: For example, Altman stated that the legal team had approved a safety process exemption when they had not, and he reported that one board member wanted another board member removed when that was not the case.

Transparency & Safety: Concerns regarding safety processes, transparency, and organizational culture at OpenAI

  • OpenAI coerced employees into signing highly restrictive NDAs threatening their vested equity: Former OpenAI employees faced highly restrictive non-disclosure and non-disparagement agreements that threatened the loss of all vested equity if they ever criticized the company, even after resigning.
  • OpenAI has rushed safety evaluation processes: OpenAI rushed safety evaluations of its AI models to meet product deadlines and significantly cut the time and resources dedicated to safety testing.
  • OpenAI insiders described a culture of recklessness and secrecy: OpenAI employees have accused the company of not living up to its commitments and systematically discouraging employees from raising concerns.

Conflicts of Interest: Documenting potential conflicts of interest of OpenAI board members

  • OpenAI’s nonprofit board has multiple seemingly unaddressed conflicts of interest: While OpenAI defines “independent” directors as those without OpenAI equity, the board appears to overlook conflicts from members’ external investments in companies that benefit from OpenAI partnerships.
  • CEO Sam Altman downplayed his financial interest in OpenAI: Despite once claiming to have no personal financial interest in OpenAI, much of Altman’s $1.6 billion net worth is spread across investments in OpenAI partners including Retro Biosciences and Rewind AI, which stand to benefit from the company’s continued growth.
  • No recusals announced for critical restructuring decision: Despite these conflicts, OpenAI has not announced any board recusals for the critical decision of whether they will restructure and remove profit caps, unlocking billions of dollars in new investment.
 

[–] [email protected] -1 points 5 hours ago (2 children)

> they're just going to lower the quality of their products.

Great, then you as a user of the service/product can choose not to deal with companies that use AI.

The rest of the users can enjoy more choices, as they might simply prefer AI.

> AI in the strict sense doesn't exist yet.

WTF?

[–] [email protected] -1 points 5 hours ago (5 children)

My dude, don't put words in my mouth.

I said that protesting AI is dumb, not that all protesting is dumb.

Also, when has protesting AI actually worked to achieve real improvement?

[–] [email protected] -2 points 5 hours ago (4 children)

Two things: AI is broader than LLMs, and if you were even remotely correct, then artists and writers would not protest it at all.

-7
submitted 6 hours ago* (last edited 6 hours ago) by [email protected] to c/[email protected]
 

I support giving users the choice between AI and non-AI services and products.

But in my opinion, it is dumb for workers to protest the use of AI in their industries.

AI is going to change a lot of industries forever, and there is currently almost nothing workers and unions can do to actually stop that progress.

I have even seen some worker unions that protest AI use still accept working with companies that use AI, and I support them, because they would not be taken seriously otherwise.

In short: I think workers should protest working conditions and wages rather than protesting technology.

AI adoption is inevitable.

[–] [email protected] 2 points 9 hours ago* (last edited 9 hours ago) (1 children)

One problem I see in your way of thinking here: adblock use among social media users will never reach 100%.

Furthermore, ad blockers are getting weaker with Google Chrome's Manifest V3 (MV3). All of this leads me to the logical conclusion that you can only change the sources that power users rely on, which will eventually lead to better privacy for everyone involved. You will never be able to control people's setups to make them super private.

[–] [email protected] 6 points 1 day ago* (last edited 1 day ago) (5 children)

Said that on social media

[–] [email protected] 4 points 2 days ago

Olvid for people close to me; Signal for strangers.

[–] [email protected] 4 points 1 week ago

What. The. Fuck. Are You Talking About?

[–] [email protected] 15 points 1 week ago

Yes, they are even republished by OCCRP.
