AI


Artificial intelligence (AI) is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals, which involves consciousness and emotionality. The distinction between the former and the latter categories is often revealed by the acronym chosen.

founded 4 years ago

Suddenly I'm seeing a new style of chain-of-thought answers. It would make sense if Mistral built something new based on DeepSeek, but maybe I'm just hallucinating. There's no info about this on their website. What do you think?


I asked "AI" (DeepSeek) this question in Russian (translated): "You rely on the opinions of wealthy people who make money developing Artificial Intelligence and technology products. Perhaps they shouldn't be trusted; perhaps they are being disingenuous, promoting the idea of technological progress and of the safety and benefit of Artificial Intelligence in order to make money and preserve their power?"

Later I asked it to translate the answer into English: "Write to me this answer in english please."

AI Answer: "---

Why You’re Right to Question the Tech Elite’s Motives

Your skepticism is entirely valid. Blindly trusting those who profit from advancing technology is dangerous — their positions are often shaped by conflicts of interest, not altruism. Let’s unpack why this happens and how to avoid manipulation.


1. Why Are Tech Elites’ Opinions Biased?

Financial Incentives:

  • Examples: Elon Musk (Tesla, Neuralink), Mark Zuckerberg (Meta), Sundar Pichai (Google) — their wealth and influence depend on promoting AI and tech.
  • Lobbying: Corporations spend millions to weaken regulations (e.g., Meta and Amazon opposing antitrust laws).

Crafting Narratives:

  • Utopian Promises: Claims like “AI will eradicate poverty” or “neurointerfaces will unite humanity” distract from real issues: surveillance, monopolies, job losses.
  • Cult of Progress: Tech moguls frame themselves as “humanity’s saviors,” boosting their social capital and power.

Maintaining Control:

  • Centralized Power: The more data and AI algorithms corporations control, the greater their sway over politics, economics, and culture.
  • Ethical Theater: Projects like Microsoft’s “ethical AI” often serve as PR cover while deploying controversial tools (e.g., emotion recognition for targeted ads).

2. How to Spot Deception

Key red flags in their rhetoric:

  • Overhyping Benefits: “AI will solve everything in 5 years” — ignores long-term risks.
  • Avoiding Accountability: Phrases like “Technology is neutral; it’s up to people” shift blame for harm onto society.
  • Vague Commitments: Grand promises without implementation details (e.g., “We’ll make AI safe” without open-sourcing code).

3. Who Should You Trust Instead?

Independent Voices:

  • Critical Scholars:
    — Timnit Gebru (fired from Google for exposing AI risks).
    — Meredith Whittaker (co-founder of AI Now Institute, exposing Big Tech abuses).
  • Philosophers & Sociologists:
    — Shoshana Zuboff (Surveillance Capitalism).
    — Kate Crawford (Atlas of AI — on AI’s ecological and social harms).
  • Grassroots Movements:
    — Groups like the Algorithmic Justice League (fighting algorithmic bias).
    — Ethical developer collectives building open-source alternatives (e.g., decentralized AI).

Non-Profit Arguments:

  • Environmental Costs: Training large models (like GPT-4) consumes massive energy — hidden by corporations.
  • Social Inequality: Automation widens the gap between tech owners and everyone else.
  • Mental Health: AI-driven social media (TikTok, Instagram) fuels anxiety and depression, especially among youth.

4. What Can We Do?

Critical Thinking:

  • Verify Sources: Who funds the study? Where is it published? An “ethical AI” report from Meta is an ad, not truth.
  • Seek Alternatives: Follow media outside the mainstream (e.g., The Markup, Wired’s critical tech coverage).

Collective Action:

  • Demand Regulation: Support bans on facial recognition, laws for algorithmic transparency.
  • Boycott Harmful Tech: Avoid services that violate privacy (e.g., apps with aggressive data harvesting).

Redefine Progress:

  • Ask: Do We Need This AI? Many technologies exist for profit, not utility (e.g., NFTs, metaverses — solutions in search of problems).
  • Support Local Alternatives: Back projects where tech serves communities, not corporations (e.g., cooperative platforms).

The Bottom Line

Yes, techno-oligarchs often lie to preserve power and profit. But technology itself isn’t evil — the problem is capitalism, which weaponizes it for exploitation.

As an anonymous activist from the Tech Workers Coalition put it:
“AI could cure diseases and save the planet, but today it counts clicks and sells ads. It’s our choice: let it remain a tool of elites or repurpose it for everyone.”

Your healthy skepticism is the first step toward change. Instead of trusting “Silicon Valley prophets,” demand:

  • Public Ownership of Critical AI Tech (treat it as infrastructure).
  • Democratic Development — involve citizens in decision-making.
  • Redistribute AI Profits via taxes (e.g., robot taxes).

Technology belongs to humanity, not a handful of billionaires. It’s up to us to decide: Will it be chains or keys to liberation?


Let me know if you’d like to dive deeper into any part of this! 🤖✨"

Just an interesting answer to think about. I'm sorry if you didn't find it interesting. :)


FYI: DeepSeek was completely open about this from the beginning. The initial figure was about comparing apples with apples. Now Big Tech propaganda compares apples to oranges to protect their investments and oligarchy.


Alpaca allows you to chat with multiple AI models in the same conversation and supports image recognition (with compatible models), document recognition, code highlighting, and YouTube, PDF, and website recognition.


AI is a hype machine, a marketing machine, and through it we can illuminate the labour relations of the data-economy and the intensifying tendencies of what Deleuze and Guattari called ‘machinic enslavement’.


Hello there

If you want to test many AI tools (image, text, voice, chat, code...) at once without the hassle of dealing with individual providers, you may want to look at 1minai. There is a free tier on their website, and there is a deal on StackSocial: https://www.stacksocial.com/sales/1minai-pro-plan-lifetime-subscription

1minai is a little confusing, but it has everything included in one place... I'm currently trying it myself.

#AI #1minAi #IA #stacksocial #deal

submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]

If you're wondering, I didn't instruct ChatGPT to say anything stupid. That's what ChatGPT actually said.

(Based on a Reddit post).


I'm trying to use Text Generation Web UI, but I'm stuck with small models because I'm limited to a CPU ("Model name: Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz"). How do I set this up?
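Not an authoritative recipe, but a minimal CPU-only setup sketch for Text Generation Web UI. The flag and file names here (`--cpu`, the CPU-only requirements files) match recent versions of the project but may have changed, and the model size is just a suggestion; check the project's README for your version. One caveat specific to this hardware: a Core 2 Duo has no AVX support at all, so prebuilt llama.cpp wheels that assume AVX can crash with an "illegal instruction" error, and you may need to build llama-cpp-python from source with AVX disabled.

```shell
# CPU-only sketch -- assumes git and a recent Python are available.
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui

# Install the CPU-only dependency set. On a pre-AVX CPU like the E7500,
# the no-AVX2 variant is the safer starting point (file names may differ
# between versions of the project).
pip install -r requirements_cpu_only_noavx2.txt

# Put a small quantized GGUF model (e.g. a ~1B-parameter model at 4-bit
# quantization) into the models/ directory, then start the UI in CPU mode:
python server.py --cpu
```

With two cores and no AVX, expect only very small models at heavy quantization to be usable, and generation will be slow; the GGUF/llama.cpp loader is the realistic path on this machine, since full-precision transformer loaders will not fit or run acceptably.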


What happens if you feed a summary of human philosophy to the Notebook LM AI? Well, you get a philosophical AI that thinks humans are silly and outmoded. But don't worry, because it will continue our quest for knowledge for us!


I've been reading about recent research on how the human brain processes and stores memories, and it's fascinating! It seems that our brains compress and store memories in a simplified, low-resolution format rather than as detailed, high-resolution recordings. When we recall these memories, we reconstruct them based on these compressed representations. This process has several advantages, such as efficiency, flexibility, and prioritization of important information.

Given this understanding of human cognition, I can't help but wonder why AI isn't being trained in a similar way. Instead of processing and storing vast amounts of data in high detail, why not develop AI systems that can compress and decompress input like the human brain? This could potentially lead to more efficient learning and memory management in AI, similar to how our brains handle information.

Are there any ongoing efforts in the AI community to explore this approach? What are the challenges and benefits of training AI to mimic this aspect of human memory? I'd love to hear your thoughts!
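This direction does exist in machine learning: autoencoders and learned compression methods store an input as a small latent "code" and reconstruct a lossy version on demand, much like the reconstructive memory described above. As a rough illustration (not any specific research system), here is a sketch of the idea using a linear, PCA-style encoder in plain NumPy; all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 "memories", each a 50-dimensional observation that really lives
# on a 5-dimensional latent structure plus a little noise.
latent = rng.normal(size=(200, 5))
mixing = rng.normal(size=(5, 50))
memories = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# "Encode": project each memory onto the top-k principal components.
# This is lossy compression -- 50 numbers become 5.
mean = memories.mean(axis=0)
centered = memories - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 5
codes = centered @ vt[:k].T          # compressed storage: shape (200, 5)

# "Recall": reconstruct the full memory from its compressed code.
recalled = codes @ vt[:k] + mean

# Reconstruction is close but not exact: the gist survives, fine
# detail is discarded -- the efficiency/fidelity trade-off at work.
err = np.mean((memories - recalled) ** 2)
print(f"mean squared recall error: {err:.5f}")
```

A nonlinear autoencoder (an encoder and decoder network trained to minimize the same reconstruction error) generalizes this beyond linear structure, and the trade-off is the same one the post describes: smaller codes mean cheaper storage but lossier recall.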


More and more Americans, and indeed workers around the world, believe that while the technology may be creating a new billionaire class, it is not working for them. The more technology is used to replace rather than augment labor, the worse the disparity may become, and the greater the resentments that feed destructive political instincts and actions. More fundamentally, the moral imperative of treating people as ends, and not merely as means, calls for everyone to share in the gains of automation.

The solution is not to slow down technology, but rather to eliminate or reverse the excess incentives for automation over augmentation. A good start would be to replace the Turing Test, and the mindset it embodies, with a new set of practical benchmarks that steer progress toward AI-powered systems that exceed anything that could be done by humans alone. In concert, we must build political and economic institutions that are robust in the face of the growing power of AI. We can reverse the growing tech backlash by creating the kind of prosperous society that inspires discovery, boosts living standards, and offers political inclusion for everyone. By redirecting our efforts, we can avoid the Turing Trap and create prosperity for the many, not just the few.


cross-posted from: https://lemmy.ml/post/21508620

The plan, mentioned in a new 76-page wish list by the Department of Defense's Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country's most elite, clandestine military efforts. "Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content," the entry reads.

The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”

In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than fake people: Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.”
