this post was submitted on 24 Aug 2024
-21 points (18.2% liked)

science

A community to post scientific articles, news, and civil discussion.

 
Key points
  • The hippocampus enables abstract reasoning; LLMs mirror this through pattern-based language prediction.
  • Future AI could emulate human inference by integrating multimodal learning and reinforcement methods.
  • AI's evolution hinges on bridging prediction and reasoning, moving toward deeper, human-like understanding.
top 7 comments
[–] [email protected] 21 points 5 months ago
[–] [email protected] 12 points 5 months ago

"Can LLMs think like us?"

No.

"Can LLMs think—?"

No.

"Can LLMs—?"

No.

[–] [email protected] 7 points 5 months ago
[–] [email protected] 5 points 5 months ago

Facts, reasoning, ethics, etc. are outside the scope of an LLM. Expecting otherwise is like expecting a stand mixer to bake a cake: it helps with a decent part of the process, but it can't use heat to turn batter into a tasty dessert. An AI like one from the movies would require many more pieces than an LLM can provide, and saying otherwise is a category mistake*.

That isn't to say that something won't be developed eventually, but it would be FAR beyond an LLM if it is even possible.

(* See also: https://plato.stanford.edu/entries/category-mistakes/)

[–] [email protected] 4 points 5 months ago

Not like us, but maybe like OP 🤣

[–] [email protected] -1 points 5 months ago

"Can LLMs think?" Yes. "Like us?" No... not right now, anyway.

[–] [email protected] -3 points 5 months ago

The fear in here is palpable.