The article is a sharp critique of how what the author calls the "Efficiency Lobby" has been pursuing a narrow, task-oriented idea of intelligence focused on productivity. That focus, driven by corporate interests, necessarily leads to individualistic consumption of AI services, hindering genuine creativity, open-ended exploration, and collaboration.
A recent paper introduces MemOS, a new approach to memory management for LLMs that treats memory as a governable system resource, and it has the potential to become a truly collaborative, community-driven foundation for AI.
At its core are MemCubes, units that encapsulate both semantic content and critical metadata such as provenance and versioning. MemCubes are designed to be composed, migrated, and fused over time, unifying three distinct memory types: plaintext, activation, and parameter memories.
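To make that concrete, here is a minimal sketch of what a MemCube-like container might look like. The field names, the three-type enum, and the `fuse` merge policy are my own assumptions based on the paper's description, not its actual API:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any

class MemoryType(Enum):
    PLAINTEXT = "plaintext"    # retrievable text and knowledge fragments
    ACTIVATION = "activation"  # cached KV states / hidden activations
    PARAMETER = "parameter"    # weight deltas, e.g. LoRA-style skill patches

@dataclass
class MemCube:
    payload: Any                  # the semantic content itself
    memory_type: MemoryType
    provenance: str               # who or what produced this memory
    version: int = 1
    lineage: list[str] = field(default_factory=list)  # prior contributors

    def fuse(self, other: "MemCube") -> "MemCube":
        """Combine two cubes of the same type into a new, versioned cube."""
        assert self.memory_type is other.memory_type
        return MemCube(
            payload=(self.payload, other.payload),  # placeholder merge policy
            memory_type=self.memory_type,
            provenance=f"fuse({self.provenance},{other.provenance})",
            version=max(self.version, other.version) + 1,
            lineage=self.lineage + other.lineage
                    + [self.provenance, other.provenance],
        )
```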
This architecture directly addresses the limitations of stateless LLMs, enabling long-context reasoning, continual personalization, and knowledge consistency. The paper also proposes a mem-training paradigm, where knowledge evolves continuously through explicit, controllable memory units, blurring the line between training and deployment and paving the way to extend data parallelism to a distributed intelligence ecosystem.
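One way to picture mem-training, building on the sketch above: instead of folding new knowledge into weights via gradient descent, the running system keeps learning by minting and revising memory units. The `MemStore` interface and the `llm` callable below are hypothetical stand-ins:

```python
# Hypothetical sketch: "mem-training" as an explicit memory-update loop.
# Rather than backprop into weights, each interaction can mint or revise
# a MemCube in a governed store, so training and deployment become one loop.

class MemStore:
    def __init__(self) -> None:
        self._cubes: dict[str, MemCube] = {}

    def retrieve(self, query: str) -> list[MemCube]:
        # Placeholder relevance test; a real system would use embeddings.
        return [c for c in self._cubes.values() if query in str(c.payload)]

    def commit(self, key: str, cube: MemCube) -> None:
        prior = self._cubes.get(key)
        self._cubes[key] = prior.fuse(cube) if prior else cube

def serve_and_learn(store: MemStore, query: str, llm) -> str:
    context = store.retrieve(query)      # read path: long-context reasoning
    answer = llm(query, context)         # any model call; signature assumed
    store.commit(query, MemCube(         # write path: continual learning
        payload=answer,
        memory_type=MemoryType.PLAINTEXT,
        provenance="session",
    ))
    return answer
```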
It would be possible to build a decentralized network where a common pool of MemCubes acts as shareable, composable containers of memory, akin to a BitTorrent for knowledge. Users could contribute their own memory artifacts, such as structured notes, refined prompts, learned patterns, or even "parameter patches" encoding specialized skills, each encapsulated within a MemCube.
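The BitTorrent analogy suggests content addressing: identify each cube by the hash of a canonical serialization, so any peer can verify what it fetched. The sketch below assumes JSON-serializable payloads and stubs out peer discovery, which a real network would delegate to a DHT:

```python
import hashlib
import json

def cube_id(cube: MemCube) -> str:
    # Assumes the payload is JSON-serializable; a real scheme would need
    # a canonical serialization for all three memory types.
    blob = json.dumps(
        [cube.payload, cube.memory_type.value, cube.provenance, cube.version],
        sort_keys=True,
    ).encode()
    return hashlib.sha256(blob).hexdigest()

class SwarmIndex:
    """Maps cube IDs to the peers holding them (a DHT in a real network)."""
    def __init__(self) -> None:
        self._peers: dict[str, set[str]] = {}

    def announce(self, cid: str, peer: str) -> None:
        self._peers.setdefault(cid, set()).add(peer)

    def locate(self, cid: str) -> set[str]:
        return self._peers.get(cid, set())
```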
Using a common infrastructure would allow anyone to share, remix, and reuse these building blocks in all kinds of ways. Such an architecture would directly address Morozov's critique of privatized "stonefields" of knowledge, instead creating a truly public digital commons.
This distributed platform could effectively amortize computation across the network, similar in spirit to projects like SETI@home. Instead of constantly recomputing information, users could build a local cache of MemCubes relevant to their context, drawn from the shared pool. If a piece of knowledge or a specific reasoning pattern has already been encoded and optimized within a MemCube by another user, it can simply be reused, dramatically reducing redundant computation and accelerating inference.
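A minimal sketch of that reuse path, using the hypothetical `SwarmIndex` above: check the local cache, then the swarm, and only fall back to fresh computation on a miss. The `fetch` and `compute` callables and the keying scheme are assumptions:

```python
def resolve(key: str, cache: dict[str, MemCube],
            index: SwarmIndex, fetch, compute) -> MemCube:
    # `key` could be a content hash or a query fingerprint; a fingerprint
    # is needed if cache misses are to be filled by fresh computation.
    if key in cache:                       # 1. local hit: free
        return cache[key]
    peers = index.locate(key)
    if peers:                              # 2. swarm hit: one download
        cube = fetch(key, peers)           # hypothetical transfer function
        cache[key] = cube
        return cube
    cube = compute()                       # 3. miss: pay the cost once...
    cache[key] = cube
    index.announce(key, peer="me")         # ...then share it with the pool
    return cube
```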
The inherent reusability and composability of MemCubes make possible a collaborative environment in which every user both contributes to and benefits from the shared pool. Efforts like Petals, which already facilitate distributed inference of large models, could be extended with MemOS to share dynamic, composable memory as well.
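Remixing, in this picture, is just composing cubes from different contributors, continuing the hypothetical sketches above:

```python
# Hypothetical usage: two contributors' notes fused into one shareable cube.
notes = MemCube("summary of the MemOS paper", MemoryType.PLAINTEXT,
                provenance="alice")
tips = MemCube("prompting patterns for memory recall", MemoryType.PLAINTEXT,
               provenance="bob")
merged = notes.fuse(tips)               # one cube, two lineage entries
print(cube_id(merged), merged.lineage)  # verifiable ID plus full provenance
```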
This has the potential to transform AI from a tool for isolated consumption to a medium for collective creation. Users would be free to mess about with readily available knowledge blocks, discovering emergent purposes and stumbling on novel solutions.