I’m versed enough in SQL and RDBMS that I can put things in the third normal form with relative ease. But the meta seems to be NoSQL. Backends often don’t even provide a SQL interface.

So, as far as I know, NoSQL is essentially a collection of files, usually JSON, paired with some querying capacity.

  1. What problem is it trying to solve?
  2. What advantages does it offer over a traditional RDBMS?
  3. Where are its weaknesses?
  4. Can I make queries with complex WHERE clauses? (See the sketch below for the kind of thing I mean.)
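
To illustrate question 4: this is a minimal sketch of the sort of query I have in mind, written against a hypothetical `orders` collection in MongoDB via pymongo. The database, collection, and field names here are all invented for the example:

```python
# Hypothetical example: the SQL query
#   SELECT * FROM orders
#   WHERE status = 'shipped' AND (total > 100 OR priority = 'high')
# expressed as a MongoDB filter document via pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]  # "shop" and "orders" are invented names

cursor = orders.find({
    "status": "shipped",           # top-level fields are ANDed together
    "$or": [                       # $or takes a list of sub-filters
        {"total": {"$gt": 100}},
        {"priority": "high"},
    ],
})
for doc in cursor:
    print(doc)
```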
HamsterRage 3 points 1 day ago

I spent 30 years working with derivatives of the Pick Operating System and its integrated DBMS, notably Universe and Ultimate. Back in the day, it was very, very difficult to even explain how they worked to others, because the idea of a key/value store wasn't commonly understood the way it is today.

I was surprised at how similar MongoDB is to Pick in many respects. Basically, key/value with variant record structures. MongoDB uses something very close to JSON, while Pick uses variable-length delimited records. In either case, access to a particular record is near instantaneous given the record key, regardless of how large the file is. Back in the 1980s and earlier, this was a huge advantage over most of the RDBMS systems available, as storage was much slower than today. We could implement a system that would otherwise take a huge IBM mainframe on hardware that cost 1/10 the price.
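
To make the record format concrete, here's a minimal sketch of a Pick-style delimited record in Python. The delimiter characters (attribute mark 0xFE, value mark 0xFD, subvalue mark 0xFC) are the standard Pick marks; the record contents are invented for illustration:

```python
# Pick delimits fields ("attributes") with attribute marks (0xFE) and
# multivalues within a field with value marks (0xFD). Subvalue marks
# (0xFC) nest one level further (unused in this example).
AM, VM, SM = "\xfe", "\xfd", "\xfc"

# A record keyed by customer id; attribute 2 holds a multivalued phone list.
record = AM.join(["Acme Corp", VM.join(["555-0100", "555-0199"]), "NY"])

def parse(rec: str):
    """Split a raw record into a list of attributes, each a list of values."""
    return [attr.split(VM) for attr in rec.split(AM)]

print(parse(record))
# [['Acme Corp'], ['555-0100', '555-0199'], ['NY']]
```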

From a programming perspective, everything revolved around acquiring and managing keys. Even index files, if you had them (and in the early days we didn't, so we maintained our own cross-reference files), were just files keyed on some value from inside the records of the main data file. Each record in an index file was just a list of record keys into the main data file.
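
Here's a minimal sketch of that cross-reference idea, with plain Python dicts standing in for keyed files (all names are invented for illustration):

```python
# The "main data file": records addressed directly by key.
customers = {
    "C100": {"name": "Acme Corp", "city": "NY"},
    "C200": {"name": "Globex", "city": "NY"},
    "C300": {"name": "Initech", "city": "TX"},
}

# The "index file": keyed on a value from inside the main records (city),
# where each record is just a list of keys back into the main file.
city_index: dict[str, list[str]] = {}
for key, rec in customers.items():
    city_index.setdefault(rec["city"], []).append(key)

# A lookup never traverses the main file: fetch keys, then fetch records.
for key in city_index.get("NY", []):
    print(customers[key])
```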

Yes, you can (and we did) nest data that would be multiple tables in an SQL database into a single record. This was something called "associated multivalues". Alternatively, you could store a list of keys to a second file in a single field of the first file. We did both.
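
A sketch of those two patterns, shown as MongoDB-style documents (the field names are invented for illustration):

```python
# 1. Nesting: what would be a separate order_lines table in SQL lives
#    directly inside the order record (the associated-multivalue pattern).
order_embedded = {
    "_id": "ORD-1",
    "customer": "C100",
    "lines": [  # one entry per line item, nested in the parent record
        {"sku": "A-17", "qty": 2, "price": 9.99},
        {"sku": "B-03", "qty": 1, "price": 24.50},
    ],
}

# 2. Key list: a single field holds only the keys of records in a second file.
order_by_reference = {
    "_id": "ORD-2",
    "customer": "C100",
    "line_ids": ["LINE-88", "LINE-89"],  # keys into a separate lines file
}
```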

One thing that became very expensive in time, disk, and CPU was traversing an entire file. 99% of the time we were able to architect our systems so that this never happened in day-to-day processing.

A lot of the stuff we did would horrify programmers used to SQL, but it was just a very different paradigm. Back in a time when storage and computing power were limited and expensive, the systems we built stored otherwise unthinkable amounts of data and accessed it at lightning speed on cheap hardware.

To this day, the SQL concepts of joins and normalization just seem like a huge waste of space and power to me.

[email protected] 1 point 22 hours ago

This was super cool, thanks for sharing