Point 1 may be a problem if you're embedded. Otherwise, enjoy the fact that SQLite's roughly 150,000 lines of C code are some of the most tested code on the planet.
Point 2 doesn't tackle the reasons why there is a mismatch between in-memory representations and tabular data in the first place. The tabular side has real benefits, such as the wins you get from a schema built around normal form. Object databases have their place, but so do fully normalized database tables.
Point 3 doesn't strike me as useful. I don't find myself reverting rows to previous points in history often, if ever. Tracking versions of rows is useful; I would argue that "reverting" is not, since a revert is better tracked by adding a new version as a forward update.
Overall, sure, a new, lightweight object database built on data structures* may have a place somewhere. But to replace SQLite? I think not.
*The Java API gives me the same recoil as the Java JSON API. Pulling out data key by key feels like pulling teeth tooth by tooth.
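For what it's worth, that pain isn't specific to any one library. Here's a minimal sketch, using plain java.util.Map rather than xitdb's actual API, of what key-by-key extraction from a generic nested structure tends to look like:

```java
import java.util.Map;

public class KeyByKey {
    public static void main(String[] args) {
        // A nested generic structure, as a key-value API might hand it to you.
        Map<String, Object> doc = Map.of(
            "user", Map.of(
                "name", "Ada",
                "address", Map.of("city", "London")));

        // Pulling out data key by key: every level needs a lookup and a cast.
        @SuppressWarnings("unchecked")
        Map<String, Object> user = (Map<String, Object>) doc.get("user");
        @SuppressWarnings("unchecked")
        Map<String, Object> address = (Map<String, Object>) user.get("address");
        String city = (String) address.get("city");

        System.out.println(city); // London
    }
}
```

Three lookups, two unchecked casts, and zero help from the compiler if a key is misspelled. That's the "tooth by tooth" feeling.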
Large dependencies are not only a problem in embedded programming. That sort of thinking is how we ended up with the explosion of dependencies and software complexity we have today.
> Object databases have their place, but so do fully normalized database tables.
Agreed, but you can build a stricter data model on top of generic data structures. The idea is to keep them separate rather than hard-coding just one specific data model. See for example running DataScript on top of xitdb: https://gist.github.com/radarroark/663116fcd204f3f89a7e43f52...
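To make the layering idea concrete (with made-up names; this is a sketch of the concept, not DataScript's or xitdb's actual API), a stricter typed model can sit on top of a generic map-shaped store while the storage layer stays schema-free:

```java
import java.util.Map;

// A stricter, domain-specific model layered on top of generic data structures.
// The generic layer stays schema-free; the typed wrapper enforces shape.
record User(String name, String city) {
    // Hypothetical adapter: validate and lift a generic map into the typed model.
    static User fromMap(Map<String, Object> m) {
        if (!(m.get("name") instanceof String n) || !(m.get("city") instanceof String c))
            throw new IllegalArgumentException("not a User: " + m);
        return new User(n, c);
    }
}

public class Layering {
    public static void main(String[] args) {
        Map<String, Object> generic = Map.of("name", "Ada", "city", "London");
        User u = User.fromMap(generic); // strict model on top, storage stays generic
        System.out.println(u.name());   // Ada
    }
}
```

The point is that strictness becomes an optional layer you choose per application, rather than something hard-coded into the storage engine.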
> Tracking versions of rows is useful. I would argue that "reverting" is not, since the reverting would be better tracked by adding a new version as a forward update.
"Adding a new version" to revert is exactly what xitdb does. See this line, which appends a new "version" of the database whose value points to an older version:
history.append(history.getSlot(historyIndex));
It's fine if you don't find immutability useful directly, but it is also what enables reading the db while writes are happening, which is clearly useful even if you don't care about time travel.
Weren't these cannibalized companies working on an implementation that is not a terrible idea? I'd go so far as to say it was a decent enough idea that we can applaud the effort while also rooting for the underdogs.
Whatever the pasta is now, it cooks quite differently: you get softer noodles much quicker, and they congeal into a blob in a way the old recipe never did.
Additionally, the water gets extra starchy with the new recipe.
I haven't seen it mentioned anywhere else, but the quantity per box has decreased as well. Special shapes came in smaller weights, but now even the regular box does too.
That is the specific problem I get now: the powdered sauce mix never turns into a sauce. Instead it stays in horrid blobs of unreconstituted powder while half the noodles go uncoated. When that happens three times in a row, across three separate batches/box date codes and different milk and butter, while I can cook your competitor's product with no issues, and I used to be able to cook yours, I know I'm not just being dense. That means it's your fault, and you're out. Forever. Having real alternatives means I have no mercy for enshittification.
I should probably also note that this is specifically the "Thick and Creamy" variant, because the "Original" flavor got banned from the household a long time ago for somehow being inferior to store-brand generic. Kraft just really does not want our business.