Or just forget disks altogether: keep everything in the memory of
several servers and back up to disk periodically. See
<http://delivery.acm.org/10.1145/1060000/1059805/p30-gray.pdf> (2005)
for various arguments that disk-based row-oriented databases are no
longer the be-all and end-all.
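
A minimal sketch of that pattern in C, assuming the application can
serialize its in-memory state into a buffer (snapshot() and its
arguments are made-up names for illustration): write the snapshot to
a temporary file, fsync() it, then rename() it over the previous one
so a crash never leaves a torn backup.

    #include <stdio.h>
    #include <unistd.h>

    /* Write one durable snapshot of the in-memory state.
       Call this from a timer, e.g. every 30 seconds. */
    static int snapshot(const char *path, const char *data, size_t len)
    {
        char tmp[4096];
        snprintf(tmp, sizeof tmp, "%s.tmp", path);
        FILE *f = fopen(tmp, "wb");
        if (!f) return -1;
        if (fwrite(data, 1, len, f) != len ||
            fflush(f) != 0 || fsync(fileno(f)) != 0) {
            fclose(f);
            return -1;
        }
        if (fclose(f) != 0) return -1;
        return rename(tmp, path);  /* atomic: readers see old or new, never half */
    }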

On Wed, Jul 10, 2019 at 6:43 PM Lassi Kortela <xxxxxx@lassi.io> wrote:
> There are basically two principled approaches:
>
> 1) Be content with the 30-second auto-sync like everyone else.
>
> 2) Carefully select and configure your whole hardware, firmware, OS,
> database, language and application stack so that every level of the
> stack reliably delivers the sync guarantees you need.

Another way to think about this: if you want some kind of manual sync,
it means you want a guarantee that a system crash will wipe out less
than 30 seconds' worth of data. What kind of home/office scenario is
so critical that every minute counts? More likely it's some kind of
server collecting live financial or scientific data from an expensive
source, which means the organization would also spend money on
reliable server hardware to gather that data (a high-end server may
not be that expensive compared to the instruments that source the
data, or to the value of the data itself). And if the hardware is
that sophisticated, does it really use Unix sync() as the interface
for instant, reliable syncing?
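
For what it's worth, the usual per-file interface on Unix is fsync()
or fdatasync() on a specific descriptor, not the global sync(), which
POSIX only requires to *schedule* writeback for all filesystems. Here
is a sketch of a durable append, assuming the file lives in the
current directory (the directory fsync() matters when the file was
just created, so that the directory entry itself reaches disk):

    #include <fcntl.h>
    #include <unistd.h>

    int durable_append(const char *path, const char *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
        if (fd < 0) return -1;
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        close(fd);

        /* Assumption: path is relative to the current directory. */
        int dirfd = open(".", O_RDONLY | O_DIRECTORY);
        if (dirfd < 0) return -1;
        int rc = fsync(dirfd);  /* make the new directory entry durable */
        close(dirfd);
        return rc;
    }

Even then, fsync() only pushes data as far as the drive admits to;
consumer disks have been known to acknowledge writes still sitting in
their volatile caches, which is the whole-stack problem above.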