If NT disabled the file system cache it would result in a huge slowdown, and I don't believe it does that. Even big IBM mainframes with dozens of SCSI drives still use a file system cache (and I know first hand that they do. The cache lives on a card in our S/390. You have 2 power supplies, fully redundant and hot-swappable. Both of those power supplies have their own battery backup, and while the file system cache is volatile RAM, it also has its own battery backup to sustain the cache through a blackout. 2 power supplies, 2 UPSs, and a battery specifically for the file system cache.)
ReiserFS 4 doesn't do any data or file system logging. Its fix is fully atomic file operations: when you do something, the changes don't take effect until the entire operation is complete.
The idea is that if you're moving a file, for instance, either the entire file is moved or it isn't. There is no in-between state where you have two identical files, a half-copied file, or a partially deleted file. You have a complete file either in its original location or at its destination, nothing in between. Same with all other file operations.
That way you could be writing a file and yank the power cord. If the write finished before the cord was pulled, the file is there in its entirety; if it wasn't finished, the file isn't there at all....
The way it does this is that the change to the directory structure is the very last step. If you're writing a file it doesn't show up until the write is finished, because that's when the directory is updated, and that update is done in one operation.
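You can fake the same guarantee from userland on pretty much any POSIX filesystem with the classic write-then-rename trick. Just a sketch of the idea (the filenames and the generate_report command are made up, and this is not how ReiserFS 4 does it internally):

# Write into a temporary file first; a crash at this point leaves
# the real file untouched.
generate_report > report.txt.tmp
# mv within one filesystem is a single rename() call, so the directory
# is updated in one atomic step: readers see the old report.txt or the
# complete new one, never a half-written file.
mv report.txt.tmp report.txt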
People have done that before, but never with good performance. Reiser figured out how to do it and keep performance good... So ReiserFS 4 makes file system logging obsolete.
In most Linux file systems (although the behavior can be changed), and in NT I believe, files are written asynchronously, so the changes show up in the directory before the data is actually on disk. Take writing to a floppy, for instance. Windows writes floppies synchronously, so whatever app you're using pretty much locks up until the floppy is finished writing, but in Linux the directory is updated almost immediately and not much data has actually been written... The data does get flushed when you umount the floppy, so most of your floppy activity tends to happen then. That's why you should never yank out a mounted file system in Linux (and most other OSes)....
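If you actually want the synchronous, Windows-style behavior on Linux you can ask for it at mount time. A quick sketch, assuming the usual /dev/fd0 device and a /mnt/floppy mount point:

# Mount the floppy synchronously so every write goes straight to
# the disk (slow, but nothing sits around in the cache):
mount -o sync /dev/fd0 /mnt/floppy

# Or keep the default async mount and flush the cache by hand
# before ejecting; umount does the same flush for you:
sync
umount /mnt/floppy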
The BSD file systems don't do logging; they keep the on-disk data in sync with the directory structure. In their eyes the performance hit is worth the ability to keep the data in a constantly correct state. They also use a mechanism called "soft updates", though I'm not sure how that works.
In XFS, JFS, ReiserFS 3, and I'm thinking NTFS, only the metadata is logged; the actual data writes are not... So if the system goes down they can recover and keep the file system itself sane and functioning correctly (having an entire file system blown away sucks, as does having files and directories just disappear), but partially corrupted files may go unchecked... It's then up to the user to discover these niceties.
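The practical upshot is that on those file systems anything that really cares about its data has to flush it to disk itself instead of trusting the cache. A trivial shell-level sketch (important.dat and the mount point are made up):

cp important.dat /mnt/xfs-disk/important.dat
# cp returns once the data is in the cache; force it out to the
# platters before you trust it to survive a crash:
sync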
In ext3 both the metadata and the data are logged. It incurs a slight performance hit, but if your system goes down the metadata is corrected and updated like in the other journalling systems, and then you can still run e2fsck against the last-written files and check them for corruption. That's why Red Hat still uses ext3 exclusively for its enterprise-level operating systems.
But that behavior is modifiable....
from here:
New mount options:
"mount -o journal=update"
Mounts a filesystem with a Version 1 journal, upgrading the
journal dynamically to Version 2.
"mount -o data=journal"
Journals all data and metadata, so data is written twice. This
is the mode which all prior versions of ext3 used.
"mount -o data=ordered"
Only journals metadata changes, but data updates are flushed to
disk before any transactions commit. Data writes are not atomic
but this mode still guarantees that after a crash, files will
never contain stale data blocks from old files.
"mount -o data=writeback"
Only journals metadata changes, and data updates are entirely
left to the normal "sync" process. After a crash, files
may contain stale data blocks from old files: this mode is
exactly equivalent to running ext2 with a very fast fsck on reboot.
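So if you want the paranoid full-data-journalling behavior on a particular partition, it's just a mount option. An illustrative /etc/fstab line (the device and mount point are made up):

# journal data as well as metadata on this partition:
/dev/hda3   /home   ext3   defaults,data=journal   1 2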
Now keep in mind that's all just my understanding... I can't vouch for its accuracy. Personally I use XFS because I want the large (multi-gig) file performance for big media files; on my laptop (although I'm running FreeBSD 5.3 on it right now) I use ext3 when using Linux.