NTFS or FAT file system?

bim27142

Senior member
Oct 6, 2004
213
0
0
i have a general idea between the differences of the two... i'm currently on NTFS for security/stability reasons... which do you guys prefer using?
 

phisrow

Golden Member
Sep 6, 2004
1,399
0
0
Of the two, definitely NTFS. Now that most *nix and BSD systems can fairly easily read (and usually write) NTFS, you no longer gain much compatibility by suffering with FAT. I've still got a few FAT and FAT32 partitions floating around (all on things like USB keys and floppies), but all my serious partitions are either NTFS or ReiserFS these days. Just before anyone brings this one up: I know that manual and/or low-level data recovery is somewhat easier with FAT. But if I care enough about something to even think about doing low-level recovery of it, I care enough to back it up.
 

Schadenfroh

Elite Member
Mar 8, 2003
38,416
4
0
NTFS all the way. I hate having to go through ScanDisk every time I shut down incorrectly.
 

KoolDrew

Lifer
Jun 30, 2004
10,226
7
81
Isn't FAT32 faster on smaller drives? That's the reason I have my OS partition in FAT32: I heard NTFS is faster on drives larger than 32GB or something, but since my partition is smaller than that, isn't FAT32 the way to go for performance?
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
FAT really has no business being on a modern operating system; it was obsolete nearly 10 years ago. At least NTFS has journaling and decent security.
 

KoolDrew

Lifer
Jun 30, 2004
10,226
7
81
What about FAT32? Can you read my above post and answer my question since I am reformatting soon?
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
I don't think that Fat32 will provide much if any performance increase. I think that data integrity is more important. Use NTFS.
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
I think I've seen artificial benchmarks, with fresh partitions and a minimal number of files, where some people have shown FAT to be faster by a very small margin. However, that's only in an artificial benchmark with very small drives and less than a handful of files.

In the real world NTFS is going to be about the same on small drives, and as the drive size and number of files increase, NTFS will outpace FAT.

As Drag stated, FAT is very old and quite obsolete as far as file systems go.

-Erik
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,204
126
Originally posted by: phisrow
Of the two, definitely NTFS. Now that most *nix and BSD systems can fairly easily read (and usually write) NTFS, you no longer gain much compatibility by suffering with FAT. I've still got a few FAT and FAT32 partitions floating around (all on things like USB keys and floppies), but all my serious partitions are either NTFS or ReiserFS these days. Just before anyone brings this one up: I know that manual and/or low-level data recovery is somewhat easier with FAT. But if I care enough about something to even think about doing low-level recovery of it, I care enough to back it up.

What I want to know is - why is MS such a h&rd &ss about allowing others to create IFS drivers for Windows? I would absolutely love to run something like Reiser4 under W2K/XP. That would be groovy. Alas, the pains of closed source.

(Count me in for FAT32 though, I refuse to use NTFS for several reasons, strongest among them principle, with the easier manual recovery ability of FAT32 being a very close second place.)
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,204
126
Originally posted by: Schadenfroh
NTFS all the way. I hate having to go through ScanDisk every time I shut down incorrectly.

You do realize that MS officially recommends restoring from backups when that happens, because they don't guarantee the integrity of the contents of user data files, only that the filesystem metadata is in a "consistent" state, right?
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,204
126
Originally posted by: drag
FAT really has no business being on a modern operating system; it was obsolete nearly 10 years ago. At least NTFS has journaling and decent security.

Metadata-only journaling is worse than no journaling at all, at least according to Linux Torvalds.
 

stash

Diamond Member
Jun 22, 2000
5,468
0
0
Metadata-only journaling is worse than no journaling at all, at least according to Linux Torvalds.

So where does that leave ext3, ReiserFS (before 4), XFS, JFS... all of which journal only metadata by default? NTFS has the option to do full journaling, like most of the file systems I listed.

Reiser4's performance aside, full journaling is not necessary for the vast majority of computer users.

And his name is Linus
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
What I want to know is - why is MS such a h&rd &ss about allowing others to created IFS drivers for Windows?

The docs and API specs are on MSDN just like everything else, but the IFS development kit cost money last time I checked and most people don't care enough to pay for the kit just to port drivers to a non-free OS.

Metadata-only journaling is worse than no journaling at all, at least according to Linux Torvalds.

Not really. With no journaling you still end up running something like e2fsck on the drive before you can use it. The best case is that the drive is fine and there are no errors; the worst case is that you have a ton of files that had issues, X number of files that lost their names sitting in the lost+found directory, and potentially other files that are corrupt because their dirty pages didn't get flushed before the unclean unmount. The only difference journaling makes is that you should never see anything in the lost+found directory unless you manually run an fsck tool.

And what data journaling gets you is the same filesystem consistency guarantee. Sure, there's a better chance that the data will make it to disk since it gets put in the journal too, but there's no guarantee that the data will get into the journal to be replayed on remount. And now that there are twice as many writes required, it's possible that less data will actually make it to disk during a crash.
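Whichever journaling mode is in play, an application that truly needs its data on disk has to ask for it explicitly with fsync; the journal only protects the filesystem's consistency, not your buffered writes. A minimal sketch of that idea (Python purely for illustration; `write_durably` is a made-up name, not an API from the thread):

```python
import os

def write_durably(path, data):
    """Write data and block until the kernel reports it has been pushed
    to the device, instead of leaving it in dirty pages that an unclean
    shutdown would simply throw away."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush the file's data (and metadata) to the drive
    finally:
        os.close(fd)

write_durably("scratch.dat", b"important bytes")
```

Even then, fsync only guarantees the data reached the drive; if the drive's own write cache lies about completion, you're back to needing battery-backed hardware.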

As much as I like Linus and respect his opinion, he tends to view things from the academic and theoretical standpoint a little too much, even when real-world usage shows the difference to be minor enough that no one would really care.

You do realize that MS officially recommends restoring from backups when that happens, because they don't guarantee the integrity of the contents of user data files, only that the filesystem metadata is in a "consistent" state, right?

I'm sure they would say the same thing about FAT, if they actually cared enough about it to comment.

Maybe you should spend the money on a real RAID controller with memory and a battery so that in the case of a power failure all of your writes are guaranteed to make it to disk.
 

Blain

Lifer
Oct 9, 1999
23,643
3
81
I've heard of people even formatting with FAT16 for working with very large files (audio).
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,204
126
Originally posted by: STaSh
Metadata-only journaling is worse than no journaling at all, at least according to Linux Torvalds.
So where does that leave ext3, ReiserFS (before 4), XFS, JFS... all of which journal only metadata by default? NTFS has the option to do full journaling, like most of the file systems I listed.
Reiser4's performance aside, full journaling is not necessary for the vast majority of computer users.
And his name is Linus

OMG. Wow, my brain runs away with me, or my fingers run away from my brain sometimes. I remember when I posted that yesterday, and I clearly had "Linus" in my head, and yet it still came out as "Linux" when I DMA'ed it from my brain to my fingers. Must be data-corruption caused by a Promise controller or something.

But I disagree with your assertion that NTFS offers full journaling; perhaps you would like to offer some technical evidence to support that? I think you might be surprised. MS's current docs clearly state that it only journals FS metadata, and that has always been true.

As for full journaling being necessary, I guess it only is if you care about the integrity of your data at all. Which is true; most people simply don't.
 

Schadenfroh

Elite Member
Mar 8, 2003
38,416
4
0
Originally posted by: VirtualLarry
Originally posted by: Schadenfroh
NTFS all the way. I hate having to go through ScanDisk every time I shut down incorrectly.

You do realize that MS officially recommends restoring from backups when that happens, because they don't guarantee the integrity of the contents of user data files, only that the filesystem metadata is in a "consistent" state, right?

Doesn't matter; I format once a month out of paranoia and boredom.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,204
126
Originally posted by: Nothinman
What I want to know is - why is MS such a h&rd &ss about allowing others to create IFS drivers for Windows?
The docs and API specs are on MSDN just like everything else, but the IFS development kit cost money last time I checked and most people don't care enough to pay for the kit just to port drivers to a non-free OS.
They aren't publicly available, and you must sign a binding NDA to get access, last time I checked, over and above the monetary issue. That is different than the vast majority of the rest of the info on MSDN, which, at worst, requires paying for an MSDN subscription to access the non-public parts.

Originally posted by: Nothinman
As much as I like Linus and respect his opinion, he tends to view things from the academic and theoretical standpoint a little too much, even when real-world usage shows the difference to be minor enough that noone would really care.
I tend to disagree, for the simple point re: journaling that with full journaling, everything can be guaranteed to be transactional (assuming that the underlying hardware also supports that guarantee), whereas anything short of that cannot. Therefore partial journaling is effectively just as useless (for the specific purpose of ensuring data integrity) as no journaling at all. The only thing it offers is operator convenience: not having to run some sort of filesystem-checking tool at startup.

So if you value convenience over data integrity, then by all means, metadata-only journaling is what you need. I disagree that full journaling offers less protection; if that were true, it would only mean the implementation is incorrect. To ensure integrity, obviously there IS increased overhead involved. (Consider the performance difference between two disk-defragmentation tools: one that operates with a guarantee of recoverability if the power should cut out, and one that does not. The latter tends to run nearly twice as fast, but because of the risk, nearly every common defragmenter operates like the former.)

Originally posted by: Nothinman
You do realize that MS officially recommends restoring from backups when that happens, because they don't guarantee the integrity of the contents of user data files, only that the filesystem metadata is in a "consistent" state, right?
I'm sure they would say the same thing about FAT, if they actually cared enough about it to comment.
Maybe you should spend the money on a real RAID controller with memory and a battery so that in the case of a power failure all of your writes are guaranteed to make it to disk.

That's part of the point - that doesn't matter. NTFS doesn't make any guarantees about the consistency of written user data. (But the counterpoint is that for it to do so, it would also have to run atop hardware that could also make that guarantee, which is the type of hardware that you're talking about.)

For a comparable analog - consider parity/ECC memory, and what happens when a fault is detected. On PC-level OSes, the entire OS BSODs/halts, on purpose, to prevent further unseen data-corruption.

If FAT32 faults, it's obvious. If NTFS faults, it may not be obvious that your user data state may be inconsistent, even though the filesystem metadata stays consistent, which can allow further "bit rot" to propagate. That's why MS says that if this is a concern, you should restore from backups to ensure your user data is consistent.

FAT32/NTFS relative merits aside, I DO consider this to be one of the most serious technical flaws in NTFS. It's also disturbingly similar to MS's attitude towards Hibernate support, in terms of potential data corruption in degenerate cases.
 

kEnToNjErOmE

Member
Oct 27, 2004
30
0
0
Originally posted by: KoolDrew
Isn't FAT32 faster on smaller drives? That's the reason I have my OS partition in FAT32: I heard NTFS is faster on drives larger than 32GB or something, but since my partition is smaller than that, isn't FAT32 the way to go for performance?
If you are looking for pure speed on a partition less than 30 gigs, then FAT is the way to go. If you want the latest tech, then NTFS is what you want.
http://www.xbitlabs.com/articl...play/3ware-8506-8.html
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I've heard of people even formatting with FAT16 for working with very large files (audio).

That wouldn't be very smart, considering FAT is limited to 2G (or was it 4G?) files.

They aren't publicly available, and you must sign a binding NDA to get access, last time I checked, over and above the monetary issue.

Like much of everything else on MS's site, I can't find it now, but I swear I saw it there at one point. And I bet the fact that you have to pay for the IFS kit is because it comes with the ISO9660 and FASTFAT drivers as examples, not because the API is secret.

(assuming that the underlying hardware also supports that guarantee)

And I would hazard a guess to say that probably 90% of IDE drives out there don't make any guarantees.

Therefore partial journaling is effectively just as useless, (for the specific purpose of ensuring data-integrity)

No one said it ever ensured data integrity.

The only thing it offers is operator convenience, for not having to run some sort of filesystem-checking tool at startup.

Which is huge if you have a filesystem that's over like 30G.

So if you value convenience over data-integrity, then by all means, metadata-only journaling is what someone needs for that.

But no journaling doesn't get you any data-integrity either.

I disagree that full journaling offers less protection, that would only mean that it is an incorrect implementation if that were true.

But you can't work around the fact that you have over twice as much I/O to do with full data journaling. So even though things won't be twice as slow, you can't guarantee all of the I/Os will make it into the journal, just like you can't guarantee that all of your I/O will make it to disk with metadata-only journaling. And I don't think any filesystem that supports data journaling doubles the size of its journal when data journaling is enabled, so you have a lot less room for journal entries, making it even more likely that some will be discarded.

If FAT32 faults, it's obvious.

Not always. You can't make any guarantees about what will happen with in-memory corruption unless you have ECC memory to detect it. FAT has no checksumming or anything to guarantee that the data in memory was written correctly to disk.

FAT32/NTFS relative merits aside, I DO consider this to be one of the most serious technical flaws in NTFS.

And you're insane, because FAT is no better and in most cases worse. The only benefit FAT has over NTFS is that it's an extremely simple filesystem, so more tools understand it; but now with things like BartPE and sh!t, there's no reason to keep all of those DOS-based recovery tools around.
 

kylef

Golden Member
Jan 25, 2000
1,430
0
0
Use NTFS, hands down.

Originally posted by: VirtualLarry
As for full-journaling being necessary, I guess it only is, if you care about the integrity of your data at all. Which is true, most people simply don't.
It's not that they don't care about the integrity of their data; it's that they don't accept the performance tradeoff that comes with full data logging. When the day comes that this tradeoff becomes more reasonable, it will be a more viable option. But right now it is truly a performance nightmare, which people reject wholeheartedly.

(And as you pointed out, full logging still does not protect your data from hardware malfunction; that requires more expensive hardware, further displacing true storage integrity/recovery from the mainstream.)

A much more reasonable option is to invest in a battery backup device to stop power failures, which are the overwhelming cause of file system data loss. This is more cost effective than expensive hardware, and has the additional benefit of no performance penalty.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,204
126
Originally posted by: kylef
Use NTFS, hands down.

Originally posted by: VirtualLarry
As for full journaling being necessary, I guess it only is if you care about the integrity of your data at all. Which is true; most people simply don't.
It's not that they don't care about the integrity of their data; it's that they don't accept the performance tradeoff that comes with full data logging. When the day comes that this tradeoff becomes more reasonable, it will be a more viable option. But right now it is truly a performance nightmare, which people reject wholeheartedly.
I believe that Reiser4 does it, without a huge performance penalty.

Originally posted by: kylef
(And as you pointed out, full logging still does not protect your data from hardware malfunction; that requires more expensive hardware, further displacing true storage integrity/recovery from the mainstream.)
For IDE drives, you would obviously have to disable the device's cache feature. I'm pretty sure the server versions of NT do that already.

Originally posted by: kylef
A much more reasonable option is to invest in a battery back-up device to stop power failures, which is the overwhelming cause of file system data loss. This is more cost effective than expensive hardware, and has the additional benefit of no performance penalty.
For the most part. But that isn't a total replacement for a fully transactional filesystem, which has benefits above and beyond recovery/data integrity. Imagine what would happen to your data if your RAM suffered a parity/ECC failure. Even with the power still running, your filesystem could be hosed due to a BSOD. Not with Reiser4, though.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I believe that Reiser4 does it, without a huge performance penalty.

I believe Reiser4 just makes sure each transaction is atomic, not that it journals user data in the normal sense. It's hard to pick through all of the marketing crap on the namesys page, but it seems to me that all it means is that the data is indeed logged to the journal, but instead of writing the data a second time to the file to complete the transaction, the file's blocks are redirected to the data in the log, and the log 'wanders' somewhere else on disk for the next transaction. Whether it really proves to be a good way to attack the problem has yet to be seen; I've been less than impressed with reiserfs in the past and it'll be some time before I try it again.

For IDE drives, you would obviously have to disable the device's cache feature. I'm pretty sure the server versions of NT do that already.

And I believe most of them ignore flush cache and disable cache commands for performance reasons.

Imagine what would happen to your data if your RAM suffered a parity/ECC failure. Even with the power still running, your filesystem could be hosed due to a BSOD. Not with Reiser4, though.

Huh? If the data got corrupted in memory and you didn't have the hardware to detect it, how would Reiser4 fix that? It won't; it'll just make sure the corrupted data is on disk in a single transaction.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
If NT disabled the file system cache it would result in a huge slowdown, and I don't believe it does that. Even big IBM mainframes with dozens of SCSI drives still use a file system cache (and I know first-hand that they do; the cache is stored on a card in our s/390. You have two fully redundant, hot-swappable power supplies, and both have their own battery backup. The file system cache is volatile RAM, but it also has its own battery backup to sustain the cache through a blackout. Two power supplies, two UPSes, and a battery specifically for the file system cache.)

ReiserFS 4 doesn't do any data or file system logging. Its fix is fully atomic filing operations: when you do something, the changes are not in effect until the entire operation is complete.

The idea is that if you're moving a file, for instance, the entire file is moved or it isn't. There is no state in between where you have two identical files, a half-copied file, or a partially deleted file. You have a full file either in its original location or at its destination, and nothing in between. Same with all other file operations.

That way you could be writing a file and yank the power cord. If the file made it before the cord was pulled, it's there in its entirety; if it wasn't finished writing, then the file isn't there at all.

The change to the directory structure is the last thing it commits; that's how it does it. If you're writing a file, it doesn't show up until it's finished writing, because that's when the directory is updated, and that's done in one operation.

People have done that before, but never with good performance. Reiser figured out how to do it and still get good performance, so in ReiserFS 4 it makes file system logging obsolete.
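By the way, applications can approximate that all-or-nothing visibility for a single file on any Unix filesystem, because rename() within one filesystem is atomic: write to a temp file, fsync it, then rename it over the target. A rough sketch of that userland trick (this is not Reiser4's mechanism, and `atomic_replace` is a made-up name; Python is used purely for illustration):

```python
import os
import tempfile

def atomic_replace(path, data):
    """Readers of `path` see either the complete old contents or the
    complete new contents -- never a half-written file -- because the
    directory entry only changes in the final rename step."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # data must hit disk before the rename
        os.replace(tmp, path)     # atomic within a single filesystem
    except BaseException:
        os.unlink(tmp)
        raise

atomic_replace("settings.conf", b"color=blue\n")
```

The temp file lives in the same directory as the target on purpose: rename is only atomic when source and destination are on the same filesystem.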

In most Linux filesystems (although the behavior can be changed) and NT (I believe), file writes are asynchronous, so the changes appear in the directory structure before they are completed. Take writing to a floppy, for instance. Windows writes floppies synchronously, so whatever app you're using pretty much locks up until the floppy is finished writing; with Linux the directory entry appears almost immediately even though not much data has actually been written. The data is flushed when you unmount the floppy, so most of the floppy activity happens then. That's why you should never yank out a mounted file system in Linux (and most other OSes).

The BSD filesystems don't do logging; they keep the filesystem in sync with the directory structure. The performance hit is worth it in their eyes for the ability to keep the data in a constantly correct state. They also use a system called "soft updates," which I am not sure how it works.

In XFS, JFS, ReiserFS 3, and (I'm thinking) NTFS, only the metadata is logged; the actual data transactions are not. So if the system goes down, they can recover and keep the file system itself sane and functioning correctly (having an entire file system blown away sucks, as does having files and directories just disappear), but you may have partially corrupted files go unchecked. It's then up to the user to discover these niceties.

In ext3 both the metadata and the data are logged. It incurs a slight performance hit; however, if your system goes down, the metadata is corrected and updated like the other journaling systems, and then an e2fsck is still run on the last-accessed data files to check them for corruption. That's why Red Hat still uses ext3 exclusively for its enterprise-level operating systems.

But that behavior is modifiable....
from here
New mount options:

"mount -o journal=update"
Mounts a filesystem with a Version 1 journal, upgrading the
journal dynamically to Version 2.

"mount -o data=journal"
Journals all data and metadata, so data is written twice. This
is the mode which all prior versions of ext3 used.

"mount -o data=ordered"
Only journals metadata changes, but data updates are flushed to
disk before any transactions commit. Data writes are not atomic
but this mode still guarantees that after a crash, files will
never contain stale data blocks from old files.

"mount -o data=writeback"
Only journals metadata changes, and data updates are entirely
left to the normal "sync" process. After a crash, files may
contain stale data blocks from old files: this mode is
exactly equivalent to running ext2 with a very fast fsck on reboot.


Now keep in mind that's all my understanding; I can't vouch for its accuracy. Personally I use XFS because I want the large (multi-gig) file performance for big media files; on my laptop (although I am using FreeBSD 5.3 on it right now) I use ext3 when running Linux.
 