Bad disk sectors on HDD


sub.mesa

Senior member
Feb 16, 2010
611
0
0
I'm afraid I cannot agree with that, paul878. Bad sectors are inherent to mechanical storage. In fact, modern 2TB drives have a very high chance of running into bad sectors.

To understand this, you must know what the uncorrectable Bit Error Rate (uBER) means and how it relates to bad sectors. Then you will also understand why bad sectors will continue to grow as a problem: as data densities increase, so does the likelihood of bad sectors, and that trend is only going to continue.
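To make that concrete, here is a rough back-of-the-envelope sketch (it assumes the common consumer spec of uBER = 10^-14 and a simple independence model, neither of which real drives follow exactly):

```python
# Back-of-the-envelope: chance of hitting at least one unreadable sector
# when reading a 2TB drive in full, assuming uBER = 1e-14 (one
# uncorrectable error per 1e14 bits read) and independent errors.
# This is a simplification; real error behaviour is not independent.

uber = 1e-14                    # uncorrectable bit errors per bit read
capacity_bytes = 2e12           # 2TB drive
bits_read = capacity_bytes * 8

p_clean = (1 - uber) ** bits_read          # probability every bit reads back fine
p_at_least_one = 1 - p_clean

print(f"P(at least one unreadable sector per full read) ~= {p_at_least_one:.0%}")
# ~15% for a single full read of a 2TB drive at this spec.
```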

The real solution is to no longer assume the storage device is perfect. Instead, the filesystem (the software) should be intelligent enough to cope with bad sectors. In other words, instead of trying to perfect the hardware, a much more logical route is to address these issues in software.

For example, ZFS is virtually immune to bad sectors.
 

piasabird

Lifer
Feb 6, 2002
17,168
60
91
This is your drive's way of telling you "Hey, I am going to give up soon, so if you want to save your data, do it now!" Of course you can just wait till the drive totally gives up and then cry about losing all your data. I would back up any important data on that drive to another location. When you defrag it should theoretically mark the bad sectors as it moves the data around.

Sometimes it is just mechanical failure of the drive arms or of the motor/spindle on the drive. Other things that can cause bad drives are a bad or underpowered power supply, or bad RAM. I read an article once suggesting that it is often bad RAM that causes damage on servers. If the drive is under warranty you might try to get it replaced.
 

paul878

Senior member
Jul 31, 2010
874
1
0
I'm afraid I cannot agree with that, paul878. Bad sectors are inherent to mechanical storage. In fact, modern 2TB drives have a very high chance of running into bad sectors.

To understand this, you must know what Bit Error Rate (uBER) means and how it relates to bad sectors. Then you will also understand why bad sectors will continue to grow to be a problem. As data densities increase, so will the likelihood of bad sectors. This is a trend that is only going to continue.

The real solution is to no longer assume the storage device is perfect. Instead, the filesystem (the software) should be intelligent enough to cope with bad sectors. In other words, instead of trying to perfect the hardware, a much more logical route is to address these issues in software.

For example, ZFS is virtually immune to bad sectors.


Defects are mapped out at the factory; by the time defects start to show at the OS level, the spare area has been used up. At this point the drive can no longer deal with them and your data is compromised.
 

Elixer

Lifer
May 7, 2002
10,371
762
126
I'm afraid I cannot agree with that, paul878. Bad sectors are inherent to mechanical storage. In fact, modern 2TB drives have a very high chance of running into bad sectors.

To understand this, you must know what Bit Error Rate (uBER) means and how it relates to bad sectors. Then you will also understand why bad sectors will continue to grow to be a problem. As data densities increase, so will the likelihood of bad sectors. This is a trend that is only going to continue.

The real solution is to no longer assume the storage device is perfect. Instead, the filesystem (the software) should be intelligent enough to cope with bad sectors. In other words, instead of trying to perfect the hardware, a much more logical route is to address these issues in software.

For example, ZFS is virtually immune to bad sectors.
When a HD starts having bad sectors, that is a major symptom that the drive should be replaced. This is why every single HD maker tells you to RMA once it has been detected.
While ZFS might help in the short term, in some circumstances, that by no means is a fix for a faulty HD, nor is it an option on windows machines.

Sure, you can continue to use the HD, but it will crap out on you sooner or later, and one thing for sure is, it will never get better.
So, why take on the extra risk?
 

paul878

Senior member
Jul 31, 2010
874
1
0
Every time I get a customer who doesn't listen to me regarding bad sectors, they always come back crying. To the OP: just replace it!
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I'm afraid I cannot agree with that, paul878. Bad sectors are inherent to mechanical storage. In fact, modern 2TB drives have a very high chance of running into bad sectors.
No, they don't. They have a high chance of needing to re-read, a high chance of eventually doing a bad write, and a high chance of light data corruption under very heavy write utilization.

Actual bad sectors are usually hidden from you. When they show up in a way that you can see them, that's generally bad. They may be inherent in mechanical storage, but the HDD controller and firmware are prepared to deal with the expected bad sectors transparently--you don't see problems until they are rather bad.

The real solution is to no longer assume the storage device is perfect. Instead, the filesystem (the software) should be intelligent enough to cope with bad sectors. In other words, instead of trying to perfect the hardware, a much more logical route is to address these issues in software.

For example, ZFS is virtually immune to bad sectors.
What happens when ZFS meets a bad sector in free space? The same thing that happens on any sane FS: pretty much nothing (note the bad sector, and move on). What happens when ZFS meets a bad sector in some of your data? The same thing that happens in any sane FS: a CRC error. Your data is still corrupted, either way. ZFS is a server FS, recovering only metadata, like others before it (XFS and JFS1, FI). It is not remotely a solution to this problem, and it is not remotely immune to bad sectors. To be even resistant to bad sectors, it would need all data to be given ECC information (such as sacrificing 1/n space, xor data of n chunks, store the parity, and checksum both the whole set and each chunk).
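Roughly this idea, as a toy sketch (the chunking, names and hash choice are made up for illustration, not how any real filesystem lays data out):

```python
# Sketch of that idea: split data into n chunks, keep an XOR parity chunk
# (costing 1/n extra space) plus a checksum per chunk, so one lost or
# unreadable chunk can be detected and rebuilt. Hypothetical illustration only.
import hashlib
from functools import reduce

def xor_chunks(chunks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

def encode(data: bytes, n: int = 4):
    size = -(-len(data) // n)                                    # ceil division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
    parity = xor_chunks(chunks)
    sums = [hashlib.sha256(c).digest() for c in chunks]
    return chunks, parity, sums, len(data)

def decode(chunks, parity, sums, length):
    bad = [i for i, c in enumerate(chunks) if hashlib.sha256(c).digest() != sums[i]]
    if len(bad) == 1:                                            # rebuild the single bad chunk
        i = bad[0]
        chunks[i] = xor_chunks([c for j, c in enumerate(chunks) if j != i] + [parity])
    elif bad:
        raise IOError("more than one chunk lost; XOR parity cannot recover")
    return b"".join(chunks)[:length]

# Usage: corrupt one chunk (a "bad sector") and recover it from parity.
payload = b"important data that must survive one bad sector"
chunks, parity, sums, length = encode(payload)
chunks[1] = b"\xff" * len(chunks[1])          # simulate the unreadable chunk
assert decode(chunks, parity, sums, length) == payload
```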

The only current solution requires the entire hardware and software stack of RAID-Zn on Solaris, OpenIndiana, or FreeBSD, which also basically eats up an entire computer, and does no good for your desktop or laptop running Windows, OS X, or Linux. Then, you still need to keep up with it, and replace the drive that got the bad sector(s).

NTFS may be on borrowed time, but the best solution for now, IMO, would be for Windows to check SMART and warn about certain errors. The core problem is HDD QC (and, to a lesser extent, the ECC used).
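Even something as simple as this, run on a schedule, would cover most of it (a sketch that leans on smartmontools' smartctl; the device path and watched attribute names are assumptions, and attribute names vary somewhat by vendor):

```python
# Sketch: poll SMART via smartmontools' smartctl and warn when the drive
# reports pending or reallocated sectors. Assumes smartctl is installed.
import subprocess

WATCHED = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

def smart_warnings(device="/dev/sda"):      # example device path
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    warnings = []
    for line in out.splitlines():
        fields = line.split()
        # smartctl -A rows: ID# ATTRIBUTE_NAME ... RAW_VALUE (10th column)
        if len(fields) >= 10 and fields[1] in WATCHED and fields[9].isdigit():
            if int(fields[9]) > 0:
                warnings.append(f"{fields[1]} raw value = {fields[9]}")
    return warnings

for w in smart_warnings():
    print("SMART warning:", w)
```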
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
This is your drive's way of telling you "Hey, I am going to give up soon, so if you want to save your data, do it now!"
Only in cases of excessive bad sectors, generally because of a physical issue. For example, dust that somehow got loose in the contained atmosphere of the harddrive could wreak havoc and cause many more bad sectors to occur as the drive continues to be utilised. Such drives distinguish themselves from a normal harddrive - which sees a casual bad sector every now and then - because they will continue to generate bad sectors even after a full zero write. In normal cases, this does not happen.

In most other cases, bad sectors do not indicate imminent failure! A harddrive with bad sectors could live for 10 years, yet is still considered useless because it is used with an archaic filesystem that belongs to the 1990s - i.e. one without any protection against bad sectors. That is why many people replace them under warranty. Actually a gigantic waste of resources.

When you defrag it should theoretically mark the bad sectors as it moves the data around.
Defrag has no effect on bad sectors other than performing a complete surface read of all sectors. It will not fix bad sectors, just detect them.

Defects are mapped out at the factory; by the time defects start to show at the OS level, the spare area has been used up.
Absolutely not true. What you are saying is that the host will only see the bad sectors when the harddrive has run out of reserve sectors. This is one big myth and absolutely incorrect.

If your reserve sectors are used up, you have got tens of thousands of bad sectors and the normalised value of Reallocated Sector Count in the SMART data is 1 and below the failure threshold. This is extremely rare. You simply have been taught wrong, like most of you.

Let's try to merge our knowledge together and see what we all can learn about bad sectors? ;-)

When a HD starts having bad sectors, that is a major symptom that the drive should be replaced.
Also not true. If you work through the uBER of modern drives with 2TB+ capacities, you can calculate that more than 50% of all harddrives will get bad sectors. Harddrives are designed that way. If manufacturers wanted something different, they would have increased the ECC error correction and fewer bad sectors would occur. If you do not understand the relation between ECC and bad sectors, you simply do not know what we are talking about right here.

You guys probably are thinking about bad sectors which are physically damaged. However, this is actually very rare. In most cases the bad sectors due to insufficient error correction occur without any physical damage. Such bad sectors will continue to be used after being overwritten.

This is why every single HD maker tells you to RMA once it has been detected.
And will zero write the disk and send it to another customer as a 'refurbished drive' - very correct. ;-)

While ZFS might help in the short term, in some circumstances
Can you explain to me how, if you happen to know ZFS that well? ;-)

nor is it an option on windows machines.
True, Windows users - just like Linux users and Mac users to a lesser extent - are vulnerable because these users do not have possession of a filesystem that can deal with current era storage devices - like ZFS.

Sure, you can continue to use the HD, but it will crap out on you sooner or later, and one thing for sure is, it will never get better.
In many cases the harddrive stabilizes on bad sectors, having swapped a few of them while every few months a pending sector flies by. This is in no way abnormal or indicative of imminent failure. Bad sectors are normal for high capacity harddrives. uBER = 10^-14, remember? What does that mean? It means 100 times more bad sectors than SSDs (10^-16). SSDs use more than half their raw storage space as error correction - preventing the occurrence of bad sectors.

Every time I get a customer who doesn't listen to me regarding bad sectors, they always come back crying. To the OP: just replace it!
Probably because your customers do not use reliable filesystems. Not that strange, since there are only three filesystems that are safe at this time: ZFS, Btrfs and ReFS. Only ZFS is mature enough to be actually usable. So this would confirm your experience in my view.

No, they don't. They have a high chance of needing to re-read, a high chance of eventually doing a bad write, and a high chance of light data corruption under very heavy write utilization.
I really don't know what you mean by all this. Can you explain to me what uBER 10^-14 means? How does that translate to bad sectors?

Actual bad sectors are usually hidden from you.
No, all bad sectors show up as visible to the host. They become invisible when they are overwritten - by the host. At that point the SMART data will decrease the Current Pending Sector count by 1 and increase the Reallocated Sector Count by 1. That is called a bad sector remap.
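As a toy model of that counter behaviour (an illustration only, combining it with the physical-damage distinction I make further down; the real logic lives in drive firmware):

```python
# Toy model: a failed read makes the sector visible to the host as pending;
# a host overwrite clears it, and the sector is swapped for a spare
# (reallocated) only if the medium itself is damaged. Illustration only.

class SmartCounters:
    def __init__(self):
        self.current_pending_sector = 0
        self.reallocated_sector_count = 0

    def read_failed(self):
        self.current_pending_sector += 1        # now visible to the host

    def host_overwrote(self, physically_damaged: bool):
        self.current_pending_sector -= 1        # no longer pending either way
        if physically_damaged:
            self.reallocated_sector_count += 1  # remapped to a reserve sector

smart = SmartCounters()
smart.read_failed()
smart.host_overwrote(physically_damaged=True)
print(vars(smart))   # {'current_pending_sector': 0, 'reallocated_sector_count': 1}
```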

The only exception to this is the so called 'weak sector'. This sector can still be read, but has to be read multiple times. The drive will replace such sectors as a preventative measure. This is the only exception where bad sectors can occur without being visible to the host first. These kind of weak sectors are often discovered by the autonomous action of a harddrive. This happens during background surface scans that the harddrive performs autonomously - independent from the host. You can hear this and notice it with a power consumption monitor.

When they show up in a way that you can see them, that's generally bad. They may be inherent in mechanical storage, but the HDD controller and firmware are prepared to deal with the expected bad sectors transparently--you don't see problems until they are rather bad.
Sorry, but this is not true. If it were, you would not need TLER harddrives and all the trouble with bad sectors would be gone - because somehow, magically, before the sector goes bad the harddrive could read the data and write it somewhere else. This is not the case. When a sector becomes unreadable, it stays like that until it can be read. During that time, it is visible to the host and shows up as Current Pending Sector.

What happens when ZFS meets a bad sector in free space? The same thing that happens on any sane FS: pretty much nothing (note the bad sector, and move on).
Both untrue. What happens for legacy storage (NTFS, Ext4) is that due to long recovery times your desktop will stall (i.e. the application freezes; only the mouse moves). And after a minute or so, you can get a blue screen or crash or sudden reboot. If you read the SMART data at that time, you find Current Pending Sector is not 0.

When using ZFS, the bad sector is fixed instantly even before the harddrive finishes its recovery cycle. ZFS reads redundant data from other sources (either RAID redundancy or ditto blocks) and uses this data to determine what data should have been stored on the bad sector. It then writes this data to the affected harddrive, which will then initiate a remap of the bad sector with a reserve one (in case of physical damage only!)
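In sketch form, the read path is roughly this (a simplified, self-contained illustration of the idea, not actual ZFS code; all the names are made up):

```python
# Simplified sketch of a self-healing read in the spirit of what ZFS does:
# read a copy, verify its checksum, and use a good copy to rewrite any bad
# one. Real ZFS is far more involved.
import hashlib

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class FakeDisk:
    """Stands in for one device holding one copy of the block
    (mirror/RAID-Z reconstruction or a ditto block)."""
    def __init__(self, data: bytes):
        self.data = data
    def read(self) -> bytes:
        return self.data
    def write(self, data: bytes) -> None:
        self.data = data      # on a real drive this write triggers the remap if needed

def self_healing_read(copies, expected_checksum):
    good, bad = None, []
    for disk in copies:
        data = disk.read()
        if checksum(data) == expected_checksum:
            good = data
        else:
            bad.append(disk)              # unreadable or silently corrupted copy
    if good is None:
        raise IOError("no readable copy left; the data is lost")
    for disk in bad:                      # repair damaged copies on the fly
        disk.write(good)
    return good

# Usage: two copies of a block; one has gone bad. The read still succeeds
# and the bad copy is rewritten from the good one.
block = b"block contents"
disks = [FakeDisk(b"\x00" * len(block)), FakeDisk(block)]
assert self_healing_read(disks, checksum(block)) == block
assert disks[0].read() == block
```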

What happens when ZFS meets a bad sector in some of your data? The same thing that happens in any sane FS: a CRC error.
CRC errors do not happen - only when corruption occurs between the harddrive and the controller (UDMA CRC Error Count). If a harddrive cannot read a sector, it is obligated to return an I/O error conforming to the ATA-ACS2 standard. It may NEVER return corrupt data.

Your data is still corrupted, either way. ZFS is a server FS, recovering only metadata, like others before it (XFS and JFS1, FI). It is not remotely a solution to this problem, and it is not remotely immune to bad sectors.
100% incorrect as well. How come you do not know this? To me the above statement is like claiming the Earth is flat. It is not; it's a globe. But how do you prove this to someone who thinks the Earth is flat? Might be difficult. ;-)

The only current solution requires the entire hardware and software stack of RAID-Zn on Solaris, OpenIndiana, or FreebSD, which also basically eats up an entire computer, and does no good for your desktop or laptop, running Windows, OS X, or Linux. Then, you still need to keep up with it, and replace the drive that got the bad sector(s).
I simply do not understand what you are trying to tell me here.

NTFS may be on borrowed time, but the best solution for now, IMO, would be for Windows to check SMART and warn about certain errors. The core problem is HDD QC (and, to a lesser extent, the ECC used).
The core problem is that current era software solutions like NTFS and Ext4 treat the storage device as being perfect - i.e. not containing bad sectors. ZFS treats the harddrives as imperfect drives and tries to create a reliable storage facility based on imperfect hardware. This of course is the correct route and continues to grow more important as harddrives reach higher data densities.

This whole problem is addressed in these articles:

Why RAID5 stops working in 2009
Why RAID6 stops working in 2019

These two articles address the growing problem of bad sectors that increase as data densities increase. The magic word is: uBER. uBER uBER uBER!
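The arithmetic behind those articles is easy to reproduce (a sketch assuming independent errors at exactly the spec rate, which real drives do not strictly follow):

```python
# Roughly the point of those articles: the chance that a RAID5 rebuild
# (which must read every surviving disk in full) hits at least one
# unreadable sector, assuming uBER = 1e-14. A simplification.

def p_unreadable(bytes_read, uber=1e-14):
    bits = bytes_read * 8
    return 1 - (1 - uber) ** bits

# Rebuilding a 7-disk RAID5 of 2TB drives means reading the 6 survivors in full.
surviving_bytes = 6 * 2e12
print(f"P(rebuild hits an unreadable sector) ~= {p_unreadable(surviving_bytes):.0%}")
# ~62%: the rebuild is more likely than not to trip over a bad sector.
```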
 

paul878

Senior member
Jul 31, 2010
874
1
0
sub.mesa, you sound like a really really really really smart guy. I don't know where you get all that from.

In the real world, when software detects bad sectors, the HD must be replaced or you will lose your data.

I'll leave it at that.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I really don't know what you mean by all this. Can you explain to me what uBER 10^-14 means? How does that translate to bad sectors?
Standard error rates are uncorrectable errors per amount of data transferred. It will occur with perfectly good sectors. An uncorrectable read error may or may not indicate a bad sector. If the sector is fine, but the data was written badly, it's not a bad sector, and typically won't get remapped.

A bad sector will give uncorrectable reads, but the reverse is not necessarily true. An uncorrectable read may or may not be a bad sector. Generally, however, a reallocated sector is. If a new write succeeds to the sector that failed to read, it is not considered a bad sector.
No, all bad sectors show up as visible to the host.
And, none should, unless the drive is going bad (though occasionally 1 or 2 may show up from other sources over the life of a drive, so it's not a hard and fast rule, unless there are other indicators of a bad drive, too). You should be seeing <1 bit uncorrectable per 11TB, for a WD Caviar Blue, FI.

Sorry, but this is not true. If it were, you would not need TLER harddrives and all the trouble with bad sectors would be gone - because somehow, magically, before the sector goes bad the harddrive could read the data and write it somewhere else.
No, if it were not true, you'd just have the same behavior as "AV" type HDDs. As it is, they will try and try and try, and if they can do it, they will get the data, return it to you, and either re-write the sector, or move it if it might have been a physical problem. If it was just plain written badly, but physically reads OK (bit problems, but not signal problems), then it becomes a pending sector, that the HDD maker hopes will be overwritten.

Both untrue. What happens for legacy storage (NTFS, Ext4) is that due to long recovery times your desktop will stall (i.e. the application freezes; only the mouse moves).
If it is visible to the host, it goes in the bad blocks list. If that bad block was in free space, it simply won't get used. If it was in recoverable data, it will be recovered. If it was in unrecoverable data, the data is gone. Long recovery times only occur for when the disk drive tries to recover the sector, and that is not something that ZFS has any more control over than any other OS, if it occurs. Some consumer HDDs used to handle time limit requests, but after WD's RE series success, none do, TMK.

When using ZFS, the bad sector is fixed instantly even before the harddrive finishes its recovery cycle.
How? Can you cite where there is guaranteed to be a piece of recovery information for every single sector, when using ZFS?

ZFS reads redundant data from other sources (either RAID redundancy or ditto blocks) and uses this data to determine what data should have been stored on the bad sector.
From where? Let's say I use ZFS with a single drive, and an unreadable sector pops up in the middle of a Quickbooks worksheet that I would like intact. The error occurred on part of it that either hasn't changed in awhile, or is a new edit, so there aren't multiple copies from older transactions. All FS settings were left at defaults, so no RAID-Zn, no copies=2, etc.. Where is that recovery data coming from? Nowhere.

CRC errors do not happen - only if corruption occured between the harddrive and the controller (UDMA CRC Error Count). If a harddrive cannot read a sector, it is obligated to return an I/O error conforming to the ATA-ACS2 standard. It may NEVER return corrupt data.
Think long and hard about what you just said there. Even before getting controller drivers and the FS involved (they do their own checksums, too, though mostly not of enough types of information), everything after the "-" contradicts what you wrote before it.

I simply do not understand what you are trying to me tell me here.
http://pages.cs.wisc.edu/~kadav/zfs/zfsrel.pdf
There you go. With ZFS, you need to hope and pray that you've got two copies of some given piece of data--IE, that you had copied writes set up, and that a prior edit to the file did not alter that disk location. You have no guarantee, and on mutating data, a small chance.

If you apply RAID or RAID substitutes (like copies), ZFS offers robustness that is currently unparalleled, and it can generally recover from most expected errors. Without that, it's better than more common FSes, but not a panacea. That it can up and recover any arbitrary sector is simply false. It can be configured to be able to, depending on the state of the hardware, but does not intrinsically do so.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
sub.mesa, you sound like a really really really really smart guy. I don't know where you get all that from.

In the real world, when software detects bad sectors, the HD must be replaced or you will lose your data.
I'm not disputing that. I'm only telling you that the harddrive itself is fine. It is operating within specifications (uBER 10^-14). Unreadable sectors are normal. The manufacturer basically specifies how frequently they occur.

So you can do two things to cope with bad sectors:

1) replace all harddrives that show any trouble at all, using legacy filesystems and RAID engines. This route often involves more expensive hardware and 'dumber' software.

2) solve the problem in software so that the filesystem no longer assumes the storage device is perfect, and can cope with a bad sector now and then. This route works best with cheap hardware paired with 'smart' software.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Standard error rates are uncorrectable errors per amount of data transferred. It will occur with perfectly good sectors. An uncorrectable read error may or may not indicate a bad sector. If the sector is fine, but the data was written badly, it's not a bad sector, and typically won't get remapped.
All correct! Finally, something I can relate to.

uBER simply means that sometimes, once in a while, a harddrive has insufficient error correcting capability to know the contents of a single sector. The 'u' in uBER refers to the fact that raw bit errors are corrected by ECC error correction. If there are more bit errors than the ECC can correct, that sector becomes unreadable. The frequency with which this occurs is the uBER specification.

Bad sectors can be caused by normal uBER situations where the signal noise is greater than the error correcting capability. These will not be remapped, as you correctly say. Bad sectors caused by physical damage are, because even after overwriting these, they stay unreadable. These will be remapped with reserve sectors whenever a write request from the host overwrites the unreadable sector.
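A toy model of that mechanism (all the numbers are invented for illustration; real raw error rates and ECC strengths are drive internals):

```python
# Each sector suffers some number of raw bit errors; the drive's ECC
# corrects up to a fixed number of them; anything beyond that surfaces as
# an unreadable (pending) sector. Numbers below are invented.
from math import comb

SECTOR_BITS = 4096 * 8        # a 4K physical sector
RAW_BER = 1e-4                # hypothetical raw bit error rate
ECC_CORRECTABLE = 12          # hypothetical: ECC corrects up to 12 bit errors per sector

def p_sector_unreadable(raw_ber=RAW_BER, t=ECC_CORRECTABLE, n=SECTOR_BITS):
    # P(more than t raw bit errors in one sector) under a binomial model.
    p_correctable = sum(comb(n, k) * raw_ber**k * (1 - raw_ber)**(n - k)
                        for k in range(t + 1))
    return 1 - p_correctable

print(f"P(unreadable sector)           = {p_sector_unreadable():.2e}")
print(f"...with twice the ECC strength = {p_sector_unreadable(t=24):.2e}")
# Stronger ECC pushes the unreadable-sector rate down; that trade-off
# between ECC overhead and uBER is exactly what the spec sheet captures.
```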

The whole story about 'per amount of data transferred', as you highlighted, is a scam. Newer disks are simply faster, so manufacturers can say that per 100 megabytes the number of bad/unreadable sectors stays about the same. The truth is, with bigger capacities come higher data densities. This results in more problems with bad sectors roughly each time the platter density is increased.

Thus, uBER is a growing problem, and the two articles I linked to address exactly this. We need more intelligent software to cope with it. Why do you think WHS DE 2.0 was to be equipped with bit correction?! To cope with bad sectors. Now that this has been cancelled, Windows users have to wait for the arrival of ReFS and the maturation process that follows. They are very late to address the problem of uBER in their software.

Linux is slightly better equipped with a fledgling ZFS implementation combined with the apprentice Btrfs filesystem.

An uncorrectable read may or may not be a bad sector.
I call uBER bad sectors and physical bad sectors simply 'bad sectors' or 'unreadable sectors'. From the perspective of the host, that is all that matters. The harddrive visibly has an unreadable sector exposed to the host. That is bad! Especially if the host uses legacy filesystems which can not cope with this.

If it is visible to the host, it goes in the bad blocks list. If that bad block was in free space, it simply won't get used. If it was in recoverable data, it will be recovered. If it was in unrecoverable data, the data is gone. Long recovery times only occur for when the disk drive tries to recover the sector, and that is not something that ZFS has any more control over than any other OS, if it occurs.
You mean the bad blocks list of the harddrive - not the FAT32/NTFS filesystem - right?! Because the latter is not useful anymore and archaic in nature.

ZFS has more control over the recovery times of the disk, because ZFS overwrites the sector on a soft timeout (controlled in the operating system). Upon receiving the write request, the disk stops trying to recover that sector and overwrites it, and if it still cannot be read it will be replaced with a reserve sector. Either way, ZFS fixes bad sectors on the fly without any user intervention. All you will see is a failed read request in the zpool status output.

How? Can you cite where there is guaranteed to be a piece of recovery information for every single sector, when using ZFS?
Your very own PDF document, for one. :biggrin: :awe:

At least, as the document specifies, depending on the ditto blocks setting. By default, ditto blocks are only used for filesystem metadata. So bad sectors can not harm ZFS itself, only the files. You have to enable copies=2 on certain filesystems to give it additional protection.

This is on a single drive. If you have RAID-Z or mirror redundancy, ZFS uses this redundancy to reconstruct the data and overwrite the bad sector. With ditto blocks, this effect is cumulative. Since ZFS does all disk aggregation ('RAID') itself, it also knows where to place the ditto blocks. Not just blindly on random LBA, but distributed over physical disks.

From where? Let's say I use ZFS with a single drive, and an unreadable sector pops up in the middle of a Quickbooks worksheet that I would like intact. The error occurred on part of it that either hasn't changed in awhile, or is a new edit, so there aren't multiple copies from older transactions. All FS settings were left at defaults, so no RAID-Zn, no copies=2, etc.. Where is that recovery data coming from? Nowhere.
No redundancy, no ditto blocks, no backup: your data was not important. ZFS allows you to grant additional protection like copies=2 (ditto blocks) to only those files which you deem important. In this case, the documents filesystem would be set to copies=2 while the download folder is left at the default copies=1.
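Concretely, that is a one-liner per dataset (a sketch that shells out to the standard zfs command; the pool and dataset names are just examples):

```python
# Sketch: apply copies=2 only to the datasets you deem important, leaving
# bulk data at the default copies=1. Only data written after the change
# gets the extra ditto copy.
import subprocess

IMPORTANT_DATASETS = ["tank/documents", "tank/photos"]      # example names

for dataset in IMPORTANT_DATASETS:
    subprocess.run(["zfs", "set", "copies=2", dataset], check=True)
    value = subprocess.run(["zfs", "get", "-H", "-o", "value", "copies", dataset],
                           capture_output=True, text=True, check=True).stdout.strip()
    print(f"{dataset}: copies = {value}")
```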

If you care about data, you will provide ZFS the means to protect your data. RAID-Z and RAID-Z2, potentially combined with ditto blocks, provide formidable protection against bad sectors - to such an extent that ZFS is immune to them.

everything after the "-" contradicts what you wrote before it.
No: CRC (error detection) is used for data sent over the ATA/IDE interface - i.e. cable errors. ECC is used for error correction, to correct bad sectors.

http://pages.cs.wisc.edu/~kadav/zfs/zfsrel.pdf
There you go. With ZFS, you need to hope and pray that you've got two copies of some given piece of data
Yes, you have to pray that you have multiple disks set up in a redundant configuration so that ZFS can utilise its potential error correcting capabilities to properly protect your data. I don't see your point?

If you apply RAID or RAID substitutes (like copies), ZFS offers robustness that is currently unparalleled, and it can generally recover from most expected errors. Without that, it's better than more common FSes
ZFS is a filesystem, RAID layer and volume manager in one package. You do not use them separately, even in a single disk configuration.

Besides the protection the SPA ('RAID' part) disk aggregation engine provides, the ZPL ('filesystem' part) also provides ditto block protection. So what you say is not true, for two reasons. Even the ZPL has protection of its own: to protect the metadata, and to protect files which have copies=2+ set.

ZFS is immune to bad sectors, provided you have at least one redundant source available of course. If that was missing in my earlier statement, then I would like to correct it now.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
The whole story about data transferred as you highlighted, is a scam. Newer disks are simply faster, so they can say that per 100 megabytes the number of bad/unreadable sectors stays about the same. The truth is, with bigger capacities come higher data densities. This results in more problems with bad sectors roughly each time the platter density is increased.
ECC has been improved to cope, bringing rates to around what they used to be, allegedly. But even so, we need bigger drives for bigger working sets, so we would have been better served by better error checking and correction on each disk, and possibly for the transport mechanism as well. What used to be a 2K .doc is now a 30K .docx, FI. The file itself has gotten bigger, or the number of files used has gotten bigger, or both.

By default, ditto blocks are only used for filesystem metadata. So bad sectors can not harm ZFS itself, only the files. You have to enable copies=2 on certain filesystems to give it additional protection. (em. added)
Exactly. ZFS does not magically make everything recoverable. When configured to do so, it will do it better than anything else out there, by far, but making a pool, formatting it, and mounting it will not offer such protection. You've got to sacrifice half your disk*, without multiple disks to work with. I'd personally like to see a non-server FS tackle these problems, with more space-efficient methods of parity (give up 1/n space for 1/n-worth of xor- or R-S-based recovery blocks, FI).

I don't see your point?
For example, ZFS is virtually immune to bad sectors.
It can be made so, while no other FS yet can, but that's not right as a blanket statement.

* Technically, double the used space per file, since you wouldn't necessarily have to apply the copies setting to everything.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Please consider the fact that ZFS is typically used with multiple disks, and quite often as a storage appliance or NAS. Besides its formidable features desired by the server market, consumers are able to grasp its rewards just as easily.

This is in fact what I specialise in. I want to make ZFS accessible to a broader range of people. They want something they can understand, while still getting the protection that ZFS grants (in typical configurations, yes yes yes... sheesh).

Especially in this era where high capacity disks are very prone to producing unreadable sectors, using ZFS to store your data is almost a necessity unless you rely heavily on backups. For many consumers, having a 1:1 backup of all their stuff is simply not an option. They can only back up their truly important files like documents, photos and personal stuff. This is usually much smaller in size and might even fit on a single USB stick. For the same reason, setting copies=2 on such datasets can really improve the resilience of your most important data, while costing only marginally more disk space in relation to the total size of the pool.

You provided the one and only exception to this: a single pool on a single disk, with important data sitting in a copies=1 filesystem (which is the default). This is a very atypical configuration, because most people use ZFS to store large volumes of data on multiple disks and gain redundancy in doing so. But also because, if you really had important data on a single disk, you would employ a backup and quite probably also set copies=2 on that documents filesystem. So you provided a very implausible and unrealistic scenario that is an exception to what I stated earlier. You also neglected to address the metadata, which is very important: killing your filesystem is much worse than killing one file. So even in the worst possible configuration, ZFS is way better than all the legacy stuff.

My statement 'ZFS is virtually immune to bad sectors' is simply true for virtually all typical usage scenarios. Your one exception doesn't realistically change anything in my opinion.

There is something to be said about being too precise. People want to hear something they can understand. Having to nuance every sentence or statement for very unusual exceptions doesn't really help with conveying an understandable line of reasoning.

But it pleases me that we might have found some common ground on the bad sector story?
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
My statement 'ZFS is virtually immune to bad sectors' is simply true for virtually all typical usage scenarios.
Well, see, here's where a lot of the contention is, I think: my mind has been on a 1.5TB Seagate being used as a single-drive data volume, quick-formatted with default settings - no RAID in sight, and a very typical usage scenario for a secondary HDD - that has already lost some number of sectors' worth of data (<=89, from the OP's pic, I believe). When I've talked about ZFS, I've been picturing that same kind of hardware configuration, under an OS that supports it, but with ZFS.
 