X-25M 160GB noticeably slower

Page 4

erdemali

Member
May 23, 2010
102
0
0
You really are joking, right? I don't think I have EVER stated I was an expert and, as a matter of fact, I would not even consider claiming such a thing, unlike... well, read your own comment.

A thought, though... never present something you cannot stand by. I really don't care who you are, because it means nothing if you praise a theory that hasn't yet been proven successful, as you already have.

I suggest you check my calculations again please.
http://forums.anandtech.com/showpost.php?p=29872517&postcount=32

If it doesn't mean anything to you, well I can't do much more than that.
 

flamenko

Senior member
Apr 25, 2010
349
0
0
www.thessdreview.com
I suggest you check my calculations again please.
http://forums.anandtech.com/showpost.php?p=29872517&postcount=32

If it doesn't mean anything to you, well I can't do much more than that.

I read that. It looks great. In the end, though, until we can prove this great theory somehow, it's foolish and negligent to put it out there, right?

I even came forward with a simple test: fill the SSD and run a CDM score to verify that, with the additional NAND allocated to over-provisioning, there is no performance degradation.

As a matter of fact, would you believe I have only ever seen one other example of this, and yes, I posted it. We know there is no slowing with the OWC and its 28% over-provisioning because I posted the result. I can again if you wish.

Absolutely no performance degradation. If I see the same here I will be all over it... You will hear me tell Mesa he is a genius... You can bet I will also be sending the links to some very close contacts who would also be interested...
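The fill-then-benchmark test proposed here could be sketched in Python. This is an editor's illustrative sketch, not a substitute for CrystalDiskMark (a Windows GUI tool); it only approximates the sequential-write portion of the test, and the paths and sizes are hypothetical:

```python
import os
import tempfile
import time

def sequential_write_mbps(path, size_mb=64, block_kb=1024):
    """Write size_mb of data sequentially and return throughput in MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # ensure the data actually hit the device
    return size_mb / (time.perf_counter() - start)

# The proposed test: measure once on a mostly-empty drive, then fill the
# drive to capacity and measure again. Similar numbers before and after
# would support the "no degradation with over-provisioning" claim.
with tempfile.TemporaryDirectory() as d:
    print(f"{sequential_write_mbps(os.path.join(d, 'bench.bin')):.1f} MB/s")
```

Note the result here reflects whatever volume hosts the temp directory; to test an SSD specifically, the path would need to point at that drive.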
 

erdemali

Member
May 23, 2010
102
0
0
I read that. It looks great. In the end though, until we can prove this great theory somehow, its foolish and negligent to put it out there right?

I even came forward with a simple test. Fill the SSD and do a CDM score to verify that, with the additional NAND allocated to overprovisioning, there is no performance degradation.

As a matter of fact, would you believe I have only ever seen one other example of this and yes, I posted it. We know there is no slowing with the OWC and its 28% over provisioning because I posted the result. I can again if you wish.

Absolutely no performance degradation. If I see the same here I will be all over it...You will here me tell mesa he is a genius...

I will also be sending the links to some very close contacts who would also be interested...

So I suppose we please shouldn't use this method at all. Did you know Canada does not exist, since I have never seen it or been there?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Idontcare and Sub.Mesa....

Are you serious? Really? I didn't put forth the theory, therefore I don't have to prove it. If anything, it suddenly looks like there is something maybe you don't want others to see, and that would concern me, as others are jumping at your idea for the over-partitioning.

It's the normal course of any idea... any idea... to put it forward and then prove it. You just don't throw it up for others to jump at without proving the theory.

You can set up all the over-partitioning you want, but until you can justify it with proof, it's nothing more than wasted SSD space.

The most concerning part is that it is an easy thing for you to do: fill the SSD and run Crystal.

Oh... sorry... why don't I do it? Well, really... would I be an idiot to jump on someone's idea only to find it somehow interferes with the over-provisioning of the disk? Or, well, maybe I thought you believed in your idea enough to test it before you threw it into the crowd for the inexperienced to grab hold of.

I run an OWC with 28% OP. My Intel is my main system disk, which doesn't get changed, but heck, if you can suck anyone in without proving your theory, who am I to throw up a flag for others' SSD safety's sake, right?

Apologies for getting my back up, but, well, everyone knows that I will give it right back when called for. I simply asked for a simple test which you could do, and now I am left wondering why you wouldn't do it. A simple test... very. Fill the disk and run a Crystal. If the over-provisioning works, there will be no drop in performance.

You really seem to struggle against the notion that this is a community. You are the one that seems to want the data, hence my suggestion that you seek out the data yourself. It was a simple question on my behalf, hardly deserving of your retort.

In scientific communities theories are posited all the time for digestion, contemplation, and yes if members of the community wish to pursue proving or disproving the theory then they may elect to commit their time and resources towards such an endeavor. It really does work that way. My PhD is in quantum chemistry, I've no need to BS you about this or anything else here.

I have no theories regarding proper SSD management, not sure why you keep ascribing me as the progenitor or harbinger of such in your post above.

What I saw was that you are seemingly frustrated at having a lack of data and I was curious why you don't invest some of your time disproving (or proving) the theory.

No, you don't have to; no, it's not your job to do so; but it is rather senseless of you to waste your time requesting proof of a theory when you already have the wherewithal to generate the data yourself and answer your own question. It beats beating your head against the rocks over and over again, which is about all you seem to be doing, IMO.
 

flamenko

Senior member
Apr 25, 2010
349
0
0
www.thessdreview.com
Is there any reason you don't do exactly whatever it is that you keep hoping other people will do?

If you do it yourself then you'll have less reason to be concerned with the manner in which the results were generated.

Come, come now... This was an insulting response, so don't play like you didn't mean it. A community this is, and you are absolutely right. The community should then realize that there are people jumping in, getting bits and pieces, and following them aimlessly. Multiple examples are seen in the alignment posts.

I also never said that you were responsible for anything other than what was said above.

I took insult at what was said. Quite frankly, I thought that Mesa was simply going to say "Ya, that test idea will work" and come back with the result or another idea, but rather, for some odd reason, the community feels it unnecessary to prove the idea, and we already have people saying thanks, they are going to do this.

Isn't it even more unusual that the author of the theory hasn't taken the five minutes to prove his theory after a perfectly valid method was presented? Myself, I am wondering if the group is just brainwashed here, eheheh.

As I said in the previous post, it doesn't matter who you are if you elect to support a theoretical conclusion that isn't proven.
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Take it as you like then, I know my motivations for drafting the post were honest and sincere.

I then redoubled my time investment in yet another sincere effort to close what I perceived to be a communication gap.

If you want to insist on seeing dragons all around you despite my efforts to assure you to the contrary then I am prepared to live with that and cut my losses, my conscience is clean.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
There's nothing to apologize to me about, or rather I mean to say I certainly do not feel like anyone owes me an apology.

But I'm serious: you know exactly what kind of test you want conducted to disprove the theory you take exception to. Why leave it to others to generate data that you might then just end up questioning as well? Trust but verify, yes?

FWIW I consider myself a 3rd party to this situation. I neither created nor promoted the theory in question. I simply asked some questions regarding its implementation.

I agree the burden of proof falls to the theory's progenitor, but that doesn't mean they are bound to follow-through with proving anything. It's all free info and we all are getting exactly what we paid for here.

However if you were an enterprising individual who happened to be motivated to find out the truth sooner instead of later then there is no better person to do that than yourself.

I'm rather apathetic to the matter at the moment... my G2 sits behind a 2GB cache on my Areca RAID controller, so I'm not worried about performance degradation, but then again my controller card set me back a couple grand, so I'm entitled to that performance. But for others I could see the topic being timely, and resolution to the issues you highlight would be value-add to them (and to you, or so you've convinced me).

Hence my suggestion that you create the data you seek rather than wait for it to be done to your specifications. Make it a Q.E.D. rather than letting it drag on and on.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Guys, let's try to keep a friendly atmosphere and try to learn from each other and understand each other well, instead of fighting each other. Personal comments in that setting rarely have any positive effect. We all like discussing the theories behind SSDs, but we would like it a lot more in a friendly atmosphere where questions can be asked without holding back.

It's also perfectly fine to criticize a theory. But I will point out that it is not up to me to prove anything. I'm just sharing with you guys what I learned; it is up to you to either trust and follow this, do more research, or reject it altogether. Only if you hired me would I go on to prove it, and I can assure you it doesn't come cheap.

As for proof, well, it is perfectly valid to want evidence that under-provisioning is an effective way to stop performance degradation of SSDs and to enhance lifespan through lower write amplification (i.e. 20GB written by the OS gets amplified to 40GB worth of writes on the SSD = 2.0 write amplification). The problem is that getting this proof the 'correct' way would be a research study of its own; definitely not something I can do in one day. Not the right way, at least: simulating real use over time under varying conditions, with as many external factors as possible eliminated. It would likely require DTrace for intensive I/O monitoring, capturing and replaying traces from actual operating system usage. Then, once you can confirm the authenticity of patterns that simulate 6 or 12 months of real use, you can run normal benchmarks in that condition to determine the actual (degraded) performance at that point.
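The write-amplification figure in the example above is just a ratio of physical to logical writes; as a minimal sketch:

```python
# Write amplification = data physically written to NAND divided by data the
# host asked to write. The example above: 20 GB from the OS that becomes
# 40 GB of NAND writes gives a write amplification of 2.0.
def write_amplification(nand_gb, host_gb):
    return nand_gb / host_gb

print(write_amplification(40, 20))  # -> 2.0
```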

As far as I can tell, under-provisioning is a widely accepted mechanism for increasing the spare area used by the SSD. The AnandTech article, in case you haven't read it thoroughly yet, is an excellent writeup that goes relatively deep into the subject for a tech site; research papers really aren't that easy or interesting to interpret. The AnandTech article is also based on findings by IBM Zurich; I'll give you the links here:

http://www.anandtech.com/show/2738/9
http://www.anandtech.com/show/2829/8
IBM Zurich research document:
http://www.haifa.ibm.com/conferences/systor2009/papers/2_2_2.pdf

Some quotes:

There’s not much we can do about the scenario I just described; you can’t erase individual pages, that’s the reality of NAND-flash. There are some things we can do to make it better though.
The most frequently used approach is to under provision the drive.
and

Intel ships its X25-M with 7.5 - 8% more area than is actually reported to the OS. The more expensive enterprise version ships with the same amount of flash, but even more spare area. Random writes all over the drive are more likely in a server environment so Intel keeps more of the flash on the X25-E as spare area.
You’re able to do this yourself if you own an X25-M; simply perform a secure erase and immediately partition the drive smaller than its actual capacity. The controller will use the unpartitioned space as spare area.
I mean, it's not my theory; this is actually widely accepted and common knowledge to anyone intimate with SSD controller technology. I'm just relaying the message, especially since I believe 6.8% is too little spare space and that Windows users with TRIM would still see a slowdown, depending on their actual use and how frequently they do a Secure Erase.
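The ~7% built-in figure can be reproduced with a little arithmetic, assuming (as the AnandTech article describes) that the X25-M carries roughly 160 GiB of raw NAND while exposing 160 decimal GB to the OS; the exact raw capacity is an assumption here, so the numbers are approximate:

```python
def spare_fraction(raw_bytes, exposed_bytes):
    """Fraction of raw NAND the controller can treat as spare area."""
    return 1 - exposed_bytes / raw_bytes

raw = 160 * 2**30        # assumed 160 GiB of physical NAND
exposed = 160 * 10**9    # 160 GB (decimal) reported to the OS

print(f"{spare_fraction(raw, exposed):.1%}")              # built-in, ~6.9%
# Partitioning only 80% after a Secure Erase raises the effective spare area:
print(f"{spare_fraction(raw, int(exposed * 0.80)):.1%}")  # ~25%
```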

However, if you feel you could use the space and would accept some performance degradation in exchange for being able to store more applications/games on the SSD, that's a valid argument for not reserving any extra, or for keeping the extra reserved space very low (under 10%).

But if you still have doubts about this 'theory', then the only real proof would be to test it yourself, and/or to write to those concerned (IBM Zurich, Anand himself, the OCZ/Intel user forums, Intel directly, controller designers if you have the contacts, etc.). I think I have pretty much told you all I know on the subject, at least the parts relevant to preventing performance degradation and reduced lifespan. The AnandTech articles I linked are highly recommended careful reading; the more you know about how SSDs work, the more logical some things may start to sound.

Cheers.
 
Last edited:

n7

Elite Member
Jan 4, 2004
21,281
4
81
Lots of GREAT information in this thread; keep it coming.

I expect to see no more bickering amongst each other in this thread, or I'll be locking it.

Please continue the information flow, but keep on the topic, not on each other.

n7
Memory/Storage Mod
 

capeconsultant

Senior member
Aug 10, 2005
454
0
0
We are getting down to some serious knowledge regarding these newfangled SSDs.

I for one find the idea of reserving space diabolical in its simplicity. Could it be that easy?

Dave
 

Old Hippie

Diamond Member
Oct 8, 2005
6,361
1
0
Kinda interesting about the creation of the spare area.

I take it to mean that setting the max. address is preferred to creating a smaller partition?

I have a separate unallocated/unformatted partition. Why would setting the max address be preferred?

What utility could be used to set the max address?

The article has a date of September 23rd, 2009. I musta missed it the first time around.
 
Last edited:

erdemali

Member
May 23, 2010
102
0
0
It certainly would not hurt to start with a Secure Erase. That way you are 100% certain that it is a brand new drive for all intents and purposes.

I also may have missed a step: upgrading the firmware. If newer firmware is available, you had better apply it immediately, before installing Windows or doing anything else with the drive.

So the correct procedure would be:
1. Upgrade firmware
2. Set BIOS to AHCI mode
3. Install Windows 7, creating a partition at 80% of the maximum capacity and leaving the rest unused.
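For step 3, the Windows installer (and diskpart) takes the partition size in MB; a quick way to compute it, assuming a 160 GB drive and an 80% partition (both illustrative values):

```python
def partition_size_mb(drive_bytes, fraction=0.80):
    """MB value to give the installer for a partition covering `fraction` of the drive."""
    return int(drive_bytes * fraction) // (1024 * 1024)

print(partition_size_mb(160 * 10**9))  # -> 122070
```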

As above
 

erdemali

Member
May 23, 2010
102
0
0
Setting a partition smaller than the max available after a Secure Erase.
This is one way to achieve a larger spare area.
 

capeconsultant

Senior member
Aug 10, 2005
454
0
0
This is quite interesting, as one way of looking at it means that they are not giving us even the formatted space they are advertising.

So, an Intel 160 which formats to 149 is really about 120GB.

When will it end?????????????
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
This is quite interesting, as one way of looking at it means that they are not giving us even the formatted space they are advertising.

So, an Intel 160 which formats to 149 is really about 120GB.

When will it end?????????????

You could look at it that way... the glass-half-empty viewpoint... but you could also look at it from the same viewpoint as turbo-clocking and power-saving P-states.

If you don't need the capacity to store volumes of data then you have the flexibility and option to put it to a different use as over-provisioning to boost performance and/or endurance.

I look at my 160GB G2 as a 160GB device... of which I am happy to allocate 7% to do things that significantly boost the speed of the remaining 93% of capacity.

If I had an i7 and was only using a single-threaded app at full load I would not be unhappy with Intel downclocking my under-utilized cores while boosting the speed on the core that is handling the active thread.


That's just absurd... isn't it!? Do small-file IOPS (this graph is for 8KB) really stand to increase by 350% if I create a 20% over-provisioning area on my G2? How did the review sites miss the opportunity to test out the performance benefits of over-provisioning?
 

erdemali

Member
May 23, 2010
102
0
0
If I see the same here I will be all over it...You will here me tell mesa he is a genius...You can bet I will also be sending the links to some very close contacts who would also be interested...

Please do so, as you said.

We do respect the knowledge.
 
Last edited:

capeconsultant

Senior member
Aug 10, 2005
454
0
0
Yes, I guess it is a good thing. Sort of. It's just that, with my 128GB minus formatting minus another 20%, it starts to cut into the amount of space I planned for when I bought it.

I always planned on leaving 20% open, but for temporary storage, not storage that was permanently off the table.

Actually, is this true for all SSDs, or just Intel? I forgot to ask that important question.
 

erdemali

Member
May 23, 2010
102
0
0
Yes, I guess it is a good thing. Sort of. Just that with my 128GB minus formatting minus another 20% it starts to impede on the amount of space I planned for when I bought it.

I always planned on leaving 20% open, but for temporary storage, not storage that was permanently off the table.

Actually, is this true for all SSD or just Intel? Forgot to ask that important question

In the presentation, I heard that it is Intel's unique feature.
But IMO it is applicable to most SSDs.

Just increasing the spare area slightly gives a really impressive cost-to-benefit ratio. We were determined to reserve ~30-40% free space on our SSDs anyhow. Plus, we get a much more reliable device.
 
Last edited:

erdemali

Member
May 23, 2010
102
0
0
I knew I'd seen a tool to set max address before.

It's HDAT2 that will do the job.
You are a legend.

HDAT2 FAQ said:
Q10: Can I cut down the size of a hard drive?
A10: Yes. In the 'Device List' menu select your disk drive. Select 'SET MAX (HPA) Menu', then select 'Set Max Address' and press Enter. Now you can choose your required size of hard drive. It should be smaller than the native (maximum) size of the drive, of course. See also question nr. 1.

Can this be done while keeping the data?
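The "Set Max Address" setting in tools like HDAT2 works in LBA sectors rather than bytes. As an editor's sketch of the arithmetic, assuming 512-byte sectors, a 160 GB drive, and a goal of keeping 80% visible (all illustrative values):

```python
SECTOR_BYTES = 512  # assumed logical sector size

def new_max_sectors(native_bytes, keep_fraction=0.80):
    """LBA count to give an HPA tool so keep_fraction of the drive stays visible."""
    return int(native_bytes // SECTOR_BYTES * keep_fraction)

print(new_max_sectors(160 * 10**9))  # -> 250000000
```

(On Linux, `hdparm -N` can set the same Host Protected Area limit. As for keeping the data: shrinking the visible area below existing partitions risks data loss, so a Secure Erase first is the safe route.)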
 
Last edited:

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Are those old slides? The G2 X25-E uses a super-capacitor for BBWC operation, which changes a LOT when it comes to Intel writes.

If you'd like, I could point this thread out to an Intel SSD specialist who could share non-NDA opinions (aka an Intel employee). Might take a while.
 