Gigabit Woes

aboutblank

Member
Nov 14, 2005
34
0
0
So we just got a GS108 and the speed simply isn't what it should be. Let me explain, and any help would be greatly appreciated.

This is a simplified diagram of our network. In my testing, there were no other computers connected to the network, though there usually are.

GS108 - Netgear gigabit switch
FS108 - Netgear "Fast" switch

My PC runs XP SP2. The Fileserver runs Windows Server 2003.

My PC has a 10/100/1000 Nvidia integrated NIC in the DFI Lanparty NF4 SLI-DR (whew!). The Fileserver has a VIA 10/100/1000 integrated NIC in its Soyo SY-P4RC350.

My PC reports a link speed of 100 Mbps in Task Manager's Networking tab; the Fileserver reports 1 Gbps. Both are correct.

Here and Here are screenshots during the transfer of a 5 GB file from My PC to the Fileserver. As you can see, it is erratic even with no other activity. Keep in mind that in those screenshots, 100% utilization means 100Mbps.

Before this setup, we had both My PC and Fileserver connected directly to the FS108. At that time, a 5 GB transfer would go smoothly, nearly maxing out the 100Mbps bandwidth until the transfer was done.



Now we will try both My PC and Fileserver connected directly to the GS108. Both PCs report a 1Gbps connection.

Here and here are screenshots of a transfer with 100 Mbps marked in white.



My speed sucks. The GS108 seems to be bursting and isn't close to the speed I was getting with just the FS108. Any ideas?
 

bluestrobe

Platinum Member
Aug 15, 2004
2,033
1
0
I've never been a big fan of Netgear, and I just had my last product of theirs die. What kind of cable are you running, and at what distance?
 

TC10284

Senior member
Nov 1, 2005
308
0
0
I don't guess you're running jumbo frames on your file server's NIC, are you?

I have a GS108 also. The best transfer rate I can see between two gigabit PCs is around 21-25%. That is with no other system hitting the file server (the file server is running RAID 5). Going from gigabit to 100 Mbps I can see around 80-95% usage on the 100 Mbps system. It doesn't matter if it's a Cat 5e or Cat 6 cable.

On a sidenote:
I also have a GS105 that's been acting up on me lately. If I have the switch connected to a gigabit connection, the port on the GS105 will flash constantly like there is traffic going over it, but usually the other end will not be flashing at all. If I connect the switch to a 100 Mbps switch, the 100 Mbps switch's link light will flash constantly like there is traffic. My GS108 will NOT do this when connected to the same switch or PC that causes this "problem". I tried to talk to tech support, and they basically gave me the runaround, telling me to try stuff and then call back. I almost felt like telling the person, "No thanks, I'll just buy another brand of switch."
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Three potential explanations have been given above: (1) Netgear brand. (2) Lack of jumbo frames. (3) Cabling. The following shows that you can get much better performance than the OP's despite (1) and (2).

M:\test>xxcopy /y test0\10gb.out \\192.168.0.191\f\test\test0
XXCOPY == Freeware == Ver 2.93.0 (c)1995-2006 Pixelab, Inc.
-------------------------------------------------------------------------------
M:\test\test0\10GB.out 10,000,000,000
-------------------------------------------------------------------------------
Directories processed = 1
Total data in bytes = 10,000,000,000
Elapsed time in sec. = 150.7
Action speed (MB/min) = 3981
Files copied = 1
Exit code = 0 (No error, Successful operation)

That's a 10 GB file transferred at 66 MB/s average. Not impressive, but it's better than the norm, and might have satisfied the OP.

The switch was a Netgear GS608 v2. The NICs here were nVIDIA nForce 3 on the sending side, and a RealTek PCI on the receiving side. Jumbo frames were not used. W2K to XP Home. Onboard RAID 0 to Highpoint RAID 5 (nearly full).

-----------------

Did you try rebooting all the machines? Power cycling the switch?

After that, if you're still having problems, it's probably best to look at the raw network performance first. I'd suggest using iperf for this.

Receiver: iperf -s -i 3
Sender: iperf -c receiver -l 60000 -t 30 -i 3 -r

This sends 60 KB messages for 30 seconds and reports the performance every 3 seconds. Then it sends in the opposite direction for another 30 seconds.

Reduce the duration for exploratory tests. Keep a fairly long duration for reported results, for better averaging and to reduce the impact of luck.

Also look at CPU utilization during this test. Very high CPU utilization can become a bottleneck.

If raw networking looks fine, then stop tweaking it, and look elsewhere for the problem.

Erratic network utilization graphs can sometimes be seen when the drive systems are under stress or underperforming. You could see bursting up to cache saturation, then a slowdown as the transfers only back-fill at the rate the data can be written. After that, you should ideally see the system settle down to a sustainable transfer speed, but it might still be erratic, perhaps due to CPU load, thrashing on the swap drive, HD fragmentation, or lack of system resources, among other possible reasons.

Windows RAID 5 is not noted for its write speed, and that is likely to be the bottleneck.

You can monitor the rate of data reading / writing using PerfMon, e.g. the "PhysicalDisk" object, the "Disk Bytes/sec" counter, with the instance set to the desired drive.
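
If you prefer the command line, something like the following should sample the same counter (a sketch only -- typeperf ships with XP / Server 2003, but the "0 C:" instance name is just an example; run "typeperf -q PhysicalDisk" to list the instances on your machine):

rem one sample per second for 60 seconds; adjust the instance to your array
typeperf "\PhysicalDisk(0 C:)\Disk Bytes/sec" -si 1 -sc 60

Watch that while a transfer is running and you'll see the actual write rate the array is sustaining.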
 

InlineFive

Diamond Member
Sep 20, 2003
9,599
2
0
Originally posted by: TC10284
I don't guess you're running jumbo frames on your file server's NIC, are you?

I have a GS108 also. The best transfer rate I can see between two gigabit PCs is around 21-25%. That is with no other system hitting the file server (the file server is running RAID 5). Going from gigabit to 100 Mbps I can see around 80-95% usage on the 100 Mbps system. It doesn't matter if it's a Cat 5e or Cat 6 cable.

That seems pretty weird; on my 100 Mbit network a P4-D and a P4-HT can utilize at least 60% of the connection.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: InlineFive
Originally posted by: TC10284
I don't guess you're running jumbo frames on your file server's NIC, are you?

I have a GS108 also. The best transfer rate I can see between two gigabit PCs is around 21-25%. That is with no other system hitting the file server (the file server is running RAID 5). Going from gigabit to 100 Mbps I can see around 80-95% usage on the 100 Mbps system. It doesn't matter if it's a Cat 5e or Cat 6 cable.

That seems pretty weird; on my 100 Mbit network a P4-D and a P4-HT can utilize at least 60% of the connection.

I think he meant 21-25% of gigabit, which would be a maximum of 250 Mb/s, which is around 30 MB/s, and still around 3x as much as 100 Mb/s can do. There's nothing weird about that -- it's fairly common.
 

TC10284

Senior member
Nov 1, 2005
308
0
0
Originally posted by: Madwand1
Originally posted by: InlineFive
Originally posted by: TC10284
I don't guess you're running jumbo frames on your file server's NIC, are you?

I have a GS108 also. The best transfer rate I can see between two gigabit PCs is around 21-25%. That is with no other system hitting the file server (the file server is running RAID 5). Going from gigabit to 100 Mbps I can see around 80-95% usage on the 100 Mbps system. It doesn't matter if it's a Cat 5e or Cat 6 cable.

That seems pretty weird; on my 100 Mbit network a P4-D and a P4-HT can utilize at least 60% of the connection.

I think he meant 21-25% of gigabit, which would be a maximum of 250 Mb/s, which is around 30 MB/s, and still around 3x as much as 100 Mb/s can do. There's nothing weird about that -- it's fairly common.

Yes, that is what I meant. Sorry. What's the highest anyone has seen sustained between two gigabit PCs? Is 21-25% good? I would assume so, since that's close to what an HD can write at a sustained rate. On average I see 16% between two gigabit PCs.
As for my RAID 5, it is a hardware RAID 5. Although I wasn't writing to the array, just reading from it. I definitely know that a RAID 5 isn't good for write speeds due to XOR calculations and such. Reading is pretty quick though.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: TC10284
What's the highest anyone has seen sustained between two gigabit PCs? Is 21-25% good? I would assume so, since that's close to what an HD can write at a sustained rate. On average I see 16% between two gigabit PCs.

It's not great, but it is fairly typical. Many factors conspire to give this sort of performance -- average / older single HD performance, crowded / fragmented drives, network implementation inefficiencies, PCI bus, RAID implementation, transfer protocol inefficiencies, thrashing on the swap drive, relative overhead on small files, etc. When you factor most of these out, you can get better performance.

Above, I reported 66 MB/s transfers in a setup that simulated the OP's somewhat.

I've seen around 60 MB/s sustained single-drive to single-drive transfers in the outer sectors.

I've seen in excess of 90 MB/s sustained RAID to RAID transfers.

Maybe I'll see better with Vista to Vista SMB 2.0? Would be nice. I'll have to check this out sometime.

Again, 30 MB/s is a heck of a lot better than 100 Mb/s, and a reasonable level to be satisfied with for the most part. The difficulty increases as you try to push this higher, and the relative returns diminish.
 

aboutblank

Member
Nov 14, 2005
34
0
0
It's Cat 5e cable, about 50 feet in total from My PC through the switches to the Fileserver.

I'm not using Jumbo Frames.

Keep in mind that at the dips in my first screenshot, I'm getting < 5 MB/s! That's TERRIBLE!

The fileserver is running RAID 5 with a Promise controller; it can definitely do better than 5 MB/s writing.

Should I say the GS108 is at fault? What brands do people prefer?
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Wild guess, but could it be flow control between the gig NIC and the switch?

SOHO gear isn't known to adhere to standards.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
You're testing file transfers, not just the networking. Don't assume the networking is at fault until you test it in isolation; read my first post in this thread to start. Some SOHO gear has amazing performance, so don't just assume that it's at fault.
 

aboutblank

Member
Nov 14, 2005
34
0
0
Alright, OP here.

Let's try to answer a basic question: Is there a problem? (Am I getting the performance I should be?) If there is a bottleneck, where is it? Let's benchmark different parts to try to determine where it is.

All benchmarks listed are with My PC and the Fileserver plugged directly into a gigabit GS108.

I didn't quote this earlier: my benchmark for a straight file transfer to the "networked drive" (a Windows SMB share) was 5,120,000 KB in 810 sec = 6,320 KB/s.

Iperf output: (again, gigabit connection)
C:\iperf>iperf -c fileserver -l 60000 -t 30 -i 10 -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to fileserver, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[1852] local 192.168.0.99 port 1786 connected with 192.168.0.97 port 5001
[ ID] Interval Transfer Bandwidth
[1852] 0.0-10.0 sec 338 MBytes 283 Mbits/sec
[1852] 10.0-20.0 sec 340 MBytes 285 Mbits/sec
[1852] 20.0-30.0 sec 340 MBytes 286 Mbits/sec
[1852] 0.0-30.5 sec 1017 MBytes 280 Mbits/sec
[1952] local 192.168.0.99 port 5001 connected with 192.168.0.97 port 4332
[ ID] Interval Transfer Bandwidth
[1952] 0.0-10.0 sec 193 MBytes 162 Mbits/sec
[1952] 10.0-20.0 sec 192 MBytes 161 Mbits/sec
[1952] 20.0-30.0 sec 193 MBytes 162 Mbits/sec
[1952] 0.0-30.0 sec 578 MBytes 162 Mbits/sec

Here is a screenshot of task manager while running two of these iperf tests.

I understand that I could probably improve this by tweaking things like the window size, but 160 Mbit/s is 20 MByte/s, which is still far more than what I'm getting with a straight file transfer.
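
(If I do end up trying the window-size tweak, I believe iperf takes it on the command line -- this is just a sketch with an arbitrary 256 KB window, not something I've run yet:

Fileserver: iperf -s -w 256k -i 10
My PC: iperf -c fileserver -w 256k -l 60000 -t 30 -i 10 -r

Even with the default 8 KB window, though, the numbers above are already well ahead of the 6 MB/s file transfer.)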



Now let's benchmark the hard disks on the fileserver using SiSoft's Sandra.

For the "File system" benchmark it reports:
Drive Index: 34 MB/s
Random Access Time: 3 ms

For the "Physical Disks" benchmark for read performance it reports:
Drive Index: 49 MB/s
Random Access Time: 13 ms

Now let's benchmark the fileserver's hard disks using HD Tach:
Here is a screenshot of the results.
Random Access: 13.3 ms
Average Read: 43.6 MB/s


What else could it be? There is hardly any CPU load.

Could the bottleneck simply be Windows?

From these numbers, I cannot see the bottleneck! Iperf seems to tell me that network bandwidth is at least 160 Mbit/s (20 MByte/s). Both hard disk benchmarks tell me at least 34 MByte/s. So why am I getting 6 MB/s file transfers over a Windows share!?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Well, the above network performance is not amazing (in a positive sense). The worst I've seen personally using a tool like iperf -l 60000 is around 500 Mb/s, so 160 Mb/s is pretty bad for gigabit, and potentially an indication of other performance problems.

For some basic elimination, try taking the switch out and wiring the two GbE NICs directly (a standard cable should be fine for GbE). Assign IPs manually and re-test. Move the gear, and measure with different store-bought and tested cables as appropriate.
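
If you haven't set a static IP from the command line before, something along these lines should work on XP / Server 2003 (a sketch only -- the connection name "Local Area Connection" and the reuse of your existing addresses are assumptions; note the original settings first so you can switch back):

rem on My PC
netsh interface ip set address "Local Area Connection" static 192.168.0.99 255.255.255.0
rem on the Fileserver
netsh interface ip set address "Local Area Connection" static 192.168.0.97 255.255.255.0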

If performance is not significantly improved when you take out the switch, then don't bother about upgrading the switch at this point -- it's not the bottleneck.

Check for new network drivers. Check the driver's advanced options. Note the original settings, then try changing some of them and re-testing with iperf.

If you have the option for jumbo frames on both NICs, turn it on and set MTU. Reboot after changing the registry.
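
One quick way to confirm jumbo frames are actually in effect end to end is a non-fragmenting ping with a large payload (a sketch, assuming a 9000-byte MTU; 8972 is 9000 minus the 28 bytes of IP and ICMP headers):

rem from My PC to the Fileserver's address
ping -f -l 8972 192.168.0.97

If the replies come back, jumbo frames are working across the whole path, switch included. If the ping fails or Windows says the packet needs to be fragmented, something along the path isn't passing them.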

The file server NIC is on PCI, and its storage controller is presumably on PCI as well. If there are other significant devices on the PCI bus, and the OS is on that array too, these could combine to reduce the overall performance. So the underwhelming 160 Mb/s networking performance could drop even further when the PCI bus is being heavily loaded by the storage controller. One test of this would be to do an iperf measurement at the same time as a drive performance test.
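
A rough way to run that combined test, as a sketch: start a long iperf run from My PC, e.g.

iperf -c fileserver -l 60000 -t 60 -i 10

and while it's running, kick off a local disk load on the Fileserver -- copy a large file from another local drive onto the array, or re-run the HD Tach read test. Then compare the per-interval iperf numbers against an idle run. If they sag noticeably under disk load, shared PCI bandwidth is part of the problem.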
 

VooDooAddict

Golden Member
Jun 4, 2004
1,057
0
0
Originally posted by: aboutblank
It's Cat 5e cable, about 50 feet in total from My PC through the switches to the Fileserver.

I'm not using Jumbo Frames.

Keep in mind that at the dips in my first screenshot, I'm getting < 5 MB/s! That's TERRIBLE!

The fileserver is running RAID 5 with a Promise controller; it can definitely do better than 5 MB/s writing.

Should I say the GS108 is at fault? What brands do people prefer?

Depends on the controller and the saturation of the PCI bus. (I assume you are using a 33 MHz/32-bit PCI bus controller?)
 

VooDooAddict

Golden Member
Jun 4, 2004
1,057
0
0
Originally posted by: aboutblank
What else could it be? There is hardly any CPU load.

Could the bottleneck simply be Windows?

From these numbers, I cannot see the bottleneck! Iperf seems to tell me that network bandwidth is at least 160 Mbit/s (20 MByte/s). Both hard disk benchmarks tell me at least 34 MByte/s. So why am I getting 6 MB/s file transfers over a Windows share!?

What anti-virus software and settings are you using? I've had some older versions of Norton on Win2K destroy transfer rates between servers before. Specifically, excluding from active scans the large files the two servers were constantly exchanging made a massive performance improvement. (The files were still scanned three times a week during "full" scans.)

 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: VooDooAddict
I've had some older versions of Norton on Win2K destroy transfer rates between servers before.

Also try disabling any software firewalls if present; I've heard of some of them causing a performance hit.

 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I usually top out at 25-26% utilization to and from my server over a gigabit line. I figure it is the write speed of the HDs I am using that is holding me back. Even though it isn't near max, it is still 2.5x faster than 100 Mbit, so no complaints from me.
 