Only 200Mbps on new Gigabit system?

Whoaru99

Junior Member
Jan 2, 2007
21
0
0
Just hooked up a new D-Link DGS-2208 Gigabit switch to a computer using an Intel Gigabit NIC and to another computer using the mobo's Marvell Yukon Gigabit NIC.

I have done a speed test using netCPS and also monitored with DU Meter. NetCPS shows about 19.5MB/sec and DU meter shows roughly 200Mbps.

The D-Link says it supports Jumbo Frames up to 9.5KB, and I've tried setting both NICs to Jumbo Frame 9014B, but the speed drops way down to ~20KB/sec on netCPS and ~187Kbps on DU Meter.

I am using Cat 5e cables and have swapped them one at a time with different Cat 5e cables and there is no difference in speed.

Both NICs indicate 1Gbps connection and the switch LEDs indicate 1Gbps connections.

Any suggestions to speed things up?

Why does enabling Jumbo Frames make it much worse when all devices on the Gigabit switch say it's supported?
 

Whoaru99

Junior Member
Jan 2, 2007
21
0
0
Which ones are wrong? I'll fix them.

Anyway, does that prevent anyone from answering the question - only 200Mbps from Gigabit NICs and switch?

 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Try iperf version 1.7

E.g.

server: iperf -s
client: iperf -c server -l 64k -t 15 -i 3 -r

Where server is the IP or name of the remote machine, which is running iperf -s
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,529
416
126
Try increasing the TCP/IP RWin (receive window) to 513920 on both computers.
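On Win2K/XP the receive window can be set through the TcpWindowSize registry value; a minimal sketch of one way to apply the suggested 513920 (assumes reg.exe is available - built into XP, in the Support Tools on Win2K - and a reboot is needed for it to take effect):

```shell
REM Sketch: set the global TCP receive window to 513920 bytes (Win2K/XP)
REM Back up the registry first; reboot afterwards for the change to apply
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpWindowSize /t REG_DWORD /d 513920 /f
```
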
 

Whoaru99

Junior Member
Jan 2, 2007
21
0
0
Did not yet try the iperf program, but increasing RWin to 513920 doesn't seem to have made any improvement - speed numbers are still essentially the same.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,529
416
126
Swapping Cable from Cat5e to Cat6 would do nothing (unless the current cable is damaged).

I am not familiar with the D-Link DGS-2208, but when I swapped my old entry-level Netgear Giga switch for a better one, the bandwidth improved 2x.

 

Whoaru99

Junior Member
Jan 2, 2007
21
0
0
I ran the basic iperf test using iperf -s on one machine and iperf -c server on the other machine. This gave me about the same results as I had before ~200Mbps.

Then I ran iperf with the -c server -l 64k -t 15 -i 3 -r switches and it presented five lines of output in two groups.

The first group averaged ~373Mbps, the second group averaged ~618Mbps.

What is this more advanced test telling me (besides showing the obviously higher bandwidth)?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: Whoaru99
I ran the basic iperf test using iperf -s on one machine and iperf -c server on the other machine. This gave me about the same results as I had before ~200Mbps.

Then I ran iperf with the -c server -l 64k -t 15 -i 3 -r switches and it presented five lines of output in two groups.

The first group averaged ~373Mbps, the second group averaged ~618Mbps.

What is this more advanced test telling me (besides showing the obviously higher bandwidth)?

Were jumbo frames enabled?

The key parameter for iperf here is the -l 64k -- this increases the message buffer size to 64k, which also better corresponds to some applications.

-t 15 sets the total time to 15s
-i 3 sets reporting intervals to 3s (thus 15/3 = 5 lines)
-r additionally runs the test in the reverse direction, so it reports receive performance after the transmission test.

Your receive performance is better than the transmission performance. This is not uncommon, esp. when you have different systems on both ends. Jumbo frames can help sometimes. This can also indicate a PCI bus limitation in some cases.

However, 373 Mb/s =~ 47 MB/s, which is higher than the typical Windows file transfer rate over gigabit (around 30 MB/s), so tweaking the network further might not give you as much of an improvement for Windows file transfers as you'd hope, due to OS limitations.
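For reference, the unit conversion above is just a divide-by-8 (8 bits per byte, ignoring protocol overhead):

```shell
# Convert iperf's Mb/s figures to approximate MB/s: divide by 8
echo "$((373 / 8)) MB/s"   # transmit direction: 46 MB/s
echo "$((618 / 8)) MB/s"   # receive direction: 77 MB/s
```
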
 

Whoaru99

Junior Member
Jan 2, 2007
21
0
0
Jumbo frames were not enabled for the iperf test.

I'll try with JF enabled to see the result.


Your receive performance is better than the transmission performance. This is not uncommon, esp. when you have different systems on both ends. Jumbo frames can help sometimes. This can also indicate a PCI bus limitation in some cases.

Hmmm?? Is it possible the receive performance is better because in this case the client computer uses the mobo's integrated NIC which claims to be PCI-E, whereas the server computer is using a standard PCI card?
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Originally posted by: Whoaru99
Hmmm?? Is it possible the receive performance is better because in this case the client computer uses the mobo's integrated NIC which claims to be PCI-E, whereas the server computer is using a standard PCI card?

It's hard to say -- I had some guesses based on the information provided so far. It might be easier if you list more specs for the client and server -- at least motherboard and OS.
 

Whoaru99

Junior Member
Jan 2, 2007
21
0
0
The server is Win2K SP4 running on Asus CUSL2-C mobo with a P3 @ 1.2GHz and 512MB RAM. The NIC in that machine is a (PCI) Intel Pro/1000MT. The HDD is a 160GB PATA Hitachi.

The client computer is running Win XP Home SP2 on a Gigabyte P965-S3 mobo with a Celeron @ 3.3GHz and 2GB RAM. The NIC is the on-board Marvell Yukon setup that claims to be PCI-E. The HDD is a RAID 1 array of two 160GB SATA Seagates using the mobo's JMicron SATA/RAID controller.
 

Madwand1

Diamond Member
Jan 23, 2006
3,309
0
76
Have you tried installing the latest drivers (from Intel and Marvell)? Check the NIC settings afterwards, as installation sometimes turns off jumbo frames.

I've just done some tests from a P4 641 running 2003 with an add-on Marvell-based PCIe NIC to an X2 3800+ running XP Home with an Intel Pro/1000 MT Server NIC in a standard PCI slot via a D-Link DGS-2205, and the results were good, but the systems are quite different from yours.

In your place, I'd probably try (a) tweaking the NIC settings, esp. interrupt moderation on the Intel (b) observing the CPU utilization on the P3 (also while adjusting interrupt moderation), (c) going to application-level benchmarks for further tweaking (it's possible that fine tuning for synthetic benchmarks such as iperf is de-tuning for application benchmarks and vice versa).
 