• Announcements

    • Terry (WGS)

      WGS Forums Will Close on January 31st 2017   01/18/17

      Hi everyone,

      We're coming up to We Got Served's ten-year anniversary, and I've been taking a good look at the site to work through some plans for the future. I wanted to give you all a couple of weeks' notice that I'll be closing WGS Forums at the end of this month.

      As you'll be aware, the forums were opened to support Windows Home Server users and have done so brilliantly, thanks to everyone's participation. However, with each day that passes there are fewer and fewer WHS deployments out there, meaning that forum registrations and traffic have now dwindled. I've tried a few times over the years to test some forums on related or adjacent topics; however, they simply haven't caught on. It's clear that these were always going to be predominantly Windows Home Server forums and, with the passing of that platform, they've served their purpose well.

      So, please take some time to archive anything you need over the next two weeks. Access to the forums will end on Jan 31st 2017.

      Many, many thanks once again for your participation and support here. It's been a great community!

      Very best wishes,
      Terry
thebeephaha

Fast uploads to server, slow downloads from it.

16 posts in this topic

When I test the speed:

(Uploading) I get 30-40 MB/sec from my Vista 64 client to the WHS server shared folder, which seems pretty good.

(Downloading) I get 6-7 MB/sec from the WHS server to the Vista 64 client. (copying to either of its RAID arrays)

Why is this happening? (note: I'm not the only one either, READ THIS)

My network equipment is as follows:

Linksys EG005W Gigabit Switch (connects the server and client together) --> Linksys WRT54GL Router running Tomato --> Linksys BEFCMU10 Cable Modem

Full server specs:

* Intel Entry Server Motherboard SE7221BK1-E w/ Dual Intel Pro/1000 MT Adapters

* Pentium 4 540 3.2GHz

* 2GB (2x1GB) Kingston DDR2 PC5300 RAM

* Supermicro AOC-SAT2-MV8 8 port SATA Controller

* Promise SATAII150 TX4 4 port SATA Controller

* Adaptec 1210SA 2 port SATA controller

* Nine x Seagate 320GB 7200.10 drives (system) & (storage)

* Four x Western Digital 160GB Scorpio drives in a 4-in-1 Hotswap Bay running RAID 10 (320GB usable) (storage)

* Two x 500GB Samsung Spinpoint F1 HD502IJ drives (storage)

* Two x 500GB Seagate 7200.9 & .10 drives (storage)

* ASUS DVD-ROM

* Lian Li PC-75 Full Tower

* PC Power & Cooling Silencer 750w CF Edition

* One x 120mm fan, six x 80mm fans, one x 70mm fan - All undervolted.

Full client specs:

* ASUS Striker Extreme

* Intel Core 2 Quad Q6600 @ 3.6GHz

* 8GB (4x2GB) G.SKILL 1066MHz @ 900MHz 5-5-5-15 2T

* EVGA 8800 Ultra Superclocked

* Dell/LSI Perc 5/i SAS Controller w/ 512MB DDR2 Cache

* RAID0 4x80GB Raptors (WD800GD)

* RAID5 4x750GB Hitachi 7K1000 (HDS721075KLA33)

* Intel Pro/1000 PT Adapter

* Samsung SH-S203B

* Auzentech X-Meridian w/ LM4562NA op-amps

* Cooler Master Stacker 832

* Cooler Master Ultimate Circuit Protection 900w

* Nine x 120mm fans, one x 92mm fan, two x 40mm fans - running on a Zalman MFC1 Plus-B Fan Controller

Note: I already applied this tip and this tip to my Vista client but it didn't make any difference.


Might be a long shot but have you tried different ethernet cables?

Ethernet cabling is organised in twisted pairs: in 10/100 Ethernet, one pair (two wires) carries transmit and another carries receive. I believe, though a more advanced network techie can correct me, that gigabit still works along these lines, although it uses all four pairs in the cable.

If there's a faulty cable/connection on just one of the pairs then it may reproduce your error. Again, I could be wrong here, but I think it's logical.

Maybe disconnect the Linksys and use that cable for testing.


I'm using CAT6 cables, tried some CAT5e too, no change. I even connected both server and client together without the switch in between and I get the same result.

I then tested file speeds between my storage pool drives and a non storage pool drive and writing and reading from it I get the same results as well.

Something with WHS is not right.


Anyone? Any other ideas?

I tried this tip also but still no dice. I also checked and my drives are in DMA mode so that's not the issue.

I'm going crazy. What good is a server if I can't share files at reasonable speeds?


I assume you tried the switch to 1000Mbps (1Gbps) Full Duplex listed in one of those guides you linked, on BOTH of your systems? You may also want to try enabling Jumbo Frames, depending on your network cards' abilities. Some drivers have a simple ON/OFF option; others have preset sizes that must be the same on both ends to receive the data faster. In my setup I am using 4KB frames on all systems; some have used 7KB with good effect. However, the best-case scenario for this tweak is small numbers of really large files, not large numbers of really small files, where the disk subsystem starts to slow things down.

Give it a try and see if it helps.
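To get a feel for why jumbo frames help mainly with large files, here's a rough back-of-envelope calculation in Python (an editorial sketch, not from the thread; header sizes are the usual Ethernet/IPv4/TCP figures, with TCP options, preamble, and inter-frame gap ignored):

```python
# Back-of-envelope: payload efficiency and frame counts for a standard
# 1500-byte IP MTU vs a 9000-byte jumbo MTU. Assumed per-frame overheads:
# 20 bytes IPv4 + 20 bytes TCP inside the MTU, 18 bytes Ethernet
# header + FCS outside it. A simplification, not exact wire math.

IP_TCP_HEADERS = 40      # IPv4 + TCP headers, counted inside the MTU
ETH_OVERHEAD = 18        # Ethernet header + FCS, counted outside the MTU

def efficiency(mtu):
    """Fraction of each frame on the wire that is actual TCP payload."""
    payload = mtu - IP_TCP_HEADERS
    return payload / (mtu + ETH_OVERHEAD)

def frames_for(file_bytes, mtu):
    """How many frames it takes to carry `file_bytes` of payload."""
    payload = mtu - IP_TCP_HEADERS
    return -(-file_bytes // payload)   # ceiling division

print(f"1500 MTU: {efficiency(1500):.1%} payload")   # ~96.2%
print(f"9000 MTU: {efficiency(9000):.1%} payload")   # ~99.3%
print(frames_for(10**9, 1500) / frames_for(10**9, 9000))  # ~6x fewer frames
```

The per-frame efficiency gain is modest; the bigger win is roughly six times fewer frames per gigabyte, meaning fewer interrupts and less per-packet CPU work, which matters most when streaming a few large files.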


I will try it. It just bugs me cause I get good speeds to the server from my client but trying to read files off the server is dreadfully slow.

I will try it. It just bugs me cause I get good speeds to the server from my client but trying to read files off the server is dreadfully slow.

I had the same issue with my WHS (standard JBOD) and HTPC, but changing the HTPC NIC settings from "Auto-Negotiate" to "1.0Gbps Full Duplex" solved that issue.

Doesn't sound like it worked for you.


Since you have good transfer rates one way through your network, you should get equal rates the other way. In other words, it does not *seem* as though you have a network problem.

I noticed that you have several SATA cards in your WHS. This may or may not be the problem area, but...

If you really feel like going this far to track down the problem, you might (if possible) try to move data around until you get down to 1 SATA card. You *may* have driver conflicts from 1 card to another. 1 card may work just fine, perhaps 2 cards would be OK. 3?, who knows.

FWIW, my WHS is what is in my sig. My workstation is an Intel-based (E6600) Vista x64 machine. I have not noticed any real slowdown, unless I am moving massive amounts of data and/or copying more than one set of data at a time. I do have a direct connection between these machines through a 1Gb switch, though.


I enabled Jumbo Packets on the server and client for 4088 bytes (9014 was unstable, thinking my switch can't take it) and I also changed the transmit buffer on the client & server from 256 to 512. I also disabled flow control and QoS.

Now transfers to the server fluctuate up and down a little more, from 30-40 MB/sec (before) to 25-50 MB/sec (after), BUT transfers from the server more than doubled: from 6-7 MB/sec (before) to around 18-19 MB/sec (after).

This is much more acceptable but still it's not where I'd like it to be.

Anyone know the optimal settings for these Intel Pro/1000 NICs?


Changed a bunch of settings on these Pro NICs; got it now to 40-50 MB/sec uploads and 25-30 MB/sec downloads.

Much happier!

If anyone needs I can get the exact details later.

Changed a bunch of settings on these Pro NICs; got it now to 40-50 MB/sec uploads and 25-30 MB/sec downloads.

Much happier!

If anyone needs I can get the exact details later.

I am interested in the details, I have the same card and the network also needs some tweaking to get better results.

Thank you in advance.

Nic


You may not have solved the underlying issue which sounds like a duplex problem to me. I strongly suspect that one or more of your network ports or NICs has a fixed setting (like 100Mbps/full) and another is on automatic. This is bad.

When one end of a link (say the switch) is fixed to 100Mbps/full (for example) and the other end is auto-negotiate (auto), the auto-negotiate will FAIL and the adapter at that end immediately drops to auto-sense. Auto-sense detects the 100Mbps carrier (in my example) and therefore brings up the adapter as 100Mbps/HALF, since full-duplex isn't supported with auto-sense.

The result is that a frame transmitted by the full-duplex end while another frame is being received will cause a collision and a connection reset at the half-duplex end. This causes the slow-one-way, fast-the-other problem.

SLOW DIRECTION

Say switch is 100Mbps/full

Server is 100Mbps/half

When downloading a large file, Windows uses a 'sliding window', which means tiny ACK[nowledgement] frames are only sent every so often, rather than for every frame. Say the window size is 10 frames: the client will send an ACK after, say, 5, but the server will stop sending (and wait) if it hasn't heard one by the 10th frame.

The transfer starts, the server sends 5 frames full-speed and the client then sends an ACK. The switch forwards the ACK as the server is sending the 6th frame. The server sees a collision and stops sending.

If we're lucky, the switch re-sends the ACK for frame 5. The server then re-sends frames 6..10 and the cycle repeats. That's one lost frame in six, plus a random wait (this is how Ethernet handles collisions), reducing transfer rates quite a bit.

On older or more basic kit the impact can be catastrophic, as nothing re-sends the ACK. The server assumes all data was lost, reduces the window size, and re-starts. The window eventually shrinks to just one frame: send, wait, ACK rec'd, send, etc. Reckon on a few hundred Kbps on a 100Mbps link.

FAST DIRECTION

Same scenario, but PC is uploading a large file.

The client sends the first 5 frames; the server queues an ACK but can't send it because the line is busy. The client sends a further 5 frames and now stops and waits. The server can now send the first and second ACKs and the process continues. No data is lost, but inefficiency is added because the sending machine keeps having to wait for ACKs. Hence the higher transfer rate.

I think that jumbo frames will mask the problem because the overall number of packets lost for a given file transfer will be lower.

HTH.
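The sliding-window collision scenario described above can be sketched as a toy calculation in Python (an editorial addition, not from the thread; it only counts the re-sent frames and ignores the random back-off delays, which in practice hurt far more):

```python
# Toy model of the duplex-mismatch scenario: the half-duplex end drops
# whatever frame it is sending when an ACK collides with it mid-burst,
# so that frame must be re-sent. Just the "one extra frame per ACK"
# bookkeeping from the post, not a real Ethernet simulation.

def frames_on_wire(data_frames, ack_every=5, collisions=True):
    """Total frames transmitted to deliver `data_frames` frames of data."""
    sent = 0
    delivered = 0
    while delivered < data_frames:
        burst = min(ack_every, data_frames - delivered)
        sent += burst               # data frames in this burst
        delivered += burst
        if collisions and delivered < data_frames:
            sent += 1               # frame lost to the colliding ACK, re-sent
    return sent

clean = frames_on_wire(1000, collisions=False)   # 1000 frames
lossy = frames_on_wire(1000)                     # 1199 frames
print(f"overhead: {lossy / clean - 1:.0%}")      # ~20% extra frames
```

Even in this generous model, one collided frame per 5-frame burst costs about 20% extra traffic; add the mandatory random back-off wait after every collision and the observed one-way slowdown follows.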

I am interested in the details, I have the same card and the network also needs some tweaking to get better results.

Thank you in advance.

Nic

I set both Intel NICs to 9014-byte jumbo packets, disabled Log Link State Events, disabled Priority & VLAN, enabled Receive Side Scaling, enabled Offload v2, and enabled Interrupt Moderation. Under the Performance Options section I have Adaptive Inter-Frame Spacing enabled, Flow Control set to Rx & Tx Enabled, Interrupt Moderation Rate set to Adaptive, and receive and transmit buffers at 2048.

I get up to 70MB/sec read and write now.

I set both Intel NICs to 9014-byte jumbo packets, disabled Log Link State Events, disabled Priority & VLAN, enabled Receive Side Scaling, enabled Offload v2, and enabled Interrupt Moderation. Under the Performance Options section I have Adaptive Inter-Frame Spacing enabled, Flow Control set to Rx & Tx Enabled, Interrupt Moderation Rate set to Adaptive, and receive and transmit buffers at 2048.

I get up to 70MB/sec read and write now.

I have a Pro/1000 MT in the WHS but can't access the advanced settings - even tried reinstalling the latest Intel drivers. How did you access the advanced settings on the server?

Edit: I'm logging in remotely, so perhaps this is why I can't see them? Will try getting on the actual WHS later and update accordingly.

Edit 2: Yep - remote access = no Intel control panel. Glad I could help myself :-P

I set both Intel NICs to 9014-byte jumbo packets, disabled Log Link State Events, disabled Priority & VLAN, enabled Receive Side Scaling, enabled Offload v2, and enabled Interrupt Moderation. Under the Performance Options section I have Adaptive Inter-Frame Spacing enabled, Flow Control set to Rx & Tx Enabled, Interrupt Moderation Rate set to Adaptive, and receive and transmit buffers at 2048.

I thought Vista's magical self-tuning TCP/IP stack choked when "Receive Side Scaling" (RSS) was enabled? Obviously in this instance it doesn't, not at +/- 70 MB/s.
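For context on what Receive Side Scaling actually does (an editorial aside, not from the thread): the NIC hashes each packet's connection 4-tuple and uses the hash to pick a receive queue, and hence a CPU, so a single flow stays on one CPU while different flows spread out. A toy sketch (real NICs use a keyed Toeplitz hash, not CRC32, and the queue count here is a hypothetical four-core example):

```python
# Toy sketch of RSS queue selection: hash the connection 4-tuple,
# take it modulo the number of receive queues. The same flow is
# always steered to the same queue; distinct flows spread out.
import zlib

NUM_QUEUES = 4   # hypothetical 4-core receive side

def rss_queue(src_ip, src_port, dst_ip, dst_port, queues=NUM_QUEUES):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % queues   # real NICs: keyed Toeplitz hash

# One flow, one queue -- repeated lookups always agree:
q1 = rss_queue("192.168.1.10", 50000, "192.168.1.2", 445)
q2 = rss_queue("192.168.1.10", 50000, "192.168.1.2", 445)
assert q1 == q2
```

Keeping a flow on one CPU preserves cache locality and in-order delivery, which is why enabling RSS can coexist with (and even help) a fast auto-tuned TCP stack.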

