Chris Cowles

Backup Scheme Advice?

3 posts in this topic

I'm running WHS 2011 on an HP ProLiant MicroServer. The OS is on a pair of 2.5" 350 GB drives in a RAID 1 mirror. Currently I have 2 x 1 TB + 1 x 2 TB as data drives. The 2 TB drive is partitioned as 2 x 1 TB volumes, so effectively I have 4 x 1 TB partitions on 3 physical drives.

 

One of the 1 TB drives is dedicated to server backup. Since WHS server backup takes over the whole drive, I can't give it just a partition on the 2 TB drive.

 

One 1 TB drive plus one 1 TB partition on the 2 TB drive are dedicated to DrivePool. All shares on it are duplicated, so I have 1 TB of effective storage space, which is plenty for my needs.

 

The remaining 1 TB partition on the 2 TB drive stores client backups and is also a target for backups from other machines using CrashPlan. The clients I back up with WHS are not backed up with CrashPlan; that's only for a single floating laptop that my daughter takes to college. CrashPlan seems easier for that purpose than Hamachi, especially since the latter no longer runs as a service.

 

I have a subscription to CrashPlan+ for off-server backup. My intent is to back up the server backup, the client CrashPlan backup, and perhaps the WHS client backups, if those aren't covered by WHS server backup. My use of CrashPlan in that role is protection against disaster, such as my house burning down or the server being stolen.

 

My first issue is that backing up the DrivePool means backing up both halves of the pool to get everything, which is a lot of duplication. Am I better off just pairing the 2 x 1 TB drives in RAID 1? That way the primary data protection is RAID, and a backup of the data share sees only one set of data. I originally used DrivePool because I had an odd collection of drives; that has changed as I've been able to accrue more similar drives.

 

My second issue is that I dedicated a 1 TB drive to server backup and it filled up REALLY fast. Considering that I have CrashPlan for off-server backup, what should I protect with server backup so that it still provides a means to restore the server? I have enough space on two other workstations on the network to use CrashPlan to copy some data (maybe the client backup directory?) to them, as redundant off-server backup that can be accessed easily.

 

Any suggestions are welcome. I don't mind the effort of reconfiguring drives, etc., to make all this work. But once configured, it has to be automatic or I'll never keep up with it. If I were required to swap external drives in rotation, the scheme would fail.

 

Thanks for your help.

 

EDIT: Also, what recommendations do you have for using CrashPlan? It interferes with server backup, so I can only run it outside the window when server backup is running.
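In case it helps anyone in the same spot: one low-effort way to keep the two from colliding is to gate CrashPlan on the server-backup window, e.g. stopping and starting its service from a scheduled task. This is just a sketch (the 23:00-03:00 window is a made-up placeholder, not anyone's real schedule); the only fiddly part is the window test, since backup windows often cross midnight:

```python
from datetime import time

def in_backup_window(now: time, start: time, end: time) -> bool:
    """True if `now` falls inside the server-backup window.

    Handles windows that cross midnight (e.g. 23:00-03:00).
    """
    if start <= end:
        return start <= now < end
    return now >= start or now < end

# Hypothetical window: suppose WHS server backup runs 23:00-03:00.
# CrashPlan should only be allowed to run outside that window.
def crashplan_allowed(now: time) -> bool:
    return not in_backup_window(now, time(23, 0), time(3, 0))
```

A pair of scheduled tasks could then stop CrashPlan's Windows service when the window opens and start it again when it closes (the exact service name varies by version, so check services.msc rather than trusting anything here).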

 

Chris

Sounds like you have a good plan, and having both local and offsite backups is not necessarily overkill. As you mentioned, your use of CrashPlan is for the very worst case, and I'm sure it provides some reassurance as well. I don't subscribe to any off-site backup service, but I do keep a copy of all the data I can't reproduce (documents, photos, home movies, etc.) on a hard disk in a fire box. That's on top of the complete backup set I keep of all the server data, but those drives are in a closet, so in case of fire I'm toast. Haha.

 

I've thought about getting a separate fire box just for my HDDs, but haven't gotten around to it.

 

 

You mentioned that you back up your drive pool. I would recommend that you back up only the shares themselves; this ensures you're not backing up the duplicated data.

 

I'm not a fan of the built-in backup; I'm using it only to back up my OS drive. For my shares, I'm using a program called Backup4all, and the only reason I selected it is that you can set it to do a checksum of all files. This should help catch any file corruption, either in the backup set or on the server, before it's too late. Let's hope we never find out.
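For anyone who wants the same safety net without a dedicated program, a hash manifest is easy to roll yourself. This is just a minimal sketch (paths and the manifest location are placeholders, nothing to do with Backup4all's own format): hash every file under a share once, store the result, then re-run later and compare to spot silent corruption.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 hex digest."""
    manifest = {}
    for f in sorted(root.rglob("*")):
        if f.is_file():
            # read_bytes() loads the whole file; fine for a sketch,
            # but hash in chunks for multi-gigabyte media files.
            manifest[str(f.relative_to(root))] = hashlib.sha256(
                f.read_bytes()
            ).hexdigest()
    return manifest

def changed_files(root: Path, manifest_file: Path) -> list[str]:
    """Return files whose current hash differs from the stored manifest."""
    old = json.loads(manifest_file.read_text())
    new = build_manifest(root)
    return [p for p, h in old.items() if new.get(p) != h]
```

Run `build_manifest` on a known-good share, save it with `json.dump`, and a later `changed_files` call flags anything that has rotted (or been deleted) since.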

I don't subscribe to any off-site backup service, but I do keep a copy of all the data I can't reproduce (documents, photos, home movies, etc.) on a hard disk in a fire box. That's on top of the complete backup set I keep of all the server data ...

 

Keeping them in a firebox requires physically swapping external drives. If I had to do that, it would never happen.

 

You mentioned that you back up your drive pool. I would recommend that you back up only the shares themselves; this ensures you're not backing up the duplicated data.

 

I'm doing what you suggest. I back up from the pool, not the individual copies in the pool, so I'm effectively just backing up the shares as DrivePool presents them. From CrashPlan's perspective, they're not duplicated. I do it that way only because it's easier to use DrivePool's drive letter, and I don't have to concern myself with whether I'm getting all the pieces.

 

I do back up the WHS client computer backups from one copy of the pool's hidden directories, using server backup. That's because I don't want to back those up to CrashPlan, and because server backup requires a real hard disk as a source. Since the client backup folder is duplicated but the pool has only two drives, I know that each half of the pool contains a complete copy of all the files.
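As a sanity check on that assumption, it's easy to confirm the two pool halves really hold the same file set by comparing relative paths. A sketch (the pool-folder paths you'd pass in are whatever DrivePool's hidden directories are called on your drives; nothing here is specific to DrivePool):

```python
from pathlib import Path

def relative_files(root: Path) -> set[str]:
    """All file paths under root, relative to root."""
    return {str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()}

def missing_from_each(half_a: Path, half_b: Path) -> tuple[set[str], set[str]]:
    """Files present on one pool half but absent from the other.

    Returns (only_on_a, only_on_b); two empty sets mean every
    file exists on both halves.
    """
    a, b = relative_files(half_a), relative_files(half_b)
    return a - b, b - a
```

Point it at the hidden pool folder on each physical drive; if both returned sets are empty, either half alone is a complete copy and is safe to use as the server-backup source.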

 

Chris
