Backup to Google Drive? FUSE?

FAF (Verified User, joined Apr 1, 2006, 87 messages, CPH, Denmark)
Hi

Google just dropped the price on their 1TB Google Drive plan, so it's actually a pretty good deal right now. I need to read more about it though, like bandwidth usage, etc.

Is anyone here using Google Drive or S3 for backups? google-drive-ocamlfuse (FUSE) works on CentOS 6.*, I'm just unsure whether it's stable or a good idea at all. Right now I use FTP, locally in the hosting center to a different machine, but I only have 100GB or so. 1TB would be a nice addition because my 7-day rotation builds up.
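A first test could be as simple as mounting the drive and copying a backup onto it, something like this (untested on CentOS, and assuming google-drive-ocamlfuse is installed and already authorized via OAuth; the mountpoint is just an example):

Code:
# mount Google Drive via FUSE
mkdir -p /mnt/gdrive
google-drive-ocamlfuse /mnt/gdrive
# ...copy the DA backup files onto the mount...
fusermount -u /mnt/gdrive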

Alternatively, I run DA in a KVM guest and use vzdump to make "bare-metal" snapshots locally. The only drawback is that the VM needs to be suspended during the backup, but these are really just disaster-recovery images for a full restore. They are stored locally on the machine and should eventually be moved outside the hosting center.
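For reference, the dump itself is just a one-liner (Proxmox-style vzdump; the VM ID, dump directory and compression are examples):

Code:
# suspend-mode "bare-metal" snapshot of the DA guest
vzdump 101 --mode suspend --dumpdir /var/local-backups --compress lzo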

It's more important that DA gets some kind of incremental solution, or an off-site backup option like Google Drive or S3, and not only FTP or SSH. It works as it is and has done so for many years; there's no problem with DA and backups, I just need to be able to store the backups outside the center, directly from DA to the cloud. So my question comes down to FUSE: is it OK or not?

Any thoughts? Or working setups that use Google Drive for backup purposes? Plugins?

thx

/FAF
 
Definitely thoughts... $10/TB/mo is very interesting to me for offsite backup. I can't sell at that price, and I'd love to buy at that price.

Please see what you can find out about the FUSE file system on CentOS 6 with Google Drive, and also about their charges, if any, for transit.

And perhaps you'll be willing to try it out for us, or perhaps I can.

Jeff
 

Thx Jeff

Yes, I'm looking at different solutions right now.

Just reading up on: http://code.google.com/p/gdrive-linux/w/list

I'm also checking out "Grive": http://www.lbreda.com/grive/
- "grive can do two-side synchronization between the Google Drive and local directory".

I was also playing with the idea of building a Linux VM on KVM solely for the purpose of syncing with Google Drive; other machines could mount it via NFS or SSH. The idea is to keep FUSE running in a separate VM so I can play around and test things, keeping it centralized and as clean as possible. I don't know how much space I would need for this, or whether it could just act like a proxy. That way multiple servers could use the service.
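The other servers could then attach to that VM with something as simple as sshfs (hostname and paths are just examples):

Code:
# mount the sync VM's Google Drive folder over SSH
sshfs backup@syncvm:/mnt/gdrive /mnt/gdrive-proxy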

I was also thinking about a Google Drive DA plugin using Grive, but it's a little early for me to see the full picture. I need to read up on that.

I have been running FUSE on Debian for a couple of years; I made a simple s3fs sync to Amazon S3. It works well, but I don't really know whether it will run just as nicely on CentOS. The Amazon S3 API is also another beast. The issues for me right now are stability, security and reliability. I also built a Pydio server at one point, on Debian, which likewise connects to S3 via s3fs without issues, but again, that's S3 and not Google's API.
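For the record, the s3fs part is a one-liner (bucket name and credentials file are examples):

Code:
# mount an S3 bucket as a local folder via s3fs
s3fs mybackup /mnt/s3 -o passwd_file=/etc/passwd-s3fs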

I found an intro to Python-FUSE, which can be used for making a simple filesystem: http://www.slideshare.net/matteobertozzi/python-fuse
I need to wrap my head around the concept.

I will continue the research to find the simplest and most stable solution. If you have a dev install, you're more than welcome to try. I'm a little reluctant to just throw this into production without further testing. One way or the other, this is definitely something that can be useful, and as it is I really need to push the DA backups off-site and up to Google Drive, so I'll do further research and see what comes up.

I think $10/1TB is a beast; 1TB is a substantial amount of backup space - if not, just double it :). I'm paying $3-4 a month on S3 for something like 70GB right now, and that is only the storage, not the bandwidth. I only use it for WP backups and some 40GB of MP3s for redundancy on my CentovaCast server. I think Google made a nice move here, and I would gladly migrate to Google Drive for my backup purposes. I haven't found any pricing or info on the bandwidth yet.

Stay Tuned :)

thx

/FAF
 
I'm going to have to let you be the 'go-to' person on this project; I'm just too busy right now.

But I'm going to remain interested.

Have you found any information on data transit? Please feel free to write me (email) for some information on what I can get and what I pay for it in the datacenter I use.

Jeff
 
Thx again Jeff. I'm busy too right now; I need to upgrade the KVM on my virt server first. After the upgrade I will work on getting FUSE onto CentOS and connected to Google Drive. The first run will just be a mounted folder. KISS!

I found something on the price, but I'm not sure whether it applies to Google Drive, which must have some kind of flat rate. Amazon does not charge for incoming bandwidth, and it would be a surprise if Google did, but then again, it would make sense if they charged a minimal amount, because the price is already low as it is. I just got my bill from Amazon S3, and it looks like they have lowered their prices too, because it was only $0.63 in total for storage this month. I didn't investigate further, though, just accepted :).

Here is something on Google's storage pricing:

https://developers.google.com/storage/pricing#_PricingExample

I will continue the research on FUSE after I have upgraded my KVM server. I will also look at the details from Amazon to see whether they have dropped their prices too. It's nice to have a little price war among the storage giants.

/FAF
 
Presuming these are the prices, free ingress (into the Google network) means we could back up and store at no extra charge, but if we need to get copies out it could cost us as much as $120/TB. That may be doable for most of us; it's got to be an individual decision.

But how does FUSE change this? Is there anything FUSE will do that would cause us to download when we don't want to? I don't think so.

What if we delete or move things in a FUSE directory:
Code:
mv backupdirectory oldbackupdirectory
Will that cause any unexpected transfers?

Test required, I suppose.
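Something like this could answer it quickly (a rough sketch; iftop is assumed to be installed, and the mountpoint is an example):

Code:
# time the rename, and in a second terminal watch for unexpected transfers
time mv /mnt/gdrive/backupdirectory /mnt/gdrive/oldbackupdirectory
iftop -f 'port 443'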

Jeff
 
Sorry NoBaloney... never got to finish this one. It seems like too much of a hassle as is, so I'm ditching Google this time. Let's see what pops up later. Sorry for the inconvenience.
 
What about MEGA? They give 50GB for free with an account, and 4TB for €8.33/month.

I saw an article in a Linux magazine that explains how to make automatic backups over SSH stored in a MEGA account; isn't that cheaper and useful as well?

Regards
 
Where do you see €8.33 for 4TB? Looking at https://mega.co.nz/#pro it costs €29.99. Still pretty cheap, I think.

But what's the situation with MEGA? I feel that it's not as reliable in terms of being a business.
 
Tnx for the lead. I'll stick with S3 for now. It's been solid and reliable for several years, almost set-and-forget... it's also quite reasonable at $0.03 per GB. It can only get better from now on :) - I sleep well, and the price is nothing to worry about. I need to do a price survey soon though, but for my needs I have found my peace.
 
I'm still using a local (same datacenter, but different cabinet, different network, different router) server for backup. But I'm interested in something like S3. How do you manage rotation? Do you create a server in the cloud to rotate the S3 data?

Jeff
 
Hi

With S3 I use an s3cmd script:

Code:
/usr/bin/s3cmd put -r --reduced-redundancy --server-side-encryption /backup s3://mybackup/

Then you can rotate the backup files using the "versioning" feature and migrate old files to Glacier ($0.01/GB) with the "lifecycle" feature. This is very easy using the AWS control panel.

roberto
 
Do you have a bit more information? Perhaps a link to the scripts?

We'd like to offer our clients an offsite backup option.

Jeff
 
Hi Jeff

s3cmd script:

http://s3tools.org/s3cmd

s3cmd howto:

http://s3tools.org/s3cmd-howto


I use s3cmd with these options:

Code:
/usr/bin/s3cmd put -r --reduced-redundancy --server-side-encryption /backup s3://mybackup/

where /backup is the local folder and s3://mybackup/ is the remote S3 bucket.

Now you have your files stored in S3.
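To keep it hands-off, the same command can be dropped into cron (the schedule and log path are examples):

Code:
# crontab entry: nightly sync at 03:00
0 3 * * * /usr/bin/s3cmd put -r --reduced-redundancy --server-side-encryption /backup s3://mybackup/ >> /var/log/s3-backup.log 2>&1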

How do you rotate your files and migrate older ones to Glacier?

- Log in to the AWS Management Console
- Open the S3 service
- Click your backup folder (bucket)
- Click "Properties"
- Enable "Versioning" for this bucket

Now S3 keeps all historical versions of a backup file when you overwrite it. The downside is that they are all stored in S3 and never deleted.

To lower costs, we need to migrate files to Glacier when they are older than "n" days (daily, weekly, whatever you want).

In "Properties" enable "Lifecycle" and set the lifecycle rules (migrate to glacier and when delete files).

These are my lifecycle rules:

Archive to the Glacier Storage Class 1 day after the object's creation date. (Note that objects archived to the Glacier Storage Class are not immediately accessible.)

Expire 95 days after the object's creation date.

Action on previous versions:

Archive to the Glacier Storage Class 0 days after the overwrite/expiration date.

These rules should reduce your storage costs; see the AWS site to learn more about Glacier pricing.
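The same rules can also be scripted instead of clicked, e.g. with the AWS CLI (just a sketch; the bucket name and rules file are examples):

Code:
# apply lifecycle rules from a JSON file instead of using the console
aws s3api put-bucket-lifecycle-configuration --bucket mybackup --lifecycle-configuration file://lifecycle.json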



I hope this helps you, and sorry for my English :)

roberto
 
Thanks.

I'm very busy through next week (building a new VPS host) but will look into this for a remote backup offering soon. I (and hopefully others) appreciate your help.

Jeff
 
I'm still using local (same datacenter, but different cabinet, different network, different router) server for backup. But I'm interested in something like S3. How do you manage rotation? Do you create a server in the cloud to rotate the S3 data?

Jeff

I see that you've already got an answer to this, but let me briefly tell you how I do it, since my needs are very small.

I have a KVM setup. All my KVM instances and the dumps from DA go to a local backup server, just like yours, with a 7-day rotation locally. All the content-related stuff from inside the domains, e.g. a WP site, goes directly to S3. So all user-generated stuff goes to S3, including the DB, media, uploads, themes, plugins, files, etc. If I have to restore a KVM machine or DA stuff, I just fetch it from the local server. If I need some user domain content restored, I fetch it from S3 via an S3 backup/restore CMS plugin or similar. YMMV, but my setup is simple and I like it clean. I don't have a lot of clients :)
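A restore from S3 is just the reverse direction with s3cmd (bucket and paths are examples):

Code:
# pull one domain's content back down from S3
s3cmd get -r s3://mybackup/domains/example.com/ /home/admin/restore/example.com/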

If I were to build another setup, I would build another KVM guest with s3fs-fuse, or similar, and use that VM as a backup-sync-with-S3 node. You have to find a stable solution and be aware that you could introduce a memory leak or similar on your production server; that's why I think it's better to use a separate, isolated instance for syncing with S3. I already have a working Pydio server syncing with S3 in an isolated KVM; it's just a test, but interesting - it could very easily be used as a backup-to-S3 server. Maybe I'm a little paranoid, but I like to keep the DA server as minimal and clean as possible. I use KVM and not OpenVZ because I like each VM to run its own kernel/OS. I think it still follows the KISS principle.
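As a sketch, such a node would basically just mount the bucket and export it to the internal network (bucket, paths and subnet are examples; NFS-exporting a FUSE mount can be finicky, so test before trusting it):

Code:
# on the sync VM: mount the bucket, readable by other local users
s3fs mybackup /export/s3 -o passwd_file=/etc/passwd-s3fs,allow_other
# /etc/exports entry to share it internally:
# /export/s3  10.0.0.0/24(rw,sync,no_subtree_check)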

Good luck to you Jeff.

/FAF
 
You've made a lot of good suggestions. I will look into it as soon as I've got my VPS host up and running.

Jeff
 
Just a little resurrection: I'm reading up on this backup tool:

http://blog.phusion.nl/2013/11/11/d...automated-full-disk-backups-for-your-servers/

I know it's an old post, but I will do some research on this.

Someone commented:
"or try tklbam (TurnKey Linux Backup And Migration) turnkeylinux.org which uses duplicity + S3 with even more simplicity. It's now available as a stand alone package using their ppa."

TurnKey offers ready-made VMs, but it seems there's also a PPA.
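Since both that post and tklbam build on duplicity, a minimal run straight to S3 would look something like this (untested here; the bucket name and prefix are examples, and the keys come from the environment):

Code:
# encrypted, incremental duplicity backup to S3
export AWS_ACCESS_KEY_ID=yourkeyid
export AWS_SECRET_ACCESS_KEY=yoursecret
duplicity /backup s3+http://mybackup/duplicity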

Also from the comments:
"DreamObjects (S3 compatible) is only 5c/GB and no API call charges either. Much cheaper than S3 :-)"

2.5¢/GB storage a month + 5¢/GB download a month:

https://www.dreamhost.com/cloud/storage/

They also have plans, e.g. 1,024 GB for $19.95/month (1.95¢/GB).
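Since DreamObjects speaks the S3 protocol, an existing s3cmd setup should carry over by just pointing it at their endpoint (the endpoint names are assumptions, check their docs, and option support depends on your s3cmd version):

Code:
s3cmd --host=objects.dreamhost.com --host-bucket='%(bucket)s.objects.dreamhost.com' put -r /backup s3://mybackup/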

Worth a look. This subject obviously never dies :)
 