Mail Redundancy in the age of IMAP

nobaloney

NoBaloney Internet Svcs - In Memoriam †
Joined: Jun 16, 2003 · Messages: 26,119 · Location: California
Now that many users keep their mailstore on the server and use IMAP access, redundant mail storage is becoming more important.

Have you tried to build it?

Please share your thoughts. What solutions worked for you? And what solutions didn't work?

Thanks.

Jeff
 

SeLLeRoNe

Super Moderator
Joined: Oct 9, 2004 · Messages: 6,790 · Location: A Coruña, Spain
Well, the clustering I'm working on still isn't ready, but it will probably work once I expand the single NFS store into multiple NFS stores kept in sync with rsync. That should provide redundancy for all data, including mail.

Regards
 

nobaloney

I was thinking of using NFS across the 'net for geographical diversity, but of course it still leaves one point of failure.

Not as much chance of failure as with storage on the same system as the services, but some nonetheless.

Have you decided that's the only real alternative?

I was thinking of building something like a mail router — a point of failure least likely to break — to send emails to two servers, but of course, when failures occur they'll get out of sync. And then sent email wouldn't end up on both servers, and the sieve filters would be limited to one machine as well.
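One way to approximate that "mail router" with stock Postfix is `recipient_bcc_maps` on the relay, which makes it generate a second copy of each message for a mirror address. The sync problem Jeff describes remains: the copies diverge the moment one server is down. Hostnames here are made up:

```
# main.cf on the relay ("mail router") box
recipient_bcc_maps = hash:/etc/postfix/recipient_bcc

# /etc/postfix/recipient_bcc (run "postmap" on it after editing):
# deliver a copy of each user's mail to the same mailbox on the mirror
user@example.com    user@mirror.example.com
```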

I have a bare CentOS 6.4 server (nothing else installed) in a Florida datacenter for about four more weeks; send me an email if you'd like to borrow it. It's expensive, so I won't keep it past expiration unless there's a solution.

Jeff
 

SeLLeRoNe

Well, for a cluster with two or more frontends and two or more DirectAdmin servers, yes: to share the data across multiple systems with the ability to live-write from any source, I didn't find any better solution than NFS. Once MySQL replication is working correctly, I need to work on NFS replication as well.
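For reference, the sharing side of that layout is just a standard NFS export from the storage box to the frontends (subnet and path are placeholders):

```
# /etc/exports on the storage server:
# export the home/mail data to the frontend/DirectAdmin boxes
/home  10.0.0.0/24(rw,sync,no_root_squash)
```

After editing, `exportfs -ra` reloads the export table without restarting the NFS server.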

Regards
 

nobaloney

So far I haven't found a better solution that would keep sorting, and moving messages between directories/folders, working.

I'm guessing that may be why Gmail doesn't use folders but instead just filters, which would require a proprietary storage schema but can be kept in a database somewhere.

Anyone else have any ideas?

Jeff
 

SeLLeRoNe

I've a question: where are the filters stored? If they're in a MySQL database, we'd just need to point all webmail/Dovecot servers at that database; if they're in the user's home directory or /etc/virtual, the NFS setup would work for them as well.

I'm now working on NFS replication so there's no single point of failure.

Regards
 

nobaloney

> I've a question: where are the filters stored?

We're using a standard Dovecot installation running sieve filters. I think that replicating those between machines could be a nightmare.
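With a stock Dovecot + Pigeonhole setup, sieve scripts typically live in each user's home directory (the active script defaults to `~/.dovecot.sieve`), which is why they travel with the mailstore rather than sitting in a central database. A minimal example script (the folder name and match string are made up):

```
# ~/.dovecot.sieve -- minimal per-user filter
require ["fileinto"];
if header :contains "subject" "[directadmin]" {
    fileinto "Lists";
}
```

If the home directories themselves live on the shared NFS store, the scripts would replicate along with the mail.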
> I'm now working on NFS replication to have no point of failure.

I suppose NFS could be a good idea with a high-availability NFS filestore. Have you looked into building an NFS filestore in one location and calling it from another location a few thousand miles away? Do you think it'd be fast enough to give users a good experience?
> NFS is very insecure; just use DRBD or rsync.

Is there any reason we couldn't do NFS over a secure link?
> I'm using DRBD to sync two storage servers that share contents using NFS ;)

Sounds like a good idea. Does it work well over long-distance links across the 'net?
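For reference, a two-node DRBD resource of the kind described above might look like this (hostnames, devices, and addresses are placeholders). Protocol C is fully synchronous — every write completes on both nodes — which is precisely what makes it painful over a long-distance link:

```
# /etc/drbd.d/mailstore.res -- hypothetical two-node resource
resource mailstore {
    protocol C;              # synchronous: writes complete on both nodes
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on storage1 {
        address 10.0.0.11:7789;
    }
    on storage2 {
        address 10.0.0.12:7789;
    }
}
```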

Thanks to both of you.

Jeff
 

SeLLeRoNe

I still haven't tried NFS over long distances — I don't have the resources for that, so I'm using a LAN at the moment.

DRBD with NFS is working fine. Now I'm going to try DRBD with GFS, to allow simultaneous connections on all storage nodes (right now it's just failover, with DRBD duplicating data block-by-block across all storage servers).

Since the NFS storage servers are currently on a LAN with a shared LAN IP, it would be hard to test with a remote server. Also, the failover nodes need the same shared IP, which requires either a VPN between the locations or a public IP assigned to both locations (so both ISPs must be "ready" to serve that IP).
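The shared-IP failover described above is commonly implemented with a tool like keepalived (Heartbeat/Pacemaker are alternatives — the thread doesn't say which is in use). A minimal sketch, with made-up interface and address values:

```
# /etc/keepalived/keepalived.conf -- floating-IP sketch (values made up)
vrrp_instance NFS_VIP {
    state MASTER             # BACKUP on the second storage node
    interface eth0
    virtual_router_id 51
    priority 100             # lower on the backup node
    virtual_ipaddress {
        10.0.0.100           # the shared IP the mail servers mount
    }
}
```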

Regards
 

nobaloney

I'm not sure why you'd need the same IP#; I'm thinking of failover that would direct the user to the IP# of the working server.

Maybe once you've got it all figured out I'll just hire you :D.

Jeff
 

SeLLeRoNe

Because you point the server at a remote NFS store using a single IP, the IP for a mount point must stay the same.

So for failover, all boxes have their own private IP plus a "shared" IP between the hosts, which is live on only one host at a time; when that host goes offline, another host brings up that IP.

So the servers will always reach a single IP that is shared between multiple servers, without needing to re-mount.
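In other words, the clients mount through the floating address, so a failover is invisible to them. A sketch fstab line on each frontend (the IP is a placeholder for the shared address):

```
# /etc/fstab on each mail/web server: mount /home via the floating IP
10.0.0.100:/home  /home  nfs  rw,hard,intr  0  0
```

The `hard` option makes clients retry indefinitely while the IP moves between storage nodes, rather than returning I/O errors to the mail software.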

I hope I explained it clearly.

Regards
 

nobaloney

Perfect explanation. Easy to do with a VPN.

All this to sell business class email hosting in competition with Google. I have to wonder if it's worth it :).

Jeff
 