For MySQL, some information from the links below can be used:
http://www.howtoforge.com/mysql_database_replication
It's a lot more complex than that (see the discussion at ...06/04/20/advanced-mysql-replication.html); it's not just a simple fix.
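For reference, the simple one-way (master-to-slave) setup those articles start from looks roughly like the following; the hostnames, account, and password are just placeholder examples, and the real log file and position have to come from SHOW MASTER STATUS on your own master:

    # master my.cnf (example values)
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # slave my.cnf
    [mysqld]
    server-id = 2

    -- on the master: a replication account (name/password are examples)
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave.example.com' IDENTIFIED BY 'secret';

    -- on the slave: point it at the master and start copying
    CHANGE MASTER TO
      MASTER_HOST='master.example.com',
      MASTER_USER='repl',
      MASTER_PASSWORD='secret',
      MASTER_LOG_FILE='mysql-bin.000001',
      MASTER_LOG_POS=4;
    START SLAVE;

That only gives you a one-way copy; failover, failing back, and writing to both servers at once (master-master) is where the real complexity starts.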
Please note that this is basic information. The network could still go down, and because of that you should have 2 uplinks to different routers, with different connections to the world, to minimize this problem.
Actually our network at the California data center is set up this way, though we've never had a request for it, so we've never implemented it. Our uptime (on Level 3) appears to be good enough without it; downtime is generally system related, with less than five minutes of network downtime per year.
With this documentation you can see that the things below won't be a single point of failure (please note that I don't do the management part, I just do the part that a visitor of a website will see):
- mail (an MX20 in another network that forwards the mail to the MX10 mailserver; this could also be a cluster)
The problem with a backup MX (which has been discussed ad nauseam in many email forums) is that if your backup MX doesn't know which users are real and which aren't, you can easily overload a system with, for example, a dictionary attack. And when the primary MX is restored, you've got the problem of what to do with the undeliverable emails. You couldn't refuse them while your primary was down, because you didn't have a definitive list of real addresses, and now you can't discard the undeliverables, because that's against the RFCs. Neither can you return them, because you may be returning them to forged senders, in which case you're a spammer yourself. In my opinion it's much better to rely on the built-in redundancy of the email system, which (with proper DNS replication) will keep attempting delivery for up to four days on most systems.
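For anyone following along, the MX10/MX20 layout above is nothing more than a priority list in DNS (the hostnames here are examples). The built-in redundancy I'm talking about is simply that sending servers try the lowest number first and, if nothing answers, queue the mail and keep retrying, typically for four to five days:

    ; in the example.com zone file (example hostnames)
    example.com.    IN  MX  10  mail.example.com.       ; primary mailserver
    example.com.    IN  MX  20  backup.otherisp.net.    ; backup MX in another network

Drop the MX 20 line and remote servers simply hold the mail themselves until mail.example.com comes back, which is the behavior I'd rather rely on.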
- httpd/webservers (multiple webservers, each installed with the same config)
Works great for static sites (we're implementing it now for emergency support sites for many of our resellers). However, for database-driven sites, see the MySQL discussion above, and for sites where the user uploads information, there's even more dependence on the issues in that discussion.
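For purely static content, keeping the webservers identical can be as simple as pushing the docroot and config from one box to the others on a schedule; the paths and hostname below are examples, not our actual setup:

    # push the document root and Apache config to a second webserver (example paths/host)
    rsync -az --delete -e ssh /home/sites/      web2.example.com:/home/sites/
    rsync -az --delete -e ssh /etc/httpd/conf/  web2.example.com:/etc/httpd/conf/

The moment users upload files or the site writes to a database, a one-way copy like that is no longer enough, which is exactly the point above.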
- fileserver (2 servers that contain the same data, with 1 virtual IP and heartbeat)
Seems great until you discuss getting that one virtual IP implemented; all the articles assume you've got all the required resources. You either depend on one datacenter (which works for some of us but not for others), which for most of us is still at least one point of failure, or, to span multiple data centers, you have to own your own IP#s and somehow manage to get single IP#s into routing tables.
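To be clear, the heartbeat piece itself is the easy part; in the old Linux-HA style it's two small files (node names, interface, and the address below are examples):

    # /etc/ha.d/ha.cf on both fileservers (example nodes/interface)
    node          file1.example.com
    node          file2.example.com
    bcast         eth0
    keepalive     2
    deadtime      30
    auto_failback on

    # /etc/ha.d/haresources -- file1 normally owns the virtual IP
    file1.example.com 192.168.0.100

The hard part is what I described above: that 192.168.0.100 can only float between machines that are both allowed to answer for it, which in practice means the same subnet in the same datacenter.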
- loadbalancers (2 servers that contain the same config and data, with 1 virtual IP)
Same redundancy issues as directly above, but doable if you can resolve those issues and have the resources.
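For what it's worth, the balancing itself is well-trodden ground, for example with LVS; the addresses below are examples, and the floating virtual IP is still the hard part, as above:

    # LVS on the active balancer (example addresses)
    ipvsadm -A -t 192.168.0.100:80 -s rr                   # virtual HTTP service, round robin
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.11:80 -g   # real webserver 1, direct routing
    ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.12:80 -g   # real webserver 2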
- 2 or 3 DNS servers (in at least 2 networks)
Absolutely; this is the kind of set-and-forget solution that both DA and our Master2Slave DNS replication supply.
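In BIND terms, the slave side of that kind of replication is only a few lines of named.conf per zone (the zone name and master address here are examples):

    // named.conf on a secondary nameserver (example zone and master IP)
    zone "example.com" {
        type slave;
        masters { 192.0.2.10; };
        file "slaves/example.com.db";
    };

Zone transfers and NOTIFY then keep the copies current without anyone touching the second box, which is why I call it set-and-forget.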
- MySQL (see the MySQL documentation; it is possible to have 2 MySQL servers behind 1 virtual IP)
Possible? Absolutely, within the limitations above. Easy? Not for most of us.
- SSH (for maintenance, of course, on every server; if you also want it for clients you could use an extra server for it)
- FTP could be on 2 servers that place the files on both fileservers, but of course with 1 virtual IP, like the loadbalancers.
What did I forget?
Your opinion may most certainly vary; in my opinion you've forgotten that this isn't the sort of set-and-forget system that most DA users seem to want. I don't think this is DA's market, at least as defined today for most DA users.
Even with H-Sphere, which is set up for multiple servers, this kind of redundancy isn't addressed; it's just too complex for most of us, and it requires specialty setups in datacenters, plus cooperative transit providers for geographically diverse IP#s.
Jeff