You can avoid a single point of failure for static websites, but not for email, and not for databases or database-driven sites, as they cannot easily be replicated in real time.
To avoid a single point of failure for static websites:
Run two servers on geographically diverse networks.
Have each server run its own DNS, pointing each website to one or more IPs entirely on that same server.
Point to one of the machines as ns1 and the other as ns2.
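The setup above can be sketched as a BIND-style zone file. Everything here is hypothetical for illustration: the domain, the hostnames, and the documentation-range IPs. The key point is that each server publishes its *own* IP for the website, so whichever nameserver answers sends visitors to a machine that is up:

```
; example.com zone as served by server 1 (ns1) -- names and IPs are made up
$TTL 300                      ; short TTL so a dead server falls out of caches sooner
@   IN SOA ns1.example.com. hostmaster.example.com. (
        2024010101 ; serial
        3600       ; refresh
        600        ; retry
        604800     ; expire
        300 )      ; negative-cache TTL
    IN NS  ns1.example.com.
    IN NS  ns2.example.com.
ns1 IN A   198.51.100.1       ; this server
ns2 IN A   203.0.113.1        ; the other server, on a different network
@   IN A   198.51.100.1       ; the website points at THIS server
www IN A   198.51.100.1
; server 2's copy of the zone is identical except that @ and www
; point at 203.0.113.1 instead.
```

Register both machines' IPs as glue for ns1 and ns2 at your registrar, and resolvers will round-robin between the two servers.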
However, there's still a single point of failure you have no control over, and that's the way DNS works:
Once an IP is cached by the DNS server your user queries, it won't clear until the TTL has expired.
Additionally, once your browser has resolved a name, it won't clear its own DNS cache until it's shut down and restarted. And if you're using your local router as your DNS source (as most small networks do), it probably isn't honoring TTLs either, and may have to be restarted to clear its cache.
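The caching behavior above can be sketched as a toy TTL-honoring resolver cache. This is a simplified model, not real resolver code; the hostname and IP are made up:

```python
import time

class DnsCache:
    """Toy resolver cache: a record keeps answering until its TTL expires."""

    def __init__(self):
        self._cache = {}  # hostname -> (ip, expiry timestamp)

    def put(self, host, ip, ttl, now=None):
        now = time.time() if now is None else now
        self._cache[host] = (ip, now + ttl)

    def get(self, host, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(host)
        if entry is None:
            return None
        ip, expires_at = entry
        if now >= expires_at:
            # TTL expired: only now does the stale IP clear,
            # letting the client re-query and discover the surviving server
            del self._cache[host]
            return None
        return ip

# A record cached with a 300-second TTL keeps answering with the old IP
# for the full 300 seconds, even if that server died at second 1.
cache = DnsCache()
cache.put("www.example.com", "203.0.113.10", ttl=300, now=0)
print(cache.get("www.example.com", now=299))  # 203.0.113.10 -- still the dead IP
print(cache.get("www.example.com", now=300))  # None -- cache cleared, failover possible
```

A resolver or router that ignores TTLs behaves as if `ttl` were effectively infinite until it's restarted, which is exactly the failure mode described above.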
And many ISPs (among them AOL, though I'd hardly call them an ISP) don't even honor the short TTLs you'd normally use to avoid at least some of the problem.
We're currently building some redundancy into our network at Level3 in Tustin, California, with multiple ingress/egress points to the Internet through multiple routers and switches to our servers. But even so, there will be single points of failure.
Jeff