DirectAdmin on the cloud, is it possible?

System Error Message (Verified User)
Hi, is it possible to run DirectAdmin on a cloud setup where everything is redundant and failure-proof? For example, I may want multiple different web servers running at the same time on their own, multiple dedicated PHP instances running at the same time, and the same for the database. Is it possible for a single DA to control all of this from a separate instance?

For instance, regardless of accounts, each cloud would have a single DA assigned to it. Since DA itself isn't crucial to keeping the account websites running, it could be a single instance on its own, separate from the cloud, but controlling the cloud just as you would a single server. All nodes would have the same settings, with only the IPs being different. Is this possible?

I've been dealing with cloud setups for individual websites, so can this be done with DA for shared hosting? One setup I did handled thousands of requests/s for a fairly large, less common PHP website. I'd prefer not to install DirectAdmin on every part, since part of the goal of separating things is for each component to get more resources in general. The other reason is being able to use a different OS per role. Debian works well for web servers and PHP since it takes very little RAM, while openSUSE works best for storage and databases because it not only gets newer package releases but also comes with an easy-to-use recovery tool should things fail. During testing of a cloud VM system that consistently corrupted LVM storage, openSUSE was the only OS that could cope, thanks to being able to boot into recovery with file-recovery and filesystem tools and then proceed.
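For reference, that kind of LVM recovery from the openSUSE rescue system went roughly like this (the volume group and LV names here are examples, not my real ones):

    # boot the installer media, pick "Rescue System", then:
    vgscan                     # scan the disks for LVM volume groups
    vgchange -ay               # activate whatever groups were found
    lvs                        # list the logical volumes that survived
    fsck -y /dev/vg0/data      # repair the filesystem (vg0/data is an example name)
    mount /dev/vg0/data /mnt   # mount it and copy the data off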
 
That's not how it is designed. It would take a complete rewrite for that to happen; it would need to be rebuilt as a Kubernetes cluster management system (or equivalent, since each cloud provider exposes a slightly different interface to their systems) to manage storage, nodes, databases, etc. all as nodes.

I could see a product like that being interesting, but it would more than likely be much more expensive than DA currently is, for a niche market.

Building websites at scale is different from just running a simple site; there are lots more moving pieces to take into consideration. Most sites would be limited to running on a single node, so it would be difficult to scale an individual site unless it was written with that in mind.
 
I'm not planning to put it on Kubernetes but rather on VMs, with each part being its own VM.
As an example: a bunch of VMs running LiteSpeed/nginx, another bunch of VMs running lsphp/php-fpm, another bunch running MariaDB Galera, another bunch running a Redis cluster, and underneath, Ceph providing mount points and NFS.
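As a minimal sketch of the web tier, assuming the php-fpm VMs listen on 10.0.1.x:9000 and the document root sits on the shared Ceph/NFS mount (all addresses and paths here are made-up examples):

    # /etc/nginx/conf.d/site.conf -- adjust addresses and paths to your network
    upstream php_backends {
        server 10.0.1.11:9000;          # php-fpm VM 1
        server 10.0.1.12:9000;          # php-fpm VM 2
        server 10.0.1.13:9000 backup;   # spare, only used if the others fail
    }
    server {
        listen 80;
        root /mnt/cephfs/site;          # shared docroot so every VM sees the same files
        index index.php;
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass php_backends;  # round-robins across the php-fpm VMs
        }
    }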

Then manage all of that with DirectAdmin as if it were a single mechanism.
PHP websites weren't written with this scale in mind, but the flow of website software is designed for it. For instance, in the most basic case, nginx, php-fpm and MariaDB can all be separated onto 3 servers just by configuring the right network path for each, without changing anything on the website itself. However, I find that shared hosting can sometimes suck, and this sort of multi-server design would do well. nginx, for example, can load balance multiple php-fpm servers, and databases can be replicated. HAProxy helps if you don't want to change the website and deal with multiple IPs for the components.
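For the database side, a minimal HAProxy sketch that gives the website one stable address in front of the Galera nodes (the IPs are examples):

    # /etc/haproxy/haproxy.cfg (fragment) -- node addresses are examples
    listen mariadb
        bind 127.0.0.1:3306        # the site keeps pointing at one local address
        mode tcp
        option tcp-check           # basic TCP health check on each node
        balance leastconn
        server db1 10.0.2.21:3306 check
        server db2 10.0.2.22:3306 check
        server db3 10.0.2.23:3306 check backup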

I'm doing my own website like this right now at home, using various low-powered PCs and ARM boards.
 
Realistically, the costs aren't justified by the results in most cases. Anyone who thinks they have achieved total, seamless redundancy just isn't creative enough to have considered all of the single points of failure they still have.

A good provider and a well-managed server can easily go years without downtime, and everyone has downtime. Facebook has downtime. Office365 has downtime. When even major internet services go down sometimes, it's worth reconsidering whether spending exponentially more to prevent occasional downtime is really worth it.

You can overdo redundancy and really destroy the cost-to-benefit ratio. Then your registrar goes down, like Enom has a million times (as one example), and your whole plan is shot.

There's beauty in simplicity. Be careful chasing dreams that sound this wonderful; I've rarely seen anyone come out the other side feeling like it was worth it. More often, they realize they just added a hundred new points of failure and increased their odds of running into one.
 
Providers can have disastrous downtime too, just look at OVH: one datacentre burned down. That said, don't put all your eggs in one basket.
 
I know what you mean, but at times things like updates can bring a server down; for instance, upgrading from openSUSE 15.2 to 15.3 or Debian 9 to 10 requires a reboot. By splitting the components you can update progressively, hardware included. For networking, one of my earlier designs had each server connected to at least 2 switches, bonding across both for extra capacity and seamless failover. Similarly, AWS recommends distributing over multiple AZs and regions, so if a datacenter burns down your application isn't interrupted, or only minimally, which is one of the key advantages of a service like regular S3, where your files are replicated around the world. Security is a different matter, since it's not just about whether one server gets hacked.
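For the bonding part, a minimal sketch assuming Debian's ifupdown with the ifenslave package installed (interface names and addresses are examples):

    # /etc/network/interfaces -- eth0 goes to switch A, eth1 to switch B
    auto bond0
    iface bond0 inet static
        address 10.0.0.5
        netmask 255.255.255.0
        gateway 10.0.0.1
        bond-slaves eth0 eth1
        bond-mode active-backup   # works across independent switches; LACP would need MLAG-capable switches
        bond-miimon 100           # check link state every 100 ms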

But my point is just about getting seamless management of the split components from a web panel. I think what I can try is running DA on a small VM and then replicating things over the network, from configuration files to user files. For instance, if the files live on Ceph/NFS and DA has the NFS share mounted with a symlink, this could work and no one would notice. Email could then go to a different server altogether. It would be worth a try, but I'm too broke even for the $2 personal license to experiment with, because where I live, based on salaries and currencies, it feels like $10. All I have are the machines I bought back when I was in a different country. Right now I'm just migrating my website from a shared host to a cloud setup with no panel, and checking whether replication abuses LS licenses (not really abuse, but running multiple load-balanced instances with the same license from the same IP), much like AWS EC2 auto scaling from a snapshot based on load would do, except trying it on my own hardware.
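The mount-and-symlink part would look roughly like this (the server address and export path are made up, and I haven't actually tested it against DA):

    # on the DA VM -- 10.0.3.10 and /export/home are example values
    apt-get install -y nfs-common                    # zypper in nfs-client on openSUSE
    mkdir -p /mnt/cephfs
    mount -t nfs 10.0.3.10:/export/home /mnt/cephfs
    mv /home /home.local                             # keep the original around, just in case
    ln -s /mnt/cephfs/home /home                     # DA now reads/writes user files over NFS
    echo '10.0.3.10:/export/home /mnt/cephfs nfs defaults,_netdev 0 0' >> /etc/fstab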
 