DirectAdmin and Portainer

It's relevant to note that Portainer is just a manager for Docker instances: an easy GUI to view your containers and manage them to some degree. Quite useful, but it's doubtful you'd accomplish everything in it without hopping into the CLI.

In theory there should be no issue with installing DA in a Docker container, but you'd be working against the spirit of Docker at that point, and I'd strongly advise against it. Using Docker so far outside its intended functionality means its developers aren't really considering your use case when pushing updates. Docker containers are intended to be expendable: you should be able to lose the container and go on about your day. That means you'd basically end up mounting the whole system root in the container to a static location on your disk so that its data persists, and at that point you're just not getting the value of Docker.
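
To illustrate, here's a rough sketch of the kind of bind-mounting you'd end up doing just to keep a full DA install persistent (image name and paths are hypothetical):

# Anti-pattern: persisting most of an OS install via bind mounts,
# because DA writes all over the filesystem rather than to one data dir.
docker run -d --name directadmin \
  -p 2222:2222 -p 80:80 -p 443:443 \
  -v /srv/da/etc:/etc \
  -v /srv/da/usr-local:/usr/local \
  -v /srv/da/home:/home \
  -v /srv/da/var:/var \
  hypothetical/directadmin-image

At that point the container is just a VM with extra steps.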

Instead, I'd recommend considering LXC containers with something like Proxmox.
 
Hi mxroute,
Have you ever tested this solution?
 
If you're referring to containers in general, most of us have at one point or another. An OpenVZ VPS would qualify as equivalent, and those were quite big a few years back, a bit less so these days in favor of KVM.

If you're referring to Docker, I don't consider that a solution unless it's backed by the DA devs as a new deployment strategy.
 
Has anyone run DA on Proxmox or any other cloud? Are there specific install instructions, advantages, etc.?
Thx
 
No significant advantages in most ways. Being able to snapshot a VM is nice, and the potential of live-migrating one during a hardware outage is there (assuming the volume is stored on network-attached storage, which looks native from the VM's viewpoint), but there are always trade-offs along the way (disk IO, and therefore capacity, typically).
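
For what it's worth, both of those are one-liners on Proxmox (VM ID and node name hypothetical):

# Snapshot VM 100 before risky work
qm snapshot 100 pre-maintenance
# Live-migrate it to another node; shared storage makes this near-seamless
qm migrate 100 node2 --online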

No benefit to a premade image either, as it's a one-liner to install DA.
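
For reference, the classic install has looked roughly like this (check DirectAdmin's current docs before running anything, since the script and its arguments change over time):

wget https://www.directadmin.com/setup.sh
chmod 755 setup.sh
./setup.sh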
 
Thank you @mxroute
How can one achieve a cloud setup where you keep plugging new servers in to grow it, have load-balancing, auto-scaling and fail-over?
 
It's one of those topics that a lot of people have opinions on, but not many have the direct experience to back it up. I don't know all of the things that do work, but I know a lot of the things that don't.

On the face of it, an expandable cloud server with some form of block storage gives you room to scale vertically. But not everything scales well vertically either. Network-attached storage (like Ceph) is the only way to keep scaling storage vertically, but it's neither easy nor cheap to set up a reliable cluster, and you'd need at least a dedicated local network port between the server and the cluster to scale very far without capping out on throughput. That's only going to grow you as far as the memory on the host node, but these days you can pack a lot of memory into a server, so it won't be your first roadblock.

Every application is going to handle replication / failover and horizontal / vertical scaling differently. For example:

Dovecot replication - https://wiki.dovecot.org/Replication
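
As a taste of the per-application work involved, here is a trimmed sketch of Dovecot's dsync-based replication config (peer hostname hypothetical; the wiki page above covers the full listener setup needed on both sides):

mail_plugins = $mail_plugins notify replication

plugin {
  # Each server points at its replication partner
  mail_replica = tcp:mail2.example.com
}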

Load balancers can be outsourced at most major cloud providers, but HAProxy is great if you want to do it yourself.
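
A minimal HAProxy sketch for a two-node IMAP pool (addresses hypothetical); mode tcp because IMAP isn't HTTP:

frontend imap_in
    bind *:143
    mode tcp
    default_backend imap_pool

backend imap_pool
    mode tcp
    balance leastconn
    server mail1 10.0.0.11:143 check
    server mail2 10.0.0.12:143 check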

Load balancers and individual service replication processes aren't going to keep all of your data in sync. Take, for example, your SNI certificates for Dovecot/Exim, your user configs in /etc/virtual, all that jazz. A newbie is going to tell you "GlusterFS," and maybe, just maybe, two servers in the same rack running GlusterFS over a dedicated port between them could handle the replication needs of a vertically scaling server. But put them across the internet and it would barely be reasonable to expect it to handle a few websites, much less a whole server that needs to stay in sync at all times and is constantly writing new files (because that one guy with a catchall email and no spam filters receives 500 spam messages per minute, right?).
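
Part of why newbies reach for it is that the happy path really is this short (hostnames and paths hypothetical), none of which says anything about how it behaves over a WAN:

# On server1, with server2 reachable on a dedicated link:
gluster peer probe server2
gluster volume create shared replica 2 server1:/bricks/shared server2:/bricks/shared
gluster volume start shared
# Mount the replicated volume wherever the synced data should live
mount -t glusterfs server1:/shared /mnt/shared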

None of this is really reasonable at the scale most shared hosting providers operate at, or for the market they're selling to. If you pull all of this off and theoretically have no points of failure, guess what... your Ceph cluster just took a shit and now 75,000 customers are down. Your customers are ticketing to tell you that their last host, the one that kept things simple and deployed customers across multiple single servers with 1TB HDDs in RAID1, never had that problem: "I thought your setup was so expensive because it would never go down?"

IMO it's not worth it. Build good servers, deploy the number of customers you're comfortable with on each one, make backups, and keep building new servers. At the end of the day, spending more doesn't mean better uptime. Office 365 has an outage every week or two. Gmail has outages. Amazon cloud regions go down. Average the uptime over a 10-year period and I'd wager none of these overly complex failover / replicated / horizontally scaled systems have enough to show for the investment over the ones not doing it.
 
It's one of those topics that a lot of people have opinions on, but not many have the direct experience to back it up. I don't know all of the things that do work, but I know a lot of the things that don't.
That's indeed very valuable! Thank you for sharing!

On the face of it, an expandable cloud server with some form of block storage gives you room to scale vertically. But not everything scales well vertically either. Network-attached storage (like Ceph) is the only way to keep scaling storage vertically, but it's neither easy nor cheap to set up a reliable cluster, and you'd need at least a dedicated local network port between the server and the cluster to scale very far without capping out on throughput. That's only going to grow you as far as the memory on the host node, but these days you can pack a lot of memory into a server, so it won't be your first roadblock.
And after a while you need to build another cloud. And then connect both. And increase the complexity that was supposed to be minimized. It's almost like a derivative (with more risk than the underlying asset, the server).

Every application is going to handle replication / failover and horizontal / vertical scaling differently. For example:

Dovecot replication - https://wiki.dovecot.org/Replication
No real top-down management. No real abstraction. Perhaps one day it will be automated, but not yet.

Load balancers and individual service replication processes aren't going to keep all of your data in sync. Take, for example, your SNI certificates for Dovecot/Exim, your user configs in /etc/virtual, all that jazz. A newbie is going to tell you "GlusterFS," and maybe, just maybe, two servers in the same rack running GlusterFS over a dedicated port between them could handle the replication needs of a vertically scaling server. But put them across the internet and it would barely be reasonable to expect it to handle a few websites, much less a whole server that needs to stay in sync at all times and is constantly writing new files (because that one guy with a catchall email and no spam filters receives 500 spam messages per minute, right?).
This is exactly what made me wonder all the time: how have they done it? So they haven't. Good to know.

None of this is really reasonable at the scale most shared hosting providers operate at, or for the market they're selling to. If you pull all of this off and theoretically have no points of failure, guess what... your Ceph cluster just took a shit and now 75,000 customers are down. Your customers are ticketing to tell you that their last host, the one that kept things simple and deployed customers across multiple single servers with 1TB HDDs in RAID1, never had that problem: "I thought your setup was so expensive because it would never go down?"
All eggs in one basket. It goes back to risk management. Love your examples!

IMO it's not worth it. Build good servers, deploy the number of customers you're comfortable with on each one, make backups, and keep building new servers. At the end of the day, spending more doesn't mean better uptime. Office 365 has an outage every week or two. Gmail has outages. Amazon cloud regions go down. Average the uptime over a 10-year period and I'd wager none of these overly complex failover / replicated / horizontally scaled systems have enough to show for the investment over the ones not doing it.
Agreed. A real cable beats WiFi, a real server beats the cloud. What you save on licensing with one cloud you invest in fixing the complexity, and it may cost you more in lost customers and reputation.

I'll gladly build single servers, like some loved (and some not-so-loved big) companies do.

THANK YOU so much for your honest, realistic post - the most precious gift! (y)
 