I need to be able to do backups and restores sequentially, i.e. back up one account, wait for it to complete, then back up another account, and so on. Likewise for restores: restore an account, wait for it to finish, then restore the next one. I'm doing all of this from the command line.
Now, from my understanding this is possible:
https://help.directadmin.com/item.php?id=198
echo "action=backup&append...blahblah" >>/usr/local/directadmin/data/task.queue
/usr/local/directadmin/dataskq d200
for each user and for each restore... I get that.
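Assembled into a script, that sequential loop might look like the sketch below. To be clear, this is just my reading of the help item: the actual task-line parameters (the "append..." part) are elided here, so the real string has to come from help.directadmin.com item 198, and the usernames are placeholders.

```shell
#!/bin/sh
# Sketch: back up users one at a time via the task.queue mechanism.
# DATASKQ and QUEUE default to the standard DirectAdmin paths, but can be
# overridden for a dry run on a box without DirectAdmin installed.
DATASKQ="${DATASKQ:-/usr/local/directadmin/dataskq}"
QUEUE="${QUEUE:-/usr/local/directadmin/data/task.queue}"

backup_user() {
    # Placeholder task line -- substitute the full backup string from
    # help.directadmin.com item 198 for the elided "append..." parameters.
    printf 'action=backup&append...&user=%s\n' "$1" >> "$QUEUE"
    # dataskq processes the queue now and blocks until the task completes,
    # which is what gives the sequential, wait-for-completion behaviour.
    "$DATASKQ" d200
}

# Only run the loop when dataskq is actually present on this box.
if [ -x "$DATASKQ" ]; then
    for user in alice bob carol; do   # placeholder usernames
        backup_user "$user"
    done
fi
```

The same shape works for restores by swapping in the restore task line for each user.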
I suppose my question is: is there anything inherently wrong with running /usr/local/directadmin/dataskq like this, over and over again? And is there anything wrong with repeatedly appending backup and restore commands to /usr/local/directadmin/data/task.queue?
Would a better solution - and something that might have usefulness in other areas - be to have a secondary task.queue, or a series of task.queues, that don't necessarily run automatically? Or, in the simplest form, to specify a path to a task.queue directly to dataskq? Something like:
echo "action=backup&append...blahblah" >/root/mytask.queue
/usr/local/directadmin/dataskq --taskqueue=/root/mytask.queue d200
This way you are not interfering with the regular tasks that get added to /usr/local/directadmin/data/task.queue, and you're not interfering with the every-minute execution of those tasks, but you're still able to execute certain tasks on a more on-demand basis. It shouldn't require a lot of extra work - just allowing a task.queue-specific parameter to be passed to dataskq, and reading tasks from that file if specified.
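If such a flag existed (to be clear, --taskqueue is my suggestion above, not a real dataskq option today), the per-user loop could bypass the shared queue entirely. A sketch, again with placeholder usernames and the task parameters still elided:

```shell
#!/bin/sh
# Sketch of the proposed workflow, assuming the hypothetical --taskqueue
# flag were added to dataskq. Not runnable against a real server as-is.
DATASKQ="${DATASKQ:-/usr/local/directadmin/dataskq}"
MYQUEUE="${MYQUEUE:-/root/mytask.queue}"

run_one() {
    # Overwrite (>) rather than append: the private queue holds exactly
    # one task at a time, and the cron-driven task.queue is never touched.
    printf 'action=backup&append...&user=%s\n' "$1" > "$MYQUEUE"
    "$DATASKQ" --taskqueue="$MYQUEUE" d200
}

if [ -x "$DATASKQ" ]; then
    for user in alice bob; do   # placeholder usernames
        run_one "$user"
    done
fi
```

Because the private file is written and consumed only by this script, the per-minute cron run of dataskq never sees these tasks, which is exactly what removes the race described below.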
Putting everything in /usr/local/directadmin/data/task.queue like this presents a potential race condition: a backup/restore task is added to /usr/local/directadmin/data/task.queue, but the dataskq cron job picks the task up before my own execution of /usr/local/directadmin/dataskq d200 does. That's why I think being able to specify an alternative task.queue file - whether user-specified or hardcoded - would be a benefit.
Or does this functionality already exist?
Just a suggestion.
On a somewhat related note, I see different ways of running dataskq from the command line. Sometimes I see:
/usr/local/directadmin/dataskq d200
other times I see
/usr/local/directadmin/dataskq d800
I don't think it makes any difference - the 200 and 800 are just debug levels. At least in my particular case, the debugging output isn't all that important; what matters is being able to run a process and wait for it to complete before proceeding to the next step.