System Quotas not working

KevinF

Verified User
Joined
Mar 22, 2012
Messages
12
Greetings,


I have recently moved DirectAdmin from one VPS to another, and since then most of the system quotas have stopped working.

I have followed the instructions at http://help.directadmin.com/item.php?id=42. I have added quota_partition=/home to my /usr/local/directadmin/conf/directadmin.conf file. My /etc/fstab looks like this:
Code:
#
# /etc/fstab
# Created by anaconda on Sat May 30 19:55:20 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_149--210--219--198-root			/			xfs	defaults,uquota,gquota	0 0
UUID=xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx /boot                   xfs     defaults        0 0
/dev/mapper/centos_149--210--219--198-home		/home			xfs	rw,defaults,uquota,gquota,usrquota,grpquota	0 0
/dev/mapper/centos_149--210--219--198-swap swap                    swap    defaults        0 0
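A note on the /home line above: for XFS mounts, uquota is a synonym for usrquota and gquota for grpquota, so that line enables each quota type twice. The duplication is harmless, but a minimal equivalent would be:

```
/dev/mapper/centos_149--210--219--198-home  /home  xfs  defaults,uquota,gquota  0 0
```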
But I still get

Code:
[root@vps1 etc]# /usr/sbin/repquota /home
repquota: Mountpoint (or device) /home not found or has no quota enabled.
repquota: Not all specified mountpoints are using quota.
as a response.
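When fstab and repquota disagree like this, one thing worth checking is which options the kernel actually mounted with, since quota options only take effect after a remount (or a reboot, for the root filesystem). A quick check, assuming the quota partition is /home:

```shell
# Show the quota-related mount options the kernel actually applied to /home.
# /etc/fstab records intent; /proc/mounts records reality.
MNT=/home
awk -v m="$MNT" '$2 == m { print $4 }' /proc/mounts | tr ',' '\n' | grep quota \
    || echo "no quota options active on $MNT"
```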

To round things off, here is the rest of the possibly relevant information:
Code:
[root@vps1 etc]# /usr/local/directadmin/directadmin c | grep quota_partition
quota_partition=/home
ext_quota_partitions=
[root@vps1 etc]# repquota `/usr/local/directadmin/directadmin c | grep quota_partition= | cut -d= -f2`
repquota: Mountpoint (or device) /home not found or has no quota enabled.
repquota: Not all specified mountpoints are using quota.
[root@vps1 etc]# ls -lad /home/tmp
drwxrwxrwt. 2 root root 6 Jun  7 12:21 /home/tmp
[root@vps1 etc]# ls -la /home/tmp
total 4
drwxrwxrwt.  2 root root    6 Jun  7 12:21 .
drwx--x--x. 26 root root 4096 Jun  7 12:10 ..
[root@vps1 etc]# df -h
Filesystem                                  Size  Used Avail Use% Mounted on
/dev/mapper/centos_vps-root   50G  3.6G   47G   8% /
devtmpfs                                    1.9G     0  1.9G   0% /dev
tmpfs                                       1.9G   24K  1.9G   1% /dev/shm
tmpfs                                       1.9G  137M  1.8G   8% /run
tmpfs                                       1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/centos_vps-home   96G   30G   67G  31% /home
/dev/vda1                                   497M  144M  353M  29% /boot
Thank you in advance for reading this,
Kevin
 

KevinF
Hello Alex,

Thanks for your reply. This already seems to make a difference when I log in as a user and click the Update button on the stats page. However, in my reseller or admin view the totals do not seem to update. I tried running /sbin/quotaoff -a; /sbin/quotacheck -avugm; /sbin/quotaon -a; to see if that would update everything system-wide, but I still get the following error:

Code:
[root@vps1 directadmin]# /sbin/quotaoff -a; /sbin/quotacheck -avugm; /sbin/quotaon -a;
quotacheck: Skipping /dev/mapper/centos_vps-root [/]
quotacheck: Skipping /dev/mapper/centos_vps-home [/home]
quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.
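For context: quotacheck and quotaon manage ext2/3/4-style quotas, which live in quota files on disk. XFS keeps quota accounting inside the filesystem itself, so those tools skip XFS mounts by design. The XFS-native equivalents would be something along these lines (a sketch, guarded so it is a safe no-op where xfsprogs is absent or /home has no XFS quotas):

```shell
# XFS quota state and usage are queried through xfs_quota, not quotacheck.
if command -v xfs_quota >/dev/null 2>&1; then
    xfs_quota -x -c 'state' /home     2>/dev/null || echo "no quota state for /home"
    xfs_quota -x -c 'report -h' /home 2>/dev/null || echo "no quota report for /home"
else
    echo "xfs_quota not installed"
fi
```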
 

zEitEr

Super Moderator
Joined
Apr 11, 2005
Messages
14,255
Location
GMT +7.00
Your partitions are XFS. Did you enable XFS quotas in directadmin.conf? If not, please follow the official guide.
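Concretely, the XFS-related keys in /usr/local/directadmin/conf/directadmin.conf would look like this (a sketch; the xfs_quota binary path may differ per distribution):

```
use_xfs_quota=1
xfs_on_domains=1
xfs_quota=/usr/sbin/xfs_quota
```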
 

KevinF
I did:

Code:
[root@vps1 directadmin]# ./directadmin c | grep xfs
use_xfs_quota=1
xfs_on_domains=1
xfs_quota=/usr/sbin/xfs_quota
 

zEitEr
Try

Code:
# xfs_quota -x
and then

Code:
xfs_quota> state
What do you see?
 

KevinF
Code:
[root@vps1 directadmin]# xfs_quota -x
xfs_quota> state
User quota state on / (/dev/mapper/centos_vps-root)
  Accounting: ON
  Enforcement: ON
  Inode: #1850569 (6 blocks, 4 extents)
Group quota state on / (/dev/mapper/centos_vps-root)
  Accounting: OFF
  Enforcement: OFF
  Inode: N/A
Project quota state on / (/dev/mapper/centos_vps-root)
  Accounting: ON
  Enforcement: ON
  Inode: N/A
Blocks grace time: [7 days 00:00:30]
Inodes grace time: [7 days 00:00:30]
Realtime Blocks grace time: [7 days 00:00:30]
That is what I get.

I assume I need to change the path from / to /home somewhere? In my directadmin.conf I have already added quota_partition=/home, and if I run the "update" tally while logged in as a user it displays the correct amount. Do I need to set it somewhere else for XFS?
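One subtlety worth noting: depending on the xfsprogs version, a bare state command may report only on the current directory's filesystem (here /), not every XFS mount. Passing the mount point explicitly, or using state -a, is a way to inspect /home as well (a sketch, guarded for systems without xfsprogs):

```shell
if command -v xfs_quota >/dev/null 2>&1; then
    xfs_quota -x -c 'state -a' 2>/dev/null || true            # every XFS filesystem
    xfs_quota -x -c 'state' /home 2>/dev/null || echo "no state for /home"
else
    echo "xfs_quota not installed"
fi
```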
 

zEitEr
OK, try this then:

Code:
xfs_quota -x  -c "enable -gu -v" /dev/mapper/centos_vps-home
and post your results.
 

KevinF
This is what I get:

Code:
[root@vps1 directadmin]# xfs_quota -x  -c "enable -gu -v" /dev/mapper/centos_vps-home
XFS_QUOTAON: File exists
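For what it's worth, "File exists" (EEXIST) from an XFS quota enable typically means the requested quota type is already switched on for that filesystem. This can be confirmed from the kernel's view of the mount:

```shell
# If /home shows usrquota/grpquota here, quotas are already active and the
# EEXIST error above is expected.
grep ' /home ' /proc/mounts || echo "/home is not a separate mount point"
```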
 

KevinF
Code:
[root@vps1 directadmin]# cat /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=1931736k,nr_inodes=482934,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs rw,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_cls 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/mapper/centos_vps-root / xfs rw,relatime,attr2,inode64,usrquota,prjquota 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=34,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
/dev/vda1 /boot xfs rw,relatime,attr2,inode64,noquota 0 0
/dev/mapper/centos_vps-home /home xfs rw,relatime,attr2,inode64,usrquota,grpquota 0 0
The totals do seem to display for individual users (though they only update after explicitly clicking the update button), but for the reseller and admin users the sidebar just shows 0 MB used.
 

zEitEr
From what I've seen, I'd conclude you either need to wait for DirectAdmin to update its cached data, or try running the tally once more.
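A full tally can also be forced through DirectAdmin's task queue rather than waiting for the nightly cron (a sketch; paths assume a default DirectAdmin install, so the commands are guarded):

```shell
DA=/usr/local/directadmin
if [ -d "$DA/data" ]; then
    # Ask DirectAdmin to re-tally all users; dataskq picks this up within a minute.
    echo "action=tally&value=all" >> "$DA/data/task.queue"
    # Or process the queue immediately, with debug output:
    "$DA/dataskq" d
else
    echo "DirectAdmin not found at $DA"
fi
```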
 