Disks periodically unmounted on CloudLinux

aitorserra

Verified User
Joined
Jul 4, 2019
Messages
31
Hello,

Roughly once a week or once a month, one of my partitions gets unmounted. I am on a Hetzner cloud server running CloudLinux and DirectAdmin. DirectAdmin and CloudLinux support told me this could be a server problem; the data center told me everything is OK.

This is the state of the partitions when it fails:

[root@ns81 home3-81]# lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda         8:0    0 228.9G  0 disk
├─sda1      8:1    0 228.8G  0 part /
├─sda14     8:14   0     1M  0 part
└─sda15     8:15   0    64M  0 part /boot/efi
sdb         8:16   0   500G  0 disk
sdc         8:32   0   500G  0 disk /home3-81
sdd         8:48   0   400G  0 disk /home4-81
sr0        11:0    1  1024M  0 rom

Missing partition: /home2-81

After running the mount command and remounting CageFS:

[root@ns81 home2-81]# lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda         8:0    0 228.9G  0 disk
├─sda1      8:1    0 228.8G  0 part /
├─sda14     8:14   0     1M  0 part
└─sda15     8:15   0    64M  0 part /boot/efi
sdb         8:16   0   500G  0 disk /home2-81
sdc         8:32   0   500G  0 disk /home3-81
sdd         8:48   0   400G  0 disk /home4-81
sr0        11:0    1  1024M  0 rom

Everything works normally again.
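
For reference, the recovery is roughly the following; mount picks up the device and options from fstab, and cagefsctl is CloudLinux's CageFS tool (I'm assuming --remount-all here, which rebuilds the per-user mounts):

mount /home2-81          # device and options come from the /etc/fstab entry
cagefsctl --remount-all  # rebuild CageFS bind mounts for all users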

Current /etc/fstab:

UUID=252a70f2-2df2-4221-b5b9-3423197391e4 / ext4 defaults,usrquota,grpquota 1 1
UUID=B824-9A11 /boot/efi vfat defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
/dev/disk/by-id/scsi-0HC_Volume_7036097 /home3-81 ext4 discard,nofail,defaults,usrquota,grpquota 0 0
/dev/disk/by-id/scsi-0HC_Volume_7036083 /home2-81 ext4 discard,nofail,defaults,usrquota,grpquota 0 0
/dev/disk/by-id/scsi-0HC_Volume_7044764 /home4-81 ext4 discard,nofail,defaults,usrquota,grpquota 0 0


What else could I check?

I appreciate any ideas or suggestions, thank you.
 
Do you think mounting by UUID would be better? Why? I've been checking the logs since the problem started, but I didn't see anything similar to "unmount".
 
Previous partitions mounted by UUID didn't have problems, as far as I can see; something like the commands below would switch an entry over (run as root, and use the UUID that blkid actually reports):
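
# Find the filesystem UUID of the volume behind /home2-81
blkid /dev/disk/by-id/scsi-0HC_Volume_7036083

# Then, in /etc/fstab, replace the by-id path with the reported UUID:
# UUID=<uuid-from-blkid> /home2-81 ext4 discard,nofail,defaults,usrquota,grpquota 0 0

# Test the edited entry without rebooting
umount /home2-81 && mount /home2-81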
Also, if only the second volume goes offline, maybe replace it with another one (which may land on another, more stable node).
Or add monitoring that remounts it OR notifies you when the problem occurs (see the sketch at the end of this post). Maybe this partition exceeds its IOPS limit or something and gets frozen by Hetzner?
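
A minimal monitoring sketch, assuming your mountpoints and a local MTA for mail; adjust paths and the notification command to your setup:

#!/bin/bash
# /usr/local/sbin/check-home-mounts.sh - run from cron, e.g. */5 * * * *
# Remounts any missing /homeN-81 volume and emails an alert to root.
for mp in /home2-81 /home3-81 /home4-81; do
    if ! mountpoint -q "$mp"; then
        mount "$mp"                      # the fstab entry does the rest
        cagefsctl --remount-all          # rebuild CageFS bind mounts
        echo "$mp was unmounted on $(hostname) at $(date)" \
            | mail -s "Mount alert: $mp" root
    fi
done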
 
At the moment, Hetzner keeps saying there is no problem with the node. DirectAdmin support has also confirmed to me that it is a problem with the virtual disks.
 
Any errors in /var/log/messages? Something like this should surface disk-related events (the patterns are just a starting point):
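
# Search the system log for SCSI/disk events around the failure window
grep -Ei 'sd[b-d]|scsi|reset|i/o error' /var/log/messages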

I found this, which may be the cause of the failure. Let's see what Hetzner says:

Oct 11 13:34:05 ns81 kernel: sd 0:0:0:0: Power-on or device reset occurred
 