Logrotate can't rename old logs: read-only file system

open4biz
Today I noticed the following problem in my error logs:

Code:
Aug 28 00:00:02 hostname logrotate[2016624]: error: error renaming /usr/local/php74/var/log/php-fpm.log.30.gz to /usr/local/php74/var/log/php-fpm.log.31.gz: Read-only file system
Aug 28 00:00:02 hostname logrotate[2016624]: error: error renaming /usr/local/php81/var/log/php-fpm.log.30.gz to /usr/local/php81/var/log/php-fpm.log.31.gz: Read-only file system
Aug 28 00:00:02 hostname logrotate[2016624]: error: error renaming /usr/local/php82/var/log/php-fpm.log.30.gz to /usr/local/php82/var/log/php-fpm.log.31.gz: Read-only file system

I checked the directory's permissions:

drwxr-xr-x 2 root root 4096 Jan 24 2023 log

And the only file in the directory:

-rw------- 1 root root 236303 Aug 29 18:06 php-fpm.log
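If it helps anyone checking the same thing, namei (from util-linux) can walk the permissions of every component of the path in one go:

Code:
namei -l /usr/local/php74/var/log/php-fpm.log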

I looked around the 'net a bit and found this but wasn't sure if it's a good solution for a DirectAdmin setup.

How might I fix the problem so logrotate can rotate the log files properly over time?

Cheers

Update: I also found this Debian 11 bug report and that's the OS I'm currently using on the server.
 
I ended up editing the logrotate.service:

nano /lib/systemd/system/logrotate.service

And adding a few lines at the bottom:

Code:
ReadWritePaths=/usr/local/php56/var/log/
ReadWritePaths=/usr/local/php74/var/log/
ReadWritePaths=/usr/local/php81/var/log/
ReadWritePaths=/usr/local/php82/var/log/

Then reloading systemd and starting the service:

systemctl daemon-reload && systemctl start logrotate

And checking its status to make sure it didn't error out:

systemctl status logrotate

If there's a better way to do it I'm all ears!
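One thing I've been wondering about since: editing /lib/systemd/system/logrotate.service directly means a future logrotate package update could overwrite the change. If I understand systemd drop-ins correctly, the same lines could live in an override file instead:

Code:
systemctl edit logrotate.service

and then in the editor that opens:

Code:
[Service]
ReadWritePaths=/usr/local/php56/var/log/
ReadWritePaths=/usr/local/php74/var/log/
ReadWritePaths=/usr/local/php81/var/log/
ReadWritePaths=/usr/local/php82/var/log/

That should end up in /etc/systemd/system/logrotate.service.d/override.conf and survive package upgrades.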
 
Update: it looks like the logrotate service is back on track as I now see rotated files in each of the directories mentioned above:

Code:
-rw------- 1 root root    112 Aug 30 00:00 php-fpm.log
-rw------- 1 root root 479031 Aug 29 23:11 php-fpm.log.1
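A quick way to double-check all the PHP versions at once, along with the unit's recent output:

Code:
ls -l /usr/local/php*/var/log/php-fpm.log*
journalctl -u logrotate.service --since yesterday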

But I would still like to hear if someone has a better approach than adding each directory explicitly to the logrotate.service file:

Code:
ReadWritePaths=/usr/local/php56/var/log/
ReadWritePaths=/usr/local/php74/var/log/
ReadWritePaths=/usr/local/php81/var/log/
ReadWritePaths=/usr/local/php82/var/log/

For instance, it's going to start failing again if I add a new version of PHP to the server and forget to update logrotate. It would be nice to make it more future-proof if possible.
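One idea I might try (untested sketch; it assumes every PHP install keeps its logs under /usr/local/php*/var/log like mine do) is to regenerate a drop-in from whatever directories actually exist, e.g. as a post-build step:

Code:
# Rebuild the systemd drop-in from the PHP log directories present right now
mkdir -p /etc/systemd/system/logrotate.service.d
{
  echo "[Service]"
  for d in /usr/local/php*/var/log; do
    [ -d "$d" ] && echo "ReadWritePaths=$d"
  done
} > /etc/systemd/system/logrotate.service.d/php-logs.conf
systemctl daemon-reload

That way a new PHP version would get picked up automatically the next time the script runs.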

Cheers
 
Read-only file system
This usually means the drive has some problem, so the filesystem put itself into read-only mode to prevent content from being changed, mostly to protect the filesystem itself.
So your solution is in fact not the way to fix this issue.

You might also encounter more issues of this kind.

Maybe it's good to run smartctl on your disk to see if there is a problem with it. Or run fsck.
sudo fsck -Af -M
or
sudo fsck.ext4 -f /dev/sda1
replace /dev/sda1 with the correct disk.

It's possible to mount the filesystem back in read-write mode. It might just have been a hiccup or something after a reboot, with no real problem present. But I would check with smartctl at least.
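For example, something along these lines (assuming the disk shows up as /dev/sda; adjust for your device):

Code:
smartctl -H /dev/sda          # overall SMART health verdict
smartctl -a /dev/sda          # full attributes and device error log
dmesg | grep -iE 'i/o error|read-only|remount'

If the kernel ever remounted the filesystem read-only because of errors, it will usually show up in that dmesg output.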

Check what partition your logs are on. Probably the / partition. Check with df -h or look in /etc/fstab to see which partition it is.
Suppose it's /dev/sda1, then you could probably remount it like:
mount -o remount,rw /dev/sda1
I'm not 200% sure, so use this at your own risk.
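You can also see directly whether the filesystem is really mounted read-only at the moment (findmnt is part of util-linux):

Code:
findmnt -no SOURCE,OPTIONS /

If the options start with rw, the filesystem itself is fine and the read-only view is coming from somewhere else, such as the systemd sandboxing from the bug report you linked.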

However... after the check, if the problem persists, try rebuilding php via custombuild.
This month somebody else had the same issue, and after running the fsck command (which fixed some minor things) he rebuilt php via custombuild and all was working again.

Code:
cd /usr/local/directadmin/custombuild
./build update
./build php n

At least you have some better options now to look at and try.
 
Thank you Richard.

My VPS provider attaches paravirtualized drives to each virtual machine, and smartctl is unable to pick up any details about the drive(s), so I checked with the provider. They say there's no sign of bad sectors or other S.M.A.R.T. indicators that would lead them to think there's a dying SSD in the array. That line of inquiry seems to be hitting a dead end for me.

As I reviewed that Debian 11 bug report, I followed the link to the AskUbuntu discussion, and I'm leaning more toward the systemd protection being the source of the problem, since the symptom and the prescription there match what I saw and what fixed it. That said, I'm going to pay more attention to my uptime monitor and logs in case it does turn out to be a hard drive issue. I don't mind circling back to tell you you were right if that turns out to be the case.
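In case anyone else wants to confirm the same theory on their own box, these are the checks I believe apply (standard systemd tooling; I'm assuming Debian 11's systemd is new enough for systemd-analyze security):

Code:
systemctl show logrotate.service -p ProtectSystem -p ReadWritePaths
systemd-analyze security logrotate.service

If ProtectSystem comes back as full (or strict), then /usr is read-only inside the service's sandbox, which would produce exactly this error even though the filesystem itself is mounted read-write.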

Cheers
 