FreeBSD 11.0 64-bit ALPHA release

Partitioning is good if you have many different raid arrays. If you use one, there's no reason to restrict yourself :)
I did quite a bit of Googling and making it all one partition is certainly what the vast majority are doing these days. I like the idea of not having to crystal-ball partition sizes, and for a desktop, file server, or database server on bare metal I would also have done it that way. I welcome challenges to my reasoning for partitioning because I'm obviously not in the majority, and recent for me is FreeBSD 7.2, LOL! My reasons, from a VMware perspective, for my partitioning decisions were these:

1. /swap I just made it 4GB. If processes are actively swapping due to lack of memory and causing a massive number of memory page faults, I've never gotten FreeBSD, at least with 7.2, to use over 2GB, even under unrealistic test conditions when we ran the SLC SSDs. It just plain wouldn't do it. With ESXi, adding memory amounts to clicking a spin box.

2. /tmp I didn't go with tmpfs for memory reasons. I made it 1GB and purge on reboot.
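For reference, a dedicated /tmp purged at boot needs nothing beyond fstab plus the stock rc.conf knob. A sketch only; the device name is an assumption, so check your own layout first:

```
# /etc/fstab -- hypothetical device name for the 1GB /tmp partition
/dev/ada0s1d	/tmp	ufs	rw	2	2

# /etc/rc.conf -- empty /tmp at every boot
clear_tmp_enable="YES"
```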

3. /var/tmp I know people often symlink it to /tmp or use tmpfs, but in FreeBSD /var/tmp is intended to survive reboots. I made it 1GB, as in the past.

4. /var with DA on FreeBSD doesn't require much of a crystal ball because mail folders and user databases live under /home. Logs are the only thing that could get out of control, but I need to set up processes to trim them back in any scenario, and the fixed size makes sure I do. Other than that, there are the root Apache, a few web apps, a few configs, the DNS database, ClamAV, pkg, etc., but nothing besides logs that grows much, that I know of. dumpdev is set to auto (the default), so crash dumps will only capture kernel memory.

5. /usr requires a little more of a crystal ball, and for that reason I set up /usr and /home last so that I can expand them if necessary. I've never had to expand /usr, even when set at 10GB. The worst case I've seen was 67% used, so I went with 15GB. Separating /usr from /home means the apps can keep working if /home fills up.

6. /home, defined last, has the remainder of the virtual drive. /home is the only volume I ever find running on the edge, and twice going over the edge. I've only had to expand it one time in the past. Users seldom use their quota, so for efficient use of space, it is overbooked. It also means that a few pack rats can cause a problem. The only time I had to do it, I simply went into the VM's settings, clicked the spin box to extend the virtual drive to the size needed, made some careful manual edits in FreeBSD, and expanded /home.
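For anyone curious, the "careful manual edits" after growing the virtual disk look roughly like this on an MBR layout with /home as the last partition. This is a sketch only: the device name, slice index, and partition letter/index are assumptions, so confirm the real layout with `gpart show` before resizing anything.

```
# Inside the FreeBSD guest, after growing the virtual disk in the VM's settings:
gpart show ada0            # confirm the actual layout and indices first
gpart resize -i 1 ada0     # grow the MBR slice into the new free space
gpart resize -i 6 ada0s1   # grow the last bsdlabel partition (assumed here: f = /home)
growfs /dev/ada0s1f        # grow the UFS filesystem to fill the partition
```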

7. Unlike bare metal, with ESXi and its normal thin provisioning, the sizes of volumes are simply maximum sizes. The only physical space consumed is what is actually used, or has been used. Thus, we overbook ESXi's physical disk space with VMs as well. With a minor amount of work, I can get the no-longer-used space back too, but I only tend to do that after some big change. If I ever had a problem with a partition in the middle of the pack, a few mouse clicks would let me make a new virtual hard drive and move it there, either temporarily or permanently; the only cost of another virtual hard drive is two more files in my backups. With the need to expand a partition being very infrequent, and with /home at the end, I have never had to add another drive. The only place I currently do that is where I have a multi-TB backup volume that I do not want backed up. While ESXi cost us almost exactly 25% in CPU cores over bare metal under max loads with 7.2, UFS on VMFS gives way better disk performance than UFS on bare metal (my disbelief and curiosity caused me to waste considerable time attempting to prove otherwise). I went with MBR because many VM tools do not work with GPT; according to other VMware users, MBR is certainly not slower than GPT, and the ability to do V2P with GPT, and have it boot afterwards, is not a given.

Using ESXi and the normal thin provisioning does impact the reasoning for running everything off /, but maybe there are other advantages that I am not aware of. One could argue that I could have been extravagant with space, it would have made no real impact, and I would never run out. However, size limits are also an advantage in some cases, as with /tmp, /var/tmp, and arguably /var. In addition, during some VM manipulation operations, the VMs are expanded to their full size. A common time that happens is when shrinking VMs to reclaim previously used space, such as after a major upgrade: the unused space is filled with zeros, expanding the VM to its maximum size, before it is copied back to thin and becomes much smaller. Incidental factors that influenced my decisions: the old VM was 80G, and /home was ~95% full at 57G. I settled on 128G for the VHD size (on a 2TB VMFS partition), I guess because they make physical hard drives that size, and this partition layout results in an even 100G /home. I'm certainly open to better-reasoned partition strategies in the context of a thin-provisioned environment, and I already feel a little stupid about my decisions for /usr and /home. Since they are VMs, I can throw one away and make another at will, with no need for additional hardware or impact on production VMs.
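The zero-fill step described above can be done from inside the guest, with the host-side hole punch shrinking the thin disk afterwards. A sketch under assumptions: the datastore path and VM name are hypothetical, the dd will briefly fill the filesystem, and `vmkfstools -K` requires the VM to be powered off.

```
# Inside the FreeBSD guest: overwrite free space with zeros, then delete the file
dd if=/dev/zero of=/home/zerofill bs=1m
rm /home/zerofill

# On the ESXi host (hypothetical path; VM powered off): punch out the zeroed blocks
vmkfstools -K /vmfs/volumes/datastore1/myvm/myvm.vmdk
```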
 
Oh and one more thing - if you wish quotas to work properly, you must build a kernel with quota support.

I don't like custom kernels (because of the hassle when updating) and I stayed on GENERIC.
 
Oh and one more thing - if you wish quotas to work properly, you must build a kernel with quota support.
Did they just start adding quota to Generic? I know I had to add it in 7.2. This is from FreeBSD 11.1 GENERIC, and it shows it included.
Code:
# $FreeBSD: releng/11.1/sys/amd64/conf/GENERIC 318763 2017-05-24 00:00:55Z jhb $

cpu		HAMMER
ident		GENERIC

makeoptions	DEBUG=-g		# Build kernel with gdb(1) debug symbols
makeoptions	WITH_CTF=1		# Run ctfconvert(1) for DTrace support

options 	SCHED_ULE		# ULE scheduler
options 	PREEMPTION		# Enable kernel thread preemption
options 	INET			# InterNETworking
options 	INET6			# IPv6 communications protocols
options 	IPSEC			# IP (v4/v6) security
options 	TCP_OFFLOAD		# TCP offload
options 	SCTP			# Stream Control Transmission Protocol
options 	FFS			# Berkeley Fast Filesystem
options 	SOFTUPDATES		# Enable FFS soft updates support
options 	UFS_ACL			# Support for access control lists
options 	UFS_DIRHASH		# Improve performance on big directories
options 	UFS_GJOURNAL		# Enable gjournal-based UFS journaling
options 	QUOTA			# Enable disk quotas for UFS
options 	MD_ROOT			# MD is a potential root device
options 	NFSCL			# Network Filesystem Client
options 	NFSD			# Network Filesystem Server
options 	NFSLOCKD		# Network Lock Manager
options 	NFS_ROOT		# NFS usable as /, requires NFSCL
options 	MSDOSFS			# MSDOS Filesystem
options 	CD9660			# ISO 9660 Filesystem
...
sysctl -a shows kern.features.ufs_quota: 1
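With QUOTA in the kernel, actually turning quotas on is an rc.conf and fstab matter. A sketch; the device name and filesystem shown are assumptions:

```
# /etc/rc.conf
quota_enable="YES"
check_quotas="YES"

# /etc/fstab -- add userquota (and/or groupquota) to the mount options
/dev/ada0s1g	/home	ufs	rw,userquota	2	2
```

After a reboot (or `quotacheck -a` followed by `quotaon -a`), per-user limits are set with edquota(8).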
 
You are right. Quotas are in the GENERIC kernel in FreeBSD 11. I didn't notice, so I replicated my old memories... probably from version 9 in my case :)
 
Another tip about the swap - add vm.defer_swapspace_pageouts=1 in sysctl. More info here: http://forum.directadmin.com/showthread.php?t=55140
I extensively document my setups and make a checklist with why things are done. When I went to see how it works, I ran into this thread on forums.freebsd.org:

Question: Any idea why sysctl vm.defer_swapspace_pageouts is removed from FreeBSD 11.1 (and maybe 11.0 too)? How to configure the server to use mostly the RAM and swap only as the last resort?

Answer: Sir Dice responded with: https://reviews.freebsd.org/D8302

When I checked out the reference it states:
Introduce (should that be "Introducing"?) a new page queue, PQ_LAUNDRY, for unreferenced, i.e., inactive, dirty pages and a new thread for laundering the pages on this queue. In essence, this change decouples page laundering and reclamation. For example, one effect of this decoupling is that the legacy page daemon thread(s) will no longer block because laundering anonymous pages consumes all of the available pbufs for writing to swap. Instead, they are able to continue with page reclamation. This eliminates the need for dubious low-memory deadlock avoidance hacks, specifically, the vm_page_try_to_cache() calls in I/O completion handlers. ...
I'm assuming from this that vm.defer_swapspace_pageouts functionality has recently been replaced by PQ_LAUNDRY?

Thanks!
PS: This could mean no more CUSTOM kernels, which would be very cool!
 
You are right about that too.

sysctl: unknown oid 'vm.defer_swapspace_pageouts'

See, I was at FreeBSD 10 and recently upgraded to 11 and then 11.1 with freebsd-update. So lots of old settings are still there :)
 
- I did the install of FreeBSD 11.1 and everything worked fine.
- I backed up the old site with Admin Backup/Transfer.
- I did the DirectAdmin install using BIND912 like you, and went with the default PHP 5.6 and PHP-FPM, and everything WORKED fine, but when I went into DNS Administration, it was a mess. It showed hundreds of red entries like 100.51.198.in-addr.arpa with No under local data, and No under local mail. If you click on one, it displays "error reading from db file".
- I did the restore from Admin Backup/Transfer and everything restored but
a. The DNS Administration didn't fix all of the bad entries, but when I get down to the real domains, they are green, show Yes in both columns, and everything is like it should be.
b. Apache would no longer start with errors:
Code:
AH00526: Syntax error on line 23 of /usr/local/directadmin/data/users/ineedah/httpd.conf:
Invalid command 'php_admin_flag', perhaps misspelled or defined by a module not included in the server configuration
I then renamed your options.conf to be options.conf.moved, and ran:
Code:
./build update
./build rewrite_confs
but it didn't help. Posted topic in Apache area.

Thanks!
 
I am unsure about the DNS. Will look to it later.

Regarding "php_admin_flag" - make sure that you have htscanner enabled in options.conf of DA. After adding it, recompile apache and php.

Htscanner is a module which enables .htaccess to manage php flags like it was in mod_php.
 
I am unsure about the DNS. Will look to it later. Regarding "php_admin_flag" - make sure that you have htscanner enabled in options.conf of DA. After adding it, recompile apache and php. Htscanner is a module which enables .htaccess to manage php flags like it was in mod_php.
Htscanner was not set so I set it and recompiled apache and php. I get the same error:
Code:
Restarting apache.
Stopping httpd:         [ FAILED ]
Starting httpd:         [ OK ]
AH00526: Syntax error on line 23 of /usr/local/directadmin/data/users/ineedah/httpd.conf:
Invalid command 'php_admin_flag', perhaps misspelled or defined by a module not included in the server configuration
Thanks for the Htscanner. That would have driven me nuts next.
 
Please paste "line 23 of /usr/local/directadmin/data/users/ineedah/httpd.conf" and probably few lines before/after, so we can see what's going on there.
 
Please paste "line 23 of /usr/local/directadmin/data/users/ineedah/httpd.conf" and probably few lines before/after, so we can see what's going on there.

Code:
<VirtualHost 123.45.67.890:80 >
	<Directory /home/ineedah/domains/domain.com/public_html>
		Options +Includes -Indexes
		php_admin_flag engine ON
		<IfModule !mod_php6.c>
			php_admin_flag safe_mode OFF
		</IfModule>
		php_admin_value sendmail_path '/usr/sbin/sendmail -t -i -f [email protected]'
	</Directory>
	ServerName www.domain.com
	ServerAlias www.domain.com domain.com
	ServerAdmin webmaster@domain.com
	DocumentRoot /home/ineedah/domains/domain.com/public_html
	ScriptAlias /cgi-bin/ /home/ineedah/domains/domain.com/public_html/cgi-bin/
	UseCanonicalName OFF
	<IfModule !mod_ruid2.c>
		SuexecUserGroup ineedah ineedah
	</IfModule>
	CustomLog /var/log/httpd/domains/domain.com.bytes bytes
	CustomLog /var/log/httpd/domains/domain.com.log combined
	ErrorLog /var/log/httpd/domains/domain.com.error.log
	<Directory /home/ineedah/domains/domain.com/public_html>
		<FilesMatch "\.(inc|php|phtml|phps|php56)$">
			AddHandler "proxy:unix:/usr/local/php56/sockets/ineedah.sock|fcgi://localhost" .inc .php .phtml .php56
		</FilesMatch>
	</Directory>
</VirtualHost>

It repeats for 443. After that there is another identical pair.

Thanks!
 
About the only place that can come from is the bottom of user_virtual_host.conf:
Code:
	|*if USER_CLI="1"|
		php_admin_flag engine |PHP|
		php_admin_value sendmail_path '/usr/sbin/sendmail -t -i -f |PHP_EMAIL|'
		|CLI_PHP_MAIL_LOG|
	|*endif|
	|*if OPEN_BASEDIR_AND_CLI="ON"|
		php_admin_value open_basedir |OPEN_BASEDIR_PATH|
	|*endif|

If I edit /usr/local/directadmin/data/users/username/user.conf and do a search and replace of "php_admin" -> "# php_admin", Apache will start and run. If I ./build rewrite_confs, it will be right back where it was.
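The search-and-replace can be scripted. Here is a harmless demonstration on a scratch copy; the sample file and its contents are hypothetical, and the real target path is whatever httpd.conf DA generated (on FreeBSD, in-place editing would be `sed -i ''`):

```shell
# Build a scratch file that mimics the offending directives
cat > /tmp/httpd.conf.sample <<'EOF'
php_admin_flag engine ON
php_admin_value sendmail_path '/usr/sbin/sendmail -t -i'
EOF

# Comment out every php_admin_* directive and print the result
sed 's/php_admin/# php_admin/' /tmp/httpd.conf.sample
```

As noted above, this only buys time: the next ./build rewrite_confs regenerates the file and reverts the change.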
 
That part:

php_admin_flag engine ON
<IfModule !mod_php6.c>
php_admin_flag safe_mode OFF
</IfModule>
php_admin_value sendmail_path '/usr/sbin/sendmail -t -i -f [email protected]'

is completely useless.

Let me explain:

1. PHP engine is already on.

2. PHP safe_mode is deprecated in 5.3 and removed since 5.4. It can't be turned off, because it does not exist anymore.

3. Directadmin uses Exim, not Sendmail.

Now I guess you must log in to the User level of that account and check its "custom httpd configuration" (it's called something like that as an option in the control panel). It looks like it's coming from there. Just delete it properly from there and rewrite_confs will no longer revert your changes.

Moreover, it seems that the website is very old. Even if you get it to work, you must monitor its error log (/var/log/httpd/domains/<DOMAIN>.error.log). You may need to fix some code to make it compatible with 5.6. The biggest differences are the mysql_* functions (deprecated since PHP 5.5, removed in PHP 7) and the ereg_*/eregi_* functions (must be replaced by their preg_* equivalents).
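A quick way to gauge how much porting work is ahead is to grep the docroot for the legacy mysql_* and ereg/eregi calls. A self-contained demonstration; the sample file, its contents, and the scan path are all hypothetical:

```shell
# Create a hypothetical legacy PHP file to scan
mkdir -p /tmp/scan_demo
cat > /tmp/scan_demo/old.php <<'EOF'
<?php
$r = mysql_query("SELECT 1");
if (ereg("^a", $s)) { echo "match"; }
EOF

# Flag legacy mysql_* and ereg/eregi calls; both lines in old.php match
grep -rnE 'mysql_[a-z]+\(|eregi?\(' /tmp/scan_demo
```

On a real server the path would be something like /home/&lt;user&gt;/domains/&lt;domain&gt;/public_html.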
 
I'm going to roll back to the fresh install of FreeBSD and go forward again. I have bind issues anyway.

BTW, I dropped that code out of the template, did a ./build rewrite_confs, and got the same errors back, so it must not be coming from user_virtual_host.conf.
 
I meant that what you see up there should be a custom.
Got it! There isn't a custom. I've got worse problems. I rolled back and did a reinstall of DA. I still have the DNS issue, so I'll do an OS reinstall with more granular snapshots along the way. My DNS management inside DA looks like this:
View attachment 2361
 
This is not a problem.

Look at /etc/namedb/named.conf

These zones are defined inside. For example, parts of what you are seeing are "Shared Address Space (RFC 6598)", "IETF protocol assignments (RFCs 5735 and 5736)", etc.

They all point to /usr/local/etc/namedb/master/empty.db which is, as the name says, an empty zone file.
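For illustration, the stock named.conf ships a long list of stanzas of this shape; the zone below is one example matching the reverse entries shown earlier in the thread, and the exact formatting may differ in your copy:

```
zone "100.51.198.in-addr.arpa"	{ type master; file "/usr/local/etc/namedb/master/empty.db"; };
```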
 