Server Administration

This document describes how the current server installation is set up.

Apache Tomcat

Currently, every environment has its own dedicated Apache Tomcat application container. The production instance has a memory limit of 2GB; the others are limited to 512MB. Separate containers make it possible to restart one environment without impacting the others. The following environments are currently set up in /home/tomcat:

  • apache-tomcat-www --> production environment
  • apache-tomcat-nmcdsptest --> the test environment; note that this is not called 'test' but nmcdsptest. The reason is that Grails' built-in test suite uses the test environment and expects an HSQLDB database, while for the test server environment we use PostgreSQL and numerous other tweaks.
  • apache-tomcat-ci --> the continuous integration environment
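
The memory limits mentioned above would typically be applied through the JVM heap options passed to each instance at startup. The variable name and mechanism below are an assumption based on the limits stated above, not the actual configuration on the server:

```shell
# Sketch (assumed mechanism): per-instance heap limits could be set via
# CATALINA_OPTS in each instance's startup environment.
CATALINA_OPTS="-Xmx2048m"    # production (www)
#CATALINA_OPTS="-Xmx512m"    # ci, nmcdsptest and tools
export CATALINA_OPTS
```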

Some other folders can also be seen in /home/tomcat:

  • apache-tomcat-tools --> the tomcat instance running specific tools (like nexus and hudson)
  • apache-tomcat-dbnptest --> a fake tomcat folder, used for building the dbnptest package
  • apache-tomcat-dbnpdemo --> alias to dbnptest, used for building the dbnpdemo package

Tomcat configuration

The four Tomcat installations are set up to use the following port numbers (see /home/tomcat/apache-tomcat-*/conf/server.xml):

  • ci
    • HTTP/1.1 on 8081
    • AJP/1.3 on 8009
    • SHUTDOWN on 8006
  • nmcdsptest
    • HTTP/1.1 on 9080
    • AJP/1.3 on 9009
    • SHUTDOWN on 9005
  • www
    • HTTP/1.1 on 10080
    • AJP/1.3 on 10009
    • SHUTDOWN on 10005
  • tools
    • HTTP/1.1 on 11080
    • AJP/1.3 on 11009
    • SHUTDOWN on 11005
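
A quick way to review these settings is to extract the port attributes from each server.xml. The snippet below is a sketch demonstrated on a small stand-in file; on the server you would point list_ports at /home/tomcat/apache-tomcat-*/conf/server.xml instead:

```shell
# extract all configured port numbers from a server.xml-style file
list_ports() {
    grep -o 'port="[0-9]*"' "$1" | tr -d '"' | sed 's/port=//'
}

# stand-in for a real server.xml (values of the ci instance above)
cat > /tmp/sample-server.xml <<'EOF'
<Server port="8006" shutdown="SHUTDOWN">
  <Connector port="8081" protocol="HTTP/1.1" />
  <Connector port="8009" protocol="AJP/1.3" />
</Server>
EOF

list_ports /tmp/sample-server.xml
```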

Scripts for managing tomcat

In /home/tomcat/scripts a couple of scripts are available for easily managing the Tomcat instances:

  • --> kills a running tomcat instance
  • --> restarts a running tomcat instance; however, is preferred here, as it takes care of starting tomcat with the proper settings
  • --> checks whether a given tomcat instance is running, and starts it if it is not
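
The internals of these scripts are not shown here; the sketch below is only a guess at the check-and-start logic, with all names and the process-match pattern assumed:

```shell
# does a process for this instance exist? (assumed match pattern)
tomcat_running() {
    pgrep -f "apache-tomcat-$1" > /dev/null
}

check_tomcat() {
    if tomcat_running "$1"; then
        echo "tomcat $1 is running"
    else
        echo "$(date +%Y%m%d%H%M) tomcat $1 is not running, starting it"
        # the actual start would happen here, with the proper memory
        # settings (2GB for www, 512MB for the other instances)
    fi
}

check_tomcat ci
```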

Some of these scripts are currently executed using a cronjob:

tomcat@nmcdsp ~/scripts $ crontab -l
# m h  dom mon dow   command
# Check if tomcat is running properly
* * * * * /home/tomcat/scripts/ www >> /home/tomcat/logs/check_tomcat.log 2>&1
* * * * * /home/tomcat/scripts/ ci >> /home/tomcat/logs/check_tomcat.log 2>&1
* * * * * /home/tomcat/scripts/ nmcdsptest >> /home/tomcat/logs/check_tomcat.log 2>&1
* * * * * /home/tomcat/scripts/ tools >> /home/tomcat/logs/check_tomcat.log 2>&1

# restart tomcat automatically every day
0 6 * * * /home/tomcat/scripts/ www >> /home/tomcat/logs/restart_tomcat.log 2>&1
0 6 * * * /home/tomcat/scripts/ ci >> /home/tomcat/logs/restart_tomcat.log 2>&1
0 6 * * * /home/tomcat/scripts/ nmcdsptest >> /home/tomcat/logs/restart_tomcat.log 2>&1
0 6 * * * /home/tomcat/scripts/ tools >> /home/tomcat/logs/restart_tomcat.log 2>&1

# and kill ci every 2 hours
0 */2 * * * /home/tomcat/scripts/ ci >> /home/tomcat/logs/restart_tomcat.log 2>&1
tomcat@nmcdsp ~/scripts $ 

If, for some reason, a Tomcat instance needs to be restarted, it is best to use these scripts to do so, as they also make sure Tomcat is started with the proper memory settings. For example, to restart the ci instance, you would execute:

tomcat@nmcdsp ~ $ ~/scripts/ ci
201106061658 killing tomcat ci instance (pid: 17992)
tomcat@nmcdsp ~ $ 

The cronjob will then take care of starting the ci instance with the proper settings.


Apache webserver

Currently, the Apache webserver is set up to proxy requests to the Tomcat daemon in charge of a specific environment. The configuration is also already set up to perform load balancing. Note that in the configuration below the application context is gscf-0.8.3-www; this should be changed to match the application context on the Tomcat server:

root@nmcdsp:/etc/apache2/sites-available# cat nmcdsp.org_gscf-www.conf 
# Apache Virtual Host for GSCF Production Build
# Author	Jeroen Wesbeek
# Since		20100825
# Revision Information:
# $Author$
# $Date$
# $Rev$
<VirtualHost *:80>

	ErrorLog /var/log/apache2/gscf-www-error.log
	CustomLog /var/log/apache2/gscf-www-access.log combined

	# Make sure that my document root points to the root of the web
	# application (where the WEB-INF is located, for instance).
	#DocumentRoot /var/www/nmcdsp.org_www/htdocs

	#ErrorDocument 404 /503.php

	<IfModule mod_rewrite.c>
		RewriteEngine on

                # keep listening for the serveralias, but redirect to
                # servername instead to make sure only one user session
                # is created (tomcat will create one user session per
                # domain which may lead to two (or more) usersessions
                # depending on the number of serveraliases)
                # see gscf ticket #321
		RewriteCond %{HTTP_HOST} ^$ [NC]
		RewriteRule ^(.*)$$1 [R=301,L]
		RewriteCond %{HTTP_HOST} ^$ [NC]
		RewriteRule ^(.*)$$1 [R=301,L]
		RewriteCond %{HTTP_HOST} ^$ [NC]
		RewriteRule ^(.*)$$1 [R=301,L]

		# rewrite the /gscf-a.b.c-environment/ part of the url
		RewriteCond %{HTTP_HOST} ^$ [NC]
		RewriteRule ^/gscf-0.8.3-www/(.*)$ /$1 [L,PT,NC,NE]
	</IfModule>

	<IfModule mod_proxy.c>
		<Proxy *>
			Order deny,allow
			Allow from all
		</Proxy>

		ProxyStatus On
		ProxyPass / balancer://gscf-cluster/gscf-0.8.3-www/ stickysession=JSESSIONID|jsessionid nofailover=On
		ProxyPassReverse / balancer://gscf-cluster/gscf-0.8.3-www/
		ProxyPassReverseCookiePath /gscf-0.8.3-www /

		# backend servlet container for virtual host support.
		ProxyPreserveHost On

		# Tell mod_proxy that it should not send back the body-content of
		# error pages, but be fascist and use its local error pages if the
		# remote HTTP stack is sending an HTTP 4xx or 5xx status code.
		#ProxyErrorOverride On

                <Location />
                        SetOutputFilter proxy-html
			ProxyHTMLDoctype XHTML Legacy
                        ProxyHTMLURLMap /gscf-0.8.3-www/ /
                </Location>

		<Proxy balancer://gscf-cluster>
			#BalancerMember ajp://localhost:10009
			BalancerMember http://localhost:10080
		</Proxy>
	</IfModule>
</VirtualHost>

More nodes can be included in the load-balanced setup by extending the proxy balancer configuration directive:

		<Proxy balancer://gscf-cluster>
			BalancerMember http://node1:port
			BalancerMember http://node2:port
			BalancerMember http://nodeN:port
		</Proxy>


Backups

Currently, the backup cycle consists of two parts: database backups and file backups.

Database backups

The databases are backed up by a cronjob, running as user postgres, which currently dumps the production databases twice a day:

root@nmcdsp:~# su - postgres
postgres@nmcdsp:~$ crontab -l
# m h  dom mon dow   command
0 13,01 * * * ~/scripts/ gscf-www 2>&1
0 13,01 * * * ~/scripts/ nmcdsp-www 2>&1
0 13,01 * * * ~/scripts/ sam-www 2>&1

The database dumps are stored remotely on nbx14 in /home/nmcbackups/backups, using the scponly nmcbackups account on nbx14. On nbx14 the backups folder is cleaned by another cronjob, which runs as user root and keeps at least 5 backup files per environment, and additionally keeps all backups that are less than 7 days old. This means that if there are 15 backup files for one product / environment (e.g. gscf-www) that are younger than 7 days, and two that are older than 7 days, only the latter two will be deleted.

root@nbx14:/root# crontab -l
# The server has an automated backup process going, storing
# database dumps in the (scponly) useraccount nmcbackup. This script
#	1. cleans up the backup folder
#	2. makes sure a minimal number of backups (per product, per environment)
#	   is kept
#	3. makes sure backups younger than a maximum age are kept
30 1,13 * * * /root/scripts/cleanup_backups >> /root/scripts/cleanup_backups.log 2>&1
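
The retention policy can be sketched as follows. This is not the actual cleanup script (its contents are not shown above); it is a minimal illustration of the keep-the-5-newest-plus-keep-everything-younger-than-7-days rule, demonstrated on a throw-away directory with an assumed file naming scheme:

```shell
# keep the 5 newest backups matching a prefix unconditionally;
# of the remainder, delete only files older than 7 days
cleanup() {
    dir="$1"; prefix="$2"
    ls -t "$dir/$prefix"* 2>/dev/null | tail -n +6 | while read -r f; do
        find "$f" -mtime +7 -exec rm {} \;
    done
}

# demonstration: six fresh dumps plus one 10-day-old dump
mkdir -p /tmp/backups
for i in 1 2 3 4 5 6; do touch "/tmp/backups/gscf-www-$i"; done
touch -d "10 days ago" /tmp/backups/gscf-www-old
cleanup /tmp/backups gscf-www

# only gscf-www-old is removed: every fresh file is either among the
# newest five or younger than 7 days
ls /tmp/backups
```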

File backups

To be implemented.

Project deployment

For the purpose of deploying Grails projects in a particular environment, a deploy script has been created in /home/tomcat/scripts/. Running it without arguments shows its usage:

tomcat@nmcdsp ~/scripts $ ./ 
Usage: ./ <projectname> <type>
       where type is one of ci, nmcdsptest, www or dbnptest, dbnpdemo
tomcat@nmcdsp ~/scripts $ 
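
What the deploy script does internally is not documented here; the outline below is a hypothetical sketch of the kind of steps a Grails deploy to a Tomcat instance typically involves, with all paths and commands being assumptions:

```shell
PROJECT=gscf
TYPE=www
WORKSPACE="/home/tomcat/workspace/$PROJECT"
WEBAPPS="/home/tomcat/apache-tomcat-$TYPE/webapps"

# 1. build a war for the requested environment, e.g.:
#      (cd "$WORKSPACE" && grails -Dgrails.env="$TYPE" war)
# 2. drop it into the instance's webapps folder for auto-deployment:
#      cp "$WORKSPACE"/target/*.war "$WEBAPPS/"
echo "would deploy $PROJECT ($TYPE) to $WEBAPPS"
```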

So, to deploy a new production instance of gscf, you should issue the following command:

tomcat@nmcdsp ~/scripts $ ./ gscf www

The dbnptest and dbnpdemo builds are created the same way on the server; however, the resulting builds are then scp'd elsewhere for deployment:

tomcat@nmcdsp ~/scripts $ ./ gscf dbnptest; ./ gscf dbnpdemo
tomcat@nmcdsp ~ $ cd ~/apache-tomcat-dbnptest/webapps/
tomcat@nmcdsp ~/apache-tomcat-dbnptest/webapps $ scp *.war

The administrator there can then deploy the war file(s).

Note: make sure the Apache virtual host configuration points to the proper Tomcat application context.

Continuous integration

Continuous integration is basically automated deployment. The deploy script checks whether the local workspace (in /home/tomcat/workspace/projectname) is older than svn HEAD and, if so, creates a new build and deploys it on the ci Tomcat instance:

tomcat@nmcdsp ~ $ crontab -l
# m h  dom mon dow   command

# Continuous Integration
* * * * 1-5 /home/tomcat/scripts/ gscf ci >> /home/tomcat/logs/gscf_continuous_integration.log 2>&1
* * * * 1-5 /home/tomcat/scripts/ gscf4animaldb ci >> /home/tomcat/logs/gscf4animaldb_continuous_integration.log 2>&1
*/5 8-19 * * 1-5 /home/tomcat/scripts/ sam ci >> /home/tomcat/logs/sam_continuous_integration.log 2>&1
*/5 8-19 * * 1-5 /home/tomcat/scripts/ nmcdsp ci >> /home/tomcat/logs/nmcdsp_continuous_integration.log 2>&1
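
The "older than svn HEAD" check could be sketched as below. On the server the two revision numbers would come from the svn client (e.g. svnversion on the workspace and svn info against the repository); here they are passed in as parameters, since the actual script is not shown:

```shell
# rebuild only when the workspace revision lags the repository HEAD
needs_build() {
    workspace_rev="$1"; head_rev="$2"
    [ "$workspace_rev" -lt "$head_rev" ]
}

if needs_build 1204 1210; then
    echo "workspace is behind HEAD: building and deploying to ci"
fi
```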