Deployment
These instructions are being updated and are not yet complete. Come back later, and use them only once this message has been removed!
These are instructions to deploy your own implementation of Cloud-COPASI directly from the source code.
The following software packages must be installed and configured:
- A Linux or UNIX system
- Python 3.8 or above
- A web server compatible with Django, such as Apache
- A database compatible with Django, such as PostgreSQL, MySQL, Oracle, or SQLite, along with an appropriate Python wrapper (see the Django database documentation for full details)
- Django version 3.x or greater (handled by "pip install -r requirements.txt" - see below)
- Django-extensions version 3.0.8 or above
- Matplotlib (handled by "pip install -r requirements.txt" - see below)
- Cycler version 0.10.0 or above
- psycopg2 version 2.8.5 or above
- Dateutil version 2.8.1 or above
- typing version 3.7.4.3 or above
- Boto Python interface to AWS
- BasiCO (handled by "pip install -r requirements.txt" - see below)
- HTCondor (installed as a separate user - see below)
- COPASI (executable as a separate user - see below)
For security reasons, Cloud-COPASI should run as its own user.
adduser cloudcopasi
su - cloudcopasi
An example PostgreSQL setup (on Ubuntu) follows. Install it, if not already done:
sudo apt-get update
sudo apt-get install postgresql postgresql-contrib
A new database should be created, and username and password should be set up to allow Cloud-COPASI to access this database. Refer to your database documentation for details on how to do this. Initially this user must have permission to create tables in the database.
E.g. become the postgres user (created by default at installation) to gain the (inherited) privileges of the postgres administrative user.
sudo -i -u postgres
Use the psql interactive shell to connect to the database server. Then create the database and give your cloud_copasi_user (specified in ~/cloud-copasi/cloud_copasi/settings.py) the necessary permissions.
psql
psql (9.5.10)
Type "help" for help.
postgres=# CREATE DATABASE cloud_copasi_db;
CREATE DATABASE
postgres=# CREATE USER cloud_copasi_user WITH PASSWORD 'password';
CREATE ROLE
postgres=# ALTER ROLE cloud_copasi_user SET client_encoding TO 'utf8';
ALTER ROLE
postgres=# ALTER ROLE cloud_copasi_user SET default_transaction_isolation TO 'read committed';
ALTER ROLE
postgres=# ALTER ROLE cloud_copasi_user SET timezone TO 'UTC';
ALTER ROLE
postgres=# GRANT ALL PRIVILEGES ON DATABASE cloud_copasi_db TO cloud_copasi_user;
GRANT
postgres=# \q
logout
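As a sanity check, you can try connecting to the new database as cloud_copasi_user. This is a sketch assuming the server is running locally and password authentication is enabled for local connections in pg_hba.conf:

```shell
# Should print connection details for cloud_copasi_db after prompting for the password
psql -h localhost -U cloud_copasi_user -d cloud_copasi_db -c '\conninfo'
```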
HTCondor is a high-throughput computing manager which is used as the underlying system to submit COPASI jobs to a compute pool. Cloud-COPASI uses the Bosco part of HTCondor to send jobs via ssh to remote batch job schedulers (PBS, LSF, SLURM, etc.). While Bosco was developed by the Open Science Grid, it is now included as part of HTCondor.
HTCondor is installed on the server that runs Cloud-COPASI. These instructions were tested with version 9.12.0, and will hopefully work with more recent versions of HTCondor (though not older ones). We install an HTCondor tarball with pre-built binaries, available from the HTCondor download pages. Choose the version number required (9.12.0 for these instructions) and then the appropriate Linux distribution (in our case CentOS7).
Unpack the tarball in the home directory of the cloudcopasi user, and run the install script.
tar -xvf condor-*-stripped.tar.gz
mv condor-*stripped condor
cd condor
bin/make-personal-from-tarball
cd ~
Cloud-COPASI needs a customized version of condor_remote_cluster; this is obtained by applying a patch:
patch condor/bin/condor_remote_cluster < cloud-copasi/condor_overlay/bin/bosco_cluster.patch
(In older versions of HTCondor, condor_remote_cluster was called bosco_cluster; if installing an older version you may need to change the above command accordingly.)
We will use file-based authentication, which is set up with the following commands:
mkdir -p condor/local/fs_auth
echo "FS_LOCAL_DIR=/home/cloudcopasi/condor/local/fs_auth" > condor/local/config.d/condor_config.local
We prefer to use a passwordless key for Bosco's interaction with the submit nodes, and for that we have to create the key ourselves before letting Bosco do it itself. This is achieved as follows (leave the password empty by just pressing Enter):
ssh-keygen -b 4096 -f .ssh/bosco_key.rsa
Finally we load the HTCondor environment variables and start HTCondor.
. condor/condor.sh
condor_master
HTCondor is now running and will be used by Cloud-COPASI. Let's test it:
condor_q
which should produce output similar to this:
-- Schedd: yourhost.com : <10.10.1.103:44733?... @ 02/22/22 22:22:22
OWNER BATCH_NAME SUBMITTED DONE RUN IDLE HOLD TOTAL JOB_IDS
Total for query: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
Total for all users: 0 jobs; 0 completed, 0 removed, 0 idle, 0 running, 0 held, 0 suspended
You should also ensure that the CopasiSE binary is available, readable, and executable by the cloudcopasi user. E.g.
mkdir -p copasi/bin
cd copasi
wget https://github.com/copasi/COPASI/releases/download/Build-170/COPASI-4.22.170-AllSE.tar.gz
tar -xzvf COPASI-4.22.170-AllSE.tar.gz
cd bin
ln -s ../COPASI-4.22.170-AllSE/Linux64/CopasiSE
cd
You should also create folders to store log files, user files, and ssh keypairs.
mkdir log user-files instance_keypairs
Download the latest stable branch of the source code:
git clone https://github.com/copasi/cloud-copasi.git
Copy settings.py.EXAMPLE to settings.py and fill in the details for the database and file locations. E.g.
cd cloud-copasi/cloud_copasi
cp settings.py.EXAMPLE settings.py
vim settings.py
cd
Copy the Copasi model file used in the cluster test into the copasi directory.
cp cloud-copasi/brusselator_scan_test.cps copasi/
Add the Cloud-COPASI source folder to the python path, and set the Django setting module:
export PYTHONPATH=$PYTHONPATH:/home/cloudcopasi/cloud-copasi
export DJANGO_SETTINGS_MODULE=cloud_copasi.settings
In bash, for example, those exports can be made persistent by putting those lines in a ~/.bash_aliases file (which is sourced by ~/.bashrc).
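For example, ~/.bash_aliases could contain the following (the paths assume the cloudcopasi home-directory layout used throughout this guide):

```shell
# ~/.bash_aliases -- sourced by ~/.bashrc for interactive shells
export PYTHONPATH=$PYTHONPATH:/home/cloudcopasi/cloud-copasi
export DJANGO_SETTINGS_MODULE=cloud_copasi.settings
```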
Let's use Python's virtualenv to sandbox your own versions of Python and the needed modules away from any system python packages, and try to ease replication of a working setup.
Chances are you already have Python installed. The current version of Cloud-COPASI requires Python 3.8 or above. Check this by . . .
python3 --version
Otherwise install Python 3.8 or above.
pip is a tool for installing Python packages from the Python Package Index. Let's install pip.
sudo apt-get install python3-pip
Now let's use pip to install virtualenv.
sudo pip3 install virtualenv
virtualenv is used to set up local directories containing a separate Python, pip, and any Python modules you want to install. It also includes shell commands which, when run (i.e. "sourced"), set your PATH to look first in your local environment for your Python stuff. This also loads a shell function to "deactivate" this behavior.
cd ~/cloud-copasi
virtualenv venv
. . . creates a "venv" directory, which will be minimally populated with local versions of Python, pip, some shell code, etc. Now let's "activate" our shell environment to set our shell's PATH to use any Python stuff under our "venv" directory before any system Python packages (and set the prompt to remind us of this state).
source venv/bin/activate
Now is where this sandboxing will help us. The pip now used will be the local one, and packages installed with that will now be installed under our "venv". We could now use pip to install Django and the other Python dependencies, individually. But we can instead utilize a list of the exact packages, with versions, to tell pip what to install. This has been saved in ~/cloud-copasi/requirements.txt ("pip freeze > requirements.txt" generated this from a working configuration). Tell pip to install these by . . .
pip install -r requirements.txt
While in the ~/cloud-copasi directory, run migrate to create/update the Django website's data "model" in the database.
python manage.py migrate
And create the static files directory
python manage.py collectstatic
Install a web server, including wsgi interface, if that hasn't already been done. E.g.:
sudo apt-get update
sudo apt-get install apache2 libapache2-mod-wsgi-py3
Now, add Cloud-COPASI to the web server configuration. Cloud-COPASI can be run on any web server capable of running Django applications (see https://docs.djangoproject.com/en/3.2/howto/deployment/), though the Django documentation recommends using Apache with the mod_wsgi Python interface.
See https://docs.djangoproject.com/en/3.2/howto/deployment/wsgi/ for general details on deployment. In Ubuntu, the general installation procedure is as follows:
- Ensure Apache and mod_wsgi are installed and configured correctly
- Configure the Apache site config by adding a new file to /etc/apache2/sites-available:
sudo touch /etc/apache2/sites-available/cloud-copasi.conf
And add the following configuration to this file:
<VirtualHost *:80>
ServerName domain.com
ServerAlias cloudcopasi
Alias /static /home/cloudcopasi/cloud-copasi/cloud_copasi/web_interface/templates/static-all/
<Directory /home/cloudcopasi/cloud-copasi/cloud_copasi/web_interface/templates/static-all/>
Require all granted
</Directory>
Alias /admin/static /home/cloudcopasi/cloud-copasi/cloud_copasi/web_interface/templates/static-all/admin-media/
<Directory /home/cloudcopasi/cloud-copasi/cloud_copasi/web_interface/templates/static-all/admin-media/>
Require all granted
</Directory>
WSGIDaemonProcess cloud-copasi user=cloudcopasi group=cloudcopasi threads=5 python-path=/home/cloudcopasi/cloud-copasi/ python-home=/home/cloudcopasi/cloud-copasi/venv
WSGIProcessGroup cloud-copasi
WSGIScriptAlias / /home/cloudcopasi/cloud-copasi/cloud_copasi/wsgi.py
<Directory /home/cloudcopasi/cloud-copasi/cloud_copasi/>
Require all granted
</Directory>
ErrorLog /var/log/apache2/error.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel debug
CustomLog /var/log/apache2/access.log combined
</VirtualHost>
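Before enabling the site, you can ask Apache to check the configuration syntax; this catches typos in the new site file before it takes effect:

```shell
# Reports "Syntax OK" if the configuration (including the new site file) parses cleanly
sudo apache2ctl configtest
```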
Now enable the site, and reload apache
sudo a2ensite cloud-copasi
sudo service apache2 reload
The background daemon runs in addition to the web server and periodically polls the Bosco queue for completed jobs. It is located in cloud_copasi/background_daemon/cloud_copasi_daemon.py, and can be started by adding the cloud-copasi source folder to the Python path, setting the DJANGO_SETTINGS_MODULE variable as above, and then running:
python cloud_copasi_daemon.py start
Alternatively (and recommended), you can add the daemon as a system process. For example, in Ubuntu, you can add a systemd unit specification. To do this . . .
sudo cp /home/cloudcopasi/cloud-copasi/cloud-copasi-daemon.service.EXAMPLE /etc/systemd/system/cloud-copasi-daemon.service
sudo systemctl enable cloud-copasi-daemon.service
sudo systemctl start cloud-copasi-daemon.service
That should allow the system to handle starting and stopping of /home/cloudcopasi/cloud-copasi/cloud_copasi/background_daemon/cloud_copasi_daemon.py via /home/cloudcopasi/cloud-copasi/cloud-copasi-daemon.sh (and start it immediately).
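You can confirm the daemon is running, and inspect its recent log output, with the standard systemd tools (using the unit name installed above):

```shell
# Show the unit's current state and the last few log lines
sudo systemctl status cloud-copasi-daemon.service
# Show today's log output for the unit
sudo journalctl -u cloud-copasi-daemon.service --since today
```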
Upgrading to a new version of Cloud-COPASI requires a few simple steps:
- Stop the background daemon and web server
- Go to the cloud-copasi source folder, and run
git pull origin master
- Add the source folder to the Python path and set the DJANGO_SETTINGS_MODULE variable as above, then run:
python manage.py migrate
- Start the web server
- Start the background daemon
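The steps above can be collected into a sketch like the following, assuming the systemd unit and directory layout used throughout this guide. The pip install and collectstatic steps are not listed above, but are often needed when an upgrade changes dependencies or static files:

```shell
# Stop the background daemon (the web server keeps serving the old code until reloaded)
sudo systemctl stop cloud-copasi-daemon.service

# Update the source and the virtual environment
cd /home/cloudcopasi/cloud-copasi
git pull origin master
source venv/bin/activate
export PYTHONPATH=$PYTHONPATH:/home/cloudcopasi/cloud-copasi
export DJANGO_SETTINGS_MODULE=cloud_copasi.settings
pip install -r requirements.txt          # pick up any new dependencies
python manage.py migrate                 # apply any new database migrations
python manage.py collectstatic --noinput # refresh the static files directory

# Restart the web server and the daemon
sudo service apache2 reload
sudo systemctl start cloud-copasi-daemon.service
```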