Development Tools, Tips and Tricks
How to access the shell
While things are still in development and testing, we need to access the shell of the installed system in order to check log files, debug, experiment with configuration changes, etc.
For this we can install the command docker-enter:

    docker run -v /usr/local/bin:/target jpetazzo/nsenter
Then access the shell of a running container like this:
    docker-enter btr
See also this article: http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/
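Once inside the container, the usual checks can be done as in any Debian-like system. A minimal sketch (the exact log locations depend on which webserver is active, so the paths below are only examples):

    docker-enter btr
    tail -f /var/log/nginx/error.log        # or /var/log/apache2/error.log
    ps aux | grep -E 'nginx|apache2|php'    # see which services are running
    exit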
Install sshd
We want to start an sshd server on port 2201 inside the container, but when we created the container with docker run we did not think about forwarding this port. We have to destroy this container and create a new one, with an additional -p option for the port 2201. But first we should use the command docker commit to save to a new image any configurations and data that we already have in this container.
    docker stop btr
    docker commit btr btr_server:1
    docker images
    docker rm btr
    docker run -d --name=btr --hostname=example.org \
        -p 80:80 -p 443:443 -p 2201:2201 btr_server:1
Now we can enter the container and start the sshd server:
    docker-enter btr
    cd /usr/local/src/btr_server/
    dev/install-sshd.sh 2201
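After that, it should be possible to reach the container's sshd through the forwarded port. A quick check from the host could look like this (the user and hostname are only examples; use whatever account the install script sets up):

    ssh -p 2201 root@localhost    # or: ssh -p 2201 root@example.org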
Change the webserver
There are two webservers installed, apache2 and nginx (with php5-fpm and memcached). For development Apache2 can be more suitable, while for production NGINX is supposed to be more responsive under high load.
We can choose one of them like this:
    docker-enter btr
    cd /usr/local/src/btr_server/
    dev/webserver.sh apache2
    dev/webserver.sh nginx
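A simple way to verify which webserver is currently serving the site is to look at the Server header of the response. This is just a generic check (not something provided by the project scripts), run from the host or inside the container:

    curl -sI http://localhost/ | grep -i '^server:'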
Making clones of the application
Clones of the main site can be used for development and testing.
Inside the container you can make a clone for development like this:
    docker-enter btr
    cd /usr/local/src/btr_server/
    dev/clone.sh btr btr_dev1
    dev/clone.sh btr_dev1 btr_test1
It creates a new application with root /var/www/btr_dev1/ and with a DB named btr_dev1. It also creates the drush alias @btr_dev1 and modifies the configuration of the webserver so that the cloned application can be accessed at dev1.example.org.
Caution: The root directory and the DB of the clone will be erased, if they exist.
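Once the clone has been created, its drush alias can be used to run commands against it from inside the container. These are just standard drush commands, not project scripts:

    docker-enter btr
    drush @btr_dev1 status
    drush @btr_dev1 user-login    # get a one-time login link for the clone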
Other clones like this can be created for testing etc. To clean up (remove/erase) a clone, you can use clone_rm.sh like this:
    docker-enter btr
    cd /usr/local/src/btr_server/
    dev/clone_rm.sh btr_dev1
    dev/clone_rm.sh btr_test1
By the way, you can also slightly modify the configuration of a development copy of the application (in order to help us not confuse a development copy with a live or testing one), with the script dev/config.php:
    docker-enter btr
    cd /usr/local/src/btr_server/
    drush @btr_dev php-script dev/config.php dev2
It will set site_name to 'B-Translator (dev1)', make the site email something like 'user+dev1@gmail.com', enable email re-routing, display the devel menu in the footer region, etc. Sometimes this may be useful.
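The effect of the script can be checked with standard drush commands, for example:

    drush @btr_dev vget site_name
    drush @btr_dev vget site_mail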
Re-installing the application
It can be done with the script dev/reinstall.sh:
    docker-enter btr
    cd /usr/local/src/btr_server/
    nohup nice dev/reinstall.sh settings.sh &
    tail -f nohup.out
It will rebuild the Drupal directory with drush make and install the btr_server profile with drush site-install, and then do all the rest of the configurations just like they are done during installation.
Normally there is no need to reinstall the application, unless we want to test the installation profile and the installation scripts.
Another kind of re-installation, which touches only the database of Drupal and nothing else, can be done with the script dev/reinstall-db.sh:
    docker-enter btr
    cd /usr/local/src/btr_server/
    nohup nice dev/reinstall-db.sh @btr_dev &
    tail -f nohup.out
It is useful for testing the installation of custom modules, feature modules, etc. The argument @btr_dev is the alias of the site that should be reinstalled.
Making a backup of the application
Sometimes, when testing things on Drupal (installing/uninstalling modules etc.) things get messy and it is no longer possible to revert to the state that you were in before starting the test. In this case the only way to get safely back to a previous stable state is by restoring a backup (or installing from scratch and repeating all the configurations).
A snapshot of the application is just like a full backup with a time stamp. It saves the state of the application at a certain time, both the code (the whole Drupal directory) and the database. It can be done like this:
    docker-enter btr
    cd /usr/local/src/btr_server/
    dev/snapshot.sh make @btr
    dev/snapshot.sh make @btr_dev
These will create the files snapshot-btr-20140914.tgz and snapshot-btr_dev-20140914.tgz. They can be restored like this:
    dev/snapshot.sh restore @btr --file=snapshot-btr-20140914.tgz
    dev/snapshot.sh restore @btr --file=snapshot-btr_dev-20140914.tgz
    dev/snapshot.sh restore @btr_dev --file=snapshot-btr-20140914.tgz
    dev/snapshot.sh restore @btr_dev --file=snapshot-btr_dev-20140914.tgz
As you may notice, a snapshot of @btr_dev can also be restored on the main application, and the other way around.
However, in many cases a backup/restore of the database is all that is needed, and it is more efficient. It can be done with drush sql-dump and drush sql-query like this:
    drush @btr sql-dump > btr.sql
    drush @btr_dev sql-dump > btr_dev.sql
    drush @btr sql-query --file=$(pwd)/btr.sql
    drush @btr sql-query --file=$(pwd)/btr_dev.sql
    drush @btr_dev sql-query --file=$(pwd)/btr.sql
    drush @btr_dev sql-query --file=$(pwd)/btr_dev.sql
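For large databases it may be worth compressing the dump. A sketch using gzip and the standard drush sql-cli command (not part of the project scripts):

    drush @btr sql-dump | gzip > btr.sql.gz
    zcat btr.sql.gz | drush @btr_dev sql-cli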
Accessing the code of the application from outside the container
In general it is not possible to directly access the directories and files of a container from the host system. However we can use Docker volumes to share directories between the container and the host. It can be done like this:
- First we make a backup of the directory inside the container that we want to share:

    docker-enter btr
    cd /var/www/btr_dev/profiles/
    cp -a btr_server/ btr_server-bak
    exit
- Then we save the image of the container as btr_server:dev, in order to start a new container based on it:

    docker stop btr
    docker commit btr btr_server:dev
    docker images
- Next we create a new container that shares a directory with the host system (using the option -v):

    docker run -d --name=btr_dev --hostname=dev.example.org \
        -v $(pwd)/btr_dev:/var/www/btr_dev/profiles/btr_server/ \
        -p 80:80 -p 443:443 btr_server:dev
Note: The container btr must be stopped before we create and start the new container btr_dev, otherwise the ports 80 and 443 will conflict.
- Finally we enter the container and move the content of the backup directory to the shared directory:

    docker-enter btr_dev
    cd /var/www/btr_dev/profiles/btr_server/
    cp -a ../btr_server-bak/* .
    cp -a ../btr_server-bak/.* .
    rm -rf ../btr_server-bak/
    exit
Now we can go to the directory btr_dev/ and start emacs or any other tools. This way we don't have to install emacs or any other development tools inside the container, and we can use the best development tools that the host system can offer.
Pushing commits
The copy of the application at /var/www/btr_dev/profiles/btr_server/ (as well as the one at /var/www/btr/profiles/btr_server/) is actually a clone of the git repository of the project on GitHub, so we can pull from it and push to it. Pulling (to get up to date) can be done by everybody; however, pushing requires a username and password (the ones that are used to access the account at GitHub).
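For example, working with the repository from inside that directory is ordinary git usage (git will prompt for the GitHub credentials on push):

    cd /var/www/btr_dev/profiles/btr_server/
    git remote -v    # shows the GitHub repository of the project
    git pull         # anyone can pull
    git push         # asks for the GitHub username and password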
The commit workflow
For small or straightforward changes you can also work directly on the master branch, then commit, and then push to GitHub.
However I usually use a slightly more complicated workflow. First I create and check out a dev branch. When the work is done I merge this branch into master and then delete it. Finally I push the commit(s) to GitHub.
    git checkout -b dev      ### create a branch and switch to it
    [work...commit...work...commit]
    git checkout master      ### switch back to master
    git pull                 ### get any latest commits from github
    git merge dev [--squash]
    git push                 ### send commits to github
    git branch -D dev        ### erase the branch
Usually there are no commits coming from GitHub, since I am the only developer (unless I have worked and committed from some other location). So, when I merge without --squash this usually results in a fast-forward merge, which means that all the commits that I have done on the branch dev are automatically transferred to the branch master.
However sometimes there may be dirty commits on the dev branch, which means that there may be incomplete commits, or commits that reverse what was done in previous commits, etc. When I wish to reorganize commits and make them cleaner, I use the --squash option, which collects all the changes on the dev branch and leaves them on the master sandbox as local modifications (uncommitted). Then I can redo the commits in a cleaner or more logical way. Afterwards the dev branch is deleted and the old commits are lost.
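In that case the end of the workflow looks roughly like this (a sketch; the commit messages are of course just placeholders):

    git checkout master
    git merge dev --squash       ### bring over all the changes, uncommitted
    git reset                    ### unstage them, so they can be re-staged in pieces
    git add -p                   ### stage a first logical group of changes
    git commit -m "First clean commit"
    git commit -a -m "Second clean commit with the rest of the changes"
    git push
    git branch -D dev            ### the old dirty commits are discarded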
Working with a dev-test-live workflow
At some point, all the modifications on the local copy of the application (sandbox) have to be transferred to a public server, where the application is in "production", performing "live". On that public server there is the same docker container as on the development server. The synchronization of the application can be done via git push and pull.
However drush rsync and drush sql-sync offer another option for synchronization. For more details see:
    drush help rsync
    drush help sql-sync
    drush topic docs-aliases
These commands use drush aliases, which also allow remote execution of drush commands. On my development environment I have created the file /etc/drush/remote.aliases.drushrc.php, which has content like this:
    <?php
    $aliases['live'] = array (
      'root' => '/var/www/btr',
      'uri' => 'http://example.org',
      'remote-host' => 'example.org',
      'remote-user' => 'root',
      'ssh-options' => '-p 2201 -i /root/.ssh/id_rsa',
      'path-aliases' => array (
        '%profile' => 'profiles/btr_server',
        '%downloads' => '/var/www/downloads',
      ),
      'command-specific' => array (
        'sql-sync' => array (
          'simulate' => '1',
        ),
        'rsync' => array (
          'simulate' => '1',
        ),
      ),
    );
    $aliases['test'] = array (
      'parent' => '@live',
      'root' => '/var/www/btr',
      'uri' => 'http://test.example.org',
      'remote-host' => 'test.example.org',
      'command-specific' => array (
        'sql-sync' => array (
          'simulate' => '0',
        ),
        'rsync' => array (
          'simulate' => '0',
        ),
      ),
    );
It defines the aliases live and test. The test/stage application is almost identical to the live/production one, however it is not for public use. The idea is to first test there any updates/upgrades of the application, in order to make sure that they don't break anything, before applying them to the real live application. In my case it is placed on a different server, however it can also be placed on the same server as the live application (just make a clone of the main application with dev/clone.sh btr btr_test).
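A quick way to check that the remote aliases (and the ssh access behind them) work is to run a harmless drush command through them, for example:

    drush @live status
    drush @test status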
When everything is set up correctly, the synchronization can be done as simply as this:
    drush rsync @live @test
    drush sql-sync @live @test
    drush rsync @live @btr_dev
    drush sql-sync @live @btr_dev
Note: Synchronizing this way from @test to @live or from @btr_dev to @live is usually a HUGE mistake, but the simulate option in the config file will make sure that it does not actually run.
For drush commands to work remotely, the ssh daemon has to be running on the remote server, inside the docker container. By default it is not installed, but it can be installed with the script dev/install-sshd.sh. This script also changes the ssh port to 2201, in order to avoid conflicts with any existing ssh daemon on the host environment, and also for increased security.
For remote access to work correctly, public/private key ssh access should be set up and configured as well. For more detailed instructions on how to do it see: http://dashohoxha.blogspot.com/2012/08/how-to-secure-ubuntu-server.html
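In short, key-based access from the development machine can be set up roughly like this, using standard openssh commands (adjust the user, host and port to your setup; older versions of ssh-copy-id may need the port passed differently):

    ssh-keygen -t rsa                        # if there is no key pair yet
    ssh-copy-id -p 2201 root@example.org     # install the public key on the remote container
    ssh -p 2201 root@example.org             # test that login works without a password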