Building LOMSY With Flow, Part 3


Get agile, deploy early!

This episode will guide you through setting up Gitlab CI deployment for the Flow application.

Add vHost on target machine

To deploy the application to a server, you usually need to prepare a virtual host (vHost) on that target machine - unless you order a dedicated server for your application alone. So far, I have mostly found myself deploying to shared servers, where each application resides inside its own vHost.

For this example, we’re using an Ubuntu Linux 16.04 based server with Nginx, MySQL and PHP 7.0 installed. But since the basic setup - and especially the steps to prepare a vHost and a Linux user - differs between providers, just make sure you have it ready before you go on.


The above was the main prerequisite: without a place to deploy to, there is nothing to automate… Now let’s first get an overview of the components involved and how we want to deploy the application:

The CI architecture to deploy LOMSY

Step by step:

  • (1) Developer(s) push code to the Git repository, which triggers a build
  • (2) Gitlab CI starts a Docker container in which we execute a defined set of reproducible tasks (running tests, deployment steps or whatever is needed)
  • (3) From within the temporary Docker container we run a Surf deployment that connects to the target server
  • (4) The target server fetches the source code from Gitlab and runs the needed steps (e.g. composer install to fetch dependencies)
  • (5) Once the deployment has finished, end-users can use the new version of the app on the server

All connections are SSH based and authentication is done via SSH-Keys as you will see below.

One note on the approach of remote-executing git checkout and composer install: in our setup, we run those steps directly on the target machine. We could instead build the application completely inside the Docker container and then ship it as a whole to the target, e.g. via rsync. That would reduce complexity a bit, since the target server wouldn’t need read access to the Git repository. It’s a bit like Mac vs. Linux: one could argue for hours about which is better. The approach outlined here is my preferred way of doing it for now (and that may change in the future).

Enough boring theory, let’s prepare the continuous integration automation into your project:

Prepare SSH-Authentication

This step will prepare two things: Allow us to connect as the newly created user on the target server - and let that user access the Git repository.

First, make sure your personal public key is in the list of allowed keys. This happens by adding it to the file .ssh/authorized_keys in the home directory of the new Linux user on the target server. Depending on your server setup, this can happen automatically, via a control panel - or by asking your Ops person nicely to help out.
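If you have to do it by hand, what “adding your key” boils down to can be sketched like this. Run it as the deploy user on the target box; the key line is a placeholder for your actual public key, and SSH_DIR defaults to the standard ~/.ssh location:

```shell
# Manual equivalent of ssh-copy-id, run as the deploy user on the
# target server. The echoed key is a placeholder for your real
# public key.
SSH_DIR=${SSH_DIR:-"$HOME/.ssh"}
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
echo 'ssh-rsa AAAAB3...your-public-key... you@workstation' >> "$SSH_DIR/authorized_keys"
# OpenSSH refuses keys from a world-readable authorized_keys file
chmod 600 "$SSH_DIR/authorized_keys"
```

The strict permissions (700 on the directory, 600 on the file) matter: sshd silently ignores authorized_keys files that other users could write to.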

Second, we create a pair of SSH keys on the target server, so that we can later add its public key as a deploy key to our repository (allowing the user on the target system to check out our application):

lomsy@nimbus:~$ ssh-keygen
lomsy@nimbus:~$ cat .ssh/id_rsa.pub
ssh-rsa AAAA(... shortened ...)

Copy the output of the cat command from above and keep it - we’ll come back to it later.

Add Deploy-Key

Now go to your project on your Gitlab instance, find the Gear-Icon-Dropdown on the right top corner, select Deploy Keys from it and add the public key you created before.

For me, a “title” of the form “user @ server” has worked pretty well in the past to identify the keys (and yes, when working with several projects the list can grow quickly, as long as you use a fresh key per project - and you should do so!).

After saving you can try while being logged in on the target box:

lomsy@nimbus:~$ ssh git@my-gitlab.domain.tld
PTY allocation request failed on channel 0
Welcome to GitLab, Anonymous!
Connection to my-gitlab.domain.tld closed.

So far so good.

Add Gitlab Variable

Just one thing regarding SSH is missing now: The Docker container we’ll spin up for each CI run will need to have an SSH-Key injected that is allowed to login and deploy to the user on the target system.

It’s best to create a fresh pair of keys in the project directory:

ssh-keygen -f lomsy.key -C "deploy-lomsy@ci"

This will create a fresh pair of SSH keys (public and private) with the comment “deploy-lomsy@ci”, which will help to identify and work with the keys later on. Open the generated file lomsy.key and copy its content. Then navigate to your project on Gitlab, find the Gear-Icon-Dropdown again and this time select CI/CD Pipelines from that menu. On the next page, scroll to the section called “Secret Variables”.

Now add a new variable with the identifier SSH_PRIVATE_KEY and paste the content as the value of the variable.

Finally, copy the content of lomsy.key.pub (the public key) and also add it to the list of authorized keys on the target server (the .ssh/authorized_keys file).

Hopefully we haven’t forgotten anything - now let’s set up the Gitlab CI pipeline with the deployment:

Add Gitlab CI configuration

Gitlab ships with Gitlab CI, a fully integrated Continuous Integration platform which suits our needs very well. As a first step, we’ll just add a deployment stage, which should be triggered and executed upon each commit to the master branch.

Add the following snippet as .gitlab-ci.yml to your project repository:


image: php:7.0

cache:
  paths:
    - /cache/composer

before_script:
  # define a cache directory for composer
  - export COMPOSER_CACHE_DIR=/cache/composer
  # Install ssh-agent if not already installed, it is required by Docker.
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  # Run ssh-agent (inside the build environment)
  - eval $(ssh-agent -s)
  # Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  # For Docker builds disable host key checking. Be aware that by adding that
  # you are susceptible to man-in-the-middle attacks.
  # WARNING: Use this only with the Docker executor, if you use it with shell
  # you will overwrite your user's SSH config.
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  - apt-get update
  - apt-get install -y sshpass
  # fetch the Surf phar (insert the download URL of the Surf release you use)
  - curl -L -o surf.phar <surf-phar-download-url>
  - chmod +x surf.phar

# the job name is arbitrary, only the stage and branch filter matter
deploy_production:
  stage: deploy
  script:
    - echo Deploying to Production environment.
    - php -v
    - ./surf.phar deploy --configurationPath Build/Surf myApp-Live
  only:
    - master

The biggest part of it is the stuff handling the SSH-Key injection into the Docker container - the real work (downloading and executing Surf) takes just a few lines…
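One detail of that injection worth calling out: the ssh-add line uses bash process substitution. <(echo "$SSH_PRIVATE_KEY") presents the variable’s content to ssh-add as a readable file, so the private key never has to be written to disk inside the container. A minimal local demo of the mechanism (with a dummy value standing in for the real key):

```shell
#!/usr/bin/env bash
# Process substitution demo: <(echo ...) behaves like a file whose
# content is the echoed string. SECRET is a dummy stand-in for
# $SSH_PRIVATE_KEY - never echo a real private key in a logged job.
SECRET="dummy-key-material"
cat <(echo "$SECRET")   # prints: dummy-key-material
```

Note that <( ... ) is a bash feature; if your image’s default shell is plain sh, the before_script above only works because Gitlab runners execute scripts with bash.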

Do not commit/push yet, one step is missing:

Add a Surf deployment file

Surf is a PHP based deployment tool. It lets us define a deployment configuration (the snippet below) describing what has to be done in a deployment run. As you’ll see, this snippet is pretty short, since we build on the “Flow application” model, which already covers most of the steps we need.


<?php
// Build/Surf/myApp-Live.php

use \TYPO3\Surf\Domain\Model\Node;
use \TYPO3\Surf\Domain\Model\SimpleWorkflow;

$application = new \TYPO3\Surf\Application\TYPO3\Flow('my-lomsy.domain.tld Application');
$application->setOption('repositoryUrl', 'git@my-gitlab.domain.tld:me/lomsy.git');
$application->setOption('keepReleases', 3);
$application->setOption('composerCommandPath', '/usr/local/bin/composer');

// do not use local composer + rsync, but git + composer directly on the target server
$application->setOption('transferMethod', 'git');
$application->setOption('packageMethod', NULL);

// set the Flow major version
$application->setVersion('4.0');

// deployment path, relative to the home directory of the deploy user
$application->setDeploymentPath('httpdocs');

$workflow = new SimpleWorkflow();
$deployment->setWorkflow($workflow);

$deployment->onInitialize(function() use ($workflow, $application) {
    // place to add or remove tasks for this application, if needed
});

$node = new Node('LOMSY LIVE Webserver');
$node->setHostname('my-lomsy.domain.tld'); // replace with your server's hostname
$node->setOption('username', 'lomsy');

$application->addNode($node);
$deployment->addApplication($application);

In case you copy/paste this into your project, be aware that you need to modify the script above in several places, replacing the application, server and usernames to fit your setup.

Set it on fire

Now add those two new files to your git repo, press the red button (= commit to master + push to Gitlab) and watch the magic happen…

The deployment run will probably finish without any error - but the frontend viewed in your browser might not look like you intended… The reason is that Surf is not yet aware of a major change that came with Flow 4.x - the namespace switch from TYPO3 to Neos. Certain commands therefore cannot be run, which leads to missing resources (and the DB migrations not being executed). Details and a workaround can be found here

Add a database

Add an empty MySQL database on the target server and a dedicated user for it - note the DB name and username/password for it.
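The SQL involved can be sketched like this; the database name, user and password are examples only, and on the target server you would review the file and then pipe it into mysql -u root -p:

```shell
# Write the statements to a file first so they can be reviewed, then
# feed the file to `mysql -u root -p` on the target server. All
# names and the password here are examples - pick your own.
cat > create-lomsy-db.sql <<'SQL'
CREATE DATABASE lomsy CHARACTER SET utf8 COLLATE utf8_unicode_ci;
CREATE USER 'lomsy'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON lomsy.* TO 'lomsy'@'localhost';
FLUSH PRIVILEGES;
SQL
```

Granting privileges only on lomsy.* (instead of *.*) keeps the dedicated user scoped to this one application’s database.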

Now log in as the user on the target server and store the configuration with your favorite text editor:

lomsy@nimbus:~$ vi httpdocs/shared/Configuration/Production/Settings.yaml


Neos:
  Flow:
    persistence:
      backendOptions:
        dbname: 'lomsy'
        user: 'lomsy'
        password: 'probablytosecureforablogpost'

Stored in that file, the DB configuration will be safe and survive further deployments.
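Why does it survive? Surf keeps a shared/ directory next to the numbered releases and links it into each release, so anything in shared/ outlives individual deployments. A rough local sketch of that layout (directory names are illustrative, not Surf’s exact task output):

```shell
# Illustrative only: mimic the releases/ + shared/ layout that a
# Surf deployment maintains inside the deployment path.
mkdir -p demo/shared/Configuration/Production
mkdir -p demo/releases/20240101120000
# each release links back to the shared configuration ...
ln -s ../../shared/Configuration demo/releases/20240101120000/Configuration
# ... and "current" always points at the active release
ln -s releases/20240101120000 demo/current
ls -l demo/current
```

A new deployment only creates a fresh directory under releases/ and flips the current symlink; the shared/ tree, and with it the Settings.yaml above, is never touched.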

Admittedly, this file could also be versioned in the LOMSY repository, but I personally prefer keeping the “secrets” (DB credentials, SSH keys and such) out of the repository.

Next up

We’ve reached a point where we can now start building actual functionality into LOMSY. Follow the next part of this series to see how we integrate the registration and login mechanism for the users. (another post that needs to be written first…)

Posts in this series