I work for starsellersworld.com. I wanted my company to start using a tool for tracking project development in a centralized way. Jira was too expensive a resource, and I could not ask for such a step, so I asked for a machine on which to run Redmine. Then came Gogs, and, after a little time, Jenkins.
A number of tools I had never used before: I am a developer, not a DevOps engineer, so I installed them as Docker containers. The containers communicated with each other over an internal network, but there were 3 different Docker networks, so I attached the networks to each container and referenced the others by IP. That strange configuration worked for more than a year. Then it was becoming frustrating, and I decided to spare some time to switch to swarm mode.
The problems of porting data to containers managed by Docker swarm are mostly about the references between them: the IPs must be replaced by the services' names.
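A minimal sketch of what swarm mode buys here (the network and service names below are my own placeholder examples, not my actual stack definition): services attached to the same overlay network resolve each other by service name through Docker's internal DNS, so no hard-coded IPs are needed.

```shell
# Sketch only: create an overlay network and attach services to it.
docker network create --driver overlay toolsnet
docker service create --name gogs --network toolsnet gogs/gogs
docker service create --name redmine --network toolsnet redmine
# Inside the redmine container the hostname "gogs" now resolves to the
# gogs service, so ssh://git@gogs:22/... can replace ssh://git@<IP>:22/...
```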
Redmine. When using a Gogs repository, Redmine refers to it via IP, and there is no way to edit the repository configuration inside Redmine: the only option, via the web GUI, is to remove and recreate that link. The solution was to enter the PostgreSQL database and update the repository address directly with a query:
$ docker exec -it redmine.1.[hashcodehere] bash
psql -U redmine -d redmine_production
UPDATE repositories SET extra_info = regexp_replace(extra_info, '^(.*)extra_clone_url: ssh:\/\/git@(172.17.0.2:22)(.*)$', '\1extra_clone_url: ssh://git@gogs:22\3', 'g');
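The same substitution can be sanity-checked outside the database first. A runnable sketch on a sample extra_clone_url line (the sample value is an assumption modeled on my setup, not an actual DB row):

```shell
# Simulate the URL rewrite on a sample value before touching the DB.
sample='extra_clone_url: ssh://git@172.17.0.2:22/xWave/orders.git'
echo "$sample" | sed -e 's|ssh://git@172\.17\.0\.2:22|ssh://git@gogs:22|'
# prints: extra_clone_url: ssh://git@gogs:22/xWave/orders.git
```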
Jenkins. It also refers to Gogs by IP for repository retrieval, and this must be changed. It happens that inside the $JENKINS_HOME/jobs/ folder there is one subfolder per job, and $JENKINS_HOME/jobs/*/config.xml contains the job configuration, with something like:
<definition class="org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition" plugin="email@example.com">
  <scm class="hudson.plugins.git.GitSCM" plugin="firstname.lastname@example.org">
    <configVersion>2</configVersion>
    <userRemoteConfigs>
      <hudson.plugins.git.UserRemoteConfig>
        <url>ssh://email@example.com:22/xWave/orders.git</url>
        ...
I proceeded with an update script:
#!/bin/sh
for file in */config.xml; do
    sed -i -e 's/ssh:\/\/firstname.lastname@example.org/ssh:\/\/git@gogs/g' "$file"
done
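As a self-contained rehearsal of that script (the sample IP-based URL and the job name "orders" are assumptions, not my real configs), the same rewrite can be tried on a throwaway directory first:

```shell
# Build a fake jobs/ tree, run the same sed rewrite, inspect the result.
tmp=$(mktemp -d)
mkdir "$tmp/orders"
printf '<url>ssh://git@172.17.0.2:22/xWave/orders.git</url>\n' > "$tmp/orders/config.xml"
cd "$tmp"
for file in */config.xml; do
    sed -i -e 's|ssh://git@172\.17\.0\.2|ssh://git@gogs|g' "$file"
done
cat orders/config.xml
# prints: <url>ssh://git@gogs:22/xWave/orders.git</url>
```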
This was enough.
Now Gogs. It is problematic, really, but it happens that Gogs uses SQLite to store information about the webhook used to signal Jenkins to start a build script.
So I downloaded gogs.db from $GOGSHOME/gogs/data/gogs.db and updated it with a query in sqlitebrowser (a desktop app):
UPDATE webhook SET url = replace(url, '172.17.0.4', 'jenkins');
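The same fix can be scripted with the sqlite3 CLI instead of a desktop app. A runnable sketch against a throwaway database (the table is reduced to one column and the webhook path is an invented example, not the real Gogs schema):

```shell
# Rehearse the replace() against a sample webhook URL in a scratch DB.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE webhook (url TEXT);
INSERT INTO webhook VALUES ('http://172.17.0.4:8080/gogs-webhook/');
UPDATE webhook SET url = replace(url, '172.17.0.4', 'jenkins');
SELECT url FROM webhook;"
# prints: http://jenkins:8080/gogs-webhook/
```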
And that was enough for the first day; in 4 hours the services were back and working again. But … but Jenkins refused to work.
An additional problem with Jenkins: it was started as root, and now in swarm mode I did not want such a thing. I was using a dirty solution provided by DigitalOcean just for testing (I am not blaming DigitalOcean for it, it was stated clearly!), and now nothing worked.
I was also using the pdmlab/jenkins-node-docker-agent:6.11.1 image as a pipeline agent, because I need to run some integration testing before deploying the service on the other swarm machine.
First of all I found the source of the image provided by DigitalOcean, and modified it by adding something:
RUN apk -U add docker shadow \
&& rm -rf /var/cache/apk/* \
&& addgroup jenkins ping \
&& addgroup jenkins shadow
I needed to add the ping group, which has GID 999 in the jenkins/jenkins:lts-alpine image but corresponds to the docker group on the host machine. That seems to work.
But there were still problems using docker-compose with that image: https://github.com/PDMLab/jenkins-node-docker-agent
After some hours working on it I realized that the image was overkill: it runs a dockerd inside the container (dind, docker-in-docker), but for my use case I was mounting /var/run/docker.sock inside it, and I do not know whether the dockerd daemon just quits because it finds one already running (on the host machine), or what else.
In fact I run the Jenkins container (now as a service) with ‘-v /var/run/docker.sock:/var/run/docker.sock’, which is good for Jenkins, and then Jenkins passes the same socket on to the docker agent.
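In swarm mode the -v flag becomes a --mount option on the service. A sketch of how that socket bind-mount looks there (the service name and image are placeholders, not my exact command):

```shell
# Sketch only: bind-mount the host Docker socket into the Jenkins service.
docker service create --name jenkins \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  jenkins/jenkins:lts-alpine
```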
Also, I do not need Node.js, just docker-compose. So I started to write my own version, faced some problems, and learnt some lessons.
In the end the docker image is at https://hub.docker.com/r/danielecr/dconedo. It is really simple.
Here is where I found the --group-add parameter to pass to the script:
Here is where I found how to do it:
So, with the help of the community (communities), I found the way to get the building system back, in a better shape.