Multi-stage rocket flying

A dive into a (useless?) uncovered case of multi-stage Dockerfiles

Looking at the upstream documentation on multi-stage builds, there are some examples, but this one is missing:

# First stage: pull the official Rust image, only to have its content around
FROM rust:1.72-alpine AS build

# A do-nothing command: this stage exists only to be copied from
RUN ls -ls

# Second stage: start from a bare Alpine and copy a single binary out of the first stage
FROM alpine

COPY --from=build /usr/local/cargo/bin/rustc /rustc

CMD [ "ls", "-l", "/" ]

Ok, I mean, the example by itself is useless, but “it works”. Try:

docker build -f Dockerfile -t copiedfrom-rust:v0.1 .

Then:

docker run -it copiedfrom-rust:v0.1 sh

/ # ls
bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    rustc  sbin   srv    sys    tmp    usr    var
/ # ./rustc
error: rustup could not choose a version of rustc to run, because one wasn't specified explicitly, and no default is configured.
help: run 'rustup default stable' to download the latest stable release of Rust and set it as your default toolchain.
/ #

The error is expected: in the official Rust image, /usr/local/cargo/bin/rustc is a rustup proxy binary, and it was copied across without rustup or any toolchain, so it has nothing to dispatch to. The point is only that the file made it from one stage to the other.

Real use case scenario

Let's get serious now: why?

Suppose you are inside a shell, inside a DevOps tool, and you often need to rely on one or more binaries coming from another image; furthermore, you know that the build process for those binaries is time- and resource-consuming (like Rust, for example? Yes).

So it is better to arrange compilation so that the CI tool is not overloaded just to re-compile what did not change (that build should live in its own pipeline, instead).
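
In CI terms, the idea looks like this (a sketch; Dockerfile.executor and the tag are hypothetical names): the expensive build runs in its own dedicated pipeline and is pushed once, so every other pipeline only copies the resulting binary out of the pushed image.

# Build and push the heavy image once, in a dedicated pipeline:
docker build -f Dockerfile.executor -t registry.private.local/executor-daemon:v1.0 .
docker push registry.private.local/executor-daemon:v1.0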

This hint also applies to Golang, and to anything that is CPU-intensive to build.

Once one grasps the power of statically linked binaries, it is time to use it carefully.

Real-life examples

Porting legacy PHP code into the cloud is difficult: by itself, PHP does not provide multi-threaded execution. There were mainly two options, and now there is a third:

  1. Use an nginx or Apache module to run PHP code: but this does not run in the php-cli environment, and it may be a security issue
  2. Use an exotic PHP async framework (ReactPHP, Workerman, Amp, …): OK, but this is not plain PHP anymore, every framework imposes some limitation, and it does not provide real “legacy code porting” (let's be honest here: legacy means the same code, and the same code simply … has problems)
  3. Create a special executor daemon: this executor daemon can be configured by a YAML file where, for each HTTP URL/verb, there is the path of a PHP script, a working directory, an environment, and whatever else

A simple example of that YAML file:

restapi:
  routers:
    - url: /templates
      method: GET 
      wd: /home/phpusr/templates/
      content-type: "application/json"
      timeout: 30
      cmd: php /home/phpusr/templates/listtemplate.php
      environment:
        - HOME: "/home/phpusr"
    - url: /template-new
      method: POST
      payload-to: body
      escape-shell-args: true
      cmd: php /home/phpusr/templates/addtemplate.php $body
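
With a configuration like this, the daemon maps HTTP calls onto PHP scripts, so exercising it would look something like the following (a sketch, assuming the daemon listens on port 8080):

curl http://localhost:8080/templates
curl -X POST -d '{"name": "invoice"}' http://localhost:8080/template-new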

A better-defined/refined YAML file is what I am actually using, but for consuming RabbitMQ messages as a worker which runs PHP (a Unix socket is also provided to get the response back from PHP; and yes, Unix socket support in PHP is really good).
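
As a minimal sketch of that response path (the socket path and payload are made up), the PHP worker pushes its result back to the daemon over the Unix socket:

<?php
// Connect to the hypothetical Unix socket exposed by the executor daemon.
$sock = stream_socket_client('unix:///run/executor-daemon.sock', $errno, $errstr);
if ($sock === false) {
    fwrite(STDERR, "connect failed: $errstr ($errno)\n");
    exit(1);
}
// Send the worker result back; the daemon relays it as the reply.
fwrite($sock, json_encode(['status' => 'ok']));
fclose($sock);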

Before I thought of this “do-nothing FROM image” idea, I needed to spend 4 minutes on each simple change. Simply absurd.

When you need to deploy soon and fast, is this trick really that useless?

Requirements

  • a CI tool with an environment capable of building Docker images
  • a CI environment where Docker has access to a private registry
  • a private registry where the “executor-daemon” is built as an image
  • the private registry must be reachable (visible/accessible) as https://registry.private.local/v2/_catalog, as checked below
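
A quick way to verify that last requirement from the CI environment (the response shown assumes the executor-daemon image has already been pushed):

curl https://registry.private.local/v2/_catalog
{"repositories":["executor-daemon"]}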

In the CI environment, the user running the CI stage must be logged into the private registry, that is:

docker login registry.private.local

This command stores in the ~/.docker/config.json file the auth required to access the registry at registry.private.local (I use a .local domain here because typically a private registry is not exposed outside; by default, https://registry.private.local:443/v2 is accessed when the registry is specified that way, so this domain must be resolvable).
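
With that in place, the Dockerfile of the legacy PHP app can pull the pre-built daemon straight from the private registry, using the same do-nothing-stage trick from the beginning (a sketch: the image name, binary path, and config path are illustrative):

# Do-nothing stage: just reference the pre-built image from the private registry
FROM registry.private.local/executor-daemon:v1.0 AS daemon

FROM php:8.2-cli-alpine

# No compilation happens here: the heavy binary is copied out of the
# pre-built image, so this build stays fast
COPY --from=daemon /usr/local/bin/executor-daemon /usr/local/bin/executor-daemon
COPY restapi.yaml /etc/executor-daemon/restapi.yaml
COPY . /home/phpusr/

CMD [ "executor-daemon", "--config", "/etc/executor-daemon/restapi.yaml" ]

docker build will pull registry.private.local/executor-daemon:v1.0 using the credentials stored by docker login above.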

On legacy PHP porting

With this strategy, local development of existing PHP code can proceed in a natural way:

  • port the legacy code to Composer as a first step (at the very least)
  • clean the code as usual and provide tests: tests, even integration tests, do not need the executor-daemon in order to run
  • it is possible to port code to another language: then move it outside the code base and use another stage during the Docker build (as in the sketch above)
  • make your legacy code quickly communicate with the cloud environment (Docker Swarm, Kubernetes, or whatever), then clean the code

The last point is the “jump into the cloud” as soon as possible: it should give legacy developers a new perspective and let them enjoy cloud-native apps for the new powers they provide, instead of blaming them because “it is difficult to understand” (or comments like that).

There is also a progressive culture upgrade favored by this approach: a developer is not required to understand the details, but they can look at the details and choose to enhance the code and the service.

References

There are references about this strategy, such as https://shahbhargav.medium.com/docker-multi-stage-build-3d1af8868ac0

But there is no mention of local registry usage.

