My two-day journey moving services to a swarm

At the company I work for, I wanted us to start using tools to track project development in a centralized way. Jira was too expensive a resource and I could not ask for such a step, so I asked for a machine on which to run Redmine. Then came Gogs and, a little later, Jenkins.

These were tools I had never used before. I am a developer, not a DevOps engineer, so I installed them as Docker containers. Containers communicate with each other over an internal network, but I had three different Docker networks, so I attached each network to each container and addressed them by IP. That strange configuration worked for more than a year. When it became frustrating enough, I decided to set aside some time to switch to swarm mode.

Porting data

The problems of porting data to containers managed by Docker Swarm mostly concern the references between them: the IPs must be replaced by service names.

Redmine. Redmine refers to the Gogs repository by IP, and there is no way to edit the repository configuration inside Redmine: the only option, via the web GUI, is to remove and recreate that link. The solution was to enter the PostgreSQL database and update the repository address directly with a query:

$ docker exec -it redmine_redmine.1.[hashcodehere] bash
psql -U redmine -d redmine_production

UPDATE repositories SET extra_info = regexp_replace(extra_info, '^(.*extra_clone_url: ssh://git@)[0-9.]+:[0-9]+(.*)$', '\1gogs:22\2', 'g');
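The substitution can be sanity-checked outside the database with plain sed; 192.168.1.10 below is a hypothetical stand-in for the old container IP:

```shell
# extra_clone_url line as stored in Redmine's extra_info YAML
# (192.168.1.10 is a made-up example of the old numeric address)
echo 'extra_clone_url: ssh://git@192.168.1.10:22/dev/project.git' |
  sed -E 's#(extra_clone_url: ssh://git@)[0-9.]+:[0-9]+#\1gogs:22#'
# → extra_clone_url: ssh://git@gogs:22/dev/project.git
```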

Jenkins. Jenkins also refers to Gogs by IP when retrieving repositories, and this had to be changed. It turns out that the $JENKINS_HOME/jobs/ folder contains one folder per job, and each $JENKINS_HOME/jobs/*/config.xml holds that job's configuration, containing something like:

  <definition class="org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition" plugin="workflow-cps@2.63">
    <scm class="hudson.plugins.git.GitSCM" plugin="git@3.9.3">

I proceeded with an update script:


for file in */config.xml; do
        # the old numeric address is matched generically here
        sed -i -e 's|ssh://git@[0-9.]*|ssh://git@gogs|g' "$file"
done
This was enough.

Now Gogs. It is problematic, really, but it turns out that Gogs uses SQLite to store the webhooks used to signal Jenkins to start a build script.
So I downloaded gogs.db from $GOGSHOME/gogs/data/gogs.db and updated it with a query in sqlitebrowser (a desktop app):

UPDATE webhook SET url = replace(url, '', 'jenkins');  -- first argument: the old IP

And that was enough for the first day; in four hours the services were back up and working again, but … Jenkins refused to work.

An additional problem with Jenkins. It was started as root, and in swarm mode I did not want such a thing. I had been using a quick-and-dirty setup provided by DigitalOcean just for testing (I am not blaming DigitalOcean for it: it was stated clearly!), and now nothing worked.

I was also using the pdmlab/jenkins-node-docker-agent:6.11.1 image as the pipeline agent, because I need to run some integration tests before deploying the service on another swarm machine.

First of all I found the source of the DigitalOcean-provided image and modified it, adding something like:

# gid 999 is "ping" in the jenkins/jenkins:lts-alpine image, but it is
# the "docker" group on the host machine
RUN apk -U add docker shadow \
 && rm -rf /var/cache/apk/* \
 && addgroup jenkins ping \
 && addgroup jenkins shadow

I needed to add jenkins to the ping group, whose gid (999) in the jenkins/jenkins:lts-alpine image is the docker group's gid on the host machine. That seemed to work.
But there were still problems using docker-compose with that image.

After some hours working on it I realized that the image was overkill: it runs a dockerd inside the container (dind, Docker-in-Docker), but for my use case I was mounting /var/run/docker.sock inside it, and I do not know whether the dockerd daemon just quits because it finds one already running (on the host machine), or something else.

In fact I run the Jenkins container (now as a service) with '-v /var/run/docker.sock:/var/run/docker.sock', which is good for Jenkins, and then Jenkins passes the same socket on to the Docker agent.

Also, I do not need Node.js, just docker-compose. So I started to write my own version of the image; I faced some problems and learnt some lessons.
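As a sketch, such an agent image only needs the Docker CLI and docker-compose; the base image and package names below are assumptions and may differ from the real one:

```dockerfile
# Hypothetical minimal pipeline-agent image: docker CLI + docker-compose,
# no Node.js. Base image and package names are assumptions.
FROM docker:stable

RUN apk add --no-cache py-pip \
 && pip install docker-compose
```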

In the end the Docker image is really simple.

Here is where I found the --group-add parameter to pass to the script:

Here is where I found how to do it:

So, with the help of the community (or communities), I found a way to get the build system back, in better shape.

I stopped the data flow – on React best practices

Reading the interesting article from Dan Abramov:

I stopped at memoization. I did not know this technique by name before, simply because nobody cared much about naming such patterns before functional programming. And so I discovered how much ground React hooks cover: useMemo() does the same thing, and in a cleaner way with the hook, in my opinion.

It reminds me of the lodash _.throttle() function, but the relation I drew between them is obscure; I think it comes from the optimizing nature of both concepts: do not refresh too often (_.throttle), do not recalculate if the source has not changed (memoize).
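The relation can be made concrete with minimal sketches of the two (illustrations only, not lodash's or React's actual implementations):

```javascript
// memoize: do not recalculate if the source has not changed
function memoize(fn) {
  const cache = new Map();
  return (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg);
  };
}

// throttle: do not run more often than once every `ms` milliseconds
function throttle(fn, ms) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= ms) {
      last = now;
      return fn(...args);
    }
  };
}

let calls = 0;
const square = memoize((x) => { calls++; return x * x; });
console.log(square(4), square(4), calls); // → 16 16 1  (computed once)
```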

But I went further into React hooks, and found redux-react-hook.

It looks simpler.

But from the point of view of testing, hooks cannot be mocked, or they can, but only by providing a reduced store.
While with a HOC one can test the inner component alone by providing just properties, here the hook refers to a library.
To decouple them and rely only on properties, the hooks themselves should be passed as properties: useDispatch() and useMappedState() should become
this.props.useDispatch and this.props.useMappedState.

That would make the code less appealing, but really not so bad. Changing the example from the project:

export function DeleteButton({index,useDispatch, useMappedState}) {

only the function-component declaration is changed, and the code is still testable by injecting the right functions.
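A sketch of how such a test could look, with hypothetical stubs injected in place of the real hooks (no renderer involved: the component here returns a plain object instead of JSX, just to illustrate the injection):

```javascript
// Hypothetical function component that takes its hooks as props.
// It returns a plain object instead of JSX so it can run outside React.
function DeleteButton({ index, useDispatch, useMappedState }) {
  const dispatch = useDispatch();
  const label = useMappedState((state) => state.labels[index]);
  return { label, onClick: () => dispatch({ type: 'DELETE', index }) };
}

// In a test, the "hooks" are just injected stubs:
const dispatched = [];
const btn = DeleteButton({
  index: 1,
  useDispatch: () => (action) => dispatched.push(action),
  useMappedState: (mapState) => mapState({ labels: ['save', 'delete'] }),
});
btn.onClick();
// btn.label is now 'delete', and dispatched holds { type: 'DELETE', index: 1 }
```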

Anyway, given the concepts behind Redux, using two providers calling the reducers should not be a problem; I mean, the switch can be progressive.

Git Merge: the GitHub conference

This is the list and a summary of the talks from the conference day: what stayed with me.

The future of free software.

In this talk Deb Nicholson talks about free software and the future.
First of all she describes the future, first through her view of it as a young girl, then using words from William Gibson: "The future is already here, it is just not evenly distributed."
So the future should be better distributed, and free software should play this role: distributing power to users.

Tales in scalability … google …

In this talk Ivan Frade and Minh Thai explain the challenge of managing huge Git repositories: Android and Chromium.
Ivan Frade explained the use of a bitmap for trees and references, such that every commit's relation to the others is coded into a map of bits: 1 means related, 0 means not.

The What, how and why of scaling repositories

In this talk Johan Abildskov exposes some use cases and how companies are supposed to scale repositories. There are two main strategies: a mono-repository, and a many-repositories policy.
He recommends "Accelerate, the Science of DevOps", a book that analyses the reasons to choose one or the other option, in this and all the cases where DevOps choices are under consideration.

Transition Git to SHA256

Brian Carlson explains the phases planned to switch existing repositories from SHA-1 to SHA-256. Due to Git's distributed nature, every repository manages its state in every place where a developer maintains a clone, so everything is inside its .git folder and arranged there.
The Git CLI manages every commit by hashing the current state together with the reference to its parent. The strategy to switch from the SHA-1 hash function to SHA-256 is divided into phases: starting from the first, where only SHA-1 is supported; through the middle ones, where SHA-1 and SHA-256 are both supported; ending with the very last, where only SHA-256 is supported, all commits are identified by a 64-character hash, and the 40-character SHA-1 ids are no longer understood.
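The visible difference is just the hash length, easy to check with coreutils:

```shell
# A SHA-1 id is 40 hex characters, a SHA-256 id is 64
printf 'hello' | sha1sum   | awk '{print length($1)}'   # → 40
printf 'hello' | sha256sum | awk '{print length($1)}'   # → 64
```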

Git Protocols ….

Brandon Williams describes the new features introduced with the Git v2 protocol. Git natively supports the https, ssh and git protocols. For the first two, the versioning problem (supporting both versions) is solved by HTTP headers and by an environment variable, respectively. But for the git protocol, which is a native binary protocol, the most optimized and quickest one, distinguishing old servers by their replies was a challenging job. Brandon explained how all that was faced.

Git support for large objects

Terry Parker exposes the use of LFS, the related CDN problems, and Git's partial clone option.

Git for games: …

Here John Austin dives into the game-development environment and how to manage repositories in Git, where there are large files that Git does not manage very well, due to its distributed nature and the need for every developer to fetch the whole repository.

He made a project, gitglobalgraph, that shows something related to the dependencies of trees across Git repositories.

The art of patience

Belen Barros Pena is an interaction designer, and she knew nothing about Git.
She explains how she learnt to use Git by example, just using the command line.
What she underscores is that there is no simple way to explain Git, because distributed version control is a complex thing, so the only way to explain it is to work with the designer, not for the designer, explaining, every time a problem comes up, how to use the Git command line to solve it.

Version control for law.

Ari Hershowitz compares the work of congressional and local-parliament lawyers with git diff, and tries to translate between line changes (diffs) and legal language, mapping things into a repository and back for the lawyer.

Git, the annotated notepad

A very accessible, but focused, talk that explains what an atomic change is and why it is important to commit every single logical change.
Aniket Subhash Kadam, independent developer.

Version control for visual learners

Veronica Hanus explains her problem when she needs to associate a commit with a change visually: changing a stylesheet is not visually evident from the commit message. She is investigating the use of Puppeteer/Selenium drivers and the like to automate such a task.

Panel conversation

(not really a talk)

Gitbase: a SQL interface to Git

It could be interesting.

Microsoft Windows into Git

There are at least some options to optimize the use of Git, and those were presented by a Microsoft (Azure) engineer, John Briggs.

Git based education ….

This talk explains how Git promotes the use of a … TDD style of development.

The workshop

For some reason I decided to book the workshop too, because I have to admit that I really have problems with Git: I like it, but sometimes I fight with it.


The most interesting tool presented during the workshop day was the visualizing tool.

In the visualizing tool there is an 'undo' command that has no counterpart in Git, but it makes the tool more usable and the learning more comfortable.

The most important command I learned, and of which I was not aware, is

git reflog

It is very useful for showing the log of reference changes.
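A quick sketch of why it is valuable: even after a hard reset, the reflog still knows the commit you "lost" (throwaway repository, made-up commit messages):

```shell
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m 'first'
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m 'precious work'
git reset -q --hard HEAD~1            # 'precious work' is gone from the branch
lost=$(git reflog --format='%H %gs' | grep precious | cut -d' ' -f1)
git log --oneline -1 "$lost"          # but the reflog still remembers it
```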

Also, the presentation of 10 Git problems and their solutions was really focused on problems that I face every day and need to solve (but I am not good at it).

A full day of workshops focusing on the Git tool seems too much, but it is a tool I have used every day for a long time now, without really understanding the concepts behind it. As Belen Barros Peña said in her talk, "The art of patience", the best way to explain Git to a non-developer is by example commands; that applies to developers too, because what I found when I tried to read the documentation was a pile of man pages documenting every single command with all possible uses and options, plus some other documents presenting the concepts behind Git. I am sure I once bought a book about Git too. But in the end there was little relation between the concepts exposed and the real Git commands.

What I learned is that 'git reflog' is really valuable, but also that the only way to understand how to solve a problem is to face that problem, as Briana Swift said, again and again, in order to internalize the problem and its solution.


Another theme was the patterns that can be implemented on top of Git. That means the policy to be followed, the opportunity to protect a branch, the definition of (and how many) branches.

There are also different options for arranging repositories: many repos, or one repo. The company I work for and I use the many-repos arrangement, and the idea of a single repo is not really appealing to me, but I discovered that git submodule exists; I do not know if it is related.

Another interesting tool I discovered is Gerrit, a collaboration tool that I will probably want to introduce in the company.

Also, during the conference there was a presentation of Jenkins X, but it was during the lunch break. From what I could understand, it is a command-line tool (CLI) that sets up pipelines on a Continuous Integration tool and lets you control those pipelines from the CLI. We are currently using Jenkins CI and I am fine with pipelines defined in a Jenkinsfile.

Another CI tool everybody is talking about is CircleCI; I do not know what advantages it has over Jenkins, maybe native/natural support for docker-compose?

php zmq in docker and checking whether the C compiler works… no

I spent two hours of what is left of my life trying to understand the problem with installing PHP's zmq extension in a Docker image with Alpine as the base. I would like to share my experience here.

In the morning I wanted to write a supporting class for integrating zmq into our system and use it from inside the containers where the services are supposed to run.

First I just added the pecl install command to the Dockerfile and expected it to work:


FROM php:7-cli-alpine
RUN apk add zeromq libzmq
RUN pecl install zmq-beta \
   && docker-php-ext-enable zmq


Hey, the Docker Hub page says there should be no problem (see the "PECL extensions" section).


The message was:

> docker build .
[blah blah check that worked]
checking whether the C compiler works... no
configure: error: in `/tmp/pear/temp/pear-build-defaultuserEhCEfp/zmq-1.1.3':
configure: error: C compiler cannot create executables
See `config.log' for more details

and no way to find this config.log:

> docker run --rm xxxx ls /tmp/pear/

No such file. In fact pecl cleans up its build folder every time, even when it fails. (Why?? Or rather: WHY??)

I started with a reduced image, just 

FROM php:7-cli-alpine
RUN apk add zeromq libzmq

I ran:

inside_docker> pecl install zmq-beta
you do not have autoconf!
inside_docker> apk add autoconf
inside_docker> pecl install zmq-beta 

No luck: still no config.log.

I was getting a bit nervous. I asked my colleague, a PHP expert … "Sorry, I'm not aware of this completely". More nervous. Some food.

I tried on my machine, a Debian, and pecl install worked.

I got the source from git (now I follow ):

inside_docker> git clone git://...
... no git command
inside_docker> apk add git
..[repeat again]
inside_docker> cd php-zmq
inside_docker> phpize
... no phpize
inside_docker> apk add phpize
inside_docker> phpize
inside_docker> ./configure
....[blah ...]

Check config.log:

And now it exists! But... what?!?

configure:2697: cc -V >&5
cc: error: unrecognized command line option '-V'
cc: fatal error: no input files
compilation terminated.
configure:2708: $? = 1
configure:2697: cc -qversion >&5
cc: error: unrecognized command line option '-qversion'; did you mean '--version'?
cc: fatal error: no input files
compilation terminated.
configure:2708: $? = 1
configure:2728: checking whether the C compiler works
configure:2750: cc    conftest.c  >&5
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find Scrt1.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find crti.o: No such file or directory
/usr/lib/gcc/x86_64-alpine-linux-musl/6.4.0/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find -lssp_nonshared
collect2: error: ld returned 1 exit status
configure:2754: $? = 1
configure:2792: result: no
configure: failed program was:
| /* confdefs.h */
| #define PACKAGE_NAME ""

What is -qversion? It looks wrong … (I wasted some more time checking that; as far as I can tell it is just configure probing the version flags of other vendors' compilers.)

Some lines above this:

Target: x86_64-alpine-linux-musl

Configured with: /home/buildozer/aports/main/gcc/src/gcc-6.4.0/configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --build=x86_64-alpine-linux-musl --host=x86_64-alpine-linux-musl --target=x86_64-alpine-linux-musl --with-pkgversion='Alpine 6.4.0' --enable-checking=release --disable-fixed-point --disable-libstdcxx-pch --disable-multilib --disable-nls --disable-werror --disable-symvers --enable-__cxa_atexit --enable-default-pie --enable-cloog-backend --enable-languages=c,c++,objc,java,fortran,ada --disable-libssp --disable-libmpx --disable-libmudflap --disable-libsanitizer --enable-shared --enable-threads --enable-tls --with-system-zlib --with-linker-hash-style=gnu
Thread model: posix
gcc version 6.4.0 (Alpine 6.4.0)

I decided to compare it with the config.log created on my Debian machine. I split the relevant lines:

../src/configure -v
--with-pkgversion='Ubuntu 7.3.0-27ubuntu1~18.04'
 --enable-libmpx --enable-plugin

/home/buildozer/aports/main/gcc/src/gcc-6.4.0/configure --prefix=/usr
 --with-pkgversion='Alpine 6.4.0'

I asked the internet: what is multilib? Is it not possible to have m32 on Alpine? And blah … ( ). Fuck. I just want something like:

inside_docker> apk add build-base

and then … pecl:


inside_docker> pecl install zmq-beta


It ran! It works. So my Dockerfile:

FROM php:7-cli-alpine
RUN apk add autoconf gcc libzmq zeromq-dev zeromq coreutils build-base
RUN pecl install zmq-beta \
   && docker-php-ext-enable zmq

That’s all. Now I can start the real work.

News from Verona ( ReactJS Day – October, 5th, 2018)

I wrote some kind of report about ReactJS Day (Verona, October 5th), but my PC froze (TODO: buy a new PC, once I understand which one) and I had not saved the document. I took a screenshot.

My experiment with React context will focus on the edit state of a page. Currently the page is made of various components, some of which can be switched between edit mode and view mode. I want to add the constraint that only one element on the page is in edit mode at a time, and also add a switch-to-view-mode-on-ESC-key behavior.

I should provide a value via EditStateContext.Provider: the element id.

For this I need every editable element to have a unique id in the page.

Some have faced this already, and the most immediate and simple solution was proposed there.

But I will skip this problem by passing a register function:

const EditStateContext = React.createContext({
   editElement: null,
   register: () => {},      // returns a unique id
   editModeOn: (id) => {},  // toggle on
   editModeOff: (id) => {}  // toggle off
});

As documented, the value should come from the state of a containing component:

import React from 'react';
import { EditStateContext } from './EditStateContext';

class App extends React.Component {
  constructor(props) {
    super(props);
    this.editableComponents = [];
    this.register = () => {
      let newLength = this.editableComponents.push(this.editableComponents.length);
      return newLength - 1; // this is enough to stay safe on race conditions
    };
    this.editModeOn = (id) => { this.setState({ editElement: id }); };
    this.editModeOff = (id) => {
      // reset only if the element being switched off is the one in edit mode
      this.setState((oldState) => {
        if (oldState.editElement === id) { return { editElement: -1 }; }
      });
    };
    this.state = {
      // about the JS language: I dislike -1 as indexOf's notFound, but ...
      editElement: -1,
      register: this.register,
      editModeOn: this.editModeOn,
      editModeOff: this.editModeOff
    };
  }
  componentDidMount() {
    document.addEventListener('keyup', (e) => {
      if (e.key === 'Escape') { // replace with something meaningful
        this.setState(() => ({ editElement: -1 }));
      }
    });
  }
  render() {
    return (
      <EditStateContext.Provider value={this.state}>
        <Content />
      </EditStateContext.Provider>
    );
  }
}

The consumer will receive the value

import { EditStateContext } from './EditStateContext';

function withSkipEditableEl(Element) {
   return (props) => (
     <EditStateContext.Consumer>
       { (value) => (<Element {...props} editState={value} />) }
     </EditStateContext.Consumer>
   );
}

Now I can use that HOC function, passing it the editable element.

It may work; I will test it tomorrow.

I am still confused about context, and a number of tasks are pending.

Docker in a Docker

Running a docker container from inside a docker container, I mean.

Of course one can define an image that instantiates a container as its default command, and this would lead to an infinite loop of forks … let's see.

The infinite containment

docker run -v /var/run/docker.sock:/var/run/docker.sock --name whocare alwaysfork

OK, that is bad and I will not describe it.

Container near a container

Maybe it is a bad thing, but most of the time "docker" and "container" are used interchangeably, so I had better have written "docker near a docker", which sounds cool.

Why "near" and not "in"? As in the infinite-containment example, it is "near" because every contained execution is managed by the main daemon (dockerd) running on the host machine.

In fact, in an infinite containment you are not forking endlessly; you are asking dockerd to instantiate new containers endlessly. There is nothing that makes one of the containers a manager of the others, or a more authoritative manager than another: every container that can access the /var/run/docker.sock socket and has the right permissions can stop or remove any container on the machine.

Security concerns

I am not an expert on security concerns, but every time you do something "strange" there is a great chance that the strangeness leads to a security hole. (Not being an expert, I think even "non-strange" things done by me can imply bad things.)

Here the problem is due to the way Docker manages users, and mostly how it manages uid=0, root. The run command by itself runs the image's standard command, and it is expected that the Dockerfile specifies the USER that will run that default command (something != root).

But the weird part here is that, being actually a container-near-container situation, at any point it can be switched to root with a

docker exec -u root -it myfunnyimage ./abadcommandforyou

This could be run from any container, call it aquitecontainer; all that is needed is the docker command inside aquitecontainer's image, and the socket mounted as -v /var/run/docker.sock:/var/run/docker.sock, and … so?

So you can switch to root from a container that is not running as root, simply because you can see the socket; this is enough.

Three cases arise:

  1. the containing container is running as root, so it needs nothing special to access the socket
  2. the containing container is running as an unprivileged user, but it has access to the socket, so nothing special is needed to access the socket from the anyuserisused inside
  3. the containing container is instantiated by root but runs as an unprivileged user, which again maps to the calling user, which is root

Yes, the problem here is that the Docker daemon always maps the "non-root" user to the uid of the user that calls its API.

But really that is not enough.

In fact, to access /var/run/docker.sock a user must be part of the docker group. When a container is instantiated with its default user, the groups are not migrated, so the image's definition of the user's group membership is something to take into account: if the user inside the container has no access to docker.sock, it cannot communicate with the running dockerd, so no more Docker-in-Docker, no more root escalation.

A matter of elegance

I needed a moment to fully understand this:

const compose = ( ...fns ) => fns.reduce( ( f, g ) => ( ...args ) => f( g( ...args ) ) );

The reduce method is tricky: when initialValue is not given, the first element of the array is used as the initial accumulator, so f is the accumulated (outer) function and g the current one, and (...args) is the formal parameter list of the resulting function. At the end of the reduction all the ...fns are applied in reverse order (i.e. in compose(f, g, h), g is applied before f and h before g, resulting in (...args) => f(g(h(...args)))).

But once I got it, it turned out to be a really elegant way to write it:

const $d = $data.mergeMap( compose(Rx.Observable.from, r=>r.split('\n')) );

and not:

const $d = $data.mergeMap(r=>Rx.Observable.from(r.split('\n')));

which is less readable.
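The right-to-left application order is easy to check with plain functions (a small standalone sketch):

```javascript
const compose = (...fns) => fns.reduce((f, g) => (...args) => f(g(...args)));

const inc = (x) => x + 1;     // leftmost: applied last
const double = (x) => x * 2;  // rightmost: applied first

console.log(compose(inc, double)(3)); // → 7, i.e. inc(double(3))
console.log(compose(double, inc)(3)); // → 8, i.e. double(inc(3))
```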

The dirty work of CI/CD and DevOps

Now I will tell a story. Well, I will keep it short, it is getting late.


Ever since software has needed to be released in real time, several supporting tools have been written to do the dirty work of running tests, packaging, and distributing the packages to the various destination servers.

One of these is Jenkins, whose pipelines are scripted in Groovy, a language that compiles for the JVM.

In March of last year I saw a presentation by a guy from Red Hat who used exactly Jenkins and Gogs (a minimal GitHub-like repo). Of course, since where I work we are in the Stone Age as far as Continuous Integration and Delivery are concerned, I decided I had to implement this.

So since the beginning of January, between the flu and a stomach ache, I have set up Jenkins and made it talk with Gogs (already in use for almost a month) … actually it is Gogs that sends a message to Jenkins when a push arrives.

To make the thing more watertight I installed Gogs and Jenkins in their own Docker containers, and the IP is indicated statically and numerically, because the default bridge interface does not provide name resolution (whereas by defining a specific network the addresses should magically be resolved by the name or id of the image; but these are things I will fix in the future, maybe).

All well and good, until on Monday I decided that the Electron application I am writing should also build automatically, and this time trouble arrived.

1. Jenkins runs the build in a Docker container that it creates right after fetching the repo (and yes, it is enough to pass /var/run/docker.sock as a volume to the container where Jenkins runs, and the client inside the Jenkins container talks to the host's Docker daemon: I find that quite cool).

2. Electron is not happy if you use a minimal image like Alpine. This one is simple: use electronuserland/builder:wine.

3. I had to struggle a bit, because at first I tried to use Babel 7.x, which was in beta, so I modified .babelrc and then other files referring to Babel, but it turned out that webpack refused to work with Babel 7, so I went back to 6.23, which is fine anyway, but why did it even cross my mind, etc. Then npm install did not work and complained about something incomprehensible (it cannot find the module file, flow-typed? what are you even looking for??) and I found: "One of the things that can cause this bug is adding packages to the wrong dependency section. For instance yarn add gulp will add it to dependencies instead of devDependencies".
… and then? Then it wants to publish the Debian package who-knows-where, a package I do not even know why I am building, and I have to add --publish never.
And it is Wednesday. Actually, because of the time zone, it is Thursday … time zone? Yes.

And do Promises return (resolve to) functions?

Tonight I was here with an (acceptance) test, trying to patch it together as best I could.
Basically, using jsdom and Jest, I have legacy code that uses jQuery (a very old version), and the JS lives in the page. I pasted it into a separate file, prepare.js. And since I have to load it through Node, I put it inside a function that I call with the $ parameter (which is jQuery). Like:
let start = ($) => {
    $(function() {
        var actions = ActionGroup($("#actions"));
        // ... and so on
        function loadExample() {
            const fs = require('fs');
            var exJson = fs.readFileSync(/* ... */);
        }
    });
};
module.exports = start;

So I can run only one case, test only one JSON. (And I also have to use an arbitrary timeout before doing the expects and so on.)

Now a new case to check arrives, a new JSON. And here I have a moment of delirium: I think maybe I should turn the whole thing into a function to use as a class, i.e. instantiate it with new … Or I use a Promise that resolves to a function. That is:
let start = ($) => {
    return new Promise((resolve, reject) => {
        $(function() {
            // actions and so on ...
            function loadExample(filename) { // now a parameter
                // ...
            }
            resolve(loadExample);
        });
    }); // I never call reject()
};
module.exports = start;
And so when I call it I have:
let prepare = require('./prepare');
prepare(window.$).then((loadAction) => {
    // ... run the case and the expects
})
.catch((err) => expect(false).equals(true));
Since I am not very functional-minded, changing two lines to get this seemed pretty cool to me.
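Stripped of jQuery and Jest, the pattern is just this (a minimal standalone sketch):

```javascript
// A Promise that resolves to a function: callers wait for the one-time
// setup to finish, then receive a callable to run as many cases as needed.
const start = () => new Promise((resolve) => {
  // pretend this is the page/jQuery setup ...
  const loadExample = (filename) => `loaded ${filename}`;
  resolve(loadExample);
});

start().then((loadAction) => {
  console.log(loadAction('case1.json')); // → loaded case1.json
  console.log(loadAction('case2.json')); // → loaded case2.json
});
```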

Observable. Taking notes

Probably late compared to the rest of the world, I have cleared up an important point about observables, and I understood it using redux-observable. Taking notes.

When defining an Epic, from a stream of action$ you return another stream of actions.

The Epic is executed after the corresponding reducer has done its work.

It is possible, given an action, to return another one via .map();
in that case the object to return is itself an action.

It is also possible to return a stream (an observable) that emits one (or more) actions; in this case
you must use the mergeMap() method:


const loadOrdersEpic = (action$, store) => {
  return action$
    .flatMap(() => Observable.from(ajax('req')))
    .mergeMap((res) => {
      return Observable.merge(
        Observable.of({ type: 'DATAREADY', res })
        // ... possibly other actions
      );
    });
};
Unlike map, here I can compose observables and therefore generate several actions (that is, a stream that emits more than one action). With .map() instead:

const loadOrdersEpic = (action$, store) => {
  return action$
    .flatMap(() => Observable.from(ajax('req')))
    .map((res) => {
      return { type: 'DATAREADY', res };
    });
};