option to pull from insecure (custom setup) registry #53
greg-szabo wants to merge 1 commit into main from
Conversation
I like the "remote docker compose" idea. How secure is it, compared to the config in this PR?
The "remote docker compose" idea is SSH-based, so it's as secure as our current setup. This PR introduces the option to "build once, deploy lots": currently we build our source code as many times as we run servers, since each server downloads the source code and builds it locally. With this change, we can run a Docker Registry on one of the servers, build the source code there, and have all the other servers `docker pull` our custom image. It's faster than building on each server. I tried this on a Rust repository while testing rust-libp2p. The Rust build process is fairly slow, even compared to the Golang build process. The QA infrastructure code here has a "command and control" server deployed next to the QA test servers. It hosts Grafana, Prometheus, and a Docker Registry. The management of the servers there is still fairly manual, because it was mainly for testing. But building once and `docker pull`ing the image to the other servers turned out to be much simpler than building custom images on all servers.
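As a rough sketch of the "build once, deploy lots" flow described above (the host name `build-host`, port `5000`, and image name `our-app` are illustrative assumptions, not part of this PR):

```shell
# On the build server: run a plain Docker Registry and push the image to it.
docker run -d -p 5000:5000 --name registry registry:2
docker build -t build-host:5000/our-app:latest .
docker push build-host:5000/our-app:latest

# On every other server: pull the prebuilt image instead of rebuilding.
docker pull build-host:5000/our-app:latest
```

Without TLS on the registry, that final `docker pull` only works once the daemons on the other servers are told the registry is insecure, which is what this PR's config change enables.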
Comparing the "remote docker compose" and the "docker build, upload to insecure registry, and pull" approaches in my mind:
So, in terms of latency of the QA step consisting of building & deploying, are we sure the latter will be faster? Anyway, this is just some thought of mine, with no hard data to back it up, so I trust your knowledge and experience when comparing both methods. If, in the end, we go for the approach based on the insecure registry, is there a way to host the registry itself in one of our droplets, so that we can replace this line:
by, e.g., something like this?
This is a change that should help once we start leveraging docker images for QA testing (instead of the current systemd-based setup).
This allows the nodes to download custom images from a locally hosted registry.
In practice, we can build one docker image with our custom binary, host it on one of the servers using Docker Registry and use a remote docker-compose call to pull down the image to the rest of the nodes and start running them automagically.
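A hedged sketch of that remote docker-compose step, assuming each node is reachable over SSH (node names here are hypothetical, and the compose file is assumed to reference the image hosted on our registry):

```shell
# DOCKER_HOST=ssh://... points the local docker compose client at a remote
# Docker daemon, so "pull" and "up" execute on that node, not locally.
for node in node1 node2 node3; do
  DOCKER_HOST="ssh://root@${node}" docker compose pull
  DOCKER_HOST="ssh://root@${node}" docker compose up -d
done
```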
The problem is that `docker pull` defaults to using HTTPS (for good reason) and client-server certificates for authentication. This is overkill in our local-network or virtual-network setup. If this feels insecure, we can start managing certificates or find a different way to deploy the images to the nodes. (docker-compose can build nodes remotely too.)
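For reference, marking a registry as insecure is done through the Docker daemon config on each pulling node; a minimal sketch, assuming the registry runs at `build-host:5000` (an illustrative address):

```shell
# Tell this node's Docker daemon to allow plain-HTTP pulls from our registry,
# then restart the daemon so the setting takes effect.
cat >/etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["build-host:5000"]
}
EOF
systemctl restart docker
```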
The change in itself doesn't have any impact right now.