Cleaning up a private docker registry

Cleaning up unused docker registry layers is not straightforward; there is nothing readily available from docker for it. As more and more layers are pushed and tagged, image layers that are no longer required tend to accumulate, and these unnecessary layers can consume a lot of storage space. This page describes how to delete unused layers without disturbing the registry.
I am going to explain how to clean up unused layers from a docker registry using HTTP API V2.
Docker HTTP API V2
The current version of docker provides an option to interact with the images in a remote private registry using HTTP API version 2.
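Before using the endpoints below, you can verify that the registry actually speaks API version 2 by querying the base endpoint; a v2 registry answers with HTTP 200 and the header Docker-Distribution-Api-Version: registry/2.0. Here reg-server:5000 is my registry host and port; substitute your own.

$ curl -v http://reg-server:5000/v2/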
A few useful Digest APIs
List all the repositories available in the private registry
CATALOG
$ curl reg-server:5000/v2/_catalog

Output
------
{"repositories":["alpine","1_ubuntu_16.04","centos","centos6-build-test","centos6-build-qa","centos6-build-build","centos6-jenkins-agent","jenkins","squid-deb-proxy","ubuntu","ubuntu-build","ubuntu-build-agent","ubuntu_16.04"]}
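To see which tags each repository holds before deciding what to delete, you can combine the catalog with the tags/list endpoint. A minimal sketch, assuming jq is installed on the machine running curl:

## print the tags of every repository in the registry
for repo in $(curl -s http://reg-server:5000/v2/_catalog | jq -r '.repositories[]'); do
    echo "== $repo =="
    curl -s http://reg-server:5000/v2/$repo/tags/list | jq -r '.tags[]?'
done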
Getting a list of layers used, and other metadata of a repository (image)
GET DETAILS
$ curl -k -v --silent -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X GET http://reg-server:5000/v2/ubuntu-jenkins-agent/manifests/latest
Output
------
* Trying 10.0.93.36...
* Connected to reg-server (12.0.33.1) port 5000 (#0)
> GET /v2/ubuntu-jenkins-agent/manifests/latest HTTP/1.1
> Host: reg-server:5000
> User-Agent: curl/7.47.0
> Accept: application/vnd.docker.distribution.manifest.v2+json
> 
< HTTP/1.1 200 OK
< Content-Length: 5540
< Content-Type: application/vnd.docker.distribution.manifest.v2+json
< Docker-Content-Digest: sha256:d00e05048b4ef3d7e175d233a306f64175f1c716c7552245e14099c7e8cf0948
< Docker-Distribution-Api-Version: registry/2.0
< Etag: "sha256:d00e05048b4ef3d7e175d233a306f64175f1c716c7552245e14099c7e8cf0948"
< X-Content-Type-Options: nosniff
< Date: Fri, 06 Apr 2018 08:39:42 GMT
< 
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 17980,
"digest": "sha256:6c4e6280d347be4762dd77a20845ec69c4c1da3424195523765dfaeeecbffa22"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 67103213,
"digest": "sha256:5d890c53be21ea2d7c417960dfdb8edf87f623bfd016751261fac26943a0b188"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 72628,
"digest": "sha256:f775b856e1997836995617cf691ea4ffb0b1ef967ac73db661666ba3a216d432"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 363,
"digest": "sha256:552c4f407d99f5ff4e96e79430bae55c4ff1154824dab3945ef4bb0482c826d5"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 676,
"digest": "sha256:fda304b96f8a99052eacb6ce515f26d8ff10fc78cbb6a9f09e996faaadabdaaa"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 162,
"digest": "sha256:2b033adb904af1b663e78cf33f513fc2f98730b9c0dcab4a3ad4cd85eb825880"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 69424905,
"digest": "sha256:3b93b65608a04bbd902b73d76e61f52a28b2a0b0faca95b8303dbe9a3397a688"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 76466711,
"digest": "sha256:8ccd40bb9dd1bb2dfbc16a0ab661b817ce9b6af0617ae679cfabaa80a74414c5"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 1066,
"digest": "sha256:0cf16c8ac4188b500ccbdf405c1288c298c4a50e40f0aafe485cd2d58f81ee8f"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 4606,
"digest": "sha256:5670062ffd23e300b2fa32af6d9763211372ae919801c1ee98d1827aec24b57c"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 1020,
"digest": "sha256:ce7a2ec46384b5e5cf94f6bdded1c8a46c1614b4e5216101e706a5a47dd1a8b5"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 38402,
"digest": "sha256:73dd7e155ea00edcb02adc94f271aa07924faac30b4413fba823cbb7be51541c"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 175,
"digest": "sha256:46094570601896ddccb7446c48fd64e7089fb3fb826b747e6db4c971da4f2d9a"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 28176,
"digest": "sha256:2a4b48722514ef189776665fa00e98b24296117db359b491c83d995c53d4a3c1"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 47021,
"digest": "sha256:55c8e005550e7be9ea7f990696b6b863a270f7a43a059b8e52af9e84a745403d"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 415368,
"digest": "sha256:9cbcf4bfa6017c078eb8e2f20da1593e91cb0b5d271538c480c2a847fd0453ea"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 98274,
"digest": "sha256:4e6638ed8398ca234d28dad09047ffba24fa96961497d8975ea20fe89738fe7e"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 1133119,
"digest": "sha256:cffca1cea7728d2f65b29956c5f4ca46e3130e7c5eca2a8e74ceee7760f75227"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 465660,
"digest": "sha256:586d6232925d2948926e1c0697657ddeb8daf63840f50d14e75a5d80a140f53f"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 114104,
"digest": "sha256:329692f820b7a8f115763b238025b0021eeebfa83f9ba5ba94d09e4d30ab0443"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 11356,
"digest": "sha256:8b948376919e9027aed5fb110f827e29b5d5b89faa249dd5e96d9f322f16671d"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 233,
"digest": "sha256:280ed15840b101f31d36195774dbc64a0756b10445ac86a7d56c9323b254d3ab"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 177,
"digest": "sha256:8b3430dea5aa62d32f1afa524062c8842b69f4e4c6c206c506096d61b86cf9aa"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 5397987,
"digest": "sha256:940f81402bbb4f7cd2fc2e27ad60243ee352594d2d347d4cc7d061543a645579"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 265,
"digest": "sha256:f22d1b75d0776f7b50c189b5f902411a16f1e558bb182eaa50d87970a783c3ed"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 573,
"digest": "sha256:9d201349e9cab134e816a063bcbf18d1a23ff854a953492b9d9aa4165cf059ec"
}
]
* Connection #0 to host reg-server left intact
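If you only need the layer digests or the total compressed size from that manifest, filtering the response with jq is more convenient than reading the raw output (again assuming jq is available):

## print just the layer digests of the manifest
$ curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" http://reg-server:5000/v2/ubuntu-jenkins-agent/manifests/latest | jq -r '.layers[].digest'

## sum of the compressed layer sizes, in bytes
$ curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" http://reg-server:5000/v2/ubuntu-jenkins-agent/manifests/latest | jq '[.layers[].size] | add'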
Deleting an image tag using the Docker-Content-Digest
Using this API, any given tag of an image can be deleted remotely. This deletes the layers associated with that tag, but you first have to identify which tags should be deleted. After deleting a tag remotely, you have to run the garbage collector for the deletion to take effect. Please note that the DELETE API requires the digest of an image tag. For a tagged image, the digest can be obtained from the tag as below.
The tag name in the snippet below is latest; it will work with any other named tag.
GET DIGEST
$ curl -k -v --silent -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X GET http://reg-server:5000/v2/ubuntu-jenkins-agent/manifests/latest 2>&1 | grep Docker-Content-Digest | awk '{print ($3)}'
To get the digest of an untagged image, go to the storage of the docker registry inside the registry container. If the registry container was configured with a volume, the digest can be obtained from that location on the host as well. A one-liner that lists these digests for all repositories is sketched after the listing below.
Suppose 966fcf31b8a2 is the container id of the private registry and /var/lib/registry is the registry storage inside the container:
FIND DIGEST
$ sudo docker exec -it 966fcf31b8a2 sh
$ cd /var/lib/registry/docker/registry/v2/repositories/alpine/_manifests/revisions/sha256

## the sha256 names listed below are the manifest revisions which reference the image tags; such a name is called a digest.
## these digests are used when deleting an image tag.
## there are 3 revisions here without explicit tag names; the most recent one is always reachable as :latest
$ ls -l
drwxr-xr-x 2 root root 18 Apr 4 06:02 ef04ea6e2324b2e1f2b1a25a56defc92d24f6b364e14ddd081241426af82aa2d
drwxr-xr-x 2 root root 18 Apr 4 10:06 b978ab300d84b859181fbf8c579315e709d22bb47e25b448952e6dfdc79be1f5
drwxr-xr-x 2 root root 6 Apr 4 10:10 c62d369018e25c79c651cdebc10d380e585acbda33340476d9f85d34c4a37b0d
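The same information can be collected for every repository at once with a single find over the registry storage. This assumes the default filesystem storage layout; run it inside the container, or against the mounted host volume by replacing /var/lib/registry with the volume path.

## list every manifest revision (digest) of every repository
$ find /var/lib/registry/docker/registry/v2/repositories -type d -path '*/_manifests/revisions/sha256/*'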

To delete a tag using a digest (which you got in the previous step), deletion must first be enabled in the registry.
Note: there are two ways to enable deletion.
  1. Stop the running container and start it again with the additional environment variable -e REGISTRY_STORAGE_DELETE_ENABLED=true. The run command looks like
    docker run -d -p 5000:5000 -v /home/sanjeeva/registry_vol:/var/lib/registry -e REGISTRY_STORAGE_DELETE_ENABLED=true --name registry registry:2
  2. Edit the config file inside the registry container and restart it.
    vi /etc/docker/registry/config.yml
    Under storage: add these 2 lines
    delete:
      enabled: true
    Then run the command below; the resulting storage section is shown after this note.
    $ sudo docker restart <registry-container>
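For reference, with the delete stanza in place, the storage section of /etc/docker/registry/config.yml in the stock registry:2 image looks roughly like this (the cache and filesystem entries are the image defaults and may differ in your setup):

storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true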
DELETE
$ curl -k -v --silent -X DELETE http://localhost:5000/v2/alpine/manifests/sha256:934c25b1f1c266e31ee3693890b08f67cf0b05c162561edc779150c0ece7d872
DELETE OUTPUT
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 5000 (#0)
> DELETE /v2/alpine/manifests/sha256:934c25b1f1c266e31ee3693890b08f67cf0b05c162561edc779150c0ece7d872 HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 202 Accepted
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
< Date: Fri, 30 Mar 2018 10:58:18 GMT
< Content-Length: 0
< Content-Type: text/plain; charset=utf-8
< 
* Connection #0 to host localhost left intact
Garbage-collection
After deleting an image tag using the API, run garbage collection inside the container.
GARBAGE-COLLECT
## dry run with -d
$ /bin/registry garbage-collect -d /etc/docker/registry/config.yml
## actual run without -d
$ /bin/registry garbage-collect /etc/docker/registry/config.yml
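The same commands can be run from the host without opening a shell, using docker exec against the registry container (966fcf31b8a2 in the earlier example):

## dry run from the host
$ sudo docker exec -it 966fcf31b8a2 /bin/registry garbage-collect -d /etc/docker/registry/config.yml
## actual run from the host
$ sudo docker exec -it 966fcf31b8a2 /bin/registry garbage-collect /etc/docker/registry/config.yml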
Summary of steps to clean up a registry
  1. Enable layer deletion in the docker registry (there are 2 ways to do this; 1.b is the preferred method)
    1. By stopping and deleting the registry container and starting it with deletion enabled
      1. Stop and delete the registry container. This may delete all the images in the registry unless it was started with a volume (-v option).
      2. Start the registry container with REGISTRY_STORAGE_DELETE_ENABLED=true as
      3. docker run -d -p 5000:5000 -v /home/sanjeeva/registry_vol:/var/lib/registry -e REGISTRY_STORAGE_DELETE_ENABLED=true --name registry registry:2
    2. By restarting the container after editing the config file
      1. $ vi /etc/docker/registry/config.yml
      2. Under storage: add these 2 lines
        delete:
          enabled: true
      3. Run the below command
      4. $ sudo docker restart <registry-container>
  2. Inside the registry container, for each repository
    1. List all the manifests except the latest tag.
  3. Run the API V2 DELETE command on the non-latest manifests (a scripted version of steps 2 and 3 is sketched after this list)
    1. $ curl -k -v --silent -X DELETE http://reg-server:5000/v2/alpine/manifests/sha256:249c714c688541c83ca2c9b2a8c30dd77b5c45c836e6c69632815ee3614ccbd2
  4. Run the garbage collector inside the registry container
    1. It will delete the image layers which are no longer referenced by any tag (manifest), except the latest tag
    2. We may see a few folders left undeleted.
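As referenced in step 3 above, here is a rough sketch that strings steps 2 and 3 together for a single repository: it lists the tags, resolves every non-latest tag to its digest, and deletes that manifest over the API. It assumes jq is installed, deletion is enabled, and the registry is reachable at reg-server:5000; review the tag list before running anything destructive.

## delete every tag of one repository except "latest" (sketch; requires jq and delete enabled)
REGISTRY=http://reg-server:5000
REPO=alpine
for tag in $(curl -s $REGISTRY/v2/$REPO/tags/list | jq -r '.tags[]?'); do
    [ "$tag" = "latest" ] && continue
    digest=$(curl -s -I -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
        $REGISTRY/v2/$REPO/manifests/$tag | awk 'tolower($1)=="docker-content-digest:" {print $2}' | tr -d '\r')
    echo "deleting $REPO:$tag ($digest)"
    curl -s -X DELETE $REGISTRY/v2/$REPO/manifests/$digest
done
## afterwards, run the garbage collector inside the registry container

Keep in mind that deleting a manifest by digest removes every tag that points to that digest, so if latest happens to share a digest with another tag, deleting that tag will remove latest as well.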


How to use the Jenkins docker image to run a Jenkins master

How to use the Jenkins docker image as a master and as a build node.
Containerization has become very popular in the recent past, and Docker is a very prominent player in the field.
Today I am going to show the step-by-step procedure for using a docker image as both a Jenkins master and a node. I am also going to address a few known issues and how to work around them. In this tutorial I will use the Jenkins image jenkins/jenkins:2.107.3.
Pull the image from docker hub
$ docker pull jenkins/jenkins:2.107.3
Note that the dockerized Jenkins will use /var/jenkins_home as the Jenkins home. If you want to change this, you have to edit the Dockerfile published on Docker Hub that was used to build the image jenkins/jenkins:2.107.3; I will not cover that in this tutorial.
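If you want to double-check which user and which volume the stock image declares before customizing it, docker inspect can show both; the format string below is just one convenient way to pull those two fields out:

$ docker image inspect jenkins/jenkins:2.107.3 --format 'user={{.Config.User}} volumes={{.Config.Volumes}}'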
These are the additional steps we are doing on top of the base image jenkins/jenkins:2.107.3:
1.     Customize the Jenkins docker image with additional packages and new users
2.     Use a different user than the default jenkins user
3.     Use a different home directory for the new user but keep the same jenkins_home folder
4.     Use a host volume mapped to jenkins_home inside the container
5.     Use the host machine's ssh keys inside the container as well
Let us go through these one by one and look at the issues involved.
In most cases you will not want to use the default user called jenkins; you need a user specific to your organization or company. I will call the new user user_jenkins. This user already exists on my host machine and needs to be created in the new image.
The Dockerfile below adds the new user, declares 2 build arguments for the ssh private and public keys, copies the plugins.txt file and installs the listed plugins, and finally writes the ssh keys into the container.

FROM jenkins/jenkins:2.107.3
ARG priv
ARG pub
USER root

COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt

# Add user user_jenkins using static UID/GID from AD, home /var/lib/jenkins, adequate shell
RUN groupadd -g 215 group_jenkins && \
    useradd -u 1396 -g group_jenkins -d /var/jenkins_home_tmp -s /bin/bash -m user_jenkins

USER user_jenkins

# insert ssh keys
RUN mkdir -p ~/.ssh && \
    echo "$priv" > ~/.ssh/id_rsa && \
    echo "$pub" > ~/.ssh/id_rsa.pub && \
    chmod 644 ~/.ssh/id_rsa.pub && chmod 600 ~/.ssh/id_rsa && chmod 700 ~/.ssh 

Now build the docker image using the docker build command. We need to keep plugins.txt in the build context, and the ssh keys are passed as build parameters.

The format of the plugins.txt file should be as below:

mask-passwords:2.8
workflow-step-api:2.2
external-monitor-job:1.4
accelerated-build-now-plugin:1.0.1
pam-auth:1.1
ssh-credentials:1.10
plot:1.11
configurationslicing:1.47
active-directory:2.2
git-parameter:0.6.1
disk-usage:0.28
structs:1.2
envinject:1.92.1
release:2.5.4
custom-tools-plugin:0.4.4
coverity:1.7.1
elastic-axis:1.2
run-condition:1.0
slave-status:1.6
groovy-label-assignment:1.2.0
matrix-project:1.4.1
scm-api:1.2
warnings:4.56
docker-plugin:0.16.2
ssh-agent:1.13
maven-plugin:2.7.1
periodic-reincarnation:1.10
icon-shim:2.0.3
docker-slaves:1.0.6
mapdb-api:1.0.9.0
throttle-concurrents:1.9.0
analysis-core:1.79
jquery:1.11.2-0
slave-setup:1.10
subversion:1.54
postbuild-task:1.8
artifactdeployer:0.33
timestamper:1.8.8
gerrit-trigger:2.21.1
bouncycastle-api:1.648.3
translation:1.10
monitoring:1.59.0
credentials:2.1.13
javadoc:1.1
build-name-setter:1.6.5
antisamy-markup-formatter:1.1
PrioritySorter:3.4
extended-choice-parameter:0.74
workflow-scm-step:2.2
cvs:2.11
authentication-tokens:1.3
ldap:1.11
unreliable-slave-plugin:1.2
description-setter:1.10
parameterized-trigger:2.31
project-stats-plugin:0.4
token-macro:1.12.1
ant:1.2
script-security:1.29
leastload:1.0.3
junit:1.2-beta-4
ssh-slaves:1.9
repo:1.10.2
email-ext:2.44
purge-build-queue-plugin:1.0
git:2.5.2
matrix-auth:1.1
keepSlaveOffline:1.0
preSCMbuildstep:0.3
mailer:1.11
docker-commons:1.6
conditional-buildstep:1.3.5
jobConfigHistory:2.16
dashboard-view:2.9.10
durable-task:1.13
git-client:1.19.7
build-failure-analyzer:1.16.0

Passing ssh keys to the image at build time is necessary for various reasons. One such reason is cloning from Git or GitHub, which requires ssh keys to connect to the servers. Copying ssh keys is also tricky because we are copying both private and public keys. Earlier I used to keep the ssh keys in plain text inside the Dockerfile, but that is very dangerous. Now I pass them as build parameters. The only concern with this approach is that the keys are still available in a docker layer. There is an experimental option in the docker engine to squash the layers, so the layers that expose sensitive information can be merged into other layers. This can be achieved using the --squash option in the docker build command.
The docker build command
$ docker build --build-arg priv="$(cat ~/.ssh/id_rsa)" --build-arg pub="$(cat ~/.ssh/id_rsa.pub)" -t jenkins_2 --no-cache  -f docker .
The key thing to note here is the --build-arg parameters. I am reading the existing ssh keys from the host machine and passing them as parameters. Note that there are 2 separate --build-arg parameters, one each for the private key and the public key. The parameters are handled by the ARG instructions in the Dockerfile.
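If you want to try the --squash option mentioned earlier, the build command stays the same with one extra flag. This is only a sketch: --squash works only when the docker daemon has experimental features enabled (for example "experimental": true in /etc/docker/daemon.json).

$ docker build --squash --build-arg priv="$(cat ~/.ssh/id_rsa)" --build-arg pub="$(cat ~/.ssh/id_rsa.pub)" -t jenkins_2 --no-cache -f docker .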
Once the image is built successfully, we can start the container.
Run command
$ docker run -d  -p 8082:8080  -p 5002:5000  jenkins_2
I am using different host ports here. You will get an error message like the one shown below, or similar.

17:46:34 root dev ~ → docker logs 4s6yuknv568
touch: cannot touch ‘/var/jenkins_home/copy_reference_file.log’: Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
Note that /var/jenkins_home is the default home directory from which Jenkins runs. If you are using a user other than the default jenkins user, you will get a permission denied error.
To overcome this error, a proper folder should be created before running the container. The newly created folder is mapped as a host volume, as in the commands below.
$ su user_jenkins
$ mkdir -p /opt/my_jenkins_home
$ docker run -d  -p 8082:8080  -p 5002:5000 -v /opt/my_jenkins_home:/var/jenkins_home  jenkins_2
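If the permission error still shows up, make sure the host folder is owned by the same UID/GID that user_jenkins has inside the image (1396:215 in the Dockerfile above). Assuming you have sudo on the host, something like:

$ sudo chown -R 1396:215 /opt/my_jenkins_home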