Tag Archives: docker

Why Your Dockerfile Sucks for Production

Don’t get harpooned by a Dockerfile that sucks

Your Docker Compose file might suck too.

I admit it! I am guilty of making Dockerfiles that suck.

We’re all guilty of being overly general in our Dockerfiles; just look on Docker Hub or GitHub. But that generality can make things less consistent across builds.

Example:

FROM debian:latest 

Which is fine for testing something out or iterating on Dockerfiles for development pipelines. But let’s look at the shortfalls and headaches we can all avoid. With a tag like latest, you never really know what something like the above will get you.

The same applies to running a package manager with only the package name and no version. In some instances you will want to update a package for security or bug-fix purposes, but for Docker in production you want to pin these things explicitly.

So first, let’s pin the known base image we want by using its SHA256 digest:

FROM debian@sha256:52af198afd8c264f1035206ca66a5c48e602afb32dc912ebf9e9478134601ec4

You can grab the SHA256 digest when you pull the initial image you’ll be using to build your projects:

$ sudo docker pull debian:8.7
8.7: Pulling from library/debian
693502eb7dfb: Pull complete
Digest: sha256:52af198afd8c264f1035206ca66a5c48e602afb32dc912ebf9e9478134601ec4
Status: Downloaded newer image for debian:8.7
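If you have already pulled the image, you can also read the digest back with docker inspect. This is a small sketch using the debian:8.7 image pulled above; the format string just prints the first entry from the image’s RepoDigests field:

$ sudo docker inspect --format='{{index .RepoDigests 0}}' debian:8.7
debian@sha256:52af198afd8c264f1035206ca66a5c48e602afb32dc912ebf9e9478134601ec4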

Now you know exactly which image you will be running. Next, be sure to pin versions for the other packages you need to install:

RUN apt-get update && apt-get install -y \
 python=2.7.5-5 \
 python-pip=1.5.4-1 \
 some-package=1.1.1 \
 && rm -rf /var/lib/apt/lists/*
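Putting the two together, a minimal sketch of the production Dockerfile so far looks like this (the package names and versions are just the illustrative ones from above):

# Pin the base image by digest so every build starts from the same layers
FROM debian@sha256:52af198afd8c264f1035206ca66a5c48e602afb32dc912ebf9e9478134601ec4

# Pin package versions and clean up apt lists to keep the image small
RUN apt-get update && apt-get install -y \
 python=2.7.5-5 \
 python-pip=1.5.4-1 \
 some-package=1.1.1 \
 && rm -rf /var/lib/apt/lists/*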

Now you have your Dockerfile not sucking so much for Production use!

You’ll also want an internal Docker Registry for versioning and storing your images once you build them.
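As a rough sketch, assuming a private registry running on localhost:5000 using the official registry:2 image (the myapp image name and tag here are just placeholders), tagging and pushing looks like this:

$ sudo docker run -d -p 5000:5000 --name registry registry:2
$ sudo docker tag myapp:1.0.0 localhost:5000/myapp:1.0.0
$ sudo docker push localhost:5000/myapp:1.0.0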

Running a Tor Hidden Service Using Docker

Many people seem to think that hosting a Tor hidden service is a complex or daunting task, but with Docker you can do it easily and reliably. In this example we will set up a dedicated hidden service (a hidden site in this instance).

First, we’ll assume you have Docker running on the computer where you want to host the hidden service.

In this example we’ll make a directory to hold the configs for the hidden service so we keep the same .onion address. Depending on your use case, you may prefer not to mount this directory at all, so you get a new address every time you start the container. Here we’ll store it so we can maintain the same address over restarts.
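As a quick sketch, creating that layout up front might look like this (the paths simply match the run commands below):

 mkdir -p /home/username/hiddenservice/www /home/username/hiddenservice/config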

We’ll also start Nginx to host the example site.

First start the Nginx container:

 docker run -d --name hsnginx -v /home/username/hiddenservice/www/:/usr/share/nginx/html:ro nginx

Here you are telling Docker to run the container as a daemon, serving the HTML files from the www directory under /home/username/hiddenservice/ as a read-only volume. We name it hsnginx so the hidden service container can link to it.

Now start the hidden service container:

 docker run -ti --link hsnginx -v /home/username/hiddenservice/config/:/var/lib/tor/hidden_service/ -d goldy/tor-hidden-service

This starts Tor and serves the hidden service it finds by linking to hsnginx. To see the .onion name of your service, look in /home/username/hiddenservice/config/ for the hostname file under hsnginx.
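Assuming the image lays out one directory per linked container as described above, reading the address is as simple as this (the path follows the example layout used in this post):

 cat /home/username/hiddenservice/config/hsnginx/hostname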

You now have a hidden service running Nginx. You can also use Apache or WordPress or any other initial container.

DISCLAIMER: I have only tested this on an Ubuntu Linux server and cannot guarantee that a hidden service run in this manner will or will not leak information that might help to identify who is hosting it. Please do your own due diligence if you want to use Docker to host a hidden service that needs complete anonymity. If you do use Docker containers to host your hidden service, you will most likely also want to add a restart policy to the run command. Use at your own risk.
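For example, adding a restart policy to the hidden service container could look like this sketch; it mirrors the run command above with --restart unless-stopped added so the container comes back after reboots or crashes:

 docker run -d --restart unless-stopped --link hsnginx -v /home/username/hiddenservice/config/:/var/lib/tor/hidden_service/ goldy/tor-hidden-service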

What Comes After the Containers and Unikernels?

(Reposted here from my LinkedIn)

A brief history of everything… (SysAdmin related)

There has been a lot of hype over the last year around container solutions like Docker, with an ever-growing and sometimes fragmented group of supporting projects. What does this all mean for the cloud and the future of hosting applications?

Pre-Cloud and Virtualization (aka The Dark Ages)

Before the cloud and virtualization, you had a bare metal server onto which you installed an operating system to host your service or application. It was basically a 1-to-1 relationship between hardware and operating system. Hardware load balancers distributed traffic to a pool of these physical servers. This was very inefficient, since much of the hardware sat under-utilized even during peak load times.

Dawn of Virtualization (aka The Not-So Dark Ages)

When the boom of virtualization in all its forms came about with VMware and other offerings, people saw the promise of being able to more fully utilize their hardware by segmenting it into virtual machines, and to plan for capacity more effectively and cost-efficiently.

Continue reading