Running a Tor Hidden Service Using Docker


Many people seem to think that hosting a Tor hidden service is a complex or daunting task, but with Docker you can do it easily and reliably. In this example we will set up a dedicated hidden service (a hidden website, in this case).

First, we’ll assume you already have Docker running on the computer that will host the hidden service.

In this example we’ll make a directory to hold the configs for the hidden service so that we keep the same .onion address across restarts. Depending on your use case, you may prefer not to mount this directory at all, which gives you a new address every time the container starts. Here we’ll store it so the address persists.
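For example, you could create the directories used throughout this post like this (substituting your own username and paths):

 mkdir -p /home/username/hiddenservice/www /home/username/hiddenservice/config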

We’ll also start Nginx to host the example site.

First start the Nginx container:

 docker run -d --name hsnginx -v /home/username/hiddenservice/www/:/usr/share/nginx/html:ro nginx

Here you are telling Docker to run Nginx as a daemon (-d), serving the HTML files located in the www directory under /home/username/hiddenservice. We name the container hsnginx so we can link it to the hidden service container itself.

Now start the hidden service container:

 docker run -d --link hsnginx -v /home/username/hiddenservice/config/:/var/lib/tor/hidden_service/ goldy/tor-hidden-service

This starts Tor, which serves the hidden service backed by the linked hsnginx container. To see the .onion name of your service, look in /home/username/hiddenservice/config/ for the hostname file under the hsnginx directory.
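For example, to print the address from the location described above:

 cat /home/username/hiddenservice/config/hsnginx/hostname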

You now have a hidden service running Nginx. You can also use Apache, WordPress, or any other container as the backend.
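For instance, to swap in Apache, a rough sketch using the official httpd image (which serves files from /usr/local/apache2/htdocs/ by default) might look like:

 docker run -d --name hsapache -v /home/username/hiddenservice/www/:/usr/local/apache2/htdocs/:ro httpd
 docker run -d --link hsapache -v /home/username/hiddenservice/config/:/var/lib/tor/hidden_service/ goldy/tor-hidden-service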

DISCLAIMER: I have only tested this on an Ubuntu Linux server and cannot guarantee that a hidden service run in this manner will or will not leak information that might help to identify who is hosting it. Please do your own due diligence before using Docker to host a hidden service that needs complete anonymity. If you do use Docker containers to host your hidden service, you would most likely also want to add a restart policy to the run command. Use at your own risk.
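For example, one way to add a restart policy to the tor-hidden-service command from above:

 docker run -d --restart always --link hsnginx -v /home/username/hiddenservice/config/:/var/lib/tor/hidden_service/ goldy/tor-hidden-service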


Protecting Your Privacy in a Selfie World

In today’s world, where we post pictures of ourselves and pretty much everything around us, there is the well-known risk of being tracked through the geotagging in a photo’s EXIF data. But now a picture’s location can be determined purely from the image itself. No metadata needed.

There are many other things it can be a bad idea to photograph in your daily life. In particular, make sure you do not go on a live cam or take a picture with identifying information about you in it. Something such as a credit card could expose you to a lot of grief.

(Source: Sneakey)

NEVER have your keys of any kind in pictures since they are easily duplicated from an image.

It may also be a bad idea to have your fingerprints in high-resolution images. No one has come out admitting to the technology, but the same techniques that apply to facial recognition could soon, if they don’t already, identify you by far more than just your face. Some “hackers” have shown it is possible, in a very time-consuming manner, and that proof of concept is over a year old. By taking a “selfie” today you may be adding yourself to a massive database tomorrow. The technology is already being used widely.

(Source: PBS)

Your image could be posted in many cases without your even knowing a photo was taken. I have seen myself in photos on Yelp, where I happened to be in the restaurant when a stranger took a picture to post with their review; without my knowledge, I was geotagged there. So between my own posted images, strangers, public and private video cameras, and the information my phone may leak, I could go back in time and track my own movements.

What Comes After the Containers and Unikernels?

(Reposted here from my LinkedIn)

A brief history of everything… (SysAdmin related)

There has been a lot of hype over the last year around container solutions like Docker, with an ever-growing and sometimes fragmented group of supporting projects. What does this all mean for the cloud and the future of hosting applications?

Pre-Cloud and Virtualization (aka The Dark Ages)

Before the cloud and virtualization, you had a bare-metal server on which you installed an operating system to host your service or application. It was basically a one-to-one mapping of hardware to operating system. Hardware load balancers distributed traffic to a pool of these physical servers. This was very inefficient, since much of the hardware sat underutilized even during peak load times.

Dawn of Virtualization (aka The Not-So Dark Ages)

When the virtualization boom arrived in all its forms, with VMware and other offerings, people saw the promise of more fully utilizing their hardware by segmenting it into virtual machines, making it easier (and more cost-effective) to plan for capacity.
