I was reading an article on the BBC about Kelly Sutton of CultofLess.com, and it got me thinking about the potential for a new subculture in the US and the rest of the world. It is essentially a movement (if you can call it that yet) of mainly twenty-somethings relinquishing the vast majority of their possessions and living with as little as possible, as long as they have their laptops and other digital goodies, in some cases depending on friends and family for a place to sleep. This may be yet another symptom of steps toward what many call the Technological Singularity, in the sense that this kind of digital living and minimalism of possessions is becoming more popular. There are already hacker spaces; imagine a day soon when there will be "hacker flop houses" or "hacker hostels". Even work could be transformed for this subset of people using things already occurring: with the ability to telecommute, or to work on Human Intelligence Tasks (HITs) through services like Amazon's Mechanical Turk, CrowdFlower, or others, people could live an almost digital-hobo existence, bedding down in hacker flop houses and moving on to another place whenever they choose. With no physical address, these people could also become hired digital guns if they chose to follow a less reputable path, using public Internet access to do their deeds and being gone to another town by the time anyone notices. Roving gangs of black hats doing the work of the highest bidder almost sounds like something from a William Gibson book, but the groundwork for this sort of thing is already in place in many instances. Sounds pretty scary to me, though that is just one example of what might happen. Most of these people, as in the original article I read, are honest working people following a route less taken, which I can understand and support. It would also be a very effective way to save money if you lack the overhead of a place to live.
I have been reading recently about the "Lean Startup" concept, which is funny since I have worked with and been part of more than a few without knowing there was a nifty catchphrase for them. Now that I am up on the lingo and the hashtag #leanstartup, I can speak on it. Today is the perfect time to take a gamble and dive into a lean startup, given how many free tools and services are available, along with very inexpensive infrastructure options that did not exist even a few years ago. By using the power of Open Source software, free online services, and cloud computing, one can build a scalable and very cost-effective lean startup.
Let’s first look at your new company’s hosting and servers. Being lean, you will most likely be working from home, so let’s look at the hosting and services you will need in order to host some of the Open Source applications we will talk about in a moment. In my experience, these are the basics you need to get going in terms of setting up shop:
As we understand it from the discussion on stage, a Think Cloud is a “body of knowledge”: a real-time information base of the Amazon cloud that can be pivoted all the way down to individual threads and data concurrency. It would be an index that acts as a control point, helping define the movement of data through servers and compute tasks. It looks at the journey from the data’s point of view, including data about the environment itself, how to repair itself when damaged, and how to keep data concurrency intact.
SC has a good write-up on cloud computing security:
Cloud computing, at least as a concept, is being driven largely by economics. It is generally less costly to run applications, add capacity, and increase storage in the cloud than to invest in new hardware and software, bring on additional staff, and beef up networking.
“Cloud computing will happen because it has too much of an economic incentive and developer support – applications can be quickly added and developers can have a single place to maintain source code,” says Vatsal Sonecha, VP, business development & product management at TriCipher.
Overall, incentives include application-deployment speed, lower costs and fast prototyping. These are strong drivers. So much so that Gartner predicts that by 2012, 80 percent of Fortune 1000 companies will pay for some cloud computing service, and 30 percent of them will pay for a cloud computing infrastructure.
That is not to say that entire data centers will be moving to the cloud, at least in the largest companies. But for certain solutions, the cost benefits are hard to ignore.
I wanted to touch briefly on the security concerns of having Scalr accessible via the Internet. If you are running your own install of Scalr, this is an important factor to consider before even adding your first farm. For my own sake I will not get into my exact setup, but will instead talk about a few approaches to locking down access to Scalr.
Possibly the best approach is to limit access to the Scalr interface to your internal network, requiring users to use OpenVPN or some other VPN solution to reach internal resources, including Scalr. If you are hosting Scalr on an AWS instance, be sure to set the security group to only allow the port your VPN runs on. You can find a quick and dirty howto for OpenVPN on an EC2 instance at Google Books.
Another option is to use SSL and mod_access (Apache 1.3) or its renamed equivalent in Apache 2.2, mod_authz_host, to limit who has access to the Scalr interface. At a minimum you should use SSL to access Scalr. You can also add a layer of authentication for good measure using Apache Basic Authentication.
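As a rough sketch, an Apache 2.2 vhost combining all three layers might look like the following. The hostname, paths, and network range here are placeholders, not my actual setup:

```apache
# Assumes mod_ssl, mod_authz_host, and mod_auth_basic are loaded.
<VirtualHost *:443>
    ServerName scalr.example.com
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/scalr.crt
    SSLCertificateKeyFile /etc/pki/tls/private/scalr.key

    <Directory /var/www/scalr>
        # mod_authz_host: only the internal network may connect
        Order deny,allow
        Deny from all
        Allow from 10.0.0.0/8

        # Basic Authentication as an extra credential check
        AuthType Basic
        AuthName "Scalr"
        AuthUserFile /etc/httpd/scalr.htpasswd
        Require valid-user

        # require BOTH the network check and the login to pass
        Satisfy all
    </Directory>
</VirtualHost>
```

Note the `Satisfy all` directive: without it Apache 2.2 defaults to `Satisfy any`, which would let a valid password bypass the network restriction.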
Since Scalr controls the rest of your AWS setup, it is by far the one thing you want to lock down as much as possible.
Since I have been using Scalr to manage my Amazon Web Services farms, I have wanted more monitoring in terms of statistical information on services, traffic, disk usage, and uptime, to name a few. Scalr has built-in basic event notifications such as host up, host down, etc., along with very basic load statistics via RRDtool. In the past I have always used Zabbix for most projects I have worked on, so I wanted to be able to use it with Scalr. I am still testing the setup I am about to describe, so please keep that in mind. This is NOT a howto, but more of a brainstorm of how I plan to get Zabbix integrated into my Scalr setup. The Zabbix documentation (PDF) covers a few ways to use auto-discovery (page 173); for example, you can have Zabbix monitor a block of IPs to find new Zabbix agents. So here is what I will have my Zabbix server do:
- Look for new Zabbix Agents on my AWS internal IP range.
- If the system.uname contains “Scalr”, add the host to the Scalr server group
- Server must be up for 30+ minutes
There will be other stipulations for getting a server added to Zabbix. I will have system templates for each of my Scalr AMI roles. Once a server is added to Zabbix, it will be placed in its respective group and monitored for the items and triggers listed in the system template. There will also be a rule to remove old instances from Zabbix 24 hours after receiving the host-down trigger, so instances that were once monitored do not clutter the Zabbix database. If you also have Windows AWS instances, you can add a rule to monitor those as well; the AMI just needs to have the Zabbix Windows agent installed.
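Before wiring this into Zabbix actions, the plan above can be sketched as plain decision logic. This is not Zabbix API code, just the rules I intend the auto-discovery actions to enforce; the thresholds and the “Scalr” uname convention are my own assumptions:

```python
# Decision rules for a discovered Zabbix agent (sketch, not Zabbix API code).
MIN_UPTIME_MINUTES = 30      # server must be up for 30+ minutes
REMOVE_AFTER_HOURS = 24      # drop dead instances after 24 hours

def discovery_action(uname, uptime_minutes):
    """Return the host group a newly discovered agent should join,
    or None if it should be ignored (for now)."""
    if "Scalr" not in uname:
        return None                        # not one of my Scalr-built AMIs
    if uptime_minutes < MIN_UPTIME_MINUTES:
        return None                        # too young; may still be bootstrapping
    if "Windows" in uname:
        return "Scalr Windows servers"     # Windows AMIs running the Windows agent
    return "Scalr servers"

def should_remove(hours_since_host_down):
    """Purge instances that have been down long enough to be gone for good."""
    return hours_since_host_down >= REMOVE_AFTER_HOURS
```

The same checks map onto Zabbix discovery conditions (uname substring match, uptime item) and a time-based action for the cleanup rule.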
I am a Linux guy, but I am also a big lover and user of OpenBSD and FreeBSD. This got me thinking about BSD and its place in cloud computing. In terms of Amazon Web Services EC2, I have yet to see it. Checking the FreeBSD and OpenBSD projects, I have yet to see either in a Xen form at all; there are a few postings about getting it to sort of work, and the FreeBSD project has a wiki page dedicated to a Xen port. I believe this lack of Xen support will not help the BSDs compete with the Linux flavors. I would love to be able to use BSD for certain roles.
I have been using Amazon Web Services for some time now and decided to use the Open Source Scalr project to manage my farms on AWS. After overcoming many hurdles to getting Scalr running successfully, I have been using it to manage my farms for about a month. Compared to the initial outlay required by RightScale, the time it took to get Scalr running was nominal. Plus, I like the ability to have a developer tweak Scalr’s functionality to fit our business requirements. There is an active Google Group for Scalr that I have used to solve most of my issues. People also have the option of using Scalr.net as a pay-per-month solution to manage their AWS farms; I chose to host my own instance of Scalr since we are doing large-scale hosting and, as previously mentioned, need to customize it. I do enjoy the ease with which Scalr bundles the new custom roles I build for our various application servers: you simply press a button to save a new role for future use. That, along with its ability to auto-scale as traffic dictates, is the biggest plus for me in using Scalr.
I will be adding more on my experiences with Scalr in coming days. If you are installing on CentOS5 I have some install notes I posted here.
I have been playing around with the recently released AWS Console. It is a good start on a nice AWS-provided interface for controlling EC2. It only makes sense that they provide a console instead of forcing people to look elsewhere, such as RightScale or Scalr. For that matter, I am not sure why Amazon does not just buy RightScale and offer their services as part of AWS.
I came across Scalr by accident while browsing projects on Google Code. It appears that Scalr has become a pay service for managing your AWS instances, along similar lines to RightScale, but the main difference is that Scalr charges a scant $50 a month. From the Scalr Google Code page:
Scalr is a fully redundant, self-curing and self-scaling hosting environment utilizing Amazon’s EC2.
It allows you to create server farms through a web-based interface using prebuilt AMI’s for load balancers (pound or nginx), app servers (apache, others), databases (mysql master-slave, others), and a generic AMI to build on top of.
The health of the farm is continuously monitored and maintained. When the Load Average on a type of node goes above a configurable threshold a new node is inserted into the farm to spread the load and the cluster is reconfigured. When a node crashes a new machine of that type is inserted into the farm to replace it.
Multiple AMI’s are provided for load balancers, mysql databases, application servers, and a generic base image to customize. Scalr allows you to further customize each image, bundle the image and use that for future nodes that are inserted into the farm. You can make changes to one machine and use that for a specific type of node. New machines of this type will be brought online to meet current levels and the old machines are terminated one by one.
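To make the scaling behavior in that description concrete, here is a minimal sketch of the kind of threshold decision it implies. The thresholds and function names are illustrative, not Scalr’s actual implementation:

```python
# Sketch of a load-average-based scaling decision for one node type in a farm.
def desired_change(load_averages, upper=4.0, lower=1.0, min_nodes=1):
    """Given the load average reported by each node of one role, return
    +1 to insert a node, -1 to terminate one, or 0 to leave the farm alone."""
    if not load_averages:
        return +1                           # a crashed role gets a replacement node
    avg = sum(load_averages) / len(load_averages)
    if avg > upper:
        return +1                           # spread the load over a new node
    if avg < lower and len(load_averages) > min_nodes:
        return -1                           # terminate old machines one by one
    return 0
```

Run periodically per node type, this is enough to reproduce the “insert a node above a configurable threshold, replace crashed machines” behavior the project describes.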
I would love to hear some comments from those already using the service and how it compares to RightScale.