Since I began using Scalr to manage my Amazon Web Services farms, I have wanted more monitoring: statistical information on services, traffic, disk usage, and uptime, to name a few. Scalr has built-in support for basic event notifications such as host up and host down, along with very basic load statistics via RRDtool. In the past I have always used Zabbix for most projects I have worked on, so I wanted to be able to use it with Scalr. I am still testing the setup I am going to describe, so please keep that in mind. This is NOT a howto, but more of a brainstorm of how I plan to get Zabbix integrated into my Scalr setup. The Zabbix documentation (PDF) covers a few ways to use auto-discovery (page 173). For example, you can have Zabbix monitor a block of IPs to find new Zabbix Agents. So here is what I will have my Zabbix Server do:
- Look for new Zabbix Agents on my AWS internal IP range.
- If the system.uname contains “Scalr”, add the server to the Scalr server group.
- The server must be up for 30+ minutes.
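The discovery rules above boil down to a simple filter. Here is a minimal sketch of that logic; the function name is hypothetical, and in practice these rules would be configured as a discovery action in the Zabbix frontend, not written as code:

```python
# Sketch of the discovery rules: only agents whose system.uname mentions
# "Scalr" and that have been up for 30+ minutes join the Scalr group.
# (Hypothetical helper; real Zabbix discovery actions live in the frontend.)

MIN_UPTIME_SECONDS = 30 * 60  # server must be up for 30+ minutes


def should_add_to_scalr_group(uname, uptime_seconds):
    """Decide whether a newly discovered agent joins the Scalr server group."""
    return "Scalr" in uname and uptime_seconds >= MIN_UPTIME_SECONDS
```

For example, an agent reporting `system.uname` of `"Linux Scalr-app-1 2.6.18 x86_64"` with an hour of uptime would pass, while a freshly launched instance would be skipped until it crosses the 30-minute mark.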
There will be other stipulations for getting a server added to Zabbix. I will have system templates for each of my Scalr AMI roles. Once a server is added to Zabbix, it will be placed in its respective group and monitored for the items and triggers listed in the system template. There will also be a rule to remove old instances from Zabbix 24 hours after the host-down trigger fires, so instances that were once monitored do not clutter the Zabbix database. If you also happen to have Windows AWS instances, you can add a rule to monitor these as well; the AMI just needs to have the Zabbix Windows Agent installed.
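The 24-hour cleanup rule can be sketched the same way. This is an illustration only, with a hypothetical data structure; in a real setup the host-down timestamps would come from Zabbix triggers, and the removal would be done through Zabbix itself:

```python
import time

DOWN_GRACE_SECONDS = 24 * 60 * 60  # remove hosts 24 hours after host-down fires


def hosts_to_remove(down_since, now=None):
    """Return hostnames whose host-down trigger fired more than 24 hours ago.

    down_since maps hostname -> unix timestamp of the host-down trigger.
    (Hypothetical structure; in practice this comes from Zabbix trigger data.)
    """
    if now is None:
        now = time.time()
    return [host for host, t in down_since.items()
            if now - t >= DOWN_GRACE_SECONDS]
```

An instance that went down 25 hours ago would be flagged for removal, while one that dropped off an hour ago would still be kept around in case it comes back.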
When I decided to run Scalr on our own servers to manage our Amazon Web Services farms, one important consideration was Scalr's use of DNS servers to change records. I chose to host our own DNS infrastructure in order to keep initial costs down, but also to give us the flexibility to change and control our DNS internally. So now on to my approach to doing this most effectively. First, two separate self-managed dedicated servers were chosen for DNS: one on the west coast and the second on the east coast. Since more of our traffic comes from the western states, NS1 was located accordingly. I then used two non-Scalr-managed AMIs to run our NS3 and NS4 servers, each in a separate AWS datacenter. The idea is that the custom-bundled AMIs I built for Scalr would use NS3 and NS4 for their internal DNS. I find this an excellent mix of AWS and old-fashioned dedicated servers for managing our DNS.
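On the instance side, that layout amounts to baking a resolver config like the following into the custom AMIs. The addresses here are made-up placeholders for NS3's and NS4's internal AWS IPs, not the real ones:

```
# /etc/resolv.conf on the Scalr-bundled AMIs
# (addresses below are hypothetical stand-ins for NS3 and NS4,
#  one in each AWS datacenter)
nameserver 10.251.0.53
nameserver 10.252.0.53
```

With both internal nameservers listed, the instances keep resolving even if one AWS datacenter has trouble, and internal lookups never leave the AWS network.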
I have been using Amazon Web Services for some time now and decided to use the open source Scalr project to manage my farms on AWS. After overcoming many hurdles to getting Scalr running successfully, I have been using it to manage my farms for about a month. Compared to the initial outlay required by RightScale, the time it took to get Scalr running was nominal. Plus, I like the ability to have a developer tweak the functionality of Scalr to fit our business requirements. There is an active Google Group for Scalr that I have used to solve most of my issues. People also have the option of using Scalr.net as a pay-per-month solution to manage their AWS farms. I chose to host my own instance of Scalr since we are doing large-scale hosting and, as mentioned, need to customize it. I do enjoy the ease with which Scalr bundles the new custom roles I build for our various application servers: you simply press a button to save a new role for future use. That, along with its ability to auto-scale as traffic dictates, makes up the two biggest pluses for me in using Scalr.
I will be adding more about my experiences with Scalr in the coming days. If you are installing on CentOS 5, I have some install notes posted here.