Monday, December 3, 2012

foodcritic rake task ~ one that works for me

FoodCritic (a lint tool for your Opscode Chef cookbooks)
$ gem install foodcritic --no-ri --no-rdoc

A Rake task to get FoodCritic rolling on your cookbooks
(here the cookbooks reside in a 'cookbooks' dir at the root of the directory containing the Rakefile).
The one on the wiki doesn't work for me, but this does:
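For reference, a minimal sketch of such a task — it just shells out to the foodcritic binary instead of using foodcritic's own Rake task class. The task name and the `-f any` flag (fail on any matched rule, so CI notices) are my choices, not gospel:

```ruby
require 'rake'

desc "Lint all cookbooks under cookbooks/ with foodcritic"
task :foodcritic do
  # -f any => exit non-zero if any rule matches, failing the build
  sh "foodcritic -f any cookbooks"
end
```

Drop this in your Rakefile and run `rake foodcritic` from the directory containing it.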


Monday, October 22, 2012

varnish ~ sometimes fails but doesn't tell

Varnish-Cache

We were playing around with automation around the famous Varnish-Cache service and stumbled upon something... the silent-killer style of Varnish-Cache.

The automated naming appended specific service-node host-names and service-names to form backend names, and then load-balanced onto them using a "director ... round-robin" configuration. It created names that way to avoid name collisions for the same service running on different nodes load-balanced by Varnish-Cache.

We checked the configuration for correctness
$ varnishd -C -f /my/varnish/config/file

It passed.

We started the Varnish service
$ /etc/init.d/varnish start

It started.

We tried accessing the services via Varnish.

It failed, saying there was no HTTP service running at the Varnish machine:port.

Now what? We couldn't figure out anything wrong in the VCL configuration. So, we started looking at logs.
We tried starting the varnishlog service

$ service varnishlog start

This failed, giving an error about _.vsm not being present — though it actually was present, and with the right user permissions.

Then a colleague of mine suggested he had faced such an issue before due to extremely long backend names.

I shortened the backend names drastically and it started working.

So, the length there does have an effect, but the VCL gives no error when starting Varnish-Cache.

BTW, from the checks performed... the longest backend name that worked for the configuration was 44 characters.
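A cheap guard we could have put in the config generator: clamp generated backend names to a safe length. Note the 44-character ceiling is just what our checks showed on our set-up, not a documented Varnish limit, and the helper below is a made-up sketch, not our actual generator code:

```ruby
# Longest backend name that worked in our checks -- an observed value,
# not a documented Varnish limit.
MAX_BACKEND_NAME_LEN = 44

# Hypothetical helper for a VCL generator: build a collision-free
# backend name from service + host, restricted to VCL-safe characters,
# clamped to the observed safe length.
def varnish_backend_name(service, host)
  raw = "#{service}_#{host}".gsub(/[^A-Za-z0-9_]/, "_")
  raw[0, MAX_BACKEND_NAME_LEN]
end
```

Truncation reintroduces a (small) collision risk, so a real generator would want to append a short hash instead of blindly cutting — but at least Varnish starts.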

Saturday, August 11, 2012

Project::Inception.include? Infrastructure['base_architecture']

requirements change.....

Requirements are supposed to change, else there will be no market.

But when you are involved with the client to formulate the Project which has to be 'developed and released', you try to cover all the important specs related to 'development and release'.

In this era, most people can be seen opting for IaaS or PaaS solutions for their production release (and for development, depending on budget).
There are lots of organizations who have had their own SysAdmin team for years and have more faith in it for a cheaper, better, more secure, and/or more controlled environment.
Then there are also projects that need to be on internal organizational networks due to some internal boxed service or company policies.

Either way, initially there can be two situations: they have an idea about where they want it implemented, or they don't.

If they leave the decision to you, it's better. You come up with the best-suited solution for the projected requirements. And get a yes from them.

If they decide to drive the decision, even better. Now analyze their solution, and lay out the respective pros, cons and the patchy aid that will be required. Get them to agree to an approach that wouldn't make the project's life a hell.


It needs to be done at the start. Once you are into the project..... these decisions will only slow you down and 'slum your time' if you are making them post-project-inception.

Monday, June 11, 2012

Creating RPM of MCollective for Ruby1.9

Note: with these little tweaks come major pre-requisite checks.

Clone the latest branch from GitHub repo for MCollective
$ git clone git://github.com/puppetlabs/marionette-collective.git

Removing the Ruby 1.8 version specification from the RedHat spec
$ sed -i 's/.*ruby.abi.*//g' ext/redhat/mcollective.spec
from the changes 11/Jun/2011 : marionette-collective/commit/ba86f7762d
the lines removed by the above command are
BuildRequires: ruby(abi) = 1.8
Requires: ruby(abi) = 1.8

Removing the Rubygem-related specification from the RedHat spec
$ sed -i 's/.*rubygem.*//g' ext/redhat/mcollective.spec
from the changes 11/Jun/2011 : marionette-collective/commit/ba86f7762d
the lines removed by the above command are
Requires: rubygems
Requires: rubygem(stomp)

Create the RPMs. Generating these requires you to be on a RedHat-based system with packages like ruby, the rake rubygem, redhat-lsb (& rpm-build, I think) installed
$ rake rpm



The checks you need:
On the machine where you install the created RPMs..... remember to have Ruby 1.9.3 and the stomp rubygem installed already.

You can get older Ruby 1.9-friendly MCollective 1.3.2 RPMs at http://yum-my.appspot.com/flat_web/index.htm .

The latest branch will give you version 2.0.1.

Wednesday, April 25, 2012

quick PuppetMaster Service Script for gem installed puppet set-up

The easy rubygems installation method for Puppet (`gem install puppet`) doesn't get you the *nix platforms' service script... so here is one allowing you to perform Start||Stop||Restart||Status service tasks for puppetmaster.
  • Save the file below as '/etc/init.d/puppetmaster'
    $ sudo curl -L -o /etc/init.d/puppetmaster https://gist.github.com/raw/2479100/15f79c68be3f6f6bf516adf385aac1f29f802a45/gistfile1.rb
  • and turn on its eXecutable bit
    $ sudo chmod +x /etc/init.d/puppetmaster
  • Now you can use it as
    $ service puppetmaster status

Wednesday, March 14, 2012

[Puppet] Exported Resources is a beautiful thing..... thing to use and improvise

Lately, I've been really blunt towards Puppet because of the soup that leaked some specific scenario flaws at very busy times.
So, it's my duty to applaud when I like something that is a novel and beautiful concept.

For a well-organized, auto-magically managed set-up, apart from a fine infrastructure and its configuration management mechanism, a very important part is for the monitoring and logging solution to spread across the infrastructure in a similarly seamless and scalable fashion.


Puppet enables it very finely with the use of exporting and collecting resources.

Exported resources are super-virtual resources.
Once they are exported, with the storeconfigs setting being true, the puppetmaster consumes these virtual resources and keeps them available for all hosts to collect.

Read in detail: http://docs.puppetlabs.com/guides/exported_resources.html

An example from the link above, using a simple File resource:
node a {
  @@file { "/tmp/foo":
    content => "fjskfjs\n",
    tag     => "foofile",
  }
}
node b {
  File <<| tag == 'foofile' |>>
}

It'll have its flaws (as all software does, mine and others')..... I just hope they don't interfere with my use-cases.

Monday, March 5, 2012

MCollective can't handle Puppet ~ just like psychotic love stories

Some puppet-izers didn't like what I said in my last post... still, a mess is a mess no matter how worthwhile it might be.
For the past 2 months, I've been in pain due to the psychotic love story of MCollective and Puppet.

YES, they are very helpful products to automate configuration management and orchestrate metadata-based multicast-ed actions.

YES, they are now under the same organization, PuppetLabs, which is whole-heartedly working to improve them so they can retain their status in the starting-to-get-glamorized DevOps domain. So, both of them will improve a lot.

But, first of all.
If you don't properly test your corporate-aiming projects on Ruby 1.9.x, please do post a big notice on your project's page, or at least on the first page of your amazing docs.
My story for past few weeks:
I start using a project..... oh, it's failing, debug..... oh, it's still failing..... debug.
Ahhhhh, OK, ask on IRC.
WHAT? I'm using Ruby 1.9.x, so it's possible I might be facing *some* issues.
Is this a joke? I'm trying to stabilize an important project over here.
At least be truthful, and don't behave like TV commercials with small+quick disclaimers.

And Again.
I'm managing MCollective on all my instances via Puppet. Obviously, because that's what Puppet is for..... managing the state of instances.
I've a CI which instigates MCollective to orchestrate different actions over my infrastructure. That's why I took the pain to get the changing-but-not-releasing MCollective onto my infrastructure.


Now, this also involves MCollective firing up Puppet in non-daemonizing mode.
This used to raise an exception in MCollective due to an un-handled 'nil' return value, failing my entire build.
I go on IRC, again trying to get a solution..... I get to know that even this has been fixed in the git repo since I last RPM-ed it. But I also get an answer that this wouldn't work, as Puppet's no-daemon mode has problems. And if I wanna follow this approach, it's my decision.
First of all, what the hell should I do when there is just one straight approach left? Except if I start using something else to manage MCollective.

Though, I took the latest pull..... RPM-ed it and updated all my nodes. And the Puppet non-daemon problem has not been raised yet.
But what kind of response is that?
If it wasn't for the team decision..... I'd have changed it overnight x(

Wednesday, February 22, 2012

Puppet ain't sweet anymore & Marionette-Collective hopeful

It's a fine piece of utility, working mostly fine, but with my luck..... I mostly end up getting blocked in my tasks by the flaws in the library/utility/framework :(
Puppet ain't sweet.

Puppet is one of the most famous automated configuration management tools, built upon Ruby, enforcing just its DSL to be used to harness the power underneath.
(New to it? Wanna know more: http://puppetlabs.com/puppet/how-puppet-works/)
MCollective is a novel concept with a good technical design (which can still be matured, though) for parallel server orchestration with filter-enabled broadcast request distribution.
(New to it? Get a glimpse at http://puppetlabs.com/mcollective/introduction/)

Why? What happened? Puppet is Good.
Puppet has been out there in the wild for a long time now (at least more than 5 yrs) compared to its closest competitor, 'Chef'.
Saying out loud the feeling I'm getting using them lately ~ it's as if an old pop artist is trying to put out a lot at once to maintain superiority over upcoming rockstars.
I've used their older versions before, which weren't as glossy (at the design level) but were a lot more stable and composed in implementation..... which is more important from the perspective of a system-maintaining utility.

When you are always at a pace of bringing changes and improving upon your current set-up, you don't want to keep getting blocked by quickly adopted & recently obsoleted features.

Why do you want me to place 'pluginsync = true' on all the boxes to get this new feature working? Why can't enabling it on the PuppetMaster take care of it? In what kind of consistent system would you want a custom resource type to be unrecognizable on some clients? If you don't want it used on a node, you just wouldn't use it there. Plain simple *hit.

You set up a new puppet-master at the client's side and find your old, working, correct manifests from a tested set-up suddenly start failing. Why? Because the freaking parser is broken and not able to interpret the symbol notation anymore. Move onto the value convention and it works.
OK, you smashed your head and got that working. Now say you have learnt your lesson of not trusting the newer versions and Gemfile-d it for Bundler to handle. Suddenly it starts failing 'cuz you find out the newer version exists no more. It was yanked off.

Some of their newly suggested approaches for custom Facter facts require Facter v1.7.0; the gem available up there is still v1.6.5. What? Legacy fact distribution is not supported in the new distributions and is obsolete.

There are also several other design-level implementations which I don't agree with, and some I don't agree with at all. But not even considering those, there is too much chaos in Puppet right now.

What do they expect ~ for me to keep pulling the latest build from the repository..... building its rpm/gem and then using it? In what freaking world do you expect this from a platform which has been built to sustain a deterministic machine state?

MCollective is mostly good, a light of hope that Puppet adopted to light up its gloomy days. But finding out it's not properly tested on Ruby 1.9.x gets me worried.

Friday, January 27, 2012

resque and WildCard usage

While testing, we were starting Resque workers as
 rake resque:work QUEUE=* 
This used to start a worker listening on all of Resque's queues, never exiting.

So, hoping Resque understands wildcards well, we made
 rake resque:work QUEUE=q01.* 
a part of our automated configuration tasks, to start workers for all the Resque queues whose names start with q01, like q01.rss and q01.atom.

These queues were getting started fine and displayed well on Resque's WebUI.
But on scheduling a task for these queues, the worker started with q01.* wasn't able to pick up the queued task for either q01.rss or q01.atom.

Investigating the Resque codebase, the understanding we developed was: only a plain '*' is handled separately, returning the entire array of all available queues, which is then acted upon. It doesn't handle wildcards.
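A paraphrase of the behaviour we inferred (this is a sketch, not Resque's actual source): only a lone '*' is special-cased into the full queue list; anything else, wildcard or not, is taken as a literal comma-separated list of queue names:

```ruby
# Sketch of how the QUEUE env var seemed to be interpreted
# (paraphrased, not Resque's real code).
def queues_for(spec, all_known_queues)
  if spec == "*"
    all_known_queues.sort   # a bare '*' expands to every known queue
  else
    spec.split(",")         # 'q01.*' stays the literal queue name 'q01.*'
  end
end
```

So a worker started with QUEUE=q01.* polls a queue literally named "q01.*", which nothing ever pushes to — hence the silently idle worker.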

So, we started them as
 rake resque:work QUEUE=q01.rss 
 rake resque:work QUEUE=q01.atom 
and it worked fine.

Don't make the same mistake.

If you don't already know what Redis and Resque are:

Resque: a famous open-source Ruby library to create background workers on multiple queues, processing tasks whenever any are available
Redis: a popular open-source project for an in-memory key-value data store
(Why were you reading this, then, actually?)