Friday, February 20, 2015

Just Enough Angular For Uchiwa

Uchiwa is a simple dashboard for Sensu. Version 0.4.0 is especially great because of two new features:
  - bulk actions on events,
  - the ability to hide stashed checks.

We can see pending events for all sources, stash/acknowledge them, and filter out less important items.

Uchiwa is young, and therefore its pages are very light and provide little information, not enough to start analyzing a problem immediately.
Some people are experimenting with custom attributes (any Sensu check may contain arbitrary properties in its payload), http://roobert.github.io/2014/11/03/Sensu-with-Embedded-Graphite-Graphs/ , which I think is a very nice idea.
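As a sketch of what that could look like: a Sensu check definition is plain JSON, and any extra key is carried through into the event payload, where a dashboard can pick it up. The check below is entirely hypothetical (the command, interval, and the custom 'dashboard' attribute are my assumptions, not an Uchiwa or Sensu convention):

```json
{
  "checks": {
    "disk_usage": {
      "command": "check-disk.rb -w 80 -c 90",
      "subscribers": ["base"],
      "interval": 60,
      "dashboard": "https://grafana.example.com/#/dashboard/script/host.js?host=i-abcd1234"
    }
  }
}
```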

But still, as I've said, this is not flexible enough.
In most cases, we still have to:
  • Ctrl+C a piece of the alert text (sometimes),
  • navigate to another browser tab,
  • find the appropriate link (relevant for this type of alert),
  • Ctrl+V the piece of text,
  • get to the next panel,
  • start to analyze.

It would be great if we could make Uchiwa provide the necessary links automatically.

Use cases:

  • If some server runs out of disk space or starts to use swap aggressively - I'd like to see a link to a Grafana dashboard that shows all system metrics for this particular server.
  • If some service misbehaves (average response time increases, message rate drops) - I'd like to see a link to a dashboard with all the details for this particular service and a list of dependent processes.
  • If process ships logs to Logstash server - I'd like to see a link to specific Kibana dashboard page, already filtered by this particular process name and/or host name.
  • An optional button to 'restart' the process (shown only when the process misbehaves).

Something like this:

What new information can we see here?

First, we see a link to a dashboard for host 'i-abcd1234' (a Grafana scripted dashboard).

We also see this host is running two back-end processes - 'Collector' and 'Aggregator'. For each of them, a direct link to a process-specific dashboard (a Grafana templated dashboard) and to Kibana logs filtered by process/host name is provided.

If the host were running only the Collector process, the Aggregator links would not be shown.

The 'disk_usage' check status is 2 (Critical), and we see an additional link to a detailed 'DiskIO' dashboard.

Nice, isn't it?
What would you say if I told you it requires changing just two lines of the original Uchiwa code?

Friday, July 11, 2014

Scripting Grafana dashboards

One day I came across Grafana and found it very attractive.
Its great usability and responsive design quickly made it my favorite.
The templated dashboards feature is very handy.
But the real power comes with scripted dashboards. It is possible to make async calls to an external controller, get some JSON data, and construct the dashboard object dynamically, based on the data received.

There was no good example or howto, though, so

Thursday, August 22, 2013

Graphite 0.9.12 available

A new Graphite version has been released.
Just look at the number of bugfixes and new features.

This is the last major update of the 0.9.x branch.

The next major release will be 0.10.x; it will get the megacarbon branch merged into trunk.

Another important feature I'd love to see in Graphite is carbon relay performance improvement.

Sunday, June 23, 2013

Ceres Essentials 2

A little bit more information.
Let's create another node, test.item2, and save some datapoints to it (or we may just copy the test/item/ directory).

export CERES_TREE=/tmp/storage/ceres
export NODE=test.item2

$ ll `ceres-tree-find $CERES_TREE $NODE --fspath`
total 12
-rw-r--r-- 1 tmp tmp 144 Jun 22 20:48 1371923150@10.slice
-rw-r--r-- 1 tmp tmp   8 Jun 22 20:52 1371923570@10.slice
-rw-r--r-- 1 tmp tmp  88 Jun 22 20:59 1371923870@10.slice
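As far as I can tell, each slice file name encodes the slice start timestamp and the time step, as start_timestamp@step.slice (this naming is my reading of the listing above, not documented behavior). The start time of the first slice can be decoded with GNU date:

```shell
# decode the start timestamp of slice 1371923150@10.slice (UTC)
date -u -d @1371923150
# -> Sat Jun 22 17:45:50 UTC 2013
```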
I can read the data with ceres-node-read, but right now I'm more interested in exactly what data are stored in each slice. I can check that with the 'slicecat' tool:

Friday, June 21, 2013

Ceres essentials

I think it would be nice to have an idea of Ceres storage format.

First, create a Ceres tree:

export CERES_TREE=/tmp/storage/ceres
ceres-tree-create --verbose $CERES_TREE

ls -la $CERES_TREE
drwxr-xr-x 2 tmp tmp 4096 Jun 20 22:38 .ceres-tree

There is just a directory created, nothing more.
This is a top-level directory, all Ceres nodes are created under this directory.

Create a new node (a Graphite metric):

export NODE=test.item
ceres-node-create --tree $CERES_TREE --step 10 $NODE
'--step 10' sets the time interval between two consecutive datapoints.
Node directory has been created, with '.ceres-node' file:

ls -la $CERES_TREE/test/item
-rw-rw-r-- 1 tmp tmp 16 Jun 20 22:48 /tmp/storage/ceres/test/item/.ceres-node

cat $CERES_TREE/test/item/.ceres-node
{"timeStep": 10}
'.ceres-node' is a special file; it stores node metadata as a JSON string.
(In Whisper, metadata are stored at the very beginning of the .wsp file.)

By default, it contains only the timeStep value, which Ceres requires to read and write datapoints correctly, but it can be any valid JSON string, for example:
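For instance, something like this would be a valid metadata file (all keys except timeStep are made up for illustration; Ceres itself only needs timeStep):

```json
{"timeStep": 10, "retentions": [[10, 86400]], "description": "test node"}
```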

Saturday, June 8, 2013

Graphite+Megacarbon+Ceres. Multi-node cluster setup

Graphite 0.10 is on its way to release.
One of its most exciting features will be a new version of the Carbon component (currently known as Megacarbon), and the ability to store data in the Ceres backend.

There is not much information about it on the Internet, though.
Literally just a few blog posts from people sharing their experience, and also rather old questions at Launchpad, partially migrated to Github.

And while it was announced two years ago, Ceres remains completely undocumented.
But a lack of documentation should never stop us from experimenting!

I want to set up a Graphite cluster of two nodes (that's what Ceres storage was originally invented for).
Each Graphite node is supposed to run one webapp, one relay, and two carbon cache instances.

node graphite-1,   IP:
node graphite-2,   IP:

Typically, I put them behind a loadbalancer. Monitoring agents will ship their data to this address.

Graphite node setup.

Install dependencies first:

# apt-get install -y python python-pip python-twisted python-memcache python-pysqlite2 python-simplejson python-cairo python-django python-django-tagging libapache2-mod-wsgi python-txamqp python-pyparsing python-zope.interface python-ldap

# pip install pytz mock nose

I use LDAP at my organization, therefore I install python-ldap.
The Graphite webapp does not bundle graphite.thirdparty.pytz anymore, so I install pytz explicitly.
(If you are on Centos and can't find an appropriate python-* package in the repository - use pip install -U -r requirements for every component below.)

Install Graphite components:

We need the master branches, not the 0.9.*-tagged versions, for Webapp and Ceres. For Carbon, use the megacarbon branch; the Carbon master branch does not support the Ceres storage backend (at the moment of writing).

cd /tmp
git clone https://github.com/graphite-project/ceres.git
git clone https://github.com/graphite-project/whisper.git
git clone https://github.com/graphite-project/carbon.git -b megacarbon
git clone https://github.com/graphite-project/graphite-web.git

CWD=$PWD; for dir in ceres whisper carbon graphite-web; do cd $CWD/$dir; python setup.py install; done

While I'm here, I do this:

cd /tmp/carbon
cp -R plugins /opt/graphite/
ceres-tree-create /opt/graphite/storage/ceres

(it is not required, but I'll need it later)

Configure the apache httpd virtual host
(this part is pretty standard and hasn't changed).

cp /opt/graphite/conf/graphite.wsgi.example /opt/graphite/conf/graphite.wsgi
cp /opt/graphite/examples/example-graphite-vhost.conf /etc/apache2/sites-available/graphite
a2dissite default
a2ensite graphite

Also create this directory (otherwise I'd have to change WSGISocketPrefix in the graphite vhost file):
mkdir -m755 /etc/apache2/run
chown www-data:www-data /etc/apache2/run

cd /opt/graphite/webapp/graphite/
cp local_settings.py.example local_settings.py
python manage.py syncdb
chown -R www-data:www-data /opt/graphite/{storage,webapp}

service apache2 restart

That's it for webapp installation.
(I'll configure local_settings.py later, when I have Carbon set up on both nodes.)

Carbon configuration.

With Megacarbon, a new 'pipeline' concept is introduced.
Instead of having separate carbon-cache.py, carbon-relay.py, and carbon-aggregator.py scripts, there is a generic carbon-daemon.py script that accepts a metric and then passes it through a "pipeline" of different functions.

Available pipeline functions are:

   aggregate - use aggregation-rules.conf to buffer datapoints
               and compute aggregates. Generated metrics go
               through the whole pipeline (including aggregation).
               Loops are possible, be careful.
      filter - apply filter rules defined in filter-rules.conf to
               metric names. Rules are applied in order and action on match
               is taken immediately. Metrics which match an 'exclude' rule
               are dropped from further pipeline processors. Metrics matching no
               rules will be accepted by default
     rewrite - apply rewrite rules defined in rewrite-rules.conf
               to metric names. Renamed metrics continue through
               the pipeline, they do not start over at the beginning.
       relay - send metric data on to DESTINATIONS specified in relay.conf
       write - put metric data into the Cache and configure the Writer
               to write all cached data to the configured database.

Generally there are two behaviors: to relay the data on to other carbon daemons, or to write the data to disk (Whisper or Ceres). There are a few data manipulation features that can be configured as well.

In my setup, I need three Carbon processes. One "relay" process to accept metrics, filter them, and relay them to destinations. And two "writer" processes that will cache metrics and write them to the configured data storage (Ceres). The relay will use consistent-hashing and REPLICATION_FACTOR=2 to distribute each metric to at least two different writers located at different cluster nodes.

As I have said, the carbon-cache.py, carbon-relay.py, and carbon-aggregator.py scripts are no more. Instead, there is a single unified script to manage carbon instances.

/opt/graphite/bin/carbon-daemon.py --help
Usage: carbon-daemon.py [options] <instance> <start|stop|status>
  -h, --help           show this help message and exit
  --debug              Run in the foreground, log to stdout
  --nodaemon           Run in the foreground
  --profile=PROFILE    Record performance profile data to the given file
  --profiler=PROFILER  Choose the profiler to use
  --savestats          Save raw performance profile data instead of parsed
  --pidfile=PIDFILE    Write pid to the given file
  --umask=UMASK        Use the given umask when creating files
  --config=CONFIG      Use the given instance configuration directory
  --logdir=LOGDIR      Write logs in the given directory

I use it to start my relay and writer instances, typically:

sudo -u www-data /opt/graphite/bin/carbon-daemon.py writer-1 start
sudo -u www-data /opt/graphite/bin/carbon-daemon.py writer-2 start
sudo -u www-data /opt/graphite/bin/carbon-daemon.py relay start

If you run the commands above right now, they will obviously fail, because neither the "writer" nor the "relay" instances have been configured yet. Carbon configuration was changed in order to provide more flexibility. Instead of having to define everything in a single carbon.conf file (caches, relays, aggregates, whitelisting), different settings are now placed into separate files. (It is now more Hadoop-like.)
Note: both the webapp and the carbon-daemons should be able to access files under $GRAPHITE_ROOT. If I created a separate 'carbon' user to run carbon-daemons, I'd also have to create a 'carbon' group, add the httpd user to this group, and then set tricky permissions on $GRAPHITE_ROOT/storage. Instead, I just run carbon-daemon as the www-data user.

Configure Carbon 'writer-1' instance.

export CONF_DIR=/opt/graphite/conf/carbon-daemons/
cd $CONF_DIR; ls -l
drwxr-xr-x 2 root root 4096 May 22 03:27 example
There is just one 'example' directory here - the Carbon 'example' instance. It has its own set of configuration files:

# ls -l example
-rw-r--r-- 1 root root 1012 May  8 14:06 aggregation.conf
-rw-r--r-- 1 root root  612 May  8 14:06 aggregation-filters.conf
-rw-r--r-- 1 root root 1477 May  8 14:06 aggregation-rules.conf
-rw-r--r-- 1 root root 1038 May  8 14:06 amqp.conf
-rw-r--r-- 1 root root 3896 May  8 14:06 daemon.conf
-rw-r--r-- 1 root root 3702 May  8 14:06 db.conf
-rw-r--r-- 1 root root  645 May  8 14:06 filter-rules.conf
-rw-r--r-- 1 root root 1884 May  8 14:06 listeners.conf
-rw-r--r-- 1 root root 1134 May  8 14:06 management.conf
-rw-r--r-- 1 root root 2365 May  8 14:06 relay.conf
-rw-r--r-- 1 root root  893 May  8 14:06 relay-rules.conf
-rw-r--r-- 1 root root  509 May  8 14:06 rewrite-rules.conf
-rw-r--r-- 1 root root 5115 May  8 14:06 storage-rules.conf
-rw-r--r-- 1 root root 4123 May  8 14:06 writer.conf

To create the first 'writer-1' instance, we just need to create a 'writer-1' directory here with a similar set of configuration files.
We can do it the traditional way:

cp -R example writer-1

but I prefer to use the script provided:

/opt/graphite/bin/copy-instance.sh  example  writer-1 1

What it does is automatically set the Carbon ports following a convention (see the comments in the configuration files). I prefer to use this script and start with 1, because I reserve 0 for the 'relay' instance (a bit later).
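The convention, as far as I can tell from the generated configs (do verify it against the comments in the files), is that instance number N gets line port 2N03, pickle port 2N04, and cache query port 7N02:

```shell
# assumed port convention for carbon-daemon instance number N
N=1
echo $((2000 + 100*N + 3))   # line receiver port    -> 2103
echo $((2000 + 100*N + 4))   # pickle receiver port  -> 2104
echo $((7000 + 100*N + 2))   # cache query port      -> 7102
```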

The 'writer-1' instance now exists; let's check its configs.

writer-1/daemon.conf -
Since this is a writer instance, it should contain only the 'write' function in the pipeline. (Well, it can have more functions in the pipeline, but there is a common recommendation to keep a writer's CPU usage low, therefore we leave only 'write' here.)
PIPELINE = write
writer-1/listeners.conf - defines the ports this carbon instance will listen on (note I use the ports of a 2nd instance, because the default 2003/2004 pair is reserved for the relay):
port = 2103
type = plaintext-receiver
port = 2104
type = pickle-receiver

writer-1/db.conf - defines the storage backend (Ceres) for this writer instance, and where to store the data files.

DATABASE = ceres
LOCAL_DATA_DIR = /opt/graphite/storage/ceres/

writer-1/writer.conf - tunes the instance's cache performance and sets the cache query port.
MAX_CACHE_SIZE = 2000000

writer-1/storage-rules.conf - defines a set of rules for assigning metadata to individual metric nodes (known earlier as "retention periods").
pattern = ^collectd\.
retentions = 10s:1d,1m:30d,5m:1y

match-all = true
retentions = 1m:30d,5m:3y
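Put together, my storage-rules.conf would look roughly like this (the [section] names are arbitrary labels, and my understanding is that rules are evaluated in order with the first match winning; check the comments in the stock file):

```
[collectd]
pattern = ^collectd\.
retentions = 10s:1d,1m:30d,5m:1y

[default]
match-all = true
retentions = 1m:30d,5m:3y
```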

This is the minimal set of configuration files required for the 'writer-1' instance. Just leave the other files unchanged.

Start this 'writer-1' instance:

sudo -u www-data /opt/graphite/bin/carbon-daemon.py writer-1 start

It should then be able to receive metrics on port 2103:

echo "test.foo $RANDOM `date +%s`" | nc localhost 2103

Configure "writer-2" instance.

It is the same as the first one, but should have different listener ports assigned. The copy-instance.sh script does this for me:

/opt/graphite/bin/copy-instance.sh  writer-1 writer-2 2

sudo -u www-data /opt/graphite/bin/carbon-daemon.py writer-2 start

Configure the 'relay' instance.

/opt/graphite/bin/copy-instance.sh  example relay 0

(I use 0 because we reserved it earlier for the 'relay' instance.)

relay/daemon.conf - the pipeline should contain the 'relay' function at the end of the list. We do not want to aggregate anything here, but we may want to filter incoming metrics (rules in filter-rules.conf).
PIPELINE = filter,relay

relay/listeners.conf - set the relay's listener ports here:
port = 2003
type = plaintext-receiver
port = 2004
type = pickle-receiver

relay/relay.conf - set consistent-hashing and define the relay destinations. List all cluster writer instances here.
RELAY_METHOD = consistent-hashing

relay/filter-rules.conf - define filter rules (if necessary).
# Only pass whitelisted metrics
include ^carbon\.
include ^stats\.
include ^collectd\.
include ^recordedfuture\.
exclude ^.*$

This is a minimal set of configuration files required for our 'relay' instance. 
Additionally, we may want to receive metrics via an AMQP broker; in that case, set 'ENABLE_AMQP = True' in relay/amqp.conf.

Start the 'relay' instance:

sudo -u www-data /opt/graphite/bin/carbon-daemon.py relay start

It should receive metrics on port 2003:

echo "test.foo $RANDOM `date +%s`" | nc localhost 2003

and route them to one of the local caches (because the other host is not available yet).
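Once the writer flushes its cache, the datapoint ends up on disk; a metric name maps onto a directory under the Ceres tree, with dots turned into slashes (the tree root is the LOCAL_DATA_DIR from db.conf):

```shell
# map a metric name to its Ceres node directory
METRIC=test.foo
echo "/opt/graphite/storage/ceres/$(echo "$METRIC" | tr . /)"
# -> /opt/graphite/storage/ceres/test/foo
```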

Just to make it clear: any two carbon-daemon instances have their own sets of configuration files, but the only real difference between them is in daemon.conf and listeners.conf. This makes it very easy for DevOps to modify an existing Graphite cookbook to set up any number of writers per machine.
In the node attributes:

"carbon-daemons" : [
  "0" : {
     "type": "relay",
     "line_receiver_port" : "2003",
     "pickle_receiver_port" : "2004",
     "pipeline" : "filer,aggregate,relay",
  "1" : {
     "type": "writer",
     "line_receiver_port" : "2103",
     "pickle_receiver_port" : "2104",
     "pipeline" : "writer",
     "cache_query_port" : "7102"
  "2" : {
     "type": "writer",
     "line_receiver_port" : "2203",
     "pickle_receiver_port" : "2204",
     "cache_query_port" : "7202"

In recipe:

daemon = node["graphite"]["carbon-daemons"]
destinations = build_destinations_string

daemon.keys.each do |k|

  instance = "#{daemon[k]["type"]}-#{k}"
  conf_dir = "#{node['graphite']['base_dir']}/conf/carbon-daemons/#{instance}"

  directory conf_dir

  %w{ daemon.conf listeners.conf db.conf writer.conf relay.conf }.each do |t_name|  # etc.

    template "#{conf_dir}/#{t_name}" do
      source  "carbon-daemons/" + t_name
      variables( :carbon_options => merge_daemon_defaults(daemon[k]),
                 :relay_destinations => destinations )
    end
  end

  runit_service "carbon-" + instance do
    run_template_name 'megacarbon'
    options(:name => instance)
  end
end

Set up another Graphite node.

The graphite-2 node is set up in exactly the same way.

The relay/relay.conf DESTINATIONS strings must be identical on both nodes.
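As a sketch with hypothetical node IPs 10.0.0.1 and 10.0.0.2 (the host:port:instance entries point at the writers' pickle ports), both relays would carry:

```
# relay/relay.conf - identical on graphite-1 and graphite-2 (IPs are placeholders)
RELAY_METHOD = consistent-hashing
REPLICATION_FACTOR = 2
DESTINATIONS = 10.0.0.1:2104:writer-1, 10.0.0.1:2204:writer-2, 10.0.0.2:2104:writer-1, 10.0.0.2:2204:writer-2
```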

Finally, edit /opt/graphite/webapp/graphite/local_settings.py on both servers.

CLUSTER_SERVERS - may safely list both nodes ['',''], though there are known situations when this is bad (https://github.com/graphite-project/graphite-web/issues/222). Therefore it is commonly suggested to exclude the local webapp instance from the list.

CARBONLINK_HOSTS - should list local 'writer' instances only. If we have just one writer instance, it is safe to address it as ['localhost:7102:cache1']. But if we have two or more 'writer' instances on the node, then we must use the same host address we use in the relay's DESTINATIONS (for example ['', '']), otherwise the webapp's consistent-hashing will not work correctly (the webapp will select the wrong cache instance to query for 'hot' cached data). Order does matter - we should list local instances in exactly the same order as they appear in DESTINATIONS.

MEMCACHED_HOSTS - should be identical on all servers; again, order does matter.
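For example, on graphite-1 with hypothetical IPs 10.0.0.1 (local) and 10.0.0.2 (remote), the three settings could look like this (all addresses and ports are placeholders matching my writer setup):

```python
# local_settings.py fragment on graphite-1 (addresses are placeholders)
CLUSTER_SERVERS = ["10.0.0.2"]  # the local webapp is excluded, per the issue above
CARBONLINK_HOSTS = ["10.0.0.1:7102:writer-1", "10.0.0.1:7202:writer-2"]  # same order as in DESTINATIONS
MEMCACHED_HOSTS = ["10.0.0.1:11211", "10.0.0.2:11211"]  # identical on all servers
```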

Rowan Udell has a sweet blog post with a good example of a Carbon Upstart script:

That's all for the setup.
Both relays have the same DESTINATIONS list, so they will always route a given metric to the same cache instances.

What is really great: now I can add new Graphite nodes at any time and do not need to re-balance existing data between nodes. Also, I may remove a Graphite node and not worry about configurations.

It does not come for free, though. 
Ceres has a few significant differences from Whisper and may require periodic maintenance. For example, it writes data to the shortest-retention archive only, and never to the other ones. It is now my duty to roll data up into the longer retention archives. Fortunately, Ceres comes with tools for that.