Category Archives: Impegni

La vera cuciniera Genovese

Thanks to the Distributed Proofreaders I managed to preserve an old book that had been lying around the house, its pages yellowed and torn, the whole thing held together with string. It is now available to everyone, in the public domain, in various digital formats, on Project Gutenberg:

http://www.gutenberg.org/ebooks/51857

It is a collection of recipes from the early 1900s. There are dishes of all kinds, sweet and savoury, as well as preserves, ice creams and liqueurs. Some ingredients are a bit hard to find, and some kitchen tools no longer exist outside the odd small country museum, but I have already tried several of the recipes and they came out really good 🙂

Give REAL Genoese cooking a try!

OpenStack, Logstash and Elasticsearch: living on the edge

As part of my work for Bigfoot, I deployed a system to gather application logs and metering data coming from an OpenStack installation into Elasticsearch, for data analysis and processing. This post is based on OpenStack Icehouse.

Ceilometer to Elasticsearch

Ceilometer promises to gather metering data from the OpenStack cloud and aggregate it into a database, where monitoring and billing software can access it. In reality, getting some data is very easy, but getting all the data is very hard, and in some cases impossible.

Some OpenStack projects integrate the metering functionality and send their samples (VM CPU usage, disk, network, etc.) via RabbitMQ to a central agent. Others do not integrate Ceilometer (why? I don’t know), and the administrator has to periodically run a separate script/daemon to gather the information and send it out. The lack of documentation in this area is almost total: I found out about “neutron-metering-agent” and “cinder-volume-usage-audit” by chance, and Google searches for them, right now, turn up nothing that explains what they are or how to use them.
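For example, cinder-volume-usage-audit seems meant to be run from cron. A minimal sketch; the schedule, user and binary path here are my assumptions, since the documentation does not say how often it should run:

# /etc/cron.d/cinder-volume-usage-audit (hypothetical file; adjust path, user and schedule)
0 * * * * cinder /usr/bin/cinder-volume-usage-audit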

Once the data is gathered, Ceilometer wants to store it in a database. MongoDB is the recommended choice, but it does not work together with Sahara, another OpenStack project. The issue ticket was opened almost a year ago and is still open.

MySQL is not up to the task: a week of data makes the database grow to a few tens of gigabytes. The script provided by Ceilometer to delete old data then tries to load everything into memory, hits swap, and all hope is lost. I let it run for a week trying to delete one day of data, then I decided to try something else.

Ceilometer can send the data in msgpack format via UDP to an external collector. I removed the central agent and the database and pointed Ceilometer at Logstash. Connecting Logstash to Elasticsearch is very easy, and now I can do something useful with the data. It works quite well, and here is how I did it:

First, the Ceilometer pipeline.yaml:

sources:
  - name: bigfoot_source
    interval: 60
    meters:
        - "*"
    sinks:
        - bigfoot_sink
sinks:
  - name: bigfoot_sink
    transformers:
    publishers:
        - udp://<logstash ip>:<logstash port>

This file has to be copied to all OpenStack hosts running Ceilometer agents (even to the Swift proxy, which does not have a standalone agent). Logstash needs an input:

input {
  udp {
    port => <some port>
    codec => msgpack
    type => ceilometer
  }
}
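To check that the input decodes samples as expected, you can hand-craft one and send it. A minimal Python sketch; the host, port and field values are placeholders of my choosing, only counter_name and timestamp matter for the filters below:

# Send one msgpack-encoded sample to the Logstash UDP input above.
# Host, port and values are placeholders; adapt them to your setup.
import datetime
import socket

import msgpack  # pip install msgpack

sample = {
    "counter_name": "bandwidth",
    "counter_volume": 1234,
    # The same timestamp format Ceilometer emits, microseconds included:
    "timestamp": datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S.%f"),
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msgpack.packb(sample), ("logstash.example.org", 5959))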

Some filters are also needed:

filter {
  if [type] == "ceilometer" and [counter_name] in ["bandwidth", "volume", "volume.size"] {
    date {
      match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
      remove_field => "timestamp"
      timezone => "UTC"
    }
  }
  if [type] == "ceilometer" and [counter_name] in ["volume", "volume.size"] {
    date {
      match => [ "[resource_metadata][created_at]", "yyyy-MM-dd HH:mm:ss" ]
      remove_field => "[resource_metadata][created_at]"
      target => "[resource_metadata][created_at_parsed]"
      timezone => "UTC"
    }
  }
}

These filters are needed because the date formats in Ceilometer messages are inconsistent. Without them, Elasticsearch tries to parse the dates itself, fails, and discards the messages. Perhaps other messages have the same problem, but the only way to find them is to wait for a parser exception in the Elasticsearch logs.
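For completeness, the output side is a plain elasticsearch section. This is a minimal sketch for the Logstash releases of that era; the address and index name are placeholders to adapt:

output {
  elasticsearch {
    host => "<elasticsearch ip>"
    protocol => "http"
    index => "ceilometer-%{+YYYY.MM.dd}"
  }
}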

I think this configuration is much more scalable, flexible and useful to the end user than the standard Ceilometer way, with its database and custom API for which there are no consumers.

Application logs

Managing an OpenStack deployment is hard. It is a complex distributed system, composed of many processes running on different physical machines. Each process logs to a different file, on the machine it is running on. A general lack of documentation and inconsistent use of log levels across OpenStack projects mean that investigating an issue is a tedious and time-consuming job. Processes need to be restarted with debugging enabled, and a few important lines get lost among tons of other stuff.

To simplify this situation I again used Logstash and Elasticsearch to gather all the application logs coming from OpenStack. To make it work at its best and provide meaningful searches (all messages from a certain Python module, all messages with exception traces), I decided to use a Python logging module that translates Python log records into Logstash (JSON) dictionaries. This way the structure is preserved rather than lost in a syslog-like text rendering.

Doing that is fairly simple, with a few caveats. First, you will need to install python-logstash (my version reduces the number of fields that get discarded).

Then add this file to the configuration directory of the project you want to get the logs from (for example /etc/neutron/logging.conf):

[loggers]
keys=root

[handlers]
keys=stream

[formatters]
keys=

[logger_root]
level=INFO
handlers=stream

[handler_stream]
class=logstash.UDPLogstashHandler
args=(<logstash server>, <logstash port>, 'pythonlog', None, False, 1)

And finally add this option to Neutron’s config file (it is the same for the other projects):

log_config_append=/etc/neutron/logging.conf

Configuring Logstash is easy, just add this:

input {
  udp {
    codec => "json"
    port => <some port>
    type => "pythonlog"
    buffer_size => 16384
  }
}
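To test the whole path without restarting an OpenStack service, you can drive the same handler directly from Python. A small sketch, with the Logstash address as a placeholder; the positional arguments mirror the args tuple in logging.conf:

# Configure the same handler logging.conf sets up and send one test record.
# The host and port are placeholders; adapt them to your Logstash server.
import logging

import logstash  # pip install python-logstash

logger = logging.getLogger("smoke_test")
logger.setLevel(logging.INFO)
logger.addHandler(logstash.UDPLogstashHandler(
    "logstash.example.org", 5959,  # <logstash server>, <logstash port>
    "pythonlog",                   # message_type
    None, False, 1,                # tags, fqdn, version, as in logging.conf
))
logger.info("hello from python-logstash")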

Now, the caveats:

  • this will completely disable any other logging option you set in the configuration files (debug, verbose, log_dir, …). Disregard what the documentation says about the logging configuration being appended: it is not, it overrides everything else, and because of how the Python logging system is made, nothing can be done about it. If you use this option, all logging configuration has to live in the logging.conf file.
  • The configuration above discards all logs with a level lower than INFO. The DEBUG level is needed to investigate issues, but the volume of logs coming from a full OpenStack install at DEBUG level is just too big and imposes useless load.
  • Depending on the size and load of your deployment, you may need to scale up both Logstash and Elasticsearch.

Bye bye Swiss plates

[image]

With these I went up to Oslo and back; I covered almost 30,000 km across Switzerland, Italy, France, Germany, Denmark, Sweden and Norway.
With these I met a wonderful girl and made many friends, I moved all my stuff out of Switzerland under the nose of the border guards, and I collected three fines, all in France, all for speeding a few kilometres per hour over the limit.
They are part of my history, and if I could keep them I would… but no, tomorrow I will go to the post office and send them away. Someone else needs them.

To you, anonymous person in the canton of Vaud, who will get them for your brand new car: best of luck, use them well, and watch out for the speed limits in France.

French customs

[image]

The French customs office in Nice is at the airport, in the Cargo area. Just follow the ‘cargo’ signs, leave an ID document at the entrance (not your identity card, which you will need later) and go up to the first floor, along the walkway you can see in the photo.

You can go there for anything: not just goods arriving by air, but also exotic animals, paintings or, as in my case, cars…

Do it yourself

The Samsung Series 9 is a laptop with a power connector unlike any other ever made. After fruitlessly touring two or three computer shops, I went to Carrefour and got what I needed: double-sided tape, some wire and tin foil. Small satisfactions…

A note for those who do not know yet: I changed jobs, and I am now at Eurecom, on the Côte d’Azur. I work as an Ingénieur de Recherche on cloud, big data and IaaS problems, to use some suitably vague technical jargon. Today it is snowing, so it is almost like still being in Switzerland.

SOSP 2011

[image]

These days I am in Cascais, Portugal, attending SOSP 2011. Today is the last day, and it seems the weather forecast has finally got it right. Out there a real Atlantic storm is raging against the hotel’s windows. The seagulls use the wind to dive into the swimming pool and wash the sea salt off.
In theory this afternoon is free, but a thunderstorm is forecast. Luckily, yesterday we escaped from the “work in progress” session and took a trip to Sintra, making the most of a gorgeous day.

I am attaching a photo of the room, or rather of the suite, I was assigned for these 4 nights. I am not used to this kind of treatment: in the afternoon the maid comes by to prepare the room for the night, draws the curtains, lays the pyjamas out on the bed and puts the mat down in the bathroom for a possible shower.