This year’s Devoxx started in “Michael Bay movie” style, i.e. with a big bang. The keynote was very entertaining thanks to James Veitch and his “Dot Con” talk, in which he showed how he turns some of the most annoying scam emails against their authors by twisting what they say, following their instructions incorrectly or even trying to “trick” them into sending him a new toaster.
Ted Neward’s “Guide to hacking Java” was a talk not for the faint-hearted. He did a really good job of interacting with the audience, letting people choose what they wanted to hear about, and kept everyone interested until the last minute of his talk. If you like to lift the JVM’s bonnet and get your hands dirty, you should have been there. Among the things you could get familiar with were JNI, how to create your own launcher, how libraries like Lombok actually work, Java agents and the debugging APIs.
Getting fingers pointed at you because you still use WebSphere? Guys from the second floor making jokes about you because you’re still using Grunt? Matt Raible has a cure for that. During his talk on JHipster he showed how to stay cool and fresh in the sophisticated world of frameworks, tools and libraries. He gave an intro to JHipster and its capabilities, creating a simple application with authentication, AngularJS, Gatling tests and UML diagrams, and deploying everything to the cloud. In the meantime he could sip his favourite whisky and beer (not sure if it was the best mix though).
Christopher Batey usually has a lot of practical knowledge to share. The title “JVM and Docker, a good idea?” didn’t seem like a big novelty. Fortunately Chris touched on a lot of topics around optimising containers running a JVM, such as cgroups, namespaces, and measuring and limiting resource usage. He also suggested a few interesting tools which show what’s happening inside your containers:
For me the most exciting part was when Christopher discussed how to run applications which use a limited number of threads. He gave an example using Ratpack, which is conceptually very similar to Vert.x, which I currently use for building microservices. Can’t wait to play with all these tools at work.
The last day of Devoxx ended with a set of workshops. It was a pleasure to take part in them, although many participants did not have the software pre-installed and their skill levels varied, so at times the pace seemed too slow for some people.
It was really good to see that the London Java community is so active! Devoxx gave me a lot of ideas and motivation this year; that’s why I love conferences so much!
Having multiple distributed services can generate a great deal of data, including health statuses, state changes and much more. One of our projects required a service which could monitor such information and display it to the user in a friendly manner. One of the problems was that we had no influence over some of the services and no knowledge of the technologies they used, apart from the fact that they exposed ubiquitous REST services over HTTP. It would have been a problem if dedicated database clients were required to communicate with the storage, or if we had to convince everyone to adopt a new messaging format.
InfluxDB
Fortunately there is a time series database which accepts pure REST communication to persist monitoring information: InfluxDB. It’s a database implemented in Go which uses an SQL-like query language designed for working with time series and analytics.
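To give a flavour of that language, a query over a hypothetical “response_times” series could look roughly like the line below (the series and column names are made up purely for illustration):

select percentile(duration, 95) from response_times where time > now() - 1h group by time(5m)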
To display this data we chose Grafana. It’s an easy-to-use, AngularJS-based website which can query InfluxDB’s REST interface. One of the advantages of this solution is that it does not require detailed technical knowledge. Moreover, I must say that Grafana’s chart layout is very neat.
Installation
To install InfluxDB we need to download a package from the official website and install it via the Linux package manager. Grafana is a different story. It’s a static website communicating with InfluxDB via AJAX, so it needs an HTTP server such as Apache.
To save you the pain, I’ve prepared a bash script. First it installs InfluxDB, Grafana and Apache. Then it configures Grafana to communicate with your local InfluxDB (via localhost, so you might want to substitute it in config.js) and generates exemplary data which is persisted into InfluxDB using curl. To download the script, please execute the commands below in your terminal. Bear in mind that the script installs the 32-bit version of InfluxDB – if you’ve got a 64-bit OS, just change the InfluxDB package URL in the script. One more thing: I’ve tested it only under Ubuntu.
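For the curious, the data-generating part of the script boils down to a loop roughly like the one below. This is a simplified sketch rather than the script verbatim; the database name, series name and root/root credentials simply match the examples used later in this post:

# simplified sketch of the seeding loop, not the real script verbatim
SERIES_URL='http://localhost:8086/db/my_new_database/series?u=root&p=root'
for i in $(seq 1 60); do
  # bash only does integer arithmetic, hence the precision problems mentioned below
  val="$(( (i * 7) % 40 )).$(( (i * 13) % 100 ))"
  curl -s -X POST -d "[{\"name\":\"fun\",\"columns\":[\"val\"],\"points\":[[\"$val\"]]}]" "$SERIES_URL" > /dev/null
  sleep 1
done

The sleep between inserts makes InfluxDB stamp consecutive points roughly one second apart, which gives Grafana something sensible to plot.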
Now you are able to plot a chart in Grafana. Unfortunately I’ve had some problems with floating point precision in Bash, so the chart isn’t as pretty as it should be, but that’s out of the scope of this blog post. Let’s plot this ugly… chart!
But before we do that, it’s a good idea to familiarize ourselves with InfluxDB’s REST interface.
InfluxDB HTTP interaction
If you open the previous script in an editor, you’ll see that some HTTP requests are being sent to InfluxDB. First, two databases are created (one for data and one for Grafana to save its settings) and then one of them is filled with data.
To create a database named “my_new_database”, we send a POST request with a JSON body to the InfluxDB instance:
curl -X POST 'http://localhost:8086/db?u=root&p=root' -d '{"name": "my_new_database"}'
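As a quick sanity check (assuming the 0.8-era API used throughout this post), issuing a GET request to the same endpoint should list the existing databases, so the newly created one ought to show up there:

curl 'http://localhost:8086/db?u=root&p=root'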
Next, to insert data into the previously created database, we use this POST request with a JSON payload:
curl -X POST -d '[{"name":"fun","columns":["val", "other"],"points":[["23.3256", 100]]}]' 'http://localhost:8086/db/my_new_database/series?u=root&p=root'
You might be wondering what all these fields mean:
name – in the SQL world, we would call it a table,
columns – column names,
points – an array of values whose order corresponds to the “columns” field. To make things more efficient you can insert data in bulk by sending multiple array elements in one request. You can also set the timestamp for each tuple by adding a “time” column with a specific timestamp value; otherwise InfluxDB will assign a timestamp by default (see the example after this list).
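To illustrate the last two points, a bulk insert with explicit timestamps could look roughly like the request below. The time_precision parameter tells InfluxDB that the supplied timestamps are in seconds; the timestamps and values themselves are made up, so treat this as a sketch against the 0.8 API used in this post rather than a definitive recipe:

curl -X POST -d '[{"name":"fun","columns":["time","val","other"],"points":[[1419852000, "11.2", 50], [1419852060, "15.7", 75]]}]' 'http://localhost:8086/db/my_new_database/series?u=root&p=root&time_precision=s'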
OK, now that we’ve managed to create a table and fill it with data, it’s time to query it. Run this request in bash to see the data we previously inserted (this time we use the GET method):
curl -G 'http://localhost:8086/db/my_new_database/series?u=root&p=root&pretty=true' --data-urlencode "q=select * from fun"
Of course, there are different ways to communicate with InfluxDB: for example you can use the Graphite protocol, send the data via UDP or use any of the language-specific clients they provide. There are many more tricks you can do with it; for more information visit the official docs.
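For instance, the UDP route can be as simple as firing the same JSON payload at the UDP port with netcat. This assumes you have enabled the UDP input plugin in InfluxDB’s configuration; the port below (4444) is taken from the example config, so adjust it to whatever you configured:

echo -n '[{"name":"fun","columns":["val"],"points":[["12.5"]]}]' | nc -u -w1 localhost 4444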
Grafana dashboard step by step
First, we will create our dashboard and save it. This way we’ll prove that our Grafana configuration is working and that it communicates properly with InfluxDB. Navigate to http://localhost/grafana, create a dashboard and click save.
Now let’s create a new chart by following the second screenshot: click on the new chart’s title and choose “edit”.
Add a simple query to display a chart. Our data series is named “fun” and the column is “val” (this data was generated by the script you ran in the installation section).
Next, to show more data, add a few queries by hand in raw mode by clicking “Add query” and choosing “Raw query mode” for each of them:
select mean(val + 20) from "fun" group by time(1s)
select mean(val + 30) from "fun" group by time(10s)
The result should resemble screenshot 1.4.
By manipulating the “Display Styles” section and the chart colours, I’ve managed to create this graph:
To wrap things up: in this article we’ve configured a simple monitoring environment, created a sample dashboard with aesthetically pleasing graphs, written basic queries for InfluxDB on our own and sent them with curl. I’ve also shown you a couple of screenshots from Grafana so you could get more familiar with its interface.