Intro

Prometheus is an easily scalable solution for monitoring your infrastructure and applications. It is easy to install, and it can quickly start monitoring your application; you don't even need to change its legacy code!

It is also a time-series database, a query language and an alert generator.

We will have a look at all these topics during the workshop. An exciting part of the session is that we're going to apply our freshly gained knowledge to an amusing application: cryptocurrency price tracking. I hope everyone will learn something new today and have fun.

«Even though Borgmon remains internal to Google, the idea of treating time-series data as a data source for generating alerts is now accessible to everyone through those open source tools like Prometheus [...]»

β€” Site Reliability Engineering: How Google Runs Production Systems (O'Reilly Media)

There are a number of headings in this workshop that have a particular meaning:

  • 😱 In case of a bad internet connection you have to follow this advice
  • 😅 This designates an assignment
  • 🙋 This is a hint
  • And this is an answer

    42

And right here is your first assignment:

😅 Set up tools

  1. Clone the workshop's Git repository, or just download it as an archive.
  2. Install Java 1.8
    • [OPTIONAL] Or install Docker and Docker Compose to use the provided Dockerfile instead of installing Java.
  3. Install Python 3
    • [OPTIONAL] You can use Python 2, but you should be able to translate the existing Python 3 code to Python 2 by yourself

You're going to need the following tools during the workshop:

What                         Version  Where
Prometheus                   2.3.1    https://prometheus.io/download/
AlertManager                 0.15.0   https://prometheus.io/download/
Grafana                      5.1.4    http://docs.grafana.org/installation/
Node exporter                0.16.0   https://prometheus.io/download/
WMI exporter (Windows only)  0.3.3    https://github.com/martinlindhe/wmi_exporter

😱 In case of a bad internet connection you can find these binary packages under the tools folder. Even if the connection is good you're welcome to use them as well 😀

To install them from a local source there is a handy script. Invoke it with the name of your operating system (linux, darwin, or windows).

tools/setup.sh darwin

On Windows you have to use Bash for Windows or Cygwin (you will need to install the unzip command-line utility too), or extract the archives into the corresponding directories in the work folder manually.

On OS X you should prefer to install Grafana with Homebrew if possible, as the local bundle was hacked together by me from the Homebrew install and is thus not so robust 😀

Ports

We've just installed a bunch of tools. If you ever forget which port is used by which tool, feel free to look at the ports cheatsheet.
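For quick reference, these are the default ports the tools listen on, assuming you keep their stock configuration:

Tool           Default port
Prometheus     9090
AlertManager   9093
Grafana        3000
Node exporter  9100
WMI exporter   9182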

Project anatomy:

  • dockers - a Dockerfile for the application we're going to monitor; using it is completely optional
  • prez - a workshop presentation source code
  • scraper - a source code of an application that monitors cryptocurrency prices
  • tools - archives of different tools for this workshop for the most popular operating systems. So you just have to untar them into the right directory.
  • work - the main working directory that contains a prebuilt JAR of the scraper application and empty folders for each tool we're going to install during this workshop
  • workshop - this microsite source code

Overview

This workshop is all about monitoring so we need something to monitor :)

Our target here will be a small Scala application that queries a public web API available on CoinMarketCap and exposes its data via a REST-over-HTTP API. I wanted to make this look as legacy as possible, so this application gives us good old well-known XML data.

This application is referred to everywhere in this workshop as scraper.

Implementation details are not so important here as we're going to treat this application as a black box. It means we don't need to understand how it works and perhaps we don't even have its source code.
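Since we treat scraper as a black box, all we need is its HTTP API. Here is a minimal Python sketch of such a client; note that the endpoint URL and the XML schema below are pure assumptions for illustration, the real ones are whatever scraper actually exposes:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical endpoint; the real path depends on scraper's API.
SCRAPER_URL = "http://localhost:8080/prices"

def parse_prices(xml_text):
    """Parse a hypothetical <prices> document into a {currency: price} dict."""
    root = ET.fromstring(xml_text)
    return {elem.get("currency"): float(elem.text) for elem in root.findall("price")}

def fetch_prices(url=SCRAPER_URL):
    """Fetch the XML payload from the running scraper and parse it."""
    with urllib.request.urlopen(url) as resp:
        return parse_prices(resp.read().decode("utf-8"))

# Offline demonstration with a made-up payload:
sample = '<prices><price currency="BTC">6500.0</price><price currency="ETH">450.0</price></prices>'
print(parse_prices(sample))  # {'BTC': 6500.0, 'ETH': 450.0}
```

The point is only that a black-box application speaks some wire format we can consume without reading its source.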

As a side note, I want to underline that it relies heavily on the Typelevel functional web stack.

Our plan for the next 70 minutes

part 1

  • launch the scraper application, make sure it works as expected, and explore all of its available API
  • install Prometheus, configure it to monitor itself, and familiarize ourselves with its interface
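The self-monitoring setup mentioned above boils down to a minimal prometheus.yml; this is the canonical self-scrape configuration, assuming Prometheus runs on its default port 9090:

```yaml
# prometheus.yml - a minimal configuration where Prometheus scrapes itself
global:
  scrape_interval: 15s        # how often to collect metrics

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']   # Prometheus' own /metrics endpoint
```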

part 2

  • explore how we can use node_exporter or wmi_exporter to monitor resource consumption
  • configure Prometheus in a way that we can see this consumption

part 3

  • install Grafana
  • start to create our dashboard

part 4

  • write our own Prometheus exporter for the scraper application
  • finally, add cryptocurrency prices to the Grafana dashboard

part 5

  • create a new time-series with the help of some built-in functions
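As a taste of this part: PromQL ships with built-in functions that derive new series from existing ones. Assuming a hypothetical crypto_price gauge exported in part 4, such queries might look like:

```
# Absolute price change over the last hour (delta of a gauge):
delta(crypto_price[1h])

# Average price over the last 10 minutes:
avg_over_time(crypto_price[10m])
```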

part 6

  • configure a price alert
  • configure a price-change alert
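To preview what these alerts might look like, here is a sketch of Prometheus 2.x alerting rules; the crypto_price metric name and the thresholds are made-up assumptions, not the workshop's actual values:

```yaml
# alerts.yml - hypothetical alerting rules over the scraper's price metric
groups:
  - name: crypto
    rules:
      - alert: BitcoinPriceTooLow
        expr: crypto_price{currency="BTC"} < 6000
        for: 5m                     # must hold for 5 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "BTC price dropped below 6000 USD"

      - alert: BitcoinPriceChangedFast
        # delta() over a gauge gives the change during the window
        expr: abs(delta(crypto_price{currency="BTC"}[1h])) > 500
        labels:
          severity: warning
        annotations:
          summary: "BTC price moved more than 500 USD within an hour"
```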