
From Google Analytics to Matomo Part 3

In Part 1, we talked about why we switched from Google Analytics to Matomo. In Part 2, we discussed how we designed the architecture. Finally, here in Part 3, we'll look at the Matomo-specific changes needed to support that architecture.

First, we modified the Dockerfile so that we could run commands as part of container startup. This lets classdojo_entrypoint.sh run, while the process the container ultimately runs is the long-running apache2-foreground:

# The matomo version here must exactly match the version in the matomo_plugin_download.sh script
FROM matomo:4.2.1
ADD classdojo_entrypoint.sh /classdojo_entrypoint.sh
ADD ./tmp/SecurityInfo /var/www/html/plugins/SecurityInfo
ADD ./tmp/QueuedTracking /var/www/html/plugins/QueuedTracking
ADD ./tmp/dbip-city-lite-2021-03.mmdb /var/www/html/misc/DBIP-City.mmdb
RUN chmod +x /classdojo_entrypoint.sh
ENTRYPOINT ["/classdojo_entrypoint.sh"]
CMD ["apache2-foreground"]

Next, we wrote a script to download plugins and geolocation data to bake into the Docker image:

#!/bin/sh
set -e

MATOMO_VERSION="4.0.2"

rm -rf ./tmp
mkdir ./tmp/
cd ./tmp/

# This script downloads and unarchives plugins. These plugins must be activated in the running docker container
# to function, which happens in matomo_plugin_activate.sh
curl -f https://plugins.matomo.org/api/2.0/plugins/QueuedTracking/download/${MATOMO_VERSION} --output QueuedTracking.zip
unzip QueuedTracking.zip -d .
rm QueuedTracking.zip
curl -f https://plugins.matomo.org/api/2.0/plugins/SecurityInfo/download/${MATOMO_VERSION} --output SecurityInfo.zip
unzip SecurityInfo.zip -d .
rm SecurityInfo.zip

curl -f https://download.db-ip.com/free/dbip-city-lite-2021-03.mmdb.gz --output dbip-city-lite-2021-03.mmdb.gz
gunzip dbip-city-lite-2021-03.mmdb.gz

cd ..

Then we wrote the entrypoint file itself. Since we overrode the original entrypoint, ours needs to unpack the Matomo image and fix some permissions first; after that, it activates the plugins we want to include:

#!/bin/sh
set -e

if [ ! -e matomo.php ]; then
    tar cf - --one-file-system -C /usr/src/matomo . | tar xf -
    chown -R www-data:www-data .
fi

mkdir -p /var/www/html/tmp/cache/tracker/
mkdir -p /var/www/html/tmp/assets
mkdir -p /var/www/html/tmp/templates_c
chown -R www-data:www-data /var/www/html
find /var/www/html/tmp/assets -type f -exec chmod 644 {} \;
find /var/www/html/tmp/assets -type d -exec chmod 755 {} \;
find /var/www/html/tmp/cache -type f -exec chmod 644 {} \;
find /var/www/html/tmp/cache -type d -exec chmod 755 {} \;
find /var/www/html/tmp/templates_c -type f -exec chmod 644 {} \;
find /var/www/html/tmp/templates_c -type d -exec chmod 755 {} \;

# activate matomo plugins that were downloaded and added to the image
/var/www/html/console plugin:activate SecurityInfo
/var/www/html/console plugin:activate QueuedTracking

exec "$@"

We tie it together with a Makefile to build and publish these Docker images:

build-img:
	sh ./matomo_plugin_download.sh
	docker build . -t classdojo/matomo
	rm -rf ./tmp

push-img:
	docker tag classdojo/matomo:latest xxx.dkr.ecr.us-east-1.amazonaws.com/classdojo/matomo:latest
	docker tag classdojo/matomo:latest xxx.dkr.ecr.us-east-1.amazonaws.com/classdojo/matomo:${BUILD_STRING}
	aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin xxx.dkr.ecr.us-east-1.amazonaws.com
	docker push xxx.dkr.ecr.us-east-1.amazonaws.com/classdojo/matomo:latest
	docker push xxx.dkr.ecr.us-east-1.amazonaws.com/classdojo/matomo:${BUILD_STRING}

Inside our Nomad job specifications, we inject a config.ini.php file containing the customized configuration for Matomo. It is a copy of the original Matomo config.ini.php file, but with some important changes:

[General]
proxy_client_headers[] = "HTTP_X_FORWARDED_FOR"
force_ssl = 1
enable_auto_update = 0
multi_server_environment = 1
browser_archiving_disabled_enforce = 1
The proxy_client_headers and force_ssl settings are part of our SSL setup. Setting enable_auto_update = 0 prevents containers from updating on their own, so that we can coordinate updates across all containers. Setting multi_server_environment = 1 prevents plugin installation from the UI and disables UI changes that would write to the config.ini.php file. Finally, browser_archiving_disabled_enforce = 1 ensures that the archiving job is the only place archiving runs, and that archiving won't happen on demand.

For the ingestion containers, which should not serve the admin frontend, we also set:

; Maintenance mode disables the admin interface, but still allows tracking
maintenance_mode = 1
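
For illustration, one way to inject a file like this from a Nomad job spec is a template stanza rendered into the task directory and mounted over Matomo's config path. The sketch below is an assumption-laden example, not our actual spec: the task name, image tag, and mount path are illustrative.

task "matomo" {
  driver = "docker"

  # Render the customized config.ini.php into the task's local directory.
  template {
    destination = "local/config.ini.php"
    data        = <<-EOF
      [General]
      force_ssl = 1
      ; ... remaining customized settings from above ...
    EOF
  }

  config {
    image = "xxx.dkr.ecr.us-east-1.amazonaws.com/classdojo/matomo:latest"
    # Mount the rendered file over Matomo's config path inside the container.
    volumes = ["local/config.ini.php:/var/www/html/config/config.ini.php"]
  }
}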

Another major change is that the Docker command for the queue processor becomes:

command = "/bin/sh"
args = ["-c", "while true; do /var/www/html/console queuedtracking:process; done"]

This allows the job to run in a loop, continuously processing the items in the queue.

Similarly, the archive job is changed to:

command = "/var/www/html/console"
args = ["core:archive"]

This runs the archiving job directly. The admin and ingestion containers all use the default Docker command and arguments.

That’s the end of our current journey from Google Analytics to Matomo. There’s more work we have to do around production monitoring and making upgrades easier, but we’re very happy with the performance of Matomo at our scale, and its ability to grow with ClassDojo.

    In Part 1, we discussed why we were moving from Google Analytics to Matomo. Now, in Part 2, we’ll talk about how we architected Matomo to handle traffic at ClassDojo’s scale.

    Architecture

    We are using the official Matomo Docker image to run the Matomo PHP application, which provides:

    • An event ingestion endpoint (matomo.php)
    • An administration and reporting interface
    • Periodic jobs
    • Command line tools

    While a single container can perform all of these functions, each function has different performance and security characteristics, so we decided to separate them and take advantage of a different configuration for each.

    Event ingestion

    First, we deployed an auto-scaling, multi-container deployment to our Nomad clients. The containers serve matomo.js (a static JavaScript file) and matomo.php (the main PHP executable for event ingestion), and we route to these URLs via HAProxy.

    These event ingestion containers are publicly available, and they also contain another PHP script, index.php, which is the admin and reporting interface. We do not want to expose that publicly. Matomo can disable this interface by setting maintenance_mode=1 in the configuration file, and we've turned it on for these containers. Additionally, we rewrite all /ma/*.php requests to /ma/matomo.php, which forces everyone to the event ingestion code instead of the admin code.
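
    As an illustration only (the ACL and backend names are made up, and this is not our production config), that routing and rewrite could look roughly like this in HAProxy:

    # route /ma/ traffic to the event ingestion containers
    acl matomo_path   path_beg /ma/
    acl matomo_php    path_end .php
    acl matomo_track  path     /ma/matomo.php

    # force any other /ma/*.php request onto the tracker endpoint
    http-request set-path /ma/matomo.php if matomo_path matomo_php !matomo_track

    use_backend matomo_ingest if matomo_path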

    Admin and reporting

    Next, we create a separate Nomad job for Matomo administration. It is deployed as a single container, and HAProxy routes to this container only on our internal network. Unlike the ingestion containers, this one has the admin interface exposed. Matomo configuration lives in a combination of a PHP configuration file (config.ini.php) and a MySQL database. Changes stored in the database are safe to make, because they are synchronized across all running containers. Changes written to the config file are not, since they would only land on the container serving the admin interface. For this reason, we set multi_server_environment=1 in the config file, which prevents changing any setting that would write to the config.ini.php file. Instead, these changes need to be deployed via Nomad job spec changes. Additionally, we turn auto updates off with enable_auto_update=0, so that Matomo instances aren't updating themselves and separately trying to migrate the MySQL database.

    Periodic jobs

    Out of the box, Matomo does everything on the tail end of user-initiated requests. This means that when a user is using the admin site, Matomo might decide it needs to do some log archiving or report building, and if there are events waiting in a queue, Matomo might process them at the end of ingesting an event. This isn't ideal for us, as it could create undesired performance problems: an admin site that slows down unexpectedly, or tracking backing up and a queue growing too large. So we have disabled these periodic jobs (archiving and queue processing) and run them separately as two more Nomad periodic jobs: one for processing the queue of incoming events, and one for archiving our event databases.

    Queue Processing

    By default, Matomo writes event entries directly to the database, but at our scale we want to write to a fast queue and then batch process the queue into the database. This lets us handle database failovers and upgrades, and also provides slack for spikes in traffic. It also lets us run long queries on the admin site without worrying about impacting incoming events. Matomo's QueuedTracking plugin moves event ingestion to write to a Redis queue instead. This is fast and reliable, and the queue can be processed out of band so that event ingestion continues while DB maintenance happens.
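
    For reference, the plugin's Redis connection and worker settings can typically be kept in config.ini.php, which matters for us because multi_server_environment blocks UI-driven configuration changes. The snippet below is illustrative only: the hostname is made up, and the exact setting names may vary between plugin versions, so treat them as assumptions rather than a copy of our config.

    ; illustrative values only; the hostname and setting names are assumptions
    [QueuedTracking]
    queueEnabled = 1
    redisHost = "matomo-redis.example.internal"
    redisPort = 6379
    redisDatabase = 0
    numQueueWorkers = 8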

    At first, we ran the queue processing job every minute as a Nomad periodic job. At our scale, we were not able to process the full queue each minute, and events were backing up in the queue throughout the day. This caused delays in data showing up in Matomo, and we were also running out of memory in Redis. We changed from a periodic job to a long-running job that runs multiple queue workers (8 right now) by setting numQueueWorkers in the QueuedTracking plugin. It’s important to remember both to set numQueueWorkers and to run the same number of simultaneous queue worker tasks.
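
    To illustrate keeping those numbers in sync, a Nomad group can run one looping worker task per configured queue worker, reusing the looping command from Part 3 above. This is a sketch with made-up group and task names and an illustrative image tag, not our actual job spec:

    group "queue-workers" {
      # should match numQueueWorkers in the QueuedTracking settings
      count = 8

      task "process-queue" {
        driver = "docker"

        config {
          image   = "xxx.dkr.ecr.us-east-1.amazonaws.com/classdojo/matomo:latest"
          command = "/bin/sh"
          args    = ["-c", "while true; do /var/www/html/console queuedtracking:process; done"]
        }
      }
    }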

    Archiving

    Matomo stores each event individually, but it also maintains aggregate reports (today, this week, this month, last month, this year, etc.). To build those reports, Matomo runs an "archive" process. This job runs once a day as a Nomad periodic job.
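
    A once-a-day archiving run like this maps naturally onto a Nomad periodic batch job. The sketch below uses made-up job and task names, an illustrative schedule and datacenter, and an illustrative image tag rather than our real spec:

    job "matomo-archive" {
      datacenters = ["dc1"]   # illustrative
      type        = "batch"

      periodic {
        cron             = "0 6 * * *"   # once a day
        prohibit_overlap = true
      }

      group "archive" {
        task "core-archive" {
          driver = "docker"

          config {
            image   = "xxx.dkr.ecr.us-east-1.amazonaws.com/classdojo/matomo:latest"
            command = "/var/www/html/console"
            args    = ["core:archive"]
          }
        }
      }
    }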

    We are happy with how we designed the Matomo architecture, but it took some time to get container configuration working. We’ll talk about this in Part 3.

      ClassDojo is committed to keeping the data from our teachers, parents, and students secure and private. We have a few principles that guide how we handle user data:

      • We minimize the information we collect, and limit it to only what is necessary to provide and improve the ClassDojo products. Data we don’t collect is data we don’t have to secure.
      • We limit sharing data with third parties, sharing only what is necessary for them to provide a service to ClassDojo, and making sure that they abide by ClassDojo and legal requirements for the shared data.
      • We delete data that we collect when it is no longer needed to run the ClassDojo product.

      For a long time, we used Google Analytics on our main website, https://www.classdojo.com/. While we avoided including it in the signed-in ClassDojo product itself, we used Google Analytics to understand who was coming to our website and what pages they visited. We recently decided to remove Google Analytics from our main website and replace it with a self-hosted instance of Matomo (https://matomo.org/).

      Self-hosted Matomo allows us to improve our data policies in a number of ways:

      • We no longer share user activity and browser information with Google directly
      • We no longer use common Google tracking cookies that allow Google to correlate activity on the ClassDojo website with other websites
      • We can customize data retention, ensuring data collected is deleted after it is no longer necessary for our product quality work

      But there were some other requirements that we needed to verify before we migrated:

      • Would Matomo work in our infrastructure stack? We use stateless Docker containers orchestrated by HashiCorp Nomad clients
      • Could we minimize the public surface area Matomo exposes for increased security?
      • Would Matomo scale for our regular usage patterns, and our occasional large spikes?
      • Could we deploy in a way where there is zero downtime maintenance?

      Matomo was able to meet these needs with some configuration, and now we’re collecting millions of actions per day on our main website. We'll publish Part 2 soon, where we’ll talk about how we architected Matomo to do this.
