We've all copy-pasted data into Excel to chart it, and it's fair to say it's a tedious process. On a good day, some of us might use Gnuplot or another static charting library to do the job, but who wants to write code to generate a chart?
On the other hand, the Big Data world has brought streaming and batch solutions that are amazing at what they do, but are unfortunately pretty clunky in situations where we want to explore data in a highly interactive and dynamic way.
RTM is a simple tool designed to ingest massive amounts of measurements and compute statistics in real time with maximum flexibility.
The following command line creates a measurement named myMeasurementName, with an epoch start time of 1494428139000 and a value of 1234, and flags it with the logical group name myMeasurementGroup.
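Since the original command isn't reproduced here, the following is a minimal sketch; the host, port, endpoint path, and field names (begin, group) are assumptions, not RTM's documented API:

```shell
# Sketch only: the endpoint and field names below are assumptions about the
# RTM deployment -- substitute the values for your own instance.
PAYLOAD="name=myMeasurementName,begin=1494428139000,value=1234,group=myMeasurementGroup"
echo "curl -X POST http://localhost:8080/rtm/ingest -d \"$PAYLOAD\""
# Remove the echo (or pipe to sh) to actually send the measurement.
```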
You can also attach as much additional custom data as you wish in order to tag your measurement or perform filtering or grouping in your analysis later on. Just append comma-separated key-value pairs like so:
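For instance, sticking with the comma-separated key-value format described above (color and country are made-up custom fields, and the endpoint remains an assumption):

```shell
# Base measurement fields, as in the previous example.
PAYLOAD="name=myMeasurementName,begin=1494428139000,value=1234,group=myMeasurementGroup"
# Append hypothetical custom key-value pairs for later filtering or grouping.
PAYLOAD="$PAYLOAD,color=red,country=CH"
echo "curl -X POST http://localhost:8080/rtm/ingest -d \"$PAYLOAD\""
```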
It doesn't get much simpler than that, really. Of course, you don't have to use curl; any HTTP client will do. We also have native connectors with direct access to the database for higher performance, when needed.
Intuitive selectors let you filter your data on as many criteria as you like. The default filter is the logical group your measurements belong to, but you can then aim your analysis at a specific measurement, or at measurements holding a certain value in a custom field, for example. Regex, numerical comparisons, and date comparisons are also supported.
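As an illustration of what such a selector could look like, here is a hypothetical filter document combining the group filter with a regex and a numerical comparison; the field names and operator syntax are assumptions, not RTM's actual query language:

```shell
# Hypothetical selector: restrict to a logical group, match measurement
# names against a regex, and keep only values above a threshold.
SELECTOR='{"group":"myMeasurementGroup","name":{"regex":"^checkout.*"},"value":{"gt":1000}}'
echo "$SELECTOR"
```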
A distributed, parallel service takes care of aggregating your data. You can define an interval size for grouping your data across time (the time field is configurable and could actually represent anything other than time). You can also define split criteria (see the groupby field) in order to produce multiple series and isolate patterns in your samples. The service uses distributed, incremental algorithms for all statistics, including more advanced ones such as histograms and percentiles. You can pick and choose how many CPUs to use for any given query.
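Putting the interval, groupby, and CPU settings together, an aggregation request could be sketched as follows; every key name here is an assumption made for illustration:

```shell
# Hypothetical aggregation request: 5-second buckets over the configurable
# time field, one series per distinct value of the custom "country" field,
# computed on up to 4 CPUs.
QUERY='{"interval":5000,"timeField":"begin","groupby":"country","cpus":4}'
echo "$QUERY"
```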
The table view is essentially an export of the chart data and can be used to produce an analysis summary of your data set or test results. Metrics are selectable, and the state of the view can be shared as a link, for example by email. In fact, the state of the entire client is stored in the URL, so you can send the exact analysis state you're in to a colleague or another user at any point during your analysis.
A lot of effort has gone into making sure the application scales well as measurement volumes increase. As of version 2.0.1, the complete aggregation and grouping of 100 million data points takes about one minute on 4 cores. Memory is not an issue, since we use only incremental, distributed algorithms with hard per-thread memory bounds. Click the clip on the right side to see what the real-time analysis of 13 million data points with just 2 CPU cores feels like.