History Recorder performance - Lab 02

Author: Sergey Sorokin

Published: 2010-04-20 17:09:00

Last update: 2013-08-31 10:37:43

Tags: m2m, historical data

Slug: History-Recorder-performance-Lab-02

In the latest release of CSWorks, we have improved Historical Data Server performance. Let's see what History Recorder is capable of now.


CSWorks 1.2.3730.0
Server: Intel Core 2 Quad Q6600 @ 2.40GHz, 4 GB RAM, Windows 7


1. Install SQL Server 2008 Express on your server machine.

2. Create database "CSWorks"

3. Create the HistoricalData table - see the "createCommand" parameter in CSWorks.Server.HistoryRecorderService.exe.config.

4. Configure SQL Server data source and make it active in CSWorks.Server.HistoryRecorderService.exe.config:

  <dbTargets activeDbTarget="Standard SQLServer DbTarget">
    <dbTarget name="Standard SQLite DbTarget" ...
    <dbTarget name="Standard SQLServer DbTarget"
      connectionString="Data Source=localhost\sqlexpress; Initial Catalog=CSWorks;user id=sa;password=...;"
      maintenanceCommand="delete top (300000) from HistoricalData ..."

Please note that we tell History Recorder to keep observation records in the database for 10 minutes only, and we perform record cleanup every 30 seconds. This is only because our SQL Server is not capable of taking a heavy load.
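For reference, a time-based maintenance command of this kind could look roughly like the sketch below. The column names (Id, DataPointId, Time, Value) are assumptions for illustration only - the real table definition is whatever your "createCommand" parameter specifies:

  -- Hypothetical schema sketch; the actual DDL lives in the "createCommand"
  -- parameter of CSWorks.Server.HistoryRecorderService.exe.config.
  create table HistoricalData (
    Id bigint identity primary key,
    DataPointId uniqueidentifier not null,
    [Time] datetime not null,
    [Value] float null
  );

  -- Maintenance sketch: delete observations older than 10 minutes,
  -- at most 300,000 rows per run.
  delete top (300000) from HistoricalData
  where [Time] < dateadd(minute, -10, getutcdate());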

5. Restart History Recorder service. Make sure events are now written to CSWorks database (run "select * from HistoricalData" to confirm it).

6. Using the cscript tool, run a script that generates 2000 historical data points:

function main()
{
  var areas = 40;
  var dpsInArea = 50;

  for (var i = 0; i < areas; i++)
  {
    var areaId = i.toString();
    while (areaId.length < 4)
      areaId = "0" + areaId;

    WScript.Echo("<!-- area 0000" + areaId + "-AAAA-0000-0000-000000000000 -->");

    for (var j = 0; j < dpsInArea; j++)
    {
      // global zero-based data point index, zero-padded to 4 digits
      var dpId = (i * dpsInArea + j).toString();
      while (dpId.length < 4)
        dpId = "0" + dpId;

      WScript.Echo("<dataPoint id='{00000000-0000-0000-0000-00000000" + dpId + "}' description='Tank 1 fill - " + dpId + "' expression='tank1-" + dpId + "'/>");
    }
  }
}

main();

7. Copy generated historical data point descriptions to RecorderDataPoints.xml:

  <dataPoint id='{00000000-0000-0000-0000-000000000000}' description='Tank 1 fill - 0000' expression='tank1-0000'/>
  ...
  <dataPoint id='{00000000-0000-0000-0000-000000001999}' description='Tank 1 fill - 1999' expression='tank1-1999'/>

History Recorder will pick up the change in a couple of seconds and will start saving observations for those 2000 data points to the database. Give the setup some time to stabilize.


After 10 minutes, History Recorder maintains about 3.5 million observation records in the database and writes about 6000 observation records every second on average. Database file size is between 1 and 1.5 gigabytes. Here is a screenshot with Performance Monitor and DbgView windows:

Since our test live data changes in a very predictable way, there is a clear pattern in observation recording on the top perfmon chart. History Recorder memory consumption is under control too. Tracing shows that History Recorder deletes between 120,000 and 270,000 "obsolete" observations every 30 seconds. As you may have noticed, the maximum number of records it is allowed to delete in one shot is 300K - see the 'maintenanceCommand' parameter above. Our setup is properly balanced, so History Recorder does not reach this limit.
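These figures are consistent with each other; a quick back-of-the-envelope check, with all inputs taken from the text above:

```javascript
// Sanity check of the steady-state numbers quoted above.
var writesPerSec = 6000;        // observations written per second (average)
var retentionSec = 10 * 60;     // records are kept for 10 minutes
var cleanupPeriodSec = 30;      // maintenance runs every 30 seconds
var deleteCap = 300000;         // 'delete top (300000)' limit

var steadyStateRecords = writesPerSec * retentionSec;    // 3,600,000 - "about 3.5 million"
var deletedPerCleanup = writesPerSec * cleanupPeriodSec; // 180,000 - within the observed 120K-270K

console.log(steadyStateRecords, deletedPerCleanup, deletedPerCleanup < deleteCap);
```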

If you add more historical data points and bring the total count to, say, 5000, you may end up in a situation where History Recorder simply cannot write all collected observations in a timely manner, and they will accumulate in the memory buffer. The major symptom will be growing memory consumption by History Recorder. CSWorks 1.2.3800.0 introduces a "Write Buffer Size" performance counter that shows the current number of observations waiting to be written to the database by History Recorder, so this overload scenario becomes more obvious.


Please plan your historical data management carefully. Use a scalable database engine, and give it plenty of spare CPU resources. Use multiple History Recorder machines if needed. If the amount of data is extremely big, use multiple databases and apply the partitioning technique described in the CSWorks documentation.