Finally, after a few attempts, I created a new config and got it to work. For now, an external InfluxDB with tools like Grafana at least provides more options for analysis than the built-in explorer.
However, after running for some hours the number of events drastically declines, followed by a complete stop, and finally the hub crashes. I have no idea what is causing it, as no logs are available. Could there be a memory leak somewhere?
E.g.
SELECT count("value") FROM "gen_raw"."sensor_temp.evt.sensor.report" WHERE $timeFilter GROUP BY time(1h) fill(null)
Gives:
Time                 sensor_temp.evt.sensor.report.count
2021-01-27 02:00:00  85
2021-01-27 03:00:00  151
2021-01-27 04:00:00  179
2021-01-27 05:00:00  166
2021-01-27 06:00:00  162
2021-01-27 07:00:00  92
2021-01-27 08:00:00  90
2021-01-27 09:00:00  9
2021-01-27 10:00:00  4
2021-01-27 11:00:00  5
2021-01-27 12:00:00  0
2021-01-27 13:00:00  0
2021-01-27 14:00:00  70   <- hub reboot after crash
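To catch the decline before the hub actually crashes, one option is to poll these hourly counts on a schedule and raise an alert when they stall. Below is a minimal sketch of just the detection logic; `detect_stall`, the window size, and the threshold are my own illustrative choices, not part of the hub or ecollector API, and fetching the counts from InfluxDB is assumed to happen elsewhere:

```python
def detect_stall(hourly_counts, window=3, threshold=10):
    """Return True if the last `window` hourly event counts are all below
    `threshold`, suggesting the event flow has (nearly) stopped."""
    if len(hourly_counts) < window:
        return False
    return all(count < threshold for count in hourly_counts[-window:])

# Counts from the table above: a healthy stretch, then the decline.
counts = [85, 151, 179, 166, 162, 92, 90, 9, 4, 5, 0, 0]
print(detect_stall(counts))  # → True: the trailing 5, 0, 0 trip the alert
```

Hooking something like this into a cron job that also triggers a service restart might at least bound the outage while the root cause is unknown.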
A similar issue shows up on the local InfluxDB:
{
  "serv": "ecollector",
  "type": "cmd.tsdb.get_data_points",
  "val_t": "object",
  "val": {
    "proc_id": 1,
    "field_name": "value",
    "measurement_name": "sensor_temp.evt.sensor.report",
    "relative_time": "6h",
    "from_time": "",
    "to_time": "",
    "group_by_time": "1h",
    "fill_type": "null",
    "data_function": "count"
  },
  ...
}
Returns:
{
  "type": "evt.tsdb.data_points_report",
  "serv": "ecollector",
  "val_t": "object",
  "val": {
    "Results": [
      {
        "Series": [
          {
            "name": "sensor_temp.evt.sensor.report",
            "columns": [
              "time",
              "value"
            ],
            "values": [
              [1611730800, 10],
              [1611734400, 15],
              [1611738000, 2],
              [1611741600, 1],
              [1611745200, 0],
              [1611748800, 0],
              [1611752400, 93]
            ]
          }
        ],
        "Messages": null
      }
    ]
  },
  ...
}
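One way to monitor this continuously would be to script the same cmd.tsdb.get_data_points request on a timer and log the returned counts. The sketch below only builds the request payload, mirroring the message shown above; the transport (MQTT topic, client setup, response handling) is deliberately left out because I don't know the exact topic conventions:

```python
import json

def build_count_request(measurement, relative_time="6h", group_by="1h"):
    """Build an ecollector cmd.tsdb.get_data_points payload mirroring the
    request shown above, parameterised by measurement and time window."""
    return {
        "serv": "ecollector",
        "type": "cmd.tsdb.get_data_points",
        "val_t": "object",
        "val": {
            "proc_id": 1,
            "field_name": "value",
            "measurement_name": measurement,
            "relative_time": relative_time,
            "from_time": "",
            "to_time": "",
            "group_by_time": group_by,
            "fill_type": "null",
            "data_function": "count",
        },
    }

payload = json.dumps(build_count_request("sensor_temp.evt.sensor.report"))
print(payload)
```

Logging the hourly counts from each evt.tsdb.data_points_report response alongside the hub's memory usage (if that is exposed anywhere) might help confirm or rule out a leak.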
@alivinco Any ideas on how to monitor/debug this behaviour?