KNX node for node-red


    Hi Carsten,
    thanks. You can safely use Notepad, WordPad, or VS Code with the GitHub plugin.



      Hi xrk,

      didn't want to take over the other thread with that topic ... I was curious about your Node-RED setup for InfluxDB/Grafana. Do you just collect specific GAs, or do you log every GA to InfluxDB and only decide afterwards which GAs to show in graphs? Would you mind sharing further details on how you set all that up?
      Chris



        Hi Chris,
        I log every KNX message (directed to a group address (GA)) into InfluxDB. This gives me the opportunity to analyze/visualize all info (using e.g. Grafana) afterwards if desired.

        The flow for this is very simple (someone with more Node-RED experience could surely find an even simpler solution, but it works very well for me):

        [Screenshot: Node-RED flow]
        The KNX Ultimate node is in Universal mode, listening to all GAs; the output type is Write, and the input reacts to GroupValue write & response.

        The function node "Fields & Tags" is doing:
        Code:
        // No need to log Date & Time, in my case in GA 0/0/1&2
        if ((msg.topic == "0/0/1") || (msg.topic == "0/0/2")) return null;
        
        // Boolean values cannot be compacted by InfluxDB into lower resolution retention policies (=database tables)
        // The reason is that the mean/average computation of boolean results in null value in InfluxDB.
        // Thus we turn all booleans into numeric values: false=0, true=100.
        // See Post #200 below for further information about InfluxDB retention policies and compacting data
        if (typeof msg.payload === 'boolean'){
             if (msg.payload === true){
                 msg.payload = 100;
             } else {
                  msg.payload = 0;
             }
        }
        
        msg.payload = [{
             // First object lists the set of named fields (for further info, see InfluxDB Node Help in Node-Red)
             value: msg.payload,
        },
        {
             // The second object lists the set of named tags
             source : msg.knx.source,
             dpt: msg.knx.dpt,
             description: msg.devicename,
             event: msg.knx.event
        }];
        return msg;
        and the change node "Filter" is configured like this:
        [Screenshot: change node "Filter" configuration]
        This will save to InfluxDB (see InfluxDB help to understand concepts such as measurement, field and tag):
        • date and time at the time of writing to InfluxDB as timestamp
        • group address of the KNX message as measurement
        • the value as the field value
        • source (physical address of the transmitting KNX node) as the tag value
        • dpt (KNX datapoint type) as the tag value
        • group address description as the tag value
        • event type (GroupValue write, response or read) as the tag value
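        To make the transform concrete, here is a runnable sketch of the "Fields & Tags" logic applied to a hypothetical incoming message (the GA, device name, and source address below are invented for illustration):

```javascript
// Sketch of the "Fields & Tags" function node logic from the post above.
// The sample message at the bottom is invented for illustration.
function fieldsAndTags(msg) {
    // Skip the date & time GAs (0/0/1 and 0/0/2 in this setup)
    if (msg.topic === "0/0/1" || msg.topic === "0/0/2") return null;

    // Booleans become 0/100 so InfluxDB can average them
    if (typeof msg.payload === 'boolean') {
        msg.payload = msg.payload ? 100 : 0;
    }

    msg.payload = [
        { value: msg.payload },          // fields
        {                                // tags
            source: msg.knx.source,
            dpt: msg.knx.dpt,
            description: msg.devicename,
            event: msg.knx.event
        }
    ];
    return msg;
}

// Hypothetical message: a light switched on at GA 1/0/5
const sample = {
    topic: "1/0/5",
    payload: true,
    devicename: "Living room light",
    knx: { source: "1.1.10", dpt: "1.001", event: "GroupValue_Write" }
};

const out = fieldsAndTags(sample);
console.log(JSON.stringify(out.payload));
// → [{"value":100},{"source":"1.1.10","dpt":"1.001","description":"Living room light","event":"GroupValue_Write"}]
```

        The first array element becomes the InfluxDB fields, the second the tags, exactly as the InfluxDB node help describes.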
        Last edited by xrk; 05.09.2020, 13:47.
        Winston Churchill once said: "A clever man does not make all the mistakes himself. He also gives others a chance."



          Thanks Risto,

          that's really compact!
          I probably would have just created a new msg object in the function node so you wouldn't need the change node afterwards, but that's really a minor improvement.

          I must admit I haven't looked at InfluxDB/Grafana yet, but was there anything "special" needed for this setup to work, or is everything after this Node-RED data-storage setup just a regular installation of the other two tools?
          Chris



            It works with standard settings out-of-the-box for InfluxDB and Grafana.

            It took some fiddling, though, to create the retention policies (and the required continuous query) in InfluxDB until I was satisfied. This is what I use at present (note the unusual infinite retention policy for the high-res "input" data):
            Code:
            CREATE database "db_knx"
            USE db_knx
            CREATE retention policy "rp_inf_highres" ON "db_knx" duration INF replication 1 DEFAULT
            CREATE retention policy "rp_inf_lowres" ON "db_knx" duration INF replication 1
            
            -- the next command should be entered all on one line;
            -- it is split into 5 lines here only for readability
            CREATE continuous query "cq_1day" ON "db_knx"
            BEGIN
              SELECT mean(value) AS value, min(value) AS min, max(value) AS max, stddev(value) AS stddev INTO "db_knx"."rp_inf_lowres".:MEASUREMENT
              FROM (select mean(value) AS value FROM db_knx.rp_inf_highres./.*/ GROUP BY time(1s) fill(previous)) GROUP BY time(1d),*
            END
            The lowres version of the data is needed for plotting over a very long period (such as a year of data), as with highres alone this could be slow due to too many datapoints. The lowres data aggregates the values over 1 day, but also retains the true min, max, and stddev of the input data, which would otherwise be lost. The asterisks in the statement tell the query to go over each measurement series one by one (i.e., each KNX group address separately).

            Here's one important point to understand about InfluxDB, especially when used with KNX data. InfluxDB's mean value computation expects data points at fixed time intervals in the database, which is not the case with KNX (consider, for example, a value in KNX that gets transmitted on an event, e.g. on change). The reason for the continuous query to first group by 1 second and then fill is to avoid this false mean/average value in the computed lowres data. The problem with irregular time series data is nicely explained in this blog post on the InfluxDB website.
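            A small runnable sketch (with invented numbers) of the problem the 1-second fill solves: a naive mean over event-driven samples differs from the time-weighted mean that fill(previous) approximates.

```javascript
// Hypothetical event-driven KNX samples in a 10-second window:
// value 0 written at t=0s, value 100 written at t=9s.
const events = [{ t: 0, value: 0 }, { t: 9, value: 100 }];

// Naive mean over the stored points (what a plain mean() would compute):
const naiveMean = events.reduce((s, e) => s + e.value, 0) / events.length; // 50

// fill(previous) at 1s resolution: every second carries the last written value
const windowEnd = 10;
const filled = [];
let last = events[0].value;
for (let t = 0; t < windowEnd; t++) {
    const ev = events.find(e => e.t === t);
    if (ev) last = ev.value;
    filled.push(last);
}
// filled = [0,0,0,0,0,0,0,0,0,100]
const timeWeightedMean = filled.reduce((s, v) => s + v, 0) / filled.length; // 10

console.log(naiveMean, timeWeightedMean); // 50 10
```

            The value was 0 for 9 of the 10 seconds, so 10 is the more honest average; the naive mean of 50 over-weights the short-lived 100.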

            Do note that this continuous query solution is computationally quite expensive (every MEASUREMENT (=KNX group address) is expanded to 24*3600=86400 values per day before the mean is computed), but it is my way to produce more correct "average" values in the lowres data. As a reference, on my Xeon server it takes roughly a minute to calculate the lowres data for all datapoints across all GAs of the last 24h (cq_1day runs once a day).
            Last edited by xrk; 12.04.2020, 13:44.



              Hey xrk

              nice, this works fine. I was using some Linux command-line tools to do this job, but I'll switch it over to Node-RED now.
              I'll leave it running a few days and check whether lowres gets filled.

              I have a few questions though: isn't the highres retention policy exactly the same as the autogen policy InfluxDB creates on its own?
              Does Grafana auto-switch to the lowres retention policy when filtering longer date ranges?
              Wouldn't it make sense to limit the retention time of the highres policy, since lowres has all the data for long date ranges?
              How can I get the runtime of cq_1day?

              The InfluxDB node logs some errors for different GAs, for example:
              A 400 Bad Request error occurred: {"error":"unable to parse '0/1/15,source=1.1.100,dpt=10.001,description=(Zentral-\u003eWetter)\\ Wetterstation.Messwert\\ GPS.Uhrzeit,event=GroupValue_Write value=Sat Apr 11 2020 09:44:16 GMT+0200 (GMT+02:00)': invalid boolean"}

              Thanks!
              Marc
              Last edited by sinn3r; 11.04.2020, 09:00.



                Hi Marc,
                Quote from sinn3r:
                isn't the highres retention exactly the same as the autogen retention influxdb creates on its own?
                Yes, I just wanted the retention policy to have a descriptive name "highres", which is why I redefined it.

                Quote from sinn3r:
                Does Grafana autoswitch to the lowres retention when filtering longer date ranges?
                There is unfortunately no built-in function in Grafana for this. But it can be solved relatively easily, and here's how:

                As there are no conditional variables in Grafana, we solve the issue with a helper retention policy (=table) containing some constant data values. A Grafana dashboard variable pulls the correct retention policy name (rp_inf_highres or rp_inf_lowres) from this helper retention policy with a query that depends on the current dashboard time window selection. We then use this variable to select the data from the correct retention policy (highres or lowres). To clarify: we will have three retention policies in our database (highres, lowres, and helper); the helper contains only constants, and the real measurement data lives only in highres and lowres.
                We start by creating the helper retention policy (named rp_inf_rpselector) and add our constants for later selection according to chosen dashboard time window:
                Code:
                CREATE retention policy "rp_inf_rpselector" ON "db_knx" duration INF replication 1
                INSERT INTO rp_inf_rpselector meas_rpselection,idx=1 rpselection="rp_inf_highres",start=0i,end=2592000000i -9223372036854775806
                INSERT INTO rp_inf_rpselector meas_rpselection,idx=2 rpselection="rp_inf_lowres",start=2592000000i,end=3110400000000i -9223372036854775806
                Some explanation of the above: the first datapoint we enter into the measurement meas_rpselection yields the string "rp_inf_highres" if the window length is between now (0) and 30 days (in milliseconds: 30*24*3600*1000 = 2592000000). The second datapoint yields "rp_inf_lowres" from 30 days up to roughly 100 years in the past. The odd value "-9223372036854775806" at the end is the minimum valid timestamp in InfluxDB (it equates to the year 1677).
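                The boundary constants in the INSERT statements can be sanity-checked like this (a sketch; pickRetentionPolicy is a hypothetical helper mirroring the window-length logic of the Grafana query further below):

```javascript
// Thresholds used in the rp_inf_rpselector datapoints (milliseconds)
const msPerDay = 24 * 3600 * 1000;
const highresUpTo = 30 * msPerDay;    // 2592000000 ms = 30 days
const lowresUpTo = 36000 * msPerDay;  // 3110400000000 ms = 36000 days ≈ 100 years

// Hypothetical helper: pick a retention policy from the dashboard
// window length ($__to - $__from)
function pickRetentionPolicy(windowMs) {
    return windowMs <= highresUpTo ? "rp_inf_highres" : "rp_inf_lowres";
}

console.log(highresUpTo);                         // 2592000000
console.log(pickRetentionPolicy(7 * msPerDay));   // rp_inf_highres
console.log(pickRetentionPolicy(365 * msPerDay)); // rp_inf_lowres
```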

                In Grafana, in each dashboard, define the following variable:
                Name: rpsel
                Type: Query
                Label: Retention Policy
                Hide: Variable
                Data source: InfluxDB
                Refresh: On Time Range Change
                Query: SELECT rpselection FROM rp_inf_rpselector.meas_rpselection WHERE $__to - $__from > "start" AND $__to - $__from <= "end"
                Regex: empty
                Sort: Disabled
                all other selections off

                (Background info about Grafana's built-in time-window variables $__to and $__from can be read here.)
                Every time the time range in Grafana is changed, the dashboard variable rpsel is updated. Using the query above, the content of the variable will be either the rp_inf_highres or the rp_inf_lowres string.

                Now, use $rpsel in your Grafana queries to automagically choose the correct retention policy for the chosen time window. With the values in the code box above, if the chosen time window is 30 days or less, rp_inf_highres is selected; otherwise rp_inf_lowres is used.

                Here is an article explaining all this in more detail (the interesting part on that page starts from subtitle Creating Visualizations in Grafana that Adapt Dynamically to a Suitable Retention Policy).

                Quote from sinn3r:
                Wouldn't it make sense to limit the retention time of the highres policy, as the lowres has all data for long date ranges?
                Yes, but I did not want to discard "old" highres data. Hard disk space is very cheap nowadays; after roughly 9 months, my highres subfolder in InfluxDB is only 350 MB, with 104 KNX devices on the bus. InfluxDB compresses the data quite efficiently.

                Quote from sinn3r:
                How can I get the runtime of cq_1day?
                Logging needs to be enabled at least at info level in influxdb.conf, and the continuous queries log-enabled setting needs to be true (the defaults are already info and true respectively, so if you haven't tweaked these settings, you are good to go).
                You can then extract the runtime from the created log file: compute the difference between the timestamps of the log events marking the continuous query execution start and end for cq_1day.

                Quote from sinn3r:
                The influxDB node logs some errors with different GAs
                No errors whatsoever in my case. Did you set up the function and change nodes exactly as in post #198? It is probably best to exclude all time & date GAs from being saved into InfluxDB, as I did in the first line of the "Fields & Tags" function node above, since it makes no sense to store them. Your error message hints at an invalid boolean, which makes me think the DPT 10.001 (time) payload may also be getting misinterpreted.
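                One way to prevent such parse errors would be a guard in the "Fields & Tags" function node that drops any payload InfluxDB cannot store as a numeric field (a hypothetical addition, not part of the original flow):

```javascript
// Hypothetical guard: only let values through that can be written to
// InfluxDB as a numeric field; drop everything else (strings, Date
// objects such as DPT 10.001 time values, etc.).
function sanitizePayload(payload) {
    if (typeof payload === 'boolean') return payload ? 100 : 0;
    if (typeof payload === 'number' && isFinite(payload)) return payload;
    return null;
}

console.log(sanitizePayload(true));       // 100
console.log(sanitizePayload(21.5));       // 21.5
console.log(sanitizePayload(new Date())); // null
```

                In the function node one would then return null whenever sanitizePayload(msg.payload) is null, so such messages never reach the InfluxDB node.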

                Cheers,
                Risto

                P.S. In order not to take over this thread about Massimo's fabulous KNX Ultimate node, may I suggest that any further discussion about InfluxDB or Grafana moves to a new thread on that topic.
                Last edited by xrk; 14.04.2020, 12:57.



                  KNX Ultimate 1.1.68 is out with an important fix.
                  See here:
                  https://github.com/Supergiovane/node...r/CHANGELOG.md
                  Please update. Cheers!



                    Hey TheMax74

                    first of all, thank you very much for your node and the support you're providing here! I've been playing around with it for two days, and it's really awesome!

                    But now I've run into an issue that is filling my Node-RED debug log. I suppose I imported a node while playing around, and I'm now in a state where it remembers something I'm unable to find. I've already disabled my flow and created a new empty one with a single node in it, but I'm not able to get rid of the following error:

                    Code:
                    "KNXUltimate-config: Error in instantiating knxConnection Interface ens4 not found or has no useful IPv4 address!. Node undefined"
                    But as you can see from the screenshot, my Node-RED instance (running within Docker) doesn't have such a network interface. The setting is set to "Auto" anyway. And I only have this single knxUltimate-config node, so the message can't come from another one.

                    Any help would be appreciated!

                    Thanks.

                    Node-RED Screenshot.png



                      Hi 0x52,
                      that's interesting.
                      Never had such a problem.
                      Are you able to open the Node-RED configuration file with an editor?
                      If yes, there should be a reference to a config node having ens4 as its interface.
                      Just search for "ens4".
                      Please let me know!
                      Take a look here:
                      https://discourse.nodered.org/t/old-...ost-nodes/5322
                      First, in settings.js set flowFilePretty: true, restart Node-RED, make a small change and re-deploy. Then the file will be formatted nicely and you will be able to see what to delete. It is a JSON file, so it should be obvious.

                      For your info, "Node undefined" in the log row should read "Node" plus the affected node id.
                      That's strange... "undefined" means the node is nonexistent.
                      Last edited by TheMax74; 17.05.2020, 06:51.



                        Oh man, thanks! "Have you tried turning it off and on again?" 🙄

                        I did a grep for "ens4" on the settings.json and didn't get any results; "ens" brought up a lot. So I enabled flowFilePretty just to be sure and restarted the container to grep again, but as expected didn't find any "ens4". When I then looked at the web interface, the error was gone 😣 So I assume it must have been some intermediary state, which I hope never happens again 😆

                        Thanks again anyways! Solved my problem 🙂



                          Good to hear that!!



                            I just started with KNX Ultimate and am already struggling to import the GAs from ETS 5.7.4 (Build 1093).
                            I reduced the GAs to just the header line and one additional line, but the import still generates the error "cannot read property 'statusDisplayDataPoint' of null".
                            What am I doing wrong?
                            Thanks for your tips.

                            "Group name" "Address" "Central" "Unfiltered" "Description" "DatapointType" "Security"
                            "Schalt-Zentral-Ein/Aus" "0/1/0" "true" "true" "" "DPST-1-1" "Auto"



                              Hi
                              have you selected "tabulation" during the export?
                              I see this in my log:

                              Node-RED.png

                              Please save the node and do a full deploy prior to importing the ETS CSV.

                              Please try to copy the content of the attached ZIP and paste it into the ETS import field, then let me know.
                              Attached files
                              Last edited by TheMax74; 29.05.2020, 09:01.



                                Ciao Supergiovane,

                                thanks for your support.

                                Yes, I did the export according to your video (see also the attached picture).

                                And I also tried it with your file (opened it with Notepad on Win10, selected everything, Ctrl-C, then Ctrl-V). But I still get the same error: "TypeError: Cannot read property 'statusDisplayDataPoint' of null".

                                Simply reading a KNX object's status (setting the KNX address manually) and showing it on a dashboard works.

                                Could you give me some more advice?
                                Thanks, Stefan
                                Attached files

