We are using a CR1000X logger.
We would like to read a Thies ultrasonic anemometer at 10 Hz (~6 values to be stored as samples in a data table) on one COM port. In addition, there are a number of other measurements to perform: one set measured and written to a data table at 1 Hz, and one set measured every 30 seconds and written as averages every 10 minutes.
Currently we are able to run at 5 Hz; at 10 Hz we get skipped scans and watchdog errors.
The current program structure is a single scan running every 200 ms that communicates with the Thies. The other measurements are placed in the same scan but only run once every 5 passes (staggered so that the load is as even as possible), with the 30-second measurements spread out using another counter.
We have also made some attempts at a program structure consisting of one scan, one SubScan, and one SlowSequence, but without much success. Do you think that approach would give a serious performance increase?
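For reference, the staggering is done with counters, roughly like this (the COM port, framing bytes, and instruction placeholders are illustrative, not our exact code; we assume the Thies telegram is framed by STX/ETX):

```crbasic
Public ThiesStr As String * 64
Public NBytesRet As Long
Dim Counter As Long

BeginProg
  SerialOpen (ComC1,9600,0,0,1000)
  Scan (200,mSec,10,0)
    'Read the Thies telegram on every pass (5 Hz)
    SerialInRecord (ComC1,ThiesStr,&h02,0,&h03,NBytesRet,01)
    Counter = Counter + 1
    If Counter MOD 5 = 0 Then
      'the 1 Hz measurements go here (every 5th pass)
    EndIf
    If Counter MOD 150 = 0 Then
      'the 30-second measurements go here (every 150th pass)
    EndIf
  NextScan
EndProg
```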
Force the program to compile in pipeline mode by adding the PipelineMode instruction at the beginning of the program. Pipeline mode is more efficient than sequential mode.
I would recommend placing the 30-second measurements into a SlowSequence scan.
The 1-second measurements would be simplest to just run at 10 Hz, if there is enough time available.
The datalogger's Status table has MeasureTime and ProcessTime values that give clarity about your scan-time budget.
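A minimal skeleton of that layout might look like the following (the COM port, baud rate, and the two slow measurements are placeholders, assuming the Thies telegram is framed by STX/ETX):

```crbasic
PipelineMode

Public ThiesStr As String * 64
Public NBytesRet As Long
Public PTemp, Batt

DataTable (Sonic,True,-1)
  Sample (1,ThiesStr,String)
EndTable

DataTable (TenMin,True,-1)
  DataInterval (0,10,Min,10)
  Average (1,PTemp,FP2,False)
  Average (1,Batt,FP2,False)
EndTable

BeginProg
  SerialOpen (ComC1,9600,0,0,1000)
  'Main scan: 10 Hz serial read and storage only
  Scan (100,mSec,100,0)
    SerialInRecord (ComC1,ThiesStr,&h02,0,&h03,NBytesRet,01)
    CallTable Sonic
  NextScan
  'Slow sequence: 30-second measurements, averaged every 10 minutes
  SlowSequence
  Scan (30,Sec,3,0)
    PanelTemp (PTemp,50)
    Battery (Batt)
    CallTable TenMin
  NextScan
EndProg
```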
Thanks for your reply!
I tried pipeline mode, but now it complains that the measurements take too long (113 ms, while the scan needs a 100 ms interval).
I don't really need to run everything at 10 Hz, but SubScan doesn't work for just reading from a COM port and storing the result... how do I run different measurements at different rates in pipeline mode? SlowSequence doesn't work either; it refuses to compile with measurements in it :/
Any help appreciated!!
Look into speeding up your measurements by adjusting settling time and fNotch.
Thanks for your support! I will try that later today.
Another (related) question... the Status table gives similar results (113 ms measurement time) regardless of pipeline or sequential mode, and regardless of my telling it not to run all measurements on every scan. How much control do I really have over measurements in sequential mode? It looks like it runs all measurements at the rate specified in the Scan instruction, regardless of code telling it not to. Does that seem likely?
Pipeline mode lets your processing run in parallel with the measurements. In sequential mode, only one task runs at a time.
In either case, you need to get the measurement time down. In sequential mode, measurements and processing together must fit within the 100 milliseconds. In pipeline mode, you have 100 milliseconds available for measurements and a separate 100 milliseconds for processing.
Turn up fNotch to reduce the time the measurements need. You lose some noise rejection, but each measurement takes less time. Start with the measurements of larger voltages, or where you are less concerned about accuracy.
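For example, a differential voltage measurement can be sped up by raising the fN1 (notch frequency) parameter; the channel, range, and settling time below are placeholders, and the instructions would go inside your scan:

```crbasic
Public Vslow, Vfast

'fN1 = 50 Hz: full line-cycle integration, best mains-noise rejection, slowest
VoltDiff (Vslow,1,mV5000,1,True,500,50,1.0,0)
'fN1 = 4000 Hz: much shorter integration time, faster but with more noise
VoltDiff (Vfast,1,mV5000,1,True,500,4000,1.0,0)
```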