
View Full Version : qt5 internal clock sync?



stingray
18th January 2017, 14:07
So there is no pre-made internal clock QObject.

So what is the "best" way to get an internal clock?

For my use I need a clock that does not trigger early (that means a precise timer).
It also must not go bananas after a few days because it has drifted badly out of sync with reality/system time; ±100 ms is a hard limit, and being further out of sync is a showstopper.


In big strokes:

1. Use a precise timer with a timeout every x ms, then grab currentTime from the system clock to update the internal QDateTime that I use to trigger my signals (this way it is always somewhat in sync with the system clock).
2. Use a precise timer with a timeout every x ms and increment my clock internally.
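A minimal sketch of the two approaches in plain C++ (std::chrono here; a Qt build would use QDateTime and a QTimer with Qt::PreciseTimer instead — the names `tickResync` and `needsResync` are illustrative, not Qt API):

```cpp
#include <chrono>
#include <cstdlib>

using Clock = std::chrono::system_clock;

// Approach 1: every precise-timer tick simply overwrites the internal
// clock from the system clock, so drift can never accumulate.
Clock::time_point tickResync() { return Clock::now(); }

// Approach 2: the clock is incremented internally, and a periodic check
// like this decides whether it has drifted past the +-100 ms hard limit
// and must be snapped back to the system clock.
bool needsResync(Clock::time_point internalNow, Clock::time_point systemNow,
                 long long limitMs = 100)
{
    const long long diffMs =
        std::chrono::duration_cast<std::chrono::milliseconds>(systemNow - internalNow)
            .count();
    return std::llabs(diffMs) > limitMs;
}
```

With approach 2 the check can run rarely (e.g. once a night), since the whole point is that the internal increment is cheap between checks.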






I guess the currentTime/system-clock version is more "expensive" than incrementing my object internally and then, say, once a night checking whether a diff is building up, and syncing it back into range when it falls outside a set range.

Right now I'm more afraid of overdoing this simple task than of not getting it to work...

OS-wise, these are Linux command-line applications that will be running 24/7, so time drift is one of my concerns.


I've made a little test which prints the current time to the console; with a precise timer it seems accurate to 1-2 ms and does not drift.

So the question is:

1. internal counter + sync check at some interval (daily, weekly, etc.) + sync up when drift leaves a set boundary
2. copy the system time every time the timer triggers (seems like a more expensive copy to me, but it makes the sync check and sync-up obsolete; it also makes parsing out changes and firing triggers a lot more work)


what path would you choose?

anda_skoa
19th January 2017, 11:58
Well, aside from not having any timing guarantees on a non-RT kernel, you could use a timer with smaller intervals and adjust as needed.

I.e. whenever the internal timer triggers, you measure the time that has elapsed vs the time that you wanted to elapse and adjust the next interval to compensate.
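The compensation step described here can be sketched as pure arithmetic (a sketch with illustrative names; in a Qt program you would measure `elapsedMs` with QElapsedTimer in the timeout slot and restart the QTimer with the returned interval):

```cpp
#include <algorithm>

// If the timer fired late (elapsedMs > targetMs), shorten the next
// interval by the overshoot; if it fired early, lengthen it. Over many
// ticks the average rate then stays on target even though individual
// ticks jitter.
long long nextIntervalMs(long long targetMs, long long elapsedMs)
{
    const long long error = elapsedMs - targetMs;  // positive = fired late
    return std::max(1LL, targetMs - error);        // never schedule <= 0 ms
}
```

For example, with a 100 ms target, a tick that arrives after 103 ms schedules the next one for 97 ms.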

There could also be system level APIs for higher precision "wake up" style timers.

Cheers,
_

stingray
19th January 2017, 12:14
I went with the lazy way: I took the sync-to-system-time approach.

It just seems like rebuilding from scratch most of what is already in QDateTime... (it's not like it's a realtime embedded application where being 15 ns off is life or death...)

One maybe stupid question about signals and slots: if I need to process something, I emit a signal and it starts processing, but if that takes more than, say, x ms, the timeout would trigger late. Is there a way to prioritize signals?
(So it pauses and re-enters, like a thread inside a single-threaded application?)

Or do I need to split into multiple threads to get around this timing issue?

Added after 7 minutes:


Well, aside from not having any timing guarantees on a non-RT kernel, you could use a timer with smaller intervals and adjust as needed.

I.e. whenever the internal timer triggers, you measure the time that has elapsed vs the time that you wanted to elapse and adjust the next interval to compensate.

There could also be system level APIs for higher precision "wake up" style timers.

Cheers,
_

I wrote this while you were replying, anda_skoa, so that last post just looks weird after yours...

The smaller the interval gets, the more time the "IRQ-like" processing takes, and the easier things run out of sync with an internal (non-sync-based) clockwork (running faster or slower than expected).

Yeah, a system-level API or even a hardware API is also an idea, but that defeats the purpose of using Qt, as it would most probably be closely tied to the hardware it runs on.

anda_skoa
19th January 2017, 12:24
It just seems like rebuilding from scratch most of what is already in QDateTime... (it's not like it's a realtime embedded application where being 15 ns off is life or death...)

Well, realtime is not a matter of interval size, but of guarantees, i.e. how bad it is if something does not happen in time :)



One maybe stupid question about signals and slots: if I need to process something, I emit a signal and it starts processing, but if that takes more than, say, x ms, the timeout would trigger late. Is there a way to prioritize signals?

A signal/slot connection is just a way to describe a method call.
So the signal emit is just the beginning of calling all slots connected to the signal.

You can think of the signal as a method that iterates over the list of receivers and calls each receiver's slot.
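That mental model can be shown with a plain-C++ stand-in (this is not Qt's actual implementation, just an illustration of the "emit = synchronous calls to each receiver" idea, which is how a direct Qt connection behaves):

```cpp
#include <functional>
#include <vector>

// A toy "signal": just a list of connected slots. Emitting it means
// calling every slot in order; the emit call returns only after the
// last slot has finished, so nothing is paused or prioritized.
struct ToySignal {
    std::vector<std::function<void(int)>> slots;

    void connect(std::function<void(int)> slot) { slots.push_back(std::move(slot)); }

    void emitSignal(int value) {
        for (auto &slot : slots)
            slot(value);
    }
};
```

This is also why a long-running slot delays every timer event queued behind it: the event loop cannot run again until the emit returns.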



(So it pauses and re-enters, like a thread inside a single-threaded application?)

The only thread in a single-threaded application usually never pauses unless it is explicitly told to do nothing.



Yeah, a system-level API or even a hardware API is also an idea, but that defeats the purpose of using Qt, as it would most probably be closely tied to the hardware it runs on.

Not necessarily, it is a matter of abstraction.
The whole point of Qt is to provide abstractions, so that system differences are not visible to the application developer, even though Qt itself needs platform-specific code.

Cheers,
_