high resolution timer



ChasW
30th January 2007, 01:59
Are there functions in Qt equivalent to Win32's QueryPerformanceCounter() and QueryPerformanceFrequency() that would facilitate timing a function or process?

Thank you,
Chas

wysota
30th January 2007, 08:05
You can either use QTimer or the native Windows interface.
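
For timing a function, a minimal sketch (not from the thread): QTime is portable but has only millisecond resolution; the native QueryPerformanceCounter() calls remain available for finer granularity.

#include <QTime>
#include <QtDebug>

void someExpensiveFunction();   // hypothetical function under test

void timeIt()
{
    QTime t;
    t.start();                  // snapshot the system clock
    someExpensiveFunction();
    qDebug() << "elapsed:" << t.elapsed() << "ms";
}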

bob2oneil
4th June 2011, 21:10
Hi wysota, I need my application to perform a specific activity at a frequent interval (< 100 ms), and I need this activity to have high precision. I was wondering whether something like QTimer might have some slop in it, given that it is based on an event loop. I will likely be using this on a thread that does not run an event loop, and hence I cannot hook up to a QTimer timeout() signal. According to the Qt documentation, QTimer will not time out before the specified time, but it can fire very late.

QTimer does not seem to be a good candidate. I recall getting higher fidelity timing under Linux using the timer_create() API and putting a SIGEV_SIGNAL handler in place.

However, Qt does not have direct support for Linux signals proper. A solution I have read on this subject involved having the signal handler inform the Qt application proper via a socket.

What do you recommend for achieving high precision timing on a thread that may not be running an event loop? I would like the solution to be cross-platform if possible.
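
For reference, a minimal sketch of the timer_create()/SIGEV_SIGNAL approach recalled above (Linux only, link with -lrt; the 100 ms interval is illustrative):

#include <csignal>
#include <cstring>
#include <ctime>
#include <unistd.h>

static void handler(int, siginfo_t *, void *)
{
    // keep the handler short and async-signal-safe
}

int main()
{
    struct sigaction sa;
    std::memset(&sa, 0, sizeof(sa));
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = handler;
    sigaction(SIGRTMIN, &sa, NULL);

    struct sigevent sev;
    std::memset(&sev, 0, sizeof(sev));
    sev.sigev_notify = SIGEV_SIGNAL;            // deliver a signal on expiry
    sev.sigev_signo = SIGRTMIN;

    timer_t timerId;
    timer_create(CLOCK_MONOTONIC, &sev, &timerId);

    struct itimerspec its;
    its.it_value.tv_sec = 0;
    its.it_value.tv_nsec = 100 * 1000 * 1000;   // first expiry: 100 ms
    its.it_interval = its.it_value;             // then every 100 ms
    timer_settime(timerId, 0, &its, NULL);

    for (;;)
        pause();                                // wait for signals
}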

wysota
4th June 2011, 21:35
However, Qt does not have direct support for Linux signals proper.
It doesn't need any direct support. You can use any C/C++ call in any Qt application.
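
For completeness, the socket trick mentioned above is the pattern from the Qt documentation's "Calling Qt Functions From Unix Signal Handlers" example, sketched here: the handler writes one byte to a socketpair (write() is async-signal-safe), and a QSocketNotifier delivers it to a slot in the Qt thread.

#include <QObject>
#include <QSocketNotifier>
#include <sys/socket.h>
#include <unistd.h>

class SignalBridge : public QObject
{
    Q_OBJECT
public:
    SignalBridge(QObject *parent = 0) : QObject(parent)
    {
        ::socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
        notifier = new QSocketNotifier(fds[1], QSocketNotifier::Read, this);
        connect(notifier, SIGNAL(activated(int)), this, SLOT(handleSignal()));
    }

    // Call this from the Unix signal handler.
    static void wakeUp()
    {
        char c = 1;
        ::write(fds[0], &c, sizeof(c));
    }

private slots:
    void handleSignal()
    {
        char c;
        ::read(fds[1], &c, sizeof(c));
        // safe to touch Qt objects from here
    }

private:
    static int fds[2];
    QSocketNotifier *notifier;
};

int SignalBridge::fds[2];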

squidge
4th June 2011, 22:39
Under Windows, standard timers are accurate to within 16 ms. If you want more accurate timers you can use multimedia timers, but they consume more CPU and are OS specific. QTimer will automatically try to use the more accurate timers if the requested timeout is < 20 ms.

Have you tried using a 15ms QTimer?
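
Something like this illustrative sketch, counting 15 ms ticks up to the ~100 ms rate (class and slot names are made up):

#include <QObject>
#include <QTimer>

class Pacer : public QObject
{
    Q_OBJECT
public:
    Pacer() : count(0)
    {
        QTimer *timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(onTick()));
        timer->start(15);                 // near the Windows scheduler tick
    }

private slots:
    void onTick()
    {
        if (++count >= 7) {               // 7 * 15 ms is roughly 100 ms
            count = 0;
            // perform the 100 ms activity here
        }
    }

private:
    int count;
};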

bob2oneil
7th June 2011, 20:40
Will try, and will delve into the QTimer source code for some intel.

Thanks.

nish
7th June 2011, 20:45
Try QBasicTimer or even QObject::timerEvent(), which are supposed to be faster than QTimer.
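
A minimal timerEvent() sketch (assuming the object lives in a thread that runs an event loop):

#include <QObject>
#include <QTimerEvent>

class Ticker : public QObject
{
protected:
    void timerEvent(QTimerEvent *event)
    {
        // periodic work; event->timerId() identifies which timer fired
    }
};

// usage:
//   Ticker *t = new Ticker;
//   int id = t->startTimer(15);   // stop later with t->killTimer(id)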

Uwe
7th June 2011, 21:05
You will never get high precision from a timer that uses the event loop. The problem is that the notification about an expired timer can always arrive too late because of all the other event processing.

Uwe

bob2oneil
13th June 2011, 17:32
Thanks Uwe. My intention is to have a Qt application with several secondary threads. Since I want the threads to pend(), and it is my understanding that the event loop via exec() does not actually pend, I will be using a derivation of QThread for my threads, which do in fact pend on some wakeup condition. Timing that requires an event loop, or any Qt timer implementation that requires delivering a signal to a slot receiver, is therefore not applicable.

Per your and others' descriptions, it would seem that Qt's own timer support may not be the best fit for my requirements.

In a perfect world, what I need is a highly precise periodic timer to drive some real-time control. It would be signalled at a 100 ms rate with as much precision as possible. Upon a timeout, this waitable timer would kick off a real-time thread for instrument control and timing, which occurs at the 100 ms pacing. A periodic timer whose precision improves with a smaller timeout value, say 20 ms, would also be perfectly acceptable, as I would simply count up to the 100 ms rate.

This timer would not be used for measuring elapsed time; there are plenty of sufficient options for that.

I have used a Linux signal for periodic timing in the past. I would need to create a similar construct for Windows to maintain cross-platform support.

Architecturally, I was thinking of a real time thread that gets signalled at a 100 ms rate, and it in turn uses a QWaitCondition to trigger the worker thread, and then pends.

Under Windows, the waitable timer object seems like a good equivalent to a Linux signal; the SetWaitableTimer() API takes its due time in 100 nanosecond intervals:
http://msdn.microsoft.com/en-us/library/ms687012(v=vs.85).aspx
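
An illustrative Windows-only sketch of that API (error handling omitted):

#include <windows.h>

int main()
{
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);  // auto-reset

    LARGE_INTEGER due;
    due.QuadPart = -1000000LL;           // first fire in 100 ms (100 ns units, negative = relative)
    SetWaitableTimer(hTimer, &due, 100,  // then every 100 ms
                     NULL, NULL, FALSE);

    for (;;) {
        WaitForSingleObject(hTimer, INFINITE);
        // wake the worker thread here
    }
}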


Since Qt does not "natively" handle Linux signals, I was thinking that in response to the timeout I could simply wake a QWaitCondition in a worker thread and thereby cross into the Qt world from an async event.

Anybody do something similar?
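
Something along these lines; a sketch of the wake-up half only (names are illustrative, and a real implementation would add a guarded flag so a wakeOne() issued before the wait is not lost):

#include <QMutex>
#include <QWaitCondition>

QMutex mutex;
QWaitCondition tick;

void doRealTimeWork();          // hypothetical 100 ms activity

// worker thread body:
void workerLoop()
{
    forever {
        mutex.lock();
        tick.wait(&mutex);      // pend until the timing thread fires
        mutex.unlock();
        doRealTimeWork();
    }
}

// timing thread, on each 100 ms timeout:
void onTimeout()
{
    tick.wakeOne();
}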

wysota
13th June 2011, 19:09
If you want precise, guaranteed timing then you need a real-time operating system. Neither Linux nor Windows is such a system. If you don't use a real-time OS then you have no guarantees, and a myriad of things can go wrong.

bob2oneil
13th June 2011, 19:59
I know; I am trying to come up with an approach that approximates RTOS behavior for this one timing requirement as closely as possible (or to determine in the process that it is not possible).

wysota
13th June 2011, 20:18
You can't "approximate" an RTOS. In computer science there is no "as much as possible". It's like trying to substitute a regular backpack for a parachute: sure, it more or less looks like a parachute and may even have some of the properties of a parachute, but I surely wouldn't jump out of a plane with it, even if it bore a striking resemblance to a parachute.

bob2oneil
13th June 2011, 20:37
No argument with your point. I should point out that I do have some control over the workstation: I can set up the running daemon/service with a particular "nice" level and bias the machine to favor my application.
This will not be a workstation running any general set of user applications; it will be dedicated to the task.

wysota
13th June 2011, 20:45
I can set up the running daemon/service with a particular "nice" level and bias the machine to favor my application.
And how, for example, do you plan to prevent the OS from flushing its memory pages at the exact moment you need the timer to fire? Or from running some other system maintenance task that may take more time and eat more resources than expected?


This will not be a workstation running any general set of user applications; it will be dedicated to the task.
Then install an RTOS on it...

http://en.wikipedia.org/wiki/List_of_real-time_operating_systems

bob2oneil
13th June 2011, 20:56
The choice of OS is imposed upon me; I am not free to select it. Currently our specification is Windows XP and Red Hat Enterprise Linux 3.5, unless these prove to be fundamentally inappropriate for our task.

wysota
13th June 2011, 21:26
Well... a nuclear power plant will also work under Windows XP or RedHat until it proves inappropriate. Unfortunately, that equals a nuclear meltdown. Trial and error is a bad approach when real-time accuracy is needed. The real question is whether you really need such accuracy or just think you do. What happens if you perform your activity 10 ms late? Will something blow up? What if you perform it 50 ms late? Will the system become unusable? What if the latency is such that you need to perform the task twice? Will that make your system fail?

squidge
13th June 2011, 22:07
Well, you could run your application at "Realtime" priority and use the QueryPerformanceCounter API to ensure precise timing (or even read the clock tick count directly from the processor via the rdtsc instruction).

Or, if the timing doesn't need to be THAT accurate, you could use timeBeginPeriod to set the required accuracy (e.g. 1 ms) and then use timeSetEvent. Rather than being event driven, timeSetEvent will call the function you pass in when the time period is up, regardless. The aim is to call your function within the time you set in the timeBeginPeriod call (so, within 1 ms in the example above), but this is NOT GUARANTEED.

Note however that in both cases, if your process is in the Realtime priority class and uses up too much CPU, keyboard and mouse processing will not occur, so you will not be able to switch to another process or kill the task. Therefore, for any time-sensitive work (such as controlling an external electronic module), I prefer to keep such stuff in the external electronic modules themselves. Let the PC-based OS do the housekeeping; let dedicated electronic controllers do the time-critical stuff.
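
A sketch of the multimedia timer route described above (Windows only, link winmm.lib; the 1 ms resolution and 100 ms period are illustrative):

#include <windows.h>
#include <mmsystem.h>

void CALLBACK onTick(UINT uTimerID, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
{
    // called on a system thread at (approximately) each period
}

int main()
{
    timeBeginPeriod(1);                                   // request 1 ms resolution
    MMRESULT id = timeSetEvent(100, 1, onTick, 0, TIME_PERIODIC);

    Sleep(10000);                                         // let it run for a while

    timeKillEvent(id);
    timeEndPeriod(1);
}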

Uwe
15th June 2011, 15:28
Not exactly what you are looking for, but Qwt has a class QwtSystemClock with an API similar to QTime (not QTimer!), hiding the high precision time APIs of the various platforms.
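
A usage sketch, assuming the Qwt headers are available (elapsed() returns fractional milliseconds):

#include <qwt_system_clock.h>

void someExpensiveFunction();   // hypothetical

void timeIt()
{
    QwtSystemClock clock;
    clock.start();
    someExpensiveFunction();
    double ms = clock.elapsed();   // high resolution, fractional ms
}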

Uwe

bob2oneil
24th June 2011, 18:50
I have done a little research on this subject. Concentrating on Windows only (as Linux time resolution is not problematic), I have tried the following solutions:

1. Waitable Timer
2. Multimedia Timer
3. Queued Timer
4. Windows "Select" via Winsock interface - requires a created but unbound socket

Of these four, the multimedia timer provides the highest accuracy. The other timer variants have very repeatable delays and small standard deviation, but the values swing in 15 ms increments. For example, when specifying a 100 ms delay to the waitable or queued timers, the delay delivered to a high-priority task will be approximately 108 ms. This also holds for settings down to about 94 ms, and then the delay swings to 93 ms for the full range of specifications below 94 ms. This magic 15 ms is consistent with descriptions of the limited accuracy of the GetSystemTimeAsFileTime() API, and with descriptions of the 15 ms kernel task switching quantum.

I have used the QueryPerformanceCounter method, but there are various bugs associated with it, and my assessment is that it is good for elapsed time, but not necessarily for high precision timekeeping. I am currently using it for high precision elapsed time in combination with a "leap forward" workaround using GetTickCount, as discussed in the following URLs:

http://support.microsoft.com/kb/274323
http://support.microsoft.com/kb/895980

Since QPC is essentially a free running counter, it has no tie to the actual timekeeping on the workstation under Windows. This value has to be correlated for my application, and most solutions suggest a differential approach, where an initial value is snapshotted and delays are then calculated from that snapshot.
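
The differential pattern, sketched (Windows only, illustrative):

#include <windows.h>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);       // initial snapshot

    // ... the interval being measured ...

    QueryPerformanceCounter(&t1);
    double ms = 1000.0 * (t1.QuadPart - t0.QuadPart) / freq.QuadPart;
    (void)ms;
}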

I found a fairly comprehensive legacy MSDN article on the problem of reading highly precise time under Windows, with a proposed solution, at the following URL. You will see that the issue is fairly complex and has to account for a number of factors, including detecting NTP time changes.

http://msdn.microsoft.com/en-us/magazine/cc163996.aspx#S6

Hope this helps someone else attempting to do a similar thing.

Thanks for everyone's input.