From 3620cf97ad97239f3f39093c1c86acc336342b66 Mon Sep 17 00:00:00 2001
From: wm4
Date: Sun, 9 Mar 2014 15:34:26 +0100
Subject: timer: switch to CLOCK_MONOTONIC

Apparently, this is always _really_ monotonic, despite what the Linux
manpages say. So this should be much better than gettimeofday(). (At
times there were kernel bugs which broke the monotonic property.)

From the perspective of the player, time can still be discontinuous
(you could just stop the process with ^Z), but at least it's guaranteed
to be monotonic without further hacks required.

Also note that clock_gettime() returns the time in nanoseconds. We want
microseconds only, because that's the unit we chose internally. Another
problem is that nanoseconds can wrap pretty quickly (less than 300 years
in 63 bits), so it's just better to use microseconds. The division won't
make the code that much slower (compilers can avoid a real division).

Note: this expects that the system provides clock_gettime() as well as
CLOCK_MONOTONIC. Both are optional according to POSIX. The only system
I know of which doesn't have these, OSX, has separate timer code anyway,
but I still don't know whether more obscure (yet supported) platforms
have a problem with this, so I'm playing it safe. But this still expects
that CLOCK_MONOTONIC always works at runtime if it's defined.
---
 osdep/timer-linux.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

(limited to 'osdep')

diff --git a/osdep/timer-linux.c b/osdep/timer-linux.c
index 1378e6ea7e..4ab19b6490 100644
--- a/osdep/timer-linux.c
+++ b/osdep/timer-linux.c
@@ -40,12 +40,22 @@ void mp_sleep_us(int64_t us)
 #endif
 }
 
+#if defined(_POSIX_TIMERS) && _POSIX_TIMERS > 0 && defined(CLOCK_MONOTONIC)
+uint64_t mp_raw_time_us(void)
+{
+    struct timespec ts;
+    if (clock_gettime(CLOCK_MONOTONIC, &ts))
+        abort();
+    return ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
+}
+#else
 uint64_t mp_raw_time_us(void)
 {
     struct timeval tv;
     gettimeofday(&tv,NULL);
     return tv.tv_sec * 1000000LL + tv.tv_usec;
 }
+#endif
 
 void mp_raw_time_init(void)
 {
--
cgit v1.2.3
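
A small standalone check, separate from the patch itself: assuming a POSIX
system that provides clock_gettime() and CLOCK_MONOTONIC (the same assumption
the new #if branch makes), the sketch below performs the identical ns-to-us
conversion and prints the wrap horizon of a 63-bit counter in nanoseconds
versus microseconds, which is where the "less than 300 years" figure in the
commit message comes from.

/* Standalone sketch, not part of the patch above.
 * Assumes a POSIX system where clock_gettime() and CLOCK_MONOTONIC exist. */
#define _POSIX_C_SOURCE 200112L
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Same conversion as the new mp_raw_time_us(): nanoseconds from
 * CLOCK_MONOTONIC are divided down to microseconds. The divisor is a
 * constant, so compilers can emit a multiply/shift instead of a real
 * division. */
static uint64_t raw_time_us(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_MONOTONIC, &ts))
        abort();
    return ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}

int main(void)
{
    const double sec_per_year = 365.25 * 24 * 3600;
    /* How long until a 63-bit signed counter overflows, per unit. */
    double years_ns = (double)INT64_MAX / 1e9 / sec_per_year;
    double years_us = (double)INT64_MAX / 1e6 / sec_per_year;
    printf("monotonic time now: %llu us\n",
           (unsigned long long)raw_time_us());
    printf("63-bit wrap: about %.0f years in ns, %.0f years in us\n",
           years_ns, years_us);
    return 0;
}

Compiled with -O2 (older glibc may also need -lrt for clock_gettime), this
prints roughly 292 years for a nanosecond counter and on the order of 292,000
years for a microsecond counter, matching the reasoning in the commit message.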