Okay, I've read up on this a bit and you raise some valid points.

For whatever reason, POSIX has decided that `CLOCKS_PER_SEC` must be exactly 1,000,000 on every system that wants to comply. That doesn't leave much room on a 32-bit system for maximum clock duration; it will wrap around well before the 10 hours the OP gave as an example.
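To put a rough number on that (assuming a signed 32-bit `clock_t`, which is the worst case rather than something any standard guarantees):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    /* Hypothetical worst case: clock_t is a signed 32-bit integer. */
    const double max_ticks = 2147483647.0;        /* 2^31 - 1 */
    double seconds = max_ticks / CLOCKS_PER_SEC;  /* 1,000,000 on XSI-conforming POSIX */

    printf("clock() wraps after ~%.0f seconds (~%.1f minutes)\n",
           seconds, seconds / 60.0);
    return 0;
}
```

On such a system that comes out to roughly 2147 seconds, i.e. about 36 minutes, nowhere near 10 hours.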
The other issue is that, as I gather, on some systems `clock()` measures the CPU time consumed by a particular process, which is not necessarily the same as real-world time in multicore environments. (FWIW I'm on a Mac where I don't think it works that way? You would need some special APIs to get that kind of info.)
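A quick way to see that difference on a typical POSIX box (just a sketch; as you say, the exact behaviour varies by platform):

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    clock_t c0 = clock();
    time_t  t0 = time(NULL);

    sleep(2);  /* burns wall-clock time but almost no CPU time */

    clock_t c1 = clock();
    time_t  t1 = time(NULL);

    printf("clock(): %.3f s of CPU time\n", (double)(c1 - c0) / CLOCKS_PER_SEC);
    printf("time():  %.0f s of wall-clock time\n", difftime(t1, t0));
    return 0;
}
```

`clock()` reports close to zero here, while `time()` reports the full two seconds.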
I'm trying to think of the last time I programmed straight C and used `time.h` and the like. It was probably in developing scientific instrumentation? IIRC the clock in that case was keyed to the VBL refresh of the display, meaning `CLOCKS_PER_SEC` would have been something like 60 Hz. And I doubt `time()` would have yielded any useful value. I wonder what the OP's use case is?
Most of my hobby programming is in ANSI C and C99, so I'm unfortunately far too aware of the weird and counter-intuitive things the C and POSIX standards say. :P

`clock()` is fantastic for sub-second timings, such as delta times in games or peripheral synchronization, which matches the use case you mention very well. I recommended `time()` over it because the OP's use case is calculating the number of hours a user has had their software open, and Unix timestamps are the perfect mechanism for that, in my opinion.
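For what it's worth, the kind of thing I had in mind for the hours-open counter is just a pair of timestamps (a minimal sketch; persisting and accumulating the total across sessions is left to the OP):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t start = time(NULL);  /* Unix timestamp when the program opens */

    /* ... the rest of the application runs here ... */

    time_t now = time(NULL);
    double hours = difftime(now, start) / 3600.0;
    printf("Session length: %.2f hours\n", hours);
    return 0;
}
```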