PC386 Clock Tick Driver

saeed salpha.2004 at gmail.com
Fri Jul 19 21:29:19 UTC 2013


Hi Tim,

First of all, thanks very much for your response.

> So your high tick rate is probably enough, as Joel has pointed out, to
> stress the interrupt handling code, and expose a real bug. If you can
> fix it, then you'll get a high-five from me :D
I'm now even more curious to get to the bottom of this. I'll claim that high-five if I succeed. :D

> 1. Can I assume that your 10us figure is related to the minimum packet
> interval at 12900p/s i.e. 77.5us?
> 
> What about a tick period of 77.5us? OK, so I'm not serious about that
> figure, but choosing 10us seems like a conveniently 'round' number. You
> might not have the processor/memory/interrupt bandwidth to support 10us.
> How about 20us? 40us? That could get you out of jail without any other
> software changes.
> 
Yes, the 10us figure is there to achieve the maximum rate of 12900 p/s. You're right that we could use roughly 80us per tick, but surprisingly, even at 10ms per tick we still see one or two "spurious interrupts" occurring from time to time!
I think it may be related to the large number of network ISRs, which keep the clock ISR from clearing the interrupt flag in time.
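
For the record, 1/12900 p/s works out to about 77.5us, so a tick in the
78-80us range would match the packet rate. As a sketch only (not our real
configuration, and all the surrounding values are placeholders), the tick
period of an RTEMS application is set through CONFIGURE_MICROSECONDS_PER_TICK
in the confdefs.h configuration:

    /* Sketch: ~78us tick, i.e. 1/12900 p/s ~= 77.5us rounded up.
     * All other values here are placeholders, not our application's. */
    #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
    #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
    #define CONFIGURE_MICROSECONDS_PER_TICK  78   /* ~12820 ticks/s */
    #define CONFIGURE_MAXIMUM_TASKS          4
    #define CONFIGURE_RTEMS_INIT_TASKS_TABLE
    #define CONFIGURE_INIT
    #include <rtems/confdefs.h>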

> 2. Packet throughput vs jitter. Perhaps your packet sending/receiving
> hardware is more tolerant of packet jitter than you think. Send/Recv
> buffers are everywhere. Why not send, for example, a maximum of 2
> packets every 100us?
> 
> [edit] To make this point a little more strongly: Are you only trying to
> manage packet bandwidth, or is packet jitter a concern too?
It's a good idea and I like it, but jitter is a concern for us too; we need packets to go out on a regular basis.
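
To show what I mean by a regular basis (a sketch under my own assumptions:
one packet per tick is acceptable, and send_one_packet() is just a stand-in
name), a sender built on the classic rate monotonic manager would look
roughly like this:

    #include <rtems.h>

    extern void send_one_packet(void);   /* hypothetical application call */

    rtems_task sender_task(rtems_task_argument arg)
    {
      rtems_id period_id;

      rtems_rate_monotonic_create(rtems_build_name('P','K','T','S'),
                                  &period_id);

      while (1) {
        /* Blocks until the next period boundary; RTEMS_TIMEOUT means
         * we overran the previous period (a missed deadline). */
        if (rtems_rate_monotonic_period(period_id, 1) == RTEMS_TIMEOUT) {
          /* count the overrun, etc. */
        }
        send_one_packet();
      }
    }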

> 3. My way. Personally, I would queue the packets for transmission in a
> thread, but have them sent from within a timer (dedicated) interrupt
> handler. Packet jitter will be more bounded, perhaps even beautiful to
> some...
An interesting approach indeed, but it requires a kernel hack, whereas we need to do this test from an application.
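
Just to make the idea concrete for the archive (a sketch only, I haven't
tried it): the classic timer manager runs its service routines from the
clock tick ISR, so Tim's scheme could be approximated as below.
dequeue_packet() and low_level_send() are hypothetical names, and making
the actual send ISR-safe is exactly where the kernel/driver hack comes in.

    #include <rtems.h>
    #include <stddef.h>

    extern int  dequeue_packet(void **buf, size_t *len);  /* hypothetical */
    extern void low_level_send(void *buf, size_t len);    /* hypothetical */

    /* Runs in clock tick ISR context. */
    static rtems_timer_service_routine send_tsr(rtems_id timer, void *arg)
    {
      void  *buf;
      size_t len;

      if (dequeue_packet(&buf, &len))
        low_level_send(buf, len);

      rtems_timer_fire_after(timer, 1, send_tsr, arg);  /* re-arm */
    }

    void start_sender(void)
    {
      rtems_id timer;

      rtems_timer_create(rtems_build_name('S','N','D','T'), &timer);
      rtems_timer_fire_after(timer, 1, send_tsr, NULL);
    }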

> You've asserted that a 3.3GHz machine should be cool. Bear in mind that
> the NXP device you've mentioned probably has better interrupt latency
> [the time from interrupt line assertion until the first instruction of
> the interrupt handler]: There's more to it than pure CPU clock speed.
> The scheduler has to run, decide to schedule your transmitter thread, do
> the context switch, and return from interrupt *before* the next tick.
> And that's not even leaving time for your thread to run :)
> 
> I think Joel is suggesting that you might be getting another interrupt
> before the tick handler has returned. Check that.
> 
> Maybe go back to the NXP platform, use a GPIO pin to show what
> proportion of your time is spent within the tick handler. It's probably
> more than you realise :)
> 
> As you've probably realised by now, blah blah blah.
> 
> Anyway, HTH.
> Tim
I need to investigate this some more, and I hope to come back soon with more useful information, or better yet: fix the bug and collect that high-five! ;)
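
One thing I plan to try as a PC386 stand-in for the GPIO trick (again a
sketch under my own assumptions, not existing driver code): timestamp the
per-tick work with the x86 TSC and keep the worst case seen so far.
my_tick_work() is a hypothetical hook for whatever runs on each tick.

    #include <stdint.h>

    static inline uint64_t rdtsc(void)
    {
      uint32_t lo, hi;
      __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
      return ((uint64_t)hi << 32) | lo;
    }

    volatile uint64_t max_tick_cycles;   /* worst case, in TSC cycles */

    void my_tick_work(void)              /* hypothetical per-tick hook */
    {
      uint64_t start  = rdtsc();

      /* ... the real per-tick work goes here ... */

      uint64_t cycles = rdtsc() - start;
      if (cycles > max_tick_cycles)
        max_tick_cycles = cycles;        /* inspect later from a task */
    }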

Thanks again,
SAeeD


