angelo_f at bigpond.com
Thu Sep 19 22:39:12 UTC 2002
Thanks, you explained that well
Joel Sherrill wrote:
>Angelo Fraietta wrote:
>>Would increasing the CONFIGURE_MICROSECONDS_PER_TICK define reduce
>>interrupt latency? I have a large number of interrupts happening and was
>>wondering whether, by increasing this time, the kernel would spend less
>>time servicing the task scheduler, thus giving the external interrupts I
>>receive a bigger slice of the pie.
>>It is currently set at 1000, so would increasing it to 2000
>>theoretically give better performance for interrupts?
>Like many things in life, it is a trade-off. Upping it to
>2000 would result in getting fewer clock tick interrupts. So
>1/2 of the system time spent servicing that interrupt source
>would disappear. That should free some CPU time. More importantly,
>it reduces the time you spend in an ISR, which could lower the latency for
>the other interrupt sources. But it does so at a cost of accuracy
>in your delays and timeouts. Are all your intervals in 2 msec
>multiples? Can you live with a 1 msec average error vs 500 usec?
I think it would be hard to musically hear anything shorter than 5 ms, so a
2 ms tick would probably be OK.
>Are you using timeslicing at all that would be impacted by the change?
>It could really help though. And if you can live with it,
>consider moving to 5 msecs or 10 msecs. The magic of 1 msec
>clock ticks is that all millisecond intervals have the
>math work out so you don't lose by division. :)
BTW, I will be doing the first performance with the Smart Controller
PO Box 859
Hamilton NSW 2303
There are those who seek knowledge for the sake of knowledge - that is CURIOSITY
There are those who seek knowledge to be known by others - that is VANITY
There are those who seek knowledge in order to serve - that is LOVE
Bernard of Clairvaux (1090 - 1153)