ppisa4lists at pikron.com
Sat Apr 8 00:29:31 UTC 2006
On Saturday 08 April 2006 01:36, Martin Molnar wrote:
> The statements from the VxWorks manual are correct. But it is not clearly
> stated whether a task's priority remains or changes after the task
> releases the mutex with the highest ceiling while it still holds another one
> with a priority higher than its normal priority. So, I am not sure that
> VxWorks does the same thing. Tests would reveal it.
We have the latest VxWorks version at our department for the
real-time systems lectures. I keep away from the closed source myself,
but if we can prepare sources for testing, I can politely ask
my colleague to run the test and report the results.
> What about implementing the changing of the task's priority this way:
> - enforce the mutex release rule that PCP mutexes are released in the
> opposite order of acquisition. I think it is not a painful restriction.
> - for each task, maintain the "resources held set" implemented as a stack.
> Every stack element (record) consists of two fields: the mutex id (to
> check that the mutex release rule is kept) and the task's priority after
> releasing that mutex.
A stack of fixed size is bad programming practice.
Yes, task stacks already have fixed sizes, but that has already bitten
Joel and others in the form of mysterious PC port failures.
> For a better explanation, consider this example:
> - the task's normal priority is 9.
> 1. The task acquires mutex 4 (4 is the ceiling of the mutex). Then, put on the
> stack the element (mutex id = 4, priority = current priority = 9) and change
> the task's current priority to 4.
> 2. The task acquires mutex 8. Then put on the stack the element (mutex id = 8,
> priority = current priority = 4). The task's current priority is not changed
> because it is higher than 8.
> Example of the stack:
> mutex id: 4 8 6 2 1
> priority: 9 4 4 4 2
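The quoted scheme could be sketched in C roughly like this (a minimal,
single-task sketch with illustrative names; lower numbers mean higher
priority, as in the example, and blocking/escalation by waiters is not
modeled):

```c
#include <assert.h>

#define MAX_HELD 8

/* One record of the "resources held set" */
struct held {
    int mutex_id;
    int restore_prio;  /* task's priority after releasing this mutex */
};

struct task {
    int prio;                    /* current (possibly boosted) priority */
    struct held stack[MAX_HELD];
    int top;
};

/* Acquire a PCP mutex with the given ceiling; record the old priority. */
static void pcp_acquire(struct task *t, int mutex_id, int ceiling)
{
    t->stack[t->top].mutex_id = mutex_id;
    t->stack[t->top].restore_prio = t->prio;
    t->top++;
    if (ceiling < t->prio)       /* lower number = higher priority */
        t->prio = ceiling;
}

/* Release under the LIFO rule: just pop and restore. */
static void pcp_release(struct task *t, int mutex_id)
{
    t->top--;
    assert(t->stack[t->top].mutex_id == mutex_id); /* release-rule check */
    t->prio = t->stack[t->top].restore_prio;
}
```

Running the quoted example through this sketch gives the same sequence:
priority 9, then 4 after taking mutex 4, unchanged after mutex 8, and
back to 9 once both are popped.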
OK, but I am not sure if you have accounted for what has to happen
when a task with priority 3 wants to take mutex 8;
your records have to be adjusted to
mutex id: 4 8 6 2 1
priority: 9 4 3 3 3
with no change to the task's actual priority. If a task with priority
1 arrives now and asks for mutex 6, the situation changes to
mutex id: 4 8 6 2 1
priority: 9 4 3 1 1
and the task's actual priority has to be adjusted as well.
To build the LIFO list, I would suggest adding only
one pointer to the most recently acquired mutex in the task structure,
and pointer and priority fields to each mutex.
The last piece is the identifier of the actual holding thread
(but that is there already).
A mutex (by definition) guarantees that only one thread
of execution can hold it. This means that no other thread
can use the mutex private fields while they are assigned to the
actual holder. The mutex lock code is then roughly

  if mutex is available
    adjust priority for ceiling
  else if holder's priority is lower than thread's priority
    escalate holder->prio to thread->prio
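The pointer-per-mutex variant might look like this in C (a sketch only;
field and function names are illustrative, lower numbers mean higher
priority, and actually blocking the contending thread is not modeled):

```c
#include <assert.h>
#include <stddef.h>

struct pcp_mutex;

struct task {
    int prio;                        /* current priority (lower = higher) */
    struct pcp_mutex *last_mutex;    /* head of LIFO list of held mutexes */
};

struct pcp_mutex {
    int ceiling;
    struct task *holder;             /* identifier of the holding thread */
    int prev_prio;                   /* holder's priority before this lock */
    struct pcp_mutex *prev_acquired; /* next element of the LIFO list */
};

static void pcp_lock(struct pcp_mutex *m, struct task *t)
{
    if (m->holder == NULL) {
        /* mutex is available: link it into the list, adjust for ceiling */
        m->holder = t;
        m->prev_prio = t->prio;
        m->prev_acquired = t->last_mutex;
        t->last_mutex = m;
        if (m->ceiling < t->prio)
            t->prio = m->ceiling;
    } else if (m->holder->prio > t->prio) {
        /* contended: escalate the holder (waiting itself is omitted) */
        m->holder->prio = t->prio;
    }
}

/* LIFO release: restore the priority recorded in the mutex itself. */
static void pcp_unlock(struct pcp_mutex *m, struct task *t)
{
    assert(t->last_mutex == m);      /* release rule: opposite order */
    t->last_mutex = m->prev_acquired;
    t->prio = m->prev_prio;
    m->holder = NULL;
}
```

Note that both lock and unlock are O(1): all the state lives in the mutex
that only its holder may touch, so no fixed-size per-task stack is needed.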
> It's obvious how to release the mutexes and change the task's priority.
Simple and O(1), but I am not sure if the complexity described
above is worth implementing.
The above algorithm can work even for recursive mutexes:
the whole acquisition of the mutex is skipped if the holder is the
actual task and only the recursion count is increased. This is how it is
implemented in RTEMS now.
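The recursion check sits at the very top of the lock path, before any of
the ceiling logic (a minimal sketch with illustrative names; this is not
the actual RTEMS source):

```c
#include <assert.h>
#include <stddef.h>

struct task { int prio; };

/* Hypothetical minimal recursive mutex for illustration */
struct pcp_mutex {
    struct task *holder;
    int nest_count;
};

static void pcp_lock(struct pcp_mutex *m, struct task *t)
{
    if (m->holder == t) {       /* recursive acquisition: */
        m->nest_count++;        /* skip the whole ceiling logic */
        return;
    }
    m->holder = t;              /* first acquisition */
    m->nest_count = 1;
    /* ...ceiling/priority adjustment would go here... */
}

static void pcp_unlock(struct pcp_mutex *m, struct task *t)
{
    (void)t;
    if (--m->nest_count == 0)
        m->holder = NULL;       /* fully released */
}
```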
The code can be sped up a little if the priority stack is shifted
by one and the first stage is stored directly in the TCB.
This way a newly contending task learns more quickly
that the actual mutex priority is already escalated, and it
gets the value of that escalation => it does not need
to go through the list to find that fact in the entry
pointing exactly to the mutex in question.
This code can even work for a different order of release:

  if task->last_mutex == mutex
    task->last_mutex = mutex->prev_acquired
    change task->prio to mutex->prev_prio
    (a little more complex with the shifted priority stack)
  else
    go from task->last_mutex, find the neighbours
    of the mutex and adjust the list accordingly
The priority adjustment is the problem in this situation;
with the shifted stack it is feasible and simpler.
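The out-of-order release could be sketched roughly like this (a sketch
only, with illustrative names; lower numbers mean higher priority, and
escalation by waiters as well as relaxing the task's current priority
after the unlink are deliberately left out, since that is exactly the
hard part mentioned above):

```c
#include <assert.h>
#include <stddef.h>

struct pcp_mutex;

struct task {
    int prio;
    struct pcp_mutex *last_mutex;
};

struct pcp_mutex {
    int ceiling;
    struct task *holder;
    int prev_prio;                   /* priority before this lock */
    struct pcp_mutex *prev_acquired;
};

static void pcp_unlock_any(struct pcp_mutex *m, struct task *t)
{
    if (t->last_mutex == m) {
        /* the easy LIFO case */
        t->last_mutex = m->prev_acquired;
        t->prio = m->prev_prio;
    } else {
        /* walk from last_mutex to find m's successor, unlink m */
        struct pcp_mutex *s = t->last_mutex;
        while (s->prev_acquired != m)
            s = s->prev_acquired;
        s->prev_acquired = m->prev_acquired;
        /* the successor's saved priority loses m's contribution
         * (correct only when no waiter escalation is in effect) */
        if (s->prev_prio < m->prev_prio)
            s->prev_prio = m->prev_prio;
    }
    m->holder = NULL;
}
```

With mutexes 4 and 8 held in that order, releasing 4 first unlinks it
and fixes mutex 8's saved priority back to 9, so the final release of 8
still restores the task's normal priority.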
If you think it is worth the complexity, I can try to help
implement it. But I am not sure whether this is something
that should be configurable. Good code tries to hold
mutexes for the minimal necessary time, does not call more functions
with a mutex held, etc. On the other hand, there are problems with
overlooked deadlock possibilities which creep into code as a result
of adjustments to other pieces... So I think that for carefully
written code and applications the actual solution could be
even more profitable than the fully correct one.
Another possibility is to select this option on a per-mutex basis.