task delete and semaphore
WuHao
derossi at mail.ustc.edu.cn
Tue Jul 3 07:25:49 UTC 2007
Paul Whitfield wrote:
> Please keep this discussion on the RTEMS mailing list,
> as that way people who know about the internals of
> termios can respond
>
>>>> How does the task to be deleted release all the resources it holds?
>>>> Typically when the task holds the tty's write mutex.
>>>>
>>>
>>> As described in the manual, you should signal the task to clean up
>> ~~~~~
>> Suppose the task to be deleted gets the CPU and begins to clean up.
>> But it is holding a tty write mutex (osem); how can it release the
>> mutex, since termios does not provide any directive to do so?
>
> I hope I am not missing something here!
>
> The point is you should not delete a task by calling rtems_task_delete
> from ANOTHER task.
>
> You should send a message, event or signal to the task which then
> cleans up and deletes itself. This includes releasing any held
> semaphore and closing any open termios ports.
>
> This message will only be received by the task to be
> deleted when it has *finished* its termios processing,
> and therefore at that point it will not be holding
> semaphores or other resources.
>
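If I read that correctly, the requesting task never calls rtems_task_delete
on the other task at all; it only posts an event and leaves the cleanup to
the task itself. A minimal sketch of that requesting side, where
SHUTDOWN_EVENT is just my own choice of event mask and request_shutdown is a
name I made up, not anything from RTEMS or termios:

/*
 * Task A side: ask Task B to shut itself down by sending an event
 * instead of calling rtems_task_delete() on it.
 */
#include <rtems.h>

#define SHUTDOWN_EVENT RTEMS_EVENT_1   /* arbitrary mask for this sketch */

void request_shutdown(rtems_id task_b_id)
{
  rtems_status_code sc = rtems_event_send(task_b_id, SHUTDOWN_EVENT);
  if (sc != RTEMS_SUCCESSFUL) {
    /* e.g. RTEMS_INVALID_ID if task_b_id is not a valid task */
  }
}
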
Is that to say:
Task A wants to delete Task B.
1. Task A is not allowed to delete Task B by calling rtems_task_delete;
2. Task A may delete Task B by sending a message, event, or signal to the latter;
3. If Task B does not know when Task A will send that message, and Task A may
want to delete Task B at any time, must Task B check for the message every
time it finishes a piece of libc processing? And how? (A sketch of what I
have in mind follows below.)
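Concretely, here is a minimal sketch of what I imagine for Task B, assuming
the open termios file descriptor is passed in as the task argument (tty_fd,
Task_B and the "tick" write are my own placeholders). Each write() goes
through rtems_termios_write, which releases tty->osem before it returns, so
the task can then poll for the shutdown request with RTEMS_NO_WAIT, close the
port, and delete itself while holding nothing:

/*
 * Task B: do one unit of work at a time, then poll (non-blocking) for a
 * shutdown request.  Only after the poll succeeds does it release its
 * resources and delete itself.
 */
#include <rtems.h>
#include <unistd.h>

#define SHUTDOWN_EVENT RTEMS_EVENT_1   /* same arbitrary mask as above */

rtems_task Task_B(rtems_task_argument arg)
{
  int tty_fd = (int) arg;              /* assume the open termios fd is the argument */
  rtems_event_set pending;
  rtems_status_code sc;

  for (;;) {
    /* One unit of work: this write goes through rtems_termios_write,
       which obtains and releases tty->osem before returning. */
    write(tty_fd, "tick\r\n", 6);

    /* Non-blocking check for a shutdown request between units of work. */
    sc = rtems_event_receive(
      SHUTDOWN_EVENT,
      RTEMS_EVENT_ANY | RTEMS_NO_WAIT,
      RTEMS_NO_TIMEOUT,
      &pending
    );
    if (sc == RTEMS_SUCCESSFUL)
      break;                           /* shutdown requested */
    /* RTEMS_UNSATISFIED: no request yet, keep working. */
  }

  close(tty_fd);                       /* no semaphores are held here */
  rtems_task_delete(RTEMS_SELF);       /* delete ourselves last */
}

(If Task B instead blocks with RTEMS_WAIT, or on a message queue, whenever it
is idle, the same cleanup code runs when the request arrives; either way the
delete happens outside any termios call, which I think is Paul's point.)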
Thanks.
> Looking at rtems_termios_write, it will release
> the semaphore tty->osem
>
> I hope that makes sense.
>
>
> Regards
>
> Paul
>