student for gsoc2008

Reng Zeng alan at fiucssa.org
Thu Mar 27 17:39:56 UTC 2008


Thanks Chris, your reply helps a lot with my proposal draft. I am writing
down my understanding inline, marked [Alan], to confirm it with you.

Regards,
Alan

2008/3/26, Chris Johns <chrisj at rtems.org>:
>
>
> Yes this is correct. We need to flatten the argument list to a block of
> memory
> to capture. The wrapper code generator will know all the details of the
> function's signature so can determine the amount of stack used to hold
> arguments. The variable argument functions such as printf are something I
> am
> yet to figure out. This is part of the work.


[Alan] I think it can determine the amount of stack required ONLY if we run
the generator together with RTEMS when generating the wrapper code;
otherwise, the function signature alone cannot determine the required
memory.

I guess most of the functions are not variadic. A variadic one like printf
could be wrapped as below, but I have not yet figured out how the wrapper
code generator could generate this :)

#include <stdio.h>
#include <stdarg.h>

/* Wrapper installed with the linker's --wrap=printf option: forwards the
   variable arguments to vprintf via a va_list. */
int __wrap_printf(const char *format, ...)
{
    int ret;
    va_list argp;

    va_start(argp, format);
    ret = vprintf(format, argp);
    va_end(argp);
    return ret;
}
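For a fixed-argument function, the generator could emit a wrapper that flattens the arguments into a block of memory before calling the real function. A minimal sketch, assuming a hypothetical capture-engine entry point rtems_capture_log_call() and an invented function __wrap_sem_obtain() (the real names and API are still to be designed; __real_sem_obtain is stubbed here for the demo):

```c
#include <stdint.h>
#include <string.h>

/* Sketch of generator output for a fixed-argument function. The trace
   point id (1) and all names below are illustrative only. */

typedef struct {
    uint32_t id;     /* trace point id for this function signature */
    uint32_t obj;    /* flattened arguments follow */
    uint32_t count;
} sem_obtain_args;

static sem_obtain_args last_record;   /* stand-in for the capture buffer */

/* Hypothetical capture-engine call: copy the flattened arguments away. */
static void rtems_capture_log_call(const void *args, size_t size)
{
    memcpy(&last_record, args, size);
}

/* Stub standing in for the real (unwrapped) function. */
static int __real_sem_obtain(uint32_t obj, uint32_t count)
{
    return (int)(obj + count);
}

int __wrap_sem_obtain(uint32_t obj, uint32_t count)
{
    /* Flatten the argument list into a block of memory and capture it. */
    sem_obtain_args rec = { 1 /* id */, obj, count };
    rtems_capture_log_call(&rec, sizeof rec);
    return __real_sem_obtain(obj, count);
}
```

Since the generator knows the signature, it can emit the struct layout and the sizeof for each wrapped function mechanically.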


> The capture engine's role changes a little when logging wrapped calls.
> When in
> the context switch path performance is key while in the wrapped mode the
> overhead of the block copy of arguments is something we know has to happen
> and
> have to factor in.
>
> The example code above is close. We need to decide whether to log only
> once, when we have the return value, with 2 time stamps (entry/exit).
> This lowers the overhead but complicates the data stream on the host
> side, as nested traced calls will appear before the current call.


[Alan] I understand you are asking whether to log at both entry and exit,
or only at one of them. Right? If so, deciding between the options could be
part of the work.
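The log-once-at-exit option could look like the sketch below: one record per wrapped call carrying both time stamps, emitted only after the real function returns. All names and the record layout are invented for illustration (the fake clock stands in for the target's time source):

```c
#include <stdint.h>

/* One record per wrapped call, emitted at exit, carrying both time
   stamps. Layout and names are illustrative only. */
typedef struct {
    uint32_t id;          /* trace point id */
    uint64_t entry_time;  /* taken on entry to the wrapper */
    uint64_t exit_time;   /* taken after the real function returns */
    int32_t  retval;      /* return value of the wrapped call */
} func_trace_record;

static uint64_t fake_clock;                    /* stand-in target clock */
static uint64_t now(void) { return ++fake_clock; }

/* Wrap a call and build its single exit-time record. Any nested traced
   calls made inside fn() would emit their records first, which is why
   the host-side decoder sees inner calls before the current one. */
static func_trace_record trace_wrapped_call(uint32_t id, int (*fn)(void))
{
    func_trace_record rec;
    rec.id = id;
    rec.entry_time = now();
    rec.retval = fn();
    rec.exit_time = now();
    return rec;            /* logged once, after the call completes */
}

/* Demo function standing in for a wrapped call. */
static int demo_fn(void) { return 5; }
```

The single record halves the logging overhead but, as noted above, means the host must re-nest the stream when reconstructing call order.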

> > I believe Chris has an idea where the trigger is actually in the capture
> > engine and users can provide boolean expressions.  Greatly simplifies
> > the logging.
>
>
> We need generate some sort of id which is specific to a function's
> signature
> and then have a way to match this with the running target image and
> finally
> the resulting data. Once we have this the capture engine can handle the
> trigger and filter processing keeping this code in one place.
>
> We may have the tool generate code that is compiled into the application
> to
> handle static filters and triggers rather than a user interface. This code
> is
> just making calls to the capture engine.


[Alan] I understand "static" here to mean that, instead of setting the
trigger/filter dynamically at run time, the trigger/filter is written into
the code as a pre-defined trigger/filter, so that as soon as tracing is
enabled the static trigger/filter takes effect.
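In other words, the tool would emit plain calls into the capture engine that run during initialisation. A sketch of what such generated code might look like; every name here (capture_rule, capture_engine_add_rule, the trace point ids) is invented, not an existing RTEMS API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical capture-engine rule table that generated code fills in. */
typedef struct {
    uint32_t trace_point;  /* id of the wrapped function / event */
    uint32_t thread_id;    /* 0 = any thread */
    bool     is_trigger;   /* trigger vs. filter */
} capture_rule;

#define MAX_RULES 16
static capture_rule rules[MAX_RULES];
static int          rule_count;

static void capture_engine_add_rule(uint32_t tp, uint32_t tid, bool trig)
{
    if (rule_count < MAX_RULES) {
        rules[rule_count].trace_point = tp;
        rules[rule_count].thread_id   = tid;
        rules[rule_count].is_trigger  = trig;
        rule_count++;
    }
}

/* What the tool might generate from static commands such as
   "filter rtems_semaphore_* from 32" plus a malloc trigger. */
void capture_static_init(void)
{
    capture_engine_add_rule(10 /* rtems_semaphore_obtain */,  32, false);
    capture_engine_add_rule(11 /* rtems_semaphore_release */, 32, false);
    capture_engine_add_rule(20 /* trigger on malloc */,        0, true);
}
```

The dynamic (TCP-controlled) case would call the same capture_engine_add_rule() path, just driven from the host instead of from generated init code.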

> Let me explain. We could have a target that is connected to via TCP. The
> user
> has wrapped a list of functions then interacts with the running target to
> set
> filters and triggers dynamically in a series of runs to isolate a bug. The
> second example is a system where the list of wrapped functions is
> generated
> from the static filter and trigger commands which are also turned into
> target
> code run during initialisation. Both systems use the same capture engine
> code.
>
>
> > Plus with the wrapping idea, you turn the wrapper completely off via
> > ld arguments.  That part can be done manually or via a GUI to generate
> > a file that is passed to the ld command line.
>
>
> This is something we need to figure out. For example, do I build my RTEMS
> application as I do now and then have the capture tool relink it into a
> traceable executable, or do users' application build systems need to know
> what to do? The issue here is how we manage or help the user with the
> build process when wrapping.


[Alan] Are we going to build the wrapper code into RTEMS if the performance
impact is acceptable? If so, why would the user application need to know
how to build the wrapper code in?

> >> 2. Triggers are to be managed by the event ID.
>
>
> I call "event ID" trace points. We can have system trace points like we
> currently have in the capture engine, function trace points (wrapped
> functions) and user trace points where a user makes an explicit call.


[Alan] About the user trace points here: do you mean a user application
could make an explicit call to the capture engine?
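If so, a user trace point might look like the following sketch. rtems_capture_user_event() is a name invented here for illustration, not an existing capture-engine call, and the single-record buffer stands in for the real engine:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical user trace point: the application logs an explicit event
   with a small payload. */
typedef struct {
    uint32_t id;
    char     payload[32];
} user_event;

static user_event last_event;  /* stand-in for the capture engine buffer */

void rtems_capture_user_event(uint32_t id, const char *payload)
{
    last_event.id = id;
    strncpy(last_event.payload, payload, sizeof last_event.payload - 1);
    last_event.payload[sizeof last_event.payload - 1] = '\0';
}

void application_task(void)
{
    /* ... application work ... */
    rtems_capture_user_event(42, "state: ready");  /* explicit trace point */
}
```

Such calls would share the trace point id space with system and function trace points, so the same triggers and filters apply.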

> >>
> >> 3. I have no idea about the filter for these events. For the existing
> >> task-related ones, we could filter by priority, but how about the
> >> wrapped ones? Could you give some examples for me to consider?
> >>
> > Based upon the id of the executing thread. Try this example:
> >
> > log all semaphore manager calls for thread 32.
> > log all context switches
> > log all malloc/free calls for thread 75.
> >
> > Start tracing after 10 seconds and log for 15 seconds.
>
>
> The first examples are filters while the last one is a trigger. The way to
> think about this is to consider a hardware logic analyser. You can filter
> the
> signals you want to look at then you specify a trigger. Once triggered you
> capture based on the filtering.
>
> The detail of the implementation is the way in which a user says "malloc"
> and
> we match that to a specific trace point (id) which has a set of flags we
> check
> against. The example filter:
>
>
>   log all semaphore manager calls for thread 32
>
>
> is rather complex because we need to pattern match "rtems_semaphore_*" to
> find
> all the calls and the ids then add the thread id to each trace point's
> control data. The check of the thread id is similar to the "by" fields
> currently in the capture control structure. The above could be written as:
>
>   filter rtems_semaphore_* from 32
>
> To filter another task as well run the command again with a new thread id.


[Alan] That makes sense, thank you!
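The pattern-matching step Chris describes could be sketched as below, using POSIX fnmatch() to resolve a pattern like "rtems_semaphore_*" against a table of known trace points. The table contents and function name are illustrative only:

```c
#include <fnmatch.h>
#include <stdint.h>

/* Illustrative table mapping trace point names to ids. In the real tool
   this would be generated from the wrapped-function list. */
typedef struct {
    const char *name;
    uint32_t    id;
} trace_point;

static const trace_point points[] = {
    { "rtems_semaphore_create",  10 },
    { "rtems_semaphore_obtain",  11 },
    { "rtems_semaphore_release", 12 },
    { "rtems_task_create",       20 },
};

/* Fill ids[] with every trace point matching pattern; return the count.
   A command like "filter rtems_semaphore_* from 32" would then add the
   thread id 32 to each returned trace point's control data. */
int match_trace_points(const char *pattern, uint32_t *ids, int max)
{
    int n = 0;
    for (int i = 0; i < (int)(sizeof points / sizeof points[0]); i++)
        if (n < max && fnmatch(pattern, points[i].name, 0) == 0)
            ids[n++] = points[i].id;
    return n;
}
```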

> >> Regarding other parts of the project, my understanding is as below;
> >> please comment.
> >> 1. Transmission of logged data to host
> >>
> >>           o The first challenge of this part is to make the system and
> >>             the log survive high traffic.
> >>           o The second challenge is to implement it in TCP while
> >>             providing the flexibility for other transports in future.
> >>           o The third challenge is to trade off minimizing the data to
> >>             be sent across the network against the overhead of
> >>             compressing data.
> >>
> > Right.  This is something to investigate and do the trade-offs in the
> > early phase of the project.
>
>
> We also need to handle the control data to manage dynamic triggers and
> filters.
>
[Alan] Yes, we should pass all the information the host requires to decode
the data. Thanks again for your detailed explanation!
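To make the stream decodable on the host regardless of transport, each record could carry a small fixed header serialised in a defined byte order. A sketch with an invented layout (field choices and the 16-byte size are assumptions, not a decided format):

```c
#include <stdint.h>

/* Illustrative per-record header sent ahead of each record's payload. */
typedef struct {
    uint16_t type;   /* system / function / user trace point, or control */
    uint16_t size;   /* payload bytes that follow the header */
    uint32_t id;     /* trace point id, matched against the target image */
    uint64_t when;   /* time stamp */
} record_header;

/* Write v into p as 'bytes' little-endian bytes. */
static void put_le(uint8_t *p, uint64_t v, int bytes)
{
    for (int i = 0; i < bytes; i++)
        p[i] = (uint8_t)(v >> (8 * i));
}

/* Serialise the header; returns the number of bytes written (fixed 16). */
int encode_header(uint8_t buf[16], const record_header *h)
{
    put_le(buf + 0, h->type, 2);
    put_le(buf + 2, h->size, 2);
    put_le(buf + 4, h->id,   4);
    put_le(buf + 8, h->when, 8);
    return 16;
}
```

A fixed, explicitly-encoded header sidesteps struct padding and endianness differences between target and host, and the same framing can carry the dynamic trigger/filter control messages.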