covoar SIGKILL Investigation

Joel Sherrill joel at rtems.org
Wed Aug 22 04:41:07 UTC 2018


On Tue, Aug 21, 2018, 10:26 PM Chris Johns <chrisj at rtems.org> wrote:

> On 22/08/2018 09:29, Joel Sherrill wrote:
> > On Tue, Aug 21, 2018, 4:05 PM Vijay Kumar Banerjee
> > <vijaykumar9597 at gmail.com> wrote:
> >
> >     On Wed, 22 Aug 2018 at 01:55, Joel Sherrill <joel at rtems.org> wrote:
> >
> >         How long is covoar taking for the entire set?
> >
> >     It works great. This is what `time` says:
> >     --------
> >     real    17m49.887s
> >     user    14m25.620s
> >     sys      0m37.847s
> >     --------
> >
> > What speed and type of processor do you have?
> >
>
> The program is single threaded so the preprocessing of each executable is
> sequential. Memory usage is reasonable so there is no swapping.
>
> Running covoar from the command line on a box with:
>
>  hw.machine: amd64
>  hw.model: Intel(R) Core(TM) i7-6900K CPU @ 3.20GHz
>  hw.ncpu: 16
>  hw.machine_arch: amd64
>
> plus 32G of memory has a time of:
>
>       366.32 real       324.97 user        41.33 sys
>
> The approximate time break down is:
>
>  ELF/DWARF loading  : 110s (1m50s)
>  Objdump            : 176s (2m56s)
>  Processing         :  80s (1m20s)
>

I don't mind this execution time for the near future. It is far from
obscene after building and running 600 tests.

>
> The DWARF loading is not optimised and I load all source line to address
> maps and all functions rather than selectively scanning for specific names
> at the DWARF level. It is not clear to me that scanning would be better or
> faster.


I doubt it is worth the effort. There should be few symbols in an exe we
don't care about, especially once we start to worry about libc and libm.
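
If we ever did scan selectively, it would probably amount to filtering the
DWARF functions against the symbol set we build from the executables. A
minimal sketch of that idea (the struct and names are made up for
illustration, not covoar's real types):

    #include <cstdint>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // Hypothetical record for a function pulled out of the DWARF info.
    struct FunctionInfo {
      std::string name;
      uint64_t    lowPC;
      uint64_t    highPC;
    };

    // Keep only the functions named in the symbol set of interest. The
    // container and field names are illustrative only.
    std::vector<FunctionInfo>
    filterFunctions(const std::vector<FunctionInfo>&       allFunctions,
                    const std::unordered_set<std::string>& wanted)
    {
      std::vector<FunctionInfo> selected;
      for (const FunctionInfo& f : allFunctions) {
        if (wanted.count(f.name) != 0)
          selected.push_back(f);
      }
      return selected;
    }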

> My hope is that moving to Capstone would help lower or remove the objdump
> overhead. Then there is threading for the loading.
>
> > I don't recall it taking near this long in the past. I used to run it as
> part of
> > development.
>
> The objdump processing is simpler than before so I suspect the time would
> have
> been at least 4 minutes.
>
> > But we may have more tests and the code has changed.
>
> I think having more tests is the dominant factor.
>
> > Reading DWARF
> > with the file opens/closes, etc. just may be more expensive than parsing
> > the text files.
>
> Reading the DWARF is a cost and at the moment it is not optimised, but it
> is only a cost because we still parse the objdump data. I think opening
> and closing files is not a factor.
>
> Parsing the objdump output is the largest component of the time. Maybe
> using Capstone with the ELF files will help.
>
> > But it is more accurate and lays the groundwork for more types of
> > analysis.
>
> Yes, and I think this is important.
>

+1

>
> > Eventually we will have to profile this code. Whatever is costly is done
> > for each exe, so there is a multiplier.
> >
> > I suspect this code would parallelize reading info from the exes fairly
> > well.
>
> Agreed.
>

This might be a good case for C++11 threads, if one of the standard
threading facilities gives us a nice pool.

And we might have some locking to account for in core data structures. Are
STL container instances thread safe?

But that is an addition for after we are feature stable relative to the old
output, plus Capstone.
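
For concreteness, a minimal sketch of what the per-executable fan-out could
look like with C++11 threads. Standard containers do not lock on their own,
so the shared results need a mutex. All of the types and names here are
hypothetical stand-ins, not covoar's real classes:

    #include <algorithm>
    #include <atomic>
    #include <cstddef>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    // Hypothetical stand-ins for the real per-executable data.
    struct ExecutableInfo { std::string path; };
    struct CoverageData   { std::string path; };

    // Shared results must be protected; STL containers do not lock for us.
    static std::mutex                resultsMutex;
    static std::vector<CoverageData> results;

    // Placeholder for the real work (DWARF load plus disassembly).
    static CoverageData processExecutable(const ExecutableInfo& exe)
    {
      return CoverageData{ exe.path };
    }

    static void processAll(const std::vector<ExecutableInfo>& exes)
    {
      const size_t nthreads =
        std::max<size_t>(1, std::thread::hardware_concurrency());
      std::vector<std::thread> pool;
      std::atomic<size_t>      next(0);

      for (size_t t = 0; t < nthreads; ++t) {
        pool.emplace_back([&]() {
          // Each worker pulls the next unprocessed executable.
          for (size_t i = next++; i < exes.size(); i = next++) {
            CoverageData cd = processExecutable(exes[i]);
            std::lock_guard<std::mutex> lock(resultsMutex);
            results.push_back(cd);
          }
        });
      }
      for (std::thread& worker : pool)
        worker.join();
    }

The reading would scale with the cores; the lock is only taken when a
result is stored, so contention should be small.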

>
> > Merging the info and generating the reports will not parallelize as well
> > due to data contention.
>
> Yes.
>
> > But optimizing too early and the wrong way is not smart.
>
> Yes. We need Capstone to be added before this can happen.
>

+1
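
To make the Capstone idea a little more concrete, the call sequence that
would replace the objdump text parsing looks roughly like this. The
architecture/mode and the surrounding names are illustrative only, not the
actual rtems-tools integration:

    #include <capstone/capstone.h>
    #include <cinttypes>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Disassemble a .text section already read out of the ELF file.
    // The bytes and load address are placeholders; covoar would pull
    // them from the executable being processed.
    static bool disassembleText(const std::vector<uint8_t>& bytes,
                                uint64_t                    loadAddress)
    {
      csh handle;
      // Arch/mode chosen only as an example; it has to match the BSP.
      if (cs_open(CS_ARCH_ARM, CS_MODE_THUMB, &handle) != CS_ERR_OK)
        return false;

      cs_insn* insn = nullptr;
      size_t count = cs_disasm(handle, bytes.data(), bytes.size(),
                               loadAddress, 0, &insn);

      for (size_t i = 0; i < count; ++i) {
        // Address, size, and mnemonic are what the objdump parser
        // currently has to recover from text output.
        std::printf("%" PRIx64 ": (%u) %s %s\n",
                    insn[i].address, (unsigned) insn[i].size,
                    insn[i].mnemonic, insn[i].op_str);
      }

      if (count > 0)
        cs_free(insn, count);
      cs_close(&handle);
      return count > 0;
    }

This avoids the objdump text step entirely; the instruction records come
straight from the library.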


I would also like to see gcov support, but that will not be a factor in the
performance we have now. It will add reading a lot more files (gcno) and
writing a lot of gcda files at the end. Again, it is more important to be
right than fast at first. And it is completely an addition.

>
> Chris
>