covoar SIGKILL Investigation

Chris Johns chrisj at rtems.org
Wed Aug 22 03:26:09 UTC 2018


On 22/08/2018 09:29, Joel Sherrill wrote:
> On Tue, Aug 21, 2018, 4:05 PM Vijay Kumar Banerjee <vijaykumar9597 at gmail.com
> <mailto:vijaykumar9597 at gmail.com>> wrote:
>     On Wed, 22 Aug 2018 at 01:55, Joel Sherrill <joel at rtems.org
>     <mailto:joel at rtems.org>> wrote:
> 
>         How long is covoar taking for the entire set?
> 
>     It works great. This is what `time` says:
>     --------
>     real    17m49.887s
>     user    14m25.620s
>     sys      0m37.847s
>     --------
> 
> What speed and type of processor do you have? 
> 

The program is single-threaded, so the preprocessing of each executable is
sequential. Memory usage is reasonable, so there is no swapping.

Running covoar from the command line on a box with:

 hw.machine: amd64
 hw.model: Intel(R) Core(TM) i7-6900K CPU @ 3.20GHz
 hw.ncpu: 16
 hw.machine_arch: amd64

plus 32G of memory has a time of:

      366.32 real       324.97 user        41.33 sys

The approximate time breakdown is:

 ELF/DWARF loading  : 110s (1m50s)
 Objdump            : 176s (2m56s)
 Processing         :  80s (1m20s)

The DWARF loading is not optimised; I load all source line to address maps and
all functions rather than selectively scanning for specific names at the DWARF
level. It is not clear to me that selective scanning would be better or faster.
My hope is that moving to Capstone would help lower or remove the objdump
overhead. Then there is threading for the loading.

> I don't recall it taking near this long in the past. I used to run it as part of
> development. 

The objdump processing is simpler than before, so I suspect the time back then
would have been at least 4 minutes.

> But we may have more tests and the code has changed.

I think having more tests is the dominant factor.

> Reading dwarf
> with the file open/closes, etc just may be more expensive than parsing the text
> files. 

Reading the DWARF is a cost, and at the moment it is not optimised, but it is
only an extra cost because we still parse the objdump data. I do not think
opening and closing files is a factor.

Parsing the objdump output is the largest component of the time. Maybe using
Capstone directly on the ELF files will help.
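For a sense of the per-line text parsing being discussed: covoar's real parser
is C++, but this small Python sketch illustrates the kind of work done for every
instruction line of `objdump -d` output. The exact field layout is an assumption
for illustration only.

```python
import re

# A typical `objdump -d` instruction line looks like:
#   "  4004d6:\t55                   \tpush   %rbp"
# This layout is an assumption for illustration; the real covoar
# parser is C++ and handles more cases than this.
LINE_RE = re.compile(
    r"^\s*(?P<addr>[0-9a-f]+):\s*"       # instruction address
    r"(?P<bytes>(?:[0-9a-f]{2}\s)+)\s*"  # raw opcode bytes
    r"(?P<text>\S.*)?$"                  # disassembly text (may be absent)
)

def parse_objdump_line(line):
    """Return (address, size_in_bytes, text), or None for
    non-instruction lines such as symbol labels or blanks."""
    m = LINE_RE.match(line)
    if not m:
        return None
    nbytes = len(m.group("bytes").split())
    return int(m.group("addr"), 16), nbytes, (m.group("text") or "").strip()

print(parse_objdump_line("  4004d6:\t55                   \tpush   %rbp"))
print(parse_objdump_line("00000000004004d0 <main>:"))
```

Doing this with a regular expression over megabytes of text, for every test
executable, is the sort of overhead a direct disassembly library such as
Capstone could avoid.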

> But it is more accurate and lays the groundwork for more types of analysis.

Yes, and I think this is important.

> Eventually we will have to profile this code. Whatever is costly is done for
> each exe so there is a multiplier.
> 
> I suspect this code would parallelize reading info from the exes fairly well. 

Agreed.

> Merging the info and generating the reports not well due to data contention.

Yes.
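The shape being discussed here, reading the executables in parallel and then
merging sequentially to avoid data contention, might look like this sketch.
covoar itself is C++ and single-threaded today; the function and executable
names below are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def read_coverage(exe_path):
    """Hypothetical stand-in for covoar's per-executable preprocessing:
    loading the ELF/DWARF info and disassembly for one test executable.
    Here it just returns a fake per-address coverage map."""
    return {exe_path: {0x4000: True, 0x4004: False}}

def merge(results):
    """Sequential merge: combining per-executable maps into one report
    touches shared state, so it stays single-threaded."""
    merged = {}
    for result in results:
        merged.update(result)
    return merged

# Hypothetical test executables; the reads are independent,
# so they parallelise well.
exes = ["ticker.exe", "hello.exe", "sp01.exe"]
with ThreadPoolExecutor() as pool:
    per_exe = list(pool.map(read_coverage, exes))
report = merge(per_exe)
print(sorted(report))
```

The per-executable reads share nothing, so they scale with core count, while
the merge stays serial, which matches the contention concern above.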

> But optimizing too early and the wrong way is not smart.

Yes. We need Capstone to be added before this can happen.

Chris


