<div dir="auto"><div><br><br><div class="gmail_quote"><div dir="ltr">On Tue, Aug 21, 2018, 10:26 PM Chris Johns <<a href="mailto:chrisj@rtems.org">chrisj@rtems.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 22/08/2018 09:29, Joel Sherrill wrote:<br>
> On Tue, Aug 21, 2018, 4:05 PM Vijay Kumar Banerjee <<a href="mailto:vijaykumar9597@gmail.com" target="_blank" rel="noreferrer">vijaykumar9597@gmail.com</a><br>
> <mailto:<a href="mailto:vijaykumar9597@gmail.com" target="_blank" rel="noreferrer">vijaykumar9597@gmail.com</a>>> wrote:<br>
> On Wed, 22 Aug 2018 at 01:55, Joel Sherrill <<a href="mailto:joel@rtems.org" target="_blank" rel="noreferrer">joel@rtems.org</a><br>
> <mailto:<a href="mailto:joel@rtems.org" target="_blank" rel="noreferrer">joel@rtems.org</a>>> wrote:<br>
> <br>
> How long is covoar taking for the entire set?<br>
> <br>
> It works great. This is what `time` says:<br>
> --------<br>
> real 17m49.887s<br>
> user 14m25.620s<br>
> sys  0m37.847s<br>
> --------<br>
> <br>
> What speed and type of processor do you have? <br>
> <br>
<br>
The program is single threaded so the preprocessing of each executable is<br>
sequential. Memory usage is reasonable so there is no swapping.<br>
<br>
Running covoar from the command line on a box with:<br>
<br>
hw.machine: amd64<br>
hw.model: Intel(R) Core(TM) i7-6900K CPU @ 3.20GHz<br>
hw.ncpu: 16<br>
hw.machine_arch: amd64<br>
<br>
plus 32G of memory has a time of:<br>
<br>
366.32 real 324.97 user 41.33 sys<br>
<br>
The approximate time break down is:<br>
<br>
ELF/DWARF loading : 110s (1m50s)<br>
Objdump : 176s (2m56s)<br>
Processing : 80s (1m20s)<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">I don't mind this execution time for the near future. It is far from obscene after building and running 600 tests. </div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
The DWARF loading is not optimised and I load all source line to address maps<br>
and all functions rather than selectively scanning for specific names at the<br>
DWARF level. It is not clear to me whether scanning would be better or faster. </blockquote></div></div><div dir="auto"><br></div><div dir="auto">I doubt it is worth the effort. There should be few symbols in an exe that we don't care about, especially once we start to worry about libc and libm.</div><div dir="auto"><br></div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">My hope<br>
is moving to Capstone would help lower or remove the objdump overhead. Then<br>
there is threading for the loading.<br>
<br>
> I don't recall it taking nearly this long in the past. I used to run it as part of<br>
> development. <br>
<br>
The objdump processing is simpler than before so I suspect the time would have<br>
been at least 4 minutes.<br>
<br>
> But we may have more tests and the code has changed.<br>
<br>
I think having more tests is the dominant factor.<br>
<br>
> Reading dwarf<br>
> with the file open/closes, etc just may be more expensive than parsing the text<br>
> files. <br>
<br>
Reading the DWARF is a cost, and at the moment it is not optimised, but it is<br>
only a cost because we still parse the objdump data. I think opening and<br>
closing files is not a factor.<br>
<br>
Parsing the objdump output is the largest component of the time. Maybe using<br>
Capstone with the ELF files will help.<br>
<br>
> But it is more accurate and lays the groundwork.for more types of analysis.<br>
<br>
Yes, and I think this is important.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">+1</div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
> Eventually we will have to profile this code. Whatever is costly is done for<br>
> each exe so there is a multiplier.<br>
> <br>
> I suspect this code would parallelize reading info from the exes fairly well. <br>
<br>
Agreed.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">This might be a good case for C++11 threads if one of the thread container classes provides a nice pool. </div><div dir="auto"><br></div><div dir="auto">And we might have some locking to account for in core data structures. Are STL container instances thread safe? </div><div dir="auto"><br></div><div dir="auto">But this would be an addition after we are feature stable relative to the old output, plus Capstone.</div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
> Merging the info and generating the reports will not parallelize well due to data contention.<br>
<br>
Yes.<br>
<br>
> But optimizing too early and the wrong way is not smart.<br>
<br>
Yes. We need Capstone to be added before this can happen.<br></blockquote></div></div><div dir="auto"><br></div><div dir="auto">+1</div><div dir="auto"><br></div><div dir="auto"><br></div><div dir="auto">I would also like to see gcov support, but that will not be a factor in the performance we have. It will add reading a lot more files (gcno) and writing a lot of gcda files at the end. Again, it is more important to be right than fast at first. And it would be completely an addition.</div><div dir="auto"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Chris<br>
</blockquote></div></div></div>