covoar SIGKILL Investigation
Joel Sherrill
joel at rtems.org
Tue Aug 21 23:29:48 UTC 2018
On Tue, Aug 21, 2018, 4:05 PM Vijay Kumar Banerjee <vijaykumar9597 at gmail.com>
wrote:
>
> On Wed, 22 Aug 2018 at 01:55, Joel Sherrill <joel at rtems.org> wrote:
>
>>
>>
>> On Tue, Aug 21, 2018, 1:59 PM Vijay Kumar Banerjee <
>> vijaykumar9597 at gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Aug 21, 2018, 7:34 PM Joel Sherrill <joel at rtems.org> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Aug 21, 2018 at 2:14 AM, Chris Johns <chrisj at rtems.org> wrote:
>>>>
>>>>> On 21/08/2018 16:55, Vijay Kumar Banerjee wrote:
>>>>> > I tried running coverage with this latest master; covoar is taking
>>>>> > up all the memory (7 GB) including the swap (7.6 GB) and, after a
>>>>> > while, still gets killed. :(
>>>>>
>>>>> I ran rtems-test and created the .cov files, and then ran covoar from
>>>>> the command line (see below). Watching top while it runs, I see covoar
>>>>> topping out at a size of around 1430M. The size is pretty static once
>>>>> "Loading symbol sets:" is printed.
>>>>>
>>>>> I have run covoar under valgrind with a smaller number of executables
>>>>> and made
>>>>> sure all the allocations are ok.
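>>>>>
>>>>> For reference, that check is along these lines (a sketch; the relative
>>>>> paths and the two-exe list are illustrative):
>>>>>
>>>>>   valgrind --leak-check=full \
>>>>>     build/tester/covoar/covoar \
>>>>>     -S tester/rtems/testing/coverage/leon3-qemu-symbols.ini \
>>>>>     -O leon3-qemu-coverage/score \
>>>>>     -p RTEMS-5 hello.exe ticker.exe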
>>>>>
>>>>> I get a number of size mismatch messages related to the inline
>>>>> functions, but that is a known issue.
>>>>>
>>>>> > Can there be something wrong with my environment?
>>>>>
>>>>> I have no idea.
>>>>>
>>>>> > I tried running it on a different system; coverage ran for the whole
>>>>> > testsuite, but only for the score and rtems symbol sets (I passed the
>>>>> > symbols as an argument to --coverage). It doesn't run for all the
>>>>> > symbol sets, which is strange.
>>>>>
>>>>> I am not running coverage via the rtems-test command. I have been
>>>>> testing at the
>>>>> covoar command line.
>>>>>
>>>>> Can you please try a variant of:
>>>>>
>>>>> /opt/work/chris/rtems/rt/rtems-tools.git/build/tester/covoar/covoar \
>>>>>   -v \
>>>>>   -S /opt/work/chris/rtems/rt/rtems-tools.git/tester/rtems/testing/coverage/leon3-qemu-symbols.ini \
>>>>>   -O /opt/work/chris/rtems/kernel/bsps/leon3/leon3-qemu-coverage/score \
>>>>>   -E /opt/work/chris/rtems/rt/rtems-tools.git/tester/rtems/testing/coverage/Explanations.txt \
>>>>>   -p RTEMS-5 `find . -name \*.exe`
>>>>>
>>>>> ?
>>>>>
>>>>> I have top running at the same time. The footprint grows while the
>>>>> DWARF info and .cov files are loaded, which is
>>>>>
>>>>
>>>> Vijay .. I would add: make sure the gcov processing is turned off for
>>>> now.
>>>>
>>> It's turned off. :)
>>>
>>
>> Just checking. That code isn't ready yet. :)
>>
>>>
>>> After a lot of different attempts I realized that I just needed to rerun
>>> the waf build after pulling the new changes. Sorry about that.
>>>
>>
>> Lol. It is always the dumb things.
>>
>>> It did run successfully!
>>> I'm now running coverage with rtems-test for the whole testsuite; I will
>>> report on it soon :)
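>>>
>>> For reference, the invocation is a variant of the following (the BSP,
>>> tools path, symbol sets, and testsuite path here are illustrative):
>>>
>>>   rtems-test --rtems-bsp=leon3-qemu \
>>>     --rtems-tools=$HOME/development/rtems/5 \
>>>     --coverage=score,rtems \
>>>     sparc-rtems5/c/leon3/testsuites/samples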
>>>
>>
>> How long is covoar taking for the entire set?
>>
> It works great. This is what `time` says:
> --------
> real    17m49.887s
> user    14m25.620s
> sys     0m37.847s
> --------
>
What speed and type of processor do you have?

I don't recall it taking nearly this long in the past. I used to run it as
part of development. But we may have more tests, and the code has changed.
Reading DWARF, with the file opens and closes, etc., may just be more
expensive than parsing the text files. But it is more accurate and lays the
groundwork for more types of analysis.
Eventually we will have to profile this code. Whatever is costly is done for
each exe, so there is a multiplier.
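When we do, even a rough pass with callgrind over a handful of exes should
show where the time goes. A sketch, assuming a build-tree covoar and the
leon3-qemu symbol sets from above (exact paths and the two-exe list are
illustrative):

  valgrind --tool=callgrind \
    build/tester/covoar/covoar \
    -S tester/rtems/testing/coverage/leon3-qemu-symbols.ini \
    -O leon3-qemu-coverage/score \
    -p RTEMS-5 hello.exe ticker.exe
  callgrind_annotate callgrind.out.<pid>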
I suspect this code would parallelize reading the info from the exes fairly
well. Merging the info and generating the reports would not, due to data
contention.
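To make the shape of that concrete, here is a minimal sketch of the per-exe
fan-out, assuming a hypothetical ProcessExecutable() that does the DWARF and
.cov reading for one file. The names and types are illustrative, not
covoar's actual API:

  #include <future>
  #include <string>
  #include <vector>

  // Hypothetical per-exe result; the real covoar types differ.
  struct ExeInfo {
    std::string path;
    // symbols, coverage maps, ...
  };

  // Stub standing in for the per-exe DWARF + .cov reading.
  ExeInfo ProcessExecutable(const std::string& path) {
    return ExeInfo{path};
  }

  std::vector<ExeInfo> LoadAll(const std::vector<std::string>& exes) {
    // Reading each exe is independent, so fan the reads out.
    std::vector<std::future<ExeInfo>> jobs;
    for (const auto& exe : exes)
      jobs.push_back(std::async(std::launch::async, ProcessExecutable, exe));

    // Merge serially: the shared coverage maps would otherwise need locks,
    // which is the data contention mentioned above.
    std::vector<ExeInfo> results;
    for (auto& job : jobs)
      results.push_back(job.get());
    return results;
  }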
But optimizing too early and in the wrong way is not smart.

Not that I am complaining; it takes minutes to do a doc build these days.
>>
>>
>>>>> Thanks
>>>>> Chris
>>>>>
>>>>
>>>>