<offlist> Coverage support for tester

Gedare Bloom gedare at rtems.org
Wed May 30 14:48:25 UTC 2018


Hello Vijay,

Do you expect/need an answer to something in here?

gedare

On Wed, May 30, 2018 at 10:29 AM, Vijay Kumar Banerjee
<vijaykumar9597 at gmail.com> wrote:
> On 30 May 2018 at 00:54, Joel Sherrill <joel at rtems.org> wrote:
>>
>>
>>
>> On Tue, May 29, 2018 at 11:27 AM, Vijay Kumar Banerjee
>> <vijaykumar9597 at gmail.com> wrote:
>>>
>>>
>>>
>>> On Tue, 29 May 2018, 20:10 Joel Sherrill, <joel at rtems.org> wrote:
>>>>
>>>>
>>>>
>>>> On Mon, May 28, 2018 at 11:08 PM, Chris Johns <chrisj at rtems.org> wrote:
>>>>>
>>>>> On 29/5/18 4:26 am, Joel Sherrill wrote:
>>>>> > On Mon, May 28, 2018 at 5:43 AM, Vijay Kumar Banerjee
>>>>> > <vijaykumar9597 at gmail.com
>>>>> > <mailto:vijaykumar9597 at gmail.com>> wrote:
>>>>> >
>>>>> >     Hello,
>>>>> >
>>>>> >     The coverage reports are now showing results.
>>>>> >     The current branch that holds all the changes is
>>>>> >     the cov-tester-support branch in my forked repo
>>>>> >
>>>>> >     https://github.com/thelunatic/rtems-tools/tree/cov-tester-support
>>>>> >
>>>>> >
>>>>> >     (Please have a look into the code and test it.)
>>>>> >
>>>>> >     It is close to merging (hopefully). These are the issues
>>>>> >     that would need a fix before it can be merged:
>>>>> >
>>>>> >     1. I have added some #FIXME comments in the coverage script
>>>>> >         (have a look). I have set the value of the target to
>>>>> >         sparc-rtems5, which makes it limited to sparc-rtems5 only.
>>>>> >         We can include the target in the bsp ini file; that would
>>>>> >         be a quick fix for this.
>>>>> >
>>>>> >
>>>>> > Yes. This needs to be fixed.
>>>>> >
>>>>> > My hack to add 4 in ObjdumpProcessor.cc needs to be addressed also.
>>>>> > I am thinking of adding a method to Target_*.cc to ask for the size
>>>>> > of an
>>>>> > instruction.
>>>>> > Then pass it the last instruction. That way we can throw on other
>>>>> > architectures for
>>>>> > now. If Chris solves this with his changes before we try another
>>>>> > architecture,
>>>>> > great.
>>>>> > If not, it will be easy to fix.
>>>>>
>>>>> What is the overall requirement?
>>>>
>>>>
>>>> To know the ending address of the function.
>>>>
>>>> Technically there are three pieces of information:
>>>>
>>>> + start address
>>>> + end address
>>>> + size
>>>>
>>>> If you know two of those, you can compute the third.
>>>>
>>>> I don't care if this comes from DWARF, ELF, or parsing the disassembly.
>>>> It just needs to be reliable.
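>>>>
>>>> For illustration only, here is a minimal sketch of getting two of the
>>>> three from the ELF symbol table and computing the third from them. It
>>>> assumes pyelftools is available, and the helper name is made up:
>>>>
>>>>   from elftools.elf.elffile import ELFFile
>>>>
>>>>   def function_bounds(exe_path, name):
>>>>       # st_value is the start address and st_size is the size,
>>>>       # so end = start + size; any two give the third.
>>>>       with open(exe_path, 'rb') as f:
>>>>           symtab = ELFFile(f).get_section_by_name('.symtab')
>>>>           for sym in symtab.iter_symbols():
>>>>               if sym.name == name and sym['st_info']['type'] == 'STT_FUNC':
>>>>                   start = sym['st_value']
>>>>                   size = sym['st_size']
>>>>                   return (start, start + size, size)
>>>>       return None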
>>>>
>>>> And.. I am not proposing my solution as permanent. Just to keep us
>>>> moving. You want to change to an internal disassembler (which
>>>> would also need to have the source interspersed) and DWARF. So
>>>> this code would go away then.
>>>>
>>>>>
>>>>>
>>>>> What defines the function, and so the size, we are attempting to get
>>>>> coverage of? What if that function calls an inline function and that
>>>>> function is inlined? What if that inlined function calls another
>>>>> inlined function?
>>>>
>>>>
>>>> Then it is all inlined. It is done consistently now. I have never seen a
>>>> case where two instances of a method differed at the assembly level. [1]
>>>>
>>>> The actual body of the inlined method is evaluated at each expansion
>>>> point.
>>>> That is why a few years ago, I pushed hard to reduce the complexity of
>>>> inline methods because we got test path explosion when an inlined method
>>>> included complex conditionals.
>>>>
>>>> Similarly, I think it would be helpful to coverage and verification
>>>> efforts to
>>>> follow the **shudder** MISRA rule which wants you to write simple
>>>> conditional
>>>> expressions rather than compound ones. I have taken to writing code this
>>>> way as much as possible. But haven't pushed it as a coding rule.
>>>>
>>>>   if (a) {
>>>>     if (b) {
>>>>       do x;
>>>>     }
>>>>   }
>>>>
>>>> Versus
>>>>   if (a && b) {
>>>>     do x;
>>>>   }
>>>>
>>>> The reason is that the first is easier to analyse coverage on.
>>>>
>>>> [1] We both expect LTO could change this.
>>>>
>>>> [2] ESA did specifically mention this one though. They also mentioned, in
>>>> general terms, an RTEMS Project response to the MISRA rules: which ones we
>>>> follow, reject, etc. But I refuse to adopt hard rules which can't be
>>>> enforced by free open source tools.
>>>>
>>>>>
>>>>>
>>>>> The DWARF data provides details about the PC low and PC high of what is
>>>>> called
>>>>> concrete functions and then it provides the details about inlines.
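>>>>>
>>>>> For example, with pyelftools that data can be walked roughly like this
>>>>> (a sketch only; note that in DWARF 4 and later DW_AT_high_pc is usually
>>>>> a size rather than an address):
>>>>>
>>>>>   from elftools.elf.elffile import ELFFile
>>>>>
>>>>>   def dump_function_ranges(exe_path):
>>>>>       with open(exe_path, 'rb') as f:
>>>>>           dwarf = ELFFile(f).get_dwarf_info()
>>>>>           for cu in dwarf.iter_CUs():
>>>>>               for die in cu.iter_DIEs():
>>>>>                   if (die.tag == 'DW_TAG_subprogram'
>>>>>                           and 'DW_AT_low_pc' in die.attributes):
>>>>>                       low = die.attributes['DW_AT_low_pc'].value
>>>>>                       high = low + die.attributes['DW_AT_high_pc'].value
>>>>>                       print('concrete function', hex(low), hex(high))
>>>>>                   elif die.tag == 'DW_TAG_inlined_subroutine':
>>>>>                       # Inlined expansions carry their own ranges.
>>>>>                       print('inlined expansion inside a concrete function')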
>>>>
>>>>
>>>> We don't (currently) report on the inlines as independent methods.
>>>>
>>>>>
>>>>>
>>>>> >
>>>>> >     2. coverage used to feed an ini file for each symbol set to covoar
>>>>> >         in a loop and store the result in a separate directory
>>>>> >         for each set. That is no longer needed, as covoar can now
>>>>> >         process multiple symbol sets from a single ini file. I think
>>>>> >         the results from covoar should be stored in a separate
>>>>> >         directory for each symbol set, for example: score/
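>>>>> >
>>>>> >     For illustration, the single ini file plus per-set output
>>>>> >     directories could look roughly like this (the section and option
>>>>> >     names are made up to show the shape, not the exact ones covoar
>>>>> >     reads):
>>>>> >
>>>>> >       import configparser, os
>>>>> >
>>>>> >       # Hypothetical single file naming every symbol set, e.g.:
>>>>> >       #   [symbol-sets]
>>>>> >       #   sets = score, rtems, posix, sapi, libcsupport
>>>>> >       config = configparser.ConfigParser()
>>>>> >       config.read('symbol-sets.ini')
>>>>> >       for symbol_set in config['symbol-sets']['sets'].split(','):
>>>>> >           symbol_set = symbol_set.strip()
>>>>> >           # One report directory per set, e.g. coverage/score/
>>>>> >           os.makedirs(os.path.join('coverage', symbol_set), exist_ok=True)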
>>>>> >
>>>>> >
>>>>> > A bit of history will help here. Originally covoar was run against a
>>>>> > single set of
>>>>> > code by the scripting framework. We would do coverage on either "core
>>>>> > parts"
>>>>> > or "developmental" (e.g. nearly all non-networked code). The
>>>>> > optimization was
>>>>> > either at -O2 or -Os. So there were four coverage variants.
>>>>> >
>>>>> > Turned out that when we added something to "all", the percentage
>>>>> > would drop
>>>>> > and reflect badly on the rest of the code. I remember adding the
>>>>> > dosfs and
>>>>> > the coverage dropped almost 20 percent.
>>>>> >
>>>>> > This led to the idea that we should report on a per
>>>>> > directory/subsystem basis.
>>>>> > The score, rtems, posix, sapi, and libcsupport should have high
>>>>> > coverage now
>>>>> > and the reports should reflect that independent of whether the dosfs
>>>>> > needs a
>>>>> > lot more tests.
>>>>> >
>>>>> > Before, each directory/subsystem required a completely separate run of
>>>>> > covoar.
>>>>> > If we are headed to a solution where one analysis run of covoar
>>>>> > generates different
>>>>> > reports, that should speed up the processing time a lot!
>>>>> >
>>>>> > The other issue is what should the top directory look like/contain
>>>>> > when a single
>>>>> > run produces multiple subsystem reports. Seems like we would need at
>>>>> > least a
>>>>> > top level html and text file.
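>>>>> >
>>>>> > As a rough sketch (the file names here are assumptions, not the current
>>>>> > report layout), that top level could be generated with something like:
>>>>> >
>>>>> >   import os
>>>>> >
>>>>> >   def write_top_level_index(coverage_dir, symbol_sets):
>>>>> >       # One entry per symbol set, pointing into that set's own directory.
>>>>> >       with open(os.path.join(coverage_dir, 'index.html'), 'w') as html, \
>>>>> >            open(os.path.join(coverage_dir, 'summary.txt'), 'w') as text:
>>>>> >           html.write('<html><body><h1>Coverage by symbol set</h1><ul>\n')
>>>>> >           for symbol_set in symbol_sets:
>>>>> >               html.write('<li><a href="%s/index.html">%s</a></li>\n'
>>>>> >                          % (symbol_set, symbol_set))
>>>>> >               text.write('%s: see %s/\n' % (symbol_set, symbol_set))
>>>>> >           html.write('</ul></body></html>\n')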
>>>>> >
>>>>> >
>>>>> >
>>>>> >     3. Currently we are using leon3-qemu-cov as the bsp.
>>>>> >         Are we going to have two ini files for each bsp? (one
>>>>> >         without coverage and one with coverage support)
>>>>> >
>>>>> >     Earlier the approach was to include a 'coverage' section
>>>>> >     in the bsp ini to hold the values we needed for coverage.
>>>>> >     I think that approach would be more "convenient" for the user.
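>>>>> >
>>>>> >     For example, the coverage script could read an optional section
>>>>> >     from the existing bsp ini like this (the file and key names are
>>>>> >     only a guess at the shape, not the actual tester configuration):
>>>>> >
>>>>> >       import configparser
>>>>> >
>>>>> >       # Hypothetical [coverage] section inside the bsp ini, e.g.:
>>>>> >       #   [coverage]
>>>>> >       #   target = sparc-rtems5
>>>>> >       #   symbol_sets = score, rtems, posix, sapi, libcsupport
>>>>> >       bsp = configparser.ConfigParser()
>>>>> >       bsp.read('leon3-qemu.ini')
>>>>> >       if bsp.has_section('coverage'):
>>>>> >           target = bsp.get('coverage', 'target', fallback='sparc-rtems5')
>>>>> >           sets = bsp.get('coverage', 'symbol_sets', fallback='score')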
>>>>> >
>>>>> >
>>>>> > This was something Chris suggested. I think it was to avoid adding to
>>>>> > the bsp ini file until the code was closer to merging.
>>>>> >
>>>>> > Chris.. what do you think? Long term, a section would be nice.
>>>>> >
>>>>>
>>>>> Sorry I cannot remember without looking it up and I am currently buried
>>>>> in
>>>>> family issues.
>>>>
>>>>
>>>> OK. Having the Python scripting loop over the sets or covoar looping
>>>> over them
>>>> is an open issue. Historically, looping over desired symbol sets was
>>>> outside
>>>> covoar. So looping inside covoar may take some work.
>>>
>>>
>>> It seems covoar can already loop over the
>>> sets, which is implemented in DesiredSymbols.
>>> But it stores all the generated reports
>>> in the same directory.
>>
>>
>> If there is an index that makes it possible to navigate through the
>> different "subsystems", then that's the key thing. You don't want
>> to think score is poorly covered due to dosfs.
>>
>>>
>>>
>>> P.S.: Sorry for the previous mail with no message.
>>>>
>>>>
>>>> --joel
>>>>>
>>>>>
>>>>> Chris
>>>>
>>>>
>>
>
>
> _______________________________________________
> devel mailing list
> devel at rtems.org
> http://lists.rtems.org/mailman/listinfo/devel


