Need help debugging sp16.exe
Joel Sherrill
joel at rtems.org
Sat Oct 10 23:38:36 UTC 2020
On Sat, Oct 10, 2020, 10:47 AM Richi Dubey <richidubey at gmail.com> wrote:
> Hi Mr. Huber,
>
> Thanks for checking in.
>
>> I suggested enabling your new scheduler implementation as the default
>> to check if it is in line with the standard schedulers. I would first
>> get some high level data. Select a BSP with good test results on a
>> simulator (for example sparc/leon3 or arm/realview_pbx_a9_qemu). Run the
>> tests and record the test data. Then enable the SMP EDF scheduler as the
>> default, run the tests, record the data. Then enable your scheduler as
>> the default, run the tests, record the data. Then get all tests which
>> fail only with your scheduler.
>
> Yes, this is something I've already done based on your previous
> suggestion. I set SCHEDULER_STRONG_APA (the current RTEMS master's
> version) as the default scheduler for both SP and SMP and ran the tests
> (on both sparc/leon3 and arm/realview_pbx_a9_qemu). Then I set
> SCHEDULER_STRONG_APA (my version) as the default scheduler for both SP
> and SMP, ran the tests, and compared the results with the master's
> Strong APA results. The following (extra) tests failed:
>
> sp02.exe
> sp16.exe
> sp30.exe
> sp31.exe
> sp37.exe
> sp42.exe
> spfatal29.exe
> tm24.exe
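As a point of reference for what such a scheduler switch looks like at the
application level, the confdefs options below are documented configuration
options for selecting a scheduler in a single test executable; the mechanism
actually used to change the testsuite-wide default may differ, so treat this
as an illustrative sketch only:

  #include <rtems.h>

  rtems_task Init( rtems_task_argument argument );  /* test entry point */

  #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
  #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER

  /* Strong APA (like EDF SMP) is only available in an SMP-enabled build. */
  #define CONFIGURE_MAXIMUM_PROCESSORS 4
  #define CONFIGURE_SCHEDULER_STRONG_APA    /* or CONFIGURE_SCHEDULER_EDF_SMP */

  #define CONFIGURE_MAXIMUM_TASKS 8
  #define CONFIGURE_RTEMS_INIT_TASKS_TABLE
  #define CONFIGURE_INIT
  #include <rtems/confdefs.h>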
>
>> Do a high level analysis of all failing
>> tests. Try to figure out a new scenario for the test smpstrongapa01.
>
> Okay, I would look into this. This is a great suggestion, thanks!
>
>
>> Do all the development with RTEMS_DEBUG enabled!
>> Add _Assert() stuff to your scheduler. Check pre- and post-conditions of
>> all operations. Check invariants.
>
> How do I check postconditions? Using _Assert() or by manually debugging
> each function call?
>
There are existing checks scattered throughout the source. Do any need to
be in your scheduler?
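On the post-condition question above: the usual pattern is to express them as
_Assert() calls at the start and end of each scheduler operation; _Assert()
compiles to a real check only when RTEMS is built with RTEMS_DEBUG. A rough
sketch, where the invariant helper and its contents are hypothetical and only
_Assert() and the operation signature follow the real score code:

  #include <rtems/score/assert.h>
  #include <rtems/score/schedulerimpl.h>
  #include <rtems/score/schedulerstrongapa.h>

  /* Hypothetical helper: encode whatever must always hold for this scheduler,
   * e.g. every processor it owns has exactly one scheduled node. */
  static void _Strong_APA_Check_invariants( const Scheduler_Context *context )
  {
    _Assert( context != NULL );
    /* ... walk the ready queue / per-CPU state and _Assert() each property ... */
  }

  void _Scheduler_strong_APA_Block(
    const Scheduler_Control *scheduler,
    Thread_Control          *the_thread,
    Scheduler_Node          *node
  )
  {
    Scheduler_Context *context = _Scheduler_Get_context( scheduler );

    _Assert( node != NULL );                    /* pre-condition */
    _Strong_APA_Check_invariants( context );

    /* ... the actual blocking logic ... */

    _Strong_APA_Check_invariants( context );    /* post-condition */
    (void) the_thread;
  }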
Random thoughts:
+ Looks like the BSP has fast idle on, but that should not impact anything.
+ Run this with another scheduler and see if you can identify when that
scheduler makes the decision you are missing. There has to be one of the
scheduler hooks that is making a different decision. Run the test side by
side with two different schedulers, alternate forward motion in the two, and
compare the behaviour (see the sketch after these notes).
+ Adding tracing might help, but it is probably more trouble to set up than
just comparing the good and bad schedulers in parallel.
+ Look at what every failing test is doing. It may be a common issue, and one
of the tests may be easier to debug than the others.
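For the side-by-side run mentioned above, one way to do it on
arm/realview_pbx_a9_qemu is two QEMU+GDB sessions stepped in lockstep; the
build directory names, tool prefix (arm-rtems6-), and QEMU flags below are
placeholders for whatever your setup actually uses:

  # Terminal 1: the test built with the known-good SMP EDF scheduler
  $ qemu-system-arm -M realview-pbx-a9 -m 256M -nographic -no-reboot \
      -kernel build-edf/.../sp16.exe -s -S
  $ arm-rtems6-gdb build-edf/.../sp16.exe
  (gdb) target remote :1234
  (gdb) break _Scheduler_EDF_SMP_Block       # or any other scheduler hook
  (gdb) continue

  # Terminal 2: the same test built with Strong APA (note the different port)
  $ qemu-system-arm -M realview-pbx-a9 -m 256M -nographic -no-reboot \
      -kernel build-apa/.../sp16.exe -gdb tcp::1235 -S
  $ arm-rtems6-gdb build-apa/.../sp16.exe
  (gdb) target remote :1235
  (gdb) break _Scheduler_strong_APA_Block
  (gdb) continue

  # Resume each side one breakpoint hit at a time and compare which thread
  # each scheduler picks at the same point in the test.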
--joel
>
> On Sat, Oct 10, 2020 at 6:09 PM Sebastian Huber <
> sebastian.huber at embedded-brains.de> wrote:
>
>> Hello Richi,
>>
>> I suggested enabling your new scheduler implementation as the default
>> to check if it is in line with the standard schedulers. I would first
>> get some high level data. Select a BSP with good test results on a
>> simulator (for example sparc/leon3 or arm/realview_pbx_a9_qemu). Run the
>> tests and record the test data. Then enable the SMP EDF scheduler as the
>> default, run the tests, record the data. Then enable your scheduler as
>> the default, run the tests, record the data. Then get all tests which
>> fail only with your scheduler. Do a high level analysis of all failing
>> tests. Try to figure out a new scenario for the test smpstrongapa01.
>>
>> Do all the development with RTEMS_DEBUG enabled!
>>
>> Add _Assert() stuff to your scheduler. Check pre- and post-conditions of
>> all operations. Check invariants.
>>