<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Oct 12, 2020 at 10:47 AM Richi Dubey <<a href="mailto:richidubey@gmail.com">richidubey@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">There are existing checks scattered throughout the source. Do any need to be in your scheduler?</blockquote><div>I don't understand. If there are already checks scattered through, why do I need more checks in my scheduler? Are these checks independent from the checks I might need in the scheduler? Please explain. </div></div></blockquote><div><br></div><div>Your scheduler is a unique piece of software. It may be making assumptions that are not checked in the generic scheduler code. And checks in other schedulers are of no use to you.</div><div><br></div><div>There may not be any but this is something to consider.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">+ Looks like the bsp has fast idle on but that should not impact anything. </blockquote><div>What's fast idle? I found <a href="https://git.rtems.org/rtems/tree/c/src/lib/libbsp/arm/realview-pbx-a9/configure.ac" target="_blank">this</a>. :p How can time run as fast as possible?</div></div></div></blockquote><div><br></div><div>Simulators tend to run slowly. They also may spend a long time when testing RTEMS executing the idle thread waiting for time to pass. Fast idle just says if a clock tick occurs while the idle thread is running, call clock tick over and over until another thread is unblocked and preempts idle. <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">+ Run this with the another scheduler and see if you can identify when that scheduler makes the decision you are missing. There has to be one of the scheduler hooks that is making a different decision. Run the test side by side with two different schedulers. Alternate forward motion in the two and compare the behaviour.</blockquote><div>This is genius. Thanks a lot. I'm gonna work on this. </div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">+ Adding trading might help but is probably more trouble to set up than just comparing good and bad schedulers in parallel.</blockquote><div>What's trading? </div></div></div></blockquote><div><br></div><div>That's a bad typo or auto-correct. Probably should have been tactic. </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div dir="auto"><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">+ Look at what every failing test is doing. 
>> + Looks like the bsp has fast idle on but that should not impact
>> anything.
>
> What's fast idle? I found this:
> https://git.rtems.org/rtems/tree/c/src/lib/libbsp/arm/realview-pbx-a9/configure.ac
> :p How can time run as fast as possible?

Simulators tend to run slowly. When testing RTEMS, they may also spend a
long time executing the idle thread, just waiting for time to pass. Fast
idle simply means that if a clock tick occurs while the idle thread is
running, the clock tick is invoked over and over until another thread is
unblocked and preempts idle.
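As a rough paraphrase of that idea (not the literal code in the shared
clock driver shell; the two predicates are made-up placeholders for the
checks the driver performs on the executing and heir threads):

  #include <rtems.h>
  #include <stdbool.h>

  /* Placeholder predicates, stubbed out so the sketch compiles. */
  static bool executing_thread_is_idle( void ) { return false; }
  static bool nothing_has_preempted_idle( void ) { return false; }

  /*
   * CLOCK_DRIVER_USE_FAST_IDLE, paraphrased: after the normal tick, keep
   * announcing clock ticks while only the idle thread would run, so
   * timeouts expire without waiting for simulated time to pass.
   */
  static void fast_idle_tick( void )
  {
    (void) rtems_clock_tick();

    while ( executing_thread_is_idle() && nothing_has_preempted_idle() ) {
      (void) rtems_clock_tick();
    }
  }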
>> + Run this with another scheduler and see if you can identify when that
>> scheduler makes the decision you are missing. There has to be one of
>> the scheduler hooks that is making a different decision. Run the test
>> side by side with two different schedulers. Alternate forward motion in
>> the two and compare the behaviour.
>
> This is genius. Thanks a lot. I'm gonna work on this.
>
>> + Adding trading might help but is probably more trouble to set up than
>> just comparing good and bad schedulers in parallel.
>
> What's trading?

That's a bad typo or auto-correct. It probably should have been "tracing".

>> + Look at what every failing test is doing. May be a common issue and
>> one is easier to debug.
>
> Thanks. I'll check this.

Looking across all failing tests usually helps. For some reason, one tends
to be easier to debug than the others.

Also, some of the tests have a lot of code up front that doesn't impact
what you are testing. It may be possible to disable early parts of sp16 to
reduce what you have to step through while comparing schedulers.

--joel

> On Sun, Oct 11, 2020 at 5:08 AM Joel Sherrill <joel@rtems.org> wrote:
>>
>> On Sat, Oct 10, 2020, 10:47 AM Richi Dubey <richidubey@gmail.com> wrote:
>>>
>>> Hi Mr. Huber,
>>>
>>> Thanks for checking in.
>>>
>>>> I suggested to enable your new scheduler implementation as the default
>>>> to check if it is in line with the standard schedulers. I would first
>>>> get some high level data. Select a BSP with good test results on a
>>>> simulator (for example sparc/leon3 or arm/realview_pbx_a9_qemu). Run
>>>> the tests and record the test data. Then enable the SMP EDF scheduler
>>>> as the default, run the tests, record the data. Then enable your
>>>> scheduler as the default, run the tests, record the data. Then get all
>>>> tests which fail only with your scheduler.
>>>
>>> Yes, this is something I've already done based on your previous
>>> suggestion. I set SCHEDULER_STRONG_APA (the current RTEMS master's
>>> version) as the default scheduler for both sp and SMP and ran the tests
>>> (on both sparc/leon3 and arm/realview_pbx_a9_qemu). Then I set
>>> SCHEDULER_STRONG_APA (my version) as the default scheduler for both sp
>>> and SMP, ran the tests, and compared the results with the master's
>>> Strong APA results. The following (extra) tests failed:
>>>
>>>  sp02.exe
>>>  sp16.exe
>>>  sp30.exe
>>>  sp31.exe
>>>  sp37.exe
>>>  sp42.exe
>>>  spfatal29.exe
>>>  tm24.exe
>>>
>>>> Do a high level analysis of all failing tests. Try to figure out a new
>>>> scenario for the test smpstrongapa01.
>>>
>>> Okay, I will look into this. This is a great suggestion, thanks!
>>>
>>>> Do all the development with RTEMS_DEBUG enabled!
>>>> Add _Assert() stuff to your scheduler. Check pre- and post-conditions
>>>> of all operations. Check invariants.
>>>
>>> How do I check postconditions? Using _Assert() or by manually debugging
>>> each function call?
>>
>> There are existing checks scattered throughout the source. Do any need
>> to be in your scheduler?
>>
>> Random thoughts:
>>
>> + Looks like the bsp has fast idle on but that should not impact
>> anything.
>>
>> + Run this with another scheduler and see if you can identify when that
>> scheduler makes the decision you are missing. There has to be one of
>> the scheduler hooks that is making a different decision. Run the test
>> side by side with two different schedulers. Alternate forward motion in
>> the two and compare the behaviour.
>>
>> + Adding trading might help but is probably more trouble to set up than
>> just comparing good and bad schedulers in parallel.
>>
>> + Look at what every failing test is doing. May be a common issue and
>> one is easier to debug.
>>
>> --joel
>>
>>> On Sat, Oct 10, 2020 at 6:09 PM Sebastian Huber
>>> <sebastian.huber@embedded-brains.de> wrote:
>>>>
>>>> Hello Richi,
>>>>
>>>> I suggested to enable your new scheduler implementation as the default
>>>> to check if it is in line with the standard schedulers. I would first
>>>> get some high level data. Select a BSP with good test results on a
>>>> simulator (for example sparc/leon3 or arm/realview_pbx_a9_qemu). Run
>>>> the tests and record the test data. Then enable the SMP EDF scheduler
>>>> as the default, run the tests, record the data. Then enable your
>>>> scheduler as the default, run the tests, record the data. Then get all
>>>> tests which fail only with your scheduler. Do a high level analysis of
>>>> all failing tests. Try to figure out a new scenario for the test
>>>> smpstrongapa01.
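(For a single application, the scheduler is selected through
<rtems/confdefs.h>; the sketch below uses documented configuration options
and is only an illustration, not how the testsuite-wide default is
switched.)

  #include <rtems.h>

  rtems_task Init( rtems_task_argument argument );

  /* Select the SMP EDF scheduler for this application.  Replacing the
   * next line with CONFIGURE_SCHEDULER_STRONG_APA selects the Strong APA
   * implementation instead. */
  #define CONFIGURE_SCHEDULER_EDF_SMP
  #define CONFIGURE_MAXIMUM_PROCESSORS 4

  #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
  #define CONFIGURE_APPLICATION_NEEDS_CONSOLE_DRIVER
  #define CONFIGURE_MAXIMUM_TASKS 4
  #define CONFIGURE_RTEMS_INIT_TASKS_TABLE

  #define CONFIGURE_INIT
  #include <rtems/confdefs.h>

  rtems_task Init( rtems_task_argument argument )
  {
    (void) argument;
    /* application code goes here */
    rtems_task_exit();
  }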
>>>>
>>>> Do all the development with RTEMS_DEBUG enabled!
>>>>
>>>> Add _Assert() stuff to your scheduler. Check pre- and post-conditions
>>>> of all operations. Check invariants.
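(Purely as an illustration of that pre-/post-condition pattern; the
consistency predicate below is a made-up placeholder for whatever
invariants the Strong APA data structures actually have.)

  #include <rtems/score/assert.h>
  #include <stdbool.h>

  /* Placeholder predicate: a real check would walk the scheduler's ready
   * and scheduled structures and verify that they are consistent. */
  static bool _My_APA_Structures_are_consistent( void )
  {
    return true; /* stub so the sketch compiles */
  }

  /* Illustrative pattern: state what must hold on entry to and exit from
   * an operation.  The operation body itself is omitted. */
  static void _My_APA_Block( void )
  {
    _Assert( _My_APA_Structures_are_consistent() ); /* pre-condition */

    /* ... the actual block operation ... */

    _Assert( _My_APA_Structures_are_consistent() ); /* post-condition */
  }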