Remaining BSP Build Failures + Adding "eth" linker script section for arm bsp.
Joel Sherrill
joel.sherrill at oarcorp.com
Thu Aug 28 14:24:43 UTC 2014
On 8/28/2014 4:27 AM, Pavel Pisa wrote:
> Hello Chris and others,
>
> I am attaching the reply to the BSP thread, because it fits better there.
>
> On Thursday 28 of August 2014 07:21:02 Chris Johns wrote:
>> The number of BSPs building for ARM has exploded and for just the ARM
>> architecture there are now 27,417 tests built. If I could run each test
>> in 20 seconds it would take over 6 days to do this. If I could run 6
>> tests in parallel it would still take 24 hours.
>>
>> I wonder how many of these variants have had all the tests run on them ?
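(For reference, the arithmetic behind Chris's estimate works out roughly as:

    27,417 tests x 20 s = 548,340 s, i.e. about 6.3 days run serially;
    548,340 s / 6 parallel runs = about 91,400 s, i.e. a bit over a day.

So even modest parallelism only brings a full run down to roughly a day.)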
> Some, though. Based on my runs: about 60 tests on the TMS570 internal RAM
> variant (very high success rate) and a smaller number on the SDRAM variant,
> where problems are still quite common (loader issue? chip errata for unaligned
> multi-register loads? a mistake in our BSP setup?).
>
> I have written a script to load one test after another and then leave it to
> the user to report whether the test was successful or not.
>
> https://github.com/AoLaD/rtems-tms570-utils/blob/master/openocd/openocd-flat-load-test
>
> The script ensures that the executable is loaded as one continuous binary,
> because JTAG access to external SDRAM does not work for byte and 16-bit
> accesses on the TMS570, but that is a minor technical detail.
>
> I expect to return from a prolonged weekend on Wednesday; then I will try to
> rewrite the test script in Python.
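As a rough illustration of the workflow Pavel describes above (load each test
executable, then ask the operator whether it passed), a minimal Python sketch
could look like the following. The load command template, file pattern, and
results file are assumptions for illustration only; the TMS570-specific loading
details are what the linked openocd-flat-load-test script actually handles.

    #!/usr/bin/env python
    # Sketch only: run a user-supplied load command for each test executable
    # and record the operator's pass/fail verdict. The command template and
    # results file are assumptions, not part of the real script.
    import glob
    import subprocess
    import sys

    def run_tests(load_cmd, pattern="*.exe", results="results.txt"):
        with open(results, "a") as log:
            for exe in sorted(glob.glob(pattern)):
                # load_cmd is e.g. an openocd invocation with {exe} substituted
                subprocess.call(load_cmd.format(exe=exe), shell=True)
                verdict = input("Did %s pass? [y/n] " % exe)
                status = "PASS" if verdict.strip().lower().startswith("y") else "FAIL"
                log.write("%s %s\n" % (exe, status))

    if __name__ == "__main__":
        run_tests(sys.argv[1])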
Check out rtems-tools/tester. Chris has been automating the running and checking
of the tests. It is already in Python and supports a handful of simulators plus
JTAG on the Zynq. This is the tester we are trying to migrate to (but it is
slow). Help and more BSPs are appreciated.
> But there is a significant problem for automation: the result printed on the
> serial port has to be inspected by a human. I can catch the TEST START and END
> markers (no problem), but some tests (the exception ones) even continue with a
> significant part of their output after the END TEST marker. Another, more
> important, problem is that there is no common format to check for success or
> failure, or at least I am not aware of one. There are, or have been, *.scn
> files, but even significant differences in timing and in the printed sequences
> are expected for many tests.
> It would be worth discussing some change to the format. If a test starts and
> then ends, it is quite probable that the target is OK, and one can collect
> whether all subsequent tests went OK. So it would be great if some status code
> were printed after the exit marker which can be processed automatically, i.e.
>
> *** END MARKER TEST NAME *** RESULT: OK ***
>
> *** END MARKER TEST NAME *** RESULT: 345 ***
>
> This could be captured and a report built automatically. The full test output
> should be archived, so human inspection of the failed ones is possible.
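As a hedged sketch of how the proposed format could be consumed automatically, a
checker could scan each captured serial log for an end marker carrying the
RESULT field; the exact marker wording below is only an assumption based on the
examples above.

    import re

    # Matches lines shaped like the proposal above, e.g.
    #   *** END OF TEST FOO *** RESULT: OK ***
    #   *** END OF TEST FOO *** RESULT: 345 ***
    # The precise marker wording is an assumption for illustration.
    END_RE = re.compile(r"\*\*\* END .*\*\*\*\s*RESULT:\s*(\S+)\s*\*\*\*")

    def check_log(path):
        """Return 'passed', 'failed <code>', or 'inconclusive' for one log."""
        with open(path) as f:
            for line in f:
                m = END_RE.search(line)
                if m:
                    code = m.group(1)
                    return "passed" if code == "OK" else "failed %s" % code
        # No end marker with a status: needs human inspection, as noted above.
        return "inconclusive"

The full log would still be archived for the failed or inconclusive cases.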
Amar led a GSOC project to move to a new test framework and improve the tests.
For the most part, if the end of test message is printed, the test passed.
The test macros which check the return status also check that the system state
looks OK. For example, on a single core in user space, the dispatch disable
level should always be zero. If not, we have a critical section that isn't
exited. Tests which use assert do not have this check and probably should be
switched to the more aggressive macros.
Tests (like ticker) in which output order varies should eventually be addressed.
But yes, this is a problem, and all I know to say is that we have a large
problem space, we will have to nibble on it, and we will need help from everyone
to nibble on one issue/test at a time.
FWIW I want to rename all the spNN and tmNN tests to have some indication of the
object class being tested, and to focus each test on a single object class as
much as possible. A Google Code-in student helped split up sp09, which is a step
in the right direction.
> We have some experience with continuous code base testing.
> Some work related to SESAMO project http://sesamo-project.eu/
> some huge matrix testing for Volkswagen
> http://rtime.felk.cvut.cz/can/benchmark/3.0/
> and for our previous CAN related work
> http://rtime.felk.cvut.cz/can/benchmark/1/
>
> Mainly developed by Michal Sojka. He has even prepared continuous code base
> testing for Linux kernel CAN, but that awaits some polishing before it is
> publicly announced.
>
> If there is some way to check test results automatically, I or Premek will try
> to prepare a setup for TMS570 RTEMS BSP testing, as I have some time.
rtems-tools/tester. Any help on improving that is appreciated. I use it on
sis, psim, and jmr3904 and it can automatically run one simulator instance
per core. Speeds things up a lot.
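Not the actual tester code, but as a sketch of the one-simulator-instance-per-
core idea, something along these lines (the simulator command line and timeout
are placeholders):

    import multiprocessing
    import subprocess

    def run_one(exe):
        # Placeholder simulator invocation; the real tester knows the
        # per-BSP command line and how to capture the console output.
        cmd = ["run-simulator", exe]
        try:
            subprocess.check_output(cmd, stderr=subprocess.STDOUT, timeout=180)
        except subprocess.CalledProcessError:
            return (exe, "nonzero exit")
        except subprocess.TimeoutExpired:
            return (exe, "timeout")
        return (exe, "finished")

    def run_all(executables):
        # One worker per CPU core, as rtems-tools/tester does for simulators.
        with multiprocessing.Pool(multiprocessing.cpu_count()) as pool:
            return pool.map(run_one, executables)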
> By the way, testing will even be possible from flash (but only seldom, because
> of wear - 1000 erase cycles declared), because TMS570 flash support is maturing
> and targeting OpenOCD mainline:
>
> http://thread.gmane.org/gmane.comp.debugging.openocd.devel/25458
Hardware has a lifespan. Another worry of mine is simply the number of power
cycles and resets.
We have talked about using simulators very regularly (like on every check-in)
and checking real hardware at regular but known intervals. That would at least
allow a git bisect.
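To make the bisect idea concrete, a tiny hedged helper for "git bisect run"
might look like this; the build and test commands are placeholders for whatever
the BSP and tester setup actually use:

    #!/usr/bin/env python
    # Hypothetical helper for: git bisect run ./bisect-check.py
    # Exit 0 = good commit, non-zero = bad commit; the two commands
    # below are placeholders, not the real build/test procedure.
    import subprocess
    import sys

    BUILD_CMD = "./build-bsp.sh"   # placeholder build step
    TEST_CMD = "./run-tests.sh"    # placeholder test run (e.g. via the tester)

    def step(cmd):
        return subprocess.call(cmd, shell=True) == 0

    if __name__ == "__main__":
        sys.exit(0 if (step(BUILD_CMD) and step(TEST_CMD)) else 1)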
> So we do not need any TI tools for board setup now, and when the setup/loader
> code is implemented, we will not need CCS or HalCoGen even to build the
> initial MCU setup code. That binary can already be flashed by OpenOCD
> if you do not fear bugs and possibly damaging the target too much.
:) Progress.
> Best wishes,
>
> Pavel
--
Joel Sherrill, Ph.D.              Director of Research & Development
joel.sherrill at OARcorp.com      On-Line Applications Research
Ask me about RTEMS: a free RTOS   Huntsville AL 35805
Support Available                 (256) 722-9985