BSP Build Failures Appeal

Chris Johns chrisj at rtems.org
Mon Aug 25 01:26:54 UTC 2014


On 24/08/2014 7:59 pm, Pavel Pisa wrote:
> Hello Chris,
>
> On Sunday 24 of August 2014 05:33:45 Chris Johns wrote:
>> On 23/08/2014 1:57 am, Joel Sherrill wrote:
>>> The build failures I reported were with the latest RSB tools.
>>> Please pitch in and let's resolve them.
>>
>> I have a regression build that includes building all BSPs using ...
>>
>> $ rm -rf build rsb-report-* log_* && ../source-builder/sb-set-builder
>> --prefix=$HOME/development/rtems/4.11 --log=log_all_rtems --with-rtems
>> --trace --regression 4.11/rtems-all
>>
>> ... running on sync.rtems.org as I am also seeing failures. This should
>> give me a list of error reports with the first failure on an architecture.
>>
>>> I left a similar build going for the weekend but using the
>>> head of gcc, newlib, and binutils. Hopefully the results
>>> are similar.
>>
>> Given the churn GSoC patches are creating I think we are still a while
>> away from a freeze before release branching 4.11.
>
> I see failures with the latest RTEMS git on
>
>    tms570ls3137_hdk_intram
>    tms570ls3137_hdk_sdram
>    arm-lpc17xx_ea_ram
>    arm-lpc40xx_ea_ram
>

The failures are across most of the architectures, including sparc, 
though I have not inspected every error log.

> I expect the source of the breakage is the following commit
>
>    score: Add SMP support to the cache manager
>    http://git.rtems.org/rtems/commit/?id=ddbc3f8d83678313ca61d2936e6efd50b3e044b0
>

I agree.

> and the fact that CPU_INSTRUCTION_CACHE_ALIGNMENT does not get defined
> for most ARM targets (or at least those without a real cache).

Daniel(s) ??

> Perhaps disabling SMP in the cpukit would help these targets.
> Generally, though, I am a little afraid that the cache management
> code adds overhead on systems without a cache. In theory there should
> be two builds of the cpukit for each multilib variant, one with SMP
> and another without, if multiple BSPs are expected to be built
> against a single cpukit build.

Yes, in theory this is correct. The intention of the cpukit was to 
support multilib building of RTEMS as a binary compatible layer all 
BSPs could use. However, linkages from within the cpukit to parts of 
the source outside it have meant this has never been cleanly 
implemented, and I suspect it never will be, nor is it worth further 
effort. The other major contributing factor is that the multilibs for 
gcc are a subset of the multilibs for RTEMS, because the same 
instruction set can execute on a range of differing cores, exploding 
the build matrix. This means building RTEMS per BSP is the practical 
approach.

> As I understand it, multilib is used for distribution but is not
> common for source users, who usually build for a single BSP.
> But this should be checked by someone who knows RTEMS better than me.

Multilib is not really used anywhere because of the complexities the 
configure options add. If you take ARM and its growing number of 
multilibs (gcc is taking a while to build even on fast hardware these 
days), then add the RTEMS-specific variants, and then multiply by the 
permutations of the configure options, you have a massive build to 
support from the distribution point of view. For a BSP you then need 
to add BSPOPTS on top of this. So in theory each commit should be run 
through all of these variations for all BSPs to make sure nothing has 
broken; however, this is just not practical, which in my view means 
the way we handle configurations and variants is not sustainable and 
needs to change.

Patches need to be checked for this before being pushed, and even then 
breakages like this happen from time to time.

Chris


