Chris Johns chrisj at rtems.org
Tue May 16 08:28:49 UTC 2017

On 16/5/17 5:54 pm, Sebastian Huber wrote:
> On 13/05/17 00:30, Chris Johns wrote:
>> On 12/5/17 8:24 pm, Sebastian Huber wrote:
>>> Mark tests that require review due to
>> If a test fails it fails. I feel the test results need to correctly
>> express the failures and we should not mask them in this way. I do not
>> agree with changes that suppress a failure.
>> I do not see SMP as any more special than other reasons a test fails,
>> and I hope we do not start adding them. We exclude tests from being
>> built because they do not fit into the target resources, and that set
>> is fixed and bounded.
> There are differences in non-SMP vs. SMP mode, e.g. disable thread
> preemption and interrupt level task modes are not supported. The tests
> are perfectly fine, but they use these explicit non-SMP features.
> You
> wanted more tests to run in the SMP mode if SMP is enabled. Now only 36
> tests are left over.

I think it is a reasonable expectation that when you build an SMP RTEMS,
the tests run with SMP enabled using all available cores.

> They should be fixed to not use these features, but
> someone has to look at each test individually for this. This could be as
> easy as this:
> https://git.rtems.org/rtems/commit/?id=00d982080cf1e630fea9c6e8b3a4e7a5be501781

Great and thanks.
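A hedged sketch of the kind of fix Sebastian describes (the function and
test names here are illustrative, not taken from the linked commit): the
test itself is fine, it just exercises a feature that only exists on
uniprocessor builds, so the non-SMP-only code is guarded out when RTEMS
is built with SMP enabled.

```c
#include <rtems.h>

/* Illustrative helper, not from the actual commit: only request the
 * "no preempt" task mode on non-SMP builds, since disabling thread
 * preemption is not supported in SMP mode.
 */
static void test_set_mode( void )
{
#ifndef RTEMS_SMP
  rtems_mode        previous;
  rtems_status_code sc;

  /* Disable thread preemption for this task (uniprocessor only) */
  sc = rtems_task_mode( RTEMS_NO_PREEMPT, RTEMS_PREEMPT_MASK, &previous );
  (void) sc;
#endif
}
```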

> I cannot fix everything instantly. I have to prioritize my work. There
> are a lot of other things to do and is this really the most important
> stuff right now before the RTEMS 4.12 release?

Yes, and I am not suggesting you fix these tests, or that these tests
need to be fixed for 4.12.

I am saying we tag the tests as `expected-fail` so that, if you run
them, this is the result you will see. Tagging them will not work until
I figure out a way to tag a test as an expected failure for an SMP
build. I am currently looking at the regression I introduced with the
parallel building of the tests. I am close to having that resolved.
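To make the idea concrete, a hedged sketch of what such tagging might
look like, assuming a per-build test configuration file. The file name
`smp.tcfg`, the directive syntax, and the test names below are all
placeholders; as noted above, the actual mechanism for tagging a test
as expected-fail in an SMP build had not been worked out yet.

```
# Hypothetical test configuration fragment (names illustrative).
# Tests that rely on non-SMP features are expected failures when
# RTEMS is built with SMP enabled; they are still built and run.
expected-fail: sp-example-01
expected-fail: sp-example-02
```

The point of the tag is that the tests still run and still fail; the
tester simply knows not to report those failures as regressions.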

We would all love every test to be run and to pass on all architectures
with all build configurations; however, the reality is this will not
happen, so we need to exclude some tests from being built and tag
others as expected failures.

What we do need to do is make sure the test results express the true
state. If a test is broken it should fail. If it is tagged as
expected-fail, we do not consider that failure a regression.
