crypt01 execution time
Chris Johns
chrisj at rtems.org
Mon Apr 20 00:52:49 UTC 2015
On 15/04/2015 7:37 am, Joel Sherrill wrote:
>
>
> On 11/26/2014 1:12 AM, Sebastian Huber wrote:
>> On 25/11/14 23:25, Joel Sherrill wrote:
>>> How long is this test supposed to run?
>>>
>>> It takes 4:42 using sis on my computer which is a 2.9 GHz i7.
>> SIS is a slow simulator. On Qemu it runs much faster.
> To repeat a statement by Jiri, SIS is an accurate simulator. Just
> a different goal.
>>> Is there anything to do? Split it?
>> Splitting it up doesn't reduce the overall test time. The test cases in
>> this test are standard test cases. I am not so fond of removing them,
>> but if the test time is too long for you, you can drop the ones
>> with the many rounds.
>>
> I took the statement "doesn't reduce overall test time" earlier at
> face value and didn't push back. It is a true statement to the extent
> that splitting
> it results in N tests which run for the same combined time. But what
> was missed is that the run-time of this test is much longer than any
> other test. This results in having to set a timeout value that accommodates
> this test while others could be timed out in the equivalent of 180
> seconds of CPU time.
I wonder if the test framework is a little weak in this area and we need
to improve it. The core issue is determining the performance of a
simulator, and then the effect of running tests in parallel, assuming
one instance per core. The type of host being used is a factor we need
to estimate and adjust for.
One possible way I have considered is a "benchmark" type executable
that is run by itself before the test run (which can have parallel
tests running), with the results used to scale the host time to match
the simulated CPU time. It needs to run some integer and floating
point code to get a balanced result.
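Roughly what I have in mind, as a sketch only; the loop body, the
iteration count and the reference time are placeholders, not a real
implementation:

#include <stdint.h>
#include <time.h>

#define BENCH_LOOPS 1000000u

static double bench_seconds(void)
{
  struct timespec start, end;
  volatile uint32_t iacc = 0;
  volatile double facc = 1.0;
  uint32_t i;

  clock_gettime(CLOCK_MONOTONIC, &start);

  /* Mix integer and floating point work for a balanced result. */
  for (i = 0; i < BENCH_LOOPS; ++i) {
    iacc += i * 7u;
    facc *= 1.0000001;
  }

  clock_gettime(CLOCK_MONOTONIC, &end);

  return (end.tv_sec - start.tv_sec)
    + (end.tv_nsec - start.tv_nsec) / 1e9;
}

/*
 * Scale the host timeout by how much slower this run is than an
 * agreed reference time for the same loop on a known host.
 */
static double timeout_scale(double reference_seconds)
{
  return bench_seconds() / reference_seconds;
}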
>
> Thus it isn't the test time itself that is the problem. It is the impact
> it has on setting the overall timeout value since this single test
> runs for 309.51s of simulated CPU time according to sis. Every test
> which times out is bound by the maximum allowed execution time
> of the slowest tests.
>
I think we should add support for some new test annotations. Along with
the test begin and end annotations we currently support, we add "timeout
reset" and "timeout scalar". The first resets the timeout value so a
collection of tests can be in a single executable (which has some
advantages in some cases), and the scalar allows a test to adjust the
default timeout by a factor.
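As a sketch, the annotations could just be markers the executable
prints and rtems-test parses; the macro names and the exact format
here are only a suggestion:

#include <stdio.h>

/* Ask the test runner to restart its timeout clock. */
#define TEST_TIMEOUT_RESET() \
  printf("*** TIMEOUT: RESET ***\n")

/* Ask the test runner to scale the default timeout by a factor. */
#define TEST_TIMEOUT_SCALE(factor) \
  printf("*** TIMEOUT: SCALING: %.3f ***\n", (double) (factor))

A test like crypt01 could then call TEST_TIMEOUT_RESET() between its
sub-tests, or TEST_TIMEOUT_SCALE(2.0) before a long one.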
> RTEMS suffers from having many unwritten rules. Originally, all the
> sptests generally did not run much longer than ticker. That test takes
> 35 seconds of target time plus boot. Of course, this was never a hard
> rule and many exceptions crept in.
Yes and this is understandable. The purpose of the rtems-test command is
to collect and manage these issues and to provide a consistent user
interface.
> sis (and tsim) can be programmed to timeout after N seconds of simulated
> CPU time. Based on sim-scripts, 180 seconds is sufficient as a general
> timeout if you ignore crypt01. Splitting the test into multiple tests
> would bring it back in line with the maximum execution time of any
> other test.
Does this change the meaning of the timeout value? In the rtems-test
command the timeout is the host's real time because we cannot support
this in a common way across all simulators and it does not mean anything
with real hardware. The idea behind the "benchmark" executable was to
get some form of scaling factor between the simulated CPU time and the
host's clock time.
>
> When testing with a simulator, we can run multiple instances in parallel.
> It is just a matter of managing the maximum run-time of any test so
> they cluster rather than having a single outlier driving the selection.
>
> This test should be easy to split. The code looks like this:
>
> static void Init(rtems_task_argument arg)
> {
>   TEST_BEGIN();
>
>   test_formats();
>   test_md5();
>   test_sha256();
>   test_sha512();
>   test_generic();
>
>   TEST_END();
>   rtems_test_exit(0);
> }
>
> The test has five sub-parts and it is just a matter of dividing the test
> into those subparts. The simplest way would be conditional compilation
> driven from the Makefile.am to build part 1, 2, 3, ... as needed.
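Something like this would do it; the CRYPT01_PART define and the
grouping of the sub-tests are only an example:

static void Init(rtems_task_argument arg)
{
  TEST_BEGIN();

/* Each part is built from the same source with a different define. */
#if CRYPT01_PART == 1
  test_formats();
  test_md5();
#elif CRYPT01_PART == 2
  test_sha256();
#elif CRYPT01_PART == 3
  test_sha512();
  test_generic();
#endif

  TEST_END();
  rtems_test_exit(0);
}

Makefile.am would then build the source once per part with, say,
-DCRYPT01_PART=1, 2 and 3.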
>
> FWIW it took 24.61 seconds to run on a 2.4 GHz older CPU. I added a
> print of uptime to get a feel of how it would need to be split.
>
> *** BEGIN OF TEST CRYPT 1 ***
> UPTIME: 0: 2479000
> UPTIME: 0: 3537000
What about adding:
*** TIMEOUT: RESET ***
> test crypt_md5_r()
> UPTIME: 0: 404662000
> test crypt_sha256_r()
> UPTIME: 196: 135394000
> test crypt_sha512_r()
> UPTIME: 644: 629217000
> test crypt_r()
> UPTIME: 663: 426214000
> *** END OF TEST CRYPT 1 ***
>
> Unfortunately, it looks like the big "chunks" are sha256 and
> sha512 which are both over 180 seconds. So the test input/output
> lists for those would have to be split into different runs.
>
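Those lists could be sliced the same way. Everything in this fragment
is hypothetical; crypt01's real table and case runner are named
differently, and the bounds would come from the Makefile.am:

#include <stddef.h>

struct sha_case { const char *salt, *key, *expected; };

extern const struct sha_case sha256_cases[];
extern const size_t sha256_case_count;
void run_sha256_case(const struct sha_case *c);

#ifndef SHA256_CASE_FIRST
#define SHA256_CASE_FIRST 0
#endif
#ifndef SHA256_CASE_LAST
#define SHA256_CASE_LAST sha256_case_count
#endif

static void test_sha256(void)
{
  size_t i;

  /* Run only the slice of the standard cases built into this part. */
  for (i = SHA256_CASE_FIRST; i < SHA256_CASE_LAST; ++i)
    run_sha256_case(&sha256_cases[i]);
}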
Scaling, e.g.:
*** TIMEOUT: SCALING: 2.000 ***
?
Chris