[PATCH 3/4] rtems-tester.txt, options.py: Improved clarity and spelling.
Ric Claus
claus at slac.stanford.edu
Wed Jul 29 10:56:06 UTC 2015
---
doc/rtems-tester.txt | 92 ++++++++++++++++++++++++-------------------------
rtemstoolkit/options.py | 2 +-
2 files changed, 47 insertions(+), 47 deletions(-)
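For anyone who wants to try the revised Quick Start wording against a checkout, a
minimal command sequence is sketched below. It follows the work space and clone
examples quoted in the hunks; the mkdir step is elided in the quoted context and
is an assumption here.

$ cd
$ mkdir -p development/rtems/test    # assumed; the setup listing is truncated in this diff
$ cd development/rtems/test
$ git clone git://git.rtems.org/rtems-tools.git rtems-tools.git
$ cd rtems-tools.git/tester
$ ./rtems-test --list-bsps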
diff --git a/doc/rtems-tester.txt b/doc/rtems-tester.txt
index 286d6ee..b6f62e2 100644
--- a/doc/rtems-tester.txt
+++ b/doc/rtems-tester.txt
@@ -15,7 +15,7 @@ RTEMS Tester
------------
The RTEMS Tester is a test framework. It includes a command line interface to
-run tests on supported targets. The framework provides backend support for
+run tests on supported targets. The framework provides back-end support for
common simulators and debuggers. The board support package (BSP) configurations
for RTEMS are provided and can be used to run all the tests provided with
RTEMS. The framework is not specific to RTEMS and can be configured to run any
@@ -25,22 +25,22 @@ RTEMS is an embedded operating system and is cross-compiled on a range of host
machines. The executables run on the target hardware and this can vary widely
from open source simulators, commercial simulators, debuggers with simulators,
to debuggers with hardware specific pods and devices. Testing RTEMS requires
-the cross-compiled test executable is transfered to the target hardware,
-executed and the output returned to the host where it is analyised to determine
+the cross-compiled test executable is transferred to the target hardware,
+executed and the output returned to the host where it is analyzed to determine
the test result. The RTEMS Tester provides a framework to do this.
Running all the RTEMS tests on your target is very important. It provides you
-with a tracable record your RTEMS version and its tools and working at the
+with a traceable record that your RTEMS version and its tools are working at the
level the RTEMS development team expect when releasing RTEMS. Being able to
-easly run the tests and verify the results is critical in maintiaining a high
+easily run the tests and verify the results is critical in maintaining a high
standard.
The RTEMS Tester contains:
* Command line tool (+rtems-test+)
* BSP Configuration scripts
-* Backend Configuration scripts
-* Backend Python classes
+* Back-end Configuration scripts
+* Back-end Python classes
* Python based framework
IMPORTANT: If you have a problem please see the <<_bugs,reporting bugs>>
@@ -54,7 +54,7 @@ License
The RTEMS Tester is part of the RTEMS Tools Project. The code is released under
the OSI approved The BSD 2-Clause License. It is free to use and we encourage
-this including operating systems other than RTEMS.
+this, including on operating systems other than RTEMS.
The code and command line tools must retain the same names and always reference
the RTEMS Tools Project.
@@ -64,13 +64,13 @@ Quick Start
The quick start will show you how to run the test suite for a BSP. It will
explain how to get the RTEMS Tester, set it up and run the tests for the SIS
-BSP. It assumes you have a valid SPARC tool chain and built SIS BSP version of
-RTEMS. 4.11.
+BSP. It assumes you have a valid SPARC tool chain and have built the SIS BSP
+version of RTEMS 4.11.
Setup
~~~~~
-Setup a development work space:
+Set up a development work space:
-------------------------------------------------------------
$ cd
@@ -81,14 +81,14 @@ $ cd development/rtems/test
First fetch the RTEMS tester from the RTEMS Tools repository::
-------------------------------------------------------------
-$ git git://git.rtems.org/rtems-tools.git rtems-tools.git
+$ git clone git://git.rtems.org/rtems-tools.git rtems-tools.git
$ cd rtems-tools.git/tester
-------------------------------------------------------------
-Available BSPs
-~~~~~~~~~~~~~~
+Available BSP testers
+~~~~~~~~~~~~~~~~~~~~~
-You can list the available BSP's with:
+You can list the available BSP testers with:
-------------------------------------------------------------
$ ./rtems-test --list-bsps
@@ -117,10 +117,10 @@ you to add it and submit the configuration back to the project.
=============================================================
Some of the BSPs may appear more than once in the list. These are aliased BSP
-configuration's that may use a different backend. An example is the SPARC
-Instruction Simulator (SIS) BSP. There is the 'sis' BSP which uses the GDB
-backend and the 'sis-run' which uses the command line version of the SIS
-simulator. We will show how to use +rtems-test+ conmand with the SIS BSP
+configurations that may use a different back-end. An example is the SPARC
+Instruction Simulator (SIS) BSP. There is the 'sis' tester which uses the GDB
+back-end and the 'sis-run' tester which uses the command line version of the SIS
+simulator. We will show how to use the +rtems-test+ command with the SIS BSP
because it is easy to build an use.
Building RTEMS Tests
@@ -130,7 +130,7 @@ Build RTEMS with a configuration command line something similar to:
[NOTE]
=============================================================
-The following assumes a Unix type host and the tools have been built with
+The following assumes a Unix-type host and that the tools have been built with
a prefix of +$HOME/development/rtems/4.11+.
=============================================================
@@ -153,7 +153,7 @@ $ make <1>
it can.
Building all the tests takes time and it uses more disk so be patient. When
-finished all the tests will be built ready to run. Before running all the tests
+finished, all the tests will be built and ready to run. Before running all the tests
it is a good idea to run the +hello+ test. The +hello+ test is an RTEMS version
of the classic "Hello World" example and running it shows you have a working
tool chain and build of RTEMS ready to run the tests. Using the run command:
@@ -218,7 +218,7 @@ The +rtems-test+ command line accepts a range of options. These are discussed
later in the manual. Any command line argument without a +--+ prefix is a test
executable. You can pass more than one executable on the command line. If the
executable is a path to a directory the directories under that path are
-searched for any file with a +.exe+ extension. This is the detault extension
+searched for any file with a +.exe+ extension. This is the default extension
for RTEMS executables built within RTEMS.
To run the SIS tests enter the following command from the top of the SIS BSP
@@ -270,8 +270,8 @@ tests log the complete output.
to the path specific tests can be run.
<6> The output has been shortened so it fits nicely here.
<7> The test results. It shows passes, fails, timeouts, and invalid results. In
-this run 495 tests passed and 5 tests timedout. The timeouts are probability
-due the tests not having enough execute time to complete. The default timeout
+this run 495 tests passed and 5 tests timed out. The timeouts are probably due
+to the tests not having enough execution time to complete. The default timeout
is 180 seconds and some of the interrupt tests need longer. The amount of time
depends on the performance of your host CPU running the simulations.
<8> The average time per test and the total time taken to run all the tests.
@@ -279,18 +279,18 @@ depends on the performance of your host CPU running the simulations.
This BSP requires the +--rtems-tools+ option because the SPARC GDB is the
+sparc-rtems4.11-gdb+ command that is part of the RTEMS tools. Not every BSP
will require this option so you will need to check the specifics of the BSP
-configration to determine if it is needed.
+configuration to determine if it is needed.
-The output you see is each test starting to run. The +rtems-test+ command can
-run multiple SIS GDB simulations in parallel so you will see a number start
-quickly and then tests start as others finish. The output shown here is from an
+The output you see is each test starting to run. The +rtems-test+ command by
+default runs multiple tests in parallel so you will see a number of tests start
+quickly and then new tests start as others finish. The output shown here is from an
8 core processor so the first 8 are started in parallel and the status shows
-the order they actually started which is not 1 to 8.
+the order in which they actually started, which is not 1 to 8.
The test start line shows the current status of the tests. The status reported
is when the test starts and not the result of that test. A fail, timeout or
invalid count changing means a test running before this test started failed,
-not the starting test. The status here has 495 tests pass and no failures and 5
+not the starting test. The status here has 495 tests passed, no failures and 5
timeouts.:
-------------------------------------------------------------
@@ -298,14 +298,14 @@ timeouts.:
-------------------------------------------------------------
<1> The test number, in this case test 295 of 500 tests.
<2> Passed test count.
-<3> Failied test count.
+<3> Failed test count.
<4> Timeout test count.
<5> Invalid test count.
<6> Architecture and BSP.
<7> Executable name.
The test log records all the tests and results. The reporting mode by default
-only provides the output history if a test fails, timeouts, or is invalid. The
+only provides the output history if a test fails, times out, or is invalid. The
time taken by each test is also recorded.
The tests must complete in a specified time or the test is marked as timed
@@ -314,7 +314,7 @@ out. The default timeout is 3 minutes and can be globally changed using the
vary. When simulators are run in parallel the time taken depends on the
specifics of the host machine being used. A test per core is the most stable
method even though more tests can be run than available cores. If your machine
-needs longer or you are using a VM you may need to lengthen the time out.
+needs longer or you are using a VM you may need to lengthen the timeout.
Test Status
~~~~~~~~~~~
@@ -339,12 +339,12 @@ A test fails if the start marker is seen and there is no end marker.
.Timeout
If the test does not complete within the timeout setting the test is marked as
-timed out.
+having timed out.
.Invalid
If no start marker is seen the test is marked as invalid. If you are testing on
real target hardware things can sometimes go wrong and the target may not
-initialise or respond to the debugger in an expected way.
+initialize or respond to the debugger in an expected way.
Reporting
~~~~~~~~~
@@ -441,10 +441,10 @@ Running Tests in Parallel
-------------------------
The RTEMS Tester supports parallel execution of tests by default. This only
-makes sense if the test backend can run in parallel without resulting in
-resource contention. Simulators are an example of backends that can run in
-parallel. A hardware debug tool like a BDM or JTAG pod can only a single test
-at once to the tests need to be run one at a time.
+makes sense if the test back-end can run in parallel without resulting in
+resource contention. Simulators are an example of back-ends that can run in
+parallel. A hardware debug tool like a BDM or JTAG pod can manage only a
+single test at once so the tests need to be run one at a time.
The test framework manages the test jobs and orders the output in the report
log in test order. Output is held for completed tests until the next test to be
@@ -466,7 +466,7 @@ Options and arguments:
--jobs=[0..n,none,half,full] : Run with specified number of jobs, default: num CPUs.
--keep-going : Do not stop on an error.
--list-bsps : List the supported BSPs
---log file : Log file where all build out is written too
+--log file : Log file where all build output is written to
--macros file[,file] : Macro format files to load after the defaults
--no-clean : Do not clean up the build tree
--quiet : Quiet output (not used)
@@ -479,12 +479,12 @@ Options and arguments:
--warn-all : Generate warnings
-------------------------------------------------------------
-Developement
+Development
------------
The RTEMS Tester framework and command line tool is under active
-development. This are changing, being fixed, broken and generally improved. If
-you want to help please see the Wiki page for open itmes.
+development. These are changing, being fixed, broken and generally improved. If
+you want to help please see the Wiki page for open items.
@@ -493,9 +493,9 @@ History
The RTEMS Tester is based on a refactored base of Python code used in the RTEMS
Source Builder. This code provided a working tested base that has been extended
-and expanded to meet the needs of the RTEMS Tester. The tester uses the
+and expanded to meet the requirements for the RTEMS Tester. The tester uses the
specifics found in the various scripts and configurations in the
rtems-testing.git repo that has been accumulated over many years. The shell
-script implementation is restricted in what can it do and per BSP script is a
-maintenance burden, for example the command lines and options vary between each
+script implementation is restricted in what it can do and, per BSP script, is a
+maintenance burden. For example, the command lines and options vary between each
script.
diff --git a/rtemstoolkit/options.py b/rtemstoolkit/options.py
index 97b8ba7..6a21f80 100644
--- a/rtemstoolkit/options.py
+++ b/rtemstoolkit/options.py
@@ -104,7 +104,7 @@ class command_line(object):
'--keep-going': 'Do not stop on an error.',
'--jobs=[0..n,none,half,full]': 'Run with specified number of jobs, default: num CPUs.',
'--macros file[,file]': 'Macro format files to load after the defaults',
- '--log file': 'Log file where all build out is written too',
+ '--log file': 'Log file where all build output is written to',
}
self.opts = { 'params' : [] }
self.command_path = command_path
--
1.8.1.4
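A usage sketch tying together the options the patched text discusses, run from the
top of the SIS BSP build tree. The --rtems-tools and --jobs options and the 180
second default timeout appear in the text above; the --rtems-bsp and --timeout
spellings and the use of '.' as the search path are assumptions, since the full
option list and run command are truncated in the quoted hunks.

$ cd <top of the SIS BSP build tree>
$ $HOME/development/rtems/test/rtems-tools.git/tester/rtems-test \
      --rtems-tools=$HOME/development/rtems/4.11 \
      --rtems-bsp=sis \
      --timeout=300 \
      --jobs=half \
      .    # a directory argument is searched for .exe files, as described above

Lengthening the timeout to 300 seconds is one way to cover the interrupt tests
that the text notes can exceed the 180 second default on slower hosts.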