
Updated regression setup instructions

[SVN r26636]
Aleksey Gurtovoy, 21 years ago (commit dad4c69738)

+ 64 - 60
tools/regression/xsl_reports/runner/instructions.html

@@ -3,9 +3,9 @@
 <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
 <head>
 <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
-<meta name="generator" content="Docutils 0.3.4: http://docutils.sourceforge.net/" />
+<meta name="generator" content="Docutils 0.3.6: http://docutils.sourceforge.net/" />
 <title>Running Boost Regression Tests</title>
-<style type="text/css"><!--
+<style type="text/css">
 
 /*
 :Author: David Goodger
@@ -243,11 +243,11 @@ tt {
 ul.auto-toc {
   list-style-type: none }
 
---></style>
+</style>
 </head>
 <body>
 <h1 class="title">Running Boost Regression Tests</h1>
-<div class="document" id="running-boost-regression-tests">
+<div class="document" id="running-boost-regression">
 <div class="section" id="requirements">
 <h2><a name="requirements">Requirements</a></h2>
 <ul class="simple">
@@ -258,13 +258,13 @@ ul.auto-toc {
 <div class="section" id="installation">
 <h2><a name="installation">Installation</a></h2>
 <ul class="simple">
-<li>Download regression driver <tt class="literal"><span class="pre">regression.py</span></tt> from <a class="reference" href="http://cvs.sourceforge.net/viewcvs.py/*checkout*/boost/boost/tools/regression/xsl_reports/runner/regression.py">here</a> (<a class="reference" href="http://tinyurl.com/4fp4g">http://tinyurl.com/4fp4g</a>)
+<li>Download regression driver <tt class="docutils literal"><span class="pre">regression.py</span></tt> from <a class="reference" href="http://cvs.sourceforge.net/viewcvs.py/*checkout*/boost/boost/tools/regression/xsl_reports/runner/regression.py">here</a> (<a class="reference" href="http://tinyurl.com/4fp4g">http://tinyurl.com/4fp4g</a>)
 and put it in the directory where you want all the regression 
 test files to be placed.</li>
 </ul>
 <ul>
-<li><p class="first"><strong>Optional</strong>: If you already have <tt class="literal"><span class="pre">bjam</span></tt> and/or <tt class="literal"><span class="pre">process_jam_log</span></tt> executables
-you'd like to use, just put them in the same directory with <tt class="literal"><span class="pre">regression.py</span></tt>, e.g.:</p>
+<li><p class="first"><strong>Optional</strong>: If you already have <tt class="docutils literal"><span class="pre">bjam</span></tt> and/or <tt class="docutils literal"><span class="pre">process_jam_log</span></tt> executables
+you'd like to use, just put them in the same directory with <tt class="docutils literal"><span class="pre">regression.py</span></tt>, e.g.:</p>
 <pre class="literal-block">
 my_boost_regressions/
     regression.py
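The installation step can be sketched as a short shell session (the directory name is arbitrary, and the ``touch`` below merely stands in for actually downloading ``regression.py`` from the link above):

```shell
# Create a working directory for the regression files and enter it.
mkdir -p my_boost_regressions
cd my_boost_regressions

# Put regression.py here; an empty placeholder file is created below
# instead of actually downloading it (illustration only).
touch regression.py

ls    # shows: regression.py
```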
@@ -275,36 +275,33 @@ my_boost_regressions/
 </div>
 <div class="section" id="running-tests">
 <h2><a name="running-tests">Running tests</a></h2>
-<ul>
-<li><p class="first">To start a regression run, simply run <tt class="literal"><span class="pre">regression.py</span></tt> providing it with the only
-required option, runner id (something unique of your choice that will identify your 
-results in the reports <a class="footnote-reference" href="#runnerid1" id="id2" name="id2"><sup>1</sup></a>, <a class="footnote-reference" href="#runnerid2" id="id3" name="id3"><sup>2</sup></a>). For example:</p>
-<pre class="literal-block">
-python regression.py --runner=Metacomm
-</pre>
-<p>You can specify a particular set of toolsets you want to test with by passing them as 
-a comma-separated list using the <tt class="literal"><span class="pre">--toolsets</span></tt> option:</p>
+<p>To start a regression run, simply run <tt class="docutils literal"><span class="pre">regression.py</span></tt> providing it with the following
+two arguments:</p>
+<ul class="simple">
+<li>runner id (something unique of your choice that will identify your 
+results in the reports <a class="footnote-reference" href="#runnerid1" id="id2" name="id2">[1]</a>, <a class="footnote-reference" href="#runnerid2" id="id3" name="id3">[2]</a>)</li>
+<li>a particular set of toolsets you want to test with <a class="footnote-reference" href="#toolsets" id="id4" name="id4">[3]</a>.</li>
+</ul>
+<p>For example:</p>
 <pre class="literal-block">
-python regression.py --runner=Metacomm <strong>--toolsets=gcc,vc7</strong>
+python regression.py --runner=Metacomm --toolsets=gcc,vc7
 </pre>
-<p>If you are interested in seeing all available options, run <tt class="literal"><span class="pre">python</span> <span class="pre">regression.py</span></tt>
-or <tt class="literal"><span class="pre">python</span> <span class="pre">regression.py</span> <span class="pre">--help</span></tt>. See also the <a class="reference" href="#advanced-use">Advanced use</a> section below.</p>
+<p>If you are interested in seeing all available options, run <tt class="docutils literal"><span class="pre">python</span> <span class="pre">regression.py</span></tt>
+or <tt class="docutils literal"><span class="pre">python</span> <span class="pre">regression.py</span> <span class="pre">--help</span></tt>. See also the <a class="reference" href="#advanced-use">Advanced use</a> section below.</p>
 <p><strong>Note</strong>: If you are behind a firewall/proxy server, everything should still &quot;just work&quot;. 
 In the rare cases when it doesn't, you can explicitly specify the proxy server 
-parameters through the <tt class="literal"><span class="pre">--proxy</span></tt> option, e.g.:</p>
+parameters through the <tt class="docutils literal"><span class="pre">--proxy</span></tt> option, e.g.:</p>
 <pre class="literal-block">
-python regression.py --runner=Metacomm <strong>--proxy=http://www.someproxy.com:3128</strong>
+python regression.py ... <strong>--proxy=http://www.someproxy.com:3128</strong>
 </pre>
-</li>
-</ul>
 </div>
 <div class="section" id="details">
 <h2><a name="details">Details</a></h2>
 <p>The regression run procedure will:</p>
 <ul class="simple">
-<li>Download the most recent tarball from <a class="reference" href="http://www.boost-consulting.com">http://www.boost-consulting.com</a>, 
-unpack it in the subdirectory <tt class="literal"><span class="pre">boost</span></tt>.</li>
-<li>Build <tt class="literal"><span class="pre">bjam</span></tt> and <tt class="literal"><span class="pre">process_jam_log</span></tt> if needed. (<tt class="literal"><span class="pre">process_jam_log</span></tt> is an
+<li>Download the most recent tarball from <a class="reference" href="http://www.meta-comm.com/engineering/boost/snapshot/">http://www.meta-comm.com/engineering/boost/snapshot/</a>,
+unpack it in the subdirectory <tt class="docutils literal"><span class="pre">boost</span></tt>.</li>
+<li>Build <tt class="docutils literal"><span class="pre">bjam</span></tt> and <tt class="docutils literal"><span class="pre">process_jam_log</span></tt> if needed. (<tt class="docutils literal"><span class="pre">process_jam_log</span></tt> is a
 utility that extracts the test results from the log file produced by 
 Boost.Build).</li>
 <li>Run regression tests, process and collect the results.</li>
@@ -318,44 +315,44 @@ merge all submitted test runs and publish them at
 <h2><a name="advanced-use">Advanced use</a></h2>
 <div class="section" id="incremental-runs">
 <h3><a name="incremental-runs">Incremental runs</a></h3>
-<p>You can run <tt class="literal"><span class="pre">regression.py</span></tt> in incremental mode <a class="footnote-reference" href="#incremental" id="id4" name="id4"><sup>3</sup></a> by simply passing 
+<p>You can run <tt class="docutils literal"><span class="pre">regression.py</span></tt> in incremental mode <a class="footnote-reference" href="#incremental" id="id5" name="id5">[4]</a> by simply passing 
 it an identically named command-line flag:</p>
 <pre class="literal-block">
-python regression.py --runner=Metacomm <strong>--incremental</strong>
+python regression.py ... <strong>--incremental</strong>
 </pre>
 </div>
-<div class="section" id="dealing-with-misbehaved-tests-compilers">
-<h3><a name="dealing-with-misbehaved-tests-compilers">Dealing with misbehaved tests/compilers</a></h3>
+<div class="section" id="dealing-with-misbehaved">
+<h3><a name="dealing-with-misbehaved">Dealing with misbehaved tests/compilers</a></h3>
 <p>Depending on the environment/C++ runtime support library the test is compiled with, 
 a test failure/termination may cause a dialog window to appear, requiring
 human intervention to proceed. Moreover, the test (or even the compiler itself)
-can fall into infinite loop, or simply run for too long. To allow <tt class="literal"><span class="pre">regression.py</span></tt> 
-to take care of these obstacles, add the <tt class="literal"><span class="pre">--monitored</span></tt> flag to the script 
+can fall into an infinite loop, or simply run for too long. To allow <tt class="docutils literal"><span class="pre">regression.py</span></tt> 
+to take care of these obstacles, add the <tt class="docutils literal"><span class="pre">--monitored</span></tt> flag to the script 
 invocation:</p>
 <pre class="literal-block">
-python regression.py --runner=Metacomm <strong>--monitored</strong>
+python regression.py ... <strong>--monitored</strong>
 </pre>
 <p>That's it. Knowing your intentions, the script will be able to automatically deal 
-with the listed issues <a class="footnote-reference" href="#monitored" id="id5" name="id5"><sup>4</sup></a>.</p>
+with the listed issues <a class="footnote-reference" href="#monitored" id="id6" name="id6">[5]</a>.</p>
 </div>
 <div class="section" id="getting-sources-from-cvs">
 <h3><a name="getting-sources-from-cvs">Getting sources from CVS</a></h3>
 <p>If you already have a CVS client installed and configured, you might prefer to get
 the sources directly from the Boost CVS repository. To communicate this to the 
-script, you just need to pass it your SourceForge user ID using the <tt class="literal"><span class="pre">--user</span></tt> 
+script, you just need to pass it your SourceForge user ID using the <tt class="docutils literal"><span class="pre">--user</span></tt> 
 option; for instance:</p>
 <pre class="literal-block">
-python regression.py --runner=Metacomm <strong>--user=agurtovoy</strong>
+python regression.py ... <strong>--user=agurtovoy</strong>
 </pre>
-<p>You can also specify the user as <tt class="literal"><span class="pre">anonymous</span></tt>, requesting anonymous CVS access. 
+<p>You can also specify the user as <tt class="docutils literal"><span class="pre">anonymous</span></tt>, requesting anonymous CVS access. 
 Note, though, that the files obtained this way tend to lag behind the actual CVS 
 state by several hours, sometimes up to twelve. By contrast, the tarball the script 
 downloads by default is at most one hour behind.</p>
 </div>
-<div class="section" id="integration-with-a-custom-driver-script">
-<h3><a name="integration-with-a-custom-driver-script">Integration with a custom driver script</a></h3>
+<div class="section" id="integration-with-a-custom">
+<h3><a name="integration-with-a-custom">Integration with a custom driver script</a></h3>
 <p>Even if you've already been using a custom driver script, and for some 
-reason you don't  want <tt class="literal"><span class="pre">regression.py</span></tt> to take over of the entire test cycle, 
+reason you don't want <tt class="docutils literal"><span class="pre">regression.py</span></tt> to take over the entire test cycle, 
 getting your regression results into <a class="reference" href="http://www.boost.org/regression-logs/developer/">Boost-wide reports</a> is still easy!</p>
 <p>In fact, it's just a matter of modifying your script to perform two straightforward 
 operations:</p>
@@ -363,13 +360,13 @@ operations:</p>
 <li><p class="first"><em>Timestamp file creation</em> needs to be done before the CVS update/checkout.
 The file's location doesn't matter (nor does the content), as long as you know how 
 to access it later. Making your script do something as simple as
-<tt class="literal"><span class="pre">echo</span> <span class="pre">&gt;timestamp</span></tt> would work just fine.</p>
+<tt class="docutils literal"><span class="pre">echo</span> <span class="pre">&gt;timestamp</span></tt> would work just fine.</p>
 </li>
-<li><p class="first"><em>Collecting and uploading logs</em> can be done any time after <tt class="literal"><span class="pre">process_jam_log</span></tt>' s
+<li><p class="first"><em>Collecting and uploading logs</em> can be done any time after <tt class="docutils literal"><span class="pre">process_jam_log</span></tt>'s
 run, and is as simple as an invocation of the local copy of
-<tt class="literal"><span class="pre">boost/tools/regression/xsl_reports/runner/collect_and_upload_logs.py</span></tt>
+<tt class="docutils literal"><span class="pre">$BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py</span></tt>
 script that was just obtained from the CVS with the rest of the sources.
-You'd need to provide <tt class="literal"><span class="pre">collect_and_upload_logs.py</span></tt> with the following three
+You'd need to provide <tt class="docutils literal"><span class="pre">collect_and_upload_logs.py</span></tt> with the following three
 arguments:</p>
 <pre class="literal-block">
 --locate-root   directory to scan for &quot;test_log.xml&quot; files
@@ -378,10 +375,10 @@ arguments:</p>
                 as a timestamp of the run (&quot;timestamp&quot; by default)
 </pre>
 <p>For example, assuming that the run's resulting binaries are in 
-<tt class="literal"><span class="pre">/Volumes/stuff/users/alexy/boost_regressions/results</span></tt> directory,
-the  <tt class="literal"><span class="pre">collect_and_upload_logs.py</span></tt> invocation might look like this:</p>
+<tt class="docutils literal"><span class="pre">/Volumes/stuff/users/alexy/boost_regressions/results</span></tt> directory,
+the <tt class="docutils literal"><span class="pre">collect_and_upload_logs.py</span></tt> invocation might look like this:</p>
 <pre class="literal-block">
-python boost/tools/regression/xsl_reports/runner/collect_and_upload_logs.py 
+python $BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py 
    --locate-root=/Volumes/stuff/users/alexy/boost_regressions/results
    --runner=agurtovoy
    --timestamp=/Volumes/stuff/users/alexy/boost_regressions/timestamp
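A minimal POSIX shell sketch of the two operations above (the middle section and the ``collect_and_upload_logs.py`` invocation are placeholders shown as comments; only the timestamp step is actually executed here):

```shell
# 1. Timestamp file creation, done before the CVS update/checkout.
#    The file's content is irrelevant; only its timestamp is used.
echo >timestamp

# ... your existing CVS update, build, test, and process_jam_log steps ...

# 2. Afterwards, collect and upload the logs; left as a comment since
#    the script path and directories depend on your checkout:
#
#    python $BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py \
#        --locate-root=<your results directory> \
#        --runner=<your runner id> \
#        --timestamp=timestamp

ls timestamp    # shows: timestamp
```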
@@ -393,20 +390,20 @@ python boost/tools/regression/xsl_reports/runner/collect_and_upload_logs.py
 <div class="section" id="feedback">
 <h2><a name="feedback">Feedback</a></h2>
 <p>Please send all comments/suggestions regarding this document and the testing procedure 
-itself to the <a class="reference" href="mailto:boost&#64;lists.boost.org.">Boost developers list</a> (<a class="reference" href="mailto:boost&#64;lists.boost.org">mailto:boost&#64;lists.boost.org</a>).</p>
+itself to the <a class="reference" href="http://lists.boost.org/mailman/listinfo.cgi/boost-testing">Boost Testing list</a>.</p>
 </div>
 <div class="section" id="notes">
 <h2><a name="notes">Notes</a></h2>
-<table class="footnote" frame="void" id="runnerid1" rules="none">
+<table class="docutils footnote" frame="void" id="runnerid1" rules="none">
 <colgroup><col class="label" /><col /></colgroup>
 <tbody valign="top">
 <tr><td class="label"><a class="fn-backref" href="#id2" name="runnerid1">[1]</a></td><td>If you alternate regression runs between different 
 sets of compilers (e.g. Intel in the morning and GCC at the end of the day), you need 
-to provide a <em>different</em> runner id for each of these runs, e.g. <tt class="literal"><span class="pre">your_name-intel</span></tt>, and
-<tt class="literal"><span class="pre">your_name-gcc</span></tt>.</td></tr>
+to provide a <em>different</em> runner id for each of these runs, e.g. <tt class="docutils literal"><span class="pre">your_name-intel</span></tt>, and
+<tt class="docutils literal"><span class="pre">your_name-gcc</span></tt>.</td></tr>
 </tbody>
 </table>
-<table class="footnote" frame="void" id="runnerid2" rules="none">
+<table class="docutils footnote" frame="void" id="runnerid2" rules="none">
 <colgroup><col class="label" /><col /></colgroup>
 <tbody valign="top">
 <tr><td class="label"><a class="fn-backref" href="#id3" name="runnerid2">[2]</a></td><td>The limitations of the reports' format/medium impose a direct dependency
@@ -415,11 +412,18 @@ for your runner id. If you are running regressions for a single compiler, please
 sure to choose a short enough id that does not significantly disturb the reports' layout.</td></tr>
 </tbody>
 </table>
-<table class="footnote" frame="void" id="incremental" rules="none">
+<table class="docutils footnote" frame="void" id="toolsets" rules="none">
+<colgroup><col class="label" /><col /></colgroup>
+<tbody valign="top">
+<tr><td class="label"><a class="fn-backref" href="#id4" name="toolsets">[3]</a></td><td>If the <tt class="docutils literal"><span class="pre">--toolsets</span></tt> option is not provided, the script will try to use the 
+platform's default toolset (<tt class="docutils literal"><span class="pre">gcc</span></tt> for most Unix-based systems).</td></tr>
+</tbody>
+</table>
+<table class="docutils footnote" frame="void" id="incremental" rules="none">
 <colgroup><col class="label" /><col /></colgroup>
 <tbody valign="top">
-<tr><td class="label"><a class="fn-backref" href="#id4" name="incremental">[3]</a></td><td><p>By default, the script runs in what is known as <em>full mode</em>: on 
-each <tt class="literal"><span class="pre">regression.py</span></tt> invocation all the files that were left in place by the 
+<tr><td class="label"><a class="fn-backref" href="#id5" name="incremental">[4]</a></td><td><p class="first">By default, the script runs in what is known as <em>full mode</em>: on 
+each <tt class="docutils literal"><span class="pre">regression.py</span></tt> invocation all the files that were left in place by the 
 previous run -- including the binaries for the successfully built tests and libraries 
 -- are deleted, and everything is rebuilt once again from scratch. By contrast, in 
 <em>incremental mode</em> the already existing binaries are left intact, and only the 
@@ -430,23 +434,23 @@ but unfortunately they don't always produce reliable results. Some type of chang
 to the codebase (changes to the bjam testing subsystem in particular)
 often require switching to full mode for one cycle in order to produce 
 trustworthy reports.</p>
-<p>As a general guideline, if you can afford it, testing in full mode is preferable.</p>
+<p class="last">As a general guideline, if you can afford it, testing in full mode is preferable.</p>
 </td></tr>
 </tbody>
 </table>
-<table class="footnote" frame="void" id="monitored" rules="none">
+<table class="docutils footnote" frame="void" id="monitored" rules="none">
 <colgroup><col class="label" /><col /></colgroup>
 <tbody valign="top">
-<tr><td class="label"><a class="fn-backref" href="#id5" name="monitored">[4]</a></td><td>Note that at the moment this functionality is available only if you 
+<tr><td class="label"><a class="fn-backref" href="#id6" name="monitored">[5]</a></td><td>Note that at the moment this functionality is available only if you 
 are running on a Windows platform. Contributions are welcome!</td></tr>
 </tbody>
 </table>
 </div>
 </div>
-<hr class="footer" />
+<hr class="docutils footer" />
 <div class="footer">
 <a class="reference" href="instructions.rst">View document source</a>.
-Generated on: 2004-08-10 02:23 UTC.
+Generated on: 2005-01-07 13:27 UTC.
 Generated by <a class="reference" href="http://docutils.sourceforge.net/">Docutils</a> from <a class="reference" href="http://docutils.sourceforge.net/rst.html">reStructuredText</a> source.
 </div>
 </body>

+ 29 - 27
tools/regression/xsl_reports/runner/instructions.rst

@@ -32,30 +32,29 @@ __ http://cvs.sourceforge.net/viewcvs.py/*checkout*/boost/boost/tools/regression
 Running tests
 -------------
 
-* To start a regression run, simply run ``regression.py`` providing it with the only
-  required option, runner id (something unique of your choice that will identify your 
-  results in the reports [#runnerid1]_, [#runnerid2]_). For example::
+To start a regression run, simply run ``regression.py`` providing it with the following
+two arguments:
 
-    python regression.py --runner=Metacomm
-  
-  You can specify a particular set of toolsets you want to test with by passing them as 
-  a comma-separated list using the ``--toolsets`` option:
-  
-  .. parsed-literal::
+- runner id (something unique of your choice that will identify your 
+  results in the reports [#runnerid1]_, [#runnerid2]_)
 
-     python regression.py --runner=Metacomm **--toolsets=gcc,vc7**
-  
-  
-  If you are interested in seeing all available options, run ``python regression.py``
-  or ``python regression.py --help``. See also the `Advanced use`_ section below.
+- a particular set of toolsets you want to test with [#toolsets]_.
+
+For example::
+
+    python regression.py --runner=Metacomm --toolsets=gcc,vc7
+    
+
+If you are interested in seeing all available options, run ``python regression.py``
+or ``python regression.py --help``. See also the `Advanced use`_ section below.
   
-  **Note**: If you are behind a firewall/proxy server, everything should still "just work". 
-  In the rare cases when it doesn't, you can explicitly specify the proxy server 
-  parameters through the ``--proxy`` option, e.g.:
+**Note**: If you are behind a firewall/proxy server, everything should still "just work". 
+In the rare cases when it doesn't, you can explicitly specify the proxy server 
+parameters through the ``--proxy`` option, e.g.:
 
-  .. parsed-literal::
+.. parsed-literal::
 
-     python regression.py --runner=Metacomm **--proxy=http://www.someproxy.com:3128**
+    python regression.py ... **--proxy=http://www.someproxy.com:3128**
 
 
 Details
@@ -63,7 +62,7 @@ Details
 
 The regression run procedure will:
 
-* Download the most recent tarball from http://www.boost-consulting.com, 
+* Download the most recent tarball from http://www.meta-comm.com/engineering/boost/snapshot/,
   unpack it in the subdirectory ``boost``.
 
 * Build ``bjam`` and ``process_jam_log`` if needed. (``process_jam_log`` is an
@@ -91,7 +90,7 @@ it an identically named command-line flag:
 
 .. parsed-literal::
 
-      python regression.py --runner=Metacomm **--incremental**
+      python regression.py ... **--incremental**
 
 
 Dealing with misbehaved tests/compilers
@@ -106,7 +105,7 @@ invocation:
 
 .. parsed-literal::
 
-      python regression.py --runner=Metacomm **--monitored**
+      python regression.py ... **--monitored**
 
 
 That's it. Knowing your intentions, the script will be able to automatically deal 
@@ -123,7 +122,7 @@ option; for instance:
 
 .. parsed-literal::
 
-      python regression.py --runner=Metacomm **--user=agurtovoy**
+      python regression.py ... **--user=agurtovoy**
 
 You can also specify the user as ``anonymous``, requesting anonymous CVS access. 
 Note, though, that the files obtained this way tend to lag behind the actual CVS 
@@ -148,7 +147,7 @@ operations:
 
 2. *Collecting and uploading logs* can be done any time after ``process_jam_log``'s
    run, and is as simple as an invocation of the local copy of
-   ``boost/tools/regression/xsl_reports/runner/collect_and_upload_logs.py``
+   ``$BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py``
    script that was just obtained from the CVS with the rest of the sources.
    You'd need to provide ``collect_and_upload_logs.py`` with the following three
    arguments::
@@ -162,7 +161,7 @@ operations:
    ``/Volumes/stuff/users/alexy/boost_regressions/results`` directory,
   the ``collect_and_upload_logs.py`` invocation might look like this::
 
-       python boost/tools/regression/xsl_reports/runner/collect_and_upload_logs.py 
+       python $BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py 
           --locate-root=/Volumes/stuff/users/alexy/boost_regressions/results
           --runner=agurtovoy
           --timestamp=/Volumes/stuff/users/alexy/boost_regressions/timestamp
@@ -175,9 +174,9 @@ Feedback
 --------
 
 Please send all comments/suggestions regarding this document and the testing procedure 
-itself to the `Boost developers list`__ (mailto:boost@lists.boost.org).
+itself to the `Boost Testing list`__.
 
-__ mailto:boost@lists.boost.org.
+__ http://lists.boost.org/mailman/listinfo.cgi/boost-testing
 
 
 Notes
@@ -193,6 +192,9 @@ Notes
    for your runner id. If you are running regressions for a single compiler, please make 
    sure to choose a short enough id that does not significantly disturb the reports' layout.
 
+.. [#toolsets] If the ``--toolsets`` option is not provided, the script will try to use the 
+   platform's default toolset (``gcc`` for most Unix-based systems).
+
 .. [#incremental] By default, the script runs in what is known as *full mode*: on 
    each ``regression.py`` invocation all the files that were left in place by the 
    previous run -- including the binaries for the successfully built tests and libraries 
