
Replace old docs with excerpts from http://article.gmane.org/gmane.comp.lib.boost.testing/5020

[SVN r41091]
Beman Dawes 18 years ago
commit a9f0686783

+ 100 - 476
tools/regression/xsl_reports/runner/instructions.html

@@ -1,485 +1,109 @@
-<?xml version="1.0" encoding="utf-8" ?>
-<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
-<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+<html>
+
 <head>
-<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
-<meta name="generator" content="Docutils 0.5: http://docutils.sourceforge.net/" />
+<meta http-equiv="Content-Language" content="en-us">
+<meta name="GENERATOR" content="Microsoft FrontPage 5.0">
+<meta name="ProgId" content="FrontPage.Editor.Document">
+<meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
 <title>Running Boost Regression Tests</title>
-<style type="text/css">
-
-/*
-:Author: David Goodger
-:Contact: goodger@users.sourceforge.net
-:date: $Date$
-:version: $Revision$
-:copyright: This stylesheet has been placed in the public domain.
-
-Default cascading style sheet for the HTML output of Docutils.
-*/
-
-body {
-    background-color: #fffff5;
-}
-
-h2 {
-    text-decoration: underline;
-}
-
-.first {
-  margin-top: 0 }
-
-.last {
-  margin-bottom: 0 }
-
-a.toc-backref {
-  text-decoration: none ;
-  color: black }
-
-blockquote.epigraph {
-  margin: 2em 5em ; }
-
-dd {
-  margin-bottom: 0.5em }
-
-div.abstract {
-  margin: 2em 5em }
-
-div.abstract p.topic-title {
-  font-weight: bold ;
-  text-align: center }
-
-div.attention, div.caution, div.danger, div.error, div.hint,
-div.important, div.note, div.tip, div.warning, div.admonition {
-  margin: 2em ;
-  border: medium outset ;
-  padding: 1em }
-
-div.attention p.admonition-title, div.caution p.admonition-title,
-div.danger p.admonition-title, div.error p.admonition-title,
-div.warning p.admonition-title {
-  color: red ;
-  font-weight: bold ;
-  font-family: sans-serif }
-
-div.hint p.admonition-title, div.important p.admonition-title,
-div.note p.admonition-title, div.tip p.admonition-title,
-div.admonition p.admonition-title {
-  font-weight: bold ;
-  font-family: sans-serif }
-
-div.dedication {
-  margin: 2em 5em ;
-  text-align: center ;
-  font-style: italic }
-
-div.dedication p.topic-title {
-  font-weight: bold ;
-  font-style: normal }
-
-div.figure {
-  margin-left: 2em }
-
-div.sidebar {
-  margin-left: 1em ;
-  border: medium outset ;
-  padding: 0em 1em ;
-  background-color: #ffffee ;
-  width: 40% ;
-  float: right ;
-  clear: right }
-
-div.sidebar p.rubric {
-  font-family: sans-serif ;
-  font-size: medium }
-
-div.system-messages {
-  margin: 5em }
-
-div.system-messages h1 {
-  color: red }
-
-div.system-message {
-  border: medium outset ;
-  padding: 1em }
-
-div.system-message p.system-message-title {
-  color: red ;
-  font-weight: bold }
-
-div.topic {
-  margin: 2em }
-
-h1.title {
-  text-align: center }
-
-h2.subtitle {
-  text-align: center }
-
-ol.simple, ul.simple {
-  margin-bottom: 1em }
-
-ol.arabic {
-  list-style: decimal }
-
-ol.loweralpha {
-  list-style: lower-alpha }
-
-ol.upperalpha {
-  list-style: upper-alpha }
-
-ol.lowerroman {
-  list-style: lower-roman }
-
-ol.upperroman {
-  list-style: upper-roman }
-
-p.attribution {
-  text-align: right ;
-  margin-left: 50% }
-
-p.caption {
-  font-style: italic }
-
-p.credits {
-  font-style: italic ;
-  font-size: smaller }
-
-p.label {
-  white-space: nowrap }
-
-p.rubric {
-  font-weight: bold ;
-  font-size: larger ;
-  color: maroon ;
-  text-align: center }
-
-p.sidebar-title {
-  font-family: sans-serif ;
-  font-weight: bold ;
-  font-size: larger }
-
-p.sidebar-subtitle {
-  font-family: sans-serif ;
-  font-weight: bold }
-
-p.topic-title {
-  font-weight: bold }
-
-pre.address {
-  margin-bottom: 0 ;
-  margin-top: 0 ;
-  font-family: serif ;
-  font-size: 100% }
-
-pre.line-block {
-  font-family: serif ;
-  font-size: 100% }
-
-pre.literal-block, pre.doctest-block {
-  margin-left: 2em ;
-  margin-right: 2em ;
-  background-color: #eeeeee }
-
-span.classifier {
-  font-family: sans-serif ;
-  font-style: oblique }
-
-span.classifier-delimiter {
-  font-family: sans-serif ;
-  font-weight: bold }
-
-span.interpreted {
-  font-family: sans-serif }
-
-span.option {
-  white-space: nowrap }
-
-span.option-argument {
-  font-style: italic }
-
-span.pre {
-  white-space: pre }
-
-span.problematic {
-  color: red }
-
-table {
-  margin-top: 0.5em ;
-  margin-bottom: 0.5em }
-
-table.citation {
-  border-left: solid thin gray ;
-  padding-left: 0.5ex }
-
-table.docinfo {
-  margin: 2em 4em }
-
-table.footnote {
-  border-left: solid thin black ;
-  padding-left: 0.5ex }
-
-td, th {
-  padding-left: 0.5em ;
-  padding-right: 0.5em ;
-  vertical-align: top }
-
-th.docinfo-name, th.field-name {
-  font-weight: bold ;
-  text-align: left ;
-  white-space: nowrap }
-
-h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt {
-  font-size: 100% }
-
-tt {
-  background-color: #eeeeee }
-
-ul.auto-toc {
-  list-style-type: none }
-
-</style>
+<link rel="stylesheet" type="text/css" href="../../../../doc/html/minimal.css">
 </head>
+
 <body>
-<div class="document" id="running-boost-regression-tests">
-<h1 class="title">Running Boost Regression Tests</h1>
 
-<div class="section" id="requirements">
+<table border="0" cellpadding="5" cellspacing="0" style="border-collapse: collapse" bordercolor="#111111" width="831">
+  <tr>
+    <td width="277">
+<a href="../../../../index.htm">
+<img src="../../../../boost.png" alt="boost.png (6897 bytes)" align="middle" width="277" height="86" border="0"></a></td>
+    <td width="531" align="middle">
+    <font size="7">Running Boost Regression Tests</font>
+    </td>
+  </tr>
+</table>
+
 <h2>Requirements</h2>
-<ul class="simple">
-<li>Python 2.3 or higher</li>
-<li>Some spare disk space (~5 Gb per each tested compiler)</li>
-</ul>
-<p>That's it! You don't even need an SVN client installed.</p>
-</div>
-<div class="section" id="installation">
-<h2>Installation</h2>
-<ul class="simple">
-<li>Download regression driver <tt class="docutils literal"><span class="pre">regression.py</span></tt> from <a class="reference external" href="http://svn.boost.org/svn/boost/trunk/tools/regression/xsl_reports/runner/regression.py">here</a> (<a class="reference external" href="http://tinyurl.com/236tty">http://tinyurl.com/236tty</a>)
-and put it in the directory where you want all the regression
-test files to be placed.</li>
-</ul>
 <ul>
-<li><p class="first"><strong>Optional</strong>: If you already have <tt class="docutils literal"><span class="pre">bjam</span></tt> and/or <tt class="docutils literal"><span class="pre">process_jam_log</span></tt> executables
-you'd like to use, just put them in the same directory with <tt class="docutils literal"><span class="pre">regression.py</span></tt>, e.g.:</p>
-<pre class="literal-block">
-my_boost_regressions/
-    regression.py
-    bjam<em>[.exe]</em>
-</pre>
-</li>
+  <li>Python 2.3 or later.<br>
+&nbsp;</li>
+  <li>Subversion 1.4 or later.<br>
+&nbsp;</li>
+  <li>At least 5 gigabytes of disk space per compiler to be tested.</li>
 </ul>
-</div>
-<div class="section" id="running-tests">
-<h2>Running tests</h2>
-<p>To start a regression run, simply run <tt class="docutils literal"><span class="pre">regression.py</span></tt> providing it with the following
-two arguments:</p>
-<ul class="simple">
-<li>runner id (something unique of your choice that will identify your
-results in the reports <a class="footnote-reference" href="#runnerid1" id="id2">[1]</a>, <a class="footnote-reference" href="#runnerid2" id="id3">[2]</a>)</li>
-<li>a particular set of toolsets you want to test with <a class="footnote-reference" href="#toolsets" id="id4">[3]</a>.</li>
-</ul>
-<p>For example:</p>
-<pre class="literal-block">
-python regression.py --runner=Metacomm --toolsets=gcc-4.2.1,msvc-8.0
-</pre>
-<p>If you are interested in seeing all available options, run <tt class="docutils literal"><span class="pre">python</span> <span class="pre">regression.py</span></tt>
-or <tt class="docutils literal"><span class="pre">python</span> <span class="pre">regression.py</span> <span class="pre">--help</span></tt>. See also the <a class="reference internal" href="#advanced-use">Advanced use</a> section below.</p>
-<p><strong>Note</strong>: If you are behind a firewall/proxy server, everything should still &quot;just work&quot;.
-In the rare cases when it doesn't, you can explicitly specify the proxy server
-parameters through the <tt class="docutils literal"><span class="pre">--proxy</span></tt> option, e.g.:</p>
-<pre class="literal-block">
-python regression.py ... <strong>--proxy=http://www.someproxy.com:3128</strong>
-</pre>
-</div>
-<div class="section" id="details">
-<h2>Details</h2>
-<p>The regression run procedure will:</p>
-<ul class="simple">
-<li>Download the most recent tarball from <a class="reference external" href="http://www.meta-comm.com/engineering/boost/snapshot/">http://www.meta-comm.com/engineering/boost/snapshot/</a>,
-unpack it in the subdirectory <tt class="docutils literal"><span class="pre">boost</span></tt>.</li>
-<li>Build <tt class="docutils literal"><span class="pre">bjam</span></tt> and <tt class="docutils literal"><span class="pre">process_jam_log</span></tt> if needed. (<tt class="docutils literal"><span class="pre">process_jam_log</span></tt> is an
-utility, which extracts the test results from the log file produced by
-Boost.Build).</li>
-<li>Run regression tests, process and collect the results.</li>
-<li>Upload the results to <a class="reference external" href="ftp://fx.meta-comm.com/boost-regression">ftp://fx.meta-comm.com/boost-regression</a>.</li>
-</ul>
-<p>The report merger process running continuously on MetaCommunications site will
-merge all submitted test runs and publish them at
-<a class="reference external" href="http://engineering.meta-comm.com/boost-regression/">http://engineering.meta-comm.com/boost-regression/</a>.</p>
-</div>
-<div class="section" id="advanced-use">
-<h2>Advanced use</h2>
-<div class="section" id="providing-detailed-information-about-your-environment">
-<h3>Providing detailed information about your environment</h3>
-<p>Once you have your regression results displayed in the Boost-wide
-reports, you may consider providing a bit more information about
-yourself and your test environment. This additional information will
-be presented in the reports on a page associated with your runner ID.</p>
-<p>By default, the page's content is just a single line coming from the
-<tt class="docutils literal"><span class="pre">comment.html</span></tt> file in your <tt class="docutils literal"><span class="pre">regression.py</span></tt> directory, specifying
-the tested platform. You can put online a more detailed description of
-your environment, such as your hardware configuration, compiler builds,
-and test schedule, by simply altering the file's content. Also, please
-consider providing your name and email address for cases where Boost
-developers have questions specific to your particular set of results.</p>
-</div>
-<div class="section" id="incremental-runs">
-<h3>Incremental runs</h3>
-<p>You can run <tt class="docutils literal"><span class="pre">regression.py</span></tt> in incremental mode <a class="footnote-reference" href="#incremental" id="id5">[4]</a> by simply passing
-it an identically named command-line flag:</p>
-<pre class="literal-block">
-python regression.py ... <strong>--incremental</strong>
-</pre>
-</div>
-<div class="section" id="dealing-with-misbehaved-tests-compilers">
-<h3>Dealing with misbehaved tests/compilers</h3>
-<p>Depending on the environment/C++ runtime support library the test is compiled with,
-a test failure/termination may cause an appearance of a dialog window, requiring
-human intervention to proceed. Moreover, the test (or even of the compiler itself)
-can fall into infinite loop, or simply run for too long. To allow <tt class="docutils literal"><span class="pre">regression.py</span></tt>
-to take care of these obstacles, add the <tt class="docutils literal"><span class="pre">--monitored</span></tt> flag to the script
-invocation:</p>
-<pre class="literal-block">
-python regression.py ... <strong>--monitored</strong>
-</pre>
-<p>That's it. Knowing your intentions, the script will be able to automatically deal
-with the listed issues <a class="footnote-reference" href="#monitored" id="id6">[5]</a>.</p>
-</div>
-<div class="section" id="getting-sources-from-svn">
-<h3>Getting sources from SVN</h3>
-<p>If you already have an SVN client installed and configured, you might
-prefer to get the sources directly from the <a class="reference external" href="http://svn.boost.org/trac/boost/wiki/BoostSubversion">Boost Subversion
-Repository</a>. To communicate this to the script, you just need to
-pass it your Boost SVN user ID using the <tt class="docutils literal"><span class="pre">--user</span></tt> option; for
-instance:</p>
-<pre class="literal-block">
-python regression.py ... <strong>--user=agurtovoy</strong>
-</pre>
-<p>You can also specify the user as <tt class="docutils literal"><span class="pre">anonymous</span></tt>, requesting anonymous
-SVN access.</p>
-<p>The main advantage of obtaining the sources through SVN is an
-immediate availability of the most recent check-ins: the sources
-extracted from a tarball the script downloads by default can be up to
-one hour behind the actual repository state at the time of test run.</p>
-</div>
-<div class="section" id="integration-with-a-custom-driver-script">
-<h3>Integration with a custom driver script</h3>
-<p>Even if you've already been using a custom driver script, and for some
-reason you don't  want <tt class="docutils literal"><span class="pre">regression.py</span></tt> to take over of the entire test cycle,
-getting your regression results into <a class="reference external" href="http://www.boost.org/regression-logs/developer/">Boost-wide reports</a> is still easy!</p>
-<p>In fact, it's just a matter of modifying your script to perform two straightforward
-operations:</p>
-<ol class="arabic">
-<li><p class="first"><em>Timestamp file creation</em> needs to be done before the SVN update/checkout.
-The file's location doesn't matter (nor does the content), as long as you know how
-to access it later. Making your script to do something as simple as
-<tt class="docutils literal"><span class="pre">echo</span> <span class="pre">&gt;timestamp</span></tt> would work just fine.</p>
-</li>
-<li><p class="first"><em>Collecting and uploading logs</em> can be done any time after <tt class="docutils literal"><span class="pre">process_jam_log</span></tt>' s
-run, and is as simple as an invocation of the local copy of
-<tt class="docutils literal"><span class="pre">$BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py</span></tt>
-script that was just obtained from the SVN with the rest of the sources.
-You'd need to provide <tt class="docutils literal"><span class="pre">collect_and_upload_logs.py</span></tt> with the following three
-arguments:</p>
-<pre class="literal-block">
---locate-root   directory to to scan for &quot;test_log.xml&quot; files
---runner        runner ID (e.g. &quot;Metacomm&quot;)
---timestamp     path to a file which modification time will be used
-                as a timestamp of the run (&quot;timestamp&quot; by default)
-</pre>
-<p>For example, assuming that the run's resulting  binaries are in the
-<tt class="docutils literal"><span class="pre">$BOOST_ROOT/bin</span></tt> directory (the default Boost.Build setup), the
-<tt class="docutils literal"><span class="pre">collect_and_upload_logs.py</span></tt> invocation might look like this:</p>
-<pre class="literal-block">
-python $BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py
-   --locate-root=$BOOST_ROOT/bin
-   --runner=Metacomm
-   --timestamp=timestamp
-</pre>
-</li>
-</ol>
-</div>
-<div class="section" id="patching-boost-sources">
-<h3>Patching Boost sources</h3>
-<p>You might encounter an occasional need to make local modifications to
-the Boost codebase before running the tests, without disturbing the
-automatic nature of the regression process. To implement this under
-<tt class="docutils literal"><span class="pre">regression.py</span></tt>:</p>
-<ol class="arabic simple">
-<li>Codify applying the desired modifications to the sources
-located in the <tt class="docutils literal"><span class="pre">./boost</span></tt> subdirectory in a single executable
-script named <tt class="docutils literal"><span class="pre">patch_boost</span></tt> (<tt class="docutils literal"><span class="pre">patch_boost.bat</span></tt> on Windows).</li>
-<li>Place the script in the <tt class="docutils literal"><span class="pre">regression.py</span></tt> directory.</li>
+<h2>Step by step instructions</h2>
+<ol>
+  <li>Create a new directory for the branch you want to test.<br>
+&nbsp;</li>
+  <li>Download the
+  <a href="http://svn.boost.org/svn/boost/trunk/tools/regression/src/run.py">
+  run.py</a> script into that directory.<br>
+&nbsp;</li>
+  <li>Run &quot;<code>python run.py [options] [commands]</code>&quot;.</li>
 </ol>
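The three steps above can be sketched as a shell session. This is only an illustration: the directory name, runner ID, and toolset list are placeholders, and the download/run steps are shown commented out because they need network access and a configured build environment.

```shell
# Step 1: create a working directory for the branch under test
# (the name is illustrative; use whatever layout you prefer)
mkdir -p boost-trunk-regression

# Steps 2 and 3, commented out here; 'MyRunner' and the toolsets
# are placeholders, not values prescribed by these instructions:
#   cd boost-trunk-regression
#   wget http://svn.boost.org/svn/boost/trunk/tools/regression/src/run.py
#   python run.py --runner=MyRunner --toolsets=gcc-4.2.1,msvc-8.0
```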
-<p>The driver will check for the existence of the <tt class="docutils literal"><span class="pre">patch_boost</span></tt> script,
-and, if found, execute it after obtaining the Boost sources.</p>
-</div>
-</div>
-<div class="section" id="feedback">
-<h2>Feedback</h2>
-<p>Please send all comments/suggestions regarding this document and the testing procedure
-itself to the <a class="reference external" href="http://lists.boost.org/mailman/listinfo.cgi/boost-testing">Boost Testing list</a>.</p>
-</div>
-<div class="section" id="notes">
-<h2>Notes</h2>
-<table class="docutils footnote" frame="void" id="runnerid1" rules="none">
-<colgroup><col class="label" /><col /></colgroup>
-<tbody valign="top">
-<tr><td class="label"><a class="fn-backref" href="#id2">[1]</a></td><td>If you are running regressions interlacingly with a different
-set of compilers (e.g. for Intel in the morning and GCC at the end of the day), you need
-to provide a <em>different</em> runner id for each of these runs, e.g. <tt class="docutils literal"><span class="pre">your_name-intel</span></tt>, and
-<tt class="docutils literal"><span class="pre">your_name-gcc</span></tt>.</td></tr>
-</tbody>
-</table>
-<table class="docutils footnote" frame="void" id="runnerid2" rules="none">
-<colgroup><col class="label" /><col /></colgroup>
-<tbody valign="top">
-<tr><td class="label"><a class="fn-backref" href="#id3">[2]</a></td><td>The limitations of the reports' format/medium impose a direct dependency
-between the number of compilers you are testing with and the amount of space available
-for your runner id. If you are running regressions for a single compiler, please make
-sure to choose a short enough id that does not significantly disturb the reports' layout.</td></tr>
-</tbody>
-</table>
-<table class="docutils footnote" frame="void" id="toolsets" rules="none">
-<colgroup><col class="label" /><col /></colgroup>
-<tbody valign="top">
-<tr><td class="label"><a class="fn-backref" href="#id4">[3]</a></td><td>If <tt class="docutils literal"><span class="pre">--toolsets</span></tt> option is not provided, the script will try to use the
-platform's default toolset (<tt class="docutils literal"><span class="pre">gcc</span></tt> for most Unix-based systems).</td></tr>
-</tbody>
-</table>
-<table class="docutils footnote" frame="void" id="incremental" rules="none">
-<colgroup><col class="label" /><col /></colgroup>
-<tbody valign="top">
-<tr><td class="label"><a class="fn-backref" href="#id5">[4]</a></td><td><p class="first">By default, the script runs in what is known as <em>full mode</em>: on
-each <tt class="docutils literal"><span class="pre">regression.py</span></tt> invocation all the files that were left in place by the
-previous run -- including the binaries for the successfully built tests and libraries
--- are deleted, and everything is rebuilt once again from scratch. By contrast, in
-<em>incremental mode</em> the already existing binaries are left intact, and only the
-tests and libraries which source files has changed since the previous run are
-re-built and re-tested.</p>
-<p>The main advantage of incremental runs is a significantly shorter turnaround time,
-but unfortunately they don't always produce reliable results. Some type of changes
-to the codebase (changes to the bjam testing subsystem in particular)
-often require switching to a full mode for one cycle in order to produce
-trustworthy reports.</p>
-<p class="last">As a general guideline, if you can afford it, testing in full mode is preferable.</p>
-</td></tr>
-</tbody>
-</table>
-<table class="docutils footnote" frame="void" id="monitored" rules="none">
-<colgroup><col class="label" /><col /></colgroup>
-<tbody valign="top">
-<tr><td class="label"><a class="fn-backref" href="#id6">[5]</a></td><td>Note that at the moment this functionality is available only if you
-are running on a Windows platform. Contributions are welcome!</td></tr>
-</tbody>
-</table>
-</div>
-</div>
-<div class="footer">
-<hr class="footer" />
-Generated on: 2007-08-05 04:33 UTC.
-Generated by <a class="reference external" href="http://docutils.sourceforge.net/">Docutils</a> from <a class="reference external" href="http://docutils.sourceforge.net/rst.html">reStructuredText</a> source.
-
-</div>
-</body>
-</html>
+<dl>
+  <dd>
+  <pre>commands: cleanup, collect-logs, get-source, get-tools, patch,
+regression, setup, show-revision, test, test-clean, test-process,
+test-run, update-source, upload-logs
+
+options:
+   -h, --help            show this help message and exit
+   --runner=RUNNER       runner ID (e.g. 'Metacomm')
+   --comment=COMMENT     an HTML comment file to be inserted in the
+                         reports
+   --tag=TAG             the tag for the results
+   --toolsets=TOOLSETS   comma-separated list of toolsets to test with
+   --incremental         do incremental run (do not remove previous
+                         binaries)
+   --timeout=TIMEOUT     specifies the timeout, in minutes, for a single
+                         test run/compilation
+   --bjam-options=BJAM_OPTIONS
+                         options to pass to the regression test
+   --bjam-toolset=BJAM_TOOLSET
+                         bootstrap toolset for 'bjam' executable
+   --pjl-toolset=PJL_TOOLSET
+                         bootstrap toolset for 'process_jam_log'
+                         executable
+   --platform=PLATFORM
+   --user=USER           Boost SVN user ID
+   --local=LOCAL         the name of the boost tarball
+   --force-update=FORCE_UPDATE
+                         do an SVN update (if applicable) instead of a
+                         clean checkout, even when performing a full run
+   --have-source=HAVE_SOURCE
+                         do neither a tarball download nor an SVN update;
+                         used primarily for testing script changes
+   --proxy=PROXY         HTTP proxy server address and port
+                         (e.g. 'http://www.someproxy.com:3128')
+   --ftp-proxy=FTP_PROXY
+                         FTP proxy server (e.g. 'ftpproxy')
+   --dart-server=DART_SERVER
+                         the dart server to send results to
+   --debug-level=DEBUG_LEVEL
+                         debugging level; controls the amount of
+                         debugging output printed
+   --send-bjam-log       send full bjam log of the regression run
+   --mail=MAIL           email address to send run notification to
+   --smtp-login=SMTP_LOGIN
+                         SMTP server address/login information, in the
+                         following form:
+                         &lt;user&gt;:&lt;password&gt;@&lt;host&gt;[:&lt;port&gt;]
+   --skip-tests=SKIP_TESTS
+                         do not run bjam; used for testing script changes</pre>
+  </dd>
+</dl>
+<p>To test trunk use &quot;<code>--tag=trunk</code>&quot; (the default), and to test the 
+release use &quot;<code>--tag=branches/release</code>&quot;. Or substitute any Boost tree 
+of your choice.</p>
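As a sketch, the two tag choices described above translate into invocations like the following (the runner ID `MyRunner` is a placeholder; these lines only build and print the command strings rather than starting a run):

```shell
# Test the trunk (the default tag):
TRUNK_CMD="python run.py --runner=MyRunner --tag=trunk"

# Test the release branch:
RELEASE_CMD="python run.py --runner=MyRunner --tag=branches/release"

echo "$TRUNK_CMD"
echo "$RELEASE_CMD"
```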
+
+<hr>
+
+<p>© Copyright Rene Rivera, 2007<br>
+Distributed under the Boost Software License, Version 1.0. See
+<a href="http://www.boost.org/LICENSE_1_0.txt">www.boost.org/LICENSE_1_0.txt</a></p>
+
+<p>Revised November 14, 2007</p>
+
+</body>
+
+</html>

+ 0 - 258
tools/regression/xsl_reports/runner/instructions.rst

@@ -1,258 +0,0 @@
-Running Boost Regression Tests
-==============================
-
-
-Requirements
-------------
-
-* Python 2.3 or higher
-* Some spare disk space (~5 Gb per each tested compiler)
-
-That's it! You don't even need an SVN client installed.
-
-Installation
-------------
-
-* Download regression driver ``regression.py`` from here__ (http://tinyurl.com/236tty)
-  and put it in the directory where you want all the regression 
-  test files to be placed.
-
-__ http://svn.boost.org/svn/boost/trunk/tools/regression/xsl_reports/runner/regression.py
-
-
-* **Optional**: If you already have ``bjam`` and/or ``process_jam_log`` executables
-  you'd like to use, just put them in the same directory with ``regression.py``, e.g.:
-
-  .. parsed-literal::
-
-    my_boost_regressions/
-        regression.py
-        bjam\ *[.exe]*
-
-
-Running tests
--------------
-
-To start a regression run, simply run ``regression.py`` providing it with the following
-two arguments:
-
-- runner id (something unique of your choice that will identify your 
-  results in the reports [#runnerid1]_, [#runnerid2]_)
-
-- a particular set of toolsets you want to test with [#toolsets]_.
-
-For example::
-
-    python regression.py --runner=Metacomm --toolsets=gcc-4.2.1,msvc-8.0
-    
-
-If you are interested in seeing all available options, run ``python regression.py``
-or ``python regression.py --help``. See also the `Advanced use`_ section below.
-  
-**Note**: If you are behind a firewall/proxy server, everything should still "just work". 
-In the rare cases when it doesn't, you can explicitly specify the proxy server 
-parameters through the ``--proxy`` option, e.g.:
-
-.. parsed-literal::
-
-    python regression.py ... **--proxy=http://www.someproxy.com:3128**
-
-
-Details
--------
-
-The regression run procedure will:
-
-* Download the most recent tarball from http://www.meta-comm.com/engineering/boost/snapshot/,
-  unpack it in the subdirectory ``boost``.
-
-* Build ``bjam`` and ``process_jam_log`` if needed. (``process_jam_log`` is an
-  utility, which extracts the test results from the log file produced by 
-  Boost.Build).
-
-* Run regression tests, process and collect the results.
-
-* Upload the results to ftp://fx.meta-comm.com/boost-regression.
-
-
-The report merger process running continuously on MetaCommunications site will 
-merge all submitted test runs and publish them at 
-http://engineering.meta-comm.com/boost-regression/.
-
-
-Advanced use
-------------
-
-Providing detailed information about your environment
-.....................................................
-
-Once you have your regression results displayed in the Boost-wide
-reports, you may consider providing a bit more information about
-yourself and your test environment. This additional information will
-be presented in the reports on a page associated with your runner ID.
-
-By default, the page's content is just a single line coming from the
-``comment.html`` file in your ``regression.py`` directory, specifying
-the tested platform. You can put online a more detailed description of
-your environment, such as your hardware configuration, compiler builds,
-and test schedule, by simply altering the file's content. Also, please
-consider providing your name and email address for cases where Boost
-developers have questions specific to your particular set of results.
-
-
-Incremental runs
-................
-
-You can run ``regression.py`` in incremental mode [#incremental]_ by simply passing 
-it an identically named command-line flag:
-
-.. parsed-literal::
-
-      python regression.py ... **--incremental**
-
-
-Dealing with misbehaved tests/compilers
-.......................................
-
-Depending on the environment/C++ runtime support library the test is compiled with, 
-a test failure/termination may cause an appearance of a dialog window, requiring
-human intervention to proceed. Moreover, the test (or even of the compiler itself)
-can fall into infinite loop, or simply run for too long. To allow ``regression.py`` 
-to take care of these obstacles, add the ``--monitored`` flag to the script 
-invocation:
-
-.. parsed-literal::
-
-      python regression.py ... **--monitored**
-
-
-That's it. Knowing your intentions, the script will be able to automatically deal 
-with the listed issues [#monitored]_.
-
-
-Getting sources from SVN
-........................
-
-If you already have an SVN client installed and configured, you might
-prefer to get the sources directly from the `Boost Subversion
-Repository`__. To communicate this to the script, you just need to
-pass it your Boost SVN user ID using the ``--user`` option; for
-instance:
-
-__ http://svn.boost.org/trac/boost/wiki/BoostSubversion
-
-.. parsed-literal::
-
-      python regression.py ... **--user=agurtovoy**
-
-You can also specify the user as ``anonymous``, requesting anonymous
-SVN access.  
-
-The main advantage of obtaining the sources through SVN is an
-immediate availability of the most recent check-ins: the sources
-extracted from a tarball the script downloads by default can be up to
-one hour behind the actual repository state at the time of test run.
-
-
-Integration with a custom driver script
-.......................................
-
-Even if you've already been using a custom driver script, and for some 
-reason you don't  want ``regression.py`` to take over of the entire test cycle, 
-getting your regression results into `Boost-wide reports`__ is still easy!
-
-In fact, it's just a matter of modifying your script to perform two straightforward 
-operations:
-
-1. *Timestamp file creation* needs to be done before the SVN update/checkout.
-   The file's location doesn't matter (nor does the content), as long as you know how 
-   to access it later. Making your script to do something as simple as
-   ``echo >timestamp`` would work just fine.
-
-2. *Collecting and uploading logs* can be done any time after ``process_jam_log``' s
-   run, and is as simple as an invocation of the local copy of
-   ``$BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py``
-   script that was just obtained from the SVN with the rest of the sources.
-   You'd need to provide ``collect_and_upload_logs.py`` with the following three
-   arguments::
-
-        --locate-root   directory to to scan for "test_log.xml" files
-        --runner        runner ID (e.g. "Metacomm")
-        --timestamp     path to a file which modification time will be used 
-                        as a timestamp of the run ("timestamp" by default)
-
-   For example, assuming that the run's resulting  binaries are in the
-   ``$BOOST_ROOT/bin`` directory (the default Boost.Build setup), the 
-   ``collect_and_upload_logs.py`` invocation might look like this::
-
-       python $BOOST_ROOT/tools/regression/xsl_reports/runner/collect_and_upload_logs.py 
-          --locate-root=$BOOST_ROOT/bin
-          --runner=Metacomm
-          --timestamp=timestamp
-
-
-__ http://www.boost.org/regression-logs/developer/
-
-
-Patching Boost sources
-......................
-
-You might encounter an occasional need to make local modifications to
-the Boost codebase before running the tests, without disturbing the
-automatic nature of the regression process. To implement this under
-``regression.py``:
-
-1. Codify applying the desired modifications to the sources
-   located in the ``./boost`` subdirectory in a single executable
-   script named ``patch_boost`` (``patch_boost.bat`` on Windows).
-
-2. Place the script in the ``regression.py`` directory.
-
-The driver will check for the existence of the ``patch_boost`` script,
-and, if found, execute it after obtaining the Boost sources.
-
-
-Feedback
---------
-
-Please send all comments/suggestions regarding this document and the testing procedure 
-itself to the `Boost Testing list`__.
-
-__ http://lists.boost.org/mailman/listinfo.cgi/boost-testing
-
-
-Notes
------
-
-.. [#runnerid1] If you are running regressions interlacingly with a different 
-   set of compilers (e.g. for Intel in the morning and GCC at the end of the day), you need 
-   to provide a *different* runner id for each of these runs, e.g. ``your_name-intel``, and
-   ``your_name-gcc``.
-
-.. [#runnerid2] The limitations of the reports' format/medium impose a direct dependency
-   between the number of compilers you are testing with and the amount of space available 
-   for your runner id. If you are running regressions for a single compiler, please make 
-   sure to choose a short enough id that does not significantly disturb the reports' layout.
-
-.. [#toolsets] If ``--toolsets`` option is not provided, the script will try to use the 
-   platform's default toolset (``gcc`` for most Unix-based systems).
-
-.. [#incremental] By default, the script runs in what is known as *full mode*: on 
-   each ``regression.py`` invocation all the files that were left in place by the 
-   previous run -- including the binaries for the successfully built tests and libraries 
-   -- are deleted, and everything is rebuilt once again from scratch. By contrast, in 
-   *incremental mode* the already existing binaries are left intact, and only the 
-   tests and libraries which source files has changed since the previous run are 
-   re-built and re-tested.
-
-   The main advantage of incremental runs is a significantly shorter turnaround time, 
-   but unfortunately they don't always produce reliable results. Some type of changes
-   to the codebase (changes to the bjam testing subsystem in particular)
-   often require switching to a full mode for one cycle in order to produce 
-   trustworthy reports. 
-   
-   As a general guideline, if you can afford it, testing in full mode is preferable.
-
-.. [#monitored] Note that at the moment this functionality is available only if you 
-   are running on a Windows platform. Contributions are welcome!
-   

+ 0 - 1
tools/regression/xsl_reports/runner/instructions2html

@@ -1 +0,0 @@
-rst2html.py -dtg --embed-stylesheet --stylesheet=default.css --initial-header-level=2 instructions.rst instructions.html
