diff --git a/README.html b/README.html index bec684d25994e7a0d2a411cea1e212d4a30b3f74..6efe21d66b14fb4a2c72d1565976324f68cf7e65 100644 --- a/README.html +++ b/README.html @@ -34,7 +34,7 @@ </head> <body> <h1 id="artifacts-for-continuous-model-validation-using-reference-attribute-grammars">Artifacts for "Continuous Model Validation Using Reference Attribute Grammars"</h1> -<p><em>Note: There is a variant of this submission including a docker image (provided as a link) and one without it (uploaded in HotCRP). We ecourage using the one including the image, since building the image takes a long time.</em></p> +<p><em>Note: There is a variant of this submission including a docker image (provided as a link) and one without it (uploaded in HotCRP). We encourage using the one including the image, since building the image takes a long time.</em></p> <h3 id="authors">Authors</h3> <ul> <li>Johannes Mey <a href="mailto:johannes.mey@tu-dresden.de">johannes.mey@tu-dresden.de</a></li> @@ -60,11 +60,6 @@ <p>For reasons of readability and simplicity, there are some minor differences in naming in the source codes and the measured resulting data. Most importantly, the names of the three presented JastAdd implementation variants are different in the code and the diagrams.</p> <p>The following table shows the relation of the terminology used in the paper and in the code.</p> <table> -<colgroup> -<col width="28%" /> -<col width="35%" /> -<col width="35%" /> -</colgroup> <thead> <tr class="header"> <th>Name used in Paper</th> @@ -74,19 +69,19 @@ </thead> <tbody> <tr class="odd"> -<td><p>Name Lookup</p></td> -<td><p>Jastadd (Name Lookup)</p></td> -<td><p>jastadd-namelookup</p></td> +<td>Name Lookup</td> +<td>Jastadd (Name Lookup)</td> +<td>jastadd-namelookup</td> </tr> <tr class="even"> -<td><p>Intrinsic References</p></td> -<td><p>Jastadd (Optimized)</p></td> -<td><p>jastadd-optimized</p></td> +<td>Intrinsic References</td> +<td>Jastadd (Optimized)</td> +<td>jastadd-optimized</td> </tr> <tr class="odd"> -<td><p>Grammar Extension</p></td> -<td><p>Jastadd (Specialized)</p></td> -<td><p>jastadd-specialized</p></td> +<td>Grammar Extension</td> +<td>Jastadd (Specialized)</td> +<td>jastadd-specialized</td> </tr> </tbody> </table> @@ -139,8 +134,8 @@ <p><strong>Please Note:</strong> The measurements were conducted using a timeout for the whole run. If a run was not completed, no individual times of the steps appear in the measurements and diagrams. Thus, some tools do not have measurements for all problem sizes.</p> <h3 id="the-source-code">The Source Code</h3> <p>For this publication, we tried to modify the source code of the benchmark itself as little as possible. Therefore, unfortunately, the code base is rather large and confusing. The following section tries to point to the parts relevant for this paper.</p> -<p>The benchmark is structured in modules, some of which form the code of the benchmark, some are provided by the contesting tools, and some are related to required model serializations. There are some naming conventions: - Tool-related modules are in directories starting with <code>trainbenchmark-tool</code>. - Model serialization-related modules start with <code>trainbenchmark-generator</code>. - All other modules are core modules of the bechmark.</p> -<p>The JastAdd-based solutions use a preprocessor to generate Java files, for the presented variant. Each JastAdd configuraration must be presented to the benchmark as a separate tool. 
Thus there are two directories for each variant, one for the batch processing mode and one for the incremental mode. Because these two modes share almost all the source code, a third directory is used to store this shared code. Finally, there is a directory for code shared between all JastAdd variants. These are the important directories:</p> +<p>The benchmark is structured in modules, some of which form the code of the benchmark, some are provided by the contesting tools, and some are related to required model serializations. There are some naming conventions: - Tool-related modules are in directories starting with <code>trainbenchmark-tool</code>. - Model serialization-related modules start with <code>trainbenchmark-generator</code>. - All other modules are core modules of the benchmark.</p> +<p>The JastAdd-based solutions use a preprocessor to generate Java files for the presented variants. Each JastAdd configuration must be presented to the benchmark as a separate tool. Thus, there are two directories for each variant, one for the batch processing mode and one for the incremental mode. Because these two modes share almost all the source code, a third directory is used to store this shared code. Finally, there is a directory for code shared between all JastAdd variants. These are the important directories:</p> <ul> <li><a href="trainbenchmark/trainbenchmark-tool-jastadd-namelookup-base">JastAdd with Name Lookup</a> <ul> @@ -163,8 +158,8 @@ <li><a href="trainbenchmark/trainbenchmark-tool-jastadd-base">Common JastAdd Code</a></li> </ul> <h3 id="reproducing-the-measurements">Reproducing the Measurements</h3> -<p><strong><span style="color:red">Please Note: Reproducing the graphs as presented in the paper and supplied here takes a very long time depending on the utilized hardware. It is strongly suggested to run the benchmark with a smaller maximum problem size, less repetitions, and a shorter timeout.</span></strong> Most results of the benchmark are observable with more restricted setup as well. In the following, we will provide a suggested way to run the benchmark in different sizes. Note that running the benchmark requires a significant amount of disk space (up to 10GB when running the full benchmark).</p> -<p>To reproduce the measurements, there are several options. We provide a prepared Docker image that can be run directly. Alternatively, it is, on course, also possible to simply run the provided gradle build scripts. However, since there are some software requirements imposed by the benchmark, particularly for creating the diagrams using R. We stronly suggest running the Docker variant.</p> +<p><strong><span style="color:red">Please Note: Reproducing the graphs as presented in the paper and supplied here takes a very long time depending on the utilized hardware. It is strongly suggested to run the benchmark with a smaller maximum problem size, fewer repetitions, and a shorter timeout.</span></strong> Most results of the benchmark are observable with a more restricted setup as well. In the following, we will provide a suggested way to run the benchmark at different sizes. Note that running the benchmark requires a significant amount of disk space (up to 10GB when running the full benchmark).</p> +<p>To reproduce the measurements, there are several options. We provide a prepared Docker image that can be run directly. Alternatively, it is, of course, also possible to simply run the provided gradle build scripts.
However, since there are some software requirements imposed by the benchmark, particularly for creating the diagrams using R, we strongly suggest running the Docker variant.</p> <h4 id="running-the-benchmark-with-docker">Running the Benchmark with Docker</h4> <h5 id="loading-the-docker-image">Loading the Docker Image</h5> <ul> @@ -189,7 +184,7 @@ </ul> <h5 id="running-the-docker-image">Running the Docker Image</h5> <ul> -<li><code>docker run -it -v "$PWD"/docker-results:/trainbenchmark/results:Z -v "$PWD"/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark</code></li> +<li><code>docker run -it -v "$PWD"/docker-results:/trainbenchmark/results:Z -v "$PWD"/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark</code></li> <li>This makes the results and diagrams available outside the container in the directories <code>docker-results</code> and <code>docker-diagrams</code> respectively</li> <li>Once running, a command prompt is opened and some information is displayed</li> <li>Follow the instructions below</li>
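+<p><em>For orientation, a minimal shell sketch of the intended workflow (an illustration only: the <code>docker run</code> line is the one from above, the surrounding commands are ordinary shell, and <code>:Z</code> is an SELinux volume label that other systems ignore):</em></p>
+<pre><code># prepare host directories for the two bind mounts (Docker typically creates them if missing)
+mkdir -p docker-results docker-diagrams
+# run the benchmark container; results and diagrams are written to the mounted directories
+docker run -it -v "$PWD"/docker-results:/trainbenchmark/results:Z -v "$PWD"/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark
+# after the run, inspect the measurements from the host
+ls docker-results
+</code></pre>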
diff --git a/README.md b/README.md index 6ee6797a11a128ed7957075ac4ee1bc0e7cf4316..750ba78685ef8e2d8cce4aaf4b67d0634fb72afc 100644 --- a/README.md +++ b/README.md @@ -39,15 +39,11 @@ Most importantly, the names of the three presented JastAdd implementation varian The following table shows the relation of the terminology used in the paper and in the code. -+-----------------------+-----------------------------+----------------------------+ | Name used in Paper | Name used in result data | Name used in source code | -+=======================+=============================+============================+ +|-----------------------|-----------------------------|----------------------------| | Name Lookup | Jastadd (Name Lookup) | jastadd-namelookup | -+-----------------------+-----------------------------+----------------------------+ | Intrinsic References | Jastadd (Optimized) | jastadd-optimized | -+-----------------------+-----------------------------+----------------------------+ | Grammar Extension | Jastadd (Specialized) | jastadd-specialized | -+-----------------------+-----------------------------+----------------------------+ ## The Grammar Extension Preprocessor *RelAst* diff --git a/README.txt b/README.txt index e0a2675cd734e4df226ad2ff23f0998e29f9d320..750ba78685ef8e2d8cce4aaf4b67d0634fb72afc 100644 --- a/README.txt +++ b/README.txt @@ -2,7 +2,7 @@ *Note: please use the HTML version of this README.* -*Also Note: There is a variant of this submission including a docker image (provided as a link) and one without it (uploaded in HotCRP). We ecourage using the one including the image, since building the image takes a long time.* +*Also Note: There is a variant of this submission including a docker image (provided as a link) and one without it (uploaded in HotCRP). We encourage using the one including the image, since building the image takes a long time.* ### Authors @@ -39,15 +39,11 @@ Most importantly, the names of the three presented JastAdd implementation varian The following table shows the relation of the terminology used in the paper and in the code. -+-----------------------+-----------------------------+----------------------------+ | Name used in Paper | Name used in result data | Name used in source code | -+=======================+=============================+============================+ +|-----------------------|-----------------------------|----------------------------| | Name Lookup | Jastadd (Name Lookup) | jastadd-namelookup | -+-----------------------+-----------------------------+----------------------------+ | Intrinsic References | Jastadd (Optimized) | jastadd-optimized | -+-----------------------+-----------------------------+----------------------------+ | Grammar Extension | Jastadd (Specialized) | jastadd-specialized | -+-----------------------+-----------------------------+----------------------------+ ## The Grammar Extension Preprocessor *RelAst* @@ -120,10 +116,10 @@ The benchmark is structured in modules, some of which form the code of the bench There are some naming conventions: - Tool-related modules are in directories starting with `trainbenchmark-tool`. - Model serialization-related modules start with `trainbenchmark-generator`. -- All other modules are core modules of the bechmark. +- All other modules are core modules of the benchmark. The JastAdd-based solutions use a preprocessor to generate Java files for the presented variants. -Each JastAdd configuraration must be presented to the benchmark as a separate tool. Thus there are two directories for each variant, one for the batch processing mode and one for the incremental mode. +Each JastAdd configuration must be presented to the benchmark as a separate tool. Thus, there are two directories for each variant, one for the batch processing mode and one for the incremental mode. Because these two modes share almost all the source code, a third directory is used to store this shared code. Finally, there is a directory for code shared between all JastAdd variants. These are the important directories: @@ -144,11 +140,11 @@ These are the important directories: ### Reproducing the Measurements -**<span style="color:red">Please Note: Reproducing the graphs as presented in the paper and supplied here takes a very long time depending on the utilized hardware. It is strongly suggested to run the benchmark with a smaller maximum problem size, less repetitions, and a shorter timeout.</span>** Most results of the benchmark are observable with more restricted setup as well. In the following, we will provide a suggested way to run the benchmark in different sizes. Note that running the benchmark requires a significant amount of disk space (up to 10GB when running the full benchmark). +**<span style="color:red">Please Note: Reproducing the graphs as presented in the paper and supplied here takes a very long time depending on the utilized hardware. It is strongly suggested to run the benchmark with a smaller maximum problem size, fewer repetitions, and a shorter timeout.</span>** Most results of the benchmark are observable with a more restricted setup as well. In the following, we will provide a suggested way to run the benchmark at different sizes. Note that running the benchmark requires a significant amount of disk space (up to 10GB when running the full benchmark). To reproduce the measurements, there are several options. We provide a prepared Docker image that can be run directly. Alternatively, it is, of course, also possible to simply run the provided gradle build scripts.
-However, since there are some software requirements imposed by the benchmark, particularly for creating the diagrams using R. We stronly suggest running the Docker variant. +However, since there are some software requirements imposed by the benchmark, particularly for creating the diagrams using R, we strongly suggest running the Docker variant. #### Running the Benchmark with Docker @@ -167,7 +163,7 @@ However, since there are some software requirements imposed by the benchmark, pa ##### Running the Docker Image -- `docker run -it -v "$PWD"/docker-results:/trainbenchmark/results:Z -v "$PWD"/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark` +- `docker run -it -v "$PWD"/docker-results:/trainbenchmark/results:Z -v "$PWD"/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark` - This makes the results and diagrams available outside the container in the directories `docker-results` and `docker-diagrams` respectively - Once running, a command prompt is opened and some information is displayed - Follow the instructions below
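+
+For orientation, a minimal shell sketch of the intended workflow (an illustration only: the `docker run` line is the one from above, the surrounding commands are ordinary shell, and `:Z` is an SELinux volume label that other systems ignore):
+
+```sh
+# prepare host directories for the two bind mounts (Docker typically creates them if missing)
+mkdir -p docker-results docker-diagrams
+# run the benchmark container; results and diagrams are written to the mounted directories
+docker run -it -v "$PWD"/docker-results:/trainbenchmark/results:Z -v "$PWD"/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark
+# after the run, inspect the measurements from the host
+ls docker-results
+```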