Commit 57ddc7d8 authored by René Schöne

Merge branch 'artifact-evaluation' into artifact-evaluation-poster

parents ba71777e 1ff625ef
root = true
[*.java]
charset = utf-8
end_of_line = lf
indent_size = 4
indent_style = tab
insert_final_newline = true
max_line_length = 140
trim_trailing_whitespace = true
[*.jadd]
charset = utf-8
end_of_line = lf
indent_size = 4
indent_style = tab
insert_final_newline = true
max_line_length = 140
trim_trailing_whitespace = true
[*.jrag]
charset = utf-8
end_of_line = lf
indent_size = 4
indent_style = tab
insert_final_newline = true
max_line_length = 140
trim_trailing_whitespace = true
......@@ -24,6 +24,5 @@ niklas.fors@cs.lth.se
Jesper Öqvist
jesper.oqvist@cs.lth.se
Uwe Aßmann
uwe.assmann@tu-dresden.de
Copyright (c) 2018, The Authors
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the Lund University nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
......@@ -159,7 +159,7 @@
</ul>
<h3 id="reproducing-the-measurements">Reproducing the Measurements</h3>
<p><strong><span style="color:red">Please Note: Reproducing the graphs as presented in the paper and supplied here takes a very long time depending on the utilized hardware. It is strongly suggested to run the benchmark with a smaller maximum problem size, fewer repetitions, and a shorter timeout.</span></strong> Most results of the benchmark are observable with a more restricted setup as well. In the following, we provide a suggested way to run the benchmark at different sizes. Note that running the benchmark requires a significant amount of disk space (up to 10 GB for the full benchmark).</p>
<p>To reproduce the measurements, there are several options. We provide a prepared Docker image that can be run directly. Alternatively, it is, on course, also possible to simply run the provided gradle build scripts. However, since there are some software requirements imposed by the benchmark, particularly for creating the diagrams using R. We strongly suggest running the Docker variant.</p>
<p>To reproduce the measurements, there are several options. We provide a prepared Docker image that can be run directly. Alternatively, it is, of course, also possible to simply run the provided gradle build scripts. However, since there are some software requirements imposed by the benchmark, particularly for creating the diagrams using R, we strongly suggest running the Docker variant.</p>
<h4 id="running-the-benchmark-with-docker">Running the Benchmark with Docker</h4>
<h5 id="loading-the-docker-image">Loading the Docker Image</h5>
<ul>
......@@ -184,7 +184,7 @@
</ul>
<h5 id="running-the-docker-image">Running the Docker Image</h5>
<ul>
<li><code>docker run -it -v &quot;$PWD&quot;/ :/trainbenchmark/results:Z -v &quot;$PWD&quot;/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark</code></li>
<li><code>docker run -it -v &quot;$PWD&quot;/docker-results:/trainbenchmark/results:Z -v &quot;$PWD&quot;/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark</code></li>
<li>This makes the results and diagrams available outside the container in the directories <code>docker-results</code> and <code>docker-diagrams</code>, respectively</li>
<li>Once running, a command prompt is opened and some information is displayed</li>
<li>Follow the instructions below</li>
......
......@@ -143,7 +143,7 @@ These are the important directories:
**<span style="color:red">Please Note: Reproducing the graphs as presented in the paper and supplied here takes a very long time depending on the utilized hardware. It is strongly suggested to run the benchmark with a smaller maximum problem size, fewer repetitions, and a shorter timeout.</span>** Most results of the benchmark are observable with a more restricted setup as well. In the following, we provide a suggested way to run the benchmark at different sizes. Note that running the benchmark requires a significant amount of disk space (up to 10 GB for the full benchmark).
To reproduce the measurements, there are several options. We provide a prepared Docker image that can be run directly.
Alternatively, it is, on course, also possible to simply run the provided gradle build scripts.
Alternatively, it is, of course, also possible to simply run the provided gradle build scripts.
However, since there are some software requirements imposed by the benchmark, particularly for creating the diagrams using R, we strongly suggest running the Docker variant.
#### Running the Benchmark with Docker
......@@ -163,7 +163,7 @@ However, since there are some software requirements imposed by the benchmark, pa
##### Running the Docker Image
- `docker run -it -v "$PWD"/ :/trainbenchmark/results:Z -v "$PWD"/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark`
- `docker run -it -v "$PWD"/docker-results:/trainbenchmark/results:Z -v "$PWD"/docker-diagrams:/trainbenchmark/diagrams:Z trainbenchmark`
- This makes the results and diagrams available outside the container in the directories `docker-results` and `docker-diagrams`, respectively (a minimal first-run sketch follows below this list)
- Once running, a command prompt is opened and some information is displayed
- Follow the instructions below
......
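The two `-v` options bind-mount host directories into the container (the `:Z` suffix applies an SELinux label). As a minimal first-run sketch, assuming the image was loaded under the tag `trainbenchmark`: creating the mount points up front avoids the Docker daemon creating them root-owned.

```
mkdir -p docker-results docker-diagrams
docker run -it \
    -v "$PWD"/docker-results:/trainbenchmark/results:Z \
    -v "$PWD"/docker-diagrams:/trainbenchmark/diagrams:Z \
    trainbenchmark
```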
#!/bin/bash
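# clean the working tree: -d removes untracked directories, -x also removes
# ignored files, -f forces the deletion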
git clean -d -x -f
mkdir -p ModelValidationWithRAGs/trainbenchmark
mkdir -p ModelValidationWithRAGs/docker-results/
mkdir -p ModelValidationWithRAGs/docker-diagrams/
cp -a \
.editorconfig .dockerignore \
.dockerignore \
README.{md,html,txt} AUTHORS.txt \
Dockerfile docker/ run_docker.sh \
paper-results/ \
......@@ -13,14 +14,11 @@ cp -a \
ModelValidationWithRAGs/ # the target directory
# remove build and gradle files
rm -r ModelValidationWithRAGs/trainbenchmark/*/build
rm -r ModelValidationWithRAGs/trainbenchmark/.gradle
rm -rf ModelValidationWithRAGs/trainbenchmark/*/build
rm -rf ModelValidationWithRAGs/trainbenchmark/.gradle
# remove generated files
rm -r ModelValidationWithRAGs/trainbenchmark/*/src/main/java-gen/*
rm ModelValidationWithRAGs/trainbenchmark/trainbenchmark-tool-jastadd-specialized-base/src/main/jastadd/TrainGen.*
rm -rf ModelValidationWithRAGs/trainbenchmark/*/src/main/java-gen/*
rm -f ModelValidationWithRAGs/trainbenchmark/trainbenchmark-tool-jastadd-specialized-base/src/main/jastadd/TrainGen.*
rm ModelValidationWithRAGs/preprocessor/.gitignore
echo "Now please run 'sudo docker save --output ModelValidationWithRAGs/trainbenchmark-docker.tar trainbenchmark && chown $(whoami):$(whoami) ModelValidationWithRAGs/trainbenchmark-docker.tar'"
# todos
# preprocessor/.classpath missing
# also preprocessor/tests/errors/{Errors.out,Inheritance.out}
echo "Now please run 'sudo docker save --output ModelValidationWithRAGs/trainbenchmark-docker.tar trainbenchmark && sudo chown $(whoami):$(whoami) ModelValidationWithRAGs/trainbenchmark-docker.tar'"
RailwayContainer ::= Route* Region*;
abstract RailwayElement ::= <id:int>;
Region : RailwayElement ::= TrackElement* Sensor*;
Semaphore : RailwayElement ::= <Signal:Signal>;
Route : RailwayElement ::= <Active:boolean> SwitchPosition* <_impl_requires:RefList<Sensor>> <_impl_entry:Semaphore> <_impl_exit:Semaphore>;
SwitchPosition : RailwayElement ::= <Position:Position> <_impl_target:Switch>;
Sensor : RailwayElement ::= <_impl_monitors:RefList<TrackElement>>;
abstract TrackElement : RailwayElement ::= <_impl_monitoredBy:RefList<Sensor>> <_impl_connectsTo:RefList<TrackElement>>;
Segment : TrackElement ::= <Length:int> Semaphore*;
Switch : TrackElement ::= <CurrentPosition:Position> <_impl_positions:RefList<SwitchPosition>>;
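// Note: the <_impl_...:RefList<...>> tokens above encode non-containment
// relations (for instance, rel Route.requires* -> Sensor); the RelAstAPI
// aspect below wraps them in the user-facing accessors.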
import java.util.ArrayList;
import java.util.Collections;

aspect RelAstAPI {
    public Route.Route(int id, boolean Active, List<SwitchPosition> SwitchPosition) {
        setid(id);
        setActive(Active);
        setSwitchPositionList(SwitchPosition);
    }
    public SwitchPosition.SwitchPosition(int id, Position Position) {
        setid(id);
        setPosition(Position);
    }
    public Sensor.Sensor(int id) {
        setid(id);
    }
    public TrackElement.TrackElement(int id) {
        setid(id);
    }
    public Segment.Segment(int id, int Length, List<Semaphore> Semaphore) {
        setid(id);
        setLength(Length);
        setSemaphoreList(Semaphore);
    }
    public Switch.Switch(int id, Position CurrentPosition) {
        setid(id);
        setCurrentPosition(CurrentPosition);
    }

    // rel Route.requires* -> Sensor
    public java.util.List<Sensor> Route.requires() {
        RefList<Sensor> l = get_impl_requires();
        return l != null ? Collections.unmodifiableList(l) : Collections.emptyList();
    }
    public void Route.addToRequires(Sensor o) {
        assertNotNull(o);
        RefList<Sensor> list = get_impl_requires();
        if (list == null) {
            list = new RefList<>();
        }
        list.add(o);
        set_impl_requires(list);
    }
    public void Route.removeFromRequires(Sensor o) {
        assertNotNull(o);
        RefList<Sensor> list = get_impl_requires();
        if (list != null && list.remove(o)) {
            set_impl_requires(list);
        }
    }

    // rel Route.entry? -> Semaphore
    public Semaphore Route.entry() {
        return get_impl_entry();
    }
    public void Route.setEntry(Semaphore o) {
        set_impl_entry(o);
    }
    public boolean Route.hasEntry() {
        return entry() != null;
    }
    public void Route.clearEntry() {
        setEntry(null);
    }

    // rel Route.exit? -> Semaphore
    public Semaphore Route.exit() {
        return get_impl_exit();
    }
    public void Route.setExit(Semaphore o) {
        set_impl_exit(o);
    }
    public boolean Route.hasExit() {
        return exit() != null;
    }
    public void Route.clearExit() {
        setExit(null);
    }

    // rel Sensor.monitors* <-> TrackElement.monitoredBy*
    public java.util.List<TrackElement> Sensor.monitors() {
        RefList<TrackElement> l = get_impl_monitors();
        return l != null ? Collections.unmodifiableList(l) : Collections.emptyList();
    }
    public void Sensor.addToMonitors(TrackElement o) {
        assertNotNull(o);
        RefList<TrackElement> list = get_impl_monitors();
        if (list == null) {
            list = new RefList<>();
        }
        RefList<Sensor> list2 = o.get_impl_monitoredBy();
        if (list2 == null) {
            list2 = new RefList<>();
        }
        list.add(o);
        list2.add(this);
        set_impl_monitors(list);
        o.set_impl_monitoredBy(list2);
    }
    public void Sensor.removeFromMonitors(TrackElement o) {
        assertNotNull(o);
        RefList<TrackElement> list = get_impl_monitors();
        if (list != null && list.remove(o)) {
            RefList<Sensor> list2 = o.get_impl_monitoredBy();
            if (list2 != null) list2.remove(this);
            set_impl_monitors(list);
            o.set_impl_monitoredBy(list2);
        }
    }
    public java.util.List<Sensor> TrackElement.monitoredBy() {
        RefList<Sensor> l = get_impl_monitoredBy();
        return l != null ? Collections.unmodifiableList(l) : Collections.emptyList();
    }
    public void TrackElement.addToMonitoredBy(Sensor o) {
        assertNotNull(o);
        RefList<Sensor> list = get_impl_monitoredBy();
        if (list == null) {
            list = new RefList<>();
        }
        RefList<TrackElement> list2 = o.get_impl_monitors();
        if (list2 == null) {
            list2 = new RefList<>();
        }
        list.add(o);
        list2.add(this);
        set_impl_monitoredBy(list);
        o.set_impl_monitors(list2);
    }
    public void TrackElement.removeFromMonitoredBy(Sensor o) {
        assertNotNull(o);
        RefList<Sensor> list = get_impl_monitoredBy();
        if (list != null && list.remove(o)) {
            RefList<TrackElement> list2 = o.get_impl_monitors();
            if (list2 != null) list2.remove(this);
            set_impl_monitoredBy(list);
            o.set_impl_monitors(list2);
        }
    }

    // rel TrackElement.connectsTo* -> TrackElement
    public java.util.List<TrackElement> TrackElement.connectsTo() {
        RefList<TrackElement> l = get_impl_connectsTo();
        return l != null ? Collections.unmodifiableList(l) : Collections.emptyList();
    }
    public void TrackElement.addToConnectsTo(TrackElement o) {
        assertNotNull(o);
        RefList<TrackElement> list = get_impl_connectsTo();
        if (list == null) {
            list = new RefList<>();
        }
        list.add(o);
        set_impl_connectsTo(list);
    }
    public void TrackElement.removeFromConnectsTo(TrackElement o) {
        assertNotNull(o);
        RefList<TrackElement> list = get_impl_connectsTo();
        if (list != null && list.remove(o)) {
            set_impl_connectsTo(list);
        }
    }

    // rel SwitchPosition.target <-> Switch.positions*
    public Switch SwitchPosition.target() {
        return get_impl_target();
    }
    public void SwitchPosition.setTarget(Switch o) {
        assertNotNull(o);
        if (get_impl_target() != null) {
            RefList<SwitchPosition> list2 = get_impl_target().get_impl_positions();
            list2.remove(this);
            get_impl_target().set_impl_positions(list2);
        }
        set_impl_target(o);
        RefList<SwitchPosition> list = o.get_impl_positions();
        if (list == null) {
            list = new RefList<>();
        }
        list.add(this);
        o.set_impl_positions(list);
    }
    public java.util.List<SwitchPosition> Switch.positions() {
        RefList<SwitchPosition> l = get_impl_positions();
        return l != null ? Collections.unmodifiableList(l) : Collections.emptyList();
    }
    public void Switch.addToPositions(SwitchPosition o) {
        assertNotNull(o);
        if (o != null && o.get_impl_target() != null) {
            RefList<SwitchPosition> list2 = o.get_impl_target().get_impl_positions();
            if (list2.remove(o))
                o.get_impl_target().set_impl_positions(list2);
        }
        RefList<SwitchPosition> list = get_impl_positions();
        if (list == null) {
            list = new RefList<>();
        }
        list.add(o);
        set_impl_positions(list);
        o.set_impl_target(this);
    }
    public void Switch.removeFromPositions(SwitchPosition o) {
        assertNotNull(o);
        RefList<SwitchPosition> list = get_impl_positions();
        if (list != null && list.remove(o)) {
            set_impl_positions(list);
            if (o.get_impl_target() == this) {
                o.set_impl_target(null);
            }
        }
    }

    public boolean ASTNode.violateLowerBounds() {
        return !getLowerBoundsViolations().isEmpty();
    }
    public java.util.List<Pair<ASTNode, String>> ASTNode.getLowerBoundsViolations() {
        ArrayList<Pair<ASTNode, String>> list = new ArrayList<>();
        computeLowerBoundsViolations(list);
        return list;
    }
    public void ASTNode.computeLowerBoundsViolations(java.util.List<Pair<ASTNode, String>> list) {
        for (int i = 0; i < getNumChildNoTransform(); i++) {
            getChildNoTransform(i).computeLowerBoundsViolations(list);
        }
    }
    public void SwitchPosition.computeLowerBoundsViolations(java.util.List<Pair<ASTNode, String>> list) {
        if (target() == null) {
            list.add(new Pair<>(this, "target"));
        }
        super.computeLowerBoundsViolations(list);
    }

    public class Pair<T1, T2> {
        public final T1 _1;
        public final T2 _2;

        public Pair(T1 _1, T2 _2) {
            ASTNode.assertNotNull(_1);
            ASTNode.assertNotNull(_2);
            this._1 = _1;
            this._2 = _2;
        }
        public boolean equals(Object other) {
            if (other instanceof Pair) {
                Pair<?,?> p = (Pair<?,?>) other;
                return _1.equals(p._1) && _2.equals(p._2);
            } else {
                return false;
            }
        }
        public int hashCode() {
            return 31*_1.hashCode() + _2.hashCode();
        }
    }

    public static void ASTNode.assertNotNull(Object obj) {
        if (obj == null) {
            throw new NullPointerException();
        }
    }
}
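To illustrate the accessor API above, here is a small hypothetical usage sketch (not part of the artifact). It assumes the AST classes that JastAdd generates from the grammar; note that the unqualified `List` in the constructors is JastAdd's AST list node, which is why `java.util.List` is spelled out everywhere else in the aspect.

```java
// Hypothetical sketch: the bidirectional relation
// rel Sensor.monitors* <-> TrackElement.monitoredBy* in action.
Sensor sensor = new Sensor(1);
Segment segment = new Segment(2, 100, new List<Semaphore>()); // JastAdd AST list

sensor.addToMonitors(segment);
// the inverse direction is maintained automatically by the aspect:
assert segment.monitoredBy().contains(sensor);

sensor.removeFromMonitors(segment);
assert segment.monitoredBy().isEmpty();
```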
#!/usr/bin/env python
"""Summarize the match counts from matches-*.csv files into one CSV table."""
import csv
import re
import os.path
import sys
import collections

pattern = re.compile('matches-([^-]*)-([^-]*)-railway-([^-]*)-([^-]*)-.csv')

class TableRow(object):
    def __init__(self, size, rowname):
        self.size = size
        self.rowname = rowname
        self.values = {}

    def set(self, index, iteration, value):
        columns = self.values.get(iteration)
        if not columns:
            columns = ['?'] * self.size
            self.values[iteration] = columns
        columns[index] = value

    def get_rows(self):
        return [[self.rowname, k] + v + [all((e == v[0] for e in v[1:]))]
                for k, v in sorted(self.values.iteritems())]

class Table(object):
    def __init__(self):
        self.rownames = set()
        self.tools = set()
        self.rows = collections.OrderedDict()

    def add_tool(self, toolname):
        self.tools.add(toolname)

    def add_rowname(self, rowname):
        self.rownames.add(rowname)

    def create_rows(self):
        def key_rowname(t):
            tokens = t.rsplit('-', 1)
            return tokens[0] + tokens[1].zfill(4)
        self.tools = collections.OrderedDict(((e, i) for i, e in enumerate(sorted(self.tools))))
        self.rownames = sorted(self.rownames, key=key_rowname)
        for rowname in self.rownames:
            tr = TableRow(len(self.tools), rowname)
            self.rows[rowname] = tr

    def set_value(self, rowname, toolname, iteration, value):
        row = self.rows[rowname]
        row.set(self.tools[toolname], iteration, value)

    def write(self, fdr):
        # header
        writer = csv.writer(fdr)
        writer.writerow(['Scenario', 'Iteration'] + self.tools.keys() + ['all?'])
        for _, row in self.rows.iteritems():
            writer.writerows(row.get_rows())

def split_name(filename):
    """Returns (Tool, Query, Scenario, Size)"""
    m = re.match(pattern, os.path.basename(filename))
    return m.groups() if m else None

def main(directory):
    table = Table()
    # first get tool- and row-names
    for filename in os.listdir(directory):
        if not filename.startswith('matches'):
            continue
        tup = split_name(filename)
        table.add_tool(tup[0])
        table.add_rowname('{}-{}-{}'.format(*tup[1:]))
    table.create_rows()
    # then fill in the table
    for filename in os.listdir(directory):
        if not filename.startswith('matches'):
            continue
        with open(os.path.join(directory, filename)) as fdr:
            reader = csv.reader(fdr)
            reader.next()  # skip header
            for row in reader:
                # Tool, Workload, Description, Model, Run, Query, Iteration, Matches
                rowname = '{}-{}'.format(row[1], row[3][8:])
                toolname = row[0]
                iteration = int(row[6])
                matches = row[7]
                table.set_value(rowname, toolname, iteration, matches)
    with open('summary-{}.csv'.format(os.path.basename(os.path.normpath(directory))), 'w') as fdr:
        table.write(fdr)

if __name__ == '__main__':
    main(sys.argv[1])
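The script aggregates the per-tool `matches-*.csv` result files into a single summary table whose `all?` column marks rows where every tool reports the same match count. It is written for Python 2 (`iteritems`, `reader.next()`). A possible invocation, with a hypothetical file name for the script:

```
python2 summarize_matches.py docker-results/
```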
# compare the expected archive listing with the actual packaged directory tree
meld <(sort expected_tar_content | sed 's|/$||') <(find ModelValidationWithRAGs/ | sort)
meld expected_tar_content <(cd ModelValidationWithRAGs/; find)