Reusing Static Analysis across Different Domain-Specific Languages using Reference Attribute Grammars (Artefact)

This repository contains the source code for the publication: Johannes Mey, Thomas Kühn, René Schöne, and Uwe Aßmann. “Reusing Static Analysis across Different Domain-Specific Languages Using Reference Attribute Grammars.” Programming 4, no. 3 (February 17, 2020).

DOI of the publication: 10.22152/programming-journal.org/2020/4/15

A git repository containing this artefact is available at https://git-st.inf.tu-dresden.de/jastadd/reusable-analysis/

Running the Evaluation

Preparation for the Qualitas Corpus

Note that all parts involving Java require the Qualitas Corpus to be downloaded first. To do that:

  1. Visit http://qualitascorpus.com/docs/faq.html#download and follow the instructions there to download both parts of the archive.
  2. Unpack the archives.
  3. Create a symlink named qualitas pointing to QualitasCorpus-$VERSION/Systems/. Alternatively (or if Docker does not follow symbolic links for security reasons), create a directory named qualitas and copy all systems into it.
  4. Create a directory docker-results which serves as a shared directory with the container.
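Steps 2-4 above can be sketched as follows (the corpus version is a placeholder; substitute the release you actually downloaded, and unpack the downloaded archives into that directory first):

```shell
# Placeholder for the corpus release you downloaded from qualitascorpus.com
VERSION=20130901

# After unpacking both archive parts, the systems live here:
mkdir -p "QualitasCorpus-$VERSION/Systems"

# Step 3: symlink named 'qualitas' pointing at the Systems directory
ln -s "QualitasCorpus-$VERSION/Systems" qualitas
# Alternatively, if Docker refuses to follow the symlink:
# mkdir qualitas && cp -r "QualitasCorpus-$VERSION/Systems/." qualitas/

# Step 4: shared results directory for the container
mkdir -p docker-results
```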

Preparation for the Docker Image

Load the Docker image using docker load --input reusable-analysis.tar

Running the Docker Container

Run the container using docker run --rm -it -v "$PWD"/docker-results:/reusable-analysis/benchmark:Z -v "$PWD"/qualitas:/reusable-analysis/qualitas:Z reusable-analysis

Inside the Container

For convenience, several scripts execute the individual parts of the evaluation found in the paper. They are named after the respective part and start with run_; the main evaluation is performed with run_scc_java. Internally, these scripts call the corresponding Gradle tasks.

Measured Data

The measurements were performed on an Intel i7-8700 workstation with 64 GB of memory running Fedora Linux 29 (kernel 4.18), OpenJDK 1.8, and JastAdd 2.3.

The data obtained using the provided artefact are contained in the file measured_results.csv.

This is a comma-separated file with the following columns:

Column  Title           Explanation
1       Domain          language on which the analysis is performed
2       Analysis        kind of analysis (type or package)
3       Internal        true if direct, false if reusable
4       JavaFiles       number of files analyzed
5       Nodes           number of nodes in the dependency graph
6       Edges           number of edges in the dependency graph
7       NodesAndEdges   number of elements in the dependency graph
8       SCCs            number of computed SCCs
9       FullTime        total time of the run (parse + generation + analysis)
10      ParseTime       parse time
11      GenerationTime  generation time of the problem-specific structure
12      AnalysisTime    analysis time
13      GenAnaTime      sum of generation and analysis time
14      Run             number of the run (0-100)
15      Scenario        name of the analyzed program
16      TimeTime        wall-clock time as taken by the time command
17      Exit            return value of the benchmark run (always 0)
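A minimal sketch of how the file can be processed, assuming it carries a header row with the column titles listed above; the two data rows here are hypothetical samples, not values from measured_results.csv:

```python
import csv
import io

# Hypothetical sample in the layout of measured_results.csv
sample = """Domain,Analysis,Internal,JavaFiles,Nodes,Edges,NodesAndEdges,SCCs,FullTime,ParseTime,GenerationTime,AnalysisTime,GenAnaTime,Run,Scenario,TimeTime,Exit
java,type,true,100,500,800,1300,12,2.5,1.0,0.5,1.0,1.5,0,demo,2.6,0
java,type,false,100,500,800,1300,12,3.0,1.0,0.8,1.2,2.0,0,demo,3.1,0
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Average analysis time per variant: Internal=true is the direct
# analysis, Internal=false the reusable one (see column 3 above).
averages = {}
for internal, label in (("true", "direct"), ("false", "reusable")):
    times = [float(r["AnalysisTime"]) for r in rows if r["Internal"] == internal]
    averages[label] = sum(times) / len(times)

print(averages)
```

To run this on the real data, replace the sample with `open("measured_results.csv")`.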