diff --git a/sections/ConstBasedAppr.tex b/sections/ConstBasedAppr.tex
index f8ecfea97d307c4d1c100eefee45db71ecfccf2f..8dc6a6aee3c2ae98f28df5e47e07c5439b4396c6 100644
--- a/sections/ConstBasedAppr.tex
+++ b/sections/ConstBasedAppr.tex
@@ -1,10 +1,10 @@
 \section{Constraint-based Approach}
-The next another automated testing tool is RESTest [4] consisting of a constraint-based black-box approach. Like other approaches, RESTest generates valid and invalid test cases, but more effectively through the use of automated analysis of dependencies between parameters and test oracles (4xx and 2xx status codes).
+Another automated testing tool is RESTest \cite{martin2020restest}, which follows a constraint-based black-box approach. Like other approaches, RESTest generates valid and invalid test cases, but does so more effectively through automated analysis of dependencies between parameters and through test oracles (4xx and 2xx status codes).
 
 %Process
-This approach is constraint-based and follows also a model-based approach. An OpenAPI specification as input is considered as a system model and optionally describe dependencies between parameters using exclusive libraries (e.g. IDL4OAS, analyzes corresponding dependencies). With this system model and described inter-parameter dependencies a test model with conformed configuration data is generated. Configuration data of a default test model is manually modifiable. After generation of a test model the test model and system model are directly set up for generating an abstract test cases with generation strategies e.g. random input generation. Generation strategies are not only a generation approach, they can also include automated analysis using an IDL extension to find out if an operation defined in the system model (OpenAPI) accords with the analyzed inter-parameter dependencies. Lastly, abstract test cases are rewritten in executable test cases which send requests at the client side.
+This approach is constraint-based and also follows a model-based approach. An OpenAPI specification is taken as input and considered as the system model; dependencies between parameters can optionally be described with dedicated languages (e.g. IDL4OAS) and analyzed automatically. From this system model and the described inter-parameter dependencies, a test model with conforming configuration data is generated. The configuration data of the default test model can be modified manually. After the test model is generated, the test model and the system model are used to generate abstract test cases with generation strategies, e.g. random input generation. These strategies are not only generation approaches; they can also include an automated analysis using an IDL extension to check whether an operation defined in the system model (OpenAPI) conforms to the analyzed inter-parameter dependencies. Lastly, the abstract test cases are rewritten into executable test cases which send requests from the client side.
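The filtering of candidate inputs against inter-parameter dependencies can be illustrated with a minimal Python sketch. The parameter names and the at-least-one-of dependency are invented for illustration and are not taken from RESTest or IDL4OAS:

```python
# Sketch of constraint-based test generation: candidate parameter
# assignments are generated randomly, then filtered by an inter-parameter
# dependency predicate (as an IDL description would define it).
import random

def satisfies_dependencies(params):
    # Hypothetical IDL-style constraint: Or(query, category) -- at least
    # one of the two parameters must be present for a valid request.
    return params.get("query") is not None or params.get("category") is not None

def generate_candidate(rng):
    maybe = lambda v: v if rng.random() < 0.5 else None
    return {"query": maybe("cats"), "category": maybe("pets"), "limit": rng.randint(1, 50)}

def generate_valid_test_cases(n, seed=0):
    rng = random.Random(seed)
    cases = []
    while len(cases) < n:
        candidate = generate_candidate(rng)
        if satisfies_dependencies(candidate):  # keep only consistent assignments
            cases.append(candidate)
    return cases
```

Rejecting inconsistent candidates before sending them is what distinguishes this style from purely random generation.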
 
 %Research results
-The research in [4] consists of experiments with 6 commercial APIs. It resulted maximally 99\% more and 60\% on average valid test cases than random testing and found more than 2000 failures which are not detectable by random testing.
+The research in \cite{martin2020restest} reports experiments with six commercial APIs. RESTest generated up to 99\% more valid test cases than random testing (60\% more on average) and found more than 2000 failures that random testing could not detect.
 
 % [4] RESTest
\ No newline at end of file
diff --git a/sections/ODGAppr.tex b/sections/ODGAppr.tex
index abeca35fe47e9273766bd70dbd3a053316156231..4129c1a48c9edf04b4a91d5f197fab6030e135fc 100644
--- a/sections/ODGAppr.tex
+++ b/sections/ODGAppr.tex
@@ -1,13 +1,13 @@
 \section{Operation Dependency Graph Approach}
 
-We saw the constraint-based approach tests with parameter dependencies. There is one more way to analyze automatically dependencies of properties in an OpenAPI document. It is able with RESTTESTGEN [5]. \\
+We saw that the constraint-based approach tests with parameter dependencies. RESTTESTGEN \cite{viglianisi2020resttestgen} offers another way to automatically analyze dependencies of properties in an OpenAPI document. \\
 What RESTTESTGEN does differently is that it analyzes the operation dependencies of an API and computes an operation dependency graph.
 
 %Process
-This graph is a directed graph. If there are two operations derived as nodes and one edge between them, the edge is labeled with a data. This data is an output data of one operation and an input data of the another operation (e.g. an operations getUsers and getUserById could have a data dependency with the data userId). 
+The graph is directed. Operations are represented as nodes, and an edge between two operations is labeled with a datum that is an output of one operation and an input of the other (e.g. the operations getUsers and getUserById could have a data dependency on the datum userId). 
 After this analysis is done, RESTTESTGEN is ready to generate valid and also invalid test cases. \\
-Firstly, test cases are automatically generated with the module, Nominal Tester. Inputs of this module are an OpenAPI specification and its analyzed operation dependency graph. Test cases created in this module comform the specification and its constraints. \\
-Subsequently, Error Handling Tester takes generated valid test cases as input and constructs several invalid test cases based on the constructions of inputs. Sending invalid test cases provokes the data validation of the target API and may create unexpected accepted responses.
+Firstly, test cases are automatically generated by the Nominal Tester module. Its inputs are an OpenAPI specification and the operation dependency graph computed from it; from these inputs, the test cases are inferred. Test cases created in this module conform to the specification and its constraints. \\
+Subsequently, the Error Handling Tester takes the generated valid test cases as input and constructs several invalid test cases by mutating the nominal ones. Sending invalid test cases exercises the data validation of the target API and may reveal unexpectedly accepted responses.
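A minimal sketch of how such an operation dependency graph could be computed; the operation and field names are invented examples, not RESTTESTGEN's actual implementation:

```python
# Build an operation dependency graph (ODG): an edge (A, B, field) means
# an output field of operation A can be used as an input of operation B.
operations = {
    "getUsers":    {"inputs": [],         "outputs": ["userId", "name"]},
    "getUserById": {"inputs": ["userId"], "outputs": ["name", "email"]},
    "deleteUser":  {"inputs": ["userId"], "outputs": []},
}

def build_odg(ops):
    edges = []
    for producer, p in ops.items():
        for consumer, c in ops.items():
            if producer == consumer:
                continue
            for field in p["outputs"]:
                if field in c["inputs"]:  # data dependency on this field
                    edges.append((producer, consumer, field))
    return edges
```

For the example above, the graph contains the edge `("getUsers", "getUserById", "userId")`, i.e. getUsers should be invoked first so that its response can supply a userId.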
 
 %Research results
 Experiments with RESTTESTGEN have shown that the tool is effective in generating test cases: it was applicable to 87 real-world APIs and reached an operation coverage of 98\%. The developers of RESTTESTGEN still plan to address its limitations (e.g. authentication is missing, the only test oracle is the status code, the execution is not iterated), which would improve both its testing capability and its detection of security errors.
diff --git a/sections/background.tex b/sections/background.tex
index 1d6029fc4f63cc4b7c23401bbb3868ef6d4af679..8bf8bd6de5b1343044b5d9d97befb73338640a22 100644
--- a/sections/background.tex
+++ b/sections/background.tex
@@ -1,5 +1,5 @@
 \chapter{Background}\label{ch:background}
-In this Chapter, we present basis concepts and technologies of this work to understand the following contents after this chapter.
+In this chapter, we present the basic concepts and technologies of this work that are needed to understand the following chapters.
 
 \input{sections/testing}
 \input{sections/restapi}
diff --git a/sections/benefits.tex b/sections/benefits.tex
index 4f37096ee23f2398c9d455cb11373300c85ec1b9..68139d20ef1f18a1e3923fd2f075698cbe5001b9 100644
--- a/sections/benefits.tex
+++ b/sections/benefits.tex
@@ -5,7 +5,7 @@
 \end{description}
 
 % Helpful attributes (Collection, inherited, easier return with =)
-During the implementation phase of parameter inference, RAG was beneficial at writing codes. At writing codes for parser and back-transformation, attributes could not be applied, because the functional parts change the structure of AST, which conflicts to the definition of attribute and also in JastAdd. So, it could not be realized that attributes can be practical. But, following attributes and features of JastAdd were helpful to construct the generators and implement them:
+During the implementation phase of parameter inference, RAG was beneficial for writing code. For the parser and the back-transformation, attributes could not be applied, because these functional parts change the structure of the AST, which conflicts with the definition of attributes in general and in JastAdd in particular. Hence, attributes could not show their practicality there. However, the following attributes and features of JastAdd were helpful for constructing and implementing the generators:
 
 \begin{itemize}
 \item As long as a returned value can be computed or stored by an AST node, the syntax in RAG is reduced to an equals sign and is simpler than in a common programming language. Therefore, the source code is compact. To compare how effectively RAG can be used, see the implementation of an attribute in this work in Listing~\ref{lst:attribute-infParam} and how it would be defined in plain Java in Listing~\ref{lst:java-infParam}.
@@ -35,24 +35,4 @@ eq OpenAPIObject.getPathsObject(int i).inferUrl(){
 \end{lstlisting} 
 
 % Extension
-Additionally, JastAdd frameworks developed so far could improve testing methods in RAGs. Improvements with those extensions are huge advantages of RAG. ExtendJ, previously JastAddJ, is a Java compiler extended with JastAdd \cite{ekman2007jastadd}. It improves Java compliance, compilation speed, specification size and the support for non-trivial extensions compared to other extensible Java compilers. Analyses suggested in RAG \cite{fors2020principles}, e.g. Name Analysis, Type Analysis, might also be usable at parameter inference, which the name inference in this paper uses the similar pattern as Name Analysis (Lookup). Relational Reference Attribute Grammars \cite{mey2018continuous} could improve problems with references and enable to build a new test model and connect it to the OpenAPI model implemented in this paper, parallel to \cite{gotz2018jastadd}, \cite{mey2020reusing}. A recently invented framework, RagConnect, enables to connect RAG-based models to models with other basis \cite{schone2020connecting}. Since version 0.2.1, it has a new model connection with REST and might help at generating test cases, because test cases are based on requests and responses of REST APIs. All of these extensions are open topics, so, it must be worthful to discuss and research how they can be used into OpenAPI Testing to suggest tests with a better quality.
-
-\begin{comment}
-> To evaulate how well-suited RAGs are for API testing, we revisit the research questions...
-
-### 6.2 Benefits
-- schöne beschreibung mit attributen (verweis auf beispiele aus 5)
-- erweiterbarkeit von RAGs
-  - https://programming-journal.org/2020/4/15/
-  - https://scholar.google.de/scholar?hl=de&as_sdt=0%2C5&q=exstensible+java+compiler+hedin&btnG=
-
-ExtendJ - The JastAdd Extensible Java Compiler -> enables static analysis and quality better
-
-relational RAGs (referenzen verbessern, verschiedene ASTs verbinden -> testbeschriebungs-modell und OAPI modell), 
-
-RagConnect (https://jastadd.pages.st.inf.tu-dresden.de/ragconnect/use_cases/) - erweiterung für verbindung mit laufzeitumgebungen (weil ein testsystem ja auch ein laufzeitsystem ist, könnte es helfen, mit openapi requests und responses umzugehen. hier könnte der experimentelle REST support helfen):
-
-Design and patterns
- Lookup (P1) : name analysis for parameter inference
-Expected Type (P3) : type analysis for parameter inference
-\end{comment}
\ No newline at end of file
+Additionally, JastAdd frameworks developed so far could improve testing methods in RAGs; such extensions are a major advantage of RAG. ExtendJ, previously JastAddJ, is a Java compiler built with JastAdd \cite{ekman2007jastadd}. It improves Java compliance, compilation speed, specification size and the support for non-trivial extensions compared to other extensible Java compilers. Analyses suggested for RAGs \cite{fors2020principles}, e.g. name analysis and type analysis, might also be usable for parameter inference; the name inference in this paper uses a pattern similar to name analysis (Lookup). Relational Reference Attribute Grammars \cite{mey2018continuous} could improve the handling of references and enable building a new test model connected to the OpenAPI model implemented in this paper, parallel to \cite{gotz2018jastadd}, \cite{mey2020reusing}. A recently introduced framework, RagConnect, enables connecting RAG-based models to models with other foundations \cite{schone2020connecting}. Since version 0.2.1, it offers a model connection via REST and might help with generating test cases, because test cases are based on requests and responses of REST APIs. All of these extensions are open topics, so it would be worthwhile to research how they can be applied to OpenAPI testing to produce tests of better quality.
\ No newline at end of file
diff --git a/sections/conclusion.tex b/sections/conclusion.tex
index 79c984c862ea56f3a97cb66ed419cb196441c259..d435c79fbe25f99a96cf279f04c7d428eb093e94 100644
--- a/sections/conclusion.tex
+++ b/sections/conclusion.tex
@@ -1,9 +1,9 @@
 \chapter{Conclusion}\label{ch:conclusion}
 
-In this paper, we have firstly researched which approaches to test REST APIs documented by OpenAPI are developed so far. Most of approaches are based on Fuzzing and use concepts e.g. Parameter Inference, Operation Dependency Inference, Model Transformations, Stateful Dynamic Analysis, etc. to get more sensible results than results in random testing. 
+In this paper, we have researched which approaches for testing REST APIs documented by OpenAPI have been developed so far. Most approaches are based on Fuzzing and use concepts such as Parameter Inference, Operation Dependency Inference, Model Transformations and Stateful Dynamic Analysis to obtain more sensible results than random testing. 
 
 % 2 
-RAGO API is a first framework that configures OpenAPIs in RAG generates random test requests of an API descriebed in OpenAPI and infers parameters by responses in the API. The OpenAPI model is validated by \textbf{974 APIs} selected in the repository of apis.guru\footref{apis-guru}. This OpenAPI model could be the basis of published testing approaches introduced in this paper or any other test methods and be extended with JastAdd frameworks released already publicly. We have presented an example of an Fuzzing approach and usage of RAGs in attributes. Especially, collection attributes were the most helpful attributes in this work, because it simplified the implementation of references of OpenAPI structures used for grammar and parameter inference. 
+RAGO API is the first framework that models OpenAPI in a RAG, generates random test requests for an API described in OpenAPI and infers parameters from the responses of the API. The OpenAPI model is validated against \textbf{974 APIs} selected from the repository of apis.guru\footref{apis-guru}. This OpenAPI model could be the basis of the published testing approaches introduced in this paper or of any other test methods, and it can be extended with already publicly released JastAdd frameworks. We have presented an example of a Fuzzing approach and the usage of RAG attributes, and we could generate requests against a pet store\footref{petstore} and expected responses from this server, where \textbf{over 300 valid requests were inferred at maximum}. In this work, collection attributes were the most helpful attributes, because they simplified the implementation of the references used for grammar and parameter inference.
 
 % 3
 Finally, we have presented several suggestions to extend this tool, intended to improve basic functionalities, e.g. references with \cite{mey2018continuous}, and also to develop new approaches, e.g. model transformation with \cite{gotz2018jastadd}, \cite{mey2020reusing}, \cite{schone2020connecting}.
\ No newline at end of file
diff --git a/sections/feasibility.tex b/sections/feasibility.tex
index 53cedf9154924bb221bf7777733298958eee312c..061a9f8692ba50b2f3656f2835b295619227fe97 100644
--- a/sections/feasibility.tex
+++ b/sections/feasibility.tex
@@ -4,8 +4,7 @@
 \item{\textbf{RQ1 : Which approaches and techniques for automated tests of OpenAPI specifications are researched and developed so far?}}
 \end{description}
 
-The results of current existing approaches are shown in Table 3.1. %\cite{tab:appr-sum}
-As we can see, most of current approaches to test OpenAPI are in black-box and use Fuzzing method to generate test cases. There are differences between approaches at test generation phase. 
+The results of the existing approaches are summarized in \Cref{tab:sum}. As we can see, most current approaches to test OpenAPI are black-box and use Fuzzing to generate test cases. The approaches differ in the test generation phase. 
 
 % 2
 Several of them infer parameters or operations, or use test model generation. They also differ in the considered bug types. All of them treat 500 status codes as bugs, but three of them additionally treat a 200 status code in a test case expecting an error, or a 4xx status code in a test case expecting a successful response, as a bug. The detailed bug reports of the approaches that experimented with industrial APIs (excluding the Property-based Approach) suggest that they can be interesting research targets in combination with RAG.
@@ -14,7 +13,7 @@ Several of them inference parameters or operations or use test model generation.
 \item{\textbf{RQ2 : Are suggested testing approaches from the literatures also available in RAG?}}
 \end{description}
 
-To determine an answer of this question, we have constructed a data structure for OpenAPI specification in \Cref{ch:rago} to parse OpenAPI documents and validated the parser with 98 commercial APIs, i.e. the parser taken an OpenAPI document returns the same elements except elements with empty values and sibling elements of references in an OpenAPI document. The input and output documents still semantically are same. In \Cref{ch:stm}, we have also implemented two Fuzzing approaches (Random Testing, Parameter Inference). Parameter Inference is motivated by Specification-based Approach\cite{ed2018automatic} and RESTTESTGEN\cite{viglianisi2020resttestgen}, where seperate properties of responses are collected and used as inputs in a parameter with the same schema of a response.
+To answer this question, we have constructed a data structure for the OpenAPI specification in \Cref{ch:rago} to parse OpenAPI documents and \textbf{validated the parser with 974 commercial APIs}, i.e. given an OpenAPI document, the parser returns the same elements except elements with empty values and sibling elements of references. The input and output documents are still semantically the same. In \Cref{ch:stm}, we have also implemented two Fuzzing approaches (Random Testing, Parameter Inference). Parameter Inference is motivated by the Specification-based Approach \cite{ed2018automatic} and RESTTESTGEN \cite{viglianisi2020resttestgen}, where separate elements of responses are collected and used as inputs for a parameter with the same schema as a response.
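The collect-and-reuse idea behind this kind of parameter inference can be sketched as follows; the response payloads and field names are invented for illustration and do not reflect the actual implementation:

```python
# Values observed in previous responses are collected per field name and
# reused as inputs for parameters with a matching name/schema.
def collect_values(responses):
    pool = {}
    for response in responses:
        for key, value in response.items():
            pool.setdefault(key, []).append(value)
    return pool

def infer_inputs(parameter_name, pool):
    # Reuse observed values; an unknown name yields no candidates.
    return pool.get(parameter_name, [])

observed = [{"petId": 7, "status": "available"}, {"petId": 9, "status": "sold"}]
pool = collect_values(observed)
```

With this pool, a request needing a `petId` parameter can draw from the values 7 and 9 that the server itself produced, instead of from random values.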
 
 % 3
 Several approaches are based on their own metamodels. It is not yet clear whether such model-based approaches could be developed in RAGs. For implementing the metamodel generation of these approaches in RAGs, the JastAdd frameworks \cite{gotz2018jastadd}, \cite{mey2020reusing} could be helpful to transform one model into another. It is worthwhile to discuss and research how such existing frameworks could be applied to those model-based approaches, because this would save an enormous amount of implementation effort.
\ No newline at end of file
diff --git a/sections/introduction.tex b/sections/introduction.tex
index 448b2e1e478b93267c0eea194785aafb8971dcd1..7742e8c799caf4963ef53fca902ca77c7951fdb7 100644
--- a/sections/introduction.tex
+++ b/sections/introduction.tex
@@ -1,9 +1,9 @@
 \chapter{Introduction}\label{ch:introduction}
-Automated testing is one type of software testing which is expected to review, validate software products and find errors in these products. Compared to manual testing, it has a huge benefit in execution of test cases, while test cases are automatically generated. Also, it might be a meaningful approach to raise the security and quality in web-based software architectures using REST interfaces.
+Automated testing is a type of software testing which is expected to review and validate software products and to find errors in them. Compared to manual testing, it has a huge benefit in the execution of test cases, since numerous test cases are generated automatically. In web-based software architectures using REST interfaces, it could also be a meaningful approach to raise security and quality.
 
 % example with [2]
-One kind of testing such architectures is to validate REST interfaces and check if there are errors in them. In the last years, the OpenAPI Specification\footnote{\url{https://swagger.io/specification}} has become the common way to document the communication endpoints and exchanged data structures. This is useful to test REST interfaces, because it enables the black-box testing approach.
-OpenAPI specifications are described as JSON or YAML documents and are defined by a semi-formal specification describing permitted and required elements as well as their semantics. While there is no formal definition of the full OpenAPI language, a meta-schema exists for the JSON Schema parts of the language.
+One kind of testing such architectures is to validate REST interfaces and check whether there are errors in them. In the last years, the OpenAPI Specification\footnote{\url{https://swagger.io/specification}} has become the common way to document the communication endpoints and exchanged data structures of REST APIs. Such documentation enables testing REST APIs with black-box approaches. OpenAPI specifications are written as JSON or YAML documents and are defined by a semi-formal specification describing permitted and required elements as well as their semantics. While there is no formal definition of the full OpenAPI language, a meta-schema exists for the JSON Schema parts of the language.
 
 % abstract 
 This paper investigates how such specifications can be used to test interfaces described by them. Since these specifications are tree-shaped, the investigation focuses on a grammar-based analysis approach, reference attribute grammars (RAGs) \cite{hedin2000reference}. 
@@ -16,7 +16,7 @@ To sum up, following three questions are formulated as research questions:
 \item{\textbf{RQ3 : Which advantages can RAG provide with its features at expressing testing approaches?}} 
 \end{description}
 
-\textbf{RQ1} is intended to investigate which approaches might be able to implement in RAG. Corresponding literatures are introduced in \Cref{ch:curr-appr}. \textbf{RQ2} focuses on the implementation the data structure and testing methods in RAG and is answered in \Cref{ch:rago} and \Cref{ch:stm}. In \Cref{ch:suitability}, Results for \textbf{RQ3} shows concretely which features of RAG are usable and which benefits exist at this point.
+\textbf{RQ1} is intended to investigate which approaches might be implementable in RAG. The corresponding literature is introduced in \Cref{ch:curr-appr}. \textbf{RQ2} focuses on the implementation of the data structure and the testing methods in RAG and is answered in \Cref{ch:rago} and \Cref{ch:stm}. In \Cref{ch:suitability}, the results for \textbf{RQ3} show concretely which features of RAG could be usable and which benefits exist.
 
 % [1] https://swagger.io/specification/ 
 % [2] Basu, Anirban (2015). Software Quality Assurance, Testing and Metrics. PHI Learning. ISBN 978-81-203-5068-7.
\ No newline at end of file
diff --git a/sections/openapi.tex b/sections/openapi.tex
index dca9a6741c3f41e9d9bff9d423c0f364b78214b6..18035186d89a95e031052d49f303147a6093c873 100644
--- a/sections/openapi.tex
+++ b/sections/openapi.tex
@@ -1,13 +1,13 @@
 \section{OpenAPI Specification} \label{sec:openapi}
 
 %General
-Nowadays, there is a way to document REST APIs which allows both humans and computers to understand the whole structure of a RESTful web service and is globally used. It is called OpenAPI or also known as Swagger (older than 3.0.0 version). \\
-An OpenAPI document with its specification is written in a structured JSON or YAML and describes which API operations are available, what kind of details they have, how to reach them using a URI, what parameters and request bodies are required and optional in available operations and what authentication schema is. \\
+Nowadays, there is a way to document REST APIs which allows both humans and computers to understand the whole structure of a RESTful web service. It is called OpenAPI, also known as Swagger (for versions older than 3.0.0). \\
+An OpenAPI document is written as a structured JSON or YAML file and describes which API operations are available, what details they have, how to reach them using a URI, which parameters and request bodies are required or optional in the available operations and what the authentication schema is. \\
 The OpenAPI specification does not rely on strict definitions and has a tree-shaped structure, which facilitates its use and extension in more than 25 programming languages, as long as JSON Schema support is available in them. An example of an OpenAPI document is shown in Listing~\ref{lst:openapi}.
 
 %Testing OpenAPI (Black-Box)
-This specification is relevant to automated testing of REST APIs, because it enables black-box testing. There are already many interesting approaches \cite{atlidakis2019restler}, \cite{ed2018automatic}, \cite{karlsson2020quickrest}, \cite{martin2020restest}, \cite{viglianisi2020resttestgen}, more concrete explanations to separate approaches are in \cref{ch:curr-appr}. \\
-Black-box testing needs the specification of the REST API (in our case, OpenAPI specification) and generates test cases automatically. OpenAPI specifications are defined semi-formal, so test generations could generate unrealistic or invalid inputs. Nonetheleoss it is a meaningful way to test APIs automatically, because it does not require access to the source code and is available for every API regardless of in which programming language it is implemented or whether it is locally or remotely deployed \cite{arcuri2019restful}. \\
+Since an OpenAPI specification describes the interfaces of a server, black-box testing becomes possible. There are many interesting black-box approaches: \cite{atlidakis2019restler}, \cite{ed2018automatic}, \cite{karlsson2020quickrest}, \cite{martin2020restest}, \cite{viglianisi2020resttestgen}; more detailed explanations of the individual approaches follow in \cref{ch:curr-appr}. \\
+Black-box testing needs the specification of the REST API (in our case, the OpenAPI specification) and generates test cases automatically, but does not require access to the source code. OpenAPI specifications are language-agnostic, so black-box testing is available for every API regardless of the programming language it is implemented in. On the other hand, it might produce unexpected or invalid test results, because the access to the server is limited.
 
 \begin{lstlisting}[language=json, caption={Example OpenAPI}, label={lst:openapi}]
 {
diff --git a/sections/propBasedAppr.tex b/sections/propBasedAppr.tex
index ab978ff2dc4d77c06429c788bc216ff1b3ab0cd3..97e12c704c773fff4a8b8cb6609b0fb9f3476aad 100644
--- a/sections/propBasedAppr.tex
+++ b/sections/propBasedAppr.tex
@@ -1,12 +1,12 @@
 \section{Property-based Approach}
-QuickREST, the prototype of property-based approach, has been introduced by Karlsson [3]. It suggests a method to generate random test inputs (requests and parameters) applying property-based testing. Test inputs are either completely random or matching to the given OpenAPI specification. 
+QuickREST, a prototype of the property-based approach, has been introduced by Karlsson \cite{karlsson2020quickrest}. It suggests a method to generate random test inputs (requests and parameters) applying property-based generation. Test inputs are either completely random or conform to the given OpenAPI specification. 
 
 %Process
-Property-based testing is not only a generation of test inputs, it checks if generated inputs are considered as expected properties with the aid of shrinking. Shrinking means if a generated test input is not accepted as expected, a smallest test input which fails in the same way is searched. With those features it is possible to formulate and verify properties of the test results (e.g. response body), so the testing method produces better results than only reporting HTTP responses. \\
+Property-based testing does not only generate test inputs, it also checks whether the generated inputs satisfy expected properties, with the aid of shrinking. Shrinking means that, if a generated test input fails unexpectedly, a smallest test input which fails in the same way is searched. With these features, it is possible to formulate and verify properties of the test results, e.g. of the response body, so the testing method produces better results than only reporting HTTP responses. \\
 The first concrete step of QuickREST is generating test inputs including random parameters and request bodies. Test inputs are characterized as URLs. \\
 Secondly, the responses to the generated requests (test inputs) are checked for whether they provide status codes defined in the OpenAPI specification, body payloads also defined in the specification, and no 500 status codes. If these conditions are not satisfied, the shrinking mentioned above is executed. 
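The shrinking step can be illustrated with a minimal greedy sketch in Python; the list-shaped input and the example property are invented for illustration and are not QuickREST's actual mechanism:

```python
# Greedy shrinking: when an input fails the property, repeatedly try
# smaller variants that still fail, keeping the smallest one found.
def shrink(failing_input, holds):
    # failing_input: a list-shaped test input for which `holds` is False
    current = failing_input
    changed = True
    while changed:
        changed = False
        for i in range(len(current)):
            candidate = current[:i] + current[i + 1:]  # drop one element
            if not holds(candidate):  # still fails -> keep the smaller input
                current = candidate
                changed = True
                break
    return current

# Example property: "the input contains no negative numbers".
holds = lambda xs: all(x >= 0 for x in xs)
```

For the failing input `[3, -1, 4, -5]`, this sketch reduces the counterexample to a single negative element, which makes the cause of the failure much easier to read.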
 
 %Research results
-After Karlsson's team has experimented with their proof-of-concept and real APIs (e.g. GitLab) as input they could determine that QuickREST finds real bugs (500 status codes), but is still limited in industry. For the future work developments of a model of call sequences considered as interactions of real users and of an automated analysis of logs while executing tests to improve the effectiveness and size of explorations.
+After Karlsson's team experimented with their proof-of-concept on real APIs (e.g. GitLab) as inputs, they could determine that QuickREST finds real bugs (500 status codes), but is still limited for industrial use. As future work, a model of call sequences that represents the interactions of real users and an automated analysis of logs during test execution are needed to improve the effectiveness and the size of the explorations.
 
 %[3] QuickREST
\ No newline at end of file
diff --git a/sections/rag.tex b/sections/rag.tex
index 38bbcb0f083c99f5ffe9ed31a9d84a52a29fd0fe..fb23c17455085df045c0283cde11f651d50dd852 100644
--- a/sections/rag.tex
+++ b/sections/rag.tex
@@ -1,23 +1,19 @@
 \section{Reference Attribute Grammar}
-One interesting formal system which is applicable as an approach for automated testing with OpenAPI specifications is RAGs.\\
-RAG is an extension of Attribute Grammars (AG) while AG is an extension to context-free grammars again.
+One of the main research targets of this work is Reference Attribute Grammars (RAGs) \cite{hedin2000reference}, an extension of Attribute Grammars (AGs) \cite{knuth1968semantics}. 
 
 %Attribute Grammar
 The concept of AGs was introduced by Knuth \cite{knuth1968semantics} as a solution to problems with context-free grammars.
-Context-free grammars (e.g. EBNF, BNF) can only specify syntax with terminal, non-terminal symbols and production rules, but in computer programs it is necessary to define the meaning of semantic rules, data types and values while AGs allow this possible. \\
-In AGs terminal and non-terminal symbols are characterized as attributes and have semantic information and their own values.
-The whole structure of a AG is described in an abstract syntax tree (AST) and every attribute is a node of this AST.
-The attributes are specified as (1) synthesized (defined from the values of child nodes) or (2) inherited (defined from values of parent nodes).
-If a production $X_0 \rightarrow X_1 ... X_n$ is given, a semantic rule to this production means $a_0 = f(a_1, ... , a_m)$ with $a_0$ as the value of applying the semantic function $f$ to the attributes $a_1, ... , a_m$. \\
-AGs are impractical for description of syntax when every dependency of all attributes is local (i.e. definable only with synthesized and inherited) and can follow the syntax tree, but not for descriptions of syntax with non-local dependencies, because it is too low-level at this point.
-This situation leads to problems in terms complexity of analysis and extension.
+Context-free grammars (e.g. EBNF, BNF) can only specify syntax with terminal symbols, non-terminal symbols and production rules. In computer programs, however, it is also necessary to define semantics such as data types and values, and AGs make this possible. \\
+In AGs, all terminal and non-terminal symbols are characterized as nodes in an Abstract Syntax Tree (AST) and carry their own attributes describing their functionality or value.
+The attributes are specified as \textbf{synthesized} or \textbf{inherited}: the value of a synthesized attribute is determined by the attribute values of its child nodes; otherwise, the attribute is inherited and its value is determined by its parent node. AGs are practical as long as every attribute dependency is local (i.e. definable only with synthesized and inherited attributes following the syntax tree), but impractical for syntax with non-local dependencies, e.g. a dependency between two distant nodes such as a root node and a leaf node. This situation leads to problems in terms of complexity of analysis and extension.
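The idea of a synthesized attribute can be illustrated with a minimal hand-written Java analogue (illustrative only, not JastAdd-generated code): each node of a small expression AST computes its value purely from its children.

```java
// A tiny hand-written AST for arithmetic expressions with one synthesized
// attribute "value": each node computes it solely from its child nodes.
interface Exp { int value(); }

record Num(int n) implements Exp {
    public int value() { return n; }              // leaf: value of the token
}

record Add(Exp left, Exp right) implements Exp {
    public int value() {                          // synthesized: from children
        return left.value() + right.value();
    }
}

public class SynthesizedDemo {
    public static void main(String[] args) {
        Exp e = new Add(new Num(1), new Add(new Num(2), new Num(3)));
        System.out.println(e.value()); // prints 6
    }
}
```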
 
 
 %RAG
-In 2000 Hedin has proposed RAG as an object-oriented extension of AG to solve these problems \cite{hedin2000reference}.
-This approach enables the references of attributes as a node in an AST and every property of structured attributes (i.e. sets, dictionaries, lists) can also be references. \\
-If there is a reference attribute stored in an AST, it represents a direct connection from an any node that is freely distant (non-local) and to itself. The value of the referred node is directly usable in the referring node without accessing any other nodes in the AST. An example of an AST in RAG compared to a traditional AST without RAG is shown in \Cref{fig:ast}. \\
-Such features of RAG represent advantages over AG, largely in efficiency. It is not necessary to duplicate a same value of a node to another node and semantic functions in a complex data structure can be split into smaller functions which are completely describable in RAG. Consequently, RAG can extend existing grammars and give them more functionalities.
+In 2000, Hedin proposed RAG as an object-oriented extension of AG to solve these problems \cite{hedin2000reference}.
+This approach enables attributes to reference nodes in an AST. Every node can be referred to, and reference attributes can also be stored in structured attributes, e.g. sets, dictionaries and lists.
+If a node has a reference attribute, the attribute represents a direct connection to an arbitrarily distant (non-local) node. The value of the referred node is directly usable in the referring node without accessing any other nodes in the AST. A graphical example of an AST in RAG compared to a traditional AST without RAG is shown in \Cref{fig:ast}. \\
+Such features give RAG advantages over AG, largely in efficiency. It is not necessary to duplicate the value of one node in order to use it in another node, and semantic functions over a complex data structure can be split into smaller functions which are completely describable in RAG. Consequently, RAG can extend existing grammars and give them more functionality.
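The effect of a reference attribute can be sketched in plain Java (an illustrative analogue, not JastAdd code): a use site stores a direct pointer to a distant declaration node, so the referred value is available without traversing intermediate nodes.

```java
// Sketch: a "reference attribute" as a direct pointer from one AST node to a
// distant declaration node, so no intermediate nodes need to be visited.
class Decl {
    final String name;
    final int value;
    Decl(String name, int value) { this.name = name; this.value = value; }
}

class Use {
    Decl decl;                           // reference attribute: points directly
                                         // to the (non-local) declaration node
    int value() { return decl.value; }   // value usable without tree traversal
}

public class ReferenceDemo {
    public static void main(String[] args) {
        Decl d = new Decl("x", 7);       // somewhere deep in the AST
        Use u = new Use();               // somewhere else in the AST
        u.decl = d;                      // link established once, e.g. by a name analysis
        System.out.println(u.value());   // prints 7
    }
}
```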
 
 \begin{figure}
 \centering
@@ -26,9 +22,10 @@ Such features of RAG represent advantages over AG, largely in efficiency. It is
 \end{figure}
 
 %JastAdd
-An extensible system producing language-based compilers and tools is called, JastAdd. It enables to generate modules and tools based on RAG, where properties of AST to be programmed declaratively. There are already several tools implemented and extended with JastAdd (e.g. ExtendJ, JModelica.org, abc, Soot, McLab, Palacom, etc.\footnote{\url{https://jastadd.cs.lth.se/web/applications.php}}) \\
-Generally, JastAdd needs two input data to generate classes written in Java. One input data is a file in ast file where AST nodes are described. With this input a class for every non-terminal node is generated and constructed using defined attributes. Other input data is one of more declarative definitions of corresponding attributes and methods. A graphical model of JastAdd is described in \Cref{fig:jastadd}.\\
-Examples are in Listing~\ref{lst:example-ast} and Listing~\ref{lst:example-attributes} implementing basic arithmetic operations. Listing~\ref{lst:example-ast} shows how separate AST nodes can be defined in RAG and Listing~\ref{lst:example-attributes} explains a case of how attributes can be implemented, especially synthesized attributes by eq (equation). \\ \\
+JastAdd is an extensible system for producing language-based compilers and tools. It evaluates definitions of AST node structures and attributes and generates the corresponding modules and tools. There are already several tools implemented and extended with JastAdd (e.g. ExtendJ, JModelica.org, abc, Soot, McLab, Palacom, etc.\footnote{\url{https://jastadd.cs.lth.se/web/applications.php}}) \\
+Generally, JastAdd needs two inputs to generate Java classes. The first input is an ast file in which the AST nodes are described. From this input, a Java class for every non-terminal node is generated. The second input is one or more declarative definitions of the corresponding attributes. A graphical model of JastAdd is shown in \Cref{fig:jastadd}.\\
+Examples implementing basic arithmetic operations are given in Listing~\ref{lst:example-ast} and Listing~\ref{lst:example-attributes}. Listing~\ref{lst:example-ast} shows how separate expressions can be constructed in the AST, and Listing~\ref{lst:example-attributes} shows how attributes can be implemented, in particular synthesized attributes defined by eq (equation) clauses. \\ \\
 
 \begin{lstlisting}[language=AST,label={lst:example-ast},caption={Example of an AST}]
 Root ::= Def* Exp ;
diff --git a/sections/rago.tex b/sections/rago.tex
index 8458a747b8528b034778d21fa9c8a032867848f8..3a00eb605b4cf9da75b6773ef3206889aad84df1 100644
--- a/sections/rago.tex
+++ b/sections/rago.tex
@@ -1,5 +1,5 @@
 \chapter{RAGO API}\label{ch:rago}
-In this section, we propose RAGO API, the first REST API fuzzing framework modeled in RAG (JastAdd). RAGO API parses the OpenAPI specification in Java to transfer it into a RAG and generates test cases (requests) that automatically send requests to test the target API. 
+In this Chapter, we propose RAGO API, the first REST API fuzzing framework modeled in RAG (JastAdd). RAGO API parses the OpenAPI specification in Java, transfers it into a RAG and produces requests that automatically test the target API. 
 
 % Generation
 RAGO API generates requests using two basic fuzzing methods: first, generating requests with random values; second, inferring parameters from responses (i.e. an object returned by a response could be usable as the value of an input parameter).
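The second method can be sketched roughly as follows; the helper names and the "petId" field are illustrative assumptions, not RAGO API's actual implementation:

```java
import java.util.*;

// Sketch: reuse values observed in earlier responses as input parameters.
public class ParameterInferenceSketch {
    // Pool of values seen in previous responses, keyed by field name.
    static final Map<String, List<String>> observed = new HashMap<>();

    static void recordResponse(String field, String value) {
        observed.computeIfAbsent(field, k -> new ArrayList<>()).add(value);
    }

    // Prefer a value observed in a response; otherwise fall back to randomness.
    static String inferValue(String parameterName, Random rnd) {
        List<String> candidates = observed.get(parameterName);
        if (candidates != null && !candidates.isEmpty())
            return candidates.get(rnd.nextInt(candidates.size()));
        return Integer.toString(rnd.nextInt(1000)); // random fallback
    }

    public static void main(String[] args) {
        recordResponse("petId", "42");   // e.g. an id returned by a POST response
        System.out.println(inferValue("petId", new Random())); // prints 42
    }
}
```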
diff --git a/sections/ragoGram.tex b/sections/ragoGram.tex
index 70362e13518cef271476c7a47b623c1ea387e895..3bee9db4676d78049e794536360bf95739eafb13 100644
--- a/sections/ragoGram.tex
+++ b/sections/ragoGram.tex
@@ -1,9 +1,9 @@
 \section{Grammar}
 
-As mentioned in \Cref{sec:openapi}, OpenAPI specifications are written in structured JSON or YAML and do not hold on strict definitions (i.e. programming language-agnostic). It means that specifications can be described and implemented in any programming language or grammar, which also applies to RAG. 
+As mentioned in \Cref{sec:openapi}, OpenAPI specifications are written in structured JSON or YAML and are not bound to strict definitions (i.e. they are programming language-agnostic). This means that specifications can be described and implemented in any programming language or grammar, which also applies to RAG. 
 
 % AST
-To use OpenAPI in RAG, it is firstly necessary to rewrite the OpenAPI structure in an AST. We have constructed this AST in 95 AST-nodes to define 30 objects. The version of OpenAPI considered in this framework is 3.0.0. To have a better overview, the definition of Parameter Object in our AST is shown in Listing~\ref{lst:rago-ast} and can be compared with the definition in the OpenAPI official GitHub\footnote{\url{https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.0.md}}. Every Reference Object has a String token named Ref and refers an object in the OpenAPI document with attributes in Listing~\ref{lst:rago-reference}.
+To use OpenAPI in RAG, it is first necessary to rewrite the OpenAPI structure as an AST. We have constructed this AST with 95 AST nodes to define 30 objects. The version of OpenAPI considered in this framework is 3.0.0. For a better overview, the definition of the Parameter Object in our AST is shown in Listing~\ref{lst:rago-ast} and can be compared with the definition in the official OpenAPI documentation\footnote{\url{https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.0.md}}. Every Reference Object has a String token named Ref and refers to an object in the OpenAPI document via the attributes in Listing~\ref{lst:rago-reference}.
  
 \begin{lstlisting}[language=AST,label={lst:rago-ast},caption={Parameter Object}]
 abstract ParameterOb;
@@ -11,7 +11,6 @@ ParameterReference : ParameterOb ::= <Ref>;
 ParameterObject : ParameterOb ::= <Name> <In> <Description> <Required:Boolean> <DeprecatedBoolean:Boolean> <AllowEmptyValue:Boolean> <Style> <Explode:Boolean> <AllowReserved:Boolean> [SchemaOb] <Example:Object> ExampleTuple* ContentTuple* Extension*;
 \end{lstlisting} 
 
-\newpage
 
 \begin{lstlisting}[language=JRAG,label={lst:rago-reference},caption={Attributes for Reference}]
 coll List<ParameterTuple> OpenAPIObject.parameterTuples() [new ArrayList<>()] root OpenAPIObject;
@@ -29,7 +28,7 @@ eq ParameterReference.parameterObject() {
 }
 \end{lstlisting} 
 
-During transferring the structure from OpenAPI to RAG, several properties of JastAdd were detectable:
+While transferring the structure from OpenAPI to RAG, several characteristics of JastAdd became apparent:
 \begin{description}
 \item \textbf{JastAdd does not support any map structure. So, nodes derived from maps in OpenAPI must be modeled as tuples (a list of named tuples, each containing a key and a value).}
 \end{description} 
diff --git a/sections/ragoProcess.tex b/sections/ragoProcess.tex
index 83d1548a13e578eaef6766080801f890062e724e..ff4c1517ac095d26a214a4c2e050097a5a04ad80 100644
--- a/sections/ragoProcess.tex
+++ b/sections/ragoProcess.tex
@@ -6,13 +6,13 @@
 \caption{Process of RAGO (Parse, Backtransformation)}\label{fig:ragoprocess}
 \end{figure}
 
-After the completion of the syntactic AST structure, declarative definitions are expected to concrete what individual attributes mean semantically. It was a biggest effort to this work. A diagram in \Cref{fig:ragoprocess} represents a graphic overview of the process. 
+After the completion of the syntactic AST structure, the values of an OpenAPI document must be stored in AST nodes. The diagram in \Cref{fig:ragoprocess} gives a graphical overview of the parsing process we implemented.
 
 %Parse OpenAPI -> Java
-Before working with attribute definitions, an input specification in JSON or YAML file needs to be parsed in Java to process it in RAG. A simple JSON or YAML parser does not hold on the schema of an OpenAPI specification. So, we used an additional library, openapi4j\footnote{\url{https://www.openapi4j.org/parser.html}}. It enables to parse and to validate an OpenAPI document in Java and performs well. It has small limitations (e.g. no support for type 'any' and mapping with schema name outside of 'components/schemas' not supported). But, they are not critical to this work.
+Before working with attribute definitions, an input specification in a JSON or YAML file needs to be parsed in Java so that it can be stored in a RAG. A plain JSON or YAML parser does not respect the schema of an OpenAPI specification, so we used an additional library, openapi4j\footnote{\url{https://www.openapi4j.org/parser.html}}. It parses and validates an OpenAPI document in Java and performs well. It has small limitations (e.g. no support for the type 'any', and mappings with a schema name outside of 'components/schemas' are not supported), but they are not critical to this work.
 
 %Parse Java -> RAG
-The next step of processing this framework is transferring parsed objects in java into AST nodes. This step is done with attribute definitions in jrag file. A small definition example of Parameter Object is in Listing~\ref{lst:parser}.
+The next step of this framework is transferring the parsed Java objects into AST nodes. This step is done with declarative definitions (not attributes) in a jrag file. A small definition example for the Parameter Object is given in Listing~\ref{lst:parser}.
  
 \begin{lstlisting}[language=JRAG,label={lst:parser},caption={Parser for Parameter Object}]
 { ...
@@ -24,15 +24,15 @@ The next step of processing this framework is transferring parsed objects in jav
     parameterObject.setDescription(parameter.getDescription());
   if( parameter.getDeprecated() != null )
     parameterObject.setDeprecatedBoolean(parameter.getDeprecated());
-...
+  ...
   return parameterObject;
 }
 \end{lstlisting}
  
-First, "parameter" is an object parsed with openapi4j and "parameterObject" is a node which should describe the structure of Parameter Object in RAG. It checks whether values in "parameter" exist and are set into the matching node in the AST (in this case "parameterObject"). At the end of the attribute, it returns the node.
+Here, "parameter" is an object parsed with openapi4j and "parameterObject" is the node describing the structure of the Parameter Object in RAG. The method checks whether values in "parameter" exist and, if so, sets them in the matching AST node (in this case "parameterObject"). At the end, the method returns the node.
 
 %Transfer back RAG -> Java for validation
-To see if the parser implemented with JastAdd works correctly, parsed nodes in an AST that describes OpenAPI structure must be transferred back into the structure of openapi4j and validated. The validation of objects after processing is explained in \Cref{sec:ragotest}. The way of definitions to transfer RAG into openapi4j is similar to the parser, it only provides the opposite direction of implementation. Listing~\ref{lst:backtransf} allows to compare itself with Listing~\ref{lst:parser}.
+To verify that the parser implemented with JastAdd works correctly, the parsed nodes of the AST describing the OpenAPI structure must be transferred back into the structure of openapi4j and validated. The validation of objects after processing is explained in \Cref{sec:ragotest}. The definitions that transfer the RAG back into openapi4j are similar to the parser, only in the opposite direction of implementation. Listing~\ref{lst:backtransf} can be compared to Listing~\ref{lst:parser}.
 
 \begin{lstlisting}[language=JRAG,label={lst:backtransf},caption={Back-Transformation for Parameter Object}]
 { ...
diff --git a/sections/ragoTest.tex b/sections/ragoTest.tex
index 908e05d8bbfd2fcaba8c4a5bc6ecfe9f0b1f9d4f..29e54e6c697d425303ccb06ed463ee427045d2e0 100644
--- a/sections/ragoTest.tex
+++ b/sections/ragoTest.tex
@@ -1,10 +1,10 @@
 \section{Validation} \label{sec:ragotest}
 
 % General
-For correct (re-)constructions, objects after the processing phase must be the same before. In that sense, a generated JSON or YAML must be equivalent to the given OpenAPI document. Thankfully, there are several practical libraries to compare two JSONs (JsonNode, JsonDiff, JsonPath) and conditions for the assertion were uncomplicated.
+For correct (re-)constructions, the objects after the back-transformation must be the same as before. In that sense, a generated JSON or YAML must be equivalent to the given OpenAPI document. Fortunately, there are several practical libraries to compare two JSONs (JsonNode, JsonDiff, JsonPath), and the conditions for the assertion were uncomplicated.
 
 % Differences
-In the validation, differences with empty values and differences in Reference Objects are excluded. After the observation of OpenAPI documents, we have noticed that every author has an individual implementing behavior (i.e. description parts are always initialized in some APIs or nodes with empty values do not exist in others). Sibling elements of references do not provide semantical differences. So, it was not sensible to generate strictly equivalent values\footnote{\url{https://swagger.io/docs/specification/using-ref/}}. Its implementation is shown in Listing~\ref{lst:validation}.
+In the validation, differences caused by empty values and differences in Reference Objects are excluded. After observing OpenAPI documents, we noticed that every author has an individual implementation style (e.g. description fields are always initialized in some APIs, while nodes with empty values do not exist in others). Empty values and sibling elements of references do not introduce semantic differences, so it was not sensible to require strictly equivalent values\footnote{\url{https://swagger.io/docs/specification/using-ref/}}. The implementation is shown in Listing~\ref{lst:validation}.
 
 \begin{lstlisting}[language=JRAG,label={lst:validation},caption={Validation Method}]
 JsonNode diff = JsonDiff.asJson(expectedNode, actualNode);
@@ -18,17 +18,17 @@ for( int i = diff.size()-1 ; i >= 0 ; i-- ){
   else if( !JsonPath.parse(actualNode.toString()).read(pathNode.substring(0, pathNode.lastIndexOf(".")).concat(".$ref"), String.class).isEmpty() )
     ((ArrayNode) diff).remove(i);
 }
-// if the Jsons are equivalent, there is no reason to to the text comparison.
+// if the Jsons are equivalent, there is no reason for the text comparison.
 // if there is a difference, a text comparison might look better than just the diff.
 if (diff.size() != 0) {
   Assertions.assertEquals(actualNode.toPrettyString(), expectedNode.toPrettyString(), "JSONs are different:\n" + diff.toPrettyString());
 }
 \end{lstlisting}
 
-Firstly, it takes a JSON node which is expected after processing phase and an other node which is actually created as inputs. A path shows only which file is concerned. Then, nodes are compared with JsonDiff. It returns an ArrayNode, "diff" describing which differences they have and where the differences have appeared. Unfortunately, it does not have a filter with empty values. So, the validation method tries to get a value of a path in "diff" and checks if it is empty. The differences with empty values are removed from this ArrayNode. Afterwards, it executes an assertion with a text comparison, if "diff" has an element. A text comparison gives a concreter but bigger overview than JsonDiff.  As described in the comment, it is not necessary to verify nodes twice.
+Firstly, the method takes the JSON node which is expected after the processing phase and the node which is actually created. Then, the nodes are compared with JsonDiff, which returns an ArrayNode "diff" describing which differences exist and where they appear. Unfortunately, JsonDiff cannot filter out empty values. So, the validation method reads the value at each path in "diff" and checks whether it is empty. Differences with empty values or sibling elements of references are removed from this ArrayNode. Afterwards, if "diff" still contains an element, an assertion with a text comparison is executed. A text comparison in pretty-printed form gives a clearer overview than the raw diff.
 
 % Result
-As a result, we have validated the functionality of the structure transfer, while \textbf{974 APIs} from the repository of apis.guru\footnote{\url{https://github.com/APIs-guru/openapi-directory}\label{apis-guru}} are constructed in RAG and reconstructed in openapi4j. APIs involving validation or null pointer errors at openapi4j were excluded. This significant amount of validations presents that this grammar parses and transforms OpenAPI documents which are reliably semantically equivalent to their inputs.
+As a result, we have validated the functionality of the structure transfer: \textbf{974 APIs} from the repository of apis.guru\footnote{\url{https://github.com/APIs-guru/openapi-directory}\label{apis-guru}} were constructed in RAG and reconstructed in openapi4j. APIs causing validation or null pointer errors in openapi4j were excluded. This significant number of validations shows that this grammar reliably parses and transforms OpenAPI documents into results which are semantically equivalent to their input specifications.
 
 % ^{3} : api.guru
 % ^{4} : https://swagger.io/docs/specification/using-ref/
\ No newline at end of file
diff --git a/sections/relatedwork.tex b/sections/relatedwork.tex
index 53d224cd9082b75d1a59733df2a045507024402b..a72b802fa53c4ca55a7e992bc016394872ca8954 100644
--- a/sections/relatedwork.tex
+++ b/sections/relatedwork.tex
@@ -1,6 +1,6 @@
 \chapter{Current Approaches for OpenAPI Testing}\label{ch:curr-appr}
 
-There are already many approaches suggesting to test REST APIs with OpenAPI specifications automatically. Mostly, they are black-box approaches where an access to a source code is irrelevant \cite{atlidakis2019restler}, \cite{ed2018automatic}, \cite{karlsson2020quickrest}, \cite{martin2020restest}, \cite{viglianisi2020resttestgen}. There is also a white box approach \cite{arcuri2019restful} which is not the main research target of this paper, but still an interesting way. \\ This section introduces which approaches have been released so far, how they work and which did they achieve. 
+This Chapter introduces the current approaches developed for the automated testing of REST APIs described in OpenAPI documents. Most of them are black-box approaches, where access to the source code is irrelevant \cite{atlidakis2019restler}, \cite{ed2018automatic}, \cite{karlsson2020quickrest}, \cite{martin2020restest}, \cite{viglianisi2020resttestgen}. There is also a white-box approach \cite{arcuri2019restful}, which is not the main research target of this paper but still an interesting direction. \\ Mainly, this Chapter investigates which approaches have been released so far, how they work and what they have achieved. 
 
 
 \input{sections/specBasedAppr}
diff --git a/sections/restapi.tex b/sections/restapi.tex
index 56d0dfb717316b1e44528d2f7433f190a478ee66..2bd6d4ec21b655a3bcaf1ce767237c4cadf013dc 100644
--- a/sections/restapi.tex
+++ b/sections/restapi.tex
@@ -1,8 +1,7 @@
 \section{REST API}
 
 %General
-REST API is an API (Application Programming Interface) that conforms the common architectural style for web services, REST (REpresentational State Transfer). \\
-The concept of REST has been proposed by Roy Fielding in 2000 \cite{fielding2000architectural}.
+A REST API is an API (Application Programming Interface) that conforms to REST (REpresentational State Transfer), the common architectural style for web services created to standardize the design and development of the World Wide Web. The concept of REST was proposed by Roy Fielding in 2000 \cite{fielding2000architectural}.
 %Constraints
 It offers constraints to increase simplicity, performance, visibility, modifiability, portability and reliability of APIs. \\
 The constraints defined in REST are the following: 
diff --git a/sections/restler.tex b/sections/restler.tex
index 726be1d140d2f94f69716944da9d859b0aba9b62..c4333f7017587dc032f9d69f2ea7fd3c4cee5e5b 100644
--- a/sections/restler.tex
+++ b/sections/restler.tex
@@ -1,18 +1,8 @@
 \section{Stateful REST API Fuzzer}
-The last approach related to this paper is RESTler, the first stateful REST API fuzzer [6]. Test cases generated by this tool are also called sequences, i.e. a set of requests, where a request depends on previous requests, if there are several test cases executed.
+The last approach related to this paper is RESTler, the first stateful REST API fuzzer \cite{atlidakis2019restler}. Test cases generated by this tool are also called sequences, i.e. sets of requests in which a request depends on the previously executed requests.
 
 %Process
-Fuzzing is an automated technology for software tests in which the program to be tested is repeatedly loaded with random data at one or more input interfaces. So, execution time of RESTler can be arbitrarily controlled by a tester. \\
 The focus of test generation in RESTler is not on whether a request is valid or invalid with respect to the specification, but on inferring dependencies between requests (e.g. the constraint to execute a request B after a request A because an input type of B corresponds to the output type of A) and on analyzing dynamic feedback from prior test executions (e.g. if a request C is refused after the sequence A;B, C will no longer be executed after A;B).
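The producer-consumer inference can be sketched roughly as follows; the request model and the "petId" resource type are illustrative assumptions, not RESTler's actual data structures:

```java
import java.util.*;

// Sketch: a request may only be appended to a sequence once every resource
// type it consumes has been produced by an earlier request.
public class DependencySketch {
    record Request(String name, Set<String> consumes, Set<String> produces) {}

    static List<Request> executable(List<Request> candidates, Set<String> producedSoFar) {
        List<Request> result = new ArrayList<>();
        for (Request r : candidates)
            if (producedSoFar.containsAll(r.consumes()))
                result.add(r);
        return result;
    }

    public static void main(String[] args) {
        Request a = new Request("POST /pets", Set.of(), Set.of("petId"));
        Request b = new Request("GET /pets/{petId}", Set.of("petId"), Set.of());
        // Initially only the producer A is executable; after A has produced
        // "petId", the consumer B becomes executable as well.
        System.out.println(executable(List.of(a, b), Set.of()).size());        // prints 1
        System.out.println(executable(List.of(a, b), Set.of("petId")).size()); // prints 2
    }
}
```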
 
 %Research Results
-RESTler found 28 bugs in GitLab and several bugs in Microsoft Azure and Office365. In terms of coverage, it is proved that the code coverage increases, before it stops to gain at one time. This tool is not commonly applicable yet, but still valuable to develop more, because it suggests to test APIs with stateful test cases.
-
-
-
-\begin{comment}
-This paper introduces RESTler, the first stateful REST API fuzzer. RESTler analyzes the API specification of a cloud service and generates sequences of requests that automatically test the service through its API. RESTler generates test sequences by (1) inferring producer-consumer dependencies among request types declared in the specification (e.g., inferring that “a request B should be executed after request A” because B takes as an input a resource-id x produced by A) and by (2) analyzing dynamic feedback from responses observed during prior test executions in order to generate new tests (e.g., learning that “a request C after a request sequence A;B is refused by the service” and therefore avoiding this combination in the future).
-We present experimental results showing that these two techniques are necessary to thoroughly exercise a service under test while pruning the large search space of possible request sequences. We used RESTler to test GitLab, an open-source Git service, as well as several Microsoft Azure and Office365 cloud services. RESTler found 28 bugs in GitLab and several bugs in each of the Azure and Office365 cloud services tested so far. These bugs have been confirmed and fixed by the service owners.
-
-RESTler is the first automatic tool for stateful fuzzing of cloud services through their REST APIs. While still in early stages of development, RESTler was able to find 28 bugs in GitLab and several bugs in each of the four Azure and Office365 cloud services tested so far. Although still preliminary, our results are encouraging. How general are these results? To find out, we need to fuzz more services through their REST APIs and check more properties to detect different kinds of bugs and security vulnerabilities. Indeed, unlike buffer overflows in binary-format parsers, use-after-free bugs in web browsers, or cross-site-scripting attacks in web-pages, it is still unclear what security vulnerabilities might hide behind REST APIs. While past human-intensive pen testing efforts targeting cloud services provide evidence that such vulnerabilities do exist, this evidence is still too anecdotal. New automated tools, like RESTler, are needed for more systematic answers. How many bugs can be found by fuzzing REST APIs? How security-critical will they be? This paper provides a clear path forward to answer these questions.
-\end{comment}
\ No newline at end of file
+RESTler found 28 bugs in GitLab and several bugs in Microsoft Azure and Office365. In terms of coverage, it was shown that the code coverage increases before it stops gaining at some point. This tool is not commonly applicable yet, but it is still valuable to develop further, because it suggests testing APIs with stateful test cases.
\ No newline at end of file
diff --git a/sections/specBasedAppr.tex b/sections/specBasedAppr.tex
index 140d4f42faf6ed980911bdc1c006f5cec9081c93..1a572a4c0e20b9e32132c780bbff19d2219ea13c 100644
--- a/sections/specBasedAppr.tex
+++ b/sections/specBasedAppr.tex
@@ -1,12 +1,12 @@
 \section{Specification-based Approach}
-In 2018 Ed-douibi proposed a prototype to generate test cases for REST APIs relying on their OpenAPI specifications [2]. This approach receives an OpenAPI specification in JSON file first, configure meta-models (an OpenAPI meta-model and a test suite meta-model) and generate test cases.
+In 2018, Ed-douibi proposed a prototype to generate test cases for REST APIs relying on their OpenAPI specifications \cite{ed2018automatic}. This approach first receives an OpenAPI specification as a JSON file, then configures meta-models (an OpenAPI meta-model and a test suite meta-model) and generates test cases.
 
 %Process
-After a model extraction (OpenAPI specification into OpenAPI meta-model) it returns a configured OpenAPI meta-model contains a set of valid models for a language to simplify the integration and modification of information in the specification. \\
-A Model transformation (OpenAPI meta-model into test suite meta-model) considers only properties which are relevant to test cases (HTTP-requests). There are several production rules to define valide and invalide requests and generate them. After the model transformation test cases are generated and used to test the corresponding REST API. \\
-The main research goals of Ed-douibi and his team were not only suggesting a test method but also finding out which coverage level (in terms of endpoints, operations, parameters and data definitions of the OpenAPI definition) their tool implementation has and where REST APIs in reality fail mostly in the definitions and implementation.
+After a model extraction (OpenAPI specification into OpenAPI meta-model), it returns a configured OpenAPI meta-model that contains a set of valid models for a language to simplify the integration and modification of information in the specification. \\
+A model transformation (OpenAPI meta-model into test suite meta-model) considers only the properties which are relevant to test cases (HTTP requests). There are several production rules, for instance parameter inference, to define and generate valid and invalid requests. After the model transformation, the test cases are generated and used to test the corresponding REST API. \\
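One such production rule, e.g. deriving an invalid request by omitting a required parameter, could look roughly like this (the parameter model and sample values are illustrative assumptions, not the authors' meta-model):

```java
import java.util.*;

// Sketch: derive a valid and an invalid parameter assignment for one operation.
public class ProductionRuleSketch {
    record Param(String name, boolean required, String sampleValue) {}

    // Valid request: every required parameter gets a value.
    static Map<String, String> validRequest(List<Param> params) {
        Map<String, String> values = new LinkedHashMap<>();
        for (Param p : params)
            if (p.required()) values.put(p.name(), p.sampleValue());
        return values;
    }

    // Invalid request: drop one required parameter, violating the specification.
    static Map<String, String> invalidRequest(List<Param> params) {
        Map<String, String> values = validRequest(params);
        values.keySet().stream().findFirst().ifPresent(values::remove);
        return values;
    }

    public static void main(String[] args) {
        List<Param> params = List.of(new Param("petId", true, "42"),
                                     new Param("limit", false, "10"));
        System.out.println(validRequest(params));   // prints {petId=42}
        System.out.println(invalidRequest(params)); // prints {}
    }
}
```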
+The main research goals of Ed-douibi and his team were not only to suggest a test method, but also to find out which coverage level (in terms of endpoints, operations, parameters and data definitions of the OpenAPI definition) their tool implementation achieves and where real-world REST APIs mostly fail in their definitions and implementations.
 
 %Research results
-As results they could determine that the tool implementation is practically usable in real REST APIs because 87\% of operations, 62\% of parameters, 81\% of endpoints and 76\% of definitions in 91 APIs were testable. They also figured out the main failing points in the definition and implementation: mistakes in the definition (e.g. missing required field, wrong JSON schema) and bad implementation of the APIs (e.g. unhandled exceptions in the server). 
+As a result, they could determine that the tool implementation is practically usable on real REST APIs, because 87\% of operations, 62\% of parameters, 81\% of endpoints and 76\% of definitions in 91 APIs were testable. They also figured out the main failure points in the definitions and implementations: mistakes in the definition (e.g. a missing required field, a wrong JSON schema) and bad implementation of the APIs (e.g. unhandled exceptions in the server). 
 
 %[2] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8536162&casa_token=Jo2-ebbuzXsAAAAA:93fO5rSQWxTCoBBFA5pf8dfdyGE0-r2N77nMemvtFLZWIyfWfuvix___iya5o1K8m-4TNxuRZQ&tag=1
\ No newline at end of file
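The rule-based generation of valid and invalid requests described above can be illustrated with a small sketch. This is not the authors' implementation; the parameter definition and the two production rules (wrong type, missing required value) are hypothetical examples of the kind of rules the approach applies:

```python
import random
import string

# Hypothetical parameter definition distilled from an OpenAPI document.
SPEC_PARAM = {"name": "petId", "in": "path", "required": True, "type": "integer"}

def valid_value(param):
    """Production rule for a valid request value."""
    if param["type"] == "integer":
        return random.randint(0, 1000)
    return "".join(random.choices(string.ascii_letters, k=8))

def invalid_values(param):
    """Production rules for invalid requests: wrong type, missing required value."""
    cases = []
    if param["type"] == "integer":
        cases.append("not-a-number")  # type violation
    if param["required"]:
        cases.append(None)            # missing required parameter
    return cases

# One valid and several invalid request values for the same parameter.
requests = [("valid", valid_value(SPEC_PARAM))] + \
           [("invalid", v) for v in invalid_values(SPEC_PARAM)]
```

Each generated value would then be substituted into a concrete HTTP request, with the test oracle expecting 2xx responses for the valid case and 4xx responses for the invalid ones.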
diff --git a/sections/stm.tex b/sections/stm.tex
index 9a7604cb7719066b0d809031027a64af75bb7fd6..119904163f8b026a358957685cb31b71720c3a4c 100644
--- a/sections/stm.tex
+++ b/sections/stm.tex
@@ -1,6 +1,6 @@
 \chapter{Test Methods}\label{ch:stm}
 
-As we have discussed in \Cref{ch:curr-appr}, most black-box REST API tests principally use Fuzzing, sending unexpected, random data or data providing errors into input interfaces. For this purpose, the newly restructured OpenAPI can be extended, while test cases are generated and generated requests are sent into API. The implementation of this work contains two basic approaches related to Fuzzing. Following sections present Random Testing in \Cref{sec:stm1} and Parameter Inference in \Cref{sec:stm2}.
+As we have discussed in \Cref{ch:curr-appr}, most black-box REST API tests principally use Fuzzing, sending unexpected, random or error-provoking data to input interfaces. For this purpose, the newly constructed OpenAPI grammar can be extended with Fuzzing tests. \Cref{sec:stm1} introduces a basic random test method. \Cref{sec:stm2} presents the parameter inference approach used in this work.
 
 \input{sections/stm1}
 \input{sections/stm2}
\ No newline at end of file
diff --git a/sections/stm1.tex b/sections/stm1.tex
index e48e9eabe2f21e1d2e579351c00fea95a01db7ec..1b4199d6538963ea127edc1d0fe19d83fc846df2 100644
--- a/sections/stm1.tex
+++ b/sections/stm1.tex
@@ -1,7 +1,7 @@
 \section{Random Testing} \label{sec:stm1}
 
-OpenAPI defines a parameter in an operation in four types, Path, Query, Header and Cookie\footnote{\url{https://swagger.io/docs/specification/describing-parameters}}. In this work, only Path and Query parameters are considered to research the functionality of the Fuzzing prototype in RAG. These parameters are clearly describable in String values and also comfortably testable, because they target only variable URIs. \\
-For the experiments, the OpenAPI document of a Pet Store\footnote{\url{https://petstore.swagger.io}\label{petstore}} is mainly used. Additionally, only GET and POST operations are tested to research basic functionality firstly.
+OpenAPI defines the parameters of an operation in four types: Path, Query, Header and Cookie\footnote{\url{https://swagger.io/docs/specification/describing-parameters}}. In this work, only Path and Query parameters are considered to research the functionality of Fuzzing in RAGO API. These parameters can be clearly described as String values and are also conveniently testable, because they only target variable URIs. \\
+For the experiments, the OpenAPI document of a Pet Store\footnote{\url{https://petstore.swagger.io}\label{petstore}} is mainly used. Additionally, only GET and POST operations are tested, to examine the basic functionality first.
 
 % Process
 The main code for random testing is the following:
@@ -23,11 +23,12 @@ syn String ParameterObject.randomPathParameter(String uri); // Generate random P
 
 syn String ParameterObject.randomQueryParameter(String uri); // Generate random Query and save in URI
 \end{lstlisting}
-Initially, the generator for random testing computes a list of parameter objects stored in RAG and iterates all elements in this list (Line 2, Listing~\ref{lst:random-testing}). Subsequently, each iteration examines in which type the parameter is and produces a random URI with synthesized attributes (Listing~\ref{lst:rt-attribute}). This URI is saved in a String variable (Line 4-7, Listing~\ref{lst:random-testing}). Finally, the test generator sends a request
+
+Initially, the generator for random testing computes a list of parameter objects and iterates over all elements in the list (Line 2, Listing~\ref{lst:random-testing}). Subsequently, each iteration examines the type of the parameter and produces a random URI with synthesized attributes (Listing~\ref{lst:rt-attribute}). This URI is saved in a String variable (Lines 4-7, Listing~\ref{lst:random-testing}). Finally, the test generator sends a request
 with the generated URI (Line 9, Listing~\ref{lst:random-testing}).
 
 % Result
-Besides operations with requirements of request bodies, \textbf{results of this implementation made possible to observe that parameters were successfully randomized and they produced documented status codes in Pet store\footref{petstore} (200, 400, 404, 405 status codes).} For future work, constraints of schema (minItems, maxItems, minLengths, maxLengths, etc.) can be completely extended. In this approach, the generator considers only the existence of enums.
+Apart from operations that require request bodies, the results of this implementation showed \textbf{that parameters were successfully randomized and produced the documented status codes in the Pet store\footref{petstore} (200, 400, 404 and 405).} For future work, schema constraints (minItems, maxItems, minLength, maxLength, etc.) can be fully supported. In this approach, the generator considers only the existence of enumerations.
 
 % ^5 : https://swagger.io/docs/specification/describing-parameters/
 % ^6 : https://petstore.swagger.io
\ No newline at end of file
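The random substitution of Path and Query parameters into a URI can be sketched as follows. This is a minimal, self-contained illustration; the function names mirror the RAG attributes only loosely and are not the actual implementation:

```python
import random
import string

def random_string(length=8):
    """Produce a random alphanumeric value for a parameter."""
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=length))

def random_path_parameter(uri, name):
    """Replace a {name} template in the URI with a random value."""
    return uri.replace("{" + name + "}", random_string())

def random_query_parameter(uri, name):
    """Append a randomly generated query parameter to the URI."""
    sep = "&" if "?" in uri else "?"
    return uri + sep + name + "=" + random_string()

# Hypothetical Pet Store-style endpoint with one path and one query parameter.
uri = random_path_parameter("/pet/{petId}", "petId")
uri = random_query_parameter(uri, "status")
```

The resulting URI would then be sent as a GET or POST request, and the returned status code recorded as the test observation.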
diff --git a/sections/stm2.tex b/sections/stm2.tex
index 9b8e105042411e41f34d2c9fbd06afb986409ea8..c61bf36d6194abf1ebc7826c2f6117f44ab6b47a 100644
--- a/sections/stm2.tex
+++ b/sections/stm2.tex
@@ -1,8 +1,8 @@
 \section{Parameter Inference} \label{sec:stm2}
-Random testing is a one of easiest way to test API and can be useful in some situations. However, it is not effective in REST API testing, because the coverage of the tested API would be particularly low and random values are unusually valid. During the observation in \Cref{sec:stm1}, it was clear to see that random testing mostly produces only requests that receive only 4xx HTTP stauts codes from commercial APIs. 
+Random testing is one of the easiest ways to test an API and can be useful in some situations. However, it is not effective in REST API testing, because the coverage of the tested API would be particularly low and random values are rarely valid \cite{karlsson2020quickrest}. During the observation in \Cref{sec:stm1}, it was clear to see that random testing mostly produces requests that receive only 4xx HTTP status codes from commercial APIs. 
 
 % General
-To solve this problem, most of REST API testing approaches use a stateful process, because it enables to analyze properties of APIs and infer inputs which are more appropriate than random inputs. There are several suggestions in \Cref{ch:curr-appr}, this framework investigates a inference of parameters with operation dependency motivated by Specification-based Approach \cite{ed2018automatic} and RESTTESTGEN \cite{viglianisi2020resttestgen}. Generally, it collects all responses and inferences parameters contributing the same schema of a succesful response. If there is a schema set in a request and a response, parameters of them are inferred by three strategies:
+To solve this problem, most REST API testing approaches use a stateful process, because it enables analyzing elements of APIs and inferring inputs that are more appropriate than random inputs. There are several suggestions in \Cref{ch:curr-appr}. This framework investigates an inference of parameters with algorithms motivated by the Specification-based Approach \cite{ed2018automatic} and RESTTESTGEN \cite{viglianisi2020resttestgen}. Generally, it collects all responses and infers parameters that contribute to the same schema as a successful response. If a request and a response share the same schema, their parameters are inferred by three strategies:
 \begin{itemize}
 \item \textbf{Case insensitive}
 \item \textbf{Id completion} in a field name (e.g. if a property is named with "id", it gets an additional field name available in the specification)
@@ -19,9 +19,9 @@ for (ResponseTuple t : getResponseTuples()) {
   if (responseCode == 200) {
     SchemaObject respSchema = t.getResponseSchema();
     if (respSchema.getType().equals("array"))
-      writeDictionaryWithArray(respSchema, response.toString());
+      list = writeDictionaryWithArray(respSchema, response.toString());
     else
-      writeDictionary(respSchema, response.toString());
+      list.add(writeDictionary(respSchema, response.toString()));
   }
 }
 ...
@@ -64,7 +64,9 @@ syn List<String> ParameterObject.addinfPathParameters(String pathRef,List<String
 }
 \end{lstlisting}
 
-Before it starts with the parameter inference, random tests of \Cref{sec:stm1} are generated first (Line 1, Listing~\ref{lst:parameter-inference}). During this execution, the status code of a response is checked if it is a successful response with 200 status code (Line 4, Listing~\ref{lst:parameter-inference}). Afterwards, the response schema of returned values is also checked. If it is in type array, the function "writeDictionary" is iterated, otherwise it only executed once (Listing~\ref{lst:parameter-dict}). The attribute "writeDictionary" saves the returend values of a successful response in seperate properties and write them in a dictionary (e.g. properties "id" and "name" are seperately stored with their value in the dictionary). If the schema of a response provides a reference of a schema object, the field name gets a name of a reference as prefix. Subsequently, this implementation does the similar way of execution in random testing at the generation phase. Firstly, it iterates all parameter objects (Line 14, Listing~\ref{lst:parameter-inference}) and examines whether the parameter type is Path or Query (Line 15-18, Listing~\ref{lst:parameter-inference}). Both attributes return URIs with parameter values inferred by the dictionary and case insensitive comparison (Line 4, Listing~\ref{lst:parameter-case}). Generated URIs are put in a list. Lastly, the generator attribute sends requests with the URIs and starts with observation (Line 20-21, Listing~\ref{lst:parameter-inference}).
+Before the parameter inference starts, the random tests of \Cref{sec:stm1} are generated first (Line 1, Listing~\ref{lst:parameter-inference}). During this execution, the status code of a response is checked to see whether it is a successful response with status code 200 (Line 4, Listing~\ref{lst:parameter-inference}). Afterwards, the response schema of the returned values is also checked. If it is of type array, the function "writeDictionary" is iterated, otherwise it is only executed once (Lines 6-9, Listing~\ref{lst:parameter-inference}). The attribute "writeDictionary" saves the returned values of a successful response in separate properties and writes them into a dictionary (e.g. the properties "id" and "name" are stored separately with their values in the dictionary). If the schema of a response provides a reference to a schema object, the field name gets the name of the reference as prefix (Listing~\ref{lst:parameter-dict}). Subsequently, this implementation proceeds similarly to random testing in the generation phase. Firstly, it iterates over all parameter objects (Line 14, Listing~\ref{lst:parameter-inference}) and examines whether the parameter type is Path or Query (Lines 16-19, Listing~\ref{lst:parameter-inference}). Both attributes return URIs with parameter values inferred from the dictionary by case-insensitive comparison (Line 4, Listing~\ref{lst:parameter-case}). Generated URIs are put into a list. Lastly, the generator attribute sends requests with the URIs and starts the observation (Lines 20-21, Listing~\ref{lst:parameter-inference}).
+
+\newpage
 
 % Result
-As results, \textbf{the test case generator with parameter inference implemented in this framework could create maximum over 300 acceptable URIs for the parameter petId in the selected API, pet store\footref{petstore}, at the operation getPetById.} It generated also numerous requests denied by the server. After the observation of several execution iterations, it can be assumed that the API with this operation sends 200 or 404 status codes randomly or according to some rules, because a same URI provided status codes in that way.
\ No newline at end of file
+As a result, the test case generator with parameter inference implemented in this framework could \textbf{create over 300 acceptable URIs for the parameter petId at the operation getPetById of the selected API, the Pet store\footref{petstore}.} It also generated numerous requests that were denied by the server. After observing several execution iterations, it can be assumed that the API answers this operation with 200 or 404 status codes randomly or according to some rules, because the same URI produced status codes in both ways.
\ No newline at end of file
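The dictionary-based inference described above can be sketched in a few lines. This is a simplified illustration under assumed names (the real implementation uses RAG attributes, and the "id completion" and case-insensitive rules are reduced to plain string comparisons here):

```python
def write_dictionary(schema_name, response_obj, dictionary):
    """Store each property of a successful response separately,
    prefixed with the referenced schema name (e.g. 'Pet.id')."""
    for field, value in response_obj.items():
        dictionary.setdefault(schema_name + "." + field, []).append(value)

def infer_parameter(param_name, dictionary):
    """Case-insensitive lookup with a simple 'id completion':
    a parameter like 'petId' also matches the entry 'Pet.id'."""
    target = param_name.lower()
    for key, values in dictionary.items():
        schema, field = key.split(".", 1)
        if field.lower() == target:
            return values
        if target == (schema + field).lower():  # id completion: petId -> Pet.id
            return values
    return []

# A successful response fills the dictionary; later requests reuse its values.
d = {}
write_dictionary("Pet", {"id": 7, "name": "doggie"}, d)
inferred = infer_parameter("petId", d)  # values usable for /pet/{petId}
```

Instead of a random value, the generator can now substitute a previously observed value such as an existing pet id into the URI, which is why far more requests are accepted by the server.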
diff --git a/sections/summary.tex b/sections/summary.tex
index 2f3e57f08b1ffad40c57c78644027d86b80b5a8d..a2f83ca4e1bb7ac4e184d497fb5f3153d1ba3f53 100644
--- a/sections/summary.tex
+++ b/sections/summary.tex
@@ -43,4 +43,4 @@ Inference of Request Dependency &
 \caption{Overview of Approaches in Table}\label{tab:sum}
 \end{table}
 
-In this work, we found that most current testing approaches using OpenAPI are based on Fuzzing. HTTP status codes were mostly the basis of test oracles, Specification-based, Constraint-based and Operation Dependency Graph Approaches collect also schema validation errors. All of the approaches considered 500 status codes as bugs and several of them expects only 200 status codes by valid test cases and 4xx status codes by error test cases. To result more precisely, they all suggest to inference data of a server or dependency between requests or requests and responses. This usage of inferences requires the stateful process which means requests depend on previous sequences (i.e. requests, responses). The simplified explanantion of current researches is in \Cref{tab:sum}
\ No newline at end of file
+In this work, we found that most current testing approaches using OpenAPI are based on Fuzzing, sending random, invalid or unexpected data into interfaces and observing the responses. HTTP status codes were mostly the basis of the test oracles; the Specification-based, Constraint-based and Operation Dependency Graph approaches also collect schema validation errors. All of the approaches considered 500 status codes as bugs, and several of them expect only 200 status codes for valid test cases and 4xx status codes for error test cases. To obtain more precise results, they all suggest inferring data of a server or dependencies between requests, or between requests and responses. This usage of inference requires a stateful process, which means requests depend on previous sequences (i.e. requests and responses). A simplified overview of the current research is given in \Cref{tab:sum}.
\ No newline at end of file
diff --git a/sections/testing.tex b/sections/testing.tex
index ed872e9f44b6d49c20f647859293faa0cb715e50..51d43b8d0b26bbecf6127f6f3d1bb5f5bb086c9f 100644
--- a/sections/testing.tex
+++ b/sections/testing.tex
@@ -1,12 +1,12 @@
 \section{Software Testing}
 
-Software testing is a method to see whether a software product works with its expected functionality and is defect free. The problem without testing is that bugs are expensive or also critical in terms of security. Solving this problem brings the software huge advantages (e.g. effectiveness, stronger security, robust software quality). \\
+Software testing is an examination to determine whether a software product works with its expected functionality and is defect free. Without testing, bugs can become expensive or even critical in terms of security. Solving this problem brings the software huge advantages (e.g. effectiveness, stronger security, robust software quality). \\
 There are several classifications in software testing. One classification is related to goals of testing: testing if a software does not have critical bugs (Functional Testing), testing if a software is effective enough (Non-Functional Testing) and modifying an existent software product to correct appeared bugs (Regression Testing) \cite{anirban2015software}. 
-If a software product is implemented small enough, it is commonly better to test manually and a tester individually defines test cases. But, in most industrial softwares, like REST APIs, it is necessary to automate tests, because the products are too big and manual testing is too difficult at this point.
+If a software product is small enough, it is commonly better to test manually, with a tester individually defining the test cases. But in most industrial software, like REST APIs, it is necessary to automate tests, because the products might be too large to write all required test cases manually.
 
 %2
-Afterwards, there is a box approach to divide software testing into three categories, black and white Box Testing \cite{saleh2009software}. Black box tests are developed without knowledge of the internal structure of the system to be tested, but on the basis of development documents. In practice, black box tests are usually not processed by developers of a target software, but by technically oriented testers or by special test departments or test teams. White box tests are developed on the basis of knowledge about the internal structure of the component to be tested. \\
-This paper focuses on automated testing with a black box approach. In \Cref{sec:openapi} the selection of the approach is explained.
+Software testing can also be distinguished by the box approach, which has two categories: black-box and white-box testing \cite{saleh2009software}. Black box tests are developed without knowledge of the internal structure of the system to be tested, but on the basis of development documents. In practice, black box tests are usually not written by the developers of the target software, but by technically oriented testers or by special test departments or test teams. White box tests are developed on the basis of knowledge about the internal structure of the component to be tested. \\
+This paper focuses on automated testing with a black box approach. In \Cref{sec:openapi}, the selection of the approach is explained.
 
 
 % [1] Saleh, K.A. (2009). Software Engineering. J. Ross Publishing. pp. 224–41.