Overview
In the paper, we describe FlexJava, a small set of language extensions that significantly reduces the annotation effort, paving the way for practical approximate programming. These extensions enable programmers to annotate approximation-tolerant method outputs.
The FlexJava compiler, which is equipped with an approximation safety analysis, automatically infers the operations and data that affect these outputs and selectively marks them approximable while giving safety guarantees. The automation and the language-compiler co-design relieve programmers from manually and explicitly annotating data declarations or operations as safe to approximate. FlexJava is designed to support safety, modularity, generality, and scalability in software development.
We have implemented the FlexJava annotations as a Java library, and we demonstrate their practicality using a wide range of Java applications.
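As a rough sketch of what such an annotation looks like in code, the example below marks the output of a small image-processing kernel as approximation-tolerant. This is only an illustration under assumed names: the annotation call (here Approx.loosen) and its stub class are placeholders, not necessarily the real FlexJava library API, which is documented in the paper and the repositories below.

// Illustrative sketch only: "Approx.loosen" is a placeholder for a FlexJava
// library annotation call, not necessarily the library's real API. The stub
// class below stands in for the annotation library so the example compiles.
class Approx {
    static void loosen(double value) { /* marker consumed by the compiler's safety analysis */ }
}

public class SobelExample {
    // Gradient magnitude of one pixel: it only affects output image quality,
    // so the programmer marks the result as approximation-tolerant.
    static double gradient(double gx, double gy) {
        double magnitude = Math.sqrt(gx * gx + gy * gy);
        Approx.loosen(magnitude); // annotate the approximation-tolerant output
        return magnitude;
    }

    public static void main(String[] args) {
        System.out.println(gradient(3.0, 4.0)); // prints 5.0
    }
}

From a single output annotation like this, the compiler's safety analysis infers which upstream operations and data (here, for example, the multiplications and the square root) may be marked approximable, without the programmer touching individual declarations.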
This replication package contains the FlexJava compiler, which supports both fine-grained and coarse-grained approximation.
Note that the FlexJava language and compiler can support general coarse-grained approximation technologies; here we provide the NPU framework as an example of their use.
For fine-grained approximation, we have also included the modified EnerJ simulator, which allows you to execute FlexJava binaries for quality and energy measurement.
Moreover, all the benchmarks that we used for experiments in the paper are included in the package.
Download Tools
We created a VHD (Virtual Hard Disk) image on VirtualBox so that you can readily download the entire image file and run the experiments without manually installing all the tools and setting up the environment.
You can access the image file at the following link:
For users who just want to investigate the source code, we also provide Git repositories on the following Bitbucket pages:
Alternatively, you can type the following commands to clone the Git repositories onto your local machine:
Fine-grained FlexJava:   git clone https://bitbucket.org/act-lab/r2.code.git
Coarse-grained FlexJava:   git clone https://bitbucket.org/act-lab/flexjava.npubench.git
The Bitbucket pages have detailed README files explaining how to set up the tools and run the analysis. The source code embedded in the VM image is the same version as in the Git repositories.
Instructions
These instructions assume that you have downloaded the VHD file (the VM image file) and built the VM environment with a virtualization tool such as VirtualBox. If you explore the source code directly instead, you may need to follow the instructions on the Bitbucket pages and set up a proper environment first.
Fine-grained Approximation
Build Tools
We have already installed and built all the source code necessary to run the analysis and the simulation.
You can find the files in the r2.code directory under the home directory:
flexjava@FSE-Artifact-Evaluation:~$ cd r2.code
If you want to modify the source code and rebuild the tools, simply type the following:
flexjava@FSE-Artifact-Evaluation:~/r2.code$ ./build.sh
Program with FlexJava
All the benchmarks are placed under the r2.apps directory:
- sor, smm, mc, fft, lu: SciMark2 benchmarks (scientific kernels)
- sobel: Image edge detection
- simpleRaytracer (raytracer): 3D image renderer
- jmeint: jMonkeyEngine game (triangle intersection kernel)
- zxing: Barcode decoder for mobile phones
The source code has already been annotated with FlexJava annotations. You can find the source code in the src directory of each benchmark directory.
Run Approximation Safety Analysis
We are now ready to run the analysis and observe the results. Let's go with one of the benchmarks, mc. To run the analysis, simply type the following commands:
flexjava@FSE-Artifact-Evaluation:~/r2.code$ cd r2.apps/mc
flexjava@FSE-Artifact-Evaluation:~/r2.code/r2.apps/mc$ ./analyze.py
This script will (1) compile the source code, (2) run the analysis, and (3) perform source code highlighting (back annotation) in a replicated source directory, src-marked. You can observe where approximation is applied in the src-marked directory; if you are not satisfied with the results, you can modify the annotations in the source directory src and rerun the analysis by typing "./analyze.py".
Run Simulation
If the analysis results are satisfactory, we can move on to the next step, simulation.
The simulation script, runsimulation.py, takes one argument, which specifies the system model you want to simulate. Four system models are supported by the EnerJ simulator:
- aggressive
- high
- medium
- low
For example, to run the simulation on the medium system model, type the following command:
flexjava@FSE-Artifact-Evaluation:~/r2.code/r2.apps/mc$ ./runsimulation.py medium
Note that since the architecture model in the simulator is probabilistic, we ran the experiments multiple times and averaged the results reported in the paper. For this reason, the results from your simulation may not exactly match those provided in the paper.
Coarse-grained Approximation (NPU)
Program with FlexJava
The NPU benchmarks and tools that we used are based on AxBench (http://axbench.org/) and have been ported from C/C++ to Java. You can find the files in the flexjava.npubench directory under the home directory of the VHD image.
flexjava@FSE-Artifact-Evaluation:~$ cd flexjava.npubench
The "
application.java" directory contains the four benchmarks (fft, jmeint, sobel, and simpleRaytracer) that have been evaluated in the paper for coarse-grained approximation (NPU). Let's have a look an example,
sobel.
flexjava@FSE-Artifact-Evaluation:~/flexjava.npubench$ cd applications.java/sobel
Build Benchmark
We provide a Makefile that performs the necessary preprocessing of the annotations using the C/C++-based AxBench tools and compiles the processed Java source code. You can simply type the following:
flexjava@FSE-Artifact-Evaluation:~/flexjava.npubench/applications.java/sobel$ make
Run NPU Code
We have a script that (1) trains the specified approximable region, (2) algorithmically transforms the region into a neural network, and (3) runs the transformed program using a machine learning library, FANN (http://leenissen.dk/fann/wp/). To run the script, follow the commands below:
flexjava@FSE-Artifact-Evaluation:~/flexjava.npubench/applications.java/sobel$ cd ../..
flexjava@FSE-Artifact-Evaluation:~/flexjava.npubench$ ./run_java.sh run sobel
You will then be prompted for the compilation parameters required for training (e.g., learning rate). You can give any values to the parameters, but the following are the default values you can pass (a sketch of the kind of network these defaults describe follows the list):
Learning rate [0.1-1.0]: 0.1
Epoch number [1-10000]: 1
Sampling rate [0.1-1.0]: 0.1
Test data fraction [0.1-1.0]: 0.5
Maximum number of layers [3|4]: 3
Maximum number of neurons per layer [2-64]: 2
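For intuition, the defaults above describe a very small multilayer perceptron: three layers in total, with two sigmoid neurons in the hidden layer. The sketch below evaluates such a topology in plain Java with made-up weights; in the actual flow, FANN learns the weights from data sampled from the approximable region, so treat this purely as an illustration of the network shape.

// Minimal sketch of the network shape implied by the default parameters:
// 3 layers (inputs, one hidden layer of 2 neurons, one output) with sigmoid
// activations. The weights are invented for illustration only; FANN learns
// the real weights during training.
public class TinyMlp {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Evaluates a 2-input, 2-hidden-neuron, 1-output network.
    static double evaluate(double in0, double in1) {
        // Hidden layer: two neurons, each with two weights and a bias.
        double h0 = sigmoid(0.5 * in0 - 0.3 * in1 + 0.1);
        double h1 = sigmoid(-0.2 * in0 + 0.8 * in1 - 0.4);
        // Output neuron combines the hidden activations.
        return sigmoid(1.2 * h0 - 0.7 * h1 + 0.05);
    }

    public static void main(String[] args) {
        // At run time, the NPU (or its software model) evaluates a trained
        // network like this one in place of the original approximable region.
        System.out.println(evaluate(0.25, 0.75));
    }
}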