Shirley's Next-Generation Benchmark Suite


This page contains a suite of open-source and portable (all in C) real-application benchmarks that I have built for my Undergraduate Thesis Project.

Real-application benchmarks are more accurate and useful in computer architecture research than synthetic ones; however, publicly available ones are hard to obtain.  To help address this issue, I have gathered several real-application benchmarks from academic researchers and from other Internet sources, collected them into a suite, and included results from simulation tests.

This benchmark suite includes 9 programs: ADALINE, BPN, HOPFIELD, CPN, SOM, ART, BAM, ARMAEST22e, and Perceptron.

The benchmarks have been compiled for the SimpleScalar tool set (on a Sun machine, using the big-endian version of the architecture) and run through its simulators.  The compiled programs' source code and brief descriptions are listed below:
 
 
ADALINE (Adaline Network)
Application: Pattern Recognition (Classification of Digits 0-9)
The Adaline is essentially a single-layer backpropagation network. It is trained on a pattern recognition task, where the aim is to classify a bitmap representation of the digits 0-9 into the corresponding classes. Due to the limited capabilities of the Adaline, the network only recognizes the exact training patterns. When the application is ported to a multi-layer backpropagation network, a remarkable degree of fault tolerance can be achieved.
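
As a concrete reference, here is a minimal C sketch of the Widrow-Hoff (LMS) rule that trains a single Adaline unit; the tiny 4-pixel patterns, targets, and learning rate are illustrative assumptions, not the benchmark's digit bitmaps.

/* Minimal LMS (Widrow-Hoff) training sketch for one Adaline unit. */
#include <stdio.h>

#define N 4                 /* inputs per pattern (assumed)  */
#define M 3                 /* number of training patterns   */

int main(void)
{
    double x[M][N] = {{1,0,0,1},{0,1,1,0},{1,1,0,0}};  /* toy "bitmaps" */
    double t[M]    = { 1.0,     -1.0,     -1.0 };      /* +1 = target class */
    double w[N] = {0}, b = 0, eta = 0.1;
    int it, i, j;

    for (it = 0; it < 200; it++) {
        for (i = 0; i < M; i++) {
            double s = b, e;
            for (j = 0; j < N; j++) s += w[j] * x[i][j];
            e = t[i] - s;                     /* error on the raw linear sum */
            for (j = 0; j < N; j++) w[j] += eta * e * x[i][j];
            b += eta * e;                     /* Widrow-Hoff updates */
        }
    }
    for (i = 0; i < M; i++) {
        double s = b;
        for (j = 0; j < N; j++) s += w[j] * x[i][j];
        printf("pattern %d -> %s\n", i, s > 0 ? "in class" : "not in class");
    }
    return 0;
}
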
BPN (Backpropagation Network)
Application: Time-Series Forecasting (Prediction of the Annual Number of Sunspots)
This program implements the now classic multi-layer backpropagation network with bias terms and momentum. It is used to detect structure in time-series, which is presented to the network using a simple tapped delay-line memory. The program learns to predict future sunspot activity from historical data collected over the past three centuries. To avoid overfitting, the termination of the learning procedure is controlled by the so-called stopped training method.
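
To illustrate the tapped delay-line memory, the C sketch below turns a time series into (input window, target) training pairs of the kind a backpropagation network would consume; the window size and series values are made-up examples.

/* Tapped delay-line sketch: each pair presents the last WINDOW samples
 * as input and the next sample as the prediction target. */
#include <stdio.h>

#define WINDOW 4            /* number of taps (assumed) */

int main(void)
{
    double series[] = {0.1, 0.3, 0.7, 0.6, 0.2, 0.1, 0.4, 0.8};
    int n = sizeof series / sizeof series[0];
    int t, k;

    for (t = WINDOW; t < n; t++) {
        printf("input: [");
        for (k = 0; k < WINDOW; k++)
            printf("%s%.1f", k ? ", " : "", series[t - WINDOW + k]);
        printf("]  target: %.1f\n", series[t]);
        /* in the benchmark, each pair would drive one backprop step;
         * stopped training halts when held-out error starts rising */
    }
    return 0;
}
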
HOPFIELD (Hopfield Model)
Application: Autoassociative Memory (Associative Recall of Images)
The Hopfield model is used as an autoassociative memory to store and recall a set of bitmap images. Images are stored by calculating a corresponding weight matrix. Thereafter, starting from an arbitrary configuration, the memory will settle on exactly the stored image that is nearest to the starting configuration in terms of Hamming distance. Thus, given an incomplete or corrupted version of a stored image, the network is able to recall the corresponding original image.
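
Here is a minimal C sketch of that store/recall cycle: the weight matrix comes from the Hebbian outer-product rule, and recall iterates threshold updates until the configuration settles; the two 8-unit patterns are illustrative, not the benchmark's images.

/* Hopfield sketch: Hebbian storage, then recall by repeated updates. */
#include <stdio.h>

#define N 8                                /* units (assumed size) */

int main(void)
{
    int stored[2][N] = {{1,1,1,1,-1,-1,-1,-1},   /* two orthogonal patterns */
                        {1,-1,1,-1,1,-1,1,-1}};
    int w[N][N] = {{0}};
    int s[N] = {1,1,1,-1,-1,-1,-1,-1};     /* corrupted copy of pattern 0 */
    int i, j, p, changed;

    /* store: w[i][j] = sum over patterns of x_i * x_j, zero diagonal */
    for (p = 0; p < 2; p++)
        for (i = 0; i < N; i++)
            for (j = 0; j < N; j++)
                if (i != j) w[i][j] += stored[p][i] * stored[p][j];

    /* recall: update every unit until nothing changes */
    do {
        changed = 0;
        for (i = 0; i < N; i++) {
            int h = 0, v;
            for (j = 0; j < N; j++) h += w[i][j] * s[j];
            v = (h >= 0) ? 1 : -1;
            if (v != s[i]) { s[i] = v; changed = 1; }
        }
    } while (changed);

    for (i = 0; i < N; i++) putchar(s[i] > 0 ? '#' : '.');
    putchar('\n');          /* prints the nearest stored image */
    return 0;
}
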
CPN (Counterpropagation Network)
Application: Vision (Determination of the Angle of Rotation)
The counterpropagation network is a competitive network designed to function as a self-programming lookup table, with the additional ability to interpolate between entries. The application is to determine the angular rotation of a rocket-shaped object, images of which are presented to the network as bitmap patterns. The performance of the network is somewhat limited by the low resolution of the bitmap.
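
The lookup-table behavior can be sketched in a few lines of C: a winner-take-all (Kohonen) layer picks the hidden unit whose weight vector is closest to the input, and the (Grossberg) output layer emits the value stored for that unit. The 2-D prototypes and stored angles below are illustrative assumptions, and a full CPN would additionally interpolate between neighboring winners.

/* Counterpropagation sketch: competitive lookup of a stored angle. */
#include <stdio.h>

#define UNITS 4

int main(void)
{
    double proto[UNITS][2] = {{1,0},{0,1},{-1,0},{0,-1}};   /* Kohonen weights */
    double angle[UNITS]    = {0.0, 90.0, 180.0, 270.0};     /* Grossberg weights */
    double in[2] = {0.9, 0.3};             /* query pattern */
    double bestd = 1e30;
    int u, best = 0;

    for (u = 0; u < UNITS; u++) {          /* winner-take-all competition */
        double dx = in[0] - proto[u][0], dy = in[1] - proto[u][1];
        double d = dx*dx + dy*dy;
        if (d < bestd) { bestd = d; best = u; }
    }
    printf("winning unit %d -> angle %.1f degrees\n", best, angle[best]);
    return 0;
}
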
SOM (Self-Organizing Map)
Application: Control (Pole Balancing Problem)
The self-organizing map is a competitive network with the ability to form topology-preserving mappings between its input and output spaces. In this program the network learns to balance a pole by applying forces at the base of the pole. The behavior of the pole is simulated by numerically integrating the differential equations for its law of motion using Euler's method. The task of the network is to establish a mapping between the state variables of the pole and the optimal force to keep it balanced. This is done using a reinforcement learning approach: for any given state of the pole, the network tries a slight variation of the mapped force. If the new force results in better control, the map is modified, using the pole's current state variables and the new force as a training vector.
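
To make the simulation step concrete, here is a C sketch of one Euler integration loop for a pole's motion; the simplified dynamics (a pole hinged at a fixed base) and all constants are illustrative assumptions, not the benchmark's exact model.

/* Euler's method sketch for the pole dynamics driving the SOM. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double theta = 0.1, omega = 0.0;       /* angle (rad), angular velocity */
    double g = 9.81, L = 1.0, dt = 0.01;   /* constants and Euler timestep */
    double F = -1.0;                       /* force chosen by the network */
    int step;

    for (step = 0; step < 100; step++) {
        double acc = (g / L) * sin(theta) + F;   /* angular acceleration */
        omega += acc * dt;                       /* Euler updates */
        theta += omega * dt;
        /* here the SOM would map (theta, omega) to a corrective force
         * and adjust the map when a varied force balances better */
    }
    printf("after %d steps: theta = %f rad\n", step, theta);
    return 0;
}
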
ART (Adaptive Resonance Theory)
Application: Brain Modeling (Stability-Plasticity Demonstration)
This program is mainly a demonstration of the basic features of the adaptive resonance theory network, namely the ability to plastically adapt when presented with new input patterns while remaining stable at previously seen input patterns.
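
The heart of this behavior is ART's vigilance test, sketched below in the style of ART1: an input resonates with a stored category only if their overlap is large enough, and otherwise the category is reset and left intact. The patterns and vigilance value are illustrative assumptions.

/* ART1-style vigilance test sketch. */
#include <stdio.h>

#define N 6

int main(void)
{
    int input[N]  = {1,1,0,1,0,0};
    int stored[N] = {1,1,0,0,0,0};         /* an existing category prototype */
    double vigilance = 0.8;                /* match threshold (assumed) */
    int i, both = 0, in_on = 0;

    for (i = 0; i < N; i++) {
        both  += input[i] & stored[i];     /* |input AND prototype| */
        in_on += input[i];                 /* |input| */
    }
    if ((double)both / in_on >= vigilance)
        printf("resonance: adapt this category (plasticity)\n");
    else
        printf("reset: keep the old category intact (stability)\n");
    return 0;
}
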
ARMAEST22e (Auto-Regressive Moving Average Modeling)
Application: Image Modeling
This program models images with a 2-D auto-regressive moving average (ARMA) process, estimating the model parameters in order to reconstruct the image. It convolves the 2-D image input with a 2-D noise array and random number sequences to generate the AR model. Monte Carlo simulations running the algorithm compute higher-order statistics; each run produces a new estimate of the original parameters, where each pixel value is based on the original pixel value plus some noise.
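
A C sketch of the 2-D AR recurrence at the core of this model is shown below: each pixel is a weighted sum of its causal neighbors plus a noise term. The coefficients, image size, and noise source are assumptions for illustration.

/* 2-D AR synthesis sketch: causal neighbors plus noise. */
#include <stdio.h>
#include <stdlib.h>

#define W 8
#define H 8

int main(void)
{
    double img[H][W] = {{0}};
    double a10 = 0.5, a01 = 0.4;           /* AR coefficients (assumed) */
    int i, j;

    srand(1);
    for (i = 1; i < H; i++)
        for (j = 1; j < W; j++) {
            double noise = (double)rand() / RAND_MAX - 0.5;
            /* AR part from the pixels above and to the left, plus noise */
            img[i][j] = a10 * img[i-1][j] + a01 * img[i][j-1] + noise;
        }
    printf("img[%d][%d] = %f\n", H-1, W-1, img[H-1][W-1]);
    return 0;
}
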
PERCEPTRON (Single-Layer Perceptron)
Application: Artificial Intelligence (Concept Learning)
The perceptron is a program that learns concepts, i.e. it can learn to respond with True (1) or False (0) to the inputs we present to it, by repeatedly "studying" examples. The perceptron is a single-layer neural network connected to two inputs through a set of two weights, with an additional bias input. Its weights and bias can be trained to produce a correct target vector when presented with the corresponding input vector. The training technique used is the perceptron learning rule, which has been proven to converge on a solution in finite time if a solution exists. The perceptron calculates its output using the equation P * W + b > 0, where P is the input vector presented to the network, W is the vector of weights, and b is the bias.
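
Here is a minimal C sketch of the perceptron learning rule applied to a two-input task; the AND training set and the iteration limit are illustrative assumptions.

/* Perceptron learning rule sketch on the two-input AND task. */
#include <stdio.h>

int main(void)
{
    double P[4][2] = {{0,0},{0,1},{1,0},{1,1}};   /* input vectors */
    int    T[4]    = { 0,    0,    0,    1 };     /* targets (AND) */
    double W[2] = {0.0, 0.0}, b = 0.0;
    int epoch, i;

    for (epoch = 0; epoch < 100; epoch++) {
        int errors = 0;
        for (i = 0; i < 4; i++) {
            /* output is 1 iff P * W + b > 0, as in the equation above */
            int o = (P[i][0]*W[0] + P[i][1]*W[1] + b > 0.0);
            int e = T[i] - o;
            if (e != 0) {                  /* perceptron learning rule */
                W[0] += e * P[i][0];
                W[1] += e * P[i][1];
                b    += e;
                errors++;
            }
        }
        if (errors == 0) break;            /* converged on a solution */
    }
    printf("epochs: %d, W = (%g, %g), b = %g\n", epoch, W[0], W[1], b);
    return 0;
}
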

Here is a makefile for invoking sim-bpred, the branch predictor simulator of the SimpleScalar tool set.
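
The version below is a minimal sketch rather than the exact makefile used for the results: the installation path, the .ss suffix on the big-endian SimpleScalar binaries, and the single bimodal predictor configuration are illustrative assumptions.

# Sketch only: SIMDIR, the .ss suffix, and the flags are assumptions.
SIMDIR  = /usr/local/simplescalar
SIM     = $(SIMDIR)/sim-bpred

# One example configuration: an 8K-entry bimodal predictor (Bim_8k below).
BPRED   = -bpred bimod -bpred:bimod 8192

BENCHES = adaline bpn hopfield bam cpn som art armaest22e perceptron

all: $(BENCHES:%=%.stats)

# Run each benchmark binary under sim-bpred and send the simulator's
# statistics to the .stats file via -redir:sim.
%.stats: %.ss
	$(SIM) $(BPRED) -redir:sim $@ $<
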


Here are the tables of simulation statistics for the benchmarks. Instruction counts were measured with sim-fast and with hydra; IPC and cache miss rates are given for 32K 4-way and 32K 8-way cache configurations; Bpred_cond_rate is the conditional-branch prediction rate for six predictor configurations. The predictor names appear to follow SimpleScalar's conventions: Bim_8k is an 8K-entry bimodal predictor, and the Gag/Gas/Pas columns are two-level predictors (e.g. Gag_1_32k_15 is a GAg predictor with a 32K-entry second-level table and a 15-bit history). Blank cells were not reported.

Instruction counts and IPC:

Benchmark     # instr (sim-fast)   # instr (hydra)   IPC (32K 4-way)   IPC (32K 8-way)
Perceptron          7896                 7894             0.1115            0.1200
armaest22e     347102230            347102228
ART               363558               363556             1.1643            1.3404
Adaline       1190335183           1190335181             2.6200            3.8388
BPN           1810185865           1810185863
BAM              2163222              2163220             2.1711            3.1231
CPN              1614897              1614895             1.7910            2.3181
Hopfield        50933781             50933779             2.4346            4.2750
SOM            274526826            274526824

Conditional branch prediction rates (Bpred_cond_rate):

Benchmark     Bim_8k   Gag_1_32k_15   Gas_1_32k_8   Gas_1_8k_6   Pas_4k_32k_4   Pas_1k_32k_4
Perceptron    0.8201      0.7195         0.7703        0.7876        0.8069         0.8069
armaest22e    0.8829      0.7815         0.8342        0.8507        0.8791         0.8733
ART           0.8714      0.9227         0.9226        0.9212        0.8932         0.8931
Adaline       0.9682      0.9698         0.9686        0.9685        0.9686         0.9686
BPN           0.9577      0.9601         0.9601        0.9695        0.9694
BAM           0.9183      0.9473         0.9483        0.9486        0.9430         0.9429
CPN           0.9519      0.9740         0.9740        0.9735        0.9762         0.9750
Hopfield      0.9740      0.9862         0.9861        0.9861        0.9762         0.9760
SOM           0.8150      0.8421         0.8313        0.8256        0.8285         0.8284

(Only five predictor results were recorded for BPN, so the column assignment of its values is uncertain.)

Cache miss rates (all caches 32K):

Benchmark     D-L1 4-way   D-L1 8-way   I-L1 4-way   I-L1 8-way   U-L2 4-way   U-L2 8-way
Perceptron      0.1099       0.1047       0.0527       0.0781       0.9231       1.0000
ART             0.0066       0.5700       0.0030       0.0038       0.6915       1.0000
Adaline         0.0007       0.0000       0.0001       0.0000       0.0031       1.0000
BAM             0.0029       0.0015       0.0005       0.0008       0.4350       1.0000
CPN             0.0090       0.0036       0.0047       0.0020       0.2655       0.9687
Hopfield        0.0076       0.0003       0.0000       0.0000       0.0191       0.4391

 

Plots of the time-varying behavior of the programs, generated with sim-missr and gnuplot:

Interval Conditional Branch Misprediction Rate
Interval D-L1 Data Cache Miss Rate
Interval I-L1 Instruction Cache Miss Rate
Interval U-L2 Unified Cache Miss Rate
 



Maintained by skc2s@virginia.edu
Last Modified: May 11, 2001