## The Key Features of IDEA™

- Proprietary & Unique Algorithms/Modules
  - Intelligent Patching of Data Files
  - Key Performance Indicators (KPI)
  - KPI Behavior
  - Optimized Clustering
  - Reinforced Cluster Analysis
  - Intelligent Data Partitioning
  - Dependency
- Complete Neural Network and Regression Modules
  - Linear Regression
  - Non-Linear Regression
  - Multiple Linear and Non-Linear Regression
  - Backpropagation Neural Networks
    - Learning Algorithms
      - Vanilla Backpropagation
      - Accelerated Backpropagation using learning rates and momentum
      - Quick-Prop
      - R-Prop
    - Architecture
      - Multiple Hidden Layers (sequential)
      - Multiple Transfer Functions
  - General Regression Neural Networks
    - Conventional Learning
    - Genetically Enhanced Learning
  - Recurrent Neural Networks
    - Learning Algorithms
      - Vanilla Backpropagation
      - Accelerated Backpropagation using learning rates and momentum
      - Quick-Prop
      - R-Prop
    - Architecture
      - Multiple Hidden Layers (sequential)
      - Multiple Transfer Functions

- Powerful Pre- and Post-Processing Modules
  - Basic and Advanced Statistical Analysis
  - Conventional Cluster Analysis (K-Means)
  - Self-Organizing Maps (SOM)
  - Intelligent Cluster Analysis (Fuzzy C-Means)
  - Sensitivity Analysis
    - Two-Dimensional Analysis
    - Three-Dimensional Analysis
  - Uncertainty Analysis using Monte Carlo Simulation
    - General Model Behavior
  - Type Curve Development
    - Based on a single record (single well)
    - Based on groups of records (groups of wells)
    - Based on the entire dataset (all wells in the field)
  - Application of Developed Models to New Datasets

### Intelligent Patching of Data Files

This unique algorithm provides a tested and proven methodology for patching the holes that may exist in a dataset. By combining neural networks and genetic algorithms, it rescues the information content of a data record by substituting an optimum value for the missing cell(s). The process has been validated using data generated by a complex non-linear equation, a numerical simulator, and field data. A report demonstrating the capabilities of this methodology is available upon request.
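The source does not disclose the actual patching algorithm (a proprietary neural-network/genetic-algorithm hybrid), but the basic idea of substituting a plausible value for a missing cell can be sketched with a much simpler stand-in: fill the hole from the nearest complete records. The `patch_records` helper and the sample `rows` below are hypothetical.

```python
import math

def patch_records(records, k=3):
    """Fill missing cells (None) with the mean of the k nearest complete
    records, where distance is measured only over the columns that are
    present. A simplified stand-in for IDEA's NN/GA patching."""
    complete = [r for r in records if None not in r]
    patched = []
    for r in records:
        if None not in r:
            patched.append(list(r))
            continue
        known = [i for i, v in enumerate(r) if v is not None]
        # rank complete records by distance over the known columns only
        def dist(c):
            return math.sqrt(sum((r[i] - c[i]) ** 2 for i in known))
        nearest = sorted(complete, key=dist)[:k]
        filled = list(r)
        for i, v in enumerate(r):
            if v is None:
                filled[i] = sum(c[i] for c in nearest) / len(nearest)
        patched.append(filled)
    return patched

rows = [
    [1.0, 2.0, 3.0],
    [1.1, 2.1, 3.1],
    [0.9, 1.9, 2.9],
    [1.0, None, 3.0],   # record with a hole in the second column
]
patched = patch_records(rows)
print(patched[-1])  # the hole is replaced by a value near 2.0
```

A real implementation would learn the substitution from the data rather than average neighbors, but the interface, a record in with holes, a record out without them, is the same.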

### Key Performance Indicators (KPI)

This algorithm identifies the most influential parameters in any given process *prior to modeling*. It examines the influence of each input parameter on the output, either one at a time or in combination with other parameters (combinations of 2, 3, ...), and then ranks each input by its overall influence on the output. The outcome is a tornado chart ranking all input parameters by their importance in the process. This is a tested and proven algorithm that has been validated using data generated by a complex non-linear equation, a numerical simulator, and field data. A report demonstrating the capabilities of this methodology is available upon request.
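The one-at-a-time variant can be sketched as follows: perturb each input around a base case, record the output swing, and sort. The `rank_inputs` helper and the toy `process` function are hypothetical, not IDEA's algorithm; the ranked list is the data a tornado chart would display.

```python
def rank_inputs(f, base, delta=0.1):
    """One-at-a-time sensitivity: perturb each input by +/-delta
    (relative), record the output swing, and rank inputs by swing."""
    swings = {}
    for name, value in base.items():
        lo = dict(base, **{name: value * (1 - delta)})
        hi = dict(base, **{name: value * (1 + delta)})
        swings[name] = abs(f(**hi) - f(**lo))
    # most influential input first
    return sorted(swings.items(), key=lambda kv: kv[1], reverse=True)

# hypothetical non-linear process: dominated by x, then z, then y
def process(x, y, z):
    return 5 * x ** 2 + 0.5 * y + 2 * z

ranking = rank_inputs(process, {"x": 1.0, "y": 1.0, "z": 1.0})
print(ranking)  # x first, y last
```

Combinations of 2, 3, ... inputs would be handled the same way, perturbing several inputs jointly before measuring the swing.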

### KPI Behavior

This module shows how each input parameter influences the output in a simple two-dimensional plot, allowing the user to see clear trends in otherwise scattered input-output relationships.
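One simple way to pull a trend out of a scattered input-output cloud, offered here only as an illustration of the idea, is to average the output inside equal-width input bins. The `binned_trend` helper and the sample data are hypothetical.

```python
def binned_trend(xs, ys, nbins=4):
    """Average the output inside equal-width input bins, collapsing a
    scattered input-output cloud into (bin center, mean output) pairs."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / nbins or 1.0  # guard against a constant input
    sums, counts = [0.0] * nbins, [0] * nbins
    for x, y in zip(xs, ys):
        b = min(int((x - lo) / width), nbins - 1)
        sums[b] += y
        counts[b] += 1
    return [(lo + (b + 0.5) * width, sums[b] / counts[b])
            for b in range(nbins) if counts[b]]

# noisy but rising scatter; the binned trend makes the rise visible
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0.1, 2.0, 3.9, 6.1, 8.0, 9.9, 12.1, 14.0]
trend = binned_trend(xs, ys)
print(trend)  # four (bin center, mean output) points, increasing
```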

### Optimized Clustering

During any cluster analysis the user needs to identify two items in order to achieve the best separation of data records into clusters: first, the number of clusters, and second, the combination of parameters that results in optimum clusters. This information is usually not available to the user, especially for large datasets and datasets that represent unknown behavior, which is a key technical shortcoming of software applications that use Self-Organizing Maps (SOM) as their main modeling and analysis technique. This module helps users identify these two clustering characteristics and takes the guesswork out of the analysis.
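How the module chooses these settings is not described, but the "how many clusters?" question can be illustrated with a standard elbow-style check: fit k-means for several values of k and watch the within-cluster sum of squares (inertia) collapse once k matches the data's structure. The `kmeans_inertia` helper and the sample points are hypothetical.

```python
import random

def kmeans_inertia(points, k, iters=20, seed=0):
    """Tiny k-means; returns the within-cluster sum of squares
    (inertia) for a k-cluster fit of the given points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # recompute centers; keep the old center if a cluster went empty
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
               for p in points)

# two well-separated groups of records: inertia collapses once k reaches 2,
# which is exactly the "how many clusters?" signal being automated
pts = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1),
       (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
drops = {k: kmeans_inertia(pts, k) for k in (1, 2, 3)}
print(drops)
```

The second question, which combination of parameters to cluster on, could be explored the same way by repeating the fit over candidate parameter subsets and comparing the resulting separations.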

### Reinforced Cluster Analysis

This is a unique and powerful set of algorithms and interfaces that allows users to perform supervised clustering of the data, using the output as a guide for identifying cluster centers without using the output to actually perform the clustering.
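The "guide but don't use" idea can be sketched, under loose assumptions about what the proprietary algorithms do, by seeding cluster centers from records at the extremes of the output range, then assigning every record by input distance only. The `output_guided_clusters` helper is hypothetical.

```python
def output_guided_clusters(records, outputs, k=2):
    """Pick cluster seeds from the records whose OUTPUT values sit at
    evenly spaced points of the output ordering, then assign every
    record to its nearest seed using the INPUTS only. The output
    steers where the centers start but never enters the assignment."""
    order = sorted(range(len(records)), key=lambda i: outputs[i])
    seed_idx = [order[(len(order) - 1) * j // (k - 1)] for j in range(k)]
    seeds = [records[i] for i in seed_idx]
    return [min(range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(r, seeds[j])))
            for r in records]

# records with two input regimes whose outputs differ as well
records = [(0.0,), (0.2,), (5.0,), (5.2,)]
outputs = [1.0, 1.1, 9.0, 9.2]
labels = output_guided_clusters(records, outputs, k=2)
print(labels)  # low-output records land together, high-output records together
```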

### Intelligent Data Partitioning

This module provides an effective and powerful alternative to randomly partitioning a dataset into training, calibration, and verification (validation) sets. It ensures that the original dataset is divided such that all three partitions are statistically representative.
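A minimal sketch of the representativeness goal (not the module's actual method): sort the records by the target value and deal them round-robin, so every partition spans the full range instead of getting a lucky or unlucky random slice. The `stratified_split` helper is hypothetical.

```python
def stratified_split(records, key):
    """Sort by the target value, then deal records round-robin into
    training, calibration, and verification sets so each partition
    spans the full range of the target rather than a random slice."""
    ordered = sorted(records, key=key)
    return ordered[0::3], ordered[1::3], ordered[2::3]

train, cal, verif = stratified_split(list(range(12)), key=lambda v: v)
print(train, cal, verif)
# -> [0, 3, 6, 9] [1, 4, 7, 10] [2, 5, 8, 11]
```

Note how every partition contains low, middle, and high target values; a purely random split offers no such guarantee, especially on small datasets.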

### Dependency

This module allows users to define inputs to a data-driven model as functions of other inputs (e.g., relative permeability as a function of saturation) when both are inputs to the model. The dependencies are user-defined and can take the form of tables, equations, or other models (neural networks), allowing the development of fully dynamic data-driven models.
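A table-form dependency reduces to interpolation: the user supplies (x, y) pairs and the model evaluates the dependent input on demand. The `table_lookup` helper and the saturation/relative-permeability numbers below are hypothetical illustrations, not values from the source.

```python
def table_lookup(table):
    """Turn a user-supplied dependency table into a callable that
    linearly interpolates between the tabulated points and clamps
    outside the tabulated range."""
    xs = [x for x, _ in table]
    ys = [y for _, y in table]
    def f(x):
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for i in range(1, len(xs)):
            if x <= xs[i]:
                t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
                return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return f

# hypothetical saturation -> relative permeability table
krw = table_lookup([(0.2, 0.0), (0.5, 0.1), (0.8, 0.6)])
print(krw(0.65))  # interpolated value between the 0.5 and 0.8 rows
```

An equation- or model-form dependency would slot in the same way: anything callable can stand in for `krw`, which is what makes the resulting data-driven model fully dynamic.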