Help Manual

Sigma Magic Help Version 18

Bagging Models

Overview

Bagging, or bootstrap aggregation, is an ensemble learning method commonly used to reduce variance within a noisy dataset. In bagging, random samples of the training data are drawn with replacement, meaning that individual data points can be chosen more than once. A weak model is then trained independently on each of these samples, and, depending on the type of task (regression or classification, for example), the average or the majority vote of their predictions yields a more accurate estimate. An example of a bagging model is the random forest algorithm.
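
To make the procedure concrete, the following sketch shows bagging done by hand in R: regression trees are fit to bootstrap resamples of the built-in mtcars data, and their predictions are averaged. It is purely illustrative and is not the code the tool runs; the rpart package and the choice of 25 resamples are assumptions.

  # Illustrative bagging by hand (not the tool's internal code)
  library(rpart)                  # assumed package for the weak learners (regression trees)
  set.seed(123)
  n_models <- 25                  # assumed number of bootstrap resamples
  models <- lapply(seq_len(n_models), function(i) {
    boot_rows <- sample(nrow(mtcars), replace = TRUE)   # sample rows with replacement
    rpart(mpg ~ ., data = mtcars[boot_rows, ])          # train one weak model per resample
  })
  # For a regression task, the bagged prediction is the average of the individual predictions
  preds <- sapply(models, predict, newdata = mtcars)
  bagged_pred <- rowMeans(preds)
  head(bagged_pred)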

The key benefit of bagging is its ease of implementation, as it is relatively easy to combine the predictions of the base learners or estimators to improve model performance. Bagging can reduce the variance within a learning algorithm, which is particularly helpful for high-dimensional data. The key challenge is that drawing business insights from the models is difficult due to the averaging involved. Bagging is also computationally expensive and is not particularly suited for real-time applications.

Note that this functionality requires that RScript software be installed on your computer and linked to the Sigma Magic software within the options menu. The Bagging Models worksheet can be added to your active workbook by clicking on Analytics and then selecting Bagging Models.

Inputs

Setup

Click on Analysis Setup to open the menu options for this tool. A sample screenshot of the menu is shown below.
1. Response Type: Specify the response data type you want to use to fit this model. The typical options for the response type are shown below. The options that apply are shown in the dropdown box.
  • Binary: The response variable only contains two classes. For example, True/False, Good/Bad, etc.
  • Classification: The response variable can contain two or more classes. For example, Grades A, B, C, D, E, and F.
  • Regression: The response variable is usually continuous. For example, temperatures, pressures, etc., can take on any value.
2. Model: Specify which library you want to use to fit your model. Usually, several different libraries are available, and more are added regularly. Currently, the following libraries are available depending on your selected response variable.
Num   Response        Model       Name                                  Library                                 Num Params
1     Binary          ORFpls      Oblique Random Forest                 obliqueRF                               1
2     Binary          ORFridge    Oblique Random Forest                 obliqueRF                               1
3     Binary          ORFsvm      Oblique Random Forest                 obliqueRF
4     Classification  cforest     Conditional Inference Random Forest   party                                   1
5     Classification  parRF       Parallel Random Forest                e1071, randomForest, foreach, import    1
6     Classification  rFerns      Random Ferns                          rFerns                                  1
7     Classification  ordinalRF   Random Forest                         e1071, ranger, dplyr, ordinalForest     3
8     Classification  ranger      Random Forest                         e1071, ranger, dplyr                    3
9     Classification  Rborist     Random Forest                         Rborist                                 2
10    Classification  rf          Random Forest                         randomForest                            1
11    Classification  rfRules     Random Forest Rules-Based Model       rfRules                                 2
12    Classification  RRF         Regularized Random Forest             randomForest, RRF                       3
13    Classification  RRFglobal   Regularized Random Forest             RRF                                     2
14    Regression      cforest     Conditional Inference Random Forest   party                                   1
15    Regression      parRF       Parallel Random Forest                e1071, randomForest, foreach, import    1
16    Regression      qrf         Quantile Random Forest                quantregForest                          1
17    Regression      ranger      Random Forest                         e1071, ranger, dplyr                    3
18    Regression      Rborist     Random Forest                         Rborist                                 2
19    Regression      rf          Random Forest                         randomForest                            1
20    Regression      rfRules     Random Forest Rules-Based Model       rfRules                                 2
21    Regression      RRF         Regularized Random Forest             randomForest, RRF                       3
22    Regression      RRFglobal   Regularized Random Forest             RRF                                     2
3. Pre-Process: Specify if you want to pre-process your data before you fit a model. The available options are:
  • None: No pre-processing of data is performed before model fitting. The data is used as-is.
  • Center: Subtract a mean value from each of the data points so that the average factor value is zero.
  • Center, Scale: Subtract a mean value from each of the data points and divide by its standard deviation so that the average factor value is zero and the standard deviation is one.
  • Range: Adjust the values such that the minimum value is mapped to 0 and the maximum value is mapped to 1.
  • Scale: Divide each of the values by the standard deviation so that the standard deviation of the factor is 1.
4. Model Selection: Specify the metric to use for tuning the model. This metric selects the best model from an available list of models. The available options depend on the type of response; a short R sketch at the end of this section shows how these Setup choices map onto a model-fitting call.
  • RMSE (Regression): Uses the smallest value of the Root Mean Square Error (RMSE) to pick the best model.
  • MAE (Regression): Uses the smallest value of the Mean Absolute Error (MAE) to pick the best model.
  • Accuracy (Binary, Classification): Uses the largest value of the percentage of items that are matched correctly to pick the best model.
  • Kappa (Binary, Classification): Uses the largest value of the Kappa statistic to pick the best model.
  • ROC (Binary): Uses the largest value of the Area Under the ROC curve to pick the best model.
5. Selected Model: The details of the selected model are displayed in this box based on the response type and the model you have chosen. You can review the model description to determine if this is the model you want to fit for your data.
6. Help Button: Click on the Help Button to view the help documentation for this tool.
7. Cancel Button: Click on the Cancel Button to discard your changes and exit this menu.
8. OK Button: Click on the OK Button to save your changes and try to execute the program. Note that we do not control the algorithms for fitting these models, and there is no guarantee that the model will properly fit your selected data set. You can contact the R community for any possible resolution if there are model errors.
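
Taken together, the Setup choices above (Response Type, Model, Pre-Process, and Model Selection) correspond closely to the arguments of a single model-fitting call in R. The sketch below is a minimal illustration, assuming the caret package (which appears to supply the model codes listed in the table above) and using the built-in iris data as a stand-in for your selected data; it is not the exact call that Sigma Magic issues.

  library(caret)                        # assumed front end for the model libraries listed above
  set.seed(10)
  fit <- train(
    Species ~ .,                        # response column vs. all other selected factors
    data       = iris,                  # stand-in data set
    method     = "rf",                  # Model: rf = Random Forest (randomForest library)
    preProcess = c("center", "scale"),  # Pre-Process: Center, Scale
    metric     = "Accuracy",            # Model Selection metric for a Binary/Classification response
    trControl  = trainControl(method = "cv", number = 5)
  )
  print(fit)                            # summary of the fitted model and its resampled accuracy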

Data

You will see the following dialog box if you click the Data button. Here, you can specify the data required for this analysis.
1. Search Data: The Available Data box displays all the columns of data that are available for analysis. You can use the search bar to filter this list and speed up the search for the right data for analysis. Enter a few characters in the search field, and the software will display only the matching columns in the Available Data box.
2. Available Data: The Available Data box contains the list of data available for analysis. If your workbook has no data in tabular format, this box will display "No Data Found." The information displayed in this box includes the row number, whether the data is Numeric (N) or Text (T), and the name of the column variable. Note that the software displays data from all the tables in the current workbook. Even though columns within the same table have unique names, columns across different tables can have the same name. Hence, it is crucial that you specify not only the column name but also the table name.
3. Add or View Data: Click on this button to add more data to your workbook for analysis or to view more details about the data listed in the Available Data box. This button opens the Data Editor dialog box, where you can import more data into your workbook. You can also switch from the list view to a table view to see the individual data values for each column.
4. Required Data: The code next to each box specifies what data can be selected for that box. An example code is N: 2-4. If the code starts with an N, you can select only numeric columns; if the code starts with a T, you can select both numeric and text columns. The numbers to the right of the colon specify the minimum and maximum number of columns. For example, if the range is 2-4, you must select at least 2 and at most 4 columns of data for this box. If the minimum value is 0, then no data is required for this box.
5. Select Button: Click on this button to select the data for analysis. Any data you choose for the analysis is moved to the right. To select a column, click on the columns in the Available Data box to highlight them and then click on the Select Button. A second method is to double-click on the columns in the list of Available Data. Finally, you can drag and drop: select the columns you are interested in, hold down the left mouse button, and drag and drop them into one of the boxes on the right.
6. Selected Data: You must select one column for the response data and one or more columns for the factors/features of the given model. The list box header will be black if the correct number of data columns is specified. If sufficient data is not selected, the list box header will be displayed in red. Note that you can double-click on any of the columns in this box to remove them from the box.
7. View Selection: Click on this button to view the data specified for this analysis. The data can be viewed in a tabular format or as a graphical summary of the selected data.

Train

If you click the Train button, the software will let you pick the options for training the given model. Training is a step where we split the data into two groups: a train data set and a test data set. The train data set is used to build the model, and the test data set is used to evaluate how well the model performs. We do not want to use the same data set for both training and testing because of overfitting: the model may perform excellently on the train data set but poorly on data it has not seen.

The train function can be used for the following:
  1. Evaluate the effect of model tuning parameters on performance
  2. Choose the "optimal" model across these parameters
  3. Estimate model performance from a training set
A sample screenshot of the training page is shown in this figure. A short R sketch at the end of this section illustrates how these training options can be expressed in R.
1. Method: Specify the method to use to determine the test/train data sets. The available options are:
  • None: Do not split the data into test and train data sets. Use the entered data for both training and testing purposes. Note that with this method, you cannot tune the model using the tuning length or grid methods.
  • Boot: Use the bootstrap method for determining the test and train data sets. The bootstrap method involves repeatedly drawing samples from the data set with replacement. If you use this method, you must specify the number of resamples.
  • Cross Validation (CV): Cross-validation involves dividing the available data into multiple folds or subsets, using one of these folds for validation and the remaining folds for training. This process is repeated multiple times, using a different fold for validation each time. Finally, the results from each validation step are averaged to produce a more robust estimate of the model's performance. If you use this method, you must specify the number of folds.
  • Leave One Out Cross Validation (LOOCV): A method of cross-validation where each data point in turn is used for validation, and the rest of the data is used for training. This method is very computationally expensive for large data sets but is simple to use and requires no configuration.
  • Leave Group Out Cross Validation (LGOCV): A method of cross-validation where a set of data points is used for validation, and the rest of the data is used for training. You must specify the holdout percentage (0-100%) for this method.
  • Repeated Cross Validation (RepeatedCV): In this method, the cross-validation is repeated multiple times. You will need to specify the number of folds and the number of repeats. For example, if you specify five repeats of 10-fold cross-validation, the software performs 10-fold cross-validation on the training data 5 times, using a different set of folds for each repetition. The rationale is to obtain more accurate and robust results.
2. Holdout Percent: Specify the percentage of data to hold out for validation testing. The value should be between 0 and 100%. Note that this field is only enabled for the LGOCV method.
3. Num K-Folds: Specify the number of K-folds for cross-validation. For example, specifying five means the data is split into five groups, each containing 20% of the data points. The first 20% of the data points is used for the first validation and the remaining 80% for training; the second 20% of the data is used for the second validation, and so on. This value must be specified for the CV and RepeatedCV methods. For the bootstrap sampling method, specify the number of resamples of the data to use.
4. Num Repeats: Specify the number of repeats for repeated cross-validation. This textbox is disabled for other methods.
5. Random Seed: Specify the random seed value. Use a value of 0 for truly random numbers. If you want to replicate your results between runs, specify the seed value for the random number generator.
6. Sub Sampling: Specify the sampling strategy to use when selecting the samples. In classification problems, a disparity in the frequencies of the observed classes can significantly impact model fitting. Class imbalances can be mitigated using a sampling strategy.
  • None: Do not use any sampling strategy. Select the samples at random.
  • Up: Use the up-sampling strategy. Randomly sample with replacement the minority class to be the same size as the majority class.
  • Down: Use the down-sampling strategy. Randomly subset all the classes in the training set so that their class frequencies match the least prevalent class.
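
For orientation, the training options above resemble the resampling settings of the trainControl() function in R's caret package. The sketch below is an assumed mapping shown only for illustration, not the exact code the tool generates; the specific numbers are placeholders.

  library(caret)
  set.seed(42)                       # Random Seed: a fixed value so results can be replicated
  ctrl <- trainControl(
    method   = "repeatedcv",         # Method: Repeated Cross Validation
    number   = 5,                    # Num K-Folds (or the number of resamples for the Boot method)
    repeats  = 3,                    # Num Repeats (only used by repeated cross-validation)
    sampling = "up"                  # Sub Sampling: up-sample the minority class
  )
  # LGOCV alternative: trainControl(method = "LGOCV", p = 0.75) trains on 75% and holds out 25%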

Tuning

If you click the Tuning button, the software will let you pick the options for tuning the given model. Tuning refers to identifying the set of hyperparameters that gives the best fit for the given model. This step only applies to those models that contain one or more hyperparameters. The main steps of tuning are:
  1. Define the set of model parameter values to evaluate
  2. For each parameter set, determine the samples to use
  3. Pre-process the data if required
  4. Fit the model on the train data set and use the test data set for evaluation
  5. Calculate the average performance across predictions
  6. Determine the optimal parameter set
  7. Fit the final model to all the training data using the optimal parameter set
A sample screenshot of the tuning page is shown in this figure. A short R sketch at the end of this section illustrates these tuning options.
1. Tuning Method: Specify the method to use to determine the best set of hyperparameters. The following options are available:
  • Fixed: Do not tune the hyperparameters. Use the values specified below for the hyperparameters. It is your responsibility to provide the right values for these parameters.
  • Grid: Tune the hyperparameters using a grid search method. With this method, the range of each hyperparameter is searched based on the values you provide, so make sure you provide the right range for each of the given hyperparameters. For example, depth = c(1, 5, 9) implies that three values of the depth are tried; if a constant value is specified, such as shrinkage = 0.1, then the shrinkage value is fixed at 0.1.
  • Random: Specify the tuning length. The system will randomly sample the hyperparameters based on the tuning length. For example, if the tuning length is 3, the system will try three different settings for the hyperparameters and report the best of what it finds among these three searches.
2. Tuning Length: This textbox only applies if your model has hyperparameters and you use the random search method to select the tuning parameters; otherwise, this textbox is disabled.
3. Model Parameters: Depending on the model you select for fitting to your data, you may need to specify hyperparameters for that model. Some models have no hyperparameters, while others have one or more. If the tuning method is Random, you can set a hyperparameter to Auto and let the system select it for you. If the tuning method is Fixed, you must specify the hyperparameters for the model.
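
As a point of reference, the sketch below shows how the Grid and Random tuning methods might look if written directly in R with the caret package; this is an assumed mapping for illustration, not the tool's exact code. The rf model's single tuning parameter is mtry, and the built-in iris data is used as a stand-in.

  library(caret)
  set.seed(11)
  # Grid search: evaluate the specific hyperparameter values you list
  grid_fit <- train(Species ~ ., data = iris, method = "rf",
                    trControl = trainControl(method = "cv", number = 5),
                    tuneGrid  = expand.grid(mtry = c(1, 2, 3)))   # analogous to depth = c(1, 5, 9) above
  # Random search: sample candidate settings at random (Tuning Length = 3)
  rand_fit <- train(Species ~ ., data = iris, method = "rf",
                    trControl = trainControl(method = "cv", number = 5, search = "random"),
                    tuneLength = 3)
  grid_fit$bestTune                  # the optimal parameter set found by the search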

Verify

If you click the Verify button, the software will perform some checks on the data you entered. A sample screenshot of the verification checks is shown in this figure.
1. Verify Checks: For each main tab of the analysis setup, one or more checks are performed to see if the inputs have been specified correctly. A green checkmark will be placed at the top if all the checks are successful. You can only compute the analysis results if all checks have been passed; otherwise, you must fix the input errors before generating analysis results.
2. Check Status: The results of the analysis checks are listed here. If a check passes, it is shown as a green checkmark. If a verification check fails, it is shown as a red cross. If a verification check results in a warning, it is shown as an orange exclamation mark. Finally, any checks required to be performed by the user are shown as blue info icons.
3. Overall Status: The overall status of all the checks for the given analysis is shown here. The overall status shows a green thumbs-up sign if everything is okay and a red thumbs-down sign if any checks have not passed. Note that for some analyses, you cannot proceed with generating analysis results if the overall status is not okay.

Outputs

Click on Compute Outputs to update the results on the worksheet. A sample screenshot of the worksheet is shown below.
1. Notes Section: The notes section summarizes the input data, and the analysis results section shows the names of the response variable and the input variables. To make predictions using the developed model, click the Make Prediction button on the main menu bar and enter the values for the input variables. When you click Compute Outputs, the output column is updated with the model predictions.
2. Graph Section: The graph section shows a chart of the model accuracy, a plot of the hyperparameter tuning optimization if applicable, and the variable importance plot showing which input factors impact the output response the most.
If you want to use this model to make predictions, click the Make Predictions button on the menu bar. Refer to the help file on this topic for more details.
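
For readers who want to see the equivalent step in R, the sketch below makes predictions on new observations with a random forest model. It is a stand-alone illustration that assumes the randomForest library listed under Setup and the built-in iris data; it is not what the Make Predictions button executes internally.

  library(randomForest)
  set.seed(7)
  model <- randomForest(Species ~ ., data = iris)   # fit a bagged (random forest) model
  new_obs <- iris[c(1, 51, 101), -5]                # pretend these rows are new, unlabeled inputs
  predict(model, newdata = new_obs)                 # returns the predicted class for each row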

Notes

Here are a few pointers regarding this analysis:
  • This analysis requires that the R software be installed on your computer. Further, you will need to provide a link to the RScript executable file under Sigma Magic Options so that the software can use R to generate analysis results.

Examples

The following examples are in the Examples folder.
  • For the data in the file, perform a bagging analysis to predict the gear based on the other input variables. (Bagging Analysis 1.xlsx)
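
As a rough counterpart to this example outside Sigma Magic, the sketch below predicts gear from the other variables using R's built-in mtcars data as a stand-in for the workbook; this is an assumption, since the contents of Bagging Analysis 1.xlsx may differ.

  library(caret)                         # assumed R packages; see the Setup section
  cars <- mtcars
  cars$gear <- factor(cars$gear)         # treat gear as a class label (Classification response)
  set.seed(1)
  fit <- train(gear ~ ., data = cars, method = "rf",
               trControl = trainControl(method = "cv", number = 5))
  fit$results                            # resampled accuracy and Kappa for each candidate mtry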
