Help Manual


Sigma Magic Help Version 17

Boosted Models


Boosting is an ensemble learning method that combines weak learners into a strong learner to minimize training errors. A weak learner is a model that performs only slightly better than random chance on a given problem; examples include shallow decision trees, decision stumps (single-level trees), and simple linear models. The idea is to combine the predictions of these weak models to create a strong learner. In boosting, models are trained sequentially on the data—that is, each model tries to compensate for the weaknesses of its predecessor—and each iteration combines the weak rules from every classifier to form one strong prediction rule.

Boosting builds models sequentially, with each subsequent model focusing on correcting the errors of the combined previous models. The training process assigns higher weights to misclassified instances, directing the attention of the subsequent weak models to those areas where the model needs improvement. Each weak learner contributes to the final prediction with a weight based on its accuracy. The weights are determined during the training process, and models that perform better have a greater influence on the final prediction. AdaBoost, in particular, adjusts the weights of incorrectly classified instances during each iteration, making them more influential in subsequent training rounds. This adaptive learning helps the algorithm focus on difficult-to-classify instances.
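The weight-update scheme described above can be sketched in a few lines of pure Python. This is a minimal illustration on hypothetical one-dimensional data using decision stumps; it is not the implementation Sigma Magic calls, since model fitting is delegated to the R libraries listed later in this help file.

```python
import math

# Toy 1-D training set (hypothetical values): labels are +1/-1.
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [1, 1, 1, -1, -1, -1, 1, 1]

def best_stump(X, y, w):
    """Return (weighted error, threshold, sign) of the best decision stump."""
    best = None
    for thr in X:
        for sign in (1, -1):
            pred = [sign if x < thr else -sign for x in X]
            err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

def adaboost(X, y, rounds):
    n = len(X)
    w = [1.0 / n] * n                      # start with uniform weights
    ensemble = []
    for _ in range(rounds):
        err, thr, sign = best_stump(X, y, w)
        err = max(err, 1e-10)              # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, sign))
        # Raise the weights of misclassified points, lower the rest,
        # then renormalize so the weights again sum to one.
        pred = [sign if x < thr else -sign for x in X]
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, pred)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of the stumps; better stumps carry larger alpha."""
    score = sum(a * (s if x < t else -s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1

model = adaboost(X, y, rounds=3)
```

After three rounds, the weighted vote of the stumps reproduces every training label, even though no single stump can classify this pattern on its own.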

Gradient Boosting is a generalization of boosting that can be applied to various loss functions. It works by minimizing the loss function using gradient descent. Popular implementations of gradient boosting include XGBoost, LightGBM, and CatBoost. Boosted models often incorporate regularization techniques to prevent overfitting. Regularization parameters control the complexity of individual weak learners and the overall complexity of the boosted model.
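As a sketch of the gradient-descent view, the following pure-Python example (hypothetical data, squared-error loss) repeatedly fits a regression stump to the current residuals—which are exactly the negative gradient of the squared loss—and adds a shrunken copy of it to the model. The learning rate acts as a simple regularizer.

```python
# Toy regression data (hypothetical values, for illustration only).
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.2, 1.9, 3.1, 3.9, 5.2, 5.8]

def fit_stump(X, r):
    """Single split minimizing squared error against the residuals r."""
    best = None
    for thr in X:
        left = [ri for xi, ri in zip(X, r) if xi < thr]
        right = [ri for xi, ri in zip(X, r) if xi >= thr]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        sse = (sum((ri - lm) ** 2 for ri in left)
               + sum((ri - rm) ** 2 for ri in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    return best[1:]

def gradient_boost(X, y, rounds=100, lr=0.1):
    f0 = sum(y) / len(y)                  # start from the mean response
    pred = [f0] * len(X)
    stumps = []
    for _ in range(rounds):
        # For squared loss, the negative gradient is just the residual.
        resid = [yi - pi for yi, pi in zip(y, pred)]
        thr, lm, rm = fit_stump(X, resid)
        stumps.append((thr, lm, rm))
        # Take a small step (the learning rate) toward the stump's fit.
        pred = [pi + lr * (lm if xi < thr else rm)
                for pi, xi in zip(pred, X)]
    return f0, lr, stumps

def predict(model, x):
    f0, lr, stumps = model
    return f0 + sum(lr * (lm if x < thr else rm) for thr, lm, rm in stumps)

model = gradient_boost(X, y)
```

Libraries such as XGBoost, LightGBM, and CatBoost elaborate this same loop with better loss functions, tree growing, and regularization.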

Weak learners are trained in parallel in bagging, but in boosting they learn sequentially. A series of models is constructed, and with each new model iteration, the weights of the data misclassified by the previous model are increased. This redistribution of weights helps the algorithm identify the data points it needs to focus on to improve its performance.

Boosted models typically achieve high predictive accuracy and are relatively robust to overfitting, although they can still overfit noisy data if not regularized. They are widely used in practice for various tasks, including classification and regression problems. Boosted models are powerful and versatile, and they have been successful in applications such as computer vision, natural language processing, and structured data analysis. They often outperform individual models and are considered state-of-the-art in many machine-learning competitions and real-world scenarios.

Note that this functionality requires the RScript software to be installed on your computer and linked to the Sigma Magic software within the Options menu. A Boosted Models worksheet can be added to your active workbook by clicking on Analytics and then selecting Boosted Models.



Click on Analysis Setup to open the menu options for this tool. A sample screenshot of the menu is shown below.
menu 1
Response Type: Specify the response data type you want to use to fit this model. The typical options for the response type are shown below.
Binary: The response variable contains only two classes. For example, True/False, Good/Bad, etc.
Classification: The response variable can contain two or more classes. For example, Grades A, B, C, D, E, and F.
Regression: The response variable is usually continuous. For example, temperatures, pressures, etc., can take on any value.
Model: Specify which library you want to use to fit your model. Usually, several different libraries are available, and more are added regularly. Currently, the following libraries are available depending on the response variable you selected.
Num | Response | Model | Name | Library | Num Params
1 | Binary | ada | Boosted Classification Trees | ada, plyr | 3
2 | Binary | gamboost | Boosted Generalized Additive Model | mboost, plyr, import | 2
3 | Binary | glmboost | Boosted Generalized Linear Model | mboost, plyr | 2
4 | Binary | C5.0 | C5.0 | C50, plyr | 3
5 | Binary | C5.0Cost | Cost Sensitive C5.0 | C50, plyr | 4
6 | Binary | deepboost | Deep Boost | deepboost | 5
7 | Classification | AdaBoostM1 | AdaBoost Classification Trees | adabag, plyr | 3
8 | Classification | AdaBag | Bagged AdaBoost | adabag, plyr | 2
9 | Classification | BstLm | Boosted Linear Model | bst, plyr | 2
10 | Classification | LogitBoost | Boosted Logistic Regression | caTools | 1
11 | Classification | blackboost | Boosted Tree | party, mboost, plyr, partykit | 2
12 | Classification | bstTree | Boosted Tree | bst, plyr | 3
13 | Classification | xgbLinear | eXtreme Gradient Boosting | xgboost, plyr | 4
14 | Classification | xgbTree | eXtreme Gradient Boosting | xgboost, plyr | 7
15 | Classification | gbm | Stochastic Gradient Boosting | gbm, plyr | 4
16 | Regression | gamboost | Boosted Generalized Additive Model | mboost, plyr, import | 2
17 | Regression | glmboost | Boosted Generalized Linear Model | mboost, plyr | 2
18 | Regression | BstLm | Boosted Linear Model | bst, plyr | 2
19 | Regression | blackboost | Boosted Tree | party, mboost, plyr, partykit | 2
20 | Regression | bstTree | Boosted Tree | bst, plyr | 3
21 | Regression | xgbLinear | eXtreme Gradient Boosting | xgboost, plyr | 4
22 | Regression | xgbTree | eXtreme Gradient Boosting | xgboost, plyr | 7
23 | Regression | gbm | Stochastic Gradient Boosting | gbm, plyr | 4
Pre-Process: Specify if you want to pre-process your data before you fit a model. The available options are:
None: No pre-processing of data is performed before model fitting. The data is used as-is.
Center: Subtract the mean value from each of the data points so that the average factor value is zero.
Center, Scale: Subtract the mean value from each of the data points and divide by the standard deviation so that the average factor value is zero and the standard deviation is one.
Range: Adjust the values such that the minimum value is mapped to 0 and the maximum value is mapped to 1.
Scale: Divide each of the values by the standard deviation so that the standard deviation of the factor is 1.
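For reference, the four transformations can be expressed as simple functions. This is an illustrative sketch with a hypothetical data column, not the software's internal code:

```python
import statistics

data = [2.0, 4.0, 6.0, 8.0]      # hypothetical column of factor values

def center(xs):
    """Center: subtract the mean so the average becomes zero."""
    m = statistics.mean(xs)
    return [x - m for x in xs]

def center_scale(xs):
    """Center, Scale: zero mean and unit standard deviation."""
    m, s = statistics.mean(xs), statistics.stdev(xs)
    return [(x - m) / s for x in xs]

def range01(xs):
    """Range: map the minimum to 0 and the maximum to 1."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def scale(xs):
    """Scale: divide by the standard deviation so it becomes one."""
    s = statistics.stdev(xs)
    return [x / s for x in xs]
```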
Model Selection: Specify the metric to use for tuning the model. This metric selects the best model from an available list of models. The available options depend on the type of response.
Option | Response Type | Description
RMSE | Regression | Uses the smallest value of the Root Mean Square Error (RMSE) to pick the best model.
MAE | Regression | Uses the smallest value of the Mean Absolute Error (MAE) to pick the best model.
Accuracy | Binary, Classification | Uses the largest value of the percentage of items that are matched correctly to pick the best model.
Kappa | Binary, Classification | Uses the largest value of the Kappa statistic to pick the best model.
ROC | Binary | Uses the largest value of the Area Under the ROC curve to pick the best model.
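These metrics are straightforward to compute. The sketch below (illustrative pure Python, not the software's internal code) shows one common formulation of each, including Cohen's Kappa computed from observed and chance agreement:

```python
import math

def rmse(actual, pred):
    """Root Mean Square Error: smaller is better (regression)."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mae(actual, pred):
    """Mean Absolute Error: smaller is better (regression)."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def accuracy(actual, pred):
    """Fraction of items matched correctly: larger is better."""
    return sum(a == p for a, p in zip(actual, pred)) / len(actual)

def kappa(actual, pred):
    """Cohen's Kappa: observed agreement corrected for chance agreement."""
    n = len(actual)
    po = accuracy(actual, pred)          # observed agreement
    pe = sum((actual.count(c) / n) * (pred.count(c) / n)
             for c in set(actual) | set(pred))
    return (po - pe) / (1 - pe)
```

A Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why it is a more demanding criterion than raw accuracy on imbalanced classes.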
Selected Model: The details of the selected model are displayed in this box based on the response type and the model you have chosen. You can review the model description to determine if this is the model you want to fit for your data.
Help Button: Click on the Help Button to view the help documentation for this tool.
Cancel Button: Click on the Cancel Button to discard your changes and exit this menu.
OK Button: Click on the OK Button to save your changes and try to execute the program. Note that we do not control the algorithms for fitting these models, and there is no guarantee that the model will properly fit your selected data set. You can contact the R community for any possible resolution if there are model errors.


You will see the following dialog box if you click the Data button. Here, you can specify the data required for this analysis. Data
Search Data: The available data displays all the columns of data that are available for analysis. You can use the search bar to filter this list and speed up finding the right data for analysis. Enter a few characters in the search field, and the software will filter and display the filtered data in the Available Data box.
Available Data: The available data box contains the list of data available for analysis. If your workbook has no data in tabular format, this box will display "No Data Found." The information displayed in this box includes the row number, whether the data is Numeric (N) or Text (T), and the name of the column variable. Note that the software displays data from all the tables in the current workbook. Even though data within the same table have unique column names, columns across different tables can have similar names. Hence, it is crucial that you not only specify the column name but also the table name.
Add or View Data: Click on this button to add more data to your workbook for analysis or to view more details about the data listed in the available data box. When you click on this button, it opens the Data Editor dialog box, where you can import more data into your workbook. You can also switch from the list view to a table view to see the individual data values for each column.
Required Data: The code for the required data specifies what data can be specified for that box. An example code is N: 2-4. If the code starts with an N, select only numeric columns. If the code begins with a T, you can select numeric and text columns. The numbers to the right of the colon specify the min-max values. For example, if the min-max values are 2-4, you must select a minimum of 2 columns of data and a maximum of 4 columns in this box. If the minimum value is 0, then no data is required to be specified for this box.
Select Button: Click on this button to select the data for analysis. Any data you choose for the analysis is moved to the right. To select a column, click on the columns in the Available Data box to highlight them and then click on the Select Button. A second method to choose the data is to double-click on the columns in the list of Available Data. Finally, you can drag and drop: select the columns with the left mouse button held down and drag them into one of the boxes on the right.
Selected Data: You will need to select one column for the response data and one or more columns for the factors/features of the given model. If the correct number of data columns is specified, the list box header will be black. If sufficient data is not selected, the list box header will be displayed in red. Note that you can double-click on any of the columns in this box to remove them from the box.
View Selection: Click on this button to view the data specified for this analysis. The data can be viewed in a tabular format or a graphical summary of the selected data.


If you click the Train button, the software will let you pick the options for training the given model. Training is a step where we split the data into groups - a train data set and a test data set. The train data set is used to build the model, and the test data set is used to evaluate how well the model performs. We should not use the same data for both training and testing, since the model may overfit: it could perform excellently on the train data set but poorly on data sets it has not seen. The train function can be used for the following:
  1. Evaluate the effect of model tuning parameters on performance
  2. Choose the "optimal" model across these parameters
  3. Estimate model performance from a training set
A sample screenshot of the training page is shown in this figure. Data
Method: Specify the method to use to determine the test/train data sets. The available options are:
None: Do not split the data into test and train data sets; the entered data is used for both training and testing. Note that with this method, you cannot tune the model using tuning length or grid methods.
Boot: Use the bootstrap method to determine the test and train data sets. The bootstrap method involves repeatedly drawing samples from the data set with replacement. If you use this method, you must specify the number of resamples.
Cross Validation (CV): Cross-validation involves dividing the available data into multiple folds or subsets, using one of these folds for validation and the remaining folds for training. This process is repeated multiple times, using a different fold for validation each time. Finally, the results from each validation step are averaged to produce a more robust estimate of the model's performance. If you use this method, you must specify the number of folds.
Leave One Out Cross Validation (LOOCV): A form of cross-validation where each data point in turn is used for validation and the rest of the data is used for training. This method is computationally expensive for large data sets but is simple to use and requires no configuration.
Leave Group Out Cross Validation (LGOCV): A form of cross-validation where a set of data points is held out for validation and the rest of the data is used for training. You must specify the holdout percentage (0-100%) for this method.
Repeated Cross Validation (RepeatedCV): In this method, the cross-validation is repeated multiple times. You will need to specify the number of folds and the number of repeats. For example, five repeats of 10-fold cross-validation performs 10-fold cross-validation on the training data 5 times, using a different set of folds for each cross-validation. The rationale is to obtain more accurate and robust results.
Holdout Percent: Specify the % of data to hold out for performing validation testing. The % specified should be between 0 - 100%. Note that this value is only enabled for the LGOCV method.
Num K-Folds: Specify the number of K-folds for cross-validation. If you specify 5, it means that the data is split into five groups. Hence, each group contains 20% of the data points. The first 20% of the data points are used for the first validation and the remaining 80% for training. The second 20% of the data is used for the second validation. This value is to be specified for CV and RepeatedCV methods. For the bootstrap sampling method, select the number of resamples of the data to use.
Num Repeats: Specify the number of repeats for repeated cross-validations. This textbox is disabled for other methods.
Random Seed: Specify the random seed value. Use a value of 0 for truly random numbers. If you want to replicate your results between runs, specify the seed value for the random number generator.
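To make the splitting behavior concrete, here is a sketch (illustrative pure Python, not the software's internal code) of how k-fold indices and a single bootstrap resample can be generated from a seed:

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle row indices 0..n-1 with a fixed seed and deal them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def bootstrap_indices(n, seed=0):
    """One bootstrap resample: n draws with replacement form the train set;
    the rows never drawn (out-of-bag) form the test set."""
    rng = random.Random(seed)
    train = [rng.randrange(n) for _ in range(n)]
    out_of_bag = [i for i in range(n) if i not in set(train)]
    return train, out_of_bag
```

With k = 5, each fold holds roughly 20% of the rows; fixing the seed makes the same folds (and hence the same results) reproducible between runs.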
Sub Sampling: Specify the sampling strategy to use when selecting the samples. In classification problems, a disparity in the frequencies of the observed classes can significantly impact model fitting. Class imbalances can be mitigated using a sampling strategy.
None: Do not use any sampling strategy; select the samples at random.
Up: Use the up-sampling strategy. Randomly sample with replacement from the minority class until it is the same size as the majority class.
Down: Use the down-sampling strategy. Randomly subset all the classes in the training set so that their class frequencies match the least prevalent class.
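The two strategies can be sketched as follows (illustrative pure Python with hypothetical row and label lists; the software performs this internally):

```python
import random
from collections import Counter

def up_sample(rows, labels, seed=0):
    """Resample every minority class with replacement up to the majority size."""
    rng = random.Random(seed)
    target = max(Counter(labels).values())
    out = []
    for cls in set(labels):
        members = [i for i, lab in enumerate(labels) if lab == cls]
        out.extend(members + [rng.choice(members)
                              for _ in range(target - len(members))])
    return [rows[i] for i in out], [labels[i] for i in out]

def down_sample(rows, labels, seed=0):
    """Randomly subset every class down to the least prevalent class size."""
    rng = random.Random(seed)
    target = min(Counter(labels).values())
    out = []
    for cls in set(labels):
        members = [i for i, lab in enumerate(labels) if lab == cls]
        out.extend(rng.sample(members, target))
    return [rows[i] for i in out], [labels[i] for i in out]

# Hypothetical imbalanced data: six rows of class "A", two of class "B".
rows = list(range(8))
labels = ["A"] * 6 + ["B"] * 2
```

Up-sampling keeps all the information but duplicates minority rows; down-sampling discards majority rows and so may waste data on small sets.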


If you click the Tuning button, the software will let you pick the options for tuning the given model. Tuning refers to identifying the best set of hyperparameters for the given model. It only applies to those models that contain one or more hyperparameters. The main steps of tuning are:
  1. Define the set of model parameter values to evaluate
  2. For each parameter set, determine the samples to use
  3. Pre-process the data if required
  4. Fit the model on the train data set and use the test data set for evaluation
  5. Calculate the average performance across predictions
  6. Determine the optimal parameter set
  7. Fit the final model to all the training data using the optimal parameter set
A sample screenshot of the tuning page is shown in this figure. Data
Tuning Method: Specify the method to use to determine the best set of hyperparameters. The following options are available:
Fixed: Do not tune the hyperparameters. The values specified below are used as-is, and it is your responsibility to provide the right values for these parameters.
Grid: Tune the hyperparameters using a grid search. The entire range of the hyperparameters is searched based on the parameters you provide. Make sure you provide the right range for each of the given hyperparameters.
Random: Randomly sample the hyperparameters based on the specified tuning length. For example, if the tuning length is 3, the system will try three different settings for the hyperparameters and report the best of what it finds among these three searches.
Tuning Length: This textbox only applies if your model has hyperparameters and you use the random search method to select the tuning parameters; otherwise, this textbox is disabled.
Model Parameters: Depending on the model you select for fitting to your data, you may need to specify hyperparameters for that model. Some models may not have hyperparameters, while others may have one or more parameters. For some situations where you specify the tuning method is Random, you can specify the hyperparameter as Auto and let the system select this for you. For other cases where the tuning method is Fixed, you must specify the hyperparameters for the model.
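The difference between the Grid and Random methods can be sketched as follows. The parameter names and the score function here are hypothetical stand-ins; in practice, the score would be one of the model-selection metrics described earlier, evaluated via the chosen resampling method:

```python
import itertools
import random

# Hypothetical tuning grid for a boosted-tree model; these parameter
# names are illustrative, not the software's actual names.
grid = {"rounds": [10, 50, 100], "learning_rate": [0.05, 0.1, 0.3]}

def score(params):
    """Stand-in for cross-validated accuracy; peaks at rounds=50, rate=0.1."""
    return -abs(params["rounds"] - 50) - abs(params["learning_rate"] - 0.1)

def grid_search(grid, score_fn):
    """Grid method: evaluate every combination and keep the best."""
    keys = list(grid)
    best = None
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)
        if best is None or s > best[0]:
            best = (s, params)
    return best[1]

def random_search(grid, score_fn, tuning_length=3, seed=0):
    """Random method: try tuning_length random combinations, keep the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(tuning_length):
        params = {k: rng.choice(v) for k, v in grid.items()}
        s = score_fn(params)
        if best is None or s > best[0]:
            best = (s, params)
    return best[1]
```

Grid search is exhaustive but its cost grows multiplicatively with each added parameter; random search caps the cost at the tuning length but may miss the best combination.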


If you click the Verify button, the software will perform some checks on the data you entered. A sample screenshot of the data is shown in this figure. Pre-Process Inputs 4
Verify Checks: For each main tab of the analysis setup, one or more checks are performed to see if the inputs have been specified correctly. If all the checks are successful, a green checkmark will be placed at the top. You can only compute the analysis results if all checks have been passed; otherwise, you must fix the input errors before generating analysis results.
Check Status: The results of the analysis checks are listed here. Checks that pass are shown as green checkmarks, and checks that fail are shown as a red cross. Checks that result in a warning are shown as an orange exclamation mark. Finally, any checks required to be performed by the user are shown as blue info icons.
Overall Status: The overall status of all the checks for the given analysis is shown here. The overall status check shows a green thumbs-up sign if everything is okay and a red thumbs-down sign if any checks have not passed. Note that you cannot proceed with generating analysis results for some analyses if the overall status is not okay.


Click on Compute Outputs to update the results on the worksheet. A sample screenshot of the worksheet is shown below.
Boosting Example
Notes Section: The notes section summarizes the input data, and the analysis results section shows the names of the response variable and the input variables. To make predictions using the developed model, click the Make Prediction button on the main menu bar and enter the values for the input variables. When you click Compute Outputs, the output column updates the model predictions.
Graph Section: The graph section shows a chart for the model accuracy, a plot of hyperparameter tuning optimization if applicable, and the variable importance plot showing which input factors impact the output response most.
If you want to use this model to make predictions, click the Make Predictions button on the menu bar. Refer to the help file on this topic for more details.


Here are a few pointers regarding this analysis:
  • This analysis requires that the R software be installed on your computer. In addition, you will need to provide a link to the RScript executable file under Sigma Magic Options so that the software can use R to generate analysis results.


The following examples are in the Examples folder.
  • For the data in the file, perform the boosted analysis for predicting the gear based on other variable inputs. (Boosted Analysis 1.xlsx)


For more information on this topic, please refer to the following articles. Do note that if any external links are mentioned below, they are for reference purposes only.

© Rapid Sigma Solutions LLP. All rights reserved.