How do I interpret error metrics for my regression model?

This article discusses how to interpret the error metrics for your regression model in the G2M platform.


Once a regression model is trained, the following error metrics are generated:

  • R2: the coefficient of determination R2 measures how good the regression model is at predicting, i.e. how closely predicted outcomes match actual outcomes. Its maximum value is 1 and it is generally positive for most models, but it has no lower bound: it becomes negative if the model is particularly bad, i.e. if its predictions fit the actual outcomes worse than simply predicting their mean.

  • r2: the Pearson r2, defined as the square of the Pearson correlation between predicted and actual outcomes, measures whether there is any linear relationship between the two. It ranges from 0 to 1. With a strongly biased model you may see a low R2 and a high r2: the predictions are poor because of their bias, but there is still a straightforward linear relationship between the biased predictions and the actual outcomes (see the first sketch after this list).

  • RMSE: the root mean square error (RMSE) measures the typical difference between predicted and actual outcomes. It is defined as the square root of the mean of the squared residuals, which equals the standard deviation of the residuals when the residuals average to zero. Many regression algorithms use it (or equivalently the MSE) as the objective function to be minimized.

  • MSE: the mean square error (MSE) is the square of the RMSE, i.e. the mean of the squared residuals (the variance of the residuals when their mean is zero).

  • MAE: the mean absolute error (MAE) is defined as the average absolute difference between predictions and actuals. It is similar to the RMSE but, unlike the RMSE (whose square root makes it a biased estimate in practice), it is unbiased and has a lower sample variance, so it tends to be a more robust error metric.

  • MAPE: the mean absolute percentage error (MAPE) is very similar to the MAE: it is the average of the absolute differences between predictions and actuals, each divided by the corresponding actual. The value is reported as a plain ratio rather than a percentage, so a reported value of 100 means a factor of 100 (1e2), not 100%. When some actual outcomes are close to zero, the corresponding terms blow up, and because the averaging is unweighted the MAPE can quickly become very large (see the second sketch after this list).
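
To illustrate the difference between R2 and the Pearson r2 described above, the sketch below scores a deliberately biased set of predictions. It uses NumPy, SciPy, and scikit-learn as stand-in tools; this is purely illustrative and is not the G2M implementation.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
actual = rng.uniform(0, 100, size=200)

# A hypothetical, strongly biased model: predictions track the actuals closely
# but are shifted upward by a constant 30 units.
predicted = actual + 30 + rng.normal(0, 2, size=200)

R2 = r2_score(actual, predicted)          # coefficient of determination
r2 = pearsonr(actual, predicted)[0] ** 2  # squared Pearson correlation

print(f"R2 = {R2:.3f}")  # low, and can even be negative, because of the constant bias
print(f"r2 = {r2:.3f}")  # close to 1: a clean linear relationship remains
```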

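The remaining metrics can all be computed directly from the residuals. The sketch below does so with NumPy on a small made-up example; G2M computes these values for you, so the snippet only shows the definitions in code.

```python
import numpy as np

# Small made-up example values, purely for illustration.
actual = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
predicted = np.array([12.0, 18.0, 33.0, 37.0, 55.0])

residuals = predicted - actual

mse = np.mean(residuals ** 2)               # mean square error
rmse = np.sqrt(mse)                         # root mean square error
mae = np.mean(np.abs(residuals))            # mean absolute error
mape = np.mean(np.abs(residuals / actual))  # a ratio, not a percentage: 1.0 means 100%

print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}  MAPE={mape:.3f}")
```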