Which algorithm should I pick for my propensity model?

A brief summary of each of the main machine learning algorithms used for propensity modeling.


If you're not sure which algorithm to use for your propensity model, start with logistic regression and then work through the alternatives below (a runnable comparison sketch follows the list):

  • Logistic regression. The Logistic Regression algorithm is very efficient and is an easy way to compute propensities quickly on any dataset, making it a great place to start. However, it tends to be less accurate on datasets with a large number of variables, complex or non-linear relationships, or collinear variables. In those cases, try other algorithms such as LightGBM or Random Forest.

  • LightGBM. The LightGBM algorithm tends to be one of the fastest and most memory-efficient classification algorithms. It generally performs better than the others on larger datasets, but its results may be harder to interpret.

  • Random forest. If you've already tried Logistic Regression and are still in doubt, use Random Forest. It is a robust algorithm that does well on a wide variety of tabular datasets. It tends to be somewhat slower, but it will do a great job classifying most datasets.

  • XGBoost. The XGBoost classifier performs similarly to Random Forest. In general, XGBoost is more prone to overfitting and less robust to messy data, but it tends to do better when the dataset is imbalanced, i.e., when the outcome you are trying to predict is infrequent.

  • Gradient boosting classifier. In most cases the Gradient Boosting classifier will not perform as well as Random Forest: it is more sensitive to overfitting and slower. However, it occasionally does better in cases where Random Forest may be biased or limited, e.g., with categorical variables that have many levels.

  • Adaptive boosting. The AdaBoost (adaptive boosting) classifier will generally not perform as well as Random Forest or XGBoost. It will be both slower and more sensitive to noise. However, it can occasionally perform better with high-quality datasets when overfitting is a concern.

  • Extra trees. The Extra Trees (extremely randomized trees) classifier is similar to Random Forest. It tends to be significantly faster, but it will not do as well as Random Forest on noisy datasets with a large number of variables.
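
As a starting point, here is a minimal sketch of how you might compare these algorithms on your own data using cross-validated ROC AUC. It uses scikit-learn, treats LightGBM and XGBoost as optional extras, and substitutes a synthetic imbalanced dataset for your real features and outcome; the dataset and all parameter values are illustrative assumptions, not tuned recommendations.

# A minimal sketch comparing the algorithms above on a propensity task.
# The synthetic dataset below is a placeholder for your own features (X)
# and binary outcome (y); swap in your real data.

from sklearn.datasets import make_classification
from sklearn.ensemble import (
    AdaBoostClassifier,
    ExtraTreesClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real dataset: a roughly 5% positive rate mimics
# an infrequent outcome such as a conversion.
X, y = make_classification(
    n_samples=10_000, n_features=20, weights=[0.95], random_state=0
)

models = {
    # Scaling helps logistic regression; tree ensembles don't need it.
    "logistic_regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
    "extra_trees": ExtraTreesClassifier(n_estimators=300, random_state=0),
}

# LightGBM and XGBoost live in separate packages; include them if installed.
try:
    from lightgbm import LGBMClassifier
    models["lightgbm"] = LGBMClassifier(random_state=0)
except ImportError:
    pass

try:
    from xgboost import XGBClassifier
    # scale_pos_weight (negatives / positives) counteracts class imbalance.
    models["xgboost"] = XGBClassifier(
        scale_pos_weight=(y == 0).sum() / (y == 1).sum(), random_state=0
    )
except ImportError:
    pass

# ROC AUC is a reasonable default metric for ranking propensity scores.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")

The try/except blocks simply skip LightGBM or XGBoost when those packages are not installed, so the script runs with scikit-learn alone. Whichever model scores best, check the calibration of its predicted probabilities before treating them as propensities.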

