What is Hyper Tuning?

Hyper Tuning uses search techniques to optimize machine learning hyperparameters, improving accuracy and efficiency while reducing overfitting.


Hyper Tuning refers to the process of optimizing hyperparameters in machine learning models to improve performance. Unlike model parameters, which are learned during training, hyperparameters (e.g., learning rate, batch size, number of layers) must be set before training begins. Techniques such as grid search and random search, and more advanced methods such as Bayesian optimization and genetic algorithms, help automate this tuning process. The goal is to find the combination of hyperparameters that minimizes loss and maximizes accuracy, efficiency, or generalization to unseen data. Proper hyperparameter tuning can significantly enhance a model’s performance, reducing overfitting and improving overall predictive power.
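The distinction between hyperparameters and learned parameters can be seen in a minimal sketch. This example uses scikit-learn, which is an assumption for illustration only (the article does not state which library the G2M Platform uses internally):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Toy dataset for illustration.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Hyperparameters: fixed before training begins, not learned from data.
model = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=0)

# Model parameters: the fitted decision trees, learned during fit().
model.fit(X, y)

# The number of fitted trees matches the n_estimators hyperparameter.
print(len(model.estimators_))
```

Tuning means systematically varying settings like `n_estimators` and `max_depth` and keeping the combination that scores best on validation data.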

Algorithms With Hyper Tuning In The G2M Platform

  • Classifiers:

    • Random Forest

    • Gradient Boost

    • Adaptive Boost

    • XGBoost

    • Extra Trees

    • LightGBM

  • Regressors:

    • Random Forest

    • XGBoost

    • Gradient Boost

    • Lasso

    • Bayesian Ridge

    • Linear

    • Ridge

How the Selection Works in the G2M Platform


Some algorithms, such as Bayesian Ridge, Lasso, and Ridge, can use either GridSearchCV or BayesSearchCV. The decision depends on how many hyperparameter options are available. If there are only a few settings to test, GridSearchCV is chosen because it can afford to check every possibility. If there are many possible configurations, BayesSearchCV is used instead, making smarter, data-driven guesses rather than exhaustively searching every option. This balances thoroughness against efficiency.
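The selection rule above can be sketched in a few lines. This is an illustrative reconstruction, not the platform's actual code: the threshold value and the function name are assumptions made for the example.

```python
# Illustrative cutoff -- the actual threshold used by G2M is not documented here.
GRID_SEARCH_MAX_COMBINATIONS = 50

def choose_search_strategy(param_grid):
    """Pick GridSearchCV for small grids, BayesSearchCV for large ones.

    param_grid maps each hyperparameter name to its list of candidate values.
    """
    n_combinations = 1
    for values in param_grid.values():
        n_combinations *= len(values)
    if n_combinations <= GRID_SEARCH_MAX_COMBINATIONS:
        return "GridSearchCV"   # few settings: exhaustive search is affordable
    return "BayesSearchCV"      # many settings: model-based guesses are cheaper

# 3 combinations -> exhaustive search is fine.
small_grid = {"alpha": [0.1, 1.0, 10.0]}

# 8 * 3 * 4 = 96 combinations -> switch to Bayesian search.
large_grid = {"alpha": [10**k for k in range(-4, 4)],
              "tol": [1e-3, 1e-4, 1e-5],
              "max_iter": [100, 500, 1000, 5000]}

print(choose_search_strategy(small_grid))  # GridSearchCV
print(choose_search_strategy(large_grid))  # BayesSearchCV
```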

  1. GridSearchCV: Tests all possible hyperparameter combinations—very thorough but slow.

  2. RandomizedSearchCV: Randomly picks a subset of combinations—faster but might miss the best one.

  3. BayesSearchCV: Learns from past results to pick smarter combinations—efficient and often the best balance of speed and accuracy.
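As an illustration of the first strategy, here is a minimal GridSearchCV run over a Ridge regressor. The grid values are hypothetical, and scikit-learn is assumed for the example; BayesSearchCV (from the scikit-optimize package) exposes the same `fit` / `best_params_` interface:

```python
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_regression

# Toy regression data for illustration.
X, y = make_regression(n_samples=100, n_features=5, noise=0.1, random_state=0)

# GridSearchCV fits every combination in the grid (here 4 x 2 = 8 candidates,
# each cross-validated over 3 folds) and keeps the best-scoring one.
grid = {"alpha": [0.01, 0.1, 1.0, 10.0], "fit_intercept": [True, False]}
search = GridSearchCV(Ridge(), grid, cv=3)
search.fit(X, y)

print(search.best_params_)
```

RandomizedSearchCV is used the same way but takes an `n_iter` argument and samples only that many combinations, which is why it is faster but can miss the optimum.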

Algorithms With GridSearchCV

  • Classifiers

    • Adaptive Boost

    • Extra Trees

    • Gradient Boost

    • LightGBM

    • Random Forest

    • XGBoost

  • Regressors

    • Bayesian Ridge

    • Gradient Boost

    • Lasso

    • LightGBM

    • Linear

    • Random Forest

    • Ridge

    • XGBoost

Algorithms With RandomizedSearchCV

  • Regressors

    • Linear

Algorithms With BayesSearchCV

  • Regressors

    • Bayesian Ridge

    • Lasso

    • Ridge
