LightGBM

import lightgbm as lgb

# Note: any negative max_depth (here -5) means "no depth limit" in LightGBM.
model = lgb.LGBMClassifier(learning_rate=0.09, max_depth=-5, random_state=42)
model.fit(x_train, y_train, eval_set=[(x_test, y_test), (x_train, y_train)],
          verbose=20, eval_metric='logloss')  # verbose was removed from fit() in lightgbm >= 4.0

Here is what the above code is doing:
1. We create an instance of the LGBMClassifier class from the lightgbm library.
2. The learning_rate parameter sets the step-size shrinkage applied at each boosting iteration, which helps prevent overfitting.
3. The max_depth parameter limits the maximum depth of each tree; any negative value means the depth is unlimited.
4. The random_state parameter fixes the random seed so that the same results are produced every time the code is run.
5. The fit method trains the model on the training data.
6. The eval_set parameter of fit specifies the datasets on which the model is evaluated after each boosting round.
7. The verbose parameter of fit controls logging frequency: evaluation results are printed every 20 rounds (it does not define the metric).
8. The eval_metric parameter of fit specifies the evaluation metric, here log loss.