Table 3 Results of hyperparameter tuning

From: Application of artificial intelligence for forecasting surface quality index of irrigation systems in the Red River Delta, Vietnam

| No | Model name | Tuned hyperparameters |
|----|------------|------------------------|
| 1 | Gradient boosting (GB) | distribution = "Gaussian"; cv.folds = 10; shrinkage = 0.01; n.minobsinnode = 10 (each terminal node must contain at least 10 observations); n.trees = 500 |
| 2 | eXtreme gradient boosting (XGBoost) | nround = 100 (number of trees); eta = 0.01 (the shrinkage parameter λ); max.depth = 5 (number of splits in each tree) |
| 3 | Recurrent neural network (RNN) | learning_rate = 0.001; epochs = 500; batch_size = 32; validation_split = 0.2; verbose = 1 |
| 4 | Long short-term memory (LSTM) | learning_rate = 0.00001; epochs = 1000; batch_size = 32; validation_split = 0.2; verbose = 1 |
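The parameter names in Table 3 map onto specific R packages, and minimal sketches for each row follow. The GB settings (distribution, cv.folds, shrinkage, n.minobsinnode, n.trees) match the arguments of the R gbm package. A sketch of how that configuration might be fitted, assuming a hypothetical data frame `train_df` whose response column `wqi` holds the water quality index:

```r
library(gbm)

# Hypothetical data: `train_df` with response `wqi` and predictor columns.
gb_model <- gbm(
  wqi ~ .,
  data           = train_df,
  distribution   = "gaussian",  # Gaussian (squared-error) loss; gbm expects lowercase
  n.trees        = 500,         # number of boosting iterations
  shrinkage      = 0.01,        # shrinkage (learning-rate) parameter
  n.minobsinnode = 10,          # at least 10 observations per terminal node
  cv.folds       = 10           # 10-fold cross-validation
)

# Select the CV-optimal number of trees before predicting.
best_iter <- gbm.perf(gb_model, method = "cv")
```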
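The XGBoost row likewise reads like the R xgboost interface (eta, max.depth, nround). A sketch under that assumption, with hypothetical training objects `x_train` (numeric matrix) and `y_train` (numeric vector); note that the actual argument in the R package is spelled `nrounds` and the underscore form `max_depth` is used here:

```r
library(xgboost)

# Hypothetical data: numeric predictor matrix `x_train`, response vector `y_train`.
dtrain <- xgb.DMatrix(data = as.matrix(x_train), label = y_train)

params <- list(
  objective = "reg:squarederror",  # regression with squared-error loss
  eta       = 0.01,                # shrinkage parameter (λ in Table 3)
  max_depth = 5                    # maximum number of splits per tree
)

xgb_model <- xgb.train(
  params  = params,
  data    = dtrain,
  nrounds = 100                    # number of trees ("nround" in Table 3)
)
```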
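The RNN settings (learning_rate, epochs, batch_size, validation_split, verbose) are Keras training arguments. The sketch below uses the R keras interface with a single SimpleRNN layer; the layer size and input dimensions are assumptions, since Table 3 does not report the network architecture:

```r
library(keras)

timesteps  <- 7    # illustrative sequence length (not given in Table 3)
n_features <- 10   # illustrative number of input variables (not given in Table 3)

# Hypothetical architecture: one SimpleRNN layer feeding a single output unit.
rnn_model <- keras_model_sequential() %>%
  layer_simple_rnn(units = 32, input_shape = c(timesteps, n_features)) %>%
  layer_dense(units = 1)

rnn_model %>% compile(
  optimizer = optimizer_adam(learning_rate = 0.001),  # `lr` in older keras versions
  loss      = "mse"
)

# x_train: array of shape (samples, timesteps, features); y_train: numeric vector.
history <- rnn_model %>% fit(
  x_train, y_train,
  epochs           = 500,
  batch_size       = 32,
  validation_split = 0.2,  # hold out 20% of training data for validation
  verbose          = 1
)
```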
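The LSTM configuration differs only in the layer type, the much smaller learning rate, and the longer training schedule; a parallel sketch under the same assumptions:

```r
library(keras)

timesteps  <- 7    # same illustrative shapes as the RNN sketch
n_features <- 10

lstm_model <- keras_model_sequential() %>%
  layer_lstm(units = 32, input_shape = c(timesteps, n_features)) %>%
  layer_dense(units = 1)

lstm_model %>% compile(
  optimizer = optimizer_adam(learning_rate = 0.00001),
  loss      = "mse"
)

history <- lstm_model %>% fit(
  x_train, y_train,
  epochs           = 1000,
  batch_size       = 32,
  validation_split = 0.2,
  verbose          = 1
)
```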