There's no hard limit on the number of iterations for `LogisticRegression`; instead it tries to detect convergence within a specified tolerance, `tol`: the smaller `tol`, the longer the algorithm will run.
From the source code, I gather that the algorithm stops when the norm of the objective's gradient drops below `tol` times its initial value, i.e. its value before training started. This is worth documenting.
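A minimal sketch of the effect: tightening `tol` makes the convergence check fire later, so more iterations are run before stopping. The dataset and parameter values below are made up for illustration (note that recent scikit-learn versions also have a `max_iter` cap, set high here so the tolerance check is what ends training):

```python
# Smaller tol -> the stopping criterion fires later -> more iterations.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data, purely illustrative
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

iters = {}
for tol in (1e-1, 1e-4):
    # max_iter set high so the tolerance, not the cap, ends training
    clf = LogisticRegression(tol=tol, max_iter=10_000).fit(X, y)
    iters[tol] = clf.n_iter_[0]

print(iters)  # the looser tolerance needs fewer iterations
```

The fitted model exposes the actual iteration count in `n_iter_`, which is the easiest way to see how `tol` traded off against runtime.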
As for random forests, training stops once `n_estimators` trees, each of maximum depth `max_depth`, have been fit, subject to the constraints imposed by `min_samples_split`, `min_samples_leaf` and `max_leaf_nodes`. Tree learning is completely different from iterative linear-model learning.