Question

When applying logistic regression, one is essentially applying the function $1/(1 + e^{-\beta \cdot x})$ to produce a decision boundary, where $\beta$ is a vector of parameters learned by the algorithm and $x$ is an input feature vector. This appears to be the general framework provided by widely available packages such as Python's sklearn.
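
For concreteness, here is a minimal sketch of that setup, assuming scikit-learn's `LogisticRegression` on a small synthetic dataset (the data and variable names are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic two-class data, for illustration only
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(3, 1, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# sklearn learns the coefficient vector beta (and, by default, a scalar intercept)
clf = LogisticRegression().fit(X, y)
print(clf.coef_, clf.intercept_)
```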

This is a very basic question, and the effect can be achieved manually by centering the features, but shouldn't a more accurate decision boundary be given by $1/(1 + e^{-\beta \cdot (x - \alpha)})$, where $\alpha$ is an offset vector? Of course, an individual can subtract a pre-specified $\alpha$ from the features ahead of time and achieve the same result, but wouldn't it be better for the logistic regression algorithm to simply treat $\alpha$ as a free parameter that is trained, like $\beta$? Is there a reason this is not routinely done? A sketch of the manual workaround is shown below.
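
Here is a minimal sketch of that manual workaround, again on illustrative synthetic data, where `alpha` is a pre-specified offset chosen ahead of time rather than a learned parameter:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Same kind of synthetic two-class data as above, for illustration only
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(3, 1, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

alpha = X.mean(axis=0)     # a pre-specified offset, e.g. the per-feature means
X_shifted = X - alpha      # subtract alpha from the features ahead of time

# Fit on the shifted features; the learned beta now multiplies (x - alpha)
clf = LogisticRegression().fit(X_shifted, y)
print(clf.coef_, clf.intercept_)
```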

