I think that @RickardSjogren is describing the eigenvectors, while @BigPanda is giving the loadings. There's a big difference: Loadings vs eigenvectors in PCA: when to use one or another?.
I created this PCA class with a `loadings` method.
Loadings, as given by `pca.components_ * np.sqrt(pca.explained_variance_)`, are more analogous to coefficients in a multiple linear regression. I don't use `.T` here because in the PCA class linked above, the components are already transposed. `numpy.linalg.svd` produces `u`, `s`, and `vt`, where `vt` is the Hermitian transpose, so you first need to recover `v` with `vt.T`.
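To make the relationship concrete, here is a minimal numpy-only sketch (random data, hypothetical variable names) that computes eigenvectors and loadings directly from `numpy.linalg.svd` on centred data, the same quantities sklearn's PCA exposes as `components_` and `explained_variance_`:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Xc = X - X.mean(axis=0)              # centre the data, as PCA does internally

# SVD of the centred data: Xc = u @ diag(s) @ vt
u, s, vt = np.linalg.svd(Xc, full_matrices=False)
v = vt.T                              # recover v from its Hermitian transpose

eigvecs = v                           # columns are unit-norm eigenvectors
explained_variance = s**2 / (len(X) - 1)
loadings = eigvecs * np.sqrt(explained_variance)  # scale each eigenvector
```

Each column of `loadings` is an eigenvector scaled by the standard deviation of its component, which is what makes loadings comparable to regression coefficients.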
There is one other important detail: the signs (positive/negative) of the components and loadings in `sklearn.PCA` may differ from those in packages such as R. More on that here: In sklearn.decomposition.PCA, why are components_ negative?.
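The sign difference is harmless because an eigenvector's sign is mathematically arbitrary. A short numpy sketch (synthetic data, not tied to any particular package) shows that flipping the sign of a singular vector pair leaves the data reconstruction unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))
X -= X.mean(axis=0)

u, s, vt = np.linalg.svd(X, full_matrices=False)

# Flip the sign of the first left singular vector AND the matching
# right singular vector: the product u @ diag(s) @ vt is unchanged,
# which is why different packages can report opposite signs.
u2, vt2 = u.copy(), vt.copy()
u2[:, 0] *= -1
vt2[0, :] *= -1

assert np.allclose(u @ np.diag(s) @ vt, u2 @ np.diag(s) @ vt2)
```

So when comparing loadings across packages, compare them up to a sign flip per component.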