Here's the complement to your question: some methods that are derived from KNN and linear regression, followed by a few that aren't.
Techniques for regression
LASSO, ridge regression and elastic net are linear regression with a regularization penalty.
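To make the penalty concrete, here is a minimal numpy sketch of ridge regression via its closed-form solution; the data and the penalty strength `lam` are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -2.0]) + rng.normal(scale=0.1, size=100)

lam = 1.0  # penalty strength; larger values shrink coefficients harder
# Ridge minimizes ||y - Xw||^2 + lam * ||w||^2, which has a closed form:
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Ordinary least squares for comparison (the lam = 0 case)
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

LASSO swaps in an L1 penalty (no closed form, but it drives some coefficients exactly to zero), and elastic net mixes the two penalties.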
Local linear regression builds a nonlinear estimator using locally linear models (it's kind of a combination of linear regression and K nearest neighbor).
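A minimal sketch of that combination, assuming an unweighted least-squares fit to the k nearest neighbors of each query point (real implementations such as LOESS typically weight the neighbors by distance):

```python
import numpy as np

def local_linear_predict(X_train, y_train, x_query, k=10):
    # KNN step: find the k training points closest to the query
    dists = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(dists)[:k]
    # LR step: fit an ordinary linear model to just those neighbors
    Xk = np.column_stack([np.ones(k), X_train[idx]])
    beta, *_ = np.linalg.lstsq(Xk, y_train[idx], rcond=None)
    # Evaluate the local model at the query point
    return np.concatenate([[1.0], x_query]) @ beta
```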
Nonlinear regression techniques generalize linear regression to the case where there isn't a simple linear dependency on the inputs, but there is still a parametric model.
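For example, fitting an exponential decay with scipy's curve_fit; the model form y = a * exp(b * x) is just an illustrative choice:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # Nonlinear in x, but still parametric: only a and b are estimated
    return a * np.exp(b * x)

x = np.linspace(0, 4, 50)
y = model(x, 2.5, -1.3) + np.random.default_rng(1).normal(scale=0.05, size=50)

params, cov = curve_fit(model, x, y, p0=[1.0, -1.0])
```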
Poisson regression generalizes linear regression to the case where the quantity being estimated is a count (i.e. it cannot be negative, and it is always a whole number).
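A quick sketch with scikit-learn's PoissonRegressor; the count data is simulated purely for illustration:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))
# Counts drawn from a Poisson whose log-mean is linear in X
y = rng.poisson(lam=np.exp(0.4 * X[:, 0] - 0.7 * X[:, 1]))

model = PoissonRegressor().fit(X, y)
preds = model.predict(X)  # predicted means, always non-negative
```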
Hierarchical linear models, for example where A is used to predict B, which is in turn used to predict C.
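One reading of that chained structure, sketched with two ordinary linear models (the variable names and simulated data are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
A = rng.normal(size=(200, 1))
B = 2.0 * A[:, 0] + rng.normal(scale=0.1, size=200)
C = -1.5 * B + rng.normal(scale=0.1, size=200)

stage1 = LinearRegression().fit(A, B)      # A predicts B
B_hat = stage1.predict(A).reshape(-1, 1)
stage2 = LinearRegression().fit(B_hat, C)  # fitted B predicts C
C_hat = stage2.predict(B_hat)
```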
Least absolute deviation minimizes the L1 norm of the residuals, rather than the L2 norm as in ordinary linear regression.
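A minimal sketch that minimizes the sum of absolute residuals directly with a general-purpose optimizer; a production version would use a dedicated solver (LAD is equivalent to quantile regression at the median):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 3.0]) + rng.standard_t(df=2, size=100)  # heavy-tailed noise

def lad_loss(beta):
    # L1 norm of the residuals instead of the usual sum of squares
    return np.abs(y - X @ beta).sum()

beta_lad = minimize(lad_loss, x0=np.zeros(2), method="Nelder-Mead").x
```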
Various robust regression techniques try to provide robustness in the face of outliers.
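For instance, scikit-learn's HuberRegressor down-weights large residuals; the contaminated data here is simulated:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 1))
y = 4.0 * X[:, 0] + rng.normal(scale=0.2, size=100)
y[:5] += 50.0  # a handful of gross outliers

huber = HuberRegressor().fit(X, y)  # largely ignores the outliers
ols = LinearRegression().fit(X, y)  # pulled noticeably toward them
```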
Techniques for classification
Logistic regression and probit regression fall into the class of generalized linear models, and so are related to linear regression.
Similarly, they generalize to multinomial probit and multinomial logit models when there are more than two categories.
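A short sketch with scikit-learn, which fits a multinomial logit when given more than two classes; the three-class toy data is invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(loc=c, size=(50, 2)) for c in (-2, 0, 2)])  # 3 clusters
y = np.repeat([0, 1, 2], 50)

clf = LogisticRegression().fit(X, y)
probs = clf.predict_proba(X[:3])  # one column per class; each row sums to 1
```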
Some neural nets can be viewed as hierarchical, multinomial logistic regressions.
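To make the analogy concrete, here is the forward pass of a tiny two-layer network written as a layer of logistic regressions feeding a multinomial (softmax) regression; the weights are random placeholders rather than trained values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(7)
X = rng.normal(size=(4, 5))                    # 4 examples, 5 features
W1, b1 = rng.normal(size=(5, 8)), np.zeros(8)  # layer 1: 8 logistic units
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)  # layer 2: 3-class softmax

hidden = sigmoid(X @ W1 + b1)      # each hidden unit is a logistic regression on X
probs = softmax(hidden @ W2 + b2)  # a multinomial logit on the hidden layer
```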
Support vector machines can, in some cases, be viewed as a least-squares regression with binary targets, operating in a higher-dimensional space than that occupied by the original data.
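A sketch of that least-squares view, using a kernel to supply the higher-dimensional space and binary targets coded as +/-1 (this is essentially a least-squares SVM without a bias term; gamma and lam are arbitrary):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Implicitly maps the data into a higher-dimensional feature space
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(8)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)  # binary targets as +/-1

lam = 0.1
K = rbf_kernel(X, X)
# Regularized least-squares fit to the +/-1 targets in the kernel space
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_new):
    return np.sign(rbf_kernel(X_new, X) @ alpha)
```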
Techniques not inspired by KNN or LR
Some techniques that aren't obviously inspired by k nearest neighbors or linear regression include:
Decision trees (and random forests, which are ensembles of decision trees).
Naive Bayes (which models the distribution of each feature within each class, under the assumption that the features are independent)
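To illustrate the distributional flavor, a minimal Gaussian naive Bayes sketch: one normal distribution per feature per class, combined under the independence assumption:

```python
import numpy as np

def fit_gnb(X, y):
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # One Gaussian per feature per class, plus the class prior
        stats[c] = (Xc.mean(axis=0), Xc.std(axis=0) + 1e-9, len(Xc) / len(X))
    return stats

def predict_gnb(stats, x):
    def log_posterior(c):
        mu, sd, prior = stats[c]
        # Independence assumption: per-feature log densities just add up
        loglik = -0.5 * (((x - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2)).sum()
        return np.log(prior) + loglik
    return max(stats, key=log_posterior)
```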
Markov chain, hidden Markov model, Kalman filter and particle filter models, which impose additional structure on the problem that isn't easily captured by nearest neighbors or linear dependence.
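As one example of that extra structure, here is a minimal one-dimensional Kalman filter: the hidden state evolves as a random walk and we only see noisy observations of it (the noise variances are made up):

```python
import numpy as np

rng = np.random.default_rng(9)
q, r = 0.01, 0.5  # process and observation noise variances (assumed known)
truth = np.cumsum(rng.normal(scale=np.sqrt(q), size=100))  # hidden random walk
obs = truth + rng.normal(scale=np.sqrt(r), size=100)       # noisy measurements

x, p = 0.0, 1.0  # state estimate and its variance
estimates = []
for z in obs:
    p += q           # predict: uncertainty grows as the walk moves
    k = p / (p + r)  # Kalman gain: how much to trust the new observation
    x += k * (z - x) # update the estimate toward the measurement
    p *= 1 - k       # shrink the uncertainty accordingly
    estimates.append(x)
```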