Machine learning (ML) algorithms are being used to generate predictions in every corner of our decision-making life. Methods range from “simple” algorithms such as trees, forests, naive Bayes, linear and logistic regression models, and nearest-neighbor methods, through improvements such as boosting, bagging, regularization, and ensembling, to computationally intensive, black-box deep learning algorithms.
The new fashion of “apply deep learning to everything” has produced breakthroughs as well as alarming disasters. Is this due to the volatility of deep learning algorithms? I argue it is instead due to the growing divorce between the developers of predictive algorithms, the contexts in which those algorithms are deployed, and the actions their end users take.
The full article is posted on Medium.