Among ML libraries, scikit-learn is arguably the simplest and easiest framework for learning ML. It is built on the scientific Python stack (mostly NumPy), focuses on traditional yet powerful algorithms like linear regression, support vector machines, and dimensionality reduction, and provides many tools built around those algorithms (model evaluation and selection, hyperparameter optimization, data preprocessing, and feature selection).
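To give a flavor of how those pieces fit together, here is a minimal sketch of a typical scikit-learn workflow: a preprocessing step and a classic algorithm chained into a pipeline, then evaluated with cross-validation. The synthetic dataset and all parameter values are illustrative choices, not anything prescribed by the library.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic classification data (purely illustrative)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Chain preprocessing and a support vector machine into one estimator
model = make_pipeline(StandardScaler(), SVC())

# 5-fold cross-validation: one of the model-evaluation tools mentioned above
scores = cross_val_score(model, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```

Every estimator follows the same `fit`/`predict` interface, which is what makes pipelines like this compose so naturally.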
But its main advantage is, without a doubt, its documentation and user guide. You can literally learn almost everything just from the scikit-learn website, with lots of examples.
Note that other popular frameworks are TensorFlow and PyTorch, but they have steeper learning curves and focus on more complex subjects like computer vision and neural networks. Since this is my first real contact with ML, I figured I'd start with sklearn.
I had already started reading the documentation a few months ago but got somewhat lost given its size. While the documentation is huge and very well written, I am not convinced that the best way to learn scikit-learn is to work through the whole documentation one page after another.
The good news, and the thing that motivated me to pursue scikit-learn further, was the launch of the official scikit-learn MOOC, created by the scikit-learn team itself.
In this series, I will try to summarize what I learned from each of the 6 modules that compose the MOOC. This is an excellent exercise for me to consolidate and summarize what I learned, and a good introduction for you if you want to get started with sklearn.