If you want to study machine learning but don't have the luxury of attending university full-time, you're in luck. There is a wonderfully rich collection of courses, posts, videos, notebooks, and tutorials online. There is so much, in fact, that it can be hard to know where to start. I put together this guide as a starting place, a first foothold for anyone who wants to jump in.
To pull together these recommendations, I sent out a call for help on LinkedIn and Twitter. The replies show not just how much is out there, but how passionate people are about the teachers who helped a concept click for them. (Thank you to all of you who added your recommendations!)
Here are some hand-picked short lists. I have a few other lists covering relevant background topics, like learning Python, linear algebra, and statistics, but the lists here focus on the core concepts of machine learning (ML).
Getting started
Josh Starmer's StatQuest is a playfully narrated video series covering a remarkably broad set of ML concepts. The videos are free, but if you're in a position to do so, consider donating to the cause.
Machine Learning Flash Cards from Chris Albon do a beautiful job of slicing through the language barrier between English and ML-ese. One by one, they break down terms and concepts into easy-to-digest illustrated explanations. The complete set is $12 USD, but Chris has a standing offer that you can contact him and request a set for free for any reason, no questions asked.
Grokking Machine Learning is a hot-off-the-presses book by Luis Serrano covering a textbook's worth of ML using concrete examples and a delightfully accessible style. (e2eML readers can help themselves to a 40% discount on all formats of the book with the code serranopc.)
Among the legendary math videos of 3Blue1Brown, a.k.a. Grant Sanderson, is a neural networks series that helps build a strong intuition for how they work without watering down any of the math.
Going deep
There are two books that are similar in scope and approach: Aurélien Géron's Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow and Python Machine Learning by Sebastian Raschka and Vahid Mirjalili. Both contain ML examples from all across the Python toolset, using NumPy, Scikit-Learn, and TensorFlow to cover all the most common learning methods. All the code examples and notebooks from both are generously provided in public GitHub repositories (PML code, HOML code). And both come warmly recommended by a large number of readers. But they aren't identical. Their differences in coverage and voice make them excellent complements.
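If you'd like a feel for the kind of workflow these books teach, here is a minimal Scikit-Learn sketch of my own (an illustrative example, not drawn from either book): load a built-in dataset, split it, fit a model, and evaluate.

```python
# A minimal scikit-learn workflow: load data, split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small built-in dataset of iris flower measurements.
X, y = load_iris(return_X_y=True)

# Hold out a test set so we can estimate generalization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a random forest, one of the most common learning methods.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out data.
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

Both books start from short workflows like this one and build up from there.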
Andrew Ng's classic machine learning course was mentioned more often than any other resource.
If you're mainly focused on neural networks, there is a pair of resources you might find useful: the fast.ai courses, taught by Jeremy Howard, Rachel Thomas, and Sylvain Gugger, and Stanford's CS231n lectures, from Winter 2016 taught by Andrej Karpathy and Spring 2017 taught by Fei-Fei Li, Justin Johnson, and Serena Yeung. They differ widely in their presentation, but the different viewpoints work in stereo to create a rich view of the topic.
Reference
The resources in this section are dense enough that they can be challenging to absorb the first time through. However, once you have a grounding in the concepts, their concise summaries and insightful derivations can refresh your memory or deepen your understanding.
The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani, and Jerome Friedman is a widely loved classic. Its presentation is dense, but thorough. The authors offer a PDF version for free.
Andriy Burkov's Hundred-Page Machine Learning Book makes ML extremely accessible. It provides concise descriptions of a wide variety of techniques. It's now available in 11 languages.
Pushing the edge
If you are looking for advanced instruction on machine learning, you will have no choice but to read research papers. One exciting (or maddening, depending on who you ask) part of working in such a young field is that the advanced material is still taking shape. There isn't yet a general relativity-level theory. It's a free-for-all. The only way to keep up with the latest developments is to choose a couple of areas of interest and keep an eye out for papers in those areas.
Even reading papers will only keep you on top of work that has already been done. To expand the frontiers of machine learning, there is no substitute for solving practical problems: questions that someone outside the field might want to know the answer to. It's tough to come up with approaches that have never been tried, but it's surprisingly easy to find problems to which ML has never been successfully applied. As soon as you have a concrete goal, the details of your problem will push your method to its limits and, more often than not, force you to adapt or extend it. It is an excellent way to distinguish yourself in a crowded field of competent practitioners.
I'm stopping here, because a list's value is inversely proportional to its length, but these lists are only a starting point. There are many other fantastic resources for learning ML online. A quick look at the original LinkedIn and Twitter replies will show just how deep the bench is. It doesn't matter where you start, as long as you start. Pour some tea, click a link, and dive in.