Course Overview
Starting from a history of machine learning, we discuss why neural networks perform so well today on a wide variety of data science problems. We then discuss how to set up a supervised learning problem and find a good solution using gradient descent. This involves creating datasets that permit generalization; we talk about methods of doing so in a repeatable way that supports experimentation.
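One common way to make dataset creation repeatable is to assign each record to a split by hashing a stable key, so the same record always lands in the same split on every run. The sketch below is a minimal illustration in Python; the `record_id` key and the 80/10/10 proportions are assumptions for the example, not part of the course material.

```python
import hashlib

def assign_split(record_id: str, train_pct: int = 80, eval_pct: int = 10) -> str:
    """Deterministically assign a record to train/eval/test by hashing a
    stable key, so the split is repeatable across runs and machines."""
    # Hash the key and map it onto buckets 0-99.
    bucket = int(hashlib.sha256(record_id.encode("utf-8")).hexdigest(), 16) % 100
    if bucket < train_pct:
        return "train"
    elif bucket < train_pct + eval_pct:
        return "eval"
    return "test"

# The same record always lands in the same split, no matter when we run this.
for record_id in (f"example_{i}" for i in range(10)):
    print(record_id, assign_split(record_id))
```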
Course Objectives:
Identify why deep learning is currently popular
Optimize and evaluate models using loss functions and performance metrics
Mitigate common problems that arise in machine learning
Create repeatable and scalable training, evaluation, and test datasets
Course Outline
Introduction
In this course you’ll get foundational ML knowledge so that you understand the terminology that we use throughout the specialization. You will also learn practical tips and pitfalls from ML practitioners here at Google and walk away with the code and the knowledge to bootstrap your own ML models.
Practical ML
In this module, we will introduce some of the main types of machine learning and review the history of ML leading up to the state of the art so that you can accelerate your growth as an ML practitioner.
Optimization
In this module, we will walk you through how to optimize your ML models using loss functions and gradient descent.
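To make the idea concrete, here is a minimal gradient descent loop in Python (using NumPy) that fits a one-variable linear model by repeatedly stepping its parameters in the direction that reduces a mean-squared-error loss. The synthetic data, learning rate, and step count are assumptions made for this sketch, not values from the course.

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus a little noise (assumed for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.1, size=100)

w, b = 0.0, 0.0          # model parameters; prediction is y_hat = w * x + b
learning_rate = 0.1

for step in range(200):
    y_hat = w * x + b
    error = y_hat - y
    loss = np.mean(error ** 2)        # mean squared error
    # Gradients of the loss with respect to w and b.
    dw = 2.0 * np.mean(error * x)
    db = 2.0 * np.mean(error)
    # Step downhill along the gradient.
    w -= learning_rate * dw
    b -= learning_rate * db

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
```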
Generalization and Sampling
Now it’s time to answer a rather weird question: when is the most accurate ML model not the right one to pick? As we hinted at in the last module on Optimization, the fact that a model has a loss of 0 on your training dataset does not mean it will perform well on new data in the real world.
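A quick way to see this is to fit a model with enough capacity to drive its training loss to (nearly) zero and then measure it on held-out data. The sketch below is an assumed illustration using NumPy: a high-degree polynomial memorizes a handful of noisy training points, yet its error on a separate test set is typically much larger.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Noisy samples from a simple underlying function (assumed for illustration)."""
    x = rng.uniform(-1, 1, size=n)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=n)
    return x, y

x_train, y_train = make_data(10)
x_test, y_test = make_data(100)

# A degree-9 polynomial has enough capacity to pass through all 10 training points.
coefs = np.polyfit(x_train, y_train, deg=9)

def mse(x, y):
    return np.mean((np.polyval(coefs, x) - y) ** 2)

print(f"training MSE: {mse(x_train, y_train):.6f}")  # essentially zero
print(f"test MSE:     {mse(x_test, y_test):.6f}")    # typically much larger
```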
Summary