What you will learn
The underlying statistical and algorithmic principles required to develop scalable real-world machine learning pipelines
Exploratory data analysis, feature extraction, supervised learning, and model evaluation
Application of these principles using Spark
How to implement distributed algorithms for fundamental statistical models
About this course
Machine learning aims to extract knowledge from data, relying on fundamental concepts in computer science, statistics, probability, and optimization. Learning algorithms enable a wide range of applications, from everyday tasks such as product recommendations and spam filtering to bleeding-edge applications like self-driving cars and personalized medicine. In the age of "big data", with datasets rapidly growing in size and complexity and cloud computing becoming more pervasive, machine learning techniques are fast becoming a core component of large-scale data processing pipelines.
This statistics and data analysis course introduces the underlying statistical and algorithmic principles required to develop scalable real-world machine learning pipelines. We present an integrated view of data processing by highlighting the various components of these pipelines, including exploratory data analysis, feature extraction, supervised learning, and model evaluation. You will gain hands-on experience applying these principles using Spark, a cluster computing system well-suited for large-scale machine learning tasks, and its packages spark.ml and spark.mllib. You will implement distributed algorithms for fundamental statistical models (linear regression, logistic regression, principal component analysis) while tackling key problems from domains such as online advertising and cognitive neuroscience.
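For a sense of what such a pipeline looks like in practice, here is a minimal sketch (not course material) of a spark.ml logistic regression pipeline; the toy DataFrame, column names, and appName below are hypothetical placeholders.

```python
# Minimal sketch of a spark.ml pipeline: assemble features, fit logistic regression.
# Data and column names (f1, f2, label) are illustrative only.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("lr-sketch").getOrCreate()

# Tiny toy dataset: two numeric features and a binary label.
df = spark.createDataFrame(
    [(0.0, 1.1, 0.0), (2.0, 1.0, 1.0), (2.5, 3.0, 1.0), (0.5, 0.3, 0.0)],
    ["f1", "f2", "label"],
)

# Combine raw columns into a single feature vector, then fit the model.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label", maxIter=10)
model = Pipeline(stages=[assembler, lr]).fit(df)

# Inspect predictions and class probabilities.
model.transform(df).select("label", "prediction", "probability").show()

spark.stop()
```

The same assemble-then-fit pattern generalizes to the other models mentioned above, such as linear regression and PCA, by swapping the final pipeline stage.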
Prerequisites
Python programming background
Experience with PySpark equivalent to CS105x: Introduction to Spark
Comfort with mathematical and algorithmic reasoning
Familiarity with basic machine learning concepts
Exposure to algorithms, probability, linear algebra, and calculus