What You Will Learn
Serve a TensorFlow model with TensorFlow Serving and Docker.
Create a web application with Flask to work as an interface to a served model.
Course Overview
In this 2-hour long project-based course, you will learn how to deploy TensorFlow models using TensorFlow Serving and Docker, and you will create a simple web application with Flask which will serve as an interface to get predictions from the served TensorFlow model.
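As a rough sketch of the serving step described above, TensorFlow Serving is commonly run through its official Docker image, which exposes a REST endpoint on port 8501. The model name and SavedModel path below are placeholders, not values from this course:

```shell
# Pull the official TensorFlow Serving image.
docker pull tensorflow/serving

# Serve a SavedModel over REST on port 8501.
# "my_model" and /path/to/my_model are placeholders; point the mount at
# your exported SavedModel directory (which contains version subfolders).
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/my_model,target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving
```

Once the container is running, predictions are available at `http://localhost:8501/v1/models/my_model:predict`.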
This course runs on Coursera’s hands-on project platform called Rhyme. On Rhyme, you work on projects directly in your browser. You will get instant access to pre-configured cloud desktops containing all of the software and data you need for the project. Everything is already set up in your browser so you can focus on learning. For this project, you’ll get instant access to a cloud desktop with software such as Python, Jupyter, and TensorFlow pre-installed.
Prerequisites:
In order to be successful in this project, you should be familiar with Python, TensorFlow, Flask, and HTML.
Notes:
– You will be able to access the cloud desktop 5 times. However, you will be able to access the instruction videos as many times as you want.
– This course works best for learners who are based in the North America region. We’re currently working on providing the same experience in other regions.
Syllabus
Deploy Models with TensorFlow Serving and Flask
Welcome to this project-based course on Deploying Models with TensorFlow Serving and Flask. In this project, we will deploy a TensorFlow model with the help of TensorFlow Serving, and we will create a small web app that serves as a visual interface for model inference. TensorFlow Serving is the standard way to serve TensorFlow models in production, and Flask is a minimal web framework that lets developers create web apps quickly.
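To illustrate how a web app might talk to a served model, here is a minimal sketch of a client for TensorFlow Serving's REST API, using only the Python standard library. The server URL and model name are assumptions for illustration, not the course's actual values; in the project, a Flask route would call a helper like `get_prediction` and render the result in a template:

```python
import json
from urllib import request

# Assumed endpoint: TensorFlow Serving exposes REST predictions at
# /v1/models/<model_name>:predict. "my_model" is a placeholder name.
SERVER_URL = "http://localhost:8501/v1/models/my_model:predict"

def make_prediction_request(instances):
    """Build the JSON body TensorFlow Serving's predict API expects."""
    return json.dumps({"instances": instances})

def get_prediction(instances):
    """POST input instances to the model server and return its predictions."""
    body = make_prediction_request(instances).encode("utf-8")
    req = request.Request(
        SERVER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]
```

In a Flask view, you would typically preprocess the uploaded input (e.g. decode and resize an image into a nested list), pass it to `get_prediction`, and display the returned scores.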
Course Project
Introduction
Getting Started with the Flask App
Index Template
TensorFlow Serving
Getting Predictions
Connecting to Model Server
Displaying the Results