Course Overview
In this course, “Architecting with Google Kubernetes Engine: Workloads,” you learn about performing Kubernetes operations; creating and managing deployments; the tools of GKE networking; and how to give your Kubernetes workloads persistent storage.
This course is part of a specialization focused on building efficient computing infrastructures using Kubernetes and GKE. The specialization introduces participants to deploying and managing containerized applications on GKE and the other services provided by Google Cloud Platform. Through a combination of presentations, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as pods, containers, deployments, and services, as well as networks and application services. The specialization also covers deploying practical solutions, including security and access management, resource management, and resource monitoring.
>>> By enrolling in this course you agree to the Qwiklabs Terms of Service as set out in the FAQ and located at: https://qwiklabs.com/terms_of_service <<<
Course Syllabus
Course Introduction
In this module, you'll become familiar with the structure and layout of the course.
Kubernetes Operations
In this module you will learn about the kubectl command, the command-line utility used to interact with and manage the resources inside Kubernetes clusters. You'll learn how to connect it to Google Kubernetes Engine clusters and use it to create, inspect, interact with, and delete Pods and other objects within those clusters. You'll also use kubectl to view a Pod's console output and to sign in interactively to a Pod.
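As a minimal sketch of the kind of kubectl workflow this module covers (the cluster name, zone, image, and Pod name below are placeholders, not values used in the course):

# Fetch credentials so kubectl can talk to a GKE cluster (cluster name and zone are placeholders).
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Create, inspect, and interact with a Pod.
kubectl run hello-pod --image=nginx      # create a Pod from a container image
kubectl get pods                         # list Pods in the current namespace
kubectl describe pod hello-pod           # inspect the Pod's details and events
kubectl logs hello-pod                   # view the Pod's console output
kubectl exec -it hello-pod -- /bin/sh    # open an interactive shell inside the Pod (image must provide /bin/sh)
kubectl delete pod hello-pod             # clean up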
Deployments, Jobs, and Scaling
GKE works with containerized applications: in other words, applications packaged into hardware-independent, isolated user-space instances. In GKE and Kubernetes, these packaged applications are collectively called workloads. In this module you will learn about Deployments and Jobs, two of the main types of workload. You will also learn about the mechanisms used to scale the GKE clusters where you run your applications. You'll learn how to control which Nodes your Pods may and may not run on, and you'll explore ways to get software into your cluster.
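For illustration, here is a minimal sketch of the kind of Deployment and scaling operations discussed in this module; the names, image, and replica counts are placeholders, not the course's lab configuration:

# Create a simple Deployment from an inline manifest.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment    # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25    # placeholder image
EOF

# Scale the workload manually, or let a HorizontalPodAutoscaler adjust it based on CPU usage.
kubectl scale deployment hello-deployment --replicas=5
kubectl autoscale deployment hello-deployment --min=3 --max=10 --cpu-percent=80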
Google Kubernetes Engine Networking
In this module, you'll learn how to create Services to expose applications running within Pods, allowing them to communicate with the outside world. You'll also learn how to create Ingress resources for HTTP or HTTPS load balancing, and about GKE's container-native load balancing, which lets you configure Pods directly as network endpoints for Google Cloud Load Balancing.
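A minimal sketch of exposing a workload with a Service and an Ingress, assuming the placeholder Deployment from the previous example; the Service name, ports, and Ingress spec are illustrative only:

# Expose the Deployment through a Service (NodePort is one common choice behind an Ingress).
kubectl expose deployment hello-deployment --name=hello-service --port=80 --target-port=80 --type=NodePort

# A minimal Ingress that routes all HTTP traffic to that Service.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress    # placeholder name
spec:
  defaultBackend:
    service:
      name: hello-service
      port:
        number: 80
EOF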
Persistent Data and Storage
In this module you'll learn about the different types of Kubernetes storage abstractions. You'll learn about StatefulSets and how to use them to manage ordered deployments of Pods and storage. You'll also learn how ConfigMaps can save you time during application deployment by decoupling configuration artifacts from container definitions. Finally, you'll learn how to use Kubernetes Secrets to keep sensitive information safer from accidental exposure.
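As a small illustrative sketch of the ConfigMap and Secret concepts covered here (keys and values are placeholders; real credentials should never be typed in plain text like this):

# Create a ConfigMap and a Secret from literal values.
kubectl create configmap app-config --from-literal=LOG_LEVEL=info
kubectl create secret generic app-credentials --from-literal=DB_PASSWORD=changeme

# Inspect them; Secret values are only base64-encoded, so access to them should still be restricted.
kubectl get configmap app-config -o yaml
kubectl get secret app-credentials -o yaml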