Course Outline
Preparing Machine Learning Models for Deployment
- Packaging models with Docker
- Exporting models from TensorFlow and PyTorch (see the export sketch after this list)
- Versioning and storage considerations
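A minimal sketch of the export step, assuming a toy PyTorch module and a small Keras model; the model classes, shapes, and output paths are illustrative. TorchScript and SavedModel are the artifact formats that TorchServe and TensorFlow Serving load in later modules, and the resulting artifacts directory is what a Dockerfile would copy into the serving image.

```python
# Sketch: turning trained models into deployable artifacts before packaging them
# into a container image. Model definitions and paths are illustrative; the
# TensorFlow call assumes TF 2.x (with Keras 3, keras_model.export(...) is the
# equivalent).
import os

import tensorflow as tf
import torch
import torch.nn as nn

os.makedirs("artifacts", exist_ok=True)

# --- PyTorch: trace the model into a self-contained TorchScript file ---
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()
traced = torch.jit.trace(model, torch.randn(1, 4))   # records the forward pass
traced.save("artifacts/tinynet.pt")                   # loadable without the Python class

# --- TensorFlow: export a Keras model as a SavedModel ---
keras_model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(2)])
# TF Serving expects a numbered version directory under the model base path.
tf.saved_model.save(keras_model, "artifacts/tinynet_tf/1")
```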
Model Serving on Kubernetes
- Overview of inference servers
- Deploying TensorFlow Serving and TorchServe
- Setting up model endpoints (see the client sketch below)
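A minimal client-side sketch of calling a TensorFlow Serving REST endpoint exposed through a Kubernetes Service; the service DNS name, namespace, and model name are assumptions, while port 8501 and the request shape follow TensorFlow Serving's REST API.

```python
# Sketch: querying a TensorFlow Serving REST endpoint behind a Kubernetes Service.
# The host "tf-serving.models.svc.cluster.local" and model name "tinynet_tf" are
# illustrative assumptions; 8501 is TF Serving's default REST port.
import requests

SERVING_URL = "http://tf-serving.models.svc.cluster.local:8501/v1/models/tinynet_tf:predict"

def predict(features):
    # TensorFlow Serving's REST API expects a JSON body with an "instances" list.
    response = requests.post(SERVING_URL, json={"instances": [features]}, timeout=2.0)
    response.raise_for_status()
    return response.json()["predictions"][0]

if __name__ == "__main__":
    print(predict([0.1, 0.2, 0.3, 0.4]))
```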
Inference Optimization Techniques
- Batching strategies (see the dynamic batching sketch after this list)
- Concurrent request handling
- Latency and throughput tuning
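A minimal sketch of dynamic batching, the idea behind the batching strategies covered here: incoming requests are queued and flushed either when the batch fills up or when a small latency budget expires. The run_model function is a stand-in for a real batched inference call; inference servers such as TorchServe offer built-in equivalents.

```python
# Sketch: server-side dynamic batching. Requests queue up and are flushed when
# the batch is full or a small time window expires; run_model is a placeholder
# for a single batched call into the model.
import asyncio

MAX_BATCH_SIZE = 8
MAX_WAIT_SECONDS = 0.01   # latency budget traded for larger batches

def run_model(batch):
    return [sum(x) for x in batch]   # stand-in for batched inference

async def batcher(queue):
    while True:
        features, fut = await queue.get()          # wait for at least one request
        batch, futures = [features], [fut]
        loop = asyncio.get_running_loop()
        deadline = loop.time() + MAX_WAIT_SECONDS
        while len(batch) < MAX_BATCH_SIZE:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                features, fut = await asyncio.wait_for(queue.get(), timeout)
            except asyncio.TimeoutError:
                break
            batch.append(features)
            futures.append(fut)
        for f, result in zip(futures, run_model(batch)):
            f.set_result(result)

async def infer(queue, features):
    fut = asyncio.get_running_loop().create_future()
    await queue.put((features, fut))
    return await fut

async def main():
    queue = asyncio.Queue()
    asyncio.create_task(batcher(queue))
    results = await asyncio.gather(*(infer(queue, [i, i + 1]) for i in range(20)))
    print(results)

asyncio.run(main())
```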
Autoscaling ML Workloads
- Horizontal Pod Autoscaler (HPA) (see the scaling-rule sketch after this list)
- Vertical Pod Autoscaler (VPA)
- Kubernetes Event-Driven Autoscaling (KEDA)
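A worked sketch of the scaling rule the Horizontal Pod Autoscaler applies, desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue); the replica counts and CPU figures below are illustrative.

```python
# Sketch: the core Horizontal Pod Autoscaler scaling rule, clamped to the
# configured replica bounds. Numbers are illustrative.
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods.
print(desired_replicas(current_replicas=4, current_metric=90, target_metric=60))
```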
GPU Provisioning and Resource Management
- Configuring GPU nodes
- NVIDIA device plugin overview
- Resource requests and limits for ML workloads (see the pod spec sketch below)
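A minimal sketch of declaring CPU, memory, and GPU resources on an inference pod with the official kubernetes Python client; the image, namespace, pod name, and resource figures are placeholders. Scheduling on nvidia.com/gpu assumes the NVIDIA device plugin is running on the GPU nodes, and GPUs are requested via limits only.

```python
# Sketch: building and creating a GPU inference pod with the kubernetes client.
# Image, namespace, and pod name are placeholders; nvidia.com/gpu only becomes
# schedulable once the NVIDIA device plugin is deployed on the GPU nodes.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside the cluster

container = client.V1Container(
    name="torchserve",
    image="pytorch/torchserve:latest-gpu",   # placeholder serving image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "2", "memory": "4Gi"},
        # GPUs are requested via limits and cannot be overcommitted.
        limits={"cpu": "4", "memory": "8Gi", "nvidia.com/gpu": "1"},
    ),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="torchserve-gpu", labels={"app": "torchserve"}),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

client.CoreV1Api().create_namespaced_pod(namespace="models", body=pod)
```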
Model Rollout and Release Strategies
- Blue/green deployments
- Canary rollout patterns (see the traffic-splitting sketch after this list)
- A/B testing for model evaluation
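A minimal application-level sketch of splitting traffic between a stable model and a canary or A/B candidate; the endpoints and weights are illustrative. In practice this routing is often handled by the ingress controller or a service mesh rather than in application code.

```python
# Sketch: weighted routing between a stable model and a canary candidate.
# Endpoints and weights are illustrative assumptions.
import random

import requests

VARIANTS = {
    "stable": {"url": "http://model-v1.models.svc.cluster.local:8080/predict", "weight": 0.9},
    "canary": {"url": "http://model-v2.models.svc.cluster.local:8080/predict", "weight": 0.1},
}

def pick_variant():
    # Weighted random choice between the stable and candidate models.
    names = list(VARIANTS)
    weights = [VARIANTS[n]["weight"] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

def predict(features):
    variant = pick_variant()
    response = requests.post(VARIANTS[variant]["url"], json={"instances": [features]}, timeout=2.0)
    response.raise_for_status()
    # Record which variant answered so offline evaluation can compare them.
    return {"variant": variant, "prediction": response.json()}
```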
Monitoring and Observability for ML in Production
- Metrics for inference workloads (see the metrics sketch after this list)
- Logging and tracing practices
- Dashboards and alerting
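A minimal sketch of exposing inference metrics with the prometheus_client library; the metric names and the simulated handler are illustrative. A real server would wrap the actual model call and let Prometheus scrape the /metrics endpoint.

```python
# Sketch: request counters and latency histograms for an inference service,
# exposed for Prometheus to scrape. The handler simulates the model call.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("inference_requests_total", "Inference requests", ["model", "outcome"])
LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds", ["model"])

def handle_request(model_name="tinynet"):
    with LATENCY.labels(model=model_name).time():
        try:
            time.sleep(random.uniform(0.01, 0.05))   # stand-in for the model call
            REQUESTS.labels(model=model_name, outcome="success").inc()
        except Exception:
            REQUESTS.labels(model=model_name, outcome="error").inc()
            raise

if __name__ == "__main__":
    start_http_server(9100)   # metrics served at :9100/metrics
    while True:
        handle_request()
```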
Security and Reliability Considerations
- Securing model endpoints (see the auth sketch after this list)
- Network policies and access control
- Ensuring high availability
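A minimal sketch of a bearer-token check in front of an inference handler; the token source and handler are illustrative. Production deployments usually terminate authentication at the ingress or API gateway and pair it with NetworkPolicies, which this module also covers.

```python
# Sketch: a bearer-token check guarding an inference handler. The token source
# and the handler body are illustrative placeholders.
import hmac
import os

EXPECTED_TOKEN = os.environ.get("INFERENCE_API_TOKEN", "")

def authorized(authorization_header: str) -> bool:
    if not authorization_header.startswith("Bearer "):
        return False
    presented = authorization_header.removeprefix("Bearer ")
    # Constant-time comparison avoids leaking token contents via timing.
    return hmac.compare_digest(presented, EXPECTED_TOKEN)

def handle(request_headers: dict, features):
    if not authorized(request_headers.get("Authorization", "")):
        return {"status": 401, "error": "unauthorized"}
    return {"status": 200, "prediction": sum(features)}   # stand-in for the model call
```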
Summary and Next Steps
Requirements
- An understanding of containerized application workflows
- Experience with Python-based machine learning models
- Familiarity with Kubernetes fundamentals
Audience
- ML engineers
- DevOps engineers
- Platform engineering teams
Testimonials (5)
He was patient and understood when we fell behind.
Albertina - REGNOLOGY ROMANIA S.R.L.
Course - Deploying Kubernetes Applications with Helm
How interactively Reda would explain the information and get us to participate. He would also mention interesting facts along the way and share all the knowledge he has. Reda has excellent communication skills, which makes online training really effective.
Janine - BMW SA
Course - Kubernetes Advanced
The training was more practical
Siphokazi Biyana - Vodacom SA
Course - Kubernetes on AWS
Learning about Kubernetes.
Felix Bautista - SGS GULF LIMITED ROHQ
Course - Kubernetes on Azure (AKS)
It gave a good grounding for Docker and Kubernetes.