Hands-On Scalable Edge-to-Core ML Pipelines
December 08, 2024
1h 7m
Free
mlops
machine-learning
edge-computing
ml-pipelines
distributed-systems
streaming-architecture
rust
webassembly
data-validation
model-deployment
observability
event-driven-architecture
Description
This intensive workshop focuses on designing, implementing, and troubleshooting real-world distributed ML pipelines from edge devices to core infrastructure. It addresses key MLOps challenges in building and managing scalable systems for operational analytics and AI/ML workflows. Topics include edge computing for data ingestion, streaming architecture with open-source tools, distributed processing across heterogeneous environments, model deployment strategies, and observability and monitoring for distributed ML systems. Participants engage in hands-on activities to build a complete edge-to-core ML pipeline, deploy models for real-time inference, and practice data validation and troubleshooting.