Migrating CUDA Applications to Chinese GPU Architectures Training Course
Chinese GPU architectures such as Huawei Ascend, Biren, and Cambricon MLUs offer CUDA alternatives tailored to the domestic AI and HPC markets.
This instructor-led, live training (online or onsite) is aimed at advanced-level GPU programmers and infrastructure specialists who wish to migrate and optimize existing CUDA applications for Chinese hardware platforms.
By the end of this training, participants will be able to:
- Evaluate the compatibility of existing CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance across platforms and identify optimization opportunities.
- Address practical challenges in cross-architecture support and deployment.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs in code translation and performance comparison.
- Guided exercises focused on multi-GPU adaptation strategies.
Course Customization Options
- To request a customized training for this course based on your platform or CUDA project, please contact us to arrange.
Course Outline
Overview of the Chinese AI GPU Ecosystem
- Comparison of Huawei Ascend, Biren, and Cambricon MLU
- Differences between CUDA and the CANN, Biren SDK, and BANGPy programming models
- Industry trends and vendor ecosystems
Preparing for Migration
- Assessing your CUDA codebase
- Identifying target platforms and SDK versions
- Toolchain installation and environment setup
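A codebase assessment can begin with a simple inventory of CUDA API usage, since heavy reliance on library calls (cuBLAS, cuDNN, NCCL) usually dominates porting effort more than raw kernel count. The sketch below is plain Python with no vendor SDK required; the pattern list is illustrative, not exhaustive.

```python
import re
from collections import Counter
from pathlib import Path

# Illustrative patterns only -- extend with the APIs your codebase uses.
CUDA_PATTERNS = {
    "kernel_launch": re.compile(r"<<<.*?>>>"),
    "memory_api": re.compile(r"\bcudaM(?:alloc|emcpy|emset|allocManaged)\b"),
    "stream_api": re.compile(r"\bcudaStream\w+\b"),
    "library_call": re.compile(r"\b(?:cublas|cudnn|nccl)\w+\b"),
}

def assess(root: str) -> Counter:
    """Count CUDA API usage across .cu/.cuh files under a source tree."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.suffix not in {".cu", ".cuh"}:
            continue
        text = path.read_text(errors="ignore")
        for name, pattern in CUDA_PATTERNS.items():
            counts[name] += len(pattern.findall(text))
    return counts
```

A high `library_call` count signals that porting hinges on equivalent vendor libraries rather than on kernel translation.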
Code Translation Methodologies
- Porting CUDA memory access patterns and kernel logic
- Mapping compute grid/thread models
- Automated vs. manual translation options
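One way to prepare a kernel for retargeting is to restate CUDA's (blockIdx, threadIdx) decomposition as a flat global index, since each target platform then only needs a strategy for partitioning that index range. A pure-Python stand-in for a 1-D SAXPY launch (no GPU required; the loop nest simulates what the CUDA runtime executes in parallel):

```python
def cuda_global_id(block_idx: int, block_dim: int, thread_idx: int) -> int:
    """CUDA's 1-D global index: blockIdx.x * blockDim.x + threadIdx.x."""
    return block_idx * block_dim + thread_idx

def saxpy(a, x, y, grid_dim, block_dim):
    """Simulated 1-D SAXPY kernel launch over a (grid, block) shape."""
    n = len(x)
    out = list(y)
    for block_idx in range(grid_dim):        # one iteration per block
        for thread_idx in range(block_dim):  # one iteration per thread
            i = cuda_global_id(block_idx, block_dim, thread_idx)
            if i < n:                        # bounds guard, as in the CUDA idiom
                out[i] = a * x[i] + out[i]
    return out
```

Expressed this way, mapping the kernel onto another platform's core or task decomposition becomes a question of how its runtime partitions the flat index space.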
Platform-Specific Implementation
- Working with Huawei CANN operators and custom kernels
- Biren SDK conversion pipelines
- Rebuilding models with BANGPy (Cambricon)
Cross-Platform Testing and Optimization
- Profiling execution on each target platform
- Comparing memory tuning and parallel execution behavior
- Performance tracking and iteration
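Cross-platform performance tracking can start from a shared timing harness in which each platform backend is just a callable. The sketch below is pure Python and the backend names in any real use would wrap each vendor's inference entry point; it complements, rather than replaces, each vendor's own profiler.

```python
import statistics
import time

def benchmark(fn, *args, warmup=3, iters=20):
    """Return (median_ms, max_ms) latency for fn(*args)."""
    for _ in range(warmup):  # discard cold-start runs
        fn(*args)
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples), max(samples)

def compare(backends, *args):
    """backends: mapping of platform name -> callable.
    Returns (name, (median_ms, max_ms)) pairs, fastest median first."""
    results = {name: benchmark(fn, *args) for name, fn in backends.items()}
    return sorted(results.items(), key=lambda kv: kv[1][0])
```

Logging the returned pairs per commit gives a minimal performance-tracking loop for iterating on each port.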
Managing Mixed GPU Environments
- Hybrid deployments across multiple architectures
- Fallback strategies and device detection
- Abstraction layers for maintainable code
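An abstraction layer for mixed environments often reduces to a registry of backends, each with a cheap availability probe and a factory, tried in preference order. The sketch below is a minimal version; names like "cann" are placeholders, and a real probe would import the vendor runtime and enumerate devices.

```python
class BackendRegistry:
    """Registry of GPU backends with preference-ordered fallback."""

    def __init__(self):
        self._backends = {}  # name -> (probe, factory)

    def register(self, name, probe, factory):
        """probe: () -> bool, cheap availability check.
        factory: () -> context object for the backend."""
        self._backends[name] = (probe, factory)

    def select(self, preferred):
        """Return (name, context) for the first available backend in
        'preferred'; later entries act as fallbacks."""
        for name in preferred:
            probe, factory = self._backends[name]
            if probe():
                return name, factory()
        raise RuntimeError("none of the requested backends is available")
```

Application code calls `select(["cann", "bangpy", "cpu"])` once at startup and talks only to the returned context, keeping vendor-specific calls behind a single seam.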
Case Studies and Best Practices
- Porting vision/NLP models to Ascend or Cambricon
- Retrofitting inference pipelines on Biren clusters
- Handling version mismatches and API gaps
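Version mismatches are commonly handled by gating code paths on the installed SDK version and substituting a fallback implementation when a feature is missing. A hedged sketch; the version numbers and implementation names below are invented for illustration.

```python
def parse_version(v: str) -> tuple:
    """'5.1.2' -> (5, 1, 2), so versions compare numerically, not as strings."""
    return tuple(int(part) for part in v.split("."))

def pick_impl(installed: str, table):
    """table: (min_version, impl) pairs sorted newest-first.
    Returns the first impl whose minimum version is satisfied."""
    have = parse_version(installed)
    for min_version, impl in table:
        if have >= parse_version(min_version):
            return impl
    raise RuntimeError(f"no implementation supports SDK {installed}")
```

The same table doubles as documentation of which SDK releases each code path depends on, which helps when several clusters run different toolkit versions.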
Summary and Next Steps
Requirements
- Experience programming CUDA or other GPU-based applications
- Understanding of GPU memory models and compute kernels
- Familiarity with AI model deployment or acceleration workflows
Audience
- GPU programmers
- System architects
- Porting specialists
Open enrollment courses require a minimum of 5 participants.
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend is a family of AI processors designed for high-performance inference and training.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI engineers and data scientists who wish to develop and optimize neural network models using Huawei’s Ascend platform and the CANN toolkit.
By the end of this training, participants will be able to:
- Set up and configure the CANN development environment.
- Develop AI applications using MindSpore and CloudMatrix workflows.
- Optimize performance on Ascend NPUs using custom operators and tiling.
- Deploy models to edge or cloud environments.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Huawei Ascend and CANN toolkit in sample applications.
- Guided exercises focused on model building, training, and deployment.
Course Customization Options
- To request a customized training for this course based on your infrastructure or datasets, please contact us to arrange.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s AI compute stack for deploying and optimizing AI models on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI developers and engineers who wish to deploy trained AI models efficiently to Huawei Ascend hardware using the CANN toolkit and tools such as MindSpore, TensorFlow, or PyTorch.
By the end of this training, participants will be able to:
- Understand the CANN architecture and its role in the AI deployment pipeline.
- Convert and adapt models from popular frameworks to Ascend-compatible formats.
- Use tools like ATC, OM model conversion, and MindSpore for edge and cloud inference.
- Diagnose deployment issues and optimize performance on Ascend hardware.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work using CANN tools and Ascend simulators or devices.
- Practical deployment scenarios based on real-world AI models.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei’s unified AI development and deployment platform, designed to support scalable, production-grade inference pipelines.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level AI professionals who wish to deploy and monitor AI models using the CloudMatrix platform, with integration into CANN and MindSpore.
By the end of this training, participants will be able to:
- Package, deploy, and serve models using CloudMatrix.
- Convert and optimize models for Ascend chipsets.
- Set up pipelines for real-time and batch inference tasks.
- Monitor deployments and tune performance in production settings.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of CloudMatrix in real deployment scenarios.
- Guided exercises focused on conversion, optimization, and scaling.
Course Customization Options
- To request a customized training for this course based on your AI infrastructure or cloud environment, please contact us to arrange.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs designed for AI and HPC workloads with support for large-scale training and inference.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Understand Biren GPU architecture and memory hierarchy.
- Set up the development environment and use Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request a customized training for this course based on your application stack or integration needs, please contact us to arrange.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI chips optimized for inference and training in edge and data center scenarios.
This instructor-led, live training (online or onsite) is aimed at intermediate-level developers who wish to build and deploy AI models using the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
By the end of this training, participants will be able to:
- Set up and configure the BANGPy and Neuware development environments.
- Develop and optimize Python- and C++-based models for Cambricon MLUs.
- Deploy models to edge and data center devices running Neuware runtime.
- Integrate ML workflows with MLU-specific acceleration features.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of BANGPy and Neuware for development and deployment.
- Guided exercises focused on optimization, integration, and testing.
Course Customization Options
- To request a customized training for this course based on your Cambricon device model or use case, please contact us to arrange.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s AI computing toolkit used to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at beginner-level AI developers who wish to understand how CANN fits into the model lifecycle from training to deployment, and how it works with frameworks like MindSpore, TensorFlow, and PyTorch.
By the end of this training, participants will be able to:
- Understand the purpose and architecture of the CANN toolkit.
- Set up a development environment with CANN and MindSpore.
- Convert and deploy a simple AI model to Ascend hardware.
- Gain foundational knowledge for future CANN optimization or integration projects.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs with simple model deployment.
- Step-by-step walkthrough of the CANN toolchain and integration points.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit enables powerful AI inference on edge devices such as the Ascend 310. CANN provides essential tools for compiling, optimizing, and deploying models where compute and memory are constrained.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI developers and integrators who wish to deploy and optimize models on Ascend edge devices using the CANN toolchain.
By the end of this training, participants will be able to:
- Prepare and convert AI models for Ascend 310 using CANN tools.
- Build lightweight inference pipelines using MindSpore Lite and AscendCL.
- Optimize model performance for limited compute and memory environments.
- Deploy and monitor AI applications in real-world edge use cases.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work with edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI stack — from the low-level CANN SDK to the high-level MindSpore framework — offers a tightly integrated AI development and deployment environment optimized for Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level technical professionals who wish to understand how the CANN and MindSpore components work together to support AI lifecycle management and infrastructure decisions.
By the end of this training, participants will be able to:
- Understand the layered architecture of Huawei’s AI compute stack.
- Identify how CANN supports model optimization and hardware-level deployment.
- Evaluate the MindSpore framework and toolchain in relation to industry alternatives.
- Position Huawei's AI stack within enterprise or cloud/on-prem environments.
Format of the Course
- Interactive lecture and discussion.
- Live system demos and case-based walkthroughs.
- Optional guided labs on model flow from MindSpore to CANN.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Optimizing Neural Network Performance with CANN SDK
14 Hours
CANN SDK (Compute Architecture for Neural Networks) is Huawei’s AI compute foundation that allows developers to fine-tune and optimize the performance of deployed neural networks on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI developers and system engineers who wish to optimize inference performance using CANN’s advanced toolset, including the Graph Engine, TIK, and custom operator development.
By the end of this training, participants will be able to:
- Understand CANN's runtime architecture and performance lifecycle.
- Use profiling tools and Graph Engine for performance analysis and optimization.
- Create and optimize custom operators using TIK and TVM.
- Resolve memory bottlenecks and improve model throughput.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs with real-time profiling and operator tuning.
- Optimization exercises using edge-case deployment examples.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) provides powerful deployment and optimization tools for real-time AI applications in computer vision and NLP, especially on Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI practitioners who wish to build, deploy, and optimize vision and language models using the CANN SDK for production use cases.
By the end of this training, participants will be able to:
- Deploy and optimize CV and NLP models using CANN and AscendCL.
- Use CANN tools to convert models and integrate them into live pipelines.
- Optimize inference performance for tasks like detection, classification, and sentiment analysis.
- Build real-time CV/NLP pipelines for edge or cloud-based deployment scenarios.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab with model deployment and performance profiling.
- Live pipeline design using real CV and NLP use cases.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM enable advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at advanced-level system developers who wish to build, deploy, and tune custom operators for AI models using CANN’s TIK programming model and TVM compiler integration.
By the end of this training, participants will be able to:
- Write and test custom AI operators using the TIK DSL for Ascend processors.
- Integrate custom ops into the CANN runtime and execution graph.
- Use TVM for operator scheduling, auto-tuning, and benchmarking.
- Debug and optimize instruction-level performance for custom computation patterns.
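The loop tiling that TIK and TVM schedules express can be previewed in plain Python: the blocked matrix multiply below mirrors the transformation, with the tile size standing in for what would be sized to fit on-chip buffers on real hardware.

```python
def matmul_tiled(a, b, tile=2):
    """Blocked matrix multiply on nested lists. The three outer loops
    walk tiles of the iteration space; on an accelerator the tile size
    is chosen so the working set fits in on-chip buffers, which is
    exactly the choice a TIK or TVM schedule encodes."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        acc = c[i][j]
                        for p in range(p0, min(p0 + tile, k)):
                            acc += a[i][p] * b[p][j]
                        c[i][j] = acc
    return c
```

The result is identical for any tile size; only memory locality changes, which is why auto-tuners such as TVM's search over tile sizes rather than over algorithms.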
Format of the Course
- Interactive lecture and demonstration.
- Hands-on coding of operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon are leading AI hardware platforms in China, each offering unique acceleration and profiling tools for production-scale AI workloads.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI infrastructure and performance engineers who wish to optimize model inference and training workflows across multiple Chinese AI chip platforms.
By the end of this training, participants will be able to:
- Benchmark models on Ascend, Biren, and Cambricon platforms.
- Identify system bottlenecks and memory/compute inefficiencies.
- Apply graph-level, kernel-level, and operator-level optimizations.
- Tune deployment pipelines to improve throughput and latency.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of profiling and optimization tools on each platform.
- Guided exercises focused on practical tuning scenarios.
Course Customization Options
- To request a customized training for this course based on your performance environment or model type, please contact us to arrange.