Graphic Techniques (Adobe Photoshop, Adobe Illustrator) Training Course
What you will learn during the training:
- principles of creating computer graphics
- ways to adjust the color of photos
- principles of retouching and creating photomontages
- ways of preparing logos, charts, tables and illustrations
- preparation of business cards, simple advertisements, billboards and leaflets
- basics of preparing graphics for printing and Internet applications
Examples of lesson topics:
- my poster
- portrait
- space
- my catalogue
- my face
- billboard
- my logo
Course Outline
Photoshop
- Basics of image construction and color models
- Scanning
- Adjusting the color of photos
- Retouching and modifications
- Photomontages
- File formats, saving and optimizing graphics
Illustrator
- Creating illustrations, logos
- Making and printing business cards
- Preparing a simple advertising leaflet
- Charts and tables - attractive presentation of data
Requirements
Good computer skills.
Open Training Courses require 5+ participants.
Testimonials (2)
The many different examples were very interactive, and their complexity increased gradually from the start of the training to the end.
Jenny - Andheo
Course - GPU Programming with CUDA and Python
The trainer's energy and humor.
Tadeusz Kaluba - Nokia Solutions and Networks Sp. z o.o.
Course - NVIDIA GPU Programming - Extended
Related Courses
GPU Programming with CUDA and Python
14 hours
CUDA (Compute Unified Device Architecture) is a parallel computing platform and API created by Nvidia.
This instructor-led, live training (online or onsite) is aimed at developers who wish to use CUDA to build Python applications that run in parallel on NVIDIA GPUs.
By the end of this training, participants will be able to:
- Use the Numba compiler to accelerate Python applications running on NVIDIA GPUs.
- Create, compile and launch custom CUDA kernels.
- Manage GPU memory.
- Convert a CPU-based application into a GPU-accelerated application.
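The kernel-indexing pattern at the heart of Numba's CUDA support can be sketched in plain Python, with no GPU or numba installation required. `launch` and `vector_add_kernel` below are illustrative stand-ins for a real `@cuda.jit` kernel and its `kernel[grid, block](...)` launch syntax; the index computation mirrors what `cuda.grid(1)` returns inside a kernel.

```python
def vector_add_kernel(a, b, out, block_idx, block_dim, thread_idx):
    """Body of a CUDA-style kernel: each (block, thread) pair handles one element."""
    i = block_idx * block_dim + thread_idx  # cuda.grid(1) in Numba terms
    if i < len(out):                        # bounds guard for threads past the end
        out[i] = a[i] + b[i]

def launch(kernel, grid_dim, block_dim, *args):
    """Simulate launching grid_dim blocks of block_dim threads each (serially)."""
    for block_idx in range(grid_dim):
        for thread_idx in range(block_dim):
            kernel(*args, block_idx, block_dim, thread_idx)

n = 10
a = [float(i) for i in range(n)]
b = [2.0 * i for i in range(n)]
out = [0.0] * n
block_dim = 4
grid_dim = (n + block_dim - 1) // block_dim  # ceil-divide so every element is covered
launch(vector_add_kernel, grid_dim, block_dim, a, b, out)
print(out)  # each element is a[i] + b[i] = 3.0 * i
```

On a real GPU the two loops disappear: every (block, thread) pair runs concurrently, which is exactly why the bounds guard is essential when n is not a multiple of the block size.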
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Administration of CUDA
35 hours
CUDA (Compute Unified Device Architecture) is a parallel computing platform and application programming interface model created by Nvidia.
This instructor-led, live training (online or onsite) is aimed at beginner-level system administrators and IT professionals who wish to install, configure, manage, and troubleshoot CUDA environments.
By the end of this training, participants will be able to:
- Understand the architecture, components, and capabilities of CUDA.
- Install and configure CUDA environments.
- Manage and optimize CUDA resources.
- Debug and troubleshoot common CUDA issues.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Learning Maya
14 hours
Autodesk Maya, commonly called simply Maya, is a 3D computer graphics application that allows users to create realistic animations from scratch.
This instructor-led, live training (online or onsite) is aimed at web designers who wish to use Maya for creating 3D animations.
By the end of this training, participants will be able to:
- Create realistic models and textures in Maya.
- Animate and render projects for high quality playback.
- Simulate natural effects like water and smoke.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
WebGL: Create an Animated 3D Application
21 hours
WebGL (Web Graphics Library) is a JavaScript API for rendering 3D graphics within a web browser without the use of plug-ins.
In this instructor-led, live training, participants will learn how to generate realistic computer images using 3D graphics as they step through the creation of an animated 3D application that runs in a browser.
By the end of this training, participants will be able to:
- Understand and use WebGL's various features, including meshes, transforms, cameras, materials, lighting, and animation
- Animate objects with WebGL
- Create 3D objects using WebGL
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
NVIDIA GPU Programming
14 hours
This course covers how to program GPUs for parallel computing. Some of the applications include deep learning, analytics, and engineering applications.
NVIDIA GPU Programming - Extended
21 hours
This instructor-led, live training course covers how to program GPUs for parallel computing, how to use various platforms, how to work with the CUDA platform and its features, and how to perform various optimization techniques using CUDA. Some of the applications include deep learning, analytics, image processing and engineering applications.
Hardware-Accelerated Video Analytics
14 hours
Video analytics refers to the technology and techniques used to process a video stream. A common application would be capturing and identifying live video events through motion detection, facial recognition, crowd and vehicle counting, etc.
This instructor-led, live training (online or onsite) is aimed at developers who wish to build hardware-accelerated object detection and tracking models to analyze streaming video data.
By the end of this training, participants will be able to:
- Install and configure the necessary development environment, software and libraries to begin developing.
- Build, train, and deploy deep learning models to analyze live video feeds.
- Identify, track, segment and predict different objects within video frames.
- Optimize object detection and tracking models.
- Deploy an intelligent video analytics (IVA) application.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
GPU Programming with OpenCL
28 hours
OpenCL is an open standard for heterogeneous programming that enables code to run on different platforms and devices, such as multicore CPUs, GPUs, FPGAs, and others. OpenCL exposes the programmer to the hardware details and gives full control over the parallelization process. However, this also requires a good understanding of the device architecture, memory model, execution model, and optimization techniques.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use OpenCL to program heterogeneous devices and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up a development environment that includes OpenCL SDK, a device that supports OpenCL, and Visual Studio Code.
- Create a basic OpenCL program that performs vector addition on the device and retrieves the results from the device memory.
- Use OpenCL API to query device information, create contexts, command queues, buffers, kernels, and events.
- Use OpenCL C language to write kernels that execute on the device and manipulate data.
- Use OpenCL built-in functions, extensions, and libraries to perform common tasks and operations.
- Use OpenCL host and device memory models to optimize data transfers and memory accesses.
- Use OpenCL execution model to control the work-items, work-groups, and ND-ranges.
- Debug and test OpenCL programs using tools such as CodeXL, Intel VTune, and NVIDIA Nsight.
- Optimize OpenCL programs using techniques such as vectorization, loop unrolling, local memory, and profiling.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
GPU Programming with CUDA
28 hours
CUDA is a proprietary parallel computing platform and programming model created by NVIDIA for its GPUs, which are widely used for high-performance computing, artificial intelligence (AI), gaming, and graphics. CUDA exposes the programmer to the hardware details and gives full control over the parallelization process. However, this also requires a good understanding of the device architecture, memory model, execution model, and optimization techniques.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use CUDA to program NVIDIA GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up a development environment that includes CUDA Toolkit, an NVIDIA GPU, and Visual Studio Code.
- Create a basic CUDA program that performs vector addition on the GPU and retrieves the results from the GPU memory.
- Use CUDA API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
- Use CUDA C/C++ language to write kernels that execute on the GPU and manipulate data.
- Use CUDA built-in functions, variables, and libraries to perform common tasks and operations.
- Use CUDA memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
- Use CUDA execution model to control the threads, blocks, and grids that define the parallelism.
- Debug and test CUDA programs using tools such as CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
- Optimize CUDA programs using techniques such as coalescing, caching, prefetching, and profiling.
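One concrete pattern behind the shared-memory and synchronization topics above is the block-wide tree reduction. The following is a plain-Python sketch (no GPU needed): the local list stands in for a `__shared__` tile, and the serial loop over `tid` stands in for threads that, on a GPU, run in parallel with `__syncthreads()` barriers between halving steps.

```python
def block_reduce(block):
    """Tree reduction over one block's tile: halve the active threads each step."""
    shared = list(block)          # stand-in for a __shared__ memory tile
    stride = len(shared) // 2
    while stride > 0:
        for tid in range(stride): # active threads; on a GPU these run in parallel
            shared[tid] += shared[tid + stride]
        stride //= 2              # __syncthreads() would separate these steps
    return shared[0]              # thread 0 holds the block's partial sum

data = list(range(16))
block_size = 8                    # power of two keeps this simple scheme correct
partials = [block_reduce(data[i:i + block_size])
            for i in range(0, len(data), block_size)]
total = sum(partials)             # the host combines per-block partial sums
print(total)  # 120
```

The two-level structure (per-block partial sums, then a host-side combine) is the same shape a real CUDA reduction kernel takes, since blocks cannot synchronize with each other during a single kernel launch.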
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
GPU Programming - OpenCL vs CUDA vs ROCm
28 hours
GPU programming is a technique that leverages the parallel processing power of GPUs to accelerate applications that require high-performance computing, such as artificial intelligence, gaming, graphics, and scientific computing. There are several frameworks that enable GPU programming, each with its own advantages and disadvantages. OpenCL is an open standard that can be used to program CPUs, GPUs, and other devices from different vendors, while CUDA is specific to NVIDIA GPUs. ROCm is a platform that supports GPU programming on AMD GPUs, and also provides compatibility with CUDA and OpenCL.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use different frameworks for GPU programming and compare their features, performance, and compatibility.
By the end of this training, participants will be able to:
- Set up a development environment that includes OpenCL SDK, CUDA Toolkit, ROCm Platform, a device that supports OpenCL, CUDA, or ROCm, and Visual Studio Code.
- Create a basic GPU program that performs vector addition using OpenCL, CUDA, and ROCm, and compare the syntax, structure, and execution of each framework.
- Use the respective APIs to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
- Use the respective languages to write kernels that execute on the device and manipulate data.
- Use the respective built-in functions, variables, and libraries to perform common tasks and operations.
- Use the respective memory spaces, such as global, local, constant, and private, to optimize data transfers and memory accesses.
- Use the respective execution models to control the threads, blocks, and grids that define the parallelism.
- Debug and test GPU programs using tools such as CodeXL, CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
- Optimize GPU programs using techniques such as coalescing, caching, prefetching, and profiling.
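As a quick reference for the comparison this course makes, the standard terminology equivalences between the three frameworks can be captured in a small lookup table (HIP intentionally mirrors CUDA's names). The `equivalent` helper is purely illustrative, not part of any framework's API.

```python
# Standard execution- and memory-model terminology across the three frameworks.
TERMS = [
    # (OpenCL,           CUDA,                                     HIP)
    ("work-item",        "thread",                                 "thread"),
    ("work-group",       "thread block",                           "thread block"),
    ("ND-range",         "grid",                                   "grid"),
    ("local memory",     "shared memory",                          "shared memory"),
    ("private memory",   "local memory",                           "local memory"),
    ("get_global_id(0)", "blockIdx.x*blockDim.x + threadIdx.x",    "blockIdx.x*blockDim.x + threadIdx.x"),
]

def equivalent(opencl_term):
    """Look up the CUDA and HIP names for an OpenCL concept."""
    for ocl, cu, hip in TERMS:
        if ocl == opencl_term:
            return cu, hip
    raise KeyError(opencl_term)

print(equivalent("work-group"))  # ('thread block', 'thread block')
```

Note the classic trap in the table: OpenCL "local memory" is CUDA "shared memory", while CUDA "local memory" corresponds to OpenCL "private memory".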
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
AMD GPU Programming
28 hours
ROCm is an open source platform for GPU programming that supports AMD GPUs, and also provides compatibility with CUDA and OpenCL. ROCm exposes the programmer to the hardware details and gives full control over the parallelization process. However, this also requires a good understanding of the device architecture, memory model, execution model, and optimization techniques.
HIP is a C++ runtime API and kernel language that allows you to write portable code that can run on both AMD and NVIDIA GPUs. HIP provides a thin abstraction layer over the native GPU APIs, such as ROCm and CUDA, and allows you to leverage the existing GPU libraries and tools.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use ROCm and HIP to program AMD GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up a development environment that includes ROCm Platform, an AMD GPU, and Visual Studio Code.
- Create a basic ROCm program that performs vector addition on the GPU and retrieves the results from the GPU memory.
- Use ROCm API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
- Use HIP language to write kernels that execute on the GPU and manipulate data.
- Use HIP built-in functions, variables, and libraries to perform common tasks and operations.
- Use ROCm and HIP memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
- Use ROCm and HIP execution models to control the threads, blocks, and grids that define the parallelism.
- Debug and test ROCm and HIP programs using tools such as ROCm Debugger and ROCm Profiler.
- Optimize ROCm and HIP programs using techniques such as coalescing, caching, prefetching, and profiling.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
ROCm for Windows
21 hours
ROCm is an open source platform for GPU programming that supports AMD GPUs, and also provides compatibility with CUDA and OpenCL. ROCm exposes the programmer to the hardware details and gives full control over the parallelization process. However, this also requires a good understanding of the device architecture, memory model, execution model, and optimization techniques.
ROCm for Windows is a recent development that allows users to install and use ROCm on the Windows operating system, which is widely used for personal and professional purposes. ROCm for Windows enables users to leverage the power of AMD GPUs for various applications, such as artificial intelligence, gaming, graphics, and scientific computing.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to install and use ROCm on Windows to program AMD GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up a development environment that includes ROCm Platform, an AMD GPU, and Visual Studio Code on Windows.
- Create a basic ROCm program that performs vector addition on the GPU and retrieves the results from the GPU memory.
- Use ROCm API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
- Use HIP language to write kernels that execute on the GPU and manipulate data.
- Use HIP built-in functions, variables, and libraries to perform common tasks and operations.
- Use ROCm and HIP memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
- Use ROCm and HIP execution models to control the threads, blocks, and grids that define the parallelism.
- Debug and test ROCm and HIP programs using tools such as ROCm Debugger and ROCm Profiler.
- Optimize ROCm and HIP programs using techniques such as coalescing, caching, prefetching, and profiling.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
Introduction to GPU Programming
21 hours
GPU programming is a technique that leverages the parallel processing power of GPUs to accelerate applications that require high-performance computing, such as artificial intelligence, gaming, graphics, and scientific computing. There are several frameworks and tools that enable GPU programming, each with its own advantages and disadvantages. Some of the most popular ones are OpenCL, CUDA, ROCm, and HIP.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to learn the basics of GPU programming and the main frameworks and tools for developing GPU applications.
By the end of this training, participants will be able to:
- Understand the difference between CPU and GPU computing and the benefits and challenges of GPU programming.
- Choose the right framework and tool for their GPU application.
- Create a basic GPU program that performs vector addition using one or more of the frameworks and tools.
- Use the respective APIs, languages, and libraries to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
- Use the respective memory spaces, such as global, local, constant, and private, to optimize data transfers and memory accesses.
- Use the respective execution models, such as work-items, work-groups, threads, blocks, and grids, to control the parallelism.
- Debug and test GPU programs using tools such as CodeXL, CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
- Optimize GPU programs using techniques such as coalescing, caching, prefetching, and profiling.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.
GPU Programming with OpenACC
28 hours
OpenACC is a directive-based open standard for heterogeneous programming that enables code to run on different platforms and devices, such as multicore CPUs, GPUs, FPGAs, and others.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use OpenACC to program heterogeneous devices and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up an OpenACC development environment.
- Write and run a basic OpenACC program.
- Annotate code with OpenACC directives and clauses.
- Use OpenACC API and libraries.
- Profile, debug, and optimize OpenACC programs.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to arrange it.