GPU & Distributed Programming 2022-23
School Of Computer Science And Electronic Engineering
Module - Semester 1
Indicative content includes:
- Comparison between shared memory and distributed memory
- Comparison between processor architectures (including SIMD, MIMD, and SIMT), covering multi-core processors (i.e. CPUs) and many-core processors (e.g. GPUs)
- Fundamental concepts of concurrent programming, including threads, shared variables, critical regions, atomics, semaphores, race conditions, and deadlock
- Practice of parallel programming:
- Design of algorithms that use the architecture effectively
- Implementation of such algorithms using industry-standard tools and APIs, e.g. CUDA, MPI, Vulkan, and C++
- Validation of the results produced by such algorithms
- Threshold - Equivalent to 50%. Uses key areas of theory or knowledge to meet the Learning Outcomes of the module. Is able to formulate an appropriate solution to accurately solve tasks and questions. Can identify individual aspects, but lacks an awareness of the links between them and the wider context. Outputs can be understood, but lack structure and/or coherence.
- Good - Equivalent to the range 60%-69%. Is able to analyse a task or problem to decide which aspects of theory and knowledge to apply. Solutions are of a workable quality, demonstrating understanding of underlying principles. Major themes can be linked appropriately, but this may not extend to individual aspects. Outputs are readily understood, with an appropriate structure, but may lack sophistication.
- Excellent - Equivalent to the range 70%+. Assembles critically evaluated, relevant areas of knowledge and theory to construct professional-level solutions to the tasks and questions presented. Is able to cross-link themes and aspects to draw considered conclusions. Presents outputs in a cohesive, accurate, and efficient manner.