1) Question:
15.1 What is heterogeneous computing?
15.2 What is a GPU?
15.3 Introduction to GPU Architecture
15.3.1 Why use a GPU?
15.3.2 What is the core of CUDA?
15.3.3 What role do the Tensor Cores in the new Turing architecture play in deep learning?
15.3.4 What is the connection between GPU memory architecture and application performance?
15.4 CUDA framework
15.4.1 Is CUDA programming difficult? (see the sketch after this list)
15.4.2 cuDNN
15.5 GPU hardware environment configuration recommendations
15.5.1 Main GPU Performance Indicators
15.5.2 Purchase Recommendations
15.6 Software Environment Setup
15.6.1 Operating System Selection
15.6.2 Native installation or Docker?
15.6.3 GPU Driver Issues
15.7 Framework Selection
15.7.1 Comparison of mainstream frameworks
15.7.2 Framework details
15.7.3 Which frameworks are deployment-friendly?
15.7.4 How to choose a framework for mobile platforms?
15.8 Other
15.8.1 Configuration of a Multi-GPU Environment
15.8.2 Is distributed training possible?
15.8.3 Can I train or deploy a model in a Spark environment?
15.8.4 How to further optimize performance?
15.8.5 What is the difference between TPU and GPU?
15.8.6 What impact will future quantum computing have on AI technologies such as deep learning?
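As a concrete companion to 15.4.1, below is a minimal sketch of a typical CUDA program (not taken from the chapter itself): allocate device memory, copy inputs to the GPU, launch a kernel, and copy the result back. The kernel name vecAdd, the array size, and the launch configuration are illustrative assumptions.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Minimal element-wise vector addition: each thread handles one index.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // 1M elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device buffers
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);

    // Host -> device copies
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();

    // Device -> host copy and a quick check
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f (expected 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Compiled with nvcc (e.g. a file named vec_add.cu, a hypothetical name) and run, this should print 3.0; the point is that the host/device split and the explicit memory copies, rather than the kernel itself, are what make CUDA feel harder than ordinary C++ at first.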
2) Answer: