At the WAIC 2022 Scientific Frontiers plenary session, the Shanghai AI Laboratory released "OpenXLab" (Puyuan), an open-source, open artificial intelligence system. One of its key components, the OpenComputeLab open AI computing system, was released by the laboratory's AI Training and Computing Research Center and is designed to make AI computing more open, more efficient, and simpler. It pioneered "operator profiling" from the algorithm perspective: it systematically catalogues hundreds of deep-learning operators, organizes them into a classification and grading system, and builds an open evaluation framework for comprehensive, in-depth analysis of computing performance, providing a valuable reference for the collaborative evolution of the upstream and downstream ecosystem.
The AI Training and Computing Research Center is dedicated to defining and leading the construction of next-generation AI training and computing systems. It builds an inclusive AI computing and compilation stack and develops systems that are highly scalable, widely open, highly energy-efficient, and broadly adaptable, undertaking the foundational work that underpins artificial intelligence technology.
Looking toward the future of general intelligence and interdisciplinary research, the center will create a new generation of open AI training systems: defining a new programming paradigm for training systems and achieving breakthroughs in multidimensional parallelism, adaptive training, and efficient distributed collaboration across diverse environments. The goal is to enable efficient training of ultra-large models, to explore the limits of model-training efficiency, and to meet the combined needs of general intelligence and ultra-large-scale scientific computing.
The center will also continue to explore open AI computing systems, researching and breaking through compiler technologies for diverse computing backends, driving the synchronized evolution of computing power and algorithms, and enabling efficient adaptation to, and training support for, a variety of computing units across cloud and edge, so as to meet the new computing demands of next-generation intelligent computing infrastructure.
Responsibilities:
1. Research cutting-edge deep-learning algorithms and architectures and, guided by the pain points of business-line researchers, continuously enrich the training framework's functionality;
2. Adapt and tune business-line models on the self-developed deep-learning framework, focusing on optimizing training time and memory footprint;
3. Improve and customize the algorithm framework and its associated toolchain according to the needs of deployment scenarios such as security, mobile, face recognition, and autonomous driving.
Requirements:
1. Solid computer science fundamentals; proficient in C/C++; skilled in the use of design patterns and methods; good software engineering mindset and skills;
2. Mastery of the basic principles of machine learning, deep learning, and other AI techniques, with relevant industry or research experience;
3. Familiarity with computer architecture, the fundamentals of parallel computing, and GPU parallel programming; relevant research experience is a plus;
4. Parallel development or performance-tuning experience on any GPU architecture (e.g., NVIDIA or AMD) is a plus;
5. Experience porting matrix algebra, computer vision, or image processing algorithms to different processors is a plus;
6. Performance analysis and optimization experience is a plus, including but not limited to system load analysis, GPU load analysis, and host/GPU memory usage analysis;
7. Education: Master's degree or above;
8. Major: computer science.
Email subject and resume file name format: Name - Position applied for - Campus recruitment / Experienced hire / Internship.