Intel LLM Interns
#Algorithm Internship# #LLM#
We are currently seeking full-time algorithm interns for efficient LLM inference in the Intel/DCAI/AISE group. The position is based in Zizhu, Shanghai.
You will work on exciting projects such as INC (Intel Neural Compressor) [https://github.com/intel/neural-compressor] and ITREX (Intel Extension for Transformers) [https://github.com/intel/intel-extension-for-transformers].
If you are passionate about this field and would like to apply, please send your resume to wenhua dot cheng @ intel.com.