Intel LLM Interns
#AlgorithmInternship# #LargeLanguageModels#
We are currently seeking full-time algorithm interns for efficient LLM inference in the Intel/DCAI/AISE group. The position is based in Shanghai, Zizhu.
You will work on exciting projects such as INC (Intel Neural Compressor) [https://github.com/intel/neural-compressor] and ITREX (Intel Extension for Transformers) [https://github.com/intel/intel-extension-for-transformers].
If you are passionate about this field and would like to apply, please send your resume to wenhua dot cheng @ intel.com