Intel LLM Interns
#Algorithm Internship# #LLM#
We are currently seeking full-time algorithm interns for efficient LLM inference in the Intel/DCAI/AISE group. The position is based in Shanghai (Zizhu).
You will work on exciting projects such as INC (Intel Neural Compressor) [https://github.com/intel/neural-compressor] and ITREX (Intel Extension for Transformers) [https://github.com/intel/intel-extension-for-transformers].
If you are passionate about this field and would like to apply, please send your resume to wenhua dot cheng @ intel.com
