信息通信技术与政策 (Information and Communications Technology and Policy)

信息通信技术与政策 ›› 2025, Vol. 51 ›› Issue (6): 44-51. doi: 10.12267/j.issn.2096-5931.2025.06.008

Special Topic: AI-Driven Energy Transformation

Low-Carbon AI: Research on Green Training and Inference Optimization Methods for Large Models

GE Jian1, NIU Xiaoyan2, BI Ran2, HUANG Yongtao2

  1. Institute of Technology and Standards, China Academy of Information and Communications Technology, Beijing 100191, China
  2. Institute of Security Research, China Academy of Information and Communications Technology, Beijing 100191, China
  • Received: 2025-04-22  Online: 2025-06-25  Published: 2025-07-04
  • Corresponding author: BI Ran, senior engineer at the Institute of Security Research, China Academy of Information and Communications Technology; research focuses on AI-enabled telecom big-data platforms
  • About the authors:
    GE Jian, senior engineer at the Institute of Technology and Standards, China Academy of Information and Communications Technology; research focuses on intelligent computing networks and telecom core networks
    NIU Xiaoyan, engineer at the Institute of Security Research, China Academy of Information and Communications Technology; research focuses on AI-enabled telecom big-data platforms
    HUANG Yongtao, engineer at the Institute of Security Research, China Academy of Information and Communications Technology; research focuses on AI-enabled telecom big-data platforms


Abstract:

In recent years, the scale of Artificial Intelligence (AI) models has expanded continuously, driving up the energy consumption of training and inference and spurring the rise of low-carbon AI research. This paper systematically reviews the key technologies of green computing for large models, focusing on low-carbon training and inference optimization methods. In the training phase, existing studies reduce energy consumption through model architecture optimization, computational precision adjustment, and resource scheduling strategies, including neural architecture search, mixed-precision computing, and distributed training. In the inference phase, optimization strategies center on model pruning, quantization, edge computing, and cache reuse to cut computational cost and carbon emissions. The paper also summarizes the challenges facing low-carbon AI, including energy-efficiency bottlenecks in computing hardware, uncertainties in carbon footprint quantification methods, and limits on the use of green energy in data centers, and explores future development trends.
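The quantization technique named in the abstract can be illustrated with a minimal sketch. The following is a generic symmetric per-tensor int8 scheme in plain Python, given as an assumption-laden example rather than the specific method studied in the paper:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:  # all-zero tensor: avoid division by zero
        scale = 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.031, 0.9]  # toy float32 weights
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
# Each int8 weight needs 1 byte instead of 4 for float32 (~4x less memory
# and bandwidth at inference time); reconstruction error is at most scale/2.
```

The carbon-saving intuition is that inference cost is dominated by moving and multiplying weights, so storing them in 8 bits instead of 32 cuts memory traffic roughly fourfold at the price of a bounded rounding error per weight.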

Keywords: large language model, AI, low carbon, green computing

CLC number: