With the continuous enhancement of large model capabilities and the deepening of inference applications, the scale of data processing has expanded dramatically and data processing requirements have become increasingly diversified, imposing higher demands on the collaboration between storage and computing power. In response to the new demands placed on storage systems by larger data volumes, larger model sizes, and longer context windows in current large model inference scenarios, this study first conducts an in-depth analysis of the implementation mechanisms, key technologies, and practical applications of both “computing-in-place-of-storage” and “storage-in-place-of-computing”. Subsequently, integrating the current technological and industrial foundation with application scenario requirements, this paper argues that a hierarchical, systematic collaborative storage model, tiered by access latency and bandwidth demands, is important for the future development of computing-storage synergy. This paper aims to explore the specific implementation mechanisms and evolutionary pathways of compute-storage collaboration, providing valuable references for improving the utilization efficiency of intelligent computing clusters and better supporting the development of large model inference.
As the cornerstone of enterprise digital transformation, the governance quality of Material Master Data (MMD) directly impacts an enterprise’s operational efficiency and decision-making accuracy. When dealing with massive, heterogeneous datasets, traditional MMD governance methods generally face challenges such as low automation, inefficiency, and high governance costs. To address these problems, this paper proposes an innovative framework that integrates large language models with retrieval-augmented generation technology, in conjunction with the actual business context of National Energy Group Materials Co., Ltd. Built on a local computing architecture, the framework designs a clear technical implementation path and establishes a four-tier technical architecture comprising a computing infrastructure layer, a model layer, a data layer, and an application capability hub layer. It achieves three core functions: duplicate detection for legacy materials, intelligent classification with context-aware recommendations, and automated parameter validation. In specific data governance scenarios, the solution significantly improves the accuracy and processing efficiency of data governance while effectively controlling governance costs. It provides solid technical support for the enterprise’s digital transformation and aligns with the future trend of MMD management towards intelligence and automation.
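The duplicate-detection function described in this abstract can be illustrated with a minimal sketch: a bag-of-words cosine similarity over material descriptions flags candidate duplicate pairs for review. The tokenization, threshold, and function names below are illustrative assumptions for exposition only; the paper’s actual system relies on large language models and retrieval-augmented generation rather than simple lexical matching.

```python
import math
from collections import Counter

def tokenize(text):
    # Naive whitespace tokenization (assumption for illustration); the real
    # system would use semantic embeddings rather than surface tokens.
    return text.lower().split()

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words description vectors."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_duplicate_candidates(records, threshold=0.8):
    """Return index pairs of descriptions that look like duplicates."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if cosine_similarity(records[i], records[j]) >= threshold:
                pairs.append((i, j))
    return pairs

materials = [
    "hex bolt m8 x 40 stainless steel",
    "stainless steel hex bolt m8 x 40",
    "ball valve dn50 pn16 cast iron",
]
candidates = find_duplicate_candidates(materials)
```

The first two records contain the same tokens in different order, so they are flagged as a candidate pair, which mirrors how legacy material descriptions often differ only in word order or formatting.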
Against the backdrop of the continuous surge in computing power demand for large Artificial Intelligence models, enabling high-bandwidth direct interconnection and collaborative work among hundreds or thousands of GPU chips in Scale-up networks has become increasingly important. This paper first reviews the current development status of SuperPod technology, then systematically analyzes its development trends and industrial status across core dimensions including interconnection protocols, interconnection technologies, system management software, power supply, and heat dissipation technologies. Integrating the opportunities and challenges currently facing the industry, it finally puts forward technology iteration paths and industrial advancement strategies for SuperPod technology that match the development needs of the computing power industry.
This paper systematically investigates the core mechanisms by which artificial intelligence technology empowers a national integrated computing network, set against the backdrop of China’s “Eastern Data, Western Computing” strategic initiative. Drawing upon practical experiences from this national project, the study analyzes the current development status and critical bottlenecks of computing networks, focusing on dimensions such as scheduling efficiency and resource integration. Concurrently, through an in-depth examination of cases, including the “China Computing Network” at Pengcheng Laboratory, this paper proposes optimized pathways for intelligent collaborative scheduling in computing networks, providing theoretical foundations for the construction of an efficient, secure, and intelligent integrated computing network.
Traditional computing paradigms face performance bottlenecks in addressing the combinatorial optimization and real-time decision-making challenges of future 6G networks. This paper explores a novel quantum-classical hybrid computing solution. It introduces a framework that utilizes a trapped-ion Quantum Processing Unit (QPU) as a dedicated accelerator to solve the 6G Multi-User Multiple-Input Multiple-Output (MU-MIMO) beamforming optimization problem. The framework maps the communication problem to an Ising model, which is then solved using the Quantum Approximate Optimization Algorithm (QAOA) in a variational loop with a classical computer. Numerical simulations demonstrate that this hybrid approach achieves rapid decision-making and improves the system sum-rate by 15%~20% over traditional heuristics. These results highlight the significant potential of hybrid computing to overcome classical limitations and empower future communication networks.
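The Ising mapping at the heart of this hybrid loop can be sketched in miniature: a toy two-variable beam-activation problem is encoded as couplings J and local fields h, and the ground state is found by exhaustive search standing in for the QPU/QAOA step. The coefficients and function names here are illustrative assumptions, not the paper’s actual MU-MIMO formulation.

```python
import itertools

def ising_energy(spins, J, h):
    """Energy of a spin configuration: E = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j."""
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    e -= sum(J[i][j] * spins[i] * spins[j]
             for i in range(n) for j in range(i + 1, n))
    return e

def ground_state(J, h):
    """Exhaustive search over spin configurations; in the paper's framework
    this step is delegated to the trapped-ion QPU running QAOA."""
    n = len(h)
    best = min(itertools.product([-1, 1], repeat=n),
               key=lambda s: ising_energy(s, J, h))
    return best, ising_energy(best, J, h)

# Toy problem (assumed values): negative coupling models mutual interference,
# so the optimum activates the two beams in opposite states.
J = [[0.0, -1.0],
     [0.0, 0.0]]
h = [0.5, 0.5]
spins, energy = ground_state(J, h)
```

In the full variational loop, a classical optimizer would adjust the QAOA circuit parameters between QPU evaluations instead of enumerating configurations, which is what makes the approach scale beyond brute force.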
With the rapid development of various new technologies, growing demand for computing resources has created a contradiction between currently limited computing resources and rigid allocation patterns. Optimizing the allocation of computing resources and improving their utilization efficiency have become urgent problems to be addressed. The mimic computing system can flexibly and efficiently allocate computing resources through mechanisms such as on-demand allocation of heterogeneous resources, hardware resource pooling, and dynamic resource reconfiguration, fully leveraging the execution efficiency of computing resources and significantly enhancing the overall computing efficiency ratio. To clarify the advancement of the mimic computing system and its improvement in computing efficiency, this paper studies the testing and evaluation system of the mimic computing system by sorting out testing and evaluation indicators, establishing testing and evaluation models, and carrying out application scenario practices.
Quantum computing, as the core of next-generation information technology, is regarded as an important tool for tackling complex scientific challenges, optimizing industrial processes, and strengthening national security. Compared with technologies such as superconducting and ion-trap quantum computing, optical quantum computing has emerged as a focal point for Japanese research institutions and corporations due to its advantages of not requiring ultra-low-temperature environments and its high scalability. This paper analyzes Japan’s optical quantum computing industry landscape and strategic framework to provide multidimensional insights for advancing China’s optical quantum computing sector.
Advanced computing technologies are driving the intelligent and systematic transformation of non-bidding procurement, the scalable implementation of which hinges on building a foundational support system based on the synergy of computing power and architecture. Focusing on four core scenarios—qualification verification, contract analysis, expert review, and document generation—this paper elucidates how architectures like heterogeneous computing and distributed inference empower multimodal parsing, multi-model collaboration, and human-AI co-generation, significantly enhancing procurement efficiency, accuracy, and compliance. Aligning with national policy directives, this paper systematically outlines the technological evolution path, application effectiveness, and core challenges of AI-powered non-bidding procurement. It further identifies that future breakthroughs must focus on compute-efficient model design, verifiable generation validation mechanisms, and engineering deployment strategies to build an intelligent procurement system that is efficient, trustworthy, and scalable.
In recent years, privacy-preserving computing technology has developed rapidly: its usability has continuously improved and product types have gradually diversified. However, in practical applications, product security, algorithm performance, and ease of use remain the key factors in promoting the large-scale adoption of privacy-preserving computing. This paper first analyzes users’ needs for security, performance, and ease of use when applying privacy-preserving computing technology. It then analyzes in depth a privacy-preserving computing system based on hardware-software integration from the aspects of architecture, technology, and functions.
This paper proposes a data governance method for hydroelectric equipment operation and maintenance based on digital twins. The method utilizes Asset Administration Shell (AAS) technology to define the digital twin metamodel of the equipment. Relying on this metamodel and the proposed full-domain data space method, it presents a dynamic construction method for the digital twin of hydroelectric equipment, enabling efficient governance of the equipment’s entire lifecycle data, functional behavior models, and relationships. A case study demonstrates the application of the proposed method in hydroelectric equipment operation and maintenance, proving that it can improve operation efficiency, reduce failure rates, and provide data support for decision-making.
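The AAS-style metamodel this abstract builds on can be illustrated minimally: an asset’s digital twin is a shell holding named submodels (e.g., nameplate data, operating data), each a group of typed properties that can be populated dynamically. The class and property names below are simplified assumptions for illustration, not the AAS specification or the paper’s full metamodel.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Property:
    """A typed leaf value inside a submodel (heavily simplified)."""
    id_short: str
    value: object

@dataclass
class Submodel:
    """A named group of properties, e.g. nameplate or operating data."""
    id_short: str
    properties: Dict[str, Property] = field(default_factory=dict)

    def set(self, name, value):
        self.properties[name] = Property(name, value)

@dataclass
class AssetAdministrationShell:
    """The digital twin container for one physical asset."""
    asset_id: str
    submodels: Dict[str, Submodel] = field(default_factory=dict)

    def add_submodel(self, sm):
        self.submodels[sm.id_short] = sm

# Dynamically construct a twin for a hydro turbine (illustrative values)
shell = AssetAdministrationShell("turbine-unit-3")
nameplate = Submodel("Nameplate")
nameplate.set("RatedPowerMW", 150)
operating = Submodel("OperatingData")
operating.set("BearingTempC", 62.5)
shell.add_submodel(nameplate)
shell.add_submodel(operating)
```

Because submodels are added at runtime rather than fixed at design time, the same shell structure can absorb new data sources over the equipment’s lifecycle, which is the “dynamic construction” idea the abstract refers to.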
The rapid development of deepfake technology has exacerbated the crisis of social trust and security threats, with abuse scenarios expanding from fake news and identity fraud into ever wider fields. To meet these challenges, deepfake detection technology has gradually evolved from single-modal to multimodal fusion detection, with detection accuracy and robustness significantly improved by integrating multi-source information such as audio-visual signals. This paper first analyzes the characteristics and application scenarios of multimodal datasets; second, it classifies and describes the detection-positioning-interpretation technical methodology; it then evaluates the actual performance of existing detection platforms; finally, it looks ahead to future research directions. The purpose of this study is to construct a technical map of multimodal deepfake detection and to provide theoretical support and practical reference for the development of the field.
Currently, internet education apps face problems such as unclear inventories, inconsistent quality, and uneven capabilities in personal information protection. Establishing an encoding method that can conveniently identify the multi-dimensional information of internet education apps, and improving the quality management mechanism for such apps on that basis, can help users better select and use compliant, user-friendly apps. It can also effectively support regulatory authorities in carrying out supervision and governance.
Traditional TDM PON technology, which focuses on “bandwidth enhancement,” can no longer meet the needs of some low latency applications, so there is an urgent need to research low latency technologies for PON networks. This paper first elaborates on the low latency requirements of cloud VR/AR, industrial applications, and other scenarios, and then analyzes the composition of PON network latency. The latency problem stems mainly from the uplink bandwidth allocation mechanism and from the random uplink latency introduced by the ONU registration/ranging mechanism. For the former, it introduces key low latency technologies such as fixed bandwidth allocation, single-frame multi-burst, and collaborative DBA; for the latter, it introduces technologies such as out-of-band window opening and small window opening.
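Fixed bandwidth allocation, the first low latency technique mentioned above, can be sketched as follows: each ONU receives a pre-planned, recurring upstream grant inside every frame, so no report/grant round trip is needed and worst-case queuing delay is bounded by one frame. The 125 µs frame length, guard time, and ONU shares below are illustrative assumptions, not values from the paper.

```python
FRAME_US = 125.0   # upstream frame duration in microseconds (typical PON framing, assumed)
GUARD_US = 1.0     # guard time between ONU bursts (illustrative)

def fixed_allocation_schedule(onu_shares):
    """Pre-compute (start, length) upstream grants for each ONU within one frame.

    onu_shares: dict mapping ONU id to its fraction of the usable frame time.
    Because the same grants recur every frame, an ONU never waits for a
    report/grant DBA cycle before transmitting.
    """
    n = len(onu_shares)
    usable = FRAME_US - n * GUARD_US   # time left after guard intervals
    schedule, cursor = {}, 0.0
    for onu, share in onu_shares.items():
        length = usable * share
        schedule[onu] = (cursor, length)
        cursor += length + GUARD_US    # next burst starts after a guard gap
    return schedule

grants = fixed_allocation_schedule({"onu1": 0.5, "onu2": 0.3, "onu3": 0.2})
```

The trade-off, which motivates the dynamic and collaborative DBA variants the paper also covers, is that fixed grants waste capacity when an ONU has nothing to send.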