Competent authority: China Aerospace Science and Technology Corporation
Sponsors: Beijing Aerospace Institute of Metrology and Measurement Technology;
Beijing Institute of Radio Metrology and Measurement
Publisher: Editorial Office of Journal of Astronautic Metrology and Measurement (《宇航计测技术》)
Editor-in-Chief: ZHOU Qian
Mailing address: P.O. Box 9200, Sub-box 24, Beijing
Tel: (010) 68383695
Fax: (010) 68383627
Domestic postal distribution code: 18-123
Overseas distribution code: BM6613
International standard serial number: ISSN 1000-7202
Domestic unified serial number: CN 11-2052/V
15 April 2025, Volume 45 Issue 2
A Review of Large Language Model Evaluation Methods
SONG Jialei, ZUO Xingquan, ZHANG Xiujian, HUANG Hai
2025, 45(2): 1-30. doi: 10.12060/j.issn.1000-7202.2025.02.01
With the rapid development of large language models, their broad application prospects have attracted significant attention from both the academic and industrial communities. Before a large language model is applied in practice, its performance and potential risks need to be comprehensively evaluated. In recent years, researchers have studied the evaluation methods of large language models from multiple perspectives. This paper systematically reviews the evaluation metrics, methods, and benchmarks of large language models in terms of performance, robustness, and alignment, and analyzes the advantages and disadvantages of the various evaluation metrics and methods. Finally, future research directions and challenges in large language model evaluation are discussed.
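One of the simplest performance metrics such surveys cover is benchmark exact-match accuracy. A minimal illustrative sketch (not taken from the paper; `stub_model` and the toy benchmark are hypothetical stand-ins for a real LLM API and evaluation set):

```python
# Minimal sketch of a benchmark-style exact-match evaluation loop.
# `model` is a hypothetical callable standing in for any LLM interface.

def exact_match_accuracy(model, benchmark):
    """Fraction of benchmark questions the model answers exactly (case-insensitive)."""
    correct = 0
    for question, reference in benchmark:
        prediction = model(question).strip().lower()
        correct += prediction == reference.strip().lower()
    return correct / len(benchmark)

# Toy usage with a stub model that echoes a lookup table.
answers = {"2+2?": "4", "Capital of France?": "Paris"}
stub_model = lambda q: answers.get(q, "")
benchmark = [("2+2?", "4"), ("Capital of France?", "paris"), ("Color of sky?", "blue")]
print(exact_match_accuracy(stub_model, benchmark))  # 2 of 3 correct
```

Real benchmarks layer many such metrics (accuracy, calibration, robustness under paraphrase) over large question sets; the loop structure stays the same.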
Evaluation of Brain-Inspired Intelligence: Connotation, Methodology and Applications
LIAO Yuanhao, SU Chunwang, JIANG Weiguo, WEN Hua, LI Youjun, ZHANG Siping, HUANG Zigang
2025, 45(2): 31-48. doi: 10.12060/j.issn.1000-7202.2025.02.02
To evaluate the comprehensive performance of brain-inspired intelligence, a standard evaluation framework must be established. This paper elaborates the theoretical foundations of brain-inspired intelligence evaluation, including brain science theory, brain-inspired computing theory, the synergistic development of brain science and brain-inspired computing, and the key issues in brain-inspired intelligence evaluation. Brain-inspired intelligence evaluation methods, including brain-mechanism-based and brain-inspired-modeling-based assessment methods, are discussed in detail. Corresponding evaluation metrics are proposed alongside the construction of a difficulty-tiered standardized dataset. In addition, hardware-software integrated evaluation of brain-inspired intelligent systems is considered, an integrated application framework is proposed, and a systematic evaluation practice is carried out focusing on brain-inspired localization and navigation tasks.
Research Progress of Optical Neural Networks
FENG Jianan, HU Jianyang, ZHANG Xiujian, LIN Jie, JIN Peng
2025, 45(2): 49-62. doi: 10.12060/j.issn.1000-7202.2025.02.03
Recently, artificial intelligence, particularly deep learning, has developed rapidly, deeply empowering traditional industries and driving a new round of industrial technology revolution. However, transistor sizes on electronic chips are approaching their physical limits, so traditional electronic neural networks cannot meet the exponentially increasing demand for computing power. Benefiting from the unique advantages of photons, optical computing technology merges optoelectronic technology with neural network models, offering parallelism, high speed, low power consumption, and multi-dimensional processing. This paper reviews the research progress of optical neural networks, concentrating on the computational architecture of optical diffractive neural networks. The challenges facing the practical implementation of large-scale optical diffractive neural networks are analyzed and summarized, and future development trends are discussed.
A Progressive Error Repair Method for Knowledge Graphs Assisted by Large Language Models
ZHENG Xu, LIU Jing, ZHANG Lizong, YAN Ke, SONG Faren, CHANG Qingxue
2025, 45(2): 63-71. doi: 10.12060/j.issn.1000-7202.2025.02.04
Knowledge graphs are an important form of knowledge representation that can effectively integrate and organize information, and they have been widely used in search engines, intelligent question answering, and recommendation systems. Traditional knowledge graph construction relies on manual annotation and rule-based systems; the resulting graphs are huge in scale and uneven in quality, and struggle to adapt to dynamically changing massive data. Recently, large language models have shown superior performance in knowledge generation, but research on using them to enhance knowledge graph error repair is still lacking. Therefore, a progressive error repair method for knowledge graphs, assisted by large language models, is proposed. Embedding models are used to evaluate the quality of knowledge triples, and high-quality triples serve as in-context prompt examples, enabling knowledge correction by the large language model. Extensive experiments show that the proposed method significantly enhances the reasoning ability of knowledge graphs.
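The triple-quality step can be illustrated with a TransE-style embedding score, a common way to rank triple plausibility (this is a hedged sketch, not the paper's actual model; the two-dimensional embeddings below are hand-made toys, not trained vectors):

```python
# TransE-style triple scoring: a triple (h, r, t) is plausible when the
# head embedding plus the relation embedding lands near the tail embedding.
# Higher (less negative) score = more plausible; top-scoring triples would
# then be used as few-shot prompt examples for the repairing LLM.
import math

def transe_score(h, r, t):
    """Negative L2 distance of h + r from t; near 0 means plausible."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

emb = {
    "Paris":      [1.0, 0.0],
    "Berlin":     [3.0, 0.0],
    "France":     [1.0, 1.0],
    "capital_of": [0.0, 1.0],
}

triples = [("Paris", "capital_of", "France"), ("Berlin", "capital_of", "France")]
ranked = sorted(triples,
                key=lambda tr: transe_score(emb[tr[0]], emb[tr[1]], emb[tr[2]]),
                reverse=True)
print(ranked[0])  # the more plausible triple ranks first
```

In a real pipeline the embeddings come from a model trained on the graph, and the low-scoring tail of the ranking is what gets routed to the LLM for repair.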
Decision Path Based Sample Perturbation Approach for DNN Model Robustness Testing
WU Ji, NIE Yankai, CAO Hongyu, FAN Xiangyu, SUN Qing, YANG Haiyan
2025, 45(2): 72-82. doi: 10.12060/j.issn.1000-7202.2025.02.05
With the increasing complexity of the internal structure of deep neural networks (DNNs), it is difficult to gain an intuitive understanding of their internal operating mechanisms, which greatly increases the probability of model errors. An effective DNN robustness testing method is therefore needed to resolve the trust crisis around such models and to ensure the reliability and security of the software systems they serve. Most existing DNN robustness testing methods generate perturbation samples by targeting neuron coverage, without introducing further information about the model's internals; the generated samples therefore have a high degree of perturbation and large redundancy, which greatly limits their ability to improve model robustness. A new adversarial example generation method is proposed. First, a decision tree is constructed from the last convolutional layer of the model. A judgment path in the decision tree is regarded as a decision path, and each filter on the path is analyzed to find its impact factor. Finally, perturbed samples are generated according to the decision path and impact factors. Test results show that the generated test samples exhibit, on average, 78% less perturbation than those of the state-of-the-art fuzzing method DLFuzz, and the number of original samples successfully perturbed by the method is 27.7% higher on average.
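One ingredient of the method, the per-filter "impact factor", can be illustrated by ranking the filters of a last convolutional layer by activation strength (a hedged sketch only: the ranking rule, the `filter_impact_factors` helper, and the toy feature maps are illustrative, not the paper's exact definition, which is tied to its decision-tree construction):

```python
# Rank the filters of a (toy) last convolutional layer by mean absolute
# activation, so the top-ranked filters approximate a decision path whose
# filters a perturbation would target. Real implementations would use the
# trained model's actual feature maps for a given input.

def filter_impact_factors(feature_maps):
    """feature_maps: {filter_name: 2-D activation map as list of rows}.
    Returns filter names sorted by mean |activation|, strongest first."""
    def mean_abs(fmap):
        vals = [abs(v) for row in fmap for v in row]
        return sum(vals) / len(vals)
    return sorted(feature_maps, key=lambda f: mean_abs(feature_maps[f]), reverse=True)

maps = {
    "f0": [[0.1, -0.2], [0.0, 0.1]],
    "f1": [[1.5, 0.9], [1.1, 1.3]],
    "f2": [[0.4, 0.5], [0.3, 0.2]],
}
print(filter_impact_factors(maps))  # strongest filter first
```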
A Method for Constructing a Virtual Simulation Platform for Measuring UAV Perception and Decision-Making Capabilities
GENG Yuxuan, WANG Lihong, WANG Xiaoxiao, WANG Sitong, LU Yinan, WU Tieru, MA Rui
2025, 45(2): 83-90. doi: 10.12060/j.issn.1000-7202.2025.02.06
With the development of UAV technology, effectively measuring the intelligence level of UAVs has become an important issue. Traditional field measurement methods are costly, inefficient, and sensitive to environmental factors. Therefore, a virtual simulation platform for measuring UAV perception and decision-making capabilities was built on Unreal Engine and the AirSim platform, using the performance of UAV flight tasks in virtual environments to assess those capabilities. The platform simulates UAV flight tasks in complex environments with high-precision scenes and dynamic factors such as weather and traffic, supporting the testing and evaluation of various UAV perception and decision-making algorithms. Compared with field measurements, this method significantly reduces cost and time while enhancing flexibility. Test results show that the platform has significant advantages in interactivity and simulation effectiveness, providing effective support for optimizing and developing UAV algorithms, with broad application prospects and practical value.
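One plausible task-level metric such a platform could log is the deviation of a flown trajectory from its planned waypoints (a hedged sketch under assumed data: the 2-D points below are illustrative, not AirSim output, and the metric itself is a generic example rather than the paper's scoring scheme):

```python
# Mean deviation of a flown trajectory from its planned waypoints,
# as a simple proxy for decision-making quality on a flight task.
import math

def mean_path_deviation(planned, flown):
    """Mean Euclidean distance between corresponding planned/flown points."""
    dists = [math.dist(p, f) for p, f in zip(planned, flown)]
    return sum(dists) / len(dists)

planned = [(0, 0), (10, 0), (10, 10)]
flown   = [(0, 0), (10, 3), (10, 10)]  # drifted 3 m at the middle waypoint
print(mean_path_deviation(planned, flown))  # 1.0
```

In the real platform, the flown positions would come from the simulator's ground-truth kinematics, which is exactly the advantage of virtual measurement over field trials.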
Watermark Evaluation Research and Platform Construction for Artificial Intelligence
RONG Xianjin, WANG Yaofei, HU Donghui
2025, 45(2): 91-96. doi: 10.12060/j.issn.1000-7202.2025.02.07
Digital watermarking achieves data traceability by embedding unique identifiers in the data, which not only improves the reliability of the output of intelligent models but also enhances public trust in AI systems. However, current digital watermarking technology, especially in the AI field, lacks a scientific and unified evaluation process and specification. This work focuses on the application of image watermarking and audio watermarking in artificial intelligence: combining the actual usage scenarios of specific intelligent models, scientific evaluation indexes are designed and the standardization of evaluation is achieved. Meanwhile, an evaluation platform is constructed to achieve test automation and integrated evaluation, providing strong support for improving the traceability and application level of watermarking technology in AI model applications, which is of great significance for ensuring the safe and controllable development of artificial intelligence.
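One index that almost any watermark evaluation platform automates is bit accuracy: the fraction of embedded watermark bits recovered after an attack or transformation. A minimal sketch (the bit patterns below are made up for illustration):

```python
# Bit accuracy between an embedded and an extracted watermark payload,
# a standard robustness index for both image and audio watermarking.
def bit_accuracy(embedded, extracted):
    """Fraction of watermark bits recovered correctly."""
    matches = sum(a == b for a, b in zip(embedded, extracted))
    return matches / len(embedded)

wm_in  = [1, 0, 1, 1, 0, 0, 1, 0]
wm_out = [1, 0, 1, 0, 0, 0, 1, 1]  # two bits flipped by a simulated attack
print(bit_accuracy(wm_in, wm_out))  # 0.75
```

A platform would sweep this metric across attack types (compression, cropping, resampling) to produce the standardized robustness profile the abstract calls for.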
Metrology and Evaluation of Data for Artificial Intelligence
LIN Jie, SUN Jing, FENG Jianan, HU Jianyang, ZHANG Xiujian, JIN Peng
2025, 45(2): 97-102. doi: 10.12060/j.issn.1000-7202.2025.02.08
At present, artificial intelligence technology is booming: a variety of AI models and products are being launched at home and abroad, and artificial intelligence constantly influences people's lives. Data is one of the core factors in artificial intelligence, and the development of AI technology benefits from high-quality data. Therefore, the metrology and evaluation of data for artificial intelligence is an important precondition for achieving the legality, safety, and fairness of artificial intelligence. For the metrology and evaluation of data, legitimacy, authenticity, diversity, balance, data privacy protection and ethics, and data quantity are proposed as the fundamental contents of measurement and evaluation.
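Of the proposed contents, "balance" is the most directly computable. One possible index (an illustrative choice, not necessarily the paper's) is the normalized Shannon entropy of a dataset's label distribution, which is 1.0 for a perfectly balanced dataset and falls toward 0 as one class dominates:

```python
# Normalized Shannon entropy of a label distribution as a balance index.
# 1.0 = all classes equally represented; near 0 = one class dominates.
import math
from collections import Counter

def balance_index(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    if k < 2:
        return 0.0  # a single class carries no balance to measure
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(k)

print(balance_index(["cat", "dog", "cat", "dog"]))  # perfectly balanced: 1.0
print(balance_index(["cat"] * 9 + ["dog"]))         # heavily skewed: well below 1.0
```

Diversity and quantity admit similarly simple proxies (distinct-value counts, sample sizes per stratum), whereas legitimacy, privacy, and ethics require procedural rather than numerical evaluation.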