Most Read articles


    Published in last 1 year
    Research on Question Answering System Combining Knowledge Graph and Large Language Models
    ZHANG Heyi, WANG Xin, HAN Lifan, LI Zhao, CHEN Zirui, CHEN Zhe
    Journal of Frontiers of Computer Science and Technology    2023, 17 (10): 2377-2388.   DOI: 10.3778/j.issn.1673-9418.2308070
    The large language model (LLM), exemplified by ChatGPT, has shown outstanding performance in understanding and responding to human instructions, and has had a profound impact on natural language question answering (Q&A). However, lacking training in vertical domains, LLMs perform poorly there; moreover, their high hardware requirements make training and deployment difficult. To address these challenges, this paper takes the application of traditional Chinese medicine formulas as an example, collects domain-related data and preprocesses it, and designs a vertical-domain Q&A system based on an LLM and a knowledge graph. The system has the following capabilities: (1) Information filtering: filter out vertical-domain questions and pass them to the LLM for answering. (2) Professional Q&A: generate answers with more professional knowledge based on the LLM and a self-built knowledge base; compared with fine-tuning on professional data, this deploys a large vertical-domain model without retraining. (3) Extraction and conversion: by strengthening the information extraction ability of the LLM and exploiting its generated natural language responses, structured knowledge is extracted and matched against a professional knowledge graph for verification; conversely, structured knowledge can be converted into readable natural language, achieving a deep integration of large models and knowledge graphs. Finally, the system is demonstrated and its performance is verified from both subjective and objective perspectives through two experiments: subjective evaluation by experts and objective evaluation on multiple-choice questions.
    Abstract views: 3031 | PDF downloads: 2939
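The three capabilities described in the abstract above (information filtering, knowledge-grounded answering, prompt assembly) can be sketched as a minimal pipeline. All names and the toy knowledge base below are hypothetical illustrations, not the authors' implementation; a real system would send the assembled prompt to an LLM where indicated.

```python
# Minimal sketch of a vertical-domain Q&A pipeline combining a self-built
# knowledge base with an LLM. Hypothetical names; the LLM call is stubbed.

# Toy domain knowledge base: formula name -> ingredients (hypothetical data).
KG = {
    "guizhi tang": ["cassia twig", "peony root", "ginger", "jujube", "licorice"],
}

DOMAIN_KEYWORDS = {"formula", "decoction", "herb", "tang"}

def is_in_domain(question: str) -> bool:
    """Step 1: information filtering - keep only vertical-domain questions."""
    q = question.lower()
    return any(kw in q for kw in DOMAIN_KEYWORDS)

def retrieve_facts(question: str) -> list:
    """Step 2: ground the answer in the self-built knowledge base."""
    q = question.lower()
    return [f"{name}: {', '.join(parts)}"
            for name, parts in KG.items() if name in q]

def build_prompt(question: str, facts: list) -> str:
    """Step 3: assemble the prompt an LLM would receive (LLM call omitted)."""
    context = "\n".join(facts) if facts else "(no matching facts)"
    return f"Known facts:\n{context}\n\nQuestion: {question}\nAnswer:"

def answer(question: str) -> str:
    if not is_in_domain(question):
        return "out-of-domain: not forwarded to the LLM"
    return build_prompt(question, retrieve_facts(question))

prompt = answer("What herbs are in Guizhi Tang?")
```

The filtering step is what lets a small system avoid answering (and hallucinating on) questions outside its vertical domain.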
    Survey of Causal Inference for Knowledge Graphs and Large Language Models
    LI Yuan, MA Xinyu, YANG Guoli, ZHAO Huiqun, SONG Wei
    Journal of Frontiers of Computer Science and Technology    2023, 17 (10): 2358-2376.   DOI: 10.3778/j.issn.1673-9418.2307065
    In recent decades, causal inference has been a significant research topic in various fields, including statistics, computer science, education, public policy, and economics. Most causal inference methods focus on the analysis of sample observational data and text corpora. However, with the emergence of various knowledge graphs and large language models, causal inference tailored to knowledge graphs and large models has gradually become a research hotspot. In this paper, different causal inference methods are classified based on their orientation towards sample observational data, text data, knowledge graphs, and large language models. Within each classification, this paper provides a detailed analysis of classical research works, including their problem definitions, solution methods, contributions, and limitations. Additionally, this paper places particular emphasis on discussing recent advancements in the integration of causal inference methods with knowledge graphs and large language models. Various causal inference methods are analyzed and compared from the perspectives of efficiency and cost, and specific applications of knowledge graphs and large language models in causal inference tasks are summarized. Finally, future development directions of causal inference in combination with knowledge graphs and large models are prospected.
    Abstract views: 1168 | PDF downloads: 1377
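For the sample-observational-data setting the survey above starts from, the most basic causal quantity is the average treatment effect. A minimal difference-in-means sketch on hypothetical randomized data is shown below; with genuinely observational data this simple contrast is confounded and would need adjustment (e.g. via the backdoor criterion), which is exactly where the surveyed methods come in.

```python
# Difference-in-means estimate of the average treatment effect (ATE) on a
# toy randomized sample. (treatment, outcome) pairs - hypothetical data.

sample = [(1, 5.0), (1, 6.0), (1, 7.0), (0, 3.0), (0, 4.0), (0, 5.0)]

def ate_difference_in_means(data):
    treated = [y for t, y in data if t == 1]
    control = [y for t, y in data if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

ate = ate_difference_in_means(sample)  # mean(treated) - mean(control)
```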
    Review on Multi-label Classification
    LI Dongmei, YANG Yu, MENG Xianghao, ZHANG Xiaoping, SONG Chao, ZHAO Yufeng
    Journal of Frontiers of Computer Science and Technology    2023, 17 (11): 2529-2542.   DOI: 10.3778/j.issn.1673-9418.2303082
    Multi-label classification refers to the classification problem where multiple labels may coexist in a single sample. It has been widely applied in fields such as text classification, image classification, music and video classification. Unlike traditional single-label classification problems, multi-label classification problems become more complex due to the possible correlation or dependence among labels. In recent years, with the rapid development of deep learning technology, many multi-label classification methods combined with deep learning have gradually become a research hotspot. Therefore, this paper summarizes the multi-label classification methods from the traditional and deep learning-based perspectives, and analyzes the key ideas, representative models, and advantages and disadvantages of each method. In traditional multi-label classification methods, problem transformation methods and algorithm adaptation methods are introduced. In deep learning-based multi-label classification methods, the latest multi-label classification methods based on Transformer are reviewed particularly, which have become one of the mainstream methods to solve multi-label classification problems. Additionally, various multi-label classification datasets from different domains are introduced, and 15 evaluation metrics for multi-label classification are briefly analyzed. Finally, future work is discussed from the perspectives of multi-modal data multi-label classification, prompt learning-based multi-label classification, and imbalanced data multi-label classification, in order to further promote the development and application of multi-label classification.
    Abstract views: 1082 | PDF downloads: 954
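The problem-transformation family mentioned above can be illustrated with its simplest member, binary relevance: one independent score per label, thresholded to produce a label set. Hamming loss, one of the common evaluation metrics the review covers, is computed alongside. Scores and labels below are hypothetical.

```python
# Binary-relevance style multi-label prediction plus Hamming loss.

def predict_labels(scores, threshold=0.5):
    """One independent binary decision per label."""
    return [1 if s >= threshold else 0 for s in scores]

def hamming_loss(y_true, y_pred):
    """Fraction of label slots that disagree, over all samples."""
    total = sum(len(t) for t in y_true)
    wrong = sum(ti != pi for t, p in zip(y_true, y_pred)
                for ti, pi in zip(t, p))
    return wrong / total

# Hypothetical scores for 4 labels on two samples.
scores = [[0.9, 0.2, 0.7, 0.1], [0.3, 0.8, 0.6, 0.4]]
y_pred = [predict_labels(s) for s in scores]
y_true = [[1, 0, 1, 0], [0, 1, 0, 0]]
loss = hamming_loss(y_true, y_pred)   # one wrong slot out of eight
```

Binary relevance ignores label correlations entirely, which is precisely the limitation that motivates the classifier-chain and deep-learning methods the review surveys.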
    Survey of Fake News Detection with Multi-modal Learning
    LIU Hualing, CHEN Shanghui, CAO Shijie, ZHU Jianliang, REN Qingqing
    Journal of Frontiers of Computer Science and Technology    2023, 17 (9): 2015-2029.   DOI: 10.3778/j.issn.1673-9418.2301064
    While social media brings convenience, it has also become a channel for the unchecked spread of fake news, which, if not detected and stopped in time, can easily cause public panic and social unrest. Therefore, exploring accurate and efficient fake news detection technology has high theoretical value and practical significance. This paper provides a comprehensive overview of fake news detection techniques. Firstly, the relevant concepts of multi-modal fake news are sorted out, and the trend from single-modal to multi-modal news datasets is analyzed. Secondly, this paper introduces single-modal fake news detection techniques based on machine learning and deep learning, which have been widely used in the field. However, because fake news usually combines multiple forms of data, traditional single-modal techniques cannot fully uncover its deeper logic and thus cannot effectively handle the challenges posed by multi-modal fake news. To address this, this paper summarizes and discusses advanced multi-modal fake news detection techniques from the perspectives of multi-stream and graph architectures, and explores their ideas and potential drawbacks. Finally, this paper analyzes the difficulties and bottlenecks in current fake news detection research and provides future research directions based on these analyses.
    Abstract views: 847 | PDF downloads: 1015
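The multi-stream architectures discussed above can be reduced to their simplest form: separate text and image feature vectors, fused by concatenation, scored by one linear layer. The features and weights below are hypothetical placeholders for what trained encoders and a trained classifier would produce.

```python
# Sketch of a multi-stream (late-fusion) fake news detector.
import math

def fuse(text_feat, image_feat):
    """Multi-stream fusion by concatenation."""
    return text_feat + image_feat

def score_fake(features, weights, bias=0.0):
    """Linear layer + sigmoid -> probability the item is fake."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

text_feat = [0.8, -0.1]     # from a text encoder (hypothetical)
image_feat = [0.4, 0.3]     # from an image encoder (hypothetical)
weights = [1.0, 0.5, 0.5, -1.0]
p = score_fake(fuse(text_feat, image_feat), weights)
```

Graph-architecture detectors replace the concatenation step with message passing over a propagation graph, but the fuse-then-score skeleton is the same.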
    YOLOv8-VSC: Lightweight Algorithm for Strip Surface Defect Detection
    WANG Chunmei, LIU Huan
    Journal of Frontiers of Computer Science and Technology    2024, 18 (1): 151-160.   DOI: 10.3778/j.issn.1673-9418.2308060
    Currently, in the field of strip steel surface defect detection, general-purpose object detection algorithms are highly complex and computationally heavy, while the terminal equipment that performs detection in many small and medium-sized enterprises lacks strong computing power and has limited computational resources, making such algorithms hard to deploy. To solve this problem, this paper proposes a lightweight strip steel surface defect detection model, YOLOv8-VSC, based on the YOLOv8n object detection framework. It uses the lightweight VanillaNet network as the backbone feature extraction network and reduces model complexity by removing unnecessary branching structures. Meanwhile, the SPD module is introduced to speed up inference while reducing the number of network layers. To further improve detection accuracy, the lightweight up-sampling operator CARAFE is used in the feature fusion network to improve the quality and richness of the features. Finally, extensive experiments on the NEU-DET dataset yield a model with 1.96×10^6 parameters and 6.0 GFLOPs of computation, only 65.1% and 74.1% of the baseline, while mAP reaches 80.8%, an improvement of 1.8 percentage points over the baseline. In addition, experimental results on an aluminum surface defect dataset and the VOC2012 dataset show that the proposed algorithm is robust. Compared with advanced object detection algorithms, it requires fewer computational resources while maintaining high detection accuracy.
    Abstract views: 749 | PDF downloads: 641
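The percentages reported in the abstract above imply the YOLOv8n baseline figures, which is worth a back-of-envelope check:

```python
# Reported: YOLOv8-VSC has 1.96e6 parameters (65.1% of baseline) and
# 6.0 GFLOPs (74.1% of baseline); mAP 80.8% is +1.8 points over baseline.

vsc_params, vsc_gflops = 1.96e6, 6.0
baseline_params = vsc_params / 0.651    # implied baseline, ~3.01e6
baseline_gflops = vsc_gflops / 0.741    # implied baseline, ~8.1 GFLOPs
param_saving = 1 - 0.651                # ~34.9% fewer parameters
map_baseline = 80.8 - 1.8               # 79.0% mAP before the changes
```

These implied values are consistent with YOLOv8n's published size, which supports the paper's reported ratios.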
    Review of Attention Mechanisms in Image Processing
    QI Xuanhao, ZHI Min
    Journal of Frontiers of Computer Science and Technology    2024, 18 (2): 345-362.   DOI: 10.3778/j.issn.1673-9418.2305057
    The attention mechanism has become one of the most popular and important techniques in deep learning for image processing, and is widely used in deep learning models for image processing thanks to its plug-and-play convenience. By weighting input features, the attention mechanism focuses a model's attention on the most important regions, improving the accuracy and performance of image processing tasks. Firstly, this paper divides the development of attention mechanisms into four stages, and on this basis reviews and summarizes the research status and progress of four categories: channel attention, spatial attention, mixed channel-spatial attention, and self-attention. Secondly, it discusses in detail the core ideas, key structures, and concrete implementations of attention mechanisms, and summarizes the advantages and disadvantages of the models used. Finally, by comparing the current mainstream attention mechanisms and analyzing the results, this paper discusses the open problems of attention mechanisms in image processing and offers an outlook on their future development, so as to provide references for further research.
    Abstract views: 704 | PDF downloads: 490
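The channel-attention category surveyed above can be illustrated with a stripped-down squeeze-and-excitation step: global average pooling per channel ("squeeze"), a gating function (here just a sigmoid for brevity; real SE blocks use two learned fully-connected layers), then channel reweighting. The feature map below is a toy example.

```python
# Minimal SE-style channel attention on a tiny feature map.
import math

def channel_attention(feature_map):
    """feature_map: list of channels, each a 2D list (H x W)."""
    weights = []
    for channel in feature_map:
        # Squeeze: global average pool over the spatial dimensions.
        pooled = sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))
        # Excitation (simplified): sigmoid gate in place of learned FC layers.
        weights.append(1.0 / (1.0 + math.exp(-pooled)))
    # Reweight: scale every value in a channel by its gate.
    out = [[[v * w for v in row] for row in ch]
           for ch, w in zip(feature_map, weights)]
    return out, weights

fmap = [[[1.0, 1.0], [1.0, 1.0]],       # strong channel -> higher weight
        [[-1.0, -1.0], [-1.0, -1.0]]]   # weak channel   -> lower weight
out, w = channel_attention(fmap)
```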
    Named Entity Recognition Method of Large Language Model for Medical Question Answering System
    YANG Bo, SUN Xiaohu, DANG Jiayi, ZHAO Haiyan, JIN Zhi
    Journal of Frontiers of Computer Science and Technology    2023, 17 (10): 2389-2402.   DOI: 10.3778/j.issn.1673-9418.2307061
    In medical question answering systems, entity recognition plays a major role, and entity recognition based on deep learning has received increasing attention. However, owing to the lack of annotated training data, deep learning methods cannot reliably identify discontinuous and nested entities in medical text. Therefore, an entity recognition method based on a large language model is proposed and applied to a medical question answering system. Firstly, the dataset related to medical question answering is processed into text that a large language model can analyze. Secondly, the output of the large language model is classified, and each class is handled accordingly. Then, the input text is used for intent recognition, and finally the results of entity recognition and intent recognition are sent to the medical knowledge graph for querying, yielding the answer to the medical question. Experiments are performed on three typical datasets and compared with several representative related methods. The results show that the proposed method performs better.
    Abstract views: 664 | PDF downloads: 523
    Survey on Visual Transformer for Image Classification
    PENG Bin, BAI Jing, LI Wenjing, ZHENG Hu, MA Xiangyu
    Journal of Frontiers of Computer Science and Technology    2024, 18 (2): 320-344.   DOI: 10.3778/j.issn.1673-9418.2310092
    Transformer is a deep learning model based on the self-attention mechanism, showing tremendous potential in computer vision. In image classification tasks, the key challenge lies in efficiently and accurately capturing both local and global features of input images. Traditional approaches rely on convolutional neural networks to extract local features at the lower layers, expanding the receptive field through stacked convolutional layers to obtain global features. However, this strategy aggregates information over relatively short distances, making it difficult to model long-term dependencies. In contrast, the self-attention mechanism of Transformer directly compares features across all spatial positions, capturing long-range dependencies at both local and global levels and exhibiting stronger global modeling capabilities. Therefore, a thorough exploration of the challenges faced by Transformer in image classification tasks is crucial. Taking Vision Transformer as an example, this paper provides a detailed overview of the core principles and architecture of Transformer. It then focuses on image classification tasks, summarizing key issues and recent advancements in visual Transformer research related to performance enhancement, computational costs, and training optimization. Furthermore, applications of Transformer in specific domains such as medical imagery, remote sensing, and agricultural images are summarized, highlighting its versatility and generality. Finally, a comprehensive analysis of the research progress in visual Transformer for image classification is presented, offering insights into future directions for the development of visual Transformer.
    Abstract views: 635 | PDF downloads: 439
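The core operation the survey above credits with Transformer's global modeling ability — every patch directly compared with every other patch — is scaled dot-product self-attention. A minimal sketch on three toy "patch" vectors follows; for brevity Q = K = V = X here, whereas a real Vision Transformer applies learned projections first.

```python
# Scaled dot-product self-attention on toy token vectors.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    d = len(X[0])
    # All-pairs similarity scores, scaled by sqrt(d).
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
              for q in X]
    A = [softmax(row) for row in scores]   # attention weights; rows sum to 1
    # Each output token is a weighted mixture of ALL tokens, near or far.
    out = [[sum(A[i][j] * X[j][k] for j in range(len(X)))
            for k in range(d)] for i in range(len(X))]
    return out, A

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three "patch" embeddings
out, attn = self_attention(X)
```

Because every row of `attn` spans all positions, long-range dependencies cost no more than local ones — the contrast with stacked convolutions that the survey draws.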
    Deep Learning-Based Infrared and Visible Image Fusion: A Survey
    WANG Enlong, LI Jiawei, LEI Jia, ZHOU Shihua
    Journal of Frontiers of Computer Science and Technology    2024, 18 (4): 899-915.   DOI: 10.3778/j.issn.1673-9418.2306061
    How to preserve the complementary information in multiple images to represent the scene in one image is a challenging topic. Based on this topic, various image fusion methods have been proposed. As an important branch of image fusion, infrared and visible image fusion (IVIF) has a wide range of applications in segmentation, target detection and military reconnaissance fields. In recent years, deep learning has led the development direction of image fusion. Researchers have explored the field of IVIF using deep learning. Relevant experimental work has proven that applying deep learning to achieving IVIF has significant advantages compared with traditional methods. This paper provides a detailed analysis on the advanced algorithms for IVIF based on deep learning. Firstly, this paper reports on the current research status from the aspects of network architecture, method innovation, and limitations. Secondly, this paper introduces the commonly used datasets in IVIF methods and provides the definition of commonly used evaluation metrics in quantitative experiments. Qualitative and quantitative evaluation experiments of fusion and segmentation and fusion efficiency analysis experiments are conducted on some representative methods mentioned in the paper to comprehensively evaluate the performance of the methods. Finally, this paper provides conclusions and prospects for possible future research directions in the field.
    Abstract views: 616 | PDF downloads: 542
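Among the quantitative evaluation metrics the survey above defines, information entropy is one of the simplest no-reference measures: a fused image that retains more of the infrared and visible content tends to show higher entropy. A sketch on tiny flat pixel lists:

```python
# Shannon entropy of an image's gray-level distribution, a standard
# no-reference fusion quality metric.
import math
from collections import Counter

def entropy(pixels):
    """Entropy in bits of a flat list of integer pixel values."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat_img = [0, 0, 0, 0, 255, 255, 255, 255]     # two gray levels
rich_img = [0, 64, 128, 255, 0, 64, 128, 255]   # four gray levels
```

Entropy alone cannot detect artifacts, which is why fusion papers report it alongside reference-based metrics and, as here, task-level segmentation results.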
    Word Embedding Methods in Natural Language Processing: a Review
    ZENG Jun, WANG Ziwei, YU Yang, WEN Junhao, GAO Min
    Journal of Frontiers of Computer Science and Technology    2024, 18 (1): 24-43.   DOI: 10.3778/j.issn.1673-9418.2303056
    Word embedding, as the first step in natural language processing (NLP) tasks, aims to transform input natural language text into numerical vectors, known as word vectors or distributed representations, which artificial intelligence models can process. Word vectors, the foundation of NLP tasks, are a prerequisite for accomplishing various NLP downstream tasks. However, most existing review literature on word embedding methods focuses on the technical routes of different word embedding methods, neglecting comprehensive analysis of tokenization methods and the complete evolutionary trend of word embedding. This paper takes the introduction of the word2vec model and the Transformer model as pivotal points. From the perspective of whether generated word vectors can dynamically change their implicit semantic information to adapt to the overall semantics of input sentences, this paper categorizes word embedding methods into static and dynamic approaches and discusses this classification extensively. Simultaneously, it compares and analyzes tokenization methods in word embedding, including whole-word and sub-word segmentation. This paper also provides a detailed enumeration of the evolution of the language models used to train word vectors, progressing from probabilistic language models to neural probabilistic language models and on to today's deep contextual language models. Additionally, this paper summarizes and explores the training strategies employed in pre-training language models. Finally, this paper concludes with a summary of methods for evaluating word vector quality, an analysis of the current state of word embedding methods, and a prospective outlook on their development.
    Abstract views: 600 | PDF downloads: 433
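The static/dynamic distinction the review above draws can be made concrete: a static embedding is a fixed lookup table, so a word gets the same vector in every sentence, and similarity is typically measured by cosine distance. The tiny vectors below are hypothetical, not from any trained model.

```python
# Static word embeddings as a fixed lookup table + cosine similarity.
import math

EMBEDDINGS = {                      # hypothetical 3-d vectors
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

sim_royal = cosine(EMBEDDINGS["king"], EMBEDDINGS["queen"])
sim_fruit = cosine(EMBEDDINGS["king"], EMBEDDINGS["apple"])
```

A dynamic (contextual) model replaces the fixed table with a function of the whole sentence, so a polysemous word like "bank" would receive different vectors in different contexts — the shift the review centers on the Transformer.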
    Review of Research on Rolling Bearing Health Intelligent Monitoring and Fault Diagnosis Mechanism
    WANG Jing, XU Zhiwei, LIU Wenjing, WANG Yongsheng, LIU Limin
    Journal of Frontiers of Computer Science and Technology    2024, 18 (4): 878-898.   DOI: 10.3778/j.issn.1673-9418.2307005
    As one of the most critical and failure-prone parts of the mechanical systems of industrial equipment, bearings are subjected to high loads for long periods, and their failure or irreversible wear may cause accidents and huge economic losses. Effective health monitoring and fault diagnosis are therefore of great significance for the safe and stable operation of industrial equipment. To further promote the development of bearing health monitoring and fault diagnosis technology, existing models and methods are analyzed, summarized, categorized, and compared. Starting from the distribution of the vibration signal data used, the paper first reviews methods that assume uniformly distributed data, classifying and summarizing the current research into signal-analysis-based and data-driven approaches, and outlines the shortcomings of fault detection methods in this setting. Secondly, considering the uneven data acquisition of real working conditions, detection methods for such cases are summarized; the processing techniques in existing research are classified by focus into data processing methods, feature extraction methods, and model improvement methods, and the remaining problems are analyzed. Finally, the challenges and future development directions of bearing fault detection in industrial equipment are summarized and prospected.
    Abstract views: 574 | PDF downloads: 300
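The signal-analysis-based branch described above typically starts from simple statistical health indicators of the vibration record: RMS tracks overall energy, while kurtosis flags the impulsive spikes a localized bearing defect produces. The toy signal below (a smooth sine vs. the same sine with periodic impact spikes) is illustrative, not real bearing data.

```python
# Classical time-domain health indicators for a vibration signal.
import math

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

def kurtosis(x):
    """Fourth standardized moment; ~3 for Gaussian noise, 1.5 for a pure
    sine, and much higher for impulsive (defect-like) signals."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (m2 * m2)

healthy = [math.sin(2 * math.pi * i / 32) for i in range(256)]
faulty = [v + (5.0 if i % 64 == 0 else 0.0) for i, v in enumerate(healthy)]
```

Rising kurtosis with modest RMS change is a classic early-fault signature; data-driven methods learn richer versions of such features automatically.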
    Survey on Inductive Learning for Knowledge Graph Completion
    LIANG Xinyu, SI Guannan, LI Jianxin, TIAN Pengxin, AN Zhaoliang, ZHOU Fengyu
    Journal of Frontiers of Computer Science and Technology    2023, 17 (11): 2580-2604.   DOI: 10.3778/j.issn.1673-9418.2303063
    Knowledge graph completion makes a knowledge graph more complete. However, traditional knowledge graph completion methods assume that all test entities and relations appear during training. Because real-world knowledge graphs keep evolving, the appearance of unseen entities or relations forces such models to be retrained. Inductive learning for knowledge graph completion aims to complete triples containing unseen entities or relations without training the knowledge graph from scratch, and has therefore received much attention in recent years. Firstly, starting from the basic concepts of knowledge graphs, this paper divides knowledge graph completion into two categories: transductive and inductive. Secondly, from a theoretical perspective, inductive knowledge graph completion is divided into semi-inductive and fully-inductive settings, and models are summarized from this perspective. Then, from a technical perspective, it is divided into methods based on structural information and methods based on additional information. The former are subdivided into three categories: based on inductive embeddings, based on logical rules, and based on meta learning; the latter are subdivided into methods based on text information and on other information. Current methods are further subdivided, analyzed, and compared. Finally, the main future research directions are forecast.
    Abstract views: 534 | PDF downloads: 609
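The embedding-based scoring that transductive completion relies on, and that inductive methods must reproduce for unseen entities, can be sketched in TransE style: a triple (h, r, t) is plausible when the head embedding translated by the relation embedding lands near the tail, i.e. h + r ≈ t. The toy 2D embeddings below are illustrative only.

```python
# TransE-style triple scoring on toy embeddings.
import math

def score(h, r, t):
    """Negative Euclidean distance ||h + r - t||; higher = more plausible."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

E = {"paris": [1.0, 0.0], "france": [1.0, 1.0], "tokyo": [3.0, 0.0]}
R = {"capital_of": [0.0, 1.0]}

good = score(E["paris"], R["capital_of"], E["france"])   # exact fit
bad = score(E["tokyo"], R["capital_of"], E["france"])
```

The inductive problem is precisely that `E` has no row for an entity unseen in training; inductive-embedding, rule-based, and meta-learning methods differ in how they synthesize or bypass that missing row.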
    Survey on Blockchain-Based Cross-Domain Authentication for Internet of Things Terminals
    HUO Wei, ZHANG Qionglu, OU Wei, HAN Wenbao
    Journal of Frontiers of Computer Science and Technology    2023, 17 (9): 1995-2014.   DOI: 10.3778/j.issn.1673-9418.2211004
    Internet of things (IoT) devices are widely distributed, numerous, and complex, spanning multiple management domains. They often operate in uncontrollable environments and are more vulnerable to attack than traditional Internet terminals, so the security management and protection of IoT terminals face greater risks and challenges. As "the first line of defense" in protecting IoT devices, authentication plays an irreplaceable role in the development of IoT security. Blockchain technology is decentralized, distributed, immutable, and traceable; it can thus eliminate the single point of trust failure of trusted third parties and satisfy the principle of least authorization across heterogeneous domains in cross-domain authentication of IoT terminals, making it an important trend in the future development of IoT cross-domain authentication. This paper categorizes and summarizes the main research achievements of blockchain-based IoT cross-domain authentication in recent years into three categories: integrating traditional identity authentication mechanisms such as PKI and IBS/IBC, adopting inter-blockchain technology, and other blockchain-based cross-domain authentication technologies. It then analyzes the technical characteristics, advantages, and disadvantages of each scheme. On this basis, the current problems in the field are summarized, and future research directions and development suggestions for cross-domain authentication of IoT terminals are given, so as to provide a general and overall grasp of the research progress and development trend of blockchain-based IoT cross-domain authentication schemes.
    Abstract views: 516 | PDF downloads: 567
    Review of Research on 3D Reconstruction of Dynamic Scenes
    SUN Shuifa, TANG Yongheng, WANG Ben, DONG Fangmin, LI Xiaolong, CAI Jiacheng, WU Yirong
    Journal of Frontiers of Computer Science and Technology    2024, 18 (4): 831-860.   DOI: 10.3778/j.issn.1673-9418.2305016
    As static scene 3D reconstruction algorithms mature, dynamic scene 3D reconstruction has become a hot and challenging research topic in recent years. Existing static scene 3D reconstruction algorithms reconstruct stationary objects well, but when objects in the scene deform or move relative to one another, their results are not ideal. Developing research on 3D reconstruction of dynamic scenes is therefore essential. This paper first introduces the related concepts and basic knowledge of 3D reconstruction, as well as the research classification and current status of static and dynamic scene 3D reconstruction. Then, the latest research progress on dynamic scene 3D reconstruction is comprehensively summarized, and reconstruction algorithms are classified by data source into RGB-based and RGB-D-based dynamic 3D reconstruction. RGB-based methods are further divided into template-based reconstruction, reconstruction based on non-rigid structure from motion, and learning-based reconstruction. For RGB-D data sources, learning-based dynamic 3D reconstruction is mainly summarized, with typical examples introduced and compared. The applications of dynamic scene 3D reconstruction in medicine, intelligent manufacturing, virtual and augmented reality, and transportation are also discussed. Finally, future research directions for dynamic scene 3D reconstruction are proposed, with an outlook on this rapidly developing field.
    Abstract views: 499 | PDF downloads: 450
    Review of Attention Mechanisms in Reinforcement Learning
    XIA Qingfeng, XU Ke'er, LI Mingyang, HU Kai, SONG Lipeng, SONG Zhiqiang, SUN Ning
    Journal of Frontiers of Computer Science and Technology    2024, 18 (6): 1457-1475.   DOI: 10.3778/j.issn.1673-9418.2312006
    In recent years, the combination of reinforcement learning and attention mechanisms has attracted increasing attention in algorithm research, and attention mechanisms play an important role in improving the performance of reinforcement learning algorithms. This paper focuses on the development of attention mechanisms in deep reinforcement learning and examines their applications in multi-agent reinforcement learning. Firstly, the background and development of attention mechanisms and reinforcement learning are introduced, along with the relevant experimental platforms in this field. Secondly, classical algorithms of reinforcement learning and attention mechanisms are reviewed, and attention mechanisms are categorized from different perspectives. Thirdly, practical applications of attention mechanisms in reinforcement learning are sorted into three task types, fully cooperative, fully competitive, and mixed, with a focus on the multi-agent setting. Finally, the improvements that attention mechanisms bring to reinforcement learning algorithms are summarized, and the challenges and future prospects of this field are discussed.
    Abstract views: 444 | PDF downloads: 460
    Research Progress of Graph Neural Network in Knowledge Graph Construction and Application
    XU Xinran, WANG Tengyu, LU Cai
    Journal of Frontiers of Computer Science and Technology    2023, 17 (10): 2278-2299.   DOI: 10.3778/j.issn.1673-9418.2302059
    As an effective representation of knowledge, a knowledge graph can express rich factual information between different categories and serves as an effective knowledge management tool, with great results in the application and research of knowledge engineering and artificial intelligence. A knowledge graph is usually expressed as a complex network structure, and its unstructured characteristics have made applying graph neural networks to the analysis and study of knowledge graphs a research hotspot in academia. This paper surveys knowledge graph construction techniques based on graph neural networks for two types of construction tasks: knowledge extraction (entity, relation, and attribute extraction) and knowledge merging and processing (link prediction, entity alignment, knowledge reasoning, etc.). Through these tasks, the structure of a knowledge graph can be further improved and new knowledge and reasoning relationships discovered. This paper also reviews advanced graph neural network methods for knowledge-graph-related applications such as recommender systems, question answering systems, and computer vision. Finally, future research directions for knowledge graph applications based on graph neural networks are proposed.
    Abstract views: 426 | PDF downloads: 568
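The graph neural network operation underlying the construction and application tasks surveyed above is message passing over the graph's structure. A minimal sketch, with mean aggregation and a self-loop, on a three-node toy graph (real GNNs add learned weight matrices and nonlinearities between steps):

```python
# One round of mean-aggregation message passing on a tiny graph.

def message_passing_step(adj, features):
    """adj: node -> list of neighbor nodes; features: node -> float."""
    new_features = {}
    for node, neighbors in adj.items():
        # Pool the node's own feature with its neighbors' features.
        pool = [features[node]] + [features[n] for n in neighbors]
        new_features[node] = sum(pool) / len(pool)
    return new_features

# Tiny undirected graph: a - b - c.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
feat = {"a": 0.0, "b": 3.0, "c": 6.0}
feat1 = message_passing_step(adj, feat)
```

Stacking such steps lets information propagate over multi-hop paths, which is what powers link prediction and entity alignment on knowledge graphs.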
    Research Progress in Application of Deep Learning in Animal Behavior Analysis
    SHEN Tong, WANG Shuo, LI Meng, QIN Lunming
    Journal of Frontiers of Computer Science and Technology    2024, 18 (3): 612-626.   DOI: 10.3778/j.issn.1673-9418.2306033
    In recent years, animal behavior analysis has become one of the most important methods in the fields of neuroscience and artificial intelligence. Taking advantage of powerful deep-learning-based image analysis technology, researchers have developed state-of-the-art automatic animal behavior analysis methods with complex functions. Compared with traditional methods of animal behavior analysis, these methods require no special markers, can efficiently estimate and track animal pose, and work even in near-natural environments, holding potential for complex animal behavior experiments. Therefore, the application of deep learning in animal behavior analysis is reviewed. Firstly, this paper analyzes the tasks and current status of animal behavior analysis. Then, it highlights and compares existing deep-learning-based animal behavior analysis tools. According to the dimension of experimental analysis, these tools are divided into two-dimensional and three-dimensional animal behavior analysis tools, and their functions, performance, and scope of application are discussed. Furthermore, existing animal datasets and evaluation metrics are introduced, and the algorithmic mechanisms of existing tools are summarized in terms of advantages, limitations, and applicable scenarios. Finally, deep-learning-based animal behavior analysis tools are prospected from the aspects of datasets, experimental paradigms, and low latency.
    Abstract views: 425 | PDF downloads: 447
    Influence Evaluation of Telecom Fraud Case Types Based on ChatGPT
    PEI Bingsen, LI Xin, WU Yue
    Journal of Frontiers of Computer Science and Technology    2023, 17 (10): 2413-2425.   DOI: 10.3778/j.issn.1673-9418.2306044
    At present, telecommunications fraud crimes are on the rise, posing a serious threat to the safety of people's property. In order to optimize anti-fraud strategies, objectively and accurately analyze the trends and characteristics of different types of telecommunications fraud cases, and determine the most influential criminal methods, a ChatGPT-based telecommunications fraud case type impact assessment method is proposed. By utilizing a knowledge graph, the content of the case text is structured, and the methods of telecommunications fraud are quantified by taking the time of the incident, the amount involved, and the number of individuals involved as factors to evaluate the impact of the case. Firstly, ChatGPT is used to preprocess and extract knowledge from the text corpus of telecommunications fraud cases through multiple rounds of Q&A, in order to quickly construct a case knowledge graph in the field of telecommunications fraud with low resources. Based on the knowledge graph, various factors such as incident time, amount involved, and the number of involved parties are statistically analyzed, and the impact of different types of cases is abstracted into influencing factors. The influencing factors are used to depict the trends and characteristics of incidents and to conduct comprehensive analysis and judgment. This paper analyzes existing case data and calculates the impact factors of case types, obtaining the changes in the impact factors of different case types, verifying that the impact factor calculation method is scientific and effective, and providing a new method for the evaluation of telecommunications fraud types. Combining the advantages of ChatGPT and knowledge graphs helps to grasp the trend of case development and changes in a timely manner, provides strong support and guidance to combat telecommunications fraud, and is of great significance for protecting public property safety and social stability.
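    The paper combines incident time, amount involved, and number of people involved into an influence factor per case type; a minimal sketch of one such weighted combination is below. The function name, weights, normalization and recency decay are illustrative assumptions, not the paper's actual formula.

```python
# Hypothetical influence-factor calculation for one telecom-fraud case type,
# combining the three factors named in the abstract: incident time (recency),
# amount involved, and number of people involved. Weights are illustrative.
def influence_factor(cases, w_recency=0.3, w_amount=0.5, w_people=0.2):
    """cases: list of dicts with keys 'days_ago', 'amount', 'people'."""
    if not cases:
        return 0.0
    # Normalize amount and people counts to [0, 1] within this case type.
    max_amount = max(c["amount"] for c in cases) or 1
    max_people = max(c["people"] for c in cases) or 1
    score = 0.0
    for c in cases:
        recency = 1.0 / (1.0 + c["days_ago"] / 30.0)  # newer cases weigh more
        score += (w_recency * recency
                  + w_amount * c["amount"] / max_amount
                  + w_people * c["people"] / max_people)
    return score / len(cases)  # average influence per case

phishing = [{"days_ago": 5, "amount": 50000, "people": 3},
            {"days_ago": 60, "amount": 20000, "people": 1}]
print(round(influence_factor(phishing), 3))  # → 0.662
```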
    Improved YOLOv4-Tiny Lightweight Target Detection Algorithm
    HE Xiangjie, SONG Xiaoning
    Journal of Frontiers of Computer Science and Technology    2024, 18 (1): 138-150.   DOI: 10.3778/j.issn.1673-9418.2301034
    Object detection is an important branch of deep learning. A large number of edge devices need lightweight object detection algorithms, but existing lightweight general-purpose object detection algorithms suffer from low detection accuracy and slow detection speed. To solve this problem, an improved YOLOv4-Tiny algorithm based on attention mechanisms is proposed. The structure of the original backbone network of the YOLOv4-Tiny algorithm is adjusted, the ECA (efficient channel attention) mechanism is introduced, the traditional spatial pyramid pooling (SPP) structure is improved to a DC-SPP structure by using dilated convolution, and a CSATT (channel spatial attention) mechanism is proposed. The neck network of CSATT-PAN (channel spatial attention path aggregation network) is formed with the feature fusion network PAN, which improves the feature fusion capability of the network. Compared with the original YOLOv4-Tiny algorithm, the proposed YOLOv4-CSATT algorithm is significantly more sensitive to information and accurate in classification while the detection speed is basically the same. The accuracy is increased by 12.3 percentage points on the VOC dataset and 6.4 percentage points on the COCO dataset. Moreover, the accuracy is 3.3, 5.5, 6.3, 17.4, 10.3, 0.9 and 0.6 percentage points higher than that of the Faster R-CNN, SSD, EfficientDet-D1, YOLOv3-Tiny, YOLOv4-MobileNetv1, YOLOv4-MobileNetv2 and PP-YOLO algorithms respectively on the VOC dataset, and 2.8, 7.1, 4.2, 18.0, 12.2, 2.1 and 4.0 percentage points higher in recall rate, respectively, with an FPS of 94. In this paper, the CSATT attention mechanism is proposed to improve the model's ability to capture spatial channel information, and the ECA attention mechanism is combined with the feature fusion pyramid algorithm to improve the model's feature fusion ability and target detection accuracy.
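    The DC-SPP structure relies on dilated convolution to enlarge the receptive field without adding parameters; a minimal 1-D sketch of the operation (toy signal and kernel, not the paper's network) shows how the dilation rate spaces out the kernel taps.

```python
# Minimal sketch of dilated ("atrous") 1-D convolution: kernel taps are spaced
# `dilation` samples apart, so the receptive field grows while the number of
# kernel weights stays the same.
def dilated_conv1d(signal, kernel, dilation=1):
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[k] * signal[i + k * dilation]
                       for k in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(x, [1, 1, 1], dilation=1))  # sums of adjacent triples
print(dilated_conv1d(x, [1, 1, 1], dilation=2))  # same kernel, wider field
```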
    Review of Application of Generative Adversarial Networks in Image Restoration
    GONG Ying, XU Wentao, ZHAO Ce, WANG Binjun
    Journal of Frontiers of Computer Science and Technology    2024, 18 (3): 553-573.   DOI: 10.3778/j.issn.1673-9418.2307073
    With the rapid development of generative adversarial networks, many image restoration problems that are difficult to solve with traditional methods have gained new research approaches. With their powerful generative ability, generative adversarial networks can restore intact images from damaged ones, so they are widely used in image restoration. In order to summarize the relevant theories and research on using generative adversarial networks to repair damaged images in recent years, based on the categories of damaged images and the repair methods adapted to them, the applications of image restoration are divided into three main aspects: image inpainting, image deblurring, and image denoising. For each aspect, the applications are further subdivided by technical principles, application objects and other dimensions. For the field of image inpainting, different image completion methods based on generative adversarial networks are discussed from the perspectives of using conditional guidance and latent coding. For the field of image deblurring, the essential differences between motion-blurred images and static blurred images and their repair methods are explained. For the field of image denoising, personalized denoising methods for different categories of images are summarized. For each type of application, the characteristics of the specific GAN models employed are summarized. Finally, the advantages and disadvantages of GANs applied to image restoration are summarized, and future application scenarios are prospected.
    Survey of Research on SMOTE Type Algorithms
    WANG Xiaoxia, LI Leixiao, LIN Hao
    Journal of Frontiers of Computer Science and Technology    2024, 18 (5): 1135-1159.   DOI: 10.3778/j.issn.1673-9418.2309079
    Synthetic minority oversampling technique (SMOTE) has become one of the mainstream methods for dealing with unbalanced data due to its ability to effectively handle minority samples, and many improved SMOTE algorithms have been proposed, but little existing research considers the popular algorithmic-level improvement methods. Therefore, a more comprehensive analysis of existing SMOTE-type algorithms is provided. Firstly, the basic principles of the SMOTE method are elaborated in detail, and then the SMOTE-type algorithms are systematically analyzed at the data level and the algorithmic level, and new ideas for hybrid data-level and algorithmic-level improvement are introduced. Data-level improvement balances the data distribution by deleting or adding data through different operations during preprocessing; algorithmic-level improvement does not change the data distribution, and mainly strengthens the focus on minority samples by modifying or creating algorithms. Comparison between these two kinds of methods shows that data-level methods are less restricted in their application, while algorithmic-level improvements generally have higher robustness. In order to provide more comprehensive basic research material on SMOTE-type algorithms, this paper finally lists the commonly used datasets and evaluation metrics, and gives ideas for future research to better cope with the unbalanced data problem.
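    The basic SMOTE principle described above can be sketched in a few lines: each synthetic sample is a random interpolation between a minority point and one of its k nearest minority neighbours. This is a toy illustration, not a replacement for library implementations such as imbalanced-learn.

```python
import random

# Minimal SMOTE sketch (data-level oversampling): a synthetic minority sample
# is x_new = x + rand(0,1) * (neighbour - x), where `neighbour` is one of the
# k nearest minority neighbours of x.
def smote(minority, n_new, k=2, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sum((a - b) ** 2
                                              for a, b in zip(p, x)))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
new_points = smote(minority, n_new=4)
print(new_points)  # four points lying between existing minority samples
```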
    Overview of Cross-Chain Identity Authentication Based on DID
    BAI Yirui, TIAN Ning, LEI Hong, LIU Xuefeng, LU Xiang, ZHOU Yong
    Journal of Frontiers of Computer Science and Technology    2024, 18 (3): 597-611.   DOI: 10.3778/j.issn.1673-9418.2304003
    With the emergence of concepts such as the metaverse and Web3.0, blockchain plays a very important role in many fields. Cross-chain technology is an important technical means to achieve inter-chain interconnection and value transfer. At this stage, traditional cross-chain technologies such as notary schemes and sidechains have trust issues. At the same time, in the field of cross-chain identity authentication, there are problems that the identities of each chain are not unified and users do not have control over their own identities. Firstly, this paper systematically summarizes the development process and technical solutions of digital identity and cross-chain technology, and analyzes and compares four digital identity models and nine mainstream cross-chain projects. Secondly, by analyzing the main research results of cross-chain identity authentication in recent years, a general model of cross-chain identity authentication is designed, and the shortcomings of existing solutions are summarized. Then, it focuses on DID-based cross-chain identity authentication schemes, and analyzes the technical characteristics, advantages and disadvantages of different solutions. On this basis, three DID-based cross-chain identity authentication models are summarized, their main implementation steps are functionally described, and their advantages, limitations and efficiency are analyzed. Finally, in view of the shortcomings of current DID-based cross-chain identity authentication models, the development difficulties are discussed and five possible future research directions are given.
    MFFNet: Image Semantic Segmentation Network of Multi-level Feature Fusion
    WANG Yan, NAN Peiqi
    Journal of Frontiers of Computer Science and Technology    2024, 18 (3): 707-717.   DOI: 10.3778/j.issn.1673-9418.2209110
    In the task of image semantic segmentation, most methods do not make full use of features of different scales and levels, but directly upsample them, which causes some effective information to be discarded as redundant and thus reduces the segmentation accuracy and sensitivity for some small and similar categories. Therefore, a multi-level feature fusion network (MFFNet) is proposed. MFFNet uses an encoder-decoder structure. During the encoding stage, context information and spatial detail information are obtained through the context information extraction path and the spatial information extraction path respectively, to enhance inter-pixel correlation and boundary accuracy. During the decoding stage, a multi-level feature fusion path is designed: context information is fused by the mixed bilateral fusion module, deep information and spatial information are fused by the high-low feature fusion module, and the global channel-attention fusion module is used to capture the connections between different channels and realize global fusion of information at different scales. The MIoU (mean intersection over union) of the MFFNet network on the PASCAL VOC 2012 and Cityscapes validation sets is 80.70% and 76.33%, respectively, achieving better segmentation results.
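    MIoU, the metric reported above, can be computed per class as intersection over union of predicted and ground-truth pixels, then averaged; a minimal sketch over flattened label lists with toy data:

```python
# Minimal mean-IoU (MIoU) computation for semantic segmentation:
# per class c, IoU = |pred==c AND truth==c| / |pred==c OR truth==c|,
# averaged over the classes that appear.
def mean_iou(pred, truth, num_classes):
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

truth = [0, 0, 1, 1, 2, 2]   # flattened ground-truth labels (toy data)
pred  = [0, 0, 1, 2, 2, 2]   # flattened predictions
print(round(mean_iou(pred, truth, 3), 4))  # → 0.7222
```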
    Survey on Deep Learning in Oriented Object Detection in Remote Sensing Images
    LAN Xin, WU Song, FU Boyi, QIN Xiaolin
    Journal of Frontiers of Computer Science and Technology    2024, 18 (4): 861-877.   DOI: 10.3778/j.issn.1673-9418.2308031
    Objects in remote sensing images are often arbitrarily oriented and densely arranged, and thus they can be located and separated more precisely by using inclined bounding boxes in the object detection task. Nowadays, oriented object detection in remote sensing images has been widely applied in both civil and military defense fields, is of great significance in research and application, and has gradually become a research hotspot. This paper provides a systematic summary of oriented object detection methods in remote sensing images. Firstly, three widely used representations of inclined bounding boxes are summarized. Then, the main challenges faced in supervised learning are elaborated from four aspects: feature misalignment, boundary discontinuity, inconsistency between metric and loss, and oriented object location. Next, according to the motivations and improvement strategies of different methods, the main ideas, advantages and disadvantages of each algorithm are analyzed in detail, and the overall framework of oriented object detection in remote sensing images is summarized. Furthermore, the commonly used oriented object detection datasets in the remote sensing field are introduced. Experimental results of classical methods on different datasets are given, and the performance of different methods is evaluated. Finally, according to the challenges of applying deep learning to oriented object detection tasks in remote sensing images, future research trends in this direction are prospected.
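    One commonly used inclined-box representation is the five-parameter form (cx, cy, w, h, theta); a small sketch (assuming this representation, with a toy box) converts it to the four corner points:

```python
import math

# Convert a five-parameter oriented bounding box (cx, cy, w, h, theta) into
# its four corner points by rotating the axis-aligned corners around the
# centre. Theta is the rotation angle in radians.
def obb_corners(cx, cy, w, h, theta):
    c, s = math.cos(theta), math.sin(theta)
    corners = []
    for dx, dy in [(-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)]:
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners

# A 4x2 box centred at the origin, rotated by 90 degrees.
print(obb_corners(0, 0, 4, 2, math.pi / 2))
```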
    Differentiable Rule Extraction with Large Language Model for Knowledge Graph Reasoning
    PAN Yudai, ZHANG Lingling, CAI Zhongmin, ZHAO Tianzhe, WEI Bifan, LIU Jun
    Journal of Frontiers of Computer Science and Technology    2023, 17 (10): 2403-2412.   DOI: 10.3778/j.issn.1673-9418.2306049
    Knowledge graph (KG) reasoning aims to predict missing entities or relations in incomplete triples, complete structured knowledge, and support different downstream tasks. Different from widely studied black-box methods, such as those based on representation learning, methods based on rule extraction achieve an interpretable reasoning paradigm by generalizing first-order logic rules from the KG. To address the gap between the discrete symbolic space and the continuous embedding space, a differentiable rule extraction method based on a large pre-trained language model (DRaM) is proposed, which integrates discrete first-order logic rules with a continuous vector space. In view of the influence of atom sequences in first-order logic rules on the reasoning process, a large pre-trained language model is introduced to encode the reasoning process. The differentiable method DRaM, which integrates first-order logic rules, achieves good results in link prediction tasks on three knowledge graph datasets, Family, Kinship and UMLS, especially for the indicator Hits@10. Comprehensive experimental results show that DRaM can effectively solve the problems of differentiable reasoning on KGs, and can extract first-order logic rules with confidence scores from the reasoning process. DRaM not only enhances reasoning performance with the help of first-order logic rules, but also enhances the interpretability of the method.
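    The rule-based reasoning paradigm can be illustrated with a toy triple store: an extracted first-order rule is grounded against known triples, and each inferred triple inherits the rule's confidence score. The rule, entity names and confidence value below are made up for illustration and are not from the paper.

```python
# Grounding the toy rule grandparent(X, Z) <- parent(X, Y) AND parent(Y, Z),
# with confidence 0.9, against a small set of (head, tail) pairs for the
# relation `parent`. Each inferred triple carries the rule's confidence.
parent = {("alice", "bob"), ("bob", "carol")}

def apply_rule(body_rel, conf):
    inferred = {}
    for (x, y1) in body_rel:
        for (y2, z) in body_rel:
            if y1 == y2:                  # join on the shared variable Y
                inferred[(x, z)] = conf   # inferred triple with confidence
    return inferred

print(apply_rule(parent, 0.9))  # alice is inferred to be carol's grandparent
```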
    Review of Computing Offloading Schemes for Multi-access Edge Computing
    ZHANG Bingjie, YANG Yanhong, CAO Shaozhong
    Journal of Frontiers of Computer Science and Technology    2023, 17 (9): 2030-2046.   DOI: 10.3778/j.issn.1673-9418.2301068
    Under the background of the Internet of Things, the development of massive, large-scale machine communication has brought about explosive growth of data traffic. The traditional cloud computing model can no longer meet the needs of terminal data processing for low delay and low energy consumption. Multi-access edge computing (MEC), with distributed multi-nodes near the terminal side, is becoming the best choice to solve this problem. Computing offloading is the key technology of MEC; offloading performance is affected by many factors, and there is large room for optimization. How to design a high-performance computing offloading scheme has become the main research goal of scholars at home and abroad. This paper reviews the research on computing offloading schemes for MEC, introduces the concept of MEC, sorts out the development and application of MEC and the execution process of computing offloading, and analyzes and compares recent research methods of computing offloading. According to their different improvements, computing offloading schemes are summarized that take the offloading system environment, offloading delay, energy consumption of mobile devices and multiple evaluation indexes as optimization directions. The problems of resource allocation, universality and security in MEC-oriented computing offloading are presented, and future research directions are forecast based on these problems.
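    A computing offloading decision ultimately weighs local execution against transmission plus edge execution; a toy sketch under a simple weighted delay-energy cost model (all parameter names and values are illustrative, not taken from any surveyed scheme):

```python
# Toy binary offloading decision: run the task locally, or transmit it to the
# edge server, whichever minimises a weighted cost of delay and mobile-device
# energy. `cycles` is CPU cycles needed, `data_bits` the upload size.
def offload_decision(cycles, data_bits, f_local, f_edge, bandwidth,
                     p_compute, p_transmit, w_delay=0.5, w_energy=0.5):
    t_local = cycles / f_local
    local_cost = w_delay * t_local + w_energy * p_compute * t_local
    t_tx = data_bits / bandwidth
    t_edge = t_tx + cycles / f_edge          # transmission + remote execution
    edge_cost = w_delay * t_edge + w_energy * p_transmit * t_tx
    return ("edge", edge_cost) if edge_cost < local_cost else ("local", local_cost)

# A compute-heavy task with little data favours offloading to the edge.
print(offload_decision(cycles=2e9, data_bits=1e6, f_local=1e9, f_edge=10e9,
                       bandwidth=20e6, p_compute=0.8, p_transmit=0.3))
```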
    Deep Learning Compiler Load Balancing Optimization Method for Model Training
    WANG Li, GAO Kai, ZHAO Yaqian, LI Rengang, CAO Fang, GUO Zhenhua
    Journal of Frontiers of Computer Science and Technology    2024, 18 (1): 111-126.   DOI: 10.3778/j.issn.1673-9418.2209026
    For computing-intensive artificial intelligence (AI) training tasks, the computational graph is complex, and data loading, task division of the computational graph, and load balancing of task scheduling have become the key factors affecting computing performance. This paper proposes three optimization methods to bring the task scheduling of model training in deep learning compilers to a load-balanced state. Firstly, load balance between the CPU and back-end computing devices is realized by automatically establishing an efficient pipeline for data loading and model training, which improves the overall energy efficiency of the system. Secondly, the layered optimization technology of the computational graph is used to realize load balance of the computational graph when the back-end devices are scheduled. Finally, this paper improves the resource utilization of back-end devices by automatically establishing an efficient pipeline between layers. Experimental results show that the proposed optimization methods achieve system load balancing in the process of automatically mapping training tasks to the underlying hardware devices. Compared with traditional deep learning frameworks and compilers such as TensorFlow and nGraph, this paper achieves a 2% to 10% performance improvement in the training of different AI models, and the overall power consumption of the training system can be reduced by more than 10%.
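    The data-loading/training pipeline in the first optimization can be sketched as a bounded producer-consumer queue: a CPU thread prefetches batches while the consumer ("device") trains, so neither side idles. The batches and the "training step" below are dummies for illustration.

```python
import queue
import threading

# Producer: pretend the CPU preprocesses batches and pushes them into a
# bounded queue; a trailing None signals the end of the data stream.
def loader(q, n_batches):
    for i in range(n_batches):
        q.put([i] * 4)       # dummy batch
    q.put(None)              # sentinel: no more data

# Consumer: pretend the back-end device runs one training step per batch,
# overlapped with the producer's loading work.
def train(q):
    steps = 0
    while (batch := q.get()) is not None:
        steps += 1           # one "training step"
    return steps

q = queue.Queue(maxsize=2)   # bounded buffer balances producer and consumer
t = threading.Thread(target=loader, args=(q, 8))
t.start()
print(train(q))              # → 8
t.join()
```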
    Contrast Research of Representation Learning in Entity Alignment Based on Graph Neural Network
    PENG Huang, ZENG Weixin, ZHOU Jie, TANG Jiuyang, ZHAO Xiang
    Journal of Frontiers of Computer Science and Technology    2023, 17 (10): 2343-2357.   DOI: 10.3778/j.issn.1673-9418.2307053
    Entity alignment is an important step in knowledge fusion, which aims to identify equivalent entities in different knowledge graphs. In order to accurately determine equivalent entities, existing methods first perform representation learning to map the entities into a low-dimensional vector space, and then infer the equivalence of entities from the similarity between their vectors. Recent works on entity alignment focus on the improvement of representation learning methods. In order to better understand the mechanism of these models, mine valuable design directions, and provide reference for subsequent optimization and improvement work, this paper reviews the research on representation learning methods for entity alignment. Firstly, based on existing methods, a general framework for representation learning is proposed, and several representative works are summarized and analyzed. Then, these works are compared and analyzed through experiments, and the common methods for each module in the framework are compared. Based on the results, the advantages and disadvantages of various methods are summarized, and usage suggestions are put forward. Finally, the feasibility of the alignment and fusion of large language models and knowledge graphs is preliminarily discussed, and the existing problems and challenges are analyzed.
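    The shared inference step — map entities into one vector space, then match by similarity — can be sketched as nearest-neighbour cosine matching; the embeddings below are made-up toy vectors, not the output of any surveyed model.

```python
import math

# Nearest-neighbour cosine matching between entity embeddings of two KGs:
# for each entity of KG1, the most similar KG2 entity is predicted equivalent.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

kg1 = {"Paris": (0.9, 0.1), "Berlin": (0.1, 0.95)}          # toy embeddings
kg2 = {"Paris_fr": (0.88, 0.15), "Berlin_de": (0.05, 0.9)}  # toy embeddings

alignment = {e1: max(kg2, key=lambda e2: cosine(v1, kg2[e2]))
             for e1, v1 in kg1.items()}
print(alignment)
```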
    Survey of Transformer-Based Single Image Dehazing Methods
    ZHANG Kaili, WANG Anzhi, XIONG Yawei, LIU Yun
    Journal of Frontiers of Computer Science and Technology    2024, 18 (5): 1182-1196.   DOI: 10.3778/j.issn.1673-9418.2307103
    As a fundamental computer vision task, image dehazing aims to preprocess degraded images by restoring color contrast and texture information to improve visibility and image quality, so that clear images can be recovered for subsequent high-level visual tasks, such as object detection, tracking, and object segmentation. In recent years, neural network-based dehazing methods have achieved notable success, with a growing number of Transformer-based dehazing approaches being proposed. Up to now, however, a comprehensive review that thoroughly analyzes Transformer-based image dehazing algorithms has been lacking. To fill this gap, this paper comprehensively sorts out Transformer-based daytime, nighttime and remote sensing image dehazing algorithms, covering not only the fundamental principles of the various types of dehazing algorithms but also their applicability and performance in different scenarios. In addition, the commonly used datasets and evaluation metrics in image dehazing tasks are introduced. On this basis, the performance of existing representative dehazing algorithms is analyzed from both quantitative and qualitative perspectives, and typical dehazing algorithms are compared in terms of dehazing effect, running speed and resource consumption. Finally, the application scenarios of image dehazing technology are summarized, and the challenges and future development directions in the field of image dehazing are analyzed and prospected.
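    Most dehazing methods build on the atmospheric scattering model I(x) = J(x) * t(x) + A * (1 - t(x)); a one-pixel sketch of inverting it (toy intensity values, with the usual lower clamp on the transmission) looks like this:

```python
# Invert the atmospheric scattering model for a single pixel: given the
# observed hazy intensity i, estimated airlight a and transmission t, the
# clear radiance is J = (I - A) / t + A. Clamping t avoids amplifying noise.
def dehaze_pixel(i, a, t, t_min=0.1):
    t = max(t, t_min)
    return (i - a) / t + a

hazy = 0.7          # observed intensity of one pixel (toy value)
airlight = 0.9      # estimated global atmospheric light (toy value)
transmission = 0.5  # estimated transmission at this pixel (toy value)
print(round(dehaze_pixel(hazy, airlight, transmission), 4))  # → 0.5
```

Re-applying the forward model to the recovered radiance reproduces the hazy observation, which is a quick sanity check on the inversion.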
    Multi-strategy Improved Dung Beetle Optimizer and Its Application
    GUO Qin, ZHENG Qiaoxian
    Journal of Frontiers of Computer Science and Technology    2024, 18 (4): 930-946.   DOI: 10.3778/j.issn.1673-9418.2308020
    Dung beetle optimizer (DBO) is an intelligent optimization algorithm proposed in recent years. Like other optimization algorithms, DBO has disadvantages such as low convergence accuracy and a tendency to fall into local optima. A multi-strategy improved dung beetle optimizer (MIDBO) is proposed. Firstly, it improves the acceptance of local and global optimal solutions by brood balls and thieves, so that the beetles can change dynamically according to their own searching ability, which not only improves the population quality but also maintains the good searching ability of individuals with high fitness. Secondly, the follower position updating mechanism of the sparrow search algorithm is integrated to perturb the algorithm, and a greedy strategy is used to update positions, which improves the convergence accuracy of the algorithm. Finally, when the algorithm stagnates, a Cauchy-Gaussian mutation strategy is introduced to improve the algorithm's ability to jump out of local optima. Simulation experiments on 20 benchmark test functions and the CEC2019 test functions verify the effectiveness of the three improvement strategies. Convergence analysis of the optimization results of the improved algorithm and the comparison algorithms, together with the Wilcoxon rank-sum test, proves that MIDBO has good optimization performance and robustness. The validity and reliability of MIDBO in solving practical engineering problems are further verified by applying it to an automobile collision optimization problem.
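    The stagnation-escape step can be sketched as a Cauchy-Gaussian mutation of the current best solution, kept only under greedy acceptance: Cauchy noise provides large escape jumps, Gaussian noise local refinement. The mixing scheme and scale below are illustrative assumptions, not MIDBO's exact update.

```python
import math
import random

# Cauchy-Gaussian mutation with greedy acceptance (minimisation): perturb the
# best solution with mixed Cauchy + Gaussian noise and keep the mutant only
# if its fitness improves, so the incumbent never gets worse.
def cauchy_gauss_mutate(best, fitness, scale=0.5, seed=1):
    rng = random.Random(seed)
    cauchy = math.tan(math.pi * (rng.random() - 0.5))  # standard Cauchy draw
    gauss = rng.gauss(0.0, 1.0)
    candidate = [x * (1 + scale * (cauchy + gauss)) for x in best]
    return candidate if fitness(candidate) < fitness(best) else best

sphere = lambda v: sum(x * x for x in v)  # classic benchmark objective
best = [0.8, -0.6]
new_best = cauchy_gauss_mutate(best, sphere)
print(sphere(new_best) <= sphere(best))  # → True, by the greedy rule
```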
    Survey on Natural Scene Text Recognition Methods of Deep Learning
    ZENG Fanzhi, FENG Wenjie, ZHOU Yan
    Journal of Frontiers of Computer Science and Technology    2024, 18 (5): 1160-1181.   DOI: 10.3778/j.issn.1673-9418.2306024
    Natural scene text recognition holds significant value in both academic research and practical applications, making it one of the research hotspots in the field of computer vision. However, the recognition process faces challenges such as diverse text styles and complex background environments, leading to unsatisfactory efficiency and accuracy. Traditional text recognition methods based on manually designed features have limited representation capabilities, which are insufficient for effectively handling complex natural scene text recognition tasks. In recent years, significant progress has been made in natural scene text recognition by adopting deep learning methods. This paper systematically reviews the recent research work in this area. Firstly, natural scene text recognition methods are categorized into segmentation-based and non-segmentation-based approaches according to whether character segmentation is required. The non-segmentation-based methods are further subdivided according to their technical characteristics, and the working principles of the most representative methods in each category are described. Next, commonly used datasets and evaluation metrics are introduced, and the performance of various methods is compared on these datasets. The advantages and limitations of different approaches are discussed from multiple perspectives. Finally, the remaining shortcomings and challenges are given, and future development trends are put forward.
    Transformer Object Tracking Algorithm Based on Spatio-Temporal Template Update
    WANG Qiang, LU Xianling
    Journal of Frontiers of Computer Science and Technology    2023, 17 (9): 2161-2173.   DOI: 10.3778/j.issn.1673-9418.2208034
    Currently, mainstream Transformer tracking algorithms only use the Transformer for feature enhancement and feature fusion, ignoring the Transformer's feature extraction ability, and lack an effective template update strategy for disturbing factors such as scale change and deformation during tracking. Aiming at these problems, a Transformer tracking algorithm based on spatio-temporal template updating and bounding box refinement is proposed. Firstly, an improved Swin Transformer is used as the backbone network, and self-attention calculation and global information modeling are performed with shifted windows to enhance the feature extraction ability of the backbone network. Secondly, the Transformer encoder-decoder structure is used to fuse the template area and search area information, and the attention mechanism is used to establish feature correlation. At the same time, every fixed number of frames, the template is dynamically updated according to its confidence score to adjust the appearance state of the template during tracking. Finally, the bounding box refinement module is used to refine the regression range of the bounding box and improve the accuracy of the algorithm. Performance comparison experiments with mainstream advanced algorithms have been performed on multiple challenging datasets. The success rate and precision on the OTB2015 dataset reach 70.2% and 91.0%, respectively. The average overlap on the GOT-10k dataset is improved by 0.02 compared with the benchmark algorithm TransT, the success rate on the LaSOT dataset is increased by 0.024 compared with TransT, and the algorithm performs real-time tracking at 42 FPS.
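    The fixed-interval, confidence-gated template update can be sketched as below; the frame data, interval and threshold are toy values, and the actual template-matching step is elided.

```python
# Every `interval` frames, replace the tracking template with the current
# prediction only when its confidence score is high enough; low-confidence
# frames (occlusion, deformation) leave the template untouched.
def track(frames, interval=3, threshold=0.6):
    template, updates = frames[0]["patch"], []
    for i, f in enumerate(frames[1:], start=1):
        # ... match `template` against frame f to get (patch, confidence) ...
        if i % interval == 0 and f["conf"] >= threshold:
            template = f["patch"]       # dynamic spatio-temporal update
            updates.append(i)
    return template, updates

frames = [{"patch": "t0", "conf": 1.0}, {"patch": "t1", "conf": 0.9},
          {"patch": "t2", "conf": 0.9}, {"patch": "t3", "conf": 0.4},
          {"patch": "t4", "conf": 0.9}, {"patch": "t5", "conf": 0.9},
          {"patch": "t6", "conf": 0.8}]
print(track(frames))  # frame 3 is skipped (low confidence), frame 6 updates
```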
    Review of Application of Neural Networks in Epileptic Seizure Prediction
    HUANG Honghong, ZHANG Feng, LYU Liangfu, SI Xiaopeng
    Journal of Frontiers of Computer Science and Technology    2023, 17 (11): 2543-2556.   DOI: 10.3778/j.issn.1673-9418.2302001
    Epilepsy, a central nervous system disease caused by abnormal discharge of brain neurons, has a significant impact on patients’ normal life. Early prediction of epileptic seizures and timely preventive measures can effectively improve the quality of life of patients. With the development of data science and big data technology, neural networks are increasingly being applied in the field of epilepsy prediction and have shown great potential for application. This paper provides a review of the application and deficiencies of neural networks in the field of epilepsy prediction, discussing the construction process of epilepsy prediction models in the following order: datasets, data preprocessing, feature extraction, and neural networks. After introducing the characteristics of EEG signals, common types of datasets, common data preprocessing methods, and common feature extraction methods, especially manual feature extraction methods, this paper focuses on analyzing and summarizing the principles and applications of multi-layer artificial neural networks and spiking neural networks in the field of epilepsy prediction. The disadvantages of neural networks are systematically analyzed, and further application of neural networks in the field of epilepsy prediction is prospected.
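    A typical manual EEG feature of the kind discussed is band power; a sketch using a naive DFT on a synthetic two-tone signal (a real pipeline would use an FFT library on actual recordings):

```python
import math

# Band power of a sampled signal via a naive DFT: sum |X[k]|^2 / n over the
# frequency bins that fall inside [lo, hi] Hz.
def band_power(signal, fs, lo, hi):
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(-2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            im = sum(s * math.sin(-2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

fs = 128  # one second of "EEG" at 128 Hz: a 10 Hz alpha tone + weak 40 Hz tone
sig = [math.sin(2 * math.pi * 10 * i / fs)
       + 0.3 * math.sin(2 * math.pi * 40 * i / fs) for i in range(fs)]
alpha = band_power(sig, fs, 8, 13)
gamma = band_power(sig, fs, 35, 45)
print(alpha > gamma)  # → True: the alpha band dominates this signal
```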
    Survey of Multi-task Recommendation Algorithms
    WEN Minwei, MEI Hongyan, YUAN Fengyuan, ZHANG Xiaoyu, ZHANG Xing
    Journal of Frontiers of Computer Science and Technology    2024, 18 (2): 363-377.   DOI: 10.3778/j.issn.1673-9418.2303014
    Single-task recommendation algorithms have problems such as sparse data, cold start and unstable recommendation effect. Multi-task recommendation algorithms can jointly model multiple types of user behaviour data and additional information to better explore users’ interests and needs, improving the recommendation effect and user satisfaction, which provides a new way of thinking to solve the series of problems existing in single-task recommendation algorithms. Firstly, the development background and trends of multi-task recommendation algorithms are sorted out. Secondly, the implementation steps and construction principles of multi-task recommendation algorithms are introduced, and the advantages of multi-task learning in data enhancement, feature identification, feature complementation and regularization are elaborated. Then, the application of multi-task learning methods with different sharing models in recommendation algorithms is introduced, and the advantages and disadvantages of some classical models and the relationships between tasks are summarized. Next, the commonly used datasets and evaluation metrics for multi-task recommendation algorithms are introduced, and the differences and connections with other recommendation algorithms in terms of datasets and evaluation metrics are elaborated. Finally, it is pointed out that multi-task learning has shortcomings such as negative transfer, parameter optimization conflicts and poor interpretability, and an outlook is given on combining multi-task recommendation algorithms with reinforcement learning, convex optimization methods, and heterogeneous information networks.
    Survey of Image Adversarial Example Defense Techniques
    LIU Ruiqi, LI Hu, WANG Dongxia, ZHAO Chongyang, LI Boyu
    Journal of Frontiers of Computer Science and Technology    2023, 17 (12): 2827-2839.   DOI: 10.3778/j.issn.1673-9418.2303080
    The rapid and extensive growth of artificial intelligence introduces new security challenges. The generation of and defense against adversarial examples for deep neural networks is one of the hot spots. Deep neural networks are most widely used in the image field and are most easily fooled by image adversarial examples. Research on defense techniques against image adversarial examples is an important means to improve the security of AI applications. There is no standard explanation for the existence of image adversarial examples, but it can be observed and understood from different dimensions, which can provide insights for proposing targeted defense approaches. This paper sorts out and analyzes the current mainstream hypotheses for the existence of adversarial examples, such as the blind spot hypothesis, linear hypothesis, decision boundary hypothesis, and feature hypothesis, and the correlations between the various hypotheses and typical adversarial example generation methods. On this basis, this paper summarizes image adversarial example defense techniques in two dimensions, model-based and data-based, and compares and analyzes the applicable scenarios, advantages and disadvantages of different technical methods. Most existing image adversarial example defense techniques aim at defending against specific adversarial example generation methods, and there is no universal defense theory or method yet. In real applications, specific application scenarios, potential security risks and other factors need to be considered, and existing defense methods need to be optimized and combined accordingly. Future researchers can deepen technical research in terms of generalized defense theory, evaluation of defense effectiveness, and systematic protection strategies.
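    The linear hypothesis mentioned above explains why many tiny coordinate-wise perturbations fool a model: for a locally linear score f(x) = w.x, an FGSM-style step x' = x + eps * sign(w) shifts the score by eps * sum(|w_i|), so the per-pixel changes add up. A toy sketch with made-up weights:

```python
# FGSM on an explicitly linear "model": perturbing each input coordinate by
# eps in the direction of the corresponding weight's sign maximises the score
# change under an L-infinity budget of eps.
def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_linear(w, x, eps):
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.4, -0.2, 0.3, -0.5]       # toy "model" weights
x = [1.0, 1.0, 1.0, 1.0]
x_adv = fgsm_linear(w, x, eps=0.1)
# Analytic shift: 0.1 * (0.4 + 0.2 + 0.3 + 0.5) = 0.14
print(round(score(w, x), 2), round(score(w, x_adv), 2))  # → 0.0 0.14
```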
    Abstract views: 299 | PDF downloads: 226
    Survey on Solving Cold Start Problem in Recommendation Systems
    MAO Qian, XIE Weicheng, QIAO Yitian, HUANG Xiaolong, DONG Gang
    Journal of Frontiers of Computer Science and Technology    2024, 18 (5): 1197-1210.   DOI: 10.3778/j.issn.1673-9418.2308044
    Recommender systems provide important functions in areas such as dealing with information overload, providing personalized consulting services, and assisting clients in investment decisions. However, the cold start problem in recommender systems remains in urgent need of solutions and optimization. On this basis, this paper classifies the traditional and cutting-edge methods for solving the cold start problem and expounds the research progress and representative methods of recent years. Firstly, three traditional solutions to the cold start problem are summarized: content-based filtering recommendation, collaborative filtering recommendation, and hybrid recommendation. Secondly, the current cutting-edge recommendation algorithms for the cold start problem are summarized and classified into data-driven and method-driven strategies. The method-driven strategy is further divided into algorithms based on meta-learning, algorithms based on context information and session strategy, algorithms based on random walk, algorithms based on heterogeneous graph information and attribute graphs, and algorithms based on adversarial mechanisms. According to the type of cold start problem, the algorithms are also divided into two categories: new users and new items. Then, in view of the particularity of each recommendation field, the cold start problem of recommendation in the multimedia information field and the online e-commerce platform field is expounded. Finally, possible future research directions for solving the cold start problem are summarized.
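Content-based filtering, the first traditional remedy listed above, can recommend a brand-new item (no interaction history) by matching its content features against a user profile. A minimal cosine-similarity sketch under assumed toy data (the feature axes, item names, and profile are illustrative, not from the paper):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_for_user(user_profile, item_features, top_k=2):
    """Rank items, including cold-start ones, by similarity between
    their content features and the user profile."""
    scores = {item: cosine(user_profile, feats)
              for item, feats in item_features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Feature axes (illustrative): [action, romance, sci-fi]
user = np.array([0.9, 0.1, 0.8])               # built from past behaviour
items = {
    "new_item_a": np.array([1.0, 0.0, 0.9]),   # cold-start item, no ratings
    "old_item_b": np.array([0.0, 1.0, 0.1]),
    "old_item_c": np.array([0.5, 0.2, 0.4]),
}
ranking = recommend_for_user(user, items, top_k=3)
```

Because scoring relies only on item content, the new item competes on equal footing with items that already have interaction data.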
    Abstract views: 288 | PDF downloads: 232
    Survey of Research on Construction Method of Industry Internet Security Knowledge Graph
    CHANG Yu, WANG Gang, ZHU Peng, KONG Lingfei, HE Jingheng
    Journal of Frontiers of Computer Science and Technology    2024, 18 (2): 279-300.   DOI: 10.3778/j.issn.1673-9418.2304081
    The industry Internet security knowledge graph plays an important role in enriching the semantic relationships of security concepts, improving the quality of the security knowledge base, and enhancing the ability to visualize and analyze the security situation. It has become key to recognizing, tracing, and protecting against attacks targeting new energy industry control systems. However, compared with the construction of general-domain knowledge graphs, many problems remain in each stage of constructing the industry Internet security knowledge graph, which affect its practical application. This paper introduces the concept and significance of the industry Internet security knowledge graph and its differences from general knowledge graphs, and summarizes the related work on and role of ontology construction for the industry Internet security knowledge graph. Against the background of industry Internet security, it focuses on the related work on three important components of knowledge graph construction: named entity recognition, relation extraction, and coreference resolution. For each component, it reports in detail on the development history and research status of that component in the domain, and analyzes in depth the domain challenges it faces, such as discontinuous entity recognition, candidate word extraction, and the lack of high-quality domain datasets. It predicts future research directions for each component, providing reference and inspiration for further improving the quality and usefulness of the industry Internet security knowledge graph, so as to deal with emerging threats and attacks more effectively.
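Downstream of the entity recognition and relation extraction components surveyed above, the extracted facts are assembled into the graph itself. A minimal sketch of that assembly and querying step, using hypothetical security triples (the CVE identifiers, group name, and relation labels are invented for illustration):

```python
from collections import defaultdict

def build_graph(triples):
    """Assemble extracted (head, relation, tail) triples into an
    adjacency structure: head -> list of (relation, tail)."""
    graph = defaultdict(list)
    for head, rel, tail in triples:
        graph[head].append((rel, tail))
    return graph

def neighbors(graph, entity, relation=None):
    """Query outgoing edges of an entity, optionally filtered by
    relation type."""
    return [t for r, t in graph.get(entity, []) if relation in (None, r)]

# Illustrative triples, as they might come out of an NER +
# relation-extraction pipeline of the kind the survey describes.
triples = [
    ("CVE-2021-0001", "exploits", "PLC-firmware"),
    ("CVE-2021-0001", "targets", "wind-farm-SCADA"),
    ("APT-group-X", "uses", "CVE-2021-0001"),
]
kg = build_graph(triples)
```

Traversing such edges (e.g. from an attack group through its exploited vulnerabilities to the targeted systems) is what enables the attack tracing the abstract mentions.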
    Abstract views: 286 | PDF downloads: 336
    Knowledge Graph Completion Algorithm with Multi-view Contrastive Learning
    QIAO Zifeng, QIN Hongchao, HU Jingjing, LI Ronghua, WANG Guoren
    Journal of Frontiers of Computer Science and Technology    2024, 18 (4): 1001-1009.   DOI: 10.3778/j.issn.1673-9418.2301038
    Knowledge graph completion is the process of inferring new triples from the existing entities and relations in a knowledge graph. Existing methods usually use an encoder-decoder framework: the encoder uses a graph convolutional neural network to obtain the embeddings of entities and relations, and the decoder calculates a score for each candidate tail entity from the embeddings of the entities and relations, with the highest-scoring tail entity taken as the inference result. However, the decoder infers each triple independently, without considering graph information. Therefore, this paper proposes a graph completion algorithm based on contrastive learning, adding a multi-view contrastive learning framework to the model to constrain the embedded information at the graph level. The comparison of multiple views in the model constructs different distribution spaces for relations, and these relation distributions fit each other, which is more suitable for the completion task. Contrastive learning constrains the embedding vectors of entities and subgraphs and enhances the performance of the task. Experiments are carried out on two datasets. The results show that MRR is improved by 12.6% over A2N and 0.8% over InteractE on the FB15k-237 dataset, and by 7.3% over A2N and 4.3% over InteractE on the WN18RR dataset. Experimental results demonstrate that this model outperforms other completion methods.
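The decoder step described above, scoring every candidate tail entity from the head and relation embeddings, can be sketched with a DistMult-style decoder (one common choice, not necessarily the paper's; the embeddings here are hand-picked placeholders rather than trained vectors):

```python
import numpy as np

def score_tails(head_emb, rel_emb, tail_embs):
    """DistMult-style decoder: score(h, r, t) = sum(h * r * t),
    computed for every candidate tail entity at once."""
    return tail_embs @ (head_emb * rel_emb)

def predict_tail(head, rel, candidates, emb):
    """Return the highest-scoring candidate tail entity."""
    tails = np.stack([emb[c] for c in candidates])
    scores = score_tails(emb[head], emb[rel], tails)
    return candidates[int(np.argmax(scores))]

# Placeholder embeddings (a trained graph encoder would supply these).
emb = {
    "h":      np.array([1.0, 0.0, 1.0, 0.0]),
    "r":      np.array([1.0, 1.0, 1.0, 1.0]),
    "t_good": np.array([1.0, 0.0, 1.0, 0.0]),
    "t_bad":  np.array([0.0, 1.0, 0.0, 1.0]),
}
best = predict_tail("h", "r", ["t_good", "t_bad"], emb)
```

Each call scores one (head, relation) query in isolation, which illustrates the independence limitation the paper's graph-level contrastive constraint is meant to address.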
    Abstract views: 283 | PDF downloads: 309
    Dual Features Local-Global Attention Model with BERT for Aspect Sentiment Analysis
    LI Jin, XIA Hongbin, LIU Yuan
    Journal of Frontiers of Computer Science and Technology    2024, 18 (1): 205-216.   DOI: 10.3778/j.issn.1673-9418.2210012
    Aspect-based sentiment analysis aims to predict the sentiment polarity of a specific aspect in a sentence or document. Most recent research uses attention mechanisms to model the context. However, when the BERT model is used to calculate the dependencies between representations to extract features for sentiment classification, contextual information must be weighed differently in different contexts, so the modelled features lack contextual knowledge; moreover, the importance of aspect words is not given enough attention, affecting the overall classification performance of the model. To address these problems, this paper proposes a dual features local-global attention model with BERT (DFLGA-BERT). Local and global feature extraction modules are designed to fully capture the semantic association between aspect words and context. Moreover, DFLGA-BERT uses an improved quasi-attention mechanism that allows negative ("minus") attention in attention fusion, weakening the effect of textual noise on classification. A feature fusion structure based on conditional layer normalization (CLN) is designed to better integrate local and global features. Experiments are conducted on the SentiHood and SemEval 2014 Task 4 datasets. Experimental results show that the performance of the proposed model is significantly improved compared with the baselines after incorporating contextual features.
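The local-versus-global distinction above can be illustrated with plain scaled dot-product attention, where the aspect embedding is the query and a boolean mask restricts attention to a local window. This is a generic sketch of the idea, not DFLGA-BERT's actual modules; all vectors are toy values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aspect_attention(aspect_q, context, local_mask=None):
    """Weigh context tokens by similarity to the aspect query.
    With `local_mask` (a boolean array), attention is restricted to a
    local window around the aspect -- the 'local' counterpart to
    unmasked 'global' attention."""
    scores = context @ aspect_q / np.sqrt(aspect_q.size)
    if local_mask is not None:
        scores = np.where(local_mask, scores, -1e9)  # hide masked tokens
    weights = softmax(scores)
    return weights @ context, weights

# Three context token embeddings and one aspect query (illustrative).
context = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
aspect_q = np.array([1.0, 0.0])
_, global_w = aspect_attention(aspect_q, context)
_, local_w = aspect_attention(aspect_q, context,
                              local_mask=np.array([True, True, False]))
```

A model can then fuse the two pooled vectors (local and global) before classification, which is the role CLN-based fusion plays in the paper.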
    Abstract views: 277 | PDF downloads: 250
    Construction and Application of Knowledge Graph for Water Engineering Scheduling Based on Large Language Model
    FENG Jun, CHANG Yanghong, LU Jiamin, TANG Hailin, LYU Zhipeng, QIU Yuchun
    Journal of Frontiers of Computer Science and Technology    2024, 18 (6): 1637-1647.   DOI: 10.3778/j.issn.1673-9418.2311098
    With the growth of water conservancy and the increasing demand for information, handling and representing large volumes of water-related data has become complex. In particular, scheduling text data often exist in natural language form, lacking clear structure and standardization, and processing and utilizing such diverse data requires extensive domain knowledge and professional expertise. To tackle this challenge, a method based on a large language model is proposed to construct a knowledge graph for water engineering scheduling. The approach collects and preprocesses scheduling rule data at the data layer, leverages large language models to extract the embedded knowledge, constructs the ontology at the conceptual layer, and applies a "three-step" prompt strategy for extraction at the instance layer. Through the interaction of the data, conceptual, and instance layers, high-performance extraction from rule texts is achieved, and the construction of the dataset and knowledge graph is completed. Experimental results show that the F1 score of the proposed extraction method reaches 85.5%, and the effectiveness and rationality of the large language model modules are validated through ablation experiments. The graph integrates dispersed water conservancy rule information, effectively handles unstructured text data, and offers visualization querying and traceability. It helps professionals assess water conditions and select appropriate scheduling schemes, providing valuable support for water conservancy decision-making and intelligent reasoning.
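A stepwise prompt pipeline of the kind described above can be sketched as prompt construction plus parsing of the model's structured response. This is a schematic only: the prompt wording, the triple format, and the `fake_llm` stub (standing in for a real LLM call) are all invented for illustration, as is the example rule text:

```python
import re

PROMPT_TEMPLATE = """Step 1: identify the entities in the scheduling rule.
Step 2: identify the relations between those entities.
Step 3: output one (head | relation | tail) triple per line.

Rule text: {rule}"""

def parse_triples(llm_response):
    """Parse '(head | relation | tail)' lines from a model response."""
    pattern = re.compile(r"\(([^|]+)\|([^|]+)\|([^|)]+)\)")
    return [tuple(part.strip() for part in m.groups())
            for m in pattern.finditer(llm_response)]

def fake_llm(prompt):
    """Stub standing in for the real LLM call (hypothetical output)."""
    return ("(Reservoir A | discharges-to | Channel B)\n"
            "(Channel B | feeds | Pumping Station C)")

rule = ("When the level of Reservoir A exceeds 12 m, release water into "
        "Channel B, which feeds Pumping Station C.")
triples = parse_triples(fake_llm(PROMPT_TEMPLATE.format(rule=rule)))
```

The parsed triples would then be validated against the conceptual-layer ontology before being loaded into the graph.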
    Abstract views: 277 | PDF downloads: 270