Scientific Research Content in Our Journal

    Multiscale Difference Feature Enhancement Network for Remote Sensing Change Detection
    WANG Jie, JIANG Fusong, JIANG Peng
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2401057
    Accepted: 14 March 2024

    A Review of Unsupervised Learning Gait Recognition
    CHEN Fushi, SHEN Yao, ZHOU Chichun, DING Meng, LI Juhao, ZHAO Dongyue, LEI Yongsheng, PAN Yilun
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2311049
    Accepted: 14 March 2024

    Fast multi-view clustering with sparse matrix and improved normalized cut
    YANG Mingrui, ZHOU Shibing, WANG Xi, SONG Wei
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2309037
    Accepted: 15 December 2023

    No-reference low-light image enhancement with enhanced feature map
    YUAN Heng, WANG Xiaoxue, ZHANG Shengchong
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2308052
    Accepted: 05 December 2023

    Identity-Based Linkable Ring Signature from Lattice in Random Oracle Model
    XIE Jia, WANG Lu, LIU Shizhao, GAO Juntao, WANG Baocang
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2310057
    Accepted: 08 December 2023

    Research on Distributed V2V Computing Offloading Method for Internet of Vehicles Blockchain
    MENG Zhen, REN Guanyu, WAN Jianxiong, LI Leixiao
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2307081
    Accepted: 08 December 2023

    Multi-agent self-organizing cooperative hunting in non-convex environment with improved MADDPG algorithm
    ZHANG Hongqiang, SHI Jiahang, WU Lianghong, WANG Xi, ZUO Cili, CHEN Zuguo, LIU Zhaohua, CHEN Lei
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2310040
    Accepted: 08 December 2023

    Multimodal Unsupervised Entity Alignment Approach with Progressive Strategies
    MA He, WANG Hangrong, WANG Yiyan, SUN Chong, ZHOU Beijing
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2310100
    Accepted: 11 December 2023

    Question Feature Enhanced Knowledge Tracing Model
    XU Zhihong, ZHANG Huibin, DONG Yongfeng, WANG Liqin, WANG Xu
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2308086
    Accepted: 30 November 2023

    Multimodal Sentiment Analysis Based on Cross-modal Semantic Information Enhancement
    LI Mengyun, ZHANG Jing, ZHANG Huanxiang, ZHANG Xiaolin, LIU Luyao
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2307045
    Accepted: 20 October 2023

    Operation System Vulnerabilities Analysis Based on Code Clone Detection
    WANG Zhe, REN Yi, ZHOU Kai, GUAN Jianbo, TAN Yusong
    Journal of Frontiers of Computer Science and Technology    2021, 15 (9): 1619-1631.   DOI: 10.3778/j.issn.1673-9418.2008083

    Software vulnerability detection based on code clone detection technology is an important direction in the static analysis of software vulnerabilities. Existing software vulnerability detection tools fall short when detecting vulnerabilities in large-scale code sets, and they lack optimization for the vulnerability characteristics of operating systems. Therefore, based on code clone detection technology, this paper proposes a method for detecting operating system vulnerabilities. Firstly, on top of the general "code representation - feature extraction - feature comparison" detection process, a pre-screening mechanism based on the type of operating system software package and the function code size is added to exclude most irrelevant code before code representation is performed. Secondly, the basic information of the function, the label sequence, and the control flow path are selected to extract code features, and the similarity between the vulnerable code and the code under test is compared step by step. Finally, experiments are conducted on typical open-source operating systems with vulnerable samples obtained from a public vulnerability database. The results show that pre-screening effectively reduces the code size of the test subjects, and the average accuracy of the detection results reaches 84%.
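    The feature-comparison step described above can be illustrated, very roughly, by comparing token-bigram sets of two functions. This is a generic clone-similarity sketch (Jaccard over token bigrams), not the paper's actual feature set:

    ```python
    def token_bigrams(code):
        """Split code into whitespace tokens and collect adjacent-token bigrams."""
        tokens = code.split()
        return set(zip(tokens, tokens[1:]))

    def clone_similarity(vulnerable_src, candidate_src):
        """Jaccard similarity of token bigrams: 1.0 means identical token streams."""
        a, b = token_bigrams(vulnerable_src), token_bigrams(candidate_src)
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)
    ```

    A real detector would compare richer features (function metadata, control-flow paths) in the same spirit: score the candidate against each known-vulnerable fragment and flag high-similarity matches.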

    Multi-aspect Semantic Trajectory Similarity Computation Model
    CAI Mingxin, SUN Jing, WANG Bin
    Journal of Frontiers of Computer Science and Technology    2021, 15 (9): 1632-1640.   DOI: 10.3778/j.issn.1673-9418.2008095

    The development of mobile devices enables trajectory data to record more useful information, such as check-in information and activity information, constituting semantic trajectory data. Fast and effective trajectory similarity computation greatly benefits downstream analysis. Scholars have studied trajectory similarity and semantic trajectory similarity and have proposed some effective methods. However, existing trajectory similarity computation methods cannot be applied to semantic trajectory data, and current semantic trajectory similarity computation methods do not work well when the trajectory sampling frequency is low. In this paper, building on a solution to the sensitivity of trajectory similarity computation to low sampling frequency, and combining it with the additional visited-point information of semantic trajectories, a new trajectory similarity computation model called multi-aspect semantic trajectory (MAST) is proposed. The model is based on LSTM (long short-term memory) and introduces a self-attention mechanism. A learned trajectory is expressed as multiple low-dimensional vectors covering different aspects of the trajectory, forming a matrix, thereby solving the problem that a single vector cannot accurately express a trajectory. This matrix contains not only the spatial information of the trajectory but also its semantic information, and can be used to calculate semantic trajectory similarity. MAST is tested on two realistic semantic trajectory datasets. Experimental data show that MAST is superior to existing methods.
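    The idea of representing one trajectory as a matrix of several aspect vectors can be sketched with a self-attentive pooling layer over the per-point hidden states. The weight shapes and random inputs below are illustrative assumptions, not MAST's actual architecture:

    ```python
    import numpy as np

    def multi_aspect_embedding(H, W1, W2):
        """Self-attentive pooling: turn a sequence of hidden states H
        (n_steps x d) into r aspect vectors M (r x d)."""
        A = W2 @ np.tanh(W1 @ H.T)                # (r x n_steps) raw scores
        A = np.exp(A - A.max(axis=1, keepdims=True))
        A = A / A.sum(axis=1, keepdims=True)      # row-wise softmax over time steps
        return A @ H                              # each row: one "aspect" of the trajectory

    rng = np.random.default_rng(0)
    H = rng.normal(size=(20, 8))    # 20 trajectory points, 8-dim hidden states (toy)
    W1 = rng.normal(size=(16, 8))
    W2 = rng.normal(size=(4, 16))   # r = 4 aspects
    M = multi_aspect_embedding(H, W1, W2)
    ```

    Similarity between two trajectories would then be computed between their aspect matrices rather than between two single vectors.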

    SWAM: Workload Automatic Mapper for SNN
    YU Gongjian, ZHANG Lufei, LI Peiqi, HUA Xia, LIU Jiahang, CHAI Zhilei, CHEN Wenjie
    Journal of Frontiers of Computer Science and Technology    2021, 15 (9): 1641-1657.   DOI: 10.3778/j.issn.1673-9418.2010056

    In order to meet the computing requirements of large-scale spiking neural networks (SNN), neuromorphic computing systems usually need to adopt large-scale parallel computing platforms. Therefore, how to quickly determine a reasonable number of computing nodes for an SNN workload (that is, how to properly map the workload to the computing platform) so as to obtain the best performance, power consumption and other indicators has become one of the key issues a neuromorphic computing system needs to solve. Firstly, this paper analyzes SNN workload characteristics and establishes a computational model for them. Then, for the NEST simulator, this paper further instantiates the SNN load model for storage, computation and communication. Finally, this paper designs and implements a NEST-based workload automatic mapper for SNN (SWAM). SWAM can automatically calculate the mapping result and complete the mapping, avoiding the extremely time-consuming manual trial-and-error process of workload mapping. Typical SNN applications are run on three different computing platforms (ARM+FPGA, ARM, and PC clusters), and the mapping results of SWAM, of Levenberg-Marquardt (LM) curve fitting, and of direct measurement are compared. Experimental results show that the average mapping accuracy of SWAM reaches 98.833%. Compared with the LM method and measured mapping, SWAM has an absolute advantage in time cost.

    Recommendation System for Medical Consultation Integrating Knowledge Graph and Deep Learning Methods
    WU Jiawei, SUN Yanchun
    Journal of Frontiers of Computer Science and Technology    2021, 15 (8): 1432-1440.   DOI: 10.3778/j.issn.1673-9418.2101029

    In recent years, with the popularization of the Internet and technologies such as big data analysis, the demand for mobile medical services has become increasingly urgent; users mainly want to ascertain their diseases based on symptoms and then choose hospitals and doctors with good service quality for those diseases. To tackle these problems, this paper designs and implements a recommendation system for medical consultation based on a knowledge graph and deep learning. Using open data on the Internet, a "disease-symptom" knowledge graph is constructed. Given a symptom description, a disease candidate set is built to help users self-diagnose. To improve the accuracy of disease diagnosis, a vector representation of the entities in the knowledge graph is trained with a knowledge graph embedding model. The disease candidate set is then expanded by selecting the disease entities with the shortest Euclidean distance to the diseases in the set. Combining the two, a disease diagnosis service is provided. To recommend hospitals and doctors, this paper uses a deep learning model on open media data and combines it with existing quality evaluation indicators for medical services to score doctors' multi-dimensional service quality automatically. Finally, this paper verifies the accuracy of the disease diagnosis service and the doctor recommendation service by constructing test sets and designing questionnaires; they reach 74.00% and 90.91%, respectively.
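    The candidate-set expansion step (adding the nearest entities in embedding space) might look like the following sketch. The toy embeddings and the helper name `expand_candidates` are illustrative assumptions, not the system's code:

    ```python
    import numpy as np

    def expand_candidates(candidates, embeddings, k=1):
        """For each candidate disease, add its k nearest other entities
        by Euclidean distance in the embedding space."""
        expanded = set(candidates)
        for c in candidates:
            dists = {e: np.linalg.norm(embeddings[e] - embeddings[c])
                     for e in embeddings if e != c}
            expanded.update(sorted(dists, key=dists.get)[:k])
        return expanded

    # Toy 2-D embeddings: "cold" sits close to "flu", "fracture" far away.
    emb = {"flu": np.array([0.0, 0.0]),
           "cold": np.array([0.1, 0.0]),
           "fracture": np.array([5.0, 5.0])}
    ```

    With these toy vectors, `expand_candidates({"flu"}, emb)` adds "cold" but not "fracture".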

    Research on Community Division Method Under Network Formal Context
    LIU Wenxing, FAN Min, LI Jinhai
    Journal of Frontiers of Computer Science and Technology    2021, 15 (8): 1441-1449.   DOI: 10.3778/j.issn.1673-9418.2006072

    Network community division is the basis of concept cognition and pattern learning in social networks, and is also a hot topic in the study of machine learning under the network background. In order to make full use of the advantages of formal concepts and network characteristic values, this paper discusses the problem of network community division based on network formal contexts. It first gives the notions of network node centrality and centralization based on the information of the network structure and node attributes, so that community division in a network formal context takes into account both the characteristics of the network structure and the connotation of the nodes. Then, the network community concept of a network formal context is presented. It not only contains the formal concept of a traditional formal context, but also includes the network characteristic values of the concept. As a result, the average importance of a concept in the network can be described, as well as the differences between these average importances. Furthermore, considering the characteristics of multiple roles and network orientation in the division of a social network, directed networks are divided into single-role networks and double-role networks. In addition, two network community division algorithms are proposed by combining the information of the network structure and node attributes, and their time complexities are analyzed. Finally, examples are used to show the effectiveness of the proposed network community division algorithms. The obtained results can provide a reference for the further study of network data mining and network concept cognition.

    DNN Intrusion Detection Model Based on DT and PCA
    WU Xiaodong, LIU Jinghao, JIN Jie, MAO Siping
    Journal of Frontiers of Computer Science and Technology    2021, 15 (8): 1450-1458.   DOI: 10.3778/j.issn.1673-9418.2007045

    Intrusion detection is an important field, but problems such as a high false alarm rate, low detection rate, slow processing speed and high feature dimensionality plague experts and scholars in this field. To address these problems, this paper proposes an intrusion detection model, DT-PCA-DNN, combining DT (decision tree), PCA (principal component analysis) and DNN (deep neural networks) to improve the processing speed of the IDS (intrusion detection system) while maintaining a relatively high detection rate and a relatively low false alarm rate. In order to reduce the overall data volume and speed up processing, DT is used to make a preliminary judgment on the data. The data judged as intrusions by DT are stored in a temporary sample set to optimize DT and DNN, while the data judged as normal are reduced in dimensionality by PCA and then passed to DNN for a secondary judgment. If the DT structure is too deep, too much normal data will be judged as intrusion data, which would prevent the subsequent DNN processing from effectively improving overall accuracy, so DT uses a shallow structure. The DNN uses the ReLU activation function, which simplifies the computation of the neural network, and the Adam optimization algorithm, which converges faster, to speed up data processing. Binary and multi-class classification experiments on the NSL-KDD dataset show that, compared with other intrusion detection methods that use deep learning, this model achieves a relatively high detection rate with a faster detection speed, effectively addressing the real-time requirements of intrusion detection.
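    The three-stage flow (shallow DT filter, then PCA reduction, then a DNN second pass) can be sketched with stand-ins. The threshold rule, the toy data, and the final threshold classifier below replace the real tree and network purely to show the pipeline shape:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 6))
    y = (X[:, 0] > 0).astype(int)           # toy labels: 1 = "intrusion"

    # Stage 1: a shallow, DT-like rule cheaply flags obvious intrusions.
    flagged = X[:, 0] > 1.0                  # stands in for a depth-limited tree

    # Stage 2: PCA (via SVD) reduces the remaining "normal-looking" traffic.
    Xn = X[~flagged]
    Xc = Xn - Xn.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Xr = Xc @ Vt[:2].T                       # keep the top-2 principal components

    # Stage 3: a second classifier re-judges the reduced data (the paper uses
    # a ReLU/Adam DNN; a trivial threshold stands in for it here).
    second_pass = (Xr[:, 0] > 0).astype(int)
    ```

    The point of the staging is that only traffic surviving the cheap first filter pays the cost of dimensionality reduction and the heavier second classifier.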

    Spatio-Temporal Correlation Based Adaptive Feature Learning of Tracking Object
    GUO Mingzhe, CAI Zixin, WANG Xinyue, JING Liping, YU Jian
    Journal of Frontiers of Computer Science and Technology    2021, 15 (6): 1049-1061.   DOI: 10.3778/j.issn.1673-9418.2007002

    Object tracking has been a difficult problem in the field of computer vision in recent years. The core task is to continuously locate an object in a video sequence and mark its location with bounding boxes. Most existing tracking methods adopt the idea of object detection and separate the video sequence frame by frame to detect the target independently. Although this strategy makes full use of the current frame's information, it ignores the spatio-temporal correlation information among frames. However, this spatio-temporal correlation information is the key to adapting to changes in the target's appearance and to fully representing the target. To solve this problem, this paper proposes a spatio-temporal siamese network (STSiam) based on spatio-temporal correlation. STSiam uses spatio-temporal correlation information for target locating and real-time tracking in two stages: object localization and object representation. In the object localization stage, STSiam adaptively captures the features of the target and its surrounding area, and updates the target matching template to ensure that it is not affected by appearance changes. In the object representation stage, STSiam pays attention to the spatial correlation information between corresponding regions in different frames. Using the object localization results, STSiam locates the target area and learns the target bounding box correction parameters to ensure that the bounding box fits the target as closely as possible. The model's network architecture is trained offline, and there is no need to update model parameters during online tracking, ensuring real-time tracking speed. Extensive experiments on visual tracking benchmarks including OTB2015, VOT2016, VOT2018 and LaSOT demonstrate that STSiam achieves state-of-the-art performance in terms of accuracy, robustness and speed compared with existing methods.

    Robust Auto-weighted Multi-view Subspace Clustering
    FAN Ruidong, HOU Chenping
    Journal of Frontiers of Computer Science and Technology    2021, 15 (6): 1062-1073.   DOI: 10.3778/j.issn.1673-9418.2007003

    As the ability to collect and store data improves, real data are usually made up of different forms (views). Therefore, multi-view learning plays an increasingly important role in the fields of machine learning and pattern recognition. In recent years, a variety of multi-view learning methods have been proposed and applied to different practical scenarios. However, since most data points contribute squared residuals to the objective function, a few outliers with large errors can easily invalidate the objective function, and how to deal with such data becomes an important challenge for multi-view learning. To solve this problem, this paper proposes a model termed robust auto-weighted multi-view subspace clustering. The model uses the Frobenius norm to handle the squared error of ordinary data points and the ℓ1-norm to handle outliers at the same time, so that the effects of outliers and ordinary data points on model performance are effectively balanced. Furthermore, unlike traditional methods that measure the impact of different views by introducing hyper-parameters, the proposed model learns the weight of each view automatically. Since the model is a non-smooth and non-convex problem that is difficult to solve directly, this paper designs an effective algorithm to solve it and analyzes the algorithm's convergence and computational complexity. Compared with traditional multi-view subspace clustering algorithms, experimental results on multi-view datasets demonstrate the effectiveness of the proposed algorithm.
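    The motivation for mixing the Frobenius norm with the ℓ1-norm can be seen in a tiny numeric example (illustrative numbers only): a single outlier dominates a squared loss but stays proportional under ℓ1:

    ```python
    import numpy as np

    residuals = np.array([0.1, -0.2, 0.15, 8.0])   # last entry: one outlier

    frob_loss = np.sum(residuals ** 2)    # squared (Frobenius-style) error
    l1_loss = np.sum(np.abs(residuals))   # l1 error, used for outliers

    # The outlier contributes 64 of the ~64.07 squared loss, so it would
    # dominate any fit; under the l1 norm it contributes only 8 of 8.45,
    # which is why combining the two norms balances ordinary points
    # against outliers.
    ```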

    SCVerify: Verification of Software Implementation Against Power Side-Channel Attacks
    ZHANG Jun
    Journal of Frontiers of Computer Science and Technology    2021, 15 (6): 1074-1083.   DOI: 10.3778/j.issn.1673-9418.2002047

    Power side-channel attacks have become a serious threat to embedded computing devices in cyber-physical systems because of their ability to deduce secret data using statistical analysis. A common strategy for designing countermeasures against power-analysis-based side-channel attacks uses random masking techniques to remove the statistical dependency between secret data and side-channel information. Although existing techniques can verify whether a piece of cryptographic software code is perfectly masked, they are limited in accuracy and scalability. To eliminate these limitations, a refinement-based method for verifying masking countermeasures is proposed. This method is more accurate than prior type-inference-based approaches and more scalable than prior model-counting-based approaches using satisfiability (SAT) or satisfiability modulo theories (SMT) solvers. Specifically, the method uses a set of semantic type-inference rules to reason about distribution types. These rules are kept abstract initially to allow fast deduction, and are then refined when the abstract version cannot resolve the verification problem. The method is implemented in a software tool called SCVerify and is evaluated on cryptographic benchmarks including the advanced encryption standard (AES) and the message authentication code Keccak (MAC-Keccak). The experimental results show that the method significantly outperforms state-of-the-art techniques in terms of accuracy and scalability.

    User Behavior Analysis with RNN and Graph Neural Networks
    WANG Xiaodong, ZHAO Yining, XIAO Haili, WANG Xiaoning, CHI Xuebin
    Journal of Frontiers of Computer Science and Technology    2021, 15 (5): 838-847.   DOI: 10.3778/j.issn.1673-9418.2005018

    With the increasing volume of logs produced by nodes in CNGrid (China National Grid), traditional manual methods for user behavior analysis can no longer meet the needs of daily analysis. In recent years, deep learning has shown good results in key tasks related to computer science, such as intrusion detection, image recognition, natural language processing and malware detection. This paper demonstrates how to apply deep learning models to user behavior analysis. To this end, this paper classifies user behavior in CNGrid and extracts a large number of session-bounded user operation sequences, which are fed into deep learning models. This paper proposes a deep learning model that combines a recurrent neural network (RNN) with a graph neural network (GNN) to predict user behavior. The graph neural network can capture the hidden state of a user's local behavior, so it can be used for preprocessing, while the recurrent neural network can capture temporal sequence information; the model combines GNN and RNN to acquire both advantages. To verify the effectiveness of the model, this paper conducts experiments on real user behavior datasets from CNGrid and compares the model with a variety of other methods. Experimental results demonstrate the effectiveness of this novel deep learning model.

    Joint Optimization Scheme of Resource Allocation and Offloading Decision in Mobile Edge Computing
    LIU Jijun, ZOU Shanhua, LU Xianling
    Journal of Frontiers of Computer Science and Technology    2021, 15 (5): 848-858.   DOI: 10.3778/j.issn.1673-9418.2006087

    Considering the high processing delay and energy consumption experienced by users in mobile edge computing (MEC), a joint optimization scheme of resource allocation and offloading decision based on a "cloud-edge-end" three-tier MEC computation offloading structure is proposed. Firstly, considering system delay and energy consumption, an optimization problem is formulated to maximize the users' task offloading gain, measured by a weighted sum of the reductions in tasks' relative processing delay and energy consumption. Secondly, priorities are set for users' tasks, and the offloading decision is initialized according to the data size of the tasks. Then, a channel allocation algorithm that balances transmission performance is proposed to allocate channel resources to offloading tasks. For tasks offloaded to the same edge server, the optimal allocation of computing resources is achieved by competing for resources with the goal of maximizing resource profit. Finally, based on game theory, the optimization problem is proven to admit a potential function over the offloading decisions, that is, a Nash equilibrium exists, and an iterative method that compares gain values is used to reach the offloading decision under Nash equilibrium. Simulation results show that the proposed joint optimization scheme achieves the maximum total system gain while meeting users' processing delay requirements, and effectively improves computation offloading performance.
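    The "iterate best responses until a Nash equilibrium" idea can be sketched as a toy congestion game: each user offloads only if the offload cost, which grows with the number of offloaders, beats its local cost. The linear cost model and parameter names are assumptions, not the paper's system model:

    ```python
    def best_response_offloading(local_cost, base_offload_cost, congestion):
        """Iterate best responses in a toy potential game until no user
        wants to switch between local execution and offloading (a Nash
        equilibrium, which a potential game is guaranteed to reach)."""
        n = len(local_cost)
        offload = [False] * n
        changed = True
        while changed:
            changed = False
            for i in range(n):
                k = sum(offload) - offload[i]           # other offloaders
                cost_off = base_offload_cost + congestion * (k + 1)
                better = cost_off < local_cost[i]       # best response of user i
                if better != offload[i]:
                    offload[i] = better
                    changed = True
        return offload
    ```

    For example, with local costs [10, 10, 1], base offload cost 2 and congestion factor 3, the first two users offload while the third (cheap locally) does not.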

    Double Cuckoo Search Algorithm with Dynamically Adjusted Probability
    CHEN Cheng, HE Xingshi, YANG Xinshe
    Journal of Frontiers of Computer Science and Technology    2021, 15 (5): 859-880.   DOI: 10.3778/j.issn.1673-9418.2004031

    The cuckoo search algorithm is an emerging bionic intelligent algorithm that suffers from low search precision, a tendency to fall into local optima, and slow convergence. A double cuckoo search algorithm with dynamically adjusted probability (DECS) is proposed. Firstly, population distribution entropy is introduced into the adaptive discovery probability P, whose value changes dynamically with the iteration count and the population distribution; this helps balance the algorithm's local and global optimization abilities and accelerates convergence. Secondly, in the nest position update formula, a new step-size factor update and optimization method is adopted to form a double search mode with Levy flights, which searches the solution space sufficiently. Finally, a nonlinear logarithmically decreasing inertia weight is introduced into the update formula of the random preference walk, so that the algorithm effectively overcomes the shortcoming of being easily trapped in local optima and improves its search ability. Compared with four other algorithms, simulation results on 19 test functions show that the optimization performance of the improved cuckoo algorithm is significantly better: convergence is faster, solution accuracy is higher, and the algorithm has a stronger ability to search globally and jump out of local optima.
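    The Levy-flight search step at the core of cuckoo search is standard; below is a minimal sketch using Mantegna's algorithm. The step scale `alpha` and `beta = 1.5` are conventional defaults, not DECS's tuned values:

    ```python
    import math
    import random

    def levy_step(beta=1.5):
        """One Levy-flight step via Mantegna's algorithm:
        step = u / |v|**(1/beta), u ~ N(0, sigma_u^2), v ~ N(0, 1)."""
        sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                   / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
                   ) ** (1 / beta)
        u = random.gauss(0, sigma_u)
        v = random.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    def new_nest(x, best, alpha=0.01, beta=1.5):
        """Move one nest relative to the current best with a Levy-distributed
        step; most steps are small, with occasional long jumps."""
        return [xi + alpha * levy_step(beta) * (xi - bi)
                for xi, bi in zip(x, best)]
    ```

    The heavy-tailed step distribution is what lets the search occasionally make long jumps out of a local optimum while mostly refining locally.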

    Method of Code Features Automated Extraction
    SHI Zhicheng, ZHOU Yu
    Journal of Frontiers of Computer Science and Technology    2021, 15 (3): 456-467.   DOI: 10.3778/j.issn.1673-9418.2005048

    The application of neural networks in software engineering has greatly eased the burden of extracting code features manually. Previous code feature extraction models usually regard code as natural language or depend heavily on the domain knowledge of experts. Treating code as natural language is too simplistic and can easily cause information loss, whereas models with heuristic rules designed by experts are usually too complicated and lack expansibility and generalization. To address these problems, this paper proposes a model based on a convolutional neural network and a recurrent neural network that extracts code features through the abstract syntax tree (AST). To solve the gradient vanishing problem caused by the huge size of ASTs, this paper splits each AST into a sequence of small ASTs and feeds these trees into the model. The model uses the convolutional neural network and the recurrent neural network to extract structural information and sequence information respectively. The whole procedure needs no domain knowledge from experts to guide training; the model automatically learns how to extract features from code labeled with classifications. This paper uses a similar-code search task to test the performance of the trained encoder: the Top1, NDCG and MRR metrics are 0.560, 0.679 and 0.638 respectively. Compared with recent state-of-the-art deep learning feature extraction models and common similar-code detection tools, the proposed model has significant advantages.

    Dynamic Matrix Clustering Method for Time Series Events
    MA Ruiqiang, SONG Baoyan, DING Linlin, WANG Junlu
    Journal of Frontiers of Computer Science and Technology    2021, 15 (3): 468-477.   DOI: 10.3778/j.issn.1673-9418.2008094

    Time series event clustering is the basis of studying event classification and mining analysis. Most existing clustering methods directly target continuous events with time attributes and complex structure, without considering a transformation of the clustering objects; hence the accuracy of clustering is extremely low and the efficiency is limited. In response to these problems, a dynamic matrix clustering method for time series events, RDMC, is proposed. Firstly, an r-nearest-neighbor evaluation system is established to measure the representativeness of each event according to its evaluation value, and the candidate set of RDS (representative and diversifying sequences) is constructed by a backward difference calculation strategy over the nearest-neighbor scores. Secondly, an RDS selection method based on combinatorial optimization is proposed to quickly obtain the optimal RDS from the candidate set. Finally, on the basis of dynamically constructing the distance matrix between the RDS and the dataset, a matrix clustering method based on K-means is proposed to realize an effective division of time series events. Experimental results show that, compared with existing methods, the proposed method has obvious advantages in clustering accuracy, reliability, and efficiency.
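    The final step, K-means over rows of the event-to-RDS distance matrix, can be sketched generically (plain K-means on toy data, not RDMC's tuned procedure; here each row of X stands for one event's vector of distances to the representative sequences):

    ```python
    import numpy as np

    def kmeans(X, k, iters=50, seed=0):
        """Plain K-means on the rows of X."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            # assign each row to its nearest center
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            # move each center to the mean of its assigned rows
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels

    # Two tight groups of "distance rows": rows 0-1 vs rows 2-3.
    X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
    labels = kmeans(X, 2)
    ```

    Clustering the distance matrix instead of the raw events is what lets a generic algorithm like K-means operate on events with complex structure.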

    Social Network Information Diffusion Method with Support of Privacy Protection
    GAO Ang, LIANG Ying, XIE Xiaojie, WANG Zisen, LI Jintao
    Journal of Frontiers of Computer Science and Technology    2021, 15 (2): 233-248.   DOI: 10.3778/j.issn.1673-9418.2004007

    Current research on influence propagation in social networks focuses on how to use a small seed set to produce the highest impact, and it often regards forwarding as the only way information diffuses, ignoring other diffusion channels. For example, users can disseminate information by publishing a message whose content is similar to a message they have seen. This way of diffusion (referred to as mentioning) is difficult to track and easily creates a risk of privacy disclosure. Aiming at the causes of privacy leakage in social networks, this paper defines a social network information diffusion model that supports the mentioning relationship and presents a social network information diffusion algorithm, LocalGreedy, which ensures that messages sent by users are not leaked to specified users, maximizes the propagation influence, and balances the contradiction between privacy protection and message propagation. This paper proposes an incremental strategy to construct the seed set while reducing the time complexity caused by enumeration. It then gives a method for calculating local influence subgraphs, so that the influence generated by seed-set propagation can be quickly estimated. When estimating influence, a method for deriving an upper bound on the privacy leakage probability is proposed to guarantee the privacy protection constraint and avoid the time complexity of Monte Carlo simulation. A crawled Sina Weibo dataset is used for experimental verification and case analysis. The experimental results show that the proposed method is effective.
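    Seed-set construction in influence maximization classically follows a greedy pattern: repeatedly add the node with the best marginal influence gain. The sketch below shows that pattern without the paper's incremental optimizations or privacy constraints; the `coverage` function is a toy stand-in for a real influence estimate:

    ```python
    def greedy_seed_set(nodes, influence, k):
        """Greedy influence maximization: add, k times, the node with the
        largest marginal gain under the given influence function."""
        seeds = set()
        for _ in range(k):
            gains = {v: influence(seeds | {v}) - influence(seeds)
                     for v in nodes if v not in seeds}
            best = max(gains, key=gains.get)
            seeds.add(best)
        return seeds

    # Toy influence: number of distinct users reached by the chosen seeds.
    sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5, 6}}
    def coverage(S):
        return len(set().union(*(sets[v] for v in S))) if S else 0
    ```

    With this toy data, picking two seeds yields {"a", "c"}: "b" overlaps "a" on user 3, so "c" has the larger marginal gain in the second round.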

    Cache Prefetching Strategy Based on Correlation of Image Layers in Docker Registry
    ZHANG Chen, DENG Yuhui
    Journal of Frontiers of Computer Science and Technology    2021, 15 (2): 249-260.   DOI: 10.3778/j.issn.1673-9418.2003025

    With the popularization of container technology, large-scale public Docker registries use object storage services to cope with the sharp increase in the number of images, but this loosely coupled registry design results in higher latency overhead. A cache prefetching strategy named LCPA (layer correlation prefetch algorithm), based on the correlation of image layers, is proposed to enhance registry performance. When the registry server's cache misses, LCPA builds the storage structure of the layer by analyzing image metadata and evaluates it with a correlation model to obtain the set of related layers. The registry then actively prefetches this set from back-end storage into memory to improve the cache hit ratio. Experiments use Docker traces collected from real production workloads to test the algorithm. The results show that LCPA outperforms traditional cache algorithms such as LRU, LIRS and GDFS, improving the average cache hit ratio by 12%-29% and increasing the average latency saving by 21.1%-49.4%. Compared with the existing LPA prefetching algorithm, LCPA improves the cache hit ratio by 25.6%. Simulation experiments show that LCPA can effectively utilize the cache, greatly improve the cache hit ratio of a Docker registry, and reduce the latency overhead of pulling images.
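    The "on a miss, also prefetch correlated layers" behavior can be sketched on top of a plain LRU cache. The class and its static correlation map are hypothetical stand-ins for LCPA's metadata-driven correlation model:

    ```python
    from collections import OrderedDict

    class PrefetchingLayerCache:
        """LRU cache of image layers; on a miss, it also prefetches layers
        that co-occur with the missed one."""
        def __init__(self, capacity, correlated):
            self.capacity = capacity
            self.correlated = correlated       # layer -> set of related layers
            self.cache = OrderedDict()

        def _put(self, layer):
            self.cache[layer] = True
            self.cache.move_to_end(layer)
            while len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used

        def get(self, layer):
            if layer in self.cache:
                self.cache.move_to_end(layer)
                return True                     # hit
            self._put(layer)                    # fetch on miss ...
            for rel in self.correlated.get(layer, ()):
                if rel not in self.cache:
                    self._put(rel)              # ... and prefetch related layers
            return False
    ```

    After one miss on a base layer, pulls of its correlated runtime layers hit the cache instead of going back to object storage, which is where the latency saving comes from.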

    Abstract384
    PDF344
    Exploring Stability in WiFi Sensing System Based on Fresnel Zone Model
    NIU Kai, ZHANG Fusang, WU Dan, ZHANG Daqing
    Journal of Frontiers of Computer Science and Technology    2021, 15 (1): 60-72.   DOI: 10.3778/j.issn.1673-9418.1912017

    WiFi-based contactless sensing systems use the pervasive wireless communication signals in the environment to sense human activities in a natural way, enabling many promising applications. From fine-grained activity sensing to coarse-grained activity recognition, existing work has explored this space extensively. However, the serious instability of sensing performance remains poorly understood and largely untackled: when the human target, the position of the transceivers, or the test environment changes, system performance degrades severely. The reason behind this instability is that human activities inherently induce inconsistent signal patterns at different positions. This paper proposes a Fresnel zone-based diffraction and reflection sensing model, which accurately quantifies the relationship between the target's position with respect to the transceivers, the movement trajectory, and the signal variation pattern. Through two application examples, i.e., fine-grained finger gesture recognition and coarse-grained fitness activity recognition, and guided by the sensing model, this paper explores the reasons behind the unstable performance of sensing systems. It clearly explains how to obtain consistent signal patterns and how to generate easily distinguishable signal patterns, and further presents methods to improve the performance of wireless sensing systems.
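    The Fresnel zones underlying such a model have a standard closed form: the n-th zone boundary at a point d1 from the transmitter and d2 from the receiver has radius sqrt(n * lambda * d1 * d2 / (d1 + d2)). The snippet below computes this textbook quantity; it illustrates the geometry the paper builds on, not the paper's diffraction/reflection model itself.

```python
import math

def fresnel_radius(n, wavelength, d1, d2):
    """Radius (in metres) of the n-th Fresnel zone boundary at a point
    d1 metres from the transmitter and d2 metres from the receiver.
    All inputs are in metres; wavelength for 5 GHz WiFi is ~0.06 m."""
    return math.sqrt(n * wavelength * d1 * d2 / (d1 + d2))
```

A target crossing successive zone boundaries flips the reflected path between constructive and destructive interference, which is why the same activity at different positions produces different signal patterns.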

    Abstract731
    PDF1354
    Combinatorial Auction-Based Mechanism for Task Offloading in Edge Computing
    LI Yinghao, SONG Tian, YANG Yating
    Journal of Frontiers of Computer Science and Technology    2021, 15 (1): 73-83.   DOI: 10.3778/j.issn.1673-9418.2001043

    In the era of the Internet of everything, the rapid increase in data volume and computation demand has prompted the evolution of application deployment from cloud computing to edge computing in order to reduce bandwidth consumption and response delay. However, there is a two-way selection problem between the application service provider (ASP) and the edge computing provider (ECP) in the process of task offloading. To solve this problem, this paper proposes a combinatorial auction-based mechanism for task offloading in edge computing. First, a system model is established and the key issues of its implementation are explained. Based on an analysis of the ECPs' bidding process, in which choosing tasks to maximize resource utilization is proven to be an NP-complete problem, a heuristic task selection algorithm is proposed. Two auction algorithms, single-winner auction and multi-winner auction, are then designed to fit trust-first and efficiency-first scenarios respectively. The experimental results show that compared with the single auction mechanism, the proposed scheme improves the utilization of ECP resources by 13% and increases the utility of the ASP by 37%.
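    Since maximizing resource utilization here is NP-complete (it contains a knapsack-style selection), a greedy heuristic is the natural fallback. The sketch below is our own illustration of that idea, assuming hypothetical (name, resource demand, payment) task tuples; it is not the paper's algorithm.

```python
def select_tasks(tasks, capacity):
    """Greedy heuristic for an ECP choosing among offloading bids:
    pick tasks in decreasing order of payment per unit of resource
    demand until the resource capacity is exhausted.

    tasks: iterable of (name, demand, payment) tuples
    returns (chosen task names, total resources used)
    """
    chosen, used = [], 0
    for name, demand, payment in sorted(
        tasks, key=lambda t: t[2] / t[1], reverse=True
    ):
        if used + demand <= capacity:
            chosen.append(name)
            used += demand
    return chosen, used
```

Such a density-ordered greedy pass runs in O(n log n) and gives a reasonable, though not optimal, packing of the ECP's capacity.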

    Abstract331
    PDF605
    Multi-objective Discrete Combinatorial Optimization Algorithm Combining Problem Decomposition and Adaptive Large Neighborhood Search
    WEI Qian, JI Bin
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2306032
    Accepted: 01 November 2023

    Abstract32
    PDF34
    Dynamic Task Decomposition and Persistently Operate Mechanism for Spatial Distributed Computing
    SUO Xiaotian, YANG Yating, SONG Tian
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2308046
    Accepted: 09 November 2023

    Abstract35
    PDF26
    Few-shot Named Entity Recognition with Prefix-Tuning
    LYU Haixiao, LI Yihong, ZHOU Xiaoyi
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2307060
    Accepted: 09 November 2023

    Abstract92
    PDF87
    Pelican optimization algorithm combining unscented sigma point mutation and cross reversion
    ZUO Fengqin, ZHANG Damin, HE Qing, BAN Yunfei, SHEN Qianwen
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2308010
    Accepted: 17 November 2023

    Abstract38
    PDF30
    Integrating User Relation Representations and Information Diffusion Topology Features for Information Propagation Prediction
    WU Yunbing, GAO Hang, ZENG Weisen, YIN Aiying
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2309050
    Accepted: 17 November 2023

    Abstract47
    PDF50
    Multi-scale and boundary fusion network for skin lesion regions segmentation
    WANG Guokai, ZHANG Xiang, WANG Shunfang
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2306003
    Accepted: 17 November 2023

    Abstract43
    PDF50
    Auction mechanism driven data incentive sharing solution
    LU Yu, WANG Jingyu, LIU Lixin, WANG Haonan
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2310099
    Accepted: 14 November 2023

    Abstract51
    PDF28
    Fusion of Global Enhancement and Local Attention Features for Expression Recognition Network
    LIU Juan, WANG Ying, HU Min, HUANG Zhong
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2307013
    Accepted: 22 November 2023

    Abstract68
    PDF62
    Domain Adaptation Algorithm for 3D Human Pose Estimation With Spatial Attention and Position Optimization
    JIANG Youpeng, HUA Yang, SONG Xiaoning
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2307016
    Accepted: 23 November 2023

    Abstract74
    PDF99
    A Downsampling Algorithm with Fusion of Different Receptive Field Sizes in Deep Detection Methods
    GU Zhenghua, LIU Gaqiong, SHAO Changbin, YU Hualong
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2308064
    Accepted: 23 November 2023

    Abstract20
    PDF10
    Research on knowledge graph entity prediction method of multimodal curriculum learning
    XU Zhihong, HAO Xuemei, WANG Liqin, DONG Yongfeng, WANG Xu
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2308085
    Accepted: 22 November 2023

    Abstract64
    PDF78
    Nested Named Entity Recognition Combining Multi-Modal and Multi-Span Features
    QIU Yunfei, XING Haoran, YU Zhilong, ZHANG Wenwen
    Journal of Frontiers of Computer Science and Technology    DOI: 10.3778/j.issn.1673-9418.2302029
    Accepted: 29 March 2023

    Abstract113
    PDF148