Content of Theory and Algorithm in our journal

    Research on Granular Conversion Computing in Algebraic Quotient Space
    WEI Zongxuan, WANG Jiayang
    Journal of Frontiers of Computer Science and Technology    2022, 16 (12): 2870-2878.   DOI: 10.3778/j.issn.1673-9418.2103028

    Granular computing is a problem-processing paradigm based on multi-level structures, and it has attracted extensive attention from scholars at home and abroad in recent years. Granular transformation and problem solving are key issues of multi-granular computing, yet the algebraic quotient space model lacks a discussion of them. In light of the above problems, three complete clusters of algebraic quotient spaces are defined for the algebraic quotient space model according to its construction methods, so as to analyze and demonstrate the closeness of granular conversion. Furthermore, for different granularity principles and modes, complete algebraic granularity conversion methods are given from multiple angles. Next, the similarities and differences between the conversion methods and the relationships among their granularity conversion results are discussed. In addition, in order to describe the solution results of algebraic problems after coarse-grained and fine-grained transformations, a consistency principle of solution results is proposed based on the granularity transformation methods and the algebraic solution rules. The reliability of the granularity conversion methods and the consistency principle is proven by theoretical analysis, and the effectiveness of the proposed methods is verified by an example. The example results agree with the theoretical conclusions, which confirms the correctness of the consistency principle. This work addresses a core problem of granular computing with the algebraic quotient space model and provides a theoretical basis for solving large-scale complex problems with algebraic granular computing.

    Three-Way Concept Acquisition and Attribute Characteristic Analysis Based on Pictorial Diagrams
    WAN Qing, MA Yingcang, LI Jinhai
    Journal of Frontiers of Computer Science and Technology    2022, 16 (12): 2879-2889.   DOI: 10.3778/j.issn.1673-9418.2104120

    Three-way concept analysis, an effective tool for knowledge discovery, combines three-way decision with formal concept analysis. Based on the connections between three-way concepts and formal concepts, the method of obtaining three-way concepts and the method of judging attribute characteristics are studied from the perspective of pictorial diagrams. Inspired by the pictorial-diagram-based acquisition methods of concepts (formal concepts, object-oriented concepts and property-oriented concepts), and combining the connections between the three kinds of concepts in combinatorial contexts and the four kinds of three-way concepts, the definitions of the combinatorial-property (combinatorial-object) pictorial diagram and the property-pair-induced (object-pair-induced) three-way pictorial diagram are proposed. Then, the acquisition approaches to the four kinds of three-way concepts are investigated by using the newly proposed pictorial diagrams. In addition, from the perspective of preserving the lattice structure, general definitions of reduction and attribute characteristics are given, the attribute characteristics of an object-induced three-way concept lattice are analyzed by a discernibility matrix, and judgment theorems for attribute characteristics are given based on property-pair-induced pictorial diagrams.

    Improved Whale Optimization Algorithm for Solving High-Dimensional Optimization Problems
    WANG Yonggui, LI Xin, GUAN Lianzheng
    Journal of Frontiers of Computer Science and Technology    2022, 16 (12): 2890-2902.   DOI: 10.3778/j.issn.1673-9418.2104029

    Aiming at the problems of insufficient global exploration ability and a tendency to fall into local extrema when dealing with high-dimensional optimization problems, an improved whale optimization algorithm is proposed. Firstly, an initialization strategy combining Fuch chaos mapping and optimized opposition-based learning is used in the search space: the high search efficiency of the Fuch map generates chaotic initial populations of good quality and diversity, which are then combined with the optimized opposition-based learning strategy to generate good whale populations while ensuring population diversity, laying the foundation for the global search of the algorithm. Secondly, the parameter A is adjusted in the global exploration phase to help the whale population search globally more effectively and avoid premature convergence while balancing global exploration and local exploitation. Finally, the Laplace operator is introduced in the local exploitation stage to perform a dynamic crossover operation on the optimal individual: offspring are produced farther away from the parent in early iterations to improve the global search ability and escape local extrema, and closer to the parent in late iterations to refine the search range and improve the solution accuracy. Ten standard test functions are selected for simulation in 100, 500 and 1000 dimensions. The results show that the algorithm is significantly better than the comparison algorithms in terms of convergence speed, solution accuracy and stability, and can effectively deal with high-dimensional optimization problems.
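    The initialization strategy described above combines a chaotic map with opposition-based learning. The following minimal Python sketch illustrates that idea; the logistic map is used here only as a stand-in for the Fuch map, and keeping the better half of the original and opposite individuals (scored by a placeholder sphere function) is a common opposition-based-learning variant rather than the authors' exact scheme.

        import numpy as np

        def chaotic_opposition_init(pop_size, dim, lb, ub, seed=0):
            """Sketch of a chaotic + opposition-based-learning initialization.

            The logistic map below is a stand-in for the Fuch map used in the
            paper; the merging rule is a common OBL variant, not necessarily
            the authors' exact scheme."""
            rng = np.random.default_rng(seed)
            lb, ub = np.asarray(lb, float), np.asarray(ub, float)

            # 1) chaotic sequence in (0, 1) via the logistic map x <- 4 x (1 - x)
            x = rng.uniform(0.1, 0.9, size=(pop_size, dim))
            for _ in range(20):                      # let the map mix
                x = 4.0 * x * (1.0 - x)
            pop = lb + x * (ub - lb)                 # map chaos into the search bounds

            # 2) opposition-based learning: opposite point of p is lb + ub - p
            opp = lb + ub - pop

            # 3) keep the best pop_size individuals among originals and opposites
            def sphere(p):                           # placeholder objective
                return np.sum(p ** 2, axis=1)
            both = np.vstack([pop, opp])
            best = np.argsort(sphere(both))[:pop_size]
            return both[best]

        if __name__ == "__main__":
            init_pop = chaotic_opposition_init(30, 10, lb=-100, ub=100)
            print(init_pop.shape)                    # (30, 10)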

    Attribute Selection via Maximizing Independent-and-Effective Classification Information Ratio
    LIU Ye, DAI Jianhua, CHEN Jiaolong
    Journal of Frontiers of Computer Science and Technology    2022, 16 (11): 2619-2627.   DOI: 10.3778/j.issn.1673-9418.2104117

    Attribute selection in rough set theory has wide practical application value. When selecting a candidate attribute, most existing attribute selection approaches neglect the relationship between the classification information and the redundant information brought by the candidate attribute, as well as the retained classification information provided by the already selected attributes. Therefore, a significance evaluation function, the effective classification information ratio, is defined for attribute selection, and an attribute selection approach based on it is further proposed, which effectively selects attributes that provide much effective classification information and little redundant information. Besides, considering the influence of a candidate attribute on the retained classification information provided by the selected attributes, another significance evaluation function, the independent-and-effective classification information ratio, is proposed, together with an improved attribute selection approach, which helps balance the relationship between the effective classification information and the redundant information of the attributes and improves the overall recognition ability of the selected attribute subset. Finally, comparative experiments are conducted in terms of classification performance and the statistical Bonferroni-Dunn test, and the experimental results illustrate that the proposed attribute selection approaches are effective.

    Extreme Individual Guided Artificial Bee Colony Algorithm
    CHEN Lan, WANG Lianguo
    Journal of Frontiers of Computer Science and Technology    2022, 16 (11): 2628-2641.   DOI: 10.3778/j.issn.1673-9418.2104105

    To overcome the drawbacks of the artificial bee colony (ABC) algorithm in solving function optimization problems, namely poor exploitation ability, a tendency to fall into local optima and slow convergence, an extreme individual guided artificial bee colony (EABC) algorithm is proposed. Firstly, the global-extremum and neighborhood-extremum individuals are used to guide the search of employed bees and following bees. The search guided by the global-extremum individual favors the retention and development of excellent individuals in the population, so that the algorithm jumps out of local extrema and avoids premature convergence; the search guided by the neighborhood-extremum individual enhances the search accuracy and improves the convergence speed, and a random number r is used to balance the two search mechanisms. Secondly, a small-probability mutation operator is introduced into the search process, and each dimension of a bee individual is mutated with a small probability to overcome local extrema and premature convergence. Finally, a greedy selection strategy based on the value of the objective function is adopted to improve the optimization performance of the algorithm. Simulation experiments are carried out on 28 test functions, and the proposed algorithm is compared with other algorithms. Experimental results show that the improved algorithm has higher optimization performance and faster convergence speed.
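    As a rough illustration of the extremum-guided search, the sketch below shows one ABC-style candidate update that switches between global-extremum and neighborhood-extremum guidance with a random number r. The coefficient ranges and the single-dimension perturbation are illustrative assumptions, not the paper's exact update equations.

        import numpy as np

        rng = np.random.default_rng(1)

        def guided_candidate(x_i, x_k, g_best, n_best, r_threshold=0.5):
            """One candidate update for food source x_i in an ABC-style search.

            x_k is a randomly chosen neighbour food source, g_best the global
            best individual, n_best the best individual in x_i's neighbourhood.
            Only the idea of switching between the two guidance terms with a
            random number follows the abstract; the coefficients are made up."""
            j = rng.integers(x_i.size)               # dimension to perturb
            phi = rng.uniform(-1.0, 1.0)
            v = x_i.copy()
            if rng.random() < r_threshold:           # global-extremum guidance
                v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + rng.uniform(0, 1.5) * (g_best[j] - x_i[j])
            else:                                    # neighbourhood-extremum guidance
                v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + rng.uniform(0, 1.5) * (n_best[j] - x_i[j])
            return v

        if __name__ == "__main__":
            x = rng.uniform(-5, 5, size=(4, 5))      # four food sources in 5 dimensions
            # x[3] and x[2] are only stand-ins for the global / neighbourhood best
            print(guided_candidate(x[0], x[1], g_best=x[3], n_best=x[2]))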

    Many-Objective Evolutionary Algorithm Based on Distance Dominance Relation
    GU Qinghua, XU Qingsong, LI Xuexian
    Journal of Frontiers of Computer Science and Technology    2022, 16 (11): 2642-2652.   DOI: 10.3778/j.issn.1673-9418.2103053

    Research on multi-objective optimization algorithms mainly concerns two aspects, convergence and diversity. However, it is difficult for existing algorithms to maintain the diversity of solutions in a high-dimensional objective space. In order to enhance the diversity of algorithms on many-objective optimization problems, a new distance dominance relation is proposed in this paper. Firstly, to ensure convergence, within the same niche the distance dominance relation takes the distance from a candidate solution to the ideal point as its fitness value and selects the candidate solution with the better fitness value as the non-dominated solution. Then, to enhance diversity, the distance dominance relation defines a niche for each candidate solution and ensures that only one optimal solution is retained within the same niche. Finally, the VaEA algorithm is improved based on the proposed distance dominance relation, and the resulting algorithm is named VaEA-DDR. On the DTLZ and IDTLZ test problems with 5, 8, 10 and 15 objectives, the improved algorithm is compared with six commonly used algorithms. Experimental results show that the improved algorithm is highly competitive and can significantly enhance diversity.
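    The following sketch shows one plausible reading of the niche-based selection: each candidate is assigned to a niche (here, the reference vector with the largest cosine similarity, in the spirit of VaEA), and within each niche only the candidate closest to the ideal point survives. The niching rule is an assumption; the paper's exact distance dominance relation is not reproduced.

        import numpy as np

        def niche_select(objs, ref_vectors):
            """Sketch of a distance-dominance style environmental selection.

            objs: (n, m) objective values (minimisation); ref_vectors: (k, m)
            unit reference vectors defining the niches.  Within each niche only
            the candidate closest to the ideal point survives."""
            ideal = objs.min(axis=0)
            shifted = objs - ideal
            norm = np.linalg.norm(shifted, axis=1, keepdims=True) + 1e-12
            cosine = (shifted / norm) @ ref_vectors.T      # (n, k)
            niche = cosine.argmax(axis=1)                  # nearest reference vector
            dist = np.linalg.norm(shifted, axis=1)         # distance to ideal point

            survivors = []
            for k in np.unique(niche):
                members = np.where(niche == k)[0]
                survivors.append(members[dist[members].argmin()])
            return np.array(survivors)

        if __name__ == "__main__":
            rng = np.random.default_rng(5)
            F = rng.random((20, 3))                        # 20 candidate objective vectors
            V = rng.random((5, 3))
            V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit reference vectors
            print(niche_select(F, V))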

    Particle Swarm Optimization Combined with Q-learning of Experience Sharing Strategy
    LUO Yixuan, LIU Jianhua, HU Renyuan, ZHANG Dongyang, BU Guannan
    Journal of Frontiers of Computer Science and Technology    2022, 16 (9): 2151-2162.   DOI: 10.3778/j.issn.1673-9418.2102070

    Particle swarm optimization (PSO) suffers from shortcomings such as a tendency to fall into local optima, insufficient diversity and low precision. Recently, improving PSO by combining it with reinforcement learning methods such as Q-learning has become a new idea. However, this approach has been shown to suffer from insufficient objectivity in parameter selection, and its limited strategy cannot cope with various situations. This paper proposes a Q-learning PSO with experience sharing (QLPSOES). The algorithm combines PSO with reinforcement learning and constructs a Q-table for each particle for the dynamic selection of particle parameter settings. At the same time, an experience sharing strategy is designed, in which particles share the “behavior experience” of the optimal particle through its Q-table. This method accelerates the convergence of the Q-tables, enhances the learning ability among particles, and balances the global and local search abilities of the algorithm. In addition, this paper uses orthogonal analysis experiments to determine the states, action parameters and reward function of the reinforcement learning component in the PSO algorithm. The algorithm is tested on the CEC2013 test functions. The results show that the convergence speed and convergence accuracy of QLPSOES are significantly improved compared with other algorithms, which verifies that the algorithm has better performance.
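    A minimal sketch of the Q-learning ingredients mentioned above is given below: an epsilon-greedy choice of a (w, c1, c2) parameter triple from a per-particle Q-table, the standard one-step Q-learning update, and an experience-sharing step that pulls every Q-table towards the best particle's table. The action set, reward, state definition and sharing rate are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        # Discrete "actions": candidate (w, c1, c2) parameter triples for one particle.
        ACTIONS = [(0.9, 2.0, 2.0), (0.7, 1.5, 2.5), (0.5, 2.5, 1.5), (0.4, 1.0, 1.0)]

        def choose_action(q_row, eps=0.1):
            """Epsilon-greedy choice of a parameter triple from one Q-table row."""
            if rng.random() < eps:
                return int(rng.integers(len(ACTIONS)))
            return int(np.argmax(q_row))

        def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
            """Standard one-step Q-learning update."""
            q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])

        def share_experience(q_tables, best_index, rate=0.5):
            """Experience sharing: every particle pulls its Q-table towards the
            Q-table of the currently best particle (one interpretation of the
            sharing of the optimal particle's "behavior experience")."""
            best_q = q_tables[best_index]
            for i, q in enumerate(q_tables):
                if i != best_index:
                    q += rate * (best_q - q)

        if __name__ == "__main__":
            n_particles, n_states = 3, 4
            q_tables = [np.zeros((n_states, len(ACTIONS))) for _ in range(n_particles)]
            a = choose_action(q_tables[0][0])
            q_update(q_tables[0], state=0, action=a, reward=1.0, next_state=1)
            share_experience(q_tables, best_index=0)
            print(ACTIONS[a], q_tables[1][0])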

    Weighted K-nearest Neighbors and Multi-cluster Merge Density Peaks Clustering Algorithm
    CHEN Lei, WU Runxiu, LI Peiwu, ZHAO Jia
    Journal of Frontiers of Computer Science and Technology    2022, 16 (9): 2163-2176.   DOI: 10.3778/j.issn.1673-9418.2102021

    The density peaks clustering (DPC) algorithm is a density-based clustering algorithm. It is simple in principle, efficient in operation, and able to find non-spherical clusters of arbitrary shape. However, the algorithm has some defects. Firstly, the measurement criteria defined for the local density are not uniform, and the clustering results differ greatly among them. Secondly, the allocation strategy is prone to allocation errors: once a sample is incorrectly allocated, a series of subsequent samples will be incorrectly allocated as well. In order to solve these problems, this paper proposes a weighted K-nearest neighbors and multi-cluster merge density peaks clustering (WKMM-DPC) algorithm. Combined with the idea of weighted K-nearest neighbors, the local density of a sample is redefined by introducing a weight coefficient, which makes the local density depend more on the position of the sample among its K nearest neighbors and unifies the measurement criterion of the density definition. The similarity between clusters is defined, and clusters are merged according to this metric to avoid cascading errors in the allocation of the remaining samples. Experiments on artificial and UCI datasets show that the clustering performance of the proposed algorithm is better than that of the FKNN-DPC, DPCSA, FNDPC, DPC and DBSCAN algorithms.
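    The sketch below illustrates a weighted K-nearest-neighbor local density of the kind described: each sample's density is computed only from its K nearest neighbors, with rank-based weights so that closer neighbors contribute more. The specific weight and kernel are illustrative choices, not necessarily the WKMM-DPC definitions.

        import numpy as np

        def weighted_knn_density(X, k=6):
            """Sketch of a weighted-K-nearest-neighbour local density for DPC.

            Each sample's density depends only on its K nearest neighbours,
            with a rank-based weight w_j = (k - rank + 1) / k so that closer
            neighbours contribute more (an illustrative choice)."""
            n = X.shape[0]
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
            rho = np.zeros(n)
            for i in range(n):
                nn = np.argsort(d[i])[1:k + 1]                          # skip the point itself
                w = (k - np.arange(k)) / k                              # rank weights 1, (k-1)/k, ...
                rho[i] = np.sum(w * np.exp(-d[i, nn] ** 2))
            return rho

        if __name__ == "__main__":
            rng = np.random.default_rng(6)
            X = rng.normal(size=(40, 2))
            print(weighted_knn_density(X)[:5])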

    Hierarchical Multi-attribute Decision-Making Method with Twofold Integral Operator of Cloud Model
    WANG Tiedan, ZHANG Yuqing, PENG Dinghong
    Journal of Frontiers of Computer Science and Technology    2022, 16 (8): 1898-1909.   DOI: 10.3778/j.issn.1673-9418.2103009

    Due to the complexity of actual decision-making problems, most indicators are mutually dependent. Therefore, for multi-attribute decision-making problems with correlated indicators in an uncertain environment, a decision method is proposed that uses a hierarchical structure as the framework and the cloud model twofold integral (C-TI) as the aggregation operator. Firstly, in order to reflect the uncertain thinking of decision makers in determining index weights, a cloud model fuzzy measure is proposed. Secondly, the cloud model twofold integral operator is constructed on the basis of the cloud model fuzzy measure; it takes the cloud model as the representation of decision information and fully reflects the fuzziness and randomness of that information. The twofold integral, which combines the advantages of the Choquet integral and the Sugeno integral, is applied to aggregate the index values, which not only deals effectively with the interactions between indexes but also takes into account the relationship between index values and weights. Subsequently, the theorems and properties of the cloud model twofold integral operator and its variant forms under specific conditions are discussed, and the corresponding proofs are given. Finally, a hierarchical multi-attribute decision-making method based on the cloud model twofold integral operator is constructed and applied to a corporate social responsibility assessment problem. Comparison with other models further shows that the method constructed in this paper is suitable for dealing with fuzzy and stochastic uncertainty and can take the overall decision effect into account when the indicators are correlated.

    Density-Peak Clustering Algorithm on Decentralized and Weighted Clusters Merging
    ZHAO Liheng, WANG Jian, CHEN Hongjun
    Journal of Frontiers of Computer Science and Technology    2022, 16 (8): 1910-1922.   DOI: 10.3778/j.issn.1673-9418.2111138

    Clustering by fast search and find of density peaks (DPC) is a density-based clustering algorithm proposed in recent years; it has the advantages of a simple principle, no iteration, and the ability to cluster data of arbitrary shape. However, the algorithm still has some defects: because samples are grouped around cluster centers, the clustering results are significantly affected by the central points, and the number of cluster centers needs to be specified manually; the cutoff distance considers the distribution density of the data but ignores its internal features; and if a sample allocation error occurs during clustering, the subsequent sample allocation may amplify the error. To solve these problems, this paper proposes a density-peak clustering algorithm on decentralized and weighted clusters merging (DCM-DPC). The algorithm introduces weights to redefine the local density and divides samples into core sample groups located in different locally high-density regions, which replace cluster centers as the basis of clusters. Finally, the remaining samples are assigned to the most strongly coupled core sample group, or labeled as outliers, according to their near-neighbor samples. Experiments on artificial and UCI datasets show that the clustering performance of the proposed algorithm outperforms the comparison algorithms, and the partition of boundary samples between entangled clusters is more accurate.

    Rough K-means Clustering Algorithm Combined with Artificial Bee Colony Optimization
    YE Tingyu, YE Jun, WANG Hui, WANG Lei
    Journal of Frontiers of Computer Science and Technology    2022, 16 (8): 1923-1932.   DOI: 10.3778/j.issn.1673-9418.2012099

    The rough K-means clustering algorithm has a strong ability to deal with data with uncertain boundaries. However, it is sensitive to the selection of the initial clustering centers, and its use of fixed weights and thresholds leads to unstable clustering results and reduced accuracy. Much research has been devoted to solving these problems from different angles. With the introduction of the artificial bee colony (ABC) algorithm, the algorithm is improved in three respects. Firstly, based on the ratio of the number of objects in the lower approximation set and in the boundary set to the product of the differences of the objects in the dataset, a more reasonable method for dynamically adjusting the weights of the lower approximation and boundary sets is designed. Secondly, in order to speed up convergence, an implementation of an adaptive threshold ε associated with the number of iterations is given. Thirdly, by constructing a fitness function for the nectar source locations, the bee colony is guided to search globally for high-quality nectar sources. The best nectar source position obtained in each iteration is taken as the initial cluster center, and clustering is carried out on this basis. Experimental results show that the improved algorithm improves the stability of the clustering results and obtains a better clustering effect.
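    For reference, the sketch below shows the standard rough K-means center update (a weighted mix of the lower-approximation mean and the boundary mean) together with an illustrative dynamic weight and an iteration-dependent threshold of the kind the abstract describes; the concrete formulas for the weight and threshold are assumptions, and the ABC-driven center initialization is omitted.

        import numpy as np

        def rough_center(lower_pts, boundary_pts, w_lower):
            """Rough K-means cluster-centre update: a weighted mix of the mean of
            the lower approximation and the mean of the boundary region (standard
            rough K-means form)."""
            if len(boundary_pts) == 0:
                return lower_pts.mean(axis=0)
            return w_lower * lower_pts.mean(axis=0) + (1.0 - w_lower) * boundary_pts.mean(axis=0)

        def dynamic_lower_weight(n_lower, n_boundary, w_min=0.6, w_max=0.95):
            """Illustrative dynamic weight: the larger the share of objects lying
            unambiguously in the lower approximation, the more it dominates the centre."""
            share = n_lower / max(n_lower + n_boundary, 1)
            return w_min + (w_max - w_min) * share

        def adaptive_threshold(eps0, t, t_max, eps_min=1.0):
            """Illustrative iteration-dependent threshold: epsilon shrinks linearly
            so that boundary regions tighten as the iterations proceed."""
            return eps_min + (eps0 - eps_min) * (1.0 - t / t_max)

        if __name__ == "__main__":
            lower = np.array([[1.0, 1.0], [1.2, 0.8]])
            boundary = np.array([[2.0, 2.0]])
            w = dynamic_lower_weight(len(lower), len(boundary))
            print(rough_center(lower, boundary, w), adaptive_threshold(2.0, t=5, t_max=50))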

    Butterfly Optimization Algorithm for Chaotic Feedback Sharing and Group Synergy
    LI Shouyu, HE Qing, DU Nisuo
    Journal of Frontiers of Computer Science and Technology    2022, 16 (7): 1661-1672.   DOI: 10.3778/j.issn.1673-9418.2012066

    A butterfly optimization algorithm based on chaotic feedback sharing and group synergy (CFSBOA) is proposed to address the shortcomings of low precision and a tendency to fall into local optima in the basic butterfly optimization algorithm (BOA). Firstly, Hénon chaos is used to initialize the population so that it covers the blind areas of the search space as much as possible, which increases population diversity and improves the optimization ability of the algorithm. Secondly, drawing on the positive and negative feedback mechanisms of feedback control circuits, a butterfly feedback-sharing communication network is built, which allows individuals to receive information from multiple directions, helps the population locate the optimal solution and perform a careful search, enhances the ability to escape from local optima, and accelerates convergence. Finally, a group synergy mechanism is used to balance and enhance the global and local search abilities of the algorithm. The performance of the improved butterfly optimization algorithm is verified on benchmark test functions of different dimensions, statistical tests, the Wilcoxon test and several CEC2014 functions. Compared with recently improved butterfly algorithms and other swarm intelligence algorithms, the experimental results show that the proposed algorithm has obvious advantages.
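    The Hénon-map initialization mentioned above can be sketched as follows; the classical parameters a=1.4, b=0.3 and the min-max rescaling of the chaotic sequence into the search bounds are illustrative choices.

        import numpy as np

        def henon_init(pop_size, dim, lb, ub, a=1.4, b=0.3):
            """Population initialization from a Hénon chaotic sequence
            (a=1.4, b=0.3 is the classical chaotic parameter setting)."""
            x, y = 0.1, 0.3
            seq = np.empty(pop_size * dim)
            for i in range(seq.size):
                x, y = 1.0 - a * x * x + y, b * x               # Hénon map iteration
                seq[i] = x
            seq = (seq - seq.min()) / (seq.max() - seq.min())   # rescale to [0, 1]
            return lb + seq.reshape(pop_size, dim) * (ub - lb)

        if __name__ == "__main__":
            print(henon_init(5, 3, lb=-10.0, ub=10.0))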

    New Type of Soft (Prime) Ideals in Commutative BCK-Algebras
    HUANG Yu, LIAO Zuhua
    Journal of Frontiers of Computer Science and Technology    2022, 16 (7): 1673-1680.   DOI: 10.3778/j.issn.1673-9418.2012083

    Soft set theory is an important mathematical tool for dealing with uncertainty. By endowing a parameter set with the structure of a commutative BCK-algebra (that is, a commutative weak BCI-algebra), the notions of a new type of soft prime ideal, the annihilator of a soft set, and a new type of involutory soft ideal in commutative BCK-algebras are introduced. Two new composition operations are defined and used to characterize the new type of soft ideals in commutative BCK-algebras. Using the partial order on commutative BCK-algebras, some properties of the new type of soft ideals are studied, and properties of annihilators of soft sets and of the new type of involutory soft ideals are obtained. The existence of the new type of soft prime ideals in commutative BCK-algebras, and their difference from the standard soft prime ideals, are illustrated with examples. It is shown that, unlike in the usual fuzzy algebra, a soft set being a new type of soft prime ideal in a commutative BCK-algebra and its level sets being prime ideals are not equivalent conditions. Some equivalent characterizations of the new type of soft prime ideals in commutative BCK-algebras are given. Furthermore, the properties of their homomorphic images and inverse images are discussed.

    Hybrid Algorithm of Slime Mould Algorithm and Arithmetic Optimization Algorithm Based on Random Opposition-Based Learning
    JIA Heming, LIU Yuxiang, LIU Qingxin, WANG Shuang, ZHENG Rong
    Journal of Frontiers of Computer Science and Technology    2022, 16 (5): 1182-1192.   DOI: 10.3778/j.issn.1673-9418.2105016

    The slime mould algorithm (SMA) and the arithmetic optimization algorithm (AOA) are recently proposed meta-heuristic optimization algorithms. SMA has a strong global exploration ability, but its oscillation effect is weak in late iterations, it easily falls into local optima, and its contraction mechanism is not strong, which leads to slow convergence. AOA updates positions with multiplication and division operators, which gives it strong randomness and a good ability to avoid premature convergence. To address these problems, this paper combines the two algorithms, uses a random opposition-based learning strategy to improve the convergence speed, and proposes an efficient hybrid algorithm of the slime mould algorithm and the arithmetic optimization algorithm based on random opposition-based learning (HSMAAOA). The improved algorithm retains SMA's exploration phase, while its exploitation phase is replaced by the multiplication and division operators, which improves the search capacity of the algorithm and its ability to jump out of local optima. In addition, the random opposition-based learning strategy enhances population diversity and improves the convergence speed. The experimental results show that HSMAAOA has good robustness and optimization accuracy and significantly improves the convergence speed. Finally, the applicability and effectiveness of HSMAAOA on engineering problems are verified through the welded beam design and pressure vessel design problems.

    Shuffled Frog Leaping Algorithm Driven by Nuclear Center and Its Application
    LIU Liqun, GU Renyuan
    Journal of Frontiers of Computer Science and Technology    2022, 16 (5): 1169-1181.   DOI: 10.3778/j.issn.1673-9418.2108067

    Aiming at the slow evolution and easy local convergence caused by the inertia provided by the current position of an individual frog and the jump step in the shuffled frog leaping algorithm (SFLA), a shuffled frog leaping algorithm driven by a nuclear center (NCSFLA) is proposed, in which the jump evolution behavior of an individual frog is defined as quantum mechanical behavior. In the global optimization, concentric circles centered on the nucleus are used as electron orbits to form the frog population. In the local optimization, three different local search strategies are used to update the worst individual in the population: jumping towards the local optimal individual with the transition step as the radius, jumping towards the global optimal individual with the driving step as the radius, and randomly generating non-repeated components of a frog individual. Taking the center of the electron orbit, that is, the local optimal individual, as the inertial guidance of the transition makes the convergence within the population more conducive to finding the local optimal solution and improves the search ability. If the algorithm falls into a local optimum, the inertial guidance driven by the nuclear center, that is, the global optimal individual, makes the frog individuals gather around the nuclear center as much as possible, thereby speeding up convergence. The algorithm is applied to the capacitated vehicle routing problem (CVRP), yielding a shuffled frog leaping algorithm driven by a nuclear center for the capacitated vehicle routing problem (NCSFLA-CVRP). On 20 benchmark functions including unimodal, multimodal and composite functions, the experimental results show that the improved shuffled frog leaping algorithm driven by a nuclear center converges faster and more accurately than five other algorithms. The test results on the Solomon standard benchmark data show that the method can effectively improve the optimization performance on the capacitated vehicle routing problem.

    Skill Reduction and Assessment in Formal Context
    ZHOU Yinfeng, LI Jinjin
    Journal of Frontiers of Computer Science and Technology    2022, 16 (3): 692-702.   DOI: 10.3778/j.issn.1673-9418.2008024

    Knowledge space theory (KST) provides an effective way to construct knowledge evaluation systems, and formal concept analysis (FCA) is a powerful tool for knowledge discovery; the two theories are closely related. Knowledge space theory is applied to assessing learners’ knowledge and guiding future learning, and how to construct an accurate knowledge structure is currently a key research problem in the theory. Based on the relationship between skills and items, this paper studies the relationship between knowledge space theory and formal concept analysis. Firstly, the concept of a skill context is proposed, a one-to-one correspondence between skill maps and skill contexts is established, and a method for converting between them is obtained. Secondly, based on the skill context, the construction of knowledge structures is discussed, and a construction method based on the concept lattice of the skill context is obtained. Then, based on the skill context, a method for finding the knowledge base, a skill reduction method that keeps the knowledge base unchanged, and a method for building a skill context from the knowledge base are introduced. Finally, when the knowledge states of learners are known, skills are assessed by judging the skills that learners must master, and learning path selection is discussed by selecting skills that can promote a change of knowledge state.
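    As a small illustration of the link between skill assignments and knowledge structures, the sketch below enumerates the knowledge states delineated by a skill map under the standard disjunctive model of knowledge space theory; the tiny skill map in the usage example is made up, and the paper's skill-context construction is not reproduced.

        from itertools import combinations

        def knowledge_structure(skill_map, skills):
            """Knowledge states delineated by a skill map under the disjunctive
            model of knowledge space theory: an item is solvable by a skill
            subset T if T contains at least one of the item's assigned skills."""
            states = set()
            skills = list(skills)
            for r in range(len(skills) + 1):
                for T in combinations(skills, r):
                    T = set(T)
                    state = frozenset(q for q, s in skill_map.items() if s & T)
                    states.add(state)
            return states

        if __name__ == "__main__":
            # Hypothetical skill map: each item is assigned the skills that suffice to solve it.
            skill_map = {"q1": {"s1"}, "q2": {"s1", "s2"}, "q3": {"s2", "s3"}, "q4": {"s3"}}
            for st in sorted(knowledge_structure(skill_map, {"s1", "s2", "s3"}), key=len):
                print(sorted(st))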

    Fuzzy Intelligent Decision Tree Model and Its Application
    YU Xianfeng, GENG Shengling
    Journal of Frontiers of Computer Science and Technology    2022, 16 (3): 703-712.   DOI: 10.3778/j.issn.1673-9418.2009051

    Decision making is one of the core problems of computational intelligence. Based on the theory of fuzzy mathematics, a general fuzzy decision tree model is established: nodes describe the decision premises and control information, the edges of the tree formalize the reasoning rules, and reasonable fuzzy decision operators are defined on the nodes and edges to make a multi-level comprehensive decision. Engineering decisions consider the costs, feasibility and benefits of different options, and the fusion of this information is used to measure the merits of decision-making schemes. A weighted fuzzy intelligent decision-making model is established, an algorithm for multi-attribute constrained decision-making based on the model is given, and the complexity of the model and algorithm is discussed. Finally, two application examples show that the decision model and the optimal-decision-scheme algorithm take both qualitative and quantitative information into account, and that the decision results are scientific and reasonable even with a large amount of information.

    Energy Balancing for Multiple Devices with Multiple Tasks in Mobile Edge Computing
    PANG Yuan, WU Jigang, CHEN Long, YAO Mianyang
    Journal of Frontiers of Computer Science and Technology    2022, 16 (2): 480-488.   DOI: 10.3778/j.issn.1673-9418.2009072

    With the development of technology, mobile edge computing faces the challenge of energy balancing across multiple devices and multiple tasks. Related research mostly focuses on how to use the computing performance of edge servers to reduce the energy consumption and execution time of mobile devices during task processing, but existing work has not yet provided a good solution to the multi-device, multi-task energy balancing problem. Aiming at this kind of energy balancing problem, this paper improves the existing edge computing system model and gives a computation model for the energy balancing optimization problem of multiple devices with multiple tasks. At the same time, a greedy algorithm is proposed and the corresponding approximation ratio analysis is given. In addition, the proposed algorithm is compared with a total-energy-consumption minimization algorithm and a random algorithm through a large number of simulation experiments. Experimental results reveal that, in terms of energy balancing, the average performance of the proposed greedy algorithm is 66.59% better than that of the random algorithm. Compared with the brute force algorithm, under the classic task topology, when the minimum transmission power of the mobile device is 5 dBm and 6 dBm respectively, the greedy algorithm almost obtains the optimal solution.

    Novel Discrete Differential Evolution Algorithm for Solving D{0-1}KP Problem
    ZHANG Fazhan, HE Yichao, LIU Xuejing, WANG Zekun
    Journal of Frontiers of Computer Science and Technology    2022, 16 (2): 468-479.   DOI: 10.3778/j.issn.1673-9418.2007047

    The discounted {0-1} knapsack problem (D{0-1}KP) is a more complex variant of the classic 0-1 knapsack problem (0-1KP). In order to solve the D{0-1}KP efficiently with a discrete differential evolution algorithm, a novel V-shaped transfer function (NV) is first proposed, which maps the real vector of an individual into a binary vector. Compared with the existing S-shaped and V-shaped transfer functions, NV has lower computational complexity and higher efficiency. Then, a new discrete differential evolution algorithm (NDDE) is given based on the novel V-shaped transfer function, and an efficient method for solving the D{0-1}KP with NDDE is proposed. Finally, in order to verify the efficiency of NDDE in solving the D{0-1}KP, it is used to solve four kinds of large-scale D{0-1}KP instances, and the results are compared with existing algorithms such as the group theory-based optimization algorithm (GTOA), the ring theory-based evolutionary algorithm (RTEA), the hybrid teaching-learning-based optimization algorithm (HTLBO) and the whale optimization algorithm (WOA). The results show that NDDE not only has higher accuracy but also good stability, making it well suited to solving large-scale D{0-1}KP instances.
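    The role of a V-shaped transfer function can be sketched as follows with the classical |x / sqrt(1 + x^2)| form, which stands in for the paper's novel NV function; under a V-shaped transfer, a large real component gives a high probability of flipping the corresponding bit.

        import numpy as np

        rng = np.random.default_rng(3)

        def v_transfer(x):
            """A classical V-shaped transfer function |x / sqrt(1 + x^2)|; it
            stands in for the paper's NV function, which is not reproduced here."""
            return np.abs(x / np.sqrt(1.0 + x * x))

        def binarize(real_vec, current_bits):
            """V-shaped binarization as used in binary DE/PSO variants: each bit
            is flipped with probability given by the transfer function."""
            flip = rng.random(real_vec.size) < v_transfer(real_vec)
            return np.where(flip, 1 - current_bits, current_bits)

        if __name__ == "__main__":
            x = rng.normal(size=8)                 # a mutant real vector from DE
            bits = rng.integers(0, 2, size=8)      # current binary solution
            print(binarize(x, bits))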

    Research on Tacit Knowledge Transfer Based on Fusion of Three-Way Decision and Fuzzy Rough Set
    ZHANG Jianhua, LI Fangfang, LIU Yilin, YANG Lan
    Journal of Frontiers of Computer Science and Technology    2022, 16 (1): 253-260.   DOI: 10.3778/j.issn.1673-9418.2008065

    In the face of increasingly abundant knowledge resources, low-cost, high-precision knowledge transfer mechanisms can support knowledge service organizations in providing good knowledge services for users, realize the effective allocation of knowledge resources, and improve their utilization. This paper takes tacit knowledge as the research object and, in view of the characteristics of such knowledge, proposes a tacit knowledge transfer model based on three-way decision. The model includes three parts: matching the user’s knowledge needs with existing knowledge resources; determining the decision thresholds $(\alpha, \beta)$ based on the transfer cost; and determining the “transfer”, “non-transfer” and “delayed transfer” regions based on the decision thresholds. The empirical results show that, compared with existing methods, the tacit knowledge transfer model based on three-way decision proposed in this paper reduces the cost of knowledge transfer caused by misclassification by subdividing the transfer regions. Meanwhile, the decision regions are determined according to the similarity of views, improving the accuracy of knowledge transfer.
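    The cost-based thresholds (α, β) and the three regions can be illustrated with the standard decision-theoretic rough set formulas below; how the paper instantiates the six costs for tacit knowledge transfer is not reproduced, and the example cost values are made up.

        def dtrs_thresholds(l_pp, l_bp, l_np, l_nn, l_bn, l_pn):
            """Standard decision-theoretic rough set thresholds computed from the
            six acceptance/delay/rejection costs (lambda_PP, lambda_BP, lambda_NP,
            lambda_NN, lambda_BN, lambda_PN)."""
            alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
            beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
            return alpha, beta

        def transfer_decision(p, alpha, beta):
            """Three-way rule: transfer if the matching probability p >= alpha,
            do not transfer if p <= beta, otherwise delay the transfer."""
            if p >= alpha:
                return "transfer"
            if p <= beta:
                return "non-transfer"
            return "delayed transfer"

        if __name__ == "__main__":
            # Illustrative cost values only.
            a, b = dtrs_thresholds(l_pp=0, l_bp=2, l_np=6, l_nn=0, l_bn=1, l_pn=5)
            print(round(a, 3), round(b, 3), transfer_decision(0.7, a, b))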

    Semi-supervised Multi-view Classification via Consistency Constraints
    LIU Yu, MENG Min, WU Jigang
    Journal of Frontiers of Computer Science and Technology    2022, 16 (1): 242-252.   DOI: 10.3778/j.issn.1673-9418.2009020

    Traditional semi-supervised multi-view algorithms seldom take into account the diversity of information contained in different views and neglect the consistency of spatial structure between views, so they hardly achieve promising performance when dealing with multi-view data containing noise and outlying entries. Although some semi-supervised multi-view methods have been proposed, they do not make full use of the sample discriminant information and subspace structure information under different metric learning, which leads to unsatisfactory classification results. To deal with these problems, this paper proposes semi-supervised multi-view classification via consistency constraints (SMCC) for multi-view data analysis. Firstly, the consistency constraints between different views are enhanced based on the Hilbert-Schmidt independence criterion (HSIC). Then, dimensionality reduction is performed by feature projection to preserve the local manifold structure, and a Frobenius norm constraint is integrated to improve the robustness of the algorithm. Furthermore, corresponding weights are adaptively assigned to different views to reduce the influence of feature information and noise pollution in individual views. Finally, the proposed model can be solved efficiently using the linearized alternating direction method with adaptive penalty and eigendecomposition. The experimental results on four benchmark datasets show that the proposed algorithm can discover more effective discriminant information from multi-view data and improves accuracy.
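    For reference, the empirical HSIC used for the cross-view consistency constraint can be computed as in the sketch below (Gaussian kernels, HSIC = (n-1)^{-2} tr(KHLH)); the kernel choice and bandwidth are illustrative, and the full SMCC objective is not reproduced.

        import numpy as np

        def hsic(X, Y, sigma=1.0):
            """Empirical Hilbert-Schmidt independence criterion with Gaussian
            kernels, HSIC = (n-1)^{-2} tr(K H L H)."""
            n = X.shape[0]
            def gram(Z):
                sq = np.sum(Z ** 2, axis=1)
                d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
                return np.exp(-d2 / (2.0 * sigma ** 2))
            K, L = gram(X), gram(Y)
            H = np.eye(n) - np.ones((n, n)) / n          # centring matrix
            return np.trace(K @ H @ L @ H) / (n - 1) ** 2

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            A = rng.normal(size=(50, 5))
            print(hsic(A, A))                            # dependent views -> clearly positive
            print(hsic(A, rng.normal(size=(50, 5))))     # independent views -> near zero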

    One-Stage Partition-Fusion Multi-view Subspace Clustering Algorithm
    ZHANG Pei, ZHU En, CAI Zhiping
    Journal of Frontiers of Computer Science and Technology    2021, 15 (12): 2413-2420.   DOI: 10.3778/j.issn.1673-9418.2009070

    Multi-view subspace clustering has attracted increasing attention for its ability to reveal the inherent low-dimensional structure of data. Nevertheless, most existing methods directly fuse the multiple noisy affinity matrices obtained from the original data and commonly conduct clustering only after obtaining a unified multi-view representation. Separating representation learning from the clustering process can result in suboptimal clustering. To this end, this paper proposes a one-stage partition-fusion multi-view subspace clustering algorithm. Instead of directly fusing the noisy and redundant affinity matrices, it fuses the more discriminative partition-level information extracted from them. Moreover, this paper proposes a new framework that integrates representation learning, multiple-information fusion and the final clustering process; the three sub-processes promote each other so as to serve clustering best, and promising clustering results in turn lead to better representations and therefore better clustering performance. The resulting optimization problem is solved with an alternating algorithm. Experimental results on four real-world benchmark datasets show the effectiveness and superior performance of the proposed method over state-of-the-art approaches.

    Multi-population Genetic Algorithm Based on Optimal Weight Dynamic Control Learning Mechanism
    PAN Jiawen, QIAN Qian, FU Yunfa, FENG Yong
    Journal of Frontiers of Computer Science and Technology    2021, 15 (12): 2421-2437.   DOI: 10.3778/j.issn.1673-9418.2008044

    The genetic algorithm (GA) has a strong global search ability and is easy to operate, but it also has disadvantages such as slow convergence and a tendency to fall into local extrema. To overcome these disadvantages, an improved genetic algorithm is proposed in this paper. Firstly, instead of random initialization, a uniform-partition multi-population initialization method is used to generate the initial populations. This method computes clustering centers using the Hamming distance criterion so as to generate different populations, which disperses the initial solutions in the solution space as much as possible and thus avoids local extrema. Secondly, the ideas of a multi-population parallel mechanism and a learning mechanism are introduced to further improve the performance of the algorithm. Based on an analysis of the advantages and disadvantages of the two mechanisms, a modified multi-population parallel mechanism and an optimal weight dynamic control learning mechanism are proposed, and the rationality of the two improvements is discussed. Finally, the two mechanisms and the new initialization method are combined. Simulation results show that the proposed algorithm outperforms other genetic algorithms in convergence speed and accuracy.

    Semi-supervised Clustering Method for Non-negative Functional Data
    YAO Xiaohong, HUANG Hengjun
    Journal of Frontiers of Computer Science and Technology    2021, 15 (12): 2438-2448.   DOI: 10.3778/j.issn.1673-9418.2105116

    Functional clustering analysis is an important tool for exploring functional data. Most existing functional clustering methods are essentially unsupervised and do not take the label information of the data into account. To address the unsupervised nature of existing functional clustering methods and the non-negativity of functional data, a semi-supervised non-negative functional clustering method (SSNFC) is proposed, which focuses on clustering non-negative functional data with a small amount of label information. Firstly, the label information is integrated into functional clustering by introducing the constrained non-negative matrix factorization (CNMF) technique, and a one-step model is constructed that fuses curve fitting, the non-negativity constraint and functional clustering into one objective function. Secondly, an iterative updating algorithm is designed, and its local convergence and time complexity are discussed. Finally, experimental results on simulated data, the Growth data and the TIMIT (Texas Instruments and Massachusetts Institute of Technology) speech data indicate that SSNFC helps improve clustering performance compared with other unsupervised functional clustering methods.

    Concept Drift Data Stream Classification Algorithm Based on McDiarmid Bound
    LIANG Bin, LI Guanghui
    Journal of Frontiers of Computer Science and Technology    2021, 15 (10): 1990-2001.   DOI: 10.3778/j.issn.1673-9418.2006100

    Concept drift in data streams can cause significant performance degradation of existing classification models. Most current data stream algorithms for concept drift target only a certain type of drift (such as abrupt, gradual or recurring drift), which makes it difficult to adapt to different scenarios. Therefore, this paper proposes a new data stream algorithm suitable for different types of concept drift. The proposed algorithm saves the latest classification results in a two-layer window, assigns weights to them based on a membership function and computes the weighted error rate. The McDiarmid bound is then used to analyze the difference δ between the error rates of the current window and the past window, and concept drift is detected according to the significance of δ. After drift is detected, the semi-parametric log-likelihood algorithm is used to check whether the current new concept is a recurrence of a past concept, and then whether to reuse the old classifier is decided. Experimental results show that the proposed algorithm outperforms similar existing algorithms in terms of average detection delay, false positive rate, classification accuracy and running time.
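    The McDiarmid-bound drift test can be sketched as follows for a weighted error rate: changing one classification outcome moves the weighted mean by at most its normalized weight, which yields the deviation threshold below. The window layout, weighting scheme and one-sided test are simplified assumptions relative to the paper.

        import numpy as np

        def mcdiarmid_bound(weights, delta):
            """Deviation bound for a weighted error rate with weights w_i and
            outcomes in [0, 1]: changing one outcome moves the weighted mean by
            at most w_i / W, so McDiarmid's inequality gives
                P(|R - E[R]| >= eps) <= 2 exp(-2 eps^2 / sum_i (w_i / W)^2),
            and solving for eps at confidence delta yields the threshold below."""
            w = np.asarray(weights, float)
            c = w / w.sum()
            return np.sqrt(np.sum(c ** 2) * np.log(2.0 / delta) / 2.0)

        def drift_detected(err_current, err_past, weights, delta=0.05):
            """Flag concept drift when the current window's weighted error rate
            exceeds the past window's by more than the McDiarmid threshold
            (a simplified, illustrative rule)."""
            return (err_current - err_past) > mcdiarmid_bound(weights, delta)

        if __name__ == "__main__":
            w = np.linspace(0.5, 1.0, 200)          # newer results weighted more
            print(mcdiarmid_bound(w, 0.05))
            print(drift_detected(0.34, 0.21, w, 0.05))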

    Population System Optimization Algorithm with Impulsive Birth and Seasonal Killing
    HUANG Guangqiu, LU Qiuqin
    Journal of Frontiers of Computer Science and Technology    2021, 15 (10): 2002-2014.   DOI: 10.3778/j.issn.1673-9418.2007035

    To solve nonlinear optimization problems, a new swarm intelligence optimization algorithm, the PSO-IBSK algorithm, is proposed based on a population dynamics model with impulsive birth and seasonal killing. In this algorithm, a population is assumed to consist of individuals in two stages, young and adult: young individuals are generated by the impulsive birth of adult individuals and become adults after a period of time, and to improve the overall quality of the population, some adult individuals in poor growth condition are killed seasonally. The birth operator and growth operator realize instantaneous and delayed information transfer from adult to young individuals, which helps the search jump out of local optima. The killing operator periodically removes bad adult individuals and the death operator randomly removes weak individuals; these two operators improve the exploitation ability of the algorithm. The strong operator realizes the diffusion of information from strong individuals to weak individuals, and the competition operator realizes effective information exchange between young and adult individuals; these two operators enhance the exploration ability of the algorithm. The evolutionary operator ensures the global convergence of the algorithm. Most of the parameters of the algorithm are determined by the population dynamics model, which gives them a sound basis. The algorithm only processes 6‰~8% of an individual's features at a time, which greatly reduces the time complexity. The test results show that the algorithm has superior performance and is suitable for solving high-dimensional optimization problems.

    Parallel SaNSDE for Many-Core Sunway Processor
    KANG Shang, QIAN Xuezhong, GAN Lin
    Journal of Frontiers of Computer Science and Technology    2021, 15 (10): 2015-2024.   DOI: 10.3778/j.issn.1673-9418.2006059

    Evolutionary algorithms are important methods for solving large-scale optimization problems and are widely applied in machine learning, process control, engineering optimization, management science and the social sciences. However, when traditional evolutionary algorithms are applied to high-dimensional, computation-intensive tasks, the performance of the corresponding applications is hardly satisfactory, and parallelization on supercomputers is a popular solution. This paper proposes a two-level parallel self-adaptive differential evolution algorithm with neighborhood search (SaNSDE) on the Sunway TaihuLight, which implements process-level and thread-level parallelism. At the process level, a cooperative co-evolution model and a pool model are implemented, which divide large-scale problems into multiple low-dimensional problems and distribute them among different processes. At the thread level, the fitness calculation is accelerated. Experimental results show that, compared with the traditional parallel algorithm, the algorithm using the cooperative co-evolution model and the pool model improves the convergence effect more obviously after multi-core expansion. Compared with the serial algorithm, the two-level parallel SaNSDE algorithm achieves maximum speedups of 134.29, 186.05, 239.01 and 189.80 on the four benchmark functions, respectively.

    RIOPSO Algorithm for Fuzzy Cloud Resource Scheduling Problem
    LI Chengyan, SONG Yue, MA Jintao
    Journal of Frontiers of Computer Science and Technology    2021, 15 (8): 1534-1545.   DOI: 10.3778/j.issn.1673-9418.2006045

    To solve the cloud resource scheduling problem under time-cost constraints, triangular fuzzy numbers are used to represent uncertain task execution times, and a fuzzy cloud resource scheduling model is established. The objective of the scheduling model is to reduce the total execution time and total cost of the tasks, and the decision variables are the mapping between tasks and virtual machines. A re-randomization inertia weight orthogonal initialization particle swarm optimization algorithm (RIOPSO) is proposed to solve the fuzzy cloud resource scheduling problem. The algorithm uses orthogonal initialization of the particle swarm to improve the quality of the initial exploration of the optimal scheduling scheme. During the particle search, re-randomization is used to control the search range of the particles, and real-time updating of the inertia weight is used to control the particle velocity and obtain the optimal scheduling scheme. Randomly generated simulation data on the CloudSim simulation platform are used to verify the proposed problem model and optimization algorithm, which demonstrates the reliability of the model. The experimental results show that the RIOPSO algorithm can reduce the total execution time and cost of cloud resource scheduling, and it performs well in convergence speed and solving ability.
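    The triangular-fuzzy-number modeling of uncertain execution times can be sketched as follows; the component-wise addition is standard fuzzy arithmetic, while the centroid defuzzification (a + m + b) / 3 is only one common choice and not necessarily the one used in the paper.

        class TriangularFuzzy:
            """Triangular fuzzy number (a, m, b) with a <= m <= b, used here to
            model an uncertain task execution time."""

            def __init__(self, a, m, b):
                assert a <= m <= b
                self.a, self.m, self.b = a, m, b

            def __add__(self, other):
                # Standard fuzzy addition of triangular numbers is component-wise.
                return TriangularFuzzy(self.a + other.a, self.m + other.m, self.b + other.b)

            def scale(self, k):
                assert k >= 0
                return TriangularFuzzy(k * self.a, k * self.m, k * self.b)

            def defuzzify(self):
                # Centroid of the triangular membership function (one common choice).
                return (self.a + self.m + self.b) / 3.0

        if __name__ == "__main__":
            t1 = TriangularFuzzy(2.0, 3.0, 5.0)      # fuzzy execution time of task 1
            t2 = TriangularFuzzy(1.0, 1.5, 2.0)      # fuzzy execution time of task 2
            total = t1 + t2
            print(total.a, total.m, total.b, total.defuzzify())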

    Research on Initialization Algorithm for Visual-Inertial SLAM System
    LIU Gang, GE Hongwei
    Journal of Frontiers of Computer Science and Technology    2021, 15 (8): 1546-1554.   DOI: 10.3778/j.issn.1673-9418.2005043

    Monocular visual-inertial simultaneous localization and mapping (SLAM) systems are becoming increasingly popular in practical engineering applications because the two sensors complement each other across use scenarios at a low hardware cost. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. However, optimization-based visual-inertial SLAM is highly nonlinear, and its performance depends heavily on the accuracy of the estimates of the initial system state. The inertial measurement unit needs acceleration excitation, which means the system cannot start from a static state but must start from an unknown motion state. Accurate estimation of the initial state is therefore the key to a highly robust algorithm and the first step of visual-inertial fusion. By analyzing the pre-integration of the inertial measurement unit, an initialization estimation system based on convex optimization is derived, and the initial states are solved jointly under the constraint of the gravity acceleration. More importantly, a novel method is proposed to determine the termination condition of the initialization algorithm by measuring the estimation quality with Fisher information, which improves the accuracy of the algorithm and shortens the initialization time. Experiments on the EuRoC dataset show that the new algorithm obtains a more precise and robust initial state.

    Research on Multi-granularity Attribute Reduction Method for Continuous Parameters
    WU Jiang, SONG Jingjing, CHENG Fuhao, WANG Pingxin, YANG Xibei
    Journal of Frontiers of Computer Science and Technology    2021, 15 (8): 1555-1562.   DOI: 10.3778/j.issn.1673-9418.2006061

    As a measure of the degree of information granulation, granularity has attracted extensive attention from researchers in the field of granular computing. One important and widely accepted pattern is parameterized granularity. Under such parameterized granularity, when solving the attribute reduction problem it is often necessary to compute the reducts related to each parameter until all of the reducts have been obtained, which obviously results in high time consumption. To fill this gap, a multi-granularity attribute reduction approach based on continuous parameters is proposed. Firstly, a new constraint for attribute reduction is constructed by using the interval of continuous parameters and the monotonicity of the uncertainty measure in rough sets. Secondly, a forward greedy search algorithm is designed to derive the continuous-parameter-based reducts. Finally, 8 UCI datasets are selected for experimental comparison and analysis. The results show that, compared with single-granularity reducts over multiple parameters, attribute reduction related to continuous parameters can greatly reduce the time needed to obtain a reduct without significantly changing the classification performance. This study provides a new solution for multi-granularity modeling and feature selection from a continuous perspective.
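    The forward greedy search at the core of the approach can be sketched generically as below: attributes are added one at a time by the largest gain of a monotone measure until the full-set value is reached. The measure, stopping tolerance and toy data are illustrative assumptions; the continuous-parameter constraint itself is abstracted into the measure callback.

        def forward_greedy_reduct(attributes, measure, full_value, tol=1e-9):
            """Generic forward greedy search for an attribute reduct: repeatedly
            add the attribute giving the largest increase of a monotone measure
            (e.g. a rough-set dependency or other uncertainty measure) until the
            measure of the full attribute set is reached."""
            selected = []
            remaining = list(attributes)
            current = measure(selected)
            while remaining and current < full_value - tol:
                gains = [(measure(selected + [a]), a) for a in remaining]
                best_value, best_attr = max(gains)
                if best_value <= current + tol:      # no attribute improves the measure
                    break
                selected.append(best_attr)
                remaining.remove(best_attr)
                current = best_value
            return selected

        if __name__ == "__main__":
            # Toy example: the "measure" is the number of object pairs an attribute
            # subset can discern; made up purely to exercise the search skeleton.
            discern = {"a1": {(0, 1), (0, 2)}, "a2": {(1, 2)}, "a3": {(0, 1)}}
            full = len(set().union(*discern.values()))
            m = lambda subset: len(set().union(*[discern[a] for a in subset]) if subset else set())
            print(forward_greedy_reduct(discern.keys(), m, full))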

    Study of Implication Representation Based on Decision Implication
    WANG Yali, ZHAI Yanhui, ZHANG Shaoxia, JIA Nan, LI Deyu
    Journal of Frontiers of Computer Science and Technology    2021, 15 (7): 1322-1331.   DOI: 10.3778/j.issn.1673-9418.2006064

    Formal concept analysis can use concept lattices and (attribute) implications to visualize and represent knowledge. A decision implication is a special implication, and the study of decision implications is to establish and study one or more closed subsystems within implications, including the decision implication subsystem and the corresponding semantic and syntactic subsystems. In order to further clarify the relationship between implications and decision implications, this paper studies whether implication systems can be obtained from these decision implication subsystems. In fact, if implications can be deduced from decision implications, the study of implications and the canonical basis can be reduced to the study of decision implications and the decision implication canonical basis. Firstly, some necessary and sufficient conditions are given to determine whether implications can be represented by decision implications. Secondly, an example shows that some implications cannot be represented by decision implications, and thus the representation of implications is further divided into direct and indirect representation. Finally, by studying the characteristics of implications that cannot be directly represented when there is only one decision attribute in the decision context, a necessary and sufficient condition is presented for determining whether an implication cannot be directly represented by decision implications, and a generation method is designed to generate such implications. This study provides a new perspective for the study of implications and the canonical basis, and lays a foundation for further theoretical study of formal concept analysis.

    Discernibility Matrix and Its Application in Logical Optimization
    YAN Xinyi, WEN Xin, CHEN Zehua
    Journal of Frontiers of Computer Science and Technology    2021, 15 (7): 1332-1338.   DOI: 10.3778/j.issn.1673-9418.2005006

    The simplification of truth tables is of great significance to the analysis and design of logic circuits. In this paper, the simplification of truth tables is studied, and a granular discernibility matrix (GDM) method is proposed to obtain the minimum Boolean expression from a truth table and to apply it to logic optimization. Firstly, the truth table is regarded as a logical information system, and the simplification of the truth table is transformed into the problem of finding the simplest rules of that system. Then, based on the traditional discernibility matrix, the equivalence relation model is used to construct the GDM, the information granules that can be organized into the minimum Boolean expression are found, and the minimum Boolean expression of the logical information system is obtained by using the disjunction and conjunction operations on information granules. To accelerate the convergence of the algorithm, heuristic information is introduced, and a decision rule for organizing information granules is given to avoid redundant logic items when acquiring the minimum Boolean expression, making the Boolean expression as simple as possible. The acquisition efficiency of the minimum Boolean expression is improved, and the problem of large-scale logic circuit optimization is addressed. Finally, the algorithm is described in detail through an example, and its correctness and effectiveness are demonstrated by examples and theoretical proofs.
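    The classical discernibility matrix that the GDM method builds on can be sketched as follows for a small decision table; the subsequent granular organization and Boolean minimization steps of the GDM method are not reproduced, and the toy truth table is made up.

        from itertools import combinations

        def discernibility_matrix(objects, condition_attrs, decision_attr):
            """Classical rough-set discernibility matrix for a decision (logical
            information) table: for every pair of objects with different decision
            values, record the condition attributes on which they differ."""
            matrix = {}
            for (i, u), (j, v) in combinations(enumerate(objects), 2):
                if u[decision_attr] != v[decision_attr]:
                    diff = frozenset(a for a in condition_attrs if u[a] != v[a])
                    matrix[(i, j)] = diff
            return matrix

        if __name__ == "__main__":
            # A tiny truth-table-like decision table (made up for illustration).
            rows = [
                {"x": 0, "y": 0, "z": 0, "f": 0},
                {"x": 0, "y": 1, "z": 1, "f": 1},
                {"x": 1, "y": 0, "z": 1, "f": 1},
                {"x": 1, "y": 1, "z": 0, "f": 0},
            ]
            for pair, attrs in discernibility_matrix(rows, ["x", "y", "z"], "f").items():
                print(pair, sorted(attrs))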

    Reference | Related Articles | Metrics
    Abstract207
    PDF171
    Improved Shuffled Binary Grasshopper Optimization Feature Selection Algorithm
    ZHAO Zeyuan, DAI Yongqiang
    Journal of Frontiers of Computer Science and Technology    2021, 15 (7): 1339-1349.   DOI: 10.3778/j.issn.1673-9418.2005011

    Feature selection selects an optimal or near-optimal feature subset from the original feature set of a data set in order to speed up classification and improve classification accuracy. An improved shuffled binary grasshopper optimization feature selection algorithm is proposed in this paper. By introducing a binary transformation strategy that uses the step size to guide individual position changes, the blindness of the binary conversion is reduced and the search performance of the algorithm in the solution space is improved. By introducing shuffled complex evolution, the grasshopper population is divided into subgroups that evolve independently, which improves population diversity and reduces the probability of premature convergence. The improved algorithm is used to select features on several UCI data sets, and a K-NN (K-nearest neighbor) classifier is used to classify and evaluate the feature subsets. Experimental results show that, compared with the basic binary grasshopper optimization algorithm, the binary particle swarm optimization algorithm and the binary gray wolf optimization algorithm, the improved algorithm has better search performance, better convergence performance and stronger robustness, and can obtain better feature subsets and better classification results.
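
    The abstract does not detail the step-size-guided binary transformation, so the sketch below shows the standard sigmoid (S-shaped) transfer commonly used in binary swarm optimizers as a baseline for the same step: a continuous update component is mapped to the probability of selecting each feature. All names and values are illustrative assumptions.

```python
# Baseline binary conversion step for feature selection in a binary swarm
# optimizer: map a continuous step/velocity through a sigmoid transfer function
# to a 0/1 feature-selection mask.  Values and names are illustrative.

import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binarize(step):
    """Map a continuous step vector to a 0/1 feature-selection mask."""
    return [1 if random.random() < sigmoid(s) else 0 for s in step]

step = [1.5, -0.3, 0.0, -2.1, 0.8]          # continuous update for 5 candidate features
mask = binarize(step)
print(mask)                                  # 0/1 selection mask over the 5 features
print("selected feature indices:", [i for i, bit in enumerate(mask) if bit])
```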

    Reference | Related Articles | Metrics
    Abstract280
    PDF256
    Estimation of Least-Cost Planning Sequence for Labeled Petri Nets
    ZHOU Guangrui, XU Shulin, GUO Yiyun, LU Faming, YUE Hao
    Journal of Frontiers of Computer Science and Technology    2021, 15 (7): 1350-1358.   DOI: 10.3778/j.issn.1673-9418.2011035

    To solve the least-cost planning sequence problem of a manufacturing system modeled by a labeled Petri net, an algorithm based on the backtracking method is proposed. Given a labeled Petri net with its structure and an initial marking, the search is divided into stages according to the given label sequence. In each stage, the transition with the minimal cost is fired first; once all labels have been observed following this rule, the sum of the transition costs in the obtained firing sequence serves as the current minimum total cost, and this planning sequence and its total cost are stored. The proposed method then traverses the solution-space tree with a depth-first strategy. By taking the current minimum total cost as a constraint, markings and transition sequences on other paths that do not need to be searched can be eliminated, so the search space is reduced. An illustrative example shows the feasibility of the method. Compared with the dynamic programming method, the proposed method requires a smaller amount of calculation and achieves higher efficiency.
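
    The sketch below illustrates the depth-first, cost-pruned search described above on a tiny labeled Petri net: at each stage, transitions matching the next observed label are tried cheapest first, and branches whose accumulated cost reaches the best total found so far are pruned. The net, its dict encoding and all names are illustrative assumptions, and silent (unlabeled) transitions are not modeled.

```python
# Depth-first backtracking with cost-based pruning for a least-cost firing
# sequence matching an observed label sequence.  The tiny net is illustrative.

# transition -> (pre, post, label, cost); a marking is a dict place -> tokens
NET = {
    "t1": ({"p1": 1}, {"p2": 1}, "a", 2),
    "t2": ({"p1": 1}, {"p3": 1}, "a", 1),
    "t3": ({"p2": 1}, {"p4": 1}, "b", 1),
    "t4": ({"p3": 1}, {"p4": 1}, "b", 3),
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def least_cost_sequence(marking, labels, cost=0, seq=(), best=(float("inf"), None)):
    if cost >= best[0]:                 # bound: prune dominated branches
        return best
    if not labels:
        return (cost, seq)
    candidates = [(c, t) for t, (pre, post, lab, c) in NET.items()
                  if lab == labels[0] and enabled(marking, pre)]
    for c, t in sorted(candidates):     # cheapest matching transition first
        pre, post, _, _ = NET[t]
        best = least_cost_sequence(fire(marking, pre, post), labels[1:],
                                   cost + c, seq + (t,), best)
    return best

print(least_cost_sequence({"p1": 1}, ["a", "b"]))   # -> (3, ('t1', 't3'))
```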

    Reference | Related Articles | Metrics
    Abstract271
    PDF319
    Research on Drift Calculation of Concept Lattice for Sliding Window Method
    XU Jilin, XU Jianfeng, LIU Long, WU Fangwen
    Journal of Frontiers of Computer Science and Technology    2021, 15 (6): 1145-1154.   DOI: 10.3778/j.issn.1673-9418.2006063

    Concept lattice is an effective tool for data analysis and rule acquisition. In recent years, the application and research of concept lattices have gradually become an important direction in the field of data analysis. With the development of information technology, stream data have become an important part of big data, and concept drift in stream data mining has become a hot topic in machine learning. The construction of dynamic concept lattices is an important task in concept lattice theory, but the drift of concept lattices in a stream data environment has not yet been studied. To address this problem, a drift calculation method for concept lattices based on the sliding window method is proposed in this paper. First, the stream data in the sliding window are modeled. Then, within the sliding window, inference is carried out separately for five cases: the inflow and outflow concepts are identical, the inflow and outflow concepts are completely different, the inflow and outflow concepts partially intersect, the inflow concept includes the outflow concept, and the outflow concept includes the inflow concept. Finally, based on the above model reasoning, a concept lattice construction algorithm based on the sliding window method is proposed, and an example illustrates its effectiveness and efficiency.
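
    As a point of reference for what such a method avoids, the sketch below recomputes all concept intents inside each sliding window from scratch and reports the intents that appear or disappear as the window slides. This is a naive baseline over an illustrative toy stream, not the paper's incremental drift calculation.

```python
# Naive baseline for concept drift under a sliding window: recompute all
# concept intents per window and diff consecutive windows.  The stream below
# is an illustrative assumption.

from itertools import combinations

def intents(window):
    """All concept intents of a small context given as a list of attribute sets."""
    objs = [frozenset(o) for o in window]
    all_attrs = frozenset().union(*objs)         # the full attribute set M
    found = set()
    for r in range(len(objs) + 1):
        for group in combinations(objs, r):
            # intent = common attributes of the chosen objects (M for the empty group)
            found.add(frozenset.intersection(*group) if group else all_attrs)
    return found

stream = [{"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}, {"c"}]
width = 3
previous = None
for start in range(len(stream) - width + 1):
    current = intents(stream[start:start + width])
    if previous is not None:
        print("window", start,
              "new intents:", current - previous,
              "dropped intents:", previous - current)
    previous = current
```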

    Reference | Related Articles | Metrics
    Abstract293
    PDF216
    Improved Sparrow Algorithm Combining Cauchy Mutation and Opposition-Based Learning
    MAO Qinghua, ZHANG Qiang
    Journal of Frontiers of Computer Science and Technology    2021, 15 (6): 1155-1164.   DOI: 10.3778/j.issn.1673-9418.2010032

    Aiming at the problems that the population diversity of the basic sparrow search algorithm decreases in late iterations and that the algorithm easily falls into local extrema, an improved sparrow search algorithm combining Cauchy mutation and opposition-based learning (ISSA) is proposed. Firstly, a Sin chaotic map with an unlimited number of mapping folds is used to initialize the population, laying the foundation for global optimization. Secondly, the global optimal solution of the previous generation is introduced into the discoverer position-update rule to enhance the sufficiency of the global search. At the same time, an adaptive weight is added to coordinate local exploitation and global exploration and to accelerate convergence. Then, the Cauchy mutation operator and the opposition-based learning strategy are combined to perform a disturbance mutation at the position of the optimal solution and generate new solutions, enhancing the algorithm's ability to jump out of local optima. Finally, the algorithm is compared with 3 basic algorithms and 2 improved sparrow algorithms. Simulations and Wilcoxon rank-sum tests are performed on 8 benchmark functions, the optimization performance of ISSA is assessed, and a time complexity analysis of ISSA is carried out. The results show that ISSA has a faster convergence rate and higher precision than the other 5 algorithms, and its overall optimization capability is improved.
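
    The sketch below illustrates only the disturbance step described above: around a current best solution, one Cauchy-mutated candidate and one opposition-based candidate are generated and kept greedily if they improve the objective. The sphere objective, the bounds and the greedy acceptance rule are illustrative assumptions; the full ISSA (Sin chaotic initialization, adaptive weights, discoverer update) is not shown.

```python
# Cauchy mutation + opposition-based learning perturbation around a best solution,
# with greedy acceptance.  Objective and bounds are illustrative assumptions.

import math
import random

def sphere(x):
    return sum(v * v for v in x)

def standard_cauchy():
    # inverse-CDF sampling of a standard Cauchy variate (heavy tails)
    return math.tan(math.pi * (random.random() - 0.5))

def opposition(best, lo, hi):
    # opposition-based learning: reflect each component inside the bounds
    return [lo + hi - v for v in best]

random.seed(1)
lo, hi = -5.0, 5.0
best = [2.0, -3.5, 0.5]
candidates = [[v + standard_cauchy() for v in best],   # Cauchy-mutated candidate
              opposition(best, lo, hi)]                 # opposition-based candidate
for cand in candidates:
    cand = [min(max(v, lo), hi) for v in cand]          # clip to the search bounds
    if sphere(cand) < sphere(best):
        best = cand                                      # greedy replacement
print(best)
```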

    Reference | Related Articles | Metrics
    Abstract1742
    PDF986
    Hybrid Local Causal Structure Learning
    WANG Yunxia, CAO Fuyuan, LING Zhaolong
    Journal of Frontiers of Computer Science and Technology    2021, 15 (4): 754-765.   DOI: 10.3778/j.issn.1673-9418.2005041

    Local causal structure learning focuses on identifying the direct causes and direct effects of a given target variable without learning an entire causal network. Existing local causal structure learning algorithms usually proceed in two steps. Step 1 uses constraint-based methods to learn the Markov blanket (MB) or parents and children (PC) set of the target variable through conditional independence tests. However, with small sample sizes these tests may be unreliable, so the accuracy of this step is usually not very high. Step 2 uses the discovered V-structures and Meek rules to distinguish direct causes from direct effects of the target variable. But this step depends heavily on the discovery of V-structures and is likewise affected by limited samples, so the accuracy of the algorithm is not very high. To solve the above problems, this paper proposes a hybrid local causal structure learning algorithm that combines scoring and constraints. In step 1, a new PC learning algorithm, SIAPC (score-based incremental association parents and children), is proposed by incorporating the scoring idea into a constraint-based algorithm. In step 2, the direction of each edge is determined by intersecting the orientation result obtained by the PC algorithm with the orientation result obtained by scoring, which reduces the dependence on V-structures and alleviates the finite-sample problem. After that, independence tests are used to revise the edge orientations and further improve accuracy, yielding the HLCS (hybrid local causal structure learning) algorithm. Experiments on benchmark Bayesian networks show that the proposed algorithm outperforms existing algorithms in terms of learning accuracy and data efficiency.
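
    The constraint-based step relies on conditional independence tests; for roughly Gaussian data a common choice is the partial-correlation test with Fisher's z transform, sketched below as an illustration of that primitive. The synthetic data and the significance level are assumptions, and this is not the paper's SIAPC or HLCS algorithm.

```python
# Partial-correlation conditional independence test (Fisher's z), the building
# block of constraint-based MB/PC discovery.  Data and alpha are illustrative.

import numpy as np
from scipy import stats

def partial_corr_test(data, i, j, cond, alpha=0.05):
    """Test X_i _||_ X_j given X_cond; columns of `data` are variables."""
    sub = data[:, [i, j] + list(cond)]
    prec = np.linalg.pinv(np.cov(sub, rowvar=False))        # precision matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])      # partial correlation
    n, k = data.shape[0], len(cond)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)  # Fisher's z statistic
    p_value = 2 * (1 - stats.norm.cdf(abs(z)))
    return p_value > alpha            # True -> independence not rejected

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x + rng.normal(scale=0.1, size=2000)
z = y + rng.normal(scale=0.1, size=2000)          # chain x -> y -> z
data = np.column_stack([x, y, z])
print(partial_corr_test(data, 0, 2, []))          # False: x and z are marginally dependent
print(partial_corr_test(data, 0, 2, [1]))         # typically True: independent given y
```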

    Reference | Related Articles | Metrics
    Abstract336
    PDF487
    Two-Population Comprehensive Learning PSO Algorithm Based on Particle Permutation
    JI Wei, LI Yingmei, JI Weidong, ZHANG Long
    Journal of Frontiers of Computer Science and Technology    2021, 15 (4): 766-776.   DOI: 10.3778/j.issn.1673-9418.2005016

    In order to address the low population diversity of particle swarm optimization (PSO) and its tendency to fall into local optima, a two-population comprehensive learning PSO algorithm based on particle permutation (PP-CLPSO) is proposed. According to the convergence characteristics of the PSO algorithm and the chaotic idea of the Logistic map, a PSO population and a chaotic population that evolve in parallel are designed. Combined with a particle numbering mechanism, same-sign and same-position structures between particles of the two populations are formed, in which the inertia weight of each particle is adaptively adjusted according to its fitness value. When the search falls into a local optimum, particles of the PSO population with poor fitness under the same-position structure are replaced according to the same-sign structure between the PSO population and the chaotic population, which realizes reasonable scheduling of the resources of the two-population system and increases population diversity. Furthermore, global exploration and local search are carried out by combining a co-particle learning strategy with two-way search and a local learning strategy with a linearly decreasing search step, which improves the accuracy of the algorithm. Nine benchmark functions are selected in the experiments, and the proposed algorithm is compared with four improved particle swarm optimization algorithms and four swarm intelligence algorithms. The experimental results show that the PP-CLPSO algorithm has better comprehensive performance in terms of solution accuracy and convergence speed.
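
    The sketch below illustrates two ingredients mentioned above: generating a chaotic population with the Logistic map and adapting the inertia weight from each particle's fitness. The permutation mechanism and the comprehensive-learning velocity update are not reproduced; the bounds, weight range and objective are illustrative assumptions.

```python
# Logistic-map chaotic population and fitness-adaptive inertia weight.
# Parameters and the sphere objective are illustrative assumptions.

import random

def logistic_population(size, dim, lo, hi, mu=4.0):
    """Chaotic initialization: iterate x <- mu*x*(1-x) and scale into [lo, hi]."""
    pop, x = [], random.random()
    for _ in range(size):
        particle = []
        for _ in range(dim):
            x = mu * x * (1.0 - x)
            particle.append(lo + (hi - lo) * x)
        pop.append(particle)
    return pop

def adaptive_weight(fit, fit_min, fit_avg, w_min=0.4, w_max=0.9):
    """Smaller inertia for better-than-average particles, larger otherwise."""
    if fit <= fit_avg and fit_avg > fit_min:
        return w_min + (w_max - w_min) * (fit - fit_min) / (fit_avg - fit_min)
    return w_max

random.seed(2)
pop = logistic_population(size=5, dim=3, lo=-10, hi=10)
fits = [sum(v * v for v in p) for p in pop]              # sphere fitness
f_min, f_avg = min(fits), sum(fits) / len(fits)
print([round(adaptive_weight(f, f_min, f_avg), 3) for f in fits])
```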

    Reference | Related Articles | Metrics
    Abstract297
    PDF345
    Incremental Reduced Least Squares Twin Support Vector Regression
    CAO Jie, GU Binjie, XIONG Weili, PAN Feng
    Journal of Frontiers of Computer Science and Technology    2021, 15 (3): 553-563.   DOI: 10.3778/j.issn.1673-9418.1912005

    To address the problem that, in incremental least squares twin support vector regression, the constructed kernel matrix may not approximate the original kernel matrix well, this paper proposes an incremental reduced least squares twin support vector regression (IRLSTSVR) algorithm. Firstly, in order to reduce the correlation among the column vectors of the kernel matrix, the proposed algorithm uses a reduction method to measure this correlation and then screens support vectors from the samples to form the columns of the kernel matrix. The constructed kernel matrix therefore better approximates the original one, which ensures the sparsity of the solution. Secondly, the inverse matrix is updated incrementally by the block matrix inversion lemma, which further shortens the training time of the proposed algorithm. Finally, the feasibility and efficacy of the proposed algorithm are verified on benchmark datasets. Experimental results show that the IRLSTSVR algorithm obtains sparse solutions and that its generalization performance is closer to that of the offline algorithm than state-of-the-art alternatives.
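
    The incremental inverse update mentioned above can be illustrated with the block matrix inversion lemma: given inv(M), the inverse of the bordered matrix [[M, b], [b^T, d]] follows from the Schur complement s = d - b^T inv(M) b without inverting the enlarged matrix from scratch. The matrices below are random illustrations; the IRLSTSVR-specific kernel bookkeeping is omitted.

```python
# Incremental inverse update via the block matrix inversion lemma.
# The random 4x4 matrix is an illustrative stand-in for a kernel matrix.

import numpy as np

def bordered_inverse(M_inv, b, d):
    """Return inv([[M, b], [b.T, d]]) given M_inv = inv(M)."""
    u = M_inv @ b                              # shape (k,)
    s = d - b @ u                              # Schur complement (scalar)
    new_inv = np.empty((len(b) + 1, len(b) + 1))
    new_inv[:-1, :-1] = M_inv + np.outer(u, u) / s
    new_inv[:-1, -1] = -u / s
    new_inv[-1, :-1] = -u / s
    new_inv[-1, -1] = 1.0 / s
    return new_inv

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 5))
K = A @ A.T + np.eye(4)                        # well-conditioned symmetric 4x4 matrix
M, b, d = K[:3, :3], K[:3, 3], K[3, 3]
incremental = bordered_inverse(np.linalg.inv(M), b, d)
print(np.allclose(incremental, np.linalg.inv(K)))   # True
```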

    Reference | Related Articles | Metrics
    Abstract354
    PDF340
    Improved YSGA Algorithm Combining Declining Strategy and Fuch Chaotic Mechanism
    GAO Leifu, RONG Xuejiao
    Journal of Frontiers of Computer Science and Technology    2021, 15 (3): 564-576.   DOI: 10.3778/j.issn.1673-9418.2004036

    In order to enhance the search coverage and optimization accuracy of the yellow saddle goatfish algorithm (YSGA) and thereby strengthen its global exploration and local exploitation abilities, an improved algorithm, IYSGA (improved yellow saddle goatfish algorithm), is proposed by combining a declining step-size factor strategy with a chaotic local enhancement mechanism. Firstly, on the basis of the standard YSGA, a dynamic step-size factor schedule is designed to achieve an efficient and comprehensive search; this strategy helps improve the search efficiency of the algorithm and expand the scope of optimization. Secondly, a chaotic search mechanism performs local re-mining around the current optimal solution by exploiting the favorable chaotic characteristics of the Fuch map and its good local convergence behavior, thereby improving the local search performance of YSGA. Coupling the two mechanisms helps IYSGA maintain a dynamic, iteration-by-iteration balance between global exploration and local search. Finally, numerical experiments verify the superior optimization performance and robustness of the IYSGA algorithm.
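
    The sketch below combines the two ideas above: a step factor that declines linearly over iterations and a chaotic local search around the current best solution. Since the Fuch map's formula is not given in the abstract, the Logistic map is used here as a stand-in chaotic generator; the objective, bounds and schedules are illustrative assumptions.

```python
# Declining step factor + chaotic local search around the current best.
# Logistic map used as a stand-in for the Fuch map; parameters are illustrative.

import random

def sphere(x):
    return sum(v * v for v in x)

def declining_step(t, t_max, a_start=1.0, a_end=0.01):
    """Linearly shrink the step factor from a_start to a_end."""
    return a_start - (a_start - a_end) * t / t_max

def chaotic_local_search(best, step, lo, hi, chaos, rounds=10, mu=4.0):
    for _ in range(rounds):
        trial = []
        for v in best:
            chaos = mu * chaos * (1.0 - chaos)            # chaotic sequence in (0, 1)
            trial.append(min(max(v + step * (2 * chaos - 1), lo), hi))
        if sphere(trial) < sphere(best):
            best = trial                                   # keep the improvement
    return best, chaos

random.seed(3)
lo, hi, t_max = -5.0, 5.0, 50
best, chaos = [3.2, -1.7, 0.4], random.random()
for t in range(t_max):
    best, chaos = chaotic_local_search(best, declining_step(t, t_max), lo, hi, chaos)
print([round(v, 4) for v in best])        # best has (typically) moved toward the origin
```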

    Reference | Related Articles | Metrics
    Abstract502
    PDF354