Journal of Frontiers of Computer Science and Technology ›› 2022, Vol. 16 ›› Issue (1): 59-87. DOI: 10.3778/j.issn.1673-9418.2104020
• Surveys and Frontiers •
Graph Embedding Models: A Survey
YUAN Lining1, LI Xin2, WANG Xiaodong3, LIU Zhao4,+
Received: 2021-04-08
Revised: 2021-09-07
Online: 2022-01-01
Published: 2021-09-13
Corresponding author: + E-mail: liuzhao@ppsuc.edu.cn
About author: YUAN Lining, born in 1995, M.S. candidate, student member of CCF. His research interests include machine learning, graph neural networks, etc.
YUAN Lining, LI Xin, WANG Xiaodong, LIU Zhao. Graph Embedding Models: A Survey[J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(1): 59-87.
URL: http://fcst.ceaj.org/EN/10.3778/j.issn.1673-9418.2104020
Symbol | Description
---|---
A | Adjacency matrix
D | Degree matrix
L | Laplacian matrix
X | Feature matrix
S | Similarity matrix
W | Coefficient matrix
G | Static graph
$\mathcal{G}$ | Dynamic graph
V | Node set
E | Edge set
d | Embedding dimension
Y | Embedding representation

Table 1 Symbols and definitions
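For reference, the unnormalized and symmetrically normalized graph Laplacians used by the spectral models below can be written in terms of these symbols (a standard formulation, not tied to any single surveyed model):

$$L = D - A, \qquad L_{\mathrm{sym}} = I - D^{-1/2} A D^{-1/2}, \qquad D_{ii} = \sum_{j} A_{ij}$$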
Category | Model | Year | Strategy
---|---|---|---
Matrix factorization | LLE | 2000 | Constructs a neighborhood-preserving mapping and minimizes the reconstruction loss
Matrix factorization | GF | 2013 | Factorizes the adjacency matrix; inner products of embedding vectors capture the presence of edges
Matrix factorization | GraRep | 2015 | Applies SVD to the k-step log transition probability matrices
Matrix factorization | HOPE | 2016 | Decomposes the similarity matrix with GSVD; an L2-norm objective preserves high-order proximity
Matrix factorization | NGE | 2013 | Uses NMF to factorize the input into a coefficient matrix and an embedding matrix
Matrix factorization | LE | 2001 | Keeps connected nodes as close as possible in the embedding space
Matrix factorization | CGE | 2011 | Modifies the LE loss function to preserve the similarity of node pairs with low weights
Matrix factorization | SPE | 2009 | Uses a kernel matrix to generate the relation matrix
Random walk | DeepWalk | 2014 | Samples nodes with random walks; Skip-Gram maximizes node co-occurrence probabilities
Random walk | node2vec | 2016 | Introduces biased random walks on top of DeepWalk
Random walk | HARP | 2017 | Generates compressed graphs from the original graph that preserve its global structure
Random walk | Walklets | 2016 | Improves DeepWalk by capturing and modeling node-community membership
Random walk | TriDNR | 2016 | Maximizes the co-occurrence probabilities of node labels, node neighborhoods, and node content
Autoencoder | GraphEncoder | 2014 | Reconstructs the graph similarity matrix with an L2 loss function
Autoencoder | SDNE | 2016 | Uses supervised and unsupervised components to preserve first- and second-order proximity, respectively
Autoencoder | DNGR | 2016 | Captures graph structure by random surfing and feeds the resulting PPMI matrix into an SDAE
Autoencoder | DNE-APP | 2017 | Builds an aggregated proximity matrix from the PPMI measure and k-step transition matrices
Autoencoder | VGAE | 2016 | Introduces the VAE framework with a GCN encoder and an inner-product decoder
Autoencoder | GALA | 2020 | The encoder performs Laplacian smoothing and the decoder performs Laplacian sharpening
Autoencoder | ANE | 2018 | Applies adversarial regularization to avoid manifold fracturing
Graph neural network | GCN | 2016 | Uses a first-order approximation of spectral convolution to make layer-wise propagation efficient
Graph neural network | GraphSAGE | 2017 | Samples and aggregates local neighborhood features to train aggregator functions
Graph neural network | GAT | 2017 | Introduces self-attention and multi-head attention on top of GCN
Graph neural network | GIN | 2018 | Combines GNNs with the WL graph isomorphism test to preserve graph structure information
Graph neural network | MF-GCN | 2020 | Extracts node features with multiple local GCN filters
Graph neural network | GraphAIR | 2020 | A neighborhood aggregation module fuses node feature representations; a neighborhood interaction module models interactions explicitly via multiplication
Graph neural network | SDGNN | 2021 | Introduces status theory and balance theory into GNNs
Others | LINE | 2015 | Optimizes first- and second-order proximity separately and concatenates the resulting embeddings
Others | DNRE | 2018 | Directly reconstructs node embeddings by aggregating neighborhood information with an LSTM
Others | Graphormer | 2021 | Adds centrality encoding, spatial encoding, and edge encoding to the standard Transformer

Table 2 Strategies of static graph embedding
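As a concrete illustration of the random-walk family in Table 2, the sketch below pairs truncated random walks with Skip-Gram, in the spirit of DeepWalk. It is a minimal reading of the idea, assuming networkx and gensim ≥ 4.0 are available, not the authors' reference implementation; walk length, number of walks, and dimensionality are illustrative defaults.

```python
import random
import networkx as nx
from gensim.models import Word2Vec  # provides the Skip-Gram objective (sg=1)

def random_walks(G, num_walks=10, walk_length=40, seed=0):
    """Generate truncated random walks; each walk becomes one 'sentence'."""
    rng = random.Random(seed)
    walks, nodes = [], list(G.nodes())
    for _ in range(num_walks):
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(G.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append([str(v) for v in walk])  # gensim expects string tokens
    return walks

G = nx.karate_club_graph()                       # small toy graph
walks = random_walks(G)
model = Word2Vec(walks, vector_size=64, window=5, min_count=0,
                 sg=1, workers=1, epochs=5)      # Skip-Gram over the walk corpus
embedding = {v: model.wv[str(v)] for v in G.nodes()}  # d=64 vector per node
```

The embeddings can then be fed to any downstream classifier or link predictor, which is how the results in Table 6 and Table 7 are typically obtained.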
Category | Model | Year | Strategy
---|---|---|---
Matrix factorization | DANE | 2017 | Laplacian Eigenmaps capture the structure and attribute information at time t; matrix perturbation theory updates the dynamic information
Matrix factorization | DHPE | 2018 | Decomposes the Katz matrix at each time step with GSVD; matrix perturbation theory updates the dynamic information
Matrix factorization | TRIP | 2015 | Builds eigen-functions from the number of triangles in the graph and maps eigen-pairs to embedding vectors
Matrix factorization | TIMERS | 2017 | Restarts SVD at a maximum error bound to eliminate the error accumulated by incremental updates
Matrix factorization | DWSF | 2017 | Incorporates the limited supervision in the graph as labels and updates them at each iteration
Random walk | CTDNE | 2018 | Traverses edges in temporal order
Random walk | dynnode2vec | 2018 | Initializes snapshots with node2vec, performs random walks on changed nodes, and updates the dynamic information with Skip-Gram
Random walk | STWalk | 2017 | Captures node changes within a specified time window
Random walk | tNodeEmbed | 2019 | Treats node embedding analogously to sentence embedding, capturing dynamic changes in node roles and edges
Autoencoder | DynGEM | 2018 | Proposes PropSize to dynamically adjust the number of neurons and introduces both L1 and L2 regularization
Autoencoder | dyngraph2vec | 2019 | Combines AE and LSTM to build different encoder-decoder combinations
Autoencoder | NetWalk | 2018 | Jointly minimizes node distances and the autoencoder reconstruction error
Autoencoder | BurstGraph | 2019 | Splits dynamic evolution into regular and bursty evolution; an RNN captures the graph structure at each time step
Autoencoder | HVGNN | 2021 | Adopts a hyperbolic VGAE based on a TGNN
Graph neural network | DyRep | 2018 | Updates node representations via self-propagation, exogenous drive, and localized embedding propagation
Graph neural network | DySAT | 2020 | Structural attention extracts neighborhood features; temporal attention captures representations across multiple time steps
Graph neural network | EvolveGCN | 2019 | Uses an RNN to adapt the GCN parameters at each time step
Graph neural network | DGNN | 2020 | Uses a temporally-aware LSTM as the update framework
Graph neural network | TemporalGAT | 2020 | Integrates GAT and TCN; self-attention is applied to neighborhoods and the TCN updates the dynamic information
Others | HTNE | 2018 | A Hawkes process captures the influence of past time steps on the current neighborhood; Skip-Gram updates the dynamic information
Others | DynamicTriad | 2018 | Models the dynamic evolution of the graph through closed and open triads
Others | M2DNE | 2019 | Micro-dynamics describe how structure forms and macro-dynamics describe how scale evolves; their interaction generates the embeddings
Others | CAW | 2021 | Traces temporally adjacent links back over multiple time steps to encode causality, and encodes node identities by counting feature positions

Table 3 Strategies of dynamic graph embedding
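To make the temporal-walk idea behind models such as CTDNE in Table 3 concrete, the following sketch samples walks whose edge timestamps are non-decreasing; the resulting walks could be fed to the same Skip-Gram step shown after Table 2. It is an illustrative simplification, assuming edges are given as (u, v, t) triples, not the published algorithm.

```python
import random
from collections import defaultdict

def temporal_walks(edges, num_walks=10, walk_length=20, seed=0):
    """Sample walks that only traverse edges in non-decreasing time order."""
    rng = random.Random(seed)
    adj = defaultdict(list)            # node -> list of (neighbor, timestamp)
    for u, v, t in edges:
        adj[u].append((v, t))
        adj[v].append((u, t))          # treat the graph as undirected
    walks = []
    for u, v, t in rng.sample(edges, min(num_walks, len(edges))):
        walk, current, t_prev = [u, v], v, t
        while len(walk) < walk_length:
            later = [(w, tw) for w, tw in adj[current] if tw >= t_prev]
            if not later:              # no temporally valid continuation
                break
            current, t_prev = rng.choice(later)
            walk.append(current)
        walks.append([str(n) for n in walk])
    return walks

edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (1, 3, 5), (3, 4, 6)]  # toy (u, v, t) list
print(temporal_walks(edges, num_walks=3, walk_length=5))
```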
Statistic | 20-NewsGroup | Flickr | DBLP | YouTube | Wikipedia | Cora | CiteSeer | Pubmed | Yelp
---|---|---|---|---|---|---|---|---|---
#Nodes | 1 720, 3 224, 5 141 | 80 513 | 29 199 | 1 138 499 | 2 405 | 2 708 | 3 327 | 19 717 | 6 569
#Edges | Fully connected | 5 899 882 | 133 664 | 2 990 443 | 17 981 | 5 429 | 4 732 | 44 338 | 95 361
#Features | — | — | — | — | 4 973 | 1 433 | 3 703 | 500 | —
#Labels | 3, 6, 9 | 195 | 4 | 47 | 17 | 7 | 6 | 3 | —

Table 4 Statistics of static graph datasets
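The node and edge counts reported in Table 4 and Table 5 can be reproduced for any edge-list dataset along the following lines; the file name mentioned in the comment is a hypothetical placeholder, and the inline edge list only keeps the example self-contained.

```python
import networkx as nx

# For a real dataset one would typically load an edge list, e.g.:
#   G = nx.read_edgelist("cora.edgelist", nodetype=int)
# A tiny in-memory edge list is used here instead.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
G = nx.Graph(edges)

print("#Nodes:", G.number_of_nodes())
print("#Edges:", G.number_of_edges())
print("Average degree:", 2 * G.number_of_edges() / G.number_of_nodes())
```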
Statistic | Epinions | Hep-th | AS | Enron | UCI
---|---|---|---|---|---
#Nodes | 14 180 | 1 424~7 980 | 7 716 | 184 | 1 809
#Edges | 227 642 | 2 556~21 036 | 10 695~26 467 | 63~591 | 16 822
#Features | 9 936 | — | — | — | —
#Labels | 20 | — | — | — | —
#Time Steps | 16 | 60 | 100 | 128 | 13

Table 5 Statistics of dynamic graph datasets
Index | DeepWalk | node2vec | LINE | DANE | tNodeEmbed | CTDNE | HTNE | DynamicTriad | M2DNE
---|---|---|---|---|---|---|---|---|---
micro-F1 | 69.65 | 75.20 | 67.56 | 74.53 | 82.20 | 71.70 | 71.40 | 71.10 | 69.75
macro-F1 | 72.37 | 23.50 | 70.97 | 75.69 | 50.40 | 8.30 | 6.90 | 5.50 | 69.71

Table 6 Performance comparison of dynamic graph node classification (%)
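The micro-F1 and macro-F1 values in Table 6 follow the usual multi-class definitions; with scikit-learn (ref. [143]) they can be computed as below, where y_true and y_pred stand for the test labels and the predictions of a downstream classifier trained on the learned embeddings. The labels shown are toy values for illustration only.

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0]   # toy ground-truth labels
y_pred = [0, 2, 2, 2, 1, 0]   # toy predictions from a downstream classifier

micro_f1 = f1_score(y_true, y_pred, average="micro")   # global precision/recall
macro_f1 = f1_score(y_true, y_pred, average="macro")   # unweighted mean over classes
print(f"micro-F1 = {micro_f1:.4f}, macro-F1 = {macro_f1:.4f}")
```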
Index | DeepWalk | GAE | VGAE | Linear-GAE | Linear-VGAE | GALA | GraphSAGE | MF-GCN | GraphAIR
---|---|---|---|---|---|---|---|---|---
AUC | 80.5 | 89.5 | 90.8 | 91.5 | 91.6 | 94.4 | 93.7 | 92.4 | 95.0
AP | 83.6 | 89.9 | 92.0 | 92.9 | 93.1 | 94.8 | — | — | —

Table 7 Performance comparison of link prediction (%)
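Similarly, the AUC and AP scores in Table 7 come from scoring held-out positive and sampled negative node pairs. A common choice, used here only as an illustration with randomly generated embeddings, is the inner product of the two node vectors as the link score.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def link_scores(emb, pairs):
    """Inner-product score for each candidate edge (u, v)."""
    return np.array([emb[u] @ emb[v] for u, v in pairs])

# emb: dict node -> d-dimensional vector from any model above (random toy vectors here)
rng = np.random.default_rng(0)
emb = {v: rng.normal(size=64) for v in range(100)}
pos_pairs = [(0, 1), (2, 3), (4, 5)]          # held-out true edges
neg_pairs = [(0, 9), (2, 7), (4, 8)]          # sampled non-edges

scores = np.concatenate([link_scores(emb, pos_pairs), link_scores(emb, neg_pairs)])
labels = np.concatenate([np.ones(len(pos_pairs)), np.zeros(len(neg_pairs))])
print("AUC:", roc_auc_score(labels, scores))
print("AP:", average_precision_score(labels, scores))
```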
[1] | SONG Y M, GU Y, LI F F, et al. Survey on AI powered new techniques for query processing and optimization[J]. Journal of Frontiers of Computer Science and Technology, 2020, 14(7):1081-1103. |
[2] | TRONCOSO F, WEBER R. A novel approach to detect associations in criminal networks[J]. Decision Support Systems, 2020, 128:113159. |
[3] | GUO S N, LIN Y F, FENG N, et al. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting[C]// Proceedings of the 33rd AAAI Conference on Artificial Intelligence, the 31st Innovative Applications of Artificial Intelligence Conference, the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, Honolulu, Jan 27-Feb 1, 2019. Menlo Park: AAAI, 2019: 922-929. |
[4] | GU J X, WANG Z H, KUEN J, et al. Recent advances in convolutional neural networks[J]. Pattern Recognition, 2018, 77:354-377. |
[5] | SALEHINEJAD H, SANKAR S, BARFETT J, et al. Recent advances in recurrent neural networks[J]. arXiv:1801.01078, 2017. |
[6] | LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553):436-444. |
[7] | BHAGAT S, CORMODE G, MUTHUKRISHNAN S. Node classification in social networks[M]//AGGARWAL C C. Social Network Data Analytics. Berlin, Heidelberg: Springer, 2011: 115-148. |
[8] | LIBEN-NOWELL D, KLEINBERG J. The link-prediction problem for social networks[J]. Journal of the American Society for Information Science and Technology, 2007, 58(7):1019-1031. |
[9] | DING C H Q, HE X F, ZHA H Y, et al. A min-max cut algorithm for graph partitioning and data clustering[C]// Proceedings of the 2001 IEEE International Conference on Data Mining, San Jose, Nov 29-Dec 2, 2001. Washington:IEEE Computer Society, 2001: 107-114. |
[10] | VAN DER MAATEN L, HINTON G. Visualizing data using t-SNE[J]. Journal of Machine Learning Research, 2008, 9(11):2579-2605. |
[11] | QIU J Z, TANG J, MA H, et al. DeepInf: social influence prediction with deep learning[C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, Aug 19-23, 2018. New York: ACM, 2018: 2110-2119. |
[12] | SILVEIRA T, ZHANG M, LIN X, et al. How good your recommender system is? A survey on evaluations in recommendation[J]. International Journal of Machine Learning and Cybernetics, 2019, 10(5):813-831. |
[13] | AHMED A, SHERVASHIDZE N, NARAYANAMURTHY S M, et al. Distributed large-scale natural graph factorization[C]// Proceedings of the 22nd International World Wide Web Conference, Rio de Janeiro, May 13-17, 2013. New York: ACM, 2013: 37-48. |
[14] | LOVÁSZ L. Random walks on graphs: a survey[J]. Combinatorics, Paul Erdös is Eighty, 1993, 2(1):1-46. |
[15] | GOLDBERG Y, LEVY O. word2vec explained: deriving Mikolov et al.’s negative-sampling word-embedding method[J]. arXiv:1402.3722, 2014. |
[16] | CAI H, ZHENG V W, CHANG K C. A comprehensive survey of graph embedding: problems, techniques, and applications[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 30(9):1616-1637. |
[17] | GOYAL P, FERRARA E. Graph embedding techniques, applications, and performance: a survey[J]. Knowledge-Based Systems, 2018, 151:78-94. |
[18] | CHEN F X, WANG Y C, WANG B, et al. Graph representation learning: a survey[J]. APSIPA Transactions on Signal and Information Processing, 2020, 9:e15. |
[19] | CUI P, WANG X, PEI J, et al. A survey on network embedding[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 31(5):833-852. |
[20] | XIE Y, LI C, YU B, et al. A survey on dynamic network embedding[J]. arXiv:2006.08093, 2020. |
[21] | SKARDINGA J, GABRYS B, MUSIAL K. Foundations and modelling of dynamic networks using dynamic graph neural networks: a survey[J]. IEEE Access, 2021, 9:79143-79168. |
[22] | NG A. Sparse autoencoder[J]. CS294A Lecture Notes, 2011, 72:1-19. |
[23] | SCARSELLI F, GORI M, TSOI A C, et al. The graph neural network model[J]. IEEE Transactions on Neural Networks, 2008, 20(1):61-80. |
[24] | ROWEIS S T, SAUL L K. Nonlinear dimensionality reduction by locally linear embedding[J]. Science, 2000, 290(5500):2323-2326. |
[25] | CAO S S, LU W, XU Q K. GraRep: learning graph representations with global structural information[C]// Proceedings of the 24th ACM International Conference on Information and Knowledge Management, Melbourne, Oct 19-23, 2015. New York: ACM, 2015: 891-900. |
[26] | LEVY O, GOLDBERG Y. Neural word embedding as implicit matrix factorization[C]// Proceedings of the Annual Conference on Neural Information Processing Systems 2014, Montreal, Dec 8-13, 2014. Red Hook: Curran Associates, 2014: 2177-2185. |
[27] | OU M D, CUI P, PEI J, et al. Asymmetric transitivity preserving graph embedding[C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, Aug 13-17, 2016. New York: ACM, 2016: 1105-1114. |
[28] | KATZ L. A new status index derived from sociometric analysis[J]. Psychometrika, 1953, 18(1):39-43. |
[29] | PAIGE C C, SAUNDERS M A. Towards a generalized singular value decomposition[J]. SIAM Journal on Numerical Analysis, 1981, 18(3):398-405. |
[30] | YANG J C, YANG S C, FU Y, et al. Non-negative graph embedding[C]// Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, Jun 24-26, 2008. Washington: IEEE Computer Society, 2008: 1-8. |
[31] | LEE D D, SEUNG H S. Learning the parts of objects by non-negative matrix factorization[J]. Nature, 1999, 401(6755):788-791. |
[32] | BELKIN M, NIYOGI P. Laplacian Eigenmaps and spectral techniques for embedding and clustering[C]// Proceedings of the Neural Information Processing Systems: Natural and Synthetic, Vancouver, Dec 3-8, 2001. Cambridge: MIT Press, 2001: 585-591. |
[33] | LUO D J, DING C H Q, NIE F P, et al. Cauchy graph embedding[C]// Proceedings of the 28th International Conference on Machine Learning, Bellevue, Jun 28-Jul 2, 2011. Madison: Omnipress, 2011: 553-560. |
[34] | SHAW B, JEBARA T. Structure preserving embedding[C]// Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, Jun 14-18, 2009. New York: ACM, 2009: 937-944. |
[35] | LI R C. Matrix perturbation theory[M]//HOGBEN L. 2nd ed. Handbook of Linear Algebra. Boca Raton: CRC Press, 2013. |
[36] | LI J D, DANI H, HU X, et al. Attributed network embedding for learning in a dynamic environment[C]// Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, Singapore, Nov 6-10, 2017. New York: ACM, 2017: 387-396. |
[37] | HARDOON D R, SZEDMAK S, SHAWE-TAYLOR J. Canonical correlation analysis: an overview with application to learning methods[J]. Neural Computation, 2004, 16(12):2639-2664. |
[38] | ZHU D Y, CUI P, ZHANG Z W, et al. High-order proximity preserved embedding for dynamic networks[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 30(11):2134-2144. |
[39] | STRANG G. Introduction to linear algebra[M]. Wellesley: Wellesley-Cambridge Press, 1993. |
[40] | CHEN C, TONG H H. Fast Eigen-functions tracking on dynamic graphs[C]// Proceedings of the 2015 SIAM International Conference on Data Mining, Vancouver, Apr 30-May 2, 2015. Philadelphia: SIAM, 2015: 559-567. |
[41] | TSOURAKAKIS C E. Fast counting of triangles in large real networks without counting: algorithms and laws[C]// Proceedings of the 8th IEEE International Conference on Data Mining, Pisa, Dec 15-19, 2008. Washington: IEEE Computer Society, 2008: 608-617. |
[42] | TSOURAKAKIS C E. Counting triangles in real-world networks using projections[J]. Knowledge and Information Systems, 2011, 26(3):501-520. |
[43] | ZHANG Z W, CUI P, PEI J, et al. Timers: error-bounded SVD restart on dynamic networks[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence, the 30th Innovative Applications of Artificial Intelligence, and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, Feb 2-7, 2018. Menlo Park: AAAI, 2018: 224-231. |
[44] | AHMED N M, CHEN L, WANG Y L, et al. DeepEye: link prediction in dynamic networks based on non-negative matrix factorization[J]. Big Data Mining and Analytics, 2018, 1(1):19-33. |
[45] | SEYEDI S A, MORADI P, TAB F A. A weakly-supervised factorization method with dynamic graph embedding[C]// Proceedings of the 2017 Artificial Intelligence and Signal Processing Conference, Shiraz, Oct 25-27, 2017. Piscataway: IEEE, 2017: 213-218. |
[46] | WANG Y X, ZHANG Y J. Nonnegative matrix factorization: a comprehensive review[J]. IEEE Transactions on Knowledge and Data Engineering, 2012, 25(6):1336-1353. |
[47] | TRIGEORGIS G, BOUSMALIS K, ZAFEIRIOU S, et al. A deep matrix factorization method for learning attribute representations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 39(3):417-429. |
[48] | LIU Y, LEE J, PARK M, et al. Learning to propagate labels: transductive propagation network for few-shot learning[J]. arXiv:1805.10002, 2018. |
[49] | PEROZZI B, AL-RFOU R, SKIENA S. DeepWalk: online learning of social representations[C]// Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, Aug 24-27, 2014. New York: ACM, 2014: 701-710. |
[50] | GROVER A, LESKOVEC J. node2vec: scalable feature learning for networks[C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, Aug 13-17, 2016. New York: ACM, 2016: 855-864. |
[51] | CHEN H C, PEROZZI B, HU Y F, et al. Harp: hierarchical representation learning for networks[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence, the 30th Innovative Applications of Artificial Intelligence, and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, Feb 2-7, 2018. Menlo Park: AAAI, 2018: 2127-2134. |
[52] | YANG C, LIU Z Y. Comprehend deepwalk as matrix factorization[J]. arXiv:1501.00358, 2015. |
[53] | PEROZZI B, KULKARNI V, SKIENA S. Walklets: multiscale graph embeddings for interpretable network classification[J]. arXiv:1605.02115, 2016. |
[54] | PAN S R, WU J, ZHU X Q, et al. Tri-party deep network representation[C]// Proceedings of the 25th International Joint Conference on Artificial Intelligence, New York, Jul 9-15, 2016. Menlo Park: AAAI, 2016: 1895-1901. |
[55] | SAJJAD H P, DOCHERTY A, TYSHETSKIY Y. Efficient representation learning using random walks for dynamic graphs[J]. arXiv:1901.01346, 2019. |
[56] | NGUYEN G H, LEE J B, ROSSI R A, et al. Continuous-time dynamic network embeddings[C]// Companion Proceedings of the Web Conference, Lyon, Apr 23-27, 2018. New York: ACM, 2018: 969-976. |
[57] | MAHDAVI S, KHOSHRAFTAR S, AN A J. Dynnode2vec: scalable dynamic network embedding[C]// Proceedings of the 2018 IEEE International Conference on Big Data, Seattle, Dec 10-13, 2018. Piscataway: IEEE, 2018: 3762-3765. |
[58] | KIM Y, CHIU Y I, HANAKI K, et al. Temporal analysis of language through neural language models[J]. arXiv:1405.3515, 2014. |
[59] | PANDHRE S, MITTAL H, GUPTA M, et al. STWalk: learning trajectory representations in temporal graphs[C]// Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, Goa, Jan 11-13, 2018. New York: ACM, 2018: 210-219. |
[60] | SINGER U, GUY I, RADINSKY K. Node embedding over temporal graphs[J]. arXiv:1903.08889, 2019. |
[61] | PALANGI H, DENG L, SHEN Y L, et al. Deep sentence embedding using the long short term memory network: analysis and application to information retrieval[J]. arXiv:1502.06922, 2015. |
[62] | BOURLARD H, KAMP Y. Auto-association by multilayer perceptrons and singular value decomposition[J]. Biological Cybernetics, 1988, 59(4):291-294. |
[63] | TIAN F, GAO B, CUI Q, et al. Learning deep representations for graph clustering[C]// Proceedings of the 28th AAAI Conference on Artificial Intelligence, Québec City, Jul 27 -31, 2014. Menlo Park: AAAI, 2014: 1293-1299. |
[64] | WANG D X, CUI P, ZHU W W. Structural deep network embedding[C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, Aug 13-17, 2016. New York: ACM, 2016: 1225-1234. |
[65] | CAO S S, LU W, XU Q K. Deep neural networks for learning graph representations[C]// Proceedings of the 30th AAAI Conference on Artificial Intelligence, Phoenix, Feb 12-17, 2016. Menlo Park: AAAI, 2016: 1145-1152. |
[66] | BULLINARIA J A, LEVY J P. Extracting semantic representations from word co-occurrence statistics: a computational study[J]. Behavior Research Methods, 2007, 39(3):510-526. |
[67] | SHEN X, CHUNG F L. Deep network embedding with aggregated proximity preserving[C]// Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Sydney, Jul 31-Aug 3, 2017. New York: ACM, 2017: 40-43. |
[68] | KINGMA D P, WELLING M. Auto-encoding variational Bayes[J]. arXiv:1312.6114, 2013. |
[69] | KIPF T N, WELLING M. Variational graph auto-encoders[J]. arXiv:1611.07308, 2016. |
[70] | KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[J]. arXiv:1609.02907, 2016. |
[71] | KULLBACK S, LEIBLER R A. On information and sufficiency[J]. The Annals of Mathematical Statistics, 1951, 22(1):79-86. |
[72] | SALHA G, HENNEQUIN R, VAZIRGIANNIS M. Keep it simple: graph autoencoders without graph convolutional networks[J]. arXiv:1910.00942, 2019. |
[73] | PARK J, LEE M, CHANG H J, et al. Symmetric graph convolutional autoencoder for unsupervised graph representation learning[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, Seoul, Oct 27-Nov 2, 2019. Piscataway: IEEE, 2019: 6518-6527. |
[74] | CAI D, HE X F, HU Y X, et al. Learning a spatially smooth subspace for face recognition[C]// Proceedings of the 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Minneapolis, Jun 18-23, 2007.Washington: IEEE Computer Society, 2007: 1-7. |
[75] | TAUBIN G. A signal processing approach to fair surface design[C]// Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, Aug 6-11, 1995. New York: ACM, 1995: 351-358. |
[76] | XIAO Y, XIAO D, HU B B, et al. ANE: network embedding via adversarial autoencoders[C]// Proceedings of the 2018 IEEE International Conference on Big Data and Smart Computing, Shanghai, Jan 15-17, 2018. Washington: IEEE Computer Society, 2018: 66-73. |
[77] | MAKHZANI A, SHLENS J, JAITLY N, et al. Adversarial autoencoders[J]. arXiv:1511.05644, 2015. |
[78] | GOYAL P, KAMRA N, HE X, et al. DynGEM: deep embedding method for dynamic graphs[J]. arXiv:1805.11273, 2018. |
[79] | GOYAL P, CHHETRI S R, CANEDO A. Dyngraph2vec: capturing network dynamics using dynamic graph representation learning[J]. Knowledge-Based Systems, 2020, 187:104816. |
[80] | HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8):1735-1780. |
[81] | YU W C, CHENG W, AGGARWAL C C, et al. NetWalk: a flexible deep embedding approach for anomaly detection in dynamic networks[C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, Aug 19-23, 2018. New York: ACM, 2018: 2672-2681. |
[82] | AKOGLU L, FALOUTSOS C. Anomaly, event, and fraud detection in large network datasets[C]// Proceedings of the 6th ACM International Conference on Web Search and Data Mining, Rome, Feb 4-8, 2013. New York: ACM, 2013: 773-774. |
[83] | AKOGLU L, TONG H H, KOUTRA D. Graph based anomaly detection and description: a survey[J]. Data Mining and Knowledge Discovery, 2015, 29(3):626-688. |
[84] | ZHAO Y F, WANG X W, YANG H X, et al. Large scale evolving graphs with burst detection[C]// Proceedings of the 28th International Joint Conference on Artificial Intelligence, Macao, China, Aug 10-16, 2019: 4412-4418. |
[85] | HAMILTON W L, YING R, LESKOVEC J. Inductive representation learning on large graphs[J]. arXiv:1706.02216, 2017. |
[86] | MITCHELL T J, BEAUCHAMP J J. Bayesian variable selection in linear regression[J]. Journal of the American Statistical Association, 1988, 83(404):1023-1032. |
[87] | HEARD N A, WESTON D J, PLATANIOTI K, et al. Bayesian anomaly detection methods for social networks[J]. The Annals of Applied Statistics, 2010: 645-662. |
[88] | KLEINBERG J. Bursty and hierarchical structure in streams[J]. Data Mining and Knowledge Discovery, 2003, 7(4):373-397. |
[89] | CHEN W, FANG W, HU G, et al. On the hyperbolicity of small-world and treelike random graphs[J]. Internet Mathematics, 2013, 9(4):434-491. |
[90] | GANEA O E, BÉCIGNEUL G, HOFMANN T. Hyperbolic neural networks[J]. arXiv:1805.09112, 2018. |
[91] | ZHU D Y, CUI P, WANG D X, et al. Deep variational network embedding in Wasserstein space[C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, Aug 19-23, 2018. New York: ACM, 2018: 2827-2836. |
[92] | SUN L, ZHANG Z B, ZHANG J W, et al. Hyperbolic variational graph neural network for modeling dynamic graphs[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence, the 33rd Conference on Innovative Applications of Artificial Intelligence, the 11th Symposium on Educational Advances in Artificial Intelligence, Virtual Event, Feb 2-9, 2021. Menlo Park: AAAI, 2021: 4375-4383. |
[93] | HAMMOND D K, VANDERGHEYNST P, GRIBONVAL R. Wavelets on graphs via spectral graph theory[J]. Applied and Computational Harmonic Analysis, 2011, 30(2):129-150. |
[94] | DEFFERRARD M, BRESSON X, VANDERGHEYNST P. Convolutional neural networks on graphs with fast localized spectral filtering[J]. arXiv:1606.09375, 2016. |
[95] | VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[J]. arXiv:1710.10903, 2017. |
[96] | CHAUDHARI S, MITHAL V, POLATKAN G, et al. An attentive survey of attention models[J]. arXiv:1904.02874, 2019. |
[97] | VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. arXiv:1706.03762, 2017. |
[98] | XU K, HU W, LESKOVEC J, et al. How powerful are graph neural networks?[J]. arXiv:1810.00826, 2018. |
[99] | LEMAN A A, WEISFEILER B. A reduction of a graph to a canonical form and an algebra arising during this reduction[J]. Nauchno-Technicheskaya Informatsiya, 1968, 2(9):12-16. |
[100] | WANYAN T, ZHANG C, AZAD A, et al. Attribute2vec: deep network embedding through multi-filtering GCN[J]. arXiv:2004.01375, 2020. |
[101] | WU F, SOUZA A H, ZHANG T Y, et al. Simplifying graph convolutional networks[C]// Proceedings of the 36th International Conference on Machine Learning, Long Beach, Jun 9-15, 2019: 6861-6871. |
[102] | HU F Y, ZHU Y Q, WU S, et al. Graphair: graph representation learning with neighborhood aggregation and interaction[J]. Pattern Recognition, 2021, 112:107745. |
[103] | HUANG J, SHEN H, HOU L, et al. SDGNN: learning node representation for signed directed networks[J]. arXiv:2101.02390, 2021. |
[104] | LESKOVEC J, HUTTENLOCHER D, KLEINBERG J M. Predicting positive and negative links in online social networks[C]// Proceedings of the 19th International Conference on World Wide Web, Raleigh, Apr 26-30, 2010. New York: ACM, 2010: 641-650. |
[105] | LESKOVEC J, HUTTENLOCHER D P, KLEINBERG J M. Signed networks in social media[C]// Proceedings of the 28th International Conference on Human Factors in Computing Systems, Atlanta, Apr 10-15, 2010. New York:ACM, 2010: 1361-1370. |
[106] | TRIVEDI R, FARAJTABAR M, BISWAL P, et al. Dyrep: learning representations over dynamic graphs[C]// Proceedings of the 2019 International Conference on Learning Representations, 2019. |
[107] | ZHANG J, SHI X, XIE J, et al. GaAN: gated attention networks for learning on large and spatiotemporal graphs[J]. arXiv:1803.07294, 2018. |
[108] | SANKAR A, WU Y H, GOU L, et al. DySAT: deep neural representation learning on dynamic graphs via self-attention networks[C]// Proceedings of the 13th ACM International Conference on Web Search and Data Mining, Houston, Feb 3-7, 2020. New York: ACM, 2020: 519-527. |
[109] | GEHRING J, AULI M, GRANGIER D, et al. Convolutional sequence to sequence learning[C]// Proceedings of the 34th International Conference on Machine Learning, Sydney, Aug 6-11, 2017. New York: ACM, 2017: 1243-1252. |
[110] | PAREJA A, DOMENICONI G, CHEN J, et al. EvolveGCN: evolving graph convolutional networks for dynamic graphs[C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence, the 32nd Innovative Applications of Artificial Intelligence Conference, the 10th AAAI Symposium on Educational Advances in Artificial Intelligence, New York, Feb 7-12, 2020. Menlo Park: AAAI, 2020: 5363-5370. |
[111] | MA Y, GUO Z Y, REN Z C, et al. Streaming graph neural networks[C]// Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 719-728. |
[112] | FATHY A, LI K. TemporalGAT: attention-based dynamic graph representation learning[C]// LNCS 12084: Proceedings of the 24th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Singapore, May 11-14, 2020. Cham: Springer, 2020: 413-423. |
[113] | BAI S J, KOLTER J Z, KOLTUN V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling[J]. arXiv:1803.01271, 2018. |
[114] | VAN DEN OORD A, DIELEMAN S, ZEN H, et al. Wavenet: a generative model for raw audio[J]. arXiv:1609.03499, 2016. |
[115] | SANKAR A, WU Y, GOU L, et al. Dynamic graph representation learning via self-attention networks[J]. arXiv:1812.09430, 2018. |
[116] | TANG J, QU M, WANG M Z, et al. LINE: large-scale information network embedding[C]// Proceedings of the 24th International Conference on World Wide Web, Florence, May 18-22, 2015. New York: ACM, 2015: 1067-1077. |
[117] | TU K, CUI P, WANG X, et al. Deep recursive network embedding with regular equivalence[C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, Aug 19-23, 2018. New York: ACM, 2018: 2357-2366. |
[118] | PAGE L, BRIN S, MOTWANI R, et al. The PageRank citation ranking: bringing order to the Web[R]. Stanford InfoLab, 1999. |
[119] | YING C, CAI T, LUO S, et al. Do transformers really perform bad for graph representation?[J]. arXiv:2106.05234, 2021. |
[120] | ZUO Y, LIU G N, LIN H, et al. Embedding temporal network via neighborhood formation[C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, London, Aug 19-23, 2018. New York: ACM, 2018: 2857-2866. |
[121] | HAWKES A G. Spectra of some self-exciting and mutually exciting point processes[J]. Biometrika, 1971, 58(1):83-90. |
[122] | ZHOU L K, YANG Y, REN X, et al. Dynamic network embedding by modeling triadic closure process[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence, the 30th Innovative Applications of Artificial Intelligence, and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, Feb 2-7, 2018. Menlo Park: AAAI, 2018: 571-578. |
[123] | COLEMAN J S. Foundations of social theory[M]. Cambridge: Harvard University Press, 1994. |
[124] | HUANG H, TANG J, LIU L, et al. Triadic closure pattern analysis and prediction in social networks[J]. IEEE Transactions on Knowledge and Data Engineering, 2015, 27(12):3374-3389. |
[125] | LU Y F, WANG X, SHI C, et al. Temporal network embedding with micro- and macro-dynamics[C]// Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, Nov 3-7, 2019. New York: ACM, 2019: 469-478. |
[126] | WANG Y, CHANG Y Y, LIU Y, et al. Inductive representation learning in temporal networks via causal anonymous walks[J]. arXiv:2101.05974, 2021. |
[127] | MICALI S, ZHU Z A. Reconstructing Markov processes from independent and anonymous experiments[J]. Discrete Applied Mathematics, 2016, 200:108-122. |
[128] | AHMED N K, NEVILLE J, ROSSI R A, et al. Efficient graphlet counting for large networks[C]// Proceedings of the 2015 IEEE International Conference on Data Mining, Atlantic City, Nov 14-17, 2015. Washington: IEEE Computer Society, 2015: 1-10. |
[129] | PARANJAPE A, BENSON A R, LESKOVEC J. Motifs in temporal networks[C]// Proceedings of the 10th ACM International Conference on Web Search and Data Mining, Cambridge, Feb 6-10, 2017. New York: ACM, 2017: 601-610. |
[130] | HORNIK K, STINCHCOMBE M B, WHITE H. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks[J]. Neural Networks, 1990, 3(5):551-560. |
[131] | ZAFARANI R, LIU H. Social computing data repository[M]. Tempe: Arizona State University, 2009. |
[132] | TANG L, LIU H. Scalable learning of collective behavior based on sparse social dimensions[C]// Proceedings of the 18th ACM Conference on Information and Knowledge Management, Hong Kong, China, Nov 2-6, 2009. New York: ACM, 2009: 1107-1116. |
[133] | YANG C, LIU Z Y, ZHAO D L, et al. Network representation learning with rich text information[C]// Proceedings of the 24th International Joint Conference on Artificial Intelligence, Buenos Aires, Jul 25-31, 2015. Menlo Park: AAAI, 2015: 2111-2117. |
[134] | SEN P, NAMATA G, BILGIC M, et al. Collective classi-fication in network data[J]. AI Magazine, 2008, 29(3):93. |
[135] | ZENG H Q, ZHOU H K, SRIVASTAVA A, et al. Graphsaint: graph sampling based inductive learning method[J]. arXiv:1907.04931, 2019. |
[136] | TANG J L, GAO H J, LIU H. mTrust: discerning multi-faceted trust in a connected world[C]// Proceedings of the 5th International Conference on Web Search and Data Mining, Seattle, Feb 8-12, 2012. New York: ACM, 2012: 93-102. |
[137] | GEHRKE J, GINSPARG P, KLEINBERG J M. Overview of the 2003 KDD Cup[J]. SIGKDD Explorations, 2003, 5(2):149-151. |
[138] | LESKOVEC J, KREVL A. SNAP datasets: Stanford large network dataset collection[EB/OL]. [2021-02-10]. http://snap.stanford.edu/data/. |
[139] | KLIMT B, YANG Y M. The enron corpus: a new dataset for email classification research[C]// LNCS 3201: Proceedings of the 15th European Conference on Machine Learning, Pisa, Sep 20-24, 2004. Berlin, Heidelberg: Springer, 2004: 217-226. |
[140] | PANZARASA P, OPSAHL T, CARLEY K M. Patterns and dynamics of users’ behavior and interaction: network analysis of an online community[J]. Journal of the American Society for Information Science and Technology, 2009, 60(5):911-932. |
[141] | KUMARAN G, ALLAN J. Adapting information retrieval systems to user queries[J]. Information Processing & Management, 2008, 44(6):1838-1862. |
[142] | HOSMER D W, LEMESHOW S, STURDIVANT R X. Applied logistic regression[M]. New York: John Wiley & Sons, Inc., 2000. |
[143] | PEDREGOSA F, VAROQUAUX G, GRAMFORT A, et al. Scikit-learn: machine learning in Python[J]. Journal of Machine Learning Research, 2011, 12:2825-2830. |
[144] | SUYKENS J A K, VANDEWALLE J. Least squares support vector machine classifiers[J]. Neural Processing Letters, 1999, 9(3):293-300. |
[145] | MCCALLUM A, NIGAM K. A comparison of event models for Naive Bayes text classification[C]// Proceedings of the AAAI-98 Workshop on Learning for Text Categorization, Madison, Jul 26-27, 1998. Menlo Park: AAAI, 1998: 41-48. |
[146] | VON LUXBURG U. A tutorial on spectral clustering[J]. Statistics and Computing, 2007, 17(4):395-416. |
[147] | SHI J B, MALIK J. Normalized cuts and image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(8):888-905. |
[148] | MACQUEEN J. Some methods for classification and analysis of multivariate observations[C]// Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, Dec 27, 1965-Jan 7, 1966. Berkeley: University of California Press, 1966: 281-297. |
[149] | CAI D, HE X F, HAN J W, et al. Graph regularized nonnegative matrix factorization for data representation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(8):1548-1560. |
[150] | MANZOOR E A, MILAJERDI S, AKOGLU L. Fast memory-efficient anomaly detection in streaming heterogeneous graphs[C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, Aug 13-17, 2016. New York: ACM, 2016: 1035-1044. |
[151] | RANSHOUS S, HARENBERG S, SHARMA K, et al. A scalable approach for outlier detection in edge streams using sketch-based approximations[C]// Proceedings of the 2016 SIAM International Conference on Data Mining, Miami, May 5-7, 2016. Philadelphia: SIAM, 2016: 189-197. |
[152] | TANG J, LIU J Z, ZHANG M, et al. Visualizing large-scale and high-dimensional data[C]// Proceedings of the 25th International Conference on World Wide Web, Montreal, Apr 11-15, 2016. New York: ACM, 2016: 287-297. |
[153] | YUAN L N, WANG Y, CHENG X G, et al. Semi-AttentionAE: an integrated model for graph representation learning[J]. IEEE Access, 2021, 9:80787-80796. |
[154] | HU W, FEY M, ZITNIK M, et al. Open graph benchmark: datasets for machine learning on graphs[J]. arXiv:2005.00687, 2020. |
[155] | ZHAO J N, WANG X, SHI C, et al. Heterogeneous graph structure learning for graph neural networks[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence, 33rd Conference on Innovative Applications of Artificial Intelligence, the 11th Symposium on Educational Advances in Artificial Intelligence, Virtual Event, Feb 2-9, 2021. Menlo Park: AAAI, 2021: 4697-4705. |
[156] | VAN DYK D A, MENG X L. The art of data augmentation[J]. Journal of Computational and Graphical Statistics, 2001, 10(1):1-50. |
[157] | VERMA V, QU M, KAWAGUCHI K, et al. GraphMix: improved training of GNNs for semi-supervised learning[J]. arXiv:1909.11715, 2019. |