Abstract
To address the problem that existing graph embedding methods rely on a single source of loss, so that node representations cannot be fully optimized, an attentional graph auto-encoder based on synchronous joint optimization (AGE-SJO) is proposed. An attention-based encoder is designed to learn node representations, and an inner-product decoder reconstructs the graph structure to produce the reconstruction loss L_R. To optimize the representations from multiple aspects, the encoder and a multi-layer perceptron are trained adversarially as the generative model and the discriminative model, yielding the generative loss L_G and the discriminative loss L_D. A synchronous joint optimization strategy is proposed that alternately optimizes the representations over k steps of L_R, k steps of L_D, and 1 step of L_G; the learned representations are then applied to link prediction and node clustering. Experimental results on citation datasets show that the proposed AGE-SJO performs well: compared with the strongest baseline, it improves the AUC, AP, ACC, NMI, and ARI metrics by 1.6%, 2.1%, 10.6%, 4.9%, and 12.4%, respectively.
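Below is a minimal sketch of the training scheme described in the abstract: a graph-attention encoder, an inner-product decoder producing L_R, an MLP discriminator producing L_D and L_G, and the synchronous joint schedule of k steps of L_R, k steps of L_D, and 1 step of L_G. It assumes PyTorch and a standard Gaussian prior; the layer sizes and all helper names (AttentionEncoder, Discriminator, train_age_sjo) are illustrative assumptions, not the authors' reference implementation.

```python
# A minimal sketch of the AGE-SJO training scheme, assuming PyTorch and a
# standard Gaussian prior. Layer sizes and all helper names (AttentionEncoder,
# Discriminator, train_age_sjo) are hypothetical, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionEncoder(nn.Module):
    """Two-layer graph attention encoder: (X, A) -> node embeddings Z."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, out_dim, bias=False)
        self.att1 = nn.Linear(2 * hid_dim, 1, bias=False)
        self.att2 = nn.Linear(2 * out_dim, 1, bias=False)

    @staticmethod
    def _attend(h, adj, att):
        n = h.size(0)
        # Pairwise attention scores, masked by the adjacency structure.
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(att(pairs).squeeze(-1))
        alpha = torch.softmax(e.masked_fill(adj == 0, float("-inf")), dim=-1)
        return alpha @ h

    def forward(self, x, adj):
        h = F.elu(self._attend(self.w1(x), adj, self.att1))
        return self._attend(self.w2(h), adj, self.att2)


class Discriminator(nn.Module):
    """MLP that separates encoder embeddings from prior samples."""

    def __init__(self, dim, hid=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hid), nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, z):
        return self.net(z)


def train_age_sjo(x, adj, epochs=200, k=5, emb_dim=16):
    """Synchronous joint optimization: k steps of L_R, k of L_D, 1 of L_G."""
    adj = ((adj + torch.eye(adj.size(0))) > 0).float()  # self-loops keep attention rows finite
    enc = AttentionEncoder(x.size(1), 32, emb_dim)
    disc = Discriminator(emb_dim)
    opt_e = torch.optim.Adam(enc.parameters(), lr=1e-3)   # encoder updates for L_R and L_G
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)  # discriminator updates for L_D
    ones = torch.ones(adj.size(0), 1)
    for _ in range(epochs):
        for _ in range(k):  # k steps on the reconstruction loss L_R
            z = enc(x, adj)
            a_hat = torch.sigmoid(z @ z.t())               # inner-product decoder
            loss_r = F.binary_cross_entropy(a_hat, adj)
            opt_e.zero_grad(); loss_r.backward(); opt_e.step()
        for _ in range(k):  # k steps on the discriminator loss L_D
            z = enc(x, adj).detach()
            prior = torch.randn_like(z)                    # assumed Gaussian prior
            loss_d = (F.binary_cross_entropy_with_logits(disc(prior), ones)
                      + F.binary_cross_entropy_with_logits(disc(z), 1 - ones))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        z = enc(x, adj)     # 1 step on the generative loss L_G (encoder fools the discriminator)
        loss_g = F.binary_cross_entropy_with_logits(disc(z), ones)
        opt_e.zero_grad(); loss_g.backward(); opt_e.step()
    return enc
```

Calling train_age_sjo with a node feature matrix and a symmetric 0/1 adjacency matrix returns the trained encoder; its embeddings can then be scored with the inner product for link prediction or clustered (for example with k-means) for node clustering, matching the downstream tasks named in the abstract.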
Authors
LI Lin, LIANG Yongquan, LIU Guangming (College of Computer Science & Engineering, Shandong University of Science & Technology, Qingdao, Shandong 266590, China)
Source
China Sciencepaper (《中国科技论文》)
CAS
Peking University Core Journal (北大核心)
2021, No. 11, pp. 1248-1255 (8 pages)
Funding
National Key Research and Development Program of China (2017YFC0804406)
National Natural Science Foundation of China (91746100)
Keywords
attentional graph auto-encoder
generative adversarial mechanism
synchronous joint optimization
graph embedding
network representation learning