#One AI Concept a Day# What is a GNN?
<div style="color: black; text-align: left; margin-bottom: 10px;"><img src="https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/928bf00df19b4ce6bc3723e972f8801b~noop.image?_iz=58558&from=article.pc_detail&lk3s=953192f4&x-expires=1728032820&x-signature=Aprvj5zG2Lp9lGegpFkNWC7%2FJZ0%3D" style="width: 50%; margin-bottom: 20px;"></div>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">近期</span>,图神经网络 (GNN) 在各个<span style="color: black;">行业</span>越来越受到欢迎,<span style="color: black;">包含</span>社交网络、知识图谱、<span style="color: black;">举荐</span>系统,<span style="color: black;">乃至</span>生命科学。那什么是GNN呢?</p>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">Definition</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GNN是Graph Neural Network的简<span style="color: black;">叫作</span>,是用于学习<span style="color: black;">包括</span><span style="color: black;">海量</span>连接的图的联结主义模型。当信息在图的节点之间传播时GNN会<span style="color: black;">捉捕</span>到图的独立性。与标准神经网络<span style="color: black;">区别</span>的是,GNN会保持一种状态,这个状态<span style="color: black;">能够</span><span style="color: black;">表率</span><span style="color: black;">源自</span>于人为指定的深度上的信息。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">图神经网络处理的数据<span style="color: black;">便是</span>图,而图是一种非欧几里得数据。GNN的<span style="color: black;">目的</span>是学习到<span style="color: black;">每一个</span>节点的邻居的状态嵌入,这个状态嵌入是向量且<span style="color: black;">能够</span>用来产生输出,例如节点的标记。如下图,<span style="color: black;">最后</span>的目的<span style="color: black;">便是</span>学习到红框的H,<span style="color: black;">因为</span>H是<span style="color: black;">选定</span>,<span style="color: black;">因此呢</span><span style="color: black;">能够</span><span style="color: black;">持续</span>迭代直到H的值<span style="color: black;">再也不</span>改变即停止。</p>
<div style="color: black; text-align: left; margin-bottom: 10px;"><img src="https://p9-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/1125bb924f444dbcb327751da2a6eada~noop.image?_iz=58558&from=article.pc_detail&lk3s=953192f4&x-expires=1728032820&x-signature=5dL0IiQQYBJ3GqIhtyNYq8%2FnD%2FI%3D" style="width: 50%; margin-bottom: 20px;"></div>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GNN的核心问题<span style="color: black;">便是</span>理解图怎么做傅里叶变换。CNN的核心操作是卷积,GNN<span style="color: black;">亦</span>是。CNN计算二维矩阵的卷积,GNN计算图的卷积。<span style="color: black;">那样</span><span style="color: black;">咱们</span>定义好图的傅里叶变换和图的卷积就<span style="color: black;">能够</span>了,其媒介<span style="color: black;">便是</span>图的拉普拉斯矩阵。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GNN 在对图形中节点间的依赖关系进行建模方面能力强大,使得图分析<span style="color: black;">关联</span>的<span style="color: black;">科研</span>领域取得了突破性<span style="color: black;">发展</span>。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GNN<span style="color: black;">便是</span>一种在图域上操作的深度学习<span style="color: black;">办法</span>。</p>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">What Is a Graph?</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">图是一种对节点和节点间关系建模的数据结构,是<span style="color: black;">设备</span>学习中<span style="color: black;">独一</span>的非欧几里得数据,图分析可用于节点<span style="color: black;">归类</span>、链接预测和聚类。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">在计算机科学中,图<span style="color: black;">是由于</span>两个部件<span style="color: black;">构成</span>的一种数据结构:顶点 (vertices) 和边 (edges)。一个图 G <span style="color: black;">能够</span>用它<span style="color: black;">包括</span>的顶点 V 和边 E 的集合来描述。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">边<span style="color: black;">能够</span>是有向的或无向的,这取决于顶点之间<span style="color: black;">是不是</span>存在方向依赖关系。</p>
<div style="color: black; text-align: left; margin-bottom: 10px;"><img src="https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/dfb8d03127e4448ca1c94814d2daa6c0~noop.image?_iz=58558&from=article.pc_detail&lk3s=953192f4&x-expires=1728032820&x-signature=fxWlHXXDCwbWURNezVfWVioVkek%3D" style="width: 50%; margin-bottom: 20px;"></div>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">一个有向的图 (wiki)</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">顶点<span style="color: black;">一般</span><span style="color: black;">亦</span>被<span style="color: black;">叫作</span>为节点 (nodes)。在本文中,这两个术语是<span style="color: black;">能够</span>互换的。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">图神经网络</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">图神经网络是一种直接在图结构上运行的神经网络。GNN 的一个典型应用是节点分类。本质上,图中的<span style="color: black;">每一个</span>节点都与一个标签<span style="color: black;">关联</span>联,<span style="color: black;">咱们</span>的目的是预测<span style="color: black;">无</span> ground-truth 的节点的标签。</p>
<div style="color: black; text-align: left; margin-bottom: 10px;"><img src="https://p3-sign.toutiaoimg.com/tos-cn-i-qvj2lq49k0/bb7d8730392e4f649e0fbc82732d9c7e~noop.image?_iz=58558&from=article.pc_detail&lk3s=953192f4&x-expires=1728032820&x-signature=F4juW10qkpFyqVYfqW2%2BzP9UBgc%3D" style="width: 50%; margin-bottom: 20px;"></div>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">Origins of GNNs</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">CNN:CNN<span style="color: black;">能够</span>提取<span style="color: black;">海量</span>本地紧密特征并组合为高阶特征,但CNN只能够操作欧几里得数据。CNN的关键在于局部连接、权值共享、多层<span style="color: black;">运用</span>;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">graph embedding:在低维向量上学习<span style="color: black;">暗示</span>图节点、边<span style="color: black;">或</span>子图。思想源于特征学习和单词嵌入,<span style="color: black;">第1</span>个图嵌入学习<span style="color: black;">办法</span>是DeepWalk,它把节点看做单词并在图上随机游走,并且在它们上面<span style="color: black;">运用</span>SkipGram模型;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">基于以上两种思想,GNN会在图结构上聚合信息,<span style="color: black;">因此呢</span><span style="color: black;">能够</span>对输入/输出的元素及元素间的独立性进行建模。GNN还<span style="color: black;">能够</span><span style="color: black;">同期</span><span style="color: black;">运用</span>RNN核对图上的扩散过程进行建模。</p>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">Advantages of GNNs</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">标准神经网络(CNN、RNN)<span style="color: black;">没法</span><span style="color: black;">处理</span>图输入无序性,<span style="color: black;">由于</span>它们将点的特征看做是特定的输入;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">两点之间的边<span style="color: black;">表率</span>着独立信息,在标准神经网络中,这种信息被看做是点的信息,而GNN<span style="color: black;">能够</span><span style="color: black;">经过</span>图结构来进行传播,而不是将其看做是特征;<span style="color: black;">一般</span>而言,GNN更新<span style="color: black;">隐匿</span>节点的状态,是<span style="color: black;">经过</span>近邻节点的权值和;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">高级人工只能需要更高的可解释性;标准神经网络<span style="color: black;">能够</span>生成合成图像或文档,但<span style="color: black;">没法</span>生成图;GNN<span style="color: black;">能够</span>生成无结构的数据(多种应用:文字<span style="color: black;">归类</span>、神经<span style="color: black;">设备</span>翻译、关系提取、图像<span style="color: black;">归类</span>);</p>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">Limitations of GNNs</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">更新节点的<span style="color: black;">隐匿</span>状态是低效的;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">在迭代中<span style="color: black;">运用</span>相同的参数,更新节点<span style="color: black;">隐匿</span>状态是时序的;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">在边上有<span style="color: black;">有些</span>信息化的特征<span style="color: black;">没法</span>在原始GNN中建模;<span style="color: black;">怎样</span>学习边的<span style="color: black;">隐匿</span>状态<span style="color: black;">亦</span>是问题;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">倘若</span><span style="color: black;">咱们</span>的<span style="color: black;">目的</span>是节点的<span style="color: black;">暗示</span>而不是图,<span style="color: black;">运用</span>固<span style="color: black;">选定</span>H是不合适的;</p>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">Improvements and Variants of GNNs</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">对GNN的改进分为如下三种:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">图类型改进;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">传播<span style="color: black;">过程</span>改进;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">训练<span style="color: black;">办法</span>改进;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">图类型</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">原始GNN的输入图是带有标记信息的节点和无向边。以下是几种<span style="color: black;">区别</span>的图类型:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">有向图:信息更紧密;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">异质图:<span style="color: black;">包括</span>几种不同的节点,最简单的处理方式是one-hot feature vector;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">带边信息的图:<span style="color: black;">每一个</span>边<span style="color: black;">亦</span>有信息、权值和类型,<span style="color: black;">咱们</span><span style="color: black;">能够</span>将边<span style="color: black;">亦</span>变成节点,<span style="color: black;">或</span>当传播时在<span style="color: black;">区别</span>的边上<span style="color: black;">运用</span><span style="color: black;">区别</span>的权重矩阵;</p>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">Propagation Steps</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">对GNN而言,传播<span style="color: black;">过程</span>是非常重要的,它<span style="color: black;">能够</span><span style="color: black;">得到</span>节点(边)的<span style="color: black;">隐匿</span>状态。传播<span style="color: black;">过程</span><span style="color: black;">运用</span>的<span style="color: black;">办法</span><span style="color: black;">一般</span>是<span style="color: black;">区别</span>的聚合函数(在<span style="color: black;">每一个</span>节点的邻居收集信息)和特定的更新函数(更新节点<span style="color: black;">隐匿</span>状态)。</p>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">Convolution Operations</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">在图上的卷积操作<span style="color: black;">一般</span><span style="color: black;">能够</span>分为光谱<span style="color: black;">办法</span>和非光谱<span style="color: black;">办法</span>。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">光谱<span style="color: black;">办法</span>:光谱<span style="color: black;">办法</span>在图的光谱<span style="color: black;">暗示</span>上运行,学习的过滤器是基于拉普拉斯特征权重的,<span style="color: black;">因此呢</span>与图的结构紧密<span style="color: black;">关联</span>,难以泛化;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">非光谱<span style="color: black;">办法</span>:直接在图上定义卷积,对紧密相近的节点进行操作,<span style="color: black;">重点</span>的挑战<span style="color: black;">便是</span>非光谱<span style="color: black;">办法</span>在<span style="color: black;">区别</span><span style="color: black;">体积</span>的邻居上的定义和保持CNN的局部变量,非光谱<span style="color: black;">办法</span><span style="color: black;">拥有</span>点<span style="color: black;">归类</span>和图<span style="color: black;">归类</span>两种;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Gate闸门机制</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">在GNN中<span style="color: black;">运用</span>门限机制是为了减少限制并改善<span style="color: black;">长时间</span>的图结构上的信息传递。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Attention<span style="color: black;">重视</span>力机制</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">重视</span>力机制<span style="color: black;">已然</span>成功的应用于基于时序的任务,例如<span style="color: black;">设备</span>翻译、<span style="color: black;">设备</span>阅读等。GAT在传播<span style="color: black;">过程</span><span style="color: black;">运用</span>了<span style="color: black;">重视</span>力机制,会<span style="color: black;">经过</span>节点的邻居来计算节点的<span style="color: black;">隐匿</span>状态,<span style="color: black;">经过</span>自<span style="color: black;">重视</span>策略。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Skip connection</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">许多<span style="color: black;">设备</span>学习的应用都会<span style="color: black;">运用</span>多层神经网络,然而多层神经网络不<span style="color: black;">必定</span>更好,<span style="color: black;">由于</span>误差会逐层累积,最直接定位问题的<span style="color: black;">办法</span>,残差网络,是来自于计算机视觉。即使<span style="color: black;">运用</span>了残差网络,多层GCN依旧<span style="color: black;">没法</span>像2层GCN<span style="color: black;">同样</span>表现良好。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">有一种<span style="color: black;">办法</span>是<span style="color: black;">运用</span>高速路GCN(Highway GCN),它像高速路网络<span style="color: black;">同样</span><span style="color: black;">运用</span>逐层门限。</p>
<h1 style="color: black; text-align: left; margin-bottom: 10px;">Training Methods</h1>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">原始的图神经网络在训练和优化<span style="color: black;">过程</span>有缺陷,它需要完整的图拉普拉斯,对大图而言计算力消耗大。<span style="color: black;">更加多</span>的,层L上的节点嵌入是递归计算的,<span style="color: black;">经过</span>嵌入它的所有L-1层的邻居。<span style="color: black;">因此呢</span>,单层节点是成倍增长的,<span style="color: black;">因此呢</span>对节点的计算消耗巨大。且GCN是对<span style="color: black;">每一个</span>固定图进行独立训练的,<span style="color: black;">因此呢</span>泛化能力<span style="color: black;">欠好</span>。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">以下是几种改善方式:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GraphSAGE:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">将全图拉普拉斯替换为可学习聚合函数,是<span style="color: black;">运用</span>信息传递并生长到未见节点的关键,GraphSAGE还<span style="color: black;">运用</span>了邻居采样来避免接收域爆炸;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">FastGCN:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">FastGCN对采样算法做了更深的改进,FastGCN为每层直接采样接受域,而非对<span style="color: black;">每一个</span>节点进行邻居采样;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">control-variate based stochastic approximation:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">运用</span>节点的历史激励<span style="color: black;">做为</span><span style="color: black;">掌控</span>随机数,此<span style="color: black;">办法</span>限制接受域为1跳邻居,但<span style="color: black;">运用</span>历史<span style="color: black;">隐匿</span>状态<span style="color: black;">做为</span>可接受最优化<span style="color: black;">办法</span>;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Co-Training GCN and Self-Training GCN:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">用于<span style="color: black;">处理</span>GCN需要许多额外的有标记数据及卷积过滤器的局部特征的限制,<span style="color: black;">因此呢</span><span style="color: black;">运用</span>了此<span style="color: black;">办法</span>来扩大训练数据集,Co-Training<span style="color: black;">办法</span>为训练数据找到<span style="color: black;">近期</span>的邻居,Self-Training则采用了类似boosting的<span style="color: black;">办法</span>。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">框架</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">框架的目的是集合<span style="color: black;">区别</span>的模型。有论文提出message passing neural network(MPNN),<span style="color: black;">能够</span>同一化多种图神经网络和图卷积网络<span style="color: black;">办法</span>。non-local neural network(NLNN)则同一化了几个自<span style="color: black;">重视</span><span style="color: black;">办法</span>。graph network(GN)统一了MPNN和NLNN还有其他的Interaction Networks,Neural Phsics Engine,CommNet, structure2vec,GGNN,Relation Network,Deep Sets和Point Net。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">MPNN</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">MPNN是监督学习的框架,它抽象了几个最流行的用于处理图结构数据的模型的<span style="color: black;">类似</span>性。模型<span style="color: black;">包含</span>两个<span style="color: black;">周期</span>,信息传递<span style="color: black;">周期</span>和读出<span style="color: black;">周期</span>。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">信息传递<span style="color: black;">周期</span>:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">便是</span>传播<span style="color: black;">周期</span>,会运行T次。是以信息传递函数和端点更新函数为定义的。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">读出<span style="color: black;">周期</span>:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">读出<span style="color: black;">周期</span>会<span style="color: black;">运用</span>读出函数来对<span style="color: black;">全部</span>图计算特征向量。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">NLNN</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">NLNN是用来对深度神经网络的长范围的独立性。non-local操作<span style="color: black;">源自</span>于经典non-local mean操作在计算机视觉上的应用。non-local操作会在一个位置上计算响应,<span style="color: black;">同期</span>加权了的特征和在所有点上。这些位置<span style="color: black;">能够</span>是空间、时间<span style="color: black;">或</span>空间时间。<span style="color: black;">因此呢</span>NLNN<span style="color: black;">能够</span>看做是<span style="color: black;">区别</span>的自<span style="color: black;">重视</span><span style="color: black;">办法</span>的统一。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">通常</span>的,non-local操作被如下定义:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">non-local operation</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">其中,i是输出位置的索引,j<span style="color: black;">指的是</span>出所有可能位置的索引,f函数计算i和j之间的缩放值,这<span style="color: black;">能够</span><span style="color: black;">表率</span><span style="color: black;">她们</span>之间的联系,g函数<span style="color: black;">表率</span>了输入的变换以及公式的系数用于正则化结果。当<span style="color: black;">运用</span><span style="color: black;">区别</span>的f和g函数时,将得到<span style="color: black;">区别</span>的non-local操作实例,最简单的g函数<span style="color: black;">便是</span>线性变换了。以下是<span style="color: black;">有些</span>可能<span style="color: black;">选取</span>的f函数:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Gaussian;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Embedded Gaussian;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Dot product;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Concatenation;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GN</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">首要</span>是图的定义<span style="color: black;">而后</span>是GN块,核心GN计算单元,计算<span style="color: black;">过程</span>,最后是GN的基本设计原则。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Graph definition:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">图被定义为三元组,(全局属性,节点集合,边集合);</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GN block:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GN块<span style="color: black;">包含</span>三个更新函数和三个聚合函数;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Computation steps;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">Design Principles:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GN的设计基于三个基本原则:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">flexible representation</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">configurable within-block structure</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">composable multi-block architectures</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GNN的应用场景</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GNN的应用场景非常多,<span style="color: black;">由于</span>GNN是应用于图信息的,而多种多样的数据都<span style="color: black;">能够</span>划分为图数据。以下是GNN的应用场景:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">文字<span style="color: black;">归类</span>;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">神经网络翻译;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">关系提取;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">图<span style="color: black;">归类</span>;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">…</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">在<span style="color: black;">这儿</span><span style="color: black;">咱们</span>对GNN的应用进行简单的介绍,<span style="color: black;">首要</span><span style="color: black;">咱们</span>将其划分为三种场景的应用:</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">结构化场景:数据有<span style="color: black;">显著</span>的联系结构;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">非结构化场景:联系结构不<span style="color: black;">显著</span>,例如图像、文字等;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">其他的应用场景:生成模型、组合最优化问题等;</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GNN对我的启发</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GNN是对图数据进行处理的深度学习神经网络,它<span style="color: black;">能够</span>实现对异构数据的学习与<span style="color: black;">暗示</span>,<span style="color: black;">这儿</span>的图数据与<span style="color: black;">咱们</span><span style="color: black;">一般</span>所说的图是不<span style="color: black;">同样</span>的,<span style="color: black;">这儿</span>的图指的是数据结构中的那种图以及离散数学中图论,其中<span style="color: black;">区别</span>的节点<span style="color: black;">暗示</span><span style="color: black;">区别</span>的信息。<span style="color: black;">因此呢</span>,图即<span style="color: black;">表率</span>实体及实体之间的联系。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">在<span style="color: black;">咱们</span>的<span style="color: black;">平常</span>生活中,图是无处不在的。而图结构数据<span style="color: black;">拥有</span><span style="color: black;">必定</span>的<span style="color: black;">繁杂</span>性,<span style="color: black;">由于</span>图结构的数据节点<span style="color: black;">一般</span><span style="color: black;">拥有</span>是<span style="color: black;">拥有</span><span style="color: black;">区别</span>的类型的,<span style="color: black;">因此呢</span>对普通的神经网络而言处理起来<span style="color: black;">拥有</span><span style="color: black;">必定</span>的难度。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">GNN最重要的两点<span style="color: black;">便是</span>:1. CNN 特征提取;2. graph embedding 降维操作,再<span style="color: black;">经过</span><span style="color: black;">有些</span>神经网络必要的训练操作等,<span style="color: black;">咱们</span>就<span style="color: black;">能够</span>得到对图的大致<span style="color: black;">暗示</span>。<span style="color: black;">而后</span>就<span style="color: black;">能够</span>实现<span style="color: black;">归类</span>、回归等任务了。GNN<span style="color: black;">亦</span>需要防止过拟合和欠拟合,<span style="color: black;">因为</span>图数据<span style="color: black;">一般</span>过大,<span style="color: black;">因此</span><span style="color: black;">能够</span>采用随机游走的方式,来获取图的特征。<span style="color: black;">咱们</span>还能对GNN进行什么改进以及应用呢?欢迎提出你的想法。</p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;">以上部分摘自CSDN博主「浮云若飞」的原创<span style="color: black;">文案</span>,原文链接:</p>https://blog.csdn.net/qq_34911465/article/details/88524599