Resource Description
This code is a TensorFlow implementation of an RNN-based attention mechanism for text classification. The author has personally tested it and it works; no special environment configuration is required. Feel free to download it.
Code Snippet and File Information
import tensorflow as tf
def attention(inputs, attention_size, time_major=False, return_alphas=False):
    """
    Attention mechanism layer which reduces RNN/Bi-RNN outputs with an Attention vector.
    The idea was proposed in the article by Z. Yang et al., "Hierarchical Attention Networks
    for Document Classification", 2016: http://www.aclweb.org/anthology/N16-1174.
    Variable notation is also inherited from the article.
    Args:
        inputs: The Attention inputs.
            Matches the outputs of an RNN/Bi-RNN layer (not the final state):
                In case of an RNN, this must be the RNN outputs `Tensor`:
                    If time_major == False (default), this must be a tensor of shape:
                        `[batch_size, max_time, cell.output_size]`.
                    If time_major == True, this must be a tensor of shape:
                        `[max_time, batch_size, cell.output_size]`.
                In case of a Bidirectional RNN, this must be a tuple (outputs_fw, outputs_bw) containing the forward and
                the backward RNN output `Tensor`s.
                    If time_major == False (default),
                        outputs_fw is a `Tensor` shaped:
                            `[batch_size, max_time, cell_fw.output_size]`
                        and outputs_bw is a `Tensor` shaped:
                            `[batch_size, max_time, cell_bw.output_size]`.
                    If time_major == True,
                        outputs_fw is a `Tensor` shaped:
                            `[max_time, batch_size, cell_fw.output_size]`
                        and outputs_bw is a `Tensor` shaped:
                            `[max_time, batch_size, cell_bw.output_size]`.
        attention_size: Linear size of the Attention weights.
        time_major: The shape format of the `inputs` Tensors.
            If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`.
            If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`.
            Using `time_major = True` is a bit more efficient because it avoids
            transposes at the beginning and end of the RNN calculation. However,
            most TensorFlow data is batch-major, so by default this function
            accepts input and emits output in batch-major form.
        return_alphas: Whether to return the attention coefficients variable along with the layer's output.
            Used for visualization purposes.
    Returns:
        The Attention output `Tensor`.
        In case of an RNN, this will be a `Tensor` shaped:
            `[batch_size, cell.output_size]`.
        In case of a Bidirectional RNN, this will be a `Tensor` shaped:
            `[batch_size, cell_fw.output_size + cell_bw.output_size]`.
    """
    if isinstance(inputs, tuple):
        # In case of Bi-RNN, concatenate the forward and the backward RNN outputs.
        inputs = tf.concat(inputs, 2)

    if time_major:
        # (T,B,D) => (B,T,D)
        inputs = tf.transpose(inputs, [1, 0, 2])
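The preview truncates here. For reference, below is a minimal sketch of how such an attention layer is typically completed following Yang et al. (2016); the variable names (w_omega, b_omega, u_omega) follow the paper's notation and are an assumption about this particular file, not a verbatim copy of it:

    # Sketch only: assumes TensorFlow 1.x and the Yang et al. (2016) formulation.
    hidden_size = inputs.shape[2].value  # D: hidden size of the (Bi-)RNN layer

    # Trainable parameters (names follow the paper's notation).
    w_omega = tf.Variable(tf.random_normal([hidden_size, attention_size], stddev=0.1))
    b_omega = tf.Variable(tf.random_normal([attention_size], stddev=0.1))
    u_omega = tf.Variable(tf.random_normal([attention_size], stddev=0.1))

    # One-layer MLP with tanh applied to every timestep: (B,T,D)*(D,A) -> (B,T,A).
    v = tf.tanh(tf.tensordot(inputs, w_omega, axes=1) + b_omega)

    # Reduce each timestep vector with the context vector u_omega: (B,T,A)*(A,) -> (B,T).
    vu = tf.tensordot(v, u_omega, axes=1, name='vu')
    alphas = tf.nn.softmax(vu, name='alphas')  # attention weights, shape (B,T)

    # Weighted sum of the (Bi-)RNN outputs; the result has shape (B,D).
    output = tf.reduce_sum(inputs * tf.expand_dims(alphas, -1), 1)

    return (output, alphas) if return_alphas else output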
Attribute Size Date Time Name
----------- --------- ---------- ----- ----
File 1073 2018-02-02 08:14 tf-rnn-attention-master\.gitignore
File 4102 2018-02-02 08:14 tf-rnn-attention-master\attention.py
File 67 2018-08-05 23:35 tf-rnn-attention-master\checkpoint
File 1068 2018-02-02 08:14 tf-rnn-attention-master\LICENSE
File 4648057 2018-08-03 16:33 tf-rnn-attention-master\logdir\test\events.out.tfevents.1533283448.LAPTOP-0FHPGVM0
File 913672 2018-08-03 16:40 tf-rnn-attention-master\logdir\test\events.out.tfevents.1533285656.LAPTOP-0FHPGVM0
File 913672 2018-08-03 16:42 tf-rnn-attention-master\logdir\test\events.out.tfevents.1533285700.LAPTOP-0FHPGVM0
File 913672 2018-08-03 16:51 tf-rnn-attention-master\logdir\test\events.out.tfevents.1533286315.LAPTOP-0FHPGVM0
File 913672 2018-08-03 17:45 tf-rnn-attention-master\logdir\test\events.out.tfevents.1533289549.LAPTOP-0FHPGVM0
File 913672 2018-08-05 15:56 tf-rnn-attention-master\logdir\test\events.out.tfevents.1533455785.LAPTOP-0FHPGVM0
File 4653017 2018-08-05 23:35 tf-rnn-attention-master\logdir\test\events.out.tfevents.1533480582.LAPTOP-0FHPGVM0
File 4627897 2018-08-05 23:35 tf-rnn-attention-master\logdir\test\events.out.tfevents.1533480592.LAPTOP-0FHPGVM0
File 913672 2018-08-08 21:57 tf-rnn-attention-master\logdir\test\events.out.tfevents.1533736622.LAPTOP-0FHPGVM0
File 4585769 2018-08-03 16:30 tf-rnn-attention-master\logdir\train\events.out.tfevents.1533283447.LAPTOP-0FHPGVM0
File 913672 2018-08-03 16:40 tf-rnn-attention-master\logdir\train\events.out.tfevents.1533285656.LAPTOP-0FHPGVM0
File 913672 2018-08-03 16:42 tf-rnn-attention-master\logdir\train\events.out.tfevents.1533285700.LAPTOP-0FHPGVM0
File 913672 2018-08-03 16:51 tf-rnn-attention-master\logdir\train\events.out.tfevents.1533286314.LAPTOP-0FHPGVM0
File 913672 2018-08-03 17:45 tf-rnn-attention-master\logdir\train\events.out.tfevents.1533289549.LAPTOP-0FHPGVM0
File 913672 2018-08-05 15:56 tf-rnn-attention-master\logdir\train\events.out.tfevents.1533455785.LAPTOP-0FHPGVM0
File 4606985 2018-08-05 23:31 tf-rnn-attention-master\logdir\train\events.out.tfevents.1533480582.LAPTOP-0FHPGVM0
File 4589193 2018-08-05 23:30 tf-rnn-attention-master\logdir\train\events.out.tfevents.1533480592.LAPTOP-0FHPGVM0
File 1447238 2018-08-08 21:59 tf-rnn-attention-master\logdir\train\events.out.tfevents.1533736622.LAPTOP-0FHPGVM0
File 14895620 2018-08-05 23:35 tf-rnn-attention-master\model.data-00000-of-00001
File 1688 2018-08-05 23:35 tf-rnn-attention-master\model.index
File 503631 2018-08-05 23:35 tf-rnn-attention-master\model.me
File 601 2018-02-02 08:14 tf-rnn-attention-master\README.md
File 6211 2018-02-02 08:14 tf-rnn-attention-master\train.py
File 1196 2018-02-02 08:14 tf-rnn-attention-master\utils.py
File 4495 2018-08-05 15:56 tf-rnn-attention-master\visualization.html
File 1511 2018-02-02 08:14 tf-rnn-attention-master\visualize.py
............ 11 additional file entries omitted here
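The listing includes train.py, which presumably wires this attention layer into a bidirectional RNN text classifier. The following is a minimal, hedged sketch of how attention() is typically called on Bi-RNN outputs, matching the tuple form described in the docstring above; the hyperparameters, placeholder shapes, and the final dense layer are illustrative assumptions, not taken from the file:

import tensorflow as tf
from attention import attention  # attention.py from the listing above

# Illustrative hyperparameters - not taken from train.py.
HIDDEN_SIZE = 150
ATTENTION_SIZE = 50

# Assumed input: [batch_size, max_time, embedding_dim] word embeddings.
batch_embedded = tf.placeholder(tf.float32, [None, 100, 128])

# Bidirectional RNN over the sequence; `outputs` is a (fw, bw) tuple,
# exactly the tuple form that attention() accepts.
outputs, _ = tf.nn.bidirectional_dynamic_rnn(
    tf.nn.rnn_cell.GRUCell(HIDDEN_SIZE),
    tf.nn.rnn_cell.GRUCell(HIDDEN_SIZE),
    inputs=batch_embedded, dtype=tf.float32)

# Attention pooling: output is [batch_size, 2 * HIDDEN_SIZE], plus the weights
# `alphas` used by visualize.py-style heatmaps.
attention_output, alphas = attention(outputs, ATTENTION_SIZE, return_alphas=True)

# A single dense layer on top gives the classification logit (binary case shown).
logits = tf.layers.dense(attention_output, 1)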