Writing a Custom Attention Layer in Keras
The formulas themselves can be found everywhere, so I won't go through them in detail here. In build, three trainable weight matrices WQ, WK, and WV are defined; multiplying the input by them element-wise gives Q, K, and V, each of shape (batch_size, len, embedding_size). Q is then matrix-multiplied with the transpose of K ((batch_size, len, embedding_size) × (batch_size, embedding_size, len) = (batch_size, len, len)), and a softmax is applied to the result. Next, V is expanded to (batch_size, len, len, embedding_size), and the softmax output, reshaped to (batch_size, len, len, 1), is multiplied element-wise with it ((batch_size, len, len, 1) × (batch_size, len, len, embedding_size) = (batch_size, len, len, embedding_size)). Finally, summing over axis 2 yields the layer's output.
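For reference, this follows the familiar scaled dot-product attention pattern (I'm only quoting the standard formula here, not re-deriving it):

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where $d_k$ corresponds to the key_size argument in the code below.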
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.layers import Layer


class MyAttention(Layer):
    """Self-attention layer with element-wise Q/K/V weight matrices."""

    def __init__(self, out_dim, key_size=8, **kwargs):
        super(MyAttention, self).__init__(**kwargs)
        self.out_dim = out_dim
        self.key_size = key_size

    def build(self, input_shape):
        input_shape = list(input_shape)
        if input_shape[1] is None:
            # Unknown sequence length: fall back to a single row,
            # which broadcasts across every timestep.
            input_shape[1] = 1
        kernel_initializer = 'glorot_uniform'
        kernel_regularizer = None
        kernel_constraint = None
        # Three trainable matrices of shape (len, out_dim), applied to
        # the input element-wise; out_dim must therefore equal the
        # input's embedding dimension for broadcasting to work.
        self.qw = self.add_weight(name='qw',
                                  shape=(input_shape[1], self.out_dim),
                                  initializer=kernel_initializer,
                                  regularizer=kernel_regularizer,
                                  constraint=kernel_constraint,
                                  trainable=True)
        self.kw = self.add_weight(name='kw',
                                  shape=(input_shape[1], self.out_dim),
                                  initializer=kernel_initializer,
                                  regularizer=kernel_regularizer,
                                  constraint=kernel_constraint,
                                  trainable=True)
        self.vw = self.add_weight(name='vw',
                                  shape=(input_shape[1], self.out_dim),
                                  initializer=kernel_initializer,
                                  regularizer=kernel_regularizer,
                                  constraint=kernel_constraint,
                                  trainable=True)
        super(MyAttention, self).build(input_shape)

    def call(self, inputs):
        input_size = tf.shape(inputs)
        # Element-wise projections: (batch, len, out_dim)
        q = tf.multiply(inputs, self.qw)
        # Transpose K to (batch, out_dim, len) for the matmul below
        k = K.permute_dimensions(tf.multiply(inputs, self.kw), (0, 2, 1))
        v = tf.multiply(inputs, self.vw)
        # Tile V to (batch, len, len, out_dim) so each query position
        # sees a full copy of the value sequence
        v = tf.reshape(tf.tile(v, [1, input_size[1], 1]),
                       (input_size[0], input_size[1],
                        input_size[1], self.out_dim))
        # Scaled dot-product attention scores: (batch, len, len)
        p = tf.matmul(q, k)
        p = tf.reshape(K.softmax(p / np.sqrt(self.key_size)),
                       (input_size[0], input_size[1], input_size[1], 1))
        # Weight the values and sum over the key axis (axis 2)
        v = tf.reduce_sum(tf.multiply(v, p), 2)
        return v

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[1], self.out_dim)
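Here is a minimal sketch of how the layer can be dropped into a model. The maxlen, vocabulary size, and embedding dimension are illustrative assumptions, not the exact settings of the benchmark below:

# Minimal usage sketch; maxlen, vocab size, and embedding dim are
# illustrative assumptions, not the benchmark's exact settings.
from keras.models import Model
from keras.layers import Input, Embedding, GlobalAveragePooling1D, Dense

maxlen, vocab_size, embed_dim = 80, 20000, 128

inp = Input(shape=(maxlen,))
x = Embedding(vocab_size, embed_dim)(inp)
# out_dim must equal embed_dim: the layer multiplies the input
# element-wise with its (len, out_dim) weight matrices.
x = MyAttention(out_dim=embed_dim, key_size=8)(x)
x = GlobalAveragePooling1D()(x)
out = Dense(1, activation='sigmoid')(x)

model = Model(inp, out)
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])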
I tested this Attention layer on the IMDB dataset that ships with Keras; the results are shown below. Some hyperparameters could still be tuned, which might improve things a little.
The experiment code is based on Su Jianlin's benchmark code (https://spaces.ac.cn/archives/4765); I've also included his results for reference.
Su's code has lower time complexity, though it seems to overfit slightly toward the end. Instead of operating through trainable weight matrices, his implementation uses fully connected layers without bias terms; see his article for details.
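To make the contrast concrete, here is a rough single-head sketch of that dense-projection idea. The function name and shapes are my own illustration, not Su's actual code:

import numpy as np
import tensorflow as tf

def dense_attention(x, wq, wk, wv, key_size=8):
    # x: (batch, len, emb); wq, wk: (emb, key_size); wv: (emb, out_dim)
    # The projections are matrix multiplications, equivalent to
    # Dense(..., use_bias=False), instead of element-wise products.
    q = tf.einsum('ble,ek->blk', x, wq)         # (batch, len, key_size)
    k = tf.einsum('ble,ek->blk', x, wk)         # (batch, len, key_size)
    v = tf.einsum('ble,ed->bld', x, wv)         # (batch, len, out_dim)
    scores = tf.matmul(q, k, transpose_b=True) / np.sqrt(float(key_size))
    weights = tf.nn.softmax(scores)             # (batch, len, len)
    return tf.matmul(weights, v)                # (batch, len, out_dim)

Because the weighted sum here is a plain matmul rather than tiling V into a 4-D (batch, len, len, out_dim) tensor, this variant avoids the large intermediate, which is where the lower cost comes from.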