Most previous attempts stitched together existing solutions to the above sub-problems in order to go from an image to its description [6, 16]. In contrast, in this work we propose a single joint model that takes an image I as input and is trained to maximize the likelihood p(S|I) of producing a target sequence of words S = S_1, S_2, ..., where each word S_t comes from a given dictionary, such that the sequence describes the image adequately.
The main inspiration for our work comes from recent advances in machine translation, where the task is to transform a sentence S written in a source language into its translation T in the target language by maximizing p(T|S). For many years machine translation was also achieved by a chain of separate tasks (translating words individually, aligning words, reordering, and so on), but recent work has shown that translation can be done in a much simpler way using Recurrent Neural Networks (RNNs) [3, 2, 30] while still reaching state-of-the-art performance. An "encoder" RNN reads the source sentence and transforms it into a rich fixed-length vector representation, which is then used as the initial hidden state of a "decoder" RNN that generates the target sentence.
Thus, we propose to directly maximize the probability of the correct description given the image by using the following formulation:
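$$ \theta^{\star} = \arg\max_{\theta} \sum_{(I,S)} \log p(S \mid I; \theta) \tag{1} $$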
where θ are the parameters of our model, I is an image, and S its correct transcription. Since S represents any sentence, its length is unbounded. Thus, it is common to apply the chain rule to model the joint probability over S_0, ..., S_N, where N is the length of this particular example, as
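$$ \log p(S \mid I) = \sum_{t=0}^{N} \log p(S_t \mid I, S_0, \ldots, S_{t-1}) \tag{2} $$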
where we dropped the dependency on θ for convenience. At training time, (S, I) is a training example pair, and we optimize the sum of the log probabilities as described in (2) over the whole training set using stochastic gradient descent (further training details are given in Section 4).
It is natural to model p(S_t | I, S_0, ..., S_{t-1}) with a Recurrent Neural Network (RNN), where the variable number of words we condition upon up to t-1 is expressed by a fixed-length hidden state or memory h_t. This memory is updated after seeing a new input x_t by using a non-linear function f:
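$$ h_{t+1} = f(h_t, x_t) \tag{3} $$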
The choice of f in (3) is governed by its ability to deal with vanishing and exploding gradients [10], the most common challenge in designing and training RNNs. To address this challenge, a particular form of recurrent network called the LSTM [10] was introduced and has been applied with great success to translation [3, 30] and sequence generation [9]. The core of the LSTM model is a memory cell c that encodes, at every time step, knowledge of the inputs observed up to that step (see Figure 2). The behavior of the cell is controlled by "gates", which are layers applied multiplicatively and can therefore either keep a value from the gated layer if the gate is 1, or zero out this value if the gate is 0. In particular, three gates are used to control whether to forget the current cell value (forget gate f), whether to read its input (input gate i), and whether to output the new cell value (output gate o). The definitions of the gates, the cell update, and the output are as follows:
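$$
\begin{aligned}
i_t &= \sigma(W_{ix} x_t + W_{im} m_{t-1}) \\
f_t &= \sigma(W_{fx} x_t + W_{fm} m_{t-1}) \\
o_t &= \sigma(W_{ox} x_t + W_{om} m_{t-1}) \\
c_t &= f_t \odot c_{t-1} + i_t \odot h(W_{cx} x_t + W_{cm} m_{t-1}) \\
m_t &= o_t \odot c_t \\
p_{t+1} &= \mathrm{Softmax}(m_t)
\end{aligned}
$$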
where ⊙ denotes the product with a gate value, and the various W matrices are trained parameters. Such multiplicative gates make it possible to train the LSTM robustly, since these gates deal well with exploding and vanishing gradients [10]. The nonlinearities are the sigmoid σ(·) and the hyperbolic tangent h(·). The last equation, for m_t, is what is fed to the Softmax, which produces a probability distribution over all words.
The LSTM model is trained to predict each word of the sentence after it has seen the image as well as all preceding words, as defined by p(S_t | I, S_0, ..., S_{t-1}). For this purpose, it is instructive to think of the LSTM in unrolled form: a copy of the LSTM memory is created for the image and for each sentence word, such that all LSTMs share the same parameters and the output m_{t-1} of the LSTM at time t-1 is fed to the LSTM at time t (see Figure 3). In this unrolled version, all recurrent connections are transformed into feed-forward connections. In more detail, if we denote by I the input image and by S = (S_0, ..., S_N) a true sentence describing this image, the unrolling procedure reads:
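$$
\begin{aligned}
x_{-1} &= \mathrm{CNN}(I) \\
x_t &= W_e S_t, \quad t \in \{0, \ldots, N-1\} \\
p_{t+1} &= \mathrm{LSTM}(x_t), \quad t \in \{0, \ldots, N-1\}
\end{aligned}
$$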
where we represent each word as a one-hot vector S_t of dimension equal to the size of the dictionary. Note that we denote by S_0 a special start word and by S_N a special stop word, which designate the start and end of the sentence. In particular, by emitting the stop word the LSTM signals that a complete sentence has been generated. Both the image and the words are mapped to the same space: the image by using a vision CNN, and the words by using a word embedding W_e. The image I is only input once, at t = -1, to inform the LSTM about the image contents. We empirically verified that feeding the image at each time step as an extra input yields inferior results, since the network can explicitly exploit noise in the image and overfits more easily.
Our loss is the sum of the negative log likelihood of the correct word at each step as follows:
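$$ L(I, S) = -\sum_{t=1}^{N} \log p_t(S_t) $$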
The above loss is minimized w.r.t. all the parameters of the LSTM, the top layer of the image embedder CNN, and the word embeddings W_e.
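To make the unrolled training procedure concrete, below is a minimal sketch in PyTorch (not the authors' implementation; cnn_feature_dim, vocab_size, and embed_dim are illustrative assumptions). It projects the CNN image features into the embedding space as the t = -1 input, embeds the ground-truth words S_0, ..., S_{N-1} as the remaining inputs, and sums the per-step negative log likelihoods of S_1, ..., S_N:

import torch
import torch.nn as nn

class CaptionLSTM(nn.Module):
    def __init__(self, cnn_feature_dim, vocab_size, embed_dim=512):
        super().__init__()
        # maps CNN image features into the word-embedding space (x_{-1} = CNN(I))
        self.image_proj = nn.Linear(cnn_feature_dim, embed_dim)
        # word embedding W_e (x_t = W_e S_t)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, embed_dim, batch_first=True)
        # maps the LSTM output m_t to logits over the dictionary (fed to Softmax)
        self.output = nn.Linear(embed_dim, vocab_size)

    def forward(self, cnn_features, captions):
        # cnn_features: (batch, cnn_feature_dim); captions: (batch, N+1) word ids S_0..S_N
        image_step = self.image_proj(cnn_features).unsqueeze(1)   # input at t = -1
        word_steps = self.embed(captions[:, :-1])                 # inputs at t = 0..N-1
        hidden, _ = self.lstm(torch.cat([image_step, word_steps], dim=1))
        logits = self.output(hidden[:, 1:, :])                    # predictions for S_1..S_N
        # loss: sum of negative log likelihoods of the correct word at each step
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            captions[:, 1:].reshape(-1),
            reduction='sum')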
References

[1] A. Aker and R. Gaizauskas. Generating image descriptions using dependency relational patterns. In ACL, 2010.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473, 2014.
[3] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.
[4] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[5] D. Elliott and F. Keller. Image description using visual dependency representations. In EMNLP, 2013.
[6] A. Farhadi, M. Hejrati, M. A. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth. Every picture tells a story: Generating sentences from images. In ECCV, 2010.
[7] R. Gerber and H.-H. Nagel. Knowledge representation for the generation of quantified natural language descriptions of vehicle traffic in image sequences. In ICIP. IEEE, 1996.
[8] Y. Gong, L. Wang, M. Hodosh, J. Hockenmaier, and S. Lazebnik. Improving image-sentence embeddings using large weakly annotated photo collections. In ECCV, 2014.
[9] A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.
[10] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8), 1997.
[11] M. Hodosh, P. Young, and J. Hockenmaier. Framing image description as a ranking task: Data, models and evaluation metrics. JAIR, 47, 2013.
[12] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015.
[13] A. Karpathy, A. Joulin, and L. Fei-Fei. Deep fragment embeddings for bidirectional image sentence mapping. In NIPS, 2014.
[14] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv:1411.2539, 2014.
[15] R. Kiros, R. Zemel, and R. Salakhutdinov. Multimodal neural language models. In NIPS Deep Learning Workshop, 2013.
[16] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. Baby talk: Understanding and generating simple image descriptions. In CVPR, 2011.
[17] P. Kuznetsova, V. Ordonez, A. C. Berg, T. L. Berg, and Y. Choi. Collective generation of natural image descriptions. In ACL, 2012.
[18] P. Kuznetsova, V. Ordonez, T. Berg, and Y. Choi. TreeTalk: Composition and compression of trees for image descriptions. ACL, 2(10), 2014.
[19] S. Li, G. Kulkarni, T. L. Berg, A. C. Berg, and Y. Choi. Composing simple image descriptions using web-scale n-grams. In Conference on Computational Natural Language Learning, 2011.
[20] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. arXiv:1405.0312, 2014.
[21] J. Mao, W. Xu, Y. Yang, J. Wang, and A. Yuille. Explain images with multimodal recurrent neural networks. arXiv:1410.1090, 2014.
[22] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. In ICLR, 2013.
[23] M. Mitchell, X. Han, J. Dodge, A. Mensch, A. Goyal, A. C. Berg, K. Yamaguchi, T. L. Berg, K. Stratos, and H. Daumé III. Midge: Generating image descriptions from computer vision detections. In EACL, 2012.
[24] V. Ordonez, G. Kulkarni, and T. L. Berg. Im2Text: Describing images using 1 million captioned photographs. In NIPS, 2011.
[25] K. Papineni, S. Roukos, T. Ward, and W. J. Zhu. BLEU: A method for automatic evaluation of machine translation. In ACL, 2002.
[26] C. Rashtchian, P. Young, M. Hodosh, and J. Hockenmaier. Collecting image annotations using Amazon's Mechanical Turk. In NAACL HLT Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, pages 139–147, 2010.
[27] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge, 2014.
[28] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv:1312.6229, 2013.
[29] R. Socher, A. Karpathy, Q. V. Le, C. Manning, and A. Y. Ng. Grounded compositional semantics for finding and describing images with sentences. In ACL, 2014.
[30] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[31] R. Vedantam, C. L. Zitnick, and D. Parikh. CIDEr: Consensus-based image description evaluation. arXiv:1411.5726, 2015.
[32] B. Z. Yao, X. Yang, L. Lin, M. W. Lee, and S.-C. Zhu. I2T: Image parsing to text description. Proceedings of the IEEE, 98(8), 2010.
[33] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. In ACL, 2014.
[34] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv:1409.2329, 2014.
## BLEU

The BLEU score is defined as

$$ \exp\left(\min\left(0,\ 1-\frac{\mathrm{len}_{\text{label}}}{\mathrm{len}_{\text{pred}}}\right)\right) \prod_{n=1}^{k} p_{n}^{1/2^{n}} $$

where \( p_n \) is the n-gram precision (the fraction of n-grams in the predicted sequence that also appear in the label sequence) and \( k \) is the maximum number of words in the subsequences we want to match. BLEU equals 1 when the predicted sequence is identical to the label sequence. Because matching longer subsequences is harder than matching shorter ones, BLEU assigns greater weight to the precision of longer subsequences. For example, with \( p_n \) fixed at 0.5, the weighted terms increase with \( n \): \( 0.5^{1/2} \approx 0.7 \), \( 0.5^{1/4} \approx 0.84 \), \( 0.5^{1/8} \approx 0.92 \), \( 0.5^{1/16} \approx 0.96 \). In addition, a model that predicts a shorter sequence tends to obtain higher \( p_n \) values, so the coefficient in front of the product term penalizes short outputs. For example, with \( k = 2 \), if the label sequence is A, B, C, D, E, F and the predicted sequence is A, B, then although \( p_1 = p_2 = 1 \), the penalty factor is \( \exp(1 - 6/2) \approx 0.14 \), so the BLEU score is also about 0.14.
A Python implementation:

import collections
import math

def bleu(pred_tokens, label_tokens, k):
    # BLEU of a predicted token sequence against a label (reference) sequence
    len_pred, len_label = len(pred_tokens), len(label_tokens)
    # brevity penalty for predictions shorter than the label
    score = math.exp(min(0, 1 - len_label / len_pred))
    for n in range(1, k + 1):
        # count the n-grams of the label sequence
        num_matches, label_subs = 0, collections.defaultdict(int)
        for i in range(len_label - n + 1):
            label_subs[' '.join(label_tokens[i: i + n])] += 1
        # count predicted n-grams that also occur in the label sequence
        for i in range(len_pred - n + 1):
            if label_subs[' '.join(pred_tokens[i: i + n])] > 0:
                num_matches += 1
                label_subs[' '.join(pred_tokens[i: i + n])] -= 1
        # n-gram precision p_n, weighted by 1/2^n
        score *= math.pow(num_matches / (len_pred - n + 1), math.pow(0.5, n))
    return score
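For the example above (label sequence A, B, C, D, E, F; prediction A, B; k = 2), \( p_1 = p_2 = 1 \) and the result is dominated by the brevity penalty:

print(bleu(['A', 'B'], ['A', 'B', 'C', 'D', 'E', 'F'], k=2))  # ≈ 0.135, i.e. exp(1 - 6/2)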
## Implementing FastICA in Python

import math
import random
import matplotlib.pyplot as plt
from numpy import *

n_components = 2

def f1(x, period=4):
    # periodic sawtooth-like source signal
    return 0.5 * (x - math.floor(x / period) * period)

def create_data():
    # number of samples
    n = 500
    # time axis
    T = [0.1 * xi for xi in range(0, n)]
    # source signals: a sine wave and the sawtooth f1
    S = array([[sin(xi) for xi in T], [f1(xi) for xi in T]], float32)
    # mixing matrix
    A = array([[0.8, 0.2], [-0.3, -0.7]], float32)
    return T, S, dot(A, S)

def whiten(X):
    # zero mean
    X_mean = X.mean(axis=-1)
    X -= X_mean[:, newaxis]
    # whiten: V = D^(-1/2) E^T from the eigendecomposition of X X^T
    A = dot(X, X.transpose())
    D, E = linalg.eig(A)
    D2 = linalg.inv(array([[D[0], 0.0], [0.0, D[1]]], float32))
    D2[0, 0] = sqrt(D2[0, 0])
    D2[1, 1] = sqrt(D2[1, 1])
    V = dot(D2, E.transpose())
    return dot(V, X), V

def do_fastica(X):
    n, m = X.shape
    p = float(m)
    g = _logcosh
    # black magic: rescale by sqrt(number of samples)
    X *= sqrt(X.shape[1])
    # create an initial unmixing matrix W
    W = ones((n, n), float32)
    for i in range(n):
        for j in range(i):
            W[i, j] = random.random()
    # compute W by fixed-point iteration
    maxIter = 200
    for ii in range(maxIter):
        gwtx, g_wtx = g(dot(W, X))
        W1 = do_decorrelation(dot(gwtx, X.T) / p - g_wtx[:, newaxis] * W)
        lim = max(abs(abs(diag(dot(W1, W.T))) - 1))
        W = W1
        if lim < 0.0001:
            break
    return W

def show_data(T, S):
    plt.plot(T, [S[0, i] for i in range(S.shape[1])], marker="*")
    plt.plot(T, [S[1, i] for i in range(S.shape[1])], marker="o")
    plt.show()

def main():
    T, S, D = create_data()
    Dwhiten, K = whiten(D)
    W = do_fastica(Dwhiten)
    # Sr: reconstructed sources
    Sr = dot(dot(W, K), D)
    show_data(T, D)   # mixed observations
    show_data(T, S)   # original sources
    show_data(T, Sr)  # recovered sources
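The script above calls _logcosh and do_decorrelation, which are not defined in the snippet. A minimal sketch of both helpers, following the standard FastICA recipe (log-cosh contrast function and symmetric decorrelation) and reusing the same from numpy import * namespace, might look like this; treat the exact forms as assumptions rather than part of the original code:

def _logcosh(x, alpha=1.0):
    # contrast function g(x) = tanh(alpha * x) and the row-wise mean of its derivative
    gx = tanh(alpha * x)
    g_x = alpha * (1 - gx ** 2)
    return gx, g_x.mean(axis=-1)

def do_decorrelation(W):
    # symmetric decorrelation: W <- (W W^T)^(-1/2) W
    s, u = linalg.eigh(dot(W, W.T))
    return dot(dot(dot(u, diag(1.0 / sqrt(s))), u.T), W)

if __name__ == "__main__":
    main()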
## Implementing PCA in Python
import numpy as np
def pca(X, k):
    # k is the number of components you want
    n_samples, n_features = X.shape
    # mean of each feature
    mean = np.array([np.mean(X[:, i]) for i in range(n_features)])
    # normalization: center the data
    norm_X = X - mean
    # scatter matrix
    scatter_matrix = np.dot(np.transpose(norm_X), norm_X)
    # calculate the eigenvectors and eigenvalues
    eig_val, eig_vec = np.linalg.eig(scatter_matrix)
    eig_pairs = [(np.abs(eig_val[i]), eig_vec[:, i]) for i in range(n_features)]
    # sort eig_pairs by eigenvalue, from highest to lowest
    eig_pairs.sort(key=lambda pair: pair[0], reverse=True)
    # select the top k eigenvectors
    feature = np.array([ele[1] for ele in eig_pairs[:k]])
    # project the centered data onto the selected eigenvectors
    data = np.dot(norm_X, np.transpose(feature))
    return data
X = np.array([[-1, 1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
print(pca(X,1))
Comparing sklearn's PCA with our implementation (the projected values may differ from ours by a sign, since the direction of each eigenvector is arbitrary):
## PCA with sklearn
from sklearn.decomposition import PCA
import numpy as np
X = np.array([[-1, 1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
pca=PCA(n_components=1)
pca.fit(X)
print(pca.transform(X))
## Singular Value Decomposition (SVD)

Singular Value Decomposition (SVD) is one of the most widely used algorithms in machine learning and one of the cornerstones you cannot avoid when studying machine learning. SVD is mainly used for feature decomposition in dimensionality reduction, in recommender systems, and in fields such as natural language processing and computer vision. Intuitively, SVD factors a linear transformation into simpler transformations: rotations and an axis-aligned stretching (scaling).
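As a quick illustration (a minimal sketch using NumPy, reusing the small data matrix from the PCA example above), np.linalg.svd returns the orthogonal factors U and Vt together with the singular values s, and their product reconstructs the original matrix:

import numpy as np

X = np.array([[-1, 1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=float)
# thin SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(s)                                     # singular values (the stretching factors)
print(np.allclose(X, U @ np.diag(s) @ Vt))   # True: the factorization reconstructs X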