Variational autoencoder: InvalidArgumentError: Incompatible shapes: [100,5] vs. [100]
I am trying to run a variational autoencoder with an LSTM, so I replaced the dense layer with an LSTM layer. But it does not work. Here is the example:
# generate data
data = generate_example(length = 560,seed=253)
normal_data = data[1:400,:]
fault_data = data[400:,:]
timesteps = 5
# data preparation
# normalize function: scale `fault` using the mean/std of `normal`
def normalize(normal, fault):
    normal_mean = normal.mean(axis = 0)
    normal_std = normal.std(axis = 0)
    # normalize column by column
    fault_normalize = np.array(fault).reshape(fault.shape)
    for i in np.arange(fault.shape[1]):
        fault_normalize[:, i] = (fault[:, i] - normal_mean[i]) / normal_std[i]
    return fault_normalize
# lag function: build sliding windows of length `timesteps`
def lag(data, timesteps = 10):
    # define the shape of the returned array
    data_row = data.shape[0]
    data_col = data.shape[1]
    data_len = data_row - timesteps
    data_lag = np.zeros((data_len, timesteps, data_col), dtype = "float")
    for i in np.arange(0, data_len):
        data_lag[i, :, :] = data[i:(i + timesteps), :]
    return data_lag
normal_scale = normalize(normal = normal_data, fault = normal_data)
normal_scale = lag(data=normal_scale, timesteps = timesteps)
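As a quick sanity check (a minimal sketch, assuming the data has 3 columns, as implied by original_dim = 3 below), lag() turns a 2-D array into overlapping windows of length timesteps:
# illustrative only: verify the window shape produced by lag()
demo = np.arange(399 * 3, dtype = "float").reshape(399, 3)
windows = lag(demo, timesteps = 5)
print(windows.shape)                        # (394, 5, 3)
print(np.allclose(windows[0], demo[0:5]))   # True: the first window is rows 0..4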
This is the variational autoencoder:
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from keras.layers import Input, Dense, Lambda, LSTM, RepeatVector, TimeDistributed
from keras.models import Model
from keras import backend as K
from keras import metrics
from keras.datasets import mnist
batch_size = 100
original_dim = 3
latent_dim = 2
intermediate_dim = 5
epochs = 100
epsilon_std = 1.0
x = Input(shape=(timesteps,original_dim))
h = LSTM(intermediate_dim,return_sequences=False)(x)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)
def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0.,
                              stddev=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
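# (The Lambda layer implements the reparameterization trick:
#  z = z_mean + exp(z_log_var / 2) * epsilon with epsilon ~ N(0, 1),
#  so the sampling step stays differentiable w.r.t. z_mean and z_log_var.)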
# we instantiate these layers separately so as to reuse them later
decoded_repeat = RepeatVector(timesteps)
decoder_h = LSTM(intermediate_dim, activation='tanh',return_sequences=True)
decoder_mean = TimeDistributed(Dense(original_dim, activation='sigmoid'))
h_repeat = decoded_repeat(z)
h_decoded = decoder_h(h_repeat)
x_decoded_mean = decoder_mean(h_decoded)
# instantiate VAE model
vae = Model(x, x_decoded_mean)
# Compute VAE loss
xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae_loss = K.mean(xent_loss + kl_loss)
vae.add_loss(vae_loss)
vae.compile(optimizer='rmsprop',loss=None)
vae.summary()
x_train = normal_scale
x_test = normal_scale
vae.fit(x_train,
        shuffle=True,
        epochs=epochs,
        batch_size=batch_size)
# build a model to project inputs on the latent space
encoder = Model(x, z_mean)
But I get the error InvalidArgumentError: Incompatible shapes: [100,5] vs. [100].
I do not see where the shapes are incompatible. This is the structure of the variational autoencoder:
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_76 (InputLayer)            (None, 5, 3)          0
____________________________________________________________________________________________________
lstm_42 (LSTM)                   (None, 5)              180        input_76[0][0]
____________________________________________________________________________________________________
dense_317 (Dense)                (None, 2)               12        lstm_42[0][0]
____________________________________________________________________________________________________
dense_318 (Dense)                (None, 2)               12        lstm_42[0][0]
____________________________________________________________________________________________________
lambda_72 (Lambda)               (None, 2)                0        dense_317[0][0]
                                                                   dense_318[0][0]
____________________________________________________________________________________________________
repeat_vector_20 (RepeatVector)  (None, 5, 2)              0       lambda_72[0][0]
____________________________________________________________________________________________________
lstm_43 (LSTM)                   (None, 5, 5)            160       repeat_vector_20[0][0]
____________________________________________________________________________________________________
time_distributed_18 (TimeDistrib (None, 5, 3)             18       lstm_43[0][0]
====================================================================================================
Total params: 382
Trainable params: 382
Non-trainable params: 0
1 Answer
The error occurs when the loss is computed:
vae_loss = K.mean(xent_loss + kl_loss)
Here xent_loss is a tensor of shape (100, 5), while kl_loss has shape (100,). Expanding the dimensions of kl_loss enables broadcasting (which, I assume, is what you wanted):
vae_loss = K.mean(xent_loss + kl_loss[:, None])
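To see why the shapes clash and why the extra axis fixes it, here is a minimal NumPy sketch (illustrative only, using random arrays with the same shapes as in the error message):
import numpy as np

batch_size, timesteps = 100, 5
xent_loss = np.random.rand(batch_size, timesteps)   # per-sample, per-timestep reconstruction loss
kl_loss = np.random.rand(batch_size)                # one KL term per sample

# xent_loss + kl_loss raises "operands could not be broadcast together":
# (100, 5) and (100,) are incompatible, which is exactly the reported error.
# kl_loss[:, None] has shape (100, 1) and broadcasts against (100, 5),
# so the KL term is added to every timestep of each sample.
vae_loss = np.mean(xent_loss + kl_loss[:, None])
print(vae_loss)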