Please tell me how mu, {} and {} are calculated.
I don't know what the linear part of your code looks like, so I would suggest something like this:
w_fc_mu = tf.Variable(tf.truncated_normal([7*7*256, latent_dim], stddev=0.1), name='weight_fc_mu')
b_fc_mu = tf.Variable(tf.constant(0.1, shape=[latent_dim]), name='biases_fc_mu')
w_fc_sig = tf.Variable(tf.truncated_normal([7*7*256, latent_dim], stddev=0.1), name='weight_fc_sig')
b_fc_sig = tf.Variable(tf.constant(0.1, shape=[latent_dim]), name='biases_fc_sig')
epsilon = tf.random_normal([1, latent_dim])
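# How mu and stddev themselves can then be obtained (a minimal sketch, not taken from
# the question's code): assuming the flattened encoder output is a tensor named h_flat
# of shape [batch, 7*7*256] (h_flat is a placeholder name, adapt it to your graph):
mu = tf.matmul(h_flat, w_fc_mu) + b_fc_mu
stddev = tf.matmul(h_flat, w_fc_sig) + b_fc_sig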
with tf.variable_scope('mu'):
    tf.summary.histogram('mu', mu)
with tf.variable_scope('stddev'):
    tf.summary.histogram('stddev', stddev)
with tf.variable_scope('z'):
    # this formula was adopted from the following paper:
    # https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7979344
    latent_var = mu + tf.multiply(tf.sqrt(tf.exp(stddev)), epsilon)
    tf.summary.histogram('features_sig', stddev)
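The z block above is the reparameterization trick: stddev is effectively used as a log-variance here, so the sample is z = mu + sqrt(exp(stddev)) * epsilon with epsilon drawn from N(0, I), which keeps the sampling step differentiable with respect to mu and stddev.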
...
with tf.name_scope('loss_KL'):
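    # closed-form KL divergence between the diagonal Gaussian N(mu, sigma^2) and the
    # standard normal N(0, I): KL = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2);
    # the 1e-9 below avoids taking log(0)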
    temp2 = 1 + tf.log(tf.square(stddev + 1e-9)) - tf.square(mu) - tf.square(stddev)
    KL_term = -0.5 * tf.reduce_sum(temp2, reduction_indices=1)
    tf.summary.scalar('KL_term', tf.reduce_mean(KL_term))
with tf.name_scope('total_loss'):
    variational_lower_bound = tf.reduce_mean(log_likelihood + KL_term)
    tf.summary.scalar('loss', variational_lower_bound)
with tf.name_scope('optimizer'):
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
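    # make sure any collected UPDATE_OPS (e.g. batch-norm moving-average updates,
    # if your encoder uses batch normalization) run before each training step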
    with tf.control_dependencies(update_ops):
        optimizer = tf.train.AdamOptimizer(0.00001).minimize(variational_lower_bound)
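Since several tf.summary ops are created above, you would normally merge and write them out during training so you can inspect them in TensorBoard. A minimal sketch (num_steps and feed_dict are placeholders for your own training loop):

merged_summaries = tf.summary.merge_all()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('./logs', sess.graph)
    for step in range(num_steps):
        # one optimization step plus the merged summaries
        _, summary = sess.run([optimizer, merged_summaries], feed_dict=feed_dict)
        writer.add_summary(summary, step)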
I hope this helps!