I get `[-1, 256, 256, 3]` as the output shape using the transpose layers shown below. I print the output shape. My question is specifically about the height and width (256, 256); the 3 is the number of filters from the last transpose layer in my code.

I assumed, rather simplistically, that the formula is this. I read other threads. But when I calculate, I don't seem to get that output. I think I may be missing the padding calculation.
How much padding is added by 'SAME'?

My code is this:
```python
linear = tf.layers.dense(z, 512 * 8 * 8)
linear = tf.contrib.layers.batch_norm(linear, is_training=is_training, decay=0.88)
out = tf.layers.conv2d_transpose(conv, 64, kernel_size=4, strides=2, padding='SAME')
out = tf.layers.dropout(out, keep_prob)
out = tf.contrib.layers.batch_norm(out, is_training=is_training, decay=0.88)
out = tf.nn.leaky_relu(out)
out = tf.layers.conv2d_transpose(out, 128, kernel_size=4, strides=1, padding='SAME')
out = tf.layers.dropout(out, keep_prob)
out = tf.contrib.layers.batch_norm(out, is_training=is_training, decay=0.88)
out = tf.layers.conv2d_transpose(out, 3, kernel_size=4, strides=1, padding='SAME')
print(out.get_shape())
```
Regarding 'SAME' padding, the convolution documentation offers some detailed explanations (further details in those notes). Especially:
```
# for `tf.layers.conv2d` with `SAME` padding:
out_height = ceil(float(in_height) / float(strides[1]))
out_width  = ceil(float(in_width) / float(strides[2]))
```
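To make the formula concrete, here is a small pure-Python sketch (the helper name is mine, not a TensorFlow API):

```python
import math

def conv2d_same_out_dim(in_dim, stride):
    # Output size of tf.layers.conv2d with padding='SAME':
    # it depends only on the input size and the stride.
    return math.ceil(float(in_dim) / float(stride))

print(conv2d_same_out_dim(256, 2))  # 128
print(conv2d_same_out_dim(7, 2))    # 4 (ceil rounds up for odd sizes)
```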
In this case, the output shape depends only on the input shape and the stride; the padding size is then computed from there to fill this shape requirement.
Now for transposed convolutions: as this operation is the backward counterpart of a normal convolution (its gradient), the output shape of a normal convolution corresponds to the input shape of its counterpart transposed operation. In other words, while the output shape of `tf.layers.conv2d()` is divided by the stride, the output shape of `tf.layers.conv2d_transpose()` is multiplied by it:
```
# for `tf.layers.conv2d_transpose()` with `SAME` padding:
out_height = in_height * strides[1]
out_width  = in_width * strides[2]
```
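As a quick sketch of this rule (assuming the dense output in your code is reshaped to an 8×8 spatial map before the first transposed convolution; the helper name is mine):

```python
def conv2d_transpose_same_out_dim(in_dim, stride):
    # Output size of tf.layers.conv2d_transpose with padding='SAME':
    # the input size is simply multiplied by the stride.
    return in_dim * stride

# First transposed convolution (strides=2): 8 -> 16
print(conv2d_transpose_same_out_dim(8, 2))   # 16
# The next two layers use strides=1, so the size is unchanged:
print(conv2d_transpose_same_out_dim(16, 1))  # 16
```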
But once again, the padding size is calculated to obtain this output shape, not the other way around (for 'SAME' padding). Since the normal relation between these values (the one you find in most convolution tutorials) is:
```
# for `tf.layers.conv2d_transpose()` with given padding:
out_height = strides[1] * (in_height - 1) + kernel_size[0] - 2 * padding_height
out_width  = strides[2] * (in_width - 1) + kernel_size[1] - 2 * padding_width
```
Rearranging the equations we get
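Solving the relations above for the padding terms gives (a reconstruction, term-by-term consistent with the equations above and the note that follows):

```
# for `tf.layers.conv2d_transpose()` with `SAME` padding, rearranged:
padding_height = 0.5 * (strides[1] * (in_height - 1) + kernel_size[0] - out_height)
padding_width  = 0.5 * (strides[2] * (in_width - 1) + kernel_size[1] - out_width)
```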
Note: if e.g. `2 * padding_height` is an odd number, then `padding_height_top = floor(padding_height)` and `padding_height_bottom = ceil(padding_height)` (same resp. for `padding_width`, `padding_width_left` and `padding_width_right`).
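The asymmetric split described in this note can be sketched as (the helper name is mine):

```python
import math

def split_padding(padding):
    # When 2 * padding is odd, the smaller half goes on the
    # top/left and the larger half on the bottom/right.
    return math.floor(padding), math.ceil(padding)

print(split_padding(1.5))  # (1, 2)
print(split_padding(1.0))  # (1, 1)
```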
Replacing `out_height` and `out_width` with their expressions, and using your values (for the 1st transposed convolution):
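This substitution can be checked numerically; assuming the first transposed convolution sees an 8×8 input (with your `kernel_size=4`, `strides=2`), a sketch:

```python
def transpose_same_padding(in_dim, stride, kernel_size):
    # padding = 0.5 * (stride * (in_dim - 1) + kernel_size - out_dim),
    # with out_dim = in_dim * stride for 'SAME' padding.
    out_dim = in_dim * stride
    return 0.5 * (stride * (in_dim - 1) + kernel_size - out_dim)

print(transpose_same_padding(8, 2, 4))  # 1.0
```

Note that the `in_dim` terms cancel, so for `kernel_size=4`, `strides=2` the padding is 1 regardless of the input size.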
You thus have a padding of 1 added on every side of your data, in order to obtain the output dim `out_dim = in_dim * stride = strides * (in_dim - 1) + kernel_size - 2 * padding`.