
Checkpoint: saving the graph's parameters at each training step.

During training, many checkpoints will be created. The parameter that counts how many training steps have been run is global_step.

global_step is initialized to 0. It is not trained or optimized itself (trainable=False); it is simply incremented automatically as training proceeds:
self.global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name='global_step')

global_step has to be passed to the optimizer so that it knows to increment the counter after every training step:
self.optimizer = tf.train.GradientDescentOptimizer(self.lr).minimize(
    self.loss, global_step=self.global_step)
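
To make the auto-increment concrete, here is a minimal, self-contained sketch (the variable w, the squared loss, and the learning rate 0.1 are made up for illustration; they are not the skip-gram model from this post). Each run of the training op advances global_step by one:

import tensorflow as tf

w = tf.Variable(1.0)
loss = tf.square(w)  # toy loss, only for illustration
global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name='global_step')
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(train_op)
        print(sess.run(global_step))  # prints 1, 2, 3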

Checkpoints are written with tf.train.Saver.save, whose signature is:

tf.train.Saver.save(sess, save_path, global_step=None, latest_filename=None,
                    meta_graph_suffix='meta', write_meta_graph=True, write_state=True)

For example, saving the skip-gram model's parameters:

saver.save(sess, 'checkpoints/skip-gram', global_step=model.global_step)

Because global_step was passed to save(), the step number is appended to the checkpoint name, so the checkpoint written at step 10000 is restored with:

saver.restore(sess, 'checkpoints/skip-gram-10000')
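
Putting the pieces together, the usual pattern is to save every N steps during training and to resume from the most recent checkpoint on restart. Below is a minimal sketch: the toy variable w, the squared loss, the 1000-step save interval, and the checkpoints/ directory are assumptions for illustration, not the exact code of this post.

import os
import tensorflow as tf

w = tf.Variable(1.0)
loss = tf.square(w)  # toy stand-in for the model's real loss
global_step = tf.Variable(0, dtype=tf.int32, trainable=False, name='global_step')
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=global_step)

saver = tf.train.Saver()  # by default, saves all variables in the graph
os.makedirs('checkpoints', exist_ok=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Resume from the most recent checkpoint, if one exists.
    ckpt = tf.train.latest_checkpoint('checkpoints')
    if ckpt is not None:
        saver.restore(sess, ckpt)

    for step in range(3000):
        sess.run(train_op)
        # Save every 1000 steps; the current global_step value is appended
        # to the filename, e.g. checkpoints/skip-gram-1000
        if (step + 1) % 1000 == 0:
            saver.save(sess, 'checkpoints/skip-gram', global_step=global_step)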