emloop_tensorflow.hooks
Module providing a small collection of TF emloop hooks.
The hooks are expected to be created and used by emloop. For additional info about the emloop hooks system, please refer to the emloop tutorial.
WriteTensorBoard: Write scalar epoch variables to TensorBoard summaries.
DecayLR: Hook for modifying (decaying) the learning rate (or any other variable) during the training.
InitLR: Hook for initializing the learning rate (or any other variable) before the training.
DecayLROnPlateau: Decay learning rate on plateau.

emloop_tensorflow.hooks.WriteTensorBoard(model, output_dir, image_variables=None, flush_secs=10, visualize_graph=False, on_unknown_type='ignore', on_missing_variable='error', **kwargs)
Bases: hooks.AbstractHook
Write scalar epoch variables to TensorBoard summaries.
Refer to TensorBoard introduction for more info.
By default, non-scalar values are ignored.
hooks:
  - emloop_tensorflow.hooks.WriteTensorBoard

hooks:
  - emloop_tensorflow.WriteTensorBoard:
      on_unknown_type: str

hooks:
  - emloop_tensorflow.hooks.WriteTensorBoard:
      visualize_graph: true
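For illustration only, a minimal sketch of what writing scalar epoch variables to TensorBoard amounts to, assuming the TensorFlow 1.x summary API; the function and argument names below are hypothetical and not the hook's actual implementation:

import tensorflow as tf

def write_epoch_scalars(writer, epoch_id, epoch_data):
    # writer: a tf.summary.FileWriter pointed at output_dir
    # epoch_data: maps stream names to {variable name: value}, e.g. {'train': {'loss': 0.3}}
    values = []
    for stream, variables in epoch_data.items():
        for name, value in variables.items():
            if isinstance(value, (int, float)):  # non-scalar values are ignored by default
                values.append(tf.Summary.Value(tag='{}/{}'.format(stream, name),
                                               simple_value=float(value)))
    writer.add_summary(tf.Summary(value=values), global_step=epoch_id)
    writer.flush()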
MISSING_VARIABLE_ACTIONS = {'error', 'ignore', 'warn'}
Action executed on missing variable.

UNKNOWN_TYPE_ACTIONS = {'error', 'ignore', 'warn'}
Possible actions to take on unknown variable type.
__init__(model, output_dir, image_variables=None, flush_secs=10, visualize_graph=False, on_unknown_type='ignore', on_missing_variable='error', **kwargs)
Create new WriteTensorBoard hook.
emloop_tensorflow.hooks.DecayLR(model, decay_value=0.98, variable_name='learning_rate', decay_type='multiply', **kwargs)
Bases: emloop.hooks.every_n_epoch.EveryNEpoch
Hook for modifying (decaying) the learning rate (or any other variable) during the training.
It expects a variable with the specified name to be present in the TF model being trained.
Every n_epochs, the variable is either multiplied by or summed with the specified decay_value.
Multiply the learning_rate variable by 0.98 after every epoch:

hooks:
  - emloop_tensorflow.hooks.DecayLR

Multiply the learning_rate variable by 0.999 every 5th epoch:

hooks:
  - emloop_tensorflow.hooks.DecayLR:
      decay_value: 0.999
      n_epochs: 5

Linearly decay the my_learning_rate variable by adding -0.00001 after every epoch:

hooks:
  - emloop_tensorflow.hooks.DecayLR:
      decay_value: -0.00001
      variable_name: my_learning_rate
      decay_type: add
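For illustration only, a rough sketch of the decay step described above, assuming a TensorFlow 1.x graph and session; graph, session and the helper name are placeholders, not the hook's actual code:

import tensorflow as tf

def decay_variable(graph, session, variable_name='learning_rate',
                   decay_value=0.98, decay_type='multiply'):
    # look up the variable with the given name in the trained model's graph
    candidates = [var for var in graph.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
                  if var.name.startswith(variable_name)]
    assert candidates, 'variable `{}` not found in the model'.format(variable_name)
    variable = candidates[0]
    old_value = session.run(variable)
    # either multiply or sum the variable with the decay value
    new_value = old_value * decay_value if decay_type == 'multiply' else old_value + decay_value
    session.run(variable.assign(new_value))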
LR_DECAY_TYPES = {'add', 'multiply'}
Possible LR decay types.
__init__(model, decay_value=0.98, variable_name='learning_rate', decay_type='multiply', **kwargs)
Create new DecayLR hook.
_after_n_epoch(epoch_id, **_)
Call _decay_variable().
Return type: None
emloop_tensorflow.hooks.InitLR(model, value, variable_name='learning_rate', **kwargs)
Bases: hooks.AbstractHook
Hook for initializing the learning rate (or any other variable) before the training.
This is useful for setting up a resumed training.
It expects a variable with the specified name to be present in the TF model being trained.
Set the learning_rate variable to 0.001:

hooks:
  - emloop_tensorflow.hooks.InitLR:
      value: 0.001

Set the my_variable variable to 42:

hooks:
  - emloop_tensorflow.hooks.InitLR:
      variable_name: my_variable
      value: 42
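For illustration only, the initialization boils down to a single assignment performed once before the training starts, again assuming a TensorFlow 1.x graph and session with placeholder names:

import tensorflow as tf

def init_variable(graph, session, variable_name='learning_rate', value=0.001):
    # set the named variable to the configured value once, before training begins
    variable = [var for var in graph.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
                if var.name.startswith(variable_name)][0]
    session.run(variable.assign(value))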
emloop_tensorflow.hooks.DecayLROnPlateau(decay_value=0.1, **kwargs)
Bases: emloop.hooks.on_plateau.OnPlateau, emloop_tensorflow.hooks.decay_lr.DecayLR
Decay learning rate on plateau.
After decaying, LR may be decayed again only after additional long_term epochs.
It shares the arguments of both DecayLR and el.hooks.OnPlateau.
Decay the learning rate when the mean of the last 100 accuracy values is greater than the mean of the last 30 accuracy values:

hooks:
  - emloop_tensorflow.hooks.DecayLROnPlateau:
      long_term: 100
      short_term: 30
      variable: accuracy
      objective: max
Multiply the learning rate by 0.01 when a loss plateau is detected:

hooks:
  - emloop_tensorflow.hooks.DecayLROnPlateau:
      decay_value: 0.01
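For illustration, a minimal sketch of the plateau test described above (pure Python/NumPy; the function name and exact comparison approximate the inherited OnPlateau logic rather than quoting it):

import numpy as np

def plateau_detected(values, short_term=30, long_term=100, objective='min'):
    # values: per-epoch history of the monitored variable (e.g. loss or accuracy)
    if len(values) < long_term:
        return False
    short_mean = np.mean(values[-short_term:])
    long_mean = np.mean(values[-long_term:])
    # 'min' objective (e.g. loss): plateau when the long-term mean is lower than the short-term mean
    # 'max' objective (e.g. accuracy): plateau when the long-term mean is greater than the short-term mean
    return long_mean < short_mean if objective == 'min' else long_mean > short_mean

When the test passes, the hook applies the same decay step as DecayLR and then waits another long_term epochs before it may decay again.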
__init__(decay_value=0.1, **kwargs)
Create new DecayLROnPlateau hook.
Raises: AssertionError
after_epoch(**kwargs)
Call _on_plateau_action() if the long_term variable mean is lower/greater than the short_term mean.
Return type: None