Deep Learning and Artificial Intelligence Tutorial @ LMU WS 2018/19
a)
Unit test whether fit_linear works properly.
b)
Why is reproducibility important? How can it be ensured?
Run on the CPU only: install tensorflow instead of tensorflow-gpu, hide the GPU with CUDA_VISIBLE_DEVICES="", or place ops inside tf.device() blocks:
with tf.device("/cpu:0"):
    ...  # create ops
config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)
with tf.Session(config=config) as sess:
    ...
random.seed(42)
np.random.seed(42)
# set at the beginning, after tf.reset_default_graph() (i.e. before the first random op is created)
tf.set_random_seed(42)
config = tf.estimator.RunConfig(tf_random_seed=42)
dnn_clf = tf.estimator.DNNClassifier(..., config=config)
input_fn = tf.estimator.inputs.numpy_input_fn(x={"X": xtrain}, ..., shuffle=False)
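As a minimal illustration of why seeding matters (pure NumPy, independent of TensorFlow — using an explicit np.random.RandomState here rather than the global seed), fixing the seed makes random draws exactly repeatable:

```python
import numpy as np

def sample(seed):
    # Re-seeding before sampling makes the draw deterministic.
    rng = np.random.RandomState(seed)
    return rng.rand(3)

# The same seed always yields the same numbers; different seeds do not.
assert np.array_equal(sample(42), sample(42))
```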
Eliminate any other sources of variability, e.g.:
# listing order depends on the OS
files = os.listdir()
files.sort()  # sort before use for a deterministic order
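A small self-contained sketch of this pitfall (the directory and file names are made up for illustration): os.listdir() returns entries in an arbitrary, filesystem-dependent order, so sorting restores determinism.

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for name in ["b.txt", "a.txt", "c.txt"]:
        open(os.path.join(d, name), "w").close()
    # os.listdir() gives no ordering guarantee; sorted() makes it deterministic.
    files = sorted(os.listdir(d))

assert files == ["a.txt", "b.txt", "c.txt"]
```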
c)
Loss is NaN.
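One common way a loss turns into NaN is evaluating log(0) inside a cross-entropy term; clipping the predictions away from 0 and 1 keeps it finite. A hedged NumPy sketch of this failure mode (the toy values and eps are assumptions, not from the exercise):

```python
import numpy as np

y_true = np.array([1.0, 0.0])
y_pred = np.array([1.0, 0.0])  # predictions of exactly 1 and 0

# Naive cross-entropy: log(0) gives -inf, and 0 * -inf gives NaN.
naive = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Clipping predictions away from 0 and 1 keeps the loss finite.
eps = 1e-7
p = np.clip(y_pred, eps, 1 - eps)
safe = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

assert np.isnan(naive)
assert np.isfinite(safe)
```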
d)
from tensorflow.python import debug as tf_debug
...
if debug:
    session = tf_debug.LocalCLIDebugWrapperSession(session)
e)
output = dense_layer(h, 'output_layer', 1)
output = sigmoid(output)  # this was missing, so the raw logits could be negative
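To see why the sigmoid fixes the negative outputs, here is a minimal NumPy version (the logit values are made up): it squashes any real-valued logit into the open interval (0, 1).

```python
import numpy as np

def sigmoid(x):
    # Maps any real-valued input into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([-5.0, -0.5, 0.0, 3.0])
probs = sigmoid(logits)
assert np.all(probs > 0) and np.all(probs < 1)
```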
a)
tf.summary.scalar(): logs a single scalar value (e.g. loss or accuracy)
tf.summary.histogram(): logs the distribution of a tensor (e.g. weights or activations)
tf.summary.merge_all(): merges all summary ops into a single op to evaluate
tf.summary.FileWriter(): writes summaries (and optionally the graph) to an event file for TensorBoard
b)
…
c)
…
d)
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    experiment_dir, write_graph=True, write_images=True,
    histogram_freq=1)
callbacks = [tensorboard_callback]