
TensorBoard Basics

1. Preface

In the earlier post *Getting Started with TensorFlow*, we learned the basic concepts of TensorFlow and how to build a neural network. Next, let's learn how to use TensorBoard.

2. TensorBoard

To make TensorFlow programs easier to understand, debug, and optimize, Google released a suite of visualization tools called TensorBoard. You can use TensorBoard to display your TensorFlow graph, plot quantitative metrics about its execution, and show additional data such as images that pass through it.
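
Before the full examples below, here is a minimal sketch of the basic workflow (my own illustration, assuming TensorFlow 1.x; the directory name ./demo_log is arbitrary): register summaries on the tensors you care about, write them with a FileWriter, then point TensorBoard at the log directory.

import tensorflow as tf

x = tf.constant(3.0, name='x')
y = tf.square(x, name='y')
tf.summary.scalar('y_value', y)    # register y as a scalar summary
merged = tf.summary.merge_all()    # one op that evaluates every registered summary

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./demo_log', sess.graph)  # also saves the graph
    summary = sess.run(merged)
    writer.add_summary(summary, global_step=0)
    writer.close()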

2.1. Visualizing the Graph

In *Getting Started with TensorFlow*, we built a neural network. Here we modify it slightly so that its structure can be displayed in TensorBoard.

import tensorflow as tf
import numpy as np

tf.set_random_seed(1)
np.random.seed(1)

# fake data
x = np.linspace(-1, 1, 100)[:, np.newaxis]       # shape (100, 1)
noise = np.random.normal(0, 0.1, size=x.shape)
y = np.power(x, 2) + noise                       # shape (100, 1) + some noise

with tf.variable_scope('Inputs'):
    tf_x = tf.placeholder(tf.float32, x.shape, name='x')
    tf_y = tf.placeholder(tf.float32, y.shape, name='y')

with tf.variable_scope('Net'):
    l1 = tf.layers.dense(tf_x, 10, tf.nn.relu, name='hidden_layer')
    output = tf.layers.dense(l1, 1, name='output_layer')

    # add to histogram summary
    tf.summary.histogram('h_out', l1)
    tf.summary.histogram('pred', output)

loss = tf.losses.mean_squared_error(tf_y, output, scope='loss')
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(loss)
tf.summary.scalar('loss', loss)                  # add loss to scalar summary

sess = tf.Session()
sess.run(tf.global_variables_initializer())

writer = tf.summary.FileWriter('./log', sess.graph)  # write to file
merge_op = tf.summary.merge_all()                    # operation to merge all summaries

for step in range(100):
    # train and fetch the merged summary
    _, result = sess.run([train_op, merge_op], {tf_x: x, tf_y: y})
    writer.add_summary(result, step)
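
One small caveat: the FileWriter buffers events in memory. Summaries are normally flushed automatically, but explicitly closing the writer after the training loop (my addition, not in the original script) guarantees the last steps reach disk:

writer.close()  # flush any buffered summaries to the event file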

1. First, run the script with python 305_tensorboard.py. This creates a log directory in the current folder containing an events.out.tfevents.* event file.

2. Run tensorboard --logdir ./log. This starts the TensorBoard web server, listening on port 6006 by default. Open http://localhost:6006 to see the TensorBoard page.

3. Click GRAPHS in the navigation bar to see the graph we defined; the Inputs and Net variable scopes appear as collapsible boxes. The short sketch below shows where this grouping comes from.
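
The grouping in the GRAPHS panel comes from scoping: ops created inside tf.variable_scope (or tf.name_scope) get the scope name as a prefix, and TensorBoard collapses each prefix into one expandable box. A quick standalone way to see this (a sketch of mine, assuming TensorFlow 1.x):

import tensorflow as tf

with tf.variable_scope('Inputs'):
    a = tf.placeholder(tf.float32, name='x')

# The scope prefixes the op name; this prefix is what TensorBoard
# uses to group nodes into one box in the GRAPHS panel.
print(a.name)  # -> Inputs/x:0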

2.2. Visualizing the Training Process

from __future__ import print_function
import tensorflow as tf
import numpy as np


def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
    # add one more layer and return the output of this layer
    layer_name = 'layer%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
            tf.summary.histogram(layer_name + '/weights', Weights)
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
            tf.summary.histogram(layer_name + '/biases', biases)
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        tf.summary.histogram(layer_name + '/outputs', outputs)
        return outputs


# Make up some real data
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

# define placeholder for inputs to network
with tf.name_scope('inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')

# add hidden layer
l1 = add_layer(xs, 1, 10, n_layer=1, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, n_layer=2, activation_function=None)

# the error between prediction and real data
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                                        reduction_indices=[1]))
    tf.summary.scalar('loss', loss)

with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()
merged = tf.summary.merge_all()

writer = tf.summary.FileWriter("log/", sess.graph)

init = tf.global_variables_initializer()
sess.run(init)

for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        result = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(result, i)

Open TensorBoard and click SCALARS in the navigation bar to see how the loss evolves over training. Click DISTRIBUTIONS to see how the weights, biases, and outputs of each layer change.
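
A common next step is comparing several training runs: if each run writes its events into its own subdirectory of the log directory, TensorBoard overlays the curves in the SCALARS panel. A minimal sketch of this pattern (the run names and the toy decaying "loss" are my own illustration, assuming TensorFlow 1.x):

import tensorflow as tf

# Write two runs into separate subdirectories, e.g. log/run_fast and
# log/run_slow, so TensorBoard can overlay their loss curves.
for run, decay in (('run_fast', 0.9), ('run_slow', 0.99)):
    tf.reset_default_graph()                  # fresh graph per run
    step_ph = tf.placeholder(tf.float32, name='step')
    fake_loss = tf.pow(decay, step_ph)        # a toy decaying curve
    tf.summary.scalar('loss', fake_loss)
    merged = tf.summary.merge_all()
    with tf.Session() as sess:
        writer = tf.summary.FileWriter('log/%s' % run, sess.graph)
        for i in range(100):
            summary = sess.run(merged, {step_ph: float(i)})
            writer.add_summary(summary, i)
        writer.close()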

3. Source Code

https://github.com/voidking/Tensorflow-Tutorial.git

4. Bookmarks

TensorFlow official website

TensorFlow Playground

莫烦 (Morvan) TensorFlow tutorial series

TensorFlow official documentation (Chinese edition)

TensorFlow Chinese community

YouTube: CS 20SI: TensorFlow for Deep Learning Research

  • Author: 好好学习的郝
  • Original link: https://www.voidking.com/dev-tensorboard/
  • Copyright: This article is licensed under BY-NC-SA; please credit the source when reposting! The original site is updated and corrected continuously, so feel free to visit.
  • Also published on the author's WeChat official account; follow if interested.