I wanted TensorBoard to show a model's training-loss and test-loss curves together for comparison. I searched around a lot online, but every example I found still produced just a single curve. My own plot looked like this:

What I wanted was this:

So how do you actually do it?
The quick version:
[shell]
tensorboard --logdir=run1:"/home/.../summary",run2:"/home/.../summary"
All you have to do is point TensorBoard at several events directories at once; the events files are what `tf.summary.FileWriter` writes out. (Equivalently, write each run into its own subdirectory of a common log directory and pass that single directory to `--logdir`; TensorBoard then names each run after its subdirectory.)

And there is one more handy trick:

Note the "ignore outliers in chart scaling" checkbox in the top left. Unchecked, you get the chart above; checked, you get the chart below.
Finally, here is my code; feel free to play with it yourself.
[python]
# -*- coding: utf-8 -*-
"""
Created on Tue May 29 09:17:55 2018
@author: 618
"""
import tensorflow as tf
import numpy as np

# Training data: y = x plus noise; test data: y = x on a different interval.
trX = np.linspace(-1, 1, 100)
trY = trX + np.random.randn(*trX.shape) * 0.33
teX = np.linspace(2, 3, 100)
teY = teX

X = tf.placeholder(tf.float32, [100], name="input_x")
Y = tf.placeholder(tf.float32, [100], name="output_y")

def model(X, w):
    return tf.multiply(X, w)

w = tf.Variable(0.1, name='weights')

with tf.name_scope("cost"):
    y_model = model(X, w)
    cost = tf.reduce_mean(tf.square(Y - y_model))
    tf.summary.scalar('loss', cost)

with tf.name_scope("train"):
    train_op = tf.train.AdamOptimizer(0.01).minimize(cost)

with tf.name_scope("test_cost"):
    test_y_model = model(X, w)
    test_cost = tf.reduce_mean(tf.square(Y - test_y_model))
    tf.summary.scalar('test_loss', test_cost)

merged = tf.summary.merge_all()

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    # Two writers, two subdirectories -> two runs ("train" and "test")
    # that TensorBoard overlays in the same chart.
    summary_writer = tf.summary.FileWriter('./log/train', sess.graph)
    summary_writer1 = tf.summary.FileWriter('./log/test')
    for i in range(1000):
        if i % 100 == 0:
            print(sess.run(cost, feed_dict={X: trX, Y: trY}))
            print(sess.run(test_cost, feed_dict={X: teX, Y: teY}))
            summary, _ = sess.run([merged, test_cost], feed_dict={X: teX, Y: teY})
            summary_writer1.add_summary(summary, i)
        else:
            summary, _ = sess.run([merged, train_op], feed_dict={X: trX, Y: trY})
            summary_writer.add_summary(summary, i)
    print(sess.run(w))
    summary_writer.close()
    summary_writer1.close()
'''
# Earlier single-writer attempt: one FileWriter, separate placeholders for
# the test data, both scalars written into the same run.
import tensorflow as tf
import numpy as np

trX = np.linspace(-1, 1, 100)
trY = trX + np.random.randn(*trX.shape) * 0.33
teX = np.linspace(2, 3, 100)
teY = teX + 1

X = tf.placeholder(tf.float32, [100], name="input_x")
Y = tf.placeholder(tf.float32, [100], name="output_y")
X1 = tf.placeholder(tf.float32, [100], name="input_x1")
Y1 = tf.placeholder(tf.float32, [100], name="output_y1")

def model(X, w):
    return tf.multiply(X, w)

w = tf.Variable(0.1, name='weights')

y_model = model(X, w)
cost = tf.reduce_mean(tf.square(Y - y_model))
tf.summary.scalar('loss', cost)
train_op = tf.train.AdamOptimizer(0.01).minimize(cost)

test_y_model = model(X1, w)
test_cost = tf.reduce_mean(tf.square(Y1 - test_y_model))
tf.summary.scalar('test_loss', test_cost)

merged = tf.summary.merge_all()

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    summary_writer = tf.summary.FileWriter('./log', sess.graph)
    for i in range(1000):
        sess.run([train_op], feed_dict={X: trX, Y: trY, X1: teX, Y1: teY})
        if i % 100 == 0:
            print(sess.run(cost, feed_dict={X: trX, Y: trY}))
            print(sess.run(test_cost, feed_dict={X1: teX, Y1: teY}))
        summary = sess.run(merged, feed_dict={X: trX, Y: trY, X1: teX, Y1: teY})
        summary_writer.add_summary(summary, i)
    print(sess.run(w))
    summary_writer.close()
'''
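For anyone on a newer TensorFlow, here is a minimal sketch of the same two-writer trick with the TF2 `tf.summary` API. This is my own assumed modern equivalent, not part of the original post; the model and data mirror the code above, but the names (`train_writer`, `test_writer`) are mine. The key difference from the TF1 code: both writers log under the *same* tag `'loss'`, so TensorBoard overlays the two runs in one chart automatically.

```python
# Assumed TF2 equivalent of the two-FileWriter trick above.
import numpy as np
import tensorflow as tf

# Same toy data as in the post: y = x plus noise for training, y = x for test.
trX = np.linspace(-1, 1, 100).astype(np.float32)
trY = trX + np.random.randn(*trX.shape).astype(np.float32) * 0.33
teX = np.linspace(2, 3, 100).astype(np.float32)
teY = teX

w = tf.Variable(0.1)
opt = tf.keras.optimizers.Adam(0.01)

# One writer per run; TensorBoard names the runs after the subdirectories.
train_writer = tf.summary.create_file_writer('./log/train')
test_writer = tf.summary.create_file_writer('./log/test')

for step in range(1000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(trY - trX * w))
    grads = tape.gradient(loss, [w])
    opt.apply_gradients(zip(grads, [w]))

    # Same tag, different writers -> two curves in one chart.
    with train_writer.as_default():
        tf.summary.scalar('loss', loss, step=step)
    with test_writer.as_default():
        test_loss = tf.reduce_mean(tf.square(teY - teX * w))
        tf.summary.scalar('loss', test_loss, step=step)

print(float(w.numpy()))
```

Launching `tensorboard --logdir=./log` then shows both runs in the same "loss" chart.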