
A Quick guide to save and restore model Tensorflow – Part 6 |AI Sangam


Introduction

In this tutorial we will learn how to create a model using TensorFlow. We will also learn to create a Saver to manage all variables in the model. You will also come to know the use cases of placeholder, Variable, tensordot and tf.global_variables_initializer(). In the restoring phase you will learn how to load the saved model from the directory as well as how to load the variables which were saved in the saving phase. This is a quick guide to saving and restoring a model built using the TensorFlow deep learning library. You can also learn why restoring is an important step when models are created using deep learning libraries from the AI Sangam GitHub repository on Save and Restore Model Tensorflow. You may also visit the earlier parts of the TensorFlow series below:

Tensorflow tutorials from basic for beginners -Part1 | AI Sangam

Optimize Parameters Tensorflow Tutorial -Part 2| AI Sangam

Low Level Introduction of Tensorflow in a Simple Way -Part3 | AI Sangam

Tensorflow Control Practice with Live Codes and Sessions-Part 4 |  AI Sangam

Create Save and load Model with Graph-Part 5 | Tensorflow MNIST 

What you will learn in this tutorial

Saving a model built using TensorFlow

Restoring model using meta graphs and checkpoints

Saving a model built using TensorFlow

Before any discussion, it is very important to know why saving matters once we have built the model. To signify the importance of saving, let us have a look at the phases of building a model. One has to undergo preprocessing, model selection, building the model and training the model with the data before proceeding to the prediction phase. So if one had to train the model again and again for each prediction, it would be a waste of time. Moreover, it would lead to memory as well as power consumption. Imagine if the dataset were huge; it could take days to train. As a solution to these problems, saving the model is the answer. Once you save the model, each time you have to do a prediction you just load the saved model and you are saved from training every time. Please have a look at the below code to understand how we have built a small program in TensorFlow and saved the variables using tf.train.Saver and Saver.save.

# creating the model
import tensorflow as tf
# x and y are fed with the help of a dictionary and hence are chosen as placeholders
x = tf.placeholder(dtype=tf.float32, shape=(3,), name="x1")
y = tf.placeholder(dtype=tf.float32, shape=(3,), name="x2")
# z is the variable whose value will change and hence is declared as a Variable
z = tf.Variable(2.0, dtype=tf.float32, name='variable')
# dot product along one axis
dot = tf.tensordot(x, y, 1, name="operation_multiplication")
# updating the value of the variable
z1 = tf.assign(z, z + 1)
# adding the variable to the dot product achieved above
# (the op name "addtion" is kept as-is; the restore code looks it up by this name)
addition = tf.add(dot, z1, name="addtion")
# create a Saver with tf.train.Saver() to manage all variables in the model
saver = tf.train.Saver()
# the model is saved inside the session
# a session is created because results in TensorFlow are evaluated inside a session
with tf.Session() as sess:
    # a loop with a single iteration is run
    for _ in range(1):
        # needed because we have used a Variable;
        # returns an Op that initializes global variables
        sess.run(tf.global_variables_initializer())
        writer = tf.summary.FileWriter('session_tensorflow', sess.graph)
        output = sess.run([addition], feed_dict={x: [1, 2, 4], y: [4, 5, 7]})
        save_path = saver.save(sess, 'dir/my_model')
        print("dd", sess.run(z1))
        print("Model saved in path: %s" % save_path)
        print(output)
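To see what the graph above actually computes, here is a quick NumPy check of the same arithmetic (NumPy is not used in the tutorial itself; this is only an illustrative sketch): tensordot with axis 1 on two 1-D vectors is a plain dot product, and the assign op bumps z from 2.0 to 3.0 before the addition runs.

```python
import numpy as np

# the two vectors fed to the placeholders x1 and x2
x = np.array([1, 2, 4], dtype=np.float32)
y = np.array([4, 5, 7], dtype=np.float32)

# tf.tensordot(x, y, 1) on 1-D inputs is an ordinary dot product
dot = float(np.tensordot(x, y, 1))  # 1*4 + 2*5 + 4*7 = 42.0

# tf.assign(z, z + 1) updates z from 2.0 to 3.0 before the add runs,
# because the add op depends on the assign op's output
z = 2.0 + 1.0

addition = dot + z
print(addition)  # 45.0, matching the session output above
```

This also explains the saved variable value: the checkpoint stores z = 3.0, since the assign ran once before saver.save was called.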

The graph for the above is also created in TensorBoard by typing the below command in the terminal:

tensorboard --logdir="name of directory where sess.graph is written"
# for example, in this code:
tensorboard --logdir="session_tensorflow"

The graph for this program is shown below.

AutoGraph in Tensorboard

Restoring model using meta graphs and checkpoints

Before coming to the code, it is important to know why restoring a model matters. Suppose you have to train up to checkpoint 10000 and your run stopped at checkpoint 5000 for some reason. Suppose also that it took one day to reach iteration 5000 (this is just an illustration). If you know how to restore the model from the latest checkpoint, it would save you that whole day in this example. This shows how important it is to restore a model once it has been saved, or to resume from the checkpoint where training stopped for any reason. Have a look at the below code to understand the restoring phase as well.

# deep learning library is imported
import tensorflow as tf
sess = tf.Session()
# first load the meta graph and restore the weights
saver = tf.train.import_meta_graph('dir/my_model.meta')
saver.restore(sess, tf.train.latest_checkpoint('dir/'))
# print the restored value of the variable z
print(sess.run('variable:0'))
graph = tf.get_default_graph()
# fetch the placeholders by the names given in the saving phase
x = graph.get_tensor_by_name("x1:0")
y = graph.get_tensor_by_name("x2:0")
feed_dict = {x: [1, 2, 3], y: [6, 7, 8]}
# fetch the addition op by its name ("addtion", as defined while saving)
op_to_restore = graph.get_tensor_by_name("addtion:0")
print(sess.run(op_to_restore, feed_dict))
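It may help to know how tf.train.latest_checkpoint finds the right files: Saver.save writes a small text file named checkpoint into the save directory, whose first line records the prefix of the most recent checkpoint. The sketch below writes a sample checkpoint file in a temporary directory and reads that prefix back; the file contents mirror what Saver writes, but the reader function here is only an illustration, not the library's actual parser.

```python
import os
import tempfile

# Saver.save writes a small text file called "checkpoint" next to the
# model files; its first line points at the most recent checkpoint prefix
sample = ('model_checkpoint_path: "my_model"\n'
          'all_model_checkpoint_paths: "my_model"\n')

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'checkpoint'), 'w') as f:
    f.write(sample)

def read_latest(ckpt_dir):
    """Illustrative reader: return the latest checkpoint prefix recorded
    in the directory's "checkpoint" file, joined to the directory path."""
    with open(os.path.join(ckpt_dir, 'checkpoint')) as f:
        first_line = f.readline().strip()
    # the value is the quoted prefix after "model_checkpoint_path:"
    prefix = first_line.split('"')[1]
    return os.path.join(ckpt_dir, prefix)

print(read_latest(tmpdir))  # ends with .../my_model
```

This is why passing 'dir/' (the directory, not a file) to tf.train.latest_checkpoint works: the function resolves the prefix from this bookkeeping file, and saver.restore then loads the matching .index and .data files.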

For more details and an explanation of the code, please feel free to visit the GitHub link https://github.com/AISangam/Save-and-Restore-Model-Tensorflow and please spare some time to read the README.md file. You can email us at aisangamofficial@gmail.com for any queries or doubts. Please follow us on our social accounts; you can find these in the footer section of our official website www.aisangam.com. In the coming posts we will have a deeper look at the TensorFlow deep learning library and understand its use cases in a better way. Please stay in touch with us.

For more information, subscribe to the AI Sangam YouTube channel.
