Normally, you can use a tool called strip_unused.py, located in the same directory as freeze_graph.py at tensorflow/python/tools, to remove operations such as DecodeJpeg that are not included in the TensorFlow core library (see https://www.tensorflow.org/mobile/prepare_models#removing_training-only_nodes for more details). But because the input node image_feed requires the decode operation (Figure 6.2), a tool such as strip_unused won't treat DecodeJpeg as unused, so it won't be stripped. You can verify this by first running the strip_unused command as follows:
bazel-bin/tensorflow/python/tools/strip_unused --input_graph=/tmp/image2text_frozen.pb --output_graph=/tmp/image2text_frozen_stripped.pb --input_node_names="image_feed,input_feed,lstm/state_feed" --output_node_names="softmax,lstm/initial_state,lstm/state" --input_binary=True
Then load the output graph in IPython and list the first several nodes like this:
import tensorflow as tf
g=tf.GraphDef()
g.ParseFromString(open("/tmp/image2text_frozen_stripped.pb", "rb").read())
x=[n.name for n in g.node]
x[:6]
The output will be as follows:
[u'image_feed',
u'input_feed',
u'decode/DecodeJpeg',
u'convert_image/Cast',
u'convert_image/y',
u'convert_image']
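Why strip_unused keeps DecodeJpeg here can be understood as a backward reachability walk: the tool keeps every node reachable from the requested outputs, stopping at the declared inputs, so any node sitting on the path between an input and the outputs is never considered unused. The following is only a conceptual sketch of that idea, with a tiny hypothetical slice of our graph's edges, not the tool's actual code:

```python
# Conceptual sketch of strip_unused's pruning (NOT the real tool's code):
# keep every node reachable backward from the outputs, stopping at the
# declared inputs. The edges below are a hypothetical miniature of our graph.
graph = {
    "decode/DecodeJpeg": ["image_feed"],
    "convert_image/Cast": ["decode/DecodeJpeg"],
    "convert_image": ["convert_image/Cast"],
    "softmax": ["convert_image"],  # hugely simplified
}

def reachable_nodes(graph, inputs, outputs):
    kept, stack = set(inputs), list(outputs)
    while stack:
        node = stack.pop()
        if node in kept:  # inputs are pre-added, so the walk stops at them
            continue
        kept.add(node)
        stack.extend(graph.get(node, []))
    return kept

# With image_feed as the input, DecodeJpeg sits on the path to softmax,
# so it is kept -- exactly what the node listing above shows.
print("decode/DecodeJpeg" in reachable_nodes(graph, {"image_feed"}, ["softmax"]))
```

Note that if we instead declared convert_image/Cast as the input, the walk would never reach decode/DecodeJpeg, which hints at the fix we arrive at shortly.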
A second possible solution to fix the error for your iOS app is to add the unregistered op implementation to the tf_op_files file and rebuild the TensorFlow iOS library, as we did in Chapter 5, Understanding Simple Speech Commands. The bad news is that, because TensorFlow provides no mobile-compatible implementation of the DecodeJpeg functionality, there's no DecodeJpeg implementation we can add to tf_op_files.
The fix to this annoyance is actually also hinted at in Figure 6.2, where the convert_image node consumes the decoded version of the image_feed input. To be more accurate, click on the Cast and decode nodes in the TensorBoard graph, as shown in Figure 6.4, and you'll see from the TensorBoard info cards on the right that the input and output of Cast (named convert_image/Cast) are decode/DecodeJpeg and convert_image, and that the input and output of decode are image_feed and convert_image/Cast:

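The wiring described by those info cards can be captured in a few lines of plain Python; this is just a hypothetical miniature for illustration, not TensorBoard's actual data model:

```python
# Edges read off the TensorBoard info cards described above
# (a hypothetical miniature for illustration, not TensorBoard's data model).
inputs = {
    "decode/DecodeJpeg": ["image_feed"],
    "convert_image/Cast": ["decode/DecodeJpeg"],
    "convert_image": ["convert_image/Cast", "convert_image/y"],
}

def consumers(node_name):
    """Return the nodes that take node_name as an input."""
    return [n for n, ins in inputs.items() if node_name in ins]

print(consumers("decode/DecodeJpeg"))   # ['convert_image/Cast']
print(consumers("convert_image/Cast"))  # ['convert_image']
```

So the only consumer of decode/DecodeJpeg is convert_image/Cast, which means that if we treat convert_image/Cast as the graph's input, the decode node becomes dead weight.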
In fact, in im2txt/ops/image_processing.py, there's a line image = tf.image.convert_image_dtype(image, dtype=tf.float32) that converts a decoded image to floats. Let's replace image_feed with convert_image/Cast, the name shown both in TensorBoard and in the output of the preceding code snippet, and run strip_unused again:
bazel-bin/tensorflow/python/tools/strip_unused --input_graph=/tmp/image2text_frozen.pb --output_graph=/tmp/image2text_frozen_stripped.pb --input_node_names="convert_image/Cast,input_feed,lstm/state_feed" --output_node_names="softmax,lstm/initial_state,lstm/state" --input_binary=True
Now rerun the code snippet as follows:
g.ParseFromString(open("/tmp/image2text_frozen_stripped.pb", "rb").read())
x=[n.name for n in g.node]
x[:6]
And the output no longer has a decode/DecodeJpeg node:
[u'input_feed',
u'convert_image/Cast',
u'convert_image/y',
u'convert_image',
u'ExpandDims_1/dim',
u'ExpandDims_1']
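For reference, tf.image.convert_image_dtype casts integer pixel values to float and scales them into the [0, 1] range; in the listing above this shows up as the convert_image/Cast node followed by a multiply against convert_image/y, presumably the 1/255 scale constant. A pure-Python sketch of that scaling, with made-up pixel values:

```python
# Pure-Python sketch of what tf.image.convert_image_dtype(image,
# dtype=tf.float32) does to a decoded uint8 image: cast to float,
# then scale by 1/255 so values land in [0, 1]. The pixel values
# here are made up for illustration.
uint8_pixels = [0, 64, 128, 255]          # e.g. one row of a decoded JPEG
float_pixels = [p / 255.0 for p in uint8_pixels]
print(float_pixels[0], float_pixels[-1])  # 0.0 1.0
```

This matters for the app side: with convert_image/Cast as the model's input, decoding the JPEG is now the app's job rather than the graph's.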
If we use our new model file, image2text_frozen_stripped.pb, in an iOS or Android app, the No OpKernel was registered to support Op 'DecodeJpeg' with these attrs error will certainly be gone. But another error occurs: Not a valid TensorFlow Graph serialization: Input 0 of node ExpandDims_6 was passed float from input_feed:0 incompatible with expected int64. If you have gone through the nice Google TensorFlow codelab called TensorFlow for Poets 2 (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2), you may recall there's another tool called optimize_for_inference that does things similar to strip_unused, and that it works nicely for the image classification task in the codelab. You can run it like this:
bazel build tensorflow/python/tools:optimize_for_inference
bazel-bin/tensorflow/python/tools/optimize_for_inference \
--input=/tmp/image2text_frozen.pb \
--output=/tmp/image2text_frozen_optimized.pb \
--input_names="convert_image/Cast,input_feed,lstm/state_feed" \
--output_names="softmax,lstm/initial_state,lstm/state"
But loading the output model file image2text_frozen_optimized.pb in an iOS or Android app results in the same Input 0 of node ExpandDims_6 was passed float from input_feed:0 incompatible with expected int64 error. It looks like, while we're trying to achieve, to some humble extent at least, what Holmes can do in this chapter, someone wants us to be like Holmes first.
If you have tried the strip_unused or optimize_for_inference tools on other models, such as those we saw in the previous chapters, they work just fine. It turns out that these two Python-based tools, albeit included in the official TensorFlow 1.4 and 1.5 releases, have some bugs when optimizing more complicated models. The up-to-date and correct tool is the C++-based transform_graph tool, now the official tool recommended on the TensorFlow Mobile site (https://www.tensorflow.org/mobile). Run the following commands to get rid of the float incompatible with int64 error when the model is deployed on mobile devices:
bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=/tmp/image2text_frozen.pb \
--out_graph=/tmp/image2text_frozen_transformed.pb \
--inputs="convert_image/Cast,input_feed,lstm/state_feed" \
--outputs="softmax,lstm/initial_state,lstm/state" \
--transforms='
strip_unused_nodes(type=float, shape="299,299,3")
fold_constants(ignore_errors=true, clear_output_shapes=true)
fold_batch_norms
fold_old_batch_norms'
We won't go into the details of all the --transforms options, which are fully documented at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms. Basically, the --transforms setting correctly gets rid of unused nodes such as DecodeJpeg for our model and also does a few other optimizations.
Now if you load the image2text_frozen_transformed.pb file in your iOS and Android apps, the incompatibility error will be gone. Of course, we haven't written any real iOS and Android code yet, but we know the model is good and ready for us to have fun with. Good, but it can be better.