TensorFlow For Dummies

It may seem strange to use the Cloud SDK to launch jobs locally. But the ML Engine is neither simple nor free, so you should test your applications locally before deploying them to the cloud. Another reason to execute your code locally is that you can view printed text on the command line instead of having to download and read logs.

You can launch a job on your development system by entering one of the following commands:

  • gcloud ml-engine local train: run a training job locally
  • gcloud ml-engine local predict: run a prediction job locally
These commands serve different purposes and accept different sets of configuration flags.
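
To see the complete set of flags that either command accepts, you can append gcloud's standard --help flag; for example:

gcloud ml-engine local train --help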

Running a local training job

A GCP training job executes a Python package and produces output in the directory specified by the --job-dir flag. This table lists --job-dir and other flags you can set for local training jobs.

Flags for Local Training

Flag Description
--module-name=MODULE_NAME Identifies the module to execute
--package-path=PACKAGE_PATH Path to the Python package containing the module to execute
--job-dir=JOB_DIR Path to store training outputs
--distributed Runs code in distributed mode
--parameter-server-count=PARAMETER_SERVER_COUNT Number of parameter servers to run
--start-port=START_PORT Start of the range of ports reserved by the local cluster
--worker-count=WORKER_COUNT Number of workers to run

The --package-path flag identifies the top-level directory of your package. This is the directory that contains your package's setup.py file. The --module-name flag identifies the module to execute inside the package.

If you'd like to try this for yourself, copy the mnist_train.tfrecords and mnist_test.tfrecords files from the ch12 directory to the ch13 directory. Then go to the ch13/cloud_mnist directory and enter the following command:

gcloud ml-engine local train --module-name trainer.task --package-path trainer --job-dir output -- --data_dir ../images
In this command, --package-path indicates that the trainer directory represents a package, and --module-name indicates that the name of the package's module is trainer.task. The --job-dir flag tells the application to store its results in a directory named output.

Two dashes (--) separate --job-dir from --data_dir. This indicates that --data_dir and any following flags are defined by the user.
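
The distributed-mode flags from the earlier table are passed in the same way, before the two-dash separator. As a rough sketch (the worker and parameter-server counts here are arbitrary, and the command otherwise assumes the same trainer package and data directory as above):

gcloud ml-engine local train --module-name trainer.task --package-path trainer --job-dir output --distributed --parameter-server-count 2 --worker-count 2 -- --data_dir ../images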

Running a local prediction job

After training is complete, you can launch a local prediction job by executing gcloud ml-engine local predict. The table lists the different flags you can set.

Flags for Local Prediction

Flag Description
--model-dir=MODEL_DIR Path of the model
--json-instances=JSON_INSTANCES Path to a local file containing prediction data in JSON format
--text-instances=TEXT_INSTANCES Path to a local file containing prediction data in plain text

Set the --model-dir flag to the directory that contains the output of the training operation. You also need to supply the prediction inputs through either the --json-instances or --text-instances flag.
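
For example, assuming the training run described earlier wrote its saved model into the output directory and that you've prepared a file named instances.json containing one JSON-formatted input per line (both names are placeholders for your own paths), a local prediction command might look like this:

gcloud ml-engine local predict --model-dir output --json-instances instances.json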

About the book author:

Matthew Scarpino has been a programmer and engineer for more than 20 years. He has worked extensively with machine learning applications, especially those involving financial analysis, cognitive modeling, and image recognition. Matthew is a Google Certified Data Engineer and blogs about TensorFlow at tfblog.com.
