You can launch a job on your development system by entering one of the following commands:

- `gcloud ml-engine local train`: run a training job locally
- `gcloud ml-engine local predict`: run a prediction job locally
Running a local training job
A GCP training job executes a Python package and produces output in the directory specified by the `--job-dir` flag. The following table lists `--job-dir` and the other flags you can set for local training jobs.

Flags for Local Training
| Flag | Description |
| --- | --- |
| `--module-name=MODULE_NAME` | Identifies the module to execute |
| `--package-path=PACKAGE_PATH` | Path to the Python package containing the module to execute |
| `--job-dir=JOB_DIR` | Path to store training outputs |
| `--distributed` | Runs code in distributed mode |
| `--parameter-server-count=PARAMETER_SERVER_COUNT` | Number of parameter servers to run |
| `--start-port=START_PORT` | Start of the range of ports reserved by the local cluster |
| `--worker-count=WORKER_COUNT` | Number of workers to run |
The `--package-path` flag identifies the top-level directory of your package. This is the directory that contains your package's `setup.py` file. The `--module-name` flag identifies the module to execute inside the package.
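To make the relationship between the two flags concrete, here is one possible layout consistent with the description above (a sketch only; your actual file set may differ):

```
cloud_mnist/
    trainer/          <- the directory passed as --package-path
        setup.py
        __init__.py
        task.py       <- executed as --module-name trainer.task
```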
If you'd like to try this for yourself, copy the `mnist_train.tfrecords` and `mnist_test.tfrecords` files from the `ch12` directory to the `ch13` directory. Then go to the `ch13/cloud_mnist` directory and enter the following command:
```
gcloud ml-engine local train --module-name trainer.task \
  --package-path trainer --job-dir output -- --data_dir ../images
```

In this command, `--package-path` indicates that the `trainer` directory represents a package, and `--module-name` indicates that the name of the package's module is `trainer.task`. The `--job-dir` flag tells the application to store its results in a directory named `output`. Two dashes (`--`) separate `--job-dir` from `--data_dir`. This separator indicates that `--data_dir` and any following flags are defined by the user.
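Inside the module, user-defined flags such as `--data_dir` are typically parsed with `argparse`. The sketch below assumes that convention; the parsing code itself is not shown in the text, and the `--job-dir` handling reflects the common behavior of `gcloud` forwarding that flag to your program as well:

```python
import argparse

def parse_args(argv=None):
    """Parse the user-defined flags that appear after the -- separator."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--data_dir', required=True,
                        help='directory containing the MNIST TFRecord files')
    # gcloud typically forwards --job-dir to the module as well,
    # so the program should accept it here.
    parser.add_argument('--job-dir', dest='job_dir', default='output',
                        help='directory in which to store training outputs')
    return parser.parse_args(argv)

# Example: parsing the user-defined portion of the command shown above
args = parse_args(['--data_dir', '../images', '--job-dir', 'output'])
print(args.data_dir)  # ../images
```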
Running a local prediction job
After training is complete, you can launch a local prediction job by executing `gcloud ml-engine local predict`. The following table lists the flags you can set.

Flags for Local Prediction
| Flag | Description |
| --- | --- |
| `--model-dir=MODEL_DIR` | Path of the model |
| `--json-instances=JSON_INSTANCES` | Path to a local file containing prediction data in JSON format |
| `--text-instances=TEXT_INSTANCES` | Path to a local file containing prediction data in plain text |
You need to set the `--model-dir` flag to the directory that contains the output of the training operation. You also need to identify the prediction data using the `--json-instances` or `--text-instances` flag.
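The file passed to `--json-instances` holds one JSON-encoded input per line. The sketch below writes such a file; the `image` feature name and the 784-value input shape are assumptions chosen to match a flattened 28x28 MNIST image, so substitute whatever input names your exported model actually expects:

```python
import json

# Each line of the instances file is one JSON-encoded input example.
# The 'image' key and 784-element shape are hypothetical; adjust them
# to match your model's serving signature.
instances = [
    {'image': [0.0] * 784},   # an all-black 28x28 image, flattened
    {'image': [1.0] * 784},   # an all-white 28x28 image, flattened
]

with open('instances.json', 'w') as f:
    for inst in instances:
        f.write(json.dumps(inst) + '\n')
```

You could then run, for example, `gcloud ml-engine local predict --model-dir output --json-instances instances.json`.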