How to Configure Applications to Receive Machine Learning Arguments

By Matthew Scarpino

When the machine learning (ML) Engine executes your application, it passes in command-line arguments that provide information about the operating environment. The following table lists the arguments it may pass.

Machine Learning Arguments

Argument              Operation
--job-dir             Location of the application’s data
--train_batch_size    Batch size for training
--train_steps         Number of steps for each training epoch
--eval_batch_size     Batch size for evaluation
--eval_steps          Number of steps to run evaluation at each checkpoint
--eval_delay_secs     Time to wait before the first evaluation
--min_eval_frequency  Minimum number of training steps between evaluations

--job-dir is particularly important because it tells the application where it should store its output files. The following code demonstrates how you can access this argument with an ArgumentParser:

import argparse

if __name__ == '__main__':

    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--job-dir',
        help='Checkpoint/output location',
    )
    args = parser.parse_args()
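One detail worth noting: argparse replaces the dash in a flag name with an underscore when it creates the attribute, so the value of --job-dir is read from args.job_dir. A minimal sketch (the gs:// path below is only a placeholder, not a value the ML Engine actually sends):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--job-dir', help='Checkpoint/output location')

# Simulate the command-line arguments the ML Engine would pass.
args = parser.parse_args(['--job-dir', 'gs://example-bucket/output'])

# argparse converts '--job-dir' to the attribute name 'job_dir'.
print(args.job_dir)
```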

In addition to the built-in arguments, you can provide arguments of your own. When you submit a job, the ML Engine will pass your arguments to the application. But keep two points in mind:

  • User-defined flags must follow all of the built-in flags.
  • Two dashes (--) must separate the built-in flags from the user-defined flags.

For example, suppose that you want to pass two arguments named data_dir and num_epochs to your application. When you execute a command, you need to set the --data_dir and --num_epochs flags at the end of the command and separate them from the command’s normal flags with --.
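Inside the application, these user-defined flags are parsed the same way as the built-in ones. The following sketch adds the two hypothetical flags from the example above and simulates the values the ML Engine would forward (the gs:// path and epoch count are placeholders):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--data_dir', help='Location of training data')
parser.add_argument('--num_epochs', type=int, default=1,
                    help='Number of training epochs')

# Simulate the user-defined flags that follow the '--' separator
# in the job-submission command.
args = parser.parse_args(['--data_dir', 'gs://example-bucket/data',
                          '--num_epochs', '5'])
print(args.data_dir, args.num_epochs)
```

Because num_epochs is declared with type=int, argparse converts the string '5' to the integer 5 before your code sees it.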