How to Configure Applications to Receive Machine Learning Arguments
When the machine learning (ML) Engine executes your application, it passes arguments that provide information about the operating environment. The table lists the possible arguments.
Machine Learning Arguments
| Argument | Description |
| --- | --- |
| | Location of the application’s data |
| | Batch size for training |
| | Number of steps for each training epoch |
| | Batch size for evaluation |
| | Number of steps to run evaluation at each checkpoint |
| | Time to wait before first evaluation |
| | Minimum number of training steps between evaluations |
The `--job-dir` argument is particularly important because it tells the application where it should store its output files. The following code demonstrates how you can access this argument with an `argparse.ArgumentParser`:

```python
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--job-dir',
        help='Checkpoint/output location',
        required=True
    )
    args = parser.parse_args()
```
In addition to the built-in arguments, you can provide arguments of your own. When you submit a job, the ML Engine will pass your arguments to the application. But keep two points in mind:
- User-defined flags must follow all of the built-in flags.
- Two dashes (`--`) must separate the built-in flags from the user-defined flags.
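From the application's side, a user-defined flag is handled exactly like a built-in one: you simply declare both in the same parser. The following sketch uses the `num_epochs` flag from this section's example; the bucket path and default value are illustrative only.

```python
import argparse

# The parser declares the built-in --job-dir flag and a user-defined
# --num_epochs flag in exactly the same way.
parser = argparse.ArgumentParser()
parser.add_argument('--job-dir', help='Checkpoint/output location',
                    required=True)
parser.add_argument('--num_epochs', type=int, default=1,
                    help='User-defined flag forwarded by the ML Engine')

# Simulate the argument list the application would receive.
# The bucket path is a placeholder.
args = parser.parse_args(['--job-dir', 'gs://example-bucket/output',
                          '--num_epochs', '5'])
print(args.job_dir, args.num_epochs)  # gs://example-bucket/output 5
```

Note that `argparse` converts the dash in `--job-dir` to an underscore, so the value is read back as `args.job_dir`.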
For example, suppose that you want to pass an argument named `num_epochs` to your application. When you execute a command, you need to set the `--num_epochs` flag at the end of the command and separate it from the command’s normal flags with `--`.
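Putting both rules together, a job-submission command might look like the following sketch. The job name, bucket, and package paths are placeholders, and the exact submit command depends on your version of the gcloud CLI.

```shell
# Sketch only -- names and paths are placeholders.
# Flags before the bare "--" belong to the submit command itself;
# everything after it is forwarded verbatim to the application.
gcloud ml-engine jobs submit training my_job \
    --job-dir gs://my-bucket/output \
    --module-name trainer.task \
    --package-path trainer/ \
    -- \
    --num_epochs=10
```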