Job types

[1]:
from aapi import *

JobCommand

AutomationAPI Documentation

Job that executes an Operating System command

Required arguments:

  • object_name : The name of the job

  • command: The command to execute

[2]:
job = JobCommand('JobCommandSample', command='echo Hello')
[3]:
job = JobCommand('JobCommandSample',
                 command='mycommand',
                 run_as='user',
                 host='myhost.com',
                 pre_command='echo Precommand',
                 post_command='echo Finished')

JobScript

AutomationAPI Documentation

Job that executes a script

Required arguments:

  • object_name : The name of the job

  • file_name : The name of the script

  • file_path : The path of the script

Optional arguments:

  • arguments: An array of strings that are passed as argument to the script

[4]:
job = JobScript('JobScriptSample', file_name='task.sh', file_path='/home/scripts')
[5]:
job = JobScript('JobScriptSample',
                file_name='task.sh',
                file_path='%%path',
                run_as='user',
                arguments=['arg1', 'arg2'],
                variables=[{'path': '/home/scripts'}])

JobEmbeddedScript

AutomationAPI Documentation

Job that executes a script written in the job definition itself. Note that control characters need to be escaped (see the note after the example below).

Required arguments:

  • object_name : The name of the job

  • script : The written script. (note: all control characters need to be escaped!)

  • file_name : The name of the script file.

[6]:
job = JobEmbeddedScript('JobEmbeddedScriptSample',
                        file_name='filename.sh',
                        script=r'#!/bin/bash\necho "Hello"\necho "Bye"')
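
Because script is a raw string (r''), each \n reaches Control-M as a literal backslash-n, which satisfies the escaping requirement. An equivalent sketch without the raw-string prefix, escaping the backslashes explicitly:

job = JobEmbeddedScript('JobEmbeddedScriptSample',
                        file_name='filename.sh',
                        script='#!/bin/bash\\necho "Hello"\\necho "Bye"')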

JobFileTransfer

AutomationAPI Documentation

Job that executes a list of file transfers

Required arguments:

  • object_name: The name of the job

  • file_transfers: A list of FileTransfer objects defining the transfers to be executed by the job

In addition, a connection profile is required to run the job. For a JobFileTransfer, you can either define a dual-endpoint connection profile (connection_profile_dual_endpoint) or two separate connection profiles for the source and the destination (connection_profile_src and connection_profile_dest); a dual-endpoint sketch follows the first example below.

Optional arguments:

  • number_of_retries: The number of connection attempts after a connection failure, passed as a string: “0” to “99”, or “Default” to inherit the site default (5 attempts). For example, to allow 10 attempts, pass “10”.

  • s3_bucket_name: For file transfers between a local filesystem and an Amazon S3 or S3-compatible storage service: The name of the S3 bucket

  • s3_bucket_name_src: For file transfers between two Amazon S3 or S3-compatible storage services: The name of the S3 bucket at the source

  • s3_bucket_name_dest: For file transfers between two Amazon S3 or S3-compatible storage services: The name of the S3 bucket at the destination (see the S3-to-S3 sketch after the examples below)

[7]:
job = JobFileTransfer('JobFileTransferSample',
                      connection_profile_src='CP1', connection_profile_dest='CP2',
                      number_of_retries='7',
                      file_transfers=[
                          FileTransfer(src='/home/ctm/file1', dest='/home/ctm/file2',
                                       transfer_type=FileTransfer.TransferType.Binary,
                                       transfer_option=FileTransfer.TransferOption.SrcToDest),

                          FileTransfer(src='/home/ctm/file3', dest='/home/ctm/file4',
                                       transfer_type=FileTransfer.TransferType.Binary,
                                       transfer_option=FileTransfer.TransferOption.SrcToDest),
                      ])
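
A minimal sketch of the dual-endpoint alternative mentioned above, assuming a dual-endpoint connection profile named 'CPDUAL' has been defined (the profile name is illustrative):

job = JobFileTransfer('JobFileTransferDualEndpoint',
                      connection_profile_dual_endpoint='CPDUAL',
                      file_transfers=[
                          FileTransfer(src='/home/ctm/file1', dest='/home/ctm/file2')
                      ])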
[8]:
# An example of a file transfer from an S3 storage service to a local filesystem:
job = JobFileTransfer('TransferS3ToLocal',
                      connection_profile_src='amazonCP', connection_profile_dest='localcp',
                      s3_bucket_name='bucket')

# Note that file_transfers is by default initialized as an empty list
job.file_transfers.append(FileTransfer(src='folder/file1', dest='/home/file1'))
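
For transfers between two S3 storage services, s3_bucket_name_src and s3_bucket_name_dest name the bucket at each end. A minimal sketch, assuming connection profiles 'amazonCP1' and 'amazonCP2' exist; the bucket names are illustrative:

job = JobFileTransfer('TransferS3ToS3',
                      connection_profile_src='amazonCP1', connection_profile_dest='amazonCP2',
                      s3_bucket_name_src='bucket1',
                      s3_bucket_name_dest='bucket2',
                      file_transfers=[FileTransfer(src='folder/file1', dest='folder/file1')])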

JobFileWatcherCreate

AutomationAPI Documentation

Job that detects the successful completion of a file transfer activity that creates a file

[9]:
job = JobFileWatcherCreate('JobFileWatcherCreateSample',
                           path='C:\\path*.txt',
                           search_interval='45',
                           time_limit='22',
                           start_time='201705041535',
                           stop_time='201805041535',
                           minimum_size='10B',
                           wildcard=False,
                           minimal_age='1Y',
                           maximal_age='1D2H4MIN'
                           )

JobFileWatcherDelete

AutomationAPI Documentation

Job that detects the successful completion of a file transfer activity that deletes a file

[10]:
job = JobFileWatcherDelete('JobFileWatcherDeleteSample',
                           path='C:\\path*.txt',
                           search_interval='45',
                           time_limit='22',
                           start_time='201705041535',
                           stop_time='201805041535',
                           )

JobDatabaseEmbeddedQuery

AutomationAPI Documentation

Job that runs an embedded query

Required arguments:

  • object_name : The name of the job

  • query : The embedded SQL query to run. The query can contain AutoEdit variables; at job run time, these variables are replaced by the values specified in the variables parameter. For long queries, you can add delimiters using \n (new line) and \t (tab).

Optional arguments:

  • autocommit: Commits statements to the database that complete successfully

  • output_execution_log: Shows the execution log in the job output

  • output_sql_output: Shows the SQL sysout in the job output

  • sql_output_format: Defines the output format as either Text, XML, CSV, or HTML

[11]:
job = JobDatabaseEmbeddedQuery(
    'JobEmbeddedQuerySample',
    query='SELECT %%firstParamName AS VAR1 \\n FROM DUMMY \\n ORDER BY \\t VAR1 DESC',
    connection_profile='CPDB',
    variables=[{'firstParamName': 'value'}],
    autocommit='N',
    output_execution_log='y',
    output_sql_output='y',
    sql_output_format='XML'
)

JobDatabaseSQLScript

AutomationAPI Documentation

Job that runs a SQL script from a file system

Required arguments:

  • object_name : The name of the job

  • sql_script : The path of the script

Optional arguments:

  • parameters: An array of dictionaries of name/value pairs. Every occurrence of a name in the SQL script is replaced by its paired value.

[12]:
job = JobDatabaseSQLScript('JobDatabaseSQLScriptSample',
                           connection_profile='CPDB',
                           sql_script='/home/script.sql',
                           parameters=[{'param1': 'val1'}, {'param2': 'val2'}])

JobDatabaseStoredProcedure

AutomationAPI Documentation

Job that runs a program that is stored on the database

Required Arguments:

  • object_name : The name of the job

  • stored_procedure : The name of the stored procedure

  • schema : The database schema where the stored procedure resides

Optional arguments:

  • parameters: A comma-separated list of values and variables for all parameters in the procedure, in the order of their appearance in the procedure (a sketch follows the example below)

  • return_value: A variable for the Return parameter (if the procedure contains such a parameter)

  • package: (Oracle only) Name of a package in the database where the stored procedure resides. The default is “*”, that is, any package in the database.

[13]:
job = JobDatabaseStoredProcedure('JobDatabaseStoredProcedureSample',
                                 connection_profile='CPDB',
                                 stored_procedure='procedure',
                                 schema='public',
                                 return_value='RV')
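
A sketch of passing procedure parameters, assuming parameters accepts one string per procedure parameter, in order of appearance; the values ('val1' and the %%var2 variable) are illustrative only:

job = JobDatabaseStoredProcedure('JobDatabaseStoredProcedureSample',
                                 connection_profile='CPDB',
                                 stored_procedure='procedure',
                                 schema='public',
                                 parameters=['val1', '%%var2'],
                                 return_value='RV')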

JobDatabaseMSSQLAgentJob

AutomationAPI Documentation

Job that manages a SQL Server-defined job

Note : Only available with Automation API >= 9.0.19.210

Required arguments:

  • object_name : The name of the job

  • job_name : The name of the job defined in the SQL server

Optional arguments:

  • category: The category of the job, as defined in the SQL server

[14]:
job = JobDatabaseMSSQLAgentJob('JobDatabaseMSSQLAgentJobSample',
                               job_name='get_version',
                               connection_profile='CPDB',
                               category='Data Collector')

JobDatabaseMSSQL_SSIS

AutomationAPI Documentation

Job that executes SQL Server Integration Services (SSIS) packages

Note : Only available with Automation API >= 9.0.19.220

Required arguments:

  • object_name : The name of the job

  • package_source : The source of the SSIS package

  • package_name : The name of the SSIS package

Optional arguments:

  • catalog_env: The name of the catalog environment in which to run the SSIS package

  • config_files: Names of configuration files that contain specific data that you want to apply to the SSIS package

  • properties: Pairs of names and values for properties defined in the SSIS package. Each property name is replaced by its defined value during SSIS package execution.

[15]:
job = JobDatabaseMSSQL_SSIS('JobDatabaseMSSQL_SSISSample',
                            connection_profile='CPDB',
                            package_source=JobDatabaseMSSQL_SSIS.PackageSource.SSIS_Catalog,
                            package_name='\\Data Collector\\SqlTraceCollect',
                            catalog_env='ENV_NAME',
                            config_files=[
                                'C:\\Users\\dbauser\\Desktop\\test.dtsConfig',
                                'C:\\Users\\dbauser\\Desktop\\test2.dtsConfig'
                            ],
                            properties=[
                                {'PropertyName': 'PropertyValue'},
                                {'PropertyName2': 'PropertyValue2'}
                            ])

JobHadoopSparkPython

AutomationAPI Documentation

Job that runs a Spark Python program

Required arguments:

  • object_name : The name of the job

  • spark_script : The name of the Spark script

Optional arguments:

  • arguments: An array of arguments to pass to the Spark program

  • spark_options: The options to be passed to the script

[16]:
job = JobHadoopSparkPython('JobHadoopSparkPythonSample',
                           connection_profile='CP',
                           spark_script='/home/process.py')

[17]:
job = JobHadoopSparkPython('JobHadoopSparkPythonSample',
                           connection_profile='CP',
                           spark_script='/home/process.py',
                           arguments=[
                               '1000',
                               '120'
                           ],
                           spark_options=[{'--master': 'yarn'},
                                          {'--num': '-executors 50'}]
                           )

JobHadoopSparkScalaJava

AutomationAPI Documentation

[18]:
job = JobHadoopSparkScalaJava('JobHadoopSparkScalaJavaSample',
                              connection_profile='CP',
                              program_jar='/home/user/ScalaProgram.jar',
                              main_class='com.mycomp.sparkScalaProgramName.mainClassName'
                              )

JobHadoopPig

AutomationAPI Documentation

[19]:
job = JobHadoopPig('JobHadoopPigSample',
                   connection_profile='CP',
                   pig_script='/home/user/script.pg'
                   )

JobHadoopSqoop

AutomationAPI Documentation

[20]:
job = JobHadoopSqoop('JobHadoopSqoopSample',
                     connection_profile='CP',
                     sqoop_command='import --table foo --target-dir /dest_dir',
                     sqoop_options=[
                         {"--warehouse-dir": "/shared"},
                         {"--default-character-set": "latin1"}
                     ]
                     )

JobHadoopHive

AutomationAPI Documentation

[21]:
job = JobHadoopHive('JobHadoopHiveSample',
                    connection_profile='CP',
                    hive_script='/home/user1/hive.script',
                    parameters=[{'amount': '1000'}, {'topic': 'food'}],
                    hive_options={'hive.root.logger': 'INFO, console'})

JobHadoopDistCp

AutomationAPI Documentation

[22]:
job = JobHadoopDistCp('JobHadoopDistCpSample',
                      connection_profile='CP',
                      target_path='hdfs://nns2:8020/foo/bar',
                      source_paths=['hdfs://nns1:8020/foo/a'],
                      distcp_options=[{'-m': '3'}, {'-filelimit': '100'}]
                      )

JobHadoopHDFSCommands

AutomationAPI Documentation

[23]:
job = JobHadoopHDFSCommands('JobHadoopHDFSCommandsSample',
                            connection_profile='CP',
                            commands=[{"get": "hdfs://nn.example.com/user/hadoop/file localfile"},
                                      {"rm": "hdfs://nn.example.com/file /user/hadoop/emptydir"}]
                            )

JobHadoopHDFSFileWatcher

AutomationAPI Documentation

[24]:
job = JobHadoopHDFSFileWatcher('JobHadoopHDFSFileWatcherSample',
                               connection_profile='CP',
                               hdfs_file_path='/input/filename',
                               min_detected_size='1',
                               max_wait_time='2'
                               )

JobHadoopOozie

AutomationAPI Documentation

[25]:
job = JobHadoopOozie('JobHadoopOozieSample',
                     connection_profile='CP',
                     job_properties_file='/home/user/job.properties',
                     oozie_options=[{"inputDir": "/usr/tucu/inputdir"},
                                    {"outputDir": "/usr/tucu/outputdir"}]
                     )

JobHadoopMapReduce

AutomationAPI Documentation

[26]:
job = JobHadoopMapReduce('JobHadoopMapReduceSample',
                         connection_profile='CP',
                         program_jar='/home/user1/hadoop-jobs/hadoop-mapreduce-examples.jar',
                         main_class='com.mycomp.mainClassName',
                         arguments=[
                             'arg1',
                             'arg2'
                         ])

JobHadoopMapredStreaming

AutomationAPI Documentation

[27]:
job = JobHadoopMapredStreaming('JobHadoopMapredStreamingSample',
                               connection_profile='CP',
                               input_path='/user/input/',
                               output_path='/tmp/output',
                               mapper_command='mapper.py',
                               reducer_command='reducer.py',
                               general_options=[
                                   {"-D": "fs.permissions.umask-mode=000"},
                                   {"-files": "/home/user/hadoop-streaming/mapper.py,/home/user/hadoop-streaming/reducer.py"}
                               ]
                               )

JobHadoopTajoInputFile

AutomationAPI Documentation

[28]:
job = JobHadoopTajoInputFile('JobHadoopTajoInputFileSample',
                             connection_profile='CP',
                             full_file_path='/home/user/tajo_command.sh',
                             tajo_options=[
                                 {"amount": "1000"},
                                 {"volume": "120"}
                             ]
                             )

JobHadoopTajoQuery

AutomationAPI Documentation

[29]:
job = JobHadoopTajoQuery('JobHadoopTajoQuerySample',
                         connection_profile='CP',
                         open_query='SELECT %%firstParamName AS VAR1 \\n FROM DUMMY \\n ORDER BY \\t VAR1 DESC')

JobPeopleSoft

AutomationAPI Documentation

Job that manages PeopleSoft jobs and processes

Required arguments:

  • object_name : The name of the job

Optional arguments:

  • user : A PeopleSoft user ID that exists in the PeopleSoft Environment

  • control_id : Run Control ID for access to run controls at runtime

  • process_type: A PeopleSoft process type that the user is authorized to perform

  • process_name: The name of the PeopleSoft process to run

  • append_to_output: Whether to include PeopleSoft job output in the Control-M job output, either true or false

  • bind_variables: Values of up to 20 USERDEF variables for sharing data between Control-M and the PeopleSoft job or process

[30]:
job = JobPeopleSoft('JobPeopleSoftSample',
                    connection_profile='CP_PS',
                    user='PS_User',
                    control_id='controlid',
                    server_name='server',
                    process_type='ptype',
                    process_name='pname',
                    append_to_output=False,
                    bind_variables=['val1', 'val2'])

JobInformatica

[AutomationAPI Documentation](https://documents.bmc.com/supportu/API/Monthly/en-US/Documentation/API_CodeRef_JobTypes_DataIntegration.htm#Job:Informatica)

Job that manages Informatica workflows

Required arguments:

  • repository_folder : The Repository folder that contains the workflow that you want to run

  • workflow : The workflow that you want to run in Control-M for Informatica

Optional arguments:

  • instance_name: The specific instance of the workflow that you want to run

  • os_profile: The operating system profile in Informatica

  • workflow_execution_mode: The mode for executing the workflow

  • workflow_restart_mode: The operation to execute when the workflow is in a suspended status

  • restart_from_task: The task from which to start running the workflow. This parameter is required only if you set workflow_execution_mode to StartFromTask.

  • run_single_task: The workflow task that you want to run. This parameter is required only if you set workflow_execution_mode to RunSingleTask.

  • workflow_parameters_file: The path and name of the workflow parameters file. This enables you to use the same workflow for different actions.

[31]:
job = JobInformatica('JobInformaticaSample',
                     connection_profile='CP_INF',
                     repository_folder='POC',
                     workflow='WF_TEST',
                     instance_name='MyInstance',
                     os_profile='OSPROFILE',
                     workflow_execution_mode=JobInformatica.WorkflowExecutionMode.RunSingleTask,
                     run_single_task='s_MapTest_Success',
                     workflow_restart_mode=JobInformatica.WorkflowRestartMode.ForceRestartFromSpecificTask,
                     restart_from_task='s_MapTest_Success',
                     workflow_parameters_file='/opt/wf1.prop')

JobAWSLambda

AutomationAPI Documentation

Job that executes an AWS Lambda service on an AWS server

Required arguments:

  • object_name : The name of the job

  • function_name : The Lambda function to execute

Optional arguments:

  • version: The Lambda function version. The default is $LATEST (the latest version).

  • payload: The Lambda function payload, in JSON string format

  • append_log: Whether to add the log to the job’s output, either true (the default) or false

[32]:
job = JobAWSLambda('JobAWSLambdaSample',
                   connection_profile='CPAWS',
                   function_name='fname',
                   version='1',
                   payload='{"myVar":"value1", "othervar":"value2"}',
                   append_log=True
                   )

JobAWSStepFunction

AutomationAPI Documentation

Job that executes an AWS Step Function service on an AWS server

Required arguments:

  • object_name : The name of the job

  • state_machine : The State Machine to use

  • execution_name: A name for the execution

Optional arguments:

  • input: The Step Function input in JSON string format

  • append_log: Whether to add the log to the job’s output, either true (the default) or false

[33]:
job = JobAWSStepFunction('JobAWSStepFunctionSample',
                         connection_profile='CPAWS',
                         state_machine='smach1',
                         execution_name='exec1',
                         input='{"var":"value", "othervar":"val2"}',
                         append_log=False)

JobAWSBatch

AutomationAPI Documentation

Job that executes an AWS Batch service on an AWS server

Required arguments:

  • object_name : The name of the job

  • job_name : The name of the batch job

  • job_definition: The job definition to use

  • job_definition_revision: The job definition revision

  • job_queue: The queue to which the job is submitted

Optional arguments:

  • aws_job_type: The type of job, either Array or Single

  • array_size: (For a job of type Array) The size of the array (that is, the number of items in the array)

  • depends_on: Parameters that determine a job dependency

  • command: A command to send to the container that overrides the default command from the Docker image or the job definition

  • memory: The number of megabytes of memory reserved for the job

  • v_cpus: The number of vCPUs to reserve for the container

  • job_attempts: The number of retry attempts passed as string

  • execution_timeout: The timeout duration in seconds

  • append_log: Whether to add the log to the job’s output, either true (the default) or false

[34]:
job = JobAWSBatch('JobAWSBatchSample',
                  connection_profile='CPAWS',
                  job_name='batchjob',
                  job_definition='jobdef',
                  job_definition_revision='3',
                  job_queue='queue1',
                  aws_job_type=JobAWSBatch.AwsJobType.Array,
                  array_size='100',
                  depends_on=JobAWSBatch.DependsOn(
                      dependency_type=JobAWSBatch.DependsOn.DependencyType.Standard,
                      job_depends_on='job5'),
                  command='ls',
                  memory='10',
                  v_cpus='2',
                  job_attempts='5',
                  execution_timeout='60',
                  append_log=False)

JobAzureFunction

AutomationAPI Documentation

Job that executes an Azure function service.

Required arguments:

  • object_name : The name of the job

  • function : The name of the Azure function to execute

  • function_app: The name of the Azure function app

Optional arguments:

  • parameters: Function parameters defined as pairs of name and value

  • append_log: Whether to add the log to the job’s output, either true (the default) or false

[35]:
job = JobAzureFunction('JobAzureFunctionSample',
                       connection_profile='CPAZURE',
                       function='AzureFunction',
                       function_app='funcapp',
                       append_log=False,
                       parameters=[
                           {"firstParamName": "firstParamValue"},
                           {"secondParamName": "secondParamValue"}
                       ])

JobAzureLogicApps

AutomationAPI Documentation

Job that executes an Azure Logic App service

Required arguments:

  • object_name : The name of the job

  • logic_app_name : The name of the Azure Logic App

Optional arguments:

  • request_body: The JSON for the expected payload

  • append_log: Whether to add the log to the job’s output, either true (the default) or false

[36]:
job = JobAzureLogicApps('JobAzureLogicAppsSample',
                        connection_profile='CPAZURE',
                        logic_app_name='workflow_app',
                        request_body='{"name":"value"}'
                        )

JobAzureBatchAccount

AutomationAPI Documentation

Job that executes an Azure Batch Accounts service

Required arguments:

  • object_name : The name of the job

  • job_id : The ID of the batch job

  • command_line : A command line that the batch job runs

Optional arguments:

  • wallclock: Maximum limit for the job’s run time

  • max_tries: The maximum number of retry attempts for the batch job. If you do not include this parameter, the default is none (no retries).

  • retention: File retention period for the batch job. If you do not include this parameter, the default is an unlimited retention period.

  • append_log: Whether to add the log to the job’s output, either true (the default) or false

[37]:
job = JobAzureBatchAccount('JobAzureBatchAccountSample',
                           connection_profile='CPAZURE',
                           job_id='azurejob',
                           command_line='echo "Hello"',
                           append_log=True,
                           wallclock=JobAzureBatchAccount.Wallclock(
                               time='70', unit=JobAzureBatchAccount.Wallclock.Unit.Minutes),
                           max_tries=JobAzureBatchAccount.MaxTries(
                               count='6', option=JobAzureBatchAccount.MaxTries.Option.Custom),
                           retention=JobAzureBatchAccount.Retention(
                               time='1', unit=JobAzureBatchAccount.Retention.Unit.Hours)
                           )

JobWebServices

AutomationAPI Documentation

Note : Only available with Automation API >= 9.0.20.220

Job that executes standard web services, servlets, or RESTful web services. To manage Web Services jobs, you must have the Control-M for Web Services, Java, and Messaging (Control-M for WJM) plug-in installed in your Control-M environment

[38]:
job = JobWebServices('JobWebServicesSample',
                     connection_profile='CP_WS',
                     location='http://www.dneonline.com/calculator.asmx?WSDL',
                     soap_header_file='c:\\myheader.txt',
                     service='Calculator(Port:CalculatorSoap)',
                     operation='Add',
                     request_type=JobWebServices.RequestType.Parameter,
                     override_url_endpoint='http://myoverridehost.com',
                     override_content_type='*/*',
                     http_connection_timeout='2345',
                     preemptive_http_authentication='abc@bmc.com',
                     include_title_in_output=True,
                     exclude_job_output=False,
                     output_parameters=[{
                         "Element": "AddResponse.AddResult",
                         "HttpCode": "*",
                         "Destination": "testResultAdd",
                         "Type": "string"
                     }],
                     input_parameters=[
                         {
                             "Name": "intA",
                             "Value": "97",
                             "Type": "string"
                         },
                         {
                             "Name": "intB",
                             "Value": "345",
                             "Type": "string"
                         },
                         {
                             "Name": "accept-encoding",
                             "Value": "*/*",
                             "Type": "header"
                         }
                     ],
                     soap_request=[
                         '''<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tem="http://tempuri.org/">
                                <soapenv:Header/>
                                <soapenv:Body>
                                    <tem:Add>
                                        <tem:intA>98978</tem:intA>
                                        <tem:intB>75675</tem:intB>
                                    </tem:Add>
                                </soapenv:Body>
                            </soapenv:Envelope>'''
                     ],
                     input_file='/home/usr/soap.xml')

JobSLAManagement

AutomationAPI Documentation

[39]:
job = JobSLAManagement('JobSLAManagementSample',
                       service_name='SLA-service',
                       service_priority='1',
                       job_runs_deviations_tolerance='1',
                       complete_in=JobSLAManagement.CompleteIn(time='00:01'),
                       complete_by=JobSLAManagement.CompleteBy(
                           time='21:40', days='3'),
                       average_run_time_tolerance=JobSLAManagement.AverageRunTimeTolerance(
                           average_run_time='10',
                           units=JobSLAManagement.AverageRunTimeTolerance.Units.Minutes
                       ))

JobZOS

AutomationAPI Documentation

[40]:
job = JobZOSMember('JobZOSMemberSample',
                   member_library='IOAQ.AAPI#AU.JCL',
                   archiving=JobZOS.Archiving(archive_sys_data=True),
                   must_end=JobZOS.MustEnd(hours='1', minutes='15', days=1),
                   run_on_all_agents_in_group=True
                  )

[41]:
job = JobZOSInStreamJCL('JobZOSInStreamJCLSample',
                        member_library='IOAQ.AAPI#AU.JCL',
                        jcl="//AUTEST0 JOB ,ASM,CLASS=A,REGION=0M\\n// JCLLIB ORDER=IOAQ.AAPI#AU.PROCLIB\\n// INCLUDE MEMBER=IOASET\\n//S1 EXEC IOATEST,PARM='TERM=C0000'",
                        archiving=JobZOS.Archiving(archive_sys_data=True),
                        run_on_all_agents_in_group=True,
                        created_by='workbench',
                        run_as='workbench'
                        )