Job Object
This section describes the purpose of the Job object and its properties. Below you will find a general overview, followed by a detailed description of Job Properties as they appear in the various Job categories (tabs).
A Job is a process that runs on an Execution Agent machine in a reliable and secure manner. An ActiveBatch Job can be one of three types: Process, Script, or Jobs Library. Usually a Job executes unattended, so the desktop itself is not occupied with the task of running the Job. In fact, with ActiveBatch, no one needs to be logged on to the desktop when Jobs run.
A Job definition (sometimes referred to as a Job template) stores all the properties of the process you wish to run through ActiveBatch. When a Job definition is triggered for execution, a Job Instance is cloned (created) from the Job definition. The instance is sent to an Execution Agent to run. In summary, a Job definition defines the Job, and the Job instance is what is executed. A Job definition can spawn many Job Instances over its lifetime.
General Job Object Information
Batch Job Security Requirements
Running any Job in ActiveBatch requires at least one Execution Agent. In addition, all Jobs in ActiveBatch run under a security context. Typically that context will be the submitting user’s username and password. All credentials supplied are authenticated by the system.
The Execution Agent ("Agent") is the component that runs ActiveBatch Jobs. It must be installed and configured on a system that is specified in the Machine property of an Execution Queue object. The Job Scheduler uses the Queue's Machine property to determine which system to connect to in order to dispatch Jobs. Most ActiveBatch environments have an Agent installed on more than one system. Which Agent the Job is sent to is determined by the Submission Queue assigned (associated) to each Job, a required Job property.
Since ActiveBatch supports different operating systems and platforms, authentication differs by platform; however, a username and password is common to all systems.
Next, for Windows systems, an additional requirement must be met. The user who wishes to execute a batch Job on an Agent machine must be granted the “Logon as a Batch Job” right. By default, ActiveBatch grants this right automatically. The auto-grant is controlled by the following Execution Agent Windows Registry value: HKEY_LOCAL_MACHINE\SOFTWARE\ASCI\ActiveBatch\VXX\ExecAgent\AutoRight, which, as stated, is enabled by default. See the ActiveBatch Installation and Administrator's Guide for more details. This right must be granted locally on the Execution Agent machine the Job will run on. Most users are members of a domain, so their accounts are typically domain accounts. Please note that the local aspect of the Logon as a Batch Job right is important: a typical mistake is to grant the right at the domain level (if you choose to manage the granting of this right on your own). To reiterate, the right must be granted at the local machine level, specifically on the Execution Agent machine. The best approach is to allow the Execution Agent to automatically grant the Logon as a Batch Job right (the default behavior) to users you have already authorized to execute batch Jobs.
If you decide to manually grant the Logon as a Batch Job right, then you may want to consider creating an ActiveBatch local group. You can then assign the Logon as a Batch Job right to that group and add members to the group. The image below uses the Local Security Policy applet (found in the Administrative Tools group) and illustrates how this would be done. In the example below, the group ActiveBatch Users has been created and assigned the Logon as a Batch Job right on the local computer. You can now add domain members/groups to this local group to complete the required right assignment.
![]()
Next, Microsoft Windows Server systems support Kerberos sign-on through a mechanism called “Protocol Transition”. This allows a user to run a Job under their logon security credentials without having to specify a password. The advantage of this approach is that with no password to specify you don’t have to worry about password expiration, storage, etc.
How to Create or Edit a Job
To create a Job, right-click on the desired container (Scheduler root, existing Folder or Plan) in the Object Navigation Pane, select New, then Job. When you’ve completed the Job property settings, you must click the Save or the Save and Close button to save the Job. Click the X on the tab of the New Job if you wish to cancel the creation of the Job. When you save the Job, it will instantly appear in the Object Navigation pane (if auto refresh is enabled). To modify an existing Job, right-click on the Job in the Object Navigation pane, then select Properties. You will see something similar to what is depicted in the image below. The General tab (top left) is the active tab by default.
Note: You can also edit an existing Job by double-clicking on a selected Job in the Object Navigation pane. When accessed this way, the Job type property sheet (Process, Script or Jobs Library) is the active tab, not the General tab.
![]()
Minimum Job properties required to configure a basic Job
Below you will find the minimum Job properties you need to set to get a basic Job up and running.
General tab - A Name and Label must be entered.
Job (type) tab - The name of this tab varies (it is directly below the topmost General tab) depending on the Job type configured. The tab name will be either Process, Script or Jobs Library. The default Job type is Process. Within the Job type tab, you must:
Specify a Submission Queue - This determines where the Job will run. At a minimum, you must create at least one Execution Queue object to get a basic Job up and running. There is another Queue type (Generic Queue), but to keep it simple, start with an Execution Queue.
Specify a User Account - This determines the user credentials that the Job will run under on the target Execution Agent system. At a minimum, you must create at least one User Account object with a Credential Type of Username/Password. This account must authenticate on the Agent system and have the appropriate operating system permissions to run the payload of the Job. As a general rule, if you can run the process outside of ActiveBatch with the credentials specified on the Job, you should be able to run it with the same credentials using ActiveBatch.
Configure the Job's payload. If it is a Script type Job, enter the script contents. If it is a Process type Job, enter the name of the file to be executed. If it is a Jobs Library Job type, drag and drop the desired step(s) to the Jobs Library Step Editor and configure the step properties accordingly.
To manually trigger your Job, right-click on it in the Object Navigation pane, then select Trigger. By default, the Instances Pane will display an instance of the running Job. Check there to see if your Job ran successfully.
Job Types
ActiveBatch supports several Job types to choose from when defining a Job. While all Jobs have many properties in common, each specific Job type also has properties that are unique to that type of Job. The Job types below are presented in Best Practice order.
Jobs Library - The Jobs Library Job type allows you to leverage ActiveBatch logic to quickly implement Jobs based on an extensible list of Job features without having to write a script. For example, you can zip up a file, execute a SQL Server SSIS package, or start an SFTP file transfer, among many other tasks. These tasks can also be combined into a mini-workflow within a single Job through a series of Job steps.
Process - The Process Job type is the typical Job used under the ActiveBatch system. A Process Job consists of either a script file or an executable image that is to be run on an execution machine.
Script - The Script Job type is the same as the Process type Job except that the script is embedded as part of the Job; there is no external file specification. The embedded script Job is perfect when you either have a short script to write and don’t want the trouble of placing it on a system, or you’re not sure which systems may need the script. Since the script is transmitted to the machine on execution, you never have to worry about whether the script will be found. Another benefit of the Script Job type is that, since the script contents are stored in the ActiveBatch backend database, you can revert the Job to a prior revision if a change resulted in an error.
Note: Effective with ActiveBatch V10, the following Job types have been deprecated: E-Mail, FileSystem, and FTP/SFTP/FTPS, since their equivalents are part of the Jobs Library. If you were an ActiveBatch customer prior to V10 and used those Job types, the only legacy Job type you may still see is FileSystem. Starting with V12, E-Mail and FTP/SFTP/FTPS Job types are automatically converted to Jobs Library Job types. Existing FileSystem Jobs are not converted and are still supported, but new FileSystem Jobs (and other legacy Job types) cannot be created. Use the Jobs Library File System category of Job steps for your file system Jobs.
General
The image below depicts the General category of an existing Job.
![]()
Name: This mandatory property represents the name of the object. The name is limited to 128 characters. The object’s name should be unique to avoid confusion. We recommend that it also be somewhat descriptive so it’s easy to find. The name is used (by default) to identify the object in the Object Navigation pane and other places in the UI. This can be changed to the label, if desired. See "Display Mode" in the General Settings section.
Label: Every object must be uniquely labeled within the scope of the namespace. The label is limited to sixty-four (64) characters. The label is typically the same value as the name (it is auto-filled to match the name you enter); however, uniqueness is always enforced for an object’s label. The label is recorded in the ActiveBatch namespace. The characters that may be used for the label property are restricted to alphanumeric (A-Z, a-z, 0-9), space, period (.), dash (-) and underscore (_). The label itself must begin with an alphabetic character. The label is typically used when scripting. All searches are case-insensitive. ActiveBatch does allow you to search for objects using either the label or the name properties.
ID: This is a unique read-only number that can be used to retrieve the object. It is assigned by the system when a new object is saved.
Full Path: This read-only property provides the full namespace specification of the object. It consists of the container(s) the object has been placed in, with the object’s label appended to the end. For example, the fullpath: /IT Jobs/Nightly Run/<object label>, is such that IT Jobs is a root-level Folder, Nightly Run is a Plan, followed by the label of the object you are creating.
Description: This free form property is provided so you can document and describe the object to others. The description is limited to 512 characters. Clicking on the pencil icon will pull up a mini text editor where you can more easily enter your description.
Documentation: This optional field is used to denote a reference to the Job in an operator’s runbook or other documentation concerning the running of the Job (to a maximum of 128 characters). Clicking on the pencil icon will cause a mini text editor to appear.
User Defined: This optional field can be set by the Job’s author as free-form text (to a maximum of 128 characters). If you set this field to a URL (for example, a hot link to runbook information for this Job), you can click the button on the right to launch your web browser with this field as a URL.
Category: This optional field is used to categorize the Job (to a maximum of 128 characters).
Group: This field is not used and is obsolete (it is maintained for backward compatibility purposes only). Plans are now used to associate related Jobs. The maximum group name length is 64 characters.
State: This read-only field displays the state of the Job. States include: Enabled, Disabled, Soft Disabled and Held.
Hide in Runbook, Gantt and Daily Activity List: This checkbox indicates whether you would like to hide this Job in the views mentioned. This can be useful when you have a Job that runs very often and would only clutter the other views.
Read Only: This checkbox, when enabled, means the Job’s properties cannot be changed. You must have “Modify” access permission on the Job object to set this feature. To clear the read-only attribute, uncheck the box.
Global Disable: This checkbox indicates whether the Job is globally enabled for reference use. If the checkbox is checked, the Job is globally disabled and all reference Jobs will also be disabled.
Job Properties
In the image below, the first few common Job properties are depicted; they are discussed below. But first, note that this tab's label varies depending on the Job type selected: the label will be Process, Script or Jobs Library.
![]()
The following properties are common to all Job Types: Submission Queue, User Account and Job Type.
Submission Queue: This field contains the Queue that this Job is being submitted to. Select the desired Queue from the dropdown list. If a new Queue is needed, click the New button. After doing so, you will be prompted for the location where the new Execution Queue object is to be created (you cannot create a Generic Queue with this feature). After doing so, the property sheets for the New Execution Queue will be tabbed in the Main view. Configure the Queue, then save it. The new Queue will be added to the location you selected (in the Object Navigation pane), and it will be associated to the Job you are creating. Once a Queue is set in this property, you have the ability to pull up the properties of the Queue by clicking on the ellipsis (...) button. The properties of the Queue will be tabbed in the Main view.
User Account: This property allows you to select a User Account to associate with this Job. The drop down arrow allows you to select from a list of accounts that you have the rights to use. Two smaller buttons to the right allow you to manage or create a User Account object. When you click the “New” button you will be prompted for the location where the User Account object is to be created. After doing so, the property sheets for the New User Account will be tabbed in the Main view. Configure the User Account, then save it. The new User Account will be added to the location you selected (in the Object Navigation pane), and will be associated to the Job you are creating. Once a User Account is set in this property, you have the ability to pull up the properties of the User Account by clicking on the ellipsis (...) button. The properties of the User Account will be tabbed in the Main view.
Job Type: This field is a drop-down listing the currently supported types: Process, Script and Jobs Library. The fields displayed are specific to the Job type you’ve selected. For example, the Success Code rule is applicable to some Job types but not all.
Process Job Properties
When ActiveBatch runs a Process Job, there are actually three possible steps that can execute: Pre, Main and Post. Only Main is required. Main is where you specify the File Name property, described below.
File Name: This mandatory field represents the main Job file (or step) that is to be executed. Unless the Copy Script to Execution Machine checkbox is enabled, the file specification must be accessible from the Execution Agent system the Job will run on. A common new-user mistake is to specify a local drive and directory on a client machine when a different execution machine will actually run the Job. You can specify a UNC path for the Job file specification. For frequent or recurring Jobs, the Job’s file should reside on the execution machine for best performance. Your security context will be used to access the file, so please make certain the file is accessible by you.
Copy script to execution machine: This checkbox controls whether the file specification should be copied to the Job Scheduler's backend database, and then ultimately to the Execution machine (it is copied to the Execution machine each time the Job is triggered). The file size must be less than the System policy CopyScriptMaxSize. Before enabling this option, consider:
Network traffic (copying the file to the Agent on each run).
Backend database size and performance (if you use this option frequently and/or the files you save are large).
If a copied file is updated, the ActiveBatch Job needs to be updated as well, to force a re-write of the latest file to the backend database. This is because ActiveBatch does not know when a file that was copied to its backend database has been updated - since the update takes place outside of ActiveBatch.
As a general Best Practice, use the Copy Script option sparingly. Place the files in a location accessible to the Execution Agent system.
Parameters: This optional field can be used to pass input parameter(s) to the Job. Separate multiple parameters with a space. If a single parameter naturally has a space in it, enclose the parameter in quotes so the system treats the value as one parameter (e.g. "Hello World"). All data specified is passed as a string. Supported in this property are hard-coded values, ActiveBatch variables (constant and active), environment variables, Date Arithmetic and additional date and time substitution options discussed below. You may perform environment variable substitution using the syntax %var% for Windows systems and $var for UNIX systems.
Input parameters can also be specified dynamically using the Current or Param qualifiers, using the SetVariable or SetVariables Jobs Library Job steps, or using the Advanced Trigger > Parameters option.
Date and Time substitution
Date and Time substitution can be used in the Parameters property only. You can specify one or more of the strings listed below - surrounded by pound signs (#). The strings are all date and time related - and are limited to the current date/time, returned in various formats.
Note: If your input parameter data uses the # sign as data, specifying two consecutive pound signs (##) means that one will be actually passed.
| String | Meaning |
|---|---|
| %a | abbreviated weekday name |
| %A | full weekday name |
| %b | abbreviated month name |
| %B | full month name |
| %c | date/time representation (by locale) |
| %d | day of month in decimal (01-31) |
| %H | hour in 24-hour format (00-23) |
| %I | hour in 12-hour format (01-12) |
| %j | day of year in decimal (001-366) |
| %m | month in decimal (01-12) |
| %M | minute in decimal (00-59) |
| %p | AM/PM indicator for 12-hour format (locale) |
| %S | seconds in decimal (00-59) |
| %U | week of year in decimal; week begins Sunday (00-52) |
| %w | weekday in decimal (0-6; Sunday is 0) |
| %W | week of year in decimal; week begins Monday (00-52) |
| %x | date representation for current locale |
| %X | time representation for current locale |
| %y | year without century (00-99) |
| %Y | year with century in decimal |
| %z, %Z | time-zone abbreviation or name |
| %% | percent sign |
For example, CUST_#%m%d%Y#.dat resolves to CUST_08302023.dat (when the current date is August 30th, 2023).
You may insert any formatting characters within the delimited string that you also want to appear. This substitution occurs on the Execution machine. The substitution offered above supports current date and time only. If you need to perform date calculations as well as substitution, you should use the Date Arithmetic Uses and Syntax feature.
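The substitution behavior described above can be sketched outside ActiveBatch with standard strftime formatting. This is an illustrative mock-up only (the function name `resolve_params` is hypothetical, and ActiveBatch's actual resolver runs on the Execution machine):

```python
import re
from datetime import datetime

def resolve_params(text, now=None):
    """Mimic the #...# date/time substitution: each delimited run is
    formatted with strftime; '##' passes a single literal '#' through."""
    now = now or datetime.now()
    text = text.replace("##", "\x00")            # protect the ## escape
    text = re.sub(r"#([^#]*)#",                  # resolve each #...# run
                  lambda m: now.strftime(m.group(1)), text)
    return text.replace("\x00", "#")             # restore literal # signs

print(resolve_params("CUST_#%m%d%Y#.dat", datetime(2023, 8, 30)))
# CUST_08302023.dat
```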
As a Date Arithmetic example, you could use the following syntax: <BOM/+Day=2>. BOM means beginning of month. Therefore, the current month is returned. Next, +Day=2 adds two days to the beginning of the current month, which is the final date value. The Date Arithmetic string can be used in the parameters property, or if you have several jobs sharing the same Date Arithmetic parameter, you could create a Date Arithmetic Active Variable and use the variable name in the parameters property - e.g. ${DateVarName}. The Job Scheduler performs Date Arithmetic variable substitution, therefore the variable is resolved before the Job is sent to the Execution Agent to run.
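The `<BOM/+Day=2>` example above can be reproduced with ordinary date math. The sketch below is only an equivalent calculation for illustration (`bom_plus_days` is a hypothetical helper, not ActiveBatch's own Date Arithmetic engine):

```python
from datetime import date, timedelta

def bom_plus_days(today, days=2):
    """Equivalent of <BOM/+Day=2>: beginning of the current month,
    plus the given number of days."""
    return today.replace(day=1) + timedelta(days=days)

print(bom_plus_days(date(2023, 8, 30)))  # 2023-08-03
```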
Completion Status Rule: This section concerns how a Job’s exit status is to be interpreted.
Success Code Rule: This field can contain keywords and numeric (decimal) exit codes all of which are interpreted as success. A range can be specified as “m-n” (for example, 0-5 means zero through 5). Several keywords are also supported: NTMsg, Odd, Even and LZero (negative). NTMsg indicates that an NT Message Code is passed back as an exit status. The upper two (2) bits indicate the severity. XLNT customers can use this type of success code rule. Odd and Even indicate status codes which are either odd or even numbers. OpenVMS uses an odd status as a success indication. Multiple codes can be specified by separating whole positive numbers and ranges with commas (for example: 0,5-10,20). This field offers a simple pull-down control with zero, NTMSG, LZERO, ODD and EVEN as quick selection choices.
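To make the rule grammar concrete, here is a minimal evaluator in Python. It is an illustrative sketch of the documented semantics, not ActiveBatch's implementation (the NTMsg keyword is omitted because its severity-bit interpretation is platform-specific):

```python
def code_is_success(exit_code, rule):
    """Evaluate a decimal exit code against a Success Code Rule such as
    '0,5-10,20' or the keywords ODD, EVEN, LZERO (negative)."""
    for part in rule.replace(" ", "").split(","):
        token = part.upper()
        if token == "ODD" and exit_code % 2 != 0:
            return True
        if token == "EVEN" and exit_code % 2 == 0:
            return True
        if token == "LZERO" and exit_code < 0:
            return True
        if "-" in part and not part.startswith("-"):   # range "m-n"
            lo, hi = part.split("-", 1)
            if lo.isdigit() and hi.isdigit() and int(lo) <= exit_code <= int(hi):
                return True
        elif part.lstrip("-").isdigit() and exit_code == int(part):
            return True
    return False

print(code_is_success(7, "0,5-10,20"))   # True (inside the 5-10 range)
print(code_is_success(3, "0,5-10,20"))   # False
```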
Note: Certain IT type Jobs which require a system reboot may want to consider the “Lost” status as success. The Lost status occurs when ActiveBatch loses communication with an Execution Agent and then, on reconnection, discovers that the Job has already finished without any possibility of determining how the Job actually fared. For a reboot this would be a known state. -1071120348 is the exit code representing Lost.
Use Search String: Sometimes a program doesn’t pass a reliable exit code. By “reliable” we mean that the exit code may be the same regardless of whether the program worked or not, and that the only indication of a program success (or failure) is some data that is written to the batch Job’s log file (or another file created by the process). If that is the case, ActiveBatch supports the scanning of a log file or user-specified file for a string that indicates either success or failure of the Job. To enable this feature, check the “Use Search String” checkbox and then click the “Setup” button.
![]()
Clicking the Setup button produces this dialog:
![]()
Search for: This field indicates the word or phrase that you want to search for. If you begin the search with REGEX: you can use Regular Expression syntax. By default, with REGEX omitted, any search string found will indicate success (or failure if “Presence of…” checkbox is enabled).
in file: This radio button allows you to either select the ActiveBatch generated Job Log file or you can specify another file (if your process creates one) to be searched. Either way, the file must be accessible from the Execution machine.
Case Sensitive Search: By default, the string search is not case sensitive unless you enable this checkbox.
Presence of string indicates Job Failure: By default, the presence of the string indicates success. If you enable this checkbox, the presence of the string will indicate failure.
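The options above combine as follows. This Python sketch models the documented semantics only (the name `search_outcome` is hypothetical; ActiveBatch performs the scan itself on the Execution machine) and returns True when the Job should be considered successful:

```python
import re

def search_outcome(log_text, search_for, case_sensitive=False,
                   failure_on_match=False):
    """Scan log text for a plain string or a REGEX:-prefixed pattern.
    By default a match means success; with failure_on_match set,
    a match means failure (and absence means success)."""
    if search_for[:6].upper() == "REGEX:":
        flags = 0 if case_sensitive else re.IGNORECASE
        found = re.search(search_for[6:], log_text, flags) is not None
    else:
        hay = log_text if case_sensitive else log_text.lower()
        needle = search_for if case_sensitive else search_for.lower()
        found = needle in hay
    # success = match found XOR "presence indicates failure"
    return found != failure_on_match

print(search_outcome("Step 3: SUCCESS", "success"))    # True
print(search_outcome("fatal ERROR in step 2", "error",
                     failure_on_match=True))           # False
```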
Additional Search String Details and Examples
For Process and Script type jobs, the ability to search a Job’s Log file or designated file to determine success or failure is an important capability. Sometimes a program or utility’s Exit Code is not reliable. This section explains the capabilities of the Search String facility.
![]()
The figure above shows a simple example of this facility. The presence of the word “success” (in a case-insensitive match) in the Job’s Log File means this Job succeeded (regardless of the exit code). Sometimes a program will reliably indicate failure but not success. In that case, the following example may apply.
![]()
In this case, the word “error” with the “Presence of string…” checkbox enabled means the Job failed. When multiple words are used, all of the words must match to denote success or failure (depending on the “Presence…” checkbox).
![]()
In this case, the phrase “success good worked” must match for the Job to succeed.
Another capability within Search String is the use of Regular Expressions. A Regular Expression is a syntax for creating patterns that allow complex character/word matching. ActiveBatch supports “extended” Regular Expressions which is a POSIX standard.
![]()
In the above example, the prefix “Regex:” indicates that a Regular Expression follows. The expression matches one or more occurrences of the word “first”, followed by any number of spaces, followed by one or more occurrences of the word “this”, followed by any number of spaces, followed by zero or more occurrences of the word “second”. This pattern can occur over multiple lines.
![]()
The above example (credit: Microsoft MSDN) illustrates the use of “extended” regex. In this case we are capturing groups. The Regex matches one or more occurrences of “a”, followed by one or more occurrences of “b”, then one or more occurrences of “c”, and then any number of repetitions of the third capture group (which is “b+”). The index of a capture group can be found by counting the left parentheses from left to right. So this Regex matches the target sequence “aabbcbbb”.
The checkbox “Case Sensitive Search” is disabled by default. This means that, by default, Regex expressions are matched in a case insensitive fashion. To have Regex match in a case sensitive manner please enable the checkbox. You can still match on case or not within the regular expression itself.
The prefix “Regex:” is case-less.
Search strings may be specified in Unicode for Microsoft based Execution Agents only. Non-Windows systems are currently restricted to ASCII.
Run Job Interactively: If enabled, Windows Execution Agents will run the Job on the user’s desktop. Three (3) modes are available: Normal, Maximized and Minimized. Typically, Jobs are not run interactively but rather, are run in background mode.
Note: For best security practices, you should ensure that any user who will be running interactive Jobs also places the ActiveBatch Interactive Desktop Helper (AbatIDH) program into the Startup folder. AbatIDH executes Jobs that are marked as interactive, to avoid having the ActiveBatch Execution Agent service run the program. A Microsoft service named "Interactive Services Detection" (ISD) intercepts any windows or message boxes that are generated from a service. Since the Windows Execution Agent runs as a service, this caused a problem (when ISD was initially introduced) when running interactive Jobs. The solution is that interactive Jobs are run using AbatIDH, not the Windows Execution Agent. Running an interactive Job on a desktop through a service does pose a security risk, since the desktop user could gain access to the interactive Job.
Terminate all child processes: Enabling this checkbox means that all processes created by the Job are to be terminated when the Job completes. For Windows systems, the Job is created as a Windows Job object and all children can be tracked. For UNIX systems, parent/child processes are part of the operating system. For OpenVMS systems, sub-processes are terminated regardless of the setting of this checkbox and detached processes are never terminated based on this property setting.
Pre/Post Job Steps: The fields in this section allow you to define Job steps that execute before and/or after the main Job file specified above.
Pre Job Filename: When enabled, the file specification entered will be executed before the main Job file. This “pre” step is typically used to establish or confirm the existence of certain objects that the main step would require. For example, the presence of certain files, moving files from one machine to another, etc. The “pre” step, like the main Job step, can either be a script file or a program. Unless the Copy script… checkbox is enabled, the file specification must be accessible from the execution machine. You may use the Browse button to lookup the file specification.
The environment variables (OpenVMS: logical names) table below lists the variables that a Job step may query (you cannot change any of the values). The Pre Job step must exit with a zero exit status to be considered successful.
Copy script to execution machine: This checkbox controls whether the pre Job step file specification is considered local or accessible from the client and should be copied to the Job Scheduler's backend database and then ultimately to the Execution machine. The file size must be less than the System policy CopyScriptMaxSize. Before enabling this property, the same factors should be considered that were described previously (see Copy Script Considerations).
Post Job Filename: When enabled, the file specification entered will always be executed after the main Job file.
This “post” step is typically used to either clean up any temporary or other files that are no longer necessary and/or move files to other machines. The main Job step’s exit status is also provided to the “post” step. The “post” step, like the main Job step, can either be a script file or a program. Unless the Copy script… checkbox is enabled, the file specification must be accessible from the execution machine. You may use the Browse button to lookup the file specification.
The environment variables described in the table below allow the “post” step to determine how the main Job step did and to then perform success or failure processing. The “Restart” indication is also useful in that many Jobs may require new copies of various files before the main step executes. With pre and post step processing, such determinations no longer need to be made by the main Job step. The use of ActiveBatch COM Automation to retrieve additional information about the running Job may also be used. The Post Job step must exit with a zero exit status to be considered successful.
Copy script to execution machine: This checkbox controls whether the post Job step file specification is considered local or accessible from the client and should be copied to the Job Scheduler backend database and then ultimately to the Execution machine. The file size must be less than the System policy CopyScriptMaxSize. Before enabling this property, the same factors should be considered that were described previously (See Copy Script Considerations).
Failure results in Job Failure: This checkbox controls whether a post Job step failure results in marking the entire Job in failure regardless of what the main Job step actually did. By default, this checkbox is cleared so the results of the post Job step don’t interfere with the results of the Job.
The following environment variables (or logical names) are defined. This allows your program or script (Process Jobs can run executables and external script files) access to some aspects of ActiveBatch without having to write a complicated procedure.
Environment Variables available for Process Jobs Steps (Main, Pre and Post)
| Variable Name | Description | Job Step |
|---|---|---|
| ABAT_JOBID | The unique Job ID. | All |
| ABAT_JOBNAME | The Job name. | All |
| ABAT_JOBSCHED | The machine name of the Job Scheduler. | All |
| ABAT_CLIENTMACHINE | The submitting user’s machine name. | All |
| ABAT_QUEUE | The Queue name this Job is running on. | All |
| ABAT_JOBFILE | The file specification of the main Job file. | All |
| ABAT_JOBMODE | The Job’s mode (NORMAL). | All |
| ABAT_CHECKPOINT | Checkpoint (YES \| NO). | Main/Post |
| ABAT_CHECKPOINTVAL | Checkpoint value (if ABAT_CHECKPOINT is “YES”). | Main/Post |
| ABAT_CHECKPOINTTIME | Last checkpoint date/time. | Main/Post |
| ABAT_JOBEXITSEV | Job exit severity (SUCCESS \| FAILURE). | Post |
| ABAT_JOBEXITCODE | Job exit code (decimal). | Post |
| ABAT_RESTART | Job restart mode (YES \| NO). | All |
| ABAT_INSTALLPATH | ActiveBatch installation path. | All |
| ABAT_SUBMITUSER | The submitting user. | All |
| ABAT_SUBMITTIME | Time the Job was submitted. | All |
| ABAT_PORT | TCP/IP port of the Execution Agent. | All |
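A post Job step can branch on these variables to perform success or failure processing. The sketch below is a hypothetical post-step helper (not shipped with ActiveBatch) that reads ABAT_JOBEXITSEV and ABAT_JOBEXITCODE to decide what to report:

```python
import os

def post_step_summary(env=None):
    """Summarize the main step's outcome from ActiveBatch-provided
    environment variables (the ABAT_JOBEXIT* values exist for Post
    steps only)."""
    env = os.environ if env is None else env
    name = env.get("ABAT_JOBNAME", "<unknown job>")
    if env.get("ABAT_JOBEXITSEV") == "FAILURE":
        return f"{name}: main step failed (exit code {env.get('ABAT_JOBEXITCODE')})"
    return f"{name}: main step succeeded"

# Simulated post-step environment:
print(post_step_summary({"ABAT_JOBNAME": "NightlyLoad",
                         "ABAT_JOBEXITSEV": "FAILURE",
                         "ABAT_JOBEXITCODE": "2"}))
# NightlyLoad: main step failed (exit code 2)
```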
Script Job Properties
This type of Job is very similar to Process except that the script is actually embedded in the Job itself. No external file needs to be specified.
![]()
Script Contents: This window provides an editable view of the embedded script. As you begin typing in this property, you are using the built-in script editor. Variable substitution is supported. If you wish to edit the script using an external editor, you can click on the Edit in External Editor button.
There are advantages to using the built-in script editor. First, the variable usage syntax supports auto-completion via the Tab key (note the “app_path” usage below).
![]()
In addition, several commands are provided:
Test Trigger allows you to test your script without having to close the property sheet (you don't need to save the script to test your changes). This is very useful when you’re trying to get something to work and don’t want to create an excessive number of revisions that are essentially debugging versions. You will see (Test) next to the name of a script instance that was triggered using the Test Trigger feature. The payload of the script is run during a test trigger.
Edit in External Editor allows you to edit the script using an external program editor. The editor that will launch is determined by a setting configured in Tools > Settings > Default Text Editor. By default, the editor is Notepad. This means Notepad will launch with your script contents when you click on this button.
Find and Replace is a simple editing facility that lets you find a string, replace a single occurrence, or replace all occurrences. In the image below, the two buttons on the top control Find and Replace. Quick Find performs a Find (search) operation. The Find options allow you to refine your search. The “Search type” supports a “normal” string search; advanced searches, including the use of Regular Expressions, are also supported. Quick Replace is used with the Find operation to replace either a single occurrence or all occurrences of the matching string.
![]()
Script Extension is where you specify the script’s underlying extension (i.e. PS1, VBS, PL, XCP, etc) that will be used to lookup the associated program that is eventually used to execute the script. The extension must be specified without any periods or other file system notation. If you do not see your scripting language extension in the list, you can manually enter it in the textbox (make sure the underlying software required to run the script is installed on the target Execution Agent system).
Parameters: This optional field can be used to pass input parameter(s) to the Job. Separate multiple parameters with a space. If a single parameter naturally has a space in it, enclose the parameter in quotes so the system treats the value as one parameter (e.g. "Hello World"). All data specified is passed as a string. Supported in this property are hard-coded values, ActiveBatch variables (constant and active), environment variables, Date Arithmetic and additional date and time substitution options discussed below. You may perform environment variable substitution using the syntax %var% for Windows systems and $var for UNIX systems.
Input parameters can also be specified dynamically using Current or Param qualifiers, using the SetVariable or SetVariables Jobs Library Job steps, or using the Advanced Trigger > Parameters option.
Date and Time Substitution
Date and Time substitution can be used in the Parameters property only. You can specify one or more of the strings listed below - surrounded by pound signs (#). The strings are all date and time related - and are limited to the current date/time, returned in various formats.
Note: If your input parameter data uses the # sign as data, specifying two consecutive pound signs (##) means that one will be actually passed.
| Code | Description |
| --- | --- |
| %a | Abbreviated weekday name |
| %A | Full weekday name |
| %b | Abbreviated month name |
| %B | Full month name |
| %c | Date/time representation (by locale) |
| %d | Day of month in decimal (01-31) |
| %H | Hour in 24-hour format (00-23) |
| %I | Hour in 12-hour format (01-12) |
| %j | Day of year in decimal (001-366) |
| %m | Month in decimal (01-12) |
| %M | Minute in decimal (00-59) |
| %p | AM/PM indicator for 12-hour format (locale) |
| %S | Seconds in decimal (00-59) |
| %U | Week of year in decimal; week begins Sunday (00-52) |
| %w | Weekday in decimal (0-6; Sunday is 0) |
| %W | Week of year in decimal; week begins Monday (00-52) |
| %x | Date representation for current locale |
| %X | Time representation for current locale |
| %y | Year without century (00-99) |
| %Y | Year with century in decimal |
| %z, %Z | Time-zone abbreviation or name |
| %% | Percent sign |
For example, CUST_#%m%d%Y#.dat resolves to CUST_08302023.dat when the current date is August 30th, 2023.
You may insert any literal characters that you also want to appear within the delimited string. This substitution occurs on the Execution machine and supports the current date and time only. If you need to perform date calculations as well as substitution, use the Date Arithmetic Uses and Syntax feature.
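The substitution can be sketched in Python using the same strftime-style codes (resolve_date_tokens is a hypothetical helper shown for illustration; it is not an ActiveBatch API):

```python
import re
from datetime import datetime

def resolve_date_tokens(text, now=None):
    """Resolve #strftime# tokens in a Parameters string (simplified sketch)."""
    now = now or datetime.now()
    # Two consecutive pound signs (##) pass a single literal '#' through.
    literal_parts = re.split(r"##", text)
    resolved = [
        re.sub(r"#([^#]*)#", lambda m: now.strftime(m.group(1)), part)
        for part in literal_parts
    ]
    return "#".join(resolved)
```

For example, `resolve_date_tokens("CUST_#%m%d%Y#.dat", datetime(2023, 8, 30))` yields `CUST_08302023.dat`.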
As a Date Arithmetic example, you could use the following syntax: <BOM/+Day=2>. BOM means beginning of month. Therefore, the current month is returned. Next, +Day=2 adds two days to the beginning of the current month, which is the final date value. The Date Arithmetic string can be used in the parameters property, or if you have several jobs sharing the same Date Arithmetic parameter, you could create a Date Arithmetic Active Variable and use the variable name in the parameters property - e.g. ${DateVarName}. The Job Scheduler performs Date Arithmetic variable substitution, therefore the variable is resolved before the Job is sent to the Execution Agent to run.
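The arithmetic itself is easy to picture in Python (this only illustrates what <BOM/+Day=2> computes; it is not the ActiveBatch Date Arithmetic evaluator):

```python
from datetime import date, timedelta

def bom_plus_days(today, days):
    """Beginning of month, plus an offset in days: what <BOM/+Day=N> yields."""
    return today.replace(day=1) + timedelta(days=days)
```

For a current date of August 30th, 2023, `bom_plus_days(date(2023, 8, 30), 2)` returns August 3rd, 2023.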
Completion Status Rule: This section concerns how a Job’s exit status is to be interpreted.
Success Code Rule: This field can contain keywords and numeric (decimal) exit codes all of which are interpreted as success. A range can be specified as “m-n” (for example, 0-5 means zero through 5). Several keywords are also supported: NTMsg, Odd, Even and LZero (negative). NTMsg indicates that an NT Message Code is passed back as an exit status. The upper two (2) bits indicate the severity. XLNT customers can use this type of success code rule. Odd and Even indicate status codes which are either odd or even numbers. OpenVMS uses an odd status as a success indication. Multiple codes can be specified by separating whole positive numbers and ranges with commas (for example: 0,5-10,20). This field offers a simple pull-down control with zero, NTMSG, LZERO, ODD and EVEN as quick selection choices.
Note: For certain IT Jobs that require a system reboot, you may want to consider the “Lost” status as success. The Lost status occurs when ActiveBatch loses communication with an Execution Agent and then, on reconnection, discovers that the Job has already finished without any possibility of determining how the Job actually fared. For a reboot this would be a known state. -1071120348 is the exit code representing Lost.
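How a Success Code Rule such as "0,5-10,20" is evaluated can be sketched as follows (a simplified interpretation for illustration; the NTMsg keyword, which inspects severity bits in an NT Message Code, is omitted):

```python
def exit_code_is_success(code, rule):
    """Check a numeric exit code against a Success Code Rule string, e.g. "0,5-10,20"."""
    for part in (p.strip() for p in rule.split(",")):
        keyword = part.upper()
        if keyword == "ODD" and code % 2 != 0:
            return True
        if keyword == "EVEN" and code % 2 == 0:
            return True
        if keyword == "LZERO" and code < 0:
            return True
        # A range "m-n" covers m through n inclusive.
        if "-" in part and not part.startswith("-"):
            lo, hi = (int(n) for n in part.split("-"))
            if lo <= code <= hi:
                return True
        elif part.lstrip("-").isdigit():
            if code == int(part):
                return True
    return False
```

With the rule "0,5-10,20", an exit code of 7 is success while 3 is failure.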
Use Search String: Sometimes a script doesn’t pass a reliable exit code. By “reliable” we mean that the exit code may be the same regardless of whether the script worked or not, and that the only indication of a script success (or failure) is some data that is written to the batch Job’s log file (or another file created by the script). If that is the case, ActiveBatch supports the scanning of a log file or user-specified file for a string that indicates either success or failure of the Job. To enable this feature, check the “Use Search String” checkbox and then click the “Setup” button.
![]()
Clicking the Setup button produces this dialog:
![]()
Search for: This field indicates the word or phrase that you want to search for. If you begin the search with REGEX: you can use Regular Expression syntax. By default, with REGEX omitted, any search string found will indicate success (or failure if “Presence of…” checkbox is enabled).
in file: This radio button allows you to either select the ActiveBatch generated Job Log file or you can specify another file (if your process creates one) to be searched. Either way, the file must be accessible from the Execution machine.
Case Sensitive Search: By default, the string search is not case sensitive unless you enable this checkbox.
Presence of string indicates Job failure: By default, the presence of the string indicates success. If you enable this checkbox, the presence of the string will indicate failure.
Additional Search String Details and Examples
For Process and Script type jobs, the ability to search a Job’s Log file or designated file to determine success or failure is an important capability. Sometimes a program or utility’s Exit Code is not reliable. This section explains the capabilities of the Search String facility.
![]()
The figure above shows a simple example of this facility. The presence of the word “success” (in a case-less match) in the Job’s Log File means this Job succeeded (regardless of the exit code). Sometimes a program reliably indicates failure but not success. In that case, the following example may apply.
![]()
In this case, the word “error”, with the “Presence of string…” checkbox enabled, means the Job failed. When multiple words are specified, all of the words must match to denote success or failure (depending on the “Presence…” checkbox).
![]()
In this case, the phrase “success good worked” must match for the Job to succeed.
Another capability within Search String is the use of Regular Expressions. A Regular Expression is a syntax for creating patterns that allow complex character/word matching. ActiveBatch supports “extended” Regular Expressions which is a POSIX standard.
![]()
In the above example, the prefix “Regex:” indicates that a Regular Expression follows. The expression matches one or more occurrences of the word “first”, followed by any number of spaces, followed by one or more occurrences of the word “this”, followed by any number of spaces, followed by zero or more occurrences of the word “second”. The pattern can occur over multiple lines.
![]()
The above example (credit: Microsoft MSDN) illustrates the use of “extended” regex. In this case we are capturing groups. The Regex matches one or more occurrences of “a”, followed by one or more occurrences of “b”, then one or more occurrences of “c”, and then any number of repetitions of the third capture group (which is “b+”). The index of a capture group can be found by counting left parentheses from left to right. So this Regex matches the target sequence “aabbcbbb”.
The checkbox “Case Sensitive Search” is disabled by default. This means that, by default, Regex expressions are matched in a case insensitive fashion. To have Regex match in a case sensitive manner please enable the checkbox. You can still match on case or not within the regular expression itself.
The prefix “Regex:” is case-less.
Search strings may be specified in Unicode for Microsoft based Execution Agents only. Non-Windows systems are currently restricted to ASCII.
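The rules described above can be sketched with Python's `re` module (a simplified model for illustration; `search_string_result` is a hypothetical helper, not part of ActiveBatch):

```python
import re

def search_string_result(text, search, case_sensitive=False, presence_means_failure=False):
    """Evaluate a Use Search String rule against log-file text (simplified sketch)."""
    flags = 0 if case_sensitive else re.IGNORECASE
    # A leading "Regex:" prefix (itself case-less) switches to regular-expression matching.
    if search.lower().startswith("regex:"):
        pattern = search[len("regex:"):]
    else:
        pattern = re.escape(search)
    found = re.search(pattern, text, flags) is not None
    if presence_means_failure:
        return "FAILURE" if found else "SUCCESS"
    return "SUCCESS" if found else "FAILURE"
```

For example, searching for “error” with “Presence of string indicates Job failure” enabled marks a log containing “fatal error” as FAILURE.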
Note: Some scripting languages, for example PowerShell, have the concept of terminating and non-terminating errors. This means that a script may “fail”, but if the error is non-terminating the exit code will indicate success. Such languages do provide mechanisms for treating non-terminating errors as terminating (or for the errors to be caught).
Run Job Interactively: If enabled, Windows Execution Agents will run the Job on the user’s desktop. Three (3) modes are available: Normal, Maximized and Minimized. Typically, Jobs are not run interactively but rather, are run in background mode.
Note: For best security practices, ensure that any user who will be running interactive Jobs places the ActiveBatch Interactive Desktop Helper (AbatIDH) program into the Startup folder. AbatIDH executes Jobs that are marked as interactive so that the ActiveBatch Execution Agent service does not run the program itself. A Microsoft service named "Interactive Services Detection" (ISD) intercepts any windows or message boxes that are generated from a service. Since the Windows Execution Agent runs as a service, this caused a problem (when ISD was initially introduced) when running interactive Jobs. The solution is that interactive Jobs are run using AbatIDH, not the Windows Execution Agent. Running an interactive Job on a desktop through a service does pose a security risk, since the desktop user may have access to the interactive Job.
Terminate all child processes: Enabling this checkbox means that all processes created by the Job are to be terminated when the Job completes. For Windows systems, the Job is created as a Windows Job object and all children can be tracked. For UNIX systems, parent/child processes are part of the operating system. For OpenVMS systems, sub-processes are terminated regardless of the setting of this checkbox and detached processes are never terminated based on this property setting.
Jobs Library
The Jobs Library represents built-in functionality in which the Job designer can configure a Job step to obtain the benefit of that step without having to write a script. The Jobs Library allows the creation of a multi-step Job in which one or more Job steps can be logically constructed.
![]()
In the above image, the Jobs Library ToolBox contains all the step categories. Steps are dragged and dropped from the ToolBox to the workspace on the right. As depicted in the above image, two steps have been configured, the Delete File Step and the Log Step.
Job Steps are executed from top to bottom. Each Step has an error action property that you can configure which determines what to do if an error is encountered during the Job execution.
The ToolBox consists of step categories that come with a basic ActiveBatch subscription. It also lists other step categories that require additional licensing ("Extensions"). For more information about the Jobs Library Job type and categories that come with a basic subscription, see the ActiveBatch Jobs Library Reference Manual. For additionally licensed categories, see the Extensions Guide.
Associations
Associations allow a Job to associate one or more ActiveBatch objects for later possible inclusion when the Job executes.
When associating a Service Library object, the association allows the Job to access REST Service, Web Service, .NET Assembly, SQL Server Stored Procedure/Function or Oracle Stored Procedure/Function methods. The Service Library allows logic to be defined and reused within ActiveBatch. Service Libraries can also be associated at the Plan level.
For all other objects, Associations indicates that those objects may be accessed at run time through variable substitution of the object’s path.
For example, if you needed to FTP a file to an FTP server, and the selection of the FTP server was determined at run-time, the security credentials to connect to the FTP server would also be determined at run-time. Connection credentials are specified using a User Account object. In this scenario, a variable containing the path of a specific User Account object would be used. The system does not check variable values to see if they point to ActiveBatch objects for the purpose of security checks. Rather, all the User Account objects the variable could possibly resolve to must be added to the Associations list. This must be done to check that the Job author has been granted "use" permission to all the possible User Accounts objects the Job may use.
Getting back to the FTP Job example, assume two User Accounts have been added to the Associations list because, during Job execution, the variable will resolve to one of the two User Accounts. A variable is specified in the FTP Job's connection property instead of selecting a User Account object. When the Job author saves the Job, the system performs a security check to make sure they have "use" rights to the two User Accounts they added to the Associations list. If they do not, saving the Job will produce an access denied error. If they use a variable and do not add the possible User Account objects to the Associations list, they will be able to save the Job, but the Job will fail at run time with an access denied error.
See the Object List object for more information when you need to associate one or more objects with one or more Jobs or Plans.
Generic Queue Properties
This selection allows the Job author to associate various machine and user characteristics that must match in order for the proper Execution Queue to be selected. This tab is only applicable when a Job is associated with a Generic Queue. Machine and User Characteristics will be disabled when a Generic Queue is not specified on the Job type (Process, Script, Jobs Library) property sheet. This feature is optional. You may be fine with the Job running on any system within the Generic Queue. However, you may have some Jobs that cannot run on all the systems because, for example, some systems lack the resources for the Job to run efficiently. You could create another Generic Queue that includes only the Execution Queue systems the Job can run on, and that would be a possible solution. But if this happens frequently, you may find yourself managing more Generic Queues than desired. Therefore, on a per-Job basis, you can specify which systems in the Generic Queue the Job can run on, if and when this need occurs.
![]()
User and Machine Characteristics: These list boxes allow you to associate certain characteristics (machine and/or user) that the execution machine must have in order for the Execution Queue(s) to be considered eligible to execute the Job. Existing characteristics associated with a Job (if any) are listed. The example above indicates that the Job requires an x64 CPU architecture machine. This characteristic implies that the Generic Queue has a mixed set of architecture systems.
Queue Selection Method: This field indicates the initial Job-to-Execution Queue selection criteria. Possible choices are Any and All (the default is “Any”). “Any” means that any (one) eligible Execution Queue member may be used to run the Job in accordance with the Generic Queue’s Scheduling Algorithm. “All” means that all eligible Execution Queue members should be used to run the Job. “All” is useful when you need to broadcast a process and have it carried out on a large scale, for example, broadcasting anti-virus updates. When “All” is selected, the Job's Execution > If Active properties are ignored. When the Job triggers, all eligible Execution Queues will run the Job simultaneously.
Note: If a Job is triggered with “All” selected as the Queue Selection Method, and that Job has a completion trigger that triggers downstream Job(s) configured with "Any", the Execution > If Active properties of the downstream Job are not ignored. This means the downstream Job may run only once. If the intention is for the “broadcast” Job to continue to broadcast to all Jobs within the stream, all Jobs must have “All” enabled.
Job – Machine Characteristics
![]()
To add, edit or remove associated machine characteristics, click the Add, Edit or Remove buttons. The above dialog appears when you want to add or edit a machine characteristic. Machine characteristics contain static information about the execution machine. The image above indicates that the Job is to run on a machine with at least eight (8) processors. Machine characteristics are used by ActiveBatch when a Job is submitted to a Generic Queue and the proper Execution Queue (machine) needs to be selected. This allows the author of a Job to indicate what minimum runtime requirements the Job has. Please note that user/machine characteristics are ignored when a Job is associated with an Execution Queue.
Name: This field contains a dropdown list of the machine characteristics.
The following table provides the machine characteristics that are available.
| Characteristic | Description |
| --- | --- |
| AbatServicePack | (String) ActiveBatch Execution Agent Service Pack |
| ABATVersion | (String) ActiveBatch Execution Agent version. (Example: V12) |
| ABATVersionId | (Numeric) ActiveBatch Execution Agent version. (Example: 12) |
| CloudEnvironmentType | (String) |
| CloudMarketplace | (String) |
| ClrVersion | (String) |
| CPU | (String) CPU platform type. (Example: Pentium) |
| CPUArchitecture | (String) CPU architecture. (Example: x86 or x64) |
| CPUSpeed | (Numeric) Processor speed in MHz. Note: Not all operating systems expose a chip’s processor speed. |
| FileVersion | (String) ActiveBatch version of the Execution Agent |
| Hostname | (String) Hostname of the Execution Agent |
| IPAddress | (String) IP address of the Execution Agent |
| LicensingPoints | (Numeric) Number of points for this Execution Agent |
| ManagedFramework | (String) |
| Memory | (Numeric) Physical memory size in MB |
| NumProcessors | (Numeric) Number of processors |
| OS | (String) |
| OSBuild | (String) Operating system build number. Note: Not all operating systems use a build number. (Example: 7600) |
| OSServicePack | (String) |
| OSType | (String) Operating system platform type. (Example: Professional) |
| ScriptLanguage | (String) Default script language extension |
| TlsEnabled | (String) |
| VersionThumbprint | (String) |
| Virtualized | (String) Machine is a virtual machine (Yes/No). Note: This characteristic is only set properly if the Execution Agent is running V7 SP2 or later; otherwise the default value is “No”. |
Condition: This field allows you to select a comparison operator to be used when determining whether an Execution Queue meets the machine characteristic requirement.
The following operators are available for numeric characteristics: Equal to, Not Equal to, Less, Less Equal, Greater and Greater Equal. If the value is a string, you may only use the Equal operator.
Value: This field allows you to enter a value that will be used in the above comparison. The data entered must meet the numeric or string value requirements of the machine characteristic chosen. For example, processor speed is numeric only. For string comparison purposes any data entered is matched from left to right until the data given is matched. This means you can enter partial data (for example: “Windows” would match both “Windows 2019” and “Windows 2022”).
Note: Machine Characteristic values vary quite a bit between operating systems and hardware platforms.
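The matching rules above can be sketched as follows (`characteristic_matches` is a hypothetical helper for illustration: numeric values use the chosen operator, string values use a case-less left-to-right partial match):

```python
def characteristic_matches(actual, condition, required):
    """Compare an Execution Queue's characteristic against a Job requirement (sketch)."""
    if isinstance(actual, (int, float)):
        ops = {
            "Equal": actual == required,
            "NotEqual": actual != required,
            "Less": actual < required,
            "LessEqual": actual <= required,
            "Greater": actual > required,
            "GreaterEqual": actual >= required,
        }
        return ops[condition]
    # Strings: partial data matches from left to right, ignoring case,
    # so "Windows" matches both "Windows 2019" and "Windows 2022".
    return str(actual).lower().startswith(str(required).lower())
```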
Job – User Characteristics
The dialog below appears when you want to add or edit a user characteristic to a Job.
![]()
User characteristics contain free-form data that you determine is important to your Job. User characteristics, like machine characteristics, are used to determine the appropriate Execution Queue. The example above allows the Job author to select a machine Queue where the characteristic “Region” is the “Northeast”.
Name: This field contains a dropdown list of the user characteristics that have been added on the Generic Queue (Characteristics) and/or that have been added to the Execution Queue (Properties > User Characteristics).
Description: This field allows you to document your use of the User Characteristic.
Condition: This field allows you to select a comparison operator to be used when determining whether an Execution Queue meets the user characteristic requirement.
The following operators are available: Equal to (default), Not Equal to, Less, Less Equal, Greater and Greater Equal.
When a string value is specified, a comparison is made from left to right, byte-wise.
Value: This field allows you to enter a value that will be used in the above comparison. The data entered must meet the numeric or string value requirements of the user characteristic chosen. For string comparison purposes any data entered is matched, in a case-less fashion, from left to right until the string is exhausted. This means you can enter partial data (for example: “North” would match both “Northeast” and “Northwest”).
Variables
Variables are one of the most important and powerful aspects of ActiveBatch. With variables you can pass data to other related Jobs as well as form constraints for proper execution of related Jobs. You can also use variables to soft-code Job properties that have the same value in multiple Jobs. It is much easier to change a single variable value as opposed to updating each Job with the new property value.
![]()
For a full discussion of variable topics, see Variables
ActiveBatch variables represent data that can be passed to Jobs, Plans, and programs or used anywhere variable substitution may be used within ActiveBatch. A variable can be exported to an executing Job by enabling it as an environment variable.
When the checkbox Strict Variable Processing is checked, all Active Variables must execute successfully. If even a single variable fails, the Job will be terminated in failure. A Job will also be terminated if a Constant Variable has a blank value, or if an Active Variable resolves to a blank value. For more precise control, on a per-variable basis (as opposed to all variables), see the Use property - which is on every variable property sheet.
When the checkbox Inherit Environment Variables is checked, variables marked as “Export as Job Environment variable” in parent Plans should be included when this Job is dispatched. By default, variables in Jobs and parent variables are only included if actually used and referenced in this dialog.
![]()
ActiveBatch variables are further broken down into two (2) groups: Constant and Active.
A constant variable is a variable that contains a hard-coded value. Its value, specified on the Job object, does not change unless you change it.
An active variable contains a data source that populates the variable. ActiveBatch supports several data sources. For example, you could issue a SQL SELECT statement to populate an active variable.
Name/Label: This mandatory field provides the name of the variable. Variable names can consist of the following characters: a-z, A-Z, 0-9, and underscore (_). A variable name cannot begin with a numeric character. Any other character is illegal (in particular, any operator character notation).
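The naming rule can be expressed as a simple pattern (a sketch of the rule as stated; whether a leading underscore is permitted is an assumption here):

```python
import re

# Letters, digits and underscore only; must not begin with a digit.
VALID_VARIABLE_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_variable_name(name):
    return VALID_VARIABLE_NAME.match(name) is not None
```

For example, `app_path` is valid, while `2fast` and `my-var` are not.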
Description: This optional field allows you to describe the variable and its usage.
Constant: This radio button indicates that the value for the variable is a constant. You can add any value, including referencing another variable as its value.
Active: This radio button allows you to select a data source for the variable. Depending on the selection you make from the dropdown menu, a set of additional properties will appear appropriate to the data source.
![]()
Active Variable Data Source Menu Properties: The set of properties will vary based on the data source selected. As you click on each property a brief help message is displayed in a small window towards the bottom of the display (just above the Credentials property).
Credentials: Depending on the data source, you may need to specify security credentials appropriate to its successful operation. When applicable, select a User Account Name object from the dropdown list. Two small buttons allow you to manage or create a new User Account object. If the Credentials are omitted, ActiveBatch will check the Plan’s variable credentials property. If that property is omitted (or the Job is not executed within a Plan), ActiveBatch will use the Job’s execution user credential.
| Active Variable Source | Description |
| --- | --- |
| Date Arithmetic | Uses the ActiveBatch Date Arithmetic facility to compute dates in various formats. |
| Disk Free Space | Returns the amount (in bytes) of free space on the specified disk drive. |
| Disk Info | Returns information about the specified disk drive. |
| Drive Exists | Returns a Boolean value indicating whether a specified drive letter exists on the specified machine. |
| File Content | Reads a file (up to a specified limit). |
| File Exists | Returns a Boolean value indicating whether a specified file exists. |
| Folder Exists | Returns a Boolean value indicating whether a specified folder exists. |
| IniFile | Reads the contents of an INI file into a variable. |
| Ping | Performs a ping operation. The Boolean variable indicates whether the machine ping succeeded. |
| PowerShell | Executes an embedded PowerShell script. |
| RegistryValue | Retrieves a specified registry value. |
| Service State | Retrieves the state of the specified service on a specified machine. |
| SQL Query | Executes a SQL query. |
| VBExpression | Evaluates a VBScript expression. |
| Web Service | Executes a Web Service method. |
| WMI Ping | Similar to Ping but uses WMI. |
| WMI Query | Executes a WMI query. You can use the WQL language to retrieve properties from a specified object. |
| XpathQuery | Executes an XML XPath query. |
Use: This property indicates the criticality of the variable and whether the data source expression must succeed. This property also indicates whether a variable, if undefined, must be requested when a manual Trigger operation is performed.
Unspecified: This value means the variable is not critical to proper operation. In addition, the user will not be prompted for the variable value if a manual Trigger is performed.
Optional: This value means the variable is not critical to proper operation, but that ActiveBatch will prompt for a value if a manual Trigger is performed. If a value is not entered, the Job/Plan will still be triggered.
Required: This value means the variable value must be defined, and the user will be prompted for the variable value if a manual Trigger operation is performed.
When using Optional or Required, you can optionally provide a static picklist for manual Trigger operations, so the user selects from a list to populate the variable value. To accomplish this, create a constant variable and, in the Description property, enter $options=<your comma separated list>. For example, $options=Low,Medium,High. See the image below. Make sure the Use property is set to Optional or Required.
![]()
When the manual trigger occurs, the user can select a value from the static dropdown picklist. See the image below. The value selected will be stored in the variable named UserVar.
![]()
Access: You can select whether your variables are public or private. By default, all variables are public and accessible by related Plans and Jobs. If you select Private, then only the current object (Plan or Job) can access these variables.
Read Only: This checkbox selects whether the variable is read-only and may not be changed by another Plan or Job. By default, all variables are read/write.
Secured: This checkbox indicates whether the variable value is secret and should not be displayed (or otherwise allowed to be retrieved). Enabling this checkbox is useful if you decide to use a variable value as a password or to hold other secret or confidential information. A secured variable is a write-only property and can be changed but never retrieved. Once a variable is secured it cannot be unsecured.
Volatile: This checkbox, if enabled, means that the variable must be re-evaluated when a Plan/Job is executed for whatever reason (including restarts).
Export as Job Environment Variable: This checkbox controls whether the variable and value should be exported to the Job instance as an environment variable (for OpenVMS this would be a logical name). By default, a variable is not exported as an environment variable.
Note: Variables declared at the Job level can be exported. For other hierarchical variables to be exported they must be re-defined at the Job level unless the Inherit Environment Variables checkbox is enabled.
Triggers
This tab allows you to specify when a Plan/Job should be triggered. This includes Date/Time triggers, which offer a few options: setting a time interval, associating one or more Schedule objects, or using constraint-based scheduling (CBS).
Also configurable on the Triggers property sheet are a variety of event triggers. When a configured event occurs - for example, a file arrives in a directory being monitored by the Scheduler - the Job will trigger.
All triggerable objects (Job, Plans and references) have the same options for Scheduled triggers. They also share the same options for Event triggers.
![]()
Scheduled Triggers
Scheduled triggers are configured on a triggerable object's Triggers property sheet. Triggerable objects include Jobs, Plans, and references.
ActiveBatch supports (3) types of date/time scheduling:
Interval - Configure trigger times using an interval that includes days, hours, and minutes, or a combination thereof. For example, trigger a Job every 45 minutes.
Schedule - Obtain trigger dates and trigger times from a Schedule Object. The time can also be configured on the triggerable object; when it is, any times set on the associated Schedule object are ignored. For example, trigger a Plan Monday and Wednesday at 2pm and 6:15pm.
Constraint-based triggers (CBS) - Obtain trigger dates from a Schedule object. The trigger time comes from a CBS-specific property that has a default time that can be overridden with a time of your choosing. All General Constraints configured must evaluate to true before the triggerable object can run. For example, trigger a Job reference Monday through Friday, and evaluate the constraints at the start of the calendar day (i.e. midnight). If the constraints are met, run the Job immediately. Note - CBS can only trigger a Job or Plan once a day. Use other methods if you need to trigger the object more frequently.
Note: For any of the 3 above-described date/time triggers to work, check the Enable Date/Time Trigger checkbox at the top of the Triggers property sheet.
Expand the desired Scheduled trigger type to learn more about it.
Interval Trigger
To schedule Jobs/Plans based on an interval, click the Interval option.
When looking at future runs for objects configured using an Interval, you will see a state of "Not Run (I)", where the I stands for Interval.
Interval allows you to enter a time expressed in days, hours and minutes. This "interval" is added to the starting execution time and forms the next time the Job/Plan is to be scheduled for execution. For example, let's say the time is now 11:00am. An interval of 1 day, 1 hour and 0 minutes would result in a next scheduled execution time of tomorrow at 12:00pm, and so on.
Interval is useful as a relative expression of time, when an exact time is not needed. For example, an interval of 1 hour does not mean the Job/Plan will run on the hour, but rather every 60 minutes.
The interval is calculated based on the creation time of the object that has been configured with this trigger method. For example, if a new Job is configured to run every 15 minutes, and the Job is saved at 2:10pm (the creation time), the Scheduler will begin to schedule future runs 15 minutes after 2:10pm. Therefore, the first trigger will be at 2:25pm. If the Job is modified, the original creation time of the Job is still used to calculate future runs. For example, if the interval property is modified to run every 30 minutes, future triggers will be calculated based on the Job's original creation time of 2:10pm (not the modify time). Therefore, in this example, the first future run would be 2:40pm and the next at 3:10pm (provided the property "compute interval after completion" is not checked, described below).
The “Compute Interval after Completion” checkbox allows the Scheduler to compute the next time the Job/Plan is scheduled to run by adding the interval period when the triggerable object completes rather than when the triggerable object begins to execute. For example: assuming a ten (10) minute interval and a five (5) minute elapsed execution time, if an instance starts to execute at 12:00 and completes at 12:05 this checkbox will schedule the next occurrence at 12:15 rather than the default of 12:10.
Note: When using the Hours and/or Minutes interval option, the assumption is the Job/Plan will trigger 7 days a week every "x" Day, Hours and/or Minutes. If you wish to limit this (e.g. exclude weekends), you can add a Date/Time Constraint to the triggerable object. See Date/Time Constraints for more details.
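The creation-time arithmetic described above can be sketched as follows. This is a minimal illustration of the behavior as documented, not ActiveBatch's internal implementation:

```python
from datetime import datetime, timedelta

# Hedged sketch of the interval arithmetic described above: scheduled
# times are computed by repeatedly adding the interval to the object's
# original creation time, not to the time the interval was last modified.
def future_runs(creation_time, interval, after, count=2):
    """Return the next `count` scheduled times strictly after `after`."""
    t = creation_time
    runs = []
    while len(runs) < count:
        t += interval
        if t > after:
            runs.append(t)
    return runs

creation = datetime(2024, 1, 1, 14, 10)   # Job saved at 2:10pm
# Interval later modified to 30 minutes; runs still anchor to 2:10pm
future_runs(creation, timedelta(minutes=30), after=datetime(2024, 1, 1, 14, 15))
# → [2:40pm, 3:10pm], matching the example in the text
```

With "Compute Interval after Completion" enabled, the next run would instead be the instance's completion time plus the interval.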
Date/Time Trigger
To schedule Jobs/Plans using one or more Schedule Objects, click the Use Schedules for Date/Time Triggers option.
When looking at future runs for objects configured using a Schedule, you will see a state of "Not Run (S)", where the S stands for a Scheduled trigger.
ActiveBatch supports very flexible date/time scheduling. Schedule objects can be shared among Jobs and Plans, and like all objects, are securable. You can schedule both pattern (e.g. every 2 hours) and non-pattern (e.g. 1:31 PM, 2:19 PM) time periods. Dates can be Calendar, Fiscal, or Business dates.
At its simplest, a Schedule consists of Date and, optionally, Time specifications. When only Date specifications are included, a Schedule will emit a series of dates. If Time specifications are included, the Schedule will emit both dates and times. However, the Schedule's time(s) will only be used by a triggerable object if that object does not have times embedded (set on the triggerable object itself). See below for more details.
It is a common scenario that many jobs and plans will run on the same days, but not at the same time. This means that you will typically want to create Schedule objects that contain Day/Date specifications - but not Time specifications. This way, a single Schedule can be shared by those related jobs and plans. The time the triggerable object runs would be embedded within the triggerable object. However, if you do have multiple triggerable objects set to run on the same dates and times, you can certainly add the time to the Schedule object. The Schedule's time is ignored if the time is set on the triggerable object.
![]()
The image above includes a schedule that is associated with a Plan. You will notice several action buttons along the bottom of the Schedules grid. Associate lets you select a schedule to use. Disassociate lets you disassociate a selected schedule (not use it anymore). Edit Times allows you to add/edit times associated with the triggerable object. When used, the times are embedded within the object. Edit Schedule allows you to edit the selected Schedule and make changes. New allows you to create a new Schedule, and when saved, it will automatically be added to the Schedules list.
As mentioned previously, you have a choice of embedding the trigger times as part of the object itself or set the times on the Schedule. Embedding times with the object provides more flexibility and allows more sharing of the Schedule object since many Jobs/Plans may share the same trigger dates, but not the same times.
For example, in the above image, a Schedule named Monday implies that the schedule will result in triggers every Monday. Observe that under the Associated Time column there is a time specification present. This means that the time is coming from the Plan, not the Schedule object. The user clicked on the Edit Times button to embed the time at the Plan level. When the words In Schedule are displayed in the Associated Time column, the trigger time(s) are coming from the Schedule object. In this example, the Plan is scheduled to run every Monday at 06:00, :15, :30, :45 and again at 07:00, :15, :30 and :45. The Plan's embedded times take precedence over any times that may be set on the associated Schedule.
Creating a Schedule Object is straightforward in ActiveBatch. You just need to think about what kind of date/time triggers you need for your triggerable objects.
A triggerable object can have one or more schedules associated as date/time triggers. Therefore, do not think you need to cram every possible date and/or time pattern into one schedule. As a basic example, you may have one schedule that specifies weekday date/time triggers, and a second schedule that specifies weekend date/time triggers. They can both be associated to a single triggerable object.
Constraint Based Scheduling (CBS) Trigger
To schedule triggerable objects using Constraint Based Scheduling (CBS), click the Use Constraint Logic as a Trigger option.
When looking at future runs for objects configured using CBS, you will see a state of "Not Run (S)", where the S stands for Scheduled. The future run Execution Time is based on the Earliest Time property described below.
Constraints allow you to set pre-conditions that must be true for the Job/Plan to execute. See Constraints for more details. These constraints are always enforced unless an operator overrides the constraint requirement.
Constraint-Based Scheduling allows you to indicate that whenever, in a 24-hour period, a Job or Plan's constraints are satisfied (i.e. met), the Job/Plan is permitted to execute without an explicit trigger. This feature is designed to work only with triggerable objects that need to run once in a 24-hour day, which is typical for many workflows. If your workflow needs to execute multiple times per day, then Constraint-based Scheduling is not an option. Additionally, this type of trigger assumes you have one or more General Constraints configured on the triggerable object's Constraints property sheet.
![]()
The above image depicts a Plan that is enabled for CBS. The Plan has a constraint configured (on the Constraints property sheet) where a previous Plan must execute successfully prior to the execution of this one. CBS imposes no restrictions or limitations on the constraints that may be used for CBS scheduling. It can be one or more of the (4) General constraint types - Job (Instance), Variable, Resource and File. Whatever pre-condition(s) you need to specify are configured on the Constraints property sheet.
By default, a calendar day is assumed, beginning at midnight (0000) and ending at 2359. A "business day" and a "calendar day" have the same default beginning and ending times (the calendar day is always 0000 to 2359); it is the business day that can vary. An ActiveBatch Administrator can establish a new start (and, indirectly, a new end) time by configuring the Business Day feature, so the business day begins at a time other than midnight. For example, assume a company begins their business day at 0600. A business day would then be defined as 0600-0559, and crossing midnight would not change the business date. Please see Business Day for more information on Business Date semantics.
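The business-date semantics above can be sketched with a small helper. This is an illustration only; the 06:00 start time is an example value, and the real mapping is performed by the Job Scheduler using the configured Business Day feature:

```python
from datetime import datetime, timedelta, time, date

# Hedged sketch of business-date semantics as described above: if the
# business day starts at 06:00, a timestamp before 06:00 still belongs
# to the previous business date. The 06:00 start is an example value.
def business_date(ts: datetime, day_start: time = time(6, 0)) -> date:
    offset = timedelta(hours=day_start.hour, minutes=day_start.minute)
    return (ts - offset).date()

business_date(datetime(2024, 3, 5, 1, 30))   # 01:30 → business date 2024-03-04
business_date(datetime(2024, 3, 5, 7, 0))    # 07:00 → business date 2024-03-05
```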
Triggerable objects marked as CBS-enabled become "armed" when a new and eligible day begins. By "eligible" we mean a day/date specified in an associated Schedule object. To clarify, a Schedule object must be associated with the triggerable object, as it specifies the CBS trigger dates.
The execution time is determined by the "Earliest" and "Latest" time properties. The Earliest Time is the first time the system will check all the constraints configured on the triggerable object, to see if they are met. If they are met, the triggerable object will start, barring any other conditions preventing dispatch (e.g. a Queue is offline or full). The Latest Time indicates the latest time the triggerable object must run by before it is disarmed and no longer eligible to run. Typically a triggerable object advances from the Earliest time toward the Latest time because the constraints have not yet been met (evaluated to true).
The frequency at which CBS rechecks constraint logic is set on the Constraints property sheet, using the property labeled Wait. Check every "x" Minutes (or Hours, etc.). How long the rechecking continues is determined by the CBS Latest Time property.
Earliest Time - The earliest time a CBS enabled object can be armed on a scheduled run date. If the constraints are met at the earliest time, then the triggerable object will begin executing at that time.
Earliest Time - Default value
The earliest time is the beginning of the calendar day, which is midnight.
If "Use Business Day Semantics" is checked on the Constraints property sheet, the earliest time is the beginning of the Business Day.
The Business Day is configurable by an ActiveBatch Admin.
Earliest Time - Override Default value
If a time is entered in this property and the box is checked, the default earliest time is overridden and replaced with the time entered here.
Latest Time - The latest time that a CBS-enabled object can be executed through CBS; once this time passes, the object is disarmed.
Latest Time - Default value
The latest time is the end of the calendar day, which is 2359.
If "Use Business Day Semantics" is checked on the Constraints property sheet, the latest time is one minute before the end of the Business day.
The Business day is configurable by an ActiveBatch Admin.
Latest Time - Override Default value
If a time is entered in this property and the box is checked, the default latest time is overridden and replaced with the time entered here.
Using the above figure as an example, the earliest this triggerable object can run is 0900. The latest it can run is 1300 (1pm). The defaults were overridden by the user.
As CBS enabled objects adhere to a 24-hour cycle, it is possible that a late running instance can run past the end time of the day (calendar or business). The “Abort Executing Instances…” property determines what should be done if that happens. By default, the executing instance is allowed to continue to run. If you would rather the instance be aborted, then check the Abort Executing Instances... property.
Note: You must associate at least one (1) Schedule object for CBS to work, with the date(s) in the Schedule specified (no time specifications are used). If you don’t specify any Schedule(s) - the triggerable object will not run based on CBS. Also, if you do specify a Schedule that has time(s) configured, the times will be ignored as the Earliest Time / Latest Time properties are always used to determine the arming/execution of the CBS trigger.
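The arming cycle described above can be sketched as a simple walk through the Earliest/Latest window. The function name and the constraint callback are illustrative assumptions; ActiveBatch's internal scheduler logic is not exposed:

```python
from datetime import datetime, timedelta

# Hedged sketch of the CBS arming cycle described above: the object arms
# at Earliest Time, re-checks its constraints at the Wait interval, and
# is disarmed once Latest Time passes without the constraints being met.
def cbs_first_run(earliest, latest, wait, constraints_met):
    """Return the first check time at which the object would run,
    or None if it is disarmed at Latest Time without ever running."""
    t = earliest
    while t <= latest:
        if constraints_met(t):      # all General Constraints evaluate true
            return t
        t += wait                   # "Wait. Check every x Minutes" recheck
    return None
```

For example, with Earliest 0900, Latest 1300, a 30-minute recheck, and constraints that become true at 10:30, the object would run at the 10:30 check; if the constraints never become true, it is disarmed for the day.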
Run Last Missed Schedule: This field indicates whether the last “missed” schedule time should be executed. For example, let’s say a triggerable object was scheduled to run at 17:00 (5pm) today, but the Job Scheduler machine was down. When the Job Scheduler machine is started at 18:00 (6pm) that scheduled execution time would have been missed. With this field enabled, the Job Scheduler will execute the Plan based on its last scheduled time.
Note: Only the “last” missed schedule is honored. This is true even if the Plan had missed five (5) scheduled times. In other words, the object is triggered once (not 5 times).
Time Zone to use: This field indicates the time zone to use for the triggerable object. Possible time zones are: Job Scheduler, Client (Submitter’s machine), UTC (Universal Time Coordinated or Greenwich Mean Time) or any time zone you select. The Time Zone is used for time trigger(s), CBS time constraints and the @TIME variable.
Event Triggers
ActiveBatch supports a wide variety of event triggers. An event trigger is different from a date/time trigger because ActiveBatch is monitoring for an external event to occur, and when it does, a trigger occurs. An external event is not controlled by ActiveBatch, the way date and time triggers are. Event triggers may occur in a predictable manner - or be completely random.
A File Trigger is one example of an event trigger. When this trigger is configured on a Job, ActiveBatch monitors a specified directory for changes (e.g. a new file has been added, modified or deleted), and when that happens, the Job triggers. Event triggers are useful because the event is typically an indicator that the Job is ready to run. Using the File Trigger example, the file that has been added to a monitored directory may be the file that the Job must process (the payload of the Job uses the file). Rather than scheduling a Job at a time you think the file may arrive, then use a file constraint to periodically check for the file arrival - you can use the arrival of the file as the trigger mechanism. This takes the guesswork out of setting up a schedule and configuring file constraints. You know the file is available because the File Event detected its arrival. The Job can be dispatched immediately upon the arrival of the file. No schedules or constraints are required.
Event triggers can be added to all triggerable objects (Jobs/Plans/references). They are configured on the Triggers property sheet, as depicted in the image below. This image was taken from a Job property sheet, but it is the same for plans and references.
![]()
To configure any event type trigger, check the Enable Event Triggers checkbox. Next, there are two other checkboxes on the Trigger property sheet which are:
Enable Manual Trigger - By default, this checkbox is enabled. When checked, it means the object can be triggered manually using the various methods that access the "Trigger" command, most commonly AbatConsole or WebConsole (e.g. a right-click > Trigger or Trigger > (Advanced) menu option). Despite its location, this property is not related to Event triggers in any way.
Allow Deferred Event Execution - By default, when an event occurs during an "excluded" period (i.e. a period in which the object is not to execute), the triggering event is ignored. If the Allow Deferred property is checked, the triggered instance will instead be dispatched as soon as the exclusionary period is over. Exclusionary periods are configured on the Constraints property sheet (see Date/Time Constraints); they include any exclusions specified in the Date/Time list and exclusions specified using one or more associated Calendar objects. If an event trigger occurs during an exclusionary period when the Allow Deferred checkbox is enabled, an instance will be created, but it will go into a "Waiting Date/Time" state. The waiting instance's Execution Time will specify the time the instance will move into an executing state (again, after the exclusionary period is over). As an example, if the event trigger occurred during a Calendar holiday, the Execution Time would be the start of the next business day.
If you anticipate multiple events occurring during an exclusionary period, and you would like all events to create a waiting instance, be sure to configure the triggerable object's Execution > If Active properties to allow the creation of multiple instances. If the default value of "Skip" is set, only one instance can be active at a time. Any instance that is not complete (success, failure or aborted) is considered active. As an example, if 10 file trigger events occur during an exclusionary period, and If Active is set to "Skip", only one instance for one file trigger will be created. The rest of the events will be ignored.
Note: The Allow Deferred property is not applicable to a manual Trigger operation.
To add a new event trigger, click the Add button as depicted in the image above. Currently, sixteen (16) event trigger operations are supported. Five (5) additional event trigger operations are available via separate licensing and purchase: HDFS File Trigger, Oracle Database Trigger, SAP Event, ServiceNow and VMware Trigger.
Common Event Trigger Properties
There are two properties that appear on almost all ActiveBatch Event Triggers (except WMI and System Startup events): Queue and User, as depicted in the image below (see the bottom 2 properties).
![]()
The Queue property represents an Execution Queue (and therefore the Execution Machine) that the Event Trigger will be initiated from. By default, if the Queue is omitted, the Event is initiated from the Job Scheduler's machine. With the exception of the File Trigger event, the Execution Queue specified must represent a Windows machine platform with the appropriate software installed as it relates to the event type selected (i.e. JMS, Growl, etc).
The User property represents a User Account object whose security credentials will be used to initiate the ActiveBatch Event framework (except for the File Trigger event, in which case the security credentials are used when performing the File Trigger event itself). The ActiveBatch Event Framework is a process that then initiates the various supported events. With the exception of File Trigger, all the other events use this two-stage process. By default, when the User Account is omitted, the ActiveBatch service account is used to initiate the ActiveBatch Event framework. With the exception noted above, that is fine, because the actual event itself will still require security credentials to complete the event trigger you want to enable. For File Trigger events, we recommend that you do specify a User Account object, since those events in particular assume a "default" security context (in other words, they use the credentials of whatever initiated the Framework).
Next, there are a couple of other properties you can configure for each event you create.
Trigger Once Only: If enabled, this event is triggered once (when the event occurs) and then is disabled for the life of the object.
Expected Date Times: This facility, when enabled, allows you to associate a date and time with an expected event, which is useful when the event occurrence is predictable.
![]()
In many cases, events are not predictable. This means views such as the Daily Activity view, the Runbook or Operations views - do not depict expected future runs since no date or time expectations are configured. It is very possible that the event trigger will occur randomly, on random dates and/or at random times. In that case, this Expected Date Time feature would not be useful.
Alternatively, if there are scenarios in which you can predict when an event will occur, you may find this feature useful. It allows you to associate one or more Schedule Objects that are configured with the dates and times you expect the event to occur. The triggerable objects may not run at exactly that time, therefore you are using this feature to set general expectations, which is especially helpful when:
Displaying various instances views that depict future runs (it provides a more accurate picture as to what is coming).
You would like to alert users if the event does not occur. The alert type is named: Job/Plan missed expected trigger. You must configure this alert if you would like to use it.
Note: When Schedule(s) are associated this way - on the Event property sheet, the Schedule object will not produce date and time triggers but rather, date and time expectations are set.
When looking at future runs for objects configured using Expected Date Time, you will see a state of "Not Run (E)", where the E stands for Expected trigger. The Execution Time field for the future run will be the expected trigger time, based on what was set in the Schedule object.
The Delta field allows further flexibility when setting up an expected time frame for your triggerable object. It expands the expected trigger time window beyond the time taken from the Schedule object. It also represents the amount of time that can elapse before the missed expected trigger alert (described above) goes out, if the expected event does not occur by the Scheduled time plus the Delta time. The alert is useful because, when a predictable event does not occur, there could be an underlying issue that needs to be investigated.
To use the facility, enable the Expected Date Times checkbox, as depicted in the image below. You can add one or more Schedule objects that include the date(s) and time(s) the event is expected to occur. Click the Associate button if you have an existing Schedule object to add. To disassociate a schedule, select the schedule in the list, then click the Disassociate button. To edit a schedule, select the schedule in the list, then click the Edit button. To add a new schedule, click the New button and configure the new schedule object accordingly.
![]()
The settings above depict a Schedule object named M-F_2_10PM which produces a weekday time expectation of 2:10pm. Combined with the Delta property of (30) minutes, this effectively produces an expectation that this Plan or Job is expected to run each day between the hours of 2:10pm and 2:40pm (not including the duration of the Plan/Job itself). If the event does not occur by 2:40pm and the missed expected alert is configured, the alert will go out at that time.
![]()
The example above, using the Daily Activity view, depicts the same Job with an “expected” (E) future run-time of 2:10pm.
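The expected-time window from the example above can be sketched as follows. The Schedule supplies the expected time and Delta widens the window; the function and times are illustrative, and the actual alerting is performed by the Job Scheduler only if the alert is configured:

```python
from datetime import datetime, timedelta

# Hedged sketch of the expected-trigger window described above: if the
# event has not fired by (expected time + Delta), the "missed expected
# trigger" alert would go out, assuming that alert is configured.
def missed_expected(expected, delta, event_time=None):
    deadline = expected + delta
    return event_time is None or event_time > deadline

expected = datetime(2024, 1, 8, 14, 10)     # 2:10pm from the Schedule
delta = timedelta(minutes=30)               # window closes at 2:40pm
missed_expected(expected, delta, datetime(2024, 1, 8, 14, 25))  # False: on time
missed_expected(expected, delta, None)                          # True: no event
```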
Now that the common Event trigger properties have been described, each event trigger is described in detail below. Expand the desired event trigger to learn more about it.
E-Mail Trigger
The E-Mail Trigger allows you to trigger a Plan or Job based on various criteria within a received E-Mail message.
![]()
Mailbox: ActiveBatch currently supports two (2) mailbox types for accessing e-mail as an event trigger: Microsoft Exchange (and Hosted Exchange) and POP3. Clicking on the dropdown shows the two possible choices. When a mailbox type is selected, the input parameters for that selection are displayed.
This section describes the properties needed for accessing the selected user’s mailbox using Microsoft Exchange.
MailServer: This property is the host name or FQDN of your Microsoft Exchange mail server OR the URL endpoint of your EWS server (for example, https://mymail.company.com/EWS/Exchange.asmx).
Credentials: This property is used to specify the actual user’s mailbox. Please select a User Account object representing the proper credentials by clicking on the dropdown.
Note: The User Account “username” property must employ UPN (User Principal Name) syntax (i.e. user@company.com) as this will be used to denote the target mailbox.
AttachmentFolder: This property is used to indicate that, if the received e-mail contains attachments, you would like the attachments created in the folder specified. If the e-mail does not have attachments nothing will be created. The filenames of the attachments are taken from the e-mail itself. If this property is omitted, then attachments are not externally saved.
Domain: This property is used when accessing a hosted Exchange server in which the domain needs to be specified along with the Username and Password credentials. If omitted, only the security credentials as specified in the User Account object will be used.
EWS Page Size: This optional property indicates the number of mailbox messages that will be processed at any one time. By default, that value is 50. Specify a higher value if the mailbox will be receiving more than that value at any one time.
Mailbox Folder: This optional property allows you to specify a mailbox folder or sub-folder. By default, the folder "InBox" is used. If specified, the syntax is "ParentFolder\sub-folder", where "ParentFolder" is a Microsoft Well-Known folder name. (EWS only.)
Mark As Read: This optional Boolean property indicates whether you want the messages in the mailbox to be considered to have been read when the trigger is processed. This is very useful when the mailbox is only used for automated processing. By default, mailbox messages are not considered to have been read.
This section describes the properties that may be optionally specified if you need to filter for specific criteria that the mail message is to have for the trigger to be performed.
ExclusiveWords: If specified, one or more words (or phrases), separated by a comma, whose absence in the incoming E-Mail message body is necessary in order to act as a trigger.
From: If specified, indicates the “From” field that must match the incoming E-Mail (multiple addresses can be specified separated by a comma).
HasAttachment: This optional Boolean parameter allows you to filter on whether an e-mail has an attachment. If True is specified, the E-Mail must contain an attachment to be considered. If False, the e-mail must not contain an attachment. If omitted, no attachment requirement is imposed.
InclusiveWords: If specified, one or more words (or phrases), separated by a comma, whose presence must be contained in the incoming E-Mail message body in order to act as a trigger.
Subject: If specified, one or more words (or phrases), separated by a comma, whose presence must be contained in the “Subject” field in order to act as a trigger.
To: If specified, indicates the “To” field that must match the incoming E-Mail (multiple addresses can be specified separated by a comma).
This section describes the properties needed for accessing the selected user’s mailbox using POP3.
MailServer: This property indicates the machine name for your POP3 Mail server. Typically this would be a fully qualified domain name.
Credentials: This property is used to specify the Windows credentials to be used when accessing the mailbox. Please select a User Account object representing the proper credentials by clicking on the dropdown.
Port: This property contains the POP3 port number. By default, 110 is used.
UseSSL: This Boolean property indicates whether SSL (Secure) POP3 should be used. The default is False. Please note that if you set this property to true you will probably also need to change the port number.
ExclusiveWords, InclusiveWords and Subject also support wildcards (asterisk for multi-character wildcard and question mark for single character wildcard). As multiple entries are comma separated, a phrase containing an embedded space is valid and does not require a quoted string. All matches are performed in a case-less manner.
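The matching rules just described (comma-separated words or phrases, `*` and `?` wildcards, case-insensitive comparison) can be sketched as follows. This illustrates the documented semantics only and is not ActiveBatch's implementation:

```python
import fnmatch

# Hedged sketch of the filter matching described above: comma-separated
# words/phrases, * (multi-character) and ? (single-character) wildcards,
# compared without regard to case.
def matches_any(text, patterns_csv):
    text = text.lower()
    for pattern in patterns_csv.split(","):
        pattern = pattern.strip().lower()
        # Surround with * so the word/phrase may appear anywhere in the text
        if fnmatch.fnmatch(text, f"*{pattern}*"):
            return True
    return False

matches_any("Nightly ETL load complete", "load complete,failed")  # True
matches_any("Nightly ETL load complete", "err?r,failure")         # False
```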
![]()
The above figure displays the @Trigger structure variables that are passed back from a successful e-mail event. When multiple files are attached, the "AttachmentFile" variable is a comma-separated list of files stored within "AttachmentPath". In a later release of ActiveBatch, an additional variable named .RawBody was added to the above structure. Where .Body removes all HTML and formatting characters (i.e. newlines), .RawBody does not; all HTML and/or formatting characters are left intact.
System Startup Trigger
The System Startup event will trigger a Job/Plan when the Job Scheduler service is started or restarted. When you select this event, the onStartup value will be set to True. Keep this value, then click OK to save. This is all you need to do when using this event trigger.
![]()
File Trigger
The File Trigger event provides you with the ability to specify a folder, recursive set of folders and/or specific file(s) (using wildcards) in which one or more files are subject to a file operation occurring. When that operation occurs, the event is produced and the Job/Plan is triggered for execution.
Supported file operations through the Filter are: Created, Changed, Deleted and Appeared (renamed). By default, the Created operation is enabled. The File Event is therefore especially helpful when you want to trigger a Job/Plan based on the creation of a file. Appeared is useful when a new file is created in another directory and later moved into the target directory; Windows IIS server uses this technique when downloading a file.
Note: If using the Delete file operation, please note that some Windows facilities (i.e. DOS/CMD) use the short-form filename for these operations. This means you must also use the short form name for the proper pattern matching.
You can specify a specific file or a directory specification. For example, if you want to trigger a Job/Plan based upon the reception of a file through FTP, the trigger will occur only after FTP has populated the file (see note below).
Note: An Exclusive access check is implicitly performed on the target file(s) to determine if the file trigger event may be declared. If this check fails, ActiveBatch will poll the file(s) starting with a one (1) second delay and build to a sixty (60) second delay the longer it takes for the Exclusive access check to be successful.
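The polling behavior in the note above can be sketched as a simple backoff loop. The exact growth curve between the 1-second start and 60-second ceiling is not documented, so the doubling below is an assumption for illustration:

```python
import time

# Hedged sketch of the exclusive-access polling described in the note:
# start with a 1-second delay and grow toward a 60-second ceiling while
# the file is still locked. The doubling growth is an assumption; the
# documentation states only the 1s starting and 60s maximum delays.
def wait_for_exclusive_access(is_locked, max_delay=60):
    delay = 1
    while is_locked():
        time.sleep(delay)
        delay = min(delay * 2, max_delay)   # back off, capped at 60s
```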
The "changed" operation is subject to certain limitations imposed by Windows and other platforms. In particular, file size and date processing may not be timely due to caching considerations (see note below).
Note: If using the "changed" filter, understand that multiple unintended trigger operations can occur (even with ActiveBatch attempting to suppress them). In addition, each operating system handles caching of directories differently, so updates may not be timely or even match the file changes you know are occurring. For this reason, we caution against using this filter, as it can be problematic unless you have experimented with your actual intended use.
Recursive refers to whether the specified directory should also include any nested sub-directories. If enabled, sub-directories are included. When monitoring directories, please note that the ending backslash is required (for example, C:\test\). You can also use wildcards, such as C:\test\*.*.
For monitoring files on non-Windows systems you must specify Queue and User properties. The Queue represents the machine in whose context the “File Trigger” specification will be interpreted (for example, C:\test\ would be a local C drive on that Execution Queue/machine). The User property indicates the security credentials that will be used for file monitoring.
For monitoring files on Windows systems you may specify Queue and User properties. If you omit these properties, file monitoring is performed on the Job Scheduler machine using the Scheduler’s service credentials. If you specify the Queue property, file monitoring is performed on that Execution Queue/machine. If you specify the User property, file monitoring is performed using the specified security credentials.
Note: By default, all file specifications are evaluated from the Job Scheduler machine’s point-of-view. This is the case when the “Queue” property is left blank. If the Queue property is completed, the file specification will be evaluated from the target Execution machine’s point-of-view.
Note: You may use ActiveBatch variables for the “File Trigger” property; however, they are evaluated only once, when the trigger is armed (typically on Job Scheduler startup).
Note: File Triggers performed on Windows use the Directory Change Notification (DCN) facility. This facility has limitations in terms of the number of directories that may be watched as well as the number of file triggers that may be outstanding at any one time. For more information, please read the Knowledge Base articles “File Trigger Session Limitations” and “File Triggers and simultaneous events”. The “File Trigger Session Limitations” article in particular also references the Microsoft article that describes various quotas that may need to be increased. This is especially true if you intend to watch or access over 100 file trigger events. As of V8, a change has been made to improve reliability in the event of a DCN failure. In the event of a DCN failure (for example, a network share was specified and the host sharing that directory lost connection), on resumption of DCN, a check is made to determine “created/appeared” and “deleted” changes. Those file trigger events will then be initiated. Please note that if a file is created and deleted before DCN can be resumed, ActiveBatch will not be aware of the directory changes. For optimum performance, users must ensure that directories to be watched do not contain thousands of files.
Note: File Triggers performed on a non-Windows system use a built-in polling mechanism to determine directory changes. By default the poll is thirty (30) seconds. Users must ensure that directories to be watched do not contain thousands of files for optimum performance.
Note: If you prefix a File Trigger specification with “poll:” (case-insensitive) that will cause Polling logic to be used instead of DCN on Windows systems. “poll:” has no effect on non-Windows systems since that is the only mechanism available.
![]()
In the above example, the @Trigger variable structure contains several useful variables to help identify the specific file that caused the event. Note that .FileName contains the complete file specification, whereas .FileTitle contains just the filename and extension portion. This can be useful if you need to move the file to another location.
File Triggers also support the use of Regular Expressions in a manner similar to that of the Success Code Rule Search String. Prefixing a File Trigger specification with “regex:” will cause the File Trigger specification to be viewed in the context of a Regular expression. For example, regex:c:\test\regpoll[0-9].bat allows for any file containing regpoll0.bat through regpoll9.bat to be included. If you need to also include the poll: prefix, regexpoll: should be specified. File Trigger Regular Expression support is available for Microsoft Windows, UNIX systems and OpenVMS. Please note that some minor differences in the handling of Regular Expressions may be present between OSes due to differences in the underlying RegEx engines used.
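As a sketch of how these prefixes compose, the helper below parses a File Trigger specification and applies regex matching when requested. The helper names are hypothetical; only the prefix strings (“poll:”, “regex:”, “regexpoll:”) come from the text above.

```python
import re

def parse_file_trigger(spec):
    """Split a File Trigger specification into (use_poll, use_regex, pattern).
    Prefixes are case-insensitive, per the documentation."""
    lowered = spec.lower()
    if lowered.startswith("regexpoll:"):
        return True, True, spec[len("regexpoll:"):]
    if lowered.startswith("regex:"):
        return False, True, spec[len("regex:"):]
    if lowered.startswith("poll:"):
        return True, False, spec[len("poll:"):]
    return False, False, spec

def matches(spec, candidate):
    """Illustrative match: regex specs use full-pattern matching,
    otherwise a simple case-insensitive literal comparison."""
    _, use_regex, pattern = parse_file_trigger(spec)
    if use_regex:
        return re.fullmatch(pattern, candidate, re.IGNORECASE) is not None
    return pattern.lower() == candidate.lower()
```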
FTP File Trigger
The FTP File Trigger event provides you with the ability to specify a folder, a recursive set of folders, and/or specific file(s) (using wildcards) on a specific FTP server, and to determine when a file is subject to a file operation. When that operation occurs, the event is produced. For example, if a file is created on an FTP server within \etc\test, an event is produced and the Plan or Job is triggered. File operations include Created, Deleted, and Modified.
By creating an FTP File Trigger event you can avoid the workload of polling an FTP server and instead create a workflow when a file(s) is created, modified or deleted on an FTP server.
![]()
The FTP File Trigger event consists of three (3) sections. The first is the Connection Data. You can either specify the server and security credentials within the trigger (known as “embedded”) or by reference to a special User Account (known as “managed”). Part of the Connection Data is the type of FTP protocol you’ll be using: Standard FTP (which includes FTP as well as FTPS (SSL FTP)) or Secure Shell FTP (SFTP). The second part is the File Specification and Recursion. This area indicates the type of file specification (folder or folder/file) and any wildcards used. Recursion indicates whether sub-directories are to be examined. The last part is the Filters specification. This includes whether the event is to be generated when a file is created, modified or deleted. In addition, you can specify a size parameter as well as a comparison operator to be applied to the desired file size.
Note: This event does perform polling using a global value that is part of the event extension. The default is five (5) minutes but can be changed by the ActiveBatch Administrator.
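Conceptually, each poll compares a new directory snapshot against the previous one. A minimal sketch of that comparison, assuming snapshots are plain {filename: size} mappings (for example, gathered via ftplib's NLST and SIZE commands); the function name is an assumption for illustration:

```python
def diff_snapshots(previous, current):
    """Compare two {filename: size} snapshots of an FTP directory and
    classify changes the way the documented filters do (illustrative)."""
    created = sorted(set(current) - set(previous))
    deleted = sorted(set(previous) - set(current))
    modified = sorted(
        name for name in set(previous) & set(current)
        if previous[name] != current[name]
    )
    return {"Created": created, "Deleted": deleted, "Modified": modified}
```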
Growl Trigger
The Growl trigger provides you with the ability to trigger a workflow based on a specific Growl notification message.
![]()
Hostname: This property represents the hostname, IP address or FQDN of the system housing the Growl server software.
NotificationName: This property represents search criteria for the name (description) of the Growl message. The use of wildcards (asterisks and question marks) is supported.
NotificationTitle: This property represents search criteria for the title of the Growl message. The use of wildcards (asterisks and question marks) is supported.
SearchString: This property represents search criteria for the Growl message itself. The use of wildcards (asterisks and question marks) is supported. If no search criteria are specified, any Growl message will cause a trigger to occur.
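A sketch of how the three wildcard filters combine: omitted criteria match anything. The helper name and the message layout are assumptions for illustration.

```python
from fnmatch import fnmatch

def growl_event_matches(message, name=None, title=None, search=None):
    """Apply the documented wildcard filters (* and ?) to a Growl
    notification; any criterion left as None matches everything."""
    checks = [
        (name, message.get("NotificationName", "")),
        (title, message.get("NotificationTitle", "")),
        (search, message.get("Message", "")),
    ]
    return all(pattern is None or fnmatch(value, pattern)
               for pattern, value in checks)
```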
JMS Event Trigger
The JMS Event Trigger allows you to trigger a Plan or Job based on receiving a JMS message from a selected Queue. The message (both body and properties) can be subject to additional filter criteria that must be met before the trigger action can be performed. JRE V1.8 or later is required to be installed on the Job Scheduler machine for this event to be operational.
![]()
JMS Provider Info: This collection of properties represents the JMS server and software you are attempting to connect to. The dropdown lists the JMS servers that have been tested. A custom setting is available for you to add a new JMS configuration.
![]()
JMS Provider Name: This is the name of the JMS server software.
InitialContextFactoryName: This is the name of the Initial Context Factory class for the JMS server’s JNDI implementation.
Protocol: This is the protocol that will be used to connect to the JMS server.
Machine Name: This is the machine where the JMS server software resides. For TIBCO only, failover is supported by specifying a comma separated list of machine names where a machine name is a legal hostname and optional colon port-number (i.e. server1:3717).
Port Number: This is the TCP/IP port number that will be used for communication.
Jar Location(s): This is the location of the required Jar files necessary to communicate with the JMS server.
JNDI Connection Factory Name: This property represents the JNDI name of a Connection Factory object. A ConnectionFactory object encapsulates a set of connection configuration parameters that have been defined by an administrator.
JNDI Destination Queue Name: This property specifies the destination for the JMS message to be received. This destination object can be a queue or a topic.
Credentials: This property, if specified, provides authentication for the JMS receive. The property represents a User Account object with a username and password that is appropriate for JMS authentication with your JMS provider.
Topic Durable Subscription Name: This property, if specified, indicates the durable subscription name for this topic.
Message Header Filter: This property indicates filter criteria for the message properties that must match for the message to be considered event-able.
Message Content Filter: This property indicates filter criteria for the message content that must match for the message to be considered event-able.
JMX Event Trigger
The JMX Event Trigger allows you to trigger a Plan or Job based on the specification of a JMX attribute. You can further indicate the value of the attribute that must be met before the trigger action can be performed. JRE V1.8 or later is required to be installed on the Job Scheduler machine for this event to be operational.
![]()
JMXServiceURL: This property contains the URL of your JMX server. The format is similar to:
service:jmx:rmi:///jndi/rmi://server-name:port-number/page, where “server-name” is the host name of the JMX server, “port-number” is the port number being used by that JMX server, and “page” is the path being used for JMX connections.
MBeanName: This dropdown lists the mbean names that are housed on the JMX server.
Operations: This selection sheet helper allows you to select those operations you’re interested in monitoring. Currently only Attribute Change is supported.
AttributeName: This dropdown lists the attribute names for the mbean you’ve selected.
Filter: This property, if specified, indicates the value the attribute must be to allow the event to occur.
MSMQ Trigger
The MSMQ Trigger allows you to trigger a Plan or Job based on the reception of a message to a selected MSMQ queue.
![]()
MachineName: This property indicates the name of the machine that is hosting the MSMQ system.
MessageQueueName: This property indicates the name of the Queue that you want ActiveBatch to use for triggering operations.
Twitter Trigger
The Twitter Trigger allows you to trigger a Plan or Job based on a message received by a specified Twitter account using Twitter Authentication. You can further indicate filter criteria for the message itself and whether a trigger action should take place.
![]()
Twitter Credentials: This property is a User Account object with Twitter Authentication enabled. The object must allow proper access to Twitter through a security token.
SearchString: This optional property represents search criteria for the event. When a tweet is received, the search string is compared to determine if the message meets the eligibility criteria. If so, the event triggers the objects. If omitted, any received message will satisfy the event requirements.
Web Service Trigger
The Web Service Trigger allows you to trigger a Plan or Job based on an event generated by a Web Service. Since a Web Service needs an “endpoint” or destination to send its web service message to, this facility creates those endpoints for you. The basic dialog deals with naming the endpoint (and making sure it’s unique), setting its security requirements and specifying an optional filter. Every web service endpoint also provides a Trigger method to allow an ActiveBatch aware web service the ability to trigger objects.
![]()
Identity: This property is used to create a unique endpoint. The created endpoint must be unique Job Scheduler wide (since the Job Scheduler is the publisher of all endpoints system wide). Please note that once you create a reference to the endpoint you should never change this value. Doing so would require you to also change all references to both the Endpoint and WsdlLocation.
EndpointType: This property is used to denote the type of endpoint that will be used. Four (4) options are supported: Basic, Secure, SecureCertificate and SecureUsername. Basic refers to a completely clear-text endpoint with no security authentication required (think http://). The other three “Secure” options all support https: level communications. Secure indicates that no authentication credentials are required, SecureCertificate indicates that a valid client certificate is required to communicate with this endpoint, and SecureUsername indicates that a username and password are required to communicate with this endpoint.
PublishMetadata: This property indicates whether the Job Scheduler will publish the endpoint as a Hosted Web Service. You can check which web service endpoints are published by copying the “Endpoint” property to a web browser. You will then receive a list of all published web service endpoints. A “true” property indicates the endpoint should be published; “false” indicates it shouldn’t be published.
IsGeneric: This Boolean property indicates whether the incoming message must adhere to the message standards imposed by the Wsdl schema or whether the message can be free formatted. A value of “true” indicates that a free formatted message is allowed and a value of false indicates that adherence to the Wsdl is required. This property does affect the setting of the @Trigger variable. A value of “true” will cause the XML Body and Headers to be returned. A value of “false” will result in only the user-specified variables, if any, returned. Note: The sending web service will receive an error if this setting is not adhered to.
XPathRule: This optional property indicates that an XPathRule filter will be applied to the message. If the filter expression is true, the message will be allowed to trigger the object. If omitted, any valid message will trigger the object.
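A minimal sketch of an XPathRule-style gate, assuming the rule passes when the expression selects at least one node in the incoming message. The function name is hypothetical, and Python's ElementTree supports only a subset of XPath 1.0.

```python
import xml.etree.ElementTree as ET

def xpath_rule_allows(xml_text, xpath):
    """Return True when the XPath expression selects at least one node,
    mimicking an XPathRule filter gating a trigger (illustrative)."""
    root = ET.fromstring(xml_text)
    return root.find(xpath) is not None
```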
Endpoint: (Read Only) For convenience the actual endpoint URL is displayed. This URL can be copied into a browser to examine and test the endpoint.
WsdlLocation: (Read Only) For convenience the base Wsdl specification is displayed.
![]()
The above figure displays the @Trigger structure variables that are passed back from a successful Web Service event. The variable VAR value is set by the caller of the Web Service and available to the underlying triggered object.
WMI Trigger
ActiveBatch supports the integrated use of Microsoft Windows Management Instrumentation or WMI. WMI is Microsoft’s implementation of Web Based Enterprise Management (WBEM). ActiveBatch is both an Event Provider and Event Consumer. This means that ActiveBatch can register for any interested events and is notified by WMI when they occur. This section discusses the consumer aspects of ActiveBatch.
![]()
Note: WMI must be active on the Job Scheduler machine for you to issue WMI Event triggers.
ActiveBatch allows Job authors to indicate the events that a Job or Plan may be interested in and can trigger execution of the object when the event occurs. After completing the requested information, click OK to confirm and apply or click Cancel to cancel any addition or changes to the list of event(s).
The dialog box requests that a WMI query be entered. For maximum flexibility, ActiveBatch supports the use of WQL (WMI Query Language, similar in syntax to SQL).
WMI Event. Enter WQL string below: This mandatory field takes a valid WQL statement describing the event you’re interested in. All WQL statements begin with SELECT. You are not restricted in what you can enter; however, you should not specify polling intervals that would adversely impact ActiveBatch and/or system performance. The example above shows a WQL query requesting an event be triggered if the “Telnet” service enters a “stopped” state.
Namespace: You must indicate the namespace to connect to. The Namespace specification is:
\\machine\namespace, for example, ROOT\CIMV2 is the namespace for the local machine. In the above example, ${VM} represents a machine. Note: This variable will be evaluated only once when the trigger is armed.
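For illustration, the kind of WQL statement described above can be assembled as below. The __InstanceModificationEvent class and Win32_Service properties are standard WMI; the builder function itself is hypothetical, and the WITHIN polling interval should stay modest to avoid the performance impact the text warns about.

```python
def service_state_query(service_name, state, poll_seconds=10):
    """Build a WQL event query that fires when the named Windows
    service transitions into the given state (illustrative helper)."""
    return (
        "SELECT * FROM __InstanceModificationEvent "
        f"WITHIN {poll_seconds} "
        "WHERE TargetInstance ISA 'Win32_Service' "
        f"AND TargetInstance.Name = '{service_name}' "
        f"AND TargetInstance.State = '{state}'"
    )
```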
Privileges: This field allows you to add or remove any specific privilege that the selected WMI provider will use to execute your query. Clicking the Add button causes two (2) properties to be shown:
Privilege and Enabled. Privilege is a dropdown list of all the possible privileges and Enabled is a Boolean property that indicates whether the specified privilege should be enabled or not.
User Information: This section allows you to select a User Account to associate with this event. The drop down button allows you to choose between specifying a User Account object or an embedded Username/Password. It is highly recommended to use a User Account object instead of embedding the Username/Password within the Job itself. For the local machine (the Job Scheduler machine) the ActiveBatch Event Framework authentication credentials are used; WMI does not support the specification of different authentication credentials for a “local” machine. For non-Job Scheduler machines you must specify the authentication credentials that will be used for WMI Event processing. As with other portions of ActiveBatch, you can indicate that the username and password are to be saved.
User Account: This property presents a dropdown list of all User Account objects; select one.
Username/Password: This pair of properties represents the embedded username and password.
Authority: (Optional) Server Principal Name.
Authentication Level: (Optional) Authentication Level
Impersonation Level: (Optional) Impersonation Level
Run Job on Event Machine (Generic Queue Only): You can indicate to ActiveBatch that the Job Queue to select is the machine that actually generated the event. For this feature to work properly the Job must be queued to a Generic Queue that contains at least one Execution Queue for the possible event machine. If no valid Execution Queue can be found that matches the event machine the Job will not be run.
HDFS File Trigger
This event trigger is only available via a separately purchased license. The HDFS File Trigger allows you to generate events based on changes to an HDFS folder (and the file(s) within that folder).
![]()
Name Node URL – The URL of the HDFS Name Node.
Authentication – This set of properties is used when executing on the HDFS Name Node. These credentials will be used to authenticate with Kerberos if necessary.
Path – Folder and file specification (including wildcards)
Filter – One or more operations concerning the file; Created, Appeared, Modified and Deleted.
Recursive – This Boolean property determines whether any sub-folders present in the path are examined in a recursive fashion.
Oracle DB Trigger
This event trigger is only available via a separately purchased license. The Oracle DB Event Trigger allows you to obtain database events on a specified Oracle table. The events available are currently: Insert, Update and Delete modifications to a table.
This facility uses the Table’s transaction log file to seamlessly determine committed changes. While no changes are made to your database by ActiveBatch this facility (and the underlying LogMiner usage) does require that minimal supplemental logging be enabled. Please read the section on “Oracle DB Event Trigger” in the “ActiveBatch Installation and Administrator’s manual” for additional information.
Please note that as this facility exposes data within the specified table, ActiveBatch requires that the user requesting this event have a role of “DBA Access”.
![]()
DataSource – This property references the target data source that the Schema and Table name are located on. This property also supports ActiveBatch variables.
Credentials – The object path of a User Account object. Clicking on the “Helper” will cause a tree display of all ActiveBatch containers. You may then select a User Account object. The User Account credentials must have proper access to the target data source. Typically, the credentials will be a valid database username and password for this data source (unless Windows authentication is used, in which case the username/password will be a valid Windows account). This property also supports ActiveBatch variables.
SchemaName – The name of the schema which when used with the TableName identifies the desired table. This property also supports ActiveBatch variables.
TableName – The name of the desired table. This property also supports ActiveBatch variables.
Operations – This property indicates the operation(s) (and optionally a “filter”) for which you want ActiveBatch to declare an event. Valid operations are Insert, Update and Delete; they may be specified by clicking on the property’s dropdown and checking the operations you are interested in.
DictionaryFilePath – This allows LogMiner to start in the context of a pluggable database (PDB) from the CDB level. To create the dictionary file (assuming UTL_FILE_DIR is set):
1. Log in to the PDB where the trigger will be armed.
2. Create a new DIRECTORY or locate an existing one where the dictionary file will be stored on the file system (on the Oracle database server).
3. Generate the dictionary file via EXEC DBMS_LOGMNR_D.BUILD('<NAME>', '<DIRECTORY>', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE)
This is the path that will be used in the Event Trigger.
LogMiner is used because this option prevents the system from modifying or locking your tables, and it reduces performance and file I/O impact. LogMiner is started from the CDB using the dictionary file. You will still need the appropriate privileges to arm the Event Trigger. In the event the trigger fails to arm, an access error will occur.
These changes were specific to 12c+; however, because the DictionaryFilePath property is displayed for 11g as well, it is an optional field for Oracle 11g instances and CDB data sources. The field is required for any PDB data source.
ExtractValues – Depending on the operation you can extract field (column) values from the change record and have them returned within the @Trigger.Values built-in ActiveBatch variable for later usage by the triggered object. For example, if the field ‘Value’ was specified, the following variable specification could be used to access the data: @Trigger.Values.Value. The syntax for this property is to specify one or more field names.
Filter – The filter property allows you to refine your declaration of the event. With no filter specified, an event is declared whenever the specified operation occurs. When a filter expression is specified, the expression must evaluate to true for the operation to be declared an event. This allows very precise refinement of the database change that must take effect for the event to be declared. In the above example, VALUE=’${VALUE}’, the expression tests the table field VALUE against the ActiveBatch variable ${VALUE}. If the expression is true then an event is declared. The expression syntax supported is the same as for constraints (meaning you can use Boolean operators, parentheses, and arithmetic operations where applicable).
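A toy sketch of this substitute-then-compare behavior: ${...} variables are expanded first, then the comparison is tested against the change record. Real filter expressions also support Boolean and arithmetic operators, which this deliberately omits; the function name is hypothetical.

```python
import re

def evaluate_filter(expression, record, variables):
    """Expand ${NAME} ActiveBatch-style variables, then test a simple
    FIELD='value' comparison against the change record (illustrative)."""
    # Replace each ${NAME} with the variable's value.
    expanded = re.sub(r"\$\{(\w+)\}",
                      lambda m: str(variables[m.group(1)]), expression)
    match = re.fullmatch(r"\s*(\w+)\s*=\s*'([^']*)'\s*", expanded)
    if not match:
        raise ValueError(f"unsupported filter: {expanded!r}")
    field, literal = match.groups()
    return str(record.get(field)) == literal
```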
![]()
The above figure displays the @Trigger structure variables that are passed back from a successful Oracle Database event. Note the “Values” sub-structure. These variables are the column (field) and value that created the event. The “Operation” variable indicates that the event was caused by a table insert operation.
Note: On an Update operation the only variable values returned are those which have changed and also been specified in the “Extract” parameter. On a Delete operation, no variable values are returned for the fields within the deleted record/row.
SAP Netweaver Trigger
This event trigger is only available with the SAP Netweaver license (which is separately licensed). The SAP Event Trigger allows you to trigger an ActiveBatch object (Job/Plan) based on any number of supported SAP events.
![]()
Login – This property is a User Account object that provides security credentialed access to an SAP system.
Event – This dropdown lists all the supported SAP events.
Select State – This dropdown lists the event state that is to be considered for the event trigger. Choices are: All – all events since the last time; New – new events since the last time; and Confirmed – confirmed events since the last time.
Change NEW Event Status – A Boolean property that if true will change any event state to “Confirmed”.
Parameters – This optional property allows you to pass parameters to the triggered object (Job/Plan) when the event is triggered.
ServiceNow Incident Trigger
The ServiceNow Incident Trigger allows you to obtain events that occur on a specified ServiceNow instance. This event trigger is only available via a separately purchased license.
![]()
Connection Information: This set of properties describes the ServiceNow instance, security credentials and any proxy that must be used, to connect to the ServiceNow instance.
The properties listed are those within the ServiceNow Incident. You may select specific values by using the helper dropdown. When an event matches those specified, a trigger is generated and the associated Plan or Job is executed.
![]()
The above figure displays the @Trigger structure variables that are passed back from a successful ServiceNow trigger.
VMware Trigger
This event trigger is only available with the VMware license (which is separately licensed). The VMware Event Trigger allows you to obtain events that occur on a specified VMware Host system (and pertain to either the Host and/or Guest Operating System).
The initial portion of the event definition pertains to the selected VMware Host or vCenter system and security credentials for accessing that system. The optional portion consists of selecting the enumerated event and then selecting the event source. The event source can be either a: Virtual Machine, Host or Datacenter. In the example below, we’re interested in declaring an ActiveBatch event when a VmPoweredOffEvent occurs on the Virtual Machine QAVM. When the event occurs the associated Job or Plan will be instantiated and the details of the event are available through the standard @Trigger built-in variable.
![]()
ServerName – Host Name or IP-address of the VMware Host or vCenter system. This property also supports ActiveBatch variables.
Credentials – The object path of a User Account object. Clicking on the “Helper” will cause a tree display of all ActiveBatch containers. You may then select a User Account object. The User Account credentials must have proper access to the VMware Host. Typically, the credentials will be a valid Windows username and password for this system. This property also supports ActiveBatch variables.
Event – This property, accessible through the dropdown, enumerates all the possible VMware events you might be interested in. If none is specified, then all possible events are eligible. The event list is dynamically accessed from the specified ServerName.
User – The object path of a User Account object. Clicking on the “Helper” will cause a tree display of all ActiveBatch containers. You may then select a User Account object. The User Account credentials must have proper access to the VMware Host. Typically, the credentials will be a valid Windows username and password for this system. This property also supports ActiveBatch variables. If omitted, the Credentials specified are used.
EventSource – This property allows you to select the source of the event. VMware currently supports three (3) types of events: VirtualMachineEvent, HostEvent and DatacenterEvent. Depending on your selection an additional property is displayed requesting the name of the underlying machine (either virtual machine, host or data center).
![]()
Depending on the event captured, ActiveBatch will pass information through the built-in @Trigger structure variable.
These values can be retrieved through ActiveBatch string substitution for use within the triggered ActiveBatch Job or Plan.
Constraints
A constraint (or dependency as it is often called) is a specification or condition that must be true before a triggerable object (Job, Plan or reference) is allowed to execute. An object triggered to run will not do so unless all the constraints (you can set more than one) have been met.
Constraints are configured on a Job or Plan's Constraints property sheet. The constraint properties are the same for both jobs and plans, with the only difference being there are two additional properties on the Job's Constraints property sheet (Dispatch Alert Delay and Maximum Dispatch) that are not present on the Plan's Constraint property sheet.
Constraints are not triggers. There is, however, a trigger type named Constraint Based Scheduling (CBS), configured on a Job or Plan's Triggers property sheet, that uses the general constraints discussed here. CBS should not be confused with the constraints described in this section, which are conditions that must be met before an already triggered object can run.
ActiveBatch supports four (4) general constraints: File, Job, Variable, and Resource. Additionally, it supports two (2) Date/Time constraints: the Date/Time exclusion list and Calendar object associations. Below is an image depicting a list of general constraints and action buttons that allow you to add, edit and remove general constraints. Additionally, the general constraints section includes the Constraint Logic property, properties associated with a constraint failure, and a checkbox enabling the Business Day Semantics property.
When you click on the Add button, you will be prompted to select one of the 4 types of general constraints. Depending on which one you choose, the appropriate dialog window will open, providing you with additional property settings described in this section (each of the 4 general constraints is described in detail below). Please note you can add multiple constraints for any given Job or Plan, which can include a mix of the 4 general types, all of the same type, etc. In the image below, a Job and a File constraint have been configured.
![]()
Let's look at the properties that are not specific to any particular general constraint type.
Use Business Day Semantics: This Boolean property indicates that this object (Job or Plan) is to use a Business Day instead of a normal calendar day. By default, a calendar day beginning at 0000 and ending at 2359 defines a day period. If Business Day Semantics is enabled, then the business day is a 24-hour period, established by an ActiveBatch Administrator, whose start time is something other than 0000. Please see your ActiveBatch Administrator for the Business Day definition that governs your system. It should be noted, however, that a Business Day, even though it spans past midnight, is still considered one day. For example, January 1, 0600 (the Business Day start) through January 2, 0559 (the Business Day end) is all considered January 1 in terms of a business day.
Constraint Logic: This section indicates how the various listed general constraints should be checked and in what order (the evaluation is done from left to right). When you save new constraints, the constraint label is automatically added to the Constraint Logic property. However, additional information may be required in the Constraint Logic property (for example, a comparison operator and value when using certain types of variable constraints). You can specify comparison operators, Boolean operators and parentheses to ensure that the constraints match your expectations. Boolean logic operators, in English or VBScript style, or arithmetic operators may be used (all arithmetic operations are integer based). For example, “and” or && may be specified. A unique label identifies each constraint. In the above example, the “JOBA” and “DataFile” constraints must both be met. See Constraint Logic Operators for a complete list of operators. Please note that you should exercise caution when performing logical operations on strings. Other than “0”, “1”, “False” and “True”, the behavior when using logical operations on strings is undefined.
Note: When a constraint is removed from the "General" constraints list using the Remove button, you must always ensure that you also remove the associated label referencing that constraint (and its additional associated logic, if any) from the Constraint Logic property. Missing constraints whose labels remain in the Constraint Logic property are treated as false.
Note: A constraint in the "General" constraints list will be ignored if its label is not present in the Constraint Logic property.
If constraint logic fails: There are a few fields that control what actions should be taken if one or more constraints fail. In the above image, “JOBA” must complete successfully and the file c:\Temp\Temp.dat must be present, be at least 1000 bytes in size and created within the last five (5) hours - for this constraint to be satisfied. If you look at the bottom of the figure you’ll see an “If constraint logic fails” specification which indicates that the system should wait up to 15 minutes to determine whether the constraint failure has resolved itself.
Fail this Job/Plan: When checked, ActiveBatch fails the Job immediately if the constraint is not met after the trigger occurs. It will fail with a Failed Constraint state, where State is a column present in various instances views.
Wait: This indicates that the system should wait (the default behavior) rather than fail the Job immediately. How long to wait is determined by the paired controls: Check every <number> <units> for <number> <interval>. Units is one of the following: Hours, Minutes or Seconds (the legal range of the number depends on the unit specified). Interval is one of the following: Days, Hours, Minutes, Seconds, Times or Forever. The default recheck interval is “Check every 2 minutes for 10 minutes”. Together, these controls determine how often the system checks whether the constraint is satisfied, and for how long. An instance whose constraint is not initially met will go into a Waiting Constraint state. If the constraint is not met within the specified time frame, the instance will fail with a Failed Constraint state, where State is a column present in various instances views.
Note: Job Scheduler performance can be negatively impacted by frequent constraint logic checks, especially if multiple jobs are waiting on constraints at the same time. Every failed constraint check causes a round of instance preprocessing logic to run. This includes the Job Scheduler communicating with the ActiveBatch back-end database. For example, configuring a constraint check with a frequency of every couple of seconds and a duration of hours and days would not be recommended. This is especially true if there are many other jobs waiting on constraints at the same time, also configured for frequent constraint checks. It is recommended you find the right balance when establishing constraint logic. Use the largest check interval with the shortest duration that is practical for your workflow.
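To see why the note above matters, it helps to count the checks a given Wait configuration produces. The sketch below assumes each failed check costs one round of instance preprocessing (including a database round trip), as described above.

```python
def recheck_count(every_seconds, duration_seconds):
    """Number of constraint rechecks (and hence Scheduler/database
    round trips) a Wait configuration causes if the constraint
    never resolves within the duration."""
    return duration_seconds // every_seconds

# Default "check every 2 minutes for 10 minutes": 5 checks.
print(recheck_count(120, 600))           # 5
# An aggressive "every 2 seconds for 2 days" setting: 86,400 checks.
print(recheck_count(2, 2 * 24 * 3600))   # 86400
```

The second configuration illustrates the imbalance the note warns against: tens of thousands of preprocessing rounds for a single waiting instance.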
| Operator | Description |
| --- | --- |
| + | Addition |
| - | Subtraction |
| * | Multiplication |
| / | Division |
| % | Modulo |
| ^ | Raise to power |
| && | Logical AND |
| AND | Logical AND |
| \|\| | Logical OR |
| OR | Logical OR |
| != | Not Equal |
| <> | Not Equal |
| NOT, ! | NOT or Complement |
| == | Logical Equal |
| = | Equal |
| >= | Greater than or equal |
| > | Greater than |
| <= | Less than or equal |
| < | Less than |
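The evaluation semantics can be sketched with a toy evaluator. This is purely illustrative: it leans on Python's own operators, handles only a subset of the operators listed (AND/&&, OR/||, NOT, <>), and is not ActiveBatch's actual expression parser. The labels JOBA and RecordCount are the examples used earlier in this section.

```python
import re

def evaluate_logic(expr, labels):
    """Toy evaluation of a constraint-logic expression once each
    constraint label has been resolved to a value. A sketch of the
    semantics only, not the Scheduler's actual parser."""
    # Normalize English / VBScript-style operators to Python's.
    expr = re.sub(r'\bAND\b|&&', ' and ', expr, flags=re.I)
    expr = re.sub(r'\bOR\b|\|\|', ' or ', expr, flags=re.I)
    expr = re.sub(r'\bNOT\b', ' not ', expr, flags=re.I)
    expr = expr.replace('<>', '!=')
    ns = dict(labels)
    # Labels with no resolved value are treated as false, per the note above.
    for name in re.findall(r'\b[A-Za-z_]\w*\b', expr):
        if name not in ('and', 'or', 'not'):
            ns.setdefault(name, False)
    return bool(eval(expr, {"__builtins__": {}}, ns))

# "JOBA" met and a variable constraint returned 12 records:
print(evaluate_logic("JOBA AND (RecordCount > 5)",
                     {"JOBA": True, "RecordCount": 12}))   # True
print(evaluate_logic("JOBA && (RecordCount > 5)",
                     {"JOBA": True, "RecordCount": 3}))    # False
```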
Note: You can force an instance to run that is waiting on constraint(s) using the "force run" operation. You can also manually trigger an object and ignore constraints by checking the appropriate "ignore" options in the Trigger (Advanced) operation.
Instance Constraint
An Instance constraint is one where a previous Plan/Job must have executed to completion before this instance can be allowed to execute. The author of the constraint can further indicate whether the instance must have completed successfully, failed or simply completed (where success or failure is not considered).
To add an Instance constraint, click the Add button, then select Job Constraint (it is named Job Constraint but Plans can be used as well). The following Job Constraint dialog appears:
![]()
The information requested is to identify the Job or Plan that the current Job or Plan will be dependent on - and to populate other associated properties, described below.
Label: This mandatory field names the Job Constraint. This label must be unique within the Plan or Job’s usage. If <AutoAssign> is used, the label will consist of the Job or Plan’s label. For example, /CaseStudy2/JobA would yield a label of JobA (as depicted in the above image).
Job: This mandatory field contains a dropdown box listing all known Jobs/Plans by name. Select the Job or Plan that the current Job or Plan will depend on before it can run.
Type: This mandatory field indicates whether the specified Job must complete successfully (the default), must complete in failure, or just complete. Failure is a less common configuration, but there are scenarios where the current Job should only run when the dependency Job fails.
Instance: This field indicates how current the specified Job instance must be to consider the Job meeting the Type property. Possible choices are defined below.
Exact Active means either the currently active instance or the last scheduled instance. This is the default and most precise setting. Jobs or Plans that are executed within a single
always adopt the Exact Active scope.

Exact Active Today Only refines the Exact Active scope further by limiting instance checking to “today”. Today is defined as the standard 24-hour period beginning at midnight (unless this object is using Business Day Semantics, in which case the period begins based on the StartBusinessDay configuration property). The instance must have been created today; however, it does not have to actually begin execution today. This scope allows Job/Plan constraints to be considered part of today’s business run even though the actual execution of that run could take several days. Note: This scope is only applicable when the target specified is within another Plan or batch run (i.e. outside the current batch run).
Last Completed means the last completed instance as specified by a user provided time period. When this dropdown is selected, the time control labeled “within the last” becomes active and you can set the days and hours/minutes as a time period for ActiveBatch to determine whether a completed instance meets these requirements.
Not Active means that the selected Job/Plan is not currently running. If the specified Job/Plan is a part of the workflow, the scope is limited to the current batch run. If the specified Job/Plan is not a part of the workflow, the scope is not limited and a simple check is made to determine whether an instance is active. The “Not Active” scope is very similar to “Exact Active”, with the single notable exception that a check of the previously completed instance is not performed.
All Instances includes the preceding settings and expands the scope to include any instance of the Job that completed. This is the most flexible setting.
Ignore Constraint if Job/Plan has/is not run or not scheduled to run, today: By default, all specified constraints must be met. This can be an issue when you need to constrain against an object in another Plan which may have a different schedule. For example, JobA, which runs daily, needs to be constrained against a Plan named “MonthlyPlan”; however, as the name implies, the Plan only runs once a month while JobA runs daily. If a “normal” constraint is specified, JobA will wait even when it shouldn’t. This attribute, when enabled, refines the constraint logic so that a constraining Plan/Job that has not run, is not currently running, or is not scheduled to run today is ignored. Today is defined as the standard 24-hour period beginning at midnight. If specified, using the above example, JobA will only be constrained on the day MonthlyPlan is actually scheduled to execute. On other days, the constraint will be ignored. Please note that this attribute is ignored if the object specified in the constraint is within the same
.
Note: If the constraint logic fails and the Wait, Check every... option is enabled, the recheck logic kicks in. The system reevaluates the constraint logic based on the frequency and duration configured. The system also forces a reevaluation of the constraint logic as the Job(s)/Plan(s) the constrained Job is waiting on complete. This is true because the system knows about its own jobs (it's not checking an external resource, like it does with a file, variable or dynamic (active variable) Resource Constraint). Therefore, as soon as the constraint Job(s)/Plan(s) complete, a constraint logic recheck occurs. The Job will not have to wait for the next recheck interval. This means that your recheck interval does not need to be overly frequent, due to the forced recheck.
File Constraint
A file constraint allows you to specify what file(s) must be present or absent in order for the constraint to be met. To add a File Constraint, click the Add button, then select File Constraint. Alternatively, you can select an existing file constraint, then click the Edit button. Below you see the dialog associated with editing an existing file constraint.
![]()
The information requested in this dialog box is primarily details about the file.
File Specification: This mandatory field indicates the file that the Job or Plan is dependent on, before it can run. The file specification must be complete and can represent a local or UNC file specification. You can specify wildcard characters. The characters must be added as per the execution machine’s operating system’s requirements. Please note that local represents the execution machine since all file dependency checks are performed in the Job’s security context on the execution machine. This means that you must have security access to the file. (Variable Substitution supported).
Check for File Present/Absent: This radio button indicates whether the file must be present or absent. The default is present.
The following checkboxes allow further refinement of the file constraint check.
If enabled, File must be available for exclusive access means that no other process can be accessing the file. If a process is accessing the file, the dependency will fail. An example of when to use this might be expecting a customer to FTP a file into your production system: you don’t want to start the Job until the file has finished populating.
If enabled, File must be at least n bytes means that the present file must be at least n bytes in size in order to pass the file constraint check. This is particularly useful when a zero (0) byte file should be considered a file constraint check failure.
If enabled, File should have been allows you to perform a date validation on the specified file. You can choose Created, Last Accessed or Last Written dates, as well as Before or Within and a relative day/time range. The relative time range can be expressed in days, hours and minutes from the initial file dependency check start time. This option allows you to discriminate between “old” files that just happen to still be present and newer files that should have been created.
If enabled, the If Wildcard spec… option allows you to further refine wildcard processing by indicating whether ALL files must meet the above checking criteria or just the first matching file. By default, the first file to meet the above criteria will cause the dependency check to succeed.
Queue and User properties may be specified when you want to check a File Constraint that is actually present on another machine (in particular, if that machine is another OS platform). The Queue property, if specified, indicates the Execution Queue (and machine) in whose context the file constraint is checked. Similarly, the User property represents a User Account object whose credentials are appropriate for the Execution Queue specified and will pass the authentication necessary for accessing the file and directory specified.
Note: By default, File Constraints are performed on the target Execution machine for Job objects and on the Job Scheduler machine for Plan objects. For Jobs, File Constraints are checked using the security credentials of the Execution User. For Plans, File constraints are checked using the security credentials as noted in the Plan’s “Execution” properties. If this property is omitted, the File Constraint will fail.
Variable Constraint
A Variable Constraint lets you to create an Active Variable from a built-in data source, then use the return variable value for comparison purposes, to determine if the constraint is met. To add a Variable Constraint, click the Add button, then select Variable. The following dialog appears:
![]()
Using the above dialog, configure the desired Active Variable. Variable usage within the constraint should not be confused with variable substitution. In other words, when you configure variable(s) as a constraint, the system does not add the standard curly brace variable syntax in the Constraint Logic property (it just adds the variable constraint's label). Reminder: The label for any type of new constraint is automatically added to the Constraint logic when the constraint is saved.
In the above image, an active variable constraint is defined. MainFolderExists checks whether the directory C:\MainFolder is present. If it is, a Boolean value of True (1) is returned. Otherwise, a Boolean value of False (0) is returned. In this example, the Constraint Logic property would simply be: MainFolderExists.
Let's say another variable constraint is added (in addition to the above-described variable constraint) using the SQL query active variable. The query retrieves a database table record count that is then used to determine if there are enough records to satisfy the constraint. If the variable label is “RecordCount”, then AND RecordCount is what would be added to the existing Constraint Logic by the system after saving the new variable constraint. It is up to you to enter the comparison portion of the constraint logic (since RecordCount doesn't return a simple True or False value). For example, the Constraint Logic property would look something like this: MainFolderExists AND (RecordCount > 5). Both conditions must evaluate to true to satisfy the constraint.
Note: By default, the AND operator is automatically added to the Constraint Logic property when you add multiple constraints. You can manually change this to another supported operator, such as OR.
Note: All Active Variable constraints require security credentials to access the data source. If the Execution User’s credentials (the default credentials used) are not appropriate for Windows, you must specify alternative credentials in the Variable constraint.
User Input Variable Constraint
A special constraint is an “Interactive” constraint. An Interactive constraint is used when you need to request information and/or pause a Job/Plan mid-stream.
To create an Interactive constraint, create a variable constraint as an active variable using the “UserInput” action.
![]()
The above variable named “input” uses the UserInput action (an active variable type) which is used during a Respond operation to format and request data for the variable. The “waiting-for-the-information” portion is performed as part of the constraint. In the above example, the variable “input” requests “text” from the user - displaying a question (“Enter Database to attach…”).
Note: Proper use of the UserInput active variable requires that you allow some period for a Wait clause (Wait. Check every... property). This operation will not work properly if you fail the Job immediately on the constraint failure.
![]()
The variable “input” is checked, as part of the constraint logic, for the value “DB”. Unless the user enters that value, the constraint will not be satisfied.
Resource Constraint
A Resource represents a finite value that is to be shared among other jobs and plans. When the object triggers, the Plan/Job attempts to access the resource it needs, based on how the Resource Constraint is configured. If it cannot access the resource, the instance will fail or wait, depending on the constraint's failure logic (applicable to all general constraints).
ActiveBatch resources are numeric by definition. For example, the Resource may be a static number that represents the maximum number of jobs of a particular type that can run at the same time. More dynamic resources might be the amount of free disk space a particular system has and the fixed amount needed by this Job. If the required amount of disk space isn’t available, the Job shouldn’t run. In order to configure a Resource Constraint, you would first need to create a Resource Object, since you must specify one in the Resource Constraint, as depicted in the image below (see the Resource Object property).
To add a Resource Constraint, click the Add button, then select Resource Constraint. The following dialog appears:
![]()
In the image above, the Resource Constraint labeled “FreeSpaceCDrive” references the dynamic Resource Object (named DiskSpace) which corresponds to the amount of free space on drive C: (in megabyte units). This particular Job needs 100 megabytes of free space (as per the Units needed property) before being allowed to run (assuming any other constraints are met). For Constraint Logic purposes, the system will add the label FreeSpaceCDrive (after you click OK), and that is all that is needed (no comparison operation is required, since the "Units needed" is specified in the Resource Constraint itself). If the Resource constraint is met, the label FreeSpaceCDrive evaluates to true; if not, false. When true, 100 (megabytes) is then subtracted from the resource.
Note: If the constraint logic fails and the Wait, Check every... option is enabled, the recheck logic kicks in. The system reevaluates the constraint logic based on the frequency and duration configured. When you are using a static Resource Constraint, the system also forces a reevaluation of the constraint logic when a Job that was allocated unit(s) has completed and returned the unit(s). This is true because the system keeps track of its static units (it's not checking an external resource, like it does with a file, variable or dynamic (active variable) Resource Constraint). Therefore, as soon as a Job returns its static resource unit(s), a constraint logic reevaluation takes place. The Job will not have to wait for the next recheck logic interval. This means that your recheck logic interval does not need to be overly frequent due to the forced recheck.
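The allocate-and-return behavior of a static Resource described in the note can be modeled as an all-or-nothing pool of units. This is a toy model, not ActiveBatch code; the unit counts are hypothetical.

```python
import threading

class StaticResource:
    """Toy model of a static Resource object: a fixed pool of units
    that jobs draw from when dispatched and return on completion."""
    def __init__(self, total):
        self._sem = threading.Semaphore(total)

    def try_acquire(self, units):
        # All-or-nothing grab of the requested units.
        got = 0
        for _ in range(units):
            if self._sem.acquire(blocking=False):
                got += 1
            else:
                for _ in range(got):
                    self._sem.release()
                return False  # constraint not met; the job waits or fails
        return True

    def release(self, units):
        # A completing job returns its units, which (per the note)
        # forces an immediate constraint reevaluation for waiters.
        for _ in range(units):
            self._sem.release()

pool = StaticResource(10)
print(pool.try_acquire(8))   # True: 8 of 10 units allocated
print(pool.try_acquire(5))   # False: only 2 units remain
pool.release(8)              # job completes, units returned
print(pool.try_acquire(5))   # True
```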
Date/Time Constraints
The Date/Time constraint lets you specify when triggerable objects should not run, even if a trigger occurs. Two types of date/time constraints are provided: Exclusion (List) and Calendar, as depicted in the image below.
![]()
The Date/Time Exclusion List constraint allows you to indicate a day, specific date (or date range), and time(s) when a Plan or Job is not allowed to execute. This means that should a Plan or Job trigger on a date/time specified in the exclusion list, the Plan or Job will not execute. These exclusions are set on a per Job or Plan basis. The Calendar constraint uses the shared Calendar object to filter triggers. A common use is to add holiday dates to a Calendar object, then associate the Calendar to multiple Jobs and/or Plans. The holiday dates indicate when the Plan or Job should not run, even if a trigger occurs.
Exclusion List Constraint
The exclusion list has two (2) properties: Date and Time. These are the date(s) and time(s) the Job/Plan should not run if triggered. The date can be a specific date or date range, or any day(s) Monday through Sunday. The time can be all day, or a time range (e.g. 1:00 PM to 2:45 PM). Using the exclusion list, you could specify that a Job scheduled to run every 5 minutes should not run at 3:05 AM, or not run on Mondays. The figure below is the dialog box that appears when you add or edit a Date/Time exclusionary period. You can have more than one exclusionary period for any given Job or Plan.
![]()
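A single exclusion-list entry can be sketched as a predicate over a trigger time. This is illustrative only: it models just the weekday-plus-time-window case, with "all day" meaning no time window.

```python
from datetime import datetime, time

def excluded(ts, day=None, start=None, end=None):
    """Check one exclusion-list entry: an optional weekday name plus
    an optional time window (all day when no window is given)."""
    if day is not None and ts.strftime("%A") != day:
        return False
    if start is not None and not (start <= ts.time() <= end):
        return False
    return True

# A Job triggering Monday 13:30 hits a "Monday 1:00 PM - 2:45 PM" entry:
mon = datetime(2024, 1, 8, 13, 30)   # a Monday
print(excluded(mon, day="Monday", start=time(13, 0), end=time(14, 45)))  # True
print(excluded(mon, day="Tuesday"))                                      # False
```

If any entry in the list matches the trigger time, the instance does not execute.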
Calendar Constraint
The Date/Time Calendar constraint is used when a Plan/Job is only allowed to execute on business days. The Calendar Object acts as a filter constraining triggers to only operate on business days. Holidays and non-working days (typically weekends) would not be considered business days. Therefore, what you add to a Calendar object are non-business days and/or holidays. You can associate one or more Calendar objects as constraints to the Plan/Job.
As an example, assume a Job is configured to trigger Monday through Friday, using a Schedule object. A holiday is set to fall on a Monday. Add the Monday holiday to the Calendar object and associate the Calendar to the Job. When the holiday date arrives, the Job will not run.
Alternatively, you can also associate a Calendar object with a Schedule object. Please see a discussion about this in the Schedule Object section. This topic only discusses how the Calendar object works when it is associated to a Plan/Job on the Constraints property sheet.
To add an existing Calendar object to the Calendars list, click the Associate button. An "Associate" window will pop up, allowing you to navigate to your desired Calendar object. Click the checkbox to the left of the Calendar name, then click OK. The Calendar will be added to the list of Calendars. You can also select an existing Calendar to edit it, or disassociate it. Additionally, you can click the New button, which will pop up a window allowing you to select the container to place the new Calendar in. After selecting the container, the property sheets for the new Calendar will be tabbed in the Main view. Configure the Calendar, then save it. The Calendar will be added to the Calendars list, and it will be visible in the container you previously selected.
On Business Day - This property is located under the Calendars list. If a holiday date occurs on a day that the object normally is triggered, you can opt to run the Job on a different day - either the next day or the previous day. You can also skip the run (the default selection). If you choose UseNext or UsePrevious from the dropdown list of options, the triggerable object will be scheduled to run on the next or previous business day, respectively. If it is already scheduled to run on that business day, it will run as usual and the next/previous selection will be ignored (it won't run twice).
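The Skip/UseNext/UsePrevious choices can be sketched as a date adjustment. This is an illustrative model: it treats weekends as non-business days for simplicity, and it omits the "already scheduled on that day, don't run twice" refinement described above.

```python
from datetime import date, timedelta

def adjust_for_holiday(run_date, holidays, policy="Skip"):
    """On Business Day sketch: when a scheduled date lands on a
    non-business day, skip the run (default) or slide it to the
    next/previous business day."""
    def is_business(d):
        return d.weekday() < 5 and d not in holidays  # weekends excluded

    if is_business(run_date):
        return run_date
    if policy == "Skip":
        return None               # the run is skipped (the default)
    step = timedelta(days=1 if policy == "UseNext" else -1)
    d = run_date
    while not is_business(d):
        d += step
    return d

holiday = date(2024, 1, 1)        # a Monday holiday
print(adjust_for_holiday(holiday, {holiday}, "UseNext"))      # 2024-01-02
print(adjust_for_holiday(holiday, {holiday}, "UsePrevious"))  # 2023-12-29
print(adjust_for_holiday(holiday, {holiday}))                 # None
```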
Instance Restart and Constraint Logic
This section discusses what happens to Constraints when an instance is restarted. Instances can be restarted automatically through Completion properties or via the Restart operation. When an instance is restarted, the following constraint rules apply:
The only variables that are re-evaluated are those marked as Volatile and those Active Variables that have never been evaluated before.
FileConstraint, ResourceConstraint and JobConstraint are also re-evaluated.
UserInput is only re-evaluated if the operator checks the “Use Latest Template Properties” checkbox on a Restart operation. Otherwise, the UserInput is considered as being met, if a value was entered, and no new input is requested as a result of the restart.
Instance Dispatch Options
Instance Dispatch options are applicable to Jobs only. The properties listed below are at the bottom of a Job's Constraints property sheet.
Both properties relate to events that will occur if a Job triggers but has not yet been dispatched to an Execution Agent to run. The reason the Job has not been dispatched does not matter (waiting constraint, waiting queue busy, etc.). The only thing that matters is the Job triggered, an instance has been created, and it has not been dispatched to an Agent to run yet. See below for more details.
Dispatch Alert Delay
This property is specified in minutes, and it is the amount of time that can pass before an alert goes out indicating that a Job instance has not yet been dispatched to an Agent to run. The factory default value is 5 minutes. A value of 0 indicates that any delay is acceptable and not subject to an alert. Note that you must configure the alert for a notification to go out; the Alert type to configure is "Job/Plan Delayed in Starting". An email (or another supported notification type) can be used to alert interested ActiveBatch operator(s). As a final note, an instance's audit trail will include a "Late" audit if the amount of time specified in this property passes. If an alert was configured and went out successfully, you will also see a "Notification sent" audit in the instance's audit trail. No action is taken against the instance itself. This is simply a notification method so you can be advised that jobs are not moving as expected, which could cause problems if left unchecked.
Maximum Dispatch
This property is specified in days, hours, minutes, and seconds. It is the amount of time that can pass before a Job is automatically canceled by the system because the instance has not been dispatched to an Agent within the specified time frame. By default, no time is set, therefore no jobs will be canceled for this reason unless you enter a time, and the time entered passes.
As an example, let’s say a Job is queued for execution at 0800 with a 1 hour maximum dispatch time. At 0900, if the Job has not transitioned to an executing state, the Job is aborted. If you wish to tie an alert to this condition, use the "Job/Plan Exceeded Dispatch Time" alert type. It is not required that you send out an alert, but most likely there would be ActiveBatch operator(s) interested in knowing if a Job was auto-canceled by the system. When a Job is canceled for this reason, the instance's audit trail will state "Dispatch time exceeded", and so will the instance's Exit Code Description (the State will be listed as "Aborted").
Note: Maximum Dispatch is particularly useful when a long running Job must begin to execute by a certain time in order to meet service level agreements or to avoid impacting other aspects of the business.
Note: The two options described above are not mutually exclusive. For example, you may have a Dispatch Alert Delay configured for 30 minutes (tied to an alert), and you may have a Maximum Dispatch time configured for 60 minutes. This provides operators with 30 minutes, after the alert delay notification is received, to try to resolve the issue - before the system auto-cancels the Job.
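The interplay of the two dispatch options can be sketched as a simple state check on a not-yet-dispatched instance. This is an illustrative model only; the 30/60 minute values are the hypothetical configuration from the note above.

```python
from datetime import datetime, timedelta

def dispatch_state(queued_at, now, max_dispatch=None, alert_delay=None):
    """Sketch of the two dispatch options for a not-yet-dispatched
    instance: a 'Late' alert after alert_delay, an abort after
    max_dispatch. None disables the corresponding option."""
    waited = now - queued_at
    if max_dispatch is not None and waited >= max_dispatch:
        return "Aborted"   # audit: "Dispatch time exceeded"
    if alert_delay is not None and waited >= alert_delay:
        return "Late"      # alert goes out; the instance keeps waiting
    return "Waiting"

q = datetime(2024, 1, 1, 8, 0)    # queued at 0800
print(dispatch_state(q, q + timedelta(minutes=40),
                     timedelta(minutes=60), timedelta(minutes=30)))  # Late
print(dispatch_state(q, q + timedelta(minutes=60),
                     timedelta(minutes=60), timedelta(minutes=30)))  # Aborted
```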
Execution
This selection provides the Job’s execution properties.
![]()
Working Directory: This optional field can contain the initial directory associated with the Job when it is executed. The directory specification must be accessible from the execution machine. By default, for Windows systems, the user’s TEMP directory specification is used. By default, for non-Windows systems, the user’s Home directory is used (Variable Substitution supported).
Queue Priority: This field indicates the priority of the Job within the ActiveBatch system. The priority is used when queuing a Job for submission to a named Queue. Values from 1 to 100 are acceptable. The higher the value, the higher the priority. The default for this field is 10.
O/S Priority: This field indicates the Operating System priority that should be assigned to the Job. For Windows, values of Low, BelowNormal, Normal, AboveNormal, High and Realtime are used. For non-Windows systems, a numeric value from 1 to 32 may be used.
Target CPU Platform: This property indicates the platform type you expect to run this Job on. The default setting indicates that ActiveBatch should use the same platform as the Execution Agent. For example, if the Execution Agent is 32-bit, the Job will run in a 32-bit environment. x86 indicates a 32-bit platform (regardless of how the Execution Agent is configured) and x64 indicates a 64-bit platform (this includes Itanium).
Processor Mask: If enabled and the execution machine is multiprocessor capable, the mask is used to indicate which processors this Job can use. The processor mask only applies to the main Job step (when using a Process Job type). The keyword ALL means all processors should be used (default). To enter specific processors, specify the CPU numbers as a comma-separated list, with dashes for ranges. For example, 0 means CPU 0 should be assigned; “0,2,5-7” means CPUs 0, 2, 5, 6 and 7 should be used. This property may not be supported by all operating systems.
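The mask syntax just described can be sketched as a small parser. This is illustrative only; it shows how an entry like “0,2,5-7” expands to a CPU set, not how the Agent applies the affinity.

```python
def parse_processor_mask(spec):
    """Parse a processor-mask entry like "0,2,5-7" into the set of
    CPU numbers it names. "ALL" (the default) means every processor."""
    if spec.upper() == "ALL":
        return None  # all processors
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))  # inclusive range
        else:
            cpus.add(int(part))
    return cpus

print(sorted(parse_processor_mask("0,2,5-7")))  # [0, 2, 5, 6, 7]
```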
Load User Profile: If enabled, for Windows Execution Agents this will also load the user’s profile as part of logging the Job into the system. This is useful if the Job expects to access the Registry specifically for that user (i.e. HKEY_CURRENT_USER). If your Job won’t be accessing that registry hive or user profile environment variables, we recommend that you disable this property.
If enabled, for UNIX/Linux Agents, this will load your user profile prior to your Job’s execution. For Script type Jobs, the shebang syntax is used to select a shell and also what profile to use when looking through .abatprofiles. This provides additional flexibility when selecting a profile to use. For Process type Jobs, the shebang syntax may not be provided in which case the user’s default shell is used.
If Job Active: This field indicates what action should be performed if an instance of this Job is already running.
Skip: If selected, skip execution of this instance. In other words, only one instance can run.
Run Multiple: If selected, run the instance. This means multiple instances of the same Job can be running.
The additional value Maximum number of active instances further allows you to indicate the maximum number of simultaneously running instances of the Job. Zero (0) means unlimited. When maximum instances are reached allows you to further refine your processing. You can elect to skip that run or wait a period of time (in seconds) before the run will be skipped.
Wait for: If selected, this instance is to wait, for a specified time (in seconds), before this run is skipped. While you can specify zero (to wait forever), we recommend you specify a non-zero value.
Note: This set of properties also indicates the maximum number of instances that can be instantiated within a Plan and within a batch run (or if at the root level globally).
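The If Job Active decision can be sketched as follows. This is an illustrative model: the return values are stand-ins, and the wait-then-skip timeout behavior is collapsed into a single "skip-or-wait" outcome.

```python
def on_trigger(active_count, if_active="Skip", max_instances=0):
    """If Job Active sketch: Skip allows only one running instance;
    Run Multiple allows concurrency up to max_instances (0 = unlimited)."""
    if if_active == "Skip":
        return "run" if active_count == 0 else "skip"
    if max_instances and active_count >= max_instances:
        return "skip-or-wait"  # skip now, or wait for a slot then skip
    return "run"

print(on_trigger(1))                                   # skip
print(on_trigger(3, "Run Multiple", max_instances=4))  # run
print(on_trigger(4, "Run Multiple", max_instances=4))  # skip-or-wait
```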
The Logging checkbox allows you to enable or disable the logging of standard output and standard errors when a Job executes. When Logging is enabled, standard output and errors are captured in the Job log file, if any such information is generated during the execution of the Job. If the Logging checkbox is not enabled, standard output and errors are not captured (no log files are created). Log files are generated for Job and Job References only. It is recommended you keep logging enabled for troubleshooting purposes.
Centrally Manage Log File: This is the default option when creating a new Job. Centrally Managed Log File means that ActiveBatch will handle the complete specification of the log files and the directories they are created in. You can view a log file from the Instance Properties > Log tab, or right-click on the instance in the various instances views and select View Log. The name of the log file includes the Job name, its instance ID and a date/time stamp (unless the "Reuse same log file" option is checked).
Retain for: This field indicates whether you want the Centrally Managed log file(s) to be retained for a specified period of time after the Job completes. The period is measured in days; the default is one (1) day, and a value from 1 to 366 is acceptable. If this field is not enabled, the log file(s) are deleted after the Job has completed. If it is enabled, then when the retention period is over, the log file will either be deleted or archived, depending on whether the "Archive after retention period..." checkbox is checked.
Note: The retention period you specify must be less than or equal to the Job’s History Retention period property (see Completion properties - History section).
Archive after retention period (Execution Agent must be enabled for archiving): By default, when a log file’s retention period expires, the file is deleted. However, if this property is checked, and the feature is enabled on the Execution Agent level (see KB articles below), the log file will be copied to the archive area and then deleted from the centrally managed area.
For details on how to establish the archive location on the Execution Agent, see the following KB articles:
The two major benefits of Centrally Managed Logging are that cleanup is done for you (managing old log files is one less thing for you to do), and that the log file can be viewed from any Console user's application without first copying it to your local system. Simply click on the Properties > Log tab, or right-click on the instance and select View Log.
Note: DbPurge is a built-in Job that keeps the ActiveBatch database trim and archives Centrally Managed log files after their retention period is over.
Log File: This property and the Centrally Managed Log File option are mutually exclusive. Selecting this radio button means you want to specify where the standard output file is created. The file specification entered must be accessible from the Execution Agent machine's point of view. For example, entering c:\train\JobA.log specifies the directory and file name on the Agent system where the Job ran. The Log File captures both standard output and standard error, unless the Standard Error File property is enabled. When that property is enabled and a file specification has been entered, standard error is captured in the specified file, and standard output is captured in the file specified in the Log File property. Use two different file names if using the Standard Error File feature.
For recurring Jobs, the instance ID and date/time are also appended to the Log File name for uniqueness purposes (unless the Reuse same log file property is checked). This avoids a conflict where multiple instances of the Job would otherwise overwrite the same log file. If no default log file location is specified, the log file is saved in the Job's working directory.
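The uniqueness scheme described above can be sketched in plain Python. This is an illustration only, not ActiveBatch code: the function name, the timestamp format, and the exact placement of the instance ID are assumptions made for the example.

```python
from datetime import datetime

def unique_log_name(base_path: str, instance_id: int, reuse: bool) -> str:
    """Illustrative sketch: append an instance ID and timestamp to a log
    file name so recurring runs do not overwrite each other. With
    reuse=True ("Reuse same log file"), every run writes to the same file."""
    if reuse:
        return base_path
    root, dot, ext = base_path.rpartition(".")
    if not dot:                      # no extension present in the name
        root, ext = base_path, "log"
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")  # assumed format
    return f"{root}_{instance_id}_{stamp}.{ext}"

# e.g. c:\train\JobA.log -> c:\train\JobA_1234_20240101_120000.log
print(unique_log_name(r"c:\train\JobA.log", 1234, reuse=False))
```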
Standard Error File: Clicking on this property's checkbox means that you want the standard error stream to be captured in a file other than the one specified in the Log File property. A file specification must be entered if this checkbox is enabled. If omitted, a batch Job’s standard error is directed to the same Log File as standard output.
The remaining Logging properties apply to both Centrally Managed and the Log File options.
Reuse same file: Checking this option means that the same log file will be used for each Job instance execution. When this is the case, the log file name does not include an instance ID and date/time. When Centrally Managed is enabled, the log file name that is reused includes the Job name and object ID (not instance ID, since the file is being reused). When Log File is enabled, the log file name that is reused is whatever name is entered in the field (the same is true for the Standard Error File, if that option is being used). Nothing additional is appended to the file name by the system.
This feature is useful when you are only interested in the last Job instance and do not want to fill your log file directory with unwanted files. On platforms whose underlying file system supports versions (e.g. OpenVMS), the Reuse feature results in multiple versions of the same log file. This is intentional. You can always purge the versions using the OpenVMS PURGE command.
Append to same log file: Enabling this checkbox allows each Job instance to add its log file information to the end of the previous reused log file. This feature is not available for OpenVMS and will be ignored if specified.
Log trailing ActiveBatch statistics: By default, this checkbox is enabled and ActiveBatch inserts a series of statistics into the log file when a Job completes. If this checkbox is not enabled, ActiveBatch does not insert any statistics into the Job’s log file.
Interpret Exit Code: If enabled, ActiveBatch will attempt to interpret the Job's exit status code as a possible message and provide a description in the Job’s Log File. Sometimes the exit code itself is merely an artificial number that the developer uses for a success or failure indication. In this case, the value itself should not be interpreted. This checkbox has no effect on the success or failure interpretation of the exit code.
Notes for OpenVMS:
For repetitive Jobs where each Job is expected to use a separate Job log file, the log file name cannot exceed ten (10) characters. This is due to the date, time and Job ID appended to the filename to ensure uniqueness; OpenVMS filenames are limited to thirty-nine (39) characters. If you exceed the filename limitation, ActiveBatch shortens the filename to keep the length valid.
If you would like to view the Job log file while the Job is still executing, add a variable named #Execution.AbatVMSBatchQueue with a constant value of SYS$BATCH (depicted in the figure below) to the OpenVMS Queue object. This enables the feature for all OpenVMS Jobs associated with that Queue, although it can also be set on a per-Job basis. Be sure to check the Export as Job Environment Variable checkbox. By default, data is written to the Job log file every 60 seconds.
![]()
Monitoring
This tab controls ActiveBatch's ability to monitor a Job’s progress in order to detect an overrun or underrun condition. You can be alerted that a Job is in an underrun or overrun situation, and optionally have the system take action against the instance. For an overrun, the Scheduler can automatically abort the Job; for an underrun, the Scheduler can mark the Job as a failure.
![]()
The example above is using the Job's historical average runtime to detect overruns and/or underruns. It includes an allowable under/over tolerance of ten percent.
Enable Run Time Monitoring: This checkbox, when enabled, allows ActiveBatch to examine the Job’s expected elapsed time and determine whether the Job is operating as expected.
Set Initial Expected Time: If selected, you can enter days, hours, minutes and seconds representing the Job's expected elapsed runtime. For example, if you think the Job will run for 1 hour and 15 minutes, enter the hours and minutes in this property. If you check the Set Run Against Historical Average field described below, this field (Initial Expected Time) is ignored (unless there is no average runtime yet, either because the average has been reset or because the Job is new and has not executed). As a best practice, use the Historical Average, because a static runtime (which is what this property represents) may not remain as accurate over time as the Historical Average.
Set Run Against Historical Average: If enabled, ActiveBatch will use the Job's run time and average it against previous successful runs (aborted and failed Jobs are not part of the average). This average will be used to determine if an underrun or overrun is encountered.
Tolerance: You can specify a tolerance that will modify the initial or average elapsed time (for the purposes of Monitoring). This tolerance can be specified as a Percent or as a Delta Time.
Percent means that the over or under time period is created as a percentage of the initial expected time or average run time.
Delta Time is added/subtracted from the initial expected time or average runtime to determine the monitoring period.
Abort if Overrun: If enabled, ActiveBatch will automatically abort the executing Job on an overrun condition. An overrun condition occurs when the initial expected time or average run time plus the tolerance time (percent or time) are exceeded. When aborted, the completion State will be Aborted, and the Exit Code Description will be: %ABAT-E-EXCRUNTIME, Job aborted exceeding expected run time. Example: Average runtime is 60 minutes. The Percent is 50% (30 minutes). If the Job runs over 90 minutes, it is considered an overrun.
Fail if Underrun: If enabled, ActiveBatch will change the completion State to failure if the Job does not run within the low-end range of the monitoring period. That is, the Job must run, at a minimum for the initial expected time or average runtime minus the delta. When failed, the Exit Code Description will be: Runtime underrun. Example: Average runtime is 60 minutes. The Percent is 50% (30 minutes). If the Job runs for less than 30 minutes, it is considered an underrun.
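The monitoring window described above reduces to simple arithmetic. The following sketch (plain Python, not an ActiveBatch API) reproduces the 60-minute / 50% examples; the function name and signature are illustrative.

```python
def monitoring_window(expected_secs: float, tolerance: float,
                      is_percent: bool) -> tuple:
    """Return (underrun_floor, overrun_ceiling) in seconds.

    expected_secs: initial expected time or historical average runtime.
    tolerance:     a fraction (e.g. 0.50 for 50%) when is_percent is True,
                   otherwise a delta in seconds.
    """
    delta = expected_secs * tolerance if is_percent else tolerance
    return expected_secs - delta, expected_secs + delta

# Average runtime 60 minutes, tolerance 50%:
low, high = monitoring_window(60 * 60, 0.50, is_percent=True)
# A run shorter than `low` (30 min) is an underrun;
# a run longer than `high` (90 min) is an overrun.
```

With a Delta Time tolerance instead, the same delta is simply added and subtracted from the expected or average runtime.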
Reset Average: This button is available when modifying an existing Job. Click this button to reset any historical averaging values that ActiveBatch is retaining for this Job. This is a good idea if a change you make to a Job will significantly impact its runtime (for example, adding more work to the Job's payload, or moving the Job to a more resource-rich system).
If you wish to create an alert for overrun and/or underrun, these are the alert types you should use:
Overrun: Job/Plan Elapsed Time Overrun
Underrun: Job/Plan Elapsed Time Under Run
If you decide to abort the Job on an overrun, you can also add a Job/Plan Aborted alert. If you decide to fail the Job on an underrun, you can also add a Job/Plan Completed in Failure alert.
See the Job's Alerts tab to either embed an alert, or associate a shared Alert object (Best Practice).
Alerts
This tab is used to associate various Job alerts and their corresponding actions.
![]()
ActiveBatch allows you to specify alerts by either grouping alerts into an Alert object and/or by individually assigning alerts to a specific Job. The benefit when using an Alert object is that you can change the contents of the Alert object and the changes will apply to all associated Jobs. Individual Job alerts must be changed on a Job-by-Job basis.
Alerts: This area is where you would assign an alert to an individual Job. Use this option when it is more of a one-off and unique to the Job you are configuring. As a Best Practice, associate an Alert object when you have multiple Jobs using the same alerts.
Alert Objects: This area lists all Alert objects associated with the Job. The Associate, Disassociate, Edit and New buttons allow you to manage the objects and their Job associations. When you click the New button, you will be prompted to select the location where the Alert object should be placed. After selecting a location, the Alert property sheets are tabbed in the Main view. After you configure and save the Alert, it is automatically associated with the Job and added to the Object Navigation pane.
See the Alert Object section for details on how to configure an alert.
Completion
This tab provides properties that concern the completion phase of a Job.
![]()
Properties
This section includes two subsections - restart options and Job history retention.
Run Once Only: Enabling this checkbox means the Job will execute only once. The first time the Job is triggered after this property is checked, the system disables the Job definition once the instance runs to completion (success, failure, or abort). Disabled Jobs, when triggered, will not run. The existing completed instance can still be restarted, either manually or automatically via the Job's Completion > Failure Restart property. If desired, the Job can be re-enabled via a menu option and triggered again; it will run to completion and be disabled by the system again, as long as the Run Once Only checkbox remains enabled. To allow the Job to run regularly, uncheck Run Once Only, save the Job, then re-enable the Job definition. Modify rights are required to update the property, and Manage rights are required to enable the Job definition.
Failure Restart Options: This section determines when and if ActiveBatch should restart a failed Job. Typically restart concerns the Job’s ability to be automatically restarted in the event of a machine failure (usually the execution machine). However, there are two sets of options - one for Job failure, and the other for machine failure.
On Failure: This set of radio buttons indicates what should occur if the Job fails. This failure does not include machine failure since a separate set of criteria can be specified for that condition. You may choose one of the following:
No Restart: Selecting this radio button (default) means that no special restart action should be performed if the Job fails. The Job will simply remain in a failed state.
Restart & Failover: Selecting this radio button allows a failed Job instance to failover back to the original Generic Queue it was submitted to for re-dispatch. Failover is only applicable for Generic Queues. The Execution Queue it is re-dispatched to depends on Generic Queue property settings (the same settings that determined where to dispatch the Job initially). If Failover is specified and a Generic Queue is not associated to the Job, the Job is restarted on the same machine (the same as if Restart had been specified, described below). The restart and failover option is particularly useful for Microsoft Cluster Server or any fault tolerant cluster systems.
Disable Template: Selecting this radio button allows you to disable the Job template (definition) if the instance fails. If you manually restart the failed instance (that caused the definition to be disabled) and the instance completes successfully, the Job definition will be automatically enabled by the system. This complements the “auto hold” functionality by supporting “auto release”. The reason you may opt to use this selection is that it allows you time to investigate why the Job failed, knowing that it will not run again until either the failed instance is restarted successfully, or you manually re-enable the definition.
Restart: Selecting this radio button allows a failed Job instance to be restarted on the same Execution Queue (and machine).
On Machine Failure: This set of radio buttons indicates what should occur if the Job fails as a result of an Execution machine failure. The failure could be a network connection issue (Job Scheduler to Execution Agent connection, or a problem with the Agent system, etc.). When machine failure occurs, all that is really known is the Scheduler has disconnected from the Agent that is running the Job. This failure does not include a non-machine failure since a separate set of criteria can be specified for that condition (listed above). When configuring on Machine Failure, you can select one of the following:
No Restart: Selecting this radio button (default) means that no special restart action should be performed if the machine fails. When No Restart is specified, ActiveBatch will initially change the state of the Job from Executing to Orphaned. This means the Job is currently in an indeterminate state due to the loss of contact with the Execution Agent. When contact is restored, ActiveBatch attempts to determine if the Job is still running on the Agent, or if it has completed. On an actual machine failure, it may not be possible to determine whether the Job completed successfully or not (especially if the system went down, was rebooted, etc.). In that case, the Job’s state is changed from Orphaned to Lost. Lost is considered a failure state (unless the Lost exit code is specified as part of the Success Code Rule).
Restart & Failover: Selecting this radio button allows a failed Job instance to failover back to the original Generic Queue it was submitted to for re-dispatch. This is particularly useful for Microsoft Cluster Server or any fault tolerant cluster systems. Failover is only applicable for Generic Queues. When the failure occurs, the Job will be re-dispatched based on the Generic Queue property settings. This means it may or may not be dispatched to the same machine. It will not be dispatched to the same machine if the Queue is still offline (the Queue goes offline when the machine failure occurs). If Failover is specified and a Generic Queue was not used, the Job is restarted on the same machine (the same as if Restart had been specified). This means that the Job instance will wait until the failed machine is active and available.
Disable Template: Selecting this radio button allows you to disable the Job Definition if the instance fails. This option would be a less likely choice when machine failure occurs because the Job itself may not have been the cause of the issue.
Restart: Selecting this radio button allows a failed Job instance to be restarted on the same Execution Queue (and machine). This means that the Job instance will wait until the failed machine is active and available.
Restart Options:
Wait: If non-zero, the Job Scheduler will wait the specified value (in seconds) before restarting the Job. This is particularly useful if you’ve also selected On Job Failure recovery: by delaying the restart, whatever resource exhaustion or other temporary condition occurred may have cleared.
Maximum Restarts: The radio buttons allow for an unlimited number of restarts (not recommended from a practical point-of-view) or a specific number of restarts (the acceptable value range is 1 to 999). The maximum number of restarts controls the total number of restarts attempted for this instance.
Reset on Restart: This checkbox determines whether active variables, if any, should be re-evaluated when the Job is restarted. If enabled, the variables are re-evaluated.
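The interaction of the Wait and Maximum Restarts options can be sketched as a simple retry loop. This is an illustration of the policy, not ActiveBatch code; `job` is a hypothetical stand-in for a dispatched instance.

```python
import time

def run_with_restarts(job, wait_secs: int, max_restarts: int) -> bool:
    """Illustrative restart policy: on failure, wait `wait_secs` seconds and
    retry, up to `max_restarts` additional attempts. `job` is any callable
    returning True on success, False on failure."""
    attempts_left = max_restarts
    while True:
        if job():
            return True              # success: no restart needed
        if attempts_left == 0:
            return False             # restarts exhausted: remains failed
        attempts_left -= 1
        time.sleep(wait_secs)        # let the transient condition clear
```

For example, a Job that fails twice and then succeeds would complete successfully under a Maximum Restarts setting of 2 or more.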
History: This section deals with the period of time that you elect to keep a Job’s instance history.
Delete on Completion: This option, when enabled, indicates that the Job’s history is immediately deleted and removed from the database. This is useful when you are running the same Job many times and the Job history would be burdensome or would obscure other, more important Job history. Please note that this Job will not appear in any ActiveBatch reports.
Save for: This option, when enabled, indicates how long a completed Job’s instance history is retained within ActiveBatch’s database. If the feature is enabled (highly recommended), a value of 0 indicates that the next scheduled run of DbPurge should delete the Job’s instance history. The value is specified in days; the acceptable range is 0 through 366. If the checkbox is not enabled, the Job’s history is deleted on the next scheduled run of the DbPurge facility.
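The retention rule amounts to a date comparison that a purge job could apply. The sketch below is illustrative only; the function name and the exact comparison DbPurge performs are assumptions.

```python
from datetime import datetime, timedelta

def history_expired(completed: datetime, save_for_days: int,
                    now: datetime) -> bool:
    """Illustrative retention check: instance history older than
    `save_for_days` days is eligible for deletion on the next purge run.
    A value of 0 makes the history eligible immediately."""
    return now >= completed + timedelta(days=save_for_days)

# History kept for 1 day is eligible for purge one day after completion:
print(history_expired(datetime(2024, 1, 1), 1, datetime(2024, 1, 2)))
```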
Triggers
This selection allows you to specify which Job(s) and/or Plan(s) should be triggered when this Job completes. You can refine the trigger to simple success or failure; or a series or range of exit codes.
Three (3) buttons are available for you to add, edit or remove Completion Triggers. The display provides two (2) columns: Name (or Label) and Condition. The Name (or Label) identifies the Job (or Plan) that you want to trigger. The Condition identifies the criteria that you want to use when your Job completes to determine which triggers are executed.
Please see the Completion Trigger section for more details about this feature.
Service Level Agreement (SLA)
This tab is used to configure an Availability Service Level Agreement (SLA) within a Job.
![]()
An Availability SLA is used to indicate that a Job must complete successfully by a specific deadline. If the Job has not completed successfully by its deadline time, the SLA is considered to have been breached.
Two aspects need to be defined for an Availability SLA: Deadline and Remedy.
The Deadline can be expressed as a list of absolute times, for example 13:00, or as a single relative deadline (duration), in which case the deadline is calculated when the Job is instantiated (that is, when a Job instance is created).
Remedy refers to an alert and/or an action that occurs when a percentage of the time to the deadline has elapsed and the Job is still running.
Action refers to a series of steps that are taken to prevent the Job from breaching its SLA. Please see the Service Level Agreement section for more details about this feature.
SLA Deadline: This property indicates the time by which the Job must have completed successfully.
Absolute Deadline indicates the actual deadline clock time (hh:mm). For Absolute Deadline you can specify one or more clock times (hh:mm) by clicking on the Add button. Individual clock times can be removed by clicking on the small stylized “x” that appears on the right. The Delete All button can be used to start over and remove all clock times specified. When a collection of deadline times has been specified the deadline time closest to your scheduled or begin instance time that has not yet expired is used.
Relative Deadline is also a time (hh:mm), which is added to the Job instance's instantiation time to calculate a deadline. If Relative Deadline is used, only a single time period can be specified.
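The two deadline styles can be sketched as follows. This is an illustration, not ActiveBatch code; in particular, rolling an already-expired clock time over to the next day is an assumption made for the example.

```python
from datetime import datetime, timedelta

def resolve_deadline(created, absolute=None, relative=None):
    """Illustrative sketch of the deadline rules above.

    absolute: list of "hh:mm" clock times; the nearest one not yet
              expired at `created` is chosen.
    relative: a single "hh:mm" duration added to the instantiation time.
    """
    if relative is not None:
        h, m = map(int, relative.split(":"))
        return created + timedelta(hours=h, minutes=m)
    candidates = []
    for hhmm in absolute:
        h, m = map(int, hhmm.split(":"))
        t = created.replace(hour=h, minute=m, second=0, microsecond=0)
        if t <= created:
            t += timedelta(days=1)   # assumed: expired time rolls to tomorrow
        candidates.append(t)
    return min(candidates)           # nearest deadline not yet expired

created = datetime(2024, 1, 1, 12, 30)
print(resolve_deadline(created, absolute=["13:00", "18:00"]))  # picks 13:00
```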
Remedy Thresholds: This collection of properties allows you to specify either a percentage of time (deadline minus instance creation time) or an absolute time. When a threshold is created, you can specify a type of warning which can form an alert. In the above figure, if the Job is still running after 80% of the elapsed time prior to the deadline, an SLA Warning alert is indicated and actions taken. Likewise, if the Job is still running after 90%, an SLA Critical alert is indicated. Please note that once “Take Action” is initiated, it cannot be canceled.
Analytics
The Analytics tab provides statistical information, audits and revision history.
![]()
The Counters above are specific for this Job definition. The counters are presented in both list and graphical form.
The Reset Averages button allows you to reset the average run times (both elapsed and CPU) to zero.
The Reset Counters button allows you to reset the counters back to zero.
The Refresh button retrieves the latest set of counters. If the Job is marked as SLA sensitive then SLA counters will also appear.
The Audits section allows you to view the audits created when a Job is initially defined. Changes made to the Job are audited, as well as the creation of some Job instances (event-driven and manual triggers). Audits covering the Job instances themselves can be found in the individual instances. Audits are a good place to look when a Job runs, or doesn’t run, when expected, because all Job state information can be retrieved and examined.
The Audits panel includes controls that allow you to filter the audits based on start and end dates. You can also limit the audits retrieved to a maximum number. The refresh button allows you to retrieve any audits that were generated after this dialog was initially displayed.
Each audit is contained in a single line in date and time sequence. Audits are read-only and cannot be modified. An icon appears at the beginning of each audit to help visually signal the severity of the audit. If a comment has been established, you will see an additional comment icon to the right of the severity icon. If you mouse over the comment icon, the system displays the audit information as a tooltip.
Opening an audit item (by double-clicking on the item), depending on the nature of the audit, will sometimes reveal additional information concerning the audit.
The Copy to Clipboard button copies the contents of the retrieved audits into a copy buffer that you can later paste into a document or other program.
The Print button allows you to print the retrieved audits.
The Revision History button allows you to select one or more audits concerning changes made to an object and perform a difference operation between the selected revised objects.
The History section provides useful information about the execution of this Job. It includes the number of revisions made to the Job, when it was last modified and by who, the date/time it last ran, and how it completed (success, failure, aborted).
Security
This tab is where object security is configured. Security in ActiveBatch mirrors how security is granted using Windows security. That is, permissions applicable to the object (Read, Write, Modify, Delete, etc.) are Allowed or Denied for the Active Directory users and/or groups assigned to the object.
![]()
When new objects are created, they will either be assigned factory default security, or the factory default security will be overridden by a default policy.
When a default policy has been used to preset object security, the new object will either have:
The Inherit Security from Parent Object property checked (it is not checked by default). When checked, the listed users and/or groups, along with their granted permissions, will be read-only. The Add and Remove buttons will be disabled because security is being inherited from the object's parent container. When Inherit Security from Parent Object is checked, it is likely the ActiveBatch Administrator will be setting up security on the parent container(s), and the Job author will not have to modify anything about security. This would require a larger discussion with an ActiveBatch Administrator, who is typically tasked with managing object security, since there are options.
The Inherit Security from Parent Object property is not checked. The users and/or groups are listed along with their granted permissions, but there are likely some differences compared to the factory default security, since the purpose of setting a default security policy is to add Active Directory groups and/or users (and their access permissions) specific to your organization. Since Inherit Security is not checked, the Add and Remove buttons will be enabled. When this is the case, Job authors will need to be advised of any changes they must make to security when creating new objects.
As a best practice, an ActiveBatch Administrator should preset security using a default policy (for all object types) so Job authors do not have to manage security, which can be time-consuming and error-prone. See the ActiveBatch Installation and Administrator's Guide for more best-practice information regarding object security.
Below is a list of permissions related to this object.
| Access | Description |
| --- | --- |
| Read | Account is allowed to view any properties/variables of the Job (Read implies both Read Properties and Read Variables). |
| Read Properties | Account is allowed to read the Job’s properties. |
| Read Variables | Account is allowed to read the Job’s variables. |
| Write | Account is allowed to write to the Job. |
| Modify | Account is allowed to read/write any properties of the Job. |
| Delete | Account is allowed to delete the Job. |
| Take Ownership | Account is allowed to take ownership of the Job. |
| Use | Account is allowed to use the Job and to create a reference to it. |
| Manage | Account is allowed to perform operations on the Template Job (Enable/Disable/SoftDisable/Hold). |
| Instance Control | Account is allowed to perform operations on the instance (Abort, Pause/Resume, Restart, Force Run, Override Completion Status, Delete, etc.). Account can also take ownership of an Alert in the Alerts view and respond to it. |
| Change Permissions | Account is allowed to change permissions (set security) on the Job. |
| Trigger | User may trigger the Job. |
| Trigger and Change Queue | User may trigger the Job and direct it to execute on a specified Queue. |
| Trigger and Change Parameters | User may trigger the Job and specify new input parameters. User may also pass new ActiveBatch variable(s) that override any specified at the Job or Plan level. |
| Trigger and Change Credentials | User may trigger the Job and specify new security credentials. |
| Full Control | Account may perform all of the operations mentioned above. |
The owner of an object is always granted Full Control by the system, and their permissions cannot be changed or reduced. If another user takes ownership, then the original owner's access will depend on how security is set up (if they are a user or group that has been given access). The new owner will automatically be granted Full Control, and once again, their permissions cannot be changed or reduced.
To take ownership, you will need to be granted the Take Ownership permission. Click the Take Ownership button and confirm the action. Another way to take ownership is to right-click on the object in the Object Navigation pane, then select Advanced > Take Ownership.
To modify security, you must be granted the Change Permissions permission.
The Deny permission is generally used for users who have been granted access based on a group membership, but there is a need to override this for a particular user. Deny takes precedence over Allow.
Below you will find the instructions on how to modify security when Inherit Security from Parent Object is not checked.
To edit an existing account, select the listed user or group, then change the permission using the Permissions list box (Allow or Deny access).
To remove an existing account name, select the listed user or group and click the Remove button.
To add a new user or group, click the Add button and follow the dialog as depicted in the image below.
![]()
The dialog is similar to that of other Windows objects, and leverages Active Directory services. The Locations button allows you to select either the Job Scheduler machine or any applicable domain. Clicking the Advanced button allows you to search for specific users and/or groups. Alternatively, you may enter object names (a user or group) in the large edit box. Clicking the Check Names button allows you to validate the accounts. Click the OK button to add the selected Account to the object’s security list.