Case Study 2 - Introduction to Workflows
Problem:
We have been tasked to create a file and then calculate a checksum of the file. Later processing should then read the file, calculate its own checksum, and determine if the two checksums are equal. While we could do all this processing in a single job, we will create two jobs to introduce more ActiveBatch functionality.
We need two (2) jobs to run consecutively. When the first job completes successfully, the second job should run. In addition, the first job creates two files that must exist before the second job can execute. The plan where the jobs will be placed needs to execute every Monday starting at 6:00 am, triggering at 15-minute intervals thereafter, where the last trigger is at 7:45 am. The plan should not run on any Monday that is a holiday; rather, it should run on the next business day.
Solution:
To complete this task, the following ActiveBatch objects will need to be created: Plan, Schedule, Calendar, and Jobs.
- The Plan is where the two jobs will be stored.
- The Jobs will be linked such that when JobA completes successfully, JobB will execute.
- This will require adding a Completion Trigger on JobA, to trigger JobB.
- We will also discuss how to (optionally) add a Job Constraint to JobB (and why you may decide to do this).
- Finally, we will add a File Constraint to JobB, so it does not run unless the files it needs are present.
- A Schedule will be created and configured with dates that the Plan will run on. The run times will be configured on the Plan.
- A Calendar will be created to prevent the Plan from running on holidays.
Note: As we move through more complex jobs the actual payloads may, in most cases, be simplistic. The reason behind this is to keep the focus on ActiveBatch itself and not shift the complexity to the scripts or processes that execute. We expect that you have the expertise to set up the actual business process itself, but could benefit from the case studies to learn about best practices and how to use key features.
In addition to creating the objects described above, we will also be introducing the concept of variables and default policies.
Variables
The directory to store our files for this use case study (c:\ActiveBatchTutorial) is the same as it was for Case Study 1. In that case study we hard-coded the directory name when setting the Process job's File Name property. In this case study we will introduce the concept of variables.
A variable represents a named value. Variables have several uses in ActiveBatch. The one we will explore in this case study is using variables to soft-code object properties. When soft-coding object properties in different jobs that share the same property value, it is easier to change a single variable value should the property value change, as opposed to changing many objects where the value was hard-coded.
Variables can be declared (created) on various objects. Where the variable is created determines its scope. The variable we will be creating will be added to the existing UsersGuideCaseStudies folder. Any object that has this folder in its full path will be able to access the variable, provided the object supports referencing variables.
Note: There are many facts to learn about ActiveBatch variables. We must limit the scope of the discussion for simplicity purposes. Please see the Variables section in the ActiveBatch Reference Manual for more details.
ActiveBatch supports a couple of methods for setting a variable’s value. A variable can contain a constant value that does not change unless a user changes it. As another option, a variable’s value can be set as a result of a data source. The data sources you can choose from are built in to the product. This type of variable is called an active variable. This case study will use a constant variable.
The constant variable we will be creating is named app_path. We will add it to the UsersGuideCaseStudies folder. Adding it to this location means that it can be referenced by any future case study objects we create (that support referencing variables), since all new objects will have UsersGuideCaseStudies as part of their full paths.
Let's create the variable. In the Object Navigation pane, right-click on the UsersGuideCaseStudies folder object and select Properties. The property sheets will be tabbed in the Main view. Click on the Variables category to access the listing of variables. The list will be empty as none have been created yet. Click on the Add button.
The Variable property sheet will initially be blank. Enter app_path in the Name/Label property. Enter c:\ActiveBatchTutorial as the Constant property value. Click OK to save the variable. Rather than hard-code the value c:\ActiveBatchTutorial within our jobs, we will reference the variable.
The image below depicts what you will see after saving the variable. You can create multiple variables on any given object type that supports the creation of variables.
Creating a Plan
To summarize from Case Study 1, a Plan is a container and a workspace. When we created the job in Case Study 1, that job was truly stand-alone. The job wasn’t dependent on anything. The job had no relationships to anything. Most tasks are related in some manner even if broadly classified (i.e. “administrative tasks”, “FTP jobs”, "HR on-boarding task", etc.). Many tasks do have dependencies. In fact, even if a task didn’t have a dependency, it would still be a good idea (and a Best Practice) to “wrap” the job within a plan, just as files are typically categorized within a folder. We used this concept in Case Study 1, and the use of Plans as both organizational containers as well as run-time objects will become more important as the complexity increases.
When we added new objects in Case Study 1, we did so by right-clicking on the desired container (folder or plan) in the Object Navigation pane, then selecting the New menu option. We could do the same for this case study, but let's learn about a new view that also supports the creation of ActiveBatch objects. It is named the Map View.
Map View
To enter Map View, right-click on the UsersGuideCaseStudies folder, then select View > Map.
Based on the work we completed in Case Study 1, this is what you should see (or something similar).
The previous Plan and Folder objects (created in Case Study 1) are displayed on the left. The icons signify that CaseStudy1 is a Plan and Objects is a Folder. The Map Overview box, displayed on the right, depicts the same two objects. Since the Map view can display many objects (depending on how many are in the container you selected), the purpose of the Map Overview is to let you pan around and examine different portions of the Map view canvas area. In future images of the Map view, we will turn this feature off; our Map view is not very busy, so we really don't need it. To turn it off, click on the toolbar's Gear icon (to the far right), then select View > Map Overview. Use the same method to turn it back on.
Moving along, our task is to create a new plan. To create a new Plan in Map View, right-click on the white space of the Map View canvas area, then select New > Plan.
The property sheets for the new plan will be tabbed in the Main view.
We will name our plan CaseStudy2 and click the Save and Close button to create the Plan. Map View now displays three (3) objects.
Creating a Schedule
Let's create our Schedule object. As per our use case specification, the plan and its associated jobs should run every Monday at specific times. The Schedule object will be used to satisfy the date requirements. The times will be embedded in the plan itself.
We’ve already created a Folder object to store our non-executing objects. We will add the Schedule there. Right-click on the Objects folder in the Map view, then select New > Schedule.
The New Schedule property sheets will be tabbed in the Main view.
On the General property sheet, enter Monday as the Name/Label. Next, click on the Time Specification tab and uncheck the Time Specification checkbox. This means time-based triggers will not be produced by this schedule; the trigger times will be configured on the Plan instead. The Schedule will only be used to specify the trigger dates.
Next, click on the Day Specification tab.
Keep the default Calendar Type property as-is (Calendar). For the Specification Type, select Weekly. In the Weekly Specification panel, keep the default of Every 1 week(s), then check Monday. Make sure the other days of the week are not checked. Click the Save and Close button. The Schedule object will now be added to the Objects folder.
The Map view canvas area depicts Folders, Jobs and Plans. The remaining non-executing objects (like the Schedule object we just created) can be viewed in another location, if you would like to see them.
To view non-executing objects, click on the Map view's gear icon in the toolbar, then select View > Child Objects. A new Child Objects pane will be added to the Map View (see the left-hand side of the image below).
To see the non-executing child objects of any container in the Map View, click on the desired container. Since our Objects folder is the only container with non-executing child objects, we will click on that. When doing so, observe the Child Objects pane displays the non-executing objects that are stored in the Objects folder. See the image below that depicts this.
If you open the Objects folder in the Map view canvas area, nothing will appear. Only Jobs, Plans and Folders appear in the canvas area, by design. If you open the CaseStudy1 plan, you will see the PowerShell job created in Case Study 1.
Creating a Calendar
A Calendar object allows you to filter triggers against a list of days on which the trigger is not allowed to execute a job or plan. Non-trigger dates consist of any Non-Business Days you specify (i.e. day(s) of the week), as well as relative and fixed holidays. As per our use case specification, the plan and its associated jobs should not be allowed to run on a holiday. That is why we are creating a Calendar object. The Calendar object will include holiday dates.
To create the Calendar object, right click on the Objects folder in the Map view canvas area, then select New > Calendar.
The Calendar property sheets will be tabbed in the Main view. On the General property sheet, enter CalendarUS as the Name/Label since this will be a United States specific calendar. Next, click on the Properties tab.
You can see that we have added four (4) relative holidays and three (3) fixed holidays. In addition, the weekend days of Saturday and Sunday have been marked as non-business days. Later case studies will add more holidays to this calendar. Notice that Thanksgiving Day shows a duration of two (2) days. This is very useful when a company grants multiple days off for a single holiday. It is common practice for US companies to include the Friday after Thanksgiving Day (a Thursday) as a holiday.
To add a relative holiday, click on the Add button within the Relative Holidays panel. A dialog will appear, as depicted below.
Enter Presidents Day in the Name property, then match the remaining properties to what you see in the above image. Click OK to save.
Note about the "On Non-Business Day" property
Both Relative and Fixed holidays include an “On Non-Business Day” property (depicted in the above image). This controls what the Scheduler should do if the configured holiday falls on a non-business day. This is more likely to happen for Fixed Holidays than Relative Holidays, because Relative Holidays (for most of the world) usually fall on a business day. That is why they are relative in the first place: they are intentionally planned to land on a business day, providing workers with a day off.
Fixed Holidays, on the other hand, can fall on a non-business day (this is why the terms “Observed” and “Actual” apply to these holidays). ActiveBatch addresses this aspect of a holiday by allowing you to select the logic that determines on what day the holiday is observed. For example, New Year’s Day is always January 1, and therefore is a Fixed Holiday. If that date falls on a weekday (in the US and Western Europe), the actual and observed holidays are the same. If the date falls on a Saturday (a non-business day), then the observed holiday is usually Friday; likewise, if the date falls on a Sunday (another non-business day), the observed holiday is usually Monday. This is what the “Closest” setting does, one of the selections available for the On Non-Business Day property (see image below). When selected, if the holiday falls on a non-business day, the closest business day is used as the observed holiday. Note: Closest is one option; there are others, depicted in the image below (i.e., Skip, Next, Previous).
To add a fixed holiday, click on the Add button within this panel. A dialog will appear, as depicted below.
Enter New Years Day in the Name property, then enter the occurrence as Jan 1 (for whatever the current year is). Do not check Use Year. This means every January 1 will be considered a Fixed Holiday, not just 1/1/2024. Next, select Closest from the On Non-Business Day dropdown. This means that if New Years Day happens to fall on a non-business day, the business day closest to the holiday date will be used as the observed holiday date. Click OK to save.
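The "Closest" logic described above can be sketched in a few lines of Python. This is an illustration only, assuming (as in CalendarUS) that Saturday and Sunday are the only non-business days:

```python
from datetime import date, timedelta

# A sketch of the "Closest" observed-holiday rule, assuming Saturday and
# Sunday are the only non-business days (as configured in CalendarUS).
def observed_holiday(holiday: date) -> date:
    if holiday.weekday() == 5:       # Saturday -> observe on the closest
        return holiday - timedelta(days=1)  # business day, Friday
    if holiday.weekday() == 6:       # Sunday -> observe on the closest
        return holiday + timedelta(days=1)  # business day, Monday
    return holiday                   # already a business day

# New Year's Day 2022 fell on a Saturday, so it was observed Friday, Dec 31.
print(observed_holiday(date(2022, 1, 1)))   # 2021-12-31
# New Year's Day 2023 fell on a Sunday, so it was observed Monday, Jan 2.
print(observed_holiday(date(2023, 1, 1)))   # 2023-01-02
```

The Skip, Next, and Previous options would simply return nothing, the following business day, or the preceding business day, respectively.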
Add the remaining fixed and relative holidays by matching the properties you see in the US Calendar image above. When you are done adding the relative and fixed holidays, click the Save and Close button to add this calendar to the Objects folder.
Associating the Schedule
Now that we’ve defined the Schedule and Calendar objects, let's associate them with our plan.
Right-click on the CaseStudy2 plan in the Map view and select Properties. The Plan's property sheets will be tabbed in the Main view. Select the Triggers category, then check the Enable Date/Time Trigger checkbox to enable it (if it is not already enabled).
To associate an existing Schedule, click the Associate button; navigate to the previously created Monday schedule in the Objects folder and click on the checkbox to the left of the Monday schedule name. Click OK. The schedule will appear in the list of Schedules. As a reminder, trigger dates will come from the Schedule object. Trigger times will be configured on the Plan itself, which we will do next.
Select the Monday schedule in the Schedules list, then click the Edit Times... button. The Object Time Specification dialog will appear. Click the Hours radio button and enter 6-7. In the Minutes textbox enter 0,15,30,45. This results in a union of hours and minutes. The trigger times will be: 6:00 am, 6:15 am, 6:30 am, 6:45 am, 7:00 am, 7:15 am, 7:30 am, and 7:45 am. Using the Hours and Minutes option works well when there is a pattern in the trigger times.
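The union of hours and minutes described above can be visualized with a small Python sketch (illustrative only; it simply enumerates the cross product the Scheduler produces):

```python
# Every listed minute within every listed hour becomes a trigger time,
# reproducing the Hours=6-7, Minutes=0,15,30,45 specification.
hours = [6, 7]
minutes = [0, 15, 30, 45]

trigger_times = [f"{h}:{m:02d} am" for h in hours for m in minutes]
print(trigger_times)
# ['6:00 am', '6:15 am', '6:30 am', '6:45 am',
#  '7:00 am', '7:15 am', '7:30 am', '7:45 am']
```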
We could have added eight exact times, but instead used the above variation as it is simpler. Clicking OK in this dialog causes the Schedules section to look like this:
By adding the times to the Plan, we observe Best Practices. We could have added the times directly to the Schedule object itself and skipped this (Edit Times) step. The only problem with that approach is that most workflows tend to run at their own unique times. So adding times to a schedule results in many more schedules being created to accommodate the time variations.
Associating the Calendar
To associate the previously created Calendar object to the Plan, click on the Plan's Constraints tab next. The Calendars section is at the bottom of the page.
Click the Associate button to launch our now familiar dialog. Navigate to the UsersGuideCaseStudies folder, followed by Objects. Expand the Objects folder and check the box to the left of the CalendarUS object. Click OK to apply that selection.
Next, you can see that we’ve changed the On Non-Business Day property from the default of Skip to UseNext (Business Day). This means that should a holiday occur on a Monday (which, based on the calendar, we know will happen at least three (3) times a year), the plan will run on the next business day (most likely Tuesday). Other processing choices include Skip (don't run the plan at all, before or after the holiday - just skip it) and Use Previous (Business Day) (which would mean Friday for a Monday holiday).
This is a good time to point out that just as we can associate multiple schedules to an object, we can also associate multiple calendars to an object. If you can't fit all the options into one schedule or calendar, then create more as needed. For example, you may have a weekday schedule and a weekend schedule for any given job or plan.
Click the Save and Close button to save the changes we’ve made to our CaseStudy2 plan.
Now that we have our plan we’re almost ready to define our two (2) jobs, but before we do - this is a good time to introduce Default Policies. Default Policies are default property values you would like to have preset when any new Job is added to the UsersGuideCaseStudies folder (or any subfolder or plan nested within the UsersGuideCaseStudies folder). Default Job Policies provide ease of use and allow you to set standards by presetting object properties. Note: this optional feature allows you to preset properties for any new object type, but for our purposes, we will be setting a default job policy.
To set Default Job Policies on a folder, right-click on the UsersGuideCaseStudies folder in the Object Navigation pane, then select Policy > Defaults. Note: Auditing Policies will be described in a future Case Study.
The Default Policies dialog is depicted in the image below. As you can see, all ActiveBatch object types are listed in the dialog. These are all the objects for which you can optionally set default policies.
Since we want to set Job Policies, we will select Job, then click the Set button.
This results in what appears to be job property sheets tabbed in the Main view. Since we are presetting job properties, that makes sense. However, notice the tab states "Policy on /UsersGuideCaseStudies". It does not state "New Job".
Next, we want to set a few job properties that will be common for all jobs that we will be adding to UsersGuideCaseStudies folder. Rather than requiring Job authors set each property individually for each new job created (time consuming and potentially error-prone), we can instead set a job policy to pre-populate the desired job properties.
First, click on the Process tab depicted in the above image (on the left, under General). Select Server1 as the Submission queue from the dropdown list (like we did in Case Study 1). Next, select the TrainingAccount as the User Account from the dropdown list. Next, change the Job type to Jobs Library. Jobs Library is one of three job types, and rather than have the Process job as the default type, we can change it to Jobs Library (see the image below).
Lastly, click on the Execution tab and set the Working Directory property. Enter ${app_path}. This references the variable we created earlier on the UsersGuideCaseStudies folder. In this case, we are using it as the default policy for the Working Directory.
Each Folder or Plan can have its own object policies. There are more properties we could have set, but what we set is sufficient for now.
Click the Save and Close button to save the Job policy. The next time you create an object, ActiveBatch will scan upward through your folder/plan hierarchy (e.g. full path) until an applicable object policy (if any) can be found. Since all new job objects will have UsersGuideCaseStudies somewhere within their full path, the job policy will be applied.
Creating the first job - JobA
In the Object Navigation pane, right-click on CaseStudy2 and select New > Job. As before, we begin with the General properties. The Name and Label should be JobA.
Next, click on the Jobs Library tab. This tab used to read "Process", but the Default Job policy we set earlier has resulted in this change. Note that the Job Type is set to “Jobs Library”- again because of how we set the Default Job Policy. Notice that we also no longer have to specify the Submission Queue and User Account properties. Those property values were also set through the Default Job Policies created earlier.
Next, we want to configure the payload of the job. One of the most powerful features within ActiveBatch is the Jobs Library and the Job Steps Editor. The Jobs Library contains many built-in features and functions which eliminates the need for you to have to write your own script to accomplish the task at hand. You can still write scripts if you need to and, in fact, the Jobs Library supports script writers in an integrated fashion as well (there is an "EmbeddedScript" step).
On the left-hand side of the above image, there is a partial listing of Jobs Library Step Categories. They are stored in what is called the ToolBox. As you expand the categories, you will see the category-related steps that you can select, then drag and drop to the workspace area on the right (notice it states 'Drop job step here').
Getting back to our use case, JobA's task is to create and write text to a file (filename: CaseStudy2.txt), then calculate a checksum and write the checksum value to another file (filename: CaseStudy2Checksum.txt).
In the ToolBox area, navigate to the File System category and expand it, then look for the Write Text To File step. Drag and drop it onto the workspace area on the right.
In the step's Destination property, enter ${app_path}\CaseStudy2.txt. We would like the newly created file to be added to our previously created ActiveBatchTutorial folder. Rather than hard-code the folder name as part of the file's fullpath, we are using the app_path variable we created earlier, that equals C:\ActiveBatchTutorial.
ActiveBatch constant and active variables are referenced using curly braces. The Scheduler resolves active and constant variables after a trigger, but before the job is sent to the Agent to run. If a variable is not found, a null string is returned (an audit message is produced when this occurs as well - it's a "missing" variable). When you reference a variable name, in our case the Destination property, the search begins within the current job, first looking to see if a variable has been defined on the job's queue object. If it's not defined there, the system looks on the job's user account object. If not defined there, it looks on the job itself. If the variable definition is still not found, the system moves up to the parent container (folder or plan), and the parent’s parent container etc, until the Scheduler root is reached (if necessary). The system stops looking when the variable definition is found. In our case, the variable will be resolved at the folder-level (i.e. UsersGuideCaseStudies) because that is where we defined the variable earlier.
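The search order above can be made concrete with a small Python sketch. The scope names and empty scopes here are illustrative stand-ins for the objects the Scheduler actually consults; only the final folder-level definition mirrors our setup:

```python
# A simplified sketch of the variable lookup described above. Each scope is a
# dict of variables; the first scope that defines the name wins, and a missing
# variable resolves to the empty string (with an audit message in the product).
def resolve(name, scopes):
    for scope_name, variables in scopes:
        if name in variables:
            return variables[name]
    return ""   # "missing" variable -> null string

search_order = [
    ("queue",                        {}),
    ("user account",                 {}),
    ("job",                          {}),
    ("plan CaseStudy2",              {}),
    ("folder UsersGuideCaseStudies", {"app_path": r"c:\ActiveBatchTutorial"}),
]

print(resolve("app_path", search_order))    # c:\ActiveBatchTutorial
print(resolve("undefined", search_order))   # "" (empty string)
```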
In summary, the Destination property of ${app_path}\CaseStudy2.txt allows flexibility if the file location ever needs to be changed, which is especially useful if multiple jobs (or multiple job steps within the same job) are using the same location. You only need to modify one variable, and all the jobs referencing it will reflect the change.
Next, in the TextToWrite property, enter the 4 lines of text you see in the above image, which includes Text Line 1, Text Line 2, Text Line 3, Text Line 4 (each on a separate line).
Lastly, set the ExistingFileAction property to Overwrite. We are done configuring the first step of this job.
Next, go back to the File System category in the Jobs Library ToolBox and locate the CalculateFileChecksum step. Drag and drop it under the existing WriteTextToFile Step.
The CalculateFileChecksum step performs a checksum calculation on the Source file using a specified algorithm. The Source file is the one created in the WriteTextToFile step. Enter the Source property as ${app_path}\CaseStudy2.txt. Next, select MD5 as the Algorithm. The special icon under the Algorithm field (circled in red in the image below) indicates that this job step returns a value. Following job steps can optionally use the return value.
Next, go back to the File System category in the Jobs Library ToolBox and locate the WriteTextToFile step again. Drag and drop it under the existing CalculateFileChecksum Step. Note, the steps are run in top to bottom order.
The second WriteTextToFile step (outlined above) writes out the calculated checksum value to the CaseStudy2Checksum.txt file. Enter the Destination property as ${app_path}\CaseStudy2Checksum.txt.
Next, enter %{CalculateFileChecksum.ReturnValue.Result} in the TextToWrite property. This property references an execution variable, a different type of variable from the active and constant variables discussed earlier in this case study. Constant and active variables are substituted with their values by the Job Scheduler before the job is sent to an Agent to run, and are referenced with the $ curly brace syntax, ${VarName}. An execution variable is resolved during the execution of the job, so the syntax is different: the % curly brace syntax is used, %{VarName}. The variable %{CalculateFileChecksum.ReturnValue.Result} is passed back from a successful execution of the CalculateFileChecksum job step. It contains the result of the checksum calculation.
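The two substitution passes can be illustrated with a toy Python sketch. The regex-based substitution, the sample checksum value "9e107d9d", and the output file name are illustrative assumptions, not ActiveBatch internals:

```python
import re

# A toy illustration of the two substitution passes described above.
# ${...} (constant/active) variables are resolved by the Scheduler before
# dispatch; %{...} (execution) variables are resolved while the job runs.
def substitute(text, values, pattern):
    return re.sub(pattern, lambda m: values.get(m.group(1), ""), text)

prop = r"%{CalculateFileChecksum.ReturnValue.Result} -> ${app_path}\out.txt"

# Pass 1 (Scheduler, before the job is sent to the Agent):
prop = substitute(prop, {"app_path": r"c:\ActiveBatchTutorial"},
                  r"\$\{([^}]+)\}")

# Pass 2 (Agent, during execution, once the step has returned a value;
# "9e107d9d" is a placeholder checksum):
prop = substitute(prop,
                  {"CalculateFileChecksum.ReturnValue.Result": "9e107d9d"},
                  r"%\{([^}]+)\}")
print(prop)   # 9e107d9d -> c:\ActiveBatchTutorial\out.txt
```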
Lastly, set the ExistingFileAction property to Overwrite.
We are done configuring the Jobs Library Job steps for this job.
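To make JobA's data flow concrete, here is a rough Python equivalent of its three steps. This is a sketch, not what ActiveBatch executes: a temporary directory stands in for ${app_path}, and the hex-digest format of the checksum result is an assumption.

```python
import hashlib
import tempfile
from pathlib import Path

# app_path stands in for the ${app_path} variable; a temporary directory is
# used so the sketch runs anywhere.
app_path = Path(tempfile.mkdtemp())

# Step 1: WriteTextToFile -> CaseStudy2.txt (ExistingFileAction: Overwrite)
source = app_path / "CaseStudy2.txt"
source.write_text("Text Line 1\nText Line 2\nText Line 3\nText Line 4\n")

# Step 2: CalculateFileChecksum (MD5) -> the step's return value
checksum = hashlib.md5(source.read_bytes()).hexdigest()

# Step 3: WriteTextToFile -> CaseStudy2Checksum.txt (Overwrite), writing the
# value that %{CalculateFileChecksum.ReturnValue.Result} refers to in the job
(app_path / "CaseStudy2Checksum.txt").write_text(checksum)
print(checksum)
```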
Next, click on the Job's Execution category (e.g. property sheet tab). This will open the dialog depicted below.
Notice that we no longer have to specify the Working Directory property. It is automatically set to ${app_path} due to the Default Job Policy we created earlier. We are just bringing this to your attention. You don't have to edit anything on this property sheet.
Now we can click the Save and Close button to save JobA. To see the newly created job, click on the expand/collapse icon to the left of CaseStudy2's name in the Map view.
JobA has now been created within plan CaseStudy2. Now we’ll define JobB.
Creating the Second Job - JobB
Right-click in the white space area of the CaseStudy2 plan (in the Map view), then select New, then Job. The New job's property sheets will be tabbed in the Main view. On the General property sheet, enter JobB for the Name and Label. Next, click on the Jobs Library tab. Like before, the required Submission Queue and User Account properties have been preset based on our previously created Default Job Policy. This saves us time - we don't have to populate these properties.
Next, in the Jobs Library ToolBox, expand the File System category of steps. Drag the Calculate File Checksum step and drop it in the workspace area on the right. Enter ${app_path}\CaseStudy2.txt in the Source property and select MD5 for the Algorithm property. This step calculates the checksum of the previously created CaseStudy2.txt file, created in JobA. As a reminder, the checksum was also calculated in JobA, for the same file. This is because a comparison of the checksums will be performed in JobB.
Next, drag the Read Text From File step listed in the File System category and drop it under the CalculateFileChecksum step. Enter ${app_path}\CaseStudy2Checksum.txt in the Source property. This step reads, one line at a time, the information stored in the CaseStudy2Checksum.txt file previously created by JobA. The information stored in this file is the checksum result as it was calculated in JobA. There is only one line to read in the checksum result file. That's OK; we can still use the Read Text From File step, as it supports reading one or more lines. Because it can read more than one line, the Read Text step is a type of "loop", like other looping constructs (e.g. For, While, etc.) that you may be familiar with from scripting or programming languages.
The text read in the ReadTextFromFile Step is stored, one line at a time, in a context variable named "Object". See the Context Name property in the image below. You can name the variable anything you would like, but for our use case, we have named it Object. As an example, assume a text file we are reading consists of two lines; the first is Hello, and the second is World. The first iteration of the ReadTextFromFile loop would result in Object having a value of "Hello". The second iteration of the loop would result in Object having a value of "World".
Next, the Delimiter property is configured as \r\n which means carriage return and line feed. Enter the Context Name and Delimiter properties just described (see the image below).
Next, expand the Flow Control category in the ToolBox, and drag the If Branch step and drop it within the ReadTextFromFile step (not under it). See the image below indicating where the IfBranch step should be dropped.
Next, enter the following in the IfBranch Expression property: ReadTextFromFile.Object.text == CalculateFileChecksum.ReturnValue.Result (see the image below). The IfBranch is comparing the checksum result generated by JobA to the checksum result generated in this job. Note: When a job step property is an expression (like the IfBranch), all values specified are implicitly considered to be variables and the %{ } context variable syntax should not be specified.
If the checksum values match, we want to write something to the job's log file indicating as such. To accomplish this, expand the General category in the ToolBox, and drag the Log step and drop it within the IfBranch Step (drop it where you see "Drop Job step here" in the above image). After doing so, enter the following text in the Text property of the Log step: Checksums match. (see the image below).
Let's summarize the job steps:
- CalculateFileChecksum performs the same action as in JobA. It computes the checksum for the CaseStudy2.txt file. JobB does not store the checksum result in a file; JobA stored it in a file named CaseStudy2Checksum.txt.
- ReadTextFromFile reads the only line of text in the CaseStudy2Checksum.txt file. That line is the computed checksum from JobA, which was subsequently stored in the file. Note that the ReadTextFromFile job step is a looping construct. The loop will be executed once for every line read. The line will be loaded into the execution variable named “Object”. Since this file is carriage return-line feed delimited, \r\n denotes these delimiters.
- Within the ReadTextFromFile loop we have an IfBranch job step, which compares the value read from the file, stored in ReadTextFromFile.Object.text, against the checksum computed in JobB. The JobB checksum result is obtained by specifying the context variable CalculateFileChecksum.ReturnValue.Result. This is the return value of the first checksum step. Typically context variables are denoted with %{ContextVarName}, as previously mentioned. To reiterate, the IfBranch Expression property is one of the exceptions where the variable syntax is not used, because when a job step property is an expression, all values specified are implicitly considered to be variables.
- If the values being compared are the same, a success message is produced using the Log step. The "Checksums match." text is written to the job log file.
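The steps above can be sketched as plain Python. This is a rough equivalent of JobB's logic, run against files created the way JobA creates them; the temporary directory and hex checksum format are assumptions:

```python
import hashlib
import tempfile
from pathlib import Path

app_path = Path(tempfile.mkdtemp())

# Stand-ins for the files JobA would have created:
source = app_path / "CaseStudy2.txt"
source.write_text("Text Line 1\nText Line 2\nText Line 3\nText Line 4\n")
stored = hashlib.md5(source.read_bytes()).hexdigest()
(app_path / "CaseStudy2Checksum.txt").write_text(stored)

# Step 1: CalculateFileChecksum (MD5) on CaseStudy2.txt
calculated = hashlib.md5(source.read_bytes()).hexdigest()

# Step 2: ReadTextFromFile -> loop over lines; each line is what the "Object"
# context variable would hold on that iteration
messages = []
for line in (app_path / "CaseStudy2Checksum.txt").read_text().splitlines():
    # Step 3: IfBranch -- compare the stored checksum to the one just computed
    if line == calculated:
        # Step 4: Log step
        messages.append("Checksums match.")

print(messages)   # ['Checksums match.']
```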
Click Save and Close to save JobB's property settings. The figure below now depicts both JobA and JobB.
Establishing Relationships
By default, when a plan is triggered, all the jobs will run in parallel, unless we establish relationships. This is not the behavior we want. We want JobA to run first, and if it completes successfully, it should trigger JobB. Therefore, the first relationship we will create uses a Completion Trigger. Using the Map view, click on the icon above JobA (the Rubik's Cube) and, while holding down the mouse button, drag it in the direction of JobB. A blue arrow and purple outlines around the two jobs will appear as a result of this action. This indicates that we are establishing some type of relationship between the two objects.
When you release the mouse, an Add Completion Trigger/Constraint dialog will appear, as depicted in the image below. The "Trigger" option is the default relationship. This is what we want to configure, so keep this field as-is. In addition, "Success" is the default condition. We want to keep this default value as well. When Trigger and Success are configured, this means that JobA will have a completion trigger added, to only trigger JobB upon JobA's success.
Clicking OK to save these default values results in the completion trigger being added in the Map view, like this:
The solid arrow depicts a completion trigger, and the green color means JobA will only trigger JobB if it succeeds.
Triggering the Workflow
To trigger and test the workflow, use the Map view, which depicts real-time updates of jobs as they execute. First, expand CaseStudy2 if it is collapsed, so you can see the two jobs within the plan. Next, right-click in the white space area of CaseStudy2, then select the Trigger operation. You will see a status box next to each job in the Map view as they execute. For example, while a job is executing, its status box turns blue; if the job completes successfully, it turns green. Trigger the plan a few times, if you would like. If both status boxes turn green, both jobs completed successfully.
If you wish to view the job log files for the jobs (especially if you experienced a failure), click on the CaseStudy2 plan in the Object Navigation pane. By doing this, the "Instances" pane is updated to reflect instances for the CaseStudy2 plan only. If you do not see the Instances pane, click on Open/Close Panes in the lower right-hand corner of the UI, then check the "Instances" menu item. Once you have accessed the Instances pane, make sure the filters are not hiding the instances of the plan you just triggered. For example, to see today's instances for all status types (e.g. failed, succeeded, etc.), set the filters accordingly. See the image below. Note, you must click the refresh icon (far left, on the toolbar) whenever a filter is changed.
To view a job log file, right click on the desired job instance, then select View Log. Only Jobs generate log files. Plans do not.
Constraints
When JobB executes, we assume the two files we need, CaseStudy2.txt and CaseStudy2Checksum.txt, are present (as a reminder, the files are created by JobA). If these files were not present, JobB would fail. When a Job (or Plan) has pre-conditions that must be met before it can execute, they are called Constraints or Dependencies. ActiveBatch supports several types of Constraints: Job (i.e. a Plan or Job instance), File, Variable and Resource. In this sub-section we’ll examine File and Job constraints.
File Constraint
A File Constraint can be configured on any job or plan if the job or plan needs a file to be present (or absent) in order for the execution of the triggerable object to take place. To add a File Constraint, right click on JobB (in the Map view, or in the Object Navigation pane), then select Properties. JobB's property sheets will be tabbed in the Main view. Navigate to the Constraints tab where you will see a grid at the top of the dialog (in the General panel) that will initially be empty, since no constraints have been configured yet. Click on the Add button associated with the General panel, then select File Constraint. The File Constraint dialog will appear, as depicted in the image below.
First, the Label property can be left blank; the system will auto-assign a unique label to the constraint. You could also enter your own label, but let's leave it as-is and let the system assign one.
Next, the File Specification property specifies the file that, in our case, must be present to allow JobB to run. The File Specification to enter is ${app_path}\CaseStudy2.txt.
Leave the remaining properties as-is except check the "File must be" checkbox, and keep the default values of > (Greater than) 0 bytes. This means an empty file with 0 bytes would not satisfy the constraint.
Clicking the OK button adds the file constraint to the General list. See the image below. Observe the Label column in the grid. This is the system-assigned label that is also automatically added to the Constraint Logic property, as per the image below. When the Scheduler sees a label in the Constraint Logic property, it evaluates the constraint associated with that label (is the file present and > 0 bytes?). If these conditions are true, the constraint is satisfied and the job is ready for dispatch. If they are false, what happens next depends on other property settings: the job can fail immediately, or the Scheduler can check periodically, for a certain amount of time, to see if the constraint becomes satisfied. The default behavior is that the Scheduler will Wait and check every 2 minutes, for up to 10 minutes (see the image below). If the constraint is not met after 10 minutes, the job will fail with a constraint failure status. This means the job was not dispatched and therefore the payload did not execute.
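The default Wait behavior amounts to a polling loop, which can be sketched as follows. This is an illustrative Python sketch, not ActiveBatch code; the `check_every` and `timeout` parameters mirror the 2-minute and 10-minute defaults.

```python
import time
from pathlib import Path

def wait_for_file(path: Path, check_every: float = 120.0, timeout: float = 600.0) -> bool:
    """Poll until `path` exists and is larger than 0 bytes.

    Returns True when the constraint is satisfied, or False when the
    deadline passes -- the analogue of a constraint failure status.
    """
    deadline = time.monotonic() + timeout
    while True:
        if path.exists() and path.stat().st_size > 0:
            return True
        if time.monotonic() >= deadline:
            return False
        # Sleep until the next check, but never past the deadline.
        time.sleep(min(check_every, deadline - time.monotonic()))
```

Note that a False result here corresponds to the job never being dispatched, so its payload never runs.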
Since JobB uses two files that are created by JobA, we can add a second file constraint to JobB where it will not be allowed to run unless both files are present.
Click on the Add button again, select File Constraint, then enter a File Specification of ${app_path}\CaseStudy2Checksum.txt. Click on the "File must be" checkbox, and keep the default values of > (Greater than) 0 bytes. Click OK to save.
Notice in the image below that there are now two File constraints. The General list includes the new constraint, and the constraint logic was auto-updated to include "AND FC_app_path_CaseStudy2Checksum.txt". The AND means both files must be present for the constraint to be satisfied. Following the AND is the auto-generated label for the second file constraint. AND is the default operation added to the constraint logic when multiple constraints are added. This could be changed to another supported operation, such as OR, if only one of the files needed to be present. For our use case, AND is appropriate, since both files must be present.
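The evaluated constraint logic is equivalent to the following sketch. This is illustrative Python, not ActiveBatch code; each helper call models one auto-labeled file constraint, and the `app_path` parameter stands in for ${app_path}.

```python
from pathlib import Path

def file_ok(path: Path) -> bool:
    # One file constraint: the file must be present and greater than 0 bytes.
    return path.exists() and path.stat().st_size > 0

def constraint_logic_satisfied(app_path: Path) -> bool:
    # Mirrors "FC_... AND FC_...": both labels must evaluate to True.
    # Swapping `and` for `or` would model the OR operation instead.
    return (file_ok(app_path / "CaseStudy2.txt")
            and file_ok(app_path / "CaseStudy2Checksum.txt"))
```

With AND, the result is True only when both files exist and are non-empty, matching the behavior described above.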
Click Save and Close to save the CaseStudy2 plan, then right-click on it and select Trigger. If all goes well, the constraints should be satisfied and JobB should run when triggered by JobA. The instance of JobB will store constraint information in the Variables tab. To see this, right-click on the instance of JobB that includes the newly added file constraints, then select Properties. The instance properties will be tabbed in the Main view. Click on the Variables tab and expand the Constraint item in the list. You will see something similar to what is depicted below. Notice it states "True" next to both file constraints, meaning they evaluated to true since the files were present in the specified directory and greater than 0 bytes.
Job Constraint
A Job Constraint lets you indicate which Jobs (or Plans) must complete before the current Job (or Plan) can execute. For example, we could add a Job Constraint on JobB, specifying that it cannot run unless JobA has run successfully. It wouldn't make sense for JobB to run without JobA running first, as JobA creates the two files that JobB needs. But, you may be thinking, we already have a completion trigger that sets the order in which the jobs will run, JobA first, then JobB. And, we could certainly leave it at that.
However, if you want to ensure that JobB doesn't run unless JobA has run before it (for example, if someone inadvertently triggers JobB directly), adding a job constraint gives you that added protection. There are other ways to prevent a user from manually triggering a job, but for learning purposes, we will discuss how to add a Job Constraint to JobB, so it will not be allowed to run if JobA has not yet run. Again, in this example it is being added to prevent the accidental execution of JobB if it is directly triggered, as opposed to the intended design, which is to always trigger the plan. Note: Job and Plan Constraints can certainly be used in other types of scenarios, for purposes other than preventing execution due to accidental triggers.
Let's create the Job Constraint using the Map view. It will likely still be open in the Main view, but if it is not, right-click on the UsersGuideCaseStudies folder in the Object Navigation pane and select View > Map. Expand CaseStudy2 in the Map view canvas area. To create the job constraint, click on the icon above JobB (the Rubik's cube) and, while holding down the mouse button, drag it toward JobA. A blue arrow and purple outlines around the two jobs will appear as a result of this action, indicating that we are establishing some type of relationship between the two objects.
When you release the mouse, an Add Completion Trigger/Constraint dialog will appear, as depicted in the image below. The "Trigger" option is the default relationship. We don't want to use this option. Click on the dropdown and select Constraint.
After doing so, the dialog will change as per the image below.
For this Case Study, the only property to consider is “Type”. This indicates what condition must be satisfied as the dependency. The default of “On Job Success” is the most commonly selected pre-condition. Keep this property as-is. Like the file constraints added earlier, we will leave the Label property blank and let the system assign a label.
Click OK to save the constraint. The image will be updated in the Map view, depicting the constraint. Constraints are displayed with a dashed line, distinguishing them from completion triggers, which use a solid arrow. In addition, if you mouse over either relationship indicator, a tool tip will pop up describing what has been configured.
With the constraint configured via the Map view, let's see what JobB's constraints look like on the job's Constraints property sheet. Right-click on JobB and select Properties. The property sheets will be tabbed in the Main view. Click on the Constraints tab. Earlier, we added the two file constraints directly on the Constraints property sheet. You could add job constraints there as well; however, it is typically quicker to add them using the Map view, which is why we configured it that way.
In the image below, another constraint (JobA) was added to the General list. The Constraint Logic property was auto-updated with additional text: AND JobA. This means that all 3 conditions, the two file constraints and the job constraint, must be satisfied before JobB can run. Please note that when a Job configured with Job constraints sees its constraint job(s) complete, the constraint logic is automatically rechecked, outside of the "Check every" frequency specification.
Plan Completion Rule
Now that we’ve defined our jobs, this is a good time to discuss what constitutes success for our plan. We know we want JobA and JobB to run successfully.
The above figure is the default Plan Completion Rule. It is accessed by right-clicking on the plan and selecting Properties, then Completion. The All Completed in Success selection means that all jobs that run must complete successfully. This means that if you had two (2) jobs, JobA and JobB, and only JobA executed (JobB never ran at all), the plan would still be marked as successful ("not run" jobs are ignored). If you specifically need a particular job or jobs to run and complete successfully, you would select Custom as the Plan Completion Rule (see the image below).
The above figure shows unambiguously that both JobA and JobB must run and complete successfully for the CaseStudy2 plan to be considered successful. Note the Use as Plan’s Exit Code button. This allows you to specifically select a job/plan and have that specific object’s exit code be used for the plan's exit code. By default, the last completed job/plan is used.
Summary Case Study 2
We were tasked with creating two (2) jobs that run in succession, the second only when the first executes successfully. We created a Schedule to establish trigger dates, and a Calendar to filter out holidays. We learned about Plans and the benefits of using a plan as a mechanism for running nested jobs. We learned about setting Job Policies on a folder, so that child jobs created under the folder inherit those policies; policies allow us to preset properties, saving time and reducing errors when creating jobs. We were introduced to variables and how they can be used for substitution purposes within a job. We learned about two types of Constraints: file and job. Lastly, we learned that the Plan's Completion Rule is a configurable property that determines what must happen for the plan to be marked a success.
Note: Since this plan will run several times a day on Mondays, you may want to disable it. To disable the plan, right-click on the CaseStudy2 plan in the Object Navigation pane, then select Disable. This will stop it from executing. Alternatively, we could have disabled the plan’s schedule.