ActiveBatch includes more than 500 prebuilt Job Steps for reliable, end-to-end workflows that incorporate processes from a variety of systems and applications.
With hundreds of production-ready Job Steps, native integrations, and advanced scheduling tools, IT can build end-to-end workflows in half the time.
An ActiveBatch Workload Automation Success Story
The Indiana University Foundation (IUF) relied on UNIX- and Linux-based systems, a Microsoft Windows-based system with a SQL Server database, plus custom in-house applications.
Because the IUF relied on basic, solution-specific schedulers, the IT team had to manually trigger most of the jobs they ran, relying on custom scripts to monitor those jobs and to pass data between systems. As a result, the IUF resolved to implement an advanced IT automation solution.
How the Indiana University Foundation Succeeds with ActiveBatch
“With four major releases and several service packs per year, we’re constantly creating and changing our nonproduction environments for testing, demonstrations, and pilot projects. And that means a lot of data and code deployments.”
- The IUF uses ActiveBatch Alerts to automatically notify IT personnel via email or mobile when a job fails, enabling the IT team to resolve any issues before customers are affected.
- ActiveBatch’s in-depth reporting capabilities have given the IUF what it needs to schedule all of its jobs from a single solution: error messages, log files, job instances, and other reporting data can all be viewed in a single, unified window.
- The ActiveBatch Integrated Jobs Library includes over 500 prebuilt, platform-independent job steps, giving the IUF the ability to run reliable, end-to-end workflows that incorporate processes from a variety of systems and applications.
- By leveraging many of ActiveBatch’s event triggers (completion triggers, FTP triggers, file event triggers, and a dozen more), the IUF is able to run jobs back-to-back, optimizing its resources, improving uptime, and allowing the IT team to run more batch jobs in less time. A conceptual sketch of a file event trigger follows this list.
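To make the trigger concept concrete, here is a minimal sketch of what a hand-coded file event trigger might look like: poll a directory and launch a load job whenever a matching file arrives. The directory, file pattern, and job command are hypothetical, and in practice the IUF configures this behavior in ActiveBatch rather than maintaining code like this.

```python
import subprocess
import time
from pathlib import Path

# Hypothetical values for illustration only; these are not the
# foundation's actual paths or jobs.
WATCH_DIR = Path("/data/incoming")
TRIGGER_PATTERN = "donor_extract_*.csv"
JOB_COMMAND = ["/usr/local/bin/load_donor_extract.sh"]
POLL_SECONDS = 30


def watch_and_run() -> None:
    """Poll a directory and launch the job for each new matching file."""
    seen = set()  # files that have already triggered the job
    while True:
        for path in WATCH_DIR.glob(TRIGGER_PATTERN):
            if path not in seen:
                seen.add(path)
                subprocess.run(JOB_COMMAND + [str(path)])
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    watch_and_run()
```

A prebuilt trigger replaces this kind of polling loop, along with the error handling and logging a production version would also need.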
What Is Indiana University Foundation?
Industry: Not-for-Profit
Customer Site: Bloomington, Indiana, United States
Indiana University Foundation (IUF) is a not-for-profit corporation dedicated to maximizing private sector funding for IU. In fiscal year 2009-2010, Indiana University received more than $100 million in gifts from more than 100,000 individuals, corporations, and foundations. IUF manages an endowment of approximately $1.3 billion, administers about 6,000 gift accounts, and provides related fundraising services to IU and its donors.
Success Story Highlights
- Integrating disparate applications
- Breaking free from manual monitoring
- Running jobs based on set conditions
- Notifying staff of errors immediately via email alerts
- Leveraging reporting services
Managing $1.5 Billion In University Funds
Scheduling when the various software applications on your campus should exchange data might not seem like the most glamorous of needs, but it is essential, and it can waste large amounts of IT time and resources when it has to be done by hand.
Just ask Jay Sissom, manager of systems administration and customer support at Indiana University Foundation, a 250-person subset of Indiana University that manages more than $1.5 billion in funds for the university. The foundation’s responsibilities include tasks like soliciting contributions from donors and investing and monitoring funds so that the university gets the best returns possible.
To keep everything in sync, Sissom’s department runs 30 or 40 software jobs nightly to send information from one system to another. To do so, Sissom’s 18-member IT staff must make sure that communication between various applications (the donor information system and the accounting system, for example) runs smoothly. Some of the applications, such as the general ledger and the donor information system, are from Datatel. The foundation also runs UNIX and Linux and has a number of custom applications written in-house, such as its investing system, which connects with both the general ledger and the donor information system, runs on Microsoft Windows, and uses Microsoft SQL Server for its database.
That mix of operating systems and applications ruled out a number of job scheduling tools because they were specific to a particular operating system. Instead, the foundation needed a solution to handle job scheduling across disparate systems and applications.
Also, although Sissom could schedule operating system-specific jobs in advance through tools in the various operating systems, such as Windows Scheduler or Unix cron, it wasn’t an efficient approach. “Those worked,” he said, “but we had to build a lot of code to monitor the jobs, [since] those built-in tools don’t provide any monitoring.”
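That gap becomes obvious once you sketch what home-grown monitoring involves. The script below is a minimal illustration, not the foundation's actual code, of the kind of wrapper a team ends up writing around each cron- or Windows Scheduler-launched job: run the command, check the exit code, and email someone when it fails. The job name, command, and addresses are invented for the example.

```python
import smtplib
import subprocess
from email.message import EmailMessage

# Hypothetical values for illustration; the foundation's real jobs,
# hosts, and addresses are not described in this story.
JOB_NAME = "nightly_donor_export"
JOB_COMMAND = ["/usr/local/bin/export_donor_data.sh"]
ALERT_RECIPIENT = "oncall@example.org"
SMTP_HOST = "localhost"


def send_alert(subject: str, body: str) -> None:
    """Email the on-call staff member when the job fails."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "batch-monitor@example.org"
    msg["To"] = ALERT_RECIPIENT
    msg.set_content(body)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


def run_and_monitor() -> int:
    """Run one scheduled job and report failure by email."""
    result = subprocess.run(JOB_COMMAND, capture_output=True, text=True)
    if result.returncode != 0:
        send_alert(
            subject=f"{JOB_NAME} failed (exit {result.returncode})",
            body=result.stderr or "No error output captured.",
        )
    return result.returncode


if __name__ == "__main__":
    raise SystemExit(run_and_monitor())
```

Multiply that by the 30 or 40 jobs the foundation runs each night, plus the logic needed to pass data between systems, and the maintenance burden of the do-it-yourself approach becomes clear.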
The Problem With Manual Monitoring
Given the mix of applications and operating systems, scheduling and tracking jobs at the foundation used to mean manually monitoring the data exchanges between applications, exchanges that typically take place at night, when computer resources are most available and users are least impacted. Often, one job could not start until another job had finished running correctly.
When jobs were scheduled manually and problems were encountered overnight, the staff often had no way of knowing until a user complained the next day. That could mean running jobs during the day, consuming considerable resources and making users wait for correct data.
The Solution
To solve the problem, Sissom turned to a tool from Advanced Systems Concepts called ActiveBatch Workload Automation, which he and his staff have used for more than a year to schedule jobs in advance.
Instead of requiring the team to write code and monitor jobs manually, ActiveBatch provides “a nice GUI to schedule the jobs,” Sissom said, one that allows jobs to run based on set conditions. If job A succeeds, for example, ActiveBatch can be set to run the next job. If a job fails, someone on Sissom’s staff can be notified via email immediately. Eventually, he said, he hopes to add software that will skip the email notification for key jobs and phone a staff member immediately instead.
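As a rough illustration of the dependency rule the GUI replaces, the sketch below runs a chain of jobs in order and halts as soon as one fails. The commands are invented for the example; this is plain Python standing in for logic that ActiveBatch lets the team configure rather than code.

```python
import subprocess
from typing import Sequence

# Hypothetical nightly chain; these are not the foundation's actual jobs.
NIGHTLY_CHAIN = [
    ["/usr/local/bin/extract_from_donor_system.sh"],
    ["/usr/local/bin/load_into_general_ledger.sh"],
    ["/usr/local/bin/refresh_investing_reports.sh"],
]


def run_chain(jobs: Sequence[Sequence[str]]) -> bool:
    """Run jobs in order; stop as soon as one fails.

    Mirrors the "if job A succeeds, run the next job" rule described
    above, which the team sets up through the scheduler's GUI.
    """
    for command in jobs:
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Job failed, halting chain: {command[0]} "
                  f"(exit {result.returncode})")
            return False
    return True


if __name__ == "__main__":
    run_chain(NIGHTLY_CHAIN)
```

In the scheduler, a failed step at any point in a chain like this also raises the email alert described above.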
Reports are another plus for the system, since the software tracks what jobs ran when — or why a particular job failed to run. Partly because of ActiveBatch’s reporting capabilities, Sissom said, he hopes to eventually run every scheduled job in ActiveBatch. With the reporting functions available, he will then be able to look in one place to see exactly which jobs succeeded and failed each night. “Whenever there are problems, we don’t have to spend as much time researching it. ActiveBatch tells us the exact place the problem occurred,” Sissom explained. “Using the GUI, we can go in and look at the log to see what the error messages are and correct the underlying program.”
He’ll also be able to schedule jobs back-to-back, so that one begins as soon as the preceding one finishes; that will allow more batch jobs to run within a single night.
“It’s really improved our response to jobs that failed,” Sissom said. “It’s improved our uptime.” Instead of waiting for users to complain that data hasn’t been updated, he said, “In a lot of cases, we can fix problems and users won’t even know there was a problem.”
