Mar 14, 2016

Automated deployment to IBM DataPower using Java

Here is how you can automate a deployment to DataPower, using the code below in a Java project.

File: Datapower_SOMA_Request.java

package DataPower_SOMA;

/*
 * Author: Saptarshi Mandal
 */
import java.io.*;
import java.net.URL;
import java.net.URLConnection;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import java.util.Properties;

import javax.net.ssl.*;

import org.apache.commons.codec.binary.Base64;


public class Datapower_SOMA_Request {
    private static Properties param = new Properties();

    // Code to allow opening an insecure HTTPS connection: accept any
    // certificate presented by the DataPower XML Management Interface
    // without validating it
    static {
        try {
            TrustManager[] trustAllCerts = { new X509TrustManager() {
                public X509Certificate[] getAcceptedIssuers() {
                    return null;
                }

                public void checkClientTrusted(X509Certificate[] certs,
                        String authType) {
                }

                public void checkServerTrusted(X509Certificate[] certs,
                        String authType) {
                }
            } };
            SSLContext sc = SSLContext.getInstance("SSL");
            sc.init(null, trustAllCerts, new SecureRandom());
            HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());

            HostnameVerifier hv = new HostnameVerifier() {
                public boolean verify(String hostname, SSLSession session) {
                    return true;
                }
            };
            HttpsURLConnection.setDefaultHostnameVerifier(hv);
        } catch (Exception exception) {
            exception.printStackTrace();
        }
    }
    /**
     * @param args
     */
    public static void main(String[] args) throws Exception {
        // Load the deployment parameters before scanning the workspace
        readParameters();
        javaPostToDatapower connections = new javaPostToDatapower();
        String dir = param.getProperty("WorkSpacePath");

        File fDir = new File(dir);
        File[] files = fDir.listFiles();
        for (int i = 0; i < files.length; i++) {
            if (files[i].isDirectory()) {
                if (files[i].getName().endsWith("_dp"))
                    doIt(files[i], connections);
            }
        }
    }

    private static void doIt(File file, javaPostToDatapower connections) {

        File[] files = file.listFiles();
        for (int i = 0; i < files.length; i++) {
            if (!files[i].isDirectory()) {
                if (files[i].getName().endsWith(".xcfg")) {
                    try {
                        readAll(files[i].getAbsolutePath(), connections);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }

    public static String getParam(String paramName) {
        return param.getProperty(paramName);
    }

    private static void readParameters() {

        try {
            // Point this at your deployment properties file
            param.load(new FileInputStream(""));
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Reconstructed SOMA do-import envelope (assumed: the XML markup was
    // stripped from these strings); the base64-encoded .xcfg content is
    // written between BEFORE and AFTER. Adjust the domain and do-import
    // attributes for your environment.
    private static final String BEFORE = ""
            + "<env:Envelope xmlns:env=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<env:Body>"
            + "<dp:request xmlns:dp=\"http://www.datapower.com/schemas/management\" domain=\"default\">"
            + "<dp:do-import source-type=\"XML\">"
            + "<dp:input-file>";
    private static final String AFTER = "</dp:input-file>"
            + "</dp:do-import>"
            + "</dp:request>"
            + "</env:Body>"
            + "</env:Envelope>";
    private static void readAll(String path, javaPostToDatapower connections)
            throws Exception {
        // The output path was lost from the original; Import_Object.xml in
        // the workspace is an assumption
        File file = new File(getParam("WorkSpacePath") + File.separator
                + "Import_Object.xml");

        PrintStream ps = null;
        BufferedReader br = null;
        try {
            FileOutputStream fos = new FileOutputStream(file);
            ps = new PrintStream(fos);

            ps.print(BEFORE);

            StringBuilder content = new StringBuilder();
            br = new BufferedReader(new FileReader(path));
            while (true) {
                String row = br.readLine();
                if (row == null)
                    break;
                content.append(row);
            }

            byte[] bytesEncoded = Base64.encodeBase64(content.toString()
                    .getBytes());
            ps.print(new String(bytesEncoded));
            ps.print(AFTER);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (br != null)
                    br.close();
                if (ps != null) {
                    ps.close();
                    // The URL argument was lost; the "SOMAUrl" property name
                    // is an assumption
                    String output = sendRequest(getParam("SOMAUrl"),
                            file.getAbsolutePath(), "user", "password");
                    System.out.println(output);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    /**
     * Send a SOMA request file to the DataPower box over the XML Management
     * Interface.
     *
     * @param pUrl the XML Management Interface URL
     * @param pXmlFile2Send the SOMA request file to send
     * @param pUsername
     * @param pPassword
     * @return the response body
     * @throws Exception
     */
    public static String sendRequest(String pUrl, String pXmlFile2Send,
            String pUsername, String pPassword) throws Exception {
        String SOAPUrl = pUrl;
        String xmlFile2Send = pXmlFile2Send;
        String SOAPAction = "";

        // Create the connection where we're going to send the file.
        URL url = new URL(SOAPUrl);
        URLConnection connection = url.openConnection();
        HttpsURLConnection httpConn = (HttpsURLConnection) connection;
        httpConn.setRequestMethod("POST");
        httpConn.setDoOutput(true);
        // Open the input file. After we copy it to a byte array, we can see how
        // big it is so that we can set the HTTP Content-Length property.
        FileInputStream fin = new FileInputStream(xmlFile2Send);
        ByteArrayOutputStream bout = new ByteArrayOutputStream();

        // Copy the SOAP file to the open connection.
        copy(fin, bout);

        // Replace domainName in Request
        String soapRequest = bout.toString();

        // Convert into bytes
        byte[] b = soapRequest.getBytes();

        // Set the appropriate HTTP parameters.
        httpConn.setRequestProperty("Content-Length", String.valueOf(b.length));
        httpConn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        httpConn.setRequestProperty("SOAPAction", SOAPAction);

        // Create the Basic auth header value.
        // For Base64 encoding, Apache commons-codec is used.
        String authString = pUsername + ":" + pPassword;
        byte[] authEncBytes = Base64.encodeBase64(authString.getBytes());
        String authStringEnc = new String(authEncBytes);
        httpConn.setRequestProperty("Authorization", "Basic " + authStringEnc);

        // Everything's set up; send the XML that was read into b.
        OutputStream out = httpConn.getOutputStream();
        out.write(b);
        out.close();

        // Read the response and return it.
        InputStreamReader isr = new InputStreamReader(httpConn.getInputStream());
        BufferedReader in = new BufferedReader(isr);

        String inputLine;
        String output = "";
        while ((inputLine = in.readLine()) != null) {
            output = output + inputLine;
        }
        in.close();

        return output;
    }

    // copy method from E.R. Harold's book "Java I/O"
    public static void copy(InputStream in, OutputStream out)
            throws IOException {

        // do not allow other threads to read from the input or write to the
        // output while copying is taking place
        synchronized (in) {
            synchronized (out) {

                byte[] buffer = new byte[256];
                while (true) {
                    int bytesRead = in.read(buffer);
                    if (bytesRead == -1)
                        break;
                    out.write(buffer, 0, bytesRead);
                }
            }
        }
    }
}
File: javaPostToDatapower.java

package DataPower_SOMA;

/*
 * Author: Saptarshi Mandal
 */
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class javaPostToDatapower {
    public String excutePost(String targetURL, String urlParameters,
            String string, String string2) {
        HttpURLConnection connection = null;
        try {
            final String s = "user:password";
            final byte[] authBytes = s.getBytes(StandardCharsets.UTF_8);
            String userNamePassword = Base64.getEncoder().encodeToString(authBytes);
            System.out.println(targetURL);

            // Create connection
            URL url = new URL(targetURL);
            connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("POST");
            connection.setDoOutput(true);
            // The "Basic " prefix was missing in the original header value
            connection.setRequestProperty("Authorization", "Basic " + userNamePassword);
            connection.setRequestProperty("Content-Language", "en-US");

            // Send request
            DataOutputStream wr = new DataOutputStream(
                    connection.getOutputStream());
            wr.writeBytes(urlParameters);
            wr.close();

            // Get response
            InputStream is = connection.getInputStream();
            BufferedReader rd = new BufferedReader(new InputStreamReader(is));
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = rd.readLine()) != null) {
                response.append(line);
                response.append('\n');
            }
            rd.close();
            return response.toString();
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
        }
    }
}
Also create a properties file and add the entry below to it, for whatever location you want to create your "Import_Object.xml" in (this file contains the base64-encoded version of the DataPower object):


All three files need to be in the same location in your project! Let me know if you need any more help on this.

Sep 5, 2013

IBM WebSphere Deployments: Automation Best Practices to Lower Risk and Accelerate Time-to-Market

Handling complex application environments with manual scripts is a resource burden for all modern organizations. Deployment automation (also known as Application Release Automation) can assist with this and deliver a range of benefits, including:

- Faster application releases (leading to faster time-to-market)
- Faster and more reliable environment management (updates, scaling, etc.)
- Lower business risk, through improved compliance, audit and reporting

In this article, learn the basics of deployment automation, and how the free RapidDeploy™ tool can provide some surprising improvements to your WebSphere environments, including IBM WebSphere Application Server, IBM WebSphere Message Broker, IBM WebSphere MQ and IBM WebSphere DataPower.

Finally, learn how a tool like RapidDeploy can fit into your existing architecture, and how to get started with this industry-leading WebSphere automation tool.

Defining Deployment Automation

Deployment automation, also known as Application Release Automation (ARA), is an emerging trend identified by many analysts (such as Gartner) as important for organizations handling the speed and agility of software releases today. Without automation, applications are deployed with manually executed scripts and other activities that leave significant risk of human error, missed steps, and consequent production issues. A manual deployment process is also labour-intensive, and thus more costly in resources than a process that utilizes automation.

A deployment automation solution acts as a coordination mechanism across an organization’s applications, middleware and databases. It should also integrate with the existing build tools, source control and any artifact repositories. By applying a template approach to storing configurations, deployments can be repeated with identical outcomes through a simple click in a GUI (or through a simple CLI command). This not only accelerates the process of deployment by eliminating many of the gaps between manual steps, but also removes the risk of manual configuration errors going into production.

A strong ARA solution can do much more than just this, though. Because the software sits across all deployment activities, configuration and release data can be efficiently stored for future use. This should then provide the capability to roll back to previous deployments, snapshot configurations, maintain audit records, and manage deployment access rights.

Configuration Management

To be able to configure deployments consistently, a release process needs to be mindful of configuration management. By storing configurations in templates, the benefits of a “build once, deploy anywhere” philosophy can be appreciated. This not only accelerates the deployment of applications through the development process, but also means that those same applications can be readily scaled up to meet growing demands in the production environment.

Applying this to deployment automation, the configuration management of infrastructure and application assets is based on the concept of a "desired state." This is a definition of what a target system should look like. A template or model of a logical target system is defined and stored in a configuration management system (such as RTC, GIT or Subversion), and a release will bring a target system from its current state to its "desired state."

Model Driven Deployments

Model Driven Deployments are used to define a model for the automation we want to carry out in many target environments. For example, it could be a model for an IBM WebSphere Application Server application deployment (perhaps with associated dependencies such as a cluster definition, data source or JMS configuration) for a particular business application or component. This model will be made up of a series of steps that we want to carry out every time we deploy, whether in development, QA, a Test environment or Production. Model Driven Deployment processes are described as "Data Driven," which simply means the data and the logic are abstracted, with the data or variables themselves being applied at the time of deployment.

The significance of using a model to deploy a single automation type to many targets is that the only differences between execution in one environment and another are the variables; the model (or process followed) on each target will be exactly the same. This is an important concept. It is only the continual re-use of the same process with every deployment, in both target environments and in the delivery pipeline, that gives us the assurance that the release process itself will work, because in a typical development environment it will have been followed many dozens or even hundreds of times before reaching production.
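As a sketch of this "data driven" idea, the toy Java class below applies one fixed deployment model to different environments, where only the variables change. The step names and variable keys are hypothetical, purely for illustration; this is not RapidDeploy syntax.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// One model (the logic), reused in every environment; only the data differs.
public class ModelDrivenDeploy {

    // The shared deployment model: steps with ${placeholders}.
    private static final String[] MODEL = {
            "stopServer ${server}",
            "copyArtifact app.ear ${installDir}",
            "startServer ${server}" };

    // Apply one environment's variables to the shared model.
    public static List<String> resolve(Map<String, String> vars) {
        List<String> steps = new ArrayList<String>();
        for (String step : MODEL) {
            for (Map.Entry<String, String> e : vars.entrySet()) {
                step = step.replace("${" + e.getKey() + "}", e.getValue());
            }
            steps.add(step);
        }
        return steps;
    }

    public static void main(String[] args) {
        Map<String, String> qa = new LinkedHashMap<String, String>();
        qa.put("server", "qa-server01");
        qa.put("installDir", "/opt/apps/qa");

        Map<String, String> prod = new LinkedHashMap<String, String>();
        prod.put("server", "prod-server01");
        prod.put("installDir", "/opt/apps/prod");

        // Same process, different data: only the variables change.
        System.out.println(resolve(qa));
        System.out.println(resolve(prod));
    }
}
```

Because the resolved steps differ only in their variable values, the process exercised in QA is literally the same one that later runs in production.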

Configuration Drift and Snapshots

Once we have defined what our system should look like and provisioned, configured, and deployed it, how do we know that someone has not made a manual or unauthorized change to it?

We use a concept called "Configuration Drift" to identify this. Configuration drift occurs when the actual runtime configuration of a system differs from its desired state. This usually happens when someone logs onto a target server or system and makes a manual change to it, bypassing the process or tooling in place to control the system.

Configuration drift can only be readily identified if you have the capability to import or snapshot the current configuration of a system. Snapshots are the foundation that allows you to identify differences through comparisons. This could be comparing a target system with its desired state; or in fact the same target over time (my environment is not working now and it was last Friday—what's changed?); or two different environments (what is the difference between QA and System Test?).
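The snapshot comparison described above can be reduced to a minimal sketch: two configuration snapshots, held as key/value pairs, are diffed key by key, and any mismatch is reported as drift. The property names below are hypothetical; a real snapshot would cover the full runtime configuration of the target.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy drift detector: compare a desired-state snapshot against an actual one.
public class DriftDetector {

    // Report keys whose values differ between the desired and actual state.
    public static Map<String, String> drift(Map<String, String> desired,
            Map<String, String> actual) {
        Map<String, String> diffs = new LinkedHashMap<String, String>();
        for (Map.Entry<String, String> e : desired.entrySet()) {
            String actualValue = actual.get(e.getKey());
            if (!e.getValue().equals(actualValue)) {
                diffs.put(e.getKey(), e.getValue() + " -> " + actualValue);
            }
        }
        return diffs;
    }

    public static void main(String[] args) {
        Map<String, String> desired = new LinkedHashMap<String, String>();
        desired.put("jvm.maxHeap", "1024m");
        desired.put("datasource.maxConnections", "50");

        // Snapshot of the running system: someone changed the heap by hand.
        Map<String, String> actual = new LinkedHashMap<String, String>();
        actual.put("jvm.maxHeap", "2048m");
        actual.put("datasource.maxConnections", "50");

        System.out.println("Drift: " + drift(desired, actual));
        // prints: Drift: {jvm.maxHeap=1024m -> 2048m}
    }
}
```

The same comparison answers the other questions in the text: diff one target against itself over time, or diff QA against System Test.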

Automation in Action: Domestic & General Insurance

Domestic & General Insurance was implementing an OS platform migration for their IBM WebSphere Application Server environments. Maintenance and code release were both identified as risks, as these were being handled manually, and consequent inconsistencies had led to issues in production. This also prevented the company from producing concise audit trails of code and configuration change.

With over 100 web applications to deploy against a tight timescale, Domestic & General recognised that it would need to implement an automation solution to handle much of the deployment.

RapidDeploy was their chosen solution to manage this, handling the configuration and release of the IBM WebSphere Application Servers, as well as the IBM WebSphere Message Broker and IBM WebSphere MQ deployments. Through exploiting RapidDeploy’s features of template configurations, user roles and workflow scheduling, over 90 of the existing applications were migrated within three weeks of RapidDeploy’s implementation.

“Ultimately, RapidDeploy™ is making daily life easier and has freed us up from some tedious, arduous tasks,” concludes Robert O’Connor of Domestic & General Insurance. “We are delivering new web application capabilities to the business at a vastly increased speed, supporting our ambitions for growth.”

Continuous Delivery

Continuous Delivery is a practice that has grown as a discipline in its own right, but in the context of this whitepaper we are defining it as the continuous delivery of change, using not only the same code but also the same release process in every target environment from development to production. This is often organized into release "Pipelines" that define the route or process to be followed, such as quality gates, approvers, etc. There are many considerations to make if you are moving to a continuous delivery model, such as batch size, quality gates, etc., that go beyond the brief overview provided in this whitepaper.

The principle of Continuous Delivery is built upon the expectation of automation. By automating the process of deployment through a workflow, not only are individual deployments implemented faster, but the feedback loop for issues or defects is also accelerated. This frees up individuals to focus on high-value activities, and takes away low-level push-button tasks.


DevOps

As the influence of agile software development methods has grown, aligned closely with manufacturing principles like kanban, expectations have grown for the infrastructure teams within organizations to show similar flexibility.

Labeled as “DevOps,” this concept is more of a philosophy than a specific methodology. In practical terms, a DevOps approach to software releases can come as a natural consequence of the successful implementation of Continuous Delivery. Where Continuous Delivery can often be focused on the activities of developers and QA testers, the DevOps concept very specifically extends this to the wider IT department.

Regardless, the same requirements still underpin both. Providing a flexible infrastructure service to development teams, whilst retaining standards around consistency and reliability, is practically impossible without high levels of automation. Although we can relate how ARA solutions are sometimes used around DevOps environments, we need to be clear that there is no such thing as a ‘DevOps tool.’ As mentioned above, DevOps is a philosophy, not a methodology. However, sitting as the cornerstone of the release process, properly implemented Application Release Automation should provide an effective support to the practical considerations around the transition towards a DevOps approach.

And from the reverse perspective, you do not need to have DevOps ambitions to feel the benefits of implementing a deployment automation solution.

Roles & Responsibilities

Consistency, reliability and making the release process more robust are all based on the assumption that a common approach is used by all actors across the Application Lifecycle Management (ALM) process. This also makes the assumption that each of the users and roles has different capabilities based on their security context and area of responsibility.

In short, this means giving access privileges to individual users (or groups of users) to permit different activities. For example:

- Developers may have the ability to build and deploy applications into development
- Release managers possess the ability to deploy to QA
- System Admins have the ability to define new target environments
- Operations team members have the ability to approve or schedule releases into production
- Business Analysts possess the ability to view metrics and Management Information (MI) reports related to the release process, including how many deploys per day or week, to what environments, the success rate, etc.

Importantly, this also can be used to prevent issues such as development code being deployed directly into production without approval. The important point to note is all users are using the same process and tooling, which provides continuity throughout all environments.

An ARA or deployment automation solution should have the capability to define, group and manage roles to ensure that the correct release processes are duly followed. Ideally, this function should be represented within a GUI for administration by a business user.
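As a rough illustration of such role definitions (not any product's API; the role and action names below are invented for the example), here is a minimal sketch in Java mapping roles to permitted release actions:

```java
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Toy role-based permission table, mirroring the example roles above.
public class ReleasePermissions {

    public enum Action {
        DEPLOY_DEV, DEPLOY_QA, DEPLOY_PROD_APPROVE, DEFINE_TARGET, VIEW_REPORTS
    }

    private static final Map<String, Set<Action>> ROLES =
            new HashMap<String, Set<Action>>();
    static {
        ROLES.put("developer", EnumSet.of(Action.DEPLOY_DEV));
        ROLES.put("release-manager", EnumSet.of(Action.DEPLOY_DEV, Action.DEPLOY_QA));
        ROLES.put("sysadmin", EnumSet.of(Action.DEFINE_TARGET));
        ROLES.put("operations", EnumSet.of(Action.DEPLOY_PROD_APPROVE));
        ROLES.put("analyst", EnumSet.of(Action.VIEW_REPORTS));
    }

    // Every deployment request passes through the same check.
    public static boolean allowed(String role, Action action) {
        Set<Action> granted = ROLES.get(role);
        return granted != null && granted.contains(action);
    }

    public static void main(String[] args) {
        // A developer can deploy to development...
        System.out.println(allowed("developer", Action.DEPLOY_DEV));          // true
        // ...but cannot approve a production release.
        System.out.println(allowed("developer", Action.DEPLOY_PROD_APPROVE)); // false
    }
}
```

The point of the sketch is that the same gate sits in front of every environment, so development code cannot reach production without passing through a role that is explicitly granted that action.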


Self-Service

Self-service is a cornerstone of any successful deployment automation solution, enabling the users defined by the system to perform their own specific activities at will, in real time, based on the privileges they have been granted.

Adopting a self-service model is one of the greatest accelerators for many organizations. No longer do users have to wait for a system administrator to deploy a development environment, make a configuration change, etc. This not only frees up time so the team can concentrate on value-creating activities, but it also means that team members working in the development phases are not held up waiting for relatively menial and repetitive tasks to be carried out. It removes many individual bottlenecks.

Compliance & Audit

Awareness and consideration of SOX, PCI or other regulatory expectations can be an important factor when considering changes to a software deployment process. Manual deployment processes and custom automation frameworks will probably not provide a comparable depth of capability to a purpose-built ARA tool.

A robust deployment automation solution will maintain a log file of all job executions, environments defined by the system, their configurations, etc., along with the capability to snapshot and compare target systems (to identify any change that may occur outside of the defined process). This should provide users with a much greater ability to address audit requirements, and with more ease.


Reporting

With all the log data and configuration management information stored in an ARA system, the opportunity for valuable BI and reporting is clear. A good deployment automation framework should be able to provide visibility of the current state of the application infrastructure, as well as metrics assessing throughput, bottlenecks, etc.

Modern approaches to infrastructure management, including deployment automation and DevOps environments, will tend to be metric-driven through such systems. This allows operations, managers and others to see information such as what is installed (including versions), where it is installed, how long a deployment has taken, how many deployments have been made, etc.

Implementing an ARA solution

A solution that addresses the topics raised in this article is RapidDeploy™, the leading enterprise-class release automation solution.

Getting started with RapidDeploy is about to become incredibly easy with the release of RapidDeploy: Community Edition. This is a completely free and full-featured version of RapidDeploy, for handling up to five target environments. This will be available from, and prior to release you can register for access here:

RapidDeploy itself is a simple Java application framework that can run from within all common middleware, including IBM WebSphere. RapidDeploy can be handled through its own GUI, via Web Services, or through a CLI. There are a full range of plugins available that extend the RapidDeploy solution to handle all common enterprise environments, and you can even build custom plugins to handle bespoke technologies. Full documentation to get you started is available from:

If you would like help from MidVision, you can visit the community support forums via, or receive more formal guidance from our team via

The Global WebSphere Community @

MidVision @

Aug 29, 2013

Manage and Monitor the Middleware Superhighway While Containing Costs and Mitigating Risk

This article discusses the management and monitoring of transactional environments using a simple representation of the problem: a metaphor of highways as the network, traffic as the data, and on-ramps and off-ramps as the intersections where the data meets the network.

I was initially attracted to, and volunteered to work on, this strange technology called MQ back in 1996. As a person who was working on converting 3270 screen data into HTML via TN3270 sessions, I thought this was a novel way to get data from here to there: forget the screens, stick to just the data, and use any programming language on any platform I chose. Programmatically and architecturally, it made sense. At least more so than mapping 3270 characters to HTML!

As MQ matured, many companies were using it to assure transactions would get from point A to point B and, if they were lost or misplaced, could at least be located, most often in a default dead-letter location.

Fast forward to the current day, and transactions carry data across web services, REST, EDI, and enterprise message backbones. From afar, the transactions can be thought of as traffic going from point A to point B.

Think of looking down from far above a set of highways; cars and trucks and buses travelling along, entering and leaving this data highway. There are many features besides the actual highway to think about: lanes, intersections, tolls/gateways, merging lanes and also different languages for signage.

In real-world road travel, control devices are good enough to direct the traffic, but don’t do much for configuring changes to the patterns or alerting on problems, either before they occur or when a certain behavior or threshold is exceeded. Most of that depends on the manual intervention of traffic police or construction crews. Come to think of it, that reminds me of many IT organizations!

In business, ‘traffic’ usually contains important content needed to run the business, whether financial data, supply chain data, personal information, travel logistics, etc. If these transactions are lost or misplaced, it generally causes grief for the corporation responsible for the transaction, not to mention its customers: whether it is a monetary loss (trades, bank transactions), an information loss that prevents business from moving forward (such as for airlines and hotels), or supply chain orders, whether retail, wholesale, B2B, parts, inventory, etc.

In order to prevent such scenarios, companies spend considerable amounts of money on staff to make sure they can manage the entire transactional environment, beyond the already significant sums spent on hardware and software logistics. This is done in order to be able to make appropriate configuration changes to prepare for or react to that environment. Normally, this is accomplished via a buildup of scripts and process libraries in order to keep watch on these transactions and the environment in which they flow.

But many companies realize that a specialized software product focused on these tasks is needed for this purpose. In almost every case, the operating cost of a commercial product is far less than the budget necessary to build, enhance, maintain and support all of these responsibilities in-house.

Advantages of Building

- Complete control
- Tailored to unique business needs
- Ownership of the software code

Drawbacks to Building

- Development Time
- Training and Support
- Staying Current
- Integration with Other Applications
- Competitive Functionality
- Validation for Regulated Organizations
- Reporting tools?
- Employee Turnover
- Back Door Access
- Deploy now vs. when?
- Total Cost of Ownership
- Budgetary flexibility
- Employee resources available
- Opportunity Cost (time to market)
- All platform development
- Ability to execute on all phases

Even in the case of specialized software products, some part of the IT staff is generally dedicated to managing those as well. This can range from entire departments for some of the heavier and surprisingly expensive solutions that require lots of scripting or customization, to just one or two people for more simple and intuitive solutions, even within large organizations. As a consumer of these products I always found some vendors a bit haughty, in that they’d charge expensive fees to license products that in turn made me do tons of scripting and deployment. My philosophy as a consumer was that if I’m going to pay for it, I shouldn't be doing all the work.

Could you imagine hiring a contractor to build front porch steps onto your home and he says, “OK, start measuring and figure out how many bricks per step and count them out for me?”

So let’s go back to the traffic scenario. The advent of multi-platform, then multi-delivery environments like web services, REST, and EDI, in addition to messaging technologies, has made the chore of managing those environments problematic, because the work needs to be done on more than just the enterprise messaging backbone. This includes administration, configuration, and event monitoring. Typically, when a variety of delivery systems and associated tools are utilized within the transactional environment, correlation of problem events becomes very difficult, isolating and identifying problems becomes slow, and necessary problem resolution is often delayed.

Let us consider the enterprise message backbone as a main highway. The entry points of data can come from many interfaces to that highway: web services of many types, database queries, EDI interfaces, programs running in local or application server containers, transformation engines like Message Broker and IBM WebSphere DataPower, etc.

These entry points are the on-ramps to the e-message highway. Data jumps onto the e-message highway to be delivered elsewhere via an exit point or off-ramp. Sometimes the data exits the highway, goes to a rest stop, and gets transformed. Sometimes the data is merged with other data onto the e-message highway. When this occurs it may need to be sequenced in order to find out which merged data belongs with which. This is similar to friends traveling in different cars trying to follow each other to a location, but other cars are interspersed between them.

You’ll notice they’ll all get off on the same exit even if not directly in line with each other. As with any complex traffic, making it all flow is a multifaceted effort of preparation and adjustments.

In order to make sure this important data is not lost in this more complex environment, more than just the e-message highway needs to be managed, administered, and monitored. ALL of the on- and off-ramps, toll stations, rest stops, and weigh stations do as well, and in some cases, even the actual license plate number needs to be identified! Think about this simple analogy: if an exit (off-ramp) is closed, there’ll likely be a backup on the highway approaching it. Conversely, your highway may be traffic-free. Is that good or bad?

Well it depends upon the time of day and the normal pattern of activity on that highway. If it’s rush hour and a stretch of road is empty it may be because a main feeder (on-ramp) is down. Think about tangent devices to middleware such as IBM WebSphere DataPower. If it’s not feeding the transactions through to your middleware, then even a flawless middleware system won’t help the transaction.

While the domain of transactions originally lay with programmers, the fate of the programs controlling the actual handshake between systems has moved to platform software: originally via APPC, then EDI, then enterprise messaging, then web services, not solely for security but for the actual data exchange. Each performs its task admirably, but differently.

Since the programmers themselves are forced to use quicker methods to keep up with the projects and timelines they are responsible for, it is not unusual that shortcuts are taken. These shortcuts can cause many issues. While the developer talent is there, it is sometimes transient, usually understaffed, and under-estimated for time requirements. Because of this, there is a higher risk to maintaining transactional integrity. The characteristics that identify an MQ transaction are not the same characteristics that identify an application server transaction, and not the same as for a database transaction, and so on.

If the developers of these transactional systems don’t plan for this cross-identification, then when an error occurs, there can be correlation issues when determining where the transaction went wrong. The effort involved in the synchronization of the characteristics that identify the error, capture and write it to a data store, and correlate the information about the error, can significantly decrease performance or require significant storage requirements in large volume transaction sites.
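One common mitigation for the correlation problem described above is to stamp a single correlation identifier on a transaction at its entry point and carry it through every tier, so an error anywhere can be tied back to the same business transaction. The sketch below is illustrative only; the hop names are hypothetical, and a real system would carry the identifier in a message header (for example, an MQ correlation ID or an HTTP header) rather than in a log string.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Toy example: one correlation id travels with a transaction across tiers.
public class CorrelatedTransaction {

    // Each tier logs with the shared id instead of its own local key.
    public static List<String> trace(String correlationId, String[] hops) {
        List<String> log = new ArrayList<String>();
        for (String hop : hops) {
            log.add(hop + " corrId=" + correlationId);
        }
        return log;
    }

    public static void main(String[] args) {
        // The id is minted once, at the transaction's entry point.
        String corrId = UUID.randomUUID().toString();
        String[] hops = { "web-service", "mq-queue", "app-server", "database" };
        for (String line : trace(corrId, hops)) {
            System.out.println(line);
        }
    }
}
```

With a shared identifier, finding where a transaction went wrong becomes a lookup rather than a cross-system reconciliation exercise.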

Given the effort and costs involved (time, storage, performance), most transactional environments do not manage or monitor their transactions using this method. Therefore, it is imperative to become more proactive and isolate the points of failure in advance. If the points of failure can be identified and corrected before a major transactional error occurs, then the above costs are mitigated.

In order to do so, some time is needed up-front to identify the on-ramps and off-ramps, and the behavior on each that would suggest a problem is creeping up. After-the-fact problem-solving can be time-consuming, resource-consuming, frustrating, and fruitless. In that scenario, what is usually considered the most cost-effective solution is log analysis; but given that logs for different platforms sit in different locations and formats, this is a problematic way to solve an issue as well. While there are some good software solutions for this type of forensic log analysis, the method does not always allow for quick isolation, investigation, and action.

So why do companies spend so much on this infrastructure management? Because the e-commerce infrastructure provides convenience, and therefore customer satisfaction, and that is how you gain and retain customers. It is a quick way of gathering and disseminating business data, gaining faster time to market for new products and services, streamlining application business processes, and reducing operating costs.

How much? When I worked for a typical large NYC bank in the 1980s, a paper bank transaction cost about $1.10. Voice response technology brought it down to 50 cents. Home banking software to 25 cents. Today, internet banking brings it down to just 13 cents. The positive, operating-cost impact is eye-opening when you consider the national and global reach of banks and the growing volume of daily transactions.
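The per-transaction figures above are striking at scale. The sketch below runs the back-of-the-envelope math; the prices come from the text, while the daily volume of 1,000,000 transactions is an illustrative assumption, not a figure from the white paper.

```java
// Back-of-the-envelope math for the banking cost figures in the text.
// Per-transaction prices are from the white paper; the daily volume
// is an assumed, illustrative number.
public class ChannelCost {
    public static void main(String[] args) {
        double paper = 1.10;        // paper transaction, 1980s
        double voice = 0.50;        // voice response
        double homeBanking = 0.25;  // home banking software
        double internet = 0.13;     // internet banking today

        long dailyVolume = 1_000_000L; // assumed volume

        double dailySavings = (paper - internet) * dailyVolume;
        double reductionPct = 100 * (1 - internet / paper);

        // prints: Paper -> internet saves $970000.00 per day
        System.out.printf("Paper -> internet saves $%.2f per day%n", dailySavings);
        System.out.printf("Cost reduction: %.0f%%%n", reductionPct);
    }
}
```

Even at this modest assumed volume, moving from paper to internet banking is worth nearly a million dollars a day, which is why the operating-cost impact scales so dramatically with a bank's global reach.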

However, the reliance on e-commerce has a flip side. In today's IT world, the applications enabling the business processes are more distributed, more complex, and more prone to transaction slowdowns or outright failure. Reliance on forensic problem analysis can be time-consuming, which is not acceptable for any business process. The sheer volume and importance of these transactions make it essential to proactively manage and monitor that infrastructure in order to keep it continuously running.

The bottom line is that transactional systems and their associated infrastructure are essential for corporations to do business in today's world. How do you keep them running in a cost-effective manner and still provide agile, flexible, convenient services to customers or B2B partners?

The answer is clear: “Be proactive!”

The following list contains rules of thumb to enable proactivity.

Use products or solutions that:

    Run on standards-based platforms and support standard software interfaces so that you do not paint yourself into a corner with proprietary systems that make change difficult and costly.

    Allow you to automate management procedures, to significantly increase your efficiency, versus having limited internal staff do everything in a reactive mode.

    Allow you to manage events at ALL the locations of the transactional middleware infrastructure, on the main highway, and the on-ramps and off-ramps.

    Provide an easy, intuitive interface, limit deployment time, and reduce maintenance effort.

Operating inefficiency means wasted (cost) dollars, hurting the bottom line. Loss of productivity means even more wasted (revenue) dollars, hurting the top line. Identifying and deploying the most efficient and operationally cost-effective monitoring and management solution has been proven to increase business process profitability—a core goal of every organization.

White paper: “Managing & Monitoring Transactions on the Middleware Superhighway”
Author: Peter D’Agosta, Product Manager, Avada Software

Lessons learned from an IT Veteran

Perhaps it’s due to lessons learned over 30+ years in IT development, support, administration, architecture, planning, and product management, or maybe it’s because I gravitate toward new possibilities, but I have developed a core belief that simplifying IT is the best approach to get things accomplished. In a discipline notorious for making the complicated even more complicated, my goal is simply to remove the unnecessary complications from otherwise efficient processes.

While the acceptance of open source has shifted methods and techniques quite a bit in recent years, most IT people, especially those who have been in the field for 20+ years, have a similar experience early in their career: while in the midst of trying to solve an urgent problem, either operationally or programmatically, many IT brethren would rather watch you squirm than give you a simple syntax or a reference to suitable material that would lead to quicker resolution. I understand the 'teach them to fish' philosophy, but when Mrs. O'Leary's barn is burning you need one direct answer to extinguish the fire, not four or five cascading questions and a treasure map to get there.

Before I knew Unix, getting a Unix admin to give you the syntax of some arcane command (grep, ps -ef, ...) was like pulling teeth from an elephant. When I first learned z/OS I was given only a command-line interface, later to discover there was TSO (F-keys, menus, shortcuts). My first impression of IT people was that they resembled a fraternity: you had to do some crazy stuff and show you were worthy before they helped you out. After scrounging for vendor docs and creating lots of 'cheat sheets' to put all the commands at my cut-and-paste fingertips, I could finally concentrate on the problem at hand and not the syntax. Of course, using a GUI would have been out of the question, because it was for 'end users'!

Why am I bringing this up? Because it brings awareness to the fact that significantly improving productivity is more important than being a member of the IT "know-it-all" fraternity. At one time, while I was responsible for instructional training, I realized the ability to "keep it simple" was crucial to the position. Another enlightening moment came when I was taking graduate courses and someone asked me whether I was a computer science major or an IT major. I didn't really know the distinction, and like all creatures of both pursuits I sought out books, manuals, and periodicals. Sorry, Yahoo and Google users, but we had to do it the old-fashioned way back then. Plus, I needed something to do in between submitting my 'batch jobs' to be processed!

I realized that I was not trying to solve equations for orbit trajectories, or to figure out where the Fibonacci sequence hits seven digits. What I was essentially trying to do was move data to another place, perhaps in a different format, and make sure it showed up there. That basic understanding of being an IT person (and not a computer science person) has led me happily around the globe and to interesting companies: initially Pan Am, Volvo, and Prodigy (yes, the email program I developed was new and innovative at the time, but it was essentially still 'move this data from here to there, and make sure it arrives in a format we can all read'); then later scores of F1000 companies as a consultant for technologies like MQ messaging, portal servers, web services, and basically any transactional delivery system environment.

The Global WebSphere Community @
Avada Software @