All of the biggest technological inventions created by man - the airplane, the automobile, the computer - say little about his intelligence, but speak volumes about his laziness. Technology... the knack of so arranging the world that we don't have to experience it.
Managing complex application environments with manual scripts is a resource burden for
all modern organizations. Deployment automation (also known as Application
Release Automation) can assist with this and deliver a range of benefits, including:
- Faster application releases (leading to faster time-to-market)
- Faster and more reliable environment management (updates, scaling, etc.)
- Reduced business risk, through improved compliance, audit and reporting
In this article, learn the basics of deployment automation, and how the free
RapidDeploy™ tool can provide some surprising improvements to your WebSphere
environments, including IBM WebSphere Application Server, IBM WebSphere Message
Broker, IBM WebSphere MQ and IBM WebSphere DataPower.
You will also learn how a tool like RapidDeploy can fit into your existing architecture, and
how to get started with this industry-leading WebSphere automation tool.
Defining Deployment Automation
Deployment automation, also known as Application Release Automation (ARA), is an emerging
trend identified by many analysts (such as Gartner: http://www.gartner.com/id=2477020)
as important for organizations to handle the speed and agility of software
releases today. Without automation, applications are deployed with manually executed
scripts and other activities that leave significant risk for human error,
missed steps, and consequent production issues. A manual deployment process is
also labour-intensive, thus will be more costly in resources than a process
that utilizes automation.
A deployment automation solution acts as a coordination mechanism across an
organization’s applications, middleware and databases. It should also integrate
with the existing build tools, source control and any artifact repositories. By
applying a template approach to storing configurations, deployments can be
repeated with identical outcomes through a simple click in a GUI (or through a
simple CLI command). This not only accelerates the process of deployment by
eliminating many of the gaps between manual steps, but also removes the risk of
manual configuration errors going into production.
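As a rough illustration of the template idea (this is not RapidDeploy's actual CLI or file format; the file name, fields and steps below are invented), a short Python sketch shows how a stored template can drive the same deployment every time, whether triggered from a GUI or a command line:

```python
import json

def load_template(path):
    """Read a stored deployment template (hypothetical JSON layout)."""
    with open(path) as f:
        return json.load(f)

def deploy(template, environment):
    """Run the same templated steps against any environment."""
    for step in template["steps"]:
        print(f"[{environment}] {step['name']}: {step['action']}")
        # A real tool would call middleware APIs here; this sketch only prints.

if __name__ == "__main__":
    template = load_template("webapp-deploy.json")  # hypothetical template file
    deploy(template, "qa")
```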
A strong ARA solution can do much more than just this, though. Because the
software sits across all deployment activities, configuration and release data
can be efficiently stored for future use. This should then provide the
capability to roll back to previous deployments, snapshot configurations,
maintain audit records, and manage deployment access rights.
To be able to configure deployments consistently, a release process needs to be
mindful of configuration management. By storing configurations in templates,
the benefits of a “build once, deploy anywhere” philosophy can be appreciated.
This not only accelerates the deployment of applications through the
development process, but also means that those same applications can be readily
scaled up to meet growing demands in the production environment.
Applying this to deployment automation, the configuration management of infrastructure
and application assets is based on the concept of a "desired state."
This is a definition of what a target system should look like. A template or
model of a logical target system is defined and stored in a configuration
management system (such as RTC, Git or Subversion), and a release will bring a
target system from its current state to its "desired state."
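A minimal sketch of the desired-state idea, with made-up configuration keys and values, might look like this: the release computes the difference between the stored desired state and what is currently on the target, and applies only those changes.

```python
# Minimal desired-state sketch; keys and values are illustrative, not tied
# to any specific product or middleware configuration schema.
desired = {"jvm_heap_mb": 2048, "datasource": "jdbc/orders", "cluster_members": 4}
current = {"jvm_heap_mb": 1024, "datasource": "jdbc/orders", "cluster_members": 2}

def reconcile(current, desired):
    """Return the changes needed to bring a target to its desired state."""
    return {key: (current.get(key), wanted)
            for key, wanted in desired.items()
            if current.get(key) != wanted}

for setting, (before, after) in reconcile(current, desired).items():
    print(f"update {setting}: {before} -> {after}")
```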
Model Driven Deployments
Model Driven Deployments are used to define a model for the automation we want to
carry out in many target environments. For example, it could be a model for an
IBM WebSphere Application Server application deployment (perhaps with
associated dependencies such as a cluster definition, data source or JMS
configuration) for a particular business application or component. This model
will be made up of a series of steps that we want to carry out every time we
deploy, whether in development, QA, a Test environment or Production. Model
Driven Deployment processes are described as "Data Driven," which
simply means the data and the logic are abstracted, with the data or variables themselves
being applied at the time of deployment.
The significance of using a model to deploy a single automation type to many
targets is that the only differences between execution in one environment and
another are the variables—the model (or process followed) on each target will
be exactly the same. This is an important concept. It is only the continual
re-use of the same process with every deployment, in both target environments
and in the delivery pipeline, that gives us the assurance that the release process
itself should work. This is because in a typical development environment, it
will have been followed many dozens or even hundreds of times before reaching production.
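To make the "same model, different variables" point concrete, here is a small Python sketch; the step names, hostnames and JDBC URLs are purely illustrative assumptions:

```python
# One model (the ordered steps), reused everywhere; only the variables differ.
MODEL = ["stop_server", "update_datasource", "install_application", "start_server"]

VARIABLES = {  # hypothetical per-environment values
    "dev":  {"host": "dev-was01",  "datasource_url": "jdbc:db2://dev-db:50000/APP"},
    "qa":   {"host": "qa-was01",   "datasource_url": "jdbc:db2://qa-db:50000/APP"},
    "prod": {"host": "prod-was01", "datasource_url": "jdbc:db2://prod-db:50000/APP"},
}

def run_model(environment):
    settings = VARIABLES[environment]
    for step in MODEL:                      # identical process in every environment
        print(f"{environment}: {step} on {settings['host']}")

run_model("qa")
run_model("prod")   # same steps, different variables
```

Because production runs exactly the steps that were exercised dozens of times in development and QA, the process itself arrives in production already proven.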
Configuration Drift and Snapshots
Once we have defined what our system should look like and provisioned, configured,
and deployed it, how do we know that someone has not made a manual or unauthorized
change to it?
We use a concept called "Configuration Drift" to identify this. Configuration
drift occurs when the state or the actual runtime configuration differs from
its desired state. This usually happens when someone logs onto a target server
or system and makes a manual change to it, bypassing the process or tooling in
place to control the system.
Configuration drift can only be readily identified if you have the capability to import or
snapshot the current configuration of a system. Snapshots are the foundation
that allows you to identify differences through comparisons. This could be
comparing a target system with its desired state; or in fact the same target
over time (my environment is not working now and it was last Friday—what's
changed?); or two different environments (what is the difference between QA and Production?).
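A snapshot-and-compare routine can be sketched in a few lines of Python; the configuration keys and values below are invented for illustration:

```python
# Drift detection sketch: a snapshot is just a captured configuration, and
# comparing any two snapshots (desired vs. actual, last Friday vs. today,
# QA vs. Production) reveals what has changed.
def diff(snapshot_a, snapshot_b):
    keys = set(snapshot_a) | set(snapshot_b)
    return {k: (snapshot_a.get(k), snapshot_b.get(k))
            for k in keys if snapshot_a.get(k) != snapshot_b.get(k)}

last_friday = {"app_version": "2.3.1", "jvm_heap_mb": 2048, "log_level": "INFO"}
today       = {"app_version": "2.3.1", "jvm_heap_mb": 2048, "log_level": "DEBUG"}

for key, (before, after) in diff(last_friday, today).items():
    print(f"drift in {key}: {before} -> {after}")   # someone changed log_level
```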
Automation in Action: Domestic & General
Domestic & General Insurance was implementing an OS platform migration for their IBM
WebSphere Application Server environments. Maintenance and code release were
both identified as risks, as these were both being handled manually, and
consequent inconsistencies had led to issues in production. This also prevented
the organization from producing concise audit trails of code and configuration change.
With over 100 web applications to deploy against a tight timescale, Domestic &
General recognised that it would need to implement an automation solution to
handle much of the deployment.
RapidDeploy was their chosen solution to manage this, handling the configuration and release
of the IBM WebSphere Application Servers, as well as the IBM WebSphere Message
Broker and IBM WebSphere MQ deployments. Through exploiting RapidDeploy’s
features of template configurations, user roles and workflow scheduling, over
90 of the existing applications were migrated within three weeks of RapidDeploy's implementation.
“RapidDeploy™ is making daily life easier and has freed us up from some tedious,
arduous tasks,” concludes Robert O’Connor of Domestic & General Insurance.
“We are delivering new web application capabilities to the business at a vastly
increased speed, supporting our ambitions for growth.”
Continuous Delivery is a practice that has grown as a discipline in its own right, but in
the context of this whitepaper we are defining it as the continuous delivery of
change, using not only the same code but also the same release process in every
target environment from development to production. This is often organized into
release "Pipelines" that define the route or process to be followed,
such as quality gates, approvers, etc. There are many considerations to make if
you are moving to a continuous delivery model, such as batch size, quality
gates, etc., that go beyond the brief overview provided in this whitepaper.
The principle of Continuous Delivery is built upon the expectation of automation.
By automating the process of deployment through a workflow, not only are
individual deployments implemented faster, but the feedback loop for issues or
defects is also accelerated. This frees up individuals to focus on high-value
activities, and takes away low-level push-button tasks.
As the influence of agile software development methods has grown, aligned
closely with manufacturing principles like kanban, so expectations for the
infrastructure teams within organizations to show similar flexibility have grown.
Known as “DevOps,” this concept is more of a philosophy than a specific methodology.
In practical terms, a DevOps approach to software releases can come as a
natural consequence of the successful implementation of Continuous Delivery.
Where Continuous Delivery can often be focused on the activities of developers
and QA testers, the DevOps concept very specifically extends this to the wider IT
organization. However, the same requirements still underpin both. Providing a flexible infrastructure
service to development teams, whilst retaining standards around consistency and
reliability, is practically impossible without high levels of automation. Although
we can describe how ARA solutions are sometimes used in DevOps environments,
we need to be clear that there is no such thing as a ‘DevOps tool.’ As
mentioned above, DevOps is a philosophy, not a methodology. However, sitting as
the cornerstone of the release process, properly implemented Application
Release Automation should provide an effective support to the practical
considerations around the transition towards a DevOps approach.
Viewed from the reverse perspective, you do not need to have DevOps ambitions to feel
the benefits of implementing a deployment automation solution.
Roles & Responsibilities
Improving reliability and making the release process more robust are all based on the
assumption that a common approach is used by all actors across the Application
Lifecycle Management (ALM) process. This also makes the assumption that each of
the users and roles has different capabilities based on their security context
and area of responsibility.
In short, this means giving access privileges to individual users (or groups of
users) to permit different activities. For example:
- Developers may have the ability to build and deploy applications into development
- QA managers possess the ability to deploy to QA
- Admins have the ability to define new target environments
- Operations team members have the ability to approve or schedule releases into production
- Analysts possess the ability to view metrics and Management Information (MI) reports related to the release process, including how many deploys per day or week, to what environments, the success rate, etc.
This can also be used to prevent issues such as development code being deployed
directly into production without approval. The important point to note is all
users are using the same process and tooling, which provides continuity
throughout all environments.
An ARA or deployment automation solution should have the capability to define,
group and manage roles to ensure that the correct release processes are duly
followed. Ideally, this function should be represented within a GUI for
administration by a business user.
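The role mapping described above can be pictured as a simple permissions table. The role and action names in this Python sketch are assumptions for illustration, not any particular product's security model:

```python
# Minimal role-based access sketch with invented roles and actions.
PERMISSIONS = {
    "developer":    {"deploy:dev"},
    "qa_manager":   {"deploy:qa"},
    "system_admin": {"define:environment"},
    "operations":   {"approve:prod", "schedule:prod"},
    "analyst":      {"view:reports"},
}

def allowed(role, action):
    return action in PERMISSIONS.get(role, set())

print(allowed("developer", "deploy:dev"))    # True
print(allowed("developer", "approve:prod"))  # False: development code cannot be
                                             # pushed straight into production
```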
Self-service is a cornerstone of any successful deployment automation solution, enabling the
users defined by the system to perform their own specific activities at will,
in real time, based on the privileges they have been granted.
Adopting a self-service model is one of the greatest accelerators for many
organizations. No longer do users have to wait for a system administrator to
deploy a development environment, make a configuration change etc. This not
only releases time so the team can concentrate on value-creating activities,
but it also means that team members working in the development phases are not
held up, waiting for relatively menial and repetitive tasks to be carried out.
It removes many individual bottlenecks.
Compliance & Audit
Awareness and consideration of SOX, PCI or other regulatory expectations can be an important
factor when considering changes to a software deployment process. Manual
deployment processes and custom automation frameworks will probably not provide
a comparable depth of capability to a purpose-built ARA tool.
A robust deployment automation solution will maintain a log file of all job
executions, environments defined by the system, their configurations, etc.,
along with the capability to snapshot and compare target systems (to identify
any change that may occur outside of the defined process). This should provide
users with a much greater ability to address audit requirements, and with more confidence.
With all the log data and configuration management information stored in an ARA
system, the opportunity for valuable BI and reporting is clear. A good
deployment automation framework should be able to provide visibility of the
current state of the application infrastructure, as well as metrics assessing
throughput, bottlenecks, etc.
Modern approaches to infrastructure management, including deployment automation and
DevOps environments, will tend to be metric-driven through such systems. This
allows operations, managers and others to see information such as what is
installed (including versions), where it is installed, how long a deployment
has taken, how many deployments have been made, etc.
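For example, given a log of deployment records (the field names here are assumed, not a specific product's schema), the basic metrics mentioned above fall out with a few lines of Python:

```python
from collections import Counter

# Hypothetical audit records captured by a deployment automation tool.
audit_log = [
    {"env": "dev",  "app": "orders", "version": "2.3.0", "success": True},
    {"env": "qa",   "app": "orders", "version": "2.3.0", "success": True},
    {"env": "qa",   "app": "orders", "version": "2.3.1", "success": False},
    {"env": "prod", "app": "orders", "version": "2.3.0", "success": True},
]

deploys_per_env = Counter(record["env"] for record in audit_log)
success_rate = sum(record["success"] for record in audit_log) / len(audit_log)

print(deploys_per_env)                      # how many deployments, and where
print(f"success rate: {success_rate:.0%}")  # e.g. 75%
```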
Implementing an ARA solution
One tool designed to address the topics raised in this article is RapidDeploy™, the leading
enterprise-class release automation solution.
Getting started with RapidDeploy is about to become incredibly easy with the release of
RapidDeploy: Community Edition. This is a completely free and full-featured
version of RapidDeploy, for handling up to five target environments. This will
be available from www.midvision.com, and prior to release you can register for
access here: http://www.midvision.com/rapiddeploy-community-edition-earlyregistrations
RapidDeploy itself is a simple Java application framework that can run from within all
common middleware, including IBM WebSphere. RapidDeploy can be operated through
its own GUI, via Web Services, or through a CLI. There is a full range of
plugins available that extend the RapidDeploy solution to handle all common
enterprise environments, and you can even build custom plugins to handle bespoke
technologies. Full documentation to get you started is available from: http://docs.midvision.com/LATEST/
If you would like help from MidVision, you can visit the community support forums
or receive more formal guidance from our team via firstname.lastname@example.org.
Sources: The Global WebSphere Community @ websphereusergroup.org; MidVision @ www.midvision.com
This article discusses management and monitoring of transactional environments by
using a simple representation of the problem: a metaphor of highways as the
network, traffic as the data, and on-ramps and off-ramps as the intersections where
the data meets the network.
I was initially attracted to, and volunteered to work on, this strange technology called
MQ back in 1996. As a person who was working on converting 3270 screen data
into HTML via TN3270 sessions, I thought this was a novel way to get data from
here to there, forget the screens, stick to just the data, and use any
programming language on any platform I chose. Programmatically and
architecturally, it made sense. At least more so than mapping 3270 characters.
As MQ matured, many companies were using it to assure transactions would get from
point A to point B and, if they were lost or misplaced, could at least be
located most often in a default dead-end location.
Fast forward to the current day, and transactions carry data across web services,
REST, EDI, and enterprise message backbones. From afar, the transactions can be
thought of as traffic going from point A to point B.
Imagine looking down from far above a set of highways; cars and trucks and buses
travelling along, entering and leaving this data highway. There are many
features besides the actual highway to think about: lanes, intersections,
tolls/gateways, merging lanes and also different languages for signage.
In real world road travel, control devices are good enough to direct
the traffic, but don't do much for configuring changes to the patterns or
alerting on problems either before they occur or when a certain behavior or
threshold is exceeded. Most of that is dependent upon the manual intervention
of traffic police or construction crews. Come to think of it, that reminds me
of many IT organizations!
In business, ‘traffic’ usually contains important content to run the
business, whether financial data, supply chain data, personal information,
travel logistics, etc. If these transactions are lost or misplaced, it
generally causes grief for the corporation responsible for the transaction, not
to mention their customers; whether it is a monetary loss (trades, bank
transactions), an information loss that prevents business from moving forward
(such as airlines, hotels), or lost supply chain orders, whether retail, wholesale, B2B,
parts, inventory, etc.
In order to prevent such scenarios, companies spend considerable amounts
of money on staff to make sure they can manage the entire transactional environment
beyond the already significant sums spent for logistics of hardware and
software. This is done in order to be able to make appropriate configuration
changes to prepare or react to that environment. Normally, this is accomplished
via a buildup of scripts and process libraries in order to keep watch on these
transactions and the environment in which they flow.
But many companies realize that a specialized software product
focused on these tasks is needed for this purpose. In almost every case, the
operating cost of a commercial product is far less than the budget necessary to
build, enhance, maintain and support all of these responsibilities in-house.
In the case of specialized software products, some part of the IT staff is
generally dedicated to managing those as well. This can range from entire
departments for some of the heavier and surprisingly expensive solutions that
require lots of scripting or customization, to just one or two people for more
simple and intuitive solutions—even within large organizations. As a consumer
of these products I always found some vendors a bit haughty in that they’d
charge expensive fees to license products that in turn made me do tons
of scripting and deployment. My philosophy as a consumer was that if I’m going
to pay for it, I shouldn't be doing all the work.
Could you imagine hiring a contractor to build front porch steps
onto your home and he says, “OK, start measuring and figure out how many bricks
per step and count them out for me?”
Let's go back to the traffic scenario. The advent of multi-platform, then
multi-delivery environments like web services, REST, and EDI in addition to
messaging technologies has made the chore of managing those environments
problematic because the work needs to be done on more than just the enterprise
messaging backbone. This includes administration, configuration, and event
monitoring. Typically when a variety of delivery systems and associated tools
on each are utilized within the transactional environment, it makes correlation
of problem events very difficult, isolating and
identifying problems slow, and necessary problem resolution is often delayed.
Now consider the enterprise message backbone as a main highway. The entry points of
data can come from many interfaces to that highway: web services of many types,
database queries, EDI interfaces, programs running in local or application
server containers, transformation engines like Message Broker, and IBM WebSphere
DataPower. These entry points are the on-ramps to the e-message highway. Data jumps onto the
e-message highway to be delivered elsewhere via an exit point or off-ramp.
Sometimes the data exits the highway, goes to a rest stop, and gets
transformed. Sometimes the data is merged with other data onto the e-message
highway. When this occurs it may need to be sequenced in order to find out
which merged data belongs with which. This is similar to friends traveling in
different cars trying to follow each other to a location, but other cars are
interspersed between them.
You'll notice they'll all get off at the same exit even if not directly in line with
each other. As with any complex traffic, making it all flow is a multifaceted
effort of preparation and adjustments.
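One common way to keep merged traffic sortable is to stamp each message with a group (correlation) identifier and a sequence number when it enters the highway. The field names in this Python sketch are invented, not a specific messaging product's headers:

```python
# Regroup interleaved messages by correlation id and restore their order.
incoming = [
    {"group": "order-1001", "seq": 2, "body": "payment"},
    {"group": "order-1002", "seq": 1, "body": "header"},
    {"group": "order-1001", "seq": 1, "body": "header"},
    {"group": "order-1001", "seq": 3, "body": "shipping"},
]

def regroup(messages):
    groups = {}
    for msg in messages:
        groups.setdefault(msg["group"], []).append(msg)
    return {g: sorted(msgs, key=lambda m: m["seq"]) for g, msgs in groups.items()}

for group, msgs in regroup(incoming).items():
    print(group, [m["body"] for m in msgs])
```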
In order to make sure this important data is not lost in this more complex environment,
more than just the E-message highway needs to be managed, administered, and
monitored. ALL of the on and off ramps, toll stations, rest stops, and weigh
stations do as well, and in some cases, even the actual license plate number
needs to be identified! Think about this simple analogy. If an exit (off-ramp)
is closed, there’ll likely be a backup on the highway approaching it.
Conversely, your highway may be traffic-free. Is that good or bad? That
depends upon the time of day and the normal pattern of activity on that
highway. If it’s rush hour and a stretch of road is empty it may be because a
main feeder (on-ramp) is down. Think about tangent devices to middleware such
as IBM WebSphere DataPower. If it’s not feeding the transactions through to
your middleware, then even a flawless middleware system won't help the business.
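A baseline comparison of this kind can be sketched very simply; the per-hour figures and the 50% tolerance below are made up purely to show the idea:

```python
# Compare observed traffic with the normal pattern for that time of day and
# flag the "empty highway at rush hour" case. All numbers are illustrative.
BASELINE_MSGS_PER_MIN = {8: 900, 9: 1200, 12: 600, 17: 1100, 23: 50}

def check(hour, observed, tolerance=0.5):
    expected = BASELINE_MSGS_PER_MIN.get(hour)
    if expected is None:
        return "no baseline for this hour"
    if observed < expected * (1 - tolerance):
        return f"ALERT: only {observed}/min at {hour}:00 -- is an on-ramp down?"
    return "normal"

print(check(9, 40))    # rush hour with almost no traffic -> alert
print(check(23, 40))   # quiet hour, 40/min is normal
```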
While the domain of transactions was originally with programmers, the fate of the
programs controlling the actual handshake between systems has moved to platform
software, originally via APPC, then EDI, then enterprise messaging, then web
services, not solely for security but for the actual data exchange. Each
performs its task admirably, but differently.
Since the programmers themselves are forced to use quicker methods
to keep up with the projects and timelines they are responsible for, it is not
unusual that shortcuts are taken. These shortcuts can cause many issues. While
developer talent is there, it is sometimes transient, usually understaffed, and
underestimated for time requirements. Because of this, there is a higher risk
to transactional integrity. The characteristics that identify an MQ
transaction are not the same characteristics that identify an application
server transaction, and not the same as for a database transaction, and so on.
If developers of these transactional systems don't plan for this
cross-identification, then when an error occurs, there can be correlation
issues when determining where the transaction went wrong. The effort involved
in the synchronization of the characteristics that identify the error, capture
and write it to a data store, and correlate the information about the error,
can significantly decrease performance or impose significant storage
requirements in large-volume transaction sites.
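The sketch below illustrates the cross-identification idea: events from different tiers carry different native identifiers, so they are matched on a shared business key instead. All identifiers and field names are invented for illustration:

```python
# Correlate events from MQ, an application server and a database by a
# shared business key rather than by their tier-specific identifiers.
events = [
    {"source": "mq",        "msg_id": "414D51...", "order_ref": "ORD-7781"},
    {"source": "appserver", "jsession": "A1B2C3",  "order_ref": "ORD-7781"},
    {"source": "database",  "txn_id": 99123,       "order_ref": "ORD-7781"},
    {"source": "mq",        "msg_id": "414D52...", "order_ref": "ORD-7790"},
]

def correlate(events, key="order_ref"):
    grouped = {}
    for event in events:
        grouped.setdefault(event[key], []).append(event["source"])
    return grouped

for ref, tiers in correlate(events).items():
    print(ref, "seen in:", tiers)
```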
Due to the effort and costs involved (time, storage, performance), most transactional
environments do not manage or monitor their transactions using this method.
Therefore it is imperative to become more proactive and isolate the points of
failure in advance. If the points of failure can become known and corrected
before a major transactional error occurs, then the above costs are mitigated.
In order to do so, some time is needed up-front to identify the on-ramps, off-ramps, and
to identify the behavior on each of these that would suggest there may be a
problem creeping up. After-the-fact problem-solving can be time consuming,
resource consuming, frustrating, and fruitless. In this scenario, what is
usually considered the most cost-effective solution is log analysis. Given
different logs for different platforms are in different locations and formats,
this is a problematic way to solve an issue as well. While there are some good
software solutions for this type of forensic log analysis, sometimes this method doesn't allow for quick isolation, investigation, and action.
So why do companies spend so much on this infrastructure
management? It’s because the e-commerce infrastructure provides convenience and
therefore customer satisfaction, and that is how you gain and retain customers.
It is a quick way of gathering and disseminating business data, gaining faster
time to market for new products and services, streamlining application business
processes, and reducing operating costs.
When I worked for a typical large NYC bank in the 1980s, a paper bank
transaction cost about $1.10. Voice response technology brought it down to 50
cents. Home banking software to 25 cents. Today, internet banking brings it
down to just 13 cents. The positive operating-cost impact is eye-opening when
you consider the national and global reach of banks and the growing volume of
transactions. But the reliance on e-commerce has a flip side. In today's IT world, the
applications enabling the business processes are more distributed, more complex
and more prone to transaction slowdowns or outright failure. Reliance on
forensic problem analysis can be time consuming. This is not acceptable for any
business process. The sheer volume and importance of these transactions make it
essential to proactively manage and monitor that infrastructure in order to
keep it continuously running.
The bottom line is that transactional systems and their associated infrastructure
are essential for corporations to do business in today’s world. How do you
achieve this in a cost-effective manner and still provide agile, flexible,
convenient services to customers or B2B partners?
The answer is clear: “Be proactive!”
The following list contains rules of thumb to enable proactivity.
- Use products or solutions that run on standards-based platforms and support standard software interfaces, so that you do not paint yourself into a corner with proprietary systems that make change difficult and costly.
- Use tools to automate management procedures, to significantly increase your efficiency, versus having limited internal staff do everything in a reactive mode.
- Be sure to manage events at ALL the locations of the transactional middleware infrastructure: on the main highway, and on the on-ramps and off-ramps.
- Choose solutions with an easy, intuitive interface, to limit deployment time and reduce maintenance effort.
Operational inefficiency means wasted (cost) dollars, hurting the bottom
line. Loss of productivity means even more wasted (revenue) dollars, hurting
the top line. Identifying and deploying the most efficient and operationally
cost-effective monitoring and management solution has been proven to increase
business process profitability—a core goal of every organization.
White paper: “Managing & Monitoring Transactions on the Middleware Superhighway”
Author: Peter D’Agosta, Product Manager, Avada Software
Lessons learned from an IT Veteran
Maybe it's due to lessons learned over 30+ years in IT development, support,
administration, architecture, planning, and product management, or maybe it’s
because I gravitate toward new possibilities, but I have developed a core
belief that simplifying IT is the best approach to get things accomplished. In
a discipline notorious for making the complicated even more complicated, my
goal is simply to remove the unnecessary complications from otherwise efficient processes.
While the acceptance of using open source has shifted methods and
techniques quite a bit in recent years, most IT people, especially those who
have been in the field for 20+ years, have a similar experience early in their
career: While in the midst of either operationally or programmatically trying
to solve an urgent problem, many IT brethren would rather watch you squirm than
give you a simple syntax or a reference to suitable material that would lead to
quicker resolution. I understand the 'teach them to fish' philosophy, but when
Mrs. O’Leary’s barn is burning you need one direct answer to extinguish the
fire, not four to five cascading questions and a treasure map to get there.
Before I knew Unix, getting a Unix admin to give you the syntax of some arcane
command (ps -ef | grep ...) was like pulling teeth from an elephant. When I
learned z/OS I was given a command-line interface, only to discover there was TSO
(F-keys, menus, shortcuts). My first impression of IT people was similar to
that of a fraternity; you had to do some crazy stuff and show you were worthy
before they helped you out. After scrounging for vendor docs and creating lots
of 'cheat sheets' to put all the commands at my cut/paste fingertips, I could finally
concentrate on the problem at hand and not the syntax. Of course using a GUI
would have been out of the question because it was for ‘end users’!
Why am I
bringing this up? Because it brings awareness to the fact that significantly
improving productivity is more important than being a member of the IT
“know-it-all” fraternity. At one time while I was responsible for instructional
training, I realized the ability to “keep it simple” was crucial to the
position. Another enlightening moment came when I was taking graduate courses
and someone had asked me if I was a computer science major or an IT major. I didn't really know the distinction and, like all
creatures of both pursuits, I sought out books, manuals, and periodicals.
Sorry Yahoo and Google users, but we had to do it the old-fashioned way back
then. Plus, I needed to do something in between submitting my 'batch job' to be
processed. I realized that I was not trying to solve equations for orbit trajectories, or
figure out where the Fibonacci sequence hit seven digits. What I was
essentially trying to do was move this data to another place, perhaps in a
different format, and make sure it showed up there. That basic understanding of
being an IT person (and not a computer science person) has led me happily
around the globe and to interesting companies; initially with Pan Am, Volvo,
Prodigy (yes, the email program I developed was new and innovative at the time,
but essentially was still 'move this data from here to there,
and make sure it arrives in a format we can all read’); then later to scores of
F1000 companies as a consultant for technologies like MQ messaging, portal
servers, web services, and basically any transactional delivery system.
Sources: The Global WebSphere Community @ websphereusergroup.org; Avada Software @ avadasoftware.com