RPA is an expensive workaround for the future

What is RPA?

RPA, or Robotic Process Automation, is, to put it simply, a predefined set of mouse clicks on top of an application. In theory, it gives you the ability to replace a human being and save costs on clicking buttons and entering data. Sounds great.

What is AI?

AI, or Artificial Intelligence, is a means of simulating the human being who defines those mouse clicks we want to achieve in RPA. To put it very simply, it is the capability of a machine to take actions upon multiple variables in different situations.

So why is RPA an expensive workaround?

Being an introvert, I won't sprout huge paragraphs like other websites do; the key point is the following. Enterprises have dozens of applications used by the business, and in many cases they differ from country to country: different versions, different functionality, maybe even entirely different applications. To maintain RPA functionality for all of these, you need to keep your change management function very busy, to say the least.

Imagine you run your business unit in Luxembourg on SAP. It has a certain version, your development team has several environments to work with, and on top of your existing features you need to take RPA into account. Every single test must cover RPA, which is madness. If you move a button, you need to change your RPA. Every time. There's no escape.

Imagine you have more than Luxembourg. Every time there is a major SAP release, RPA brings a hell of a burden, in every country, with all its specific quirks.
Remember, RPA is a dumb thing – click here to do this, click there to do that.

Whatever the business benefit is in the short term, you have to think about what is coming next. This contradicts today's management setup – no one gets bonuses for the long term anymore. Whenever you bring benefit today (and a huge setback tomorrow), you get a bonus. KISS.

What should enterprises do?

Enterprises should embrace change and focus on AI instead. It is not going to be an instant breakthrough and the investment is huge; nevertheless, in the long term you get some sort of intelligence behind your robotic actions, and you gain more by paying less for adapting every single robot you asked your vendor to develop.


Kubernetes autoscaling

We always want to automate things, and while Kubernetes already has a lot of unique features, the autoscaling part is missing. Perhaps I haven't looked at the latest releases and I'm already missing something.

Here I’d like to outline my findings and the way I did the autoscaling on Kubernetes.

Requirements for the setup:

  1. Patience
  2. Working Kubernetes cluster

The setup involves the time-series database InfluxDB and Heapster for monitoring, plus some custom scripting for evaluation and scaling.

For the application, I used a simple WordPress setup with one container running by default. The replication controller config looked like this.

apiVersion: v1beta3
kind: ReplicationController
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    name: wordpress
  template:
    metadata:
      labels:
        name: wordpress
    spec:
      containers:
      - image: wordpress
        name: wordpress
        env:
        # variable name assumed: the standard WordPress image DB password variable
        - name: WORDPRESS_DB_PASSWORD
          value: yourpassword
        ports:
        - containerPort: 80
          name: wordpress

For the WordPress service, we add a target configuration to the Grafana dashboard fed by Heapster. I did the following:

"targets": [
  {
    "target": "randomWalk('random walk')",
    "function": "derivative",
    "column": "value",
    "series": "cpu/usage_ns_cumulative",
    "query": "select container_name, derivative(value) from \"cpu/usage_ns_cumulative\" where $timeFilter and labels =~ /name:wordpress/ group by time($interval), container_name fill(0) order asc",
    "condition_filter": true,
    "interval": "5s",
    "groupby_field_add": true,
    "groupby_field": "container_name",
    "alias": "",
    "condition": "labels =~ /name:wordpress/",
    "rawQuery": false,
    "fill": "0"
  }
]
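A note on that series: cpu/usage_ns_cumulative is a cumulative counter of CPU nanoseconds consumed, which is why the query takes a derivative – the delta between samples over the sample interval gives the CPU fraction. A quick arithmetic sketch (sample values made up):

```shell
# two cumulative samples taken 5 seconds apart, in CPU nanoseconds
prev=120000000000
curr=121500000000
interval=5
# CPU fraction = delta_ns / (interval_s * 1e9)
awk -v p="$prev" -v c="$curr" -v i="$interval" \
    'BEGIN { printf "%.2f\n", (c - p) / (i * 1e9) }'
# prints 0.30, i.e. the container averaged 0.3 CPU cores over the window
```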

With this configuration I was able to monitor all containers belonging to the WordPress service in Kubernetes; the dashboard dynamically aggregated the total and used capacity of all WordPress containers.
After that I used httperf to see if it actually worked. I could see the resulting graph in Grafana.

Now the interesting part comes with getting the state of the container group.

I managed to get the data for the last 5 minutes with curl.

curl -G 'http://ip_address:8086/db/k8s/series?u=root&p=root&pretty=true' --data-urlencode
"q=select container_name, derivative(value) from \"cpu/usage_ns_cumulative\" where labels =~
/name:wordpress/ and time > now() - 5m group by time(5s)"

curl -G 'http://ip_address:8086/db/k8s/series?u=root&p=root&pretty=true' --data-urlencode
"q=select container_name, mean(value) from \"memory/usage_bytes_gauge\" where labels =~
/name:wordpress/ and time > now() - 5m group by time(5s)"

To use the data in a simple PHP script, I did the following:

$curl = curl_init();
$options = array(
    CURLOPT_URL => $url,
    CURLOPT_RETURNTRANSFER => true,  // return the response instead of printing it
);
curl_setopt_array($curl, $options);
$result = curl_exec($curl);
curl_close($curl);
$result = json_decode($result, true);
echo $result[0]['points'][0][1];

Now the last part is triggering the autoscaling. For that I used cron to schedule a check every 5 minutes with a simple shell script that has the thresholds defined and operates the kubectl utility.

Basically, it takes the data from Heapster and compares it to the defined threshold.

#!/bin/bash
# mem (bytes, from the Heapster query above) and triggerpoint (MB per replica)
# are assumed to be set earlier in the script
mb=1048576
mem=$((mem / mb))
kubectl describe rc wordpress > /tmp/wprc
replicas=`grep "Replicas" /tmp/wprc | awk '{print $2}'`
currentstatus=$((mem * replicas))
triggerneeded=$((triggerpoint * replicas))
if [ $currentstatus -gt $triggerneeded ]; then
    new=$((replicas + 1))
    echo "Memory usage above threshold, adding replica"
    /opt/bin/kubectl resize rc wordpress --replicas=$new
else
    echo "Nothing to do this time"
fi
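The cron schedule mentioned above could be a crontab entry like this (the script path and log location are my own placeholders):

```text
# run the autoscaling check every 5 minutes
*/5 * * * * /opt/bin/wordpress-autoscale.sh >> /var/log/wordpress-autoscale.log 2>&1
```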

I used httperf to simulate user requests and verify that the autoscaler works. During the test the autoscaling script succeeded and new containers were scheduled.

Name: wordpress
Image(s): wordpress
Selector: name=wordpress
Labels: name=wordpress
Replicas: 3 current / 3 desired
Pods Status: 2 Running / 1 Waiting / 0 Succeeded / 0 Failed
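The Replicas line in this output is what the autoscaling script greps for. As a quick offline sanity check of the grep/awk extraction, against a canned line instead of a live cluster:

```shell
line="Replicas: 3 current / 3 desired"
echo "$line" | grep "Replicas" | awk '{print $2}'
# prints 3 – the current replica count
```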

That's it – the quick-and-dirty autoscaling works on Kubernetes. I'm not a developer, so I couldn't really go deep with the coding; nevertheless, perhaps someone will find this an interesting starting point to play with Kubernetes autoscaling.


Running Kubernetes on Debian

I couldn't find a very straightforward manual on how to get Kubernetes running on a Debian (Jessie) system, so I decided to write it down, mainly for myself; nevertheless, it may be useful for other beginners like me.

First of all, get your system up to date

apt-get update
apt-get upgrade

Install dependencies

apt-get install gcc golang-go curl git

Get etcd (check for a newer version here – https://github.com/coreos/etcd/releases/)

curl -L  https://github.com/coreos/etcd/releases/download/v2.0.3/etcd-v2.0.3-linux-amd64.tar.gz -o etcd-v2.0.3-linux-amd64.tar.gz
tar xzvf etcd-v2.0.3-linux-amd64.tar.gz
cd etcd-v2.0.3-linux-amd64

You may want to test that etcd is working by starting it from the extracted directory

./etcd

Then stop etcd (Ctrl+C) and copy the binary to /usr/bin

cp etcd /usr/bin/etcd

Now get the latest Kubernetes release

git clone https://github.com/GoogleCloudPlatform/kubernetes.git
cd kubernetes
make release

You will be asked to download a Docker image (450 MB).
In the end you should see something like this

+++ Integration test cleanup complete
+++ Integration test cleanup complete
+++ Running build command....
+++ Output directory is local. No need to copy results out.
+++ Building tarball: client darwin-386
+++ Building tarball: client darwin-amd64
+++ Building tarball: client linux-386
+++ Building tarball: client linux-amd64
+++ Building tarball: client linux-arm
+++ Building tarball: client windows-amd64
+++ Building tarball: server linux-amd64
+++ Building tarball: salt
+++ Building tarball: test
+++ Building tarball: full

Now the last step – start the cluster with the local cluster script shipped in the repo

hack/local-up-cluster.sh

You should see this in your terminal

+++ Building go targets for linux/amd64:
+++ Placing binaries
Starting etcd

etcd -data-dir /tmp/test-etcd.FoRYZH --bind-addr >/dev/null 2>/dev/null

Waiting for etcd to come up.
+++ etcd:
Waiting for apiserver to come up
+++ apiserver:
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.


To start using your cluster, open up another terminal/tab and run:

 cluster/kubectl.sh config set-cluster local --server= --insecure-skip-tls-verify=true --global
 cluster/kubectl.sh config set-context local --cluster=local --global
 cluster/kubectl.sh config use-context local

Now you can use the scripts in the “cluster” directory to manage pods

I was successful with this example – https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook

Defect management – a release management perspective

Defect management – what is it?

Basically, it's bug, defect and issue tracking. There are usually plenty of different tracker options, and Release Management also uses some bits of them. Usually the tracker is used to categorize defects by product or application.

Sometimes (as in my case) it may become quite complex. Once you have 10+ environments and some 100 interfaces to/from your major apps, there's no doubt it's a challenging environment. The challenge is not only to track everything but to actually manage this information. Luckily, that's not a Release Management activity; however, there is a point where we touch this area.

The Release Management activity is making sure all delivered packages are tested prior to deployment to a particular environment; after that, the connection of the RM function to a particular release is quite weak until it has to be promoted to another environment. What RM should take into account is that the package has to be evaluated not only from the application and/or infrastructure side but also on the defect level (oh, by the way, did I mention change requests before? Those are here too; business is not always willing to have a separate release for that purpose).

So, how do we do assessment on the defect level? Defects may contain a lot of valuable information, like

  • release identifier (can be number, letter, etc. – depends on your environment)
  • environment (where the defect was discovered)
  • status (input pending, ready for deployment, closed, etc.)
  • business input (this can be divided to priority, severity, deadlines)
  • dependencies on other defects
  • etc.

You have to decide what information you really need for Release Management activities.

So far quite easy, right? So, where’s the challenge here?

Challenges start when defect information differs from the initial release package. Let's list what may happen to a defect:

  • defect can be fixed (good) or even closed (perfect)
  • defect can spawn another defect, or a new dependency is discovered
  • defect can be re-opened
  • defect can be rejected
  • defect can be postponed
  • etc.

As you can see, there is a “normal” defect life-cycle with the standard open-analyze-build-test-close flow, and a not-so-normal life-cycle where the route of a defect is not predictable.

What is the pain for the Release Manager?

It is very important to understand what to do with such release leftovers, but first we need to discover them.
When receiving your first release package, make sure all the information you need is there – defects, their origin, business inputs and dependencies. It will help you perform analysis before promoting the package to your next environment. Before each promotion, defects should be assessed against everything that is important to you and your organization.

Challenge with broken fixes

One defect is re-opened – should you wait for a new release package, or promote the package irrespective of the testing results? Perhaps build a new package, or exclude the fix from your package. The decision has to be based on your environment – what can you actually do? You can only decide if you have more than one option.

Challenge with showoffs

You have many releases planned and defects are linked accordingly. The future release is not that far off and something is already ready. Why not include it in your current release? Here's one of the risks: a new defect can be opened on that particular functionality, and testing can be performed only in the future release (a dependency on another app or interface). Here's the thing: if you have few environments, an easy way of excluding fixes from the package, and deployment can be performed within hours, you are absolutely safe. If you have a challenging environment in terms of size, speed, decision making and other important variables, this may cause hardcore problems. You can forget your work-life balance at that very moment!

Challenge with postponed fixes

Sometimes even postponed fixes can make your job difficult. Whenever a defect fix is postponed, make sure there is a procedure for this activity. The defect should be routed through functional or technical architects to understand whether it can actually be postponed and has no dependencies. It would be a bad surprise to discover a blocking defect that is not included in any release package.

I could continue the list but I would like to concentrate on the lessons learned!

In parallel to the standard defect information, I would suggest also keeping the parent-child relationship on the defect level in mind. Make sure the parent defects in your release packages have all their children included in the packages for your release.
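If your tracker can export defect IDs to plain files, that parent-child completeness check can be a one-liner; a sketch with made-up file names and IDs:

```shell
# children.txt: all child defects of the release's parent defects
cat > /tmp/children.txt <<'EOF'
DEF-101
DEF-102
DEF-103
EOF
# package.txt: defects actually included in the release package
cat > /tmp/package.txt <<'EOF'
DEF-101
DEF-103
EOF
# comm needs sorted input; -23 prints lines only in the first file,
# i.e. child defects missing from the package
sort /tmp/children.txt > /tmp/children.sorted
sort /tmp/package.txt > /tmp/package.sorted
comm -23 /tmp/children.sorted /tmp/package.sorted
# prints DEF-102
```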

If every application has its own releases, maintain “big” releases to make sure dependencies on other apps are assessed and everything is promoted together to avoid failures.

Pink Elephant 13 conference

In case you are visiting the Pink Elephant 13 conference in Las Vegas, here are a few Release Management related topics:

Monday, February 18, 2013

3:20 PM – 4:20 PM Robin Hysick Mature Release Management – What It Really Looks Like!
4:35 PM – 5:35 PM Darren Dunn Using Release Management to Improve Financial Governance at Bell Aliant

Tuesday, February 19, 2013

3:20 PM – 4:20 PM Anthony Krasinski Still Managing Chaos Through Release & Change Management … But Getting Better!

See you there!

Release Notes

Before leaving on vacation to Vietnam, I would like to share some insights about release notes.

Release Notes are a very important document when preparing a patch for the environment. The process for reviewing and approving release notes should be formalized so that delivery teams and vendors take it seriously. For that I would suggest having a standard template for the Release Notes document.

The format of the template

The first thing to have in place is the format. The template should be standardized for everyone, and expectations should be clear for each involved party. Usually the Release Manager is the owner of this document. The template should include general information about the release –

  • application
  • version
  • type of release (patch, configuration or parameter change)
  • release (if you have more than one)

and more details on technical and/or functional observations.

If you are responsible for functional assessment, it is critical to have this information in the Release Notes. Functional assessment is usually done either on the application functionality level or on the product/service level in cross-application environments.

The next important step is the technical assessment. You will need sections for the infrastructure part: operating system requirements, hardware requirements, database, etc. Make sure the template does not state only minimum requirements but also has a scalability view.

The most important sections, however, are dependencies and roll-back plans.
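Putting the sections above together, a minimal skeleton of such a template might look like this (section names are my own illustration, not a standard):

```text
RELEASE NOTES

General information
  Application:      ...
  Version:          ...
  Type of release:  patch / configuration / parameter change
  Release:          ...

Functional assessment
  Affected functionality, products/services

Technical assessment
  OS, hardware and database requirements; scalability notes

Dependencies
  Other applications, interfaces, defects

Roll-back plan
  Steps, files, time required, validation
```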

Areas to focus on

When it comes to the assessment, make sure you have people supporting the process. It is a common mistake to rely only on the Release Manager to identify the risks or potentially negative impact on the environment after deployment. If you need to assess functionality, you will need support from business analysts and product owners. When it comes to technical assessment, it may become even more difficult if you have many environments and huge infrastructure. Discuss the requirements for infrastructure with its owners. Discuss the potential risks when it comes to growth. It is wise to add all major components to the template so that the release can be assessed efficiently by all parties involved.

The critical part is the roll-back plan, and files if applicable. You should not accept release notes without this information. Even though the assessment of the functional or technical part might be very good and only limited risks seem possible, the roll-back plan must always be present. If something goes wrong and there is no roll-back plan, it may cause really big problems. For major releases or very critical systems, it might be worth considering testing the roll-back plan.

You should also focus on time: the Release Notes should present information about downtime for deployment, time for roll-back if possible, location of the files, whether the deployment can be done only outside working hours, and other time-related questions. The Release Notes could also describe the validation process for the release. You might need to involve not only technical people but also functional ones.

Approval process

When it comes to the approval process, it is always the Release Manager's responsibility. However, it is unlikely that this decision is made by the release manager alone. They need to consult functional, technical or even performance-related people to evaluate the impact of the release. It should be a formalized process, and the commitment has to be made by each party; otherwise it will become a process for the sake of process, with no value as an outcome of the review.


Release Management webcasts by BrightTalk

Perhaps you will find value in these new RM webcasts


Webcast: Nov 14 2012 5:00 pm Is There a Business Impact from Change, Configuration and Release Management?
Channel: IT Service Management
Attend: http://www.brighttalk.com/webcast/534/58549

Webcast: Nov 14 2012 6:00 pm Demystifying Release Management and Deployment Automation
Channel: IT Service Management
Attend: http://www.brighttalk.com/webcast/534/58543

Webcast: Nov 14 2012 7:00 pm Change, Configuration and Release Mngt – Enabling Breakthroughs or Bottlenecks?
Channel: IT Service Management
Attend: http://www.brighttalk.com/webcast/534/58893

Webcast: Nov 14 2012 9:00 pm Change, Config & Release, Oh My!? Which Comes First & Why?
Channel: IT Service Management
Attend: http://www.brighttalk.com/webcast/534/58875