
RPA is an expensive workaround for the future

What is RPA?

RPA, or Robotic Process Automation, is, to put it simply, a predefined set of mouse clicks performed on top of an application. In theory it gives you the ability to replace a human being and save the cost of clicking buttons and entering data. Sounds great.

What is AI?

AI, or Artificial Intelligence, is a means of simulating the human being who defines those mouse clicks we want to achieve with RPA. To put it very simply, it is the capability of a machine to take actions based on multiple variables in different situations.

So why is RPA an expensive workaround?

Being an introvert, I won't sprout huge paragraphs like other websites do; the key point is this. Enterprises have dozens of applications used by the business, and in many cases they differ from country to country – different versions, different functionality, sometimes entirely different applications. To maintain RPA on top of all of these, you need to keep your change management function very busy, to say the least.

Imagine you run your business unit in Luxembourg on SAP. It has a certain version, your development team has several environments to work with, and on top of your existing features you now have to take RPA into account. Every single test must cover RPA, which is madness. You move a button, you have to change your RPA. Every time. There is no escape.

Now imagine you have more than Luxembourg. Every time there is a major SAP release, RPA brings a hell of a burden, in every country, with all its local specifics.
Remember, RPA is a dumb thing – click here to do this, click there to do that.

Whatever the business benefit is in the short term, you have to think about what is coming next. That contradicts how management is set up nowadays – no one gets bonuses for the long term anymore. Bring a benefit today (and a huge setback tomorrow) and you get a bonus. KISS.

What should enterprises do?

Enterprises should embrace change and focus solely on AI. It will not be an instant breakthrough and the investment is huge; nevertheless, in the long term you get some intelligence behind your robotic actions, and you gain more by paying less to adapt every single robot you asked your vendor to develop.


Kubernetes autoscaling

We always want to automate things and while Kubernetes already has a lot of unique features, the autoscaling part is missing. Perhaps I haven’t looked at the latest releases and I’m already missing something.

Here I’d like to outline my findings and the way I did the autoscaling on Kubernetes.

Requirements for the setup:

  1. Patience
  2. Working Kubernetes cluster

The setup involves the time-series database InfluxDB and Heapster for monitoring, plus some custom scripting for evaluation and scaling.
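
Getting the monitoring pieces into the cluster is just a matter of creating the usual replication controllers and services for Heapster, InfluxDB and Grafana. A minimal sketch, assuming you already have manifests for them at hand (the file names below are hypothetical placeholders, not files from this post):

# Sketch only: create Heapster, InfluxDB and Grafana from your own manifests.
# The file names are hypothetical placeholders.
kubectl create -f influxdb-grafana-controller.yaml
kubectl create -f influxdb-service.yaml
kubectl create -f heapster-controller.yaml
kubectl create -f heapster-service.yaml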

For the application, I used a simple WordPress replication controller that ran one container by default. The config looked like this.


apiVersion: v1beta3
kind: ReplicationController
metadata:
  labels:
    name: wordpress
  name: wordpress
spec:
  replicas: 1
  selector:
    name: wordpress
  template:
    metadata:
      labels:
        name: wordpress
    spec:
      containers:
      - image: wordpress
        name: wordpress
        env:
        - name: WORDPRESS_DB_PASSWORD
          value: yourpassword
        ports:
        - containerPort: 80
          name: wordpress
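
To get the controller running in the cluster, assuming the config above is saved as wordpress-rc.yaml (the file name is just an example, pick whatever you like):

# Create the replication controller from the config above (file name is an example)
kubectl create -f wordpress-rc.yaml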

For the WordPress service, the monitoring query goes into the Grafana dashboard that sits on top of the data Heapster writes to InfluxDB. I added the following target:

"targets": [
{
"target": "randomWalk('random walk')",
"function": "derivative",
"column": "value",
"series": "cpu/usage_ns_cumulative",
"query": "select container_name, derivative(value) from
\"cpu/usage_ns_cumulative\" where $timeFilter and labels =~ /name:wordpress/ group by
time($interval), container_name fill(0) order asc",
"condition_filter": true,
"interval": "5s",
"groupby_field_add": true,
"groupby_field": "container_name",
"alias": "",
"condition": "labels =~ /name:wordpress/",
"rawQuery": false,
"fill": "0"
}
],

With this configuration I was able to monitor all containers belonging to the WordPress service in Kubernetes, dynamically aggregating the total and used capacity across all WordPress containers.
After that I used httperf to see whether it actually works, and I could watch the resulting graph in Grafana.
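
The exact httperf invocation depends on your service address; something along these lines is enough to make the CPU graph move (the IP, connection count and rate below are illustrative values, not from the original test):

# Generate load against the WordPress service (address and rates are examples)
httperf --server=10.0.0.10 --port=80 --uri=/ --num-conns=1000 --rate=50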

Now the interesting part comes with getting the state of the container group.

I managed to get the data for the last 5 minutes with curl.

curl -G 'http://ip_address:8086/db/k8s/series?u=root&p=root&pretty=true' \
  --data-urlencode "q=select container_name, derivative(value) from \"cpu/usage_ns_cumulative\" where labels =~ /name:wordpress/ and time > now() - 5m group by time(5s)"

curl -G 'http://ip_address:8086/db/k8s/series?u=root&p=root&pretty=true' \
  --data-urlencode "q=select container_name, mean(value) from \"memory/usage_bytes_gauge\" where labels =~ /name:wordpress/ and time > now() - 5m group by time(5s)"

To use the data in a simple PHP script, I did the following:

// $url holds the InfluxDB query URL, built like the curl examples above
$curl = curl_init();
$options = array(
    CURLOPT_URL => $url,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_SSL_VERIFYPEER => false,
    CURLOPT_SSL_VERIFYHOST => 2,
    CURLOPT_HTTPGET => true,
);
curl_setopt_array($curl, $options);
$result = curl_exec($curl);
$result = json_decode($result, true);
// print one value from the first point of the first returned series
echo $result[0]['points'][0][1];

Now the last piece is triggering the autoscaling. For that I used cron to run the check every 5 minutes: a simple shell script with the thresholds defined and the scaling operations done via the kubectl utility.
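
The cron entry itself is a one-liner along these lines (the script path and log file are illustrative placeholders):

# run the autoscaling check every 5 minutes; paths are placeholders
*/5 * * * * /opt/bin/wp-autoscale.sh >> /var/log/wp-autoscale.log 2>&1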

Basically, it takes the data collected by Heapster and compares it to the defined threshold.

#!/bin/bash
mb=1048576
# mem (memory usage in bytes) is assumed to come from the InfluxDB memory query
# shown above, e.g. via the PHP helper; fetching it is left out here
mem=$((mem / mb))
kubectl describe rc wordpress > /tmp/wprc
replicas=$(grep "Replicas" /tmp/wprc | awk '{print $2}')
triggerpoint=500                      # memory threshold in MB per replica
currentstatus=$((mem * replicas))
triggerneeded=$((triggerpoint * replicas))
new=$((replicas + 1))
if [ "$currentstatus" -gt "$triggerneeded" ]; then
  echo "Memory usage above threshold, adding replica"
  /opt/bin/kubectl resize rc wordpress --replicas=$new
else
  echo "Nothing to do this time"
fi

I used the httperf tool again to simulate user requests and verify that the autoscaler works. During the test the script triggered the scaling and new containers were scheduled.

Name: wordpress
Image(s): wordpress
Selector: name=wordpress
Labels: name=wordpress
Replicas: 3 current / 3 desired
Pods Status: 2 Running / 1 Waiting / 0 Succeeded / 0 Failed
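
The script only scales up; after a test you can bring the replica count back down by hand with the same resize command the script uses:

# scale the WordPress replication controller back to one replica
/opt/bin/kubectl resize rc wordpress --replicas=1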

That’s it – the dirty autoscaling works on Kubernetes. I’m not a developer, so I could not go very deep with the coding; nevertheless, perhaps someone will find this an interesting starting point to play with Kubernetes autoscaling.

Cheers

Running Kubernetes on Debian

I couldn’t find a very straightforward manual on how to get Kubernetes running on a Debian (Jessie) system, so I decided to write it down, mainly for myself; it may nevertheless be useful for other beginners like me.

First of all, get your system up to date

apt-get update
apt-get upgrade

Install dependencies

apt-get install gcc golang-go curl git

Get etcd (check for a newer version here – https://github.com/coreos/etcd/releases/)

curl -L  https://github.com/coreos/etcd/releases/download/v2.0.3/etcd-v2.0.3-linux-amd64.tar.gz -o etcd-v2.0.3-linux-amd64.tar.gz
tar xzvf etcd-v2.0.3-linux-amd64.tar.gz
cd etcd-v2.0.3-linux-amd64

You may want to test if etcd is working

./etcd

stop etcd (Ctrl+C) and copy the binary to /usr/bin

cp etcd /usr/bin/etcd

Now get the latest Kubernetes release

git clone https://github.com/GoogleCloudPlatform/kubernetes.git
cd kubernetes
make release

You will be asked to download a Docker image (about 450 MB).
In the end you should see something like this:

...
+++ Integration test cleanup complete
+++ Integration test cleanup complete
+++ Running build command....
+++ Output directory is local. No need to copy results out.
+++ Building tarball: client darwin-386
+++ Building tarball: client darwin-amd64
+++ Building tarball: client linux-386
+++ Building tarball: client linux-amd64
+++ Building tarball: client linux-arm
+++ Building tarball: client windows-amd64
+++ Building tarball: server linux-amd64
+++ Building tarball: salt
+++ Building tarball: test
+++ Building tarball: full

Now the last step – to start the cluster

 hack/local-up-cluster.sh

You should see this in your terminal

+++ Building go targets for linux/amd64:
 cmd/kube-proxy
 cmd/kube-apiserver
 cmd/kube-controller-manager
 cmd/kubelet
 plugin/cmd/kube-scheduler
 cmd/kubectl
 cmd/kubernetes
 cmd/e2e
 cmd/integration
 cmd/gendocs
+++ Placing binaries
Starting etcd

etcd -data-dir /tmp/test-etcd.FoRYZH --bind-addr 127.0.0.1:4001 >/dev/null 2>/dev/null

Waiting for etcd to come up.
+++ etcd:
{"action":"set","node":{"key":"/_test","value":"","modifiedIndex":3,"createdIndex":3}}
Waiting for apiserver to come up
+++ apiserver:
 {
 "kind":
 "PodList",
 "creationTimestamp":
 null,
 "selfLink":
 "/api/v1beta1/pods",
 "resourceVersion":
 8,
 "apiVersion":
 "v1beta1",
 "items":
 []
 }
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
 /tmp/kube-apiserver.log
 /tmp/kube-controller-manager.log
 /tmp/kubelet.log
 /tmp/kube-proxy.log
 /tmp/kube-scheduler.log

To start using your cluster, open up another terminal/tab and run:

 cluster/kubectl.sh config set-cluster local --server=http://127.0.0.1:8080 --insecure-skip-tls-verify=true --global
 cluster/kubectl.sh config set-context local --cluster=local --global
 cluster/kubectl.sh config use-context local
 cluster/kubectl.sh

Now you can use the scripts in the “cluster” directory to manage pods.
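
For example, once the context from the output above is set, a quick look around (plain kubectl subcommands, nothing specific to this setup):

# list pods, services and replication controllers in the local cluster
cluster/kubectl.sh get pods
cluster/kubectl.sh get services
cluster/kubectl.sh get rc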

I was successful with this example – https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/guestbook

ITIL project pitfall – lack of knowledge inside organization

Henry Ford said:

“The only thing worse than training your employees and having them leave is not training them and having them stay.”

It is a really good quote to pass to stakeholders when you see their organization’s inability to find a common language with consultants. In many cases there is an established practice within the organization that is claimed to be “fully ITIL compatible” and to “work perfectly”. However, when it comes to the real work of improving something in the process, the understanding of several things can be missing.

When you are about to introduce a new service or process, make sure you have properly trained personnel to take it over. It will help you a lot in various phases of the implementation – proper design with input from related processes or services, smooth integration of services or processes, and so on. If you are creating a Release Management or Change Management process, make sure the organization has at least someone with an ITIL Service Transition certification, unless you want the project to turn into a fight with waterfall.