DevOps. On the ground.

While DevOps brings Dev, QA and Ops – both the infra and the service management guys – together, how do you realize it on the ground? Here you go (though you will still need experts who practice it, and the related technologies, day in and day out):
1. Let's start with people first. The simple, straightforward point is to have Agile scrums – with daily stand-up meetings (assuming the team is either co-located or can see each other's faces over a video call in the case of remote/virtual teams) – involving representatives from Dev, QA and Ops, though preferably the whole one-IT team. They all need to know what the others are doing, and hence are all collectively responsible for quality as well as for meeting the timelines together.
2. Cross-skill Dev, QA and Ops. Why? Because if there is suddenly a spurt of one activity over another, each can help the others out. For example, once infra provisioning has been done, the infra guy can join the QA guy (and the same goes for Dev, if that guy has not yet started coding) to create the test scripts. This is what we call self-organizing teams.
3. Let's now talk about the processes. Shift-left Ops – infra. What does that mean? It is quite obvious that unless infra (with the relevant environments) is provisioned, Dev or QA cannot start their work effectively. Writing code on paper does not help much. That's why, once the change request or requirements flow in, you discuss and align during the stand-up on the exact infra and environments that need to be provisioned. Once the infra guy goes back to the desk, he/she immediately gets on with the job.
4. Shift-left QA. The test scripts should all be ready before Dev starts writing any code, so that the code can be run through the test scripts as soon as it is ready. It may fail a few times before it starts passing all the test cases. The question is, how do you write test scripts for a new application that is yet to be built (given that we typically talk of automating regression tests against some baseline code that can be assumed to be in a reasonable state)? Such test scripts can be derived from the first-hand requirements, written either as a BRS or an SRS, and should contain both functional and non-functional test scenarios. Then the question is, how would it be foolproof? Note that we are not yet talking of foolproof applications here (that practically does not happen except for real-time, life-critical applications that launch a rocket or operate a surgical robot in a hospital); however, we are talking of continuous releases, so that once you feel your current code (satisfying the current requirements) needs to improve, another scrum team would already have started working, in parallel, on the next release! So, don't worry.
5. Enable continuous release-readiness. Be prepared to churn out new scrum teams – a part of the current team can start working on the ever-evolving backlog using separate sprints – to work on the next releases (see point 4 above).
6. Now come technology & tools!

Automate infra and environment provisioning – use tools. What kind of code is ‘Infra as code’? Imagine something like –
initiate cloud1 {
    RAM 4GB;
    HDD 200GB;
    create environment1 {
        install Windows10 ( );
        install Compiler-C++ ( );
        install MS-Office10 ( );
    }
}
……………. and your developers (or QA) immediately get the relevant environment they wished for!
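In real life the syntax would come from a tool such as Puppet or Vagrant rather than the pseudocode above; purely as an illustration, here is a minimal, hypothetical shell sketch of the same idea (the package list is only an example):

#!/usr/bin/env bash
# provision-env.sh - a hypothetical 'environment as code' script: run it anywhere
# and you get the same environment; real tools express this declaratively.
set -euo pipefail
sudo apt-get update
sudo apt-get install -y g++ make git           # build toolchain for Dev
sudo apt-get install -y openjdk-11-jdk maven   # Java toolchain used later in this blog

Check such a script into version control, and the environment itself becomes reviewable, repeatable code.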

Automate tests – use tools. What kind of code is ‘Test as code’? Imagine something like –
test App1 {
    /* Comment - each of the functions or methods denoted by XXX ( ) returns a pass or fail */
    boolean success[1] = module1.test-login ( );
    boolean success[2] = module1.test-billpayment ( );
    boolean success[3] = module2.test-printbill ( );
    if (any success[] value = fail) { return 'build has failed'; }
}
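Again purely as a hypothetical illustration (real teams would use frameworks such as JUnit or Selenium underneath), a pipeline step that treats tests as code can be as small as a script that runs the suite and fails the build on any red test:

#!/usr/bin/env bash
# run-tests.sh - a hypothetical 'test as code' gate: run the automated suite and
# stop the pipeline if anything fails (mvn returns a non-zero exit code on failure).
if mvn test; then
    echo "All test cases passed - build can proceed"
else
    echo "Build has failed - at least one test case did not pass" >&2
    exit 1
fi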

Automate develop and build – use tools. What kind of code is ‘Function as code’? Imagine something like –
generate code1 {
    if (usecase = login) { generate code-block-login ( ); }
    if (usecase = billpay) { generate code-block-billpayment ( ); }
    createInterfaces (usecase login, usecase billpay);
}
…………. and also use tools to automatically compile and package the code with other dependent application packs
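To give a tiny, concrete flavor of the automated compile-and-package step, here is what it could reduce to with Maven (used here only because it reappears in the hands-on section later; Gradle, MSBuild and friends play the same role):

$ mvn clean package     # compiles the code, runs the unit tests and bundles everything into an artifact under target/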

Automate deployment – use tools. This goes hand in hand with automating infra and environments, given that deployments typically get stuck at the infra or environment level even when the build is successful. There may also be cases where the build is successful (the code has worked on the developer's machine), but it fails in the target deployment environment (say, because of some environment-specific variable settings). In such cases there are alternatives, such as shipping the code along with its development/test environment directly to the target as a single container; though we will not discuss that here.
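As a hypothetical flavor of that container idea (Docker is one such tool; the image name and port below are made up), the whole 'ship the environment with the code' step can boil down to two commands:

$ docker build -t myapp:1.0 .              # package the app together with its runtime environment into an image
$ docker run -d -p 8080:8080 myapp:1.0     # run exactly the same image, unchanged, on the target host

Because the image carries its own environment along, the 'works on my machine' class of failures largely disappears.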

Now your code is in production. However, DevOps is all about getting the entire IT life cycle onto a single seamless chain. So what's next? We automate the monitoring of applications and infra in production, in terms of, say, performance and security. If a suspicious event takes place – say, the application shows signs of hanging or breaking, or the infra points to a possible crash due to overload – it immediately and automatically triggers an incident for the service personnel.
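Stripped to its bare bones (and leaving real monitoring suites aside), the idea can be sketched as a hypothetical health-check loop; the URL and the alerting step here are made up:

#!/usr/bin/env bash
# monitor.sh - a hypothetical continuous-monitoring sketch: poll a health endpoint
# and raise an incident when the application stops responding.
URL="http://myapp.example.com/health"
while true; do
    if ! curl -fsS --max-time 5 "$URL" > /dev/null; then
        echo "$(date -u) ALERT: $URL not responding - raising incident" >&2
        # a real setup would call the ITSM tool's API here to open the ticket
    fi
    sleep 60
done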
Now say it is Sunday and the personnel are off duty. So the next logical step is to have a self-healing system that can, once the event is detected, provision the relevant fail-safe mechanisms and enable the system to operate normally. Far-fetched? Nope. This is happening ………….. and I won't cover it here, given that it warrants a separate blog 🙂

The Fear of DevOps. For Dev, QA and Ops.

DevOps suddenly seems to be a great buzzword that is gradually taking over organizational mindsets bent on achieving agility in how IT gets delivered, consumed and delivered again based on consumption-driven behavioral feedback. But then, different consumers of DevOps have different ideas of what DevOps means and of the extent to which people, process and technology need to be re-organized to achieve a state of DevOps – and that is enough not only to confuse them, but also to leave the IT organization (or vendor) that talks to that consumer's business organization (or in-house IT) equally confused about expectations. Coming down to the people who comprise the IT team – typically the developers [Dev], quality assurance [QA] and Operations [Ops] – they show even more evidence of being confused, and afraid, as their traditional roles and responsibilities start getting broken up. So let us see what this fear is all about, and how it can be broken in this new scheme of things. And then enter the enterprise architects of either organization, who either try to drive the show or try their best to implement DevOps into how IT is to be done; this causes more confusion when the architect tries to get these seemingly heterogeneous roles to work as a one-IT team.

I am a developer. I am so afraid of moving away from deep coding (and thereby forgetting my language), and scripts are now perceived as code! Moreover, why should I participate in testing at all?
Till now, development meant functional coding – so knowledge of typical languages such as Java, VB.NET, C++, C#, Scala, etc. was of the essence. Enter DevOps. While the depth of your functional coding knowledge still has room to grow, you start getting (and hence appreciating) the spread of IT when you code for the entire IT space – more specifically, test coding and infrastructure coding in addition. Hence the expectation is to move up the value chain, where you see the entire IT as code. Two things: (a) while scripts may also be perceived as code, they are a path towards higher complexity where the barrier between scripts and code becomes blurred; (b) while functional code development is already getting automated – still in a nascent stage of industry adoption today – a state of NoDev is poised to be realized in future, whereby the typical developer will have one of two options: cease to be a developer and lose the job, or grow up the value chain with respect to realizing IT as code.

I am a tester. I am so afraid that DevOps is automating everything and I no longer have my cubicle to peacefully sit and test in! My nice Excel sheets with repeatable test frameworks are now taken over by the machine.
That is the very nature of DevOps – to break the walls between Dev and Ops. And what happens when this comes in? Traditional QA becomes redundant and gets embedded as part of the overall automation. So manual testing goes away (at least partially, to start with) and testing becomes the job of the machine; whenever code gets pushed onto the test environment, it gets automatically tested, giving quick, real-time feedback to the Dev team on what didn't work. So what does a tester do now? The tester again has two options – cease to be a typical tester and lose the job, or grow up the value chain, whereby the QA guy (that's you!) is perceived as someone who creates the overarching automated test framework (read: test code) that will drive functional and non-functional development. For the functional part, collaboration with developers would definitely be needed; for the non-functional part, collaboration is needed with the operations guys.

I am the Ops – infrastructure guy. Till now I had so much knowledge of how to configure servers, workstations and environments. And suddenly there are new tools I have to learn, which will eventually eat up my job!
The most important essence of DevOps is to bring in Ops automation, starting with infrastructure automation. So what does the infrastructure guy do? Surprise, surprise – it is indeed creating the code that automates infrastructure and environment provisioning! And yes, this is a niche skill in actual DevOps implementations today: while everyone seems to be doing Agile-based Dev (and functional QA), Agile-based Ops [Infra] is something that customers look forward to from a capability perspective. So tools such as Puppet, Vagrant, etc. suddenly become very important. And that makes you move up the value chain, whereby you sit at a single machine, either in the office or on a beach in Florida, and provision any piece of IT infrastructure across the world on demand – options include virtual machines, cloud, or even bare metal! Also note that the world is moving towards a NoOps state whereby you create this automation, then sit back and enjoy monitoring whatever you have created – being the God of infra things!
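To make that concrete (the box name is chosen purely as an example, and Vagrant is only one of several such tools), spinning up a disposable machine from code can look like this:

$ vagrant init ubuntu/focal64   # writes a Vagrantfile describing the machine
$ vagrant up                    # creates and boots the VM on the local provider
$ vagrant ssh                   # log in to the freshly provisioned machine
$ vagrant destroy -f            # throw it away when you are done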
Further, two more things spin off for the infrastructure guy – (a) release & deployment automation, and (b) continuous monitoring. So what are these? (Oops, you have to learn something new again; hope you are not afraid?)
(a) Release & deployment automation is whatever it takes to eliminate the Ops nightmare whenever a developer pushes code for release to any environment, and even to production (which is then called deployment or deployment-ready). And this is basically done again through … yes, you guessed it right – automation!
(b) Continuous monitoring – given that DevOps is all about making the entire IT cycle seamless and agile, both in pre-production and in production, it entails monitoring applications, data and infrastructure (including networks) on a continuous basis to detect any disaster, provide real-time feedback to the Ops team at large (including yourself), and provision feedback-based, automated self-healing mechanisms. Ooooh, too much to digest!

I am the Ops – Service Management guy. What will I do if tickets get resolved automatically?
Again, given that DevOps spans the entire IT cycle, it definitely means that the world is slowly moving towards self-healing systems. So a move higher up this value chain would be to learn how to create the service management automation framework, where a ticket from tools such as ServiceNow or BMC Remedy may get auto-approved based on a rule engine, resolved (subject to a certain level of maturity) with little or no manual intervention, and scripts run automatically to bring everything in production back to the desired state, neutralizing the time taken to handle the disaster.
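A deliberately tiny, hypothetical sketch of that 'back to the desired state' idea (the service name is made up, and a real framework would be driven by the ITSM tool's rule engine rather than a lone script):

#!/usr/bin/env bash
# remediate.sh - a hypothetical self-healing step: if the service is down, restore
# the desired state instead of waiting for a human to pick up the ticket.
# (Assumes the script runs with the required privileges.)
SERVICE="myapp"
if ! systemctl is-active --quiet "$SERVICE"; then
    systemctl restart "$SERVICE"
    echo "$(date -u) $SERVICE was down - restarted automatically"   # a real framework would also update the ticket
fi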

And a few more fears – rather, myths – that need to be demystified

1. DevOps will not work, or is irrelevant to our scheme of things – we do not need agility in driving our IT
Then think – what is the harm in being ready for the future, just in case business starts demanding agility in delivering IT; in case end-customer behavior suddenly gets too dynamic? Or the country undergoes economic turmoil and regulatory changes suddenly pick up pace? Or, say, a new business model gets spun off to create new business opportunities, needing agility from the very systems that used to wake up once a year? Or, better still, the business undergoes a hostile takeover or a merger, and two large, disparate IT landscapes find a need to talk to each other?

2. There is no point in doing DevOps because the customer’s non-IT business supply chain is too slow
Firstly, DevOps is not a concept purely for IT to adopt. It is more of a philosophy, and then a practice, that can easily be adopted across industries. Read the book 'The Phoenix Project' by Gene Kim, available on Books 24×7, to appreciate how DevOps has its parallels in manufacturing – and also note that companies such as GE have increasingly adopted it across their non-IT processes. Also, just think of agile readiness again – what happens when, one day, you suddenly have a non-IT vendor pushing changes to their supply chain systems every other day, having adopted DevOps?

3. I am quite comfortable pursuing work with (and keep learning) the rich, proven technology that is also niche in the market. So there is no need to fit myself into this new world of DevOps.
Note that DevOps is not new, and is gradually becoming mainstream (Gartner says this, not me … just in case). DevOps is also evolving; some already say that DevOps is dead, giving way to stuff like NoDev and NoOps, and to adopting AI to do IT. So if you still harbor that fear, better think of (a) taking voluntary retirement after a few years, (b) opening a grocery store, or (c) getting into academics to teach 'how to: belly dance in 3 days'.

….. Oh yes; in case you are still right, expect me knocking on your doors after a few years for a job back to good ol’ IT 🙂

A Manager’s challenges in working with a DevOps Architect

I am a quintessential 'Manager' who has experience of working with an 'Architect'. Both of us have about the same number of years of experience in IT (more or less), but we have been through different curves in the industry, in the sense that our paths ran along different lines until we came together to form the DevOps CoE.

(For the sake of simplicity, let us not yet argue about the obsolescence of having a manager in the first place, in the DevOps world)

So what is it that makes working with an architect – in this case, specifically a DevOps architect – so different for a manager? To understand that, let's briefly talk about what the manager essentially has to do. Note, however, that on top of being a manager I also have to handle sales and resourcing to an extent, as we started only a few months back and have the usual budget crunch. In that perspective, I have to (a) proactively search for, connect with and reach out to people, trying to gauge and talk about something called the 'DevOps value proposition' and make a sale, (b) once I get the work, try to find profiles that fit the engagement, and possibly buy time by getting a value workshop done by an architect or a consultant to get a better view of the engagement and understand the customer's vision, (c) most importantly, scope out the work so that I have more or less some control over what these guys will do and can minimize the unexpected, and (d) do market research and blogging at my own sweet will. (For convenience's sake, I am not talking here about the mundane operational tasks that I still have on my plate.)

And now enters the DevOps Architect!

What does this guy expect? Ok, here’s the list (+++ means positive impact for me, — means negative):

— (a) He proactively searches for, connects with and reaches out to architects across the world and talks about – rather, writes on the web about – DevOps, agility and architectures; now that makes my job difficult, given that people already get to know all that stuff in the market from him, possibly even before I get the project. The risk is high: (i) customers get to know how a solution to their problems can be achieved, thereby probably bypassing me – so no sale! (ii) the information lies wide open for the competition to pitch in.

+++ (b) Getting profiles – the architect helps me big time here, given his connects with associates and industry experts who know at least 10% of DevOps; further, this is the guy I look to when it comes to buying time by conducting a customer workshop while I search for the people who will comprise the 'project team'.

— (c) BIG, BIG PROBLEM! The DevOps architect talks about Agile and works with the chief architects (powerful people) of the customer's world – which essentially means he wants to keep increasing the scope incessantly, without any bounds or containment, drawing architectures on the board (and on the cloud, as if that were possible) – and it takes very little time for things to actually get out of control. I am suddenly lost on the estimates: how many people he will take for the project, whether it has to be done in phases or as a big bang, and how much I will actually charge the customer. On top of that, two more things happen – (i) even the customer's managers get jittery (thinking, what will the invoice come to at the end of the month?), and (ii) the architect convinces everyone around, quite powerfully, that estimates need to be on-the-go (read: Agile!) given that everything is 'automation' and hence can be done in a jiffy! (He becomes the Dev and everyone around him becomes Ops.)

— (d) There is an indirect impact here. While I may want to concentrate on market research, blogging and conference activities as a marketing guy, the architect pursues his own interests in coming up with new ideas, building technology solutions and DevOps blueprints, and writing white papers and blogs. The problem is that all these activities, though started with great zeal, tend to land in a pending queue with no idea of when they will complete. And while I wear the 'marketing manager' hat, my attention suddenly gets diverted to emails about awaited deliverables left incomplete by the architect, as I suddenly start becoming accountable for such activities from a marketing perspective.

But then at the end of the day, I am not at all ready to lose my architect as he anyhow saves the day (and helps me make the sale) with his geek talk!

Baby Steps to DevOps (rather, Continuous Integration for the uninitiated)

DevOps is a philosophy, and any attempt to get your hands dirty implementing DevOps (whether in IT or any other industry – though here we are concerned with the former) is a long-stretched set of activities based on the organization's appetite for achieving IT agility. Now, this blog is meant to give you, as an individual, some very, very basic guidance on how you can try it out in a small team (or at home on your personal laptop), more from an experimental standpoint.

While for an actual DevOps implementation the three dimensions – people (read: culture), process and technology (read: tool-based automation) – are equally important, we will concern ourselves with only the last one for this small experiment. Further, we will constrain ourselves to the following scenario in a pre-prod IT environment:

a. Provision a virtual machine (VM) to set up an environment on-the-go
b. Install specific tools to write a small program (Java in this case, given that the associated tools are freely available on the web) and move it to a source code repository
c. Build the program (this is one-time before we automate in next step)
d. Establish an automated workflow such that whenever a change is made to the program, the code would be picked up and built – this is a very basic form of what is called ‘Continuous Integration’

Note – You can perform this set of activities on a Windows PC, a Mac or a Linux PC; we will do the rest of the activities on Ubuntu Linux, which we will install in the VM. Further, you may refer to the unlimited content available on the web for specific implementation-level instructions based on your host OS. Note that this is basic guidance meant to give you a basic understanding of where DevOps typically starts, and is by no means a tutorial on DevOps or the associated software, tools or processes.

The steps are as follows [WARNING: the details are quite technical]

1. Provision the VM – install Oracle VirtualBox from the web; it is available for free. Go to the UI and create a new VM; you may either go with the default parameters or choose your own when it asks for the options (say, 1GB RAM, 80GB hard drive space, etc.). Note that you can create as many VMs as you wish, and use or remove them at will.
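(If you prefer the command line to the UI, VirtualBox also ships with the VBoxManage tool; purely as a rough sketch – the VM name, OS type and sizes below are just examples – the same step looks like this:)

$ VBoxManage createvm --name devops-vm --ostype Ubuntu_64 --register
$ VBoxManage modifyvm devops-vm --memory 1024 --cpus 1
$ VBoxManage createhd --filename devops-vm.vdi --size 81920
$ VBoxManage storagectl devops-vm --name SATA --add sata
$ VBoxManage storageattach devops-vm --storagectl SATA --port 0 --device 0 --type hdd --medium devops-vm.vdi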

2. Download Ubuntu 32-bit (or 64-bit if your host OS is 64-bit), which is available as a disc image (an .iso file).

3. Start the VM. It will ask for the O/S image you want to install. Browse to your local copy of the .iso file that you have saved in your machine, select and install it.

4. Once Ubuntu starts, open a command-line interface [CLI] by launching a terminal application (e.g., GNOME Terminal or XTerm) from the Ubuntu GUI.
5. Install the following tools – Git, Maven, Jenkins. In Ubuntu, the commands may look like the following (they may vary slightly based on the conditions or parameters of the specific installation; note in particular that Jenkins is not in the default Ubuntu repositories, so you will need to add the Jenkins apt repository and a Java runtime first – see jenkins.io for the current instructions):

$ sudo apt-get update
$ sudo apt-get install git
$ sudo apt-get install maven
$ sudo apt-get install jenkins

(Note – Git is a source code repository, Maven is a Java build tool, and Jenkins is a CI tool that will integrate Git and Maven to achieve the automation objective stated in point (d) above. Further, there are specific Linux commands you can use to check versions, adjust parameters in configuration files, etc.; these are quite well covered in various internet sources.)

6. Write a Java program using any CLI editor such as nano (you can also use the vi editor if you are a seasoned Linux user) and save it in a folder
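(Purely as an illustration – the class below is a throwaway placeholder – you could even create the file straight from the shell; when you later copy it into the Maven structure in step 8, add the matching package declaration at the top:)

$ cat > App.java <<'EOF'
public class App {
    public static void main(String[] args) {
        System.out.println("Hello, DevOps!");   // trivial program, just so we have something to build
    }
}
EOF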

7. Generate the Maven folder structure and project object model (pom) file using the 'mvn archetype:generate …..' command (a typical invocation is shown below); Maven then downloads several .jar files from the web, creates a default folder structure from one of its available templates, and creates a pom.xml file.
The default folder structure is typically of this form:
/<yourname>/src/main/java/<yourstructure>/ (a default program 'App.java' is also placed at the bottom of this structure)
Note: <yourname> and <yourstructure> are names you specify as parameters to the archetype:generate command
Maven also creates a similar-looking structure for tests under /src/test and places a file 'AppTest.java' there
– In fact, Maven also pulls in JUnit for unit tests, and this file is a unit test script
The pom.xml file itself is created directly under the /<yourname> folder
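As an illustration, a typical non-interactive invocation looks like the following (the group and artifact IDs are placeholders you would replace with your own):

$ mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app \
      -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false

Here 'my-app' plays the role of <yourname>, and the package com.mycompany.app becomes <yourstructure> in the folder layout above.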

8. Copy your Java source code into the folder structure (note that the default program 'App.java' already created at the very bottom of the structure can be replaced with your source code)

9. In the project's root folder (the /<yourname> folder that contains pom.xml), initialize Git and check your source code into it, so that the whole Maven project is under version control; typical commands are:

$ git init
$ git add pom.xml src
$ git commit -m 'my first version'

10. You can use Maven to test, compile and package [commonly known as 'build'] the source code with a single command, 'mvn package' (this creates a .jar file under the target folder, which can also be run; there are numerous internet sources with further details). This step is optional, as the next steps achieve the same thing using Jenkins.
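(For instance, with the placeholder names from step 7, the build and a quick smoke-run would look like this; the .jar name follows Maven's default <artifactId>-<version> pattern:)

$ mvn package
$ java -cp target/my-app-1.0-SNAPSHOT.jar com.mycompany.app.App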

Note – The next few steps are the most important; they give you the very basic essence of automation using CI

11. Start Jenkins (if not already started); check by going back to the Ubuntu GUI, opening the browser – typically Mozilla Firefox – and entering 'http://localhost:8080' (note that Jenkins uses port 8080 by default)

12. Log in to Jenkins when asked (recent Jenkins versions ask you to unlock the installation with an initial admin password, which you can read from /var/lib/jenkins/secrets/initialAdminPassword, and then let you create an admin user)

13. In the jenkins dashboard screen, go to ‘Manage Jenkins’ at the left

14. Configure your Git and Maven installations (specifying their locations, etc.) in the appropriate panels (there are thousands of internet sources that cover this). Coming back to the dashboard, create a 'New Item' (button at the top left) and choose a Maven project. For this project you will get a form where you need to specify (among other things) the Git repository location, and the Maven goal to run whenever code is changed and committed to Git; in this case, 'package'. Further, a polling schedule needs to be specified to tell Jenkins how frequently to check the code for changes; it uses a cron-like syntax, so for checking every 10 minutes the schedule is specified as 'H/10 * * * *' – appropriate help describing what this means is available right inside Jenkins.

15. Once you make any change to the code (or even if you do not), you can always run ‘Build Now’ from the specific project’s dashboard to build your code anytime.

16. Alternatively, Jenkins will check every 10 minutes whether your code in Git has changed, and if so will pick it up and build it (a successful build is shown as a solid blue circle and an unsuccessful one in red, at the bottom left of the project dashboard).

Voila! You have achieved very basic CI in Java.