Thursday, January 20, 2011

The Dark side of Agility

Agility is considered a major advantage of SOA, Virtualization and Cloud Computing.
By using Abstraction we create an Agile, less complex model for Business people and IT experts.
In this new Agile world it is easy to Change, Adapt and Build new entities.


Sometimes it could be too easy. 


With too much Agility the result may be a chaotic Enterprise, rather than an Elastic, Flexible and Easy-to-change Enterprise or Virtual Enterprise.

If we would like to enjoy the Bright side of Agility, we have to take into consideration its possible Dark side, and avoid too much Agility by Planning and Managing the Architecture and Infrastructure.





Virtualization as an example
The following section is an example illustrating the problems caused by too much Agility in Virtualization. It should be remembered that Virtualization does not necessarily imply too much Agility. Without too much Agility it is a valuable technology, both in the Data Center context and in the Public Cloud context.

It should also be remembered that the following section is only an illustration. It is not an exhaustive list of all the challenges derived from Virtualization's ease of change.


Lack of Life Cycle Management 
It is so easy to define a new instance of a Virtualized Operating System that in some Enterprises and Public Clouds people forget to delete unused instances.
The results could be wasted Hardware Resources, aimless Backups and unnecessary Management Overhead.
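A simple safeguard is a periodic report of instances that have been idle for a long time, reviewed with their owners before deletion. The following Python sketch only illustrates the idea, assuming the virtualization platform can report each instance's last activity; VMRecord, find_stale_instances and the 90-day threshold are hypothetical names and values, not any vendor's API.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class VMRecord:
    name: str
    owner: str
    last_activity: datetime  # last login, network traffic or console access

def find_stale_instances(inventory, max_idle_days=90):
    # Return instances idle longer than the threshold, oldest first.
    cutoff = datetime.now() - timedelta(days=max_idle_days)
    stale = [vm for vm in inventory if vm.last_activity < cutoff]
    return sorted(stale, key=lambda vm: vm.last_activity)

inventory = [
    VMRecord("test-app-03", "dev team", datetime(2010, 6, 1)),
    VMRecord("crm-prod-01", "operations", datetime(2011, 1, 19)),
]
for vm in find_stale_instances(inventory):
    print(vm.name, "owned by", vm.owner, "idle since", vm.last_activity.date())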


Too much usage of Hardware Resources  
Enterprises which used only about 10% of their Intel-based Servers' CPU easily build Virtual Server after Virtual Server on a single Physical Server. The aggregate CPU demand could reach about 110%, along with a high Memory usage rate.


After defining too many instances, the CPU usage rate is too high in most cases, and instead of improving, Performance is worse than before. In case of Physical Server downtime, it may be impossible to find another server able to run the Virtual Operating System instances of the overloaded server.

The Best Practice Rule of Thumb is to use about 60% to 70% of the CPU at peak times. 60% is still a lot better than the 10% to 15% used before virtualizing the Servers.
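The rule of thumb can also be enforced as a simple admission check before defining another instance on a Physical Server. This is a minimal sketch, assuming peak CPU figures are available from monitoring; the function name and the 65% threshold are illustrative assumptions.

def can_place(host_peak_percent, vm_peak_percent, threshold=65.0):
    # Reject a placement that would push peak CPU usage above the threshold,
    # leaving headroom for load spikes and for absorbing instances
    # from a failed server.
    return host_peak_percent + vm_peak_percent <= threshold

print(can_place(55.0, 15.0))  # False: 70% peak would exceed the 65% threshold
print(can_place(40.0, 15.0))  # True: 55% peak is within the threshold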


An upper threshold for CPU usage is not new: back in the 1970s a maximum threshold for Mainframe CPU usage was defined for Online Transaction workloads.


No planning of Virtual instance distribution on Physical Servers
Too much usage of Resources, described above, is one scenario resulting from lack of planning.


Other example scenarios are: placing CPU-Bound instances on the same Physical Server, causing Performance degradation; and placing all instances of an Application on a single server, so that no Business Continuity is possible in case of Physical Server failure.
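The second scenario can be caught by a simple anti-affinity rule, evaluated whenever a placement is planned. The sketch below is a hypothetical illustration of the rule only; real virtualization platforms express such constraints in their own configuration.

def violates_anti_affinity(placement, app_of):
    # placement maps VM name -> physical server; app_of maps VM name -> application.
    # Flags applications whose instances all share one physical server,
    # so a single hardware failure would take the whole application down.
    servers_per_app = {}
    instances_per_app = {}
    for vm, server in placement.items():
        app = app_of[vm]
        servers_per_app.setdefault(app, set()).add(server)
        instances_per_app[app] = instances_per_app.get(app, 0) + 1
    return [app for app, servers in servers_per_app.items()
            if len(servers) == 1 and instances_per_app[app] > 1]

placement = {"billing-1": "srv-A", "billing-2": "srv-A", "web-1": "srv-B"}
app_of = {"billing-1": "billing", "billing-2": "billing", "web-1": "web"}
print(violates_anti_affinity(placement, app_of))  # ['billing']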
     
No Network traffic optimization
If two Virtual Machines which communicate often with each other and transfer huge amounts of data are carelessly placed on different Physical Servers, the result is a lot of Physical Network traffic.
Locating them on the same Physical Server, and transmitting the data using internal server resources instead of physical network resources, could save Network resources.


The result of placing them on different Physical Servers is degraded communication Response Time, both because the physical network is a lot slower than memory-to-memory data transfer within the same server, and because of higher network contention.
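In other words, the optimization is to find the pairs of Virtual Machines that exchange the most data and to prefer placing each such pair on the same Physical Server. The sketch below only ranks candidate pairs from measured traffic; the traffic figures, the names and the 50 GB/day cutoff are assumptions for illustration.

def colocation_candidates(traffic, min_gb_per_day=50.0):
    # traffic maps (vm_a, vm_b) -> GB transferred per day.
    # Returns the chattiest pairs first; co-locating them replaces physical
    # network hops with memory-to-memory transfers inside one server.
    heavy = [(pair, gb) for pair, gb in traffic.items() if gb >= min_gb_per_day]
    return sorted(heavy, key=lambda item: item[1], reverse=True)

traffic = {("app-1", "db-1"): 220.0, ("app-1", "web-1"): 12.0}
for (a, b), gb in colocation_candidates(traffic):
    print("consider placing", a, "and", b, "on the same server:", gb, "GB/day")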


Is it possible to move the Storage together with the Virtual Operating System Instance?
It is easy to move a Virtual Machine from one Physical Server to another using technologies such as VMware's VMotion. CPU and Memory resource movement is part of this process. However, Storage movement or reallocation is not part of this process, and therefore the instance may function less optimally in the new server environment.



Conclusions
The Value Proposition of Agility is ease of Change and ease of building new entities. However, when it is too easy to make changes, many Enterprises will not resist the temptation of too many quick and unplanned changes or additions.
The consequences could be no Value at all, or even worse results than with traditional non-Agile architectures and systems.

I used Virtualization as an example to illustrate the damage of too much Agility.

Cloud Computing is based upon Virtualization and therefore the conclusions about Virtualization could be applicable to Cloud Computing as well.


I could also use SOA as another illustration of the same issue, for example by describing a very Agile implementation based on a large number of fine-grained Services. In that case Change is easy, but Management is a nightmare, as the contrast sketched below shows.
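To make the contrast concrete, here is a hypothetical pair of Python interfaces; the service names are invented for illustration. With fine-grained services, every business attribute is a separately deployed, versioned and monitored endpoint, so even a small business change touches many managed artifacts.

# Fine-grained: each attribute is its own service. Change is easy,
# but every definition below is a separately managed endpoint.
def get_customer_name(customer_id): ...
def get_customer_address(customer_id): ...
def get_customer_credit_limit(customer_id): ...

# Coarse-grained alternative: one service returns a whole business document,
# leaving far fewer endpoints to deploy, version and monitor.
def get_customer_profile(customer_id):
    # Returns name, address and credit limit in a single response.
    ...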

Wednesday, January 5, 2011

SOA in 2011

My prediction as posted in ebizQ SOA Forum:


The most promising area in SOA developments in 2011 could be Services for Social Networks.

I would expect key SOA-related developments in Business Process Management (BPM), Business Intelligence (BI) and Case Management.


I doubt that we will see key SOA developments related to Cloud Computing in 2011. As far as Cloud Computing is concerned, we may see Service Oriented Infrastructure (SOI) related developments, which will also be related to Virtualization.



Read also other opinions in the ebizQ SOA Forum and ZapThink's prediction.


Unfortunately, failure rates of IT projects are high, and even more unfortunately, failure rates of SOA endeavours are higher still.
Will 2011 be the year in which SOA endeavour failure rates gradually fall? I am not sure it will.

Friday, November 5, 2010

Revival or no Death at all: Burton Group and The Lazarus Effect

On January 5, 2009, Burton Group's analyst Anne Thomas Manes published her famous blog post SOA is Dead; Long Live Services. She argued that "SOA met its demise on January 1, 2009, when it was wiped out by the catastrophic impact of the economic recession. SOA is survived by its offspring: mashups, BPM, SaaS, Cloud Computing, and all other architectural approaches that depend on 'services'."

Recently, Burton Group published a Research Note titled The Lazarus Effect: SOA Returns. According to this Research Note, SOA died during the Recession, but is now returning after it. In Burton Group's words:
"As the global economy struggles back to health and organizations seek to redefine themselves and make strategic investments, many organizations are reconsidering SOA".

Unfortunately, I did not read Frank Herbert's and Bill Ransom's Science Fiction book titled The Lazarus Effect, so I can only imagine that it is about recovering from death or near-death. However, in the real world SOA was far from dead, despite the Recession and despite many SOA initiative failures.

It is only another example of the IT Evolution Spiral Model I described in a previous post.

It should be noted that in a presentation titled SOA a means for Leveraging Business Development? I argued that a Recession may be beneficial for SOA initiatives, in case organizations adapt their SOA initiatives to the new circumstances.
 
It should also be noted that Burton Group was acquired by Gartner.
Leading Gartner analysts' Research Notes (e.g. those written by Yefim Natis and Roy Schulte) never shared the opinion that SOA is Dead, but viewed it from a more balanced perspective: less enthusiasm during the Hype period and no premature death notices.




Sunday, October 17, 2010

The illusion of static Enterprise Architecture

I recently read a post by ZapThink's analyst Jason Bloomberg titled: 

Continuous Business Transformation: At the Center of ZapThink 2020


According to that post, the permanence of change drives how we run our organizations, but it is against our human quest for stability. As far as Enterprise Architecture is concerned, he notes that the To-Be Architecture that organizations are trying to move to from their current As-Is Architecture is a moving target: there will never be a stable Enterprise Architecture.

I do agree that Architecture is dynamic in nature; however, we should look more deeply at the characteristics of that ever-changing process.

Does Enterprise Architecture evolve linearly or spirally?
I use the term linear for describing any type of monotonic evolution, simply because linear is simpler than other monotonic functions.
In my opinion, as described in a previous post, it is spiral.
Yesterday, I encountered a SaaS example supporting my case.
I looked at an old Giga Information Group article from 2002, written by Byron Miller and titled "ERP Software as a Service".
The issues and observations are similar to current ERP SaaS issues (described in many articles and Research Notes, including my post Future Applications SaaS or Traditional).
The term SaaS in the old article does not refer to Cloud Computing but to the Application Service Provider (ASP) model.


Is the As-Is to To-Be Architecture approach a wrong approach?

I do think that it is a useful approach. The fact that we will need a new To-Be even after completing the transformation from As-Is to To-Be does not deny the value of reaching a better architectural state than the current one.
Perpetual change is against Human nature, but reaching a goal is not. It is easier for us to reach a goal (a To-Be Architecture) and afterwards look for another goal (the next To-Be Architecture), than to act in a chaotic, ever-changing environment without any sub-goals.

Why is Architecture doomed to change?

It is not only because of the Dynamic Business, Technological changes and other organizational changes.

Another main reason for Enterprise Architecture's inherent dynamics is its nature. EA is an abstract model describing artifacts (Business artifacts, Technological IT artifacts and Applicative IT artifacts) and the relations between them.
Most abstract models do not fully correspond to the real entities they describe, so even if nothing changes, the model should be improved and changed.
   

Friday, September 17, 2010

Cloud Computing and the Security Paradox


On September 14th I participated in a local IBM conference titled Smarter Solutions for a Smarter Business. One of the most interesting and practical presentations was Moises Navarro's presentation on Cloud Computing.
He quoted an IBM survey about suitable and unsuitable workload types for implementation in the Cloud. The ten leading suitable workloads included many Infrastructure Services and Desktop Services. The unsuitable workloads list included ERP as well as other Core Applications, as I would have expected (for example, read my previous post SaaS is Going Mainstream).
However, it also included Security Services as one of the most unsuitable workloads. On one hand, this is not a surprising finding, because Security concerns are Cloud Computing inhibitors; but on the other hand, Security Services are part of Infrastructure Services, and therefore could be a good fit for implementation in the Cloud.

A recent Aberdeen Group Research Note titled Web Security in the Cloud: More Secure! Compliant! Less Expensive! (May 2010) supports the view that implementing Security Services in the Cloud may provide significant benefits.
The study reveals that applying e-mail Security as a Service in the Cloud is more efficient and secure than applying e-mail Security On Premise. The Aberdeen study was based upon 36 organizations using On Premise Web Security solutions and 22 organizations using Cloud Computing based solutions.
Cloud based solutions reported significantly fewer Security incidents in every incident category checked. The categories were: Data Loss, Malware Infections, Web-Site Compromise, Security-related Downtime and Audit Deficiencies.
As far as efficiency is concerned, Aberdeen Group found that users of Cloud based Web Security solutions realized a 42% reduction in associated Help Desk calls in comparison to users of On Premise solutions.

The findings may not be limited to Web Security and e-mail Security. Aberdeen Group identifies a convergence process between Web Security, e-mail Security and Data Loss Prevention (DLP).

The paradox is that most Security threats are internal, while most Security concerns are about external threats. For example, approximately 60% of Security breaches in banks were internal. Usually insiders can do more harm than outsiders.
The Cloud is no exception to that paradoxical rule: there are many Security concerns about Cloud based implementations and about Cloud based Security Services, yet relatively fewer Security breaches and more efficient implementation of Security Services in the Cloud.

Friday, August 20, 2010

Is Oracle the Java killer?

Probably not. Java is too strong to be killed.
I posted the following answer to the question:



Will Oracle's lawsuit Against Google Put a Chill on Java Adoption?, asked in the ebizQ SOA Forum


When Oracle acquired Sun, I thought it was a wrong decision (read my post: Vendors Survival: The Sun is red - Oracle to buy Sun First Take).

It seems that Oracle's managers reached a similar conclusion and are trying to minimize the amount of money they lose. The lawsuit against Google is one of the ways to achieve that. However, this lawsuit supports the concerns about Java that arose after Oracle acquired Sun.

The delicate balance of the Java community, with two strong players (IBM and BEA), Sun as the owner of Java and leader of the Java Community Process, and other strong players (Oracle, SAP, RedHat/JBoss etc.), no longer exists.

Oracle swallowed BEA and Sun and is now the owner of Java. Java will not disappear: it is still a popular language and environment, especially for Software product developers, because of its platform independence. However, the major Java players will probably ask the question: against which competitor will Oracle's next lawsuit be? IBM? SAP? Or even RedHat, due to Linux competition?

For the Long Term they will look for a strategy less dependent on Java and Oracle. It is easy for SAP, because they are platform agnostic. SAP can easily develop SOA ERP Services in other programming languages, e.g. C#, as part of its application products portfolio.

It is more difficult for IBM and RedHat, whose strategy is based on Java. As far as Google is concerned, it may also look for a Long Term alternative to Java. The alternative may be Java-like, as C# is, and more suitable for Cloud Computing.

Tuesday, August 17, 2010

Why is IBM going to acquire Unica? Or: Unica's uniqueness


Unlike the other three leading Ecosystem vendors, Microsoft, Oracle and SAP, IBM is not a player in the applications market. Its absence from this market is based on a strategy which does not include ERP, CRM and other applications among its target markets.

In order to answer the question in the title, I am going to describe the first time the name Unica was mentioned to me.
It was a strategic CRM consulting project I was participating in. The large customer was using Siebel. I joined a CRM expert with vast knowledge and experience of the customer's implementation as well as of other CRM projects. My role was to analyze the CRM market and its trends, focusing on implications relevant to that client. I chose to focus on Siebel and on the other three market leaders of that time (SAP, Oracle and PeopleSoft), as well as on two other unique products which could supplement them (Unica and Kana). Two days before we gave the client our report, Oracle announced the Siebel acquisition, and my role in the project became more important than planned. I had to answer the key question: would Oracle continue to develop Siebel, or would another CRM product (Oracle CRM or PeopleSoft CRM) be the strategic product?
Had the conclusion been that Oracle acquired Siebel only for market share, the customer would have considered replacing it with SAP CRM.
My First Take analysis provided the right conclusion: Siebel was going to be Oracle's leading CRM product.
As far as Unica is concerned, its uniqueness was in Campaign Management and in unifying the Operational CRM and the Analytical CRM parts of its Campaign Management offering.

As part of my work in the same consulting project, I also learned something new to me about Siebel by reading a Datamation report: Siebel defined itself as a Business Intelligence (BI) vendor, in addition to defining itself as a leading CRM vendor. Its BI solutions were not limited to the context of Analytical CRM.




So, why is IBM acquiring Unica?
It is because of the analytic capabilities of Unica's products. IBM, and it is not the only one, predicts that extensive usage of more sophisticated and smarter BI and Analytic tools is a must for most enterprises. The BI and Analytics markets are target markets for IBM. It has already acquired companies like SPSS (a Statistical and Analytical vendor) and Cognos (a BI market leader). These tools, as well as Unica's tools, can be used together with other IBM infrastructure products such as the DB2 database for Operational and Data Warehouse systems, various IBM BPM and BAM solutions, the DataStage ETL product and others.
IBM's challenge is similar to the challenges the company faced in other areas, such as SOA and Integration: building a comprehensive solution from acquired and in-house developed products.

Is it Viable?
