Friday, November 5, 2010

Revival or no Death at all: Burton Group and The Lazarus Effect

On January 5, 2009, Burton Group analyst Anne Thomas Manes published her famous blog post SOA is Dead; Long Live Services. She argued that "SOA met its demise on January 1, 2009, when it was wiped out by the catastrophic impact of the economic recession. SOA is survived by its offspring: mashups, BPM, SaaS, Cloud Computing, and all other architectural approaches that depend on 'services'."

Recently, Burton Group published a Research Note titled: The Lazarus Effect: SOA Returns. According to this Research Note, SOA died during the recession, but it is now returning as the recession ends. In Burton Group's words: 
"As the global economy struggles back to health and organizations seek to redefine themselves and make strategic investments, many organizations are reconsidering SOA".

Unfortunately, I have not read Frank Herbert's and Bill Ransom's science fiction book titled The Lazarus Effect, so I can only imagine that it is about recovering from death or near-death. However, in the real world, SOA was far from dead, despite the recession and despite the failure of many SOA initiatives.

It is only another example of the IT Evolution Spiral Model I described in a previous post.

It should be noticed that in a presentation titled: SOA a means for Leveraging Business Development? I argued that a recession may be beneficial for SOA initiatives, provided that organizations adapt those initiatives to the new circumstances. 
It should also be noticed that Burton Group was acquired by Gartner.
Leading Gartner analysts' Research Notes (e.g. Research Notes written by Yefim Natis and Roy Schulte) never shared the opinion that SOA is dead, but viewed it from a more balanced perspective: less enthusiasm during the hype period and no premature death notices.

Sunday, October 17, 2010

The illusion of static Enterprise Architecture

I recently read a post by ZapThink's analyst Jason Bloomberg titled: 

Continuous Business Transformation: At the Center of ZapThink 2020

According to that post, the permanence of change drives how we run our organizations, but it works against our human quest for stability. As far as Enterprise Architecture is concerned, he notes that the To-Be architecture organizations are trying to move to from their current As-Is architecture is a moving target: there will never be a stable Enterprise Architecture.

I do agree that architecture is dynamic in nature; however, we should look more deeply at the characteristics of that ever-changing process.

Does Enterprise Architecture evolve linearly or spirally?
I use the term linear for describing any type of monotonic evolution, simply because linear is the simplest of the monotonic functions.
In my opinion as described in a previous post it is spiral.
Yesterday, I encountered a SaaS example supporting my case.
I looked at an old Giga Information Group article dated 2002. The article, written by Byron Miller, is titled: "ERP Software as a Service".
The issues and observations are similar to current ERP SaaS issues (described in many articles and Research Notes, including my post: Future Applications SaaS or Traditional).
The term SaaS in the old article does not refer to Cloud Computing but to the Application Service Provider (ASP) model.

Is the As-Is to To-Be architecture approach a wrong approach?

I do think that it is a useful approach. The fact that we will need a new To-Be architecture even after completing the transformation from As-Is to To-Be does not deny the value of reaching a better architectural state than the current one.
Perpetual change is against human nature, but reaching a goal is not. It is easier for us to reach a goal (the To-Be architecture) and afterwards look for another goal (the next To-Be architecture), than to act in a chaotic, ever-changing environment without any sub-goals. 

Why is architecture doomed to change?

It is not only because of dynamic business, technological changes and other organizational changes.

Another main reason for Enterprise Architecture's inherent dynamics is its nature. EA is an abstract model describing artifacts (business artifacts, technological IT artifacts and applicative IT artifacts) and the relations between them.
Most abstract models do not fully correspond to the real entities they describe, so even if nothing changes, the model should be improved and changed.

Friday, September 17, 2010

Cloud Computing and the Security Paradox

On September, 14th  I participated in a local IBM conference titled: Smarter Solutions for a Smarter Business. One of the most interesting and practical presentations was Moises Navarro's presentation on Cloud Computing.
He quoted an IBM survey about suitable and unsuitable workload types for implementation in the Cloud. The ten leading suitable workloads included many infrastructure services and desktop services. The unsuitable workloads list included ERP as well as other core applications, as I would expect (for example, read my previous post SaaS is Going Mainstream).
However, it also included Security Services as one of the most unsuitable workloads. On one hand, this is not a surprising finding, because security concerns are Cloud Computing inhibitors; on the other hand, Security Services are part of infrastructure services, and therefore could be a good fit for implementation in the Cloud.

A recent Aberdeen Group's Research Note titled: Web Security in the Cloud: More Secure! Compliant! Less Expensive! (May 2010) supports the view that Security Services implementation in the Cloud, may provide significant benefits.
The study reveals that applying e-mail Security as a Service in the Cloud is more efficient and secure than applying e-mail security on premise. Aberdeen's study was based upon 36 organizations using on-premise Web Security solutions and 22 organizations using Cloud Computing based solutions.
Cloud based solutions reported significantly fewer security incidents in every incident category checked. The categories were: Data Loss, Malware Infections, Web-Site Compromise, Security-related Downtime and Audit Deficiencies.
As far as efficiency is concerned, Aberdeen Group found that users of Cloud based Web Security solutions realized a 42% reduction in associated Help Desk calls in comparison to users of on-premise solutions. 

The findings may not be limited to Web Security and e-mail Security. Aberdeen Group identifies a convergence process between Web Security, e-mail Security and Data Loss Prevention (DLP).

The paradox is that most security threats are internal, while most security concerns are about external threats. For example, approximately 60% of security breaches in banks were internal. Usually insiders can do more harm than outsiders.
The Cloud is no exception to that paradoxical rule: there are many security concerns about Cloud based implementations and about Cloud based Security Services, and yet relatively fewer security breaches and a more efficient implementation of Security Services in the Cloud.

Friday, August 20, 2010

Is Oracle the Java killer?

Probably not. Java is too strong to be killed.
I posted the following answer to the question:

Will Oracle's lawsuit Against Google Put a Chill on Java Adoption? asked in ebizQ SOA Forum

When Oracle acquired Sun, I thought it was a wrong decision (read my post: Vendors Survival: The Sun is red - Oracle to buy Sun First Take).

It seems that Oracle's managers reached a similar conclusion and are trying to minimize the amount of money they lose. The lawsuit against Google is one of the ways to achieve it. However, this lawsuit reinforces the concerns raised about Java after Oracle acquired Sun.

The delicate balance of the Java community, with two strong players (IBM and BEA), Sun as the owner of Java and leader of the Java Community Process, and other strong players (Oracle, SAP, RedHat/JBoss etc.), no longer exists.

Oracle swallowed BEA and Sun and is now the owner of Java. Java will not disappear: it is still a popular language and environment, especially for software product developers, because of its platform independence. However, the major Java players will probably ask the question: which competitor will Oracle's next lawsuit target? IBM? SAP? Or even RedHat, due to Linux competition?

For the long term they will look for a strategy less dependent on Java and Oracle. It is easy for SAP, because they are platform agnostic. SAP can easily develop SOA ERP Services in other programming languages, e.g. C#, as part of its application products portfolio.

It is more difficult for IBM and RedHat, whose strategy is based on Java. As far as Google is concerned, it may also look for a long-term alternative to Java. The alternative may be Java-like, same as C#, and more suitable for the Cloud.


Tuesday, August 17, 2010

Why is IBM going to acquire Unica? Or: Unica's uniqueness

Unlike the other three leading ecosystem vendors, Microsoft, Oracle and SAP, IBM is not a player in the applications market. Its absence from this market stems from a strategy that does not include ERP, CRM and other applications among its target markets.

In order to answer this question I am going to describe the first time the name Unica was mentioned to me.
It was during a strategic CRM consulting project I was participating in. The large customer was using Siebel. I joined a CRM expert with vast knowledge and experience of the customer's implementation as well as other CRM projects. My role was to analyze the CRM market and its trends, focusing on implications relevant to that client. I chose to focus on Siebel, the other three market leaders of that time (SAP, Oracle and PeopleSoft), and two other unique products which could supplement them (Unica and Kana). Two days before we gave the client our report, Oracle announced the Siebel acquisition and my role in the project became more important than planned. I had to answer the key question: will Oracle continue to develop Siebel, or will another CRM product (Oracle CRM or PeopleSoft CRM) be the strategic product?
Had the conclusion been that Oracle acquired Siebel only for market share, the customer would have considered replacing it with SAP CRM. 
My First Take analysis provided the right conclusion: Siebel was going to be Oracle's leading CRM product.
As far as Unica is concerned, its uniqueness was in Campaign Management and in unifying the Operational CRM and the Analytical CRM parts of its Campaign Management offering.

As part of my work in the same consulting project, I also learned something new to me about Siebel by reading a Datamation report: Siebel defined itself as a Business Intelligence (BI) vendor in addition to defining itself as a leading CRM vendor. Its BI solutions were not limited to the context of Analytical CRM.

So, why is IBM acquiring Unica?
It is because of the analytic capabilities of Unica's products. IBM, and it is not the only one, predicts that extensive usage of more sophisticated and smarter BI and analytic tools is a must for most enterprises. The BI and analytics markets are target markets for IBM. It has already acquired companies like SPSS (a statistical and analytical vendor) and Cognos (a BI market leader). These tools, as well as Unica's tools, can be used together with other IBM infrastructure products such as the DB2 database for operational and Data Warehouse systems, various IBM BPM and BAM solutions, the DataStage ETL product and others.
IBM's challenge is similar to challenges the company faced in other areas, such as SOA and Integration: building a comprehensive solution from acquired and in-house developed products. 

Tuesday, July 27, 2010

IBM z-Enterprise First Take: Data Center In a Box or Cloud Computing

I started my career in the seventies, working as a programmer for a Governmental Service Bureau providing service to most of the public sector organizations in my country. We used IBM 360 Mainframes with MVT Operating System.

The V did not stand for Virtual (there was no virtual storage support), but for Variable, because it was an operating system capable of managing variable-length partitions.

MVT's successor, SVS (Single Virtual Storage), was followed by MVS (Multiple Virtual Storages).

Current Mainframe operating systems are based upon MVS. In 1995 it was extended and branded as OS/390. OS/390 was later replaced by the z/OS operating system.

In the 1990s many people believed that  "the Mainframe is dead". However, the Mainframe is still a valuable and profitable asset for IBM, used by many large enterprises. 

On July 22, 2010, IBM announced the new z-Enterprise system, which is actually a Data Center in a box. The new z-Enterprise computers support Mainframe systems (including Linux on the Mainframe), AIX servers and Intel-based Windows. All systems share common management implemented in firmware.

The new announcement, described by IBM as  "The revolutionary new design of the  zEnterprise System", may extend the Mainframe platform era for additional years. 

Mainframes Strengths and Considerations
The IT industry is standardizing on Intel based Servers and Desktops with Windows or Linux Operating Systems.
This trend implies the following considerations and challenges to all Mainframes (regardless of manufacturer):
  • Relatively higher hardware costs in comparison to Intel based servers.
  • The price gap between Intel based servers and Mainframes widens systematically, because Intel based server prices fall at a greater rate than Mainframe hardware prices.
  • Lack of Software products - Most of the products are developed for the most frequently used platforms and only some of them are ported to Mainframes.
  • Lack of young experts - Most of the young professionals prefer to specialize in the hottest or mainstream technologies and not in Mainframe technology.

Mainframes strengths include:
  • Higher Availability
  • Extended Scalability
If the Data Center serves many thousands of online users running complex transactions, it is probably beyond the capabilities of standard servers and a Mainframe should take care of it.
  • Security
A few years ago I executed a penetration test for a banking institution using a Mainframe. I noticed that you cannot breach its password mechanisms with a program downloaded from the Web. However, I found some free programs for breaking Windows password mechanisms.
  • Batch

Mainframes were built for executing massive batch workloads.
But the gap between Mainframes and standard platforms is narrowing consistently, as the standard systems evolve.

Why are IBM's Mainframes still executing mission-critical work while most other Mainframes have been phased out?  
Due to the considerations cited above, usage of most Mainframe platforms is declining, but IBM's Mainframes are still a viable option, at least for large enterprises.
According to a Web page I read, approximately 40% of IBM's profits are Mainframe profits (even if the source is wrong or unreliable and the real figure is half the cited number, it is still a significant source of revenue).
The secret of the IBM Mainframe's long market presence lies in two main factors:

1. It is very difficult and costly to migrate to another platform
In order to migrate you have to redevelop or re-host the applications, replace or re-train the IT staff, and acquire and assimilate new hardware and software products. If the migrating enterprise is a Very Large Enterprise (VLE), it will be almost impossible to justify the migration from a business point of view.
"If it is not broken do not fix it"
Many migration attempts failed, or required more resources and time than planned for executing the migration process.
IBM's Mainframe pricing models always favored large enterprises, so for VLEs the TCO after the migration could be higher than the TCO before it. The service level could also be lower than before the migration.
An alternative way to migrate from the IBM Mainframe is by adopting SOA. After transforming the enterprise to SOA, it is possible to migrate Services transparently and gradually to other platforms, but moving to SOA is a long journey, and even longer when most of your applications are old legacy Mainframe systems.
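The gradual, transparent migration that SOA enables can be sketched as a routing facade: callers invoke a service by name, and the facade decides which platform currently serves it. This is only a minimal sketch of the idea; the service and platform names below are invented for illustration and come from no specific product.

```python
# Each service is registered with its current implementation.
# "mainframe" and "distributed" labels are purely illustrative.
legacy_impl = {"get_account": lambda acct: "mainframe:" + acct}
new_impl = {}

def migrate(service, impl):
    """Move one service to the new platform; callers are unaffected."""
    new_impl[service] = impl

def call(service, *args):
    """Facade: route to the new platform if migrated, else to legacy."""
    impl = new_impl.get(service) or legacy_impl[service]
    return impl(*args)

# Before migration the call reaches the legacy implementation.
print(call("get_account", "1234"))   # mainframe:1234

# Migrate the service; the caller's code does not change.
migrate("get_account", lambda acct: "distributed:" + acct)
print(call("get_account", "1234"))   # distributed:1234
```

The point of the facade is exactly the transparency the post describes: services move one by one, and consumers never see the platform change.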

2. Platform Adaptability (Agility)
Agility looks like the wrong term for describing Mainframes; however, agility is about change in response to business changes. It is true that agility implies quick changes, and although IBM's Mainframe platform does adapt, it usually does not adapt quickly.
The following list describes some Mainframe platform adaptive changes in a nutshell:
  • Partitioning and Virtualization in order to enable multi logical Operating Systems on a single machine.
  • UNIX on Mainframe – The standalone version failed, and afterwards UNIX System Services (USS) were added to the z/OS operating system
  • Windows NT on Mainframe – built by a third party (Bristol Software), but it was not a success story
  • JEE on Mainframe – Supporting Java on Mainframes required many hardware and software changes in order to fit Java into the z/OS resource management and workload mechanisms; for example, the Garbage Collection algorithm was totally changed.
  • z/Linux – RedHat Linux and SuSe Linux on mainframe. It is possible to implement Linux instances by sharing a machine with z/OS systems or on a dedicated cheaper Mainframe.
  • Support for SAP ERP and other applications on Mainframe.
  • SOA on Mainframe as well as on other platforms

My Take
Cloud Computing is widening the cost gap between standard platforms and tools on one side and legacy systems and platforms on the other, due to elasticity, reduced complexity and reduced management requirements. However, currently it is more attractive to SMBs and relatively small enterprises, and more attractive for non-core applications such as CRM, e-mail and HR. Large enterprises are reluctant to place their business-critical applications and servers in the Cloud, due to maturity, psychological and security issues.

z-Enterprise is an alternative approach for cutting costs and reducing complexity, targeted at large enterprises. Clearly its users are locked in to the IBM Mainframe, other IBM platforms and IBM products. For example, your UNIX platform must be IBM's AIX and not a competitor's like Oracle/Sun, who also tries to lock you in to its own platform (see previous posts: Oracle-Sun Hardware: Easy to Say and Hard to do – Oracle's Exadata2, The Future of IT according to Oracle). 
But is it really an anti-Cloud alternative? For the short term it is, but for the long term the number of Cloud providers will decrease, and in addition to players like Amazon and Google, we may find IBM offering z-Enterprise Cloud services to relatively small Mainframe shops, AIX shops and even large Mainframe shops. It may resemble, in some aspects, Microsoft's Azure, which is designated for users of Microsoft's platform and tools.

Wednesday, June 16, 2010

Your private Data is Unforgettable

Borges in 1951, by Grete Stern
Picture Source: Wikimedia Commons 

On June 14th I attended the Israeli Wikipedia Academy 2010 conference in Tel-Aviv University.
The interesting conference focused on Wikipedia and Wiki technology usage in Academic context and schools.
Most of the presentations focused on Wiki or Wikipedia research, usage and projects.
The main theme repeated in most presentations was that Wiki based collaboration and participation change the rules of the game. However, in some contexts changing the rules is very useful, while in other contexts the usefulness is questionable.
Changing the rules implies new challenges to all process participants such as Users, Content Creators, Managers, Auditors etc.
I already described these challenges in previous posts: Wikipedia the Good the Bad and the Ugly and Web 2.0 for Dummies – Part 7: Wikipedia.

A Keynote Presentation on Remembering and Forgetting
In my opinion, the keynote by Prof. Viktor Mayer-Schönberger was the most interesting. Prof. Mayer-Schönberger is the Director of the Information and Innovation Policy Research Center at the National University of Singapore. His presentation, titled Remembering and Forgetting, looked beyond the Wikipedia perspective.
We already know that human beings do not remember everything: They forget.
As a student of psychology in the 1970s I studied research by George A. Miller titled: The Magical Number Seven, Plus or Minus Two.
The experiments he performed showed that human short-term memory capacity is about seven items. It is true that long-term memory capacity is less limited, but human beings are not capable of remembering everything.
There are some exceptions to that rule. The Argentinian writer and poet Jorge Luis Borges describes in one of his stories a man who remembers everything.
When that man is asked to tell about an event which happened to him, he describes every detail, so the description is virtually another occurrence of the event. For example, describing a one-hour event takes exactly one hour.
From this short reference to Borges' story it is clear that remembering everything is a barrier to human happiness and adaptation. 
According to Prof. Mayer-Schönberger, current computing systems "remember" everything. Like unlimited human memory, computing systems' unlimited remembering is not a recipe for happiness. 

He described examples of people who published personal information on Web pages and were damaged by it years later. For example, a Canadian professor who mentioned in a scientific article in 2001 that he had consumed drugs in the 1960s was permanently forbidden from entering the USA after traveling from Vancouver to the Seattle airport to pick up a friend. One of the American clerks googled him and found the article, and the professor was accused of not disclosing that information.

Another example he described was the non-computerized information included in the Dutch population registry. It included the nationality and religion of each person in this "database". During World War II the Nazis used the information to find the Dutch Jews. The result was the killing of a higher percentage of Jews in the Netherlands than in other countries. The lesson that may be learned from this example is that private data may be used for bad purposes which have nothing to do with the original reasons for collecting the data. 

The effects of not forgetting

Power – The holder of private data, e.g. Google, can use the data for its own benefit. Even if the holder does not use the data, the possibility of using it empowers the data holder.

Time – Published private data could be used many years after it was published. Once you have published it, you are not able to eliminate it, even if many years after the publishing act you would like to. 

The presentation described means for addressing these effects. None of these means fully addresses the problem. Prof. Viktor Mayer-Schönberger suggested that deletion mechanisms could be relatively effective.
Such a deletion mechanism is analogous to deleting a file or a record in a file system or database.

My Take

The unforgettable Private data issue
Prof. Mayer-Schönberger's good presentation sheds some light on the privacy issue; however, he focused on a limited part of the problem: private data published intentionally by the subject of that data.
We should think as well of other scenarios of publishing Private data and data retention such as:

1. Unintentional Publishing by the subject e.g. attaching a wrong file to an e-mail message.
2. Publishing by unauthorized access to the subject's computer
3. Publishing from a Governmental Data Source by unauthorized access or by employee's mistake.   
4. Publishing by unauthorized access from other organizations data e.g. Hospitals, Insurance Companies, Banks etc. 
5. Publishing wrong private data by the subject, e.g. publishing that he completed his studies in a well-known university.
It could be almost impossible to demonstrate that the data is wrong, especially a long time after publishing. 
6. Publishing wrong private data by others e.g. anonymously publishing that he is suffering from a disease or responsible for a project failure. 

Although the Privacy violation act could be the same, data deletion rights may differ.  
Addressing the problem
I am very skeptical about mechanisms for private data abuse prevention. 
The only way to fully address the problem is by not publishing the data.
However, this method is unrealistic in many cases. 
The trade-off is between the availability of large amounts of data on the Web, with easy accessibility for everyone, and the exposure of sensitive data. 

User awareness is a key to security on the Web, the same as it is in enterprise systems:
The amount of private data exposed should be restricted by the subject to the required minimum.
Accessibility of the minimal private data exposed should also be restricted, by methods such as encryption.

I am purposely using the word subject and not the word owner. The private data owner should be defined and agreed upon. As far as enterprise system resources, e.g. files and database tables, are concerned, the owner is usually defined. The owner is authorized to scratch (delete) the data.
Defining the Private data owner in Web context is more complicated. 
For example, if a person is defined as the owner of Private data pertaining to him, he may be authorized to delete the data or to require data deletion.
Even if the owner is defined, data deletion probably will not make the data unforgettable.
There will almost always be someone else who copied or backed up the data prior to its deletion.
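The point that deletion rarely makes data truly forgettable can be illustrated with a short sketch. The file names and content below are invented for the example; the mechanics are just the standard file operations every operating system offers: deleting the original does nothing to copies made before the deletion.

```python
import os
import shutil
import tempfile

# Invented example data: a "private" file the subject later deletes.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "private.txt")
with open(original, "w") as f:
    f.write("sensitive detail")

# Someone copies the data before the owner deletes it.
backup = os.path.join(workdir, "backup.txt")
shutil.copyfile(original, backup)

os.remove(original)  # the "deletion mechanism"

original_deleted = not os.path.exists(original)
with open(backup) as f:
    surviving_copy = f.read()

print(original_deleted)  # True: the original is gone
print(surviving_copy)    # sensitive detail: the copy survives

shutil.rmtree(workdir)   # clean up the temporary directory
```

Scale this up from one backup file to Web caches, mirrors and archives, and the limits of any deletion mechanism become clear.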

Tuesday, May 25, 2010

Multi-Tenancy Data issues

The multi-tenant model for SaaS looks like an efficient and simple model. However, it is simple only at the abstraction layer: complexities are hidden in the more technical layers. Separating users' data, and data growth, are two of the main issues. I recommend reading an interesting blog post written by Nati Shalom, describing clearly those issues and current approaches for handling the problems.
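As a rough illustration of what hides below the abstraction layer, here is a minimal sketch of one common approach to separating tenants' data: a shared schema where every row carries a tenant identifier and every query is scoped by it. The table and tenant names are invented; real multi-tenant platforms add much more (per-tenant indexing, growth management, stronger isolation guarantees).

```python
import sqlite3

# All tenants share one table; a tenant_id column keeps their data apart.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("acme", "widget"), ("acme", "gear"), ("globex", "sprocket")])

def orders_for(tenant):
    """Every data access goes through a tenant filter, never a raw query."""
    rows = db.execute(
        "SELECT item FROM orders WHERE tenant_id = ? ORDER BY rowid",
        (tenant,))
    return [item for (item,) in rows]

print(orders_for("acme"))    # ['widget', 'gear']
print(orders_for("globex"))  # ['sprocket']
```

The simplicity is deceptive: one forgotten `WHERE tenant_id = ?` clause leaks one tenant's data to another, which is exactly the kind of hidden complexity the post refers to.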

Sunday, May 16, 2010

Acquisition is not simple: SAP-Sybase acquisition agreement

On May 14th SAP AG signed a definitive agreement to acquire Sybase Inc. for approximately $5.8 billion. The offer price represents a premium of 44% over the three-month average stock price of Sybase, and a premium of approximately 56.36% over the closing price of Sybase's common stock of $41.57 on May 11. According to the announcements by the two companies, the deal, which follows a long-standing partnership, is beneficial for both companies. I partially agree: it is very good for Sybase. However, I think that acquiring Sybase looks like a mistake on SAP's part.

What motivated SAP to acquire Sybase?
SAP's previous CEO, Léo Apotheker, was replaced by Bill McDermott and Jim Hagemann Snabe due to his conservative business approach. One of the significant results of this approach was few acquisitions. SAP's main competitor, Oracle, has acquired many companies in a relatively short time. So far SAP's most significant acquisition was Business Objects, a Business Intelligence infrastructure software provider, while Oracle acquired ERP and CRM companies like PeopleSoft and Siebel, infrastructure software leader BEA (Oracle's BEA Acquisition SOA perspective Revisited again), infrastructure hardware and software vendor Sun (Vendors Survival: The Sun is red - Oracle to buy Sun First Take) and other applications and infrastructure vendors such as Hyperion and GoldenGate (The Golden Gate).

The new co-CEOs have to adopt a more dynamic, acquisition-based policy. An acquisition of a large vendor like Sybase is a clear manifestation of such a new policy.

Another major reason for the Sybase acquisition is Sybase's leading mobile products.
Mobile infrastructure and applications as a significant part of Enterprise Architecture is a major IT trend. In order to compete in that segment, SAP needs to integrate mobile solutions into its applications and infrastructure. The partnership with Sybase marked it as a good candidate, with leading mobile products.

The third reason is Sybase's presence in the Financial vertical as an Infrastructure and Analytic solutions provider.

Sybase's history
In the middle of the 1990s Sybase was one of the four leaders of the relational database market, together with Oracle, Informix and Ingres (Informix was acquired by IBM, and Ingres was acquired many years ago by CA, but even CA failed to bring it back to the short list of RDBMS leaders). Sybase competed directly with market leader Oracle.
Usually Oracle advocated a central database model while Sybase favored a distributed model.
One byproduct of the distributed model was a leading Middleware solution. However, Sybase failed to deliver in the J2EE Application Server market in 2000-2001 and is no longer a Middleware market leader.
As far as the DBMS market is concerned, Sybase partnered with a giant named Microsoft, which had not yet developed an RDBMS solution of its own. Sybase's SQL RDBMS was branded by Microsoft as Microsoft SQL Server. When the partnership ended, Microsoft was a legitimate player in the enterprise RDBMS market. A few years later there were only three leaders in the RDBMS market: Oracle, IBM and Microsoft. The fourth significant player is the Open Source DBMS MySQL. No one considers Sybase a viable option for a new enterprise DBMS; however, its installed base continues to use it.

In order to compete with Oracle's development suite (RDBMS + IDE), Sybase acquired PowerSoft. PowerSoft's flagship product was the fat client/server market leader PowerBuilder.
PowerBuilder was an excellent development tool for small environments (e.g. 10 to 50 users) but limited by design for large enterprise applications (e.g. thousands of users).
Again a Microsoft product became a competitor: Visual Basic was designed for the same market and soon became the leader of the workgroup client/server application development market, despite the technical superiority of PowerBuilder.
After the emergence of the JEE and .Net development environments, PowerBuilder is no longer a significant player in the application development market.
PowerSoft had acquired an excellent small-footprint database named Watcom SQL prior to being acquired by Sybase.

With no ability to compete with giants like Oracle, IBM and Microsoft in the Enterprise DBMS market, vendors like Sybase and Progress looked for niches.
Watcom SQL's name was changed to Sybase SQL Anywhere and it became Sybase's Database for Mobile and Embedded environments.

Why is acquiring Sybase not a great idea?
Buying an applications vendor is usually better than buying an infrastructure vendor.
A company's profits from selling applications are usually higher than its profits from selling infrastructure solutions. That is why Oracle acquired PeopleSoft and Siebel before it acquired infrastructure vendors like BEA.
We should remember that SAP is an applications vendor and not an infrastructure vendor, so assimilating an infrastructure vendor is quite a difficult task for it.
It is true that SAP's portfolio includes strategic infrastructure products, mainly the NetWeaver Middleware, and that the company coined the term Applistructure, but the focus on SAP's infrastructure was a few years ago, in Shai Agassi's days.

The price is too high
Is Sybase worth about 45% or 50% more than its stock price?
My answer to this question is: definitely not. It is not a brand new, fast-growing company, but rather a veteran company. 

Will the Sybase acquisition change SAP's infrastructure neutrality?
SAP applications' advantage over Oracle applications is platform neutrality. It is possible to run SAP's ERP under any of the three leading databases: Oracle, DB2 and SQL Server. It can be executed under UNIX, Windows, Linux and even under IBM's Mainframe operating systems.  
The interfaces between SAP ERP and Microsoft's Office applications were built in cooperation between SAP and Microsoft. 
Will SAP continue its platform neutrality, or prefer Sybase's database or other platform components over external solutions? The latter is probably not a good idea, but SAP may add Sybase's platform as an additional option for ERP deployment.

Will SAP kill or keep Sybase's non-Mobile product portfolio?
If Sybase's platform will not be an ERP deployment option, then the question above is valid. Killing products implies losing maintenance revenues and firing Sybase employees. The other alternative is not attractive either: maintaining gradually declining software products.

Which type of company would I acquire if I were SAP's CEO?
The acquisition target should be a smaller applications company, not a large infrastructure company. A successful SaaS applications company could be an adequate target: it is relatively small, it is a player in a growing market, and it may provide a better opportunity for entering the SMB SaaS applications market than SAP Business ByDesign.


Sunday, May 9, 2010

Integrating SaaS: IBM's Cast Iron Acquisition First Take

Writing White Papers and Blog posts about Cloud Computing and SaaS is a common phenomenon. I estimate that approximately 30% of the e-mails (not including Spam messages) I receive are about Cloud Computing and/or SaaS.
SaaS is not only a buzzword but a real solution which is rapidly going mainstream. (Read my previous posts: Even SAP is offering SaaS ERP, Future Applications: SaaS or Traditional?, SaaS is Going Mainstream.)

IBM's acquisition of Cast Iron is another indication of SaaS's importance.
IBM is a leading integration vendor. IBM's integration solutions are branded WebSphere, e.g. WebSphere Application Server, WebSphere Message Broker, WebSphere Process Server, WebSphere MQ, etc.
IBM's integration solutions address many integration needs and patterns; however, it seems that none of them properly addresses integration of SaaS applications with Data Center applications, or integration of a SaaS service in one Cloud with a SaaS service in another Cloud.

On the other hand, Cast Iron's OmniConnect middleware was built for integrating SaaS and Data Center applications, as well as for integrating SaaS applications located in different Clouds. It includes a rich set of APIs for many SaaS applications, e.g. NetSuite, Google, etc.
Cast Iron's partners include Microsoft with its Azure platform, Amazon's Elastic Computing Cloud (EC2) and Google with its Google Apps.

OmniConnect supports three types of integration solutions: UI Mashups, Process Integration and Data Migration from Legacy applications to SaaS.
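To make the Process/Data Integration style concrete, here is a minimal sketch of syncing SaaS records into a Data Center database. This is my own illustration, not Cast Iron's actual API: the `fetch_saas_contacts` stub stands in for a hypothetical SaaS REST call, and the record shape and table name are invented. The key idea is the upsert, which lets repeated sync runs converge instead of duplicating rows.

```python
import sqlite3

def fetch_saas_contacts():
    # Stub standing in for a SaaS API call (e.g. an HTTP GET against a
    # hypothetical /contacts endpoint); a real connector would also page
    # through results and handle authentication.
    return [
        {"id": 1, "name": "Alice", "email": "alice@example.com"},
        {"id": 2, "name": "Bob", "email": "bob@example.com"},
    ]

def sync_to_data_center(db, records):
    # Upsert each SaaS record into the on-premise table, so running the
    # sync twice leaves the table unchanged rather than duplicated.
    db.execute("CREATE TABLE IF NOT EXISTS contacts "
               "(id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    for r in records:
        db.execute("INSERT INTO contacts (id, name, email) VALUES (?, ?, ?) "
                   "ON CONFLICT(id) DO UPDATE SET name=excluded.name, "
                   "email=excluded.email",
                   (r["id"], r["name"], r["email"]))
    db.commit()

db = sqlite3.connect(":memory:")
sync_to_data_center(db, fetch_saas_contacts())
print(db.execute("SELECT COUNT(*) FROM contacts").fetchone()[0])  # → 2
```

A production integration adds exactly what this sketch omits: authentication, paging, retries, and field mapping between the SaaS schema and the local one.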
IBM acquired Cast Iron in order to preserve its integration market leadership. IBM's challenge is, as usual, integrating different acquired and sometimes overlapping integration products into a coherent solution, beyond branding them as WebSphere (WebSphere OmniConnect in this case?) and clearly defining the right product for each integration style. Even if it succeeds in addressing this challenge, I doubt whether its sales people will offer the right product to each customer. This conclusion is applicable to other mega-vendors as well, which is why I always recommend using an independent consultant's services (read my post Choosing a SOA Consultant). BPM is a good example of IBM's challenge: OmniConnect is capable of addressing some aspects of this topic, and currently IBM's BPM portfolio includes WebSphere Process Server, developed in house, as well as the acquired FileNet and Lombardi products.

Thursday, April 22, 2010

Even SAP is offering SaaS ERP

A few years ago I participated in a strategic CRM consulting engagement. The customer was using Siebel and SAP ERP, and was considering other alternatives for CRM. My role in the consulting team was to analyze the CRM market and its trends. CRM as SaaS, or On Demand, was one of the major trends I mentioned. I wrote about the transformation of the CRM market.

I wrote on Siebel On Demand as well as on other SaaS-style CRM solutions. SAP was an exception: it was the only vendor with no On Demand CRM strategy or solution.

According to SAP, CRM should be integrated with the ERP components and database; therefore it is not possible to properly implement CRM as a service together with a Data Center based ERP.
I explained to the client that sooner or later we would find SaaS CRM solutions by SAP.

However, about two days before we completed our paper, Oracle announced the Siebel acquisition.
The most important questions were: Is the acquisition due to the installed base or to product features? Will Oracle stop developing Siebel? Which product will be Oracle's strategic CRM product: Oracle CRM, Siebel, or PeopleSoft CRM?

My First Take was that Siebel would be the strategic CRM product.
A few months ago, I published a post titled Future Applications: SaaS or Traditional?. The topic was the difference between SaaS ERP, e.g. Workday or NetSuite, and traditional ERP, e.g. SAP and Oracle.
I recalled these activities while reading Don Fornes's post titled SAP’s SME Solutions – A Guide to the Product Portfolio.

Don's article described four types of SAP's ERP solutions related to enterprise size.
The most interesting product is SAP Business ByDesign. Yes, it is a SaaS-style ERP suite.
Finally, SAP's product portfolio includes not only CRM as a service but also ERP as a service. Don's observation of limited functionality in comparison to SAP's flagship SAP Business Suite is in accord with my conclusions as described in my post.
My other conclusion is that SaaS ERP products' agility is their advantage over traditional ERP products. It remains to be seen how agile SAP's Business ByDesign product is.

Thursday, April 15, 2010

STKI Summit 2010: The effects of Infrastructure Complexity

Complexity was one of the issues presented by Dr. Jimmy Schwarzkopf's STKI summit Keynote presentation.
My Take in a previous post glanced at Complexity, and Complexity was also the topic of a comment on that post.

The presentation by Pini Cohen, EVP & Senior Analyst for Architecture and Infrastructure, at the STKI Summit included many slides on the same topic.
The key point was that in Israel, 2009 was a year with record downtime due to Complexity.
Storage was a major cause of not-good-enough service, due to understaffing.
According to Pini's presentation the complexity is driven by three factors:
1.    The total number of entities
2.    Their degree of heterogeneity
3.    Their degree of interconnectedness

The solutions to the Complexity problem according to Pini's presentation are:
1.    Industry in a Box is Consolidation which reduces the total number of entities and could also reduce the degree of heterogeneity.
2.    Automation e.g. by using Business Ready Infrastructure and CMDB driven processes.
3.    Cloud Computing by reducing the total number of entities and by providing Elasticity, Horizontal Scalability, Resources sharing etc.
4.    Other methods for reducing Complexity especially Service Oriented Infrastructure (SOI).

My Take
  •  Complexity growth is driven by Heterogeneity, rapidly growing functionality, new non-standardized hardware and software types, and even by too many competing standards.
  •  The immaturity of the IT industry also increases Complexity, as the Spiral Model depicted in the figure above illustrates. Instead of continual paradigmatic and technological evolution, IT evolves by extreme transformations and disillusionments. The result of this evolution style is a collection of systems built upon a variety of technologies and concepts, and additional significant Complexity.
  •  It is difficult to tell what increases Complexity and what reduces it. For example, Pini Cohen identified Cloud Computing as a way to reduce Complexity. On the other hand, according to Einat Shimoni's comment on my previous post, Cloud Computing increases Complexity due to more complex integration requirements between SaaS and Data Center applications.
My conclusion is that Cloud Computing is a trade-off which reduces some Complexities and increases others. The overall effect is enterprise dependent: for some enterprises an external Cloud may significantly reduce Complexity, while for others it will increase it.

  •  It should also be remembered that Complexity depends upon perspective. For an enterprise's infrastructure manager, external Platform as a Service (PaaS) may reduce Complexity; he does not care how complex it is for the infrastructure managers of Amazon or other Cloud services providers. Hiding Complexity means removing it from those who are less capable of handling it by increasing it for those who are more capable of coping with it. For example, SOA hides technological aspects from business users; however, the additional Service Layer increases Complexity for the Enterprise Architect. Infrastructure vendors decrease the Complexity of software products for users by packaging several products as a single product. The packaging may increase Complexity for those products' developers.
  •  Some Complexities are even too complex for the most capable professionals. The solution for that problem is Automation. Nothing changes in the Complexity level, but an automated system is more capable than any human of handling it. That is why Pini Cohen's presentation included slides referring to Automation in the context of resolving Complexity issues. You can also read my previous post titled The End of Load Balancing? The use of Application Delivery Controllers (ADCs) instead of traditional Load Balancers is an example of increasing Automation in order to handle Complexity.
