Sunday, March 29, 2026

Vendors Survival: Will OpenAI Survive until 2036?



The Vendors Survival posts are about the long-term survival probability of leading IT vendors and the Risks threatening their long-term existence.

 

I wrote posts about IBM, HP, Apple, Facebook, etc.

The most relevant posts related to OpenAI are the posts about Google and Microsoft.


It is possible to compare the AI Revolution to two previous revolutions.


The first is the Industrial Revolution.

The second is the Internet.



Bubble



Comparing the current AI market to the Internet market of the beginning of this century may be useful.


I remember the dot-com bubble.

A seed Venture Capital fund rejected my proposal to extend the scalability and robustness of Microsoft's Windows operating system.

The reason for rejecting it was "It is not a pure Internet solution", i.e. the solution was applicable not only to Web Servers but also to internal Data Center systems.


Currently AI is in the Hype stage of Gartner's Hype Cycle.

Probably, the current AI bubble is not very different from the dot-com bubble.

Organizations spend a lot of money on AI initiatives and products, but only a few of them deploy AI solutions whose value justifies the expenses.



The sixth wave of AI



The book "The Shortest  History of AI - The Six Essential Ideas that Animate it" was written by Toby Walsh

According to the book, the previous five waves of AI fail to accomplish their promises. 

Huge sum of money was spend without enough value provided. 

For example, the wave of Expert Systems started in the eighties of the previous century. 

A lot of money was spend and a temporary  success was replaced by disappointment after discovering inherent limitations. 

Leading  expert systems companies such as Teknowledge, Intellicorp and Carnegie group are not traded anymore. 





Will OpenAI Survive until 2036?



Currently, there are many severe threats to OpenAI.

The probability that it will survive until 2036 is not high.

In the following paragraphs I dwell upon some of the major threats.



Threat 1: The Sixth Wave will fail


Five AI waves did not fulfil their promise. Nobody can be sure that the current wave's fate will be different.

Most of the leading vendors of previous waves are no longer significant vendors.



Threat 2: Currently AI is in the Hype stage of Gartner's Hype Cycle

 

Gartner Hype Cycle. Source: English Wikipedia



The left side of the graph is the Hype phase.

Leading vendors during the Hype phase are not necessarily the leaders after the disillusionment phase.

Netscape was the dominant Web browser in the mid-1990s, with 90% of the market, but lost to Microsoft's Internet Explorer. In 2006 its market share was less than 1%.



Threat 3: Lack of Revenues


ChatGPT is a generative artificial intelligence chatbot.

Its dominance in the retail (consumer) market does not provide enough revenues.

The expenses are huge: Data Centers, electricity, expensive Nvidia graphics processors, etc.


The revenues are far from justifying the expenses.


The organizational market is potentially a significantly larger revenue source.

Unfortunately for OpenAI, Anthropic's Claude is leading that market.


OpenAI's attempts to create additional revenue sources include the following:


1. Advertisement

The results of deploying advertisements would be gathering more personal information and using it to adapt advertisements to user profiles.

Another result would be a less natural interaction between the AI software and the user.

A third result would be increased Security threats.


2. Content for Adults

Content for Adults means paid erotic conversations between a human being and a virtual entity created by the chatbot.


I doubt that these sources would provide sufficient revenues.



Threat 4: Competition



Google

Google is a large and profitable vendor which earns a lot of money from other business lines.
Google Gemini 3.5 is a good product.

Google builds its own AI processors (TPUs) for internal use.

Google's processors' capabilities are comparable to Nvidia's GPUs' capabilities.

Using its own processors instead of Nvidia's GPUs could reduce the expenses significantly.

Google is aiming at integrating its AI solutions with its popular platforms and other products.


Anthropic

Anthropic is a leader in the organizational market and the leader in code generation.



Nvidia

Nvidia holds a near-monopoly on AI GPUs.

GPU prices are high, and the data centers where generative AI products are deployed consume a large number of GPUs.

Nvidia's revenues are enormous.


Jensen Huang, Nvidia's CEO, said the company intends to provide a full generative AI solution including GPUs and software.

Currently it is experimenting with its generative AI product based on Open Claw.


Nvidia could be a very strong competitor. 

It is possible to compare OpenAI to the Netscape of the early browser days, and Nvidia or Google to the Microsoft of that period.



Threat 5: Model Threats


The first phase of neural networks was based on a model called the Perceptron, developed by Frank Rosenblatt.


The first neural networks phase failed.

Marvin Minsky and Seymour Papert proved that Rosenblatt's model includes an inherent limitation and is therefore an inadequate AI model: a single-layer Perceptron cannot learn functions, such as XOR, that are not linearly separable.
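
A minimal sketch of that limitation (my own illustration, not from Walsh's book): a Perceptron trained with the classic learning rule masters AND, which is linearly separable, but can never classify all four XOR cases correctly.

# Single-layer Perceptron: learns AND, provably cannot learn XOR.

def train_perceptron(samples, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in (("AND", AND), ("XOR", XOR)):
    w, b = train_perceptron(data)
    correct = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
                  for x, t in data)
    print(name, correct, "of 4 correct")  # AND: 4 of 4; XOR: at most 3 of 4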


Unfortunately, the current generative AI model also has a serious inherent limitation.

This limitation causes hallucinations, i.e. answers unrelated to the question the human being asked.


The chatbot does not understand what the human being is asking. Instead of understanding, it makes probabilistic decisions.

Misunderstandings occur, and the result is hallucinations.

The inherent limitation is similar to the inherent limitation of a spelling correction application.

The spelling correction application is simple.

Gen AI chatbots are more sophisticated, but the principle is similar.
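
A minimal sketch of this principle (my illustration, with toy data): the chatbot-style generator picks the statistically likely continuation, exactly as a spelling corrector picks the statistically likely word, and neither checks the result against reality.

import random

# Toy "language model": continuation frequencies observed in training text.
next_word_counts = {
    ("capital", "of"): {"France": 60, "Assyria": 5, "Atlantis": 1},
}

def sample_next(w1, w2):
    counts = next_word_counts[(w1, w2)]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Usually "France", but occasionally a fluent-sounding wrong answer:
# the decision is frequency-based, not understanding-based.
print("The capital of ...", sample_next("capital", "of"))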

  

The current generative AI tools are based on training the product on large amounts of data in order to reduce the probability of hallucinations.

 

A new version of ChatGPT is based on more data than the previous version.

The first versions used prepared training data.

Unfortunately, current versions use Internet data for training.

The data quality is low.

There are people and organizations deliberately injecting contaminated data, e.g. the Russian government and the Iranian regime.
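
A minimal sketch of one data-quality defense (my hypothetical illustration; the domain names are placeholders, and real training pipelines are far more elaborate): filtering documents by source trust before they enter the training set.

from dataclasses import dataclass

@dataclass
class Document:
    url: str
    text: str

LOW_TRUST_DOMAINS = {"known-propaganda.example", "content-farm.example"}

def domain_of(url: str) -> str:
    return url.split("/")[2]

def filter_training_docs(docs):
    # Keep only documents whose source is not on the low-trust list.
    return [d for d in docs if domain_of(d.url) not in LOW_TRUST_DOMAINS]

docs = [
    Document("https://encyclopedia.example/paris", "Paris is the capital of France."),
    Document("https://known-propaganda.example/x", "Deliberately contaminated claim."),
]
print([d.url for d in filter_training_docs(docs)])  # only the first survives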

 

There are already startups trying to use other models, which are not based on the current training-data paradigm.


Among them are startups created by Yann LeCun and Mira Murati.


If a better model becomes the mainstream model, then OpenAI is at risk of non-survival, just like some of the leaders of previous AI waves.



The Bottom Line


The probability that OpenAI will not be a significant player in AI in 2036 is high.


Any 10-year prediction could be wrong.

The longer the prediction horizon, the smaller the probability of a correct prediction.


The probability that OpenAI will not be a significant player, or will not even survive, in 2036 is higher than the probability that 50% of workers will lose their jobs by 2036.


Monday, March 2, 2026

Anthropic code: No Revenues Plan as Immaturity indicator



In a previous post, titled Anthropic code or Hypethropic code?, I wrote that currently Anthropic code is not a Real World solution for Legacy Systems modernization, Security, or SaaS applications.


If the modernization tool for transforming legacy COBOL systems to modern languages such as Java is capable of accomplishing this task in weeks or months, Anthropic should sell it to users at a very high price, and the users would gladly pay that price.


The same is true about the Security code.


In both cases the code is not the main issue.

These issues will be discussed in the posts following this one.


Tom Smith's article on DevOps.com describes some of the issues.


The most important sentence in his article is "The market is in “sell first, ask questions later” mode on AI disruption". 





Saturday, February 28, 2026

Anthropic code or Hypethropic code?




In 1978 I participated in an international computer conference in Jerusalem.

I was a young systems programmer and a good chess player playing in tournaments.


A main attraction was an Artificial Intelligence related event.

It was a chess tournament between the best programs in the world.


The leading programs of that time played chess better than ChatGPT 5 and other general-purpose AI chatbots do.


After the tournament, people played against the chess programs.

I easily won a game against one program.

I do not remember whether I defeated the program which won the tournament or the program which took second place.


In 1997 IBM's Deep Blue defeated world champion Garry Kasparov.

Today no human being is able to win against leading dedicated Chess programs. 



Anthropic code



Anthropic code solutions are the reason for the sharp decline of the stocks of other companies providing SaaS software, Security software, and COBOL Legacy systems services.



My Take



Currently, Anthropic code solutions resemble the chess programs of 1978.

It will take years until they reach the level of Deep Blue beating Kasparov.

It will take a long time until they enable Real World solutions, if they ever do.

AI is currently in the Hype stage of the Gartner Hype Cycle.

Anthropic code is not an exception.


The Chess programs of 1978 were valuable.

Current Anthropic code is also valuable, if the user is aware of its limitations and uses it properly.




Next posts


I have a lot of hands-on experience in Security, COBOL, and Legacy Systems Modernization.

I also have a lot of experience in consulting related to SaaS solutions.

I will write three posts relating to the limited capabilities of Claude Code in these areas.  

 



Tuesday, July 8, 2025

Is it Viable?


 

As an independent IT consultant I often used analyst groups' services.

I read Research Notes written by Gartner Inc., Forrester, IDC, Meta Group, Giga Group, etc., and I asked them questions on behalf of my clients.

In 1998 my customer and I, acting as the customer's CTO, tried to select a proper enterprise IDE.

The headline describes exactly what I asked Gartner Inc. about one of the products on our list, in order to decide whether it was a candidate for our short list.

As usual, I included a background description in my query.

I received a one-word written answer: Yes.

I thought that the answer was not good enough.

The analyst should have taken into consideration the question:

Why did I ask the question about one software product and not ask the same question about any of the other ten products?


The reason for asking was information about the horrible financial situation of the company which developed this software product. 

Selecting a strategic and expensive software product which may disappear in two or three years is not a good idea.


Second Round of sending a query


I wrote to Gartner that the answer was not good enough. 

This time I explained why I asked the question about this specific product.

This time I received a satisfactory answer, and I decided not to include the product in the short list.


Who is to blame for the first-round answer?


Frankly, I was to blame, not the analyst.

I should have added to the question the reason for asking it only about this specific software product.


Lessons Learned about AI in 2025


The reason for writing this post is closely related to the way many dummies use Artificial Intelligence.

They ask the wrong questions and expect the AI software to give them good answers.


This is one of the issues I raise in my lecture on AI Risks.


Unlike me, they do not drill down with additional questions.

Unlike me, they accept any answer as the ultimate answer.


I doubt that AI prompts are capable of solving the problem, as long as the user does not understand enough to formalize what he is actually asking.
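
A minimal sketch of the lesson (my hypothetical illustration; the prompts are invented): the same question, with and without the background that makes a useful answer possible.

vague_prompt = "Is product X viable?"

contextual_prompt = (
    "Is product X viable as a strategic enterprise IDE?\n"
    "Background: we will depend on it for at least five years; we ask about "
    "this product specifically because its vendor is reported to be in "
    "financial trouble; products Y and Z are also on our list.\n"
    "Please address vendor survival risk, not only technical fit."
)

# With any chatbot, only the second prompt supplies what the Gartner analyst
# was missing in the first round: the reason the question was asked.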

 



Monday, June 14, 2021

Public Cloud Core Banking: Hype or Reality? - Revisited


 

More than four years ago I was asked whether Public Cloud Core Banking is Hype or a Short Term Reality.

If you read that post, you would probably find that the answer was probably Hype and not Short Term Reality.

This post revisits the same issue.


Size Matters


The issue is a major issue for Large Banks. Those banks use IBM Mainframes to run their Core Banking systems.

Two decades ago, smaller banks used UNIX-based Core Banking packages.
Their availability, scalability, and security were not as high as those of the Mainframe-based Core Banking systems, but they were probably good enough.

My very limited Study


I looked for real Case Studies of deployments of Core Banking systems in the Public Cloud.
I looked for articles and Research Notes by Googling.

I did not find much: no real Case Studies of Large Banks transforming their Core Banking to Cloud-based systems.

I refer to the following articles:



McKinsey: Core Banking Systems Strategy for Banks


On the one hand, the reliability and availability of the Mainframe Legacy Core Banking systems are high and their performance is good. Performance is critical for a large number of concurrent transactions.

On the other hand, the Financial Services market is changing.
Banks need to adapt to the new world of Digital Systems and APIs for Fintech companies and non-financial partners.
The Legacy architecture is not suitable for this new world.
The long-term future of Core Banking architecture should be a Cloud-based microservices architecture.

Deloitte: Cloud-based Core Banking: Is it Possible?

 

The question in the article's title is a kind of evidence that Core Banking in the Public Cloud is not a viable option for the Short Term. The article is about Long Term Cloud migration benefits.

The article does not refer to Real Large Scale Core Banking Case Studies.


Why is immediate Cloud Migration not a viable option?


1. Banks are Risk Averse

If it is not broken, do not fix it. The systems are doing what they were built for.
Migration projects are long and very expensive.

2. Functionality Risks

Building new Core Banking systems carries a risk of incompatible functionality.
The Legacy systems were built ten or twenty years ago. They were changed due to business and regulatory requirements. Their documentation probably has not been fully updated.

The new systems or Core Banking packages are written in new programming languages by people who are not familiar with the old programming languages' architecture, capabilities, and syntax.

3. Losing systems maintenance skills

The older people maintaining the systems, which are usually written in the COBOL programming language, may retire. Most of them are not capable of learning modern programming languages and methodologies.
Their knowledge of the systems and the business logic may not be available.

4. Performance Risks

Would the new systems be capable of handling a large concurrent transaction workload?
Would the new systems be capable of handling the workload while providing reasonable and stable response times?

5. Security Risks

A few weeks ago I completed a Lead Cloud Security Manager course.

The course is a new course by PECB.

I learned that Cloud Security Management is not as simple as Security Management within an Enterprise's boundaries.

The following highlights clarify the complexities:

1. Threat management is divided between at least two organizations: the Cloud Services Provider and the Cloud Service Customer.

2. Are the Customer's systems protected from other customers' access or dependencies?

It should be remembered that Cloud Services Customers share infrastructure, and in multitenant SaaS services they also share software and a database (see the multitenancy sketch after the risks list below).

3. The contract should define the Security duties and responsibilities; however, the Customer should be aware of the Cloud Provider's Security policies, procedures, controls, and methods.

4. Ongoing communication, updates, and reporting between the Cloud Provider and the Cloud Customer should be executed properly.

5. Incident analysis is more complex because some of the aspects and data are not accessible to the customer.

Psychologically, if an enterprise controls its own Security, its managers tend to think that it is better than Security controlled by a Cloud Provider.

6. Availability Risks

The availability of IBM Mainframe systems is very high.
Would the new Cloud systems' availability remain as high as it was?
Technically the answer may be positive, but there is a risk that it would not be as high as in Mainframe environments, due to the following reasons:

A. The infrastructure is more complex in Cloud Computing.

B. Outages of Public Clouds.
Outages of the Public Clouds of all leading Cloud vendors have already happened.

C. Unavailability due to Application Software.
The availability of hardware, infrastructure software, and communication hardware and software is higher than it was a few decades ago; therefore, Application Software problems are a significant source of unavailability.
For example, a computation formula error could require stopping a Core Banking system until the database is restored and the transactions are executed again using the correct formula instead of the wrong one.

7. Regulatory Risks

International, as well as regional and country-level, regulation includes Security-related risks.
Addressing the regulatory requirements should be verified prior to moving Core Banking to a Public Cloud.
However, it is not only Security that should be addressed properly; it is also Privacy.

Personally Identifiable Information (PII) should be protected.
ISO/IEC 27018, a standard for protecting PII in public clouds, should be addressed in addition to Banking-specific PII requirements.
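
A minimal sketch of the multitenancy point in item 5 above (my hypothetical illustration): in a multitenant SaaS service, tenants share one database, so isolation depends on every query being scoped by tenant; one missing filter and customers can see each other's data.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (tenant_id TEXT, account TEXT, balance REAL)")
db.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
               [("bank_a", "A-1", 100.0), ("bank_b", "B-1", 200.0)])

def balances(tenant_id):
    # The WHERE clause is the whole isolation mechanism here.
    return list(db.execute(
        "SELECT account, balance FROM accounts WHERE tenant_id = ?",
        (tenant_id,)))

print(balances("bank_a"))  # only bank_a's accounts; bank_b's stay invisible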


Reasons for Long Term Core Banking in the Public Cloud


1. Digital Infrastructure
Digital systems' interfaces are API-based. Flexibility and agility are a must.
The Mainframe-based systems are less adequate for those kinds of interfaces (see the API facade sketch after this list).
 
2.  Enhanced Competition
Banks should adapt to a new Financial Services Market.
Main New Competitors are:

A. Digital Wallets
Including Digital Wallets of Giant IT vendors such as Apple and Google

B. Fintech Services
Fintech services are cheaper than Banking services. Fintech vendors use Web sites or Mobile Applications. They do not have the overhead of a branch infrastructure, employees working in branches, and many other employees.
The Digital Loans market is a good example.

C. P2P Services
P2P services are also cheaper than Banking services.

3. Enhanced Digital Cooperation
In order to survive, Banks should change their strategy: their systems should be a Services Hub.
The Services should include services of non-financial partners. The partners' systems should also include quick access to the Bank's services.

Probably, comparison and access to other Banks' Services will be included in the Hub as well.
  

4. Will IBM Mainframe Survive in the Long term?

IBM plans to split its business into two companies at the end of 2021.
It is likely that the Mainframe will be included in the "New CO", i.e. with a focus on revenues and limited R&D, if any.
In the Long Term the Mainframe will not be adapted to modern architectures and technologies.

5. Software Maintenance
The Mainframe-based Core Banking systems were developed decades ago. Maintenance is becoming more and more difficult.
Maintenance of these silo systems is more difficult anyway.
Many of the people who developed and maintained the systems have already retired. Others will retire soon. Y-generation and Z-generation developers prefer working in modern environments and do not have the skills required for maintenance of these systems, such as the COBOL programming language, the CICS OLTP monitor, and even the DB2 database.

The results are high maintenance costs and a backlog.
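
A minimal sketch of the API-based interfaces mentioned in item 1 above (my hypothetical illustration; all names are placeholders): a thin REST-style facade exposing a legacy Core Banking transaction as a modern API, so Fintech partners integrate over HTTP/JSON while the Mainframe, or its cloud replacement, stays behind the facade.

import json

def legacy_get_balance(account_id: str) -> float:
    # Placeholder for a call into the legacy system, e.g. a CICS
    # transaction reached through a gateway; details vary per bank.
    return {"12345": 1042.50}.get(account_id, 0.0)

def api_get_balance(request_path: str) -> str:
    # Handle GET /accounts/<id>/balance and return JSON.
    account_id = request_path.split("/")[2]
    return json.dumps({"accountId": account_id,
                       "balance": legacy_get_balance(account_id)})

print(api_get_balance("/accounts/12345/balance"))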

Conclusions


Currently, Public Cloud based Core Banking for large banks is far from being Reality.
However, it will be a Long Term Reality.

Large Banks should gradually prepare for the transition to the Cloud by Legacy Systems Modernization.
The key is integrated Services instead of large silo systems.

An Agile Architecture would enable gradual migration from the IBM Mainframe to a Cloud. It could be a Private Cloud, a Hybrid Cloud, or a Public Cloud.

Migration from any Cloud Services to a Public Cloud is a lot easier, and the migration period is shorter.
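
A minimal sketch of such a gradual migration (my hypothetical illustration of the well-known "strangler fig" pattern, not a specific bank's design): a router sends each business service to whichever environment currently owns it, so services move from the Mainframe to the Cloud one at a time instead of in one big bang.

# Which environment currently serves each business service;
# the table is updated as modernization proceeds.
SERVICE_LOCATION = {
    "balance_inquiry": "cloud",   # already migrated
    "payments": "mainframe",      # not yet migrated
}

def call_mainframe(service, payload):
    return f"[mainframe] {service} {payload}"  # placeholder back end

def call_cloud(service, payload):
    return f"[cloud] {service} {payload}"      # placeholder back end

def route(service, payload):
    backend = SERVICE_LOCATION.get(service, "mainframe")  # default: legacy
    handler = call_cloud if backend == "cloud" else call_mainframe
    return handler(service, payload)

print(route("balance_inquiry", {"account": "12345"}))
print(route("payments", {"amount": 10}))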



Thursday, November 19, 2020

The Pandemic and the Security Paradox



Ten years ago, I wrote a post titled Cloud Computing and the Security Paradox. In this ancient post I argued that the claim of insufficient Security of Public Cloud systems is based on a perception that what is not controlled by the Enterprise within its Data Center is less secure. However, Public Cloud Security was better than that perception assumed.

Sometimes it was better than the Security of data and systems located within the Enterprise's Data Center.

COVID-19 magnified the Security risks, and the Public Clouds are more secure than many private systems.


The enhanced Threat landscape

 

The COVID-19 Pandemic restrictions dramatically changed the way people collaborate and interact. The Security measures, procedures, policies, and tools should be adapted to the new interaction style.

Adaptation is a continuous process; therefore the vulnerability is higher than it was before COVID-19.

The main reasons for the higher vulnerability are summarized in the following points:


1. Work from home

Client Security and Home Network Security are not as robust as Enterprise Security.

Some employees had occasionally worked from home before, but the magnitude is different: many employees are now working only remotely, away from their company's offices.

  

2. Characteristics of Remote Workers

A higher percentage of the Pandemic remote workers lack technology expertise.
The probability that they also lack Security awareness is high. Lack of awareness could be the weakest link in the chain.

3. Extended usage of e-commerce

Due to COVID-19 regulations restricting the activities of physical shops in many countries, and due to fear of being infected by the Coronavirus, more transactions are executed via online services.
More online commerce implies more Security attacks.
Some of the novice e-commerce users lack Security skills and awareness and are potential attack and fraud victims.


4. Extended usage of Remote Services

Due to the regulations and attitudes described in the previous section, and due to restrictions on service providers' face-to-face interactions, more services are consumed via the Web and Smartphone channels.
More transactions and more users imply more Security threats.
 

5. Meeting Solutions 

Meeting solutions' Security robustness is questionable.
Non-technological users, such as people using meeting solutions to conduct virtual meetings with their grandchildren, may not use, or may use improperly, the existing Security features of the meeting solutions.

New Online Services Providers' Limitations


The traditional Public Cloud vendors had plenty of time to plan their systems. The planning included Security and Business Continuity.
They implemented their solutions and improved them gradually, based on the experience of many users.

Security is essential for their business growth. Data breaches or other Security problems could harm their reputation, and customers might move to competitors' services.

Therefore, the Security of Public Clouds is at least reasonable.


New Online Services Providers were forced to transform their model immediately due to COVID-19 restrictions.

They could not afford to postpone the transformation until they had planned and tested their systems or services properly. They were not able to postpone launching their services until they had ensured bulletproof Security.

The result is less secure services and systems outside the Public Cloud.


The Security Paradox is no longer a Paradox. It is a new Reality.
