The article below was originally written by Ryan Derousseau and published February 14, 2018, by Bank Director.

A small Kansas town with a population of less than 700 isn’t typically where you would expect to find a bank on the forefront of the future. But since Suresh Ramamurthi and his wife, Suchitra Padmanabhan, bought CBW Bank in 2009, they have revamped the 126-year-old institution, making it a digital-first place for conducting business and attracting customers from all over the world, while assets jumped 30 percent to $29 million in the past two years. And they did this by completely redesigning the back-end core system used to conduct transactions and process loans.

CBW tackled one of the largest technical issues troubling banks across the country: it fixed its outdated core systems, which were built nearly 20 years prior, when no one had even heard the term smartphone. It’s an issue many banks must weigh, as efforts to fix these systems, which can often be over 30 years old, run into hurdles around expense, risk of downtime and a lack of understanding from decision makers. While fewer banks can afford to ignore these upgrades, vendors have increased cloud-based offerings—software and data are provided from another company’s servers and accessed at will by the bank—which have started to appeal to smaller firms.

CBW is unique, though, because it innovates through another company, Yantra Financial Technologies, which Ramamurthi created. This limits its reliance on third parties, allowing it to develop tools like a payment security checkpoint that’s linked to the customer’s car via the cloud. This ensures that when a customer fills up at a gas station using a debit card, the correct vehicle identification number is in the same location.

Peers of CBW have now begun to test ways to upgrade the core systems through the use of cloud vendors that provide them with the back-end capabilities to upgrade services and offerings online.

The core banking system is the term used to describe all the software that supports the bank’s most vital services, like servicing loans, processing customer transactions or signing up new customers. Over the decades, these systems have grown in complexity, as many banks chose to update them by patching outdated aspects. But as more banking moved to the digital and mobile spheres, many systems now lack the flexibility to scale.

While bank directors know upgrades are needed, and 81 percent of banks have at least begun to plan for upgrades or will do so by 2018, according to Forrester Research, there’s a hesitation to move forward with plans, for good reason. During the initial surge of upgrades that took place in the early 2000s, only 30 percent succeeded, according to McKinsey & Co.

Jost Hoppermann, a vice president at Forrester who primarily focuses on the financial services space, has seen situations where banks suffer under the weight of these updates. For example, a mid-size bank in a large country began revamping its 40-year-old banking platform in the mid-2000s, only to dump the plan eight years into the process. What went wrong? The bank had developed 20,000 requirements that the new technology would need to pass before it would make the shift. This forced it to customize its own technology, doubling the budget.

“There’s various risks [to upgrading the core],” says Hoppermann, including downtime, customer dissatisfaction or unexpected costs, but much of it can be circumvented through proper planning. With so many requirements, the bank left no room for flexibility in the technology. Instead, you need to “identify the most important criteria,” adds Hoppermann.

The cloud is just one way to update the core, but it’s also one of the most promising in terms of opening up the bank to better technology. Capital One, for instance, uses Amazon Web Services (AWS) to host much of its mobile banking application. Legacy system operators, like Fiserv, Jack Henry, FIS and D+H all offer cloud tools for core banking solutions. But there are also a growing number of third-party vendors built specifically around the task of updating core systems in the cloud, which can have a particular appeal to smaller banks.

To imagine what the cloud offers, consider what happens when a customer looks for past transactional data on a mobile device. A bank operating on an older system typically can’t let a customer see beyond a few months’ worth of information, and a legacy system would choke if a large number of customers looked for past transactions at the same time.

“Fundamentally, there’s not much difference between transactional data and, say, a Twitter stream,” says Thought Machine Chief Marketing Officer Travers Clarke-Walker. He says when using a cloud-based solution, such as the one Thought Machine developed through its Vault system, customer data becomes easily accessible in real-time, similar to looking at your past posts on Twitter. Thought Machine is currently working with a number of banks to get Vault live to customers, which is expected to occur this year.

The problem with many old core systems is that they can’t be upgraded effectively. So if you want to quickly add a new feature, like a mobile loan tool, you might have to build the tool yourself instead of linking to a third-party capability through an application programming interface (API). “Old solutions do deliver business value, similar to an old red wine,” says Hoppermann. “Again, similar to a red wine, if you use them for too long of a period, [the value] is suddenly zero.”
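To make the contrast concrete, here is a hypothetical sketch of linking to a third-party loan-decision capability through a small API adapter rather than building the feature in-house. The vendor, endpoint behavior, and field names are all invented for illustration, and the "vendor" is simulated by a local function so the example runs on its own; a real integration would POST the payload over HTTPS.

```python
# Hypothetical sketch: consuming a third-party loan-decision service via an
# API adapter instead of building the capability in-house. Nothing here is a
# real vendor API; names and fields are illustrative only.

import json

def build_loan_request(customer_id: str, amount: float, term_months: int) -> str:
    """Serialize a loan application into the JSON payload the vendor expects."""
    return json.dumps({
        "customer_id": customer_id,
        "amount": amount,
        "term_months": term_months,
    })

def mock_vendor_decision(payload: str) -> str:
    """Stand-in for the third-party endpoint; a real integration would POST
    the payload over HTTPS and receive a decision document back."""
    data = json.loads(payload)
    approved = data["amount"] <= 50_000  # toy underwriting rule
    return json.dumps({"customer_id": data["customer_id"], "approved": approved})

def request_loan_decision(customer_id: str, amount: float, term_months: int) -> bool:
    """The only code the bank maintains: marshal the request, parse the answer."""
    response = mock_vendor_decision(build_loan_request(customer_id, amount, term_months))
    return json.loads(response)["approved"]

print(request_loan_decision("C-1001", 25_000, 36))   # within the toy limit
print(request_loan_decision("C-1002", 250_000, 60))  # exceeds the toy limit
```

The point of the adapter layer is that swapping vendors, or swapping the mock for a live HTTP call, touches one function rather than the whole loan workflow.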

Another upstart cloud core system provider, Nymbus, has a solution where an organization can use Nymbus’s system to operate a digital-only bank. This allows a small bank to gain digital operations, via the flexibility of the cloud, without upgrading its own core system. The bank can then choose different services that connect to the system using APIs. For those directors and officers nervous about how such a digital bank would operate, there’s even an option to brand it separately from the core bank, reducing the impact of a public relations snafu. “It’s really a bank in a box,” says Christopher George, senior vice president of client strategies at Nymbus, which has a number of banks transitioning to its platform, including the $100 million asset Surety Bank based in DeLand, Florida.

These types of solutions can dramatically reduce the time it takes to upgrade the systems to less than a year, from planning to launch.

But there are obvious hesitations from directors. Small banks, in particular, tend to outsource cloud services, which can be hosted in the public cloud by a service like AWS, or on servers dedicated specifically to the bank, known as the private cloud. Larger banks tend to have the resources for a private cloud, which has a reputation for security, even if the public cloud’s popularity and widespread use have begun to ease banks’ hesitation.

There’s also a regulatory concern since regulators haven’t kept up with the changing technology in the fintech space. This brings in compliance questions around security, confidentiality and access to the cloud data. In a survey of bank technologists from Forrester, 26 percent were also worried about the technology’s maturity, indicating that some options weren’t dynamic enough just yet.

But as the cost of legacy systems rises, PwC estimates that moving to the cloud would free up a combined $58 billion in IT spending from banks with $1 billion or more in assets. For smaller banks—without a dedicated innovation arm—it becomes an attractive way to upgrade without hiring an entirely new technology team.

If you need a strategy for imagining how it could work, take the case of CBW. Since a bank account is simply a number, CBW used customers’ account numbers as the “center of the universe” within its system, says Ramamurthi. Customers can define what they use at the bank, whether it’s regular depositing, a need for a loan, or a more advanced solution, and the system can adjust its offerings based on the requirements they specify. If assets grow faster than expected, as more loans or accounts come through, the back-end can scale with that increase, while also providing the proper restraints for compliance purposes.

It’s nimble banking, not just for the customers, but for the business as well.

In the first article of this series, I posited that a meaningful strategy for accomplishing collaborative work in the cloud can only be achieved by thinking more holistically than has traditionally been the case. The reality is that work collaboration that drives competitive advantage is a multi-faceted endeavor. It requires the adoption of multiple best-of-breed solutions across five categories of collaboration – Create, Store & Sync, Communicate, Reference, and the rapidly emerging category of Manage. Investing in each creates a composite that unlocks unprecedented value – a “new possible” where previously unimaginable levels of collaboration, automation, and insight are achieved.

The New Possible

Simply put, when a new possible becomes reality, it fundamentally redefines what can be achieved and who can achieve it. Remember when video content and editing was exclusively done by pros? Today, tens of millions of producers exist, using little more than their phones and readily available software. Game, changed.

I contend that we’re now at a point when that kind of breakthrough exists for collaborative work in a way that it never could in the pre-cloud world. This new possible for collaborative work has become the new reality, by graduating from merely producing, storing, and talking about work – to actively managing it. When management is coupled with conversation and content, it improves accountability, accuracy, and speed, and yields unprecedented visibility into the work as it’s being done, and the value that can be derived from it. Collaborative software that enables enterprise teams to manage their work in a self-directed, no-code manner is doing for business operations and productivity what the iPhone did for video production. Information workers rejoice!

A Framework for Success

Here are the five core categories that, when fully and effectively deployed, enable organizations to fully realize their potential to undertake collaborative work in the cloud. Note that the walls between each are permeable – although an application’s primary purpose exists in but one category, that does not preclude it from having relevance in one or more of the others.


Create

Applications that aid users in the creation of content or digital assets, often shared, sometimes co-authored. (Think: Google Docs, Office 365, Adobe Creative Cloud, and others)

With more than a billion people having used such software to document, plan, or design, the capability to create digital content is the most immediately understood of the five categories. A core tenet of this category is the freedom it provides its users. These applications enable nearly limitless potential to arrange, format, capture, and produce. With that freedom, however, comes something of a lawless territory. The absence of “collaboration guardrails” can lead to inconsistency in how information is stored, uncertainty among collaborators about how they should complete work, and a lack of business process enforcement — which places the burden of information and process management squarely on the shoulders of information owners and collaborators.

The cost, frustration, and latency that result from relying solely on human oversight have been well documented, and have reached new heights in an era where sharing puts more “cooks in the kitchen” and creates more challenges than ever before.


Store & Sync

Applications that enable users to organize, access, secure, and share files. (Think: Box, OneDrive, Dropbox, and others)

Content, once created, needs to be stored and synchronized. Performance, regulation, and risk mitigation are all contributing factors to why the storage and management of digital assets is a category that is alive and well. The pre-cloud analog was your c:\drive or your corporate fileshare. The latter was collaborative in the sense that you could provide access to multiple people. The new generation platforms greatly improve the accessibility, permission management, and, in some cases, auditability of digital assets. These platforms aren’t known for their role in creating digital content, but rather housing it, securing it, and regulating its distribution.


Communicate

Applications that facilitate text, voice, and video from one person or group to another, often feed-based, and searchable. (Think: Slack, Microsoft Teams, Skype for Business, Google Hangouts, and others)

It’s remarkable to think how far things have come since the days when nearly all digital communications were conducted using email. In recent years, video conferencing, threaded messaging, group messaging, team spaces, and team channels, in which people share and comment on content, experiences and best practices, have leapt onto the scene and continue to produce new market entrants. Why is communication so attractive to innovators looking to develop software? Simple. Because communication is fundamental to information workers and the ways in which they want to collaborate with others. One hundred percent of all people MUST communicate. Not optional. As such, the temptation of a large total addressable market remains alluring to technology innovators.

Communication solutions principally solve for conveying, and in many cases, storing information. They are optimized for notification and managing the continuous flow of sentiment, ideas, decisions, and targeted status. They, like solutions in the Create category, have virtually no guardrails for content quality or consistency, making it difficult to delineate valuable signal from a sea of noise when looking at communications en masse. As one tech CEO recently stated, communication solutions are like effective radios, but are not ideal at providing GPS information that can help you determine where you are or where you need to go next.


Reference

Applications that create structured websites that people reference for knowledge, instruction, and navigation. (Think: SharePoint, Confluence, Google Sites, and others)

While I include Reference as one of the five categories, it’s included mainly because of its close association with collaboration over the years. With respect to my categorization, I’d say it’s very much on the bubble: while the median enterprise employee might regularly access information from such a tool, far fewer contribute, and fewer still (think less than 1 percent) actually create new assets on these platforms (without IT involvement) to accomplish the work for which they’re accountable.


Manage

Applications that enable visibility, action, status, and automation for collaborative projects or processes.

For years, actually decades, people – and as a result, most businesses (including major enterprises) – have managed important work using office docs like Excel. And those same people have communicated about that work by sending it around, as attachments, in email. In recent years, the volume and velocity at which these office docs and their associated communications have been created has reached a dizzying – many would say unsustainable – pace.

For those of us with a front row seat to the new possible of collaborative work in the cloud, it is amazing to realize that – to a great extent – the old, manual, office doc way of organizing and getting work done has persisted, despite a recognition that the approach is fraught with risk in today’s high-volume, high-velocity world. Clearly, we believe there is a better way.

The Manage element of collaborating on work in the cloud is more than a forklift upgrade of something you used to do using pre-cloud software. It’s fundamentally new and it is transformative. In my next post, I will take a deeper look at the core tenets that bind this next-gen collaborative work management framework together – Track, Report, Scale – and will provide my thoughts on the fourth element – the holy grail long sought by information workers, IT leaders, and line-of-business owners alike – no-code Automation.

In the meantime, I welcome your thoughts on this subject and encourage you to comment as appropriate.


Oleg Shilovitsky, CEO, OpenBOM

I want to continue my dialog with Jos Voskuil, PLM business consultant and PLM coach. If you’re just catching up on the conversation, check the following articles: Why traditional PLM ranking is dead. PLM ranking 2.0? How to democratize PLM knowledge and disrupt traditional consulting. Some interesting thoughts came over the weekend from Jos’ article – The death of PLM consultancy? As Jos mentioned in an earlier comment, “a catchy title is always good for a blog post; in particular, using the word dead always scores.”

I agree with Jos that a company should own the decision process and come to tool selection only after it knows what it wants to do.

If you hire consultancy firms just for the decision process, it does not make sense. The decision process needs to be owned by the company. Do not let a consultancy company prescribe your (PLM) strategy, as there might be mixed interests. However, when it comes to technologies, they are derived from the people and process needs.

At the same time, Jos questions a potential problem with cloud tools.

One of the uncomfortable discussions when considering a cloud solution is not necessarily security (topic #1) but: what is your exit strategy? Have you ever thought about your data in a cloud solution when the vendor raises prices or no longer has a viable business model? These are discussions that need to take place too.

Just a few years ago, security was considered one of the biggest risks of using cloud-based software. Not anymore. Cloud adoption is growing. Security is still an important question, but based on the latest surveys by CIMdata, the biggest concern today is how to integrate existing IT stacks with cloud services.

I agree, the cloud might still not be for everyone. But cloud adoption is growing, and it is becoming a viable business model and technology for many companies. I wonder how the “cloud” problem relates to the discussion about the death of PLM consulting. And here is my take on this: it is all about business model transformation.

Cloud brings transformation to the business and disrupts existing business models. IT was originally in opposition to cloud technologies, because cloud was a big change to the old IT business model. Cloud IT disrupted traditional IT, but IT organizations adjusted their business models and found ways to do business under the new conditions. Cloud is now transforming the PLM business. Large on-premise PLM projects require large capital budgets, a very good foundation for the existing PLM consulting business. SaaS subscription is a new business model, and it can be disruptive for lucrative consulting deals. You usually see a lot of resistance when somebody disrupts your business model. We’ve seen it in many places and industries: it happened with advertising, telecom and transportation. The time is coming to change PLM, engineering and manufacturing software and business.

There is an interesting passage in Jos’ blog about the role of tools and technologies as well as marketing of software companies. Here is the passage:

Don’t try to find answers on a vendor website, as there you will get no details, only the marketing messages. I understand that software vendors, including Oleg’s company OpenBOM, need to differentiate by explaining that the others are too complex. It is the same message you hear from all the relative PLM newcomers, Aras, Autodesk, … All these newcomers provide marketing stories and claim successes because of their tools, where the reality is that the tool is secondary to the success. First, you need the company to have a vision and a culture that matches this tool. Look at an old Gartner picture (the hockey stick projection) when all is aligned. The impact of the tool is minimal.

I think Jos is missing the point with regard to these vendors. The difference is not in marketing, but in the process of PLM tool democratization. Jos mentioned three companies: Aras, Autodesk, and OpenBOM (disclaimer: I’m co-founder and CEO of OpenBOM). All these tools have one thing in common. You can get the tool or cloud service for free and try it yourself before buying. You can do it with Aras Innovator, which can be downloaded for free as enterprise open source, and both Autodesk Fusion Lifecycle and OpenBOM offer trials and free subscriptions. This is different from traditional on-premise PLM tools provided by the big PLM players, which require months and sometimes even years of planning and implementation, including business consulting and services.

What is my conclusion? The PLM industry is transforming, and cloud technologies will play a fundamental role by bringing new business models, removing implementation complexity and breaking down communication silos. Cloud technology can democratize PLM, turning it from a lucrative business targeting large enterprises into SaaS tools that can be used by a large network of engineers, contractors, and engineering and manufacturing companies of all sizes. The network effect created by these tools will make a huge impact on the PLM industry. PLM projects will change their nature from large business transformation projects to agile and lean processes of adopting cloud services and tools, with each step a small advance toward transforming a company’s product development processes. What about PLM consultants? Of course, we will need them, to help companies build their vision and to shop for tools online. But the PLM consulting business will have to adapt to the new realities of cloud services and subscription business models. Just my thoughts…


Congratulations! Your awesome startup is up and running; you’ve leveraged a ton of open source software and tools, adopted some great SaaS products, deployed to the cloud and you’re scaling effortlessly with increasing demand. So what could go wrong?

Many things, as it turns out, but two stand out: availability and security. Failure to manage either of these issues can be business busting.

Modern software practices enable tremendous velocity to even the smallest teams through the leverage that open source and outsourcing offer. The downside of this leverage is that it pins the successful operation of your services to your dependencies: when they fail, you fail.


Let’s look at availability first. Your team has delivered a solution that runs in multiple availability zones in multiple regions, with replicated databases and so on; there’s just no way you’re going to go down! Right?

I’m sorry to tell you, but this is just not true.

The truth about any cloud provider or data center is that it will fail. Sometimes it’s subtle, but often it can be spectacular, as in the February 2017 AWS S3 failure.

Here are just a few things you might experience during an outage when you use outside vendors to deliver your solution:

  • You can’t deliver key services
  • You experience data loss
  • You experience loss of system visibility
  • Your Continuous Integration and Delivery (CICD) Pipeline is broken and you can’t deploy

Key Services: Maybe you’re using an outside service to deliver real time events to mobile devices, or you’re outsourcing transactions. If these services are down, it may not really matter that your own service is up.

Data Loss: What if you’re outsourcing your log management? You’ll lose visibility into what is going on with your systems, but more importantly this could lead to spectacular system failures when undelivered logs fill up your server’s disks.

Alternatively, you might be using a third party’s storage for analytics or other functions. Can you cache data until those systems become available? What happens if data doesn’t make it there?
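One common answer to "can you cache data until those systems become available?" is a bounded local buffer in front of the vendor client. The sketch below is a minimal, assumption-laden illustration: the "analytics endpoint" is a stub that fails and then recovers, and a real implementation would likely persist the buffer to disk and add retries with backoff.

```python
# A minimal sketch of a local fallback buffer: when a third-party endpoint is
# unavailable, events are held in a bounded in-memory queue and flushed, in
# order, once the service recovers. The endpoint here is a local stub.

from collections import deque

class BufferedSender:
    def __init__(self, send_fn, max_buffer=10_000):
        self.send_fn = send_fn                   # callable that raises on outage
        self.buffer = deque(maxlen=max_buffer)   # bounded, so backlog can't grow forever

    def send(self, event):
        self.flush()                  # drain any backlog first, preserving order
        try:
            self.send_fn(event)
        except ConnectionError:
            self.buffer.append(event) # vendor is down: hold the event locally

    def flush(self):
        while self.buffer:
            try:
                self.send_fn(self.buffer[0])
            except ConnectionError:
                return                # still down; keep the backlog for later
            self.buffer.popleft()     # delivered: drop it from the queue

# Simulated outage: the first two sends fail, then the service recovers.
delivered = []
state = {"up": False}

def fake_endpoint(event):
    if not state["up"]:
        raise ConnectionError("analytics vendor outage")
    delivered.append(event)

sender = BufferedSender(fake_endpoint)
sender.send({"id": 1})
sender.send({"id": 2})
state["up"] = True
sender.send({"id": 3})
print(delivered)  # all three events arrive, in order, after recovery
```

Note the `maxlen` bound: if the outage outlasts the buffer, the oldest events are dropped silently, which is itself a policy decision your report should capture.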

Loss of Systems Visibility: What happens when your outsourced monitoring tool suffers an outage? Will you still be able to operate? Will you be flying blind during an outage to your own cloud provider?

CICD Pipeline: It’s not unusual to need to deploy some kind of patch to your production systems during a cloud outage. So, even if you could otherwise weather the storm, not being able to deploy your patch to get your service back online is a real problem.

One word of warning: many upstream repositories and services were lost, or severely degraded, during the February 2017 AWS S3 outage. These included two important container registries, Docker Hub and Quay. When these services were affected, it became difficult for businesses to push up changes, launch new service instances with the latest updates, or even launch new instances at all.

Addressing third-party availability:

I hope I’ve convinced you that you really should address this issue. The good news is that you’ve got options. Before you do anything though, take some time and have your team review and catalog your exposure.

As your team builds its report, it should include two items at a minimum: the effect of an outage on the business and the effect of an outage on the service. The effect may be negligible, or it may be profound; try to be specific about the effect. Note that a negligible business effect may also be paired with a serious effect on your systems when software was constructed with the assumption that dependent services are always available.
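The report described above lends itself to a simple, sortable structure. The sketch below is purely illustrative: the dependency names, the 1-to-5 impact scores, and the weighting in `urgency` are all invented, and your own scoring scheme would reflect your business.

```python
# A toy exposure catalog: each third-party dependency is scored for business
# impact and service impact (1 = negligible, 5 = profound), then ranked so
# the most urgent items float to the top. All entries are made up.

dependencies = [
    {"name": "payments-api",   "business_impact": 5, "service_impact": 5},
    {"name": "log-management", "business_impact": 2, "service_impact": 4},
    {"name": "email-delivery", "business_impact": 3, "service_impact": 1},
]

def urgency(dep):
    # Weight business impact slightly higher; tune to your own risk appetite.
    return 2 * dep["business_impact"] + dep["service_impact"]

ranked = sorted(dependencies, key=urgency, reverse=True)
for dep in ranked:
    print(f'{dep["name"]}: urgency {urgency(dep)}')
```

Even a spreadsheet works just as well; the point is that every dependency gets both scores, so a "negligible business effect but serious system effect" entry is visible rather than forgotten.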

Armed with your report and the full knowledge of the threats these dependencies pose to your business, you can begin the process of addressing them. Some dependencies may not be worth addressing, or can be kicked quite a ways down the road, but others will clearly be more urgent.

Some items may be addressed through service level agreements (SLA) with your vendors, but avoid simple uptime requirements. If the only outage your vendor has is during 6 hours on Black Friday, they may still satisfy their SLA, but you’d be out a lot of business. Other items may require small changes to your software to handle outages more gracefully, or may inspire you to provide your own in-house solution. Once you understand any actions you need to take, you can prioritize and build a road map that gets you where you need to be.
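One example of the "small changes to your software to handle outages more gracefully" mentioned above is a circuit breaker. The minimal sketch below is an assumption: thresholds, the fallback value, and the flaky vendor are all invented, and production breakers usually add a timed half-open state to probe for recovery.

```python
# A minimal circuit breaker: after a few consecutive failures the breaker
# "opens" and calls return a fallback immediately, instead of hammering a
# vendor that is already down. Thresholds and fallback are illustrative.

class CircuitBreaker:
    def __init__(self, call, failure_threshold=3, fallback=None):
        self.call = call
        self.failure_threshold = failure_threshold
        self.fallback = fallback
        self.failures = 0

    def __call__(self, *args):
        if self.failures >= self.failure_threshold:
            return self.fallback          # open: skip the vendor entirely
        try:
            result = self.call(*args)
            self.failures = 0             # success resets the failure count
            return result
        except ConnectionError:
            self.failures += 1
            return self.fallback          # degrade gracefully on this call

def flaky_vendor(x):
    raise ConnectionError("vendor outage")

guarded = CircuitBreaker(flaky_vendor, failure_threshold=3, fallback="cached-answer")
results = [guarded(i) for i in range(5)]
print(results)           # every call degrades to the fallback value
print(guarded.failures)  # counting stops once the breaker is open
```

The design choice worth noting is that the fallback is application-specific: a cached answer, a queued retry, or an honest error page are all better than a hung request.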

Now that you’ve got your road map in hand, let’s take a look at security.


The truth about security is that it is not if you will be subject to attack and intrusion, but when, and how big is the blast radius. Addressing security, in general, is well beyond the scope of this article, so I’m just going to focus on what is considered the biggest attack surface for your service: your open source software supply-chain.

The dark side of open source is that it is open: anyone can study the source code and craft an exploit, or sneak malware into a popular open source library by adding a dependency on an innocuous-looking package that turns out to be malicious (to give just one example).

You’re also subject to the whims of copyright holders, who may decide to pull a project from public accessibility. At best, this causes all of your builds and deployments to fail; at worst, they keep working against namespace replacements newly provided by hackers who have seized the moment to introduce their own malicious code into the open source stream.

It’s good news, then, that security threats in repositories with well-trained maintainers and an active community are usually identified and patched quickly. What makes it hard for you is that you’re likely using dozens, if not hundreds, of packages (either directly or through dependencies). Knowing when, or if, you need to upgrade or patch your software is difficult.

Addressing your Supply-chain:

You’ve got a few choices:

  • Use compensating factors
  • Insert security scanning into your pipeline
  • Control your repositories
  • Write your own software

Use compensating factors: Tried and true solutions, and some newer ones, can be used to make access to and from your servers difficult. This includes activities like locking down your service ports, configuring firewalls with egress (outbound) rules to prevent access to all but a few endpoints, setting up DMZ networks with strict routing rules, and adopting service meshes with zero-trust networking.
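Firewall egress rules live outside your code, but the same allowlist idea can also be enforced at the application layer as a defense in depth. The sketch below is hypothetical: the hostnames are placeholders, and a real deployment would wrap this check around an HTTP client rather than calling it directly.

```python
# A sketch of an application-level egress allowlist, complementing firewall
# egress rules: outbound requests are refused unless the destination host is
# explicitly approved. Hostnames are invented placeholders.

from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.payments.example.com", "logs.example.com"}

def egress_allowed(url: str) -> bool:
    """Return True only for URLs whose hostname is on the approved list."""
    host = urlparse(url).hostname
    return host in ALLOWED_HOSTS

print(egress_allowed("https://api.payments.example.com/v1/charge"))  # permitted
print(egress_allowed("https://evil.example.net/exfiltrate"))         # refused
```

Default-deny is the key property: a compromised dependency that tries to phone home to an unlisted host simply fails.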

Insert security scanning into your pipeline: Prioritizing your pipeline work to enable quick turnarounds on patches is essential if you hope to react quickly to threats, especially on servers that are directly connected to the Internet. Use solutions like Snyk, Gemnasium and Black Duck Software (to name a few) to identify vulnerabilities and how to patch them. These tools will ensure that you aren’t flying blind and can quickly repair your builds and deployments.
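At its core, what those scanners automate is a comparison of your pinned dependencies against a vulnerability database. This toy sketch shows the shape of that check; the package names, versions, and advisories are entirely invented, and real tools match version ranges against feeds like the CVE/NVD databases rather than exact strings.

```python
# A toy version of a pipeline vulnerability gate: compare pinned dependency
# versions against a list of known-bad versions and flag any matches so the
# build can be failed. Packages and advisories are invented for illustration.

pinned = {"webframework": "2.1.0", "imagelib": "1.4.2", "jsonparse": "3.0.1"}

advisories = {  # package name -> set of vulnerable versions
    "imagelib": {"1.4.2", "1.4.3"},
    "cryptolib": {"0.9.0"},
}

def vulnerable_packages(pinned, advisories):
    """Return the sorted names of pinned packages with a known-bad version."""
    return sorted(
        name for name, version in pinned.items()
        if version in advisories.get(name, set())
    )

findings = vulnerable_packages(pinned, advisories)
if findings:
    print(f"build FAILED: vulnerable packages: {findings}")
else:
    print("build passed")
```

In a CI job, a non-empty `findings` list would exit non-zero, stopping the vulnerable build from ever reaching production.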

One issue worth mentioning with respect to patches is that they have the potential to introduce breaking changes to your software. It’s important to stay as up to date as possible with your dependencies to minimize breaking changes, so you don’t end up passing on an important patch. Passing on patches has led to some of the web’s most catastrophic hacks (naming no names).

Control your repositories: Make sure you’re in full control of what software makes it into production. You can set up your own mirrors using open source solutions, or tools like JFrog’s Artifactory, to control when your open source dependencies get updated and what you will accept. How much of this you do is up to you. You may focus solely on production or on container images, or you may decide to run your entire pipeline on your own repositories. A side effect of managing your own repositories is that your pipeline can remain functional during a cloud outage.
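A mechanism that pairs naturally with controlled mirrors is checksum pinning: every artifact accepted into a build must match a hash recorded in a lockfile you own, so a hijacked or replaced upstream package cannot slip in unnoticed. The sketch below is illustrative; the artifact name and contents are invented, and package managers such as pip (with `--require-hashes`) implement this idea for real.

```python
# A sketch of lockfile hash pinning: an artifact is accepted only if the
# SHA-256 of its contents matches the checksum recorded in a lockfile the
# team controls. The artifact and its "trusted contents" are invented.

import hashlib

lockfile = {  # artifact name -> expected SHA-256 of its contents
    "leftpad-1.0.tar.gz": hashlib.sha256(b"trusted contents").hexdigest(),
}

def verify_artifact(name: str, contents: bytes) -> bool:
    """Accept an artifact only if it is known and its checksum matches."""
    expected = lockfile.get(name)
    if expected is None:
        return False  # unknown artifact: reject by default
    return hashlib.sha256(contents).hexdigest() == expected

print(verify_artifact("leftpad-1.0.tar.gz", b"trusted contents"))   # accepted
print(verify_artifact("leftpad-1.0.tar.gz", b"tampered contents"))  # rejected
```

The default-reject branch matters as much as the hash comparison: anything not explicitly recorded never enters the build.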

Write your own software: If there is a need to lock down your systems, you may consider replacing some of your open source software with software your team has written. Sometimes software suffers from bloat when too many dependencies are added to provide too little value; go with the less-is-more approach in those cases and write your own. In other cases, you might consider forking an open source library and hardening it to your own needs, or replacing it entirely because the function is simply too mission-critical.

Next Steps

Your customers expect a lot from you. They expect your service to be cheap, provide excellent value and behave like a utility where the lights are always on. You lose their trust when they can’t get what they need from you, or worse, if their personal information is compromised.

If you are currently deployed to the cloud, or are thinking about it, take some time to think seriously about availability and security. These issues seem extremely boring, but that is the point. Delivering your service should be boring. US-east goes down; no problem. A new zero-day exploit is found; no problem.

Build awareness and urgency around these issues, then build your own plan and roadmap with the full knowledge of what availability and security mean to your business.