Stop Talking About Multicloud and Hybrid Cloud and Start Talking About Integration

Multicloud simply means that users are using multiple cloud platforms — it shouldn’t matter where infrastructure runs. 
Mar 15th, 2021 8:38am by Mark Hinkle

Mark Hinkle
Mark has a long history in emerging technologies and open source. Before co-founding TriggerMesh, he was the executive director of the Node.js Foundation and an executive at Citrix, Cloud.com and Zenoss where he led their open source efforts.

The terms hybrid cloud and multicloud are among the most polarizing in cloud computing. Historically, hybrid cloud indicated that workloads were moving from the private data center to the public cloud. Today, it means integration between applications on-prem and those services in the cloud. Multicloud has a similar genesis — initially indicating workloads that would move from cloud to cloud, based on circumstances such as price and performance. However, those use cases were as hard to spot as a leprechaun riding a unicorn. Now, multicloud simply means that users are using multiple cloud platforms. If cloud computing does live up to the hype, then it shouldn’t matter where infrastructure runs.

In virtually every enterprise, teams are consuming services from multiple clouds. It’s becoming almost irrelevant where the workloads run, as long as there is a way to integrate the services and (with smart choices) manage them without a plethora of tools. The proliferation of high-quality cloud services allows us to consume services from the best provider for our specific needs. It may be object storage from Amazon, compute from Google, CRM from Salesforce, and management services from Splunk and Datadog. These loosely coupled services, combined over networks, become cloud native applications.

The design pattern is very similar to what was championed in the late 1990s as service-oriented architecture (SOA), though instead of being merely API-driven, today’s applications are becoming event-driven. When internet usage first saw rapid growth in the 1990s, most users had extremely low bandwidth and consumed few services beyond email and the web. Fast forward to 2021 and today we have fast internet, a myriad of cloud services, smart devices, and an unrelenting hunger for up-to-the-minute information — whether it’s a Facebook status update, or the latest price of Bitcoin or GameStop.

API Versus Event-Driven Architecture

There are two popular interaction models in the cloud. The first is the REST API (or RESTful API). REST stands for representational state transfer and API stands for application programming interface. RESTful APIs are a way for systems to interact with each other, almost like a conversation. In a RESTful architecture, there is a request and a response, and this back and forth continues until the needed information is obtained. This is very chatty and requires synchronous communication. In contrast, event-driven architecture is like drinking from the firehose — events are streamed asynchronously and consumers subscribe to topics on a publish/subscribe model (referred to as pub/sub).
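To make the contrast concrete, here is a minimal Python sketch of the two styles. The polling URL, topic name and broker object are hypothetical stand-ins rather than any specific vendor's API; the point is only that the RESTful client keeps asking, while the event-driven client registers interest once and has updates pushed to it.

```python
import time
import requests  # pip install requests

# Request/response (RESTful): the client keeps asking until it has what it needs.
# The URL is a made-up example; any JSON-over-HTTP API behaves the same way.
def get_price(symbol: str) -> float:
    resp = requests.get(f"https://api.example.com/prices/{symbol}", timeout=5)
    resp.raise_for_status()
    return resp.json()["price"]

def watch_price_by_polling(symbol: str, interval_s: float = 10.0) -> None:
    while True:  # chatty, synchronous loop
        print(f"{symbol} is now {get_price(symbol)}")
        time.sleep(interval_s)

# Event-driven (pub/sub): register a callback once and let the broker push updates.
# `broker` stands in for any pub/sub client (Kafka, Google Pub/Sub, and so on).
def on_price_event(event: dict) -> None:
    print(f"{event['symbol']} changed to {event['price']}")

# broker.subscribe("prices", on_price_event)  # no polling loop; events arrive as they happen
```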

Event-driven architectures can take multiple forms, but two of the most common are webhooks and streaming. In an event-driven architecture, data comes to you in real time, not as a response to queries (as with the API approach). For example, in the case of a webhook, we ask the producer of the event to tell us when a job is done; then we are notified of the event when it happens. That is asynchronous communication. In the case of streaming, we receive events as states change. This could be a change to a database, the upload of a file to a storage blob, or the completion of a serverless function.
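For illustration, a webhook receiver can be as small as the following Python sketch, which uses only the standard library. The payload fields (job_id, status) and the port are hypothetical; in practice the event producer's documentation defines what the POST body contains.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Receives a POST from the event producer when a job finishes."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # React to the notification; the field names depend on the producer.
        print(f"job {event.get('job_id')} finished with status {event.get('status')}")
        self.send_response(204)  # acknowledge receipt, no body needed
        self.end_headers()

if __name__ == "__main__":
    # The producer is configured (out of band) to POST to http://<host>:8080/
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```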

The Rise of Event-Driven Architecture

Now that we understand what event-driven means, why does it matter? It matters because these events are used to trigger decoupled services and let them communicate. An event is simply a change in state in a system. It can carry the state itself (e.g. the row that was inserted into a database) or simply signal that something happened (e.g. the database went offline). These events can be used to trigger workflows. Workflows that span clouds break down silos and enable data synchronization or the completion of complex tasks.

Here is a simple multicloud example: a photographer uploads an image to Amazon S3, which creates an event saying there is a new image. This event triggers Google Vision’s machine learning to identify the image. Once the image is identified, Google Vision generates an event saying, for example, that the image is a black and white dog, and that result is inserted into a MongoDB Atlas database. That information, the image, and other relevant details are then presented via a JAMStack website hosted on yet another cloud, such as Netlify’s CDN (content delivery network). Netlify delivers the website at the edge of the network, closer to the consumers of the service. Because the website is decoupled from the backend, it can include data from a variety of sources — and those sources can be the cloud services that best fit the website’s needs.
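A compressed sketch of the first two hops of that pipeline might look like the following. This is illustrative only: it assumes an AWS Lambda function subscribed to the bucket's ObjectCreated notifications, the google-cloud-vision and pymongo client libraries, and placeholder names for the Atlas connection string, database and collection.

```python
import os
import boto3                      # AWS SDK, available in the Lambda runtime
from google.cloud import vision   # pip install google-cloud-vision
from pymongo import MongoClient   # pip install pymongo

s3 = boto3.client("s3")
vision_client = vision.ImageAnnotatorClient()
mongo = MongoClient(os.environ["MONGODB_ATLAS_URI"])  # Atlas connection string
photos = mongo["gallery"]["photos"]                   # placeholder db/collection names

def handler(event, context):
    """AWS Lambda entry point, triggered by an S3 ObjectCreated event."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Pull the new image out of S3 and hand it to Google Vision for labeling.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        labels = vision_client.label_detection(
            image=vision.Image(content=body)
        ).label_annotations

        # Store the result in MongoDB Atlas for the website to query later.
        photos.insert_one({
            "bucket": bucket,
            "key": key,
            "labels": [label.description for label in labels],  # e.g. ["dog"]
        })
```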

Event-Driven Architecture in the Cloud

Today, the fabric of the cloud has become Kubernetes. It is pervasive and allows portability of applications running in containers. Users commonly deploy microservices that run full time in containers, alongside serverless functions. These services and functions communicate with each other using events. There are a number of event streaming technologies — some that can span multiple clouds and others that are specific to each cloud.

Amazon Kinesis provides a way to manage streams of events in AWS. Eventarc is Google Cloud’s solution for routing events from Google services to targets or services that can receive messages from Pub/Sub topics. Microsoft Azure’s Event Grid manages the routing of events from source to destination in the Azure cloud. These are good examples of single cloud event streaming solutions.
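As a taste of what the single cloud options look like, here is a minimal sketch of publishing an event to an Amazon Kinesis data stream with boto3; the stream name, region and payload are placeholders invented for the example.

```python
import json
import boto3  # pip install boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event: dict, stream: str = "orders-events") -> None:
    """Put one event onto a Kinesis data stream (stream name is a placeholder)."""
    kinesis.put_record(
        StreamName=stream,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("order_id", "unknown")),  # controls shard routing
    )

publish_event({"order_id": 42, "status": "shipped"})
```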

What if you want to do multicloud? Apache Kafka is an open source distributed event streaming platform. It is used pervasively today for streaming events on-premises and from cloud infrastructure. Additionally, Confluent provides Apache Kafka as a service.
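A minimal producer/consumer pair with the confluent-kafka Python client shows the idea; the broker address and topic name are placeholders, and the authentication settings a managed service like Confluent Cloud would require are omitted. Because both sides only need to reach the Kafka brokers, the producer and consumer can run in different clouds or data centers.

```python
from confluent_kafka import Producer, Consumer  # pip install confluent-kafka

BROKERS = "broker-1:9092"   # placeholder: on-premises cluster or a managed Kafka endpoint
TOPIC = "cloud-events"      # placeholder topic name

# Producer side: any service, in any cloud, publishes events to the topic.
producer = Producer({"bootstrap.servers": BROKERS})
producer.produce(TOPIC, value=b'{"type": "image.uploaded", "key": "photo.jpg"}')
producer.flush()

# Consumer side: a service in a different cloud (or data center) reads the same stream.
consumer = Consumer({
    "bootstrap.servers": BROKERS,
    "group.id": "image-labeler",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(5.0)
if msg is not None and msg.error() is None:
    print("received:", msg.value())
consumer.close()
```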

From Multicloud/Hybrid Cloud to Integration

The question you may ask is: how do you know it’s about integration? Truth be told, when we founded TriggerMesh in 2018, we were focused on multicloud serverless management. Our thesis was that cloud users would want tooling that was consistent across all clouds. This thesis was correct, although it turned out that managing deployment of serverless functions was fairly easy. What wasn’t easy was providing a consistent way to communicate between cloud services.

As we continued to speak with users of cloud services, we realized the problem cloud architects are wrestling with is having a way not only to communicate between cloud services, but also to route event streams — and sometimes even to transform those events from one format to another. We saw that the emerging standard here was the Cloud Native Computing Foundation‘s CloudEvents specification.
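CloudEvents standardizes the envelope around an event (its type, source, ID and content type) so that producers and consumers in different clouds agree on what they are exchanging. Here is a small sketch using the CloudEvents Python SDK; the event type, source, payload and destination URL are invented for illustration.

```python
from cloudevents.http import CloudEvent, to_structured  # pip install cloudevents
import requests                                          # pip install requests

# Describe the event with the standard CloudEvents attributes; the type,
# source and payload here are made-up examples.
event = CloudEvent(
    {
        "type": "com.example.storage.object.created",
        "source": "s3://photo-uploads",
    },
    {"bucket": "photo-uploads", "key": "dog.jpg"},
)

# Serialize to a structured HTTP request that any CloudEvents-aware
# consumer (a broker, a function, a Knative service) can accept.
headers, body = to_structured(event)
requests.post("https://events.example.com/", data=body, headers=headers, timeout=5)
```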

We saw large banks that wanted to filter events from Azure to Splunk to reduce storage costs. We talked to others who wanted to integrate Salesforce and their existing ERP systems via event streams served by Apache Kafka. We saw that some clouds, like Oracle, had less tooling than the Big 3 cloud providers and needed to stream cloud metrics to Datadog. All of these problems are integration problems; the sources and targets were both in the cloud and on-premises. That’s when the “aha moment” arrived and we went all-in on integration. We felt that tooling in the vein of HashiCorp‘s Terraform, which is infrastructure as code for deploying cloud infrastructure, needed to exist for integration (we call this integration-as-code). That’s why it doesn’t matter so much where the service exists — it just needs to be easy to integrate.

Feature image via Pixabay.
