Cloud-Native Journey Part 1: Defining Goals and Responsibilities

Loft Labs
Feb 22, 2022


by Tyler Charbonneau


With on-premises deployment models fading, organizations have shifted their focus to the cloud. Businesses are now looking for cloud-native systems that are feature-rich, reliable, and accessible from anywhere.

But what classifies something as cloud native? A simple definition is that cloud-native workloads and infrastructure are built atop cloud-based resources, typically in the form of private, public, or hybrid clouds. Accordingly, cloud computing relies heavily on sound planning, design, and deployment of remote services. When you consult the CNCF for an official definition, the picture grows slightly more complex. The following principles are integral to cloud-native approaches:

  • Application scalability
  • Loose coupling of systems and services, which are composed harmoniously rather than tightly bound
  • Transparency and observability
  • Smaller codebases and features instead of monolithic ones
  • Reliance on containers, service meshes, APIs, microservices, and stable infrastructure

Ultimately, a development strategy encompassing these principles grants teams and users more freedom. Each can tap into powerful services from almost any location with ease, whether that means pulling up a document or using enterprise applications on a mobile device.

On the DevOps side, technical teams have an easier time performing maintenance and monitoring ecosystem health indicators. Cloud native's speed and agility let teams deploy rapidly, giving them a competitive advantage; it's often what decides whether you launch a software product sooner or introduce a groundbreaking feature first. Cloud approaches allow organizations to remain nimble.

There’s no denying that cloud-native approaches are complex. In this guide, you’ll learn about the process of defining goals and responsibilities that’ll influence your cloud-native setup.

Cloud-Native Goals and Responsibilities

Before building a cloud-native infrastructure, you need to assess what problem(s) you’re trying to solve or how a cloud-native transition would benefit the organization as a whole. That means gathering feedback and taking stock of the applications, systems, and priorities of various teams within your organization.

Additionally, you need to evaluate your team’s expertise. Is there anyone on staff with cloud-native experience? Are your technical teams well versed in technologies like containers, microservices, and REST APIs? Responsible companies won’t jump-start a transition like this without having the knowledge on board or hiring experts to make it successful.

Firmly understanding your business model will help guide any subsequent decisions and directly impact the progression of your deployment. Do you operate like Netflix, where you must keep your services afloat without interruption, or like Amazon, where every second of downtime impacts your bottom line? Maybe you’re an enterprise behemoth or a SaaS vendor maintaining extensive catalogs of internal and external services. It’s crucial to consider how a transition can impact your users (or customers).

Let’s take a look at a few things you should consider before transitioning to a cloud-native infrastructure:

Prepare for Challenges

Every service that shifts to the cloud will have ripple effects, and modernization comes with short-term obstacles. How extensive must testing be to ensure success? What will the balance be between development testing and production testing? Will users experience interruptions, and will cloud native's benefits outweigh any friction they feel?

The extent of your transition, and how smoothly it goes, may test user loyalty unless your services are truly one of a kind. Internally, it's smart to anticipate an influx of help desk calls and some confusion as teams first navigate the new systems.

Keep Flexibility Needs in Mind

Next, consider how much flexibility you actually need. Embracing cloud-native technologies means leveraging offerings from many Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) vendors. There’s a massive number of service providers out there. However, some household names like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) have risen to the top.

While each offers a lot of functionality through its service catalog, teams trade away some configurability for that convenience. These services are largely vendor managed and are therefore locked down to some extent. You might have trouble accessing settings, making customizations, and fine-tuning products developed for millions of customers to your unique requirements.

For many organizations, the ability to plug and play with different services is hugely beneficial. Cloud native’s piecemeal nature and a growing emphasis on vendor-agnostic integrations can help you avoid lock-in. You need to determine how much control you’re willing to sacrifice and what positive outcomes that may yield.
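
To make that trade-off concrete, one common hedge against lock-in is to code against a thin interface of your own and keep vendor SDK calls behind it. The sketch below is a minimal Python illustration, assuming boto3 is installed and AWS credentials are configured; the names ObjectStore, S3Store, LocalStore, and archive_report are hypothetical and not part of any vendor's API.

```python
from pathlib import Path
from typing import Protocol

import boto3  # pip install boto3


class ObjectStore(Protocol):
    """Minimal storage interface the application codes against."""

    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class S3Store:
    """Backs the interface with AWS S3 via boto3."""

    def __init__(self, bucket: str):
        self._bucket = bucket
        self._client = boto3.client("s3")

    def put(self, key: str, data: bytes) -> None:
        self._client.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._client.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class LocalStore:
    """Backs the same interface with the local filesystem for development."""

    def __init__(self, root: str):
        self._root = Path(root)
        self._root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        (self._root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self._root / key).read_bytes()


def archive_report(store: ObjectStore, report: bytes) -> None:
    # Application code depends only on the interface, not on a vendor SDK.
    store.put("reports/latest.pdf", report)
```

Swapping providers, or running the same code locally during development, then means writing another small adapter rather than rewriting application logic.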

Security and Compliance

Security and compliance should also be considered before starting your cloud-native journey. This is doubly true for companies in sensitive industries like finance or healthcare. Keeping data from falling into the wrong hands is more important than ever, and failing to do so is hugely expensive.

Consider start-ups, for example, which are typically eager to embrace newer technologies. A breach can be a death sentence for these companies, which often lack the financial means to survive one. Proper vetting and testing are critical no matter how established an organization is, and security needs to be as airtight as possible.

You need to ask yourself whether going cloud native actually makes your systems more secure. Any vulnerability could lead to hacks, stolen IP, and account lockouts, which is problematic for employees and external users alike. Ensure that any services and platforms you use align with your own security philosophy, and confirm that they satisfy any compliance regulations you're legally bound by.

Financial Impacts

Last but not least, consider cost. Adopting a whole new infrastructure can be expensive, both in the hours committed to planning and in the vendor services you adopt. While it's possible to design your own cloud solution, going the popular public route incurs its own expenses. You should plan on being charged for the following:

  • Compute resources
  • Memory allocation
  • Remote storage
  • API calls
  • Server activity
  • Software subscriptions
  • Service contracts

If possible, cross-reference these costs with your application suite’s demands and your user activity. Make use of each vendor’s pricing tables and cost estimators. Be sure to assess base fees, tier-based usage costs, and how frequently those costs are applied.
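
A rough monthly estimate can be put together before you talk to any vendor. The following back-of-the-envelope sketch in Python uses made-up placeholder rates and usage figures; substitute the numbers from your provider's pricing table and your own telemetry.

```python
# Back-of-the-envelope monthly cloud cost estimate.
# All rates and usage figures below are hypothetical placeholders;
# replace them with your provider's price list and your own metrics.

monthly_usage = {
    "compute_vcpu_hours": 4_000,   # e.g. a handful of nodes running all month
    "memory_gb_hours": 16_000,
    "storage_gb_months": 500,
    "api_calls_millions": 12,
    "egress_gb": 300,
}

unit_rates_usd = {                 # assumed rates, not a real price list
    "compute_vcpu_hours": 0.04,
    "memory_gb_hours": 0.005,
    "storage_gb_months": 0.023,
    "api_calls_millions": 0.40,
    "egress_gb": 0.09,
}

fixed_costs_usd = {
    "software_subscriptions": 250.0,
    "support_contract": 400.0,
}

usage_cost = sum(monthly_usage[k] * unit_rates_usd[k] for k in monthly_usage)
total = usage_cost + sum(fixed_costs_usd.values())

for item, qty in monthly_usage.items():
    print(f"{item:>24}: {qty * unit_rates_usd[item]:8.2f} USD")
print(f"{'fixed costs':>24}: {sum(fixed_costs_usd.values()):8.2f} USD")
print(f"{'estimated monthly total':>24}: {total:8.2f} USD")
```

Running a table like this for two or three providers, and against what equivalent on-premises hardware would cost over the same period, makes tier thresholds and per-call fees much easier to spot.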

Notes on Kubernetes

Kubernetes is far and away the most popular container-orchestration platform on the planet. With upwards of 61 percent of backend developers now using containers, pairing them with an orchestrator like Kubernetes offers exceptional control over application performance, data governance, and associated configuration. It's why over 5.6 million users rely on Kubernetes to support their infrastructure.

That said, Kubernetes's power comes with some risks. Malicious or unauthorized individuals who gain access to it can damage your cloud ecosystem, either intentionally or inadvertently through misconfiguration. Additionally, you don't want to expose how your applications are running to just anyone. As with any system, it's important to control who has administrative access to Kubernetes within your organization. This reduces internal security risks and avoids handing everyone the keys to the kingdom.

Authenticated and authorized users should be able to access and alter only the systems they need. A Kubernetes admin should log in only to the clusters associated with their job role or team. If one team is responsible for managing service X and another for service Y, those individuals should stick to their respective products. Thankfully, Kubernetes offers tools supporting intelligent authorization, cluster-level access controls, and role-based access control (RBAC).
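
Kubernetes expresses this scoping with Role and RoleBinding objects (plus their cluster-wide counterparts). The sketch below, using the official Kubernetes Python client, grants one team read-only access to Pods in its own namespace; the namespace, group, and role names are placeholders, and it assumes a kubeconfig with permission to manage RBAC.

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # assumes a local kubeconfig with RBAC-admin rights
rbac = client.RbacAuthorizationV1Api()

namespace = "team-x"  # placeholder namespace owned by one team

# Role: read-only access to Pods, scoped to the team's namespace only.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": namespace},
    "rules": [{
        "apiGroups": [""],
        "resources": ["pods"],
        "verbs": ["get", "list", "watch"],
    }],
}

# RoleBinding: attach the Role to the members of that team.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": namespace},
    "subjects": [{"kind": "Group", "name": "team-x-developers",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader",
                "apiGroup": "rbac.authorization.k8s.io"},
}

rbac.create_namespaced_role(namespace=namespace, body=role)
rbac.create_namespaced_role_binding(namespace=namespace, body=binding)
```

The same manifests could just as easily live in YAML and be applied with kubectl; the point is that access is scoped to a single namespace and a small set of verbs rather than handing out cluster-admin.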

These access principles are applicable across your various cloud services and should be applied throughout your organization, whether for internal applications or those from outside developers. Restricting certain functions or “areas” of your ecosystem to specific individuals, teams, or the organization as a whole, based strictly on need and best practices, will help keep your information safe. Going cloud native can democratize access to essential tools, but proper oversight is still key.

Conclusion

As you can see, there are many things to consider when making the cloud-native transition. Embracing modernized approaches can be immensely beneficial when done right. However, the complexity of cloud-service ecosystems brings numerous challenges, and having clear goals and plans will help you overcome those hurdles.

Cloud-native transitions often impact numerous (if not all) business units simultaneously. Gathering input and feedback before, during, and after the process will keep everyone happier and more engaged.

The cloud brings plenty of responsibilities with it. However, the potential upside is attractive to those aiming to modernize their backend systems.

Photo by Dominik Schröder on Unsplash

Originally published at https://loft.sh.
