Urbano reaches new horizons with Kubernetes

E-commerce and goods delivery encountered difficulties in the pandemic and post-pandemic world, and rapid adaptation to these new needs was of utmost importance to keep services running. Urbano, one of the leading logistics companies in Argentina with more than 1,100 vehicles in its fleet, saw the need to improve its data management to cope with new demands, so it turned to DinoCloud for professional consultancy and advice on how to address these new requirements.
DinoCloud has vast experience and the technical resources to successfully create a cloud-first Kubernetes architecture with AWS EKS at its core. Our solution architecture team has deployed numerous Kubernetes-based infrastructures, resulting in effective, need-oriented cloud deployments and consultancy.

Enduring in changing times with container clusters

Since the beginning of e-commerce, logistics and carrier companies have faced new challenges in keeping up with the market’s fast-paced demands. During and after the pandemic, package demand rocketed to new heights, prompting new investments on the part of these companies to handle high package traffic. Not only did this involve investing in shipping materials, equipment, and vehicles, but also in technological services that could improve traffic and data management.

DinoCloud brought Urbano the possibility of migrating its infrastructure to a Kubernetes-based architecture. This was advised because of the rapid growth Urbano had experienced in recent years; the administrative and scaling complexity that the increased activity entailed was easier to manage with AWS EKS.

Initially, Urbano ran its architecture on on-premises infrastructure, which hindered shipping processes because it could not cope with the high volume of data the company was handling at the time. Knowing that a cloud-first architecture would allow for more traffic and better data management, Urbano reached out to DinoCloud to migrate its architecture to the cloud.

DinoCloud recommended migrating the architecture to a container-based cloud environment as the best way to handle high-volume data management. Urbano’s tech team validated this choice, agreeing that using containers to deploy, manage, and scale the application would be the best way to meet the market’s and the company’s needs.
Given the traffic volume the company was handling, DinoCloud proposed migrating Urbano’s on-premises architecture to the cloud using AWS ECS. However, soon after the implementation began, Urbano experienced a boom in transactions, and the administrative aspects of AWS ECS, albeit easy to configure, proved quite challenging to manage. That is why DinoCloud suggested stopping the migration to AWS ECS and moving the architecture to the Kubernetes-based AWS EKS instead.

Meeting requirements and company goals amid unstable markets 

As we stated before, the pandemic presented companies with new challenges. First, the supply chain was affected by government regulations, while, at the same time, social isolation boosted the demand for door-to-door delivery. Second, and as a result of the former, web pages received a flood of traffic, which translated into higher operational costs, as many people were needed to scale resources and keep the pages fully functional.

When the coronavirus crisis reached its peak, numerous nations throughout the world imposed lockdown measures, and the share of e-commerce in total retail sales reached previously unheard-of proportions. The e-commerce share reached as high as 31.3 percent in the United Kingdom, a country with an established e-commerce business, before leveling out later. The major nations where e-commerce had a bigger share of total retail in the most recent period (as of January 31, 2021) were the United Kingdom, the United States, and Canada, at 24%, 17%, and 15%, respectively.

Looking to build an auto-scaling infrastructure? We have the technical expertise to help you.

Auto-scaling and cost optimization to continue delivering high-quality services

In response to the surge in demand Urbano was experiencing, DinoCloud suggested that the best way to tackle it and bring more efficiency to the company’s procedures was to migrate from AWS ECS to AWS EKS. This Kubernetes-based service facilitates the complex management of multiple services and complex applications.

The implementation of AWS EKS at Urbano did not pose many challenges for DinoCloud’s architects, mainly because of the team’s experience and expertise. When the migration started, the architects decided it needed to be done gradually, one environment at a time. During the migration itself, the tech team stayed watchful for misconfigurations to keep the client’s system free of downtime. After each environment was migrated, validations were carried out to check its functionality.

There were no extensive downtime periods: just two hours in total, and only in non-production environments, achieved by working closely with the development team. We followed Kubernetes best practices, such as a GitOps workflow and service meshes, to avoid the common pitfalls of connecting new environments to new databases.
After successfully migrating all the environments, the DinoCloud team started optimizing auto-scaling and implementing tools and third-party apps to facilitate deployment and monitoring. Monitoring is an essential part of optimizing the infrastructure because it increases production efficiency by studying the metrics and staying ready to patch any problems immediately. Tests are also run to check how the environments are working and discover whether anything needs updating or improving before going into production.

The benefits of using Kubernetes

The main advantage Urbano gained is usage-based scaling, which improves availability and makes it possible to separate the company’s different services. Separation into containers allows each microservice to scale independently instead of having to scale a whole monolith.

Kubernetes provides good availability management because it guarantees the resources necessary to support the load and, at the same time, keeps costs to a minimum when resources are not needed.

Kubernetes enjoys constant updates, so keeping your production cluster up-to-date is one of the best practices recommended by the Kubernetes community. This was applied to Urbano’s infrastructure to guarantee the functionality of third-party apps and to check whether any of them had been deprecated. Keeping pace with updates allows changes to be implemented gradually rather than all at once, reducing the likelihood of downtime and pitfalls. Staying current also lets engineers keep learning about the components: when a component changes due to a version change, knowledge of that component is essential to understand what needs changing.

Finally, another advantage of Kubernetes is its agnosticism: there is no dependence on the underlying operating system, so migration procedures can be carried out much more quickly.

Having a cloud consulting partner to face fluctuating demands

The e-commerce market, and consequently logistics and carrier transactions, are prone to constant change. It is vital to rely on a tech company with the experience and expertise to tackle these changes promptly without affecting day-to-day business. As discussed in this article, Urbano experienced changing scenarios that led it to seek out a cloud consulting and development company like DinoCloud to clear those few rocks off the road.

DinoCloud is also an AWS Premier Partner, which alleviates many of the administrative complexities that a Kubernetes-based architecture carries, so our clients can rely on this partnership for growth, innovation, and efficiency.

About Urbano

Urbano provides logistics and courier services in Argentina and around the globe. The company offers a broad set of customized delivery and shipping options to cater to the enterprise sector and its ever-growing challenge of delivering more, quicker, and for the customer who wants a more effortless and faster purchasing experience. Urbano’s business philosophy is “what’s important is knowing how to get there,” and the pursuit of this ideal has led them to close deals with major accounts worldwide.

About DinoCloud

DinoCloud is an Argentine company whose purpose is to assist and guide companies in adopting global innovation and cloud computing technologies, growing their customers’ businesses to make them healthier and more competitive. DinoCloud offers an ample array of cloud services and specialties, assisting in both general and market-specific procedures.

LinkedIn: https://www.linkedin.com/company/dinocloud
Twitter: https://twitter.com/dinocloud_
Instagram: @dinocloud_
Youtube: https://www.youtube.com/c/DinoCloudConsulting

11 best practices to get your production cluster working from the get-go

Containers have become the norm for building cloud-native applications, and Kubernetes, commonly referred to as K8s, is undoubtedly the most popular container orchestration technology.

Popularity and usability are not the same thing, though: Kubernetes is a complicated system with a steep learning curve. While some of the following Kubernetes best practices and suggestions may not be appropriate for your environment, those that are can help you utilize Kubernetes more effectively and quickly.

This post will delve into 11 Kubernetes best practices to get the most out of your production cluster.

Always use the latest version

We’re kicking things off with a friendly reminder: keep your Kubernetes version updated. Apart from introducing new features and functionalities, new releases come with fixes and patches for vulnerabilities and security issues in your production cluster, which we think is one of the most salient advantages of keeping your K8s up-to-date.

However, the production team should thoroughly study and test all new features before updating, as well as any deprecated features or functionalities, to avoid losing compatibility with the applications running on the cluster. Updating the version without analyzing and testing it in a secure environment could hinder production times.

Create a firewall

This best practice may not come as a surprise to you, as having a firewall in front of your Kubernetes cluster seems common ground, but there are a lot of developers that do not pay attention to this.

So here’s another friendly reminder: create a firewall for your API server. A firewall will shield your K8s environment, preventing attackers from sending connection requests to your API server from the Internet. Whitelist IP addresses and restrict open ports using port firewalling rules.
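The allowlisting logic such a firewall enforces can be sketched in a few lines. Below is a minimal illustration using Python’s standard ipaddress module, with placeholder CIDR ranges; the real enforcement belongs in your firewall or cloud security group, not in application code:

```python
import ipaddress

# Hypothetical allowlist of CIDR ranges permitted to reach the API server.
ALLOWED_CIDRS = [ipaddress.ip_network(c) for c in ("10.0.0.0/8", "203.0.113.0/24")]

def is_allowed(source_ip: str) -> bool:
    """Return True if the source address falls inside an allowlisted range."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_CIDRS)

print(is_allowed("10.1.2.3"))      # internal address -> True
print(is_allowed("198.51.100.7"))  # unknown address -> False
```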

Use GitOps workflow


A Git-based workflow is the go-to method for a successful Kubernetes deployment. This workflow sparks automation by using CI/CD pipelines, improving productivity by escalating application deployment efficiency and speed.

Bear in mind, however, that Git must be the single source of truth for all automation, which will unify the management of the whole production cluster. Another option is to choose a dedicated infrastructure delivery platform, like Argo CD, a declarative GitOps continuous delivery tool for Kubernetes.
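For illustration, an Argo CD Application is itself declared as a manifest; the sketch below mirrors that structure as a Python dict, with a placeholder repository URL, path, and namespaces. Argo CD then keeps the cluster in sync with whatever the repository declares:

```python
# Sketch of an Argo CD Application manifest (repoURL, path, and namespaces
# are placeholders): the repository becomes the single source of truth.
application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "example-app", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://git.example.com/example/app.git",
            "path": "deploy/",
            "targetRevision": "HEAD",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "example-app",
        },
        # Automated sync: prune removed resources, revert manual drift.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}
```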

Are you stuck with GitOps?
We can help you with that

Audit your logs

Audit your logs regularly to identify vulnerabilities or threats in your cluster. Also, it is essential to maintain a centralized logging layer for your containers.

In addition, auditing your logs will tell you how many resources you are consuming per task in the control plane and will capture key event heartbeats. It’s crucial to keep an eye on the K8s control plane’s components to limit resource use. The control plane is the heart of K8s: it depends on components such as the API server, etcd, the scheduler, and the controller-manager to maintain the system’s functionality, while node components such as the kubelet and kube-proxy rely on it for proper K8s operations.
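Audit logging itself is configured declaratively through an audit Policy handed to the API server. The sketch below mirrors that structure as a Python dict, with illustrative rules: full request and response bodies for writes to pods, metadata only for everything else:

```python
# Sketch of a Kubernetes audit Policy (audit.k8s.io/v1) as a dict.
# Rule order matters: the first matching rule decides the audit level.
audit_policy = {
    "apiVersion": "audit.k8s.io/v1",
    "kind": "Policy",
    "rules": [
        # Log full bodies for write operations on pods (core API group "").
        {"level": "RequestResponse",
         "verbs": ["create", "update", "patch", "delete"],
         "resources": [{"group": "", "resources": ["pods"]}]},
        # Fall-through rule: record metadata only for everything else.
        {"level": "Metadata"},
    ],
}
```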

Make use of namespaces

Kubernetes comes with three namespaces by default: default, kube-public, and kube-system. Namespaces are critical for structuring your Kubernetes cluster and keeping it secure from other teams operating on the same cluster. If your Kubernetes cluster is vast (hundreds of nodes) and many teams or apps work on it, you need distinct namespaces for each team. Sometimes, different environments are created and designated to each team for cost-optimization purposes.

You should, for instance, designate separate namespaces for the development, testing, and production teams. By doing this, a developer who only has access to the development namespace cannot accidentally update anything in the production namespace. Without this separation, there is a real likelihood of unintentional overwrites by teammates with the best of intentions.
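A namespace is just another Kubernetes object. The sketch below builds one per team as a Python dict equivalent to the YAML you would apply with kubectl; the team names are illustrative:

```python
import json

def namespace_manifest(team: str) -> dict:
    """Build a minimal Namespace manifest for a team (name is illustrative)."""
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {"name": team, "labels": {"team": team}},
    }

# One namespace per environment/team, serialized as it would be applied.
for team in ("development", "testing", "production"):
    print(json.dumps(namespace_manifest(team)))
```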

Resource requests and limits

Resource limits define the maximum resources a container may use, whereas resource requests define the minimum it is guaranteed. Without resource requests or limits, pods in a cluster can consume more resources than necessary.

If a pod starts using more CPU or memory on the node, the scheduler might be unable to place additional pods, and the node itself might even crash. It is customary to specify CPU in millicores for both requests and limits, and memory in megabytes or mebibytes.
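To make those units concrete, here is a small Python sketch that converts the quantity strings used in requests and limits (e.g. "500m" CPU, "128Mi" memory) into cores and bytes. It covers only a subset of the units Kubernetes accepts:

```python
def parse_cpu(value: str) -> float:
    """Convert a Kubernetes CPU quantity ('500m' or '2') to cores."""
    if value.endswith("m"):
        return int(value[:-1]) / 1000  # millicores -> cores
    return float(value)

def parse_memory(value: str) -> int:
    """Convert a memory quantity ('128Mi', '64M') to bytes (subset of units)."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,   # binary suffixes
             "K": 1000, "M": 1000**2, "G": 1000**3}      # decimal suffixes
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * factor
    return int(value)  # plain bytes

# A request must never exceed its limit.
request, limit = parse_cpu("250m"), parse_cpu("1")
assert request <= limit
print(parse_cpu("500m"))      # -> 0.5
print(parse_memory("128Mi"))  # -> 134217728
```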

Use labels/tags

A Kubernetes cluster is made up of multiple components, including services, pods, containers, networks, and more. Managing all these resources and tracking how they relate to one another is challenging, and this is where labels help. Labels are key-value pairs that organize your cluster resources in Kubernetes.

Let’s imagine, for illustration, that you are running two instances of the same kind of program. Despite having identical names, each application is used by a separate team (e.g., development and testing). By creating a label that uses the team name to show ownership, you can help your teams differentiate between the similar applications.
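Equality-based label selection boils down to a subset check. The Python sketch below illustrates how a selector picks out objects whose labels match; the pod names and labels are illustrative:

```python
def matches(selector: dict, labels: dict) -> bool:
    """True when every key/value in the selector also appears in the labels
    (how Kubernetes equality-based selectors pick objects)."""
    return all(labels.get(k) == v for k, v in selector.items())

# Two instances of the same app, owned by different teams.
pods = [
    {"name": "app-1", "labels": {"app": "billing", "team": "development"}},
    {"name": "app-2", "labels": {"app": "billing", "team": "testing"}},
]
selector = {"team": "development"}
selected = [p["name"] for p in pods if matches(selector, p["labels"])]
print(selected)  # -> ['app-1']
```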

Role-Based Access Control


Your Kubernetes cluster is vulnerable to hacking, just like everything else. Hackers frequently look for weaknesses in the system to gain access. So, maintaining the security of your Kubernetes cluster should be a top priority, and verifying that Role-Based Access Control (RBAC) is in use is a first step.

Each user in your cluster and each service account running in your cluster should have a role. RBAC roles contain multiple permissions that a user or service account may exercise. Multiple users can share the same role, and each role can hold various permissions.
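For illustration, a minimal namespaced Role granting read-only access to pods can be sketched as a Python dict mirroring the rbac.authorization.k8s.io manifest; the namespace and role name are placeholders:

```python
# Sketch of a namespaced Role: read-only access to pods, nothing else.
pod_reader_role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"namespace": "development", "name": "pod-reader"},
    "rules": [
        {"apiGroups": [""],  # "" denotes the core API group
         "resources": ["pods"],
         "verbs": ["get", "watch", "list"]},
    ],
}

# A RoleBinding would then attach this role to a user or service account.
# Read-only by construction: no write verbs are granted.
assert "delete" not in pod_reader_role["rules"][0]["verbs"]
```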

Track network policies

Network policies serve to limit traffic between objects in the K8s cluster. By default, all containers can communicate over the network, which poses a security concern if bad actors gain access to a container and use it to move between objects in the cluster.

Just as security groups in cloud platforms limit access to resources, network policies can govern traffic at the IP and port level. Typically, all traffic should be denied by default, with rules implemented to permit necessary traffic.
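A deny-by-default baseline can be expressed as a single NetworkPolicy per namespace; the sketch below mirrors that manifest as a Python dict (the namespace is a placeholder). An empty podSelector matches every pod, and listing both policy types with no allow rules blocks all ingress and egress:

```python
# Sketch of a default-deny-all NetworkPolicy (networking.k8s.io/v1).
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-all", "namespace": "development"},
    "spec": {
        "podSelector": {},                      # empty selector = every pod
        "policyTypes": ["Ingress", "Egress"],   # no allow rules -> all denied
    },
}
```

Allow rules for necessary traffic would then be added as further, more specific NetworkPolicy objects.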

Are your application security-sensitive areas being overwatched?

Use readiness and liveness probes

Readiness and liveness probes function like health checks. Before routing load to a specific pod, a readiness probe verifies that the pod is active and operational. If the pod isn’t ready, requests are withheld from your service until the probe confirms the pod is up.

A liveness probe confirms the application is alive: it pings the pod in an attempt to get a response before checking its status. If nothing happens, the application isn’t running on the pod. If the check fails, Kubernetes restarts the container so the application can recover.
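For illustration, here is the probe section of a container spec sketched as a Python dict; the image, paths, ports, and timings are all placeholders:

```python
# Sketch of a container spec with both probe types configured.
container = {
    "name": "web",
    "image": "example/web:1.0",
    # Readiness: may this pod receive traffic?
    "readinessProbe": {
        "httpGet": {"path": "/ready", "port": 8080},
        "initialDelaySeconds": 5,
        "periodSeconds": 10,
    },
    # Liveness: is the application still alive, or should the
    # container be restarted?
    "livenessProbe": {
        "httpGet": {"path": "/healthz", "port": 8080},
        "initialDelaySeconds": 15,
        "periodSeconds": 20,
    },
}
```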

Service meshes

A service mesh is a dedicated infrastructure layer you can add to your applications. It lets you transparently add features like observability, traffic management, and security without adding them to your code. The phrase “service mesh” refers both to the software you employ to implement this pattern and to the security or network domain that results from its application.

As a distributed service deployment grows in size and complexity, such as in a Kubernetes-based system, it can become harder to understand and manage. Its requirements may include measurement, monitoring, load balancing, failure recovery, and discovery. Additionally, a service mesh frequently takes care of more complex operational needs like end-to-end authentication, rate limiting, access control, encryption, and canary deployments.

The ability to communicate between services is what enables distributed applications. As the number of services increases, routing communication within and between application clusters becomes more difficult.

These Kubernetes best practices are just a tiny sample of all those available to make Kubernetes a more straightforward and beneficial technology to use while developing applications. As we said in the introduction, Kubernetes has a steep learning curve.

Even with the ever-increasing number of tools and services to speed up the procedures involved, that can be overwhelming for development teams already swamped with the numerous duties required in modern application development. But if you start with these pointers, you’ll be well on your way to adopting Kubernetes to advance your complex application development initiatives.

Prevent and reduce vulnerabilities in your Kubernetes cluster

At DinoCloud, we have an excellent team of engineers and architects with vast experience in Kubernetes environments. Let’s find out how we can help you overcome the difficulties in developing your cloud application.
