Conf42 Kube Native 2024 - Online

- premiere 5PM GMT

Empowering Digital Transformation through Cloud Technologies and Java Integration

Abstract

Unlock the future of software development with the powerful combination of Java and cloud technologies! Discover how microservices, serverless computing, and CI/CD integration drive 75% faster development and 78% improved scalability.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hello everyone. My name is Arvind Akula. I work as a staff software engineer at Guaranteed Rate, mainly on Java and cloud technologies. My talk today is on empowering digital transformation using cloud and Java technologies. Let's start.

Here is today's agenda: we will talk about the synergy between cloud and Java, give an overview of both technologies, look at serverless computing and CI/CD pipelines, walk through a demo, and wrap up with a summary.

First, an introduction to cloud services. Cloud technology represents a transformational approach to how we manage and deliver IT services, by providing on-demand access to IT resources such as servers, storage (for example S3), and databases over the internet. With the traditional approach, before cloud computing, IT infrastructure required a lot of upfront investment: we first had to decide how many servers we needed and only then build the application. With cloud computing we can start small and grow over time, and that flexibility and efficiency is a big advantage for the organizations that leverage it.

Next, Java's role in modern software development. One of the main pillars of Java's philosophy is portability: you write one Java application and you can run it anywhere there is a JVM. You can write an application on Windows, take the class file (the bytecode), and run it on any JVM on Linux or macOS and get the same output; a tiny sketch of this follows below. Java also has very robust frameworks such as Spring Boot and Micronaut, with which we can easily develop scalable, high-performance applications. On the TIOBE index, an indicator used to gauge a programming language's popularity, Java consistently ranks high, and it has long been the preferred choice for enterprise applications because of its extensive library and community support.

Now, cloud and Java growth. We just talked about why cloud computing has been adopted so widely by businesses and organizations: it has become the backbone of digital transformation, enabling businesses to scale, innovate more, and optimize costs like never before. The future predictions are staggering: the global cloud computing market is expected to reach about $832.1 billion, growing at around 17.5 percent annually. That reflects how much businesses rely on cloud computing for their operational and strategic needs.

At the same time, Java's relevance endures. Java remains a cornerstone of the software development ecosystem. Even though it is a very old programming language, we still see enterprise environments using Java because of its robust ecosystem, its platform-independence philosophy, its seamless integration with many other systems, and the extensive support of Java-based libraries.
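To make the write-once-run-anywhere point concrete, here is a trivial sketch; the class is mine, not from the talk, and simply shows that the same compiled .class file runs unchanged on any JVM.

```java
// Compile once, e.g. on Windows:  javac Greeter.java
// Run the resulting Greeter.class unchanged on Linux or macOS:  java Greeter
public class Greeter {
    public static void main(String[] args) {
        // Same bytecode, same output on every JVM
        System.out.println("Hello from any JVM");
    }
}
```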
Now, the synergy between Java and the cloud. We just talked about scalability and portability, so let's see how they actually work together. On scalability, the cloud gives us auto scaling: the ability to automatically scale based on demand. On the Java side we have multithreading, which delivers very high performance because a Java application can handle many users at the same time. How does it manage that? Java has something almost magical called the garbage collection mechanism: if a Java application has unused objects, the JVM automatically deallocates that memory so it is available for other requests. In C and C++ you have to do this manually; in Java the JVM does it automatically. Cloud elasticity is simply how the cloud providers implement that auto scaling, and it will be explained in a few minutes.

Take the example of an e-commerce application and how it handles huge traffic surges during the holiday season. The platform can automatically increase the number of replicas the application needs to handle the increased traffic, and once the traffic drops it can scale the configuration back down to reduce cost, all without manual intervention. The depiction here shows how auto scaling is handled: we have Route 53, a load balancer that receives the traffic and forwards requests to the applications, two nodes, and three Java instances on the right side.

Next, the features and overview of both cloud and Java. We briefly talked about the write once, run anywhere philosophy. This philosophy complements the cloud: because a Java application is written once and runs anywhere, you can run it on any cloud platform, which increases redundancy and helps with cost optimization. You can deploy the same Java application on AWS and on GCP. One example discussed here is a financial institution that uses both AWS and Google Cloud for redundancy; Java's core write-once-run-anywhere principle complements that approach. The diagram shows this portability: the same Java application deployed on AWS, on Google Cloud, and on Azure.

The next feature is resilience, which ensures high availability and fault tolerance of cloud services. Cloud platforms have this inherently built in, and we'll see some examples of how. When Java platforms are combined with cloud-native architecture, we get design features such as automatic recovery from failures and the ability to handle unexpected surges in demand by automatically increasing the number of instances based on incoming traffic. Cloud-enabled resilience relies on regional failover, automatic restarts, and system backups; whenever a Java-based application is deployed on AWS or Kubernetes, it can take advantage of all these features and self-heal. On the Java side, we have a robust programming language where we can do exception handling very cleanly, with strong support for logging frameworks such as Log4j.
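As a rough sketch of that exception-handling and logging point (the controller and endpoint names are my own, not from the talk), a Spring Boot handler can catch failures and log them through SLF4J/Log4j so that the log and alerting tooling discussed next can pick them up:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative controller: failures are logged so downstream tooling
// (Elasticsearch/Kibana dashboards, Grafana alerts) can act on them.
@RestController
public class PaymentStatusController {

    private static final Logger log = LoggerFactory.getLogger(PaymentStatusController.class);

    @GetMapping("/payments/status")
    public ResponseEntity<String> status() {
        try {
            return ResponseEntity.ok(checkDownstreamSystem());
        } catch (Exception e) {
            // Structured error log; an alerting rule (e.g. several 500s in a
            // short window) can page the on-call team.
            log.error("Payment status check failed", e);
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                    .body("Service temporarily unavailable");
        }
    }

    private String checkDownstreamSystem() {
        return "OK"; // placeholder for a real downstream call
    }
}
```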
In Spring Boot we can also integrate with tools like Kibana and Elasticsearch to debug issues, and we can use Grafana for alerting: for example, if we see three 500 errors within a particular time frame, we can raise an alert to the on-call team. Those are the kinds of things we can do on the logging side. On the cloud side we have AWS CloudWatch, Azure, and Google Cloud logging, so we can monitor there as well and combine the two; together, cloud and Java make for a very resilient application. As an example, take a healthcare application that runs 24/7 and is deployed in multiple geographic regions. With the cloud it is resilient by default, so you don't have to do anything manually: in the example it is deployed across US East and US West, and if there is a problem, failover happens without manual intervention. The diagram here also shows exception handling on the cloud side, where we use AWS CloudWatch and CloudTrail to see which exceptions are occurring.

Next, the cross-cloud deployment scenario, where we deploy the same Java application to multiple clouds. In the real world, organizations adopt this multi-cloud strategy for a few reasons. First, it avoids vendor lock-in: we don't have to stick with one particular vendor for a long time. Second, it enhances resilience through geo-redundancy. Say there is an outage on one particular provider, for example Google Cloud Platform; if we only run there, we are fully impacted. If we deploy the same application in a cross-cloud scenario, we have that redundancy, a backup on another cloud platform, so the application stays available even during an outage on the current platform. Java's role in cross-cloud is the same write-once-run-anywhere philosophy, which makes it very advantageous: the same application can be deployed to different clusters using Docker and Kubernetes. We will see an example in a few minutes of how to containerize an application and deploy it to K8s. The steps are: develop the application in Spring Boot, containerize and package it with Docker, ensure it runs consistently across environments, and deploy it to Kubernetes.

Let's take an example. Say we have a big retail company running both North American and European operations, and we choose two different platforms, AWS and Google Cloud. By deploying to both, we enable cross-cloud deployment: there is no vendor lock-in, we get resilience and redundancy (even if there is an outage we don't have to worry), and cost is optimized, because if there is, say, a better offer on the European side, the customer can take advantage of it.
This depiction shows how that works: Docker Hub, and two different regions on two different cloud platforms running the same Java application. Now a quick overview of the providers. We have AWS, Microsoft Azure, and GCP, and each is strong in its own areas. Most capabilities are common to all of them, but Google specializes in big data, machine learning, and data analytics; the same goes for Microsoft in its areas of strength; and Amazon has been the leading cloud provider, with extensive global support and advanced features.

Coming to microservice architecture: this is an architectural style where one big application is decomposed into several independent services that can be accessed over the network. Compared to a monolithic architecture, microservices give us scalability, flexibility, and fault isolation. Take one example: if one small activity fails in a monolithic architecture, the entire application can go down. Say we are running a report in a monolith; it might take a lock on the database and impact the whole application. With microservices we don't have to worry about that: reporting, wire transfers, or any other banking activity can be divided into small independent services, and if one service fails, it won't impact the others. That's where we get fault isolation. We also get flexibility: if more users are hitting one particular service, we can scale up just that service, so we achieve flexibility and scalability. And with Spring Boot we can create these kinds of services very easily, in no time.

Next, serverless computing. With serverless computing we run code in response to events without managing or provisioning servers: we don't operate any server ourselves, yet we can still run the code, which is why it's called serverless. A good example is AWS Lambda, where we can execute Java or Python code based on events or an HTTP request. Take an example: say you are using AWS DynamoDB as your database, and for every new user login you want to do some reporting work in the backend, such as recording when the user logged in. You can enable a DynamoDB stream, create a Lambda with the business logic, and store the results in RDS. That's one example where you don't need to worry about the infrastructure at all: we just focus on the business logic, and it is very cost effective because the cloud provider takes care of the rest.
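As a hedged sketch of that login-reporting idea (the handler name and record fields are mine, and the RDS write is stubbed out), a Java Lambda subscribed to the DynamoDB stream might look roughly like this:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent.DynamodbStreamRecord;

// Sketch only: a Lambda triggered by a DynamoDB stream, reacting to new
// login records. The reporting step (e.g. an RDS write) is stubbed out.
public class LoginReportHandler implements RequestHandler<DynamodbEvent, Void> {

    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        for (DynamodbStreamRecord record : event.getRecords()) {
            if ("INSERT".equals(record.getEventName())) {
                // A new login row was written to DynamoDB; read the field
                // from the stream image and record it for reporting.
                String userId = record.getDynamodb()
                                      .getNewImage()
                                      .get("userId")   // hypothetical attribute name
                                      .getS();
                context.getLogger().log("User logged in: " + userId);
                // saveToReportingDatabase(userId);  // hypothetical RDS write
            }
        }
        return null;
    }
}
```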
Next, CI/CD integration. This is one of the primary practices in organizations that use cloud and Java services, and its benefits are faster development and better quality; we'll see how. There are many tools here, such as Jenkins. First the definition: CI/CD stands for continuous integration and continuous deployment, a practice that enables frequent code changes and deployments from your Git repositories. Compare that to the old approach: whenever there was a new change, we used to create a build and manually deploy it to a server on a particular day, and if you wanted to deploy to production with, say, eight nodes, you had to do everything manually. With cloud computing and CI/CD, whenever a developer pushes code, an automatic build can be triggered in Jenkins, which integrates easily with GitHub or GitLab, builds the application, and deploys it to a particular environment such as dev or QA. This gives a faster development cycle, automates the testing and deployment process, reduces development time, and makes it easy for the team to find bugs early, before they ever reach production.

There are two real-world scenarios here: a financial institution using the cloud, and an e-commerce application using a microservice architecture. The outcomes are simple, and we saw the examples before: improved scalability and performance from cloud technologies, faster development and deployment, and a reduction in operational cost from using cloud services. Similarly, with microservices there is improved system reliability and reduced time to market for new features, because when you integrate microservices with CI/CD, the deployment lifecycle shrinks.

These are the key concepts and takeaways from the talk: virtualization, containerization, and serverless computing. Virtualization abstracts the physical hardware and creates virtual machines, which optimizes resource usage. With containerization, Docker packages applications for consistent deployment, and Kubernetes (K8s) handles container orchestration, deployment, and management. With serverless computing, we focus mainly on the business logic without worrying about how to manage servers, leveraging the cloud platform's infrastructure.

Now let's see a demo of how to achieve this, with a simple example. Here we have a Spring Boot application with a REST controller. It has a GetMapping for /hello, so whenever an HTTP request comes in for that path, the method is called and the request is processed. Next we see a Dockerfile; this is the containerization piece, used to containerize the Spring Boot app by packaging the entire application and its dependencies into a container image. The first line, FROM openjdk, specifies the base image of the container and makes sure a JVM is available to run the application. We set the working directory to /app and copy in the jar file built above, the Spring Boot jar. We EXPOSE 8080, the port the application runs on, and the ENTRYPOINT specifies the command that runs when the container starts.
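A minimal sketch of the two files described in the demo follows; the talk does not show the exact source, so the class name, jar path, and image tag here are placeholders.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// The REST controller from the demo: an HTTP GET on /hello returns a response.
@SpringBootApplication
@RestController
public class HelloApplication {

    @GetMapping("/hello")
    public String hello() {
        return "Hello from the Spring Boot microservice!";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }
}
```

And the Dockerfile that packages it into a container image:

```dockerfile
# Base image providing a JVM to run the application
FROM openjdk:17-jdk-slim
# Working directory inside the container
WORKDIR /app
# Copy the Spring Boot fat jar built by Maven/Gradle (path is illustrative)
COPY target/hello-app.jar app.jar
# Port the Spring Boot app listens on
EXPOSE 8080
# Command that starts the container
ENTRYPOINT ["java", "-jar", "app.jar"]
```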
Next, the Kubernetes deployment YAML. In this deployment YAML we define the kind of resource, a Deployment, and the most important configuration is in the spec: the replicas. The replicas field tells Kubernetes how many instances of the Java application to run. In this example we set it to two, meaning two replicas of the same Java application are running, which ensures high availability of the application. In the spec's container section we give the application name and the image; this is the Docker image used for the container. Here it is pulled from AWS, but you could change it to Google Cloud or whatever registry you need, so it determines which provider is used. The relevance of this Kubernetes deployment is that it automates the management of the containerized application: we declare where it should be deployed and how many replicas should run, and autoscaling can be added on top of this.

Then there is the Kubernetes service YAML. Here we define the API version, the name of the application, and the LoadBalancer type. The Kubernetes service takes responsibility for exposing our service to the public: whenever an HTTP request comes in, the load balancer decides which replica to call, and if there is any issue with one of the replicas, traffic automatically goes to another one; the Kubernetes load balancer takes care of that. We also define the target port here.

Finally, the Terraform script, which makes the whole setup infrastructure as code. Instead of doing the configuration by hand every time, DevOps engineers and developers write Terraform: run terraform init, and once everything is verified, run terraform apply. Here we define the AWS EKS module as the source, the cluster name, the cluster version, and the node groups: a desired capacity of two, a minimum of one node, and a maximum of three nodes. Whatever we define here is our infrastructure as code, so whenever it is needed we can run the script and it will create all the infrastructure.

The final output of this demo is a Kubernetes cluster running behind a load balancer and returning a response from the Spring Boot microservice. So: a simple Spring Boot microservice, containerized with Docker, deployed to an EKS Kubernetes cluster, and exposed via the load balancer. The end result is a highly available and scalable application accessed via a public URL. If you look at this architecture, the Spring Boot microservice together with Docker, K8s, and the cloud services ensures the application is resilient, scalable, and portable across all the cloud platforms, accessible around the world.
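The manifests and Terraform described above might look roughly like this. Resource names, labels, and the registry path are placeholders, and the Terraform fragment follows the shape described in the talk (the EKS module's interface varies by version, and networking inputs such as VPC and subnets are omitted).

```yaml
# Deployment: two replicas of the containerized Spring Boot app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2                      # two instances for high availability
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          # Image pulled from a registry (ECR here; could be GCR, Docker Hub, ...)
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-app:latest
          ports:
            - containerPort: 8080
---
# Service: a cloud load balancer exposing the replicas on a public URL.
apiVersion: v1
kind: Service
metadata:
  name: hello-app
spec:
  type: LoadBalancer
  selector:
    app: hello-app
  ports:
    - port: 80
      targetPort: 8080
```

```hcl
# Infrastructure as code: an EKS cluster with a managed node group,
# sized as described in the talk (min 1, max 3, desired 2).
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "hello-app-cluster"
  cluster_version = "1.29"

  node_groups = {
    default = {
      desired_capacity = 2
      min_capacity     = 1
      max_capacity     = 3
    }
  }
}
```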
So the conclusion is that Java and the cloud represent a powerful synergy. Microservices developed in Java, a powerful programming language as we discussed, drive digital transformation across multiple industries: take Netflix, e-commerce applications like Amazon and eBay, and the many financial and banking organizations that use this combination. Cloud computing offers significant advantages, including scalability, cost efficiency, and operational agility, and it enables most organizations by letting them leverage AWS, Microsoft Azure, and Google Cloud. That's all I have. Thank you, everyone. All the best. Bye bye.
...

Arvind Akula

Staff Software Engineer @ Rate

Arvind Akula's LinkedIn account


