The Concept of AWS as Today's Leading Cloud Technology

Control Over the Data Is All We Need

Organizations need to make the best decisions quickly for business growth, but a decision without solid supporting data is risky. What if I said that all of our decisions, whether small or big, depend on customer data? Such data can come from customer ratings, feedback, problems, suggestions, and more. Putting customers first in this way makes better business outcomes far more likely, and empowering your decision-making process with AI/ML predictions can give exceptional results while keeping business goals on track.

This brings me to the next point: choosing the right technology and platform, one that allows you to build and scale highly available, resilient, and secure applications. AWS has long been the first preference of customers because it provides best-in-class services with a rich set of features. AWS has always made security its priority, because that's what businesses worry about most in the public cloud. Its global infrastructure lets developers experiment with their applications to make them faster, more resilient, and fault-tolerant. AWS has been one of the biggest contributors to modern application development. Let's talk about some of those contributions:

Serverless: When AWS launched AWS Lambda in 2014, it was revolutionary: there are no servers to manage, functions run in response to events, and, most of all, you pay only for the time your code actually runs. A serverless application is a combination of Lambda functions, event sources, and other resources that work together to perform tasks. Note that a serverless application is more than just a Lambda function; it can include additional resources such as APIs, databases, and event source mappings. AWS provides the Serverless Application Model (AWS SAM), an open-source framework that makes building serverless applications fairly easy. Lambda has gained many new features since launch: it supports almost every major language, and recently it added support for custom runtimes. In November 2019 AWS announced AWS Lambda Destinations for asynchronous invocations, a feature that provides visibility into Lambda function invocations and routes the execution results to other AWS services, simplifying event-driven applications and reducing code complexity. Let's take a sample application below and see how easy AWS has made it:
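As a minimal sketch (the handler name and event shape here are hypothetical, not from any specific application), a Lambda function is just a handler that receives an event and returns a result. With Destinations configured, that result is routed automatically:

```python
import json

def handler(event, context):
    # A minimal asynchronous Lambda handler. With Lambda Destinations
    # configured, the returned value is routed to the on-success destination
    # (e.g. an SQS queue or another function); a raised exception would be
    # routed to the on-failure destination instead.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}"}),
    }
```

The destination itself is configured outside the code, via the Lambda console or the `PutFunctionEventInvokeConfig` API, so the function body stays free of routing logic.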

Real-time car ride app scenario: A simple serverless web application enables users to request a car ride from a fleet. The application presents users with an HTML-based user interface for indicating the location where they would like to be picked up, and interfaces on the backend with a RESTful web service to submit the request and dispatch a nearby car. The application also lets users register with the service and log in before requesting rides.
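The "dispatch a nearby car" step could be a Lambda function behind the REST API. Here is a rough sketch, with a hypothetical in-memory fleet standing in for what would really be a database lookup:

```python
import json
import math

# Hypothetical in-memory fleet; in the real app this would come from a
# database such as DynamoDB.
FLEET = [
    {"id": "car-1", "lat": 40.75, "lng": -73.99},
    {"id": "car-2", "lat": 40.71, "lng": -74.01},
]

def request_ride(event, context):
    # Backend handler for the ride request: pick the car whose coordinates
    # are closest to the requested pickup point.
    pickup = event["pickup"]  # e.g. {"lat": 40.74, "lng": -73.99}
    nearest = min(
        FLEET,
        key=lambda car: math.dist(
            (car["lat"], car["lng"]), (pickup["lat"], pickup["lng"])
        ),
    )
    return {"statusCode": 200, "body": json.dumps({"dispatched": nearest["id"]})}
```

In the full application, API Gateway would invoke this handler for authenticated users and the response would flow back to the HTML front end.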

Amazon EKS and Amazon ECS: In recent years microservices have made a strong impact on application development because they provide benefits such as agility, decoupling, decentralized governance, and autonomy. In response to customer demand, AWS first introduced Amazon ECS, a container orchestration service through which we can provision, manage, and scale containers easily. Many new features have since been added to ECS; the most popular of them is AWS Fargate, which can be used with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. Amazon ECS also integrates with AWS services such as IAM, ECR, ELB, and CloudFormation. It is easy to develop and manage two or three applications in a microservices architecture because you have only a few containers, but a big organization with hundreds of applications and thousands of containers spread across multiple hosts will have a hard time managing them. Meanwhile, Kubernetes was gaining popularity in the open-source community, so again on customer demand AWS launched Amazon EKS (Elastic Kubernetes Service) in June 2018. Amazon EKS is a managed Containers-as-a-Service offering that significantly simplifies the deployment of Kubernetes on AWS. With EKS, you simply create your own Kubernetes workers through the EKS wizard; creating the Kubernetes control plane and configuring networking, service discovery, and other Kubernetes primitives is done for you. EKS is meant to be a turnkey drop-in replacement for custom Kubernetes clusters, and most existing tooling works with EKS with little to no modification.
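To give a feel for how little ECS asks of you, here is a sketch of a minimal Fargate task definition. The helper function is hypothetical; the field names follow the ECS API:

```python
def fargate_task_definition(family, image, cpu="256", memory="512"):
    # Build a minimal Fargate task definition. Fargate requires the
    # "awsvpc" network mode and task-level cpu/memory settings.
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",
        "cpu": cpu,
        "memory": memory,
        "containerDefinitions": [
            {"name": family, "image": image, "essential": True},
        ],
    }

# With AWS credentials configured, this could be registered via boto3, e.g.:
#   boto3.client("ecs").register_task_definition(
#       **fargate_task_definition("web", "nginx:latest"))
```

Everything else, like placement, patching, and host capacity, is Fargate's problem, not yours.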

Let's talk about a basic back-end process that uses a hybrid cloud architecture. As you can see in the sample application below, the data is provided by on-premises servers in the form of files that are saved into an S3 bucket, our object-based storage in the cloud. Our application running in EKS takes the files from the S3 bucket, processes them according to the business functionality, and stores the resulting data in a NoSQL database: DynamoDB.
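The processing step in that EKS service could look something like the sketch below. The record schema and table name are assumptions for illustration; the parsing is kept as a pure function, with the boto3 wiring shown only in comments:

```python
import json

def process_records(raw_lines):
    # Core processing step: parse one JSON record per line (as uploaded to
    # S3 by the on-premises servers) and shape each into a DynamoDB item.
    items = []
    for line in raw_lines:
        if not line.strip():
            continue  # skip blank lines in the uploaded file
        rec = json.loads(line)
        items.append({
            "customer_id": rec["id"],        # partition key (assumed schema)
            "rating": rec.get("rating", 0),
            "feedback": rec.get("feedback", ""),
        })
    return items

# Inside the EKS service this would be wired up with boto3, roughly:
#   body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"].read()
#   table = boto3.resource("dynamodb").Table("CustomerFeedback")
#   for item in process_records(body.decode().splitlines()):
#       table.put_item(Item=item)
```

Keeping the transformation separate from the AWS calls also makes the business logic easy to unit-test without any cloud resources.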

Now you have the data that you need for predictions or business strategy. Here we can use AI/ML to check for patterns and create predictions, or to give us enough insight to help us build better business strategies in the future.

AWS has many AI/ML services such as:

SageMaker: Amazon SageMaker is a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine-learning models, and then deploy them directly into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to your data sources for exploration and analysis, so you don't have to manage servers.

It also provides common machine learning algorithms that are optimized to run efficiently against extremely large datasets in a distributed environment. With native support for bring-your-own algorithms and frameworks, Amazon SageMaker offers flexible distributed training options that adjust to your specific workflows.
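SageMaker's built-in algorithms read their training data from S3 "channels". As a small sketch (the helper and the train/validation prefix layout are assumptions about how your data is organized):

```python
def training_channels(bucket, prefix):
    # S3 input channels that SageMaker built-in algorithms typically expect:
    # one URI per named channel. The train/validation layout is an assumed
    # convention for this example.
    return {
        "train": f"s3://{bucket}/{prefix}/train/",
        "validation": f"s3://{bucket}/{prefix}/validation/",
    }

# With the SageMaker Python SDK these channels would feed an estimator,
# roughly along these lines:
#   est = sagemaker.estimator.Estimator(image_uri, role, instance_count=1,
#                                       instance_type="ml.m5.xlarge")
#   est.fit(training_channels("my-bucket", "feedback-model"))
```

After `fit` completes, the same estimator can be deployed to a hosted endpoint for real-time predictions.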

"Everything fails, all the time."

This is a famous quote from AWS CTO Werner Vogels, who points out a simple truth: your system or application will eventually fail, usually sooner rather than later. The only thing we can do is prepare ourselves. That's where monitoring, backup, disaster recovery, and automation strategies come into the picture.

Let's talk about monitoring and alerts. Logs are vital here because they contain the record of events or messages that occur in an operating system, piece of software, application, or server (both on-premises and in the cloud, such as EC2); logging is the act of keeping such a log. It allows us to debug, monitor, track errors, and receive alerts from the application. Every service generates its own logs, and it quickly becomes too complicated to locate or trace errors across them. So we need a central location where all the logs are saved and a simple way to access it. This is the concept of CENTRALIZED LOGGING. Centralized logging in AWS:

A comprehensive log management and analysis strategy is mission-critical: it enables organizations to understand the relationship between operational, security, and change-management events and to maintain a comprehensive understanding of their infrastructure. AWS customers have access to service-specific metrics and log files to gain insight into how each AWS service is operating, and many services capture additional data such as API calls, configuration changes, and billing events. Log files from web servers, applications, and operating systems also provide valuable data, though in different formats and in a scattered, distributed fashion. To effectively consolidate, manage, and analyze these different logs, many AWS customers implement centralized logging solutions using either self-managed tools or AWS Partner Network (APN) offerings. Two AWS services are especially useful here: CloudWatch for monitoring services, and CloudTrail for auditing activity in them.
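As a tiny illustration of turning logs into alerts, here is a pure helper that counts errors in CloudWatch-style log events; the helper itself is hypothetical, and the boto3 call that would feed it is shown only in comments:

```python
def count_errors(log_events):
    # log_events: dicts shaped like CloudWatch Logs events,
    # each carrying a "message" string.
    return sum(1 for e in log_events if "ERROR" in e.get("message", ""))

# Against real CloudWatch Logs this would be fed with boto3, roughly:
#   events = boto3.client("logs").filter_log_events(
#       logGroupName="/aws/lambda/my-function",
#       filterPattern="ERROR")["events"]
# An alert could then fire whenever count_errors(events) crosses a threshold.
```

In practice you would usually let a CloudWatch metric filter and alarm do this counting server-side, but the principle is the same.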

Let's talk about two centralized logging solutions for our sample application, using two of the most recognized log analysis tools in the industry:

ELK Stack: "ELK" is the acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize the data in Elasticsearch with charts and graphs.

AWS has Amazon Elasticsearch Service (Amazon ES), which is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.

In the figure above, "Centralized logging with AWS Elasticsearch", I have sketched a sample solution showing how clients with a hybrid cloud model can use Amazon Elasticsearch: all logs from on-premises services are transferred to AWS as log files in S3. Similarly, logs from different AWS services can be forwarded to Amazon Elasticsearch by Lambda functions, and we can also use Lambda to move the on-premises log files from S3 into Amazon Elasticsearch. This gives us a centralized place for logs, while Kibana, which comes built in with Elasticsearch, serves as the dashboard for accessing log information.
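A shipping Lambda typically batches log lines using Elasticsearch's `_bulk` request format: one "index" action line followed by the document itself, newline-delimited. A minimal sketch (the helper is hypothetical; the body format follows the Elasticsearch Bulk API):

```python
import json

def to_bulk_body(index, docs):
    # Build an Elasticsearch _bulk request body: for each document, emit an
    # index-action line and then the document source line. The body must be
    # newline-delimited JSON and end with a trailing newline.
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

# The Lambda would then POST this body to the Amazon Elasticsearch domain's
# /_bulk endpoint, signing the request with SigV4 credentials.
```

Batching like this keeps the number of HTTP requests low even when a log file contains thousands of lines.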

Splunk: Splunk provides industry-leading software to consolidate and index any log and machine data, including structured, unstructured, and complex multi-line application logs. You can collect, store, index, search, correlate, visualize, analyze, and report on any machine-generated data to identify and resolve operational and security issues in a faster, more repeatable, and more affordable way. For the time being AWS has no managed Splunk service, but we can create EC2 instances and host Splunk ourselves. If you already have Splunk in an on-premises data center, Splunk add-ons give us the capability to send our logs to Splunk directly.

Let's take an example:

In the figure above, "Centralized logging with on-premises Splunk", I have sketched a sample solution for a client that runs Splunk on-premises with a hybrid cloud model. We can ship all the logs of our AWS services to the on-premises Splunk, while on-premises services transfer their logs to Splunk directly. This gives us a centralized place for logs and a single dashboard for accessing log information.
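One common way to push AWS-side logs into Splunk is its HTTP Event Collector (HEC). Here is a sketch of building the event payload; the helper and default field values are assumptions for illustration:

```python
import json
import time

def hec_payload(message, source="aws", sourcetype="_json"):
    # Event payload in the shape Splunk's HTTP Event Collector accepts at
    # its /services/collector endpoint.
    return json.dumps({
        "time": int(time.time()),
        "source": source,
        "sourcetype": sourcetype,
        "event": message,
    })

# A forwarding Lambda or agent would POST this payload with the header
#   Authorization: Splunk <HEC token>
# to the collector endpoint on the Splunk host (port 8088 by default).
```

With the HEC token scoped to a single index, the cloud side never needs full Splunk credentials.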

To summarize: we have taken a small part of an organization's business operations and shown the importance of data for making better business strategies. The latest technologies, such as AI/ML and AWS services, can empower customers to build efficient and reliable architectures. Clients can build architectures with multiple models, but hybrid cloud is the most widely used. In those architectures, centralized logging enables the client to monitor, track errors, debug, and automate alerts across the systems and services of the architecture. Without centralized logging, it becomes a logistical nightmare to research a single transaction that may have been processed on any one of an array of app servers, since your support staff would have to log into each server and search through it. In this blog, I have presented two centralized-logging solutions built on two major log analysis tools: ELK and Splunk.



Published On: December 5th, 2019 / Categories: Technology /
