From Monolith to Microservices: A Gradual Approach with Modular Monolithic Architecture

As software developers, we are always looking for better ways to build applications that are stable, maintainable, and scalable. In this blog post, I will share my experience with using the Modular Monolithic architecture.

What is a Modular Monolith?

Modular Monolithic architecture is an architectural pattern where the application is divided into separate modules that work together as a single, cohesive unit. In this pattern, each module is responsible for a specific set of features and has well-defined interfaces for communicating with other modules. Modules should be created based on business domains to ensure that each module has a clear and specific responsibility.
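As a small illustration (the names are made up, not from the original post), a well-defined module interface in Java could be a thin facade: the billing module exposes one public interface, and other modules call only that interface instead of reaching into its internals.

import java.util.UUID;

// Public entry point of the billing module; the only type other modules should depend on.
public interface BillingFacade {
    UUID createInvoice(UUID orderId);
}

// Package-private implementation; persistence and domain logic stay hidden inside the module.
class BillingService implements BillingFacade {

    @Override
    public UUID createInvoice(UUID orderId) {
        // ...validate the order and persist the invoice here...
        return UUID.randomUUID(); // id of the newly created invoice
    }
}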

When to Use Modular Monolithic Architecture

A modular monolithic architecture can be a good choice for quickly getting a project off the ground and adapting to changing business requirements. By starting with a modular monolith, it is possible to move gradually towards a microservices architecture without the need for a complete rewrite of the codebase.

When building a new system, it is important to focus on understanding the business domain first. You need to identify the business capabilities, processes, and data that the system needs to support. Once you have a clear understanding of the business domain, you can then start thinking about how to partition the system into individual services.

If you are not sure about the domain boundaries, you may want to start with a monolithic architecture and gradually move towards a microservices architecture as you gain more knowledge and experience. A monolithic architecture allows you to build the system as a single, cohesive unit, and it can be easier to refactor and break down into microservices later on. This approach can help you to identify the domain boundaries and dependencies between modules, which can then be used to guide the partitioning of the system into microservices.

Interactions between Modules

It is essential to decide on a consistent pattern for how modules should interact with each other. Otherwise, the lack of standardization may lead to code that is difficult to maintain.

In general, there are several common patterns for module interaction:

Function calls:
One module calls functions or methods in another module to request an operation or data. However, this can lead to tight coupling: when one module relies heavily on another module's functions or methods, changes in the called module cascade into its callers. Depending on the number of function calls, the performance of the system may also be affected by the overhead of those calls.

Events:
One module publishes events to a message bus or event stream, and other modules subscribe to these events to react to them. However, when modules rely on events, there may be a lag between the time an event is generated and the time that other modules receive and process the event, leading to eventual consistency issues.
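As a minimal sketch (assuming Spring's in-process application events; the class and event names are illustrative, not from the original post), the ordering module can publish an event and the notification module can react to it without either module calling the other directly:

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Event published by the ordering module when an order is placed.
class OrderPlacedEvent {
    private final String orderId;

    OrderPlacedEvent(String orderId) {
        this.orderId = orderId;
    }

    String getOrderId() {
        return orderId;
    }
}

@Service
class OrderService {

    private final ApplicationEventPublisher events;

    OrderService(ApplicationEventPublisher events) {
        this.events = events;
    }

    public void placeOrder(String orderId) {
        // ...persist the order...
        events.publishEvent(new OrderPlacedEvent(orderId));
    }
}

// The notification module subscribes to the event; it never calls OrderService directly.
@Component
class OrderNotificationListener {

    @EventListener
    public void on(OrderPlacedEvent event) {
        // ...send an order confirmation for event.getOrderId()...
    }
}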

HTTP APIs:
Modules expose HTTP APIs that other modules can use to request data or trigger operations. However, since HTTP APIs rely on network calls, the latency of the system may be affected, leading to performance issues. As the system evolves, the APIs may need to change, leading to versioning challenges and backward compatibility issues.

Databases in the Context of Modular Monolithic

It is essential to prevent data owned by one module from being accessed directly by other modules. If this boundary is not enforced, we can end up with a tightly coupled, traditional monolith that is hard to split later.
In general, we have two patterns for managing persistence state in Modular Monolithic architecture:

Database per module:
Each module has its own database schema and interacts with it directly. This approach provides isolation between modules and can simplify data management.
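As a small illustration (assuming JPA with schema-qualified tables; the names are made up, not from the original post), each module can map its entities to its own schema so that other modules never touch those tables directly:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Entity owned by the ordering module; its table lives in the module's own "ordering" schema.
@Entity
@Table(name = "purchase_order", schema = "ordering")
public class PurchaseOrder {

    @Id
    private Long id;

    private String status;
}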

Database per bounded context:
Each module has its own database schema, but the schemas are organized by bounded contexts, which represent different areas of the system's business domain. This pattern is common in systems designed along Domain-Driven Design lines, where module and bounded-context boundaries tend to coincide.

When to Extract a Module as a Separate Service

While modular monolithic architecture provides a flexible and adaptable approach to building applications, there may come a time when one or more modules outgrow the monolithic architecture and need to be extracted as separate services. One common reason for this is when a module becomes too large and requires more resources to scale than can be efficiently handled within the monolithic architecture.

To determine if a module should be extracted as a separate service, consider the following:

  1. Is the module experiencing performance issues due to its size or resource requirements?
  2. Are there other modules in the system that depend heavily on the module in question?
  3. Are there clear boundaries around the module’s functionality and business domain that make it suitable for extraction as a separate service?
  4. Are there benefits to extracting the module as a separate service, such as improved scalability, fault tolerance, or the ability to independently deploy and maintain the module?

Launching Spring Boot Apps on AWS ECS Fargate

Deploying Spring Boot applications can be challenging, especially when it comes to managing containers in a production environment. However, by using AWS ECS Fargate, you can easily deploy and manage your Spring Boot applications in a scalable and reliable way. In this blog post, we'll explore the benefits of using AWS ECS Fargate for Spring Boot application deployment, and walk through the steps to prepare your application for deployment on ECS Fargate. We'll cover creating a Docker image and setting up an ECS Fargate cluster and task definition. Let's get started!

Quick Overview about ECS

AWS ECS (Elastic Container Service) is a fully managed container orchestration service that makes it easy to run, manage, and scale Docker containers on AWS. ECS provides several key concepts for managing containerized applications:

Cluster: An ECS cluster is a logical grouping of container instances that run your tasks. Each cluster can contain multiple container instances, and each instance can run multiple tasks.

Task: An ECS task is a running instance of a Docker container that has been launched from a task definition. A task definition is a blueprint for how to run a Docker container, including the Docker image to use, the CPU and memory requirements, the network settings, and more.

Service: An ECS service is a long-running task that runs continuously in the background, ensuring that your application is always available. A service is defined by a task definition, and it can be scaled up or down based on demand. ECS can automatically manage the load balancing and auto scaling of services.

Batch: An ECS batch job is a task that is launched for a finite amount of time to perform a specific job, like running a batch script or processing a batch of data. Batch jobs are defined by a job definition, which specifies the Docker image to use, the CPU and memory requirements, and other parameters. Batch jobs can be run on demand or scheduled to run at specific times.

By using ECS, you can easily deploy and manage containerized applications, while taking advantage of AWS features like load balancing, auto scaling, and monitoring.

ECS vs ECS Fargate
With ECS, you manage your own EC2 instances that run your containerized applications.

AWS ECS Fargate is a serverless container orchestration service that allows you to run containers without having to manage the underlying EC2 instances. With Fargate, AWS takes care of the infrastructure for you, allowing you to focus on running and scaling your containers. Fargate provides the same key concepts as ECS, including clusters, tasks, services, and batch jobs.

The main difference between ECS and ECS Fargate is that with ECS, you manage the EC2 instances that run your containers, whereas with Fargate, AWS manages the instances for you. This makes Fargate a more hands-off approach to container management, which can be beneficial if you don't want to deal with the operational overhead of managing EC2 instances.

Let's create a simple Spring Boot application that we will deploy on ECS.

The code for this post is available on Github here

1. Create a simple Spring Boot app
Create a REST API that returns a simple String.

@RestController
public class HelloController {

    @GetMapping("/hello")
    public String hello() {
        return "Hello Message from ECS Service";
    }
}

2. Create a Docker image
Add a simple Dockerfile.

FROM openjdk:11
COPY target/spring-boot-ecs-Fargate-0.0.1-SNAPSHOT.jar spring-boot-ecs-Fargate-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "/spring-boot-ecs-Fargate-0.0.1-SNAPSHOT.jar"]

Run the below command from the root of your project. This will create a Docker image named 'spring-boot-ecs'.
docker build -t spring-boot-ecs .
Run the below command to verify the Docker image is working. You should be able to access the API at http://localhost:8080/hello
docker run -p 8080:8080 spring-boot-ecs

3. Create an ECR repo for uploading the image
AWS ECR (Elastic Container Registry) is a fully-managed Docker container registry that makes it easy to store, manage, and deploy Docker container images. ECR is tightly integrated with ECS, making it easy to store and manage your Docker images for use in ECS tasks.

Here are some key concepts to understand about ECR:

Repository: An ECR repository is a collection of Docker images that share the same name but can have different tags. Each repository has a unique URL that you can use to access the images.

Image: An ECR image is a Docker image that has been uploaded to an ECR repository. Images can have multiple tags, allowing you to have different versions of the same image.

Registry: An ECR registry is a collection of ECR repositories that you own. Each AWS account can have one ECR registry.

To use ECR with ECS, you first need to create an ECR repository to store your Docker images. You can then build your Docker image locally, tag it with the ECR repository URL and push it to ECR using the docker push command. Once your Docker image is in ECR, you can reference it in your ECS task definitions and services.

ECR provides a secure and scalable way to manage your Docker images, and it integrates seamlessly with ECS to provide a complete container management solution.

Create an ECR repo by following the step-by-step guide.

Upload the image
Create a separate IAM user that has permission to push images to the ECR repo. You can use the AWS CLI for uploading images. Tag the local image with the repository URI and push it:
docker tag spring-boot-ecs:latest <your-aws-account>.dkr.ecr.<your-region>.amazonaws.com/spring-boot-ecs:latest
docker push <your-aws-account>.dkr.ecr.<your-region>.amazonaws.com/spring-boot-ecs:latest

4. Setting up an AWS ECS Fargate cluster and task definition

Here are the steps to follow:
Create an ECS cluster, which is a logical grouping of container instances.

Define a task definition for your application, which specifies the Docker image to use, along with any required configuration.

Configure the network settings and security groups for your task definition.

Deploying your Spring Boot application on AWS ECS Fargate

With your ECS cluster and task definition in place, you can now deploy your Spring Boot application on AWS ECS Fargate. Here are the steps to follow:

Create an ECS service for your task definition, which specifies the number of tasks to run, along with the desired load balancing and auto scaling settings.

Once the service is successfully deployed, you can access the API by using the public IP address of the running task.


Spring Boot & AWS RDS Part 3- Secrets-Manager

The previous articles covered using AWS RDS with Spring Boot and RDS read replicas with Spring Boot. This post continues the same topic; in this post I will show you how to access RDS credentials from AWS Secrets Manager.

Managing application secrets such as database credentials and API keys is always a critical aspect of application security. Nowadays, almost all enterprise applications have strict constraints against storing secrets in plain text. Secrets also need to be rotated at regular intervals.

AWS Secrets Manager helps us easily manage and rotate credentials from a central place. Secrets Manager enables us to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to the Secrets Manager APIs, eliminating the need to hard-code sensitive information in plain text. Secrets Manager has built-in integration with AWS services like RDS, Redshift, and DocumentDB.

The code for this post is available on Github here

Creating Secrets for RDS Instance

On the AWS Console, go to AWS Secrets Manager -> Secrets -> Store a new secret, then select Credentials for Amazon RDS database and create the secret as shown.

Retrieving secrets from secrets-manager

Now let's update our Spring Boot app to retrieve the secrets from Secrets Manager. Fortunately, there is a very nice and easy-to-use aws-secretsmanager-jdbc library for this.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-aws-jdbc</artifactId>
    <version>2.2.6.RELEASE</version>
</dependency>

Updating Data source configurations

Now we have to update data source configurations in application.properties so that the application can pick up the database credentials.

configuration without secret manager
spring.datasource.url=jdbc:postgresql://<database-endpoint-url>:<port>/<database> 
spring.datasource.username=admin1
spring.datasource.password=Admin123
configuration with secret manager
spring.datasource.url=jdbc-secretsmanager:postgresql://<database-endpoint-url>:<port>/<database> 
spring.datasource.username=dev/test-rds-secret-1
spring.datasource.driver-class-name=com.amazonaws.secretsmanager.sql.AWSSecretsManagerPostgreSQLDriver

Observe that:

  1. The JDBC URL prefix changed to jdbc-secretsmanager.
  2. The secret name is used as the username.
  3. The driver class is the wrapper driver provided by the aws-secretsmanager-jdbc library.

Other driver classes
com.amazonaws.secretsmanager.sql.AWSSecretsManagerMySQLDriver
com.amazonaws.secretsmanager.sql.AWSSecretsManagerOracleDriver
com.amazonaws.secretsmanager.sql.AWSSecretsManagerMSSQLServerDriver

Now if you run the application, it should connect to the database.

Note
For running the application locally, the AWS profile should be configured correctly and the user should have access to read secrets from Secrets Manager.

config credentials
[default]
aws_access_key_id = XXXXX
aws_secret_access_key = XXXXX

How does the magic happen?

The magic of connecting to the database is done by the JDBC driver class provided by aws-secretsmanager-jdbc. When the application requests a connection, the wrapper class AWSSecretsManagerPostgreSQLDriver makes an API call to Secrets Manager to retrieve the credentials.

What happens when secrets are rotated?

The aws-secretsmanager-jdbc library does not call the AWS Secrets Manager API every time a connection is requested. Because calling the Secrets Manager API is relatively expensive, it uses a cache. The cache policy is Least Recently Used (LRU), so when the cache must discard a secret, it discards the least recently used one. By default, the cache refreshes secrets every hour.

When the cache has not yet expired but the secret in AWS Secrets Manager has been rotated or changed, the driver uses a fallback mechanism: if the database returns an error for the stale username/password, the driver class makes a fresh API call to AWS Secrets Manager to get the new credentials.

The code for this post is available on Github here


Spring Boot & AWS RDS Part 2 - Read Replicas

In the previous post we discussed how to use Spring Boot to access the AWS RDS service. This post continues the same topic and explores how to configure and use read replicas.

What is a Database Read Replica?

In general, a read replica is a copy of the primary database instance that automatically reflects changes made in the primary database in near real time. Read replicas can improve the performance of read-heavy database workloads by offloading read traffic from the primary instance.

How does an AWS RDS read replica work?

Amazon RDS uses the MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL DB engines' built-in replication functionality to create a read replica. Any updates made to the primary DB instance are asynchronously copied to the read replica.

AWS allows you to create read replicas in the same availability zone, in a different availability zone, or even in a different region. Up to 5 read replicas can be created per source instance.

Why use read replicas?

  1. Read replicas can significantly improve performance by redirecting read traffic to one or more read replicas.
  2. Read replicas are ideal for business reporting or data-warehousing workloads, without impacting normal business flows.
  3. In some cases a read replica can be used for disaster recovery, since a read replica can be promoted to become the primary database instance.

How to create read replicas in the AWS Console?

While creating a read replica we need to specify an existing DB instance as the source. Amazon RDS then takes a snapshot of the source instance and creates a read-only instance from the snapshot. The read replica operates as a DB instance that allows only read-only connections. Applications connect to a read replica the same way they connect to any DB instance.

On the AWS Console, choose the DB instance that you want to use as the source for a read replica. Then go to Actions, choose Create read replica, and follow the same steps explained in the previous post.

All the data from the primary instance will also be available in the replica. We can verify the data by connecting to the replica using pgAdmin or any other similar tool.

How to use read replicas with a Spring Boot app

In the previous post we used spring-boot-starter-data-jpa. To use the full power of read replicas, in this example we will use Spring Cloud AWS JDBC, as it provides some useful features:

  1. Spring Cloud AWS automatically detects the read-replica instance, and if read replica support is enabled, it automatically sends read requests to the replica instance. As application developers, we do not have to configure multiple data sources.
  2. Spring Cloud AWS automatically retries in case of a database failure, attempting to send the same request to a different availability zone.
  3. As application developers, we do not need to worry about how many read replicas are configured.

Setting up the Spring Boot app

Let's create a simple Spring Boot app that will interact with the primary database and read replicas.
Apart from the other needed dependencies, we need to add spring-cloud-aws-jdbc:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-aws-jdbc</artifactId>
    <version>2.2.6.RELEASE</version>
</dependency>

The code for this post is available on Github here

Configuring data source

The data sources can be configured using the Spring Boot configuration files. Because of the dynamic number of data sources inside one application, the Spring Boot properties must be configured for each data source.

data source configuration properties
cloud.aws.rds.<DB-Instance-ID>.username=admin1
cloud.aws.rds.<DB-Instance-ID>.password=Admin123
cloud.aws.rds.<DB-Instance-ID>.databaseName=employee

How to enable read-replica

enable read replica
cloud.aws.rds.employee-db.readReplicaSupport=true

How to redirect read traffic to the read replica instance

For redirecting traffic to the replica instance we just need to use transactions and set the @Transactional property readOnly = true.

@Service
@RequiredArgsConstructor
public class EmployeeService {

    private final EmployeeRepository repository;

    @Transactional
    public void saveEmployeeToDatabase(Employee employee) {
        repository.save(employee);
    }

    @Transactional(readOnly = true)
    public List<Employee> findAll() {
        return repository.findAll();
    }
}

Write workload and replication

All write transactions will be directed to the primary DB instance, and AWS will handle the replication asynchronously without impacting the performance of the primary DB instance.

Points to keep in mind before using read replicas

The read-replica feature of RDS can increase throughput and performance, but replication is not exactly real time. There will be some lag in copying data from the primary instance to the replica, and a read replica might return outdated data in some scenarios.

The code for this post is available on Github here


Spring Boot & AWS RDS - Part 1

What is AWS RDS?

AWS RDS is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. AWS RDS provides multiple DB engine options such as MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server.

Amazon RDS handles routine database tasks such as provisioning, patching, backup, recovery, failure detection, and repair, which brings a lot of convenience to RDS users. RDS also provides other features like replication, enhanced availability, and reliability.

In this article we will examine how to use Spring Boot to access AWS RDS PostgreSQL. Amazon RDS for PostgreSQL provides access to the capabilities of the familiar PostgreSQL database engine.

Creating PostgreSQL DB Instance on AWS

Go to RDS -> Databases -> Create database to create a new database instance. Select PostgreSQL as the engine type.
For this demo I am using the settings below. (These configurations are not recommended for production usage.)

  1. Free tier
  2. DB instance identifier: employee-db
  3. Credential: Master username
  4. VPC: Default
  5. Public Access: True
  6. Security Group: Default
  7. Initial database name: employee

Note
Only setting Public Access: True for your database might not be enough. The security group should also allow inbound traffic from your IP address (or from all IP addresses).

Once the database is ready, note the endpoint URL, which we will use as spring.datasource.url.

Setting up Spring Boot Project

Let's create a simple Spring Boot app that will interact with RDS.

We do not need any AWS-specific dependency; only the JPA, spring-web, and postgresql dependencies are needed.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
</dependency>

Data source configurations
spring.datasource.url=jdbc:postgresql://<database-endpoint-url>:<port>/<database> 
spring.datasource.username=admin1
spring.datasource.password=Admin123
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=update
spring.jpa.hibernate.show-sql=true

Once the database connection is configured, we can simply use the JPA repository to interact with the database.

Controller
@RestController
@Slf4j
@RequestMapping("/employee")
@RequiredArgsConstructor
public class EmployeeController {

    private final EmployeeRepository repository;

    @PostMapping
    public ResponseEntity createEmployee(@RequestBody CreateEmployeeRequest request) {
        repository.save(new Employee(request.getId(), request.getFirstName(), request.getLastName()));
        return ResponseEntity
                .status(HttpStatus.CREATED)
                .build();
    }

    @GetMapping
    public List<Employee> getAllEmployee() {
        return repository.findAll();
    }
}
Entity And Repository
@Entity
@AllArgsConstructor
@NoArgsConstructor
@Data
public class Employee {

    @Id
    private UUID id;
    private String firstName;
    private String lastName;
}

public interface EmployeeRepository extends JpaRepository<Employee, UUID> {
}

The code for this post is available on Github here


Spring Boot With AWS S3

In the previous post we discussed how to use Spring Boot to access the AWS SQS service. In this article we will examine how to use Spring Boot to access AWS S3.

Spring Cloud provides a convenient way to interact with the AWS S3 service. With the help of Spring Cloud S3 support we can use all the well-known Spring Boot features. It also offers several useful features compared to the SDK provided by AWS.

The code for this post is available on Github here

Using Spring Cloud

To use S3 support we just need to add the below dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-aws</artifactId>
</dependency>

Providing AWS credentials and SDK configuration

In order to make calls to AWS services, credentials must be configured for the Amazon SDK. To access the S3 service we can configure the access key and secret key using YAML or properties files:
document:
  bucket-name: spring-boot-s3-poc
cloud:
  aws:
    region:
      static: us-east-1
      auto: false
    credentials:
      access-key: XXX
      secret-key: XXXXX

Creating an AmazonS3 client bean
The AmazonS3 client bean can be used to perform different operations on the AWS S3 service.

AmazonS3 Client Configuration
@Configuration
public class Config {

    @Value("${cloud.aws.credentials.access-key}")
    private String awsAccessKey;

    @Value("${cloud.aws.credentials.secret-key}")
    private String awsSecretKey;

    @Value("${cloud.aws.region.static}")
    private String region;

    @Primary
    @Bean
    public AmazonS3 amazonS3Client() {
        return AmazonS3ClientBuilder
                .standard()
                .withRegion(region)
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
                .build();
    }
}

Find all objects in a bucket

The listObjectsV2 method can be used to get all object keys from the bucket:
@GetMapping
public List<String> getAllDocuments() {
    return amazonS3.listObjectsV2(bucketName).getObjectSummaries().stream()
            .map(S3ObjectSummary::getKey)
            .collect(Collectors.toList());
}

Upload object to S3 bucket

We can use the putObject method on our AmazonS3 client bean to upload an object to the S3 bucket. It provides multiple overloaded methods to upload an object as a File, String, InputStream, etc.
Let's take the example of uploading a MultipartFile to the S3 bucket.

Uploading MultipartFile to S3 bucket
@PostMapping
public ResponseEntity uploadDocument(@RequestParam(value = "file") MultipartFile file) throws IOException {
    String tempFileName = UUID.randomUUID() + file.getName();
    File tempFile = new File(System.getProperty("java.io.tmpdir") + "/" + tempFileName);
    file.transferTo(tempFile); // Convert multipart file to File
    String key = UUID.randomUUID() + file.getName(); // Unique key for the file
    amazonS3.putObject(bucketName, key, tempFile); // Upload file
    tempFile.deleteOnExit(); // Delete temp file
    return ResponseEntity.created(URI.create(tempFileName)).build();
}

Download object from S3 bucket

We can use the getObject method on our AmazonS3 client bean to get an object from the S3 bucket. getObject returns an S3Object, which can be converted to a ByteArrayResource.

Download object from S3 bucket
@GetMapping("/{fileName}")
public ResponseEntity<ByteArrayResource> downloadFile(@PathVariable String fileName) throws IOException {
    S3Object data = amazonS3.getObject(bucketName, fileName); // fileName is key which is used while uploading the object
    S3ObjectInputStream objectContent = data.getObjectContent();
    byte[] bytes = IOUtils.toByteArray(objectContent);
    ByteArrayResource resource = new ByteArrayResource(bytes);
    objectContent.close();
    return ResponseEntity
            .ok()
            .contentLength(bytes.length)
            .header("Content-type", "application/octet-stream")
            .header("Content-disposition", "attachment; filename=\"" + fileName + "\"")
            .body(resource);
}

Deleting object from S3 bucket

We can use the deleteObject method on our AmazonS3 client bean to delete an object from the bucket.

Delete Object
@DeleteMapping("/{fileName}")
public ResponseEntity deleteDocument(@PathVariable String fileName) {
    log.info("Deleting file {}", fileName);
    amazonS3.deleteObject(bucketName, fileName); // fileName is key which is used while uploading the object
    return ResponseEntity.ok().build();
}

Creating a presigned URL for accessing objects for a limited time

We can use the generatePresignedUrl method on our AmazonS3 client bean to generate a presigned URL which will be valid until the provided expiration time.

get presignedUrl
@GetMapping("/presigned-url/{fileName}")
public String presignedUrl(@PathVariable String fileName) throws IOException {

    return amazonS3
            .generatePresignedUrl(bucketName, fileName, convertToDateViaInstant(LocalDate.now().plusDays(1)))
            .toString(); // URL will be valid for 24hrs
}
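The convertToDateViaInstant helper used above is not shown in the post; a minimal sketch (assuming it simply converts the LocalDate into the java.util.Date that generatePresignedUrl expects, using the system default time zone) could look like this:

// Requires java.util.Date, java.time.LocalDate and java.time.ZoneId imports.
private Date convertToDateViaInstant(LocalDate dateToConvert) {
    // Start of the given day in the default time zone, converted to java.util.Date.
    return Date.from(dateToConvert.atStartOfDay(ZoneId.systemDefault()).toInstant());
}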

Note:
On application startup, you might see an exception related to Metadata or RegistryFactoryBean. You need to exclude some auto-configuration. You can find more details at
https://stackoverflow.com/a/67409356/320087

exclude autoconfigure
spring:
  autoconfigure:
    exclude:
      - org.springframework.cloud.aws.autoconfigure.context.ContextInstanceDataAutoConfiguration
      - org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration
      - org.springframework.cloud.aws.autoconfigure.context.ContextRegionProviderAutoConfiguration

The code for this post is available on Github here


Spring Boot With AWS SQS

Spring Cloud messaging support provides a convenient way to interact with the AWS SQS service. With the help of Spring Cloud messaging support we can use all the well-known Spring Boot features. It also offers several useful features compared to the SDK provided by AWS.

The code for this post is available on Github here

Create a standard AWS SQS queue

Navigate to AWS console -> Simple Queue Service -> Create queue. Then select Standard queue, provide a name for the queue, and click Create queue.

Create an IAM role and IAM group which will have access to our queue.

Using Spring cloud messaging

The Spring Cloud AWS messaging module comes as a standalone module and can be imported with the following dependency
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-aws-messaging</artifactId>
</dependency>

Providing AWS credentials and SDK configuration

In order to make calls to AWS services, credentials must be configured for the Amazon SDK. To access the SQS service we can configure the access key and secret key using YAML or properties files:
cloud:
  aws:
    region:
      static: us-east-1
      auto: false
    credentials:
      access-key: XXXX
      secret-key: XXXX
    end-point:
      uri: https://sqs.us-east-1.amazonaws.com/549485575026/spring-boot-poc

Sending messages to SQS

In order to send messages to an SQS queue, Spring Cloud AWS provides QueueMessagingTemplate, which uses AmazonSQSAsync.

Configuration for QueueMessagingTemplate
@Configuration
public class SQSConfig {

    @Value("${cloud.aws.region.static}")
    private String region;

    @Value("${cloud.aws.credentials.access-key}")
    private String accessKey;

    @Value("${cloud.aws.credentials.secret-key}")
    private String secretKey;

    @Bean
    public QueueMessagingTemplate queueMessagingTemplate() {
        return new QueueMessagingTemplate(amazonSQSAsync());
    }

    @Bean
    @Primary
    public AmazonSQSAsync amazonSQSAsync() {
        return AmazonSQSAsyncClientBuilder.standard().withRegion(Regions.US_EAST_1)
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
                .build();
    }
}

QueueMessagingTemplate provides a convenient convertAndSend method which can be used to send domain objects as messages. QueueMessagingTemplate delegates the conversion process to an instance of the MessageConverter interface. This interface defines a simple contract to convert between Java objects and SQS messages.
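The Pojo class used in the publisher and consumer below is not shown here; a minimal sketch (assumed: one field plus a no-args constructor and getters/setters so that Jackson can serialize and deserialize it; the actual class in the repository may differ) might look like this:

// Hypothetical message payload used in the examples below.
public class Pojo {

    private String name;

    public Pojo() {
    }

    public Pojo(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Pojo{name='" + name + "'}";
    }
}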

Message Publisher
@Component
@Slf4j
public class Publisher {

    @Autowired
    private QueueMessagingTemplate queueMessagingTemplate;

    @Value("${cloud.aws.end-point.uri}")
    private String endpoint;

    @Scheduled(fixedRate = 1000)
    public void scheduleFixedRateTask() {
        log.info("Sending Message to SQS ");
        //queueMessagingTemplate.send(endpoint, MessageBuilder.withPayload("Niraj").build());
        queueMessagingTemplate.convertAndSend(endpoint, new Pojo("SomeRandomValue"));
    }
}

Consuming Messages from SQS

Spring Cloud AWS provides a convenient annotation, @SqsListener. In the example below, a queue listener container is started that polls the spring-boot-poc queue. The incoming message is converted to the type of the method argument, in this case Pojo.

As the deletionPolicy is set to ON_SUCCESS, the message will be deleted from the queue only when it is successfully processed by the listener method (no exception thrown). We can set a global deletion policy for all queues consumed by @SqsListener using the property cloud.aws.sqs.handler.default-deletion-policy=ON_SUCCESS.

Message Consumer
@Component
@Slf4j
public class Consumer {

    @SqsListener(value = "spring-boot-poc", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
    public void processMessage(Pojo message) {
        log.info("Message from SQS {}", message);
    }
}

Note:
On application startup, you might see an exception related to Metadata or RegistryFactoryBean. You need to exclude some auto-configuration. You can find more details at
https://stackoverflow.com/a/67409356/320087

exclude autoconfigure
spring:
  autoconfigure:
    exclude:
      - org.springframework.cloud.aws.autoconfigure.context.ContextInstanceDataAutoConfiguration
      - org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration
      - org.springframework.cloud.aws.autoconfigure.context.ContextRegionProviderAutoConfiguration

The code for this post is available on Github here


Spring Boot With Hibernate Envers

Hibernate Envers provides an easy and flexible way to implement database auditing and versioning. Database auditing in the context of JPA means tracking and logging the changes on persisted entities. Database audit logs are important from a compliance perspective and are also a great help in identifying how and what data has been changed.

Hibernate Envers integrates very easily with Spring Boot JPA.

The code for this post is available on Github here

To use Envers in a Spring Boot application, we need to add the below dependency.

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-envers</artifactId>
    <version>5.4.30.Final</version>
</dependency>

To audit changes that are performed on an entity, we need to add the @Audited annotation to the entity.

Consider a UserDetails entity for which we want to enable auditing:
@Entity
@Audited
public class UserDetails {

    @Id
    private Integer userId;
    private String firstName;
    private String lastName;
}

In order to log all the changes to the entity, Envers needs the REVINFO table and an <Entity>_AUD table; in this case it will be USER_DETAILS_AUD.
The REVINFO table contains the revision id and revision timestamp. A row is inserted into this table on each new revision, that is, on each commit of a transaction that changes audited data.

The Flyway migration scripts for the audit table and the REVINFO table will look like below:
CREATE TABLE REVINFO (
    REV INTEGER GENERATED BY DEFAULT AS IDENTITY,
    REVTSTMP BIGINT,
    PRIMARY KEY (REV)
);

CREATE TABLE USER_DETAILS (
    USER_ID INTEGER PRIMARY KEY,
    FIRST_NAME VARCHAR(50) NOT NULL,
    LAST_NAME VARCHAR(50) NOT NULL
);

CREATE TABLE USER_DETAILS_AUD (
    USER_ID INTEGER NOT NULL,
    FIRST_NAME VARCHAR(50),
    LAST_NAME VARCHAR(50),
    REV INTEGER NOT NULL,
    REVTYPE INTEGER NOT NULL,
    PRIMARY KEY (USER_ID, REV)
);

Now when we insert, update, and delete the UserDetails entity, an audit log will be saved in the USER_DETAILS_AUD table.

For the below code we should expect 4 rows in the USER_DETAILS_AUD table:
private void dataSetup(UserDetailsRepository userRepository) {
    UserDetails userDetails = new UserDetails(1, "NIRAJ", "SONAWANE");
    userRepository.save(userDetails); // Create

    userDetails.setFirstName("Updated Name");
    userRepository.save(userDetails); // Update-1

    userDetails.setLastName("Updated Last name"); // Update-2
    userRepository.save(userDetails);

    userRepository.delete(userDetails); // Delete
}

The REVTYPE column value is taken from the RevisionType enum, which has the values:
0 - Add
1 - Update
2 - Delete
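To read the audit history back, Envers offers the AuditReader API (org.hibernate.envers.AuditReaderFactory). Below is a minimal sketch (assuming an injected EntityManager; this service is illustrative and not part of the original post) that lists the revisions of a UserDetails record and loads its state at a given revision:

@Service
public class UserDetailsAuditService {

    @PersistenceContext
    private EntityManager entityManager;

    // All revision numbers at which the given user was created, modified, or deleted.
    @Transactional(readOnly = true)
    public List<Number> findRevisions(Integer userId) {
        AuditReader auditReader = AuditReaderFactory.get(entityManager);
        return auditReader.getRevisions(UserDetails.class, userId);
    }

    // The entity state as it was at the given revision.
    @Transactional(readOnly = true)
    public UserDetails findAtRevision(Integer userId, Number revision) {
        return AuditReaderFactory.get(entityManager).find(UserDetails.class, userId, revision);
    }
}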

The code for this post is available on Github here


Consumer Driven Contract Test Using Spring Cloud Contract


This post discusses what a consumer-driven contract test is and how to implement one using Spring Cloud Contract.

The code for this post is available on Github here

Contract Testing

Contract tests are a set of automated tests that verify that two separate services adhere to a predefined contract and are compatible with each other.
The aim of contract testing is to make sure that contracts are always kept up to date and that each service (provider and consumer) can be tested independently.

Consumer Driven Contract test

In consumer-driven contract testing, consumers are responsible for providing the contract details. In this strategy the consumers of an API are at the heart of the API design process, and providers are obliged to fulfil their consumers' expectations. Frameworks like Pact and Spring Cloud Contract provide a set of tools to implement consumer-driven contract tests.

Spring Cloud Contract

Spring Cloud Contract provides a set of tools for implementing consumer-driven contract tests for Spring-based applications. It has two major components: the Contract Verifier for producers and the Stub Runner for consumers.

Sample application

Let's write some contract tests. Assume the consumer application Service-A requires a status API from the provider application Service-B, which will provide the current status of a user.

In a consumer-driven contract strategy, the consumers of a service need to define exactly what they want from the producer in the form of written contracts. You can provide contracts in Groovy or YAML format.

Provider : Service-B

import org.springframework.cloud.contract.spec.Contract

Contract.make {
    description "should return user status"

    request {
        url "/status"
        method GET()
    }

    response {
        status OK()
        headers {
            contentType applicationJson()
        }
        body(
            id: 1,
            status: "CREATED"
        )
    }
}
Implement Contract
@RestController
class UserStatusController(private val userStatusService: UserStatusService) {

    @GetMapping("/status")
    fun getStatus(): ResponseEntity<UserStatus> {
        return ResponseEntity.ok(userStatusService.getUserStatus(1))
    }
}

data class UserStatus(val id: Int, val status: String)

@Service
class UserStatusService {

    fun getUserStatus(userId: Int): UserStatus {
        return UserStatus(1, "ACTIVATED")
    }
}

How to verify contracts?
spring-cloud-starter-contract-verifier helps us automatically verify the contracts. It generates test cases during the build phase and verifies the API response against the contract. Add the below dependency in the pom:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>

To auto-generate the test classes, add the below plugin inside the build tag:

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>3.0.1</version>
    <extensions>true</extensions>
    <configuration>
        <testFramework>JUNIT5</testFramework>
        <baseClassForTests>com.ns.producer.BaseClass</baseClassForTests>
    </configuration>
</plugin>

We also need to provide a base class for the tests, which will be extended by all generated classes. The base class is responsible for providing all the mocking and Spring beans needed by the generated test classes. In our case this is how the base class looks:

BaseClass
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public abstract class BaseClass {

    @Autowired
    private UserStatusController userStatusController;

    @MockBean
    private UserStatusService userStatusService;

    @BeforeEach
    public void setup() {
        Mockito.when(userStatusService.getUserStatus(1)).thenReturn(new UserStatus(1, "CREATED"));
        RestAssuredMockMvc.standaloneSetup(userStatusController);
    }
}

Now if we run the build, a ContractVerifierTest test class will be generated inside /target/generated-test-source/contract. The generated class will look like below:

Generated Test Class
public class ContractVerifierTest {

    @Test
    public void validate_get_status_by_id() throws Exception {
        // given:
        MockMvcRequestSpecification request = given();

        // when:
        ResponseOptions response = given().spec(request)
                .get("/status");

        // then:
        assertThat(response.statusCode()).isEqualTo(200);
        assertThat(response.header("Content-Type")).matches("application/json.*");

        // and:
        DocumentContext parsedJson = JsonPath.parse(response.getBody().asString());
        assertThatJson(parsedJson).field("['id']").isEqualTo(1);
        assertThatJson(parsedJson).field("['status']").isEqualTo("CREATED");
    }
}

Consumer : Service-A

On the consumer side, we can use the stub generated by the producer application to test the interaction from consumer to producer. To use the stub generated by the producer, add the below dependency in the consumer:

pom
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
    <scope>test</scope>
</dependency>

The unit test to verify the interaction with the producer will look like below:

Consumer test
@SpringBootTest
class StatusServiceTests {

    @Autowired
    lateinit var underTest: StatusService

    @JvmField
    @RegisterExtension
    final val stubRunner = StubRunnerExtension()
        .downloadStub("com.ns", "producer", "0.0.1-SNAPSHOT", "stubs")
        .withPort(8080)
        .stubsMode(StubRunnerProperties.StubsMode.LOCAL)

    @Test
    fun getStatus() {
        val status = underTest.getStatus()
        assertEquals(status, "CREATED")
    }

    @Test
    fun getPactStatus() {
        val status = underTest.getPactStatus()
        assertEquals(status, "CREATED")
    }
}

The code for this post is available on Github here

Reference Spring Cloud Contract Reference Documentation


Monitoring Spring Boot Application with Prometheus and Grafana on Kubernetes


Welcome to the second post on Prometheus & Grafana. In the last post, Monitoring Spring Boot Application with Prometheus and Grafana, we integrated Prometheus, Spring Boot, and Grafana using Docker.

In this post we will discuss how to set up Prometheus and Grafana on Kubernetes using Helm charts.

The code for this post is available on Github here

If you're new to Kubernetes & Prometheus, I recommend reading the following hands-on guides on Kubernetes:

  1. Deploy React, Spring Boot & MongoDB Fullstack application on Kubernetes
  2. Monitoring Spring Boot Application with Prometheus and Grafana

Prerequisites

You need to have kubectl, Helm, and Minikube installed on your machine. To follow along with this post, basic knowledge of Kubernetes is needed.

Step 1 : Deploy a Spring Boot application on Kubernetes and expose actuator endpoints

  1. How to deploy Spring boot application on Kubernetes is explained in detail here
  2. How to expose actuator endpoints for Prometheus is explained here.

In a Kubernetes environment, we can configure annotations which will be used by Prometheus to scrape data. Below is the complete deployment.yaml file:

spring-boot-prometheus-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-prometheus
spec:
  selector:
    matchLabels:
      app: spring-boot-prometheus
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-boot-prometheus
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      containers:
        - name: spring-boot-prometheus
          image: nirajsonawane/spring-boot-prometheus:0.0.1-SNAPSHOT
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          resources:
            limits:
              memory: 294Mi

Step 2: Create a separate namespace for monitoring

It's always a good idea to keep related things together. We will create a separate namespace in Kubernetes for monitoring and deploy all monitoring-related applications under that namespace.

namespace.yml
kind: Namespace
apiVersion: v1
metadata:
  name: monitoring

Helm Chart

Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.

Step 3: Deploy Prometheus using Helm Chart

With the help of Helm, we can deploy Prometheus using a single command:

helm install prometheus stable/prometheus --namespace monitoring

This will deploy Prometheus into your cluster in the monitoring namespace and mark the release with the name prometheus.

Let's check whether Prometheus is running:

kubectl get pods -n monitoring 

Step 4: Deploy Grafana using Helm Chart

In the previous post we manually created the data sources. Here we can create a config map for the Prometheus data source, and the Grafana deployment can use this config map.

After deployment, the Grafana Helm chart looks for any config maps that contain a grafana_datasource label.

config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-datasource
  namespace: monitoring
  labels:
    grafana_datasource: '1'
data:
  datasource.yaml: |-
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        orgId: 1
        url: http://prometheus-server.monitoring.svc.cluster.local
values.yml
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  datasources:
    enabled: true
    label: grafana_datasource
Config map & Grafana deployment
kubectl apply -f helm/monitoring/grafana/config.yml 
helm install grafana stable/grafana -f helm/monitoring/grafana/values.yml --namespace monitoring

A password-protected Grafana instance will be deployed. To get the password, run the below command.

print password
kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Now let's do a port-forward for accessing Grafana:
port-forward deployment
kubectl --namespace monitoring port-forward grafana-5c6bbf7f4c-n5pqb 3000

Now if you go to http://localhost:3000, the Grafana interface will be available.

Now let's add a JVM chart which will use our Prometheus data source.

The code for this post is available on Github here
