Monitoring Spring Boot Application with Prometheus and Grafana on Kubernetes


Welcome to the second post on Prometheus & Grafana. In the last post, Monitoring Spring Boot Application with Prometheus and Grafana, we integrated Prometheus, Spring Boot, and Grafana using Docker.

In this post we will discuss how to set up Prometheus and Grafana on Kubernetes using Helm charts.

The code for this post is available on Github here

If you’re new to Kubernetes & Prometheus, I recommend reading the following hands-on guides first.

  1. Deploy React, Spring Boot & MongoDB Fullstack application on Kubernetes
  2. Monitoring Spring Boot Application with Prometheus and Grafana

Prerequisites

You need to have Kubectl, Helm, and Minikube installed on your machine. Basic knowledge of Kubernetes is needed to follow along with this post.

Step 1 : Deploy a Spring Boot application on Kubernetes and expose actuator endpoints

  1. How to deploy a Spring Boot application on Kubernetes is explained in detail here
  2. How to expose actuator endpoints for Prometheus is explained here.

In a Kubernetes environment, we can configure annotations that Prometheus will use to discover pods and scrape data. Below is the complete deployment.yaml file.

spring-boot-prometheus-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-prometheus
spec:
  selector:
    matchLabels:
      app: spring-boot-prometheus
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-boot-prometheus
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      containers:
      - name: spring-boot-prometheus
        image: nirajsonawane/spring-boot-prometheus:0.0.1-SNAPSHOT
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources:
          limits:
            memory: 294Mi

Step 2 : Create a separate namespace for monitoring

It’s always a good idea to keep related things together, so we will create a separate namespace in Kubernetes for monitoring and deploy all monitoring-related applications under that namespace.

namespace.yml
kind: Namespace
apiVersion: v1
metadata:
  name: monitoring
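Assuming the file above is saved as namespace.yml, the namespace can be created and verified with:

```shell
# create the monitoring namespace and confirm it exists
kubectl apply -f namespace.yml
kubectl get namespaces
```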

Helm Chart

Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed.

Step 3: Deploy Prometheus using Helm Chart

With the help of Helm, we can deploy Prometheus using a single command.

helm install prometheus stable/prometheus --namespace monitoring

This will deploy Prometheus into your cluster in the monitoring namespace and mark the release with the name prometheus.
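If Helm reports that it cannot find stable/prometheus, the stable chart repository may not be configured yet; it can be added first (the URL below is the archived location of the stable charts at the time of writing):

```shell
# register the stable chart repository and refresh the local index
helm repo add stable https://charts.helm.sh/stable
helm repo update
```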

Let’s check whether Prometheus is running:

kubectl get pods -n monitoring

Step 4: Deploy Grafana using Helm Chart

In the previous post we created the data source manually. Here we can create a ConfigMap for the Prometheus data source, and the Grafana deployment can use it.

After deployment, the Grafana Helm chart looks for any ConfigMaps that carry a grafana_datasource label.

config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-datasource
  namespace: monitoring
  labels:
    grafana_datasource: '1'
data:
  datasource.yaml: |-
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      orgId: 1
      url: http://prometheus-server.monitoring.svc.cluster.local
values.yml

sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  datasources:
    enabled: true
    label: grafana_datasource
Config map & Grafana deployment

kubectl apply -f helm/monitoring/grafana/config.yml
helm install grafana stable/grafana -f helm/monitoring/grafana/values.yml --namespace monitoring

A password-protected Grafana instance will be deployed. To get the password, run the command below.

print password
kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
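The jsonpath expression pulls the admin-password key out of the Secret, and base64 --decode turns it back into plain text, since Kubernetes stores Secret values base64-encoded. A standalone demonstration of the decoding step, using a made-up value (admin123 is illustrative, not the real generated password):

```shell
# 'YWRtaW4xMjM=' is base64 for the made-up value "admin123"
echo 'YWRtaW4xMjM=' | base64 --decode
```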

Now let’s do a port-forward to access Grafana.
port-forward deployment

kubectl --namespace monitoring port-forward grafana-5c6bbf7f4c-n5pqb 3000

Now if you go to http://localhost:3000, the Grafana interface will be available.

Now let’s add a JVM dashboard which will use our Prometheus data source.
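The same sidecar mechanism works for dashboards: the chart's sidecar can also watch ConfigMaps carrying a grafana_dashboard label when dashboards are enabled in values.yml. A sketch of such a ConfigMap, where jvm-dashboard.json would hold dashboard JSON exported from Grafana (names here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jvm-dashboard          # illustrative name
  namespace: monitoring
  labels:
    grafana_dashboard: '1'     # picked up by the dashboard sidecar, if enabled
data:
  jvm-dashboard.json: |-
    { "title": "JVM (Micrometer)", "panels": [] }
```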

The code for this post is available on Github here


Monitoring Spring Boot Application with Prometheus and Grafana


In this post we will discuss how to integrate Prometheus monitoring system with Spring Boot and Grafana.

The code for this post is available on Github here

This setup has three major components.

Spring Boot and Micrometer

Spring Boot Actuator provides a number of features to help us monitor and manage a Spring Boot application. Spring Boot Actuator also provides dependency management and auto-configuration for Micrometer.

Micrometer is an application metrics facade that supports numerous monitoring systems.
To integrate Prometheus with Spring Boot we just need to add the micrometer-registry-prometheus dependency to the classpath. Once Spring Boot detects this dependency it will expose the /actuator/prometheus endpoint.

pom File
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

Along with this dependency we need to set the property below.

application.properties
management.endpoints.web.exposure.include=health,info,metrics,prometheus

Now if we run the application and open http://localhost:8080/actuator/prometheus, we should see the exposed metrics.
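The endpoint returns plain text in the Prometheus exposition format. The exact metrics depend on the application, but a few typical lines look something like:

```
# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap",id="PS Eden Space",} 1.68296872E8
```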

Prometheus

Prometheus is an open-source monitoring and alerting toolkit originally built at SoundCloud.
Prometheus does one thing and it does it well. It has a simple yet powerful data model and a query language that lets you analyse how your applications and infrastructure are performing.

Prometheus has a data scraper that pulls metrics over HTTP periodically at a configured interval. It stores metric data in a time-series database. It has a simple user interface that can be used to run queries and visualize data. It also has a powerful alerting system.

Let’s try to run Prometheus on Docker. We need to configure Prometheus to scrape our Spring Boot application.

prometheus.yml
global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'spring_micrometer'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['YOUR-MACHINE-IP:8080']

The configuration is fairly self-explanatory. We created a job 'spring_micrometer' for scraping our Spring Boot application and provided the endpoint where Prometheus can get the data. Since we are running Prometheus from Docker, the target must be the IP of the local machine, not localhost.

Run Prometheus on docker

docker run -d -p 9090:9090 -v <File path of prometheus.yml>:/etc/prometheus/prometheus.yml prom/prometheus

Now if you go to http://localhost:9090/targets it should show our Spring Boot application.
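Once the target shows as UP, you can try a simple query in the Prometheus UI, for example the per-second request rate exposed by Micrometer (metric names may vary slightly with your Micrometer version):

```
rate(http_server_requests_seconds_count[1m])
```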

Grafana

Grafana is open source visualization and analytics software. It allows you to query, visualize, alert on, and explore your metrics no matter where they are stored. It provides you with tools to turn your time-series database (TSDB) data into beautiful graphs and visualizations.

Grafana has supported querying Prometheus since its very early versions.

Let’s set up Grafana using Docker.

docker run
docker run -d -p 3000:3000 grafana/grafana

Now if you go to http://localhost:3000 the Grafana interface will be available. The default username/password is admin/admin.
You can add the Prometheus data source and a dashboard by following the steps below.

The code for this post is available on Github here


Deploy React, Spring Boot & MongoDB Fullstack application on Kubernetes


In this post we will discuss how to deploy a full-stack application on Kubernetes. We will build a small Student CRUD application using React as the front end, Spring Boot as the back end, and MongoDB as the persistence layer, and we will also configure a PersistentVolumeClaim.

The code for this post is available on Github here

Prerequisites

To follow along with this post, you need minikube and kubectl installed on your system. Basic knowledge of Docker, Kubernetes & Spring Boot is needed.

The sample application that we will deploy on Kubernetes allows the user to perform CRUD operations; the final deployment structure is shown below.

let’s start

Step 1 : Deploy a React application on Kubernetes

The React app was created with create-react-app, and its Docker image is available on Docker Hub. You can use this image to follow along: docker pull nirajsonawane/student-app-client

Deployment configuration

student-app-client-deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: student-app-client
spec:
  selector:
    matchLabels:
      app: student-app-client
  replicas: 1
  template:
    metadata:
      labels:
        app: student-app-client
    spec:
      containers:
      - name: student-app-client
        image: nirajsonawane/student-app-client
        imagePullPolicy: Always
        ports:
        - containerPort: 80

A Deployment in Kubernetes is a declarative way of creating and updating pods. In the above configuration we created the student-app-client deployment, indicated by its name, with 1 replica. The containers section provides details about which containers should get created and how.

Service configuration

student-app-client-service File
apiVersion: v1
kind: Service
metadata:
  name: student-app-client-service
spec:
  selector:
    app: student-app-client
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: ClusterIP

A Service in Kubernetes is a way to expose an application running on a set of Pods as a network service. The above configuration creates a new Service object named student-app-client-service, which targets TCP port 80 on any Pod with the app=student-app-client label. Kubernetes ServiceTypes allow you to specify what kind of Service you want; the default is ClusterIP. Please check the documentation for more details.

Let’s deploy it on our local Kubernetes cluster.
Start minikube: minikube start

Check the minikube status: minikube status

Apply the deployment and service for the client: kubectl apply -f <file-name.yaml>

Tip: If you want to check whether the client pod is deployed correctly and access its URL, change the service type to NodePort and then run
minikube service student-app-client-service
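For reference, the NodePort variant of the service differs only in the type field (Kubernetes will pick a node port in the 30000-32767 range unless one is specified):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: student-app-client-service
spec:
  selector:
    app: student-app-client
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
```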

Step 2 : Deploy MongoDB persistance layer on Kubernetes

For managing storage, the Kubernetes PersistentVolume subsystem provides an API that defines how storage is provided and how it is consumed. For this we need to understand two Kubernetes resources.

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
The lifecycle of a PV is independent of any individual Pod that uses it. In simple words, PVs are provisioned storage volumes assigned to a Kubernetes cluster.

A PersistentVolumeClaim (PVC) is a request for storage by a user, similar to how a Pod requests compute. Pods consume node resources and PVCs consume PV resources.

In order to deploy the database component we need to define the resource configurations below:

  1. PersistentVolumeClaim
  2. Deployment
  3. Service

PersistentVolumeClaim

PersistentVolumeClaim config File
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi

The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
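On minikube the PVC above is typically satisfied by dynamic provisioning through the default StorageClass, so no PV has to be written by hand. If you did want to provision one manually, a minimal hostPath PV sketch would look like this (name, path, and capacity are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv               # illustrative name
spec:
  capacity:
    storage: 256Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/mongo-pv       # illustrative path on the node
```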

Deployment

Deployment Config File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:3.6.17-xenial
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: storage
          mountPath: /data/db
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: mongo-pvc

The deployment requests the volume defined by the claim name mongo-pvc.

Service

Service Config File
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
    targetPort: 27017

Let’s apply all these new configurations.
Apply the PersistentVolumeClaim, deployment, and service for MongoDB: kubectl apply -f <file-name.yaml>

Step 3 : Deploy Spring Boot Backend API on Kubernetes

Our backend API is a simple Spring Boot application that does CRUD using MongoRepository. The complete code is available on Github here

The Spring Boot app is dockerized and its Docker image is available on Docker Hub: docker pull nirajsonawane/student-app-api:0.0.1-SNAPSHOT
Deployment

API Deployment Config File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: student-app-api
spec:
  selector:
    matchLabels:
      app: student-app-api
  replicas: 1
  template:
    metadata:
      labels:
        app: student-app-api
    spec:
      containers:
      - name: student-app-api
        image: nirajsonawane/student-app-api:0.0.1-SNAPSHOT
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_URL
          value: mongodb://mongo:27017/dev

In the above yaml file, env: is used to define environment variables for the pod. Our API expects MONGO_URL for configuring spring.data.mongodb.uri.
application.properties
spring.data.mongodb.uri=${MONGO_URL:mongodb://localhost:27017/dev}

Now let’s talk about MONGO_URL.

The MongoDB URL has the form mongodb://someHost:27017/dev, so what is mongo in our URL? mongo is the name defined in the service config file for MongoDB.
Pods within a cluster can talk to each other through the names of the Services exposing them.
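This works because Kubernetes DNS (kube-dns/CoreDNS) creates a DNS record for every Service. The short name resolves from within the same namespace; the fully qualified form is:

```
<service-name>.<namespace>.svc.cluster.local   # general form
mongo.default.svc.cluster.local                # our mongo Service, assuming the default namespace
```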

Service

Service Config File for API
apiVersion: v1
kind: Service
metadata:
  name: student-app-api
spec:
  selector:
    app: student-app-api
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080

Apply the deployment and service for the API: kubectl apply -f <file-name.yaml>

Step 4 : Deploy Ingress and connect frontend to backend

Ingress is an API object that manages external access to the services in a cluster.

Ingress Resource
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: student-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: student-app-client-service
          servicePort: 80
      - path: /api/?(.*)
        backend:
          serviceName: student-app-api
          servicePort: 8080

We can define different HTTP rules in the ingress configuration; for our application we have configured two. A backend is a combination of Service and port names as described in the Service doc. HTTP (and HTTPS) requests to the Ingress that match the host and path of a rule are sent to the listed backend.

Note: While making calls to the backend API from the React client, we prefix the request with api, and the Ingress then routes the request to student-app-api. We could also use the service name to call the backend API directly.

Let’s deploy the final resource.
First we need to enable ingress by running minikube addons enable ingress and then kubectl apply -f student-app-ingress.yaml

Let’s try to access the application from the minikube IP.

The code for this post is available on Github here


Project Reactor [Part 3] Error Handling

This is the third article in a series on Project Reactor. In the previous article we discussed different operators in Project Reactor. In this article, I’ll show you how to do error handling in Reactor. We’ll do this through examples.

In reactive programming, errors are also considered terminal events. When an error occurs, the event is sent to the onError method of the Subscriber. Before we start looking at how to handle errors, keep in mind that any error in a reactive sequence is a terminal event. Even if an error-handling operator is used, it does not let the original sequence continue; rather, it converts the onError signal into the start of a new sequence (the fallback one). In other words, it replaces the terminated sequence upstream of it.

The code for this post is available on my Github account here

OnError

In the code block below, the subscriber will print the values 1 to 5, and after that the onError block will be executed. Note that the number 6 will never be printed, as an error is a terminal event. The "subscriber Completed" line will also not be printed, because in case of an error we do not get the onComplete event.

OnError
public void testErrorFlowFlux() {
    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5)
            .concatWith(Flux.error(new RuntimeException("Test")))
            .concatWith(Flux.just(6));
    fluxFromJust.subscribe(
            (it) -> System.out.println("Number is " + it),   // onNext
            (e) -> e.printStackTrace(),                      // onError
            () -> System.out.println("subscriber Completed") // onComplete
    );
    // To unit test this code
    StepVerifier
            .create(fluxFromJust)
            .expectNext(1, 2, 3, 4, 5)
            .expectError(RuntimeException.class)
            .verify();
}

onErrorResume

If you want to report the exception and then return some fallback values, you can use onErrorResume.

onErrorResume
@Test
public void testOnErrorResume() throws InterruptedException {
    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5)
            .concatWith(Flux.error(new RuntimeException("Test")))
            .concatWith(Flux.just(6))
            .onErrorResume(e -> {
                log.info("**************");
                System.out.println("Exception occured " + e.getMessage());
                // Return some fallback values
                return Flux.just(7, 8);
            });

    StepVerifier
            .create(fluxFromJust)
            .expectNext(1, 2, 3, 4, 5)
            .expectNext(7, 8)
            .verifyComplete();
}

onErrorReturn

onErrorReturn can be used if you just want to return a single fallback value in place of the error.

onErrorReturn

@Test
public void testOnErrorReturn() throws InterruptedException {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5)
            .concatWith(Flux.error(new RuntimeException("Test")))
            .concatWith(Flux.just(6))
            .onErrorReturn(99);
    StepVerifier
            .create(fluxFromJust)
            .expectNext(1, 2, 3, 4, 5)
            .expectNext(99)
            .verifyComplete();
}

OnErrorContinue

If you want to ignore the error produced by an operator, onErrorContinue can be used.
onErrorContinue drops the error element and continues the sequence.
In the example below we get an exception for the number 3; onErrorContinue simply ignores that exception.

onErrorContinue
@Test
public void testOnErrorContinue() throws InterruptedException {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5)
            .map(i -> mapSomeValue(i))
            .onErrorContinue((e, i) -> {
                System.out.println("Error For Item " + i);
            });
    StepVerifier
            .create(fluxFromJust)
            .expectNext(1, 2, 4, 5)
            .verifyComplete();
}

private int mapSomeValue(Integer i) {
    if (i == 3)
        throw new RuntimeException("Exception From Map");
    return i;
}

OnErrorMap

onErrorMap maps an exception to a custom exception.

OnErrorMap
@Test
public void testOnErrorMap() throws InterruptedException {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5)
            .concatWith(Flux.error(new RuntimeException("Test")))
            .concatWith(Flux.just(6))
            .map(i -> i * 2)
            .onErrorMap(e -> new CustomeException(e));
    StepVerifier
            .create(fluxFromJust)
            .expectNext(2, 4, 6, 8, 10)
            .expectError(CustomeException.class)
            .verify();
}

retry

You can add a retry on error, but keep in mind that each retry starts again from the first element.

retry
@Test
public void testOnRetry() throws InterruptedException {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3)
            .concatWith(Flux.error(new RuntimeException("Test"))) // error source, so retry has something to retry on
            .map(i -> i * 2)
            .onErrorMap(e -> new CustomeException(e))
            .retry(2);

    StepVerifier
            .create(fluxFromJust)
            .expectNext(2, 4, 6)
            .expectNext(2, 4, 6)
            .expectNext(2, 4, 6)
            .expectError(CustomeException.class)
            .verify();
}

onErrorStop

onErrorStop stops the sequence when an error occurs, making upstream operators ignore any downstream onErrorContinue strategy.

onErrorStop
@Test
public void testOnErrorStop() throws InterruptedException {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3)
            .concatWith(Flux.error(new RuntimeException("Test")))
            .concatWith(Flux.just(6))
            .map(i -> doubleValue(i))
            .onErrorStop();
    StepVerifier
            .create(fluxFromJust)
            .expectNext(2, 4, 6)
            .verifyError();
}

private Integer doubleValue(Integer i) {
    System.out.println("Doing Multiple");
    return i * 2;
}

doOnError

If you want to execute a side effect on error, use doOnError.

doOnError
@Test
public void testDoOnError() throws InterruptedException {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5)
            .concatWith(Flux.error(new RuntimeException("Test")))
            .doOnError(e -> System.out.println("Run some side effect!!"));

    StepVerifier
            .create(fluxFromJust)
            .expectNext(1, 2, 3, 4, 5)
            .expectError()
            .verify();
    TimeUnit.SECONDS.sleep(2);
}

doFinally

doFinally is similar to the finally block of a try/catch.

doFinally

@Test
public void testDoFinally() throws InterruptedException {
    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5)
            .concatWith(Flux.error(new RuntimeException("Test")))
            .doFinally(signal -> {
                if (SignalType.ON_ERROR.equals(signal)) {
                    System.out.println("Completed with Error ");
                }
                if (SignalType.ON_COMPLETE.equals(signal)) {
                    System.out.println("Completed without Error ");
                }
            });

    StepVerifier
            .create(fluxFromJust)
            .expectNext(1, 2, 3, 4, 5)
            .expectError()
            .verify();
    TimeUnit.SECONDS.sleep(2);
}


The code for this post is available on my Github account here


Project Reactor [Part 2] Exploring Operators in Flux & Mono


This is the second article in a series on Project Reactor. In the previous article we discussed the basics of Flux and Mono. In this article, I’ll show you how to use operators to modify and transform a Flux. We’ll do this through examples.
The code for this post is available on my Github account here

Filter

This is similar to the Java 8 stream filter. It takes a predicate; only the elements that satisfy the predicate condition are passed through.

Filtering
@Test
public void testFilteringFlux() {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5, 6, 7, 8, 9, 10).log();
    Flux<Integer> filter = fluxFromJust.filter(i -> i % 2 == 0); // keep the even numbers only
    StepVerifier
            .create(filter)
            .expectNext(2, 4, 6, 8, 10)
            .verifyComplete();
}

Distinct

It filters out duplicates.

Distinct
@Test
public void distinct() {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5, 1, 2, 3, 4, 5).log();
    Flux<Integer> distinct = fluxFromJust.distinct();
    StepVerifier
            .create(distinct)
            .expectNext(1, 2, 3, 4, 5)
            .verifyComplete();
}

takeWhile

Takes values from the Flux while the predicate returns TRUE, and stops at the first value for which it returns FALSE.

takeWhile
@Test
public void takeWhile() {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5, 6, 7, 8, 9, 10).log();
    Flux<Integer> takeWhile = fluxFromJust.takeWhile(i -> i <= 5);
    StepVerifier
            .create(takeWhile)
            .expectNext(1, 2, 3, 4, 5)
            .verifyComplete();
}

skipWhile

Skips elements while the predicate returns TRUE, then emits the remainder of the sequence.

skipWhile
@Test
public void skipWhile() {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3, 4, 5, 6, 7, 8, 9, 10).log();
    Flux<Integer> skipWhile = fluxFromJust.skipWhile(i -> i <= 5);
    StepVerifier
            .create(skipWhile)
            .expectNext(6, 7, 8, 9, 10)
            .verifyComplete();
}

map

The map operation is similar to the Java 8 stream map operation; it is used for transforming elements from one type to another.

Convert String Flux To Integer Flux
@Test
public void testMapOperationFlux() {

    Flux<String> fluxFromJust = Flux.just("RandomString", "SecondString", "XCDFRG").log();
    Flux<Integer> lengths = fluxFromJust.map(i -> i.length());
    StepVerifier
            .create(lengths)
            .expectNext(12, 12, 6)
            .verifyComplete();
}

flatMap

flatMap transforms the elements emitted by this Flux asynchronously into Publishers, then flattens these inner publishers into a single Flux through merging, which allows them to interleave.

FlatMap
@Test
public void testFlatMapFlux() {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3).log();
    Flux<Integer> integerFlux = fluxFromJust
            .flatMap(i -> getSomeFlux(i)); // getSomeFlux returns a Flux of a range;
            // flatMap then merges all the inner Flux instances into a single Flux

    StepVerifier
            .create(integerFlux)
            .expectNextCount(30)
            .verifyComplete();
}

private Flux<Integer> getSomeFlux(Integer i) {
    return Flux.range(i, 10);
}

Index

Keep information about the order in which source values were received by indexing them with a 0-based incrementing long

index
@Test
public void maintainIndex() {

    Flux<Tuple2<Long, String>> index = Flux
            .just("First", "Second", "Third")
            .index();
    StepVerifier.create(index)
            .expectNext(Tuples.of(0L, "First"))
            .expectNext(Tuples.of(1L, "Second"))
            .expectNext(Tuples.of(2L, "Third"))
            .verifyComplete();
}

flatMapMany

This operator is very useful if you want to convert a Mono to a Flux. flatMapMany transforms the signals emitted by this Mono into signal-specific Publishers, then forwards the applicable Publisher’s emissions into the returned Flux.

flatMapMany
@Test
public void flatMapManyTest() {
    Mono<List<Integer>> just = Mono.just(Arrays.asList(1, 2, 3));
    Flux<Integer> integerFlux = just.flatMapMany(it -> Flux.fromIterable(it));
    StepVerifier
            .create(integerFlux)
            .expectNext(1, 2, 3)
            .verifyComplete();
}

startWith

startWith
@Test
public void startWith() {
    Flux<Integer> just = Flux.just(1, 2, 3);
    Flux<Integer> integerFlux = just.startWith(0);
    StepVerifier.create(integerFlux)
            .expectNext(0, 1, 2, 3)
            .verifyComplete();
}

concatWith

The concatWith method concatenates two Flux sequences sequentially: it subscribes to the first Flux, waits for its completion, and then subscribes to the next.

concatWith
@Test
public void concatWith() {
    Flux<Integer> just = Flux.just(1, 2, 3);
    Flux<Integer> integerFlux = just.concatWith(Flux.just(4, 5));
    StepVerifier.create(integerFlux)
            .expectNext(1, 2, 3, 4, 5)
            .verifyComplete();
}

merge

Merge data from Publisher sequences contained in an array / vararg into an interleaved merged sequence. Unlike concat, sources are subscribed to eagerly.

Merge
@Test
public void merge() throws InterruptedException {
    Flux<Integer> firstFlux = Flux.just(1, 2, 3, 4, 5).delayElements(Duration.ofSeconds(1));
    Flux<Integer> secondFlux = Flux.just(10, 20, 30, 40).delayElements(Duration.ofSeconds(1));
    firstFlux.mergeWith(secondFlux)
            .subscribe(System.out::println);
    // This will print numbers received from firstFlux and secondFlux in interleaved order
    TimeUnit.SECONDS.sleep(11);
}

CollectList

Collect all elements emitted by this Flux into a List that is emitted by the resulting Mono when this sequence completes.

CollectList
@Test
public void collectList() {
    Mono<List<Integer>> listMono = Flux
            .just(1, 2, 3)
            .collectList();
    StepVerifier.create(listMono)
            .expectNext(Arrays.asList(1, 2, 3))
            .verifyComplete();
}

CollectSortedList

@Test
public void collectSortedList() {
    Mono<List<Integer>> listMono = Flux
            .just(1, 2, 3, 9, 8)
            .collectSortedList();
    StepVerifier.create(listMono)
            .expectNext(Arrays.asList(1, 2, 3, 8, 9))
            .verifyComplete();
}

zip

Zip multiple sources together, that is to say wait for all the sources to emit one element and combine these elements once into an output value (constructed by the provided combinator). The operator will continue doing so until any of the sources completes. Errors will immediately be forwarded. This “Step-Merge” processing is especially useful in Scatter-Gather scenarios.

Zip
@Test
public void zip() {
    Flux<Integer> firstFlux = Flux.just(1, 2, 3);
    Flux<Integer> secondFlux = Flux.just(10, 20, 30, 40);
    Flux<Integer> zip = Flux.zip(firstFlux, secondFlux, (num1, num2) -> num1 + num2);
    StepVerifier
            .create(zip)
            .expectNext(11, 22, 33)
            .verifyComplete();
}

buffer

Collects incoming values into List buffers of the given maximum size, which are emitted by the returned Flux as they fill up (and once more when this Flux completes).

buffer
@Test
public void bufferTest() {
    Flux<List<Integer>> buffer = Flux
            .just(1, 2, 3, 4, 5, 6, 7)
            .buffer(2);

    StepVerifier
            .create(buffer)
            .expectNext(Arrays.asList(1, 2))
            .expectNext(Arrays.asList(3, 4))
            .expectNext(Arrays.asList(5, 6))
            .expectNext(Arrays.asList(7))
            .verifyComplete();
}

You can check all available operators here

In the next article, I’ll show you how to handle errors while processing data in Mono and Flux.

The code for this post is available on my Github account here


Project Reactor [Part 1] - Playing With Flux & Mono


What is Reactor

Project Reactor implements the reactive programming model. It implements the Reactive Streams specification, a standard for building reactive applications.
Reactor integrates directly with the Java 8 functional APIs, notably CompletableFuture, Stream, and Duration. It offers composable asynchronous sequence APIs: Flux (for [N] elements) and Mono (for [0|1] elements).

Key Components of Reactive Manifesto and Reactive Streams Specification

Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure. This encompasses efforts aimed at runtime environments (JVM and JavaScript) as well as network protocols. The Reactive Streams specification is based on the Reactive Manifesto.

The Reactive Streams specification defines the key contracts below.
Publisher: represents a source of data, also called an Observable.
Subscriber: listens to the Publisher; a Subscriber subscribes to a Publisher.
Subscription: the Publisher creates a Subscription for every Subscriber that subscribes to it.
Processor: a Processor can be used as a publisher as well as a subscriber; Processors are used for transforming and composing data.
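These four contracts correspond to the org.reactivestreams interfaces. Their method signatures are small enough to quote in full; each interface lives in its own file in the reactive-streams jar:

```java
// The four Reactive Streams contracts (org.reactivestreams)
public interface Publisher<T> {
    void subscribe(Subscriber<? super T> s);
}

public interface Subscriber<T> {
    void onSubscribe(Subscription s); // handed the Subscription when subscribing
    void onNext(T t);                 // called once per element
    void onError(Throwable t);        // terminal event
    void onComplete();                // terminal event
}

public interface Subscription {
    void request(long n); // back pressure: ask for at most n more elements
    void cancel();
}

public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}
```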

Objective Of this post

The objective of this series of posts is to show how to use Flux & Mono, how to subscribe to them for consuming data, and the different operations we can perform on the data while consuming it. I will be using Spring WebFlux, which internally uses Project Reactor.

Let’s get started. We need spring-boot-starter-webflux. Project Reactor also provides a very handy library, reactor-test, for unit testing.

The code for this post is available on my Github account here

Pom File
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-test</artifactId>
    <scope>test</scope>
</dependency>
Creating Flux and Mono

Flux: A Reactive Streams Publisher with rx operators that emits 0 to N elements and then completes (successfully or with an error).

Few examples of creating Flux

Flux<Integer> fluxFromJust = Flux.just(1, 2, 3); // Create a Flux that emits the provided elements and then completes.
Flux<Integer> integerFlux = Flux.fromIterable(Arrays.asList(1, 2, 3)); // Create a Flux that emits the items contained in the provided Iterable.
Flux.fromStream(Arrays.asList(1, 2, 3).stream()); // Create a Flux that emits the items contained in the provided Stream.
Integer[] num2 = {1, 2, 3, 4, 5};
Flux<Integer> integerFluxFromArray = Flux.fromArray(num2); // Create a Flux that emits the items contained in the provided array.
Flux.generate(...); // Programmatically create a Flux by generating signals one-by-one via a consumer callback.
Flux.create(...); // Programmatically create a Flux with the capability of emitting multiple elements in a synchronous or asynchronous manner through the FluxSink API.

Mono: A Reactive Streams Publisher with basic rx operators that completes successfully by emitting an element, or with an error.

A few examples of creating a Mono:
Mono<Integer> just = Mono.just(1); // Create a Mono that emits the provided element and then completes.
Mono<Object> empty = Mono.empty(); // Create a Mono that completes without emitting any item.
Mono.create(sink -> sink.success(1)); // Programmatically create a Mono through the MonoSink API.
Mono.from(Flux.just(1)); // Expose the specified Publisher with the Mono API, and ensure it will emit 0 or 1 item.

// There are multiple options for creating a Mono from a Callable, CompletionStage, CompletableFuture etc.
Mono.fromCallable(() -> 1);

Flux & Mono are lazy

Lazy, in the context of reactive programming, means that no matter how many operations you chain on the stream, none of them executes until you consume it. Flux & Mono start emitting values only when a subscriber is attached.
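Flux and Mono behave like java.util.stream in this respect. The following stdlib-only sketch (deliberately using Stream rather than Reactor, so it is self-contained) shows the same principle: the map callback runs only when a consuming operation, analogous to subscribing, pulls the data.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class LazinessSketch {
    public static List<String> trace() {
        List<String> trace = new ArrayList<>();

        // Building the pipeline performs no work: map() is only recorded.
        Stream<Integer> pipeline = Stream.of(1, 2, 3)
                .map(i -> { trace.add("map:" + i); return i * 2; });

        trace.add("pipeline built, nothing mapped yet");

        // Consuming the stream (like subscribing to a Flux) triggers execution,
        // element by element.
        pipeline.forEach(i -> trace.add("got:" + i));
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(trace());
    }
}
```

Note how the "nothing mapped yet" marker is recorded before any "map:" entry, even though the map call appears first in the source: the work only happens on consumption.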

Creating Subscriber and consuming values

The subscribe method has multiple overloads; let's check a few of them.

Subscriber with onNext

Subscriber with consumer (onNext)
Flux<Integer> fluxFromJust = Flux.just(1, 2, 3);
fluxFromJust.subscribe(i -> System.out.println(i)); // prints 1, 2, 3

Subscriber with onNext & onError. onError will be called in case of an error.
Flux has a handy concatWith method which we can use to concatenate an error signal.

Subscriber with consumer and error handler (onError)
Flux<Integer> fluxFromJust =
        Flux.just(1, 2, 3)
            .concatWith(Flux.error(new RuntimeException("Test Exception")));
fluxFromJust.subscribe(
        i -> System.out.println(i),                                  // onNext
        e -> System.out.println("In Error Block " + e.getMessage())  // onError
);

Subscriber with onNext, onError and onComplete. On successful completion, the onComplete method gets called.
Subscriber with consumer, error handler and onComplete
Flux<Integer> fluxFromJust =
        Flux.just(1, 2, 3)
            .concatWith(Flux.error(new RuntimeException("Test Exception")));
fluxFromJust.subscribe(
        i -> System.out.println(i),                                  // onNext, prints 1, 2, 3
        e -> System.out.println("In Error Block " + e.getMessage()), // onError
        () -> System.out.println("Process Completed")                // onComplete
);

To log the activities on a Flux / Mono, we can use the log method, and it will start logging all events.

Log Events
Flux<Integer> fluxFromJust =
        Flux.just(1, 2, 3)
            .log();

Let's write some unit tests using reactor-test.

StepVerifier can be used like below to validate expectations.

StepVerifier
@Test
public void testCreateFluxAndSubscribe() {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3).log();
    StepVerifier
            .create(fluxFromJust)
            .expectNext(1)
            .expectNext(2)
            .expectNext(3)
            .verifyComplete();
}


If you want to verify only the count, we can use expectNextCount.
StepVerifier
@Test
public void testCreateFluxAndSubscribeVerifyCount() {

    Flux<Integer> fluxFromJust = Flux.just(1, 2, 3).log();
    StepVerifier
            .create(fluxFromJust)
            .expectNextCount(3)
            .verifyComplete();
}

To assert on an error:
StepVerifier
@Test
public void testCreateFluxAndSubscribeVerifyError() {

    Flux<Integer> fluxFromJust = Flux.just(1).concatWith(Flux.error(new RuntimeException("Test")));
    StepVerifier
            .create(fluxFromJust)
            .expectNextCount(1)
            .verifyError(RuntimeException.class);
}

In the next article, I'll show you how to process and transform data in Mono and Flux.
The code for this post is available on my Github account here


Testcontainers With Spring Boot For Integration Testing

Take your integration tests to the next level using Testcontainers

Nowadays, modern applications interact with multiple systems: databases, microservices within the system, external APIs, middleware systems etc. A good integration test strategy has become critical to the success of a project.
In-memory databases like H2 and HSQLDB are very popular for integration testing of the persistence layer, but they are not close to the production environment. And if the application depends on Docker containers, it becomes even more difficult to test that code in an integration environment. Testcontainers helps us handle these challenges.

What is Testcontainers?

Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container.

The code for this post is available on my Github account here

Let's write some integration tests using Testcontainers for a Spring Boot app

In a previous post we created a simple Spring Boot application that uses a MongoDB database (containerized); let's write an integration test for it.

It’s easy to add Testcontainers to your project - let’s walk through a quick example to see how.

Add Testcontainers to the project

Pom File
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.12.3</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>1.12.3</version>
    <scope>test</scope>
</dependency>

Creating a generic container based on an image

For MongoDB there is no dedicated Testcontainers module available, but we can create one by extending GenericContainer.

MongoDbContainer
public class MongoDbContainer extends GenericContainer<MongoDbContainer> {

    public static final int MONGODB_PORT = 27017;
    public static final String DEFAULT_IMAGE_AND_TAG = "mongo:3.2.4";

    public MongoDbContainer() {
        this(DEFAULT_IMAGE_AND_TAG);
    }

    public MongoDbContainer(@NotNull String image) {
        super(image);
        addExposedPort(MONGODB_PORT);
    }

    @NotNull
    public Integer getPort() {
        return getMappedPort(MONGODB_PORT);
    }
}

You can also create a generic container using @ClassRule
@ClassRule
public static GenericContainer mongo = new GenericContainer("mongo:3.2.4")
        .withExposedPorts(27017);

Starting the container and using it in test

MongoDbContainerTest
@SpringBootTest
@AutoConfigureMockMvc
@ContextConfiguration(initializers = MongoDbContainerTest.MongoDbInitializer.class)
@Slf4j
public class MongoDbContainerTest {

    @Autowired
    private MockMvc mockMvc;
    @Autowired
    private ObjectMapper objectMapper;
    @Autowired
    private FruitRepository fruitRepository;

    private static MongoDbContainer mongoDbContainer;

    @BeforeAll
    public static void startContainerAndPublicPortIsAvailable() {
        mongoDbContainer = new MongoDbContainer();
        mongoDbContainer.start();
    }

    @Test
    public void containerStartsAndPublicPortIsAvailable() throws Exception {

        FruitModel build = FruitModel.builder().color("Red").name("banana").build();
        mockMvc.perform(post("/fruits")
                .contentType("application/json")
                .content(objectMapper.writeValueAsString(build)))
                .andExpect(status().isCreated());
        Assert.assertEquals(1, fruitRepository.findAll().size());
    }

    public static class MongoDbInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
        @Override
        public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
            log.info("Overriding Spring properties for MongoDB");

            TestPropertyValues values = TestPropertyValues.of(
                    "spring.data.mongodb.host=" + mongoDbContainer.getContainerIpAddress(),
                    "spring.data.mongodb.port=" + mongoDbContainer.getPort()
            );
            values.applyTo(configurableApplicationContext);
        }
    }
}

Let's see what we are doing in the test, line by line.
@SpringBootTest: As we want to write an integration test, we use @SpringBootTest, which tells Spring to load the complete application context.

@AutoConfigureMockMvc: Enables auto-configuration of MockMvc, as we want to test the full path controller -> service -> repository -> database.

@ContextConfiguration: Used to override Spring properties; we need to override the host and port on which the MongoDB Testcontainer has started.

We start the MongoDB container in a @BeforeAll hook so that the DB instance is available during the test.
Then in the test we simply call the POST endpoint on the controller, and afterwards check whether the data was actually inserted into the database.

Using a Predefined Testcontainer

There are some predefined Testcontainers modules available which are very useful and easy to use,
e.g. MySQLContainer for the MySQL database. Let's see how to use it.

MySQLContainerTest
@Testcontainers
public class MySQLContainerTest {

    @Container
    private final MySQLContainer mySQLContainer = new MySQLContainer();

    @Test
    @DisplayName("Should start the container")
    public void test() {
        Assert.assertTrue(mySQLContainer.isRunning());
    }
}

To use the Testcontainers extension annotate your test class with @Testcontainers.

Note: You need to think about how you want to use the container:

  1. Containers that are restarted for every test method
  2. Containers that are shared between all methods of a test class

If you define the container like this:
@Container private MySQLContainer mySQLContainer = new MySQLContainer(); then a new instance is created for each test case, and if you declare it static like this:
@Container private static MySQLContainer mySQLContainer = new MySQLContainer(); then the same instance is used for all tests.
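The lifecycle difference can be sketched without Testcontainers at all: JUnit 5 builds a fresh test-class instance per test method, so a non-static field (and a non-static @Container) is re-created every time, while a static field is created once per class. Below is a toy stand-in with made-up names (FakeContainer is not a real Testcontainers class), just to count the "starts":

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ContainerLifecycleSketch {
    // Counts how many times a "container" is started.
    static final AtomicInteger STARTS = new AtomicInteger();

    // Stand-in for a container: "starts" in its constructor.
    static class FakeContainer {
        FakeContainer() { STARTS.incrementAndGet(); }
    }

    // Like a static @Container field: created once for the whole class.
    static final FakeContainer SHARED = new FakeContainer();

    // Like a non-static @Container field: JUnit 5 builds a new test-class
    // instance per test method, so this "container" starts again every time.
    final FakeContainer perTest = new FakeContainer();

    public static int startsAfterTwoTests() {
        new ContainerLifecycleSketch(); // simulates test method 1
        new ContainerLifecycleSketch(); // simulates test method 2
        return STARTS.get();            // 1 shared start + 2 per-test starts
    }
}
```

Since starting a real database container takes seconds, this difference matters: the static/shared form is usually what you want for a read-mostly fixture, the per-test form when tests mutate container state.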

The code for this post is available on my Github account here


Spring Boot + Mongodb + Docker Compose

unsplash-logoGabriel Barletta
In this post we will discuss how to use Docker Compose to define and run multi-container Docker applications.
The code for this post is available on my Github account here

Prerequisites
To follow along with this post, basic knowledge of Docker, containers & Spring Boot is needed. docker and docker-compose should be installed on your system.

Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. We define and configure all the services used in the application in a single file called docker-compose.yml; more details about Docker Compose can be found in the documentation. Docker Compose reduces a lot of the overhead of managing apps that depend on multiple containers.
Docker Compose significantly improves productivity, as we can run a complete application stack using a single command. By default, Docker Compose runs all the containers on a single host.

Docker Compose in Action
I will create a simple hypothetical application that exposes REST endpoints to manage fruit information. The application is built from two containers, and I will use Docker Compose to run this multi-container application.

Spring Boot App
Create a simple Spring Boot application using Spring Initializr with the following dependencies: spring-boot-starter-web, spring-boot-starter-actuator and lombok. You should then be able to run the application; check that http://localhost:8080/actuator/health returns status "UP".

Dockerizing the Spring Boot app
Dockerizing a Spring Boot app is very straightforward; below is a sample Dockerfile.

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

Creating Docker Image
docker build -t api-docker-image .
Run the above command to create the Docker image. There are Maven plugins available for creating the Docker image during the Maven build; for simplicity I am using a plain docker command to create the image.

Running Docker Image
docker run -d -p 9090:8080 api-docker-image
We are mapping container port 8080 to host port 9090, which means the application will be available on the host machine on port 9090. The Spring Boot application is now running on Docker and will be available at
http://localhost:9090/actuator/health

Now let's add MongoDB
Add the following dependency to the pom file:

Pom File
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>

REST endpoints to save and get fruit information:

Rest Controller
@RestController
@Slf4j
public class FruitController {

    private final FruitService fruitService;

    public FruitController(FruitService fruitService) {
        this.fruitService = fruitService;
    }

    @PostMapping("/fruits")
    public ResponseEntity addFruit(@RequestBody FruitRequest fruitRequest) {
        log.info("Request : {}", fruitRequest);
        fruitService.saveFruit(fruitRequest.toFruitModel());
        return ResponseEntity.status(HttpStatus.CREATED).build();
    }

    @GetMapping("/fruits")
    public List<FruitModel> getAllFruit() {
        return fruitService.findAll();
    }
}

A simple Spring Data MongoRepository for saving data to and fetching data from MongoDB:

Repository & Fruit Model
@Component
public interface FruitRepository extends MongoRepository<FruitModel, String> {
}

@Document
@Data
@Builder
public class FruitModel {

    @Id
    private String id;
    private String name;
    private String color;
}

Now we want to run the MongoDB database as one container and the Spring Boot app as another. We could do this manually by running docker commands, but that's a tedious task, and a lot of configuration is needed for the containers to talk to each other. Docker Compose simplifies these things for us.

Define services in a Compose file
We create a file called docker-compose.yml and define all the containers needed for the application as services.
In the compose file below we define two services, one for the database and one for the REST API, and do all the needed configuration in one place. As our Spring Boot app (api) depends on the database, we specify that as a link. There is a lot more configuration we could do in a Docker Compose file.

docker-compose
version: "3"
services:
  api-database:
    image: mongo:3.2.4
    container_name: "api-database"
    ports:
      - 27017:27017
    command: --smallfiles
  api:
    image: api-docker-image
    ports:
      - 9091:8080
    links:
      - api-database

Configure the MongoDB host name via a Spring config property, using the service name defined in docker-compose:

application.properties
spring.data.mongodb.host=api-database

Running Docker Compose
A single docker-compose up command is needed to start the application. The command will create one container for the database and one for the Spring Boot app, as defined in the compose file.

Summary

Using Compose is basically a three-step process:

  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.

  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.

  3. Run docker-compose up and Compose starts and runs your entire app.

The code for this post is available on my Github account here


Creating Custom Spring Boot Starter To Implement Cross-Cutting Concerns

Nowadays Spring Boot has become the de facto standard for many enterprise web developments. Spring Boot improves developer productivity by implementing a lot of cross-cutting concerns as starter projects. We just add these dependencies to our projects or configure a few properties, and Spring Boot does the magic for us through autoconfiguration. The starter projects automatically configure a lot of things for us, which helps us get started more quickly.
However, with so much magic happening in the background, it's very important to know how things work.

The code for this post is available for download here.

How Spring Boot’s Starter Works

On startup, Spring Boot checks for a spring.factories file, located in the META-INF directory.

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
ns.aop.LogMethodExecutionTimeAutoConfiguration

All classes annotated with @Configuration should be listed under the EnableAutoConfiguration key in the spring.factories file.
Spring will create beans based on the configurations and conditions defined in the configuration classes. We will see this in detail with an example.

Let's Create a Library That Logs Method Execution Time

Let's imagine we want to log the method execution time for a few methods in our project. We should be able to enable/disable this feature based on a property.

Creating Our Custom Spring Boot Starter Project

Create a Spring Boot project with the dependencies below.

Pom File
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>${spring.boot.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-autoconfigure</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-configuration-processor</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>

spring-boot-dependencies allows us to use any Spring dependency.
spring-boot-autoconfigure is needed to use the autoconfigure feature.
spring-boot-configuration-processor generates metadata for our configuration properties, so IDEs can offer autocompletion.

Use AOP to log method execution time

Create a simple annotation, LogMethodExecutionTime, to be used on methods, and an aspect to log the time.

Annotation & Aspect
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface LogMethodExecutionTime {
}

@Aspect
@Slf4j
public class LogMethodExecutionTimeAspect {

    @Around("@annotation(LogMethodExecutionTime)")
    public Object logExecutionTime(ProceedingJoinPoint joinPoint) throws Throwable {
        final long start = System.currentTimeMillis();
        final Object proceed = joinPoint.proceed();
        final long executionTime = System.currentTimeMillis() - start;
        log.info(joinPoint.getSignature() + " executed in " + executionTime + "ms");
        return proceed;
    }
}

Creating Our Own Autoconfiguration

We want the method-execution-time logging to be enabled based on a property. Spring provides a lot of @Conditional annotations; based on different conditions, we can control the configuration of a starter project. For this example we use @ConditionalOnProperty.
Because matchIfMissing is set to true, our starter is active when the logging.api.enabled property is missing or set to true; setting it explicitly to any other value (e.g. false) disables it.

Configuration For our Starter
@Configuration
@ConditionalOnProperty(name = "logging.api.enabled", havingValue = "true", matchIfMissing = true)
public class LogMethodExecutionTimeAutoConfiguration {

    @Bean
    public LogMethodExecutionTimeAspect getLogMethodExecutionTimeAspect() {
        return new LogMethodExecutionTimeAspect();
    }
}
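As a sanity check of the condition's semantics, here is a plain-Java approximation (not Spring code; the class and method names are made up for illustration) of how @ConditionalOnProperty with havingValue = "true" and matchIfMissing = true decides whether the configuration applies:

```java
import java.util.Map;

public class ConditionalOnPropertySketch {
    // Approximates @ConditionalOnProperty(name = "logging.api.enabled",
    // havingValue = "true", matchIfMissing = true): the configuration matches
    // when the property is missing or equals "true"; any other value disables it.
    public static boolean matches(Map<String, String> environment) {
        String value = environment.get("logging.api.enabled");
        if (value == null) return true;   // matchIfMissing = true
        return "true".equals(value);      // havingValue = "true"
    }
}
```

With matchIfMissing = false (the default) the first branch would return false instead, and users of the starter would have to opt in explicitly.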

spring.factories File

We need to create a special file called spring.factories. For our custom starter to be picked up by Spring Boot, we have to create spring.factories and place it in the src/main/resources/META-INF folder. This file lists all classes annotated with @Configuration.

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
ns.aop.LogMethodExecutionTimeAutoConfiguration

Note Naming Convention
You should make sure to provide a proper namespace for your starter. Do not start your module names with spring-boot, even if you use a different Maven groupId.
As a rule of thumb, you should name a combined module after the starter. For example, assume that you are creating a starter for “acme” and that you name the auto-configure module acme-spring-boot-autoconfigure and the starter acme-spring-boot-starter. If you only have one module that combines the two, name it acme-spring-boot-starter.

Using the custom starter

Let’s create a sample Spring Boot application client to use our custom starter.

Client
<dependency>
    <groupId>ns</groupId>
    <artifactId>method-execution-time-logging-spring-boot-starter</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</dependency>

// define below property
logging.api.enabled=true

// Main class
@SpringBootApplication
public class ClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClientApplication.class, args);
    }

    @Bean
    ApplicationRunner init(TestClass testClass) {
        return (ApplicationArguments args) -> dataSetup(testClass);
    }

    private void dataSetup(TestClass testClass) throws InterruptedException {
        testClass.run();
    }
}

// Test class
@Component
public class TestClass {

    @LogMethodExecutionTime
    public void run() throws InterruptedException {
        Thread.sleep(3000);
    }
}

The code for this post is available for download here.


My AWS Solutions Architect Associate Certification Preparation Guide

unsplash-logoBen White

I recently (September 2019) completed the AWS Solutions Architect Associate certification on my first attempt, with a score of 89%. I want to share my experience and exam preparation tips with you.

My Background about AWS and Cloud Technologies

I had no prior AWS experience or knowledge, but I had previous work experience on another IaaS platform, so I was well aware of most of the cloud concepts covered in the exam. It was easy for me to relate to the problem and pain point any given AWS service was trying to solve. I think understanding the motivation behind creating a service is very important for understanding the service itself; this helped me see the bigger picture.

Preparation Timeline

AWS suggests you should have a minimum of one year of working experience on the AWS platform, but it's not mandatory. When I did my research, most people recommended at least three to four months of study (2-3 hrs every day). Obviously we cannot generalize this; it depends on the individual.

Please take your time to study; do not rush to book the exam. Give yourself enough time: the platform has more than 140 different services, and new services are launching all the time. Trying to understand the services can be very overwhelming and time consuming. The Architect Associate exam does not cover all services, but it still covers a lot.
I personally made the mistake of booking the exam in advance, giving myself only three weeks for preparation alongside 8-9 hrs of office work. For the last few days I had to push myself to cover all the services.

Preparation Resources

I use the Udemy platform for my regular study; for AWS I took the courses below from Udemy.

AWS Certified Solutions Architect - Associate 2019, by Ryan Kroonenburg and Faye Ellis
This is one of the most popular courses for the CSAA certification. The content is very good and up to date. I would definitely recommend this course.
I rated this course 4.5 stars on Udemy.

Ultimate AWS Certified Solutions Architect Associate 2019, by Stephane Maarek
This is also a very good course. It provides more detail and more hands-on work compared to Ryan's course, and it was my primary source for preparation.
I would definitely recommend this course. I rated it 5 stars on Udemy.

I would suggest not relying on any single course; complete at least two courses of your choice.

Practice Exams
Practice exams are equally important. I used the two sets of practice exams below. All exam questions are scenario based, so it's very important to prepare yourself for such questions. Both sets include 6 practice tests, and the quality of the questions is good; the real exam has similar (but not exactly the same) questions. I rated both of these courses 5 stars on Udemy.

You might not do well in these practice exams at first. Read the explanations of the answers and repeat the tests. Due to limited time, I was not able to repeat them.

AWS Certified Solutions Architect Associate Practice Exams, by Neal Davis, Digital Cloud Training
My first-attempt scores were 46%, 49%, 60%, 76%, 53%, 63%; I was able to pass only one test on my first attempt.

AWS Certified Solutions Architect Associate Practice Exams, by Jon Bonso, Tutorials Dojo
My first-attempt scores were 75%, 75%, 68%, 80%, 86%, 75%. I took these tests last.

FAQs & Whitepapers
It is suggested that you go through the FAQs and whitepapers before appearing for the exam. I didn't find time to read any whitepapers or FAQs.

Preparation And Exam Tips

These are not magical tips, and you probably know most of them already. Just make sure you check all the boxes.

Hands-on using the free tier: Do not start your preparation with only the exam in mind. Do hands-on work whenever possible, and approach it as if you were implementing it for an enterprise-level application; the exam will test you on exactly such scenarios.

Taking notes: Start taking notes as soon as possible. The exam covers a lot of topics, and it's difficult to remember everything. Video tutorials take a lot of time, and you might not be able to repeat them.

Read questions carefully and take your time to understand them: Most of the questions are descriptive and run up to 3 lines. Look for keywords like cost effective, managed service, minimum configuration, availability and durability; these keywords will help you select the most appropriate option. Try to eliminate the wrong answers first rather than hunting for the correct one.

VPC is very important
VPC is very important from the exam point of view, and I got a lot of questions about it. You need to understand which services should be in a public subnet versus a private subnet, and how services deployed in private subnets can interact with the internet and with other services.

Disaster recovery
Disaster recovery also covers a good portion of the exam. Try to understand and memorize how each service behaves in case of failure, and how to configure services for disaster recovery.
E.g. does the service back up data, where does it copy it, is it automatic or does it need to be configured, how does failover work, etc.

Reality Check After Certification

Does it mean I am now a great cloud architect after certification? Of course not :)
Real-world enterprise applications bring a lot of complexity and unknown challenges, and no exam can test those scenarios.

It's very important to keep ourselves updated with what's happening in the technology space. A certification can give me an edge over other candidates when applying for a job, or help me get a better salary, but it does not guarantee that I will be able to solve the problem.

I want to become a good software architect, and getting a certification like this is just one step forward.

Please let me know about your AWS exam experience and any tips you want to share.
