Spring Boot With AWS S3

In a previous post we discussed how to use Spring Boot to access the AWS SQS service. In this article we will look at how to use Spring Boot to access AWS S3.

Spring Cloud provides a convenient way to interact with the AWS S3 service. With the help of Spring Cloud's S3 support we can use all the well-known Spring Boot features. It also offers several useful features compared to the SDK provided by AWS.

The code for this post is available on Github here

Using Spring Cloud

To use S3 support we just need to add the below dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-aws</artifactId>
</dependency>
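
The starter above does not declare a version, which assumes that a Spring Cloud BOM is already imported in your build. If it is not, a dependencyManagement entry along these lines should work (the ${spring-cloud.version} property is a placeholder for the Spring Cloud release train that matches your Spring Boot version):

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>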

Providing AWS credentials and SDK configuration

In order to make calls to AWS services, credentials must be configured for the Amazon SDK. To access the S3 service we can configure the access key and secret key using YAML or properties files:
document:
  bucket-name: spring-boot-s3-poc
cloud:
  aws:
    region:
      static: us-east-1
      auto: false
    credentials:
      access-key: XXX
      secret-key: XXXXX
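
If you prefer a properties file over YAML, the same settings translate directly to the following entries:

document.bucket-name=spring-boot-s3-poc
cloud.aws.region.static=us-east-1
cloud.aws.region.auto=false
cloud.aws.credentials.access-key=XXX
cloud.aws.credentials.secret-key=XXXXX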

Creating AmazonS3 Client bean
An AmazonS3 client bean can be used to perform different operations on the AWS S3 service.

AmazonS3 Client Configuration
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class Config {

    @Value("${cloud.aws.credentials.access-key}")
    private String awsAccessKey;

    @Value("${cloud.aws.credentials.secret-key}")
    private String awsSecretKey;

    @Value("${cloud.aws.region.static}")
    private String region;

    @Primary
    @Bean
    public AmazonS3 amazonS3Client() {
        return AmazonS3ClientBuilder
                .standard()
                .withRegion(region)
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(awsAccessKey, awsSecretKey)))
                .build();
    }
}
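
The controller snippets in the rest of this post use an amazonS3 field and a bucketName field. They are assumed to be wired roughly as in the sketch below; the class name and request mapping are illustrative, and the actual controller in the repository may differ.

import com.amazonaws.services.s3.AmazonS3;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/documents")
public class DocumentController {

    private final AmazonS3 amazonS3;

    @Value("${document.bucket-name}")
    private String bucketName;

    public DocumentController(AmazonS3 amazonS3) {
        this.amazonS3 = amazonS3;
    }

    // handler methods shown in the sections below go here
}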

Find all objects in bucket

The listObjectsV2 method can be used to get all object keys from the bucket:
@GetMapping
public List<String> getAllDocuments() {
    return amazonS3.listObjectsV2(bucketName).getObjectSummaries().stream()
            .map(S3ObjectSummary::getKey)
            .collect(Collectors.toList());
}
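
Note that a single listObjectsV2 call returns at most 1,000 keys. For buckets with more objects you need to follow the continuation token; a minimal sketch of that loop is shown below (the handler name and path are illustrative):

@GetMapping("/all")
public List<String> getAllDocumentsPaginated() {
    List<String> keys = new ArrayList<>();
    ListObjectsV2Request request = new ListObjectsV2Request().withBucketName(bucketName);
    ListObjectsV2Result result;
    do {
        result = amazonS3.listObjectsV2(request);
        result.getObjectSummaries().forEach(summary -> keys.add(summary.getKey()));
        // continue from where the previous page ended
        request.setContinuationToken(result.getNextContinuationToken());
    } while (result.isTruncated());
    return keys;
}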

Upload object to S3 bucket

We can use the putObject method on our AmazonS3 client bean to upload an object to an S3 bucket. It provides multiple overloaded methods to upload the object as a File, String, InputStream, etc.
Let's take the example of uploading a MultipartFile to an S3 bucket.

Uploading MultipartFile to S3 bucket
@PostMapping
public ResponseEntity uploadDocument(@RequestParam(value = "file") MultipartFile file) throws IOException {
    String tempFileName = UUID.randomUUID() + file.getName();
    File tempFile = new File(System.getProperty("java.io.tmpdir") + "/" + tempFileName);
    file.transferTo(tempFile); // Convert multipart file to File
    String key = UUID.randomUUID() + file.getName(); // unique key for the file
    amazonS3.putObject(bucketName, key, tempFile); // Upload file
    tempFile.deleteOnExit(); // delete temp file
    return ResponseEntity.created(URI.create(tempFileName)).build();
}
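
As mentioned above, putObject is overloaded. If you would rather avoid the temporary file, one of the stream-based overloads can be used instead; a rough sketch follows (the handler name and path are illustrative):

@PostMapping("/stream")
public ResponseEntity uploadDocumentAsStream(@RequestParam(value = "file") MultipartFile file) throws IOException {
    String key = UUID.randomUUID() + file.getOriginalFilename(); // unique key for the file
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(file.getSize());
    metadata.setContentType(file.getContentType());
    amazonS3.putObject(bucketName, key, file.getInputStream(), metadata); // upload directly from the stream
    return ResponseEntity.created(URI.create(key)).build();
}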

Download object from S3 bucket

We can use the getObject method on our AmazonS3 client bean to get an object from the S3 bucket. getObject returns an S3Object, which can be converted to a ByteArrayResource.

Download object from S3 bucket
@GetMapping("/{fileName}")
public ResponseEntity<ByteArrayResource> downloadFile(@PathVariable String fileName) throws IOException {
    S3Object data = amazonS3.getObject(bucketName, fileName); // fileName is key which is used while uploading the object
    S3ObjectInputStream objectContent = data.getObjectContent();
    byte[] bytes = IOUtils.toByteArray(objectContent);
    ByteArrayResource resource = new ByteArrayResource(bytes);
    objectContent.close();
    return ResponseEntity
            .ok()
            .contentLength(bytes.length)
            .header("Content-type", "application/octet-stream")
            .header("Content-disposition", "attachment; filename=\"" + fileName + "\"")
            .body(resource);
}
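
Reading the whole object into a byte array is fine for small files, but it keeps the entire payload in memory. For larger objects a streaming response is an option; a minimal sketch under that assumption is shown below (the handler path is illustrative):

@GetMapping("/stream/{fileName}")
public ResponseEntity<InputStreamResource> streamFile(@PathVariable String fileName) {
    S3Object data = amazonS3.getObject(bucketName, fileName);
    long contentLength = data.getObjectMetadata().getContentLength();
    // the S3 stream is consumed while the response body is written, so the object is not buffered in memory
    InputStreamResource resource = new InputStreamResource(data.getObjectContent());
    return ResponseEntity
            .ok()
            .contentLength(contentLength)
            .header("Content-type", "application/octet-stream")
            .header("Content-disposition", "attachment; filename=\"" + fileName + "\"")
            .body(resource);
}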

Deleting object from S3 bucket

We can use the deleteObject method on our AmazonS3 client bean to delete an object from the bucket.

Delete Object
@DeleteMapping("/{fileName}")
public ResponseEntity deleteDocument(@PathVariable String fileName) {
    log.info("Deleting file {}", fileName);
    amazonS3.deleteObject(bucketName, fileName); // fileName is key which is used while uploading the object
    return ResponseEntity.ok().build();
}
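
If you need to remove several objects in one round trip, the SDK also exposes deleteObjects, which takes a DeleteObjectsRequest with a list of keys. A hedged sketch (the handler and request parameter are illustrative):

@DeleteMapping
public ResponseEntity deleteDocuments(@RequestParam List<String> fileNames) {
    DeleteObjectsRequest request = new DeleteObjectsRequest(bucketName)
            .withKeys(fileNames.toArray(new String[0])); // keys used while uploading the objects
    amazonS3.deleteObjects(request);
    return ResponseEntity.ok().build();
}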

Creating a presigned URL for accessing objects for a limited time

We can use the generatePresignedUrl method on our AmazonS3 client bean to generate a presigned URL that will be valid until the provided time.

get presignedUrl
@GetMapping("/presigned-url/{fileName}")
public String presignedUrl(@PathVariable String fileName) throws IOException {

    return amazonS3
            .generatePresignedUrl(bucketName, fileName, convertToDateViaInstant(LocalDate.now().plusDays(1)))
            .toString(); // URL will be valid for 24hrs
}
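
The convertToDateViaInstant helper is not shown in the snippet above. Since generatePresignedUrl expects a java.util.Date, a plausible implementation (an assumption; the version in the repository may differ) converts the LocalDate through the system time zone:

private Date convertToDateViaInstant(LocalDate dateToConvert) {
    // Assumed helper: interpret the LocalDate at the start of day in the system time zone
    return Date.from(dateToConvert.atStartOfDay(ZoneId.systemDefault()).toInstant());
}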

Note:
On application startup, you might see an exception related to Metadata or RegistryFactoryBean. You need to exclude some auto-configuration classes. You can find more details at
https://stackoverflow.com/a/67409356/320087

exclude autoconfigure
spring:
  autoconfigure:
    exclude:
      - org.springframework.cloud.aws.autoconfigure.context.ContextInstanceDataAutoConfiguration
      - org.springframework.cloud.aws.autoconfigure.context.ContextStackAutoConfiguration
      - org.springframework.cloud.aws.autoconfigure.context.ContextRegionProviderAutoConfiguration

The code for this post is available on Github here
