
Handle S3 file downloads without running out of resources


php.cn editor Xinyi is here to introduce an efficient way to handle S3 file downloads without exhausting server resources. S3 is Amazon's scalable cloud storage service, but naive handling of large file downloads can exhaust an application server's memory. This article walks through a Go (go-gin) question and answer: instead of buffering whole files on the server, the handler streams the S3 object directly to the client, improving resource usage and the user experience. Let's take a look!

Question content

I have a go-gin application that allows multiple file types to be uploaded to and downloaded from S3.

All files are encrypted before upload to S3 using the AWS s3crypto client with AES-GCM and keys from KMS. So, as far as the S3 bucket is concerned, everything stored there is opaque binary.
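
For context, this kind of client-side encryption is typically set up with the aws-sdk-go v1 s3crypto package. The sketch below is illustrative only, not the application's actual code: it assumes the session, kms, s3, s3crypto, and aws packages are imported, and the constructor names are the ones I'd expect in recent v1 SDK releases, so double-check them against your SDK version.

sess := session.Must(session.NewSession())
kmsClient := kms.New(sess)

// Register the algorithms the decrypting client may use:
// AES-GCM for the content, KMS (with encryption context) for the data-key wrap.
registry := s3crypto.NewCryptoRegistry()
if err := s3crypto.RegisterAESGCMContentCipher(registry); err != nil {
    panic(err)
}
if err := s3crypto.RegisterKMSContextWrapWithAnyCMK(registry, kmsClient); err != nil {
    panic(err)
}

// GetObject through this client transparently decrypts the object body.
s3DecryptionClient, err := s3crypto.NewDecryptionClientV2(sess, registry)
if err != nil {
    panic(err)
}
obj, err := s3DecryptionClient.GetObject(&s3.GetObjectInput{
    Bucket: aws.String("bucketName"),
    Key:    aws.String("UserFileName"),
})

The point that matters for this question is that obj.Body is an io.ReadCloser over the decrypted content, which is what the handlers below read from.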

I can use the SDK's GetObject call to download the file to the server and decrypt it, then write it to a temporary file and serve that file to the client for download.

The problem is that S3 holds files as large as 10 GB and the server is accessed by multiple users every day. Writing temporary files on a server with 16 GB of RAM can quickly exhaust memory, and we also have to be mindful of the cost of running such a server.

The bottleneck is that the file must be decrypted before it can be served. An S3 presigned URL would otherwise be sufficient for this use case, but presigned URLs do not decrypt anything unless the encryption was done by the client; in our case AWS handles the encryption, so that solution is not feasible.
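
For reference, generating a presigned download URL in aws-sdk-go v1 looks roughly like the sketch below (illustrative identifiers; it assumes an existing session sess and the aws, s3, and time imports). It only authorizes access to the stored bytes, which in this setup are AES-GCM ciphertext, so it cannot replace server-side decryption here.

req, _ := s3.New(sess).GetObjectRequest(&s3.GetObjectInput{
    Bucket: aws.String("bucketName"),
    Key:    aws.String("UserFileName"),
})
// A client following presignedURL receives the object exactly as stored,
// i.e. still encrypted -- S3 never decrypts on its side.
presignedURL, err := req.Presign(15 * time.Minute)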

Does anyone have tips or a possible approach where we can use go-gin/NGINX to stream files directly to the client?

How the file download is currently handled

s3FileStream, _ := s3c.GetBucketItem(&utils.S3ObjectBucketInput{
    Bucket: "bucketName",
    Key:    "UserFileName",
})

// Reads the entire decrypted object into memory -- this is what exhausts RAM for large files.
fileBody, err := io.ReadAll(s3FileStream.Body)
if err != nil {
    panic(err.Error())
}

fileExtension := s3FileStream.Metadata["X-Amz-Meta-Type"]

// Writes a second full copy to disk as a temporary file before responding.
err = os.WriteFile("file"+*fileExtension, fileBody, 0600) // temp file
if err != nil {
    panic(err.Error())
}
c.JSON(http.StatusCreated, string(fileBody))
c.Done()


Workaround

One option is to write the object directly to the client as the response body:

s3FileStream, _ := s3c.GetBucketItem(&utils.S3ObjectBucketInput{
    Bucket: "bucketName",
    Key:    "UserFileName",
})
fileExtension := s3FileStream.Metadata["X-Amz-Meta-Type"]
// gin copies the body from the reader into the response instead of buffering it in memory.
c.DataFromReader(http.StatusOK, 0, "application/octet-stream",
    s3FileStream.Body,
    map[string]string{"Content-Disposition": "attachment; filename=" + "file" + *fileExtension})
c.Done()

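A fuller sketch of a streaming handler is shown below. This is not the original poster's code: it assumes the question's s3c.GetBucketItem wrapper returns the decrypting client's GetObject output, uses a hypothetical :key route parameter, and needs the io, log, net/http, and gin packages imported. Instead of declaring a Content-Length (unreliable here, since the stored encrypted object is not the same size as the plaintext), it sets the download headers itself and copies the body straight into the response writer, so the transfer is chunked and the handler never holds the whole file.

func downloadHandler(c *gin.Context) {
    s3FileStream, err := s3c.GetBucketItem(&utils.S3ObjectBucketInput{
        Bucket: "bucketName",
        Key:    c.Param("key"), // hypothetical route parameter
    })
    if err != nil {
        c.AbortWithStatus(http.StatusInternalServerError)
        return
    }
    defer s3FileStream.Body.Close()

    fileExtension := s3FileStream.Metadata["X-Amz-Meta-Type"]

    // Send download headers, then stream the body without reading it into memory.
    c.Header("Content-Type", "application/octet-stream")
    c.Header("Content-Disposition", "attachment; filename=file"+*fileExtension)
    c.Status(http.StatusOK)
    if _, err := io.Copy(c.Writer, s3FileStream.Body); err != nil {
        // Headers are already sent, so we can only log the failure and stop writing.
        log.Printf("streaming download failed: %v", err)
    }
}

Whether memory stays flat end to end also depends on how the decryption layer buffers internally; the handler itself, at least, streams rather than accumulates.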