
From Node.js to Go: Supercharging Downloads of Thousands of Files as a Single Zip


As developers, we often face challenges when dealing with large-scale data processing and delivery. At Kamero, we recently tackled a significant bottleneck in our file delivery pipeline. Our application allows users to download thousands of files associated with a particular event as a single zip file. This feature, powered by a Node.js-based Lambda function responsible for fetching and zipping files from S3 buckets, was struggling with memory constraints and long execution times as our user base grew.

This post details our journey from a resource-hungry Node.js implementation to a lean and lightning-fast Go solution that efficiently handles massive S3 downloads. We'll explore how we optimized our system to provide users with a seamless experience when requesting large numbers of files from specific events, all packaged into a convenient single zip download.

The Challenge

Our original Lambda function faced several critical issues when processing large event-based file sets:

  1. Memory Consumption: Even with 10GB of allocated memory, the function would fail when processing 20,000+ files for larger events.
  2. Execution Time: Zip operations for events with numerous files were taking too long, sometimes timing out before completion.
  3. Scalability: The function couldn't handle the increasing load efficiently, limiting our ability to serve users with large file sets from popular events.
  4. User Experience: Slow download preparation times were impacting user satisfaction, especially for events with substantial file counts.

The Node.js Implementation: A Quick Look

Our original implementation used the s3-zip library to create zip files from S3 objects. Here's a simplified snippet of how we were processing files:

const s3Zip = require("s3-zip");

// ... other code ...

// Archive the event's files from S3 into a single zip stream
const body = s3Zip.archive(
  { bucket: bucketName },
  eventId,
  files,
  entryData
);

await uploadZipFile(Upload_Bucket, zipfileKey, body);

While this approach worked, it loaded all files into memory before creating the zip, leading to high memory usage and potential out-of-memory errors for large file sets.

Enter Go: A Game-Changing Rewrite

We decided to rewrite our Lambda function in Go, leveraging its efficiency and built-in concurrency features. The results were astounding:

  1. Memory Usage: Dropped from 10GB to a mere 100MB for the same workload.
  2. Speed: The function became approximately 10 times faster.
  3. Reliability: Successfully processes 20,000+ files without issues.

Key Optimizations in the Go Implementation

1. Efficient S3 Operations

We used the AWS SDK for Go v2, which offers better performance and lower memory usage compared to v1:

cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
    log.Fatalf("failed to load AWS config: %v", err)
}
s3Client = s3.NewFromConfig(cfg)
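
The uploader used later in the streaming step is the SDK's transfer manager (from github.com/aws/aws-sdk-go-v2/feature/s3/manager). It isn't shown in the original snippets, so here is a minimal sketch, under the assumption that it is built from the same client:

// The transfer manager performs multipart uploads and accepts a plain
// io.Reader, which is what lets it consume the read end of an io.Pipe.
uploader := manager.NewUploader(s3Client)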

2. Concurrent Processing

Go's goroutines allowed us to process multiple files concurrently:

var wg sync.WaitGroup
sem := make(chan struct{}, 10) // Limit concurrent operations

for _, photo := range photos {
    wg.Add(1)
    go func(photo Photo) {
        defer wg.Done()
        sem <- struct{}{} // Acquire semaphore
        defer func() { <-sem }() // Release semaphore

        // Process photo
    }(photo)
}

wg.Wait()
Copy after login

This approach allows us to process multiple files simultaneously while controlling the level of concurrency to prevent overwhelming the system.
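
One thing the snippet above drops is per-file errors. A variant we find useful, not from the original post, is golang.org/x/sync/errgroup, which bounds concurrency the same way and also propagates the first failure (processPhoto is a hypothetical per-file worker):

g, ctx := errgroup.WithContext(ctx)
g.SetLimit(10) // same bound as the semaphore above

for _, photo := range photos {
    photo := photo // capture the loop variable (needed before Go 1.22)
    g.Go(func() error {
        return processPhoto(ctx, photo) // hypothetical worker function
    })
}

if err := g.Wait(); err != nil {
    return err // at least one file failed; surface the error
}

Either form works; the important part is bounding concurrency so that tens of thousands of simultaneous S3 requests don't exhaust sockets or memory.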

3. Streaming Zip Creation

Instead of loading all files into memory, we stream the zip content directly to S3:

pipeReader, pipeWriter := io.Pipe()

go func() {
    zipWriter := zip.NewWriter(pipeWriter)
    // Add files to zip
    // Close the pipe with the zip writer's error (if any), so a failed
    // zip aborts the upload instead of producing a truncated archive.
    pipeWriter.CloseWithError(zipWriter.Close())
}()

// Upload the streaming content to S3; the uploader consumes the pipe
// as the zip is produced, so the full archive never sits in memory.
_, err := uploader.Upload(ctx, &s3.PutObjectInput{
    Bucket: &destBucket,
    Key:    &zipFileKey,
    Body:   pipeReader,
})
if err != nil {
    return err // propagate the upload failure
}

This streaming approach significantly reduces memory usage and allows us to handle much larger file sets.
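
For completeness, here is a minimal sketch of what the elided "Add files to zip" step could look like, streaming each object from S3 straight into its zip entry. addToZip and sourceBucket are our illustrative names, not from the original code:

func addToZip(ctx context.Context, zipWriter *zip.Writer, key string) error {
    obj, err := s3Client.GetObject(ctx, &s3.GetObjectInput{
        Bucket: &sourceBucket,
        Key:    &key,
    })
    if err != nil {
        return err
    }
    defer obj.Body.Close()

    // The entry is written as the object downloads; only a small
    // buffer is ever resident in memory.
    entry, err := zipWriter.Create(key)
    if err != nil {
        return err
    }
    _, err = io.Copy(entry, obj.Body)
    return err
}

One caveat: archive/zip's Writer is not safe for concurrent use, so even with parallel downloads, writes into the zip must be funneled through the single goroutine that owns zipWriter.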

The Results

The rewrite to Go delivered impressive improvements:

  1. Memory Usage: Reduced by 99% (from 10GB to 100MB)
  2. Processing Speed: Approximately 10 times faster
  3. Reliability: Successfully handles 20,000+ files without issues
  4. Cost Efficiency: Lower memory usage and faster execution time result in reduced AWS Lambda costs

Lessons Learned

  1. Language Choice Matters: Go's efficiency and concurrency model made a massive difference in our use case.
  2. Understand Your Bottlenecks: Profiling our Node.js function helped us identify key areas for improvement.
  3. Leverage Cloud-Native Solutions: Using AWS SDK for Go v2 and understanding S3's capabilities allowed for better integration and performance.
  4. Think in Streams: Processing data as streams rather than loading everything into memory is crucial for large-scale operations.

Conclusion

Rewriting our Lambda function in Go not only solved our immediate scaling issues but also provided a more robust and efficient solution for our file processing needs. While Node.js served us well initially, this experience highlighted the importance of choosing the right tool for the job, especially when dealing with resource-intensive tasks at scale.

Remember, the best language or framework depends on your specific use case. In our scenario, Go's performance characteristics aligned perfectly with our needs, resulting in a significantly improved user experience and reduced operational costs.

Have you faced similar challenges with serverless functions? How did you overcome them? We'd love to hear about your experiences in the comments below!
