APIs are the backbone of modern applications. When I first started building APIs with Spring Boot, I was so focused on delivering features that I overlooked one crucial aspect: resilience. I learned the hard way that an API’s ability to gracefully handle failures and adapt to different conditions is what makes it truly dependable. Let me take you through some mistakes I made along the way and how I fixed them. Hopefully, you can avoid these pitfalls in your own journey.
What Happened: In one of my early projects, I built an API that made external calls to third-party services. I assumed those services would always respond quickly and didn’t bother setting timeouts. Everything seemed fine until traffic increased, and the third-party services started slowing down. My API would just hang indefinitely, waiting for a response.
Impact: The API’s responsiveness took a nosedive. Dependent services started failing, and users faced long delays—some even got the dreaded 500 Internal Server Error.
How I Fixed It: That’s when I realized the importance of timeout configurations. Here’s how I fixed it using Spring Boot:
import java.io.IOException;
import java.time.Duration;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    @Bean
    public RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder
                .setConnectTimeout(Duration.ofSeconds(5)) // max time to establish the connection
                .setReadTimeout(Duration.ofSeconds(5))    // max time to wait for a response
                .additionalInterceptors(new RestTemplateLoggingInterceptor())
                .build();
    }

    // Custom interceptor to log request/response timing for every external call
    static class RestTemplateLoggingInterceptor implements ClientHttpRequestInterceptor {

        private static final Logger log = LoggerFactory.getLogger(RestTemplateLoggingInterceptor.class);

        @Override
        public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                            ClientHttpRequestExecution execution) throws IOException {
            long startTime = System.currentTimeMillis();
            log.info("Making request to: {}", request.getURI());
            ClientHttpResponse response = execution.execute(request, body);
            long duration = System.currentTimeMillis() - startTime;
            log.info("Request completed in {}ms with status: {}", duration, response.getStatusCode());
            return response;
        }
    }
}
This configuration not only sets appropriate timeouts but also includes logging to help monitor external service performance.
What Happened: There was a time when an internal service we depended on went down for several hours. My API didn’t handle the situation gracefully. Instead, it kept retrying the failing requests, adding more load to an already stressed system.
Cascading failures are one of the most challenging problems in distributed systems. When one service fails, it can create a domino effect that brings down the entire system.
Impact: The repeated retries overwhelmed the system, slowing down other parts of the application and affecting all users.
How I Fixed It: That’s when I discovered the circuit breaker pattern. Using Spring Cloud Resilience4j, I was able to break the cycle.
import java.time.Duration;

import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import io.github.resilience4j.retry.RetryConfig;
import io.github.resilience4j.retry.annotation.Retry;
import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Configuration
public class Resilience4jConfig {

    @Bean
    public CircuitBreakerConfig circuitBreakerConfig() {
        return CircuitBreakerConfig.custom()
                .failureRateThreshold(50)                        // open the circuit once 50% of recent calls fail
                .waitDurationInOpenState(Duration.ofSeconds(60)) // stay open for 60s before probing again
                .permittedNumberOfCallsInHalfOpenState(2)
                .slidingWindowSize(2)
                .build();
    }

    @Bean
    public RetryConfig retryConfig() {
        return RetryConfig.custom()
                .maxAttempts(3)
                .waitDuration(Duration.ofSeconds(2))
                .build();
    }
}

@Service
@Slf4j
public class ResilientService {

    private final RestTemplate restTemplate;

    public ResilientService(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @CircuitBreaker(name = "internalService", fallbackMethod = "fallbackResponse")
    @Retry(name = "internalService")
    public String callInternalService() {
        return restTemplate.getForObject("https://internal-service.com/data", String.class);
    }

    // Fallback must match the protected method's return type, with the exception as the last parameter
    public String fallbackResponse(Exception ex) {
        log.warn("Circuit breaker activated, returning fallback response", ex);
        // FallbackResponse is a small helper class from this project
        return new FallbackResponse("Service temporarily unavailable", getBackupData()).toJson();
    }

    private Object getBackupData() {
        // Serve cached or default data while the dependency is down
        return new CachedDataService().getLatestValidData();
    }
}
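One thing worth noting: with the resilience4j-spring-boot starter, the annotation-driven "internalService" instance is typically configured through properties rather than raw config beans. A sketch of the equivalent application.yml, mirroring the values in the Java config above:

resilience4j:
  circuitbreaker:
    instances:
      internalService:
        failureRateThreshold: 50
        waitDurationInOpenState: 60s
        permittedNumberOfCallsInHalfOpenState: 2
        slidingWindowSize: 2
  retry:
    instances:
      internalService:
        maxAttempts: 3
        waitDuration: 2s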
This simple addition prevented my API from overwhelming itself, the internal service, and any third-party dependencies, keeping the whole system stable.
What Happened: Early on, I didn’t put much thought into error handling. My API either threw generic errors (like HTTP 500 for everything) or exposed sensitive internal details in stack traces.
Impact: Users were confused about what went wrong, and the exposure of internal details created potential security risks.
How I Fixed It: I decided to centralize error handling using Spring’s @ControllerAdvice annotation. Here’s what I did:
import java.time.LocalDateTime;

import lombok.extern.slf4j.Slf4j;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.context.request.ServletWebRequest;
import org.springframework.web.context.request.WebRequest;
import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;

@RestControllerAdvice
@Slf4j
public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {

    @ExceptionHandler(HttpClientErrorException.class)
    public ResponseEntity<ErrorResponse> handleHttpClientError(HttpClientErrorException ex, WebRequest request) {
        log.error("Client error occurred", ex);
        ErrorResponse error = ErrorResponse.builder()
                .timestamp(LocalDateTime.now())
                .status(ex.getStatusCode().value())
                .message(sanitizeErrorMessage(ex.getMessage()))
                .path(((ServletWebRequest) request).getRequest().getRequestURI())
                .build();
        return ResponseEntity.status(ex.getStatusCode()).body(error);
    }

    @ExceptionHandler(Exception.class)
    public ResponseEntity<ErrorResponse> handleGeneralException(Exception ex, WebRequest request) {
        log.error("Unexpected error occurred", ex);
        ErrorResponse error = ErrorResponse.builder()
                .timestamp(LocalDateTime.now())
                .status(HttpStatus.INTERNAL_SERVER_ERROR.value())
                // Deliberately generic: never leak stack traces or internals to clients
                .message("An unexpected error occurred. Please try again later.")
                .path(((ServletWebRequest) request).getRequest().getRequestURI())
                .build();
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(error);
    }

    // Strip anything that looks like a credential before echoing a message back to the client
    private String sanitizeErrorMessage(String message) {
        return message == null ? null
                : message.replaceAll("(password|secret|key)=\\[.*?\\]", "$1=[REDACTED]");
    }
}
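The ErrorResponse referenced above is a plain response DTO. The original doesn’t show it, so here is a minimal Lombok-based sketch with the fields the handler uses (field types are my assumptions):

import java.time.LocalDateTime;

import lombok.Builder;
import lombok.Value;

@Value
@Builder
public class ErrorResponse {
    LocalDateTime timestamp; // when the error occurred
    int status;              // HTTP status code
    String message;          // safe, user-facing description
    String path;             // the request path that failed
}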
This made error messages clear and secure, helping both users and developers.
What Happened: One fine day, we launched a promotional campaign, and the traffic to our API skyrocketed. While this was great news for the business, some users started spamming the API with requests, starving others of resources.
Impact: Performance degraded for everyone, and we received a flood of complaints.
How I Fixed It: To handle this, I implemented rate limiting using Bucket4j with Redis. Here’s an example:
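The gist: give each client its own token bucket and reject requests once the bucket is empty. Here’s a minimal sketch using Bucket4j’s core API (Spring Boot 3 / jakarta.servlet assumed; limits and class names are illustrative). For brevity the buckets live in an in-memory map; in a multi-instance deployment they would be backed by Redis through Bucket4j’s distributed ProxyManager integration instead:

import java.io.IOException;
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.Refill;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class RateLimitFilter extends OncePerRequestFilter {

    // One token bucket per client; with Redis these would be shared across instances
    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    private Bucket newBucket() {
        // Allow 100 requests per minute, refilled gradually
        return Bucket.builder()
                .addLimit(Bandwidth.classic(100, Refill.greedy(100, Duration.ofMinutes(1))))
                .build();
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String clientKey = request.getRemoteAddr(); // or an API key / user id
        Bucket bucket = buckets.computeIfAbsent(clientKey, k -> newBucket());

        if (bucket.tryConsume(1)) {
            chain.doFilter(request, response);
        } else {
            response.setStatus(429); // Too Many Requests
            response.getWriter().write("Rate limit exceeded. Please retry later.");
        }
    }
}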
This ensured fair usage and protected the API from abuse.
What Happened: Whenever something went wrong in production, it was like searching for a needle in a haystack. I didn’t have proper logging or metrics in place, so diagnosing issues took way longer than it should have.
Impact: Troubleshooting became a nightmare, delaying issue resolution and frustrating users.
How I Fixed It: I added Spring Boot Actuator for health checks and integrated Prometheus with Grafana for metrics visualization:
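The setup boils down to exposing the right Actuator endpoints and giving Prometheus something to scrape. A minimal sketch, assuming spring-boot-starter-actuator and micrometer-registry-prometheus are on the classpath (the health-check URL and class name are illustrative):

management:
  endpoints:
    web:
      exposure:
        include: health,info,prometheus
  endpoint:
    health:
      show-details: always

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class ExternalServiceHealthIndicator implements HealthIndicator {

    private final RestTemplate restTemplate;

    public ExternalServiceHealthIndicator(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @Override
    public Health health() {
        try {
            // Hypothetical health endpoint on the dependency we call
            restTemplate.getForObject("https://internal-service.com/health", String.class);
            return Health.up().build();
        } catch (Exception ex) {
            return Health.down(ex).build();
        }
    }
}

With this in place, Prometheus scrapes /actuator/prometheus and Grafana dashboards sit on top of those metrics.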
I also implemented structured logging using the ELK Stack (Elasticsearch, Logstash, Kibana). This made logs far more actionable.
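Structured logging mostly came down to emitting JSON instead of free-form text. A minimal logback-spring.xml sketch, assuming the logstash-logback-encoder dependency; Logstash then ships these JSON lines into Elasticsearch for querying in Kibana:

<configuration>
  <appender name="JSON" class="ch.qos.logback.core.ConsoleAppender">
    <!-- Emits one JSON object per log line, ready for Logstash/Elasticsearch ingestion -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON"/>
  </root>
</configuration>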
Building resilient APIs is a journey, and mistakes are part of the process. The key lessons I learned: set explicit timeouts on every external call; use circuit breakers to stop failures from cascading; centralize error handling so responses are clear and never leak internals; rate-limit to keep usage fair; and put monitoring and structured logging in place before you need them.
These changes transformed how I approach API development. If you’ve faced similar challenges or have other tips, I’d love to hear your stories!
End Note: Remember that resilience is not a feature you add—it's a characteristic you build into your system from the ground up. Each of these components plays a crucial role in creating APIs that not only work but continue to work reliably under stress.