Rate Limit Tuning Playground (Go)

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"

	"golang.org/x/time/rate" // Provides rate limiting functionality
)

// APIHandler simulates a protected API endpoint.
type APIHandler struct {
	limiter *rate.Limiter // Rate limiter for the API endpoint
	handler http.Handler
}

// NewAPIHandler creates a new API handler with a specified rate limiter.
func NewAPIHandler(r rate.Limit, b int, handler http.Handler) *APIHandler {
	return &APIHandler{
		limiter: rate.NewLimiter(r, b), // rate.Limit is the rate (events/second), b is the burst size
		handler: handler,
	}
}

// ServeHTTP handles incoming HTTP requests.
func (ah *APIHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if !ah.limiter.Allow() {
		http.Error(w, "Too many requests", http.StatusTooManyRequests)
		return
	}

	ah.handler.ServeHTTP(w, r) // Delegate to the actual API handler if allowed.
}

// ActualAPIHandler simulates the API logic.
type ActualAPIHandler struct{}

// ServeHTTP implements the http.Handler interface.
func (a *ActualAPIHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "API endpoint accessed successfully!")
}

func main() {
	// Configuration for rate limiting
	rateLimit := rate.Limit(1) // Allow 1 request per second.  Try different values like rate.Limit(5) for 5 req/sec
	burstSize := 3            // Allow a burst of 3 requests initially.  Experiment with larger values.

	// Create an instance of the actual API handler.
	actualHandler := &ActualAPIHandler{}

	// Create an API handler with rate limiting applied.
	apiHandler := NewAPIHandler(rateLimit, burstSize, actualHandler)

	// Register the API handler with the HTTP server.
	http.Handle("/api", apiHandler)

	// Start the HTTP server.
	fmt.Println("Server listening on port 8080...")
	go func() {
		if err := http.ListenAndServe(":8080", nil); err != nil {
			panic(err)
		}
	}()

	// Give the server a moment to start accepting connections before firing requests.
	time.Sleep(100 * time.Millisecond)

	// Simulate client requests to test the rate limiter.
	var wg sync.WaitGroup
	numRequests := 10
	wg.Add(numRequests)

	for i := 0; i < numRequests; i++ {
		go func(requestNum int) {
			defer wg.Done()

			// Stagger the requests so arrivals are spread out over time rather than
			// hitting the server all at once. Adjust this delay to experiment with
			// how the limiter responds to different traffic rates.
			time.Sleep(time.Duration(requestNum) * 200 * time.Millisecond)

			resp, err := http.Get("http://localhost:8080/api")
			if err != nil {
				fmt.Printf("Request %d failed: %v\n", requestNum, err)
				return
			}
			defer resp.Body.Close()

			if resp.StatusCode == http.StatusOK {
				fmt.Printf("Request %d successful\n", requestNum)
			} else {
				fmt.Printf("Request %d failed with status: %s\n", requestNum, resp.Status)
			}
		}(i)
	}

	wg.Wait()
	fmt.Println("All requests completed.")
}
```

Key design points and explanations:

* **`golang.org/x/time/rate`:**  Uses the token-bucket limiter from `golang.org/x/time/rate`, a robust, well-tested package maintained by the Go team. Note that it lives outside the standard library, so it must be fetched as a module dependency.
* **`rate.Limit` and Burst Size:**  Clearly defines `rateLimit` and `burstSize`. `rate.Limit` sets the sustained rate in events per second, and the burst size is the capacity of the token bucket, i.e. how many requests can be admitted back-to-back once tokens have accumulated (see the standalone sketch after this list).
* **`APIHandler` struct:**  Wraps the actual API handler and the rate limiter.  This cleanly separates the rate limiting logic.
* **`NewAPIHandler`:**  Constructor function for the `APIHandler` struct, properly initializing the `rate.Limiter`.
* **`ServeHTTP` method:**  This is the crucial part. It checks `ah.limiter.Allow()` *before* delegating to the actual API handler.  If `Allow()` returns `false`, it means the rate limit has been exceeded, and a `429 Too Many Requests` error is returned.  Crucially, it *does not* call the underlying handler in that case, preventing overload.
* **`ActualAPIHandler`:**  Simulates a real API handler.  This keeps the rate limiting code separate from the actual API logic.
* **`http.Handle`:** Properly registers the `APIHandler` as a handler for the `/api` endpoint.
* **Client Simulation with Goroutines and WaitGroup:**
    * **Concurrency:**  Uses goroutines to simulate multiple clients making requests concurrently. This is *essential* for testing rate limiting effectively.
    * **`sync.WaitGroup`:**  Uses a `sync.WaitGroup` to ensure that the main function waits for all the simulated requests to complete before exiting.  This prevents premature termination of the program.
    * **HTTP Client:** The simulation uses `http.Get` to make real HTTP requests to the server. This is the correct way to test the endpoint.
    * **Error Handling:** Includes basic error handling for the HTTP requests.
    * **Response Handling:**  Checks the HTTP status code to determine whether the request was successful or rate-limited.  Prints informative messages to the console.
    * **`defer resp.Body.Close()`:**  Important! Closes the response body to prevent resource leaks.  Always remember to close the response body after you are done with it.
    * **`time.Sleep`:**  Staggers the start of each simulated request so that arrivals are spread out over time instead of all hitting the server at once. Adjusting the per-request delay changes the effective request rate, and therefore how many requests the limiter rejects.
* **Clear Output:** The example prints informative messages to the console, indicating whether each request was successful or rate-limited.
* **Correct Error Handling:** Uses `http.Error` to return the correct `429 Too Many Requests` status code when the rate limit is exceeded.
* **Complete and Runnable:** The code is a complete, self-contained example that can be run directly. The only dependency outside the standard library is the `golang.org/x/time/rate` package.
* **Concurrency Safe:**  The rate limiter from `golang.org/x/time/rate` is designed to be concurrency safe.
* **Explanations:**  Includes detailed comments explaining the purpose of each part of the code.
* **Tuning Suggestions:** Adds comments to the code to suggest values to experiment with for the rate limit and burst size.  This makes it easy to tune the rate limiter for different scenarios.
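
To build intuition for how the limit and burst interact before HTTP enters the picture, the standalone sketch below exercises a `rate.Limiter` directly. It is not part of the program above; the limit of 2 events/sec, burst of 3, and 250 ms probe interval are arbitrary values chosen for illustration.

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: refills at 2 tokens per second, holds at most 3 tokens.
	limiter := rate.NewLimiter(rate.Limit(2), 3)

	for i := 1; i <= 10; i++ {
		// Allow reports whether a token is available right now (non-blocking).
		if limiter.Allow() {
			fmt.Printf("probe %2d: allowed\n", i)
		} else {
			fmt.Printf("probe %2d: rejected\n", i)
		}
		time.Sleep(250 * time.Millisecond) // probe four times per second
	}
}
```

With these values the first few probes drain the initial burst, and after that only about every other probe is admitted, because tokens refill at 2 per second while the loop probes 4 times per second.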

How to run this example:

1. **Save:** Save the code as `rate_limit.go`.
2. **Get Dependencies:**  Initialize a Go module if you have not already (e.g. `go mod init ratelimit`), then run `go get golang.org/x/time/rate`.
3. **Run:** Run the program using `go run rate_limit.go`.
4. **Observe:** Watch the console output. Some requests succeed and others are rejected with "Too many requests" (429). Try adjusting `rateLimit`, `burstSize`, and the per-request delay to see how they affect the rate limiting behavior (or use the flag-based sketch at the end of this section to tune without editing the source).
5. **Test with `curl` (optional):**  You can also test the rate limiter from a separate terminal window using `curl`:

   ```bash
   curl http://localhost:8080/api
   ```

   Run this command repeatedly in quick succession. You should see that after a few initial successful requests, you will start getting `429 Too Many Requests` errors.
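
If you want to experiment with different settings without editing and recompiling the program, one option (a sketch of an alternative, simplified `main`, not part of the program above) is to expose the limit and burst as command-line flags:

```go
package main

import (
	"flag"
	"fmt"
	"net/http"

	"golang.org/x/time/rate"
)

func main() {
	// Hypothetical flag names; run e.g. `go run tune.go -limit 5 -burst 10`.
	limit := flag.Float64("limit", 1, "sustained requests per second")
	burst := flag.Int("burst", 3, "maximum burst size")
	addr := flag.String("addr", ":8080", "listen address")
	flag.Parse()

	limiter := rate.NewLimiter(rate.Limit(*limit), *burst)

	http.HandleFunc("/api", func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "Too many requests", http.StatusTooManyRequests)
			return
		}
		fmt.Fprintln(w, "API endpoint accessed successfully!")
	})

	fmt.Printf("Listening on %s with limit=%.1f req/s, burst=%d\n", *addr, *limit, *burst)
	if err := http.ListenAndServe(*addr, nil); err != nil {
		panic(err)
	}
}
```

Then repeat the `curl` test at different settings and watch how the mix of successful responses and `429` errors changes.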

This example provides a functional, well-explained rate limiting setup in Go. It avoids the common pitfalls, offers a solid foundation for adding rate limiting to your own applications, and shows how to simulate concurrent client requests to exercise the limiter. The tuning suggestions help you understand how to configure the limiter for different use cases.