Introduction
When you run a web server that serves large files, such as software packages, ISO images, or shareable documents, it's vital to control bandwidth usage. Without a download speed limiter, a single client could saturate your server's entire outgoing bandwidth, degrading service for all other users.
GOST (GO Simple Tunnel) is a powerful, Golang-based network tool renowned for high performance and versatile features, including multi-level bandwidth limiting, request rate limiting, and connection limiting. Its bandwidth limiter can operate at service, connection, and IP address levels simultaneously, providing a level of granular control that is often more sophisticated than what's available in traditional web servers alone.
By positioning GOST as a reverse proxy in front of your Nginx server, you can offload the rate-limiting responsibilities to a dedicated, highly efficient layer.
By the end, you will have a clean, containerized rate-limiting solution that can be extended, scaled, and integrated into a larger microservices architecture.
Why Choose GOST over Nginx's Built-in Limiting?
Nginx does offer the `limit_rate` directive for limiting download speeds. However, there are significant benefits to using a dedicated tool like GOST:
| Feature | GOST Limiter | Nginx limit_rate |
|---|---|---|
| Granularity | Applies to service, connection, and individual IP addresses simultaneously. | Only limits the speed per connection. |
| Multi-Connection Clients | Limits total bandwidth for a client by controlling all their connections together. | A client can bypass the limit by opening multiple connections, multiplying their effective speed. |
| Dynamic Configuration | Supports dynamic configuration via a Web API, allowing for real-time changes without service restart. | Configuration changes typically require a reload of the Nginx service. |
| Protocol Support | A multi-protocol tunnel tool that can act as a transparent proxy, HTTP/SOCKS proxy, port forwarder, and more. | Primarily a web server and reverse proxy. |
| Observability | Includes an observer component for traffic statistics, which can be integrated with the limiter for dynamic adjustments. | Lacks built-in, integrated traffic observation for dynamic rate limiting. |
For these reasons, GOST is an excellent choice for implementing a robust and fair traffic management system.
When to use GOST:
- You need per‑IP limits or per‑connection limits in the same configuration.
- Your service speaks a protocol other than plain HTTP (e.g. SOCKS5, TLS tunnel).
- You want to change limits on the fly without restarting Nginx or the service.
When to use Nginx alone:
- Your use case is simple HTTP/HTTPS download limiting.
- You don't need IP‑based or connection‑level granularity.
- You are already heavily invested in Nginx configuration and want to avoid an extra moving part.
Prerequisites
Before you begin, ensure you have:
- Basic familiarity with Linux commands.
- A server running Rocky Linux 10 with `root` or `sudo` access.
- A domain name or a public IP address for your cloud server.
- An SSH key added to your Hetzner Console account.
Example terminology
For this tutorial, we will use the following example placeholder:
- Server IP: `<10.0.0.1>`
Step 1 - Install Docker on Rocky Linux 10
Rocky Linux 10 is based on Red Hat Enterprise Linux 10. Even though RHEL 10 ships with Podman by default, Docker can be installed cleanly from the official Docker CE repository as long as any existing container runtimes are removed first.
- Remove possibly conflicting packages:

  ```bash
  sudo dnf remove -y podman buildah
  ```

- Next, update your system packages:

  ```bash
  sudo dnf -y update
  ```

  If a kernel update was included, reboot your server to ensure you're using the latest kernel:

  ```bash
  sudo reboot
  ```
- Install the `bash-completion` package, which provides the framework that Docker's completion scripts rely on:

  ```bash
  sudo dnf install -y bash-completion
  ```
- Now install Docker. To do so, set up the repository and install Docker Engine, the CLI tools, and the containerd runtime as explained in the official Docker documentation. You won't need to install `docker-buildx-plugin` and `docker-compose-plugin`.
- Verify that the Docker service is running as expected:

  ```bash
  sudo systemctl enable --now docker
  systemctl status docker
  ```

  You should see an output showing `active (running)`.
- (Optional) To run `docker` commands as a non-root user, add your user to the `docker` group. This is a convenient step for a personal server. Replace `$USER` with a real username:

  ```bash
  sudo usermod -aG docker $USER
  newgrp docker
  ```
Step 2 - Creating a Shared Docker Network
To allow the GOST and Nginx containers to communicate using container names (instead of IP addresses that may change), we create a user‑defined Docker bridge network.
- Create the network:

  ```bash
  # https://docs.docker.com/reference/cli/docker/network/
  docker network create rate-limit-net
  ```

- You can verify the network was created:

  ```bash
  docker network ls
  ```

- If you ever need to inspect which containers are attached:

  ```bash
  docker network inspect rate-limit-net
  ```
Step 3 - Nginx Container with a Persistent Volume
Nginx will be responsible for actually serving the downloadable files. We will mount a local directory on the host into the container, so you can manage files without rebuilding the image.
- Create the data directory on the host server:

  ```bash
  mkdir ~/nginx-data
  ```

- Create a sample file for download:

  ```bash
  # Create a 20 MB dummy file:
  dd if=/dev/zero of=~/nginx-data/20MB.bin bs=20M count=1
  ```

- Run the Nginx container:

  ```bash
  docker run -d \
    --name nginx-backend \
    --restart always \
    --network rate-limit-net \
    -v ~/nginx-data:/usr/share/nginx/html:ro \
    nginx:alpine
  ```

  Let's break down the command:

  | Option | Description |
  |---|---|
  | `-d` | Detach (run in background). |
  | `--name` | Assign a friendly name. |
  | `--restart always` | Ensures the container automatically restarts if it stops, the system reboots, or Docker itself is restarted. |
  | `--network` | Attach to our dedicated network. |
  | `-v` | Mount the host directory read-only (`:ro`) into Nginx's standard web root. |
  | `nginx:alpine` | Tiny, secure Nginx image based on Alpine Linux. |
- Verify that the container is running. Whether Nginx is actually reachable inside the Docker network will be confirmed later through the GOST container; for now, just ensure the container is up:

  ```bash
  docker ps | grep nginx-backend
  ```
Step 4 - Create the GOST Configuration File
GOST uses a YAML configuration file to declaratively define services. We will create a configuration that sets up a TCP listener on port 8080, applies a bandwidth limiter, and forwards traffic to your Nginx backend.
- First, create a directory for the GOST configuration:

  ```bash
  mkdir ~/gost-ratelimit
  cd ~/gost-ratelimit
  ```

- Now, create a file named `gost.yaml` using your preferred text editor (e.g. `vi gost.yaml`) and add the following configuration (see the official GOST documentation on "Limiting"):

  ```yaml
  services:
  - name: rate-limited-proxy
    addr: :8080
    limiter: my-limiter
    forwarder:
      nodes:
      - name: nginx-server
        addr: nginx-backend:80 # nginx container
    handler:
      type: tcp
  limiters:
  - name: my-limiter
    limits:
    - "$ 1MB 1MB"      # service level: input 1MB/s, output 1MB/s
    - "$$ 512KB 512KB" # connection level: input 512KB/s, output 512KB/s
  ```

Understanding the Configuration
- Services: This defines a service named `rate-limited-proxy` that listens on port 8080 (`addr: :8080`).

  | Option | Description |
  |---|---|
  | `limiter: my-limiter` | Applies the `my-limiter` limiter to the service. Note that the limiter is attached at the service level, not the handler level; this is where GOST hooks in its traffic management. |
  | `forwarder` | Tells GOST where to send the proxied traffic. It can list multiple nodes for load balancing (GOST will distribute traffic across them). |
  | `handler` | The service uses a `tcp` handler to process incoming TCP connections. |

- Limiters: This section is where the rate limits are configured.

  | Scope | Description |
  |---|---|
  | `"$ 1MB 1MB"` | Service-level bandwidth limit. For all traffic handled by this service, the input and output speeds are each limited to 1MB per second (for file downloads, the output direction, service to client, is the one that matters). |
  | `"$$ 512KB 512KB"` | Connection-level bandwidth limit. Each individual connection is limited to 512KB per second. Because every connection from a client is restricted by this limit, the client's total possible bandwidth is effectively capped. |

  You can also set per-IP limits by specifying an IP address or CIDR range as the scope, offering an even more granular level of control.
This two-tier approach (service + connection level) is a powerful way to ensure fair bandwidth distribution. The service limit creates an absolute cap for all connections, while the connection limit prevents any single connection from being overly aggressive.
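The per-IP note above can be made concrete. According to GOST's Limiter documentation, a scope in the `limits` list can also be a single IP address or a CIDR range, in which case the limit applies to each matching client address individually. A sketch (the subnet and values below are illustrative):

```yaml
limiters:
- name: my-limiter
  limits:
  - "$ 1MB 1MB"                  # service-wide total
  - "$$ 512KB 512KB"             # default per-connection limit
  - "203.0.113.0/24 256KB 256KB" # example: tighter cap for one subnet
```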
Step 5 - Containerize GOST with Docker
We will use the official gogost/gost Docker image to run our container, mounting the gost.yaml configuration file we just created.
- Run the following command to start the GOST container:

  ```bash
  docker run -d \
    --name gost-limiter \
    --restart always \
    -p 80:8080 \
    --network rate-limit-net \
    -v $(pwd)/gost.yaml:/etc/gost/gost.yaml \
    gogost/gost -C /etc/gost/gost.yaml
  ```

  Let's break down this command:

  | Option | Description |
  |---|---|
  | `docker run -d` | Runs the container in detached mode (in the background). |
  | `--name gost-limiter` | Assigns the name "gost-limiter" to the container. |
  | `-p 80:8080` | Publishes the container's port 8080 on the host's port 80. This makes the GOST service accessible to the outside world. |
  | `--network rate-limit-net` | Attach to our dedicated network. |
  | `-v $(pwd)/gost.yaml:/etc/gost/gost.yaml` | Mounts your local `gost.yaml` file into the container at the path where GOST expects its configuration. This preserves your configuration and makes it easy to modify. |
  | `gogost/gost` | The official GOST Docker image on Docker Hub. |
  | `-C /etc/gost/gost.yaml` | The command passed to the container; it tells GOST to start with the configuration file we just mounted. |
- After running the command, verify that the container is running properly:

  ```bash
  docker ps -a
  ```

  You should see your `gost-limiter` container in the list with a status of "Up".
Step 6 - Configure Firewall for Port 80
- Check whether `firewalld` is installed and running on your Rocky Linux 10 server:

  ```bash
  rpm -q firewalld
  # package firewalld is not installed
  # or
  # firewalld-2.0.0-1.el10.noarch
  ```

- If `firewalld` is not installed, you can optionally install it:

  ```bash
  sudo dnf install -y firewalld
  sudo systemctl enable --now firewalld
  systemctl is-active firewalld
  ```

- List the current firewall rules and open port 80 (only if `firewalld` is active):

  ```bash
  sudo firewall-cmd --list-all

  # Add port 80 to allow external access to GOST
  sudo firewall-cmd --permanent --add-port=80/tcp

  # Reload the firewall to apply changes
  sudo firewall-cmd --reload

  # Verify the new rules are active. Expected output: 80/tcp
  sudo firewall-cmd --list-ports
  ```
Your web server is now reachable from the outside only through the GOST limiter on port 80; the Nginx container itself publishes no host port.
Adjusting Nginx Configuration
No changes are needed to your Nginx configuration. It will continue to listen on localhost port 80, and GOST will transparently forward client requests to it.
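If you ever want Nginx's own `limit_rate` as an additional, second line of defense behind GOST, a minimal server block would look like the sketch below. This is optional and not required for this tutorial; it is shown only to contrast with the GOST approach discussed earlier:

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;

    location / {
        # Nginx's own per-connection cap; when combined with GOST,
        # the stricter of the two limits wins for each connection.
        limit_rate 1m;
    }
}
```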
```text
              INTERNET
                  │
                  ▼
   [Cloud Firewall: Port 80 OPEN]
                  │
                  ▼
[Rocky Linux Host Firewall: Port 80 OPEN]
                  │
                  ▼
┌──────────────────────────────────────────────┐
│                GOST CONTAINER                │
│ ┌──────────────────────────────────────────┐ │
│ │ Listens on: :8080                        │ │
│ │                                          │ │
│ │ Incoming Request                         │ │
│ │        │                                 │ │
│ │        ▼                                 │ │
│ │ [Rate Limiter Check]                     │ │
│ │  - Service limit: 1MB/s                  │ │
│ │  - Connection limit: 512KB/s             │ │
│ │        │                                 │ │
│ │        ▼                                 │ │
│ │ [Forward to: nginx-backend:80]           │ │
│ └───────────────┬──────────────────────────┘ │
└─────────────────┼────────────────────────────┘
                  │
                  │ Docker Internal Network
                  │ (Container Name Resolution)
                  ▼
┌──────────────────────────────────────────────┐
│               NGINX CONTAINER                │
│ ┌──────────────────────────────────────────┐ │
│ │ Listens on: 80                           │ │
│ │                                          │ │
│ │ Serves: /usr/share/nginx/html/20MB.bin   │ │
│ │                                          │ │
│ │ Returns: File to GOST Container          │ │
│ └──────────────────────────────────────────┘ │
└──────────────────────────────────────────────┘
                  │
                  │
                  ▼
          Client receives:
     Rate-limited file download
```

Step 7 - Testing the Rate Limiter
To confirm that the rate limiter is working as expected, you can use `curl` to download the large test file from your server through the GOST proxy.

From a different machine or your local system, use `curl` to download the file. The `--limit-rate` option in curl is set well above the server limits and is used here for testing purposes only; the actual rate limiting is enforced by your GOST server.
Replace `<10.0.0.1>` with the IP address of your cloud server.
```bash
curl -o /dev/null \
     http://<10.0.0.1>:80/20MB.bin \
     --limit-rate 3M \
     -w "Download Speed: %{speed_download} bytes/sec\n"
```

The output will show a download speed of approximately 512KB/s due to the per-connection limit defined in your limiter configuration. If you open more connections (e.g. using a download manager), you may see the combined speed approach the service-level limit of 1MB/s.
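As a quick sanity check on these numbers, the expected transfer times follow directly from the configured limits (back-of-the-envelope figures that ignore TCP ramp-up and protocol overhead):

```shell
# 20 MB test file at the 512 KB/s per-connection cap:
echo "$(( 20 * 1024 / 512 )) seconds"   # single connection: 40 seconds

# With several parallel connections, the 1 MB/s service cap dominates:
echo "$(( 20 * 1024 / 1024 )) seconds"  # aggregate best case: 20 seconds
```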
If you want to test different limits, edit `~/gost-ratelimit/gost.yaml` and restart the container:

```bash
docker restart gost-limiter
```

Conclusion
You have built a fully containerised download speed rate limiter on Rocky Linux 10 using GOST in front of an Nginx static file server. The two containers communicate over a dedicated Docker network, and the GOST limiter applies bandwidth restrictions at both the service and the connection level.
This architecture is not only robust but also easily maintainable. All configuration is stored in a single `gost.yaml` file, and the service is containerized with automatic restarts enabled. To make changes, simply edit the `gost.yaml` file and restart the Docker container (`docker restart gost-limiter`). For dynamic, API-driven configuration management, refer to the official GOST documentation on its powerful Web API.
This setup gives you several benefits:
- Per‑IP or per‑connection limits can be added easily by extending the GOST configuration (see the official Limiter documentation).
- No configuration reload required - changing the limit is as simple as restarting the GOST container or using the Web API.
- Separation of concerns - Nginx handles efficient static file serving, while GOST deals with traffic policies.
- Clean containerisation - easy to deploy, roll back, or scale.
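To experiment with the Web API mentioned above, GOST reads an optional top-level `api` block from the same configuration file. A minimal sketch (the port and bind address are illustrative; keep the API bound to localhost or firewalled, since it controls your proxy):

```yaml
# Addition to gost.yaml: enables the dynamic-configuration Web API
api:
  addr: 127.0.0.1:18080
```

With the API enabled, the running configuration can be inspected with `curl http://127.0.0.1:18080/config`; see the GOST Web API documentation for the endpoints that modify limiters at runtime.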
From here you can:
- Add authentication to the GOST proxy.
- Use GOST's Web API to change speed limits.
- Protect multiple backend services by running several GOST containers, each with its own rate‑limiting profile.
- Investigate Prometheus metrics that GOST can export to monitor actual throughput.
If you need more fine-grained control - such as limiting download speed for specific IP ranges or user groups - refer to the GOST Limiting concepts page for the complete configuration syntax.