# Deploying an Observer Node with Docker
The Flare Foundation distributes official Docker images for both Songbird (`flarefoundation/go-songbird`) and Flare (`flarefoundation/go-flare`) on Docker Hub. These images can be used to deploy containerized observer nodes for all of Flare's networks.
This guide explains how to deploy your own observer node via Docker and Docker Compose.
## Prerequisites
This guide assumes you have first read Deploying an Observer Node to understand the hardware requirements, how to start the node, and how to set up a configuration file. This guide focuses entirely on using Docker and Docker Compose to start an observer node, referring back to sections of 'Deploying an Observer Node' during initial setup.
This guide also contains different instructions depending on which Flare network you want to deploy to, so make sure you are aware of the available networks.
Ensure that you have Docker installed. Note that installation varies depending on the operating system and whether you use Docker Engine or Docker Desktop. For simplicity, this guide uses a Docker Engine installation on a Debian Linux machine.
If you do not already have Docker installed, follow one of the guides below.
Recommended:
Alternatives:
To avoid using `sudo` each time you run the `docker` command, add your user to the docker group post-installation:
sudo usermod -a -G docker $USER
# Log out and log back in or restart your system for the changes to take effect
It is useful to install `jq` to improve the readability of JSON outputs from your node's RPC.
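As a quick illustration, the snippet below pretty-prints a hand-written JSON string shaped like a health response (sample data only, not real node output), falling back to Python's standard library when `jq` is absent:

```shell
# Illustrative only: a hand-written JSON snippet shaped like a health response.
json='{"healthy":false,"checks":{"C":{"message":"syncing"}}}'
if command -v jq >/dev/null 2>&1; then
  echo "$json" | jq '.healthy'          # extract a single field with jq
else
  echo "$json" | python3 -m json.tool   # stdlib fallback if jq is absent
fi
```

The same pattern applies to any RPC response you pipe through `jq`.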
## Guide
### 1. Disk Setup
This setup varies depending on your use case, but essentially you need a local directory with sufficient space for the blockchain data to be stored and persisted in. Refer to the disk space requirements defined in Deploying an Observer Node.
In this guide, there is an additional disk mounted at `/mnt/db`, which is used to persist the blockchain data.
After you have a machine set up and ready to go, find the additional disk, format it if necessary, and mount to a directory:
lsblk
# ----
# NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
# sda 8:0 0 10G 0 disk
# ├─sda1 8:1 0 9.9G 0 part /
# ├─sda14 8:14 0 3M 0 part
# └─sda15 8:15 0 124M 0 part /boot/efi
# sdb 8:16 0 300G 0 disk <- Device identified as db disk via size
# ----
sudo mkdir /mnt/db
sudo mkfs.ext4 -m 0 \
-E lazy_itable_init=0,lazy_journal_init=0,discard \
/dev/sdb
sudo mount /dev/sdb /mnt/db
# Change ownership after mounting, so it applies to the new filesystem
sudo chown -R <user>:<user> /mnt/db
Info

- Replace `<user>` with the user you wish to start your containerized observer node with. For security reasons, it is recommended that this is not the root user.
- Ensure you replace `/dev/sdb` with your actual device, since it could differ from the example.
Confirm the new disk is mounted:
df -h
# -----
# Filesystem Size Used Avail Use% Mounted on
# udev 3.9G 0 3.9G 0% /dev
# tmpfs 796M 376K 796M 1% /run
# /dev/sda1 9.7G 1.9G 7.3G 21% /
# tmpfs 3.9G 0 3.9G 0% /dev/shm
# tmpfs 5.0M 0 5.0M 0% /run/lock
# /dev/sda15 124M 11M 114M 9% /boot/efi
# tmpfs 796M 0 796M 0% /run/user/1009
# /dev/sdb 295G 28K 295G 1% /mnt/db
Look for your device name and mount point specified in the output to confirm the mount worked.
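The manual `df -h` check above can also be scripted. The helper below is a hypothetical sketch (the `check_mount` name is ours, not a standard tool) built on `mountpoint`, which ships with util-linux on Debian:

```shell
# Hypothetical helper: scripted version of the manual df check.
check_mount() {
  if mountpoint -q "$1"; then
    echo "$1 is a mountpoint"
  else
    echo "$1 is NOT a mountpoint" >&2
    return 1
  fi
}

check_mount /          # demo: the root filesystem is always mounted
# check_mount /mnt/db  # run this on your node machine after mounting
```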
Back up the original `fstab` file (so you can revert changes if needed) and update `/etc/fstab` to ensure the device is mounted when the system reboots:
sudo -i
cp /etc/fstab /etc/fstab.backup
fstab_entry="UUID=$(blkid -o value -s UUID /dev/sdb) \
/mnt/db \
ext4 \
discard,defaults \
0 2"
echo "$fstab_entry" >> /etc/fstab
exit
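Before appending, you can sanity-check the entry you built. The sketch below uses a made-up placeholder UUID (yours comes from `blkid`) and verifies the line has the six whitespace-separated fields `fstab` expects:

```shell
# Sketch with a hypothetical UUID; on your machine, use the blkid output instead.
uuid="123e4567-e89b-12d3-a456-426614174000"   # placeholder, not a real device UUID
fstab_entry="UUID=$uuid /mnt/db ext4 discard,defaults 0 2"
fields=$(echo "$fstab_entry" | wc -w)
echo "$fstab_entry"
echo "field count: $fields"
```

On the node itself, `sudo findmnt --verify` (util-linux 2.29+) can additionally lint the edited `/etc/fstab`.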
### 2. Configuration File and Logs Directory Setup
Once your database directory is ready, the next step is to correctly define your configuration file and logs directory for the observer node. Later, these are mounted from your local machine to the specified directories on your containerized observer node.
Mounting the logs directory gives you access to the logs generated by the workload directly on your local machine, in the specified local directory. This can save you from having to use `docker logs`; instead you can inspect the files in your local directory.
This example uses the configuration provided in the "Additional Configuration" section of Deploying an Observer Node.
Create the local directories, change their ownership to a non-root user of your choice, and create the configuration file:
sudo mkdir -p /opt/flare/conf
sudo mkdir /opt/flare/logs
sudo chown -R <user>:<user> /opt/flare
cat > /opt/flare/conf/config.json << EOL
{
"snowman-api-enabled": false,
"coreth-admin-api-enabled": false,
"eth-apis": [
"eth",
"eth-filter",
"net",
"web3",
"internal-eth",
"internal-blockchain",
"internal-transaction-pool"
],
"rpc-gas-cap": 50000000,
"rpc-tx-fee-cap": 100,
"pruning-enabled": true,
"local-txs-enabled": false,
"api-max-duration": 0,
"api-max-blocks-per-request": 0,
"allow-unfinalized-queries": false,
"allow-unprotected-txs": false,
"remote-tx-gossip-only-enabled": false,
"log-level": "info"
}
EOL
Info

Replace `<user>` with the user you wish to start your containerized observer node with. It is recommended that this isn't the root user for security reasons.
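A malformed `config.json` is a common cause of startup trouble, so it is worth validating the syntax before starting the node. The check below uses only the Python standard library and is shown against a temporary trimmed-down sample so it is self-contained; on your node, point `json.tool` at `/opt/flare/conf/config.json` directly:

```shell
# Write a small sample config and validate its JSON syntax.
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
{"pruning-enabled": true, "log-level": "info"}
EOF
if python3 -m json.tool "$cfg" >/dev/null 2>&1; then
  result="valid"
else
  result="invalid"
fi
echo "config is $result JSON"
rm -f "$cfg"
```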
### 3. Running with Docker
This step demonstrates using either the Docker CLI or Docker Compose to run a Flare node. The node's configuration is the same in both cases: the Docker CLI is an easy copy-and-paste command to get a node running and inspect its behavior, while Docker Compose is better suited to a more permanent, reusable setup.
#### Using Docker CLI
For Flare observer nodes you need to pull an image from Flare Foundation's go-flare repository.
Read the 'Overview' tab in the repository linked above to understand the various parameters of running a node via the Docker image. Some important things to note are the default directory locations and the environment variables available for you to adjust the behaviour of your observer node.
Start the container with the parameters defined:
docker run -d \
--name flare-observer \
-e AUTOCONFIGURE_BOOTSTRAP="1" \
-e NETWORK_ID="flare" \
-e AUTOCONFIGURE_PUBLIC_IP="1" \
-e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="https://flare.flare.network/ext/info" \
-v /mnt/db:/app/db \
-v /opt/flare/conf:/app/conf/C \
-v /opt/flare/logs:/app/logs \
-p 0.0.0.0:9650:9650 \
-p 0.0.0.0:9651:9651 \
flarefoundation/go-flare:v1.7.1806
Confirm your container is running and check that logs are printing:
docker ps
docker logs flare-observer -f
For Songbird observer nodes you need to pull an image from Flare Foundation's go-songbird repository.
Read the 'Overview' tab in the repository linked above to understand the various parameters of running a node via the Docker image. Some important things to note are the default directory locations and the environment variables available for you to adjust the behaviour of your observer node.
Start the container with the parameters defined:
docker run -d \
--name songbird-observer \
-e AUTOCONFIGURE_BOOTSTRAP="1" \
-e NETWORK_ID="songbird" \
-e AUTOCONFIGURE_PUBLIC_IP="1" \
-e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="https://songbird.flare.network/ext/info" \
-v /mnt/db:/app/db \
-v /opt/flare/conf:/app/conf/C \
-v /opt/flare/logs:/app/logs \
-p 0.0.0.0:9650:9650 \
-p 0.0.0.0:9651:9651 \
flarefoundation/go-songbird:v0.6.4
Confirm your container is running and check that logs are printing:
docker ps
docker logs songbird-observer -f
For Coston observer nodes you need to pull an image from Flare Foundation's go-songbird repository. This is because Coston is Songbird's testnet and uses the same code.
Read the 'Overview' tab in the repository linked above to understand the various parameters of running a node via the Docker image. Some important things to note are the default directory locations and the environment variables available for you to adjust the behaviour of your observer node.
Start the container with the parameters defined:
docker run -d \
--name coston-observer \
-e AUTOCONFIGURE_BOOTSTRAP="1" \
-e NETWORK_ID="coston" \
-e AUTOCONFIGURE_PUBLIC_IP="1" \
-e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="https://coston.flare.network/ext/info" \
-v /mnt/db:/app/db \
-v /opt/flare/conf:/app/conf/C \
-v /opt/flare/logs:/app/logs \
-p 0.0.0.0:9650:9650 \
-p 0.0.0.0:9651:9651 \
flarefoundation/go-songbird:v0.6.4
Confirm your container is running and check that logs are printing:
docker ps
docker logs coston-observer -f
For Coston2 observer nodes you need to pull an image from Flare Foundation's go-flare repository. This is because Coston2 is Flare's testnet and uses the same code.
Read the 'Overview' tab in the repository linked above to understand the various parameters of running a node via the Docker image. Some important things to note are the default directory locations and the environment variables available for you to adjust the behaviour of your observer node.
Start the container with the parameters defined:
docker run -d \
--name coston2-observer \
-e AUTOCONFIGURE_BOOTSTRAP="1" \
-e NETWORK_ID="costwo" \
-e AUTOCONFIGURE_PUBLIC_IP="1" \
-e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="https://coston2.flare.network/ext/info" \
-v /mnt/db:/app/db \
-v /opt/flare/conf:/app/conf/C \
-v /opt/flare/logs:/app/logs \
-p 0.0.0.0:9650:9650 \
-p 0.0.0.0:9651:9651 \
flarefoundation/go-flare:v1.7.1806
Confirm your container is running and check that logs are printing:
docker ps
docker logs coston2-observer -f
Once you have confirmed that the container is running, use Ctrl+C to stop following the logs and check your container's `/ext/health` endpoint. You will only see `"healthy": true` once the observer node is fully synced, but the response otherwise confirms that your container's HTTP port (9650) is accessible from your local machine.
curl http://localhost:9650/ext/health | jq
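Since syncing can take a long time, you may want to poll the health endpoint rather than check it by hand. The helper below is a hypothetical sketch (`wait_healthy` is our name, not part of the image), and it assumes the compact JSON response contains the substring `"healthy":true` once fully synced:

```shell
# Hypothetical helper: poll /ext/health until the node reports healthy.
wait_healthy() {
  url="$1"; retries="${2:-30}"; delay="${3:-10}"
  i=1
  while [ "$i" -le "$retries" ]; do
    if curl -sf --max-time 5 "$url" | grep -q '"healthy":true'; then
      echo "node is healthy"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "node not healthy after $retries checks" >&2
  return 1
}

# wait_healthy http://localhost:9650/ext/health   # run against your node
```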
Command arguments explained:

Volumes:

- `-v /mnt/db:/app/db` Mount the local database directory to the default database directory of the container.
- `-v /opt/flare/conf:/app/conf/C` Mount the local configuration directory to the default location of `config.json`.
- `-v /opt/flare/logs:/app/logs` Mount the local logs directory to the workload's default logs directory.

Ports:

- `-p 0.0.0.0:9650:9650` Map the container's HTTP port to your local machine, enabling you to query the containerized observer node's HTTP port via your local machine's IP and port.

Warning

Only bind `0.0.0.0` for port 9650 if you wish to expose your containerized observer node's RPC endpoint publicly from your machine's public IP address. If it must be publicly accessible for another application to use, ensure you set up a firewall rule so that port 9650 is only reachable from specific source IP addresses.

- `-p 0.0.0.0:9651:9651` Map the container's peering port to your local machine so other peers can query the node.

Environment Variables:

- `-e AUTOCONFIGURE_BOOTSTRAP="1"` Retrieves the bootstrap endpoint's Node-IP and Node-ID automatically.
- `-e NETWORK_ID="<network>"` Sets the correct network ID from the provided options below:
  - `coston`
  - `costwo`
  - `songbird`
  - `flare`
- `-e AUTOCONFIGURE_PUBLIC_IP="1"` Retrieves your local machine's IP automatically.
- `-e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="<bootstrap_host>/ext/info"` Defines the bootstrap endpoint used to initialize chain sync. Flare's public nodes can be used to bootstrap your node on each chain:
  - `https://coston.flare.network/ext/info`
  - `https://costwo.flare.network/ext/info`
  - `https://songbird.flare.network/ext/info`
  - `https://flare.flare.network/ext/info`
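For context, the bootstrap autoconfiguration roughly amounts to querying the bootstrap node's standard info API for its ID and IP. The sketch below builds those requests (the image's actual script may differ in detail); the `curl` lines are commented out so it can be read offline:

```shell
# Sketch of what AUTOCONFIGURE_BOOTSTRAP roughly does under the hood.
bootstrap="https://flare.flare.network/ext/info"
id_payload='{"jsonrpc":"2.0","id":1,"method":"info.getNodeID"}'
ip_payload='{"jsonrpc":"2.0","id":1,"method":"info.getNodeIP"}'
echo "$id_payload"
# Uncomment on a machine with network access:
# curl -s -X POST -H 'Content-Type: application/json' -d "$id_payload" "$bootstrap"
# curl -s -X POST -H 'Content-Type: application/json' -d "$ip_payload" "$bootstrap"
```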
#### Using Docker Compose
For this use case, Docker Compose is a good way to simplify running the observer node, collecting all the necessary configuration into a single file that can be run with a simple command.
In this guide the `docker-compose.yaml` file is created in `/opt/observer`, but the location is entirely up to you.
Create the working directory, set its ownership and create the `docker-compose.yaml` file:
sudo mkdir /opt/observer
sudo chown -R <user>:<user> /opt/observer
cat > /opt/observer/docker-compose.yaml << EOL
version: '3.6'
services:
  observer:
    container_name: flare-observer
    image: flarefoundation/go-flare:v1.7.1806
    restart: on-failure
    environment:
      - AUTOCONFIGURE_BOOTSTRAP=1
      - NETWORK_ID=flare
      - AUTOCONFIGURE_PUBLIC_IP=1
      - AUTOCONFIGURE_BOOTSTRAP_ENDPOINT=https://flare.flare.network/ext/info
    volumes:
      - /mnt/db:/app/db
      - /opt/flare/conf:/app/conf/C
      - /opt/flare/logs:/app/logs
    ports:
      - "0.0.0.0:9650:9650"
      - "0.0.0.0:9651:9651"
EOL
sudo mkdir /opt/observer
sudo chown -R <user>:<user> /opt/observer
cat > /opt/observer/docker-compose.yaml << EOL
version: '3.6'
services:
  observer:
    container_name: songbird-observer
    image: flarefoundation/go-songbird:v0.6.4
    restart: on-failure
    environment:
      - AUTOCONFIGURE_BOOTSTRAP=1
      - NETWORK_ID=songbird
      - AUTOCONFIGURE_PUBLIC_IP=1
      - AUTOCONFIGURE_BOOTSTRAP_ENDPOINT=https://songbird.flare.network/ext/info
    volumes:
      - /mnt/db:/app/db
      - /opt/flare/conf:/app/conf/C
      - /opt/flare/logs:/app/logs
    ports:
      - "0.0.0.0:9650:9650"
      - "0.0.0.0:9651:9651"
EOL
sudo mkdir /opt/observer
sudo chown -R <user>:<user> /opt/observer
cat > /opt/observer/docker-compose.yaml << EOL
version: '3.6'
services:
  observer:
    container_name: coston-observer
    image: flarefoundation/go-songbird:v0.6.4
    restart: on-failure
    environment:
      - AUTOCONFIGURE_BOOTSTRAP=1
      - NETWORK_ID=coston
      - AUTOCONFIGURE_PUBLIC_IP=1
      - AUTOCONFIGURE_BOOTSTRAP_ENDPOINT=https://coston.flare.network/ext/info
    volumes:
      - /mnt/db:/app/db
      - /opt/flare/conf:/app/conf/C
      - /opt/flare/logs:/app/logs
    ports:
      - "0.0.0.0:9650:9650"
      - "0.0.0.0:9651:9651"
EOL
sudo mkdir /opt/observer
sudo chown -R <user>:<user> /opt/observer
cat > /opt/observer/docker-compose.yaml << EOL
version: '3.6'
services:
  observer:
    container_name: coston2-observer
    image: flarefoundation/go-flare:v1.7.1806
    restart: on-failure
    environment:
      - AUTOCONFIGURE_BOOTSTRAP=1
      - NETWORK_ID=costwo
      - AUTOCONFIGURE_PUBLIC_IP=1
      - AUTOCONFIGURE_BOOTSTRAP_ENDPOINT=https://coston2.flare.network/ext/info
    volumes:
      - /mnt/db:/app/db
      - /opt/flare/conf:/app/conf/C
      - /opt/flare/logs:/app/logs
    ports:
      - "0.0.0.0:9650:9650"
      - "0.0.0.0:9651:9651"
EOL
Start Docker Compose:
docker compose -f /opt/observer/docker-compose.yaml up -d
When the Docker Compose command completes, check that the container is running and that logs are printing:
docker ps
docker compose -f /opt/observer/docker-compose.yaml logs -f
Once you have confirmed the container is running, use Ctrl+C to stop following the logs and check your container's `/ext/health` endpoint. You will only see `"healthy": true` once the observer node is fully synced, but the response otherwise confirms that your container's HTTP port (9650) is accessible from your local machine.
curl http://localhost:9650/ext/health | jq
### 4. Additional Configuration
There are plenty of environment variables at your disposal to adjust the workload at runtime. The Docker and Docker Compose examples above assumed some defaults and utilized built-in automation scripts for most of the configuration. All available options are outlined below:
| Name | Default | Description |
|---|---|---|
| `HTTP_HOST` | `0.0.0.0` | HTTP host binding address |
| `HTTP_PORT` | `9650` | The listening port for the HTTP host |
| `STAKING_PORT` | `9651` | The staking port for bootstrapping nodes |
| `PUBLIC_IP` | (empty) | Public-facing IP. Must be set if `AUTOCONFIGURE_PUBLIC_IP=0` |
| `DB_DIR` | `/app/db` | The database directory location |
| `DB_TYPE` | `leveldb` | The database type to be used |
| `BOOTSTRAP_IPS` | (empty) | A list of bootstrap server IPs |
| `BOOTSTRAP_IDS` | (empty) | A list of bootstrap server IDs |
| `CHAIN_CONFIG_DIR` | `/app/conf` | Configuration folder where you should mount your configuration file |
| `LOG_DIR` | `/app/logs` | Logging directory |
| `LOG_LEVEL` | `info` | Logging verbosity level that is logged into the file |
| `AUTOCONFIGURE_PUBLIC_IP` | `0` | Set to `1` to autoconfigure `PUBLIC_IP`; skipped if `PUBLIC_IP` is set |
| `AUTOCONFIGURE_BOOTSTRAP` | `0` | Set to `1` to autoconfigure `BOOTSTRAP_IPS` and `BOOTSTRAP_IDS` |
| `EXTRA_ARGUMENTS` | (empty) | Extra arguments passed to the flare binary |
Additionally:

- `NETWORK_ID`
  Default: depends on the image you use, either go-songbird (default: `coston`) or go-flare (default: `costwo`).
  Description: Name of the network you want to connect to.
- `AUTOCONFIGURE_BOOTSTRAP_ENDPOINT`
  Default: `https://coston2.flare.network/ext/info` or `https://flare.flare.network/ext/info`.
  Description: Endpoint used to automatically retrieve the Node-ID and Node-IP from.
- `AUTOCONFIGURE_FALLBACK_ENDPOINTS`
  Default: (empty).
  Description: Comma-separated fallback bootstrap endpoints, used if `AUTOCONFIGURE_BOOTSTRAP_ENDPOINT` is not valid, for example when the bootstrap endpoint is unreachable. Endpoints are tested first-to-last until a valid one is found.
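The first-to-last fallback behaviour can be sketched as follows. This is an illustration of the selection logic only (`pick_endpoint` is our hypothetical name; the image's internal script may be implemented differently):

```shell
# Illustrative sketch: return the first endpoint in a comma-separated list
# that responds to an HTTP request.
pick_endpoint() {
  for ep in $(echo "$1" | tr ',' ' '); do
    if curl -sf --max-time 5 "$ep" >/dev/null 2>&1; then
      echo "$ep"
      return 0
    fi
  done
  echo "no reachable endpoint" >&2
  return 1
}

# pick_endpoint "https://flare.flare.network/ext/info,https://coston2.flare.network/ext/info"
```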