RPC Node
An RPC node (also sometimes referred to as an observer node) is responsible for servicing application requests. Unlike validator nodes, RPC nodes do not participate in consensus or block production. This guide will walk you through deploying an RPC node for Flare.
You have two options for setting up an RPC node, each with its own pros and cons:
- Setup on bare-metal: More complex, but offers better performance and lower memory usage.
- Setup with Docker: Simpler, but results in lower performance and higher memory usage.
Hardware requirements
- Flare Mainnet
- Flare Testnet Coston2
- Songbird Canary-Network
- Songbird Testnet Coston
| Requirement | Specification |
|---|---|
| CPU | 4 cores |
| RAM | 16 GB |
| Disk space (pruned) | 500 GB SSD |
| Disk space (archival) | 3.5 TB SSD |
| Disk growth | 30 GB/month |
| Requirement | Specification |
|---|---|
| CPU | 4 cores |
| RAM | 16 GB |
| Disk space (pruned) | 150 GB SSD |
| Disk space (archival) | 1 TB SSD |
| Disk growth | 5 GB/month |
| Requirement | Specification |
|---|---|
| CPU | 4 cores |
| RAM | 16 GB |
| Disk space (pruned) | 2 TB SSD |
| Disk space (archival) | 8 TB SSD |
| Disk growth | 120 GB/month |
| Requirement | Specification |
|---|---|
| CPU | 4 cores |
| RAM | 16 GB |
| Disk space (pruned) | 150 GB SSD |
| Disk space (archival) | 1 TB SSD |
| Disk growth | 11 GB/month |
- Disk speed: 1200 MB/s read and 600 MB/s write, or higher (a quick benchmark sketch follows below)
- Network speed: 40 Mbps, or higher
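To sanity-check whether a disk meets these throughput figures, a quick sequential benchmark can help. This is a minimal sketch, assuming fio is installed and that /path/to/db-disk is replaced with the directory on the disk you plan to use for the database:
# Sequential read benchmark (creates a 1 GiB test file in the target directory).
fio --name=seq-read --directory=/path/to/db-disk --rw=read --bs=1M --size=1G --direct=1 --numjobs=1 --group_reporting
# Sequential write benchmark on the same disk.
fio --name=seq-write --directory=/path/to/db-disk --rw=write --bs=1M --size=1G --direct=1 --numjobs=1 --group_reporting
# Remove the test files afterwards (fio names them <jobname>.<jobnum>.<filenum>).
rm -f /path/to/db-disk/seq-read.0.0 /path/to/db-disk/seq-write.0.0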
Setup on bare-metal
Prerequisites
Ensure you have the following tools installed:
- Flare Mainnet
- Flare Testnet Coston2
- Songbird Canary-Network
- Songbird Testnet Coston
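The exact prerequisite list depends on the network you choose, but the commands in this guide rely on git, Go, jq, and curl, and the build script needs a C compiler. A quick sanity check, sketched here for a Debian-like system (package names may differ on other distributions):
# Report any of the tools used later in this guide that are missing from the PATH.
for cmd in git go gcc jq curl; do
  command -v "$cmd" >/dev/null || echo "missing: $cmd"
done
go version   # prints the installed Go toolchain version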
Configure the node
Clone the repository and run the build.sh script:
- Flare Mainnet
- Flare Testnet Coston2
- Songbird Canary-Network
- Songbird Testnet Coston
git clone https://github.com/flare-foundation/go-flare.git
cd go-flare/avalanchego
chmod +x scripts/build.sh
./scripts/build.sh
The resulting executable will be stored in build/avalanchego.
git clone https://github.com/flare-foundation/go-flare.git
cd go-flare/avalanchego
chmod +x scripts/build.sh
./scripts/build.sh
The resulting executable will be stored in build/avalanchego.
git clone --branch v1.9.0.1 https://github.com/flare-foundation/go-flare.git
cd go-flare/avalanchego
chmod +x scripts/build.sh
./scripts/build.sh
The resulting executable will be stored in build/avalanchego.
git clone --branch v1.9.0.1 https://github.com/flare-foundation/go-flare.git
cd go-flare/avalanchego
chmod +x scripts/build.sh
./scripts/build.sh
The resulting executable will be stored in build/avalanchego.
Verify the installation by running:
- Flare Mainnet
- Flare Testnet Coston2
- Songbird Canary-Network
- Songbird Testnet Coston
go test $(go list ./... | grep -v /tests/) # avalanchego unit tests
cd ../coreth
go test ./... # coreth unit tests
cd ../avalanchego
go test $(go list ./... | grep -v /tests/) # avalanchego unit tests
cd ../coreth
go test ./... # coreth unit tests
cd ../avalanchego
go test $(go list ./... | grep -v /tests/) # avalanchego unit tests
cd ../coreth
go test ./... # coreth unit tests
cd ../avalanchego
Whitelisting nodes for Songbird Canary-Network
While the Songbird Canary-Network is being tested, all nodes wanting to peer with it, including RPC nodes, need to have their IP address whitelisted.
To do this, make a whitelisting request by contacting Tom T. over:
- Discord: Tom T#7603
- Telegram: @TampaBay7
- Email: [email protected]
For greater redundancy, you can whitelist multiple nodes per provider.
Checking the status of your whitelisting request
curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://songbird-bootstrap.flare.network/ext/info
If your IP address is whitelisted, this command returns a JSON response. Otherwise you will get a 403 error ("Forbidden").
Note that whitelisting is not required on Flare Mainnet, Flare Testnet Coston2, or Songbird Testnet Coston.
go test $(go list ./... | grep -v /tests/) # avalanchego unit tests
cd ../coreth
go test ./... # coreth unit tests
cd ../avalanchego
Run the node
This is the simplest command to quickly get your node up and running. The next section explains the parameters used here, along with additional parameters you may wish to configure.
- Flare Mainnet
- Flare Testnet Coston2
- Songbird Canary-Network
- Songbird Testnet Coston
./build/avalanchego --network-id=flare --http-host= --bootstrap-ips="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://flare-bootstrap.flare.network/ext/info | jq -r ".result.ip")" --bootstrap-ids="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeID" }' -H 'content-type:application/json;' https://flare-bootstrap.flare.network/ext/info | jq -r ".result.nodeID")"
./build/avalanchego --network-id=costwo --http-host= --bootstrap-ips="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://coston2-bootstrap.flare.network/ext/info | jq -r ".result.ip")" --bootstrap-ids="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeID" }' -H 'content-type:application/json;' https://coston2-bootstrap.flare.network/ext/info | jq -r ".result.nodeID")"
./build/avalanchego --network-id=songbird --http-host= --bootstrap-ips="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://songbird-bootstrap.flare.network/ext/info | jq -r ".result.ip")" --bootstrap-ids="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeID" }' -H 'content-type:application/json;' https://songbird-bootstrap.flare.network/ext/info | jq -r ".result.nodeID")"
./build/avalanchego --network-id=coston --http-host= --bootstrap-ips="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://coston-bootstrap.flare.network/ext/info | jq -r ".result.ip")" --bootstrap-ids="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeID" }' -H 'content-type:application/json;' https://coston-bootstrap.flare.network/ext/info | jq -r ".result.nodeID")"
After a lot of log messages, the node should start synchronizing with the network, which can take anywhere from a few hours to a few days depending on network speed and hardware specifications. Node syncing can be stopped at any time; use the same command to resume syncing from where it left off.
You will know your node is fully booted and accepting transactions when the output of this command:
curl http://127.0.0.1:9650/ext/health
contains the field "healthy": true in the returned JSON object.
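If you prefer to wait for this from a script rather than re-running the command by hand, a small polling loop over the health endpoint works. This is a minimal sketch, assuming curl and jq are installed and the node is listening on the default port 9650:
# Poll the health endpoint every minute until the node reports healthy.
until curl -s http://127.0.0.1:9650/ext/health | jq -e '.healthy == true' >/dev/null; do
  echo "Node not healthy yet, waiting..."
  sleep 60
done
echo "Node is healthy and accepting requests."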
If the node gets stuck during bootstrap (or it takes far longer than the estimates given above), try adding the parameter --bootstrap-retry-enabled=false when running the node.
Additional configuration
These are some of the most relevant CLI parameters you can use. Read more about them in the Avalanche documentation.
- --bootstrap-ips, --bootstrap-ids: IP address and node ID of the peer used to connect to the rest of the network for bootstrapping. Note that on Songbird Canary-Network you have to whitelist your node's IP address, or your queries will always be answered with 403 error codes.
  Determining your peer's IP address and node ID:
- Flare Mainnet
- Flare Testnet Coston2
- Songbird Canary-Network
- Songbird Testnet Coston
Peer's IP address:
curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://flare-bootstrap.flare.network/ext/info | jq -r ".result.ip"
Peer's node ID:
curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeID" }' -H 'content-type:application/json;' https://flare-bootstrap.flare.network/ext/info | jq -r ".result.nodeID"
Peer's IP address:
curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://coston2-bootstrap.flare.network/ext/info | jq -r ".result.ip"
Peer's node ID:
curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeID" }' -H 'content-type:application/json;' https://coston2-bootstrap.flare.network/ext/info | jq -r ".result.nodeID"
Peer's IP address:
curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://songbird-bootstrap.flare.network/ext/info | jq -r ".result.ip"
Peer's node ID:
curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeID" }' -H 'content-type:application/json;' https://songbird-bootstrap.flare.network/ext/info | jq -r ".result.nodeID"
Peer's IP address:
curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://coston-bootstrap.flare.network/ext/info | jq -r ".result.ip"
Peer's node ID:
curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeID" }' -H 'content-type:application/json;' https://coston-bootstrap.flare.network/ext/info | jq -r ".result.nodeID"
- --http-host: Use --http-host= (empty) to allow connections from other machines. Otherwise, only connections from localhost are accepted.
- --http-port: The port through which the node will listen to API requests. The default value is 9650.
- --staking-port: The port through which network peers will connect to this node externally. Having this port accessible from the internet is required for correct node operation. The default value is 9651.
- --db-dir: Directory where the database is stored. Make sure to use a disk with enough space, as recommended in the Hardware requirements section. It defaults to ~/.avalanchego/db on Flare Mainnet and Flare Testnet Coston2, and to ~/.flare/db on Songbird Canary-Network and Songbird Testnet Coston. For example, you can use this option to store the database on an external drive.
- --chain-config-dir: Optional directory containing a JSON configuration file for non-default values.
  Sample JSON configuration for RPC nodes. For archival nodes, set "pruning-enabled": false to disable pruning. Note that archival nodes require significantly more disk space than standard RPC nodes.
  <chain-config-dir>/C/config.json
{
"snowman-api-enabled": false,
"coreth-admin-api-enabled": false,
"eth-apis": [
"eth",
"eth-filter",
"net",
"web3",
"internal-eth",
"internal-blockchain",
"internal-transaction"
],
"rpc-gas-cap": 50000000,
"rpc-tx-fee-cap": 100,
"pruning-enabled": true,
"local-txs-enabled": false,
"api-max-duration": 0,
"api-max-blocks-per-request": 0,
"allow-unfinalized-queries": false,
"allow-unprotected-txs": false,
"remote-tx-gossip-only-enabled": false,
"log-level": "info"
}
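As an illustration of how these flags combine, the following sketch starts a Flare Mainnet node that keeps its database on an external drive and reads the sample configuration from a custom chain-config directory. The /mnt/flare-db and /opt/flare-config paths are placeholders for your own locations, and the bootstrap parameters are obtained exactly as in the quick-start command above:
# Example only: node with database and chain config in custom locations.
# For archival mode, set "pruning-enabled": false in /opt/flare-config/C/config.json as described above.
./build/avalanchego \
  --network-id=flare \
  --http-host= \
  --http-port=9650 \
  --staking-port=9651 \
  --db-dir=/mnt/flare-db \
  --chain-config-dir=/opt/flare-config \
  --bootstrap-ips="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeIP" }' -H 'content-type:application/json;' https://flare-bootstrap.flare.network/ext/info | jq -r ".result.ip")" \
  --bootstrap-ids="$(curl -m 10 -sX POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.getNodeID" }' -H 'content-type:application/json;' https://flare-bootstrap.flare.network/ext/info | jq -r ".result.nodeID")"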
Setup with Docker
Prerequisites
- Docker Engine
- jq (optional, but recommended)
For simplicity, this guide uses Docker Engine installed on Debian Linux.
To avoid using sudo each time you run the docker command, add your user to the Docker group after installation:
sudo usermod -a -G docker $USER
# Log out and log back in or restart your system for the changes to take effect
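Before moving on, you can quickly confirm that Docker works without sudo for your user (hello-world is the standard tiny test image published by Docker):
docker info > /dev/null && echo "Docker daemon is reachable"
docker run --rm hello-world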
Configure the node
Disk setup
This setup varies depending on your use case, but essentially you need a local directory with sufficient space to store and persist the blockchain data. In this guide, an additional disk mounted at /mnt/db is used to persist the blockchain data. After the machine is set up and ready to go, find the additional disk, format it if necessary, and mount it to a directory:
lsblk
# ----
# NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
# sda 8:0 0 10G 0 disk
# ├─sda1 8:1 0 9.9G 0 part /
# ├─sda14 8:14 0 3M 0 part
# └─sda15 8:15 0 124M 0 part /boot/efi
# sdb 8:16 0 300G 0 disk <- Device identified as db disk via size
# ----
sudo mkdir /mnt/db
sudo chown -R <user>:<user> /mnt/db
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo mount /dev/sdb /mnt/db
- Replace <user> with the user you wish to start your containerized RPC node with. For security reasons, it is recommended that this isn't the root user.
- Ensure you replace /dev/sdb with your actual device, since it could be different from the example.
Confirm the new disk is mounted:
df -h
# -----
# Filesystem Size Used Avail Use% Mounted on
# udev 3.9G 0 3.9G 0% /dev
# tmpfs 796M 376K 796M 1% /run
# /dev/sda1 9.7G 1.9G 7.3G 21% /
# tmpfs 3.9G 0 3.9G 0% /dev/shm
# tmpfs 5.0M 0 5.0M 0% /run/lock
# /dev/sda15 124M 11M 114M 9% /boot/efi
# tmpfs 796M 0 796M 0% /run/user/1009
# /dev/sdb 295G 28K 295G 1% /mnt/db
Look for your device name and mount point specified in the output to confirm the mount worked.
Back up the original fstab file (so you can revert the changes if needed) and update /etc/fstab to make sure the device is mounted when the system reboots:
sudo -i
cp /etc/fstab /etc/fstab.backup
fstab_entry="UUID=$(blkid -o value -s UUID /dev/sdb) /mnt/db ext4 discard,defaults 0 2"
echo "$fstab_entry" >> /etc/fstab
exit
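To confirm the new entry is valid without rebooting, you can ask findmnt to verify the fstab syntax and then inspect the mount point (findmnt is part of util-linux and supports --verify on most current distributions):
sudo findmnt --verify   # parses /etc/fstab and reports problems with any entry
findmnt /mnt/db         # shows the device, filesystem, and options backing the mount point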
Configuration file and logs directory setup
Once the disk setup is complete, you can define the configuration file and logs directory for the RPC node. These will be mounted from your local machine to the specified directories on your containerized RPC node.
Mounting the logs directory allows you to access the logs generated by the workload directly on your local machine. This saves you the effort of using docker logs and lets you inspect the files in your local directory instead.
This example uses the configuration provided in the Additional Configuration section of the Setup on Bare-Metal.
Create the local directories and change ownership to a non-root user of your choice:
sudo mkdir -p /opt/flare/conf
sudo mkdir /opt/flare/logs
sudo chown -R <user>:<user> /opt/flare
Replace <user> with the user you wish to start your containerized RPC node with. For security reasons, it is recommended that this isn't the root user.
Create the configuration file, config.json, in the configuration directory you created earlier (/opt/flare/conf):
{
"snowman-api-enabled": false,
"coreth-admin-api-enabled": false,
"eth-apis": [
"eth",
"eth-filter",
"net",
"web3",
"internal-eth",
"internal-blockchain",
"internal-transaction"
],
"rpc-gas-cap": 50000000,
"rpc-tx-fee-cap": 100,
"pruning-enabled": true,
"local-txs-enabled": false,
"api-max-duration": 0,
"api-max-blocks-per-request": 0,
"allow-unfinalized-queries": false,
"allow-unprotected-txs": false,
"remote-tx-gossip-only-enabled": false,
"log-level": "info"
}
Run the node
The node can be run using:
- Docker CLI: easier for a quick setup.
- Docker Compose: better suited for a more permanent, reusable setup.
Using Docker CLI
- Flare Mainnet
- Flare Testnet Coston2
- Songbird Canary-Network
- Songbird Testnet Coston
Use the Docker image at go-flare. The Overview tab of the linked repository explains the configurable parameters.
Download and start the container:
docker pull flarefoundation/go-flare:v1.9.0.1
docker run -d --name flare-observer -e AUTOCONFIGURE_BOOTSTRAP="1" -e NETWORK_ID="flare" -e AUTOCONFIGURE_PUBLIC_IP="1" -e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="https://flare-bootstrap.flare.network/ext/info" -v /mnt/db:/app/db -v /opt/flare/conf:/app/conf/C -v /opt/flare/logs:/app/logs -p 0.0.0.0:9650:9650 -p 0.0.0.0:9651:9651 flarefoundation/go-flare:v1.9.0.1
Confirm that your container is running and that logs are being printed:
docker ps
docker logs flare-observer -f
Use the Docker image at go-flare. The Overview tab of the linked repository explains the configurable parameters.
Download and start the container:
docker pull flarefoundation/go-flare:v1.9.0.1
docker run -d --name coston2-observer -e AUTOCONFIGURE_BOOTSTRAP="1" -e NETWORK_ID="costwo" -e AUTOCONFIGURE_PUBLIC_IP="1" -e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="https://coston2-bootstrap.flare.network/ext/info" -v /mnt/db:/app/db -v /opt/flare/conf:/app/conf/C -v /opt/flare/logs:/app/logs -p 0.0.0.0:9650:9650 -p 0.0.0.0:9651:9651 flarefoundation/go-flare:v1.9.0.1
Confirm that your container is running and that logs are being printed:
docker ps
docker logs coston2-observer -f
Use the Docker image at go-flare. The Overview tab of the linked repository explains the configurable parameters.
Download and start the container:
docker pull flarefoundation/go-flare:v1.9.0.1
docker run -d --name songbird-observer -e AUTOCONFIGURE_BOOTSTRAP="1" -e NETWORK_ID="songbird" -e AUTOCONFIGURE_PUBLIC_IP="1" -e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="https://songbird-bootstrap.flare.network/ext/info" -v /mnt/db:/app/db -v /opt/flare/conf:/app/conf/C -v /opt/flare/logs:/app/logs -p 0.0.0.0:9650:9650 -p 0.0.0.0:9651:9651 flarefoundation/go-flare:v1.9.0.1
Confirm that your container is running and that logs are being printed:
docker ps
docker logs songbird-observer -f
Use the Docker image at go-flare. The Overview tab of the linked repository explains the configurable parameters.
Download and start the container:
docker pull flarefoundation/go-flare:v1.9.0.1
docker run -d --name coston-observer -e AUTOCONFIGURE_BOOTSTRAP="1" -e NETWORK_ID="coston" -e AUTOCONFIGURE_PUBLIC_IP="1" -e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="https://coston-bootstrap.flare.network/ext/info" -v /mnt/db:/app/db -v /opt/flare/conf:/app/conf/C -v /opt/flare/logs:/app/logs -p 0.0.0.0:9650:9650 -p 0.0.0.0:9651:9651 flarefoundation/go-flare:v1.9.0.1
Confirm that your container is running and that logs are being printed:
docker ps
docker logs coston-observer -f
Once you have confirmed that the container is running, press Ctrl+C to stop following the logs and check your container's /ext/health endpoint. You will only see "healthy": true once the RPC node is fully synced, but a response of any kind confirms that the container's HTTP port 9650 is reachable from your local machine.
curl http://localhost:9650/ext/health | jq
Explanation of the CLI arguments.
Volumes:
- -v /mnt/db:/app/db: Mounts the local database directory to the default database directory of the container.
- -v /opt/flare/conf:/app/conf/C: Mounts the local configuration directory to the default location of config.json.
- -v /opt/flare/logs:/app/logs: Mounts the local logs directory to the workload's default logs directory.
Ports:
- -p 0.0.0.0:9650:9650: Maps the container's HTTP port to your local machine, so the containerized RPC node can be queried via your local machine's IP and port.
  Warning: Only bind to 0.0.0.0 for port 9650 if you wish to publicly expose your containerized RPC node's RPC endpoint from your machine's public IP address. If it must be publicly accessible for another application to use, set up a firewall rule that only allows port 9650 to be reached from specific source IP addresses, for example with a rule like the one sketched after this list.
- -p 0.0.0.0:9651:9651: Maps the container's peering port to your local machine so other peers can query the node.
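As an example of such a firewall rule, the sketch below uses ufw to allow the RPC port only from a single trusted address while leaving the peering port open. The ufw tool and the 203.0.113.10 address are illustrative; use whichever firewall you run and your real client IPs:
# Allow RPC access on 9650 from one trusted client only.
sudo ufw allow from 203.0.113.10 to any port 9650 proto tcp
# The peering port must stay reachable from the internet for correct node operation.
sudo ufw allow 9651/tcp
sudo ufw enable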
Environment Variables:
- -e AUTOCONFIGURE_BOOTSTRAP="1": Retrieves the bootstrap endpoint's Node-IP and Node-ID automatically.
- -e NETWORK_ID="<network>": Sets the network ID from one of the following options: coston, costwo, songbird, flare.
- -e AUTOCONFIGURE_PUBLIC_IP="1": Retrieves your local machine's IP automatically.
- -e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="<bootstrap_host>/ext/info": Defines the bootstrap endpoint used to initialize chain sync. Flare provides a bootstrap node for each chain:
  - https://coston-bootstrap.flare.network/ext/info
  - https://coston2-bootstrap.flare.network/ext/info
  - https://songbird-bootstrap.flare.network/ext/info
  - https://flare-bootstrap.flare.network/ext/info
Using Docker Compose
Docker Compose is a good way to simplify running the RPC node: it collects all the necessary configuration into a single file that can be started with one command.
In this guide, the docker-compose.yaml file is created in /opt/observer, but the location is entirely up to you.
Create the working directory and set its ownership:
sudo mkdir /opt/observer
sudo chown -R <user>:<user> /opt/observer
Create the docker-compose.yaml file:
- Flare Mainnet
- Flare Testnet Coston2
- Songbird Canary-Network
- Songbird Testnet Coston
version: '3.6'
services:
observer:
container_name: flare-observer
image: flarefoundation/go-flare:v1.9.0.1
restart: on-failure
environment:
- AUTOCONFIGURE_BOOTSTRAP=1
- NETWORK_ID=flare
- AUTOCONFIGURE_PUBLIC_IP=1
- AUTOCONFIGURE_BOOTSTRAP_ENDPOINT=https://flare-bootstrap.flare.network/ext/info
volumes:
- /mnt/db:/app/db
- /opt/flare/conf:/app/conf/C
- /opt/flare/logs:/app/logs
ports:
- 0.0.0.0:9650:9650
- 0.0.0.0:9651:9651
version: '3.6'
services:
observer:
container_name: coston2-observer
image: flarefoundation/go-flare:v1.9.0.1
restart: on-failure
environment:
- AUTOCONFIGURE_BOOTSTRAP=1
- NETWORK_ID=costwo
- AUTOCONFIGURE_PUBLIC_IP=1
- AUTOCONFIGURE_BOOTSTRAP_ENDPOINT=https://coston2-bootstrap.flare.network/ext/info
volumes:
- /mnt/db:/app/db
- /opt/flare/conf:/app/conf/C
- /opt/flare/logs:/app/logs
ports:
- 0.0.0.0:9650:9650
- 0.0.0.0:9651:9651
version: '3.6'
services:
observer:
container_name: songbird-observer
image: flarefoundation/go-flare:v1.9.0.1
restart: on-failure
environment:
- AUTOCONFIGURE_BOOTSTRAP=1
- NETWORK_ID=songbird
- AUTOCONFIGURE_PUBLIC_IP=1
- AUTOCONFIGURE_BOOTSTRAP_ENDPOINT=https://songbird-bootstrap.flare.network/ext/info
volumes:
- /mnt/db:/app/db
- /opt/flare/conf:/app/conf/C
- /opt/flare/logs:/app/logs
ports:
- 0.0.0.0:9650:9650
- 0.0.0.0:9651:9651
version: '3.6'
services:
observer:
container_name: coston-observer
image: flarefoundation/go-flare:v1.9.0.1
restart: on-failure
environment:
- AUTOCONFIGURE_BOOTSTRAP=1
- NETWORK_ID=coston
- AUTOCONFIGURE_PUBLIC_IP=1
- AUTOCONFIGURE_BOOTSTRAP_ENDPOINT=https://coston-bootstrap.flare.network/ext/info
volumes:
- /mnt/db:/app/db
- /opt/flare/conf:/app/conf/C
- /opt/flare/logs:/app/logs
ports:
- 0.0.0.0:9650:9650
- 0.0.0.0:9651:9651
Run Docker Compose:
docker compose -f /opt/observer/docker-compose.yaml up -d
When the command completes, check that the container is running and that logs are being generated:
docker ps
docker compose -f /opt/observer/docker-compose.yaml logs -f
Once you have confirmed the container is running, press Ctrl+C to stop following the logs and check your container's /ext/health endpoint. You will only see "healthy": true once the RPC node is fully synced, but a response of any kind confirms that the container's HTTP port (9650) is reachable from your local machine.
curl http://localhost:9650/ext/health | jq
Additional configuration
Several environment variables let you adjust the workload at runtime. The Docker and Docker Compose examples above assumed some defaults and relied on built-in automation scripts for most of the configuration. All available options are outlined below:
| Variable Name | Default | Description |
|---|---|---|
| HTTP_HOST | 0.0.0.0 | HTTP host binding address |
| HTTP_PORT | 9650 | The listening port for the HTTP host |
| STAKING_PORT | 9651 | The staking port for bootstrapping nodes |
| PUBLIC_IP | (empty) | Public-facing IP. Must be set if AUTOCONFIGURE_PUBLIC_IP=0 |
| DB_DIR | /app/db | The database directory location |
| DB_TYPE | leveldb | The database type to be used |
| BOOTSTRAP_IPS | (empty) | A list of bootstrap server IPs |
| BOOTSTRAP_IDS | (empty) | A list of bootstrap server IDs |
| CHAIN_CONFIG_DIR | /app/conf | Configuration folder where you should mount your configuration file |
| LOG_DIR | /app/logs | Logging directory |
| LOG_LEVEL | info | Logging verbosity level written to the log file |
| AUTOCONFIGURE_PUBLIC_IP | 0 | Set to 1 to autoconfigure PUBLIC_IP; skipped if PUBLIC_IP is set |
| AUTOCONFIGURE_BOOTSTRAP | 0 | Set to 1 to autoconfigure BOOTSTRAP_IPS and BOOTSTRAP_IDS |
| EXTRA_ARGUMENTS | (empty) | Extra arguments passed to the flare binary |
Additional options:
- NETWORK_ID
  Default: Depends on the image you use: coston for go-songbird, costwo for go-flare.
  Description: Name of the network you want to connect to.
- AUTOCONFIGURE_BOOTSTRAP_ENDPOINT
  Default: https://coston2-bootstrap.flare.network/ext/info or https://flare-bootstrap.flare.network/ext/info
  Description: Endpoint from which the Node-ID and Node-IP are automatically retrieved.
- AUTOCONFIGURE_FALLBACK_ENDPOINTS
  Default: (empty)
  Description: Comma-separated fallback bootstrap endpoints, used if AUTOCONFIGURE_BOOTSTRAP_ENDPOINT is not valid, for example because the bootstrap endpoint is unreachable. Endpoints are tested from first to last until a valid one is found.
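As an illustration of how these variables combine, the sketch below starts a Flare Testnet Coston2 container that raises the log verbosity, supplies a fallback bootstrap endpoint, and passes one extra flag straight to the node binary. The fallback endpoint is a placeholder, the extra argument reuses the --bootstrap-retry-enabled flag mentioned earlier, and the image, mounts, and ports follow the earlier examples:
# Example only: Coston2 container with additional runtime options.
docker run -d --name coston2-observer \
  -e NETWORK_ID="costwo" \
  -e AUTOCONFIGURE_PUBLIC_IP="1" \
  -e AUTOCONFIGURE_BOOTSTRAP="1" \
  -e AUTOCONFIGURE_BOOTSTRAP_ENDPOINT="https://coston2-bootstrap.flare.network/ext/info" \
  -e AUTOCONFIGURE_FALLBACK_ENDPOINTS="https://<your-fallback-bootstrap>/ext/info" \
  -e LOG_LEVEL="debug" \
  -e EXTRA_ARGUMENTS="--bootstrap-retry-enabled=false" \
  -v /mnt/db:/app/db \
  -v /opt/flare/conf:/app/conf/C \
  -v /opt/flare/logs:/app/logs \
  -p 0.0.0.0:9650:9650 -p 0.0.0.0:9651:9651 \
  flarefoundation/go-flare:v1.9.0.1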
Node maintenance
In some cases, your node might not work correctly or you might receive unusual messages that are difficult to troubleshoot. Use the following solutions to ensure your node stays healthy:
- Ensure Adequate Peers: When your node has fewer than 16 peers, it will not work correctly. To retrieve the number of connected peers, run the following command and look for the line containing connectedPeers:
  curl http://127.0.0.1:9650/ext/health | jq
  To automate the process, use:
  curl -s http://127.0.0.1:9650/ext/health | jq -r ".checks.network.message.connectedPeers"
- Check Disk Space: If your node does not sync after a long time and abruptly stops working, ensure the database location has sufficient disk space. Remember, the database size might change significantly during bootstrapping.
- Resolve Connection Issues: If you receive unusual messages after making submissions or when transactions are reverted, your node might not be connected correctly. Ensure the database location has sufficient disk space, then restart the node.
- Handle Bootstrap Errors: If you receive an error related to GetAcceptedFrontier during bootstrapping, your node was disconnected during the process. Restart the node if you see the following error:
  failed to send GetAcceptedFrontier(MtF8bVH241hetCQJgsKEdKyJBs8vhp1BC, 11111111111111111111111111111111LpoYY, NUMBER)
- Restart Unhealthy Nodes: If your node syncs but remains unhealthy for no discernible reason, restart the node. A small sketch that automates these checks follows this list.
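The following sketch automates the peer and health checks above for the Docker setup: it queries the health endpoint, extracts the connected-peer count, and restarts the container when the node is unhealthy or has fewer than 16 peers. It assumes the container is named flare-observer as in the earlier examples and that curl and jq are installed; you could run it from cron every few minutes:
#!/usr/bin/env bash
# Restart the RPC container if it is unhealthy or has too few peers.
HEALTH=$(curl -m 10 -s http://127.0.0.1:9650/ext/health)
HEALTHY=$(echo "$HEALTH" | jq -r '.healthy')
PEERS=$(echo "$HEALTH" | jq -r '.checks.network.message.connectedPeers // 0')

if [ "$HEALTHY" != "true" ] || [ "${PEERS:-0}" -lt 16 ]; then
  echo "Node unhealthy (healthy=$HEALTHY, peers=$PEERS); restarting container"
  docker restart flare-observer
fi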