Instructions to deploy the monitor for existing bridges
The TokenBridge monitor instance is deployed with Ansible. The process below assumes two nodes: one where the Ansible playbooks orchestrate the deployment (the orchestration node) and another where the monitor instance is deployed (the target node).
The orchestration node must satisfy the following dependencies:
Python 2 (v2.6-v2.7) or Python 3 (v3.5+)
Ansible v2.3+ (on Ubuntu-based systems it can be installed with apt-get install ansible)
Git
The target node must have a functional Ubuntu 16.04 or 18.04 installation; 4 GB+ of memory is recommended.
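A quick way to confirm the target node meets these requirements (the files and tools below are standard on Ubuntu; adjust for other setups):

```shell
# Print the OS release and total memory on the target node
grep -E '^(NAME|VERSION_ID)=' /etc/os-release
MEM=$(free -h | awk '/^Mem:/ {print $2}')
echo "total memory: ${MEM}"
```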
For these instructions, you will use an Infura account with a project ID. You can create a free Infura account; the project ID is shown in the project settings. Alternatively, you can use a different Ethereum Mainnet endpoint.
Tasks on Orchestration Node
1. Log in and generate a pair of SSH keys that will be used by the orchestration node to remotely log in to the target node. The generated public key must be added to .ssh/authorized_keys on the target node in the home directory of the user (usually root or ubuntu) that will be configured to perform deployment actions.
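A concrete sketch of this step; the key type, file name, user, and address are examples, not requirements:

```shell
# Generate a dedicated deployment key pair (the file name is an example)
ssh-keygen -t ed25519 -N '' -f ./monitor_deploy_key -C "monitor-deploy"
# Install the public key on the target node -- run manually; the user
# (ubuntu) and address (1.2.3.4) are placeholders for your target node:
#   ssh-copy-id -i ./monitor_deploy_key.pub ubuntu@1.2.3.4
```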
Check that the user has appropriate permissions on the .ssh directory and change them if needed. Use ls -l to check ownership; if the files are owned by root and you are deploying as the ubuntu user, update ownership with chown:
ls -l ~/.ssh
sudo chown -R ubuntu:ubuntu ~/.ssh
2. Clone the TokenBridge git repository and change the working directory:
3. Replace all values templated with tags (<>) with actual values. To do this, decide in advance the port on which the monitor web service will listen for users' requests and the JSON-RPC URL used to communicate with Ethereum Mainnet nodes.
The web service port must be specified only in the first file.
group_vars/xdai.yml
---
MONITOR_BRIDGE_NAME: "xdai"
MONITOR_PORT: <port for web service>
COMMON_HOME_RPC_URL: "https://dai.poa.network"
COMMON_HOME_BRIDGE_ADDRESS: "0x7301CFA0e1756B71869E93d4e4Dca5c7d0eb0AA6"
COMMON_FOREIGN_RPC_URL: "https://mainnet.infura.io/v3/<infura project id>"
COMMON_FOREIGN_BRIDGE_ADDRESS: "0x4aa42145Aa6Ebf72e164C9bBC74fbD3788045016"
COMMON_HOME_GAS_PRICE_FALLBACK: 0
COMMON_FOREIGN_GAS_PRICE_SUPPLIER_URL: "https://gasprice.poa.network/"
COMMON_FOREIGN_GAS_PRICE_SPEED_TYPE: "fast"
COMMON_FOREIGN_GAS_PRICE_FALLBACK: 10000000000
COMMON_FOREIGN_GAS_PRICE_FACTOR: 1
MONITOR_HOME_START_BLOCK: 759
MONITOR_FOREIGN_START_BLOCK: 6478417
MONITOR_VALIDATOR_HOME_TX_LIMIT: 0
MONITOR_VALIDATOR_FOREIGN_TX_LIMIT: 300000
MONITOR_TX_NUMBER_THRESHOLD: 100
4. Return to the deployment directory (cd ..) and create the hosts.yml file:
Replace all variables templated with tags (<>) with actual values.
---
xdai:
  children:
    monitor:
      hosts:
        <target node ip address 1.2.3.4>:
          ansible_user: <user>
Here <user> is the account that will ssh into the monitor node for deployment actions. This is typically ubuntu or root.
5. Next, run the Ansible playbook. It deploys the monitor instance on the remote target node and then propagates the rest of the configuration to the same system. Run the playbook again each time after re-configuring the hosts.yml file.
If the target node has python3 but no python binary, add -e 'ansible_python_interpreter=/usr/bin/python3' to the ansible-playbook command (before -i hosts.yml). Try this if you get a node connection / ssh error.
Depending on your ssh setup, you may not need the --private-key flag.
⏳ Automated deployment and the remote node configuration can take a few minutes depending on target node resources. Be patient and maybe have a ☕ during these operations!
ansible-playbook --private-key=~/.ssh/<privkey.file> -i hosts.yml site.yml
sed -i 's/xdai/poa/' hosts.yml
ansible-playbook --private-key=~/.ssh/<privkey.file> -i hosts.yml site.yml
sed -i 's/poa/wetc/' hosts.yml
ansible-playbook --private-key=~/.ssh/<privkey.file> -i hosts.yml site.yml
sed -i 's/wetc/amb-xdai/' hosts.yml
ansible-playbook --private-key=~/.ssh/<privkey.file> -i hosts.yml site.yml
sed -i 's/amb-xdai/amb-poa/' hosts.yml
ansible-playbook --private-key=~/.ssh/<privkey.file> -i hosts.yml site.yml
sed -i 's/amb-poa/amb-rinkeby/' hosts.yml
ansible-playbook --private-key=~/.ssh/<privkey.file> -i hosts.yml site.yml
sed -i 's/amb-rinkeby/amb-qdai/' hosts.yml
ansible-playbook --private-key=~/.ssh/<privkey.file> -i hosts.yml site.yml
sed -i 's/amb-qdai/amb-test/' hosts.yml
ansible-playbook --private-key=~/.ssh/<privkey.file> -i hosts.yml site.yml
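After the initial xdai deployment (the first command above), the remaining sed/playbook pairs can be expressed as one loop. This sketch only prints each playbook command (note the echo) so it can be reviewed safely; remove the echo to actually run it, and substitute your real key path for the privkey.file placeholder:

```shell
# Rename the bridge key in hosts.yml and (re)deploy for each remaining bridge.
[ -f hosts.yml ] || echo 'xdai:' > hosts.yml   # illustration only: ensure the file exists
prev=xdai
for next in poa wetc amb-xdai amb-poa amb-rinkeby amb-qdai amb-test; do
  sed -i "s/${prev}/${next}/" hosts.yml
  echo ansible-playbook --private-key=~/.ssh/privkey.file -i hosts.yml site.yml
  prev=$next
done
```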
6. Wait 5-6 minutes, then check that the monitor statistics are available from the web service:
1) By URL:
http://<target node ip address>:<port>/xdai
http://<target node ip address>:<port>/poa
http://<target node ip address>:<port>/wetc
http://<target node ip address>:<port>/amb-xdai
http://<target node ip address>:<port>/amb-poa
http://<target node ip address>:<port>/amb-rinkeby
http://<target node ip address>:<port>/amb-qdai
http://<target node ip address>:<port>/amb-test
2) If the URL method is unavailable, you can log in to the target node and run curl http://<target node ip address>:<port>/xdai from the command line to check operability.
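The per-bridge checks above can be scripted. A minimal sketch, where HOST and PORT are placeholders for your target node's address and the configured web-service port:

```shell
# Poll every monitor endpoint once and report availability
HOST=127.0.0.1
PORT=3003
BRIDGES="xdai poa wetc amb-xdai amb-poa amb-rinkeby amb-qdai amb-test"
for b in $BRIDGES; do
  if curl -fsS --max-time 5 "http://${HOST}:${PORT}/${b}" > /dev/null; then
    echo "${b}: OK"
  else
    echo "${b}: unavailable"
  fi
done
```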