Table of Contents
- 1. Introduction
- 2. Installation
- 3. Using basic commands
- 3.0.1. Start as a server
- 3.0.2. View server Specification
- 3.0.3. Run container
- 3.0.4. Remove container
- 3.0.5. Adding servers to ip table
- 3.0.6. Update ip table
- 3.0.7. List Servers
- 3.0.8. View Network interfaces
- 3.0.9. Viewing Containers created Client side
- 3.0.10. Running plugin
- 3.0.11. Create group
- 3.0.12. Add container to group
- 3.0.13. View groups
- 3.0.14. View specific group
- 3.0.15. Delete container from group
- 3.0.16. Delete entire group
- 3.0.17. Pulling plugin from a remote repo
- 3.0.18. Deleting plugin from the plugin directory
- 3.0.19. Adding custom metadata about the current node
- 3.0.20. MapPort and link to domain name
- 4. P2P Module Implementation
- 5. Language Bindings
- 6. Config Implementation
- 7. Abstractions
- 8. NAT Traversal
- 9. Client mode
- 10. Blog posts
- 11. Ideas for future potential features
1. Introduction
1.1. Abstract
This project focuses on creating a framework for running heavy computational tasks that a regular computer cannot handle easily. These tasks may include graphically demanding video games, rendering 3D animations, and performing complex protein folding simulations. The major focus of this project is not on financial incentives but rather on building a robust and efficient peer-to-peer (P2P) network to decentralise task execution and increase the computational bandwidth available for such tasks.
The P2PRC framework serves as a foundation for decentralised rendering and computation, providing insights into how tasks can be distributed efficiently across a network of peers. Leveraging the P2PRC approach, this project extends its capabilities to handle a wider range of computationally intensive tasks.
1.2. Motivation
Most users rely on their PC, laptop, or servers belonging to a server farm to run heavy tasks, and the demand for highly creative work keeps driving up the computing power required. Buying a powerful computer every few years to run a handful of heavy tasks that are not executed frequently enough to reap the benefits is an inefficient use of hardware. On the other hand, renting servers to run these heavy tasks can be really useful. Ethically speaking, though, this leads to a monopolisation of computing power similar to what is happening in the web server space. By using peer-to-peer principles it is possible to remove the monopolisation factor and increase the bandwidth between client and server.
2. Installation
This section covers the basic steps to get the server and client side running.
2.1. Latest release install
2.2. Install from GitHub master branch
2.2.1. Install Go lang
The entire implementation of this project is written in Go, so we need the Go toolchain to compile the code into a binary. Instructions to install Go lang
2.2.2. Install Docker
In this project the choice of virtualization is Docker, due to its wide usage in the developer community. In the server module we use the Docker Go API to create and interact with containers.
Instructions to install docker
Instructions to install docker GPU
# Do ensure that the docker command does not need sudo to run
sudo chmod 666 /var/run/docker.sock
2.2.3. Build and install the project
To set up the internal dependencies and build the entire Go code into a single binary:
make
2.2.4. Add appropriate paths to .bashrc
export P2PRC=/<PATH>/p2p-rendering-computation
export PATH=/<PATH>/p2p-rendering-computation:${PATH}
2.2.5. Test if binary works
p2prc --help
- Output:
NAME:
   p2p-rendering-computation - p2p cli application to create and access VMs in other servers

USAGE:
   p2prc [global options] command [command options] [arguments...]

VERSION:
   2.0.0

COMMANDS:
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --Server, -s                            Starts server (default: false) [$SERVER]
   --UpdateServerList, --us                Update List of Server available based on servers iptables (default: false) [$UPDATE_SERVER_LIST]
   --ListServers, --ls                     List servers which can render tasks (default: false) [$LIST_SERVERS]
   --AddServer value, --as value           Adds server IP address to iptables [$ADD_SERVER]
   --ViewImages value, --vi value          View images available on the server IP address [$VIEW_IMAGES]
   --CreateVM value, --touch value         Creates Docker container on the selected server [$CREATE_VM]
   --ContainerName value, --cn value       Specifying the container run on the server side [$CONTAINER_NAME]
   --BaseImage value, --bi value           Specifying the docker base image to template the dockerfile [$CONTAINER_NAME]
   --RemoveVM value, --rm value            Stop and Remove Docker container (IP:port) accompanied by container ID via --ID or --id [$REMOVE_VM]
   --ID value, --id value                  Docker Container ID [$ID]
   --Ports value, -p value                 Number of ports to open for the Docker Container [$NUM_PORTS]
   --GPU, --gpu                            Create Docker Containers to access GPU (default: false) [$USE_GPU]
   --Specification value, --specs value    Specs of the server node [$SPECS]
   --SetDefaultConfig, --dc                Sets a default configuration file (default: false) [$SET_DEFAULT_CONFIG]
   --NetworkInterfaces, --ni               Shows the network interface in your computer (default: false) [$NETWORK_INTERFACE]
   --ViewPlugins, --vp                     Shows plugins available to be executed (default: false) [$VIEW_PLUGIN]
   --TrackedContainers, --tc               View (currently running) containers which have been created from the client side (default: false) [$TRACKED_CONTAINERS]
   --ExecutePlugin value, --plugin value   Plugin which needs to be executed [$EXECUTE_PLUGIN]
   --CreateGroup, --cgroup                 Creates a new group (default: false) [$CREATE_GROUP]
   --Group value, --group value            group flag with argument group ID [$GROUP]
   --Groups, --groups                      View all groups (default: false) [$GROUPS]
   --RemoveContainerGroup, --rmcgroup      Remove specific container in the group (default: false) [$REMOVE_CONTAINER_GROUP]
   --RemoveGroup value, --rmgroup value    Removes the entire group [$REMOVE_GROUP]
   --MAPPort value, --mp value             Maps port for a specific port provided as the parameter [$MAPPORT]
   --DomainName value, --dn value          While mapping ports allows to set a domain name to create a mapping in the proxy server [$DOMAINNAME]
   --Generate value, --gen value           Generates a new copy of P2PRC which can be modified based on your needs [$GENERATE]
   --ModuleName value, --mod value         New go project module name [$MODULENAME]
   --PullPlugin value, --pp value          Pulls plugin from git repos [$PULLPLUGIN]
   --RemovePlugin value, --rp value        Removes plugin [$REMOVEPLUGIN]
   --AddMetaData value, --amd value        Adds metadata about the current node in the p2p network which is then propagated through the network [$ADDMETADATA]
   --help, -h                              show help (default: false)
   --version, -v                           print the version (default: false)
3. Using basic commands
3.0.1. Start as a server
p2prc -s
3.0.2. View server Specification
p2prc --specs=<ip address>
3.0.3. Run container
Use the --gpu flag if you know the other machine has a GPU.
p2prc --touch=<server ip address> -p <number of ports> --gpu
3.0.4. Remove container
The Docker container ID is present in the output shown when the container is created.
p2prc --rm=<server ip address> --id=<docker container id>
3.0.5. Adding servers to ip table
p2prc --as=<server ip address you want to add>
3.0.6. Update ip table
p2prc --us
3.0.7. List Servers
p2prc --ls
3.0.8. View Network interfaces
p2prc --ni
3.0.9. Viewing Containers created Client side
p2prc --tc
3.0.10. Running plugin
p2prc --plugin <plugin name> --id <container id or group id>
3.0.11. Create group
p2prc --cgroup
3.0.12. Add container to group
p2prc --group <group id> --id <container id>
3.0.13. View groups
p2prc --groups
3.0.14. View specific group
p2prc --group <group id>
3.0.15. Delete container from group
p2prc --rmcgroup --group <group id> --id <container id>
3.0.16. Delete entire group
p2prc --rmgroup <group id>
3.0.17. Pulling plugin from a remote repo
p2prc --pp <repo link>
3.0.18. Deleting plugin from the plugin directory
p2prc --rp <plugin name>
3.0.19. Adding custom metadata about the current node
p2prc --amd "custom metadata"
3.0.20. MapPort and link to domain name
Allows exposing remote ports from a machine in the P2P network.
p2prc --mp <port no to map> --dn <domain name to link Mapped port against>
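Putting several of the commands above together, a typical client session might look like the following (the IP address is illustrative; the flags are the ones listed in the help output above):

# add a known node to the ip table and refresh it
p2prc --as=145.40.100.20
p2prc --us
# list servers which can render tasks
p2prc --ls
# create a container with 2 open ports on the chosen server
p2prc --touch=145.40.100.20 -p 2
# ...use the container, then clean up
p2prc --rm=145.40.100.20 --id=<docker container id>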
4. P2P Module Implementation
The P2P module is for managing server information within the network. It maintains and updates the IP table, ensuring accuracy by preventing duplicates and removing entries for unreachable servers. Furthermore, the module conducts speed tests on the listed servers to determine upload and download speeds. This valuable information enables users to identify nearby servers with optimal performance, enhancing their overall network experience.
Figure 1: UML diagram of P2P module
The peer-to-peer implementation was built from scratch, because other peer-to-peer libraries centre on implementing a distributed hash table. At the current moment those heavy features are not needed, because the objective is simply to search for and list all possible servers available. The limitation is that, to be part of the network, the user has to know at least one server. The advantage of building from scratch is that the module stays super light and allows custom functions and structs. The subtopics below describe the implementation of each piece of functionality in depth.
4.1. IP Table
The IP table file is a JSON file containing a list of server IP addresses together with their latencies and download and upload speeds. The functions implemented include reading the file, writing the file, and removing duplicate IP addresses. The duplicate-removal function exists because a server's IP table can sometimes contain the same IP addresses the client already has. The path of the IP table JSON file is obtained from the configuration module.
{ "ip_address": [ { "Name": "<hostname of the machine>", "MachineUsername": "<machine username>", "IPV4": "<ipv4 address>", "IPV6": "<ipv6 address (Not used)>", "Latency": <latency to the server>, "Download": <download speed (Not used)>, "Upload": <upload speed (Not used)>, "ServerPort": "<server port no>", "BareMetalSSHPort": "<Baremetal ssh port no>", "NAT": "<boolean representing if the node is behind NAT or not>", "EscapeImplementation": "<NAT traversal implementation>", "ProxyServer": "<If the node listed is acting as a proxy server>", "UnSafeMode": <Unsafe mode if turned on will allow all nodes in the network public keys to be added to that particular node>", "PublicKey": "<Public key of that particular node>", "CustomInformation": "<custom information passed in through all the nodes in the network>" } ] }
4.1.1. Latency
Latency is measured in milliseconds. The route /serverinfo is called on the server, and the time it takes to return a JSON response is recorded.
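A minimal sketch of this measurement (the /serverinfo route comes from the text above; the rest is illustrative and not the actual P2PRC code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// measureLatency times how long the server takes to answer /serverinfo
// with its JSON response.
func measureLatency(server string) (time.Duration, error) {
	start := time.Now()
	resp, err := http.Get("http://" + server + "/serverinfo")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	// Drain the body so the full JSON response is included in the timing.
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		return 0, err
	}
	return time.Since(start), nil
}

func main() {
	latency, err := measureLatency("145.40.100.20:8088") // illustrative address
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("latency:", latency.Milliseconds(), "ms")
}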
5. Language Bindings
Language bindings are wrappers that bridge two programming languages. P2PRC uses them to make P2PRC functions callable from other programming languages. Currently this is done by generating .so and .h files with the Go compiler.
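Under the hood this presumably boils down to the Go toolchain's c-shared build mode, along the lines of the command below (the exact invocation in the Makefile may differ). Building with -buildmode=c-shared emits the p2prc.h header next to the shared object:

go build -buildmode=c-shared -o p2prc.so .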
5.2. Workings under the hood
Below is a sample set of commands to find the bindings implementation.
# run
cd Bindings/
# list files
ls
# search for the file Client.go
5.2.1. In Client.go
There are a few things to notice that differ from your standard Go programs:
- 1. We import "C", which means Cgo is required.
import "C"
- 2. All functions which need to be callable from other programming languages carry a comment such as:
//export <function name>

// ------------ Example ----------------
// The function below makes it possible to call,
// from outside Go, the P2PRC function that starts
// containers on a specific node in the known list
// of nodes in the p2p network.
// Note the comment "//export StartContainer".
//export StartContainer
func StartContainer(IP string) (output *C.char) {
	container, err := client.StartContainer(IP, 0, false, "", "")
	if err != nil {
		return C.CString(err.Error())
	}
	return ConvertStructToJSONString(container)
}
- 3. While looking through the file (if two exported functions are compared it is pretty trivial to notice a common structure):
// --------- Example ------------
//export StartContainer
func StartContainer(IP string) (output *C.char) {
	container, err := client.StartContainer(IP, 0, false, "", "")
	if err != nil {
		return C.CString(err.Error())
	}
	return ConvertStructToJSONString(container)
}

//export ViewPlugin
func ViewPlugin() (output *C.char) {
	plugins, err := plugin.DetectPlugins()
	if err != nil {
		return C.CString(err.Error())
	}
	return ConvertStructToJSONString(plugins)
}
- It is easy to notice that:
  - ConvertStructToJSONString(<go object>): a helper function that first converts a Go object to a JSON string and then converts it to a CString.
  - (output *C.char): the return type of most of the functions.
- Pseudocode capturing the common shape of these functions could be represented as:

//export <Function name>
func <Function name>() (output *C.char) {
	<response>, <error> := <P2PRC function name>(<parameters if needed>)
	if <error> != nil {
		return C.CString(<error>.Error())
	}
	return ConvertStructToJSONString(<response>)
}
5.3. Current languages supported
5.3.1. Python
- Build sample python program
The easier way
# Run
make python
# Expected output
Output is in the Directory Bindings/python/export/
# Run
cd Bindings/python/export/
# list files
ls
# Expected output
SharedObjects/ p2prc.py
The output above shows a generated folder containing a directory called "SharedObjects/", which holds the p2prc.so and p2prc.h files. p2prc.py is a sample Python script that calls P2PRC Go functions. To start any project extending P2PRC with Python, this generated folder can be copied and turned into a new git repo for P2PRC extension scripts, or used as a reference point and proof of concept that P2PRC can be called from other programming languages.
5.3.2. Haskell
P2PRC officially supports Haskell bindings and will further support projects using Haskell to build orchestrators on top of P2PRC.
6. Config Implementation
The configuration module is responsible for storing basic information such as the absolute paths of files used by the Go code. In a full-fledged CLI install the configuration file can be found in the etc directory, and from there it points to locations such as where the IP table file resides. In future implementations the config file will hold information such as the number of hops and other parameters that can be tweaked to improve the effectiveness of the peer-to-peer network. The configuration module is implemented using the Viper library. Viper automates features such as searching default paths to determine whether a configuration file is present; if no configuration file is found in the default paths, one is auto-generated. The configuration file can be in any format; in this project it is generated in JSON.
{ "MachineName": "pc-74-120.customer.ask4.lan", "IPTable": "/Users/akilan/Documents/p2p-rendering-computation/p2p/iptable/ip_table.json", "DockerContainers": "/Users/akilan/Documents/p2p-rendering-computation/server/docker/containers/", "DefaultDockerFile": "/Users/akilan/Documents/p2p-rendering-computation/server/docker/containers/docker-ubuntu-sshd/", "SpeedTestFile": "/Users/akilan/Documents/p2p-rendering-computation/p2p/50.bin", "IPV6Address": "", "PluginPath": "/Users/akilan/Documents/p2p-rendering-computation/plugin/deploy", "TrackContainersPath": "/Users/akilan/Documents/p2p-rendering-computation/client/trackcontainers/trackcontainers.json", "ServerPort": "8088", "GroupTrackContainersPath": "/Users/akilan/Documents/p2p-rendering-computation/client/trackcontainers/grouptrackcontainers.json", "FRPServerPort": "True", "BehindNAT": "True", "CustomConfig": null }
7. Abstractions
The Abstractions package consists of black-boxed functions for P2PRC.
7.1. Functions
- Init(<Project name>): Initializes P2PRC with all the needed configurations.
- Start(): Starts P2PRC as a server and makes it possible to extend P2PRC by adding other routes and functionality.
- MapPort(<port no>): Exposes the given port on the local machine to the world.
- StartContainer(<ip address>): Spins up a Docker container on the given machine in the p2p network.
- RemoveContainer(<ip address>,<container id>): Terminates a container based on the IP address and container ID.
- GetSpecs(<ip address>): Gets the specs of a machine on the network based on its IP address.
- ViewIPTable(): Views the IP table, which holds information about nodes in the network.
- UpdateIPTable(): Forces an IP table update to learn about new nodes faster.
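A hedged sketch of how these functions might be strung together; the import path and exact signatures are assumptions based on the list above, not the published API:

package main

import (
	"fmt"
	"log"

	abstractions "github.com/Akilan1999/p2p-rendering-computation/abstractions"
)

func main() {
	// Initialize P2PRC for this project (project name and signature assumed).
	if err := abstractions.Init("my-p2prc-app"); err != nil {
		log.Fatal(err)
	}
	// Inspect the nodes currently known to this machine.
	table, err := abstractions.ViewIPTable()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(table)
	// Spin up a container on a chosen node (IP illustrative).
	container, err := abstractions.StartContainer("145.40.100.20")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(container)
}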
8. NAT Traversal
P2PRC currently supports TURN for NAT traversal.
8.1. TURN
The current TURN implementation used is FRP. A TURN server is also required when a P2PRC node is acting as a server. The TURN server is chosen as the node with the lowest latency among the nodes available in the IP table. Once a TURN server is determined, two actions are performed. The first is a call to /FRPPort on the TURN server to receive a port, which is used to generate the external port from the TURN server. The flow below describes the workflow.
9. Client mode
- Call /FRPPort:

http://<turn server ip>:<server port no>/FRPport
- Call the TURN server in the following manner; a sample code snippet is shown below.
import ( "github.com/Akilan1999/p2p-rendering-computation/p2p/frp" ) func main() { serverPort, err := frp.GetFRPServerPort("http://" + <lowestLatencyIpAddress.Ipv4> + ":" + lowestLatencyIpAddress.ServerPort) if err != nil { return nil, err } // Create 1 second delay to allow FRP server to start time.Sleep(1 * time.Second) // Starts FRP as a client with proxyPort, err := frp.StartFRPClientForServer(<lowestLatencyIpAddress.Ipv4>, serverPort, <the port you want to expose externally>) if err != nil { return nil, err } }
10. Blog posts
10.1. Self-host any program within 5 minutes
- Author: Akilan Selvacoumar
- Date: 28-01-2025
- Video tutorial:
This is a fun experiment for anyone who wants to quickly run a server and do a port and domain name mapping in a single command.
10.1.1. 1. Find a program you want to run
Let's try to set up a really easy program (let's go with Linkwarden with docker compose :) ). This assumes you have docker compose installed on your local machine.
- Let's run Linkwarden using docker compose and P2PRC
mkdir linkwarden && cd linkwarden
curl -O https://raw.githubusercontent.com/linkwarden/linkwarden/refs/heads/main/docker-compose.yml
curl -L https://raw.githubusercontent.com/linkwarden/linkwarden/refs/heads/main/.env.sample -o ".env"
Environment configuration
vim .env

# Change values
NEXTAUTH_URL=https://<DOMAIN NAME>/api/v1/auth
NEXTAUTH_SECRET=VERY_SENSITIVE_SECRET
POSTGRES_PASSWORD=CUSTOM_POSTGRES_PASSWORD
Run linkwarden!
docker compose up
If set up correctly, Linkwarden should be running. Local link: http://localhost:3000
Time to set up P2PRC (see the Installation instructions above).
Run p2prc as a background process
p2prc -s &
Run map port and domain mapping
p2prc --mp 3000 --dn <DOMAIN NAME>
Sample response
{ "IPAddress": "217.76.63.222", "PortNo": "61582", "EntireAddress": "217.76.63.222:61582" }
Add a DNS entry

A record pointing to 217.76.63.222
You're done! Now just head to the DOMAIN NAME you added, e.g. https://linkwarden.akilan.io
11. Ideas for future potential features
This section consists of personal ideas for the future of P2PRC. At the moment only the main contributors write to it.
11.1. To support a heterogeneous set of nodes that cannot run P2PRC
This stems from a personal issue I have when doing research on the CheriBSD kernel. For my research I am using the ARM Morello, which is a 128-bit ARMv8 processor. At the moment Go programs cannot compile and run on this CPU, which means I currently cannot run P2PRC on the ARM Morello to remotely access it when it is behind NAT. This would be a common problem when running against various architectures that do not support running P2PRC. As you will see soon, this also creates an opportunity to scale faster to nodes in a local network, and it would introduce a new layer of fault tolerance among the nodes of a local network.
11.1.1. Assumptions:
- I have a Morello board that cannot run P2PRC
- The Morello has a local IP address (ex: 192.168.0.10)
- I have 2 laptops running P2PRC in that local network.
- This means I have two ways to access the Morello board: SSH into either of the two laptops and then SSH into 192.168.0.10 to gain access to the board. Wouldn't it be great to automate this whole layer, and also look into dispatching custom tasks to the heterogeneous hardware?
11.1.2. Set of interesting possibilities:
We lay out a cool set of possibilities first and use them to build up the implementation plan.
- We can use P2PRC to access the Morello board remotely in a single command.
- We can use the P2PRC protocol to run servers inside the Morello board via a locally traversed node which can access that node.
- Spin up servers on nodes not running P2PRC using the standard P2PRC abstractions.
- Auto-setup of P2PRC nodes with just SSH access, potentially via a DSL.
- A neat use case for CHERI, for instance, would be to use the architecture to run lightweight hypervisors.
11.1.3. Implementation
- Use implementations similar to socat so that the addresses of local nodes can be bound to a node running P2PRC, which can then do a local map port (see the sketch after this list).
- We are working on hardening the implementation of --mp (map port) to even map ports to machines which are not themselves running P2PRC. This means, for instance, that I can issue a command to the Morello board without the Morello board being in my local network.
- We would also want to implement the existing P2PRC public key mechanism so that other nodes that have been granted access can reach the Morello board.
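To make the socat-style idea in the first bullet concrete, below is a minimal sketch of a TCP relay that a P2PRC-running laptop could use to bind the Morello board's local address before mapping the port with --mp (addresses illustrative; an idea sketch, not P2PRC code):

package main

import (
	"io"
	"log"
	"net"
)

// relay forwards every TCP connection on listenAddr to target,
// e.g. exposing the Morello board's SSH at 192.168.0.10:22 through
// the laptop that runs P2PRC.
func relay(listenAddr, target string) error {
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		return err
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			return err
		}
		go func(c net.Conn) {
			defer c.Close()
			upstream, err := net.Dial("tcp", target)
			if err != nil {
				return
			}
			defer upstream.Close()
			go io.Copy(upstream, c) // client -> board
			io.Copy(c, upstream)    // board -> client
		}(conn)
	}
}

func main() {
	// The relayed port could then be exposed with: p2prc --mp 2222
	log.Fatal(relay(":2222", "192.168.0.10:22"))
}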
Figure 2: Implementation idea (To be improved upon)