In this guide you will learn how to create a CRUD API using PostgreSQL for storage, Node.js for application code, and OpenFaaS to automate Kubernetes. Since we are building on top of Kubernetes, we also get portability, scaling, and self-healing infrastructure out of the box.
Conceptual Architecture
Ultimately, we'll use the CRUD API for fleet management of a number of Raspberry Pis distributed around the globe. PostgreSQL will provide the backing datastore, OpenFaaS will provide scale-out compute upon Kubernetes, and our code will be written in Node.js using an OpenFaaS template and the `pg` library from npmjs.
The Guide
These are the component parts of the guide:

- Kubernetes cluster - Production-grade container orchestration
- PostgreSQL - "The World's Most Advanced Database"
- OpenFaaS - "Serverless Made Simple for Kubernetes"
- Docker - Container runtime used to build Docker images for use with OpenFaaS
- Node.js - Server-side JavaScript we will build the API with
1) Set Up Your Kubernetes Cluster
Create a new cluster in your Civo dashboard and configure your kubectl to point at the new cluster.
For a full walk-through of Civo please refer to our Civo Documentation.
2) Install Postgresql
Civo users can add PostgreSQL as a one-click installation application in your Civo dashboard, or via the command-line interface:
If you are not on Civo managed Kubernetes and don't have access to the Marketplace, you can do the following.
- Install k3sup, which can install PostgreSQL and OpenFaaS:

```sh
curl -sLfS https://get.k3sup.dev | sudo sh
```
- Install PostgreSQL:

```sh
k3sup app install postgresql
```
You should see output like this:
```
=======================================================================
= postgresql has been installed.                                      =
=======================================================================

PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:

    postgresql.default.svc.cluster.local - Read/Write connection

To get the password for "postgres" run:

    export POSTGRES_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

To connect to your database run the following command:

    kubectl run postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host postgresql -U postgres -d postgres -p 5432

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/postgresql 5432:5432 &
    PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432

# Find out more at: https://github.com/helm/charts/tree/master/stable/postgresql
```
Whichever installation method you used, note the output carefully; it shows the following, which you will need in later steps:

- how to get your password
- your postgresql username
- how to connect with the PostgreSQL administrative CLI, `psql`
3) Install OpenFaaS
First, on your machine, install the OpenFaaS CLI:

```sh
curl -sLSf https://cli.openfaas.com | sudo sh
```
Again, if you are on Civo managed Kubernetes, you can simply add OpenFaaS to your cluster as an application from the dashboard.
Alternatively, you can use `k3sup` to install it:

```sh
k3sup app install openfaas
```
Again, note the output because it will show you how to connect and fetch your password. This is either visible on the Civo dashboard or in your command-line output like this:
```
=======================================================================
= OpenFaaS has been installed.                                        =
=======================================================================

# Get the faas-cli
curl -SLsf https://cli.openfaas.com | sudo sh

# Forward the gateway to your machine
kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# If basic auth is enabled, you can now log into your gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin

faas-cli store deploy figlet
faas-cli list

# For Raspberry Pi
faas-cli store list \
 --platform armhf

faas-cli store deploy figlet \
 --platform armhf

# Find out more at:
# https://github.com/openfaas/faas
```
4) Install Docker
If you do not yet have Docker on your local machine, then get it from the Docker Homepage. Docker images are a secure, repeatable and portable way to build and ship code between environments.
Next, sign up for a Docker Hub account. The Docker Hub is an easy way to share our Docker images between our laptop and our cluster.
Run `docker login` and use your new username and password.
Now set your Docker username for use with OpenFaaS, replacing `alexellis2` with your username:

```sh
export OPENFAAS_PREFIX="alexellis2"
```
5) Node.js
Now let's build the CRUD application that can store the status, uptime, and temperatures of any number of Raspberry Pis as a proof of concept.
Install Node.js 12 on your local computer
Get Node.js here.
Create an OpenFaaS function using Node.js
```sh
mkdir -p $HOME/crud/
cd $HOME/crud/

faas-cli new --lang node12 device-status
mv device-status.yml stack.yml
```
Note: the template is selected with the `--lang` flag, like `--lang node12` or `--lang node18`. For more details, visit the OpenFaaS documentation.
This creates three files for us:

```
├── device-status
│   ├── handler.js
│   └── package.json
└── stack.yml
```
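The generated `handler.js` is a small echo-style function which we will replace later on. As a rough sketch (the exact contents vary between template versions), it looks like this:

```js
"use strict"

// The template's default handler simply echoes the request body back
module.exports = async (event, context) => {
    const result = {
        "body": JSON.stringify(event.body),
        "content-type": event.headers["content-type"]
    }

    return context
        .status(200)
        .succeed(result)
}
```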
Add the `pg` npm module:

Adding the PostgreSQL library for Node.js (`pg`) will enable pooling of connections, so that each HTTP request to our CRUD API does not create a new connection.
Here's what the `pg` library's documentation says about pooling:

> The PostgreSQL server can only handle a limited number of clients at a time. Depending on the available memory of your PostgreSQL server you may even crash the server if you connect an unbounded number of clients.
>
> PostgreSQL can only process one query at a time on a single connected client in a first-in first-out manner. If your multi-tenant web application is using only a single connected client all queries among all simultaneous requests will be pipelined and executed serially, one after the other.
The good news is that we can access pooling easily, and you'll see it in action below.
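As a preview, here is the core pattern as a minimal sketch; the connection details below are placeholders, and in our function they will come from secrets and environment variables instead:

```js
const { Pool } = require('pg')

// One Pool per process: connections are created lazily, reused across
// requests, and capped at the pool's maximum (10 clients by default)
const pool = new Pool({
    user: "postgres",
    host: "127.0.0.1",
    database: "postgres",
    password: "example-password", // placeholder
    port: 5432,
})

async function handleRequest() {
    // pool.query checks out a client, runs the query, then returns
    // the client to the pool automatically
    const { rows } = await pool.query('SELECT now() AS time')
    return rows[0].time
}
```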
```sh
cd $HOME/crud/device-status/

npm install --save pg
```
This step will update our `package.json` file.
6) Build the database schema
We'll have two tables, one will hold the devices and a key that they use to access the CRUD API, and the other will contain the status data.
```sql
-- Each device
CREATE TABLE device (
    device_id INT GENERATED ALWAYS AS IDENTITY,
    device_key text NOT NULL,
    device_name text NOT NULL,
    device_desc text NOT NULL,
    device_location point NOT NULL,
    created_at timestamp with time zone default now()
);

-- Set the primary key for device
ALTER TABLE device ADD CONSTRAINT device_id_key PRIMARY KEY(device_id);

-- Status of the device
CREATE TABLE device_status (
    status_id INT GENERATED ALWAYS AS IDENTITY,
    device_id integer NOT NULL references device(device_id),
    uptime bigint NOT NULL,
    temperature_c int NOT NULL,
    created_at timestamp with time zone default now()
);

-- Set the primary key for device_status
ALTER TABLE device_status ADD CONSTRAINT device_status_key PRIMARY KEY(status_id);
```
Create both tables using a `psql` prompt; you took a note of how to open one in the earlier step when you installed PostgreSQL, either via the Civo marketplace or `k3sup`. The command given will run the `psql` CLI in a Kubernetes container on your cluster.

Simply paste the text in the code snippet above into the prompt. You can exit the prompt by running the command `\q`.
Next, we'll provision a device manually, and then use the API for the devices to call home.
First, create an API key using `bash`:
```sh
export KEY=$(head -c 16 /dev/urandom | shasum | cut -d" " -f "1")
echo $KEY
```
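If you'd rather stay in Node.js, the built-in `crypto` module can generate an equivalent 40-character hex key; this snippet is an optional alternative, not part of the original steps:

```js
// generate-key.js: prints a random 40-character hex key, the same
// shape as the shasum output from the bash command above
const crypto = require('crypto')

console.log(crypto.randomBytes(20).toString('hex'))
```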
Back in your `psql` prompt, enter the following, replacing the first value with the key generated above:
```sql
INSERT INTO device (device_key, device_name, device_desc, device_location) values
('4dcf92826314c9c3308b643fa0e579b87f7afe37', 'k4s-1', 'OpenFaaS ARM build machine', POINT(35,-52.3));
```
At this point you'll have a new row. You'll need two parts for your device: its `device_id`, which identifies it, and its `device_key`, which it will use to access the CRUD API.
```sql
SELECT device_id, device_key FROM device WHERE device_name = 'k4s-1';
```
Keep track of this for use with the CRUD API later:
```
 device_id |                device_key
-----------+------------------------------------------
         1 | 4dcf92826314c9c3308b643fa0e579b87f7afe37
(1 row)
```
7) Build the CREATE operation
The C in CRUD stands for CREATE, and we will use it to insert a row into the `device_status` table. The Raspberry Pi will run some code on a periodic basis using `cron`, and access our API that way.
The connection pool code in the `pg` library needs several pieces of data to connect our function to the database:

- DB host (confidential)
- DB user (confidential)
- DB password (confidential)
- DB name (non-confidential)
- DB port (non-confidential)
All confidential data will be stored in a Kubernetes secret; non-confidential data will be stored in `stack.yml` using environment variables. Now we are going to create that Kubernetes secret, named `db`. Remember that the instructions for getting the PostgreSQL username and password are above, in the step where you created the database.
```sh
export USER="your_postgres_username"
export PASS="your_postgres_password"
export HOST="postgresql.default.svc.cluster.local"

kubectl create secret generic -n openfaas-fn db \
  --from-literal db-username="$USER" \
  --from-literal db-password="$PASS" \
  --from-literal db-host="$HOST"
```

```
secret/db created
```
Now, attach the secret and environment variables to `stack.yml`:
```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  device-status:
    lang: node12
    handler: ./device-status
    image: alexellis2/device-status:latest
    environment:
      db_port: 5432
      db_name: postgres
    secrets:
      - db
```

(Note that you will need to change the image prefix to be your Docker username.)
Now, let's edit `handler.js` and connect to the database:
"use strict"
const { Client } = require('pg')
const Pool = require('pg').Pool
const fs = require('fs')
var pool;
module.exports = async (event, context) => {
if(!pool) {
const poolConf = {
user: fs.readFileSync("/var/openfaas/secrets/db-username", "utf-8"),
host: fs.readFileSync("/var/openfaas/secrets/db-host", "utf-8"),
database: process.env["db_name"],
password: fs.readFileSync("/var/openfaas/secrets/db-password", "utf-8"),
port: process.env["db_port"],
};
pool = new Pool(poolConf)
await pool.connect()
}
let deviceKey = event.headers["x-device-key"]
if(deviceKey && event.body.deviceID) {
let deviceID = event.body.deviceID
const { rows } = await pool.query("SELECT device_id, device_key FROM device WHERE device_id = $1 and device_key = $2", [deviceID,deviceKey]);
if(rows.length) {
await insertStatus(event, pool);
return context.status(200).succeed({"status": "OK"});
} else {
return context.status(401).error({"status": "invalid authorization or device"});
}
}
return context.status(200).succeed({"status": "No action"});
}
async function insertStatus(event, pool) {
let id = event.body.deviceID;
let uptime = event.body.uptime;
let temperature = event.body.temperature;
try {
let res = await pool.query('INSERT INTO device_status (device_id, uptime, temperature_c) values($1, $2, $3)',
[id, uptime, temperature]);
console.log(res)
} catch(e) {
console.error(e)
}
}
Time to deploy the code!

```sh
faas-cli up
```
Now, test it with `curl`:
```sh
curl 127.0.0.1:8080/function/device-status \
  --data '{"deviceID": 1, "uptime": 2294567, "temperature": 38459}' \
  -H "X-Device-Key: 4dcf92826314c9c3308b643fa0e579b87f7afe37" \
  -H "Content-Type: application/json"
```

(This version of the handler reads the device ID from the request body, so it is sent in the JSON payload.)
Use `psql` to see if the row got inserted:
```sql
SELECT * from device_status;
```
```
 status_id | device_id | uptime  | temperature_c |          created_at
-----------+-----------+---------+---------------+-------------------------------
         1 |         1 | 2294567 |         38459 | 2019-12-11 11:56:04.380975+00
(1 row)
```
If you ran into any errors then you can use a simple debugging approach:

- `faas-cli logs device-status` - this shows anything printed to stdout or stderr
- Add a log statement with `console.log("message")` and run `faas-cli up` again
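If you'd like to iterate on the handler without deploying each time, you can also exercise it with a small local harness. This is only a sketch: the mock event and context below approximate the node12 template's interface and are not part of OpenFaaS, and you'll need the database reachable (for example via `kubectl port-forward`) with the secret files available locally:

```js
// test-local.js: a hypothetical harness for running the handler outside OpenFaaS
"use strict"

const handler = require('./handler')

// Minimal stand-in for the node12 template's context object
class MockContext {
    status(code) { this.statusCode = code; return this }
    succeed(value) { console.log(this.statusCode, value); return value }
    fail(err) { console.error(this.statusCode, err); return err }
}

const mockEvent = {
    method: "POST",
    headers: { "x-device-key": "4dcf92826314c9c3308b643fa0e579b87f7afe37" },
    body: { deviceID: 1, uptime: 2294567, temperature: 38459 },
}

handler(mockEvent, new MockContext()).catch(err => console.error(err))
```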
8) Build the RETRIEVE operation
Let's allow users or devices to select data when a valid `device_key` and `device_id` combination is given.
"use strict"
const { Client } = require('pg')
const Pool = require('pg').Pool
const fs = require('fs')
const pool = initPool()
module.exports = async (event, context) => {
let client = await pool.connect()
let deviceKey = event.headers["x-device-key"]
let deviceID = event.headers["x-device-id"]
if(deviceKey && deviceID) {
const { rows } = await client.query("SELECT device_id, device_key FROM device WHERE device_id = $1 and device_key = $2", [deviceID, deviceKey]);
if(rows.length) {
if(event.method == "POST") {
await insertStatus(deviceID, event, client);
client.release()
return context.status(200).succeed({"status": "OK"});
} else if(event.method == "GET") {
let rows = await getStatus(deviceID, event, client);
client.release()
return context.status(200).succeed({"status": "OK", "data": rows});
}
client.release()
return context.status(405).fail({"status": "method not allowed"});
} else {
client.release()
return context.status(401).fail({"status": "invalid authorization or device"});
}
}
client.release()
return context.status(200).succeed({"status": "No action"});
}
async function insertStatus(deviceID, event, client) {
let uptime = event.body.uptime;
let temperature = event.body.temperature;
try {
let res = await client.query('INSERT INTO device_status (device_id, uptime, temperature_c) values($1, $2, $3)',
[deviceID, uptime, temperature]);
console.log(res)
} catch(e) {
console.error(e)
}
}
async function getStatus(deviceID, event, client) {
let uptime = event.body.uptime;
let temperature = event.body.temperature;
let {rows} = await client.query('SELECT * FROM device_status WHERE device_id = $1',
[deviceID]);
return rows
}
function initPool() {
return new Pool({
user: fs.readFileSync("/var/openfaas/secrets/db-username", "utf-8"),
host: fs.readFileSync("/var/openfaas/secrets/db-host", "utf-8"),
database: process.env["db_name"],
password: fs.readFileSync("/var/openfaas/secrets/db-password", "utf-8"),
port: process.env["db_port"],
});
}
Deploy the code with `faas-cli up`:

```sh
faas-cli up
```

Now test it with `curl`:
```sh
curl -s 127.0.0.1:8080/function/device-status \
  -H "X-Device-Key: 4dcf92826314c9c3308b643fa0e579b87f7afe37" \
  -H "X-Device-ID: 1" \
  -H "Content-Type: application/json"
```
If you have `jq` installed, you can use it to format the output; here's an example:
```json
{
  "status": "OK",
  "data": [
    {
      "status_id": 2,
      "device_id": 1,
      "uptime": "2294567",
      "temperature_c": 38459,
      "created_at": "2019-12-11T11:56:04.380Z"
    }
  ]
}
```
The response body is in JSON and exposes the underlying table schema. This can be "prettified" using an altered SELECT statement, or by renaming each field in code, as shown below.
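For example, a small mapping step in `getStatus` could rename the columns before they are returned; this is a sketch only, and the friendlier field names below are just a suggestion:

```js
// Map database column names to friendlier JSON field names
function presentRows(rows) {
    return rows.map(row => ({
        id: row.status_id,
        device: row.device_id,
        uptimeSeconds: Number(row.uptime), // pg returns bigint columns as strings
        temperatureC: row.temperature_c,
        recordedAt: row.created_at,
    }));
}
```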
9) Build the Client
We can now create a client for our CRUD API using any programming language, or even by using `curl` as above. Python is the best-supported language in the Raspberry Pi ecosystem, so let's use it to create a simple HTTP client.
Log into your Raspberry Pi running Raspbian.
Note: you could also use a regular computer for this step, but you must replace the temperature reading, since that will only work on the Raspberry Pi.
Install Python 3 and `pip`:

```sh
sudo apt update -qy
sudo apt install -qy python3 python3-pip
```
Add the Python `requests` module, which can be used to make HTTP requests:

```sh
sudo pip3 install requests
```
Create a folder for the code:

```sh
mkdir -p $HOME/client/
```
Create a `client/requirements.txt` file with the following contents:

```
requests
```
Create the `client/app.py` file:
```python
import requests
import os

ip = os.getenv("HOST_IP")
port = "31112"  # the NodePort for the OpenFaaS gateway
deviceKey = os.getenv("DEVICE_KEY")
deviceID = os.getenv("DEVICE_ID")

# Read the uptime in seconds from the kernel
uptimeSecs = 0
with open("/proc/uptime", "r") as f:
    data = f.read()
    uptimeSecs = int(data[:str.find(data, ".")])

# Read the CPU temperature in thousandths of a degree Celsius
tempC = 0
with open("/sys/class/thermal/thermal_zone0/temp", "r") as f:
    data = f.read()
    tempC = int(data)

payload = {"temperature": tempC, "uptime": uptimeSecs}
headers = {"X-Device-Key": deviceKey, "X-Device-ID": deviceID, "Content-Type": "application/json"}

r = requests.post("http://{}:{}/function/device-status".format(ip, port), headers=headers, json=payload)
print("Temp: {}\tUptime: {} mins\tStatus: {}".format(tempC/1000, uptimeSecs/60, r.status_code))
```
Create a script to run the client at `client/run_client.sh`:
```sh
#!/bin/bash

export HOST_IP=91.211.152.145 # This is your cluster IP, visible in the Civo dashboard
export DEVICE_KEY=4dcf92826314c9c3308b643fa0e579b87f7afe37 # replace with your key
export DEVICE_ID=1

cd "$(dirname "$0")" # make sure app.py is found even when run from cron
python3 ./app.py
```
Set up a schedule for the client using `cron`:

```sh
chmod +x ./client/run_client.sh

sudo systemctl enable cron
sudo systemctl start cron

crontab -e
```
Enter the following to create a reading every 15 minutes:

```
*/15 * * * * /home/pi/client/run_client.sh
```
Test the script manually, then check the database table again:

```sh
/home/pi/client/run_client.sh
```
Here are three consecutive runs I made using the command prompt:

```
postgres=# SELECT * from device_status;
 status_id | device_id | uptime  | temperature_c |          created_at
-----------+-----------+---------+---------------+-------------------------------
         8 |         1 | 2297470 |         38459 | 2019-12-11 12:28:16.28936+00
         9 |         1 | 2297472 |         38946 | 2019-12-11 12:28:18.809582+00
        10 |         1 | 2297475 |         38459 | 2019-12-11 12:28:21.321489+00
(3 rows)
```
Now, Over to You
Let's stop here and take a look at what we've built:

- An architecture on Kubernetes with a database and scalable compute
- A CRUD API with CREATE and RETRIEVE implemented in Node.js
- Security on a per-device level using symmetric keys
- A working client that can be used today on any Raspberry Pi
It's over to you now to continue learning and building out the CRUD solution with OpenFaaS.
Keep learning
From here, you should be able to build the DELETE and UPDATE operations on your own, if you think that they are suited to this use-case. Perhaps when a device is de-provisioned, it should be able to delete its own rows?
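As a starting point, a DELETE branch for the handler could look something like this sketch; the `deleteStatus` helper is hypothetical and would be dispatched alongside the existing GET and POST branches:

```js
// A possible DELETE helper for handler.js: removes status rows
// belonging only to the device that authenticated
async function deleteStatus(deviceID, client) {
    const res = await client.query('DELETE FROM device_status WHERE device_id = $1',
        [deviceID]);
    return res.rowCount // number of rows removed
}

// In the handler's method dispatch:
// } else if(event.method == "DELETE") {
//     let removed = await deleteStatus(deviceID, client);
//     client.release()
//     return context.status(200).succeed({"status": "OK", "removed": removed});
// }
```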
The `device_key` enabled access to read the records for a single device, but should there be a set of administrative keys, so that users can query the `device_status` records for any device?
What else would you like to record from your fleet of Raspberry Pis? Perhaps you can add the external IP address of each host to the table and update the Python client to collect that? There are several free websites available that can provide this data.