In today’s IT landscape, many organizations are moving their systems to containerized platforms, but not every application is cloud-native. Even when a legacy application runs on a container platform, you still need a way to export metrics, whether for alerting or for calculating SLAs required by contractual agreements.
One such application is ProFTPD, a file transfer server commonly used for the SFTP and FTP protocols.
Luckily, ProFTPD provides several utilities that expose metrics such as uptime, process ID, and user count. However, these utilities are compiled C programs, and their output is not easy to parse and forward to InfluxDB or Prometheus.
For these reasons, I wrote a small Python script that reads the scoreboard file and exports metrics in InfluxDB line protocol or JSON format.
#!/usr/bin/env python3
import os
import sys
import time
import argparse
import json


# Scoreboard layout: https://github.com/proftpd/proftpd/blob/master/utils/utils.h
def read_scoreboard(file='/var/run/proftpd/proftpd.scoreboard'):
    chunk_size = 624  # size of a single scoreboard entry
    metrics = dict()
    count = 0
    try:
        with open(file=file, mode='rb') as f:
            header = f.read(44)
            if not header:
                metrics['error'] = "Empty scoreboard file"
                return metrics
            magic = int.from_bytes(header[:4], 'little')
            version = int.from_bytes(header[8:12], 'little')  # currently unused
            pid = int.from_bytes(header[16:20], 'little')     # currently unused
            start_time = int.from_bytes(header[24:32], 'little')  # daemon start, epoch seconds
            if magic != 0xdeadbeef:
                metrics['error'] = "Unknown magic number"
                return metrics
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                # an entry is in use when its 32-byte user field is non-zero
                user = chunk[:32]
                if any(i > 0 for i in user):
                    count += 1
            metrics['usercount'] = count
            # 0 means no definition found in the deployment
            metrics['maxinstances'] = int(os.environ.get('PROFTPD_MAXINSTANCES', 0))
            metrics['uptime'] = int(time.time()) - start_time
            metrics['timestamp'] = round(time.time() * 1000 * 1000 * 1000)  # nanoseconds
            metrics['release_name'] = os.environ.get('RELEASE_NAME', 'undefined')
            metrics['pod_name'] = os.environ.get('PODNAME', 'undefined')
            metrics['node_name'] = os.environ.get('HOSTNAME', 'undefined')
            metrics['namespace'] = os.environ.get('NAMESPACE_NAME', 'undefined')
            return metrics
    except FileNotFoundError as e:
        metrics['error'] = f"Scoreboard file {e.filename} not found"
        return metrics


def export_metrics(metrics, output='influx'):
    if output == 'influx':
        # InfluxDB line protocol: measurement,tag_set field_set timestamp
        tags = (f"release_name={metrics['release_name']},"
                f"namespace={metrics['namespace']},"
                f"node_name={metrics['node_name']}")
        fields = (f"usercount={metrics['usercount']},"
                  f"maxinstances={metrics['maxinstances']},"
                  f"uptime={metrics['uptime']},"
                  f"podname=\"{metrics['pod_name']}\"")
        print(f"proftpd_scoreboard,{tags} {fields} {metrics['timestamp']}")
    elif output == 'json':
        keys = ('release_name', 'namespace', 'node_name', 'usercount',
                'uptime', 'pod_name', 'maxinstances', 'timestamp')
        print(json.dumps({k: metrics[k] for k in keys}))
    else:
        print("unknown output format: must be one of 'influx' or 'json'")


def main():
    parser = argparse.ArgumentParser(description='ProFTPD scoreboard file reader')
    parser.add_argument(
        '--file',
        default='/var/run/proftpd/proftpd.scoreboard',
        metavar='file',
        help='ProFTPD scoreboard file location, default: /var/run/proftpd/proftpd.scoreboard'
    )
    parser.add_argument(
        '--output',
        default='influx',
        help='output format of the scoreboard metrics: "influx" or "json"'
    )
    try:
        args = vars(parser.parse_args())
        metrics = read_scoreboard(args['file'])
        if 'error' not in metrics:
            export_metrics(metrics, args['output'])
        else:
            print(metrics)
    except Exception as e:
        print(e)
        sys.exit(1)


if __name__ == '__main__':
    main()
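For reference, the fixed byte offsets read by the script above can be expressed more explicitly with the standard struct module. This is only a sketch assuming the same little-endian layout (magic at offset 0, version at 8, pid at 16, uptime at 24); the actual offsets can vary between ProFTPD versions and platforms, so check utils.h for your build:

```python
import struct


def parse_header(header: bytes) -> dict:
    """Parse the first 44 bytes of a ProFTPD scoreboard file.

    Assumes the same little-endian offsets as read_scoreboard():
    '<' little-endian, 'I' uint32, 'x' one pad byte, 'Q' uint64.
    """
    magic, version, pid, uptime = struct.unpack_from('<I4xI4xI4xQ', header)
    return {'magic': magic, 'version': version, 'pid': pid, 'uptime': uptime}


# Example with a synthetic 44-byte header (trailing 12 bytes are padding):
hdr = struct.pack('<I4xI4xI4xQ12x', 0xdeadbeef, 2, 1234, 1700000000)
print(parse_header(hdr))
```

Checking the parsed magic value against 0xdeadbeef works exactly as in the script; the struct format string simply documents the layout in one place.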
With the script above you have many options for shipping metrics to different monitoring systems. One option is to send the metrics to InfluxDB via Telegraf; another is to expose them to Prometheus using Telegraf's prometheus_client output plugin.
tesla@docker:~$ python3 scoreboard.py --file /var/run/proftpd.scoreboard --output=influx
proftpd_scoreboard,release_name=undefined,namespace=undefined,node_name=undefined usercount=0,maxinstances=0,uptime=365,podname="undefined" 1733658086889642752
tesla@docker:~$ python3 scoreboard.py --file /var/run/proftpd.scoreboard --output=json
{"release_name": "undefined", "namespace": "undefined", "node_name": "undefined", "usercount": 0, "uptime": 376, "pod_name": "undefined", "maxinstances": 0, "timestamp": 1733658097696133376}
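One caveat with the influx output: the line protocol requires commas, equals signs, and spaces in tag values to be backslash-escaped. The script prints the environment values verbatim, which is safe for Kubernetes resource names but would produce invalid lines for values containing those characters. A minimal helper, shown here as a sketch you could fold into export_metrics:

```python
def escape_tag(value: str) -> str:
    """Escape a tag key or tag value for InfluxDB line protocol.

    Tag keys and values must backslash-escape commas, equals signs,
    and spaces; field string values are quoted instead.
    """
    for ch in (',', '=', ' '):
        value = value.replace(ch, '\\' + ch)
    return value


print(escape_tag('my release'))  # prints: my\ release
```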
telegraf.conf
...
[[inputs.exec]]
  commands = ["/usr/libexec/platform-python /usr/local/bin/scoreboard.py --file /var/run/proftpd.scoreboard --output influx"]
  timeout = "5s"
  data_format = "influx"
  interval = "15s"
telegraf.conf
[[outputs.prometheus_client]]
  listen = ":9273"
If you choose the second option, you also need to configure a Kubernetes Service and a ServiceMonitor object accordingly.
The ServiceMonitor is a Custom Resource Definition (CRD) provided by the Prometheus Operator, which simplifies and automates monitoring tasks in Kubernetes. It is used to define how Prometheus should scrape metrics from services running in your Kubernetes cluster.
apiVersion: v1
kind: Service
metadata:
  name: proftpd
  namespace: proftpd
  labels:
    app: proftpd
spec:
  ports:
    - name: metrics
      port: 9273
      targetPort: 9273
  selector:
    app: proftpd
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: proftpd-metrics
  namespace: proftpd
  labels:
    app: proftpd
spec:
  selector:
    matchLabels:
      app: proftpd
  endpoints:
    - port: metrics
      path: /metrics
      scheme: http
      interval: 15s
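Since the whole point of exporting these metrics is alerting, a PrometheusRule (another Prometheus Operator CRD) can close the loop. The sketch below is only illustrative: the rule name, threshold, and the metric name proftpd_scoreboard_uptime are assumptions — Telegraf's prometheus_client output typically joins the measurement and field names, but verify the exact name against your /metrics endpoint before using it.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: proftpd-alerts        # hypothetical name
  namespace: proftpd
  labels:
    app: proftpd
spec:
  groups:
    - name: proftpd
      rules:
        - alert: ProftpdRestarted
          # uptime drops back toward zero right after a restart
          expr: proftpd_scoreboard_uptime < 60
          for: 1m
          labels:
            severity: warning
          annotations:
            summary: "ProFTPD instance restarted recently"
```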