Not sure what you’re using to generate that list; the formatting is a bit difficult to read.
I don’t have a cluster since it’s effectively single user + @Auto_Post_Bot@social.packetloss.gg (in theory a few other people have access, but they’re not active). It’s a single machine running more or less the out-of-the-box Docker setup on bare metal in my basement, plus a DigitalOcean droplet.
The droplet gives me a static IP so I can avoid dynamic DNS nonsense, and it provides some level of protection against a naive DDoS attack from random fediverse servers (in the worst case I can get on my phone and sever the ZeroTier connection that links the droplet to my basement server).
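To be clear, that "kill switch" is nothing fancy; from the phone it just means SSHing into the droplet and dropping it off the ZeroTier network. A rough sketch of what that looks like scripted (the network ID is a placeholder, grab the real one from `zerotier-cli listnetworks`):

```python
#!/usr/bin/env python3
"""Emergency kill switch: drop the droplet off the ZeroTier network.

Run on the droplet (e.g. over SSH from a phone). The network ID below is a
placeholder -- substitute your own from `zerotier-cli listnetworks`.
"""
import subprocess
import sys

ZT_NETWORK_ID = "0123456789abcdef"  # placeholder, not a real network ID


def sever_tunnel(network_id: str) -> int:
    """Leave the ZeroTier network, cutting the droplet -> basement link."""
    result = subprocess.run(
        ["zerotier-cli", "leave", network_id],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(sever_tunnel(ZT_NETWORK_ID))
```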
I’m pretty confident whatever is going on is payload-related at this point.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
50622 70 20 0 330264 240200 201512 S 0.0 0.7 0:25.21 postgres
50636 70 20 0 327804 239520 201296 S 0.0 0.7 0:26.55 postgres
50627 70 20 0 327204 239152 201592 S 0.0 0.7 0:24.75 postgres
50454 70 20 0 328932 238720 200872 S 0.0 0.7 0:26.61 postgres
50639 70 20 0 313528 217800 193792 S 0.0 0.7 0:03.13 postgres
50641 70 20 0 313284 217336 194204 S 0.0 0.7 0:03.15 postgres
50626 70 20 0 313592 216604 193636 S 0.0 0.7 0:05.07 postgres
50632 70 20 0 313236 216460 193968 S 0.0 0.7 0:04.52 postgres
50638 70 20 0 310368 216084 193856 S 0.0 0.7 0:04.20 postgres
50614 70 20 0 310520 216072 193840 S 0.0 0.7 0:02.88 postgres
50642 70 20 0 312200 215920 194068 S 0.0 0.7 0:04.46 postgres
50640 70 20 0 312584 215724 193676 S 0.0 0.7 0:03.32 postgres
50635 70 20 0 309744 215404 193764 S 0.0 0.7 0:02.72 postgres
50630 70 20 0 312168 215224 193488 S 0.0 0.7 0:02.67 postgres
50621 70 20 0 309560 215096 193772 S 0.0 0.7 0:02.97 postgres
50646 70 20 0 309492 215008 193560 S 0.0 0.7 0:04.66 postgres
50625 70 20 0 309760 215004 193368 S 0.0 0.7 0:03.08 postgres
50637 70 20 0 309296 214992 193848 S 0.0 0.7 0:02.87 postgres
50616 70 20 0 310596 214984 192700 S 0.0 0.7 0:04.17 postgres
50643 70 20 0 310392 214940 194008 S 0.0 0.7 0:04.14 postgres
50624 70 20 0 310128 214880 192928 S 0.0 0.7 0:04.15 postgres
50631 70 20 0 310220 214596 192576 S 0.0 0.7 0:02.71 postgres
50613 70 20 0 309364 213880 192520 S 0.0 0.7 0:04.06 postgres
50628 70 20 0 309852 213236 191504 S 0.0 0.7 0:03.04 postgres
50634 70 20 0 187772 163388 149428 S 0.0 0.5 0:02.87 postgres
50644 70 20 0 189684 162892 148508 S 0.0 0.5 0:04.11 postgres
50633 70 20 0 186096 162544 149324 S 0.0 0.5 0:03.20 postgres
50629 70 20 0 185644 162112 149296 S 0.0 0.5 0:04.62 postgres
50618 70 20 0 186264 160576 147928 S 0.0 0.5 0:04.10 postgres
50582 70 20 0 185708 160236 147592 S 0.0 0.5 0:04.10 postgres
3108 70 20 0 172072 144092 142256 S 0.0 0.4 0:04.46 postgres
3109 70 20 0 172024 142404 140632 S 0.0 0.4 0:02.24 postgres
2408 70 20 0 171856 23660 22020 S 0.0 0.1 0:00.76 postgres
3113 70 20 0 173536 9472 7436 S 0.0 0.0 0:00.15 postgres
3112 70 20 0 171936 8732 7020 S 0.0 0.0 0:01.54 postgres
3114 70 20 0 173472 5624 3684 S 0.0 0.0 0:00.00 postgres
I’ve got quite a bit of experience with postgres; I don’t see any indication it’s the problem.
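If anyone wants to double-check that, the kind of thing I’d look at is pg_stat_activity for queries that have been running or waiting for more than a few seconds. A rough sketch (connection details are placeholders for my setup, adjust for yours):

```python
#!/usr/bin/env python3
"""Sanity check that postgres isn't the bottleneck: list any backend that has
been running a query for more than a few seconds, plus what it's waiting on.

Connection parameters are placeholders -- adjust for your own instance.
"""
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432,
    dbname="mydb", user="postgres", password="changeme",  # placeholders
)

QUERY = """
SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '5 seconds'
ORDER BY runtime DESC;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    rows = cur.fetchall()
    if not rows:
        print("No long-running queries -- postgres looks healthy.")
    for row in rows:
        print(row)

conn.close()
```

If that comes back empty while the problem is happening, the database is just sitting there waiting for work, which matches what the top output above shows (every postgres process at 0.0% CPU).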
If you want to DIY it, consider TrueNAS Scale with mirrored drive pairs.