lsof to find large open files

I was having a space allocation problem on a Ceph host and couldn't figure out what was holding files open… so I finally sorted the lsof output by size:

lsof | grep REG | awk '{ print $1,$7,$9 }' | sort -t ' ' -k 2 -V

Found that rsyslog had huge files open:

splunkd REG 16400942226
splunkd REG 16400942226
splunkd REG 16400942226
splunkd REG 16400942226
rsyslogd REG 164487529796
rsyslogd REG 164487529796
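If you want the biggest offenders at the bottom, a plain numeric sort on the size column also works (this assumes the standard nine-column lsof layout, where field 7 is SIZE/OFF and the last field is the file name):

lsof | grep REG | awk '{ print $7, $1, $NF }' | sort -n | tail -20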

Ceph Optimal Recovery Values

The Ceph defaults for these are a little too aggressive for most devices. The values below give a more reasonable recovery speed that doesn't tank the system as hard but still yields a quick, stable recovery:

ceph config set osd osd_recovery_sleep_hdd 0.25
ceph config set osd osd_recovery_sleep_ssd 0.05
ceph config set osd osd_recovery_sleep_hybrid 0.10
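To check what the cluster is actually using before and after the change:

ceph config get osd osd_recovery_sleep_hdd
ceph config get osd osd_recovery_sleep_ssd
ceph config get osd osd_recovery_sleep_hybrid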

Ceph – Delete erasure coded PGs after data loss

Sometimes you have failures that cannot be fixed, e.g. an EC 2+1 pool losing 2 drives… (btw, this was the recommended default EC profile in 14.x.) You should use something like 8+3 at minimum to prevent this; an example profile is at the end of this section.

Warning: everything below guarantees data loss on the affected PG.

ceph pg PGID query  | jq .acting
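If it isn't obvious which shard of the PG a given OSD holds, ceph-objectstore-tool can list the PGs on a stopped OSD (same example data path as below):

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0/ --op list-pgs | grep PGID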

# Stop the OSD that holds the affected shard, then remove the shard. The shard suffix is generally .s0, .s1, .s2, etc. depending on your EC config.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0/ --pgid PGID.sN --force --op remove

# Restart the OSD, wait for it to attempt to peer, stop it again, then mark the shard complete.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0/ --pgid PGID.sN --op mark-complete

# Start the OSD back up, then tell the customer your mistake is acceptable…
ceph pg PGID mark_unfound_lost delete
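For new pools, a profile along the lines of the 8+3 mentioned above can be created like this (the profile name and failure domain here are just examples):

ceph osd erasure-code-profile set ec-8-3 k=8 m=3 crush-failure-domain=host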

Installing Headless Nvidia Drivers in Ubuntu 20.04 and up

Lots of poor documentation around the interwebs for this… here are the packages required to make it useful. If you want the normal driver with Xorg, just remove "headless" from the package name.

apt install linux-headers-$(uname -r) -y
apt install nvidia-headless-470-server nvidia-utils-470-server libnvidia-encode-470-server -y
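A quick sanity check after the install; nvidia-smi ships with nvidia-utils and should list the GPU once the kernel module is loaded:

lsmod | grep nvidia
nvidia-smi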

How does video encoding work? MP4? I-Frames?! P-Frames!?!?! GOP!!!?!?!!?

This is a placeholder for me comprehending how video encoding works… I'll update/edit as I become more familiar. Please don't assume I have any idea what I'm talking about.

But basically, you have a GOP (group of pictures): a run of frames that starts with an I-frame and runs until the next one. The GOP size is a frame count, not a frame rate, so it doesn't have to match the FPS. A 30 FPS video has 30 frames of data per second, but its GOP size can be larger or smaller than 30.

So let's say your frame rate is 30 FPS. With a GOP size of 30 you get 1 I-frame and 29 P-frames every second, so 90 frames of video contain 3 I-frames and 87 P-frames. Bump the GOP size up to 90 and those same 90 frames contain just 1 I-frame and 89 P-frames.

I-frames are the ENTIRE picture; P-frames are the "guess" at what changed since the previous frame. More I-frames = more bandwidth.
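As a rough illustration (assuming ffmpeg with libx264), the GOP size is the -g option, so this should produce an I-frame roughly every 90 frames in a 30 FPS output:

ffmpeg -i input.mp4 -c:v libx264 -g 90 -r 30 output.mp4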

https://en.wikipedia.org/wiki/Video_compression_picture_types

Fixing a Ceph Mon map after disaster!

Ceph's weakest link is configuration… once a cluster is deployed it is incredibly durable and will survive most mistakes without punishment. However, adding a monitor that is unreachable from all machines can yield a very broken cluster that cannot be managed.

For example, if you add a new monitor and the automatically detected IP (Ansible or Kolla) isn't correct, possibly a loopback or some other assigned IP, you will lose the ability to use the Ceph tools on the cluster because of a broken monitor map config.

So here's what you need to know, in a nutshell, to fix it:

  1. Stop your monitors
  2. Export a monitor map from the last known good monitor
  3. Edit the monitor map to fix the broken entry
  4. Repeat this for all the monitors that were “working”.
  5. Inject the monitor maps on those monitors
  6. Start the monitors and check that they form a quorum.
# Extract the monitor map from a stopped, known-good monitor
ceph-mon -c /etc/ceph/cluster-name-ceph.conf -i MONITOR_NAME --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap
# Remove the broken entry, then confirm it is gone
monmaptool --rm bad-host-entry /tmp/monmap
monmaptool --print /tmp/monmap
# Inject the fixed map back into the monitor
ceph-mon -c /etc/ceph/cluster-name-ceph.conf -i MONITOR_NAME --inject-monmap /tmp/monmap
# Running the tools as root can leave root-owned files behind; fix ownership before starting
chown ceph:ceph -R /var/lib/ceph/mon/cluster-monitor-name/
systemctl start ceph-mon.target
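Once the monitors are back up, confirm they actually formed a quorum:

ceph quorum_status | jq .quorum_names
ceph -s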

