Hi,
This setup was working perfectly on version 7.x, and the other two nodes are still in production with Ceph still working. I am using two separate sets of NICs for the Ceph public and private networks: 10.10.20.xx is on a mesh network with no VLAN, and the private network is on a different OVS port on bond2 in VLAN 20, to isolate migration traffic from data traffic.

The problem is that enp177s0f2 and enp177s0f3 changed their names to ens4f0np0 and ens4f1np1, and four more interfaces changed names as well, so networking stopped. Now many interfaces are showing in this file and I don't know how to go forward. Should I put a # in front of the old interface names and reconfigure them?

But two days later the biggest problem is that I am unable to get a shell into the CLI. The disk is now full with logs and I am scared to restart: /dev/mapper/pve-root is 100% full. What do I do now? Please help if possible, or I will lose my job.
Filesystem Size Used Avail Use% Mounted on
udev 126G 0 126G 0% /dev
tmpfs 26G 2.6G 23G 10% /run
/dev/mapper/pve-root 94G 94G 0 100% /
tmpfs 126G 60M 126G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
efivarfs 304K 72K 228K 24% /sys/firmware/efi/efivars
/dev/sda2 511M 352K 511M 1% /boot/efi
zpool-ha 825G 128K 825G 1% /zpool-ha
/dev/fuse 128M 56K 128M 1% /etc/pve
tmpfs 126G 28K 126G 1% /var/lib/ceph/osd/ceph-4
tmpfs 126G 28K 126G 1% /var/lib/ceph/osd/ceph-5
192.168.0.100:/mnt/ser-hdd/pve 14T 1.6T 13T 12% /mnt/pve/NFS
192.168.0.44:/mnt/NFSShare 3.6T 1.6T 2.0T 45% /mnt/pve/NFS-Storage
tmpfs 26G 0 26G 0% /run/user/0
The error message is:
connection failed (Error 500: closing file '/var/tmp/pve-reserved-ports.tmp.650163' failed - No space left on device)
What do I do now?
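The first step is to free space on pve-root so the shell and the Proxmox API work again. A minimal sketch, assuming a root console on the node (the paths, sizes, and the syslog filename are assumptions, check what `du` actually reports before truncating anything):

```shell
# Find what is filling the root filesystem; -x stays on one filesystem
du -xh --max-depth=2 /var 2>/dev/null | sort -h | tail -n 10

# Reclaim systemd journal space, keeping only the newest ~100 MB of logs
journalctl --vacuum-size=100M

# Truncate an oversized plain-text log in place instead of deleting it,
# so a daemon still holding the file open does not keep the space allocated
truncate -s 0 /var/log/syslog
```

Truncating rather than `rm` matters here: if rsyslog or another daemon still has the deleted file open, the blocks are not released until the daemon restarts.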
These are the new interfaces that changed names, but I am unable to figure out their corresponding names in v7:
iface ens4f0np0 inet manual
iface ens4f1np1 inet manual
iface ens5f0 inet manual
iface ens5f1 inet manual
iface ens5f2 inet manual
iface ens5f3 inet manual
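One way to match the new names back to the v7 ones is by MAC address, which does not change when the naming scheme changes. A minimal sketch; the interface name queried and the MAC in the .link file are placeholders to substitute with real values:

```shell
# Show every interface's current name and MAC; match the MACs against
# any notes, switch port records, or backups of the old configuration
ip -br link show | awk '{print $1, $3}'

# Ask udev which predictable names it would generate for one interface
# (prints ID_NET_NAME_ONBOARD / _SLOT / _PATH candidates)
udevadm test-builtin net_id /sys/class/net/ens4f0np0 2>/dev/null

# Optional: pin an interface back to its old name with a systemd .link file,
# so /etc/network/interfaces keeps working without edits.
# The MAC below is a placeholder -- replace it with the NIC's real one.
cat <<'EOF' >/etc/systemd/network/10-ceph-nic.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=enp177s0f2
EOF
```

After adding .link files, regenerating the initramfs (`update-initramfs -u`) before rebooting is commonly advised, since udev rules are also applied from the initramfs during early boot.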