Been there.
https://forum.proxmox.com/threads/garbage-collector-error-due-to-datastore-full.136813/#post-607732
You will temporarily have to move some chunk dirs in /mnt/datastore/HDD1/.chunks/ to another disk.
Enable maintenance mode on datastore HDD1
mv /mnt/datastore/HDD1/.chunks/ff*...
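The steps above can be sketched roughly like this, assuming a PBS datastore named HDD1 and spare space mounted at /mnt/spare (adjust names and paths to your setup; this is a workaround sketch, not an official procedure):

```shell
# Put the datastore into maintenance mode so PBS stops touching it
proxmox-backup-manager datastore update HDD1 --maintenance-mode offline

# Move a few chunk directories to the spare disk to free space
mkdir -p /mnt/spare/chunks-tmp
mv /mnt/datastore/HDD1/.chunks/ff* /mnt/spare/chunks-tmp/

# Run garbage collection now that there is free space again
proxmox-backup-manager garbage-collection start HDD1

# Move the chunks back and re-enable the datastore
mv /mnt/spare/chunks-tmp/ff* /mnt/datastore/HDD1/.chunks/
proxmox-backup-manager datastore update HDD1 --delete maintenance-mode
```

Moving chunk dirs while GC runs is risky if anything else writes to the datastore, so keep maintenance mode on until the chunks are back.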
Just log in, run "apt install chrony", hit ENTER twice, sit back and relax...
Reading package lists...
Building dependency tree...
Reading state information...
Suggested packages:
networkd-dispatcher
The following packages will be REMOVED:
systemd-timesyncd
The following NEW packages will...
Well, that is new(s) for me.
proxmox:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.254
        bridge_ports enp2s0f0
        bridge_stp off
        bridge_fd...
You have a working ping, so you should be reachable on https://51.81.11.97:8006 if the computer you connect from is in the same network...
Comparing to my /etc/network/interfaces you could comment out the 5 lines with
pre-down and post-up
and restart networking: sudo systemctl restart networking
Just start giving info, so we can all join in!
Log in on the server and start giving info:
proxmox:~# dmesg | grep eth
proxmox:~# ip a
proxmox:~# cat /etc/network/interfaces
proxmox:~# ping 1.1.1.1
More or less in the same boat. Old and rusty Proxmox, installed in 2011, kept up to date.
Now forced to do an IP rearrangement in my home environment.
Noticed: expired pve-root-ca and pve-ssl files.
With above help and...
Works like magic!
root@pve-ML110:/etc/proxmox-backup# zpool status
  pool: PassPort2TB
 state: ONLINE
  scan: scrub repaired 0B in 12:34:54 with 0 errors on Sun Jul 11 12:58:56 2021
config:
        NAME STATE READ WRITE CKSUM...
Hi,
created a datastore zpool on an attached USB disk, which at the time identified itself as /dev/sdi.
Need to change that to /dev/disk/by-id.
Tried https://plantroon.com/changing-disk-identifiers-in-zpool/#detach-and-attach
Want to export and import -d /dev/disk/by-id...
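The export/import route would look roughly like this. A sketch, assuming the pool is named PassPort2TB (as in the zpool status output above) and that nothing is using the datastore while you do it:

```shell
# Export the pool so ZFS releases the /dev/sdi reference
zpool export PassPort2TB

# Re-import, telling ZFS to scan only the stable by-id device names
zpool import -d /dev/disk/by-id PassPort2TB

# Verify the vdev now shows the by-id path instead of sdi
zpool status PassPort2TB
```

Put the PBS datastore in maintenance mode (or stop the services using it) before the export, or it will refuse with "pool is busy".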
After a kernel upgrade I did a Hibernate on the VMs, and the following reboot hung with the same message as shown above by izegd.
Had to power-cycle the host.
No cluster here.
systemctl restarted a couple of PVE services; after that I could disable this storage and delete it.
Probably a full disk was one of the causes:
"unable to open file '/etc/pve/nodes/pve4/lrm_status.tmp.4655' - Input/output error"
Solved.
Any thoughts on how to remove the beta from Storage?
All I get is "delete storage failed: error during cfs-locked 'file-storage_cfg' operation: got lock request timeout (500)" and loads of errors in syslog.
pvestatd[4098]: Tuxis_Backup_Beta: error fetching datastores - 500 Can't connect to...
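What worked for the earlier poster can be sketched like this. Assuming the storage entry is named Tuxis_Backup_Beta (from the syslog line above); this is a workaround sketch for the cfs lock timeout, not an official fix:

```shell
# Restart the services that may be holding the cfs lock on storage.cfg
systemctl restart pvestatd pvedaemon pveproxy

# Try the removal again via the CLI
pvesm remove Tuxis_Backup_Beta

# If it still times out, back up the cluster-wide storage config and
# delete the stanza for Tuxis_Backup_Beta from it by hand
cp /etc/pve/storage.cfg /root/storage.cfg.bak
nano /etc/pve/storage.cfg
```

Editing /etc/pve/storage.cfg directly bypasses the lock, so only do it when you are sure no other tool is mid-write.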