Fixed by a clean reinstall, unfortunately. I didn't find much that I was able to fix...
But, for further discussion:
- "initial interfaces" - a copy of the working one (but that only worked when the update had failed and I reverted it for the first time - after that I was able to run the update once again -...
Hello,
One of the nodes in my cluster failed after the update to 6.4 - it doesn't start with networking.
When running pveversion -v, some packages show the error "not correctly installed".
So I got the idea to revert /etc/network/interfaces to the old one, and it worked - after systemctl restart networking...
A lot of those errors only:
May 05 14:18:31 pve-07-06 kernel: nfs: server 10.2.128.43 not responding, timed out
May 05 14:18:34 pve-07-06 pvestatd[2084]: storage 'BACKUP01-NFS' is not online
May 05 14:18:43 pve-07-06 pvestatd[2084]: unable to activate storage 'BACKUP01-NFS' - directory...
That doesn't change anything.
root@pve-07-08:~# pvesm set BACKUP01-NFS --disable 0
root@pve-07-08:~# pvesm status
storage 'BACKUP01-NFS' is not online
Name Type Status Total Used Available %
BACKUP01-NFS nfs inactive...
I have a big issue with the NFS storage that is used only for backups in my Proxmox cluster (12 physical nodes).
Every time the NFS server is rebooted I need to remove the NFS share from Proxmox, restart all 12 nodes, and then mount the NFS share again by adding it from scratch.
Is there any way to force...
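Not from the thread itself, but a sequence that often clears a stale NFS handle without rebooting the nodes is to force-unmount the mountpoint and toggle the storage; a sketch, assuming the storage name from the posts above and Proxmox's default mount path:

```shell
# Lazily force-unmount the stale NFS mountpoint
# (/mnt/pve/<storage> is Proxmox's default mount location)
umount -f -l /mnt/pve/BACKUP01-NFS

# Disable and re-enable the storage so pvestatd re-activates it
pvesm set BACKUP01-NFS --disable 1
pvesm set BACKUP01-NFS --disable 0

# Check the storage state
pvesm status
```

This would need to be run on each node that holds the stale mount; whether it helps depends on why the server-side export changed across the reboot.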
Hello,
I'm a bit stuck here.
I need to recover an old VM from a dead ESXi 6.0 host.
I've got only the files from VMware, and in the .vmx I've got this line:
scsi0.virtualDev = "lsisas1068"
I've converted .vmdk to qcow2 using qemu-img convert.
I'm able to see boot same as in this thread...
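For reference, the conversion mentioned above is typically done like this (the file names here are placeholders, not the poster's actual files):

```shell
# Convert the VMware disk image to qcow2
# (-f: input format, -O: output format; names are examples)
qemu-img convert -f vmdk -O qcow2 old-vm-disk.vmdk old-vm-disk.qcow2

# Inspect the result (format, virtual size)
qemu-img info old-vm-disk.qcow2
```

If the .vmdk is a descriptor plus split extents, pointing qemu-img at the descriptor file is usually enough, as it follows the extent references itself.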
I have a simple setup with two physical NIC interfaces.
I have a VLAN interface for management and I would like to tag a VM inside Proxmox with the same VLAN.
auto lo
iface lo inet loopback
auto eno1
iface eno1 inet manual
auto eno2
iface eno2 inet manual
auto bond0
iface bond0 inet manual...
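The truncated config above usually continues with a VLAN-aware bridge on top of the bond; a common sketch (bridge name, VLAN ID, and addresses below are assumptions, not from the post):

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management address on an example VLAN (10); VMs attached to vmbr0
# can then be given the same VLAN tag in their NIC settings
auto vmbr0.10
iface vmbr0.10 inet static
    address 192.168.10.2/24
    gateway 192.168.10.1
```

With a VLAN-aware bridge, the VM's tag is set per virtual NIC in the Proxmox GUI rather than by creating a separate bridge per VLAN.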
Hi,
I have a setup with 3 physical hosts, each of which has 3 disks as OSDs in a Ceph cluster.
Using calculation:
Total PGs = (Total_number_of_OSD * 100) / max_replication_count
Where my max_replication_count = 3
I've set PG 128 for my POOL_CEPH
I have two pools:
rados lspools...