Search results

  1. Changing boot order from inside a VM

    Handle automated reinstallation of the OS, initiated by the guest itself. But not every boot has to be a reinstall. Changing the boot order is usually the easiest thing to do.
  2. Changing boot order from inside a VM

    Uh? Guest ACKs fs freeze/thaw and reports its IP addresses (and other info) to the host via the agent, IIUC. So it's already bi-directional (see the qm agent sketch after the list). Well, even if completely isolated (like when there's no agent), is there no way to tell it to interact with its own BIOS settings? That's a whole can of...
  3. Changing boot order from inside a VM

    Urgh. Having to expose PVE to "public" access is "not very good". Actually just a bit less bad than exposing IPMI. But if it's not (yet?) managed by agent interface there's not much I can do :( Tks.
  4. Changing boot order from inside a VM

    Hello all. Is it possible to let a VM tell Proxmox "from now on, boot from network instead of from scsi0"? For servers with IPMI it can be done via ipmitool, but for VMs? (See the qm boot-order sketch after the list.) Tks, Diego
  5. Expand PBS storage

    Tks. The all-SSD pool is "impossible" (not enough space: the server only hosts 24 disks). What about ditching ZFS completely and using MDRAID (the CPU does have 64 cores, so extra load should not be an issue) or HW RAID (changing the controller)? Is verify a sequential read? The big backup is...
  6. Expand PBS storage

    @Dunuin Tks, but SSDs are way more expensive and offer at most 1/4 of the space (4TB vs 16TB). A striped mirror of HDDs gives half the space. @Richard Nearly the same issue as above: I need space (just one of the backups is 32TB! and requires more than 2 days to be verified). I can lose 4 disks...
  7. Expand PBS storage

    Hello all. I'm going to configure a new PBS server. It currently hosts 8 x 16TB disks, but "soon" (during the year, not next week) we'll need to add another 16 x 16TB (for a total of 24 x 16TB). I thought to use RAIDZ3, starting with the current 8 disks and expanding later with the new ones... (See the zpool sketch after the list.)
  8. Can not install CEPH on freshly-reinstalled nodes

    I might have found the issue: the FIREWALL! It seems Proxmox is not adding rules to allow CEPH connections between cluster nodes (not even pings, it seems... that's what rang a bell: "why can't I ping virtX from virtY even if I can use ssh?"). Just disabling the firewall "automagically" lets CEPH... (See the firewall sketch after the list.)
  9. Can not install CEPH on freshly-reinstalled nodes

    Trying to avoid a reinstall, I issued "pveceph purge" on virt5 and then on virt4. On both nodes /etc/ceph/ still exists and contains dangling symlinks: root@virt5:~# ls -l /etc/ceph/ total 4 lrwxrwxrwx 1 root root 18 Sep 19 11:14 ceph.conf -> /etc/pve/ceph.conf -rw-r--r-- 1 root root 92 Mar 8... (See the symlink-cleanup sketch after the list.)
  10. Can not install CEPH on freshly-reinstalled nodes

    I got errors while installing the second monitor (virt5) since the first try. The only one that starts is the one on virt4. The first time I had the network configured as a bridge over a balance-alb bond including two eno. After reinstall I kept the default config of bridge over a single eno to...
  11. Can not install CEPH on freshly-reinstalled nodes

    Well, re "cveceph purge" deleting data only from that machine I found many old threads that said otherwise (someone lost all VM data), but since I haven't created OSDs yet, that's not a problem. Currently I didn't start cleanup and I have: root@virt4:~# ceph auth ls| sed 's/key: .*/key...
  12. Can not install CEPH on freshly-reinstalled nodes

    Well, I made many attempts at reinstalling, and to avoid trouble I tried to delete everything ceph-related between tests. That includes running "pveceph purge" on virt4 (after turning off virt5 and virt6) and manually cleaning /etc/ceph/* and /etc/pve/priv/ceph* from virt1 after turning off virt4. But...
  13. Can not install CEPH on freshly-reinstalled nodes

    Hello. I've been banging my head on this for about a week. I had a 9-node cluster (virtN, N=1..9, 192.168.1.3N/24). I now have to replace all the nodes with "new" hardware, so I started from nodes 4..6. As described in the docs: - shut down virtX and start the install on new HW, so no risk old virtX...
  14. kernel 5.13.19-4-pve breaks e1000e networking

    Nope. It's quite an old storage system (nearly 20yo!) and the two controllers work in active/passive mode. The alternative would be to sacrifice the multipath... Tks anyway. Hope I'll be able to replace the power-hungry CX3 with a more energy-efficient system soon.
  15. Removing iSCSI disk

    See also https://forum.proxmox.com/threads/kernel-5-13-19-4-pve-breaks-e1000e-networking.108141/ : I'm having trouble with newer kernels trying to scan "passive" devices :(
  16. kernel 5.13.19-4-pve breaks e1000e networking

    Urgh. Missed that. 1) and 2) should probably be swapped... Given that changing the FW would not remove the cause of the problem (lvm scanning a non-responding device), what else can I do? I'm definitely out of ideas :(
  17. kernel 5.13.19-4-pve breaks e1000e networking

    Still no updated fw, but the problem is not that the device does not work, it's that udevd launches pvscan on the raw device of the passive path, which doesn't respond (it's passive!). That's also the reason I blacklisted all disk devices ("r|/dev/.d.*|") in the global_filter line of... (See the lvm.conf sketch after the list.)
  18. kernel 5.13.19-4-pve breaks e1000e networking

    Looked around for a bit, but it seems I can't find a firmware for those cards :( Maybe they're too old... Tomorrow I'll look again and will possibly try replacing 'em with newer ones (sigh... having to reconfigure mappings from the Clariion interface... bleach! :( ).
  19. kernel 5.13.19-4-pve breaks e1000e networking

    sdc is part of a multipath device (an old CX3-80 connected via FC, two 4Gbps fibers): root@virt9:~# multipath -ll mp_CX3_dr (360060160c0251c001aa0979f088bec11) dm-7 DGC,RAID 5 size=39T features='1 queue_if_no_path' hwhandler='1 emc' wp=rw |-+- policy='service-time 0' prio=50 status=active | `-...
  20. kernel 5.13.19-4-pve breaks e1000e networking

    Even with the latest 5.15 kernel it does not work :( Attached both working (-ok) and not-working (-bad) versions of the requested files. Hope it's easily fixable :) Tks.
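
Re result 2: the guest agent channel is indeed bidirectional, and the host can query the guest through it. A minimal sketch, assuming qemu-guest-agent runs inside a VM with the hypothetical ID 100:

    # On the PVE host: check the agent is reachable, then ask the guest
    # for its network interfaces (this is how PVE learns the guest IPs).
    qm agent 100 ping
    qm guest cmd 100 network-get-interfaces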
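
Re result 4: the guest itself can't change its boot order, but on the host (or through the API) it can be switched per VM with qm. A minimal sketch, assuming VM 100 has a net0 NIC and a scsi0 disk:

    # On the PVE host: make VM 100 try PXE boot before its disk.
    # The semicolon must be quoted so the shell doesn't split the argument.
    qm set 100 --boot 'order=net0;scsi0'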
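
Re result 7: a sketch of the initial 8-disk RAIDZ3 layout discussed there; the pool name and device names are placeholders (stable /dev/disk/by-id/ paths are preferable in practice). Note that a RAIDZ vdev is classically grown by adding a whole second vdev, not by widening the existing one:

    # Hypothetical pool name and devices; ashift=12 assumes 4K-sector disks.
    zpool create -o ashift=12 backup raidz3 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
    # Later expansion would be e.g.: zpool add backup raidz3 <next disks>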
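
Re result 8: when the PVE firewall is enabled, Ceph (and ICMP) traffic between the nodes has to be allowed explicitly; the firewall ships a Ceph macro covering the monitor/OSD ports. A sketch of rules for /etc/pve/firewall/cluster.fw, assuming the 192.168.1.0/24 cluster network mentioned in result 13:

    # Under the [RULES] section of /etc/pve/firewall/cluster.fw:
    IN Ceph(ACCEPT) -source 192.168.1.0/24    # Ceph mon/OSD/MDS ports
    IN ACCEPT -p icmp -source 192.168.1.0/24  # allow pings between nodes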
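
Re result 9: the leftover /etc/ceph/ceph.conf symlink dangles because its target /etc/pve/ceph.conf is gone after the purge. A sketch for finding and removing such links:

    # -xtype l matches symlinks whose target does not exist.
    find /etc/ceph -xtype l          # list the dangling links first
    find /etc/ceph -xtype l -delete  # then remove them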
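
Re result 17: the blacklisting mentioned there is done with an LVM global_filter that rejects the raw (passive-path) devices and accepts only the multipath maps. A sketch of the relevant /etc/lvm/lvm.conf line; the regexes are assumptions about the local device layout (the post itself uses the broader "r|/dev/.d.*|" pattern):

    # In the devices { } section of /etc/lvm/lvm.conf:
    # accept multipath maps, reject every raw /dev/sd* path.
    global_filter = [ "a|/dev/mapper/.*|", "r|/dev/sd.*|" ]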
