Search results

  1.

    Migration from ESXI to proxmox: Shared LVM issues

    I have a SAN with 3 LUNs connected over multiple paths via multipath, without problems. But in the end I used mainly the CLI for configuration, not the GUI. Looking at your debug output, I think the problem is either 1] ACLs on the SAN or 2] an iSCSI/multipath misconfig. Check...
  2.

    DELL fatal error was detected after Proxmox install

    Was it resolved? I got the same problem. I finally upgraded one of the Dell 740XDs from PVE 7 to PVE 8 (Linux pve-18 6.8.12-13-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-13 (2025-07-22T10:00Z) x86_64 GNU/Linux), and on every reboot: screen lost, server stuck. Event Message: A fatal error was detected on a...
  3.

    Converting a PVE host disk from legacy MBR to UEFI GPT

    Maybe you have the UEFI version installed too... Check https://pve.proxmox.com/wiki/Host_Bootloader
  4.

    Proxmox + Ceph Cluster to Provide SMB?

    It depends on the requirements/HW. It's no different from running mail servers, DB servers, or web servers on Ceph. Every application has its own performance. Does it work? Yes. Is it performant? Test it. Ceph performance tests are everywhere.
  5.

    [SOLVED] Proxmox VE - "Unable to locate package corosync-qdevice"

    Fix your network.
    Err:1 http://ftp.us.debian.org/debian bookworm InRelease
    Temporary failure resolving 'ftp.us.debian.org'
    Err:5 ...
  6.

    no HA failover with loss of iSCSI

    Proxmox HA doesn't cover storage unavailability.
  7.

    Test migration stuck

    PVE remotes: proxmox-ve: 8.4.0 (running kernel: 6.8.12-11-pve), pve-manager: 8.4.1 (running version: 8.4.1/2a5fa54a8503f96d). PDM installed: 0.1.11. VM with 10 GB + 1 TB disks. All 3 endpoint devices are on the same subnet without a firewall, 10 Gbps network, HDD RAID. This was the 2nd try; the 1st try was...
  8.

    Access to download proxmox cdn is problematic

    We filter by FQDN on the firewall. We rarely have problems with any other repository, but the PVE repo fails routinely. Update? Fail. Dist-upgrade? Fail. It fails repeatedly, until the firewall and apt are in sync. And everything is connected to the same DNS infrastructure.
  9.

    Access to download proxmox cdn is problematic

    Hi, I am again reporting a problem with the CDN Proxmox uses for the apt repository. Our firewalls just don't work reliably with its 1-minute IP cycling. ;; ANSWER SECTION: download.proxmox.com. 635 IN CNAME download.cdn.proxmox.com. <---- there is a 600+ TTL ...
  10.

    no DHCP for VM in test env?

    You need to debug the DHCP requests/responses via tcpdump/Wireshark.
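    A minimal capture for that, on the assumption that the VM's vNIC hangs off bridge vmbr0 (the interface name is a placeholder):

    ```shell
    # Show DHCP traffic on the bridge: -n skips name resolution,
    # -e prints MAC addresses so you can match the VM's vNIC,
    # ports 67/68 are the DHCP server/client ports.
    tcpdump -ni vmbr0 -e 'udp and (port 67 or port 68)'
    ```

    If requests show up but responses don't, the DHCP server side (or a missing VLAN tag) is the first suspect.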
  11.

    ProxMox Internal Relay

    Try a blacklist/whitelist/mail filter.
  12.

    How to best use multiple NICs

    1x LAG with 4x 25G NICs, or 2x LAG with 2x 25G NICs each; split VLANs across them as needed. Plus, add a 1G adapter: 2x 1G for 2x corosync networks (better), or 1x 1G as the primary corosync link with the secondary on the LAG.
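    The first option above (one LACP bond over four 25G NICs, VLAN-aware bridge on top, separate 1G link kept for corosync) could be sketched in /etc/network/interfaces roughly like this; all interface names and addresses are placeholders, not taken from the thread:

    ```
    auto bond0
    iface bond0 inet manual
            bond-slaves enp1s0f0 enp1s0f1 enp1s0f2 enp1s0f3
            bond-miimon 100
            bond-mode 802.3ad
            bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

    # dedicated 1G link reserved for corosync
    auto eno1
    iface eno1 inet static
            address 198.51.100.10/24
    ```

    The LACP (802.3ad) mode requires matching LAG configuration on the switch side.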
  13.

    VM interface names - numbering changed with 7. vnic?

    Yes, a newly added NIC, and I was surprised by the different name. Looking into that git commit - well, that's not good news, especially with IaC; who (which human) will remember/count interfaces - it's the Xth number before the change, it's not possible to iterate over them in a loop, etc... Yes, we are using systemd, but using .link files will require hard-coded...
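    The hard-coded .link approach mentioned here would look roughly like this inside the guest; the file name and target name are assumptions, and the MAC matches a net0 example from this thread:

    ```
    # /etc/systemd/network/10-persistent-net0.link  (hypothetical file name)
    [Match]
    MACAddress=0a:44:61:8c:42:c7

    [Link]
    Name=net0
    ```

    systemd-udevd applies .link files when the device is detected, so the rename takes effect on reboot or after re-triggering udev - which is exactly the per-MAC hard-coding the post objects to.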
  14.

    VM interface names - numbering changed with 7. vnic?

    agent: 1
    boot: cdn
    bootdisk: scsi0
    cores: 2
    cpu: host
    ide2: none,media=cdrom
    ipconfig1: ip=SOMEIPV4/24,gw=SOMEIPV4
    memory: 4096
    name: SOMEFQDN
    net0: virtio=0A:44:61:8C:42:C7,bridge=vmbr0,tag=VLANID
    net1: virtio=0A:12:66:62:02:25,bridge=vmbr0,tag=VLANID
    net2...
  15.

    VM interface names - numbering changed with 7. vnic?

    PVE 7.4
    /var/log# ip l
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen...
  16.

    Network instability on Proxmox Infrastructure in HA X Mikrotik

    If you need to shut/unshut ports on the switch after a PVE reboot -> with 99% probability the switch is the problem; the 1% is for a server HW/SW problem.
  17.

    Load balancing and redundancy

    Well... 1] a direct mesh network for the Ceph backend, 2] if LACP/MLAG/etc. is not possible - STP, or maybe some fast dynamic routing protocol, 3] get LACP/MLAG-capable switches.
  18.

    25Gbe Server Interface Negotiation and Speed Capping Issue

    Google it; see for example https://techcommunity.microsoft.com/blog/networkingblog/three-reasons-why-you-should-not-use-iperf3-on-windows/4117876