Search results

  1. [TUTORIAL] PoC 2 Node HA Cluster with Shared iSCSI GFS2

    Hi, using GFS2 and today hit a bug:
    [2479853.036266] ------------[ cut here ]------------
    [2479853.036509] kernel BUG at fs/gfs2/inode.h:58!
    [2479853.036721] Oops: invalid opcode: 0000 [#1] PREEMPT SMP PTI
    Need to restart the PVE node. :-/
  2. Monitoring ceph with Zabbix 6.4

    The latest Ceph release (20) removes both: https://ceph.io/en/news/blog/2025/v20-2-0-tentacle-released/#changes "MGR: Users now have the ability to force-disable always-on modules. The restful and zabbix modules (deprecated since 2020) have been officially removed."
  3. What is the default migration network ?

    When a 3-node Ceph cluster is set up using full mesh (routed mode), is it possible to use this Ceph network for migration as well? In the GUI I have to select an interface, but there are actually two interfaces in this case, each going to a different node.
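    For reference, the cluster-wide migration network is configured in /etc/pve/datacenter.cfg as a CIDR rather than a single interface, so it can cover both mesh links as long as the node addresses fall inside it. A minimal sketch, assuming the mesh addresses live in a hypothetical 10.15.15.0/24 subnet:

        # /etc/pve/datacenter.cfg
        migration: secure,network=10.15.15.0/24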
  4. [SOLVED] Question about running Proxmox on a single consumer SSD

    I think it is fine if you will not use ZFS. Personally I am using mdadm RAID 1 (yes, I know...), 2x MX500 2TB with LVM on top for VMs. Running fine for 3 years.
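    A rough sketch of that layout, with hypothetical device and volume-group names (/dev/sda, /dev/sdb, vg_vmdata), not a recommendation:

        # mirror the two SSDs and put an LVM volume group on top
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        pvcreate /dev/md0
        vgcreate vg_vmdata /dev/md0
        # register the VG as LVM storage for VM disks in Proxmox
        pvesm add lvm vmdata --vgname vg_vmdata --content images,rootdir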
  5. Fibre Channel SAN connectivity.

    You can do it by disk passthrough (CLI): https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM). Since FC SAN devices usually have multiple paths, you have to configure multipath (https://pve.proxmox.com/wiki/Multipath#Set_up_multipath-tools) and then pass to the VM this dm...
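    A minimal sketch of the passthrough step, assuming a hypothetical VMID of 104 and a multipath device named mpatha (check the real name with "multipath -ll"):

        qm set 104 -scsi1 /dev/mapper/mpatha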
  6. Monitoring ceph with Zabbix 6.4

    The official doc https://docs.ceph.com/en/reef/mgr/zabbix/ can be used, but the instructions are not correct. Especially: this is not an optional but a required step. Also, Plugins.Ceph.InsecureSkipVerify=true in zabbix_agent2.conf is required. The guide is here...
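    For reference, that setting goes into the agent configuration; a minimal sketch of the relevant line in /etc/zabbix/zabbix_agent2.conf (only the option named above, everything else left at its default):

        # skip TLS certificate verification for the Ceph plugin
        Plugins.Ceph.InsecureSkipVerify=true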
  7. Debian update with open-vswitch stopped networking

    Seems we hit something like this when upgrading from PVE 8 to PVE 9... Anyone else?
  8. TASK ERROR: storage migration failed: block job (mirror) error: drive-efidisk0: 'mirror' has been cancelled

    I've been hitting this on multiple disks, not just the small one (EFI). It seems the root cause was that the running VM had CPU type 'host' while the CPUs in the cluster were not identical. Fixed by setting a different CPU profile for the VM (x86-64-v4). This error makes it very confusing to find the root cause! mirror-scsi0: Completing...
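    A minimal sketch of changing the CPU profile from the CLI, with a hypothetical VMID of 104 (the VM has to be stopped and started again for the new type to apply):

        qm set 104 --cpu x86-64-v4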
  9. [SOLVED] Snapshots as volume chains problem

    I have manually cleared (lvremove) the invalid snapshots and updated to the latest packages from pve-test. Then I had to set "10.0+pve1" as the machine version. After this, creating snapshots works again. If they break it again on current I will open a Bugzilla ticket.
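    A sketch of that cleanup, with hypothetical names throughout (VMID 104, VG "san-vg", snapshot LV name); check "lvs" for the real volume names, and note the exact machine string depends on the machine type (i440fx vs q35):

        lvs san-vg
        lvremove san-vg/snap_vm-104-disk-0_broken   # hypothetical LV name
        qm set 104 --machine pc-q35-10.0+pve1       # pin the machine version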
  10. [SOLVED] Snapshots as volume chains problem

    Nobody? Should I open a ticket in Bugzilla?
  11. [SOLVED] Snapshots as volume chains problem

    I have set up one PVE host with one LUN from a SAN, and users are testing the Snapshots as Volume-Chain feature. Then they have broken it; they are QA, so it is their job. There are no snapshots on the VM:

        root@pve04:~# qm listsnapshot 104
        `-> current    You are here...
  12. Downloading data files from Proxmox node

    This would be nice to have: an easy way to download from the web GUI...
  13. Steal time monitoring

    Found a Reddit thread about this; monitoring using htop:

        # configure the htop display for CPU stats
        htop (hit F2)
        Display options > enable "Detailed CPU Time (System/IO-Wait/Hard-IRQ/Soft-IRQ/Steal/Guest)"
        Screens > Main > available columns > select (F5) "Percent_CPU_Delay", "Percent_IO_Delay"...
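    Steal time can also be read directly from /proc/stat if something scriptable is preferred; on the aggregate "cpu" line the steal counter is the 8th value after the label. A minimal sketch:

        # print cumulative steal ticks ($9 because $1 is the "cpu" label itself)
        awk '/^cpu /{print "steal ticks:", $9}' /proc/stat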
  14. create a screenshot of the virtual machine

    But this requires root. Is it possible to create a screenshot without root privileges, using only the API?
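    For reference, the QEMU monitor is exposed over the API, so a screendump can at least be requested that way; whether a non-root token is allowed to call it is exactly the open question here. A sketch with a hypothetical node name (pve04) and VMID (104); the .ppm file is written on the host:

        pvesh create /nodes/pve04/qemu/104/monitor --command "screendump /tmp/vm104.ppm"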
  15. Proxmox GUI Ceph "Connection error"

    Hitting this error/problem. I see this difference with haproxy:

        https://host/api2/json/cluster/ceph/status
        { "data": null, "message": "binary not installed: /usr/bin/ceph-mon\n" }

    Without haproxy:

        https://host:8006/api2/json/cluster/ceph/status
        { "message": "binary not installed...
  16. HA for Proxmox Cluster management portal

    Thanks for the idea. Actually, to keep things simple, keepalived does not need master/backup roles or unicast peers defined if you don't care which node will be master. It is possible to have the same config on all nodes:

        vrrp_instance pve-cl01 {
            interface eth0
            virtual_router_id 71...
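    A fuller sketch of what such an identical per-node /etc/keepalived/keepalived.conf could look like; only the instance name, interface and virtual_router_id come from the snippet above, the VIP and password are hypothetical placeholders:

        vrrp_instance pve-cl01 {
            state BACKUP              # same on every node; VRRP elects a master on its own
            interface eth0
            virtual_router_id 71
            priority 100              # identical priority, any node may win the election
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass changeme    # hypothetical
            }
            virtual_ipaddress {
                192.0.2.10/24         # hypothetical cluster VIP
            }
        }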
  17. Proxmox Hooks

    I would like to mark a virtual machine, when it is created, with the username (owner) who created it. Maybe this could be put in the notes. The question is: how to hook this event?
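    Writing the owner into the notes field is the easy part once the VMID and user are known; the open question of which event to hook remains. A minimal sketch with hypothetical values (VMID 104, node pve04, user alice@pve):

        qm set 104 --description "owner: alice@pve"
        # or via the API:
        pvesh set /nodes/pve04/qemu/104/config --description "owner: alice@pve"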
  18. OpenVSwitch static mac address configuration for internal interface

    I set it this way; in this example vlan10 is used as the PVE management VLAN:

        iface vlan10 inet static
            ovs_type OVSIntPort
            ovs_bridge vmbr0
            ovs_options tag=10
            address 10.10.0.2/24
            gateway 10.10.0.1
            hwaddress aa:00:97:b0:02:c8
  19. Using keepalived to access the cluster WI over a single IP

    Thanks for the guide, I had the same idea. Yes, a native setting at the Datacenter level would be very nice; everything required is almost already there.