Search results

  1. bbgeek17

    SAN / NAS plan out assistance needed here

    We should get some results in a few days. Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
  2. bbgeek17

    Multipathing with SAS storage but still be able to snapshot

    Hi @RoxyProxy, glad to see you got it going. I would keep an eye on https://forum.proxmox.com/threads/problem-with-pve-8-1-and-ocfs2-shared-storage-with-io_uring.140273/#post-718444 Snapshots with OCFS2 will, of course, work with qcow file format as the snapshot will be taken at file level...
  3. bbgeek17

    Changing Hardware - leads to configuration and access disaster

    Hi @Dart, welcome to the forum. I think this part of the Release Notes might be helpful: https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_8.2:~:text=Kernel%3A%20Change%20in%20Network%20Interface%20Names Note that the reference documentation mentioned in the middle explains how you can pin...
  4. bbgeek17

    Some suggestion for Two PVE with iSCSI storage setup

    Hi @parker0909. Yes, you can use iSCSI storage in a cluster of two nodes (and a quorum vote, hopefully). Yes, you can achieve HA with shared iSCSI, whether direct or via LVM. No, you will not have thin provisioning, except that which is done inside your storage appliance internally. iSCSI can...
  5. bbgeek17

    Veeam Silent Data Corruption

    Excellent news Pavel! We will test it with our internal tools when we get some free cycles. Best,
  6. bbgeek17

    SAN / NAS plan out assistance needed here

    You misunderstood. The comment from @Johannes S and my follow-up on it were referring to a particular storage scheme available in PVE: ZFS over iSCSI (ZFS/iSCSI). This approach has multiple requirements absent in Powerstore: the ability to SSH into the system, SSH as root, using ZFS as...
  7. bbgeek17

    SAN / NAS plan out assistance needed here

    Thin LVM is not safe for concurrent/shared access. You will have data loss if you manage to defeat PVE's bumpers and implement it as shared storage. We have not yet been able to qualify Veeam as a safe backup option with PVE. There is a thread about the issue both here and on the Veeam forum. As soon as the...
  8. bbgeek17

    Support for Windows Failover Clustering

    Hi @smecklin, I don't believe your Proxmox subscription would help support you in running WSFC. WSFC requires specialized storage support (specifically SCSI persistent reservations and managed initiator identities) and does not work "out of the box" with any of the pre-packaged storage types...
  9. bbgeek17

    Support for Windows Failover Clustering

    Hi @smecklin , welcome to the forum. There were a few discussions about this topic, for example: https://forum.proxmox.com/threads/proxmox-ha-mssql-failover-cluster.145104/ https://forum.proxmox.com/threads/support-for-windows-failover-clustering.141016/ In summary, compatibility with MS FCS...
  10. bbgeek17

    Can I enable Proxmox HA with 2 Nodes + QDevice + Local storage?

    Hi @logui, The goal of HA is to ensure continued operation of services. In PVE it's done by moving the execution context of the VM or LXC from one node to another. While it's possible to not have a dependency on stored disks (PXE boot), this is not your situation. Your VMs will have...
  11. bbgeek17

    Ceph Cluster Expansion Documentation?

    While I am not a Ceph expert, I believe the following article covers the steps: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster It may not have an isolated "add new node" section, but adding an OSD/monitor is akin to adding a new "ceph member". Which, as you noticed, is not the...
  12. bbgeek17

    Ceph Cluster Expansion Documentation?

    Hi @cfgmgr, is this not it? https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_join_node_to_cluster
  13. bbgeek17

    Detached disk, now can not delete it, or reattach it

    Is there no unused0 in the hardware panel? Can you run: qm config [vmid] pvesm list [storage] If you see the disk in the list output above, you can use: pvesm free [storage:disk] However, be careful. There is no bumper against selecting and destroying the wrong disk. Measure twice before cutting...
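    The lookup-then-free sequence from the post can be sketched as a guarded script. The VMID (100) and storage name (local-lvm) are hypothetical placeholders, not values from the thread, and the destructive `pvesm free` call is left commented out:

```shell
# Sketch of the orphaned-disk cleanup sequence from the post above.
# VMID 100 and storage "local-lvm" are hypothetical placeholders.
VMID=100
STORAGE=local-lvm

if command -v qm >/dev/null 2>&1; then
    qm config "$VMID"        # look for an unusedN disk entry
    pvesm list "$STORAGE"    # confirm the orphaned volume is listed
    # Destructive! Uncomment only after double-checking the volume name:
    # pvesm free "$STORAGE:vm-$VMID-disk-1"
else
    echo "Proxmox tools not found; commands shown for reference only."
fi
```

    The `command -v` guard lets the sketch degrade gracefully on machines without the Proxmox CLI installed.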
  14. bbgeek17

    [SOLVED] Unable to retrieve next free vmid: 500 Internal Server Error: unable to get any free VMID in range [999991, 1000000]

    Hi @fadeway, Proxmox, by default, does not enforce this range. Someone in your organization set it. Look at the "datacenter.cfg" file for details: cat /etc/pve/datacenter.cfg keyboard: en-us mac_prefix: BC:24:14 next-id: lower=9000 You can adjust the range via the GUI in datacenter options...
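    As a sketch, the configured lower bound can be read out of the example datacenter.cfg shown above with sed. The file contents mirror the example from the post, but are written to /tmp here so nothing real is touched:

```shell
# Recreate the example datacenter.cfg from the post (illustrative only)
cat > /tmp/datacenter.cfg <<'EOF'
keyboard: en-us
mac_prefix: BC:24:14
next-id: lower=9000
EOF

# Extract the configured lower bound of the VMID allocation range
lower=$(sed -n 's/^next-id:.*lower=\([0-9]*\).*/\1/p' /tmp/datacenter.cfg)
echo "VMID allocation starts at: $lower"   # prints 9000 for this sample
```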
  15. bbgeek17

    SAN / NAS plan out assistance needed here

    Hi @geekspaz, I recommend configuring your proof of concept (POC) with the same storage solution you plan to use in production. Otherwise, you might encounter unexpected issues later on. Keep in mind that the Proxmox interactions and integration with Blockbridge differ significantly from...
  16. bbgeek17

    No Network connection to the Proxmox 7.4 GUI

    Hi @jongpac, welcome to the forum. The output of your "ip a" conveniently shows that the "vmbr0" bridge interface sits on top of the "IDRAC" port. Unless you've configured the port for "sharing" between IDRAC and the OS, that port is generally not available to the OS. I'd recommend that you add...
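    A minimal /etc/network/interfaces sketch of that fix, with the bridge moved onto a regular NIC; the interface name (eno1) and addresses are placeholders for illustration, not taken from the poster's system:

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1   # a regular NIC, not the shared iDRAC port
    bridge-stp off
    bridge-fd 0
```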
  17. bbgeek17

    SAN / NAS plan out assistance needed here

    Hi @geekspaz , There are a few architectural things you have to work out for a successful proof of concept: Two servers are not a valid/supported cluster construct. Each cluster member has an equal vote. If you lose the connection between the two members then there will be no majority present...
  18. bbgeek17

    Adding existing LVM on iSCSI

    This ^ looks like two paths to the same disk, most likely via the old 1G and the new 10G networks. I suggest that you try configuring the multipath package; you can use the new and updated wiki page: https://pve.proxmox.com/wiki/Multipath Good luck
  19. bbgeek17

    Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

    We will get it into our QA pipeline asap!
  20. bbgeek17

    Adding existing LVM on iSCSI

    Do you have multiple paths to storage, even if inadvertently? https://forum.proxmox.com/threads/cannot-create-lvm-on-iscsi-storage.82145/ What happens when you run: /sbin/pvs --separator : --noheadings --units k --unbuffered --nosuffix --options pv_name,pv_size,vg_name,pv_uuid...
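    The duplicate-path symptom that the pvs output reveals can be sketched with fabricated sample data: two device nodes reporting the same PV UUID suggest two paths to one LUN. The device names and UUIDs below are made up for illustration:

```shell
# Fabricated pvs-style output: pv_name:pv_size:vg_name:pv_uuid
cat > /tmp/pvs.out <<'EOF'
/dev/sdb:1048576.00:vg0:Abc123-fake-uuid
/dev/sdc:1048576.00:vg0:Abc123-fake-uuid
/dev/sdd:524288.00:vg1:Def456-fake-uuid
EOF

# A PV UUID seen on more than one device suggests multipath is needed
awk -F: '{count[$4]++}
         END {for (u in count) if (count[u] > 1)
                  print u, "appears on", count[u], "devices"}' /tmp/pvs.out
```

    For the sample data this prints `Abc123-fake-uuid appears on 2 devices`, which is the cue to set up the multipath package before using the device with LVM.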
