would "qm monitor" achieve the same?
qm monitor <vmid>
Enter QEMU Monitor interface.
<vmid>: <integer> (100 - 999999999)
The (unique) ID of the VM.
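Yes. As a quick sketch of a session (VMID 100 is a placeholder; run this on a PVE node):

```shell
# Enter the QEMU monitor of VM 100
qm monitor 100
# At the resulting prompt you can issue QEMU monitor commands, e.g.:
#   help           - list available monitor commands
#   info status    - show whether the VM is running or paused
#   quit           - leave the monitor
```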
Hi @Tomm ,
Just change all instances of "enp101s0" to "enp100s0f0np0" in /etc/network/interfaces.
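If editing by hand feels error-prone, the rename can be scripted with sed. A minimal sketch, demonstrated on a sample file (on the real host the target would be /etc/network/interfaces, and backing it up first is a good idea):

```shell
# Sample file standing in for /etc/network/interfaces
printf 'auto enp101s0\niface enp101s0 inet manual\n' > /tmp/interfaces.demo

# Replace every occurrence of the old NIC name with the new one
sed -i 's/enp101s0/enp100s0f0np0/g' /tmp/interfaces.demo

# Verify the result
cat /tmp/interfaces.demo
```

After changing the real file, the network configuration has to be reapplied (e.g. via a reboot, or `ifreload -a` on hosts using ifupdown2).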
Good luck
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Another option is to remove the data LV post-install and expand the root LV/FS with the remaining space.
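A sketch of those steps, assuming the default "pve" volume group and an ext4 root filesystem (this destroys the local-lvm thin pool and everything stored on it, so it only makes sense right after install):

```shell
# DANGER: removes the data thin pool (local-lvm) and all volumes on it
lvremove /dev/pve/data

# Give all freed space to the root LV, then grow the filesystem online
lvextend -l +100%FREE /dev/pve/root
resize2fs /dev/mapper/pve-root
```

The corresponding "local-lvm" entry would also need to be removed from the Datacenter storage configuration.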
Hi @Shamson96 , welcome to the forum.
You should try the directions from existing threads on this topic, e.g. https://forum.proxmox.com/threads/using-ntfs-hard-drive-from-windows-server.103509/
If you run into trouble, come back with specific...
You have not described your actual use case, workload, or requirements.
Based on that, you can start with a single-server deployment. Try things out and then come back with more specific questions.
You can always extend the solo...
It's probably a matter of priority. Of course it would be nice to have everything; however, the team likely balances availability, stability, net-new functionality, and GUI polish for things that already work.
Hi @SInisterPisces ,
Configuring iSCSI multipath is generally straightforward. It is well documented in many online resources, including the following article...
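At a high level, the flow looks roughly like this (portal addresses are placeholders; multipath-tools must be installed):

```shell
# Discover the target through each of the two portals/subnets
iscsiadm -m discovery -t sendtargets -p 10.0.1.10
iscsiadm -m discovery -t sendtargets -p 10.0.2.10

# Log in to all discovered paths
iscsiadm -m node --login

# multipathd aggregates the paths into a single device; verify with:
multipath -ll
```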
Hi @SInisterPisces ,
From an operating system perspective, an NVMe/TCP-connected disk and an iSCSI disk both present themselves as raw block devices. When you use the native OS tools to manage them (iscsiadm for iSCSI and nvme for NVMe/TCP)...
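To illustrate the parallel, a sketch of the two tool chains side by side (the portal address, port, and NQN are placeholders):

```shell
# iSCSI: discover and log in with iscsiadm
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
iscsiadm -m node --login

# NVMe/TCP: the equivalent steps with the nvme CLI
nvme discover -t tcp -a 192.168.1.50 -s 4420
nvme connect -t tcp -a 192.168.1.50 -s 4420 -n <subsystem-nqn>

# Either way, the result is a plain block device (sdX or nvmeXnY) in lsblk
lsblk
```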
Interesting! Will keep this in mind for suitable use-cases.
Cheers
A bit of good news to kick off the new year: our pull request addressing the iSCSI DB consistency/compatibility issue has been accepted by the Open-iSCSI maintainers. This means the fix will be included upstream and should make its way into a...
Hi @LG-ITI , welcome to the forum.
First, let’s make sure we are aligned on what “ZFS-over-iSCSI” means: This approach allows you to programmatically expose virtual disks as iSCSI LUNs. These virtual disks are backed by ZFS. The consumers of...
Hi All, could any of the participants who are sharing their experience here post:
- VM config pre-resize
- storage.cfg
- Storage config (lsscsi, lsblk, pvs, vgs, lvs, multipath -ll) pre-resize
- Resize steps with outputs
- Storage config...
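For convenience, the storage-side information above can be collected in one pass, e.g.:

```shell
# Capture the storage view (repeat post-resize with a different filename)
for cmd in lsscsi lsblk pvs vgs lvs "multipath -ll"; do
    echo "== $cmd =="
    $cmd
done > storage-pre-resize.txt 2>&1

# Storage and VM configuration (substitute the actual VMID)
cat /etc/pve/storage.cfg
qm config <vmid>
```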
Hi @Techingenuity , have a look at this thread:
https://forum.proxmox.com/threads/rename-datacenter.121163/
You are welcome. Feel free to mark this thread as Solved by editing the opening post and selecting Solved from the drop-down near the subject line. This helps to keep the forum tidy.
Cheers
Hi @admin20, welcome to the forum.
You can, but you don't strictly need to. The ESXi import tool uses the ESXi API to extract data from ESXi and then transfer it to PVE. That obviously means that you need somewhere to put that data in PVE. It...
If we go back to Storage pools, which, as we now know, is not the OP's question, one would have to edit every VM configuration file to point at the appropriate pool/volume.
I think the Resource Pool rename is simpler; one would only need to edit...
Hi @micahel jiang ,
You are confused about storage location, sharing properties, and reporting. What you have:
- 11TB disk mounted to host1, this disk is not shared even if it is external.
- Proxmox storage pool of type DIRECTORY that points to...
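For reference, a directory storage entry in /etc/pve/storage.cfg looks roughly like this (the name and path are hypothetical). Without the "shared" flag, PVE treats it as local to the node it is mounted on:

```
dir: big-disk
        path /mnt/11tb
        content images,backup
        nodes host1
```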
Makes sense; you did not specify the type of pool (Resource vs Storage), and I assumed Storage, as they have similar naming restrictions. My apologies. I don't have sufficient experience with Resource pools. Generally, until a PVE team...
The easiest thing to do is to install a virtual PVE and test out your exact scenario: a digit-named pool with a VM that has disks on it. Start with 8, then upgrade to 9.
A quick test on PVE9 shows that the storage subsystem will skip newly incompatible...
Awesome to hear. You can mark this thread as Solved by editing the first post and selecting the appropriate prefix from the subject drop-down.
Cheers