PVE makes heavy use of the boot disk. It is not recommended to use USB sticks of any size with PVE.
The better practice is to use enterprise-grade SSD/NVMe. The best practice is to use two in a mirror configuration.
If you implement aggressive log rotation and are diligent in treating PVE as an...
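As an example of reducing writes (a minimal sketch assuming the default journald setup; the cap values are placeholders to tune for your environment), you can limit what the journal writes to the boot device in /etc/systemd/journald.conf:

    [Journal]
    SystemMaxUse=100M        # placeholder cap on persistent journal size
    MaxRetentionSec=1week    # drop entries older than a week

    systemctl restart systemd-journald    # apply the change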
Perhaps because you have a space after the comma?
Keep in mind that Ansible modules are not supported by the PVE developers.
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Hi @Adamced , welcome to the forum.
You've attached shared SAS storage to multiple hosts.
Your storage is connected to each host via multiple paths for redundancy.
You are seeing "double" because each path provides access to the LUN.
Your next step is to install and configure "multipath"...
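The usual steps look roughly like this (a sketch, not a drop-in config; your storage vendor may publish recommended multipath.conf settings):

    apt install multipath-tools

    # minimal /etc/multipath.conf
    defaults {
        user_friendly_names yes
        find_multipaths yes
    }

    systemctl enable --now multipathd
    multipath -ll    # each LUN should now show up once, with its paths listed under it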
If you look at the Task information:
hibernate = qmsuspend
pause = qmpause
API docs state:
suspend: You need 'VM.PowerMgmt' on /vms/{vmid}, and if you have set 'todisk', you need also 'VM.Config.Disk' on /vms/{vmid} and 'Datastore.AllocateSpace' on the storage for the vmstate.
The same...
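As a sketch (the user, VM ID, and storage name are hypothetical), the privileges from the quoted docs could be granted like so:

    pveum role add VMPower --privs "VM.PowerMgmt VM.Config.Disk Datastore.AllocateSpace"
    pveum acl modify /vms/100 --users alice@pve --roles VMPower
    pveum acl modify /storage/local-lvm --users alice@pve --roles VMPower

After that, "qm suspend 100" should pause the VM, and "qm suspend 100 --todisk 1" should hibernate it.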
You can simply use ssh-copy-id (man ssh-copy-id ) or manually edit authorized_keys file
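For example (the host address is a placeholder, and adjust the key path to whichever key you use):

    ssh-copy-id root@192.0.2.10
    # or, by hand:
    cat ~/.ssh/id_ed25519.pub | ssh root@192.0.2.10 'cat >> ~/.ssh/authorized_keys'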
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
I don't think the geometry of the ZFS pool matters, so it should work just fine.
Good luck
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Hi NatO, thanks for your report.
Our testing suggests that a Veeam backup of a live VM is not internally consistent, even with a single disk. I recommend assessing the consistency of a multi-disk configuration only once the basics are operating as expected.
As soon as Veeam announces a fix...
Hi @inside , welcome to the forum.
Your path to resolution might be faster if you ping Linbit about this.
Good luck
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Yes, that is correct for LVM-thin storage and some others.
That's correct: raw files do not provide thin provisioning.
If you plan to stick with file storage, then that is the path to take. The other option is to move to LVM storage; the disks are "raw" from the PVE point of view, however...
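If you do go the LVM route, the move itself is a one-liner (the VM ID, disk name, and storage ID are hypothetical):

    qm disk move 100 scsi0 local-lvm    # move VM 100's scsi0 to the "local-lvm" storage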
Hi @zyled , welcome to the forum.
PVE is doing what you've configured it to do. It sounds like your expectations do not match your current configuration. To get a better answer, you'd need to provide more information:
- storage configuration: cat /etc/pve/storage.cfg
- running storage...
Do you have space on your disk? Check with "df -h".
The next step is to examine the log: journalctl -b0
Look for errors, warnings, or anything unusual
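You can narrow the output down by priority, e.g. this shows warnings and worse from the current boot:

    journalctl -b0 -p warning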
Good luck
Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
No
ZFS is a local file system and is not network-dependent
No, because that's not something it is designed for. What you need to research is Ceph. For Ceph, your 1G link will be a bottleneck.
If you have an existing NAS, then you should use it for shared storage; it's a perfect fit.
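As a sketch (the storage ID, server address, and export path are placeholders), an NFS export from the NAS can be registered once and used by every node:

    pvesm add nfs nas-shared --server 192.0.2.20 --export /export/pve --content images,backup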
Good luck...
Hi @beawesomeism , welcome to the forum.
What is the IP of the PVE server?
What is the IP of your workstation?
What is the IP of your gateway?
What is the output of "ip a" on PVE?
What is the corresponding output from your workstation? (ipconfig /all ; ip a)
Optional:
Can you ping the PVE...
Hi @jorge_buqui , welcome to the forum.
Your experience is as expected. Take a look at the table in the Wiki: https://pve.proxmox.com/wiki/Storage
Note the combinations of Shared/Snapshot capabilities and what storage they apply to.
There are significant differences between PVE and ESXi, not...
What you are attempting to do is to create a semblance of a clustered file system. However, EXT4 is not a clustered file system, so you are corrupting the data and metadata on every write. You should research CFS options; there are very few available...
The particular keyword used by netplan is not important. The host needs a gateway to reach outside its defined subnet.
Initially you defined the subnet as /32, which means the host can only talk to itself.
Your next screenshot has /24, which means the host can talk to anyone who has an IP address in...
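For reference, a minimal netplan sketch with a /24 and a reachable gateway (the interface name and addresses are placeholders):

    network:
      version: 2
      ethernets:
        eth0:
          addresses: [192.168.1.50/24]
          routes:
            - to: default
              via: 192.168.1.1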
So you named the disk using VM ID 101 but want to assign it to VM 103?
Does VM 101 actually exist? If it does, then use the "qm disk move" command to reassign the disk (man qm).
If it does not exist, find where the new disk is stored on your storage-4t pool and move/rename it manually...
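If VM 101 does exist, the reassignment looks something like this (the disk name is hypothetical):

    qm disk move 101 scsi1 --target-vmid 103    # hands VM 101's scsi1 over to VM 103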
You are confusing the local disk you've added to the "pve2" host with the ability to have shared storage. You don't have shared storage.
You should remove the "shared" option from your new pool and restrict the pool to the particular node that actually has that disk attached.
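Something like this should do it (the storage ID "newpool" is a placeholder for whatever you named it):

    pvesm set newpool --shared 0 --nodes pve2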
Blockbridge : Ultra low latency all-NVME...
Where does Proxmox VE come into it?
Your configuration has a /32 subnet; that netmask contains only one IP. Is that right? Your network screenshot does not show the netmask.
Your gateway is wrong - you are pointing it at the host itself. Review your second screenshot for the correct GW.
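For reference, a Debian-style /etc/network/interfaces sketch with a sane netmask and a gateway that is not the host itself (all addresses and the NIC name are placeholders):

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.50/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0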
Good luck...