Hello,
I just installed a new PVE 6 node on a single SSD.
I'm not an expert on disk matters, but I see this SSD has a physical sector size of 4096 bytes, so I think every partition should start at an offset that is a multiple of 4096, shouldn't it?
But fdisk warns me only about partition 1.
Partitions 2 and 3 start at 2048...
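For reference, I think it can be checked with something like this (/dev/sda is just a placeholder for the SSD):

# show logical/physical sector sizes and the start sector of each partition
fdisk -l /dev/sda
# ask parted whether each partition is aligned to the disk's optimal I/O size
parted /dev/sda align-check optimal 1
parted /dev/sda align-check optimal 2
parted /dev/sda align-check optimal 3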
I think you will get better performance by creating a hardware RAID array and using a single virtual disk (RAID 0) with ZFS.
I use this setup on a couple of Dell servers with no problems, except for the GRUB boot one. That issue should be solved with Proxmox 6 and UEFI boot.
Sorry, I manage only about two dozen Proxmox VE servers at the moment, with Linux and Windows VMs.
I wasn't referring to a VM problem but to the QEMU problem. In normal situations the VM must stop at the hypervisor level, regardless of the guest OS (no clean shutdown).
If not, you will receive an...
What do you mean? Did you try a hard stop? This is very strange.
If the VM doesn't stop, I think you have some serious configuration or hardware problem...
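To be clear, by hard stop I mean stopping the VM at the hypervisor level, something like this (100 is just an example VM ID):

# ask the guest OS for a clean shutdown (via ACPI / guest agent)
qm shutdown 100
# hard stop: terminate the QEMU process regardless of the guest state
qm stop 100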
Hello,
With the new UEFI boot in PVE 6, GRUB will load the kernel from the UEFI partition, right?
So problems like this:
https://forum.proxmox.com/threads/zfs-grub-rescue-after-reboot.42515/
should be solved.
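For what it's worth, a quick way to see which boot loader entries the firmware actually uses (not PVE-specific, just a generic check):

# list the UEFI boot entries and the EFI binaries they point to
efibootmgr -v
# list disks/partitions to spot the EFI system partition (vfat)
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT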
What do you think?
Thank you!
I'm not sure it's a DRBD issue; it's doing its job.
Maybe QEMU 4 introduced some check on the storage?
BUT I tried this while the resource was Secondary:
# a bare >> redirection succeeds only if the device can be opened for writing
if >> /dev/drbd1001
then
    echo "writable"
else
    echo "write permission denied"
fi
the result is: writable! ... I'm really confused ...
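In case it helps, the resource role can also be checked directly with drbdadm (DRBD 9 syntax; the resource name below is just an example, use the one backing /dev/drbd1001):

# show role (Primary/Secondary), disk state and peers for one resource
drbdadm status vm-101-disk-1
# or list all resources known to this node
drbdadm status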
Dear Proxmox staff and users,
I know that DRBD 9 is not supported by Proxmox, but I tried DRBD 9 with LINSTOR on PVE 6 and everything works except one "stupid" (?) thing: VMs can't start.
As I understand it, there is no need to promote a DRBD resource to Primary manually. If I mount the disk from the OS, it works.
At the other...
Thank you! I see this:
node {
  name: p6t1
  nodeid: 1
  quorum_votes: 1
  ring0_addr: 10.7.96.3
}
If I ping p6t1, it resolves to the public IP, not the private 10.7.96.3, correct?
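Just to double-check which address corosync is actually using for the ring, I think something like this should show it:

# show the local link/ring status and the addresses corosync is bound to
corosync-cfgtool -s
# show cluster membership with node IDs and addresses
pvecm status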
It's fine, but for less skilled users I think it would be better to show a warning message before joining a...
Sorry for writing again. The cluster is working, but can someone confirm that there is no need to add the ring address to the /etc/hosts files with PVE 6?
Yes
Yes, zsync makes a snapshot and always does incremental backups (not the first time, of course).
You can keep a number of snapshots, which reside on both the source and the destination.
EDIT: it's always a full backup, but it sends only the differences; it's very efficient and reliable.
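Assuming we're talking about pve-zsync, a minimal sketch of a job (VM ID, destination and snapshot count are just examples):

# one-shot sync of VM 100 to a remote ZFS dataset, keeping the last 7 snapshots on both sides
pve-zsync sync --source 100 --dest 192.168.1.20:tank/backup --maxsnap 7 --name daily
# or register the same job to run from cron
pve-zsync create --source 100 --dest 192.168.1.20:tank/backup --maxsnap 7 --name daily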
If you have a...
I'm just trying PVE 6 (just updated from pvetest) on three test VPSes.
I added a network with private addresses for corosync,
created the cluster on the private address on the first node,
and when joining the second node using the private address I got timeouts, but the node is in the cluster :-)
same for the third...
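In case it's useful, this is roughly the kind of commands involved (IPs and cluster name are just examples, and the exact --link0 syntax may differ depending on the pvecm version):

# on the first node: create the cluster bound to the private corosync address
pvecm create testcluster --link0 10.7.96.1
# on each additional node: join via the first node's private address, using the local private address for link0
pvecm add 10.7.96.1 --link0 10.7.96.2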
You can't migrate the LINSTOR controller, because when the VM pauses for the final step, Proxmox can't contact the storage controller to make the Primary/Secondary changes.
So you can't install it on a VM managed by LINSTOR itself.
Linbit provided a tutorial for creating a DRBD resource not managed...