In my case I don't use Ceph, just KVM. Can I break the cluster if I mix major PVE versions?
Some guides on proxmox.com mention changing the default display to Standard if you do a live migration from 4.4 to 5.0, so I understand it is possible, no?
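If I understand those guides right, the display change they mention would be something like this before the migration; the VMID is just a placeholder and this is only my reading of it:

# switch the VM's display from the default to Standard VGA (VMID is a placeholder)
qm set 100 -vga std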
Thanks.
Hello,
I have a cluster on 4.4, but I would like to test 5.0 on only one node of the same cluster and move some VMs to it with live migration. After some time I would upgrade the other nodes of the cluster.
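To be concrete, the kind of move I mean is a plain online migration like this; the VMID and node name are placeholders:

# live-migrate VM 100 from the 4.4 node to the upgraded 5.0 node (names are placeholders)
qm migrate 100 pve-node2 --online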
Should that be possible?
Thanks.
Hello,
I have this configuration inside the VM:
root@dev:~# sysctl -p
vm.panic_on_oom = 1
kernel.panic = 3
root@dev:~#
When the VM runs out of memory (swap is disabled), Linux should kernel panic and reboot after 3 seconds, but it is not working with KVM/Proxmox.
Does anyone know anything about that?
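For reference, this is roughly how I test it inside the guest; the stress tool and the 4G figure are just an example, not necessarily what fits every VM:

# make the settings persistent inside the guest (file name is an example)
cat >> /etc/sysctl.d/99-oom-panic.conf <<'EOF'
vm.panic_on_oom = 1
kernel.panic = 3
EOF
sysctl --system

# allocate more memory than the guest has to force the OOM killer
# (requires the "stress" package; 4G is an example value)
stress --vm 1 --vm-bytes 4G --vm-keep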
Hello,
Thanks so much, it was a silly mistake on my side when doing the migration...
It works like you said: just migrate the VM itself, not the individual disk resources of the VM. Of course you need local storage enabled for images on both nodes and mounted at the same path.
About the NFS solution you mentioned...
Thanks for your reply.
I cannot use shared storage because I delegate all hardware management to other companies and they do not support it. I could use extra servers for NFS, but that would be a single point of failure. Having 2 NFS servers in HA is not a good solution either, because I would need RAID 1...
Does pve-zsync need ZFS on the source storage, or will it also work with LVM-thin, for example?
If I don't use zsync, can I dump a VM to the local storage of another host in the cluster? (See the sketch below for the kind of thing I mean.)
If I cannot migrate without downtime... I have some questions:
Can I migrate from LVM-thin to LVM-thin on another host...
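By dumping to another host I mean something along these lines; the VMID, paths, file name and storage name are made up:

# on the source node: dump the VM to a local directory (VMID and path are examples)
vzdump 100 --dumpdir /var/lib/vz/dump --mode snapshot --compress lzo

# copy the resulting archive to the other node (file name is an example)
scp /var/lib/vz/dump/vzdump-qemu-100-2017_08_01-02_00_01.vma.lzo node2:/var/lib/vz/dump/

# restore it there onto its local storage (storage name is a placeholder)
ssh node2 qmrestore /var/lib/vz/dump/vzdump-qemu-100-2017_08_01-02_00_01.vma.lzo 100 --storage local-lvm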
Hello guys,
I have to implement a new environment, but I am not sure how Proxmox can meet my requirements:
2 servers (Server A and Server B).
SSD and SATA disks in every server.
All VMs have to use the SSD disks (IO is critical).
Every night Server A takes a snapshot of all VMs using the local SSD and...
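What I have in mind for the nightly step is a vzdump backup job run from cron on Server A, roughly like this; the dump directory is a placeholder and this is only a sketch of the idea, not a full snapshot solution:

# nightly backup of all VMs on Server A (dump directory is a placeholder)
vzdump --all --mode snapshot --compress lzo --dumpdir /mnt/backup --quiet 1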
:S
This test is with local SSD.
VM: ubuntu 14.04
Jobs: 64 (f=64): [rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr] [0.6% done] [378.3MB/0KB/0KB /s] [96.9K/0/0 iops] [eta 59m:40s]
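The kind of fio run that produces output like that is roughly the following; the file size and runtime here are example values:

# 64 parallel 4k random-read jobs with direct IO (size/runtime are example values)
fio --name=randread --rw=randread --bs=4k --direct=1 \
    --numjobs=64 --size=1G --runtime=3600 --time_based --group_reporting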
I have done more tests.
I tested with local SSD without RAID:
From the Proxmox server, the test reached 97K IOPS.
From a VM inside Proxmox, 5-6K IOPS (again... 5-7% of the real performance).
Afterwards, I converted the images from qcow2 to raw (see the sketch below) and changed some options:
Select VM -> Hardware -> Select disk -> Edit...
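For the conversion step I mean something along these lines; the VMID and paths are placeholders, and this is just a sketch of that part:

# convert the disk image from qcow2 to raw (VMID and paths are placeholders)
qemu-img convert -f qcow2 -O raw \
    /var/lib/vz/images/100/vm-100-disk-1.qcow2 \
    /var/lib/vz/images/100/vm-100-disk-1.raw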
Hello,
I have a problem with NFS performance on VE 4.3:
iperf test between 2 Proxmox nodes: 5 Gbps (OK)
IO benchmark from the Proxmox server to NFS: 22K IOPS (OK)
IO benchmark from a Proxmox VM mounting NFS directly: 22K IOPS (OK)
IO benchmark from a Proxmox VM using a local disk that is stored...
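For completeness, the network test was just a plain iperf run between the two nodes; the server IP is a placeholder:

# on node 1 (server side)
iperf -s
# on node 2 (client side; the server IP is a placeholder)
iperf -c 192.168.0.1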
Hello,
I have 2 admin users:
- 1 root user with PAM auth. The console works.
- 1 extra user with the Administrator or PVEAdmin role, using PVE auth and a /pool/poolname limitation; it also fails if I use / as the pool limit. Every time I try to connect to the console I get the following error message: "Server...
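For reference, the ACL for the extra user is set up roughly like this; the user and pool names are placeholders and the exact pveum syntax is from memory:

# grant the Administrator role on the pool to the extra user
# (user and pool names are placeholders)
pveum aclmod /pool/poolname -user extrauser@pve -role Administrator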
Hello,
I am trying to move a disk from qcow2 to vmdk with this command while the VM is running:
root@pve1:~# time qm move_disk 11111111 scsi1 SSD_LOCAL -format vmdk -delete 1
create full clone of drive scsi1 (SSD_PREMIUM:11111111/vm-11111111-disk-2.qcow2)
Formatting...
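Afterwards I check the result with something like the following; the image path depends on where SSD_LOCAL lives, so it is only a placeholder:

# confirm the new format and the attached disk (path is a placeholder)
qemu-img info /var/lib/vz/images/11111111/vm-11111111-disk-1.vmdk
qm config 11111111 | grep scsi1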