As I said, all the VM traffic is on vmbr1 and the DRBD traffic is on a separate 10Gbit NIC, so vmbr0 is used only for cluster traffic and GUI management (very rare).
vmbr0/eth0, on all 3 nodes, is a Gbit connection to a Gbit switch (mii-tool confirms this: "eth0: negotiated 1000baseT-FD flow-control...
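Roughly, the relevant part of /etc/network/interfaces looks like this (just a sketch; interface names and addresses are placeholders, not my real config):

# management + cluster traffic
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# VM traffic only
auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

# dedicated 10Gbit link for DRBD
auto eth2
iface eth2 inet static
        address 10.10.10.11
        netmask 255.255.255.0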
Yes, exactly the same version of everything. I've always updated the nodes at the same time.
I've just checked the first node now (at 16:52) and found that I've had some weird things today too
(that log file starts at Aug 6 06:39:54)
root@prox01:~# grep -i totem /var/log/daemon.log | tail...
In a 3-node cluster, 2 nodes with DRBD9 storage and one for quorum, I get corosync problems like those below 4-5 times a day.
Sometimes I enter the web GUI and see the other nodes in red; pvecm status says everything is ok, but if I dare run a command, e.g. "qm list", it hangs and there is no way to...
We have used a Mitac Pluto 220 (now there is also a 230 model); it's relatively cheap (around $200 I guess, once you have added an SSD and some RAM) and Intel based, so you can bare-metal install Proxmox directly onto it.
You don't "mount" the virtual disks in Proxmox (as it seems you were trying to do in your first post). You have to make the virtual disk available to Proxmox (different storage technologies have different ways to do that) and "link" it to the VM in the vmid.conf file, so QEMU/KVM sees it as a block...
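To give an idea, the "link" is just a line in the VM config (a hypothetical fragment; the storage name, VMID and filename are placeholders):

# /etc/pve/qemu-server/100.conf
# attach an existing qcow2 image from the "local" storage as the first SCSI disk
scsi0: local:100/vm-100-disk-1.qcow2,size=32G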
I'm just shooting in the dark.
It's really hard to help you since you provide confusing and partial info. I've googled a bit, and I've found on this forum that VHDs are first converted into something that QEMU can understand, like a qcow2 file (a conversion example is sketched below).
If you use one of your 2 file based storage (local or...
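Just as an illustration, the conversion is usually done with qemu-img (the paths are placeholders; "vpc" is the qemu-img name for the VHD format):

qemu-img convert -f vpc -O qcow2 /path/to/disk.vhd /var/lib/vz/images/100/vm-100-disk-1.qcow2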
In your config you have the boot disk set to sata0, but there is no sata0 storage entry, so you probably boot from the CD-ROM as a fallback.
So the only storage defined is ide2, which points to an Ubuntu ISO. Do you mean that the Ubuntu ISO is your VHD?
If so, just check if the...
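To make the point clearer, a config like the one described probably looks roughly like this (a hypothetical reconstruction, not your actual file):

bootdisk: sata0
ide2: local:iso/ubuntu.iso,media=cdrom
# note: there is no "sata0: ..." line, so there is no virtual disk at all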
Your VM config shows NO disk defined (only a CD-ROM), so of course you can't mount it (if you boot from that CD-ROM; otherwise you don't even boot). Sorry, but from the first comments you wrote it seems you don't really have a clue how things work (no offence), so it's very hard to figure out how to...
Hi, I'm testing Proxmox 5 (latest updates) and its Ceph (Luminous).
I have 3 VMs with Proxmox installed, 2 as storage nodes with a 32GB virtual disk each, and the 3rd with only local storage, acting as the 3rd monitor.
I've followed the instructions on https://pve.proxmox.com/wiki/Ceph_Server and done some googling around
I've...
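For context, the steps I followed are roughly the ones from the wiki (a sketch from memory; the network and the disk device are placeholders for my test setup):

pveceph install --version luminous
pveceph init --network 10.10.10.0/24       # on the first node only
pveceph createmon                          # on each monitor node
pveceph createosd /dev/sdb                 # on each storage node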
It's a workaround because the design is flawed when you have full permissions, and to fix this problem you suggest using fewer permissions (a workaround). In other words, you are not fixing the problem, i.e. moving the buttons, but suggesting to log in in a way that prevents you from using them at all.
If I...
Just a shot in the dark: from the error message it seems you don't have the 'mkdir' command available on the server (I don't know which one). Check if that is the case (i.e. issue the command 'which mkdir'). If it is absent, install the "coreutils" package, but it would be really strange and scary if it were missing...
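Something like this, as a quick check (assuming a Debian-based system, as on a Proxmox node):

which mkdir || echo "mkdir not found"
apt-get install --reinstall coreutils    # only if it is really missing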
AFAIK drbd9 has fewer issues with split brain, and the "best practice" to avoid it with drbd8, having half the storage used by one node for its VMs and the other half by the other node, is really a waste of space (2X) and a restriction of flexibility (i.e. I've a slower node that is good if usually...
I second this request, I did not know about that change (dated 20 January 2017)!
https://www.linbit.com/en/drbd-manage-faq/
So far I've found no replacement for the 2 storage nodes + quorum node configuration that is possible with drbd9, so I think it's very important for all of us with small requirements...
I've done some further tests (the commands used are sketched below):
- changing the scheduler back to the older default, deadline, doesn't solve the problem
- without compression the backup works fine (vma verify is ok, and the restored VM also seems ok)
- with gzip we have the same issue
- with lzop we have the same issue
- all the above with GUI...
I have to add that I've transferred the backup to a Proxmox 4.3 host, and it seems broken there too
lzop -d -c vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo |vma verify -
** (process:10046): ERROR **: verify failed - wrong vma extent header chechsum
Trace/breakpoint trap
so it seems that it's the backup...
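For reference, these are the kinds of commands behind the tests above (a sketch; VMID, dump directory and options are placeholders for what I actually ran):

vzdump 100 --compress 0 --dumpdir /mnt/raid1backup/proxmox/dump      # uncompressed: verify ok
vzdump 100 --compress gzip --dumpdir /mnt/raid1backup/proxmox/dump   # gzip: broken
vzdump 100 --compress lzo --dumpdir /mnt/raid1backup/proxmox/dump    # lzo: broken
lzop -d -c /mnt/raid1backup/proxmox/dump/vzdump-qemu-100-*.vma.lzo | vma verify -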
Just tested, same here; please fix this very soon and update the packages, because I need a reliable backup/restore to keep testing the 5 beta
here is my output
restore vma archive: lzop -d -c /mnt/raid1backup/proxmox/dump/vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo|vma extract -v -r...
FOUND the problem!
Proxmox 5 defaults to the cfq scheduler instead of deadline!
root@pve5test:~# grep . /sys/block/sd*/queue/scheduler
noop deadline [cfq]
In the VM I then did
root@pve5test:~# echo deadline > /sys/block/sda/queue/scheduler
root@pve5test:~# grep . /sys/block/sd*/queue/scheduler...
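To make the change survive a reboot, one option is the classic elevator= kernel parameter (just a sketch, assuming a GRUB-based install inside the VM):

# in /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"
# then
update-grub && reboot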
But why does this matter? We are comparing a bare-metal installation of Proxmox 4 (ext4, barriers) with one of Proxmox 5 (ext4, barriers), and performance is dramatically lower.
Just try it yourself: create 2 VMs (same resources, same storage type and destination), install Proxmox 4 in one and 5 in the other, then compare...
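A quick way to compare is something like this (just a sketch; pveperf ships with Proxmox, the fio line assumes fio is installed and the parameters are only an example):

pveperf /var/lib/vz
fio --name=fsynctest --filename=/var/lib/vz/testfile --size=1G --bs=4k --rw=randwrite --fsync=1 --runtime=60 --time_based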