Search results

  1. corosync totem retransmit and cluster problem

    As I said, all the VM traffic is on vmbr1 and the DRBD traffic is on a separate 10Gbit NIC, so vmbr0 is used only for cluster traffic and GUI management (very rare). vmbr0/eth0, on all 3 nodes, is a Gbit connection to a Gbit switch (mii-tool confirms this: "eth0: negotiated 1000baseT-FD flow-control...
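A link-speed check like the one mentioned above can be sketched as follows; a canned `ethtool`-style sample keeps the snippet self-contained, and on a real node you would run `ethtool eth0` (or the older `mii-tool eth0` quoted in the post) directly, substituting your interface name.

```shell
# Canned sample standing in for `ethtool eth0` output (illustrative values).
sample='Settings for eth0:
	Speed: 1000Mb/s
	Duplex: Full'
# Pull out the negotiated speed and duplex lines.
printf '%s\n' "$sample" | grep -E 'Speed|Duplex'
```

A gigabit full-duplex link, as described in the post, would show "Speed: 1000Mb/s" and "Duplex: Full".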
  2. corosync totem retransmit and cluster problem

    Yes, exactly the same version of everything; I've always updated the nodes at the same time. I've checked the first node now (at 16:52) and found some weird things today too (that log file starts at Aug 6 06:39:54): root@prox01:~# grep -i totem /var/log/daemon.log | tail...
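The log filter from the post can be sketched like this; a temporary sample log keeps it self-contained, while on a real node you would point it at /var/log/daemon.log as in the quote.

```shell
# Build a small sample log (the corosync line format is illustrative).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Aug  6 06:39:54 prox01 corosync[2210]: [TOTEM ] Retransmit List: 1a 1b
Aug  6 06:40:01 prox01 cron[900]: (root) CMD (run-parts /etc/cron.hourly)
EOF
# Show only the most recent totem-related messages.
grep -i totem "$LOG" | tail -n 20
rm -f "$LOG"
```

Only the [TOTEM ] line survives the filter; on a healthy cluster this grep should return little or nothing.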
  3. corosync totem retransmit and cluster problem

    In a 3-node cluster, 2 with DRBD9 storage and one for quorum, I get corosync problems like those below 4-5 times a day. Sometimes I enter the web GUI and see the other nodes in red; pvecm status says everything is ok, but if I dare run a command, e.g. "qm list", it hangs and there is no way to...
  4. What are you using for the 3rd node?

    We have used a Mitac Pluto 220 (now there is also a 230 model); it is relatively cheap (around $200, I guess, once you have added an SSD and some RAM) and Intel-based, so you can bare-metal install Proxmox directly onto it.
  5. Problems getting SPICE/qxl to work under Ubuntu

    Just a shot in the dark... in Proxmox, is the VM hardware config set with Display=SPICE?
  6. Tom help me!

    You don't "mount" the virtual disks in Proxmox (as it seems you were trying to do in your first post). You have to make the virtual disk available in Proxmox (different storage technologies have different ways to do that) and "link" it to the VM in the vmid.conf file, so QEMU/KVM sees it as a block...
  7. Tom help me!

    I'm just shooting in the dark; it's really hard to help you since you provide confusing and partial info. I've googled a bit and found in this forum that VHDs are first converted into something that QEMU can understand, like a qcow2 file. If you use one of your 2 file-based storages (local or...
  8. Tom help me!

    In your config you have the boot disk as sata0, but there is no sata0 storage entry, so you probably boot from the cdrom as the fallback sequence. The only storage defined is ide2, which points to an Ubuntu ISO. Do you mean that the Ubuntu ISO is your VHD? If so, just check if the...
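The situation described above can be sketched as a hypothetical /etc/pve/qemu-server/&lt;vmid&gt;.conf fragment; the storage names, ISO name, and disk size are illustrative, not taken from the poster's actual config.

```
# Boot disk is declared as sata0, but no sata0 line exists below,
# so the VM falls back to booting from the cdrom (ide2).
bootdisk: sata0
ide2: local:iso/ubuntu.iso,media=cdrom
# A disk entry that would satisfy the boot setting might look like:
# sata0: local-lvm:vm-100-disk-0,size=32G
```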
  9. Tom help me!

    Your VM config shows NO disk defined (only the cdrom), so of course you can't mount it (if you boot from that cdrom; otherwise you don't boot at all). Sorry, but from your first comments it seems that you don't really have a clue how things work (no offence), so it's very hard to figure out how to...
  10. Ceph on PVE5 test cluster doubts and problems

    Hi, I'm testing Proxmox 5 (latest updates) and its Ceph (Luminous). I have 3 VMs with Proxmox installed, 2 as storage with a 32GB virtual disk each, and the 3rd with only local storage as the 3rd monitor. I've followed the instructions on https://pve.proxmox.com/wiki/Ceph_Server and, after some googling around, I've...
  11. Disable server shutdown/restart from the web interface

    It's a workaround because the design is flawed when you have full permissions, and to fix this problem you suggest using fewer permissions (a workaround). In other words, you are not fixing the problem, i.e. moving the buttons, but suggesting logging in in a way that prevents using them at all. If I...
  12. Error on Cluster Node Add

    Just a shot in the dark: from the error message it seems that you don't have the 'mkdir' command available on the server (I don't know which one). Check if that is the case (i.e. issue the command 'which mkdir'). If it is absent, install the package "coreutils", but it is really strange and scary if it's missing...
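The check suggested in the post can be sketched as a small shell snippet; `command -v` is used instead of `which` since it is the POSIX-portable form, and the apt-get line is only a hint for Debian-based systems such as Proxmox VE.

```shell
# Verify that mkdir is available on this host.
if command -v mkdir >/dev/null 2>&1; then
    echo "mkdir found at: $(command -v mkdir)"
else
    # On Debian-based systems mkdir ships in the coreutils package.
    echo "mkdir missing - try: apt-get install coreutils"
fi
```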
  13. Cannot restore VM backup image at 5.0beta1(5.0-5) : ERROR **: restore failed - wrong vma extent

    The problem seems solved with yesterday's updates from the 5.x test repo, thanks a lot.
  14. drbdmanage license change

    AFAIK drbd9 has fewer issues with split brain, and the "best practice" to avoid it with drbd8 (having half the storage used by one node for its VMs and the other half by the other node) is really a waste of space (2x) and a restriction of flexibility (i.e. I have a slower node that is good if usually...
  15. drbdmanage license change

    I second this request; I did not know about that change (dated 20 January 2017)! https://www.linbit.com/en/drbd-manage-faq/ So far I've found no replacement for the 2 storage nodes + quorum node configuration that is possible with drbd9, so I think it's very important for all of us with small requirements...
  16. Cannot restore VM backup image at 5.0beta1(5.0-5) : ERROR **: restore failed - wrong vma extent

    I've done some further tests: changing the scheduler to the older default, deadline, doesn't solve the problem; without compression the backup works fine (vma verify is ok, and the restored VM also seems ok); with gzip we have the same issue; with lzop we have the same issue; all the above via the GUI...
  17. Cannot restore VM backup image at 5.0beta1(5.0-5) : ERROR **: restore failed - wrong vma extent

    I have to add that I've transferred the backup onto a 4.3 Proxmox, and it seems broken there too: lzop -d -c vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo | vma verify - ** (process:10046): ERROR **: verify failed - wrong vma extent header checksum Trace/breakpoint trap so it seems that it is the backup...
  18. Cannot restore VM backup image at 5.0beta1(5.0-5) : ERROR **: restore failed - wrong vma extent

    Just tested, same here; please fix very soon and update the packages, because I need a reliable backup/restore to keep testing 5 beta. Here is my output restoring the vma archive: lzop -d -c /mnt/raid1backup/proxmox/dump/vzdump-qemu-100-2017_03_29-17_24_58.vma.lzo|vma extract -v -r...
  19. Proxmox 5 beta, Areca 1883i, 10x LESS Fsync/Sec vs 4.x

    FOUND the problem! Proxmox 5 defaults to the cfq scheduler instead of deadline! root@pve5test:~# grep . /sys/block/sd*/queue/scheduler noop deadline [cfq] In the VM I then did root@pve5test:~# echo deadline > /sys/block/sda/queue/scheduler root@pve5test:~# grep . /sys/block/sd*/queue/scheduler...
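The scheduler check above can be sketched as follows; the bracketed token in /sys/block/&lt;dev&gt;/queue/scheduler marks the active scheduler, and a canned sample stands in here so the snippet runs anywhere (on a real host, read the sysfs file directly).

```shell
# Canned sample of /sys/block/sda/queue/scheduler contents (from the post).
sample='noop deadline [cfq]'
# The bracketed entry is the currently active scheduler; extract it.
active=$(printf '%s\n' "$sample" | grep -o '\[[a-z_-]*\]' | tr -d '[]')
echo "active scheduler: $active"    # prints "active scheduler: cfq"
# To switch on a real host (root required; not persistent across reboots):
# echo deadline > /sys/block/sda/queue/scheduler
```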
  20. Proxmox 5 beta, Areca 1883i, 10x LESS Fsync/Sec vs 4.x

    But why does this matter? We are comparing a bare-metal installation of Proxmox 4 (ext4, barrier) with one of Proxmox 5 (ext4, barrier), and performance is dramatically lower. Just try it yourself: create 2 VMs (same resources, same storage type and destination), install Proxmox 4 and 5, then compare...
