Search results

  1. grin

    [SOLVED] softdog reboots while having quorum

    Nov 17 10:09:26 bran corosync[4681]: [TOTEM ] A processor failed, forming new configuration.
    Nov 17 10:09:29 bran corosync[4681]: [TOTEM ] A new membership (10.20.30.40:1884) was formed. Members left: 4
    Nov 17 10:09:29 bran corosync[4681]: [TOTEM ] Failed to receive the leave message. failed...
  2. grin

    All functions became slooow (corosync problem?)

    More info (you won't be happy, I guess): due to that *&^%$#@!ing 60 sec fencing (I will murder it someday while praying for real fencing) one cluster in question has rebooted. The result: it's fast again. corosync-quorumtool returns in 13 ms. Where should I look (on the still-slow clusters)? (A hedged diagnostic sketch follows after these results.)
  3. grin

    All functions became slooow (corosync problem?)

    corosync-quorumtool -l takes forever, about 150 sec. There's plenty of traffic on ports 5404 and 5405. [update] Somehow it seems to be related to corosync: on clusters with fast response, quorumtool responds immediately. I do not see anything in the logs, however: Nov 15 16:36:08 node01 corosync[2941]...
  4. grin

    All functions became slooow (corosync problem?)

    On some clusters (4.2 and 4.3) every management function has started to get slow; GUI refresh calls take 4+ seconds (literally anything, from options to summary), and 'pct' on the console (even without parameters) takes 4-8 sec to finish. pvecm took 48 sec just now for a 4-node test cluster with full quorum. I have...
  5. grin

    New Server build xfs or ext4

    Well, I have been working on a few places in the kernel in the past, but what you say is correct. Still, the ZFS module has been incompatible with newly released kernels for long periods of time (half a year of kernel-incompatible changes is way too long for me) and it didn't make me quite...
  6. grin

    New Server build xfs or ext4

    Tom, please, do not descend to that level. The "kernel" is the kernel. "Proxmox kernel" is the [vanilla] kernel plus various external patches applied by proxmox. I asked whether the kernel contains ZFS now, and you have answered a question I haven't asked; if I understand correctly your answer...
  7. grin

    New Server build xfs or ext4

    I wonder what data you have used to reach that conclusion? :-) On second thought... no, I am not. Is ZFS in the kernel yet, or is it still a repeatedly breaking separate module? I have been testing it from time to time, and have lost a bunch of test volumes in due course since around... 2010...
  8. grin

    XFS for LXC container

    Better performance, more reliability. But this is a religious debate, really. My counter-question is: why not? (And I am searching the forum for how to create an xfs-formatted LXC container from the start.... so far no luck.)
  9. grin

    New Server build xfs or ext4

    This is good advice for XFS versions from around the year 2000. The block-zeroing bug was fixed, like, a decade ago? ext4 is slow. zfs is not for serious use (or is it in the kernel yet?). xfs is really nice and reliable. Yes, even after serial crashing. ;-)
  10. grin

    ceph don't seem to be able to do linked clone, and full cloning is slow

    Answering myself on question 1: linked clone only works from templates, it seems, but it uses real Ceph clones. (A hedged rbd-level sketch follows after these results.)
  11. grin

    ceph don't seem to be able to do linked clone, and full cloning is slow

    The GUI only offers Full Clone for VMs on Ceph volumes; Ceph should be able to create linked clones fast and easily (as well as lightweight snapshots, by the way). Full Clone creation doesn't seem to use Ceph and is thus extremely slow. A PVE full clone was created in 43 minutes, while doing it in the...
  12. grin

    Annoying no-scroll menu in the new gui

    Okay, I accept that it's a Firefox issue; what next? :-) Should we reverse engineer the problem, or could you report it, if you believe it is not something which ought to be fixed locally?
  13. grin

    Anything like 'vzctl enter' for KVM machines?

    As a side note: https://pve.proxmox.com/wiki/Serial_Terminal is what he was looking for, I guess. (A short command sketch follows after these results.)
  14. grin

    GUI don't allow partitions as Ceph OSD journal devices

    Indeed, it's been fixed. Still, the original problem exists: Ceph journals cannot be partitions, only whole devices. One possible way to use SSDs is to partition them among the OSDs (a hedged sketch follows after these results); that seems to be a common failure point, but since a pool consists of dozens of OSDs on plenty of machines, its risk is...
  15. grin

    Annoying no-scroll menu in the new gui

    My colleagues told me the same, but anyway, the resolution is 1280x1024 (this is my smaller secondary monitor :)) and the browser is Firefox Nightly (52.0a1 (2016-10-16) (64-bit)). [UPDATE4 :)] Yes, in Chrome it does scroll, but not in Firefox. (The attachment management of the forum seems to...
  16. grin

    Annoying no-scroll menu in the new gui

    4.3-3/557191d3 looks great but feels like a PITA to use. The menu has been converted from a top line to a left block, which is bad enough (it uses ten times the original screen estate), but it doesn't scroll with the mouse scrollwheel. This just hurts. Every time. I have to go down to the...
  17. grin

    datacenter level command line tools (like list all CTs)

    Yep, pvesh seems to be the way to go; it's the API for the masses, really. (A small usage sketch follows after these results.)
  18. grin

    GUI don't allow partitions as Ceph OSD journal devices

    Yes. Fresh install.
    proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
    pve-manager: 4.3-1 (running version: 4.3-1/e7cdc165)
    pve-kernel-4.4.19-1-pve: 4.4.19-66
    lvm2: 2.02.116-pve3
    corosync-pve: 2.4.0-1
    libqb0: 1.0-1
    pve-cluster: 4.0-46
    qemu-server: 4.0-88
    pve-firmware: 1.1-9
    libpve-common-perl...
  19. grin

    GUI don't allow partitions as Ceph OSD journal devices

    Also, creating an OSD through the GUI doesn't work for the same reason: no disks are visible to PVE.
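
A few hedged command sketches, referenced from the results above. Regarding results 2-4 (slow management functions, suspected corosync): a minimal diagnostic sketch, not taken from the threads, assuming stock corosync tooling on the nodes; the interface and node names are placeholders.

    # time the quorum query itself (the thread reports 150 sec vs. 13 ms)
    time corosync-quorumtool -s
    # show ring and membership status as corosync sees it
    corosync-cfgtool -s
    # watch the cluster traffic mentioned on UDP ports 5404/5405 (replace eth0)
    tcpdump -ni eth0 udp port 5404 or udp port 5405
    # test multicast between the nodes with omping (run on all nodes simultaneously), if installed
    omping -c 60 -i 1 node01 node02 node03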
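
Regarding results 10 and 11 (linked clones on Ceph): a hedged sketch of what a "real ceph clone" means at the rbd level; the pool, image, and snapshot names are placeholders, and PVE may not invoke it exactly this way internally.

    # snapshot the template image and protect the snapshot so it can be cloned
    rbd snap create rbd/vm-100-disk-1@base
    rbd snap protect rbd/vm-100-disk-1@base
    # create a copy-on-write (linked) clone of the protected snapshot
    rbd clone rbd/vm-100-disk-1@base rbd/vm-101-disk-1
    # verify by listing the children of the snapshot
    rbd children rbd/vm-100-disk-1@base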
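
Regarding result 13 ('vzctl enter' for KVM guests): a short sketch along the lines of the linked Serial_Terminal wiki page; the VMID 100 and the guest-side getty unit are placeholder examples.

    # attach a socket-backed serial port to the VM
    qm set 100 -serial0 socket
    # inside the guest, put a login prompt on that port (systemd example)
    systemctl enable --now serial-getty@ttyS0.service
    # then connect from the host
    qm terminal 100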
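
Regarding result 14 (sharing one SSD as journal for several OSDs): a hedged workaround sketch assuming the ceph-disk tool of that Ceph generation; device names and partition sizes are placeholders, and this bypasses rather than fixes the GUI limitation described in the thread.

    # carve one journal partition per OSD out of the SSD
    sgdisk --new=1:0:+10G /dev/sdb
    sgdisk --new=2:0:+10G /dev/sdb
    # prepare each OSD with its data disk and its journal partition
    ceph-disk prepare /dev/sdd /dev/sdb1
    ceph-disk prepare /dev/sde /dev/sdb2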
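
Regarding result 17 (datacenter-wide listings with pvesh): a small usage sketch; the node name is a placeholder and the exact option spelling may differ between PVE versions.

    # list every VM and container across the whole cluster
    pvesh get /cluster/resources --type vm
    # or list the containers of a single node
    pvesh get /nodes/node01/lxc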