Search results

  1. Processes die in CTs with Holy Crap error in dmesg

    ...does not seem to hurt that much: Changelog for kernel 042stab080.2: [cpt] frightening "Holy Crap X" messages has been reworked/removed :)
  2. No Installation possible Haswell Xeon 1225v3 X10SLM-F Supermicro

    I can confirm that this kernel works with an E3-1220v3 (on an Asus RS300-E8-PS4); installation from the ISO worked only with the kernel option "nousb".
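
    A hedged side note, assuming a standard Debian GRUB 2 setup (which Proxmox VE 3.x uses): if such a kernel option is also needed on the installed system, keeping it would look roughly like this (the default line may differ on your system):

        # /etc/default/grub - append the option to the kernel command line
        GRUB_CMDLINE_LINUX_DEFAULT="quiet nousb"

        # then regenerate the boot configuration
        update-grub
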
  3. PVE 3.0 - cluster with 3 nodes - fencing on nodes startup

    So I guess this means that he should power on 2 nodes, wait for them to be up, and then power on the 3rd. At least this is how our testing cluster works - one of the nodes in use boots significantly more slowly, so it is also fenced during power-up... May I ask - why? I find IPMI - "real" IPMI - to...
  4. Proxmox 3.0 Cluster setup help

    I don't think that this would work - there should be an IP on a network where both nodes are connected. Maybe your /etc/hosts is not correct - your hostnames should resolve...
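
    For illustration, a minimal /etc/hosts sketch for two nodes (the hostnames and the 192.168.10.x addresses are made up; the point is that each node's hostname resolves to its address on the shared cluster network):

        127.0.0.1       localhost
        192.168.10.11   pve-node1.example.local pve-node1
        192.168.10.12   pve-node2.example.local pve-node2
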
  5. pve-upgrade-2.3-to-3.0 fails with a dist-upgrade failed

    Does an "aptitude install dpkg" work? It looks as if that's still the old version... With dpkg version 1.16.x, that "unknown directive" error should disappear.
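
    A small sketch of that check, using plain Debian tooling (nothing Proxmox-specific assumed):

        # show which dpkg version is currently installed
        dpkg -s dpkg | grep '^Version'

        # pull in the newer dpkg (1.16.x on wheezy), then retry the upgrade
        aptitude install dpkg
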
  6. pve-upgrade-2.3-to-3.0 fails with a dist-upgrade failed

    I'd try an "aptitude clean" - and just restart the upgrade script. I also upgraded a few installations this way - not a single one went through without errors, but all were successful in the end, after some fix/restart iterations.
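
    Roughly, the fix/restart loop described above (the script name comes from the thread title; the path it is run from is an assumption):

        # clear out the local package cache, then re-run the upgrade script
        aptitude clean
        ./pve-upgrade-2.3-to-3.0

        # repeat until the script runs through without errors
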
  7. Updates for Proxmox VE 3.0 - including storage migration ("qm move")

    Storage migration - great feature, thanks! Just testing a bit - two findings so far: - if the target storage is too small to physically hold the to-be-migrated image, the migration process stalls and times out with some message about "probable bad sectors" - a qcow2 image...
  8. OpenVZ Ploop support

    Well, you didn't share this thinking before, so it was a bit difficult to understand... ;) To be honest, it never occurred to me to doubt the KVM single-file approach, since imo it has always been the "well-known, cemented standard" in the VM industry... (probably introduced to the wider public by...
  9. OpenVZ Ploop support

    Ah, ok, I agree - but why compare with KVM here anyway? In comparison with the current *OpenVZ* support, I'd say ploop is "less limited than the standard container fs". At least as a concept; it has yet to prove its stability, imo...
  10. OpenVZ Ploop support

    ...and conversely this means KVM and qcow2 are "unlimited"? Well... I'd say both have their limits and advantages, as always... I didn't doubt that; I just wanted to assist the ploop supporters... :)
  11. OpenVZ Ploop support

    Hm, are you really calling OpenVZ a "limited technology"? Then I'd wonder why it is supported by Proxmox at all... I really like OpenVZ for running large numbers of containers with a bit of easy "over-provisioning". Using "ploop" imo would have quite some advantages: - snapshots - snapshots! :) -...
  12. Proxmox clustered and SAN problems

    As udo said - your NFS doesn't show up (that's your "shared FS" for ISOs etc.) - and in our experience, a configured but non-working NFS service can also make the GUI show strange things...
  13. Proxmox Ve 2.2 2 node cluster: NODES restarting now and then!?

    You only have 2 nodes? How do you ensure quorum? You probably need either a 3rd node, or at least a sort of "fake quorum-only node" for HA...
  14. Updates for Proxmox VE 2.2

    We once had a similar issue - when Proxmox moved to the RHEL kernel, we had a node that locked up randomly (afair with a panic). Using a stress-test tool, we were able to reproduce this - and also to show that kernel versions before the new one did not have the problem. In the end it turned out that...
  15. Backup: "referential integrity" missing?

    I see; thanks for clarifying...
  16. Backup: "referential integrity" missing?

    Hi, not sure if this is a bug or simply something to "take care of": with version "2.2/c1614c8c", deleting (destroying) a VM that is part of a backup job results in "Backup Error" messages and mails, unless the destroyed VM is manually removed from that backup job...
  17. PVE Cluster Node Maintenance Mode?

    ...our "workaround" for now ;-)

        #!/bin/bash
        if [ -z "$1" ] ; then
            echo "usage: $0 <target-node> [offline]"
            echo " Migrate all CTs and VMs on local node to target node."
            echo " Online- (Live-)migration is default."
            echo " In case live migration fails, please try offline...

  18. Problem with migration and/or Openvz config with disk_quota

    I found a workaround for this - since we're using vps.mount/umount scripts anyway to activate xfs project quota, those scripts now also simply check whether "DISK_QUOTA" is set to "no" in $VEID.conf. Looks good so far. Btw, there seems to be a little problem with Proxmox creating fresh containers...
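
    A minimal sketch of such a check, assuming the usual OpenVZ hook environment where $VEID is set and the container config lives at /etc/vz/conf/$VEID.conf; the quota setup itself is only hinted at by a comment:

        #!/bin/bash
        # vps.mount excerpt: skip the xfs project quota setup when the
        # container config explicitly disables disk quota
        . "/etc/vz/conf/${VEID}.conf"
        if [ "$DISK_QUOTA" = "no" ]; then
            exit 0    # nothing to do for this container
        fi
        # ... activate xfs project quota for $VEID here ...
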
  19. Problem with migration and/or Openvz config with disk_quota

    Well, the key word was 'automatically'. Having to do this manually would disqualify it from productive use in a team of admins (for us), who would then have to follow a 'soft policy' of manually going to the CLI to add that line... Hm, perhaps some cron job could do/check this, let's see...
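
    A sketch of what such a cron job could look like (the config path and the exact DISK_QUOTA line are assumptions; it only appends the setting where it is missing entirely):

        #!/bin/bash
        # nightly check: append DISK_QUOTA="no" to any container config
        # that does not set DISK_QUOTA at all
        for conf in /etc/vz/conf/[0-9]*.conf; do
            [ -e "$conf" ] || continue
            grep -q '^DISK_QUOTA=' "$conf" || echo 'DISK_QUOTA="no"' >> "$conf"
        done
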
  20. Problem with migration and/or Openvz config with disk_quota

    I understand; however, this looks more like OpenVZMigrate.pm operating on a non-existent parameter, which imo is "not clean" anyway ;-) So the question would be: is there a way to get "DISK_QUOTA=no" into $VEID.conf automatically, besides "hard-editing" API2/OpenVZ.pm?