Search results

  1. Prevent user from overwriting firewall rules

    Hi, I am using the latest Proxmox version, pve-manager/4.3-3/557191d3. I want users to be able to change or add their own firewall rules, but not to activate or deactivate the firewall in general, or to deactivate the IP filter. With the PVEVMAdmin user rights, a user can change ALL settings, including the IP filter, which is not...
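
    Custom roles are the usual way to grant rights narrower than the built-in PVEVMAdmin. A minimal sketch, assuming PVE 4.x privilege names; the role name, VM id, and user are hypothetical, and whether any privilege split keeps the firewall toggle out of reach depends on the version:

        # create a reduced role and grant it to one user on one VM
        pveum roleadd VMRuleEditor -privs "VM.Console VM.PowerMgmt VM.Config.Network"
        pveum aclmod /vms/100 -users someuser@pve -roles VMRuleEditor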
  2. Ceph VM backup and restore on PVE 4.1 very slow

    Any solution to this? Slow restore speed?
  3. best upgrade path for ceph

    Hi, we want to upgrade from PVE 3.4 to 4.2 and from Ceph Firefly to Hammer (Jewel is not a good idea at the moment). I read on the Ceph homepage that the update is non-trivial? We did a PVE upgrade with our test cluster, no problem. What is the best way to update Ceph? Directly all machines at...
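
    The usual Ceph order is monitors first, then OSDs node by node, with rebalancing suppressed in between. A rough sketch, assuming Debian packages and the sysvinit ceph script as on PVE 3.x; check the Firefly-to-Hammer release notes before relying on it:

        ceph osd set noout        # keep CRUSH from rebalancing while daemons restart
        # on each monitor node, one at a time:
        apt-get update && apt-get install ceph
        service ceph restart mon
        # then on each OSD node, one at a time:
        service ceph restart osd
        ceph osd unset noout      # re-enable rebalancing once every node is done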
  4. Proxmox 3.4 and Ceph Firefly not supported anymore? create xfs error.

    Yes, that's right... it's a Ceph bug. Before, I had osd mkfs options xfs = "-f -i size=2048" and it worked for the old servers (and not for the newest Ceph Firefly). Now I changed it to osd mkfs options xfs = "" and it works...
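
    For reference, the change described above would sit in ceph.conf roughly like this (section placement can vary between setups):

        # /etc/ceph/ceph.conf
        [osd]
        # old value, failed on newer Firefly point releases:
        # osd mkfs options xfs = -f -i size=2048
        osd mkfs options xfs =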
  5. Proxmox 3.4 and Ceph Firefly not supported anymore? create xfs error.

    We are still using Proxmox 3.4 (we want to update to 4 in one month) and Ceph Firefly, but we cannot install new OSDs anymore. It says: ceph4 pvedaemon[15035]: command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid 820e9ff3-7d60-4244-a606-0d11e96b9504 /dev/sdc' failed...
  6. more than 24 nics possible? unstable?

    I have a VM server (Linux, Ubuntu 14.04) that has been running for some time with 24 NICs... perfect. Now I have two new VM (KVM) servers, Ubuntu 14.04, up to date, but after some days they freeze (100% CPU). They have 30 NICs each. Some NICs are e1000 and some virtio. Could virtio be the problem, or the 30 NICs...
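
    One way to test the virtio theory is to switch the model of one interface at a time and watch whether the freezes follow. A sketch; the VM id, interface index, and bridge are examples:

        # switch net5 of VM 101 from e1000 to virtio, then verify
        qm set 101 -net5 virtio,bridge=vmbr0
        qm config 101 | grep ^net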
  7. kvm crash - kernel panic

    OK, I updated to the latest pve-kernel-3.10.0-16-pve, and after 3 hours the server froze. Same as here: https://forum.proxmox.com/threads/possibly-unstable-pve-kernel-3-10-0-15-pve-and-pve-kernel-3-10-0-16-pve.25767/ Now I have downgraded to pve-kernel-3.10.0-14-pve and it seems to work...
  8. kvm crash - kernel panic

    Hi, for years we have been experiencing kernel panics on our host machines, roughly once a year. It does not matter which hardware it was or which Proxmox version. This time one of our virtual machines always had a BSOD. It could not be restarted. After the third reset the node machine also had a...
  9. real memory consumption is double the setting in proxmox

    Hi, I am running pve-manager/3.4-1/3f2d890e (running kernel: 3.10.0-7-pve) on one node. Some virtual machines consume a lot more memory than they should. For example, I set a fixed 16GB for machine id 212, and top shows me RES 22.3GB and VIRT even 29GB! I am not able to start more VMs as the...
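
    The qemu process always carries host-side overhead on top of the guest RAM (qemu itself, video RAM, caches), so some gap between the setting and RES is expected. A quick way to compare the two, using VM id 212 from the post:

        qm config 212 | grep ^memory                 # configured guest RAM in MB
        ps -eo rss,vsz,args | grep '[k]vm -id 212'   # host-side RSS/VSZ of the process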
  10. virt disk on local filesystem slow write vs. lvm

    Hi, I have some servers where I use LVM for the virtual disks. As it is not possible to set up an LVM volume on the existing disk at installation time (PVE occupies 100% of it with its own LVM), I only have "local" storage now. (On the old servers I first installed Ubuntu and did the Proxmox installation afterwards.) It's the...
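
    To put numbers on the difference, the same write test can be run inside a VM backed by each storage type. A minimal sketch; the path is arbitrary, and oflag=direct bypasses the page cache so runs stay comparable:

        dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
        rm /tmp/ddtest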
  11. [SOLVED] live migration fails

    After the failure I saw that start and stop did not work on the target server either. It was a missing bridge. Just as a note: don't look only at the migration task in the panel. The start/stop tasks which also appear can tell you some more information :-)
  12. [SOLVED] live migration fails

    Jul 09 15:23:13 starting migration of VM 109 to node 'ceph2' (192.168.11.32)
    Jul 09 15:23:13 copying disk images
    Jul 09 15:23:15 starting VM 109 on remote node 'ceph2'
    Jul 09 15:23:16 start failed: command '/usr/bin/kvm -id 109 -chardev...
  13. can storage migration result in bad data?

    Hmmm, OK. Does Ceph have the discard/trimming feature of qemu? I think with NAS storage over NFS it shouldn't be a problem, but I have to move around 10 VMs to Ceph, and until Proxmox 4 is stable some time will go by... Could this also happen with restoring backups? (Data corruption with Ceph RBD?) This...
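
    Discard/TRIM is enabled per disk in the VM config, and the guest also has to issue TRIM for it to matter. A sketch with made-up storage and volume names, assuming a qemu-server version that supports the discard flag:

        # pass discard requests from the guest through to the RBD volume
        qm set 109 --scsi0 ceph_storage:vm-109-disk-1,discard=on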
  14. can storage migration result in bad data?

    My tests in the test environment worked well. Still, I would like to know if this bug is fixed now. In the repository there are even newer packages of qemu-server. I am still afraid to do migrations, because the last ones were a disaster.
  15. can storage migration result in bad data?

    Thank you. I will install some machines in my test environment and try to install the new qemu version. Still, I first have to reproduce the bug from the production cluster's storage migration in the test environment. I am not 100% sure if every time I transferred a machine to the cluster...
  16. can storage migration result in bad data?

    But this means that with anything other than writethrough, a storage migration could result in data loss/corruption? If I have any kind of caching, an unexpected shutdown can logically lead to data loss... but also a storage migration?
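
    The cache mode is a per-disk option in the VM config. A sketch for pinning a disk to writethrough (VM id, storage, and volume are examples); writethrough acknowledges writes only once they reach stable storage, at the cost of write speed:

        qm set 109 --virtio0 local:vm-109-disk-1,cache=writethrough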
  17. can storage migration result in bad data?

    proxmox-ve-2.6.32: 3.3-147 (running kernel: 3.10.0-7-pve)
    pve-manager: 3.4-1 (running version: 3.4-1/3f2d890e)
    pve-kernel-3.10.0-7-pve: 3.10.0-27
    pve-kernel-2.6.32-32-pve: 2.6.32-136
    pve-kernel-2.6.32-30-pve: 2.6.32-130
    pve-kernel-2.6.32-37-pve: 2.6.32-147
    pve-kernel-2.6.32-29-pve: 2.6.32-126...
  18. can storage migration result in bad data?

    I did a storage migration from local LVM to Ceph. All went fine, but after the next start of Windows I had a lot of corrupt files when rebooting the Windows server (Windows 2008).
  19. Here's to another 10 Years!

    +1, thanks for the great work!