Search results

  1. Proxmox 4.0 VE fresh install: can't shutdown VMs with host

    Shutdown logs? (Not the task viewer, but the host logs on shutdown.) You can easily capture those using a configured IPMI serial console. Maybe boot without "quiet".
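    The suggestion above can be sketched as follows, assuming ipmitool and a BMC with Serial-over-LAN enabled; the serial port (ttyS1), baud rate, BMC address, and credentials are placeholders you'd adapt to your hardware:

```shell
# Hypothetical sketch: capture host shutdown logs over IPMI Serial-over-LAN.
# 1. Route kernel console output to the serial port and drop "quiet"
#    by editing /etc/default/grub (ttyS1/115200 are assumptions):
#      GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS1,115200"
# 2. Apply the bootloader change:
update-grub
# 3. From another machine, attach to the serial console, then reboot the host
#    and watch the shutdown messages scroll by:
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> sol activate
```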
  2. VE 4.0 Kernel Panic on HP Proliant servers

    Try to limit memory to a single NUMA node. (numactl -H to see how much memory is allocated per node)
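    A minimal sketch of that test, assuming numactl is installed; node 0 and the placeholder command are assumptions:

```shell
# Show the NUMA topology: how much memory each node has and how much is free.
numactl -H
# Pin a test workload to node 0's CPUs and memory, so all of its
# allocations come from a single NUMA node:
numactl --cpunodebind=0 --membind=0 <command>
```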
  3. Proxmox 4.0 VE fresh install: can't shutdown VMs with host

    Oh, I've got it now, sorry. So shutdown in the GUI works as expected. Hm, I don't have this issue because I explicitly shut down important VMs before a planned host shutdown. If you do service pve-manager stop, does it return immediately? That should take down the VMs and CTs.
  4. Proxmox 4.0 VE fresh install: can't shutdown VMs with host

    it finishes fine because it kills the VM on timeout.
  5. Proxmox 4.0 VE fresh install: can't shutdown VMs with host

    For Linux guests you have to have acpid installed, so the guest actually responds to the host's ACPI shutdown request.
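    On a Debian or Ubuntu guest that is a single package; a sketch (the package-manager commands depend on the guest's distribution):

```shell
# Inside the Linux guest: install and enable acpid so the guest
# reacts to the ACPI power-button event the host sends on shutdown.
apt-get update && apt-get install -y acpid
systemctl enable --now acpid
```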
  6. Issue with CentOS7 httpd spawing: Permission denied (PVE4 beta 2)

    Actually the same happens on Debian, too. So it is a distribution/app problem. You can report it upstream, or you can keep your own custom templates with this fix in place.
  7. Issue with CentOS7 httpd spawing: Permission denied (PVE4 beta 2)

    Editing a permanent file on a permanent storage medium (i.e. a hard drive) is not permanent enough for you?
  8. [SOLVED] proxmox 4, live migration , freebsd, boot error , root partition

    And what is the link between qcow2, CentOS and your migrated FreeBSD machine?
  9. [SOLVED] PVE 4 CPU Cores

    Are you sure it is a hard limit at 30GB? Like, it does it at 31 but not at 30? Or does it simply get worse and worse as you increase the RAM size? I think allocating and freeing 320GB of RAM (if qemu/KVM or the guest initializes it) is not cheap in any scenario. I've seen that KVM indeed pegs all...
  10. ZFS samba share best practice

    Your network will be the bottleneck. Don't worry about running Samba in a CT.
  11. [SOLVED] Deduplication on an dataset - has anyone this feature active?

    No, the dedup table has an entry for each block, and ZFS uses variable block sizes up to "recordsize". If all blocks were 128k it would be fine; smaller blocks are worse, because there are more of them, meaning more entries in the DDT. Dedup is useless anyway, because storage is cheaper than RAM...
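    The RAM cost is simple arithmetic; a sketch assuming the commonly cited figure of roughly 320 bytes of core memory per DDT entry (the exact per-entry size varies by pool and implementation):

```python
def ddt_ram_bytes(data_bytes: int, block_size: int, entry_bytes: int = 320) -> int:
    """Rough in-core dedup-table size: one DDT entry per unique block."""
    n_blocks = data_bytes // block_size  # worst case: no block actually deduplicates
    return n_blocks * entry_bytes

TIB = 1024 ** 4
# 1 TiB stored as 128 KiB records -> ~2.5 GiB of DDT
print(ddt_ram_bytes(TIB, 128 * 1024) / 1024 ** 3)  # 2.5
# the same 1 TiB in 8 KiB blocks -> 16x the entries, ~40 GiB of DDT
print(ddt_ram_bytes(TIB, 8 * 1024) / 1024 ** 3)    # 40.0
```

    This is why smaller blocks hurt: halving the block size doubles the entry count for the same amount of data.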
  12. [SOLVED] proxmox 4, live migration , freebsd, boot error , root partition

    Your disk name has changed. I don't know enough about FreeBSD, but if you've chosen a virtio disk for your VM, its device name should be something like "vtbd..."
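    For illustration, the guest's /etc/fstab would then need entries along these lines; vtbd0 and the partition labels are assumptions for a typical FreeBSD UFS layout, not taken from the thread:

```
# /etc/fstab inside the FreeBSD guest, after switching to a virtio disk
# Device        Mountpoint  FStype  Options  Dump  Pass#
/dev/vtbd0s1a   /           ufs     rw       1     1
/dev/vtbd0s1b   none        swap    sw       0     0
```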
  13. pct start vs lxc-start

    Based on what I've read so far, it seems that it happens when there is high network traffic (or many sockets open?) and the veth peer is pinned in the container's namespace, so the outside peer can't be gracefully removed. My opinion, again, is to simply ip link delete all the container's...
  14. pct start vs lxc-start

    No, it didn't work. For now I've added a hack; maybe somebody else can elaborate on it. In /usr/share/lxc/hooks/lxc-pve-poststop-hook I've put "ip link delete veth${vmid}i0" and it works every time. Btw, this issue goes back to 2013, it seems...
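    The workaround described above would look roughly like this as a hook snippet; the hook path and veth naming scheme come from the post, while the way $vmid is obtained is an assumption, and the whole thing is a hack rather than a fix:

```shell
# Appended to /usr/share/lxc/hooks/lxc-pve-poststop-hook:
# forcibly remove the container's host-side veth peer if it was left behind.
vmid="$1"   # assumption: the hook receives the container name/ID as its first argument
ip link delete "veth${vmid}i0" 2>/dev/null || true
```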
  15. pct start vs lxc-start

    No, I can't. Unfortunately, I don't think a stripped-down clone will help: since this is such a hard-to-hit error, I'm afraid any change in the configuration (like the number of torrents in transmission's cache) will make it work. I'm pretty sure that the 2nd attempt (on failed startup) to clear...
  16. choose new controller for new install (4.0) system

    A PERC 5/i is very cheap and pretty fast. It also has a BBU and cache.
  17. VE 4.0 Kernel Panic on HP Proliant servers

    I don't know if it helps with NMI, but you should try kdump to get more information on what is going wrong.
  18. pct start vs lxc-start

    Here it is:

    arch: amd64
    cpulimit: 2
    cpuunits: 1024
    hostname: SeedBox
    memory: 512
    net0: name=eth0,hwaddr=12:B0:87:0B:B4:6E,bridge=vmbr0,ip=192.168.27.21/24,gw=192.168.27.1
    onboot: 1
    ostype: ubuntu
    rootfs: Containers:subvol-101-rootfs
    swap: 512
    lxc.mount.entry: /raid0/torrents data/torrents none...
  19. pct start vs lxc-start

    Nope, just rebooted, so everything is new (4.0-51), including the kernel. Same problem. Anyway, considering that a single container is doing this while all containers share the same template, it means a specific service is at fault.
