Search results

  1. High IO with a lot of LXC Containers

    Just from experience I would think that you should drop one layer of proxmox (I cannot see why you need to run proxmox in a VM inside proxmox when proxmox provides nice features like pools & SDN to keep things separated) and run your VM with the containers on a proxmox instance that is not...
  2. How to change template(windows) and update linked clones

    I just thought about it a little more (and read up somewhat) and understand that the linked clone is based on a snapshot of the template. So once a snapshot is made, all changes go to the clone. If the template is changed, the clone still refers to the snapshot and does not include the...
  3. How to change template(windows) and update linked clones

    I'm surprised to learn that a linked clone cannot be updated by updating the template the clone is linked to. Why is this? Is there a technical reason that one of the devs here could explain? If it were possible to update a template and have the linked clone reflect the modification, it would be a... (see the qm clone sketch after the list)
  4. High IO with a lot of LXC Containers

    I don't see much feedback on your issue, @henning, and I'm wondering if you managed to figure this out?
  5. Disk errors on FreeBSD 12.2 guest

    We ended up recreating the VM and it runs perfectly now. What went wrong we will probably never know.
  6. Disk errors on FreeBSD 12.2 guest

    No joy! It still doesn't succeed in writing to the disk properly.
  7. Disk errors on FreeBSD 12.2 guest

    Of course! I changed that to scsi0 and then fixed fstab in FreeBSD to use the changed name and now it seems to be fine. (See the disk-bus sketch after the list.)
  8. Disk errors on FreeBSD 12.2 guest

    We have recently upgraded to kernel 5.15 and now we're having disk errors on one guest running FreeBSD 12.2. The config is: agent: 1 balloon: 8192 boot: cd bootdisk: virtio0 cores: 8 ide2: cephfs:iso/FreeBSD-11.4-RELEASE-amd64-disc1.iso,media=cdrom memory: 32768 name: VO-IRIS-Poller...
  9. All LXCs and VMs lost networking!

    Let me clarify that: Because everything is virtualised, I lost the firewalls and thus remote access too. The Remote Management interfaces of the nodes are configured on a non-public network, so I guess I'll have to find a secure way of accessing these via some out-of-band system.
  10. All LXCs and VMs lost networking!

    I have ifupdown2 installed... also, all guests on all nodes went offline # dpkg -l 'ifupdown*' Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name...
  11. All LXCs and VMs lost networking!

    I had a really perplexing situation last night. I had previously upgraded one of four nodes running the latest pve to pve-kernel-5.15. Because the naming of the network interfaces was changed at some stage, I had to recreate the /etc/network/interfaces file with the new nic names on all the... (see the bridge sketch after the list)
  12. lxc doesn't start properly after upgrade to pve7

    It's a legacy rails app for which there is no upgrade budget at this stage :) Needs substantial work and it probably won't happen.
  13. lxc doesn't start properly after upgrade to pve7

    I think the only way forward to future-proof these older guests is to move them to KVM machines.
  14. Memory management: ceph

    Thanks! It seems that luminous doesn't have all the commands to manage this yet. I'm searching the docs now... I'm systematically upgrading this cluster to the latest version, but I need to understand how to limit the memory usage in the process. Since it is just a test and dev cluster, so...
  15. Memory management: ceph

    I see that ceph manages memory automatically according to https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#automatic-cache-sizing Is the following normal for automatic cache sizing then? Granted, some of the machines have only 8GB of RAM and are used as storage nodes... (see the osd_memory_target sketch after the list)
  16. Support External RBD And CephFS with Erasure Coded Data Pool

    Thanks for that, we'll definitely look into it. That is really good news!
  17. lxc doesn't start properly after upgrade to pve7

    So, is it one or the other for all LXCs? In other words, if I implement this kernel setting, will all containers revert to using cgroups instead of cgroupsv2?
  18. lxc doesn't start properly after upgrade to pve7

    I have read through that, but something is not quite clear to me. In the Ubuntu 14.04 lxc image there is no /etc/default/grub as referred to by this linked reference. So should the systemd.unified_cgroup_hierarchy=0 parameter be set in the proxmox node kernel config? (See the sketch after the list.)
  19. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    After updating to ceph 16 (pacific) on pve7, I have the following condition: ~# ceph health detail HEALTH_OK but ~# ceph status cluster: id: 04385b88-049f-4083-8d5a-6c45a0b7bddb health: HEALTH_OK services: mon: 3 daemons, quorum FT1-NodeA,FT1-NodeB,FT1-NodeC (age 13h)...
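
A minimal sketch of the linked-clone behaviour discussed in results 2 and 3, using Proxmox's qm tool. The VMIDs, the clone names, and the assumption that the template sits on storage that supports linked clones (ZFS, LVM-thin, Ceph RBD, qcow2) are illustrative only.

    # Turn an existing VM into a template; its disks become the read-only base
    qm template 9000

    # Without --full, cloning a template creates a linked clone: the new VM
    # only stores changes written after this point, on top of the base image
    qm clone 9000 101 --name web01

    # Later changes to the template do not propagate to web01; the clone keeps
    # referencing the base image as it was at clone time. For an independent
    # copy, make a full clone instead:
    qm clone 9000 102 --name web02 --full 1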
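
Result 7 mentions moving the problem disk from virtio0 to scsi0 and then fixing fstab. A rough sketch of those steps, assuming VMID 100, the example volume name shown here, and that the FreeBSD guest exposes virtio-blk disks as vtbd0 and virtio-scsi disks as da0; check what the guest actually reports before editing fstab.

    # On the Proxmox node: detach the disk from the virtio-blk slot and
    # re-attach the same volume on the virtio-scsi controller
    qm set 100 --delete virtio0
    qm set 100 --scsihw virtio-scsi-pci
    qm set 100 --scsi0 ceph-pool:vm-100-disk-0
    qm set 100 --boot order=scsi0

    # Inside the FreeBSD guest: the device node changes with the bus, so
    # fstab entries pointing at vtbd0 have to be rewritten for da0
    sed -i '' 's|/dev/vtbd0|/dev/da0|g' /etc/fstab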
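
Results 10 and 11 revolve around NIC names changing after a kernel upgrade, which silently breaks every bridge defined in /etc/network/interfaces. A sketch of the check and of the bridge stanza that has to be updated; the NIC name, addresses and bridge name are placeholders, and ifreload comes from ifupdown2 (which result 10 confirms is installed).

    # See what the kernel now calls the physical NICs
    ip link

    # /etc/network/interfaces: bridge-ports must use the new NIC name,
    # otherwise every guest attached to vmbr0 loses connectivity
    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports enp5s0f0
        bridge-stp off
        bridge-fd 0

    # Apply the change without a full reboot
    ifreload -a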
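
Results 14 and 15 are about capping Ceph OSD memory on small 8 GB nodes. On Nautilus and newer, BlueStore sizes its caches automatically against osd_memory_target; on Luminous the centralized config store (ceph config set) is not available, so the value has to go into ceph.conf, and depending on the minor release osd_memory_target itself may not be honoured yet. The 3 GiB value below is only an example.

    # Nautilus or newer: adjust the per-OSD target at runtime
    ceph config set osd osd_memory_target 3221225472    # 3 GiB

    # Luminous: set it in /etc/pve/ceph.conf (section below) and restart
    # the OSDs for it to take effect
    [osd]
    osd_memory_target = 3221225472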
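
Results 17 and 18 ask where systemd.unified_cgroup_hierarchy=0 belongs: it is a kernel command-line parameter for the Proxmox node itself, not for the container image, and it is host-wide, so every container on that node falls back to the legacy cgroup v1 layout. A sketch for the two common boot setups; check which boot loader the node actually uses.

    # GRUB-booted nodes: append the parameter in /etc/default/grub, e.g.
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
    # then regenerate the boot config and reboot
    update-grub

    # Nodes booted with systemd-boot (typically ZFS-on-root UEFI installs):
    # append the same parameter to the single line in /etc/kernel/cmdline,
    # then refresh the boot entries and reboot
    proxmox-boot-tool refresh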
