Search results

  1.

    SDN: To be VLAN aware or not? Or another problem?

    I'm experimenting with the SDN service and have not been able to figure this one out: If I create a Vnet (in my case 11) and tick "VLAN aware", then I cannot define a subnet. If, on the other hand, I untick "VLAN aware", I can define a subnet. What is the point of this and how does one...
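    For reference, a minimal sketch of creating a VNet and attaching a subnet from the CLI, assuming the PVE 7 SDN API paths and hypothetical zone/VNet names (zone1, vnet11); it matches the behaviour described above in that the subnet is defined on a VNet with "VLAN aware" left unticked:

        # hypothetical IDs; run on a node with the SDN packages set up
        pvesh create /cluster/sdn/vnets --vnet vnet11 --zone zone1 --tag 11
        pvesh create /cluster/sdn/vnets/vnet11/subnets --type subnet \
            --subnet 10.0.11.0/24 --gateway 10.0.11.1
        pvesh set /cluster/sdn    # apply the pending SDN configuration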
  2.

    Memory management: ceph

    The beauty of using free/libre software is that one can use it for what one wants. We have a cluster of older repurposed machines that are used as a development and testing environment; we back up to these and it does a great job of it. Surprisingly, the response times are not bad at all. In...
  3.

    High IO with a lot of LXC Containers

    Just from experience I would think that you should drop one layer of proxmox (I cannot see why you need to run proxmox in a VM inside proxmox when proxmox provides nice features like pools & SDN to keep things separated) and run your VM with the containers on a proxmox instance that is not...
  4.

    How to change template(windows) and update linked clones

    I just thought about it a little more (and read up somewhat) and understand that the linked clone is based on a snapshot of the template. So once a snapshot is made, all changes go to the clone. If the template is changed, the clone still refers to the snapshot and does not include the...
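    Since the linked clone keeps pointing at that original base snapshot, the usual way to roll out an updated base is to build a new template and clone from it again; a rough sketch with hypothetical VMIDs (9000 = current template, 9001 = refreshed copy, 201 = a new linked clone):

        qm clone 9000 9001 --full --name win-template-updated   # full copy that can be booted and patched
        # boot 9001, apply updates, then shut it down
        qm template 9001                                         # convert the patched copy into a new template
        qm clone 9001 201 --name win-client01                    # new linked clones are based on the new template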
  5.

    How to change template(windows) and update linked clones

    I'm surprised to learn that a linked clone cannot be updated by updating the template the clone is linked to. Why is this? Is there a technical reason that one of the devs here could explain? If it were possible to update a template and let the linked clone pick up the modification, it would be a...
  6.

    High IO with a lot of LXC Containers

    I don't see much feedback on your issue, @henning, and I'm wondering if you managed to figure this out?
  7.

    Disk errors on FreeBSD 12.2 guest

    We ended up recreating the VM and it runs perfectly now. What went wrong we will probably never know.
  8.

    Disk errors on FreeBSD 12.2 guest

    No joy! It still doesn't succeed in writing to the disk properly.
  9.

    Disk errors on FreeBSD 12.2 guest

    Of course! I changed that to scsi0 and then fixed fstab in FreeBSD to use the changed name and now it seems to be fine.
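    For anyone landing here from search, a rough sketch of that change, with a hypothetical VMID (105) and volume name; the FreeBSD device node moves from the virtio block name (vtbd0) to the SCSI name (da0):

        # on the PVE node
        qm set 105 --delete virtio0                  # the disk is kept and shows up as unusedN
        qm set 105 --scsihw virtio-scsi-pci
        qm set 105 --scsi0 cephpool:vm-105-disk-0    # reattach the same volume on the SCSI bus
        qm set 105 --boot order=scsi0

        # inside the FreeBSD guest, /etc/fstab entries change accordingly, e.g.
        # /dev/vtbd0p2  ->  /dev/da0p2
        /dev/da0p2    /    ufs    rw    1    1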
  10.

    Disk errors on FreeBSD 12.2 guest

    We have recently upgraded to kernel 5.15 and now we're having disk errors on one guest running FreeBSD 12.2. The config is: agent: 1 balloon: 8192 boot: cd bootdisk: virtio0 cores: 8 ide2: cephfs:iso/FreeBSD-11.4-RELEASE-amd64-disc1.iso,media=cdrom memory: 32768 name: VO-IRIS-Poller...
  11.

    All LXCs and VMs lost networking!

    Let me clarify that: Because everything is virtualised, I lost the firewalls and thus remote access too. The Remote Management interfaces of the nodes are configured on a non-public network, so I guess I'll have to find a secure way of accessing these via some out-of-band system.
  12.

    All LXCs and VMs lost networking!

    I have ifupdown2 installed... also, all guests on all nodes went offline # dpkg -l 'ifupdown*' Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name...
  13.

    All LXCs and VMs lost networking!

    I had a really perplexing situation last night. I had previously upgraded one of four nodes running the latest pve to pve-kernel-5.15. Because the naming of the network interfaces was changed at some stage, I had to recreate the /etc/network/interfaces file with the new NIC names on all the...
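    For completeness, a minimal /etc/network/interfaces sketch for a PVE node after such a rename, with example names and addresses only (enp3s0f0 for the renamed NIC, 192.0.2.0/24); with ifupdown2 installed, "ifreload -a" applies it without a reboot:

        auto lo
        iface lo inet loopback

        auto enp3s0f0
        iface enp3s0f0 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge-ports enp3s0f0
            bridge-stp off
            bridge-fd 0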
  14.

    lxc doesn't start properly after upgrade to pve7

    It's a legacy rails app for which there is no upgrade budget at this stage :) Needs substantial work and it probably won't happen.
  15.

    lxc doesn't start properly after upgrade to pve7

    I think the only way forward to future-proof these older guests is to move them to KVM machines.
  16.

    Memory management: ceph

    Thanks! It seems that luminous doesn't have all the commands to manage this yet. I'm searching the docs now... I'm systematically upgrading this cluster to the latest version, but I need to understand how to limit the memory usage in the process. It's just a test and dev cluster, so...
  17.

    Memory management: ceph

    I see that ceph manages memory automatically according to https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#automatic-cache-sizing Is the following normal for automatic cache sizing then? Granted, some of the machines have only 8GB of RAM and are used as storage nodes...
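    On releases newer than Luminous, the autotuner is steered by osd_memory_target (default around 4 GiB per OSD), so low-RAM storage nodes are usually handled by lowering that value; a sketch, with 2 GiB as an example figure:

        # cluster-wide, via the monitors' config database (Mimic and later)
        ceph config set osd osd_memory_target 2147483648

        # or per node in the [osd] section of /etc/pve/ceph.conf, then restart the OSDs
        [osd]
        osd_memory_target = 2147483648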
  18.

    Support External RBD And CephFS with Erasure Coded Data Pool

    Thanks for that, we'll definitely look into it. That is really good news!
  19.

    lxc doesn't start properly after upgrade to pve7

    So, is it one or the other for all LXCs? In other words, if I implement this kernel setting, will all containers revert to using cgroups instead of cgroupsv2?
  20.

    lxc doesn't start properly after upgrade to pve7

    I have read through that, but something is not quite clear to me. In the Ubuntu 14.04 lxc image there is no /etc/default/grub as referred to by this linked reference. So should the systemd.unified_cgroup_hierarchy=0 parameter be set in the proxmox node kernel config?
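    For reference, the parameter goes on the Proxmox node's kernel command line (not inside the container), so it applies to every container on that host; a sketch for a GRUB-booted node, with the systemd-boot variant for e.g. ZFS-root installs:

        # /etc/default/grub on the PVE node
        GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

        update-grub && reboot

        # systemd-boot instead: append the parameter to /etc/kernel/cmdline, then
        proxmox-boot-tool refresh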