Search results

  1. SATA DOM as OS discs?

    The warranty of the entire server is up to 3 years, depending on how much we are willing to pay. In the tech specs there is a line "Drive writes per day" for 5 years with 1 DWPD. That should be an indication that it will last longer than the normal warranty of the rest of the server, shouldn't it?
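
    As a rough back-of-the-envelope check (assuming the 64 GB DOMs from the original question below), 1 DWPD over the rated 5 years works out to:

      64 GB/day × 365 days × 5 years ≈ 117 TB of total write endurance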
  2. SATA DOM as OS discs?

    The SATA-DOM specs are in the attached screenshot. I do not know the brand of the DOMs. We buy from a well-known German company; I do not know if I can post the name here. But they offer the option to buy 'Proxmox'-compatible hardware, which they have tested and approved.
  3. SATA DOM as OS discs?

    Hello, is installing Proxmox on SATA-DOMs a valid option? I saw older threads where it was not recommended. I was thinking of using two SATA-DOMs in a mirrored configuration, 64 GB in size and rated at 1 DWPD; shouldn't they be reliable enough for just the OS? Regards, Thomas
  4. Proxmox 6.1 OOM killed a VM

    Ah, no, we are not backing up with Proxmox. We currently do an internal backup on the VM. Sorry for the confusion.
  5. Proxmox 6.1 OOM killed a VM

    I installed the latest updates for PVE, as I saw that qemu was updated and hoped that would help. I also added a little script which writes the 'rss' column of the 'ps -eo rss,command' output to a file every second. I made a primitive graph, attached to this post, where we can clearly see that around 4 or 5...
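
    A minimal sketch of such a sampler (the log path, the output format, and the grep pattern for the VM's kvm process are assumptions, not the original script):

      #!/bin/sh
      # Append a Unix timestamp and the RSS (in KiB) of the VM's kvm process, once per second.
      # /var/tmp/kvm-rss.log and the '[k]vm -id 147' pattern are assumptions.
      while true; do
          echo "$(date +%s) $(ps -eo rss,command | grep '[k]vm -id 147' | awk '{print $1}')" >> /var/tmp/kvm-rss.log
          sleep 1
      done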
  6. Proxmox 6.1 OOM killed a VM

    Could it be some leftovers from XenServer, from which the VMs were imported? But other VMs which are working normally were also imported from XenServer...
  7. Proxmox 6.1 OOM killed a VM

    Sorry, I wrote too fast about local storage; it is not possible, as there is not enough space on the local disks for the VMs, they are all too big. The VM from the screenshot has 4 GB of RAM configured. It has now risen to 7.3 GB of used RAM and is still rising slowly. The VM I migrated to another host has risen to 8.6 GB of used RAM...
  8. Proxmox 6.1 OOM killed a VM

    The VM we had to restart last Friday again has 6.4 GB of RAM in use, see attached screenshot. Are all those threads that show up in the screenshot (made with htop) normal?
  9. Proxmox 6.1 OOM killed a VM

    I live-migrated two VMs with very high memory consumption to another host, and now (obviously) RAM usage is in line with the configured RAM. I will keep an eye on them. How much memory overhead is normal?
  10. Proxmox 6.1 OOM killed a VM

    All our VMs are on NFS. And all are in production, so fiddling around a lot is a problem. I could clone it to local storage and start it without network; would this be sufficient for a test?
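
    A hedged sketch of that test (the new VMID 9147 and the storage name 'local' are assumptions; link_down=1 keeps the NIC disconnected so the clone starts without network):

      # Full-clone VM 147 to local storage, disconnect its NIC, then start it.
      qm clone 147 9147 --full 1 --storage local
      qm set 9147 --net0 virtio,bridge=vmbr0,link_down=1
      qm start 9147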
  11. Proxmox 6.1 OOM killed a VM

    Yes, the memory consumption I wrote about was at the moment of writing. Here is the config of the VM:

      root@pve7:~# cat /etc/pve/qemu-server/147.conf
      agent: 1
      boot: cdn
      bootdisk: scsi0
      cores: 2
      cpu: Broadwell
      ide2: none,media=cdrom
      memory: 8192
      name: VS04
      net0: virtio=FE:B8:3F:6C:90:35,bridge=vmbr0
      net1...
  12. Proxmox 6.1 OOM killed a VM

    I found another VM on this host whose kvm process is using way too much memory. It has 8 GB of RAM configured, but according to 'htop' it is using 32 GB...
  13. Proxmox 6.1 OOM killed a VM

    Hi, no ZFS; the VMs are on NFS storage. The VM has only 4 GB of RAM configured.
  14. Proxmox 6.1 OOM killed a VM

    Hello, on Friday evening one of the four hosts in our PVE cluster ran out of memory. It killed the qemu process of one of our VMs; fortunately there was no apparent damage, and it restarted without problems (apart from the obligatory disk check). The killed VM is a fully patched CentOS 6, 64-bit. At the moment the host is...
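
    For reference, the kill should be visible in the host's kernel log; a quick way to confirm (exact message wording varies by kernel version):

      # Search the kernel log for the OOM killer's trace of the killed process.
      journalctl -k | grep -iE "out of memory|oom-killer|killed process"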
  15. [SOLVED] Join fresh install of PVE 6.1 to 6.0 cluster

    Or is the status OK, since I see "Quorate: Yes"?
  16. [SOLVED] Join fresh install of PVE 6.1 to 6.0 cluster

    Sorry, this did not work. I still get Quorum: 3.

      root@pve6:~# pvecm status
      Cluster information
      -------------------
      Name:             PVECLUSTER01
      Config Version:   7
      Transport:        knet
      Secure auth:      on

      Quorum information
      ------------------
      Date:             Wed Dec 11 08:47:30 2019
      Quorum...
  17. [SOLVED] Join fresh install of PVE 6.1 to 6.0 cluster

    Yes, the cluster works. We have an internal network we normally use for web console/cluster traffic, and a storage network, which I wrongly selected as link0. Here is the information you requested:

      root@pve6:~# pvecm status
      Cluster information
      -------------------
      Name:             PVECLUSTER01
      Config Version...
  18. [SOLVED] Join fresh install of PVE 6.1 to 6.0 cluster

    I joined the server successfully. Stupid me, on one node I added the wrong network as the cluster network; it now has the network of our storage. Can I adjust this?
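
    A hedged sketch of the usual fix (addresses and node details here are placeholders, not taken from this cluster): edit /etc/pve/corosync.conf, point the affected node's ring0_addr at the intended cluster-network IP, and increment config_version in the totem section so the change propagates:

      node {
          name: pve6
          nodeid: 3            # placeholder
          quorum_votes: 1
          ring0_addr: 10.0.0.6 # placeholder: the intended cluster-network IP, not the storage IP
      }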
  19. [SOLVED] Join fresh install of PVE 6.1 to 6.0 cluster

    Because with two more servers we can distribute the existing VMs better, and then we would upgrade the cluster from 6.0 to 6.1. Thanks for the answer.
