Search results

  1. SPICE three monitors - Windows 10 VM

    I have a VM set up using SPICE with three monitors. As long as I leave the memory setting at 32 MB it boots fine, though video performance is at times a bit slow. Any time I try to increase this, all I get at the console is: Guest has not initialized the display (yet). The VM uses...
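The memory in question is the QXL display memory in the VM configuration. A hedged sketch of how it is typically adjusted with `qm` (VM ID 100 is hypothetical, and the `memory` sub-option only exists in newer Proxmox VE releases):

```shell
# Hypothetical VM ID 100. qxl2/qxl3/qxl4 enable multiple SPICE displays:
qm set 100 --vga qxl3

# Newer Proxmox VE releases expose the display memory directly
# (value in MiB); older releases sized it from the qxl variant alone:
qm set 100 --vga qxl3,memory=64
```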
  2. Ubuntu do-release-upgrade container

    So, I tried upgrading a container running 14.04 -> 16.04. The container was fully patched prior to running the do-release-upgrade. I tried to restart the container once that completed, and now it completely refuses to boot! Hoping somebody can help figure out why; I have been poking around...
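When a container refuses to boot after an upgrade, starting it in the foreground with debug logging usually shows the failing step. A sketch, assuming container ID 101:

```shell
# Start the container in the foreground with verbose LXC logging:
lxc-start -n 101 -F -l DEBUG -o /tmp/lxc-101.log

# Or inspect its filesystem without booting it at all
# (mount point path may vary by PVE version):
pct mount 101
pct unmount 101
```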
  3. Ceph and node reboot

    EVERY time we need to restart one of the nodes in our cluster we are faced with a HORRIFIC impact on disk I/O while the Ceph pools "rebuild". It virtually consumes all of the resources, and we need to know how to prevent this. I am simply talking about issuing a 'reboot' after...
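The usual way to avoid the recovery storm for a planned reboot is to tell Ceph not to mark the node's OSDs out while it is down. A minimal sketch:

```shell
ceph osd set noout     # suspend automatic rebalancing for down OSDs
reboot
# once the node and its OSDs have rejoined the cluster:
ceph osd unset noout
```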
  4. Ceph issues new cluster 5.1 fully patched

    We have a new four-node cluster that is almost identical to other clusters we are running. However, since it has been up and running, at what seem to be random times we end up with errors similar to: 2018-02-05 06:48:16.581002 26686 : cluster [ERR] Health check update: Possible data damage: 4...
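For "Possible data damage" health errors, the inconsistent placement groups can be listed and repaired individually; the PG id below is hypothetical and would come from the health output:

```shell
ceph health detail        # shows which PGs are inconsistent
ceph pg repair 2.1f       # hypothetical PG id taken from the output above
```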
  5. Replace Journal / WAL SSD drive

    We have a four node Proxmox cluster with all of the nodes also providing Ceph storage services. One of the nodes is having issues with the SSD that we are using for the journal / WAL drives (this is 5.1 / bluestore). We use a command like: pveceph createosd /dev/sdc --journal_dev /dev/sdr...
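Replacing a journal/WAL device generally means destroying and recreating each OSD that uses it. A hedged sketch, assuming a hypothetical OSD id 3 and hypothetical device names:

```shell
ceph osd out 3                 # hypothetical OSD id on the failing SSD
systemctl stop ceph-osd@3
pveceph destroyosd 3           # PVE 5.x command name; newer: pveceph osd destroy
# after physically swapping the SSD, recreate with the new journal device:
pveceph createosd /dev/sdc --journal_dev /dev/sdd
```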
  6. Decommission Cluster

    We have a three node Proxmox cluster that we are in the process of decommissioning as we have a new 5.1 cluster that we have migrated the majority of the machines over to. However, for a while we need to keep one of the nodes running with one of the containers it has. But, we also want to take...
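Retiring nodes from a cluster is done with pvecm, and the Proxmox wiki describes separating a surviving node without reinstalling it. A rough sketch (node names hypothetical, steps abbreviated from the wiki procedure):

```shell
# On a remaining cluster member, for each node being retired:
pvecm delnode oldnode1

# On the node that must keep running standalone, roughly per the wiki:
systemctl stop pve-cluster corosync
pmxcfs -l                       # restart pmxcfs in local mode
rm /etc/pve/corosync.conf
systemctl start pve-cluster
```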
  7. [SOLVED] Cluster Letsencrypt SSL

    We have a Proxmox 5.1 cluster and were trying to follow the directions for LetsEncrypt SSL certificates for the nodes. We are following the directions at: https://pve.proxmox.com/wiki/HTTPS_Certificate_Configuration_(Version_4.x_and_newer) These directions worked great for the first node in...
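On a cluster, the per-node certificate files live under /etc/pve/nodes/<node>/. Installing an issued certificate on each additional node amounts to roughly the following (node name and source file paths hypothetical):

```shell
# Run per node, with that node's own issued certificate:
cp fullchain.pem /etc/pve/nodes/node2/pveproxy-ssl.pem
cp privkey.pem   /etc/pve/nodes/node2/pveproxy-ssl.key
systemctl restart pveproxy
```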
  8. [SOLVED] KVM to KVM communication

    I am running Proxmox 4.2 fully patched with two KVM systems. The first is running Windows 10 Pro and the second is running FreeNAS 9.10. Each of the KVMs can be pinged by the Proxmox host and any other devices on the LAN. However, the two CANNOT ping (or exchange any other network traffic with) each...
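A first check for guest-to-guest traffic is that both VMs sit on the same bridge and that their tap interfaces are actually attached to it. A sketch with hypothetical VM IDs:

```shell
qm config 100 | grep ^net     # both should show the same bridge,
qm config 101 | grep ^net     # e.g. bridge=vmbr0
brctl show vmbr0              # both VMs' tap interfaces should be listed
```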
  9. FreeNAS backend for ZFS over iSCSI

    Wondering if this is a supported environment yet? I know that at one point in the development chain it was being looked at and worked on. We have toyed around with other back-end ZFS solutions and really have not been pleased with them. Admittedly we have almost NO Solaris experience, so all of...
  10. 1000s of audit messages

    Our dmesg is filled with 1000s of these messages on one of our cluster machines: [786621.940755] audit: type=1400 audit(1454507557.790:1241952): apparmor="DENIED" operation="ptrace" profile="lxc-container-default" pid=29054 comm="ps" requested_mask="trace" denied_mask="trace"...
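Those denials come from the default lxc-container-default AppArmor profile. Confinement can be relaxed per container in its config, at the cost of isolation; a sketch using the PVE 4.x raw LXC key (newer LXC spells it lxc.apparmor.profile):

```
# /etc/pve/lxc/<vmid>.conf -- relaxing AppArmor has security implications
lxc.aa_profile: unconfined
```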
  11. LXC Live Migration???

    Was hoping this was to be a feature in 4.1 but it appears not to have made it into the release. Any ideas when this will be available? We really took a step BACKWARDS with this over OpenVZ which allowed for live migration. Hoping it is coming soon as it is the biggest missing feature for us...
  12. ZFS over iSCSI to FreeNAS 9.3+

    Would be interested to know if this is available yet for Proxmox 4.0 and, if not, whether there is an ETA. We have tried Nexenta and I have to say it has been a painful process for us, from the fact that it had no 3Ware controller support to the fact that we have LITTLE Solaris experience...
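For reference, a ZFS-over-iSCSI storage definition in /etc/pve/storage.cfg looks roughly like the sketch below. The officially supported iscsiprovider values have historically been comstar, istgt, and iet, which is why FreeNAS 9.3's switch to ctld is the sticking point (storage name, addresses, and target hypothetical):

```
zfs: freenas-zfs
        portal 10.0.0.5
        target iqn.2005-10.org.freenas.ctl:proxmox
        pool tank
        iscsiprovider istgt
        content images
```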
  13. Timeout issue

    We have a number of disk images stored on a Nexenta server. We applied the latest updates to a machine, rebooted and saw that one of the KVM machines did not start and kept generating the error: TASK ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/xx.xx.xx.xx_id_rsa...
  14. ZFS Mirror Boot Issue

    We have a new server that initially booted fine after installation but now seems to be experiencing the ZFS boot issue from the article: https://pve.proxmox.com/wiki/Storage:_ZFS As mentioned in the title, this is two SSD drives configured in a mirror. # zpool status rpool pool...
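For a legacy-boot ZFS mirror, the usual recovery is to make sure the bootloader is installed on both mirror members (disk names hypothetical):

```shell
# hypothetical device names: both halves of the rpool mirror
grub-install /dev/sda
grub-install /dev/sdb
update-grub
```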
  15. LXC Updates

    I am just wondering if the expected availability of updates for LXC and the features it offers could be defined a bit more. This really seems to be the one place where 4.x took a step backwards in comparison to the 3.4 / OpenVZ solution. In particular we are interested in the following: 1. Live...
  16. Thin Provision

    Somehow one of my storages does not have thin provisioning enabled (ZFS storage, if that matters) and I migrated a number of machines to it. Is there a way to enable thin provisioning (yes, I know about the check box) AND then have the images shrunk so they are thinly provisioned? Of course I would prefer if...
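Enabling the sparse/thin-provision flag on a ZFS storage only affects newly created disks. Existing zvols can be "thinned" afterwards by dropping their reservation; a sketch with a hypothetical dataset name:

```shell
# turn an existing fully-reserved zvol into a sparse one:
zfs set refreservation=none rpool/data/vm-100-disk-1
zfs get refreservation,used rpool/data/vm-100-disk-1
```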
  17. Cluster on specific interface

    We are in the process of rebuilding (well, creating a new) cluster running on 4.0. We have three interfaces on each of the machines: two 1G which are front-facing and used for accessing the virtual machines with public IP space, and a 10G used for our storage network running on private IP space...
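On 4.x the corosync ring can be pinned to a specific network when the cluster is created; a hedged sketch with hypothetical addresses and hostnames, assuming the 10G storage network is 10.10.10.0/24:

```shell
# on the first node:
pvecm create mycluster -bindnet0_addr 10.10.10.1 -ring0_addr node1-10g
# on each joining node:
pvecm add 10.10.10.1 -ring0_addr node2-10g
```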
  18. LXC Capabilities

    I am hoping to rely on the wisdom of the group to help with an item we have been having issues with since upgrading our OpenVZ containers to LXC on a 4.0 system. One of them requires capabilities applied to a binary so that it is able to open ports 80 / 443 for web services but still run as an...
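The capability in question is typically granted with setcap; a sketch with a hypothetical binary path (the container must allow file capabilities for this to take effect):

```shell
# let a non-root service bind ports below 1024:
setcap 'cap_net_bind_service=+ep' /usr/local/bin/webserverd
getcap /usr/local/bin/webserverd
```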
  19. CentOS 5.11 in lxc

    We are converting our OpenVZ containers on 3.4 to LXC in 4.0. So far this has been a simple process and worked without issue until this container. It is running CentOS 5.11, and when we run the pct restore we get an error "unsupported redhat release 'CentOS release 5.11 (Final)'". Tried to modify...
  20. Email from address

    We are in the process of building a new cluster using 4.0. I noticed under the Datacenter -> Options section an Email from address field with the hint of root@$hostname. We would like to change this to proxmox@$hostname but the edit field does not accept that as valid. It is instead checking...
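The underlying option is email_from in /etc/pve/datacenter.cfg; the field validates a full address, so a resolvable domain is needed rather than the literal $hostname placeholder (address hypothetical):

```
# /etc/pve/datacenter.cfg
email_from: proxmox@example.com
```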
