One thing I noticed during the upgrade is that it reported pve-manager failing to configure:
Setting up pve-manager (7.1-11) ...
Job for pvedaemon.service failed.
See "systemctl status pvedaemon.service" and "journalctl -xe" for details.
dpkg: error processing package pve-manager (--configure):
installed...
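A rough sketch of the follow-up steps (assuming the pvedaemon failure is what blocks the configure step; nothing here is specific to this box):

systemctl status pvedaemon.service
journalctl -u pvedaemon.service -b    # service log for the current boot only
dpkg --configure -a                   # re-run the pending configure once pvedaemon starts again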
I am using Proxmox 6.4 (latest), and initially I was using just a single SSD drive (directory storage) on the box. I now want to make sure I have redundancy if that SSD eventually dies, so I added two new Intel DC series SSDs and created a RAID-1 at the RAID controller level. I had...
What is interesting is that it somehow thinks it has libpve-storage-perl 6.4-1 installed, when the node is only running 6.2-12. I presume this mismatch is the problem somehow?
root@admin-virt01:~# pvesh get /storage/synology-sp1
┌───────────────┬───────────────────────────────────────────┐
│...
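As a quick cross-check of what is actually installed on that node versus what it reports (just a sketch; no node-specific paths assumed):

pveversion -v | grep libpve-storage-perl   # what Proxmox itself reports
apt-cache policy libpve-storage-perl       # what apt has installed vs. what is available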
I have a 5-node Proxmox 6.2 cluster, and I noticed today that all my nodes are keeping up to 90 days of backups (of all VMs), except for this one particular node, which is only keeping the last single day of backups.
What is very strange is that they are all backing up to the same storage...
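A sketch of where I would look for the retention difference, assuming it is driven by a maxfiles setting (storage.cfg is cluster-wide, but /etc/vzdump.conf is per-node and can override it):

grep -B3 maxfiles /etc/pve/storage.cfg   # cluster-wide retention on the storage definition
cat /etc/vzdump.conf                     # node-local vzdump defaults, checked on the odd node out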
I am using all HP DL360 Generation 8 hardware with Intel DC SSD drives that have so far been extremely reliable (knock on wood). I haven't had a single SSD failure in almost two years, though I clearly know that one day one of them will fail, because of Murphy's law. The SSDs are...
Currently I have all my virtual machines running on a single enterprise-grade SSD in the server. I would like to add redundancy to limit the risk of that SSD failing and losing all my VMs at once.
I am considering two options:
1st option: create a hardware RAID-1 of 2 x...
So depending upon how your system is configured, you could potentially do one of the following.
If your main storage is using LVM for the disks, you could add the new hard drive to the system and add it to the "pve" LVM Volume Group. This would extend the amount of space available to be used by...
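As a rough sketch of that approach (the device name /dev/sdb and the default pve/data layout are assumptions; adjust to your hardware):

pvcreate /dev/sdb                 # initialise the new disk for LVM
vgextend pve /dev/sdb             # add it to the existing "pve" volume group
lvextend -l +100%FREE pve/data    # grow the data volume into the new free space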
So one of my hosts, which has only about half of its containers running and a rather high load, shows:
root@pve02:~# pct cpuset
----------------------------------------------------
105: 10 18
112: 10 18
115...
I upgraded a Proxmox 6.2 cluster to the new Proxmox 6.3, and after I rebooted the nodes I am seeing some really strange load issues. I am mostly running LXC containers on the host, and after they all boot up the load on the box just keeps climbing to insane numbers. I have to stop the LXC...
I know that exists, but the problem I am having is that when the VM is created it gets a default list of values from somewhere, and I need to change that list. I am using an automated script, so changing it manually after the VM is created is not ideal, as it doesn't solve my automatic...
Is there a setting in Proxmox that sets the default boot order for any newly created VMs? I am running into an issue where I am trying to use Packer against Proxmox to create a template, but it isn't setting the right boot order, so after the install it fails to see the scsi0 drive.
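For what it's worth, the per-VM way to pin the boot order, which could be scripted right after the template is created (VMID 9000 is just an example), looks roughly like:

qm set 9000 --boot c --bootdisk scsi0   # 'c' = boot from disk first, scsi0 as the boot disk

Newer releases also accept an order= syntax for --boot, but the form above works across 6.x.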
So currently I am using Proxmox 6.2 on all my clusters. I have a Synology NAS unit that provides over 60 TB of NFS storage, which I use for the nightly backups. I use the integrated backup product that has been in PVE for a very long time. I've been very happy with the new ZST backup...
So it looks like I may have found the issue. In Debian 10 (Buster), you need to add the following to the "defaults" section of multipath.conf:
find_multipaths on|off
Check out this bug report.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=932307
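For reference, the resulting defaults section in /etc/multipath.conf ends up looking roughly like this (pick the value that matches your setup, per the bug report):

defaults {
    find_multipaths off    # or "on", depending on your setup
}

After editing, restarting the daemon (systemctl restart multipathd) picks up the change.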
I am using the very latest Proxmox 6.1-7, and I have a Synology NAS unit from which I have exported a LUN to my HP server. I am trying to figure out what is missing from my multipath.conf configuration file, because it isn't loading correctly.
Currently, the only way I can get it to see the disk is...
Has anyone seen a case where a single node isn't getting updates, but all the other nodes do? I have confirmed that my repo files in /etc/apt/sources.list.d/ are all the same across the boxes. However, running apt-get update and apt-get upgrade says there is nothing to upgrade, yet it's stuck...
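A few checks that tend to surface why a single node lags behind (just a sketch, nothing specific to this cluster assumed):

apt-get update 2>&1 | grep -iE "err|warn"   # repo fetch problems that scroll past easily
apt-mark showhold                           # held packages silently block upgrades
pveversion -v                               # diff this output against a node that did update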
I am running Proxmox 6.0-6 (pvetest) and I have 3 LXC containers that went into a bizarre state. I noticed yesterday, after the system was updated with the latest pvetest updates, that the configs for the LXC instances got all jacked up. What is strange is that it seems to be messing up the LXC.conf...