Hello,
I have just updated to the latest version and newest kernel, and I have noticed intermittent internet connectivity on the PVE host and the VMs.
proxmox-ve: 7.2-1 (running kernel: 5.15.35-3-pve)
pve-manager: 7.2-5 (running version: 7.2-5/12f1e639)
pve-kernel-5.15: 7.2-5
pve-kernel-helper: 7.2-5...
Hello, when clicking on package versions I see ifupdown2 listed as not correctly installed?
Should this be installed instead of ifupdown, or is ifupdown the better option?
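For reference, ifupdown2 can be installed on a PVE 7 node to replace the legacy ifupdown (a hedged sketch; best run from a local console rather than over SSH, since the network configuration tooling changes underneath you):

```shell
# Install ifupdown2; because the two packages conflict, apt should
# remove the legacy ifupdown in the same transaction.
apt update
apt install ifupdown2

# Afterwards, the package version list should show ifupdown2 as installed.
pveversion -v | grep ifupdown2
```

With ifupdown2 in place, network changes made in the GUI can be applied live ("Apply Configuration") instead of requiring a reboot.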
proxmox-ve: 7.2-1 (running kernel: 5.15.35-3-pve)
pve-manager: 7.2-5 (running version: 7.2-5/12f1e639)
pve-kernel-5.15: 7.2-5...
Hello,
I am wondering whether this is normal activity for pve-daily-update.
I am using the latest PVE.
proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-11...
Hello, quick question:
I followed a guide to get an SSL certificate for port 8006 on my Proxmox server via the GUI.
This is working....
proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3...
After the latest update to PVE 7.2,
I get this when trying to SSH from the VNC console on the Proxmox node.
proxmox-ve: 7.2-1 (running kernel: 5.15.30-2-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-helper: 7.2-2
pve-kernel-5.15: 7.2-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-11...
I wonder what caused this; I am not sure. I was just browsing the logs via my firewall program and saw this entry.
here is the log
May 3 04:01:16 proxmox kernel: [ 0.000385] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 3 04:01:16 proxmox kernel: [ 0.000388] e820: remove...
Hello, I am using a bare-metal server with Proxmox installed, only 1 node with 3 VMs.
Package Version:
proxmox-ve: 7.1-2 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)
pve-kernel-helper: 7.2-2
pve-kernel-5.13: 7.1-9
pve-kernel-5.4: 6.4-11...
Hello,
I have a quick question: what does the backup job actually do for VM 102 if you disable it from backups?
Below I see my two VMs, 100 and 102.
100 uses stop mode and gets backed up OK.
102 is disabled, but it shows as if something gets written?
I have highlighted in bold below where I am...
Hello,
I am testing a couple of different setups for a new drive I installed. First I created a Directory for the drive via the GUI and named it backups.
Then I wiped the drive, created LVM, and then wiped the drive again.
Then I went back to create a Directory and named it backups again, but now it shows as...
Hello, I have another drive set up as a directory on Proxmox VE 7.1.
I planned on using this drive as a backup drive; it is an HDD (not an SSD).
The main server has two 1 TB SSDs in RAID.
here
proxmox-ve: 7.1-1 (running kernel: 5.13.19-6-pve)
pve-manager: 7.1-12 (running version: 7.1-12/b3c09de3)...
Separate Cluster Network
When creating a cluster without any parameters, the corosync cluster network is generally shared with the web interface and the VMs' network. Depending on your setup, even storage traffic may get sent over the same network. It’s recommended to change that, as corosync is...
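The recommendation above can be applied at cluster creation time by putting the corosync link on its own network (a sketch; the 10.10.10.x addresses are hypothetical placeholders for a dedicated cluster network):

```shell
# Create the cluster with corosync link0 bound to an address on a
# dedicated network, separate from the web interface and VM traffic.
pvecm create my-cluster --link0 10.10.10.1

# On a joining node, point link0 at that node's own address
# on the same dedicated network.
pvecm add 10.10.10.1 --link0 10.10.10.2
```

Keeping corosync off the storage and VM networks matters because its latency requirements are strict; congestion on a shared link can cause cluster nodes to lose quorum.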
I am running the latest version of Proxmox 7.1,
and every time I look at netin in the screenshot below I see spikes roughly every 7 to 10 minutes.
Just wondering, is this normal?
It seems to be the same when looking at my VMs under this node.
How can I find out what is creating the spikes...
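One way to see what is behind periodic spikes like this is to watch the bridge interface live while a spike happens (a sketch; vmbr0 is the usual default bridge name on PVE, adjust to your setup):

```shell
# Install a live per-connection traffic monitor on the node.
apt install iftop

# Watch traffic on the bridge; spikes every 7-10 minutes often turn
# out to be periodic jobs (backups, replication, update checks).
iftop -i vmbr0
```

If the spikes line up with a schedule, cross-checking the timestamps against the node's task log and cron/systemd timers usually identifies the job.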
I had TFA enabled, then wanted to create a cluster to add another node in my primary datacenter. So I started in the GUI, created a cluster, copied the join information, and went to the 2nd node to add it, but it would not let me because TFA was enabled... so I went to TFA, disabled it via the GUI, then logged out...
This video was the perfect solution for me.
Just thought I would share it with others in case they need to set up their PVE port 8006 with a certificate via Cloudflare.