I don't think sysstat* is necessary on the host. Accidentally or not, I do have the sysstat package installed on my host, but according to the dpkg/apt logs, I installed it many months after installing PVE. And its services are inactive...
Maybe it's blocked by your network provider? I just downloaded the ISO from that link and it worked fine; the download took about 30 seconds. Try going to a library or some other public place with WiFi and try it from there.
I don't see the space in the screenshots now. Is that reported by MXToolbox when a mail is sent to it, or is this the DKIM lookup where the domain and selector are pasted in? (In that case I would make sure again...
Hey everyone,
I'm running into an unfortunate problem that I can't seem to resolve. When trying to download Proxmox from https://enterprise.proxmox.com/iso/proxmox-ve_9.0-1.iso I keep getting a failure. The first time I tried, it gave an error...
Hi @theuken , welcome to the forum.
You can't. NetApp is not a suitable target for the ZFS-over-iSCSI scheme.
I am not familiar with Virtucache, so I can't provide any advice there.
The out-of-the-box solution is to use LVM with iSCSI. You may find this...
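For reference, an LVM-over-iSCSI setup in Proxmox is defined as two entries in /etc/pve/storage.cfg: the iSCSI storage for the LUN and an LVM storage on top of it. A rough sketch (the portal address, target IQN, volume-group name, and base LUN path are all placeholders):

```
# /etc/pve/storage.cfg -- sketch, all values are placeholders
iscsi: netapp-iscsi
        portal 192.168.1.50
        target iqn.1992-08.com.netapp:sn.example
        content none

lvm: netapp-lvm
        vgname vg_netapp
        base netapp-iscsi:0.0.0.scsi-example
        shared 1
        content images
```

With `shared 1` set, all cluster nodes can use the same LVM volume group over multipathed iSCSI.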
We're currently preparing to migrate our VMware environment to Proxmox.
The bulk of our storage is based on large NetApp iSCSI targets (multi-path), using Virtucache on the ESXi hosts (SSD read/write cache) for acceleration.
Performance, stability &...
We recently uploaded the 6.17 kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.14, but 6.17 is now an option.
We plan to use the 6.17 kernel as the new default for the Proxmox VE 9.1 release later in...
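If you want to opt in early on Proxmox VE 9, the new kernel series can be installed via its meta-package (the package name here is assumed from the usual proxmox-kernel-X.Y naming pattern):

```shell
# Opt in to the 6.17 kernel series on Proxmox VE 9
# (package name assumed from the proxmox-kernel-X.Y pattern)
apt update
apt install proxmox-kernel-6.17
# Reboot to start using the new kernel
reboot
```

The meta-package keeps pulling in newer 6.17 point releases with regular updates.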
On the import commands:
/dev/disk/by-id/2818162679070632605 needs to be /dev/disk/by-id/ 2818162679070632605
and/or /dev/disk/by-id/Backups should be /dev/disk/by-id/ Backups
(Note the space: -d takes the directory to scan, and the pool name or ID is a separate argument.)
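In other words, with `zpool import` the `-d` option takes a directory to scan for pool labels, and the pool name or numeric pool ID follows as its own argument:

```shell
# Scan /dev/disk/by-id/ for pool labels, then import the pool
# by name ("Backups") or by its numeric pool ID
zpool import -d /dev/disk/by-id/ Backups
zpool import -d /dev/disk/by-id/ 2818162679070632605
```

Importing via /dev/disk/by-id/ also has the nice side effect that the pool remembers stable device paths instead of /dev/sdX names.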
If you only ever work with directory storage (which I like), why not do a Debian advanced install, where it's possible to install without LVM at all, and afterwards upgrade to PVE? Otherwise, regarding your question, you would e.g. boot a live Linux, but again you have to...
Was the mirror zpool created in the PVE (hypervisor) or in the PBS (VM)?
If in the VM, then the zpool should NOT be mounted in the PVE. That's the way passing-through works: the hypervisor doesn't use / mount / touch / mess ;-) the device.
Note...
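For context, passing a whole physical disk through to a VM is typically done with `qm set` (the VM ID, bus slot, and by-id path below are placeholders):

```shell
# Attach the physical disk to VM 100 as scsi1
# (VMID and the by-id path are placeholders)
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL
```

After that, the guest owns the disk and the hypervisor should not import or mount anything on it.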
Awesome!! I successfully enabled VLANs on the OpenWrt device (Flint 2), read your comment very, very carefully, configured VLANs on the specific devices that I wanted segregated, and left the rest alone as desired. Thank you for your valued input!
I didn't spot any issues with 6.17.1-1-pve - neither in dmesg, nor during ~5 hours of usage.
According to my central monitoring all values are within normal range for the cluster.
Systems:
EPYC 7402p (Zen2)
EPYC 9474F (Zen4)
VMs:
mostly OpenBSD...
Interesting, I never thought of that. How could one move an entire root/PVE install to another storage without a complete reinstall and reconfiguration of the current system?
Nice for learning, but is that solution really what you want? With a combined sda+sdb LVM your chance of failure doubles: if sda fails, sdb is useless, and if sdb fails, sda is useless. Better to save the data from the small disk and reinstall PVE there...
Just wanted to share my experience with installing Proxmox Backup Server as a VM in Unraid, as I encountered some problems for which I wasn't able to find any answers on the internet.
My setup is a baremetal Unraid 6.11.5, another baremetal Proxmox VE...
This is 2 nodes connected via the clustering function in Proxmox. I can access the first node if the 2nd is offline, but if the main node1 is offline, node 2's 2FA glitches out sadly :/ Do you think this will affect the procedure?
In my case I didn't have the correct repository configured to allow the installation of the headers.
Here are the steps I used to resolve it.
Leaving it here for future reference:
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" >>...
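The full sequence was roughly the following. The redirect target is cut off above, so the sources.list.d filename here is my assumption, and the headers package name follows the usual pve-headers pattern:

```shell
# Add the pve-no-subscription repo
# (the target filename below is an assumption)
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  >> /etc/apt/sources.list.d/pve-no-subscription.list

# Refresh package lists and install headers for the running kernel
apt update
apt install pve-headers-$(uname -r)
```

On newer releases the headers meta-package may carry a proxmox-headers-* name instead, so check `apt search headers` if the install fails.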
Maybe create just a few mdadm RAID sets, one per JBOD, to see what happens and to get more useful error messages without hanging the full node. Once the hardware problems are resolved, you can switch back to your zpools, but as you see it's not useful for...
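As a sketch of such a test set (device names, RAID level, and disk count are placeholders to adapt per JBOD):

```shell
# Create a small test RAID set on one JBOD
# (devices and level are placeholders)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
  /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch resync progress and any error messages
cat /proc/mdstat
```

Kernel messages from failing disks will also show up in `dmesg`, which is usually where the real problem surfaces first.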
Little update: ChromeOS 8+ has the new display option "VirGL". That is a game-changer for Chrome OS Flex VMs: it solves the screen-refresh issues that the VirtIO-GPU has. VirGL delivers smooth video; even YouTube playback is...
Yes, pve-zsync seems like the best option, so much so that I mentioned it in the initial post as an option for using scripts.
The problem with this would be that other users on the system would need greater shell knowledge to perform...