It's recommended to use apt full-upgrade rather than apt upgrade; doing the full upgrade alone may already resolve your issue. Does this also happen for new containers?
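A minimal sketch of the usual sequence (assuming the standard Proxmox repositories are configured):

apt update          # refresh package lists first
apt full-upgrade    # upgrade and allow new packages to be installed or removed as needed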
Today I had the issue again with 0.1.271.
The strange thing is that it has a clean exit code.
C:\Windows\System32>sc qc QEMU-GA
[SC] QueryServiceConfig ERFOLG
SERVICE_NAME: QEMU-GA
TYPE : 10 WIN32_OWN_PROCESS...
The GUI seems as cryptic as the console command. All I want to do is copy vm/106 from Backup to geekscove within the same datastore.
When I try the GUI and do a pull, the only item that shows up is geekscove, not Backup.
Thanks ;-). No, there was no other drive plugged in yet. I'll just plug in a 2TB drive for testing and try to mount it with your command.
Hi,
Why not just transfer the license to the VM once it's on PVE?
See: https://help.mikrotik.com/docs/spaces/ROS/pages/18350234/Cloud+Hosted+Router+CHR#CloudHostedRouter,CHR-LicenseTransfer
Best regards,
@aaron - I placed it on ceph-storage.
What configuration should I check or change on the Ceph pool, the VM config, etc. to make sure that I'm using proper settings?
I read about KRBD on the Ceph pool, write cache on the VM, disabling RAM ballooning and all the others...
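For reference, a hedged sketch of how those three settings are applied from the PVE shell; the VM ID 100, the storage name ceph-pool and the disk volume are placeholders, not values from this thread:

pvesm set ceph-pool --krbd 1                                   # use the kernel RBD client for this pool
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback     # enable write-back caching on the disk
qm set 100 --balloon 0                                         # disable RAM ballooning for this VM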
Well, then the question is how fast that NAS can write the backup data. As @spirit mentioned, a slow backup target can have an impact on the running VM. The fleecing option in the backup job options can help if you place that on a fast storage in...
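Not from this thread, but as an illustration: fleecing can also be set per run on the vzdump command line; a minimal sketch with placeholder storage names:

vzdump 106 --storage nas-backup --fleecing enabled=1,storage=local-zfs   # fleecing image lives on fast local-zfs, backup goes to the NAS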
I am new to this forum.
It would be nice if people share their resolved configurations.
My two cents.
wifi setup
iface enp4s0 inet manual
auto vmbr0
iface vmbr0 inet static
address 10.10.5.80/24
gateway 10.10.5.1...
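For comparison, a complete minimal bridge stanza in /etc/network/interfaces; the bridge-ports line (assuming enp4s0 is the physical port) is the part most often missing:

auto vmbr0
iface vmbr0 inet static
    address 10.10.5.80/24
    gateway 10.10.5.1
    bridge-ports enp4s0
    bridge-stp off
    bridge-fd 0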
You have the VLAN* interfaces there.
Unfortunately I don't have OPNsense, so no 1:1 guide here, but this is what I mean:
You have the interfaces of type VLAN (VLAN*)... You now need to assign these to a new interface, e.g. name...
@devaux
The public and cluster networks are currently using the same 10Gb NIC and the same VLAN.
MTU on NIC and switch are at the default 1500 right now.
Everything is connected to one switch, without MLAG, and yes, the VMs are also connected to the same aggregation switch...
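If the two networks were ever split later, the relevant knobs live in the Ceph config; a sketch with placeholder subnets, not the actual addressing here:

[global]
    public_network  = 10.10.5.0/24    # client and monitor traffic
    cluster_network = 10.10.6.0/24    # OSD replication traffic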
Yes it would give even better performance. In that case I would need to use two disks and I am running low on empty disk slots. So for now I will use a single disk as caching disk.
Aha, sorry, I didn't get the part about the caching. If this works for you, that's great. I would still expect that a special device would give a further speedup.
I have added a cache device, not a special device. I found the explanation here: https://klarasystems.com/articles/openzfs-understanding-zfs-vdev-types/
Feel free to comment on this.
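For context, this is roughly how the two vdev types differ when attaching them; pool and device names below are placeholders:

zpool add rpool cache /dev/sdX                     # L2ARC cache device: read cache only, safe to lose
zpool add rpool special mirror /dev/sdY /dev/sdZ   # special vdev: stores metadata; mirror it, since losing it loses the pool

A cache device can also be detached again with zpool remove, which makes it the low-risk choice when disk slots are scarce.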
Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older...
I see that PVE has the "pvesm" command; is there an equivalent command on Proxmox Backup Server to retrieve this usage %, as the web UI shows it?
UPDATE: I actually don't need a separate script for PBS, because PVE already has the PBS storage for space monitoring.
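A sketch of how that percentage could be read from the PVE side via the API CLI instead; the node name pve1 and storage ID pbs-store are placeholders, and jq is assumed to be installed:

pvesh get /nodes/pve1/storage/pbs-store/status --output-format json \
  | jq '.used / .total * 100'   # usage in percent, as the web UI shows it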
The makers of the helper scripts have surely taken that into account *cough*. Have a look at what they say there about updates.
I do wonder, though, what this topic is doing in the PBS forum.
I'm trying to mount a CephFS from an external Ceph cluster for backups.
I can mount the CephFS with a path capability of /, but not with /some/path.
It gives me:
Feb 04 14:19:32 proxmox-bl2 kernel: libceph: client1148949 fsid...
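For reference, a path-restricted CephFS client is usually created on the external cluster roughly like this; the filesystem name cephfs, the client name client.pvebackup and the path are placeholders, not the actual values from this setup:

ceph fs authorize cephfs client.pvebackup /some/path rw   # grants caps limited to /some/path
ceph auth get client.pvebackup                            # verify the resulting caps

Note that the path named in the cap must already exist inside the CephFS before a client restricted to it can mount it.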