I tried everything here and still have the problem.
The NFS network is fine; I tested it and found no issues. After boot the NFS share is mounted and I can browse the directory. There are no NFS restrictions on the private network, and it worked fine with 8. When I try to start an LXC it just hangs and says "not responding"...
Hi,
In case it helps, here is the procedure that I followed and that works.
Now I can connect to all the VMs/LXCs on my Proxmox server from other ZeroTier-connected devices.
a) In ZeroTier, configure the managed routes:
192.xxx.xxx.xxx/24 (LAN...
Unfortunately, I also got caught by this. It's affecting all unprivileged containers with bind mounts. I caught mine by trying to migrate a container after updating.
It doesn't matter whether the mount is RO or not; my NFS share is RW. Current workaround...
Aha. I think the right fix is to change the Proxmox block-device attribute detection so that qemu-img uses zeroinit if the thin pool was created with the zero flag attribute, and otherwise does not use zeroinit. That way it can be...
It looks like you run into the bug reported here: https://bugzilla.proxmox.com/show_bug.cgi?id=7271
Feel free to chime in there!
To temporarily downgrade pve-container you can run apt install pve-container=6.0.18
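As a sketch of that temporary downgrade (the package name and version come from the post above; the hold step is an optional extra so a routine upgrade does not pull the broken version back in):

```shell
# Downgrade pve-container to the version mentioned above
apt install pve-container=6.0.18

# Optionally pin it so "apt upgrade" doesn't reinstall the broken version
apt-mark hold pve-container

# Later, once the bug is fixed, release the hold and upgrade normally
apt-mark unhold pve-container
apt full-upgrade
```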
I'm not using QCOW; this is all raw. The source image is a raw 3 GB image that you can launch directly without conversion.
I finally understand the root of the problem. LVM does something interesting when you make a thin pool with --zero...
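To make the --zero behavior concrete, here is a minimal sketch for creating thin pools with zeroing enabled or disabled and inspecting the flag afterwards (the VG and pool names are hypothetical; adjust to your setup):

```shell
# Create a thin pool with zeroing enabled (-Z y) -- hypothetical names pve/data
lvcreate -L 100G -Z y -T pve/data

# Create one with zeroing disabled, for comparison
lvcreate -L 100G -Z n -T pve/testpool

# Inspect the zero flag on the pools in VG "pve"
lvs -o lv_name,zero pve
```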
Veeam still has a few small bugs there, so at the moment I'm not migrating with Veeam at all anymore.
The restore did indeed run somewhat more stably with 12.3. Unfortunately, Veeam is not very forthcoming about where the problem lies. What the worker VM does is also...
You haven't really established that this is a "storage" issue at all. Your problem description
suggests the issue is with how your software interacts with the decoding, not so much with the storage; it COULD be the storage, but you haven't posted...
So, I did some more testing today.
No matter what my vmbr1 config looks like, as soon as VLAN-aware is enabled, Proxmox blocks the VLAN traffic so that it never reaches the OPNsense.
My approach was now as follows.
On Proxmox, the...
This is the expected reaction for anyone who hasn't been on the other side of this.
REMEMBER, the PRIORITY of any change in systems is, in order:
- to provide AT LEAST a minimum of service requirements, as delineated in the RFP.
- to cause as...
These servers make sense precisely for such niche applications. But a mid-sized company with only a few admins will prefer the simple setup. vSphere gets a lot of things right; on the other hand, with Proxmox I get significantly more I/O out of DB servers.
For the...
Hi @katti,
You do not need storage snapshots for Veeam backup of any type of storage pool in PVE.
1 Yes, you can have snapshots with FC SAN by using new tech-preview feature...
Hello,
I am currently running a Proxmox VE 9.1.4 cluster with two nodes and a QDevice for quorum. My shared storage is an HP 3PAR SAN connected via Fibre Channel (FC).
Current Configuration:
Storage Type: LVM (Shared)
Connection: Fibre...
TBH, layering QCOW on a newly added LVM extent without manually zeroing it first seems like a recipe for trouble. This approach is unpopular because it slows provisioning, but it is the safer option.
When QCOW is placed on a filesystem, zeroing...
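A minimal sketch of the "zero the extent first" approach described above (the LV path is hypothetical; blkdiscard -z issues write-zeroes requests where the device supports them, with dd as the slow fallback):

```shell
# Hypothetical logical volume -- adjust to your own LV path
LV=/dev/pve/vm-100-disk-0

# Zero the volume before layering a QCOW image on it; slow but safe
blkdiscard -z "$LV" || dd if=/dev/zero of="$LV" bs=1M oflag=direct status=progress
```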
Ah yeah for a minute I thought --zero was an option for the new thin provisioned volume, but apparently it's a flag for the pool itself only. And I have that set already (proxmox sets that by default). That's why this is such a strange bug...
Hi @medichugh,
The best way is to examine the specified files and remove one of the duplicate references, so that only a single reference remains.
Cheers