Depends on the SD cards used. VMware uses the card only as a boot image provider and runs fully from RAM.
With VMware versions 7.0.0 through 7.0.2 there was an issue (the scratchpad area was assigned to the SD card) which quickly (2-3 months) destroyed SD cards...
It's probably not a good idea. SD cards are OK if you want to boot a recovery environment, but not if you're running a server 24/7.
You can do things to mitigate writes like log2ram, sending syslog to a central instance, noatime, etc... but...
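The noatime part, for example, is just a mount option tweak; a minimal sketch, assuming an ext4 root on the usual single-partition layout (the UUID is a placeholder):

# /etc/fstab - add noatime to the root filesystem to skip access-time writes
UUID=your-root-fs-uuid  /  ext4  defaults,noatime,errors=remount-ro  0  1

# apply it without rebooting
mount -o remount,noatime /

That only reduces the write volume, though; it doesn't change the card's endurance.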
Thanks guys. I appreciate the potential workaround suggestions.
I had already come to the conclusion that I'm gonna lose two capacity drives just to the OS.
You've confirmed.
What's up Neutron? It's been a minute. I've been very busy. Too...
For creating a backup of the whole node1, how would you recommend I do that? I'm pretty familiar with backups inside of Prox, but doing it outside would be similar to a system image, correct? I'm reading about Proxmox Backup Server right now, do you...
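Something like this is what I had in mind for the config side at least, if that even counts; just a rough sketch using the standard PVE paths, not a real image:

# archive the node-level config somewhere off the box
tar czf /root/node1-config-$(date +%F).tar.gz /etc/pve /etc/network/interfaces /etc/hosts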
EDIT: The answer is NO; the Proxmox cluster and Ceph Public networks can be air-gapped.
When I put the Ceph Public network on a dedicated VLAN that is NOT routed, Ceph fails with "Got timeout(500)".
What needs to talk to Ceph?
Frustrating, as all the documents and...
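For reference, the relevant part of my ceph.conf looks roughly like this (the subnets here are examples, not my real ones):

# /etc/pve/ceph.conf (excerpt)
[global]
    public_network  = 10.10.20.0/24   # MONs, PVE nodes, and any RBD/CephFS clients live here
    cluster_network = 10.10.30.0/24   # OSD replication/heartbeat traffic only

As long as every node that consumes Ceph storage has a leg in the public network, neither network needs to be routed anywhere else.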
Is this documentation page https://pve.proxmox.com/wiki/Time_Synchronization still up to date? Based on the upgrade 8 to 9 docs, the servers should be placed elsewhere: "local changes you might want to move them out of the global config".
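If I'm reading the upgrade notes right, that would mean something along these lines; the file name below is my own choice, not from the docs:

# /etc/chrony/sources.d/custom-ntp.sources
server ntp1.example.com iburst
server ntp2.example.com iburst

# pick up the new sources without a full restart
chronyc reload sources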
Again, I don't disagree with the premise. Not having to run multiple API calls, or stitch data together for basic things, is good.
This does not seem to track with the actual output. A quick check shows there is no VMID in:
/nodes/pve-2/qemu/3000/snapshot...
I would migrate the pool to an actively-cooled HBA Card in IT mode, and rebuild it as a 6-disk raidz2. You do not want large hard drives in raid5 - when a disk fails, the whole pool is at risk during resilver. RAIDZ2 gives you a whole extra disk...
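If you go that route, the rebuild itself is a one-liner; a sketch with placeholder pool and device names (use your own /dev/disk/by-id/ paths):

# recreate as a 6-disk raidz2 (ashift=12 for 4K-sector drives)
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
  /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6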
I happen to agree with the OP that this behavior is inconsistent; if the status/current responses do contain the VMID, then the agent/get-fsinfo should also. I fully understand that the latter is being retrieved through a VM agent (that is not...
Sorry if troubleshooting is not a useful approach; in your case... I would research technical documentation or wait until someone more experienced with this exact issue comes along. I don't consider it rude when you state it is not an applicable...
Steve.
It seems I will build with 5 capacity SATA SSDs and 1 enterprise MU SAS SSD for DB/WAL.
I'll read through the device classes thing. I scanned over it previously. Thx.
Yes, the cache tiering.
I meant setting up pools by device class, as in https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#pve_ceph_device_classes. The DB/WAL generally goes on an SSD or faster disk in front of an HDD OSD. If you share a...
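If it helps, the moving parts boil down to something like this (the device paths and the pool/rule names are just examples):

# create an OSD whose DB/WAL lives on the faster SAS SSD
pveceph osd create /dev/sdb --db_dev /dev/sdg

# restrict a pool to the ssd device class via a dedicated CRUSH rule
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set mypool crush_rule ssd-only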
Hi @soufiyan, welcome to the forum.
I think the expectation is that you already know the $vmid. You can always do something like this:
pvesh get /nodes/pve-2/qemu/$vmid/agent/get-fsinfo --output-format json | jq --argjson vmid "$vmid" '...
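Spelled out, one way to complete that (not necessarily the exact filter from the snippet above, just a sketch that tags whatever get-fsinfo returns with the ID):

pvesh get /nodes/pve-2/qemu/$vmid/agent/get-fsinfo --output-format json \
  | jq --argjson vmid "$vmid" '{vmid: $vmid, fsinfo: .}'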
You're right, the jq trick works. But that's my whole point - it's a workaround, not a solution.
When I'm managing 30+ VMs, I shouldn't have to manually stitch data together that the API already knows. Every other endpoint gives me the VMID -...
I'm converting an old ESXi vSAN cluster to Ceph.
ESXi ran on a Dell IDSDM, which is a RAID 1 pair of SD cards in a device that makes it look like one card.
I want all the SSD disks I can use for Ceph, so I want to deploy PVE on the old SDs.
I did...