Latest activity

  • tcabernoch
    tcabernoch reacted to BD-Nets's post in the thread SD card deployment with Like.
    Depends on the SD cards used. VMware uses the card only as a boot image provider and runs fully from RAM. With VMware versions 7.0.0 through 7.0.2 there was an issue (scratchpad area assigned to the SD card) which quickly (2-3 months) destroyed SD cards...
  • tcabernoch
    tcabernoch reacted to Kingneutron's post in the thread SD card deployment with Like.
    It's probably not a good idea. SD cards are OK if you want to boot a recovery environment, but not if you're running a server 24/7. You can do things to mitigate writes like log2ram, sending syslog to a central instance, noatime, etc... but...
  • tcabernoch
    tcabernoch replied to the thread SD card deployment.
    Thanks guys. I appreciate the potential workaround suggestions. I had already come to the conclusion that I'm gonna lose two capacity drives just to the OS. You've confirmed. What's up Neutron? It's been a minute. I've been very busy. Too...
  • R
    For creating a backup of the whole node1, how would you recommend I do that? I'm pretty familiar with backups inside of Prox, but doing it outside would be similar to a system image, correct? I'm reading about Proxmox Backup Server right now, do you...
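    A minimal sketch of a host-level backup with proxmox-backup-client, assuming a reachable Proxmox Backup Server; the repository string (user, host, and datastore name) is a placeholder:

        # Back up the node's root filesystem as a pxar archive to a PBS datastore.
        # Repository format is <user>@<realm>@<pbs-host>:<datastore>.
        proxmox-backup-client backup root.pxar:/ \
            --repository root@pam@pbs.example.com:node-backups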
  • R
    Results from pvesm status:
    Name      Type  Status    Total       Used       Available   %
    agents    dir   active    3512847472  489273184  3023574288  13.93%
    external  dir   disabled  ...
  • T
    EDIT: The answer is NO, the Proxmox cluster and Ceph public networks cannot be airgapped. When I put the Ceph public network on a dedicated VLAN that is NOT routed, Ceph fails with "Got timeout (500)". What needs to talk to Ceph? Frustrating, as all the documents and...
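    For context: Ceph monitors, OSDs, and every client (including the PVE management stack) talk over the public network, so each node needs an interface on it; only the cluster network can be fully isolated. A sketch of the relevant keys, with hypothetical subnets:

        # /etc/pve/ceph.conf (excerpt) -- subnets are placeholders
        [global]
            # mons + clients + PVE GUI use this; it must stay reachable
            public_network  = 192.168.10.0/24
            # OSD replication/heartbeat only; this one CAN be a non-routed VLAN
            cluster_network = 10.10.10.0/24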
  • D
    Is this documentation page https://pve.proxmox.com/wiki/Time_Synchronization still up to date? Based on the Upgrade 8 to 9 docs, the servers should be placed elsewhere: "local changes you might want to move them out of the global config".
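    A sketch of what the upgrade notes appear to mean: on a chrony-based PVE host, local server entries can live in a drop-in sources file instead of the global config. The file name and server below are hypothetical:

        # /etc/chrony/sources.d/local.sources -- picked up by the default
        # 'sourcedir /etc/chrony/sources.d' directive in Debian's chrony.conf
        server ntp1.example.com iburst

        # apply without restarting the daemon
        chronyc reload sources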
  • bbgeek17
    Again, I don't disagree with the premise. Not having to run multiple API calls, or stitch data together for basic things, is good. This does not seem to track with the actual output, though. A quick check shows there is no VMID in: /nodes/pve-2/qemu/3000/snapshot...
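    That check can be reproduced roughly like this, listing the keys each snapshot entry actually carries (node and VMID taken from the post):

        # Dump the snapshot list and show the fields of the first entry;
        # per the post, no 'vmid' key is among them.
        pvesh get /nodes/pve-2/qemu/3000/snapshot --output-format json | jq '.[0] | keys'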
  • K
    I would migrate the pool to an actively-cooled HBA card in IT mode and rebuild it as a 6-disk RAIDZ2. You do not want large hard drives in RAID5: when a disk fails, the whole pool is at risk during the resilver. RAIDZ2 gives you a whole extra disk...
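    A minimal sketch of that rebuild, assuming six drives addressed by stable by-id paths (pool and device names are placeholders):

        # 6-disk RAIDZ2: survives two simultaneous disk failures.
        # ashift=12 matches 4K-sector drives.
        zpool create -o ashift=12 tank raidz2 \
            /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
            /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
            /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6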
  • S
    I happen to agree with the OP that this behavior is inconsistent; if the status/current responses do contain the VMID, then the agent/get-fsinfo should also. I fully understand that the latter is being retrieved through a VM agent (that is not...
  • R
    Sorry if troubleshooting is not a useful approach in your case... I would research technical documentation or wait until someone more experienced with this exact issue comes along. I don't consider it rude when you state it is not an applicable...
  • K
    Kingneutron reacted to BD-Nets's post in the thread SD card deployment with Like.
    Depends on the SD cards used. VMware uses the card only as a boot image provider and runs fully from RAM. With VMware versions 7.0.0 through 7.0.2 there was an issue (scratchpad area assigned to the SD card) which quickly (2-3 months) destroyed SD cards...
  • K
    Kingneutron replied to the thread SD card deployment.
    It's probably not a good idea. SD cards are OK if you want to boot a recovery environment, but not if you're running a server 24/7. You can do things to mitigate writes like log2ram, sending syslog to a central instance, noatime, etc... but...
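    Two of those mitigations sketched out; the UUID and syslog host are placeholders:

        # /etc/fstab -- mount root with noatime so reads don't trigger
        # access-time metadata writes
        UUID=xxxx-xxxx  /  ext4  defaults,noatime,errors=remount-ro  0  1

        # /etc/rsyslog.d/10-forward.conf -- ship logs to a central syslog
        # host instead of the local card ('@@' = TCP, '@' = UDP)
        *.* @@syslog.example.com:514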
  • B
    BD-Nets replied to the thread SD card deployment.
    Depends on the SD cards used. VMware uses the card only as a boot image provider and runs fully from RAM. With VMware versions 7.0.0 through 7.0.2 there was an issue (scratchpad area assigned to the SD card) which quickly (2-3 months) destroyed SD cards...
  • G
    I happen to agree with the OP that this behavior is inconsistent; if the status/current responses do contain the VMID, then the agent/get-fsinfo should also. I fully understand that the latter is being retrieved through a VM agent (that is not...
  • tcabernoch
    tcabernoch replied to the thread CEPH cache disk.
    Steve. It seems I will build with 5 capacity SATA SSDs and 1 enterprise MU SAS SSD for DB/WAL. I'll read through the device classes thing. I scanned over it previously. Thx.
  • tcabernoch
    tcabernoch reacted to SteveITS's post in the thread CEPH cache disk with Like.
    Yes, the cache tiering. I meant set up pools by device class, as in https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#pve_ceph_device_classes. The DB/WAL is generally on an SSD or faster disk in front of an HDD OSD. If you share a...
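    The device-class setup on that wiki page boils down to one CRUSH rule per class plus pinning pools to it; a sketch with a hypothetical pool name:

        # Replicated CRUSH rule that only selects SSD-class OSDs...
        ceph osd crush rule create-replicated replicated_ssd default host ssd
        # ...then assign a pool to that rule
        ceph osd pool set vm-pool crush_rule replicated_ssd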
  • S
    Hi @soufiyan, welcome to the forum. I think the expectation is that you already know the $vmid. You can always do something like this: pvesh get /nodes/pve-2/qemu/$vmid/agent/get-fsinfo --output-format json | jq --argjson vmid "$vmid" '...
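    The jq filter above is truncated; one plausible completion, which simply wraps the agent response together with the already-known VMID (the wrapper shape is an assumption, not the original poster's filter):

        vmid=3000   # hypothetical VMID
        pvesh get /nodes/pve-2/qemu/$vmid/agent/get-fsinfo --output-format json \
            | jq --argjson vmid "$vmid" '{vmid: $vmid, fsinfo: .}'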
  • S
    You're right, the jq trick works. But that's my whole point - it's a workaround, not a solution. When I'm managing 30+ VMs, I shouldn't have to manually stitch data together that the API already knows. Every other endpoint gives me the VMID -...
  • tcabernoch
    I'm converting an old ESXi VSAN cluster to CEPH. ESXi ran on a Dell IDSDM, which is a RAID 1 pair of SD cards in a module that presents them as a single card. I want all the SSD disks I can use for CEPH, so I want to deploy PVE on the old SDs. I did...