Latest activity

  • H
    Hello Proxmox Devs, I know PDM is early but here are some of the top features I'd like to be added that I have not seen on the Roadmap. Hopefully others can agree on at least one of these below points. Ability to access shell/console of both...
  • fiona
    fiona replied to the thread VM Crashed had to reboot.
    Hi, you can check the host's and guest's system logs from around the time the issue happened. Please also share the output of pveversion -v and qm config ID with the numerical ID of the VM.
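The diagnostics fiona asks for can be sketched like this (run on the PVE host; `100` and the timestamps are placeholder values, not from the thread):

```shell
# Version info of the full PVE stack, to include in the report
pveversion -v

# Configuration of the affected VM (replace 100 with your VM's numerical ID)
qm config 100

# Host system log around the time of the crash (adjust the window to the incident)
journalctl --since "2024-01-01 12:00" --until "2024-01-01 13:00"
```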
  • M
    https://bugzilla.proxmox.com/show_bug.cgi?id=7305 - thanks!
  • M
Let me explain it a bit more. What I want is not exactly to separate Ceph RBD from CephFS. I want to give VMs access as Ceph clients from a different network. On proxmox nodes I have a 10GbE dual NIC. One port dedicated to Ceph (public and...
  • D
Nope, no luck so far. The node with the NFS server runs the old kernel for now. The strange thing is I didn't get the stale file handle on other nodes, only on the one that is itself exporting the shares.
  • B
    I'm not using Windows 11 in my Proxmox server
  • ghusson
Ok. What special parameters or installs did you apply to the servers that don't come from the PVE ISO installer? Did the server already work on another OS without any problems? Did you look at the hardware logs for over-temperature or a manual hard reset?
  • fiona
Hi @sdettmer, systemd versions >= 242 require nesting to be able to create Linux namespaces, which are used to isolate services. There are still distros that do not use systemd that you could use for your containers.
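For reference, nesting is a per-container feature flag in PVE; a minimal sketch, assuming a placeholder container ID of 101:

```shell
# Enable the nesting feature for container 101 (placeholder ID), so that
# systemd >= 242 inside the guest can create its own namespaces:
pct set 101 --features nesting=1

# Equivalent line in the container config (/etc/pve/lxc/101.conf):
#   features: nesting=1
```

The same toggle is available in the GUI under the container's Options > Features.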
  • EllerholdAG
EllerholdAG reacted to gfngfn256's post in the thread VM names disappear with Like.
    I don't see why you are using Veeam backup for Proxmox VM's. Use the in-house vzdump or PBS for VM backups.
  • J
Johannes S reacted to oz1cw7yymn's post in the thread Ceph cache tier alternative with Like.
  • O
    oz1cw7yymn replied to the thread Ceph cache tier alternative.
    Hi! I'm the biggest proponent of ceph, it's really an amazing product and technology. Multi-node redundancy, self-healing and rebalancing is really fantastic. Please note though that it has a very different performance profile than other...
  • B
Thanks, the crashes stopped. I am using this as a baseline; switching to VirtIO SCSI (I was avoiding it to try and avoid "VM" identifiers) rendered Windows unbootable due to a driver issue, and I didn't care enough to fix it since it's a single-purpose VM, so I...
  • H
    hilocz replied to the thread Tracking center export to file.
Added to Bugzilla: https://bugzilla.proxmox.com/show_bug.cgi?id=7304 . Please add yourself as CC; it will get more attention.
  • J
Ok, decision made, one more 4TB WD Red SA500 ordered. I paid about the same amount for it as I did in May 25 for three of them :mad:. I checked the installation process on a nested test system. It offers to build a ZFS RAID10 to install the system...
  • fiona
You can use zpool status -v and lsblk -f to check, but it really seems like the Universe storage is formatted for ZFS right now, not LVM. Again, you can wipe the disks and re-create an LVM storage using these disks if that's what you want. Proxmox...
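The wipe-and-recreate path fiona describes could look roughly like this. This is a DESTRUCTIVE sketch under assumptions: /dev/sdb and /dev/sdc are placeholder device names, and "universe" is used as the volume group and storage name:

```shell
# Inspect first: which pool owns the disks, and what signatures they carry
zpool status -v
lsblk -f

# Destroy the existing ZFS labels on the disks (placeholder device names!)
wipefs -a /dev/sdb /dev/sdc

# Re-create the disks as LVM and register the storage with PVE
pvcreate /dev/sdb /dev/sdc
vgcreate universe /dev/sdb /dev/sdc
pvesm add lvm universe --vgname universe
```

Double-check the device names against the lsblk output before wiping anything.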
  • R
    radoslav replied to the thread input/output error.
I didn't say that I don't use ZFS at all in Proxmox. ZFS is on the two 600 GB disks where the VM is located. The Linux RAID is not ZFS. I don't think the problem is the different names. I changed them everywhere and it's still the same.
  • shanreich
    If it is all happening on the same host, you could consider using vmbr interfaces without a bridge port, or SDN simple zones without a gateway. Those two should technically be the same under the hood. That way you can have a completely isolated...
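An isolated bridge without a physical port is a few lines in /etc/network/interfaces; a minimal sketch, with vmbr1 as a placeholder name:

```
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

With no bridge port, traffic on vmbr1 never leaves the host, so guests attached to it can only reach each other.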
  • D
    If you need to use the same subnet multiple times, you need to utilize VRFs to separate them on the PVE host. This functionality is currently not implemented for Simple Zones. When using NAT this way, you'd also need a way of discerning return...
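Since Simple Zones don't set up VRFs for you, the manual iproute2 equivalent could be sketched as follows (vrf-a/vrf-b, the table IDs, and vmbr1/vmbr2 are all placeholder names):

```shell
# Create two VRF devices with separate routing tables
ip link add vrf-a type vrf table 100
ip link set vrf-a up
ip link add vrf-b type vrf table 101
ip link set vrf-b up

# Enslave one isolated bridge to each VRF; the same subnet can then
# exist behind both bridges without the routes colliding on the host
ip link set vmbr1 master vrf-a
ip link set vmbr2 master vrf-b
```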