Search results

  1. Shrinking a VM within Zpool

    To make it easier and safer, add new disk(s) on the same or a different storage backend, attach them to the VM, boot with a live ISO (HBCD, UBCD, etc.), and clone everything to the new drives while resizing the partitions. Then you can delete the old drives and boot from the new ones. A longer process, but safer IMHO. Edit: only...
  2. Encrypting Proxmox OS disk - Encrypting data at rest

    Thanks. I was wondering if anything had been added to the native installer to make this possible. We try to keep our installs as vanilla as possible, but this is definitely a good workaround.
  3. Encrypting Proxmox OS disk - Encrypting data at rest

    Just bumping this up to see if there are any updates on this. Also, any progress on vTPM?
  4. [SOLVED] Questions about Proxmox Ceph Cluster

    Not to hijack the thread, but is the Proxmox cluster network still used for migration on shared storage? If corosync is on a dedicated 1 Gb ring, will migration on shared storage have a huge performance impact? Can the migration network be switched to the VM traffic network, or will that introduce performance...
  5. Slow network speed on VMs, but not on host

    Use the VirtIO NIC. You seem to be hitting the gigabit limit of the E1000.
  6. Live migration fails for high memory VMs

    Any update on this? Is it a Windows- or Linux-based VM? I'm just curious whether you found the solution.
  7. Proxmox VE Ceph Benchmark 2018/02

    We're on Standard, which makes staying compliant quite a job, so we will definitely invest in Datacenter once our workload goes up. Thanks for the tip on the MS licensing, but it always makes my head spin, lol. We'd probably be fine with 3 nodes too, but the hardware is on the older...
  8. Proxmox VE Ceph Benchmark 2018/02

    Once again, an excellent piece of information. Those are some beefy servers you've got there. I'm trying to repurpose 5 older dual E5-26xx R430s with 256 GB RAM, and hopefully they'll do the trick, as our workload isn't too crazy (around 20 VMs, 99% Windows-based) and we hope to have some room in case we double it...
  9. Proxmox VE Ceph Benchmark 2018/02

    Thank you for taking the time to run those tests and provide a very detailed answer; it's great information. I meant the specs of the node hardware, CPU/RAM. Sorry for not making that clear. Also, is your 10 Gb mesh used for both the private and public networks, or are you splitting them? I'm thinking...
  10. Proxmox VE Ceph Benchmark 2018/02

    Hi Stephan. Is this a hyper-converged setup, or just Ceph managed through Proxmox? The list I linked shows that two Ceph users experience a transfer-rate drop while copying 2 GB+ files, around the 1-1.5 GB mark (a significant drop to 25 MB/s), which only happens on Windows guests and not Linux. Would you be able...
  11. Proxmox VE Ceph Benchmark 2018/02

    Just wondering if anyone is experiencing this issue on their all-flash Ceph pools, where a Windows VM is slow during copy operations. Thanks in advance for any insight. I plan on implementing a Ceph cluster where all the VMs on RBD will be Windows-based, and that post has me a little worried...
  12. Recover a file from an LXC container that won't start
  13. Proxmox VE 6.2 released!

    Hi, I noticed that after the upgrade, a container (Debian 8) I had running FreeSWITCH (FusionPBX) on stopped starting services. I was able to replicate this after spinning up the latest Debian 9 template and installing FusionPBX. Rebooting the container will not start the FreeSWITCH service. systemctl...
  14. Proxmox VE 6.2 released!

    That's exciting. Thanks for the heads-up. Are we talking months or years? Sorry, just being nosy.
  15. Proxmox VE 6.2 released!

    That's great! Have the backup and restore times improved as well?
  16. ceph 5 node setup planning

    Thanks for your reply, t.lamprecht. I didn't want to hijack the other thread, so I figured I'd post this one with the more detailed HW info you inquired about in your reply. The OSDs will most likely be Micron 5300 PRO 1.92 TB drives; I'm open to suggestions, but my research so far has narrowed it down to this model. I...
  17. ceph 5 node setup planning

    Hi, I plan on building a 5-node Ceph setup with Proxmox corosync in a mesh over a 1 Gb quad NIC, Ceph public/private in a mesh over a 25 Gb quad NIC, and the VM access network (also used as the 2nd ring for Proxmox corosync) over a 25 Gb NIC connected to redundant 25 Gb switches. We don't see ourselves expanding past 5 nodes...
  18. Getting confused with different networks for Ceph and proxmox

    So in a perfect setup one would need: 1 network for Ceph public, 1 network for Ceph private (the Ceph cluster network), 1 or 2 networks for the Proxmox cluster (corosync), and 1 network for VM access and management (which could double as a 2nd ring for the Proxmox cluster). Then, for each network, a set of two switches for...
  19. Updates through another interface

    Thank you, wolfgang. I was able to add a USB NIC dedicated to updates. Just disabling the gateway on the main interface for the duration of an update works well for us. Thanks again for your help!
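
The network separation discussed in results 17 and 18 (dedicated corosync ring, Ceph public/private networks, and a VM access/management network carrying the 2nd corosync ring) could be sketched as a Debian-style `/etc/network/interfaces` fragment on a Proxmox node. This is an illustrative sketch only: the interface names, subnets, and addresses below are assumptions, not taken from the posts.

```
# Illustrative sketch -- interface names and subnets are assumed, not from the posts.

auto eno1
iface eno1 inet static
    address 10.10.10.1/24        # dedicated corosync ring0 (1 Gb)

auto eno2
iface eno2 inet static
    address 10.10.20.1/24        # Ceph public network (25 Gb)

auto eno3
iface eno3 inet static
    address 10.10.30.1/24        # Ceph private/cluster network (25 Gb)

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24      # VM access + management, also corosync ring1
    gateway 192.168.1.1
    bridge-ports eno4
    bridge-stp off
    bridge-fd 0
```

Keeping corosync on its own interface (with the management bridge as ring1) isolates cluster latency from Ceph replication and VM traffic, which is the point raised in both threads.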