Search results

  1. Proxmox VE Ceph Benchmark 2018/02

    Yes, it is. I just copied a single file of about 6 GB where source and destination a) are the same vdisk or b) are different vdisks (on the same VM). My results: no I/O drops, "constant" rates (as constant as a Windows progress bar can be :D) with a) about 200 MB/s and b) about 350 MB/s...
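A rough Linux-side analogue of the copy test described above is a dd-based sequential write (a sketch only; the numbers in the post came from a plain Windows file copy, and the file path below is just an example):

```shell
#!/bin/sh
# seq_write_test FILE SIZE_MB - write SIZE_MB of zeros to FILE and
# flush to disk; dd reports the achieved throughput on stderr.
seq_write_test() {
    dd if=/dev/zero of="$1" bs=1M count="$2" conv=fdatasync
}

# Example (hypothetical path inside the VM):
# seq_write_test /mnt/vdisk/ddtest 1024   # ~1 GB sequential write
```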
  2. Proxmox VE Ceph Benchmark 2018/02

    Hi, we run a three-node PVE/Ceph cluster with 10x Samsung SM883 1.92 TB per node (very "standard" - replication of 3, one large Ceph pool connected via RBD, dedicated 10 GBit/s mesh network for Ceph). Inside there are a couple of Windows Server 2012 R2 VMs. What kind of performance test would...
  3. Migrating Windows XP from a dead system?

    I'm sorry, I don't remember anything like this - XP is soo long ago... :-D Yes, it is! And if storage capacity is no problem I would even make a second copy of this raw image as a backup. You're welcome! (Don't be too sad if your first attempt at booting the XP VM ends up with a bluescreen -...
  4. Migrating Windows XP from a dead system?

    Hey, without testing it in detail I would suggest the following steps: 1. Remove the physical block device from the broken machine 2. Connect it to a working machine 3. Boot this machine from a live Linux 4. Do a dd-based raw copy of the block device to somewhere else (a second block device, a...
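Step 4 of the suggestion above might look like this on the live system (a sketch; the device and target path are assumptions you must adapt to your hardware):

```shell
#!/bin/sh
# raw_copy SRC DST - dd-based raw copy of a whole block device to an
# image file; conv=noerror,sync continues past read errors and pads
# the unreadable blocks with zeros, status=progress shows the rate.
raw_copy() {
    dd if="$1" of="$2" bs=4M conv=noerror,sync status=progress
}

# Example (assumed names: /dev/sdb is the old XP disk):
# raw_copy /dev/sdb /mnt/backup/xp-disk.raw
```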
  5. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    Yes, you're right! So I will buy a P1000, put it into the server mentioned above, pass it through to our RDS server and see what happens. And I will build a Debian-based thin client connecting via xfreerdp. ... I'm so excited! :) And of course I will report my results! Greets Stephan
  6. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    And what about the client hardware? Can it really be thin - for example an Intel Celeron N3160 with 2 GB RAM and Intel HD Graphics 400 (booting Debian via PXE)?
  7. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    Thank you so much for taking the time! The graphics cards you mentioned... oh my god, they're sooo expensive! :eek: And... licensing a graphics card? I think I'm too old for this! :D Joking aside: What do you think about the following setup: RDSH with 4 CPUs, 16 GB RAM and an NVIDIA Quadro P1000...
  8. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    Hi tburger, thanks a lot for your fast reply! Indeed we have home-office users who connect via WAN (bad bandwidth, bad latency); in these cases it's totally OK for us that the graphics are not perfect. But we would also like to use it inside a typical GBit LAN infrastructure (good bandwidth, good...
  9. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    Hi there, yes, I know - not exactly a Proxmox VE question... But maybe someone has already solved this problem or at least has ideas about what we could try. Here we go: On one of our PVE nodes we run a Windows Server 2016 VM (KVM) as a remote desktop services server - and we're very happy so far...
  10. [SOLVED] PVE/Ceph cluster: Recommendations for updating and restarting

    Hey Alwin, thanks! Indeed we did it this way: 1. migrate all VMs away from node1 (I love "bulk migration" over a dedicated 10 GBit/s NIC) 2. install updates 3. restart 4. wait until all PGs are "active+clean" 5. do the same with node2 and node3 Greets Stephan
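Step 4 above (waiting between node restarts) can be scripted. A minimal sketch that polls a health command until it prints HEALTH_OK - on a real cluster you would pass `ceph health`, which should only report HEALTH_OK once all PGs are active+clean again:

```shell
#!/bin/sh
# wait_until_ok CMD... - poll the given command every 10 s until its
# output is exactly HEALTH_OK, then return.
wait_until_ok() {
    until [ "$("$@")" = "HEALTH_OK" ]; do
        sleep 10
    done
}

# On a cluster node (assumes the Ceph CLI is installed):
# wait_until_ok ceph health
```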
  11. [SOLVED] PVE/Ceph cluster: Recommendations for updating and restarting

    Hi there, today we'd like to install updates that require restarts on the three nodes of our PVE/Ceph cluster. # apt list --upgradable Listing... Done ceph-base/stable 14.2.6-pve1 amd64 [upgradable from: 14.2.4-pve1] ceph-common/stable 14.2.6-pve1 amd64 [upgradable from: 14.2.4-pve1]...
  12. Hard Drive Setup for Proxmox Node.

    Ah, sorry, I misunderstood this! Our numbers go like this: For about two months we have been running a three-node PVE/Ceph cluster (so I guess our writes on the system disks are higher than yours because of Ceph). We monitor disk I/O and see a constant write flow between 250 and 450 kB/s, let's say 350...
  13. Hard Drive Setup for Proxmox Node.

    The Intel SSD D3-S4610 960 GB is rated for 6.0 PBW - do you really worry that this could be too little?
  14. Need opinions on server build for proxmox

    I forgot: I recommend giving at least 2 vCPUs to your Windows servers. You will be thankful when the VMs are installing updates, performing "disk cleanups", scanning for malware, etc.
  15. Need opinions on server build for proxmox

    Hi zecas, welcome to this forum! :) These are my thoughts on your exciting project: a) Your server hardware base looks good - oldie but goldie! :D Just three words: Don't do it! If you'd like to build a server for business purposes, use enterprise-grade hardware. Especially when it comes to...
  16. Hard Drive Setup for Proxmox Node.

    Seems like a small but powerful piece of hardware with good single-thread performance - I don't know it, but I like it! :D We just built a three-node PVE/Ceph cluster based on AMD EPYC 7351 CPUs - no trouble so far. :) Do you have any further questions?
  17. Hard Drive Setup for Proxmox Node.

    Sounds like a very solid choice, congrats! If you're sure that about 900 GB of storage will be enough, then go with it - why not? :) And even if you need more in three or four years: backing up the VMs, replacing the SSDs with larger ones, setting up new ZFS-based storage and finally restoring...
  18. Can't create Hard drive

    Hey Reuven, unfortunately we don't use lvm(-thin), so I have no clue what could be wrong in your case. Maybe aaron has an idea? Good luck and greets Stephan
  19. Hard Drive Setup for Proxmox Node.

    Our "calculation" goes like this: Thanks to the Proxmox team we get an enterprise-grade virtualization platform and save a lot of money compared to the "big players". And part of these savings we spend on nice (and enterprise-grade) hardware. :)