Search results

  1. Proxmox VE Ceph Benchmark 2018/02

    Sounds like a very good setup, too! Do you have Windows Server Datacenter licences? If yes, keep in mind that you have to pay per CPU and even per physical core. Our thinking was: using fewer nodes saves licensing costs, and we put those savings into the hardware of the three nodes. This is also why...
  2. Proxmox VE Ceph Benchmark 2018/02

    You're welcome! :) I'm so happy with this setup so far that it's a lot of fun sharing our experiences. Ah, of course! Each node is based on this hardware and equipped with 2x AMD EPYC 7351, 384 GB RAM (DDR4, 2667 MHz), 10x Samsung SM883 1.92 TB connected to a Broadcom HBA 9300-8i, 2x Intel...
  3. Proxmox VE Ceph Benchmark 2018/02

    Thanks for sharing this! Our policy is to be "as standard as possible". So in the case of the Ceph version we take what the Proxmox repos give us (at the moment: Nautilus 14.2.9). That's interesting, because the recommendation I got from Proxmox support was to keep "Default (no cache)". Maybe a...
  4. Proxmox VE Ceph Benchmark 2018/02

    Yes, it is. I just copied a single file of about 6 GB where source and destination are a) the same vdisk, b) two different vdisks (on the same VM). My results: no I/O drops, "constant" rates (as constant as a Windows progress bar can be :D) with a) about 200 MB/s and b) about 350 MB/s...
  5. Proxmox VE Ceph Benchmark 2018/02

    Hi, we run a three-node PVE/Ceph cluster with 10x Samsung SM883 1.92 TB per node (very "standard" - replication of 3, one large Ceph pool connected via RBD, dedicated 10 GBit/s mesh network for Ceph). Inside there are a couple of Windows Server 2012 R2 VMs. What kind of performance test would...
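
    One common way to measure raw Ceph pool performance in a setup like this (my suggestion, not taken from the thread; the pool name is a placeholder) is rados bench:

        rados -p <poolname> bench 60 write --no-cleanup   # 60-second write benchmark on the pool
        rados -p <poolname> bench 60 seq                  # sequential reads of the objects written above
        rados -p <poolname> cleanup                       # remove the benchmark objects afterwards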
  6. Migrating Windows XP from a dead system?

    I'm sorry, I don't remember anything like this - XP is soo long ago... :-D Yes, it is! And if storage capacity is no problem I would even do a second copy of this raw image as a backup. You're welcome! (Don't be too sad if your first attempt at booting the XP VM ends up with a bluescreen -...
  7. Migrating Windows XP from a dead system?

    Hey, without testing it in detail I would suggest the following steps:
    1. Remove the physical block device from the broken machine
    2. Connect it to a working machine
    3. Boot this machine from a live Linux
    4. Do a dd-based raw copy of the block device to somewhere else (a second block device, a...
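
    A minimal sketch of the dd-based raw copy from step 4, assuming the disk from the dead machine shows up as /dev/sdb and the copy goes to an image file on a mounted second drive (both names are placeholders):

        lsblk    # identify the correct source disk first
        dd if=/dev/sdb of=/mnt/backup/xp-disk.raw bs=4M status=progress conv=noerror,sync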
  8. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    Yes, you're right! So I will buy a P1000, put it into the server mentioned above, pass it through to our RDS server and see what happens. And I will build a Debian-based thin client connecting via xfreerdp. ... I'm so excited! :) And of course I will report my results! Greets Stephan
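
    For reference, a plausible xfreerdp invocation for such a Debian-based thin client (hostname, user and graphics flags are assumptions of mine, not taken from the post):

        xfreerdp /v:rds.example.local /u:someuser /size:1920x1080 /gfx:avc444 /network:lan +fonts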
  9. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    And what about the client hardware? Can it really be thin, for example an Intel Celeron N3160, 2 GB RAM, Intel HD Graphics 400 (booting Debian from PXE)?
  10. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    Thank you so much for taking the time! The graphics cards you mentioned... oh my god, they're sooo expensive! :eek: And... licensing a graphics card? I think I'm too old for this! :D Joking aside: What do you think about the following setup: RDSH with 4 CPUs, 16 GB RAM and an NVIDIA Quadro P1000...
  11. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    Hi tburger, thanks a lot for your fast reply! Indeed we have home office users who connect via WAN (bad bandwidth, bad latency); in these cases it's totally OK for us that the graphics are not perfect. But we would also like to use it inside a typical GBit LAN infrastructure (good bandwidth, good...
  12. Windows Server 2016 VM (RDS): How to improve user graphic experience?

    Hi there, yes, I know - not exactly a Proxmox VE question... But maybe someone has already solved this problem or at least has ideas about what we could try. Here we go: On one of our PVE nodes we run a Windows Server 2016 VM (KVM) as a remote desktop services server - and we're very happy so far...
  13. [SOLVED] PVE/Ceph cluster: Recommendations for updating and restarting

    Hey Alwin, thanks! Indeed we did it this way:
    1. migrate all VMs away from node1 (I love "bulk migration" over a dedicated 10 GBit/s NIC)
    2. install updates
    3. restart
    4. wait until all PGs are "active+clean"
    5. do the same with node2 and node3
    Greets Stephan
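
    A rough command-line sketch of that per-node cycle (the noout flag is an assumption on my part to keep Ceph from rebalancing while a node is down; it is not mentioned in the post):

        ceph osd set noout                # optional: prevent rebalancing during the reboot
        apt update && apt full-upgrade    # install the pending updates on this node
        reboot
        # after the node is back online:
        ceph -s                           # wait until all PGs report active+clean
        ceph osd unset noout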
  14. [SOLVED] PVE/Ceph cluster: Recommendations for updating and restarting

    Hi there, today we would like to install updates on the three nodes of our PVE/Ceph cluster, which require restarts.
    # apt list --upgradable
    Listing... Done
    ceph-base/stable 14.2.6-pve1 amd64 [upgradable from: 14.2.4-pve1]
    ceph-common/stable 14.2.6-pve1 amd64 [upgradable from: 14.2.4-pve1]...
  15. Hard Drive Setup for Proxmox Node.

    Ah, sorry, I misunderstood this! Our numbers go like this: For about two months we have been running a three-node PVE/Ceph cluster (so I guess that our writes on the system disks are higher than yours because of Ceph). We monitor disk I/O and see a constant write flow between 250 and 450 kB/s, let's say 350...
  16. Hard Drive Setup for Proxmox Node.

    The Intel SSD D3-S4610 960 GB has 6.0 PBW - do you really worry that this could be too little?
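
    As a rough sanity check, combining this with the ~350 kB/s figure from the previous result (the arithmetic is mine, not from the thread):

        350 kB/s × 86,400 s/day ≈ 30 GB/day ≈ 11 TB/year
        6.0 PBW ÷ 11 TB/year ≈ roughly 500 years of rated write endurance at that rate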
  17. Need opinions on server build for proxmox

    I forgot: I recommend giving at least 2 vCPUs to your Windows servers. You will be thankful when the VMs are installing updates, performing "disk cleanup", scanning for malware, etc.
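
    On the PVE command line that could look like this (the VMID 101 is a placeholder):

        qm set 101 --sockets 1 --cores 2   # give the Windows VM two vCPUs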
  18. Need opinions on server build for proxmox

    Hi zecas, welcome to this forum! :) These are my thoughts on your exciting project: a) Your server hardware base looks good - oldie but goldie! :D Just three words: Don't do it! If you want to build a server for business purposes, use enterprise-grade hardware. Especially when it comes to...
  19. Hard Drive Setup for Proxmox Node.

    Seems like a small but powerful piece of hardware with good single-thread performance - I don't know it, but I like it! :D We just built a three-node PVE/Ceph cluster based on AMD EPYC 7351 CPUs - no trouble so far. :) Do you have any further questions?
  20. Hard Drive Setup for Proxmox Node.

    Sounds like a very solid choice, congrats! If you're sure that about 900 GB of storage will be enough, then go with it - why not? :) And even if you need more in three or four years: backing up the VMs, replacing the SSDs with larger ones, setting up new ZFS-based storage and finally restoring...