Search results

  1. B

    Converting qcow1 to qcow2

I mistakenly converted a bunch of vSphere (vmdk) VMs to qcow rather than qcow2 (using qemu-img convert). From the docs it looks like qcow1 doesn't support multiple snapshots so I can't leave them that way. Is there a quick and easy way to in-place upgrade qcow1 to qcow2? I really don't want to... (a qemu-img conversion sketch follows this list)
  2. B

    Migrate Host to new Boot Disk

Thanks Udo, feedback much appreciated. Intel 535 120GB. TBH I don't know - I presume the S3700 supports it in its controller? I doubt the 535 does, it's just a SATA connection.
  3. B

    Migrate Host to new Boot Disk

Hi Udo, thanks for the detailed reply and my apologies for my lateness - I missed the notification :( My LV skills are weak but I'll give it a spin. Definitely back up the disk with Clonezilla first though :) Some questions: - Your first step, formatting and mounting the SSD, does it matter...
  4. B

    Migrate Host to new Boot Disk

I have a three-node Proxmox cluster setup. The two main nodes are both booting off Samsung SSD 840 EVOs. In retrospect, not a good choice :( Both SSDs have been affected by the firmware slow read performance bug and one can't be upgraded to the firmware "fix". I want to replace them with...
  5. B

    Nesting Hyper-V

    Ah, thanks. Tricky one that.
  6. B

    Nesting Hyper-V

Hi All, I'm trying to get Hyper-V working in a Windows 10 Preview guest - I want to do this so that I can run the Windows Mobile Phone Simulator. I've followed the guide here: http://pve.proxmox.com/wiki/Nested_Virtualization And nesting seems to be enabled: root@vng:~# cat... (the usual nesting checks are sketched after this list)
  7. B

    Proxmox cluster without HA and with Ceph, manually managed for simple redundancy

You need three monitor nodes for Ceph but you can get by with only two storage nodes. I run two compute nodes that are also Ceph storage nodes (3 OSDs per node). I have a third lightweight Proxmox node (Intel NUC) that only runs the third Ceph monitor for quorum. Sent from my RM-977_1008... (a ceph.conf sketch of this layout follows this list)
  8. B

    HDD for Ceph OSD

    Thanks mir, good to know.
  9. B

    HDD for Ceph OSD

After a bit more research the Western Digital Blue 1TB SATA3 HDD 64MB Caviar Blue WD10EZEX looks better - better IOPS and a lower failure rate.
  10. B

    HDD for Ceph OSD

Looking at extending my very small Ceph cluster. Made some initial bad choices - using WD 3TB Red, should have gone with multiple smaller & faster drives :( Can get a reasonable price on Seagate SATA3 1TB 7200RPM 64MB Cache (ST1000DM003). They are a bit faster in sequential throughput and IOPS. Does anyone see...
  11. B

    Ceph Snapshot Restoration very slow

Anyone else running into this? Maybe I have a screwed-up Ceph configuration? Taking a snapshot of my Ceph-hosted VMs is quick - a few seconds. Restoring the snapshot is unbelievably slow - over 30 *minutes*. At first I thought it was Proxmox, but using the Ceph rbd tool was just as slow. On... (the rbd commands involved are sketched after this list)
  12. B

    Urgent question re vlan setup

Pardon my ignorance, but how does that work? How does the gateway access the internet?
  13. B

    Urgent question re vlan setup

Weird crap on our intranet. The big giveaway was when every mp3 file on my PC was replaced with an executable - <Name>.mp3.exe :) There was a possible spam bot on our net too. Tools - we used Windows Defender Offline, Malwarebytes, AVG, RogueKiller, ESET, rkill, ComboFix, AdwCleaner and JRT. I...
  14. B

    Urgent question re vlan setup

We run all our Windows dev, test and production servers on our Proxmox servers, with weekly onsite DR backups and monthly offsite DR backups. And we just got hammered with a rootkit virus that is proving extremely difficult to remove. I'm proposing that we restore one by one from last month...
  15. B

Tuning performance in VM with scheduler

Be interesting to test with. Yeah, I have no idea how that would work with VM images. Thanks.
  16. B

Tuning performance in VM with scheduler

So at worst, no worse :) and possibly better. Also I was wanting to experiment with Caching Tiers in Giant as they seemed more mature. Yes - only the spinner is managed by ZFS, a directory mount with a ZFS ARC and L2ARC. It helps a lot with the read performance - 200 MB/s with it, 70 MB/s... (a ZFS cache sketch follows this list)
  17. B

Tuning performance in VM with scheduler

NB. Sorry, one last question - is Giant compatible with the Proxmox Ceph management tools and UI?
  18. B

Tuning performance in VM with scheduler

Thanks spirit, much appreciated. Is it as simple as that to upgrade from Firefly to Giant? - upgrade - restart monitors - restart OSDs. No other procedures/gotchas? You mentioned "For ceph with ssd drives, you really should try giant." Is it a similar improvement for spinners + SSD journals... (an upgrade sketch follows this list)
  19. B

Tuning performance in VM with scheduler

How do you get Giant on Proxmox?
  20. B

    Unable to get ceph-fuse working

    Sorry Serge, missed your answer earlier. Thanks for that.
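
Result 1 asks about an in-place qcow1-to-qcow2 upgrade; qemu-img has no true in-place path, so the minimal sketch below converts into a new file and verifies it (file names are hypothetical, "qcow" is qemu-img's name for the version-1 format):

    # write a fresh qcow2 copy of the qcow1 image, showing progress
    qemu-img convert -p -f qcow -O qcow2 vm-100-disk-1.qcow vm-100-disk-1.qcow2
    # confirm the new image reports file format qcow2 before removing the original
    qemu-img info vm-100-disk-1.qcow2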
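
Result 6 truncates at the nesting check; a hedged sketch of the usual Intel-host commands for nested KVM is below (the sysfs path and nested=Y option are the stock kvm_intel ones, the VMID 100 is hypothetical):

    # report whether nested virtualization is enabled for kvm_intel (Y/1 means enabled)
    cat /sys/module/kvm_intel/parameters/nested
    # enable it persistently; takes effect once the module is reloaded or the host reboots
    echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
    # expose the host CPU flags (including VMX) to the guest so Hyper-V can use them
    qm set 100 --cpu host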
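
Result 7 describes a two-storage-node cluster with a third lightweight node running only a monitor; a sketch of how that monitor layout might look in ceph.conf, with invented host names and addresses, plus the quorum check:

    [mon.nodeA]
        host = nodeA
        mon addr = 10.10.10.1:6789
    [mon.nodeB]
        host = nodeB
        mon addr = 10.10.10.2:6789
    [mon.nuc]
        host = nuc
        mon addr = 10.10.10.3:6789

    # confirm all three monitors are in quorum
    ceph quorum_status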
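
Result 11 compares snapshot creation with restoration; the standard rbd operations it presumably refers to look like this, with hypothetical pool/image/snapshot names:

    # creating a snapshot is a metadata operation and is fast
    rbd snap create rbd/vm-100-disk-1@before-change
    # rolling back rewrites the image contents from the snapshot and is the slow path described
    rbd snap rollback rbd/vm-100-disk-1@before-change

Cloning a protected snapshot (rbd snap protect followed by rbd clone) is sometimes suggested as a quicker way to get a usable copy of the snapshot's contents, at the cost of managing an extra image.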
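
Result 16 mentions a spinner managed by ZFS with an ARC and L2ARC in front of a directory mount; a rough sketch of that arrangement, with invented device names and an assumed 4 GiB ARC cap:

    # single-disk pool that backs the directory storage
    zpool create tank /dev/sdb
    # add an SSD partition as an L2ARC read cache
    zpool add tank cache /dev/sdc1
    # cap the in-RAM ARC at 4 GiB (value is in bytes, applied when the zfs module loads)
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf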
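
Result 18 asks whether the Firefly-to-Giant move is just upgrade-and-restart; a rough sketch of that sequence, assuming the ceph.com Debian repository, the sysvinit packaging of that era and a hypothetical ceph.list path (the Giant release notes remain the authority for any extra steps):

    # point apt at the giant repository instead of firefly, then pull the new packages
    sed -i 's/debian-firefly/debian-giant/' /etc/apt/sources.list.d/ceph.list
    apt-get update && apt-get dist-upgrade
    # restart monitors first, then OSDs, one node at a time, waiting for HEALTH_OK in between
    service ceph restart mon
    service ceph restart osd
    ceph -s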