Search results

  1.

    [SOLVED] QEMU xhci driver for Windows 7

    Is there a way to specify the host bus type passed? It's just a printer; it doesn't do xhci to begin with. Even USB1 would do. That's a thought. I'll give it a shot.
  2.

    [SOLVED] QEMU xhci driver for Windows 7

    Per title. I have an old printer that needs an old Windows host to configure its Wi-Fi, so I figured no sweat, I'll just fire up a Windows 7 VM, right? Well, PVE8 seems to either preset the host USB or default to xhci, and no matter what I did (ehci:1) the virtual USB HBA is always showing...
  3.

    Shared Remote ZFS Storage

    Optionally; not included. 25G ports are not trivial in cost- ~$300-500 per port, both at the HBA and switch. For a proper Ceph deployment, you really want AT LEAST 2 per host, so 6 host bus ports and 6 switch ports. "minimal" is an interesting way of describing 1 core and 4GB RAM per daemon...
  4.

    Small Cloud Cluster design and strategy

    Since you are already looking to deploy in the cloud, why not just use VPS for your applications? VPS already offers you backing HA, snapshots, etc; Why bother doing that twice? With regards to firewalls- use local firewalls for the instance (which should also be provided by the VPS), and a WAF...
  5.

    Shared Remote ZFS Storage

    I appreciate the DESIRE for a storage solution that is fast, highly available, and "entry level" (which I just read as cheap or free). What you don't seem to be grasping is that the COST of fast and highly available is incompatible with the last requirement, but since you seem to believe you can...
  6.

    [SOLVED] Adding multiple osds to a ceph cluster

    It depends on your active IO on the cluster. The more traffic you demand from rebalancing, the more impact it will have on your guest IO. Add OSDs one by one, and see how it impacts your guest performance. When you notice it, stop and wait for completion ;) Alternatively, start reading about...
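    The one-at-a-time approach above can be sketched as a short shell loop. This is only a sketch for a PVE node: /dev/sdb is a placeholder device, and waiting for HEALTH_OK is a simplification of "rebalance finished".

    ```shell
    #!/bin/sh
    # Add a single OSD, then wait for the cluster to settle before
    # adding the next one. /dev/sdb is a placeholder for your device.
    pveceph osd create /dev/sdb

    # Poll until the cluster reports healthy again; watch guest IO while
    # this runs and pause here if latency becomes noticeable.
    until ceph -s | grep -q 'HEALTH_OK'; do
        ceph -s | grep -E 'recovery|misplaced' || true
        sleep 60
    done
    ```

    Repeat the whole block per disk rather than creating all OSDs at once, so each rebalance wave stays small.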
  7.

    NFS does not remount

    "nothing has worked" doesn't really give anyone any useful information. Perhaps if you shared what you did, how, and what the manner of "failure" is, there may be useful ideas to offer.
  8.

    vportop1 unknown driver Update.... no one has a solution?

    Let's back up. Is there, in fact, anything wrong or missing on the machine? What do you see in Device Manager?
  9.

    Shared Remote ZFS Storage

    explained by the Dunning-Kruger effect. Such is life... @jt_telrite have you actually understood what you listed? These represent unclosed "bugs" in 20 YEARS totalling 18. 18! Half of them are feature requests. Others are won't fix / no longer relevant, not relevant to begin with, documentation...
  10.

    Shared Remote ZFS Storage

    This has the same hardware requirements and costs as a proxmox cluster with ceph, and around 15k less in software license fees.
  11.

    Shared Remote ZFS Storage

    In the enterprise this is not a good option. It is difficult enough staffing your IT with competent, dependable people to begin with; the more you can farm out to function suppliers (e.g., storage) the better. Ceph makes sense when you have sufficient IT competence including Ceph- There is a reason...
  12.

    New Proxmox as a replacement for ESXi

    1. Create a VM with a vmid matching your respective disk- e.g. 100. Enter all values BUT DO NOT CREATE A DISK. When done, your disk will appear here: select it and click edit. Map your bus type/id and other functions and off you go.
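    A rough CLI equivalent of the GUI steps above, assuming the orphaned disk sits on a storage named local-zfs and the guest expects a SCSI disk; the VM name, sizes, and storage/volume names are placeholders for your setup:

    ```shell
    # Create an empty VM 100 (no disk), matching the vmid in the disk name.
    qm create 100 --name restored-guest --memory 4096 --cores 2 \
        --net0 virtio,bridge=vmbr0

    # Rescan storages so the orphaned disk shows up as "unused" on VM 100.
    qm disk rescan --vmid 100

    # Attach the disk on the bus/id the guest expects (scsi0 here),
    # then make it the boot device.
    qm set 100 --scsi0 local-zfs:vm-100-disk-0
    qm set 100 --boot order=scsi0
    ```

    On older PVE releases the rescan step is spelled `qm rescan` instead of `qm disk rescan`.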
  13.

    New Proxmox as a replacement for ESXi

    Yes, that's correct. Once you create the respective VMs and map the disks, re-add your ZFS pool and move them.
  14.

    Shared Remote ZFS Storage

    I don't think this is the arguable point. I'd love to have Proxmox GmbH provide every service I use in my enterprise, but that doesn't make it a value proposition for THEM. Perhaps you can convince them this is a pool they want to swim in- a good place to start is a feature request in their...
  15.

    New Proxmox as a replacement for ESXi

    Yep, just as I thought. Add VM_Pool as a store of type directory. Once you recover access, move all your disks to the ZFS pool from the move disk function.
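    Sketched on the CLI; VM_Pool and its path come from this thread, while the target storage name (local-zfs) and the VM/disk IDs are placeholders:

    ```shell
    # Register the existing directory as a PVE storage pool named VM_Pool,
    # limited to guest disk images.
    pvesm add dir VM_Pool --path /VM_Pool --content images

    # Then move each guest disk onto the ZFS pool, one VM/disk at a time
    # (here: disk scsi0 of VM 100, deleting the source copy afterwards).
    qm move-disk 100 scsi0 local-zfs --delete 1
    ```

    `qm move-disk` is spelled `qm move_disk` on older PVE releases; the GUI "Move disk" button does the same thing.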
  16.

    New Proxmox as a replacement for ESXi

    So from that I surmise you didn't use the pool as a zvol store, but as a filestore. What do you see for: find /VM_Pool
  17.

    New Proxmox as a replacement for ESXi

    You also need to add it as a storage pool in PVE.
  18.

    Unable to move LXC to or from one host in the cluster

    Then that is a different but related problem- it's an RBD that's stuck with open files (which also explains why the container wouldn't shut down). I have a script for these eventualities.

    #!/bin/bash
    # usage: rbdrelease ctid
    if [ -z "$1" ]; then
        echo "ctid not provided!"
        exit 1
    fi
    ctid=$1...