Search results

  1. M

    Network drops on new VMs, not old

    Sounds like a network issue I used to encounter when port security on the connected switch port was set to allow only a limited number of MAC addresses. Once that limit was reached, traffic for any additional MAC addresses was silently dropped. (A switch-side config sketch follows these results.)
  2. M

    WANTED - Proxmox / Ceph consultancy

    Just for reference, here is 10Gbps Ethernet latency during iperf testing: ~$ ping 10.12.23.27 PING 10.12.23.27 (10.12.23.27) 56(84) bytes of data. 64 bytes from 10.12.23.27: icmp_seq=1 ttl=64 time=0.154 ms 64 bytes from 10.12.23.27: icmp_seq=2 ttl=64 time=0.146 ms ... --- 10.12.23.27 ping...
  3. M

    WANTED - Proxmox / Ceph consultancy

    Seems to work okay for me: # iperf -c 192.168.168.100 -P4 -w8M ------------------------------------------------------------ Client connecting to 192.168.168.100, TCP port 5001 TCP window size: 8.00 MByte ------------------------------------------------------------ [ 6] local 192.168.168.98 port...
  4. M

    WANTED - Proxmox / Ceph consultancy

    Honestly, any type of modern network-based storage will require something greater than 1Gbps. Even though Ceph/NFS/iSCSI works over 1Gbps, it works much better over >=10Gbps with any workload heavier than light usage. If 10Gbps Ethernet gear is too expensive, consider Infiniband utilizing...
  5. M

    VM with 5 disks not reachable for minutes during snapshot

    The storage network is IPoIB @ 40Gbps. The storage server is NFS on RAID6 spinners. The VM network is a 1Gbps active-passive OVS bond. I have no issues with VMs that have just a single virtual disk, only with those that have multiple virtual disks.
  6. M

    VM with 5 disks not reachable for minutes during snapshot

    I'm on the latest version of Proxmox. I have a VM with 5 virtio disks. When I snapshot the VM, with or without RAM included, it is unreachable for about 5 minutes. I can reproduce it every single time I take a snapshot. I've read other posts about similar circumstances. Is it still not...
  7. M

    disk move slow

    Offline, the disk move goes quite quickly. Online, it takes much longer.
  8. M

    disk move slow

    There are no 1 gig links anywhere. All other functions routinely exceed 500MBps.
  9. M

    disk move slow

    Hi, I've recently upgraded my network stack to 10 gig. I've added "migration_unsecure: 1" to /etc/pve/datacenter.cfg (a config sketch follows these results) and now live migrations routinely see over 600MBps. Writing directly from the hosts using dd to the NFS shared storage also sees 600MBps. However, when clicking on the "move...
  10. M

    Global Scheduler

    Hi, I'm not looking for a hack. Consider this a feature request. I would like to be able to schedule tasks globally within a cluster from the GUI. The technology already exists, as backups can already be scheduled globally within the GUI. Please expand on this and create a place in the GUI...
  11. M

    Global Scheduler

    It's already done for scheduling backups. It doesn't seem like that much of a leap to extend it to other tasks.
  12. M

    Global Scheduler

    Hi, It would be nice to have a global scheduler. Something at the Datacenter level that would allow me to schedule tasks for all hosts in a cluster. A "global crontab", if you will. Regards, micush
  13. M

    Live Snapshot

    There is a script for this functionality at https://github.com/kvaps/pve-autosnap. It works pretty well. Schedule it as a cron job (a cron sketch follows these results).
  14. M

    VM VLAN Tagging

    After testing, I answered my own question. It is possible to do this. Use a bridged network device on the VM with no VLAN tag. Then, inside the VM (Linux in my case), create a network configuration with multiple VLAN tags on a single network interface (a sketch follows these results). As long as the tag is allowed on the trunk...
  15. M

    VM VLAN Tagging

    Hi, thanks for the response. However, I want the VM itself to tag multiple VLANs on one NIC. Is that possible without assigning 10 NICs to the VM, each attached to a different host bridge in a different network? Regards, m
  16. M

    VM VLAN Tagging

    I would like to set up an 802.1Q trunk to a VM, so that the VM does the VLAN tagging rather than the host server. Is it possible?
  17. M

    One big VM - HA

    The VMs assigned to a specific HA group will only be able to use the hosts that belong to that group. It's probably easier to think of it as partitioning the HA cluster into different groups of hosts.
  18. M

    One big VM - HA

    I have done exactly this. Create a new HA group and put only the hosts that can run the VM into it. Then add the VM to HA, selecting the newly created group (a CLI sketch follows these results). In the event of a failure, HA will only move the VM to hosts within that group.
  19. M

    Snapshots in 'delete' status

    The 'netstat' command shows no output. A manual 'qm delsnapshot' gives a timeout error: ~# qm delsnapshot 108 daily201606031845 VM 108 qmp command 'delete-drive-snapshot' failed - got timeout
  20. M

    Snapshots in 'delete' status

    A little more investigation reveals some sort of timeout when running the script interactively: vm108: Removing snapshot daily201605311845 VM 108 qmp command 'delete-drive-snapshot' failed - unable to connect to VM 108 qmp socket - timeout after 5974 retries vm108: Removing snapshot...
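
For the port-security behavior in result 1, here is a minimal sketch of the kind of switch configuration involved, assuming a Cisco IOS-style switch; the interface name and MAC limit are placeholders. On a port facing a virtualization host, the MAC limit usually has to be raised (or port security relaxed) so every VM MAC address is allowed:

    ! example only - IOS-style syntax; adjust the interface and limit for your switch
    interface GigabitEthernet1/0/10
     switchport mode access
     switchport port-security
     switchport port-security maximum 8
     ! "protect" silently drops traffic from MACs above the limit, matching the symptom above
     switchport port-security violation protect
    !
    ! verify learned MACs and whether the limit has been hit
    show port-security interface GigabitEthernet1/0/10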
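For the live-migration tuning in result 9, here is a minimal sketch of the /etc/pve/datacenter.cfg entry the post refers to. The 'migration_unsecure' key is what older Proxmox VE releases used; the commented 'migration:' line is an assumed newer equivalent and should be checked against the documentation for your release:

    # /etc/pve/datacenter.cfg
    # older releases, as described in the post:
    migration_unsecure: 1
    # newer releases (assumed rough equivalent; verify for your version):
    # migration: type=insecure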
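For the pve-autosnap script in result 13, here is a minimal cron sketch. The install path, schedule, and arguments are placeholders; the actual invocation comes from the README in the linked repository:

    # /etc/cron.d/pve-autosnap -- hypothetical path and arguments; see the project README
    # m h dom mon dow user command
    0 */6 * * * root /usr/local/bin/pve-autosnap <arguments per the README> >> /var/log/pve-autosnap.log 2>&1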
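For the guest-side VLAN tagging in result 14, here is a minimal iproute2 sketch run inside the Linux guest; the interface name, VLAN IDs, and address are examples. Each VLAN ID also has to be allowed on the trunk facing the VM's bridge port:

    # inside the guest; eth0, VLAN IDs 10/20, and the address are examples
    ip link add link eth0 name eth0.10 type vlan id 10
    ip link add link eth0 name eth0.20 type vlan id 20
    ip link set eth0.10 up
    ip link set eth0.20 up
    ip addr add 192.168.10.5/24 dev eth0.10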
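For the HA-group setup in result 18, here is a minimal ha-manager sketch run on a Proxmox VE node; the group name, node list, and VMID are examples, and option spellings may differ between PVE versions:

    # create a group containing only the hosts that can run the big VM
    ha-manager groupadd bigvm-only --nodes "node1,node2"
    # add the VM as an HA resource restricted to that group
    ha-manager add vm:100 --group bigvm-only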