Search results

  1. Is this the right way to shrink a .raw secondary disk attached to a VM?

    Hi. Before posting this I've read all 10 posts about shrinking disks, but none of them seemed to be the "correct", officially approved way, so I thought I'd give it one more try. Backend info: PVE is installed on a ZFS mirror, and the VM is a Windows 10 test machine installed on the local-zfs...
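    Since a .raw image is just a flat file, the size change itself boils down to file truncation, once the partitions inside the guest have been shrunk first. A minimal sketch, with a hypothetical image path standing in for the real disk:

    ```shell
    # Create a 1 GiB sparse file standing in for a .raw disk image (hypothetical path).
    truncate -s 1G /tmp/vm-100-disk-1.raw
    stat -c %s /tmp/vm-100-disk-1.raw    # 1073741824

    # Shrink it to 512 MiB -- only safe if the guest's partitions already
    # fit entirely inside the first 512 MiB, otherwise data is destroyed.
    truncate -s 512M /tmp/vm-100-disk-1.raw
    stat -c %s /tmp/vm-100-disk-1.raw    # 536870912
    ```

    In practice `qemu-img resize --shrink <image> <size>` performs the same step with an explicit safety flag, and the VM config's size value must be updated to match.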
  2. Proxmox UPS and Power Loss

    This is the way I am shutting down a demo PVE, but I don't have a way to see whether it will shut down the VMs first. I assume that it does so gracefully and not by force. If I try to watch via the GUI, the only thing I am going to notice is the "no connection" message, so I won't be able to see if...
  3. Proxmox UPS and Power Loss

    Are those clients the VMs inside Proxmox? My clients are based on WinServ2019, and I noticed that the application is still in beta.
  4. Proxmox UPS and Power Loss

    The link reminded me that I knew the service but had set it aside for future testing. It all comes down to which NUT feature your UPS is compatible with in order to be supported. PS: when you say it can monitor an attached UPS, what do you mean by that? Directly to the PVE host via USB? Connected by...
  5. Proxmox UPS and Power Loss

    Hi. Has someone deployed a configuration where, during a power loss, when the battery level is critically low / below a certain percentage, the UPS informs Proxmox and Proxmox shuts down all VMs gracefully? Does the path go UPS -> server (iDRAC / iLO) -> Proxmox? Thank you
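    A common way to wire this up is NUT (Network UPS Tools) running on the PVE host itself, with the UPS attached over USB rather than via iDRAC/iLO. A minimal sketch, where the UPS name "myups" and the password are assumptions:

    ```
    # /etc/nut/ups.conf -- driver section for a USB HID UPS
    [myups]
        driver = usbhid-ups
        port = auto

    # /etc/nut/upsmon.conf -- act when the UPS reports on-battery + low-battery
    MONITOR myups@localhost 1 upsmon secret master
    SHUTDOWNCMD "/sbin/shutdown -h +0"
    ```

    When upsmon sees the on-battery and low-battery flags together, it runs SHUTDOWNCMD; a clean host shutdown in turn makes Proxmox stop the guests gracefully before power-off. The low-battery threshold itself can often be tuned on the UPS via `upsrw`, where the hardware supports it.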
  6. 10GbE card/ports need configuration?

    OK, you're right... probably I was thinking of something else. Nice info, though, about the CRC errors and the way to counter that issue. It seems then that in 2021 it is pointless to leave it at the default value of 1500, since most if not all of today's network equipment supports that feature. That...
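    The "feature" discussed here is jumbo frames (MTU 9000). On PVE that is set per interface and bridge in /etc/network/interfaces; a sketch, with the interface names and addresses being assumptions:

    ```
    # /etc/network/interfaces (names and the 10.10.10.0/24 subnet are assumptions)
    iface enp5s0 inet manual
        mtu 9000

    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports enp5s0
        bridge-stp off
        bridge-fd 0
        mtu 9000
    ```

    A quick end-to-end check is `ping -M do -s 8972 <peer>` (8972 = 9000 minus the 20-byte IP and 8-byte ICMP headers); every device in the path must accept the larger MTU, or the oversized frames are silently dropped.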
  7. 10GbE card/ports need configuration?

    You can, but it won't have any effect. I know that it works; I don't know why, though. Already seen it. This is what I am thinking of...... 1. On the server with the VMs (let's call it prox1) I'll set up a vmbr based on that 10G port (it is a dual-port NIC, but that is irrelevant) and set up only...
  8. 10GbE card/ports need configuration?

    ... I have no extra info, since everything needed is given. Sorry for the bump; still, at least someone answered. Wrong, but sometimes it works as a reply initiator. Apart from that....... They work, since there is LED activity, and I also created a vmbr based on that port, and the VM based on that vmbr has net...
  9. 10GbE card/ports need configuration?

    .......................... None??? Really?
  10. 10GbE card/ports need configuration?

    Hi. I noticed today that, by default after installing Proxmox, if you go to Node -> Network you can see every port's active status as "Yes" (nothing unusual here). Since, though, the server's network card has 4 ports (2 of which are 1GbE and the other 2 are 10GbE), the 1GbE ports have active...
  11. ZFS Raid 10 mirror and stripe or the opposite

    @Dunuin, many thanks for the replies!!!
  12. ZFS Raid 10 mirror and stripe or the opposite

    I don't think so. Editing the VM's conf file and checking its disk gives me scsi0: HHproxVM:vm-101-disk-0,cache=writeback,discard=on,size=600G instead of scsi0: HHproxVM:vm-101-disk-0.qcow2,cache=writeback,discard=on,size=600G. Also, trying to create a VM using the VM storage pool...
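    The missing .qcow2 suffix in that config line is expected on ZFS-backed storage: VM disks there are zvols (block devices carved out of the pool), not image files, so there is no qcow2 layer at all. A way to confirm, sketched with a hypothetical pool/dataset name:

    ```shell
    # List the zvols backing the VM disks; prints one line per volume,
    # e.g. something like rpool/data/vm-101-disk-0 on a default install.
    zfs list -t volume
    ```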
  13. ZFS Raid 10 mirror and stripe or the opposite

    Well, I've come back from your 15-page post with your fio benches and have a terrible headache trying to figure out what is what. Some tests I ran on the mirrored SSDs (I don't know, though, why I should care about those, since there will be separate storage on SAS drives for the VMs): fio...
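    For anyone trying to reproduce tests like these, a typical fio invocation for 4K random writes looks like the sketch below; the target path and parameter values are arbitrary choices, not the thread's exact command:

    ```shell
    # 4K random-write test against a file on the pool under test (path assumed).
    # Note: on ZFS, results are heavily influenced by the ARC cache,
    # so compare runs with identical settings only.
    fio --name=randwrite --filename=/tank/fio.test --rw=randwrite \
        --bs=4k --size=1G --ioengine=libaio \
        --runtime=30 --time_based --group_reporting
    ```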
  14. ZFS Raid 10 mirror and stripe or the opposite

    Thank you for your quick response and great insight into my considerations. I'll respond to what you have answered from the bottom to the top, since this way I'll end up at the main topic of this post: RAID. So you agree with me that in my use-case scenario ashift=9 is the best option. I...
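    For reference on the ashift point: ashift is the base-2 logarithm of the sector size ZFS assumes for the vdev, so ashift=9 means 512-byte sectors and ashift=12 means 4K. The arithmetic:

    ```shell
    echo $((1 << 9))     # sector size for ashift=9  -> 512
    echo $((1 << 12))    # sector size for ashift=12 -> 4096
    ```

    Whether 9 is actually best depends on the drives: on disks with 4K physical sectors, ashift=9 causes read-modify-write amplification, which is why ashift=12 is the usual safe default.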
  15. ZFS Raid 10 mirror and stripe or the opposite

    Now we are getting somewhere. I probably missed the definition of ZFS RAID10. If it is a stripe of mirrors, then that automatically answers my question as well. It is my described Method 2 then, OK. I am trying to solve the problem of having 2 disks of the same mirror fail by combining...
  16. ZFS Raid 10 mirror and stripe or the opposite

    I feel like a kid whom you take by the hand and teach things... hahah. Thank you for the reply, but it is completely irrelevant to my initial question. It is as if you answered a from-scratch how-to guide. Would you like to read my question again and see where the answer missed it? For...
  17. ZFS Raid 10 mirror and stripe or the opposite

    Hi. In a ZFS RAID10 of, let's say, 8 drives, which is the default way Proxmox configures the drives, assuming the process is done from the GUI? Mirror vdevs and then stripe them, or the opposite? Method 1: e.g. 2 disks in mirror vdev1 / 2 disks in mirror vdev2 / these 2 striped in vdev3...
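    For what it's worth, ZFS "RAID10" is conventionally a stripe of mirrors: each `mirror` keyword starts a new top-level vdev, and ZFS always stripes writes across top-level vdevs. The equivalent command line for 8 drives, with disk names as placeholders:

    ```shell
    # 8-disk RAID10: four 2-way mirror vdevs, striped across at the pool level.
    # Disk names are placeholders; prefer /dev/disk/by-id/ paths in practice.
    zpool create tank \
        mirror sda sdb \
        mirror sdc sdd \
        mirror sde sdf \
        mirror sdg sdh
    ```

    Such a pool survives one failed disk per mirror, but not both disks of the same mirror.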
  18. proxmox management interface matters?

    Proxmox to one main switch via a 1G port for net access and management. NAS to the same main switch via a 1G port for net access and management. Proxmox -> NAS via a 10G DAC cable between their 10G cards. So if prox and the NAS use a network segment of 192.168.10.0/24 for net access and management...
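    Routing-wise, a setup like the one described above works because the kernel picks the outgoing interface by destination subnet. A sketch of the PVE side in /etc/network/interfaces, where the interface names and the 192.168.20.0/24 storage subnet are assumptions:

    ```
    # 1G leg for net access and management (on the main switch)
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.5/24
        gateway 192.168.10.1
        bridge-ports eno1

    # 10G direct DAC link to the NAS: its own subnet, no gateway
    auto enp5s0
    iface enp5s0 inet static
        address 192.168.20.5/24
    ```

    Mounting the NFS share by the NAS's 192.168.20.x address then forces backup traffic over the 10G link, while management stays on 192.168.10.x.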
  19. proxmox management interface matters?

    That probably cleared things up a little bit more. If, when creating an NFS share (for remote storage between 2 servers under Linux, isn't NFS the best way to set up a share?), I use the remote server's IP which corresponds to its 10G card, that is one part. How then does prox decide that...
  20. proxmox management interface matters?

    Hi. I'd like to ask about a use-case scenario that I am in. Proxmox will be installed on a server with an extra 10G card apart from a quad-port Intel i350 (4x 1G). What I want to accomplish is a quick network path between Proxmox and a NAS server so that VM backups can be transferred...