Recent content by mplssilva

  1.

    ISCSI Multipath : add a new storage, and an error raised

    Good day! After changing the code in /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm we were still having issues. Solved after: in the Dell SC4020 storage, the logs showed many lines of: CHELSIOConnection CA Activate Failed: ControllerId=81254 (0x00013D66) lp=1 (0x00000001) ObjId=478 (0x000001de) CHELSIOConnection...
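
    Not the specific fix from this thread, but a hedged sketch of how the iSCSI sessions and multipath maps can be sanity-checked on the Proxmox node after a change like this, using standard open-iscsi / multipath-tools commands:

      iscsiadm -m session        # list active iSCSI sessions (one per path)
      multipath -ll              # show multipath maps and the state of each path
      journalctl -u iscsid --since "1 hour ago"   # recent iSCSI daemon messages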
  2.

    ISCSI Multipath : add a new storage, and an error raised

    Hi, we are on Proxmox 7.4-3 with an iSCSI Dell Compellent SC4020 storage. This storage uses multipath too. There is a log on the storage with many lines: CHELSIOConnection CA Activate Failed: ControllerId=81254 (0x00013D66) lp=1 (0x00000001) ObjId=478 (0x000001de) CHELSIOConnection CA Activate...
  3.

    Feature request : CPU temperature for each node

    This would be a nice feature. These days I'm using Zabbix to monitor CPU temperature and alert via Telegram (temperature threshold). https://github.com/B1T0/zabbix-basic-cpu-temperature - Planning to add a sensor to monitor/alert on the server room temperature condition (example): -> Sensor Push...
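
    As a hedged aside, the kind of CPU temperature readings a setup like this monitors can be checked manually on a Proxmox/Debian node with the standard lm-sensors tools:

      apt install lm-sensors     # install the sensor tools
      sensors-detect             # probe for available hardware sensors (answer the prompts)
      sensors                    # print the current CPU/package temperatures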
  4.

    SMB share & CIFS, on boot problem

    //192.168.1.6/Download /mnt/pms_media cifs username=USER,password=PASSWORD,_netdev,dir_mode=0777,file_mode=0777 0 0 - The option _netdev is always recommended for CIFS mounts in fstab. This switch delays mounting until the network has been enabled, though excluding this option won't...
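
    To test an fstab entry like that without rebooting, a minimal hedged sketch (using the share and mount point from the post, and assuming cifs-utils is installed) could be:

      apt install cifs-utils        # CIFS mount helper
      mkdir -p /mnt/pms_media       # mount point used in the fstab line
      mount -a                      # mount everything listed in /etc/fstab
      df -h /mnt/pms_media          # confirm the share is actually mounted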
  5.

    Configuration for all inclusive server deployment - cost conscious priority.

    Hi. UniFi Controller: - Some experience running the Ubiquiti UniFi controller and UNMS in the cloud; no problem in a VM environment. Proxmox: I'm still checking if you can create a share. I have successfully created a ZFS pool, but have not found how to actually create shares that can be used, and backed up to other...
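
    On the share question, a hedged sketch of one common approach, assuming a hypothetical pool name tank and an NFS export handled by ZFS (nfs-kernel-server must be installed on the host):

      apt install nfs-kernel-server     # NFS server used for the export
      zfs create tank/shared            # dataset inside the existing pool
      zfs set sharenfs=on tank/shared   # export the dataset over NFS
      zfs get sharenfs tank/shared      # confirm the share property is set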
  6.

    iSCSI multipath with multiple nics on server and storage

    Sorry, typo (in the storage IP and mask): Server1, nic1: use bond (set for vmbr0 / VMs, corosync cluster HA); Server1, nic2: use bond (set for vmbr0 / VMs, corosync cluster HA) - Storage Area Network: Server1, nic3: 172.16.1.10/24 (set for dedicated storage use in multipath); Server1, nic4...
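
    A hedged sketch of what the dedicated storage NICs could look like in /etc/network/interfaces. The interface names ens2f0/ens2f1 are hypothetical, nic3 uses the 172.16.1.10/24 address from the post, and the nic4 subnet is an assumption since the excerpt is truncated:

      auto ens2f0
      iface ens2f0 inet static
          address 172.16.1.10/24    # nic3, first dedicated iSCSI path

      auto ens2f1
      iface ens2f1 inet static
          address 172.16.2.10/24    # nic4, second dedicated iSCSI path (separate subnet)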
  7.

    iSCSI multipath with multiple nics on server and storage

    Good morning, check if this helps: https://forum.proxmox.com/threads/use-isci-on-specific-nics.62549/ Some advice: Server1, nic1: use bond (set for vmbr0 / VMs, corosync cluster HA); Server1, nic2: use bond (set for vmbr0 / VMs, corosync cluster HA) - Storage Area Network: Server1, nic3...
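
    The linked thread is about binding iSCSI to specific NICs; a hedged sketch of that technique with open-iscsi (the iface name storage0, the interface ens2f0 and the portal address 172.16.1.100 are assumptions for illustration):

      iscsiadm -m iface -I storage0 --op=new                                        # create a named iSCSI interface record
      iscsiadm -m iface -I storage0 --op=update -n iface.net_ifacename -v ens2f0    # bind the record to the physical NIC
      iscsiadm -m discovery -t sendtargets -p 172.16.1.100 -I storage0              # discover targets through that NIC only
      iscsiadm -m node --login                                                      # log in to the discovered targets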
  8.

    proxmox 5.4 to 6.1

    Hi, use this howto to upgrade from 5.4 to 6.1: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#Actions_step-by-step
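
    A hedged condensation of the main commands from that wiki page for a standalone node; clustered nodes also need the corosync 3 upgrade first, so follow the full checklist in the link:

      pve5to6                                              # built-in checklist script, fix anything it reports
      apt update && apt dist-upgrade                       # bring 5.4 fully up to date first
      sed -i 's/stretch/buster/g' /etc/apt/sources.list    # switch the Debian repos to buster
      sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/pve-enterprise.list   # and the PVE repo
      apt update && apt dist-upgrade                       # perform the actual 5.x -> 6.x upgrade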
  9.

    Really slow NIC speeds

    TCP Chimney Offload: a networking technology that transfers network-processing workload from the CPU to the network adapter during data transfers. Disabled = the workload will not be passed to the network adapter. Window Auto-Tuning: the feature is enabled by default and makes data transfers over networks...
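
    A hedged sketch of how these settings are inspected and changed from an elevated command prompt inside the Windows guest (Chimney Offload is only a configurable setting on older Windows versions; newer builds removed it):

      netsh int tcp show global                            # current TCP global parameters
      netsh int tcp set global autotuninglevel=disabled    # turn off Window Auto-Tuning
      netsh int tcp set global autotuninglevel=normal      # revert if throughput gets worse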
  10.

    I reinstalled a node in the cluster and now the cluster is messy

    Good morning, https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node - After powering off the node hp4, we can safely remove it from the cluster: hp1# pvecm delnode hp4, then pvecm status - If, for whatever reason, you want this server to join the same cluster again, you have to...
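
    As a hedged recap of that sequence, run on a remaining cluster node (hp1 and hp4 are the node names from the wiki example):

      pvecm status            # confirm the cluster still has quorum and hp4 is offline
      pvecm delnode hp4       # remove the powered-off node from the cluster
      pvecm status            # verify hp4 is gone from the member list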
  11.

    Can't remove VM after cloning

    Try to verify that there is sufficient disk space / check the disk partitions in Proxmox.
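
    A hedged sketch of the usual checks on the Proxmox host:

      df -h             # free space on the host filesystems
      pvesm status      # usage of each configured Proxmox storage
      lsblk             # disks and partition layout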
  12.

    Really slow NIC speeds

    Based on almost 2 hours to copy 24 GB, with another 5 GB still to go, your WAN speed is (more or less) 20 Mbps. - Try this: I have some experience with Windows Server and by default I apply this to every install that I do...
  13.

    Really slow NIC speeds

    Good morning, try installing the virtio drivers on your Windows Server 2019 VM: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso
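
    A hedged sketch of getting that ISO onto the node and attached to the VM (assuming VM ID 100 and the default local ISO storage):

      wget -O /var/lib/vz/template/iso/virtio-win.iso https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso
      qm set 100 --ide2 local:iso/virtio-win.iso,media=cdrom    # attach as a CD-ROM, then run the installer inside Windows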
  14.

    Can't remove VM after cloning

    Good morning, check the Proxmox task log to see whether the cloning completed successfully. In the console of your Proxmox host, unlock the VM: qm unlock VM-ID-Number
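
    A hedged sketch of the unlock-then-remove sequence (assuming VM ID 100):

      qm config 100 | grep lock     # see whether a clone/backup lock is still set
      qm unlock 100                 # clear the leftover lock
      qm destroy 100                # remove the VM once it is unlocked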