Good Day!
After changing the code in /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm we were still having issues. Solved after:
In the Dell SC4020 storage logs there were many lines:
CHELSIOConnection CA Activate Failed: ControllerId=81254 (0x00013D66) lp=1 (0x00000001) ObjId=478 (0x000001de)
CHELSIOConnection...
Hi,
We are on Proxmox 7.4-3 with an iSCSI Dell Compellent SC4020 storage.
This storage uses multipath too.
There is a log on the storage with many lines:
CHELSIOConnection CA Activate Failed: ControllerId=81254 (0x00013D66) lp=1 (0x00000001) ObjId=478 (0x000001de)
CHELSIOConnection CA Activate...
This would be a nice feature. These days I'm using Zabbix to monitor CPU temperature and alert on Telegram (temperature threshold).
https://github.com/B1T0/zabbix-basic-cpu-temperature
- Planning to add a sensor to monitor/alert on server room temperature conditions (example):
-> Sensor Push...
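If it helps: as far as I recall (an assumption on my part, so double-check the template's README), that template reads lm-sensors output, so a quick sanity check on the host would be:
apt install lm-sensors
sensors-detect
sensors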
//192.168.1.6/Download /mnt/pms_media cifs username=USER,password=PASSWORD,_netdev,dir_mode=0777,file_mode=0777 0 0
The option _netdev is always recommended for cifs mounts in fstab. This switch delays mounting until the network has been enabled, though excluding this option won't...
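To test an entry like that without rebooting (assuming the mount point already exists), something like this works:
mkdir -p /mnt/pms_media
mount /mnt/pms_media
or mount -a to process every fstab entry at once.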
hi,
Unifi Controller:
- Some experience running Ubiquiti UniFi and UNMS in the cloud; no problems in a VM environment.
Proxmox:
I'm still checking if you can create a share. I have successfully created a ZFS pool, but have not found how to actually create shares that can be used, and backed up to other...
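If plain NFS is enough, a minimal sketch using ZFS's built-in share property (the pool name tank is just an example, and nfs-kernel-server must be installed on the host):
zfs create tank/share
zfs set sharenfs=on tank/share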
Sorry, typo (in the storage IP and mask):
Server1, nic1: Use bond (set for vmbr0 VMs, corosync cluster HA)
Server1, nic2: Use bond (set for vmbr0 VMs, corosync cluster HA)
- Storage Area Network:
Server1, nic3: 172.16.1.10/24 (Set for dedicated storage use in multipath; see the interfaces sketch after this list)
Server1, nic4...
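For reference, the matching stanza in /etc/network/interfaces would look something like this (eno3 is an assumed interface name, adjust to your hardware):
auto eno3
iface eno3 inet static
        address 172.16.1.10/24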
good morning, check this if it helps:
https://forum.proxmox.com/threads/use-isci-on-specific-nics.62549/
Some advice:
Server1, nic1: Use bond (set for vmbr0 VMs, corosync cluster HA)
Server1, nic2: Use bond (set for vmbr0 VMs, corosync cluster HA)
- Storage Area Network:
Server1, nic3...
TCP Chimney Offload:
A networking technology that helps transfer workload from the CPU to the network adapter during network data transfers.
Disabled = will not offload CPU workload to the network adapter.
Window Auto-Tuning:
This feature is enabled by default and makes data transfers over networks...
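On Windows both settings can be checked and changed from an elevated prompt with the standard netsh commands (note that newer Windows builds dropped TCP Chimney entirely, so the chimney option only applies where it is still supported):
netsh int tcp show global
netsh int tcp set global chimney=disabled
netsh int tcp set global autotuninglevel=disabled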
good morning,
https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node
- After powering off the node hp4, we can safely remove it from the cluster:
hp1# pvecm delnode hp4
pvecm status
- If, for whatever reason, you want this server to join the same cluster again, you have to...
Based on almost 2 hours to copy 24 GB (with another 5 GB to go), your WAN speed is roughly 24 GB × 8 ÷ 7200 s ≈ 27 Mbps, so more or less 20-30 Mbps.
- Try this: I have some experience with Windows Server and applied this by default in every install I made...
good morning,
try installing the virtio drivers on your Windows Server 2019 VM:
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso
good morning,
I had a similar problem (random restarts) on some hosts using Proxmox, caused by corosync/libknet issues:
https://forum.proxmox.com/threads/pve-5-4-11-corosync-3-x-major-issues.56124/page-12
- Solved in the latest update, Proxmox 6.1-3 and later.
What's the version of your Proxmox...
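To check, run the standard version command on the host:
pveversion -v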
good morning, we made this topology with a dedicated switch for the SAN network.
Howto:
https://icicimov.github.io/blog/virtualization/Adding-iSCSI-shared-volume-to-Proxmox-to-support-Live-Migration/
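Following that howto, the usual first step on each node is iSCSI target discovery and login (the portal IP 172.16.1.100 below is just a placeholder):
iscsiadm -m discovery -t sendtargets -p 172.16.1.100
iscsiadm -m node --login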
good morning, try running some tests with iperf3. I do this using vmbr1 on a SAN network between two Proxmox servers:
- No network performance issues.
iperf3 server (iperf3 -s):
iperf3 client (iperf3 -c <destination server IP>):
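A concrete run between two nodes would look like this (172.16.1.10 is just the example storage IP from the earlier posts):
node1# iperf3 -s
node2# iperf3 -c 172.16.1.10 -t 10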
Network Interfaces: