Dear all,
I have a couple of HP Gen8 DL360 servers running the latest Proxmox 8.1.3, both with the same issue: when they start I can clearly see a critical red error on screen
cannot import 'tank-zfs': no such pool available
but then both boot fine without any issue. Both servers (node4 and node5) are using an...
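A common cause of this boot-time message is a stale or missing /etc/zfs/zpool.cache, so zfs-import-cache fails before the later scan service imports the pool anyway. A minimal sketch of the usual fix, assuming the pool name from the error above:

```shell
# Regenerate the pool cache so zfs-import-cache finds 'tank-zfs' at boot,
# then rebuild the initramfs, which carries a copy of the cache file.
zpool set cachefile=/etc/zfs/zpool.cache tank-zfs
update-initramfs -u -k all
```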
Dear all, I have 2 PBS instances on the same LAN; one syncs backups from the other. So I'm using a remote sync job and I have set the option transfer-last to 7, but every day I see the number of backups increasing instead of staying at seven, and it is not transferring the same number of the...
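For context, transfer-last only caps how many of the newest snapshots each sync run copies; it never removes snapshots that already reached the target, so the target count can keep growing. What holds the target at seven is a prune job on the target datastore; a hedged sketch (the job and datastore names are placeholders):

```shell
# On the target PBS: keep only the last 7 snapshots per backup group.
# 'keep7' and 'targetstore' are assumed names for this example.
proxmox-backup-manager prune-job create keep7 \
    --store targetstore --schedule daily --keep-last 7
```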
In the Proxmox GUI, if I click on VM name -> Summary I can see the live Bootdisk size, which is very useful, but is there a way to live-monitor the other hard disks added to the same LXC?
I made a mistake in my 5-node Ceph cluster: for my new backup schedule I selected the root local storage on some nodes and it went full. Today everything works, but I have no access to the GUI of the affected nodes (I receive connection refused). All VMs and LXCs are working fine. I deleted...
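When / fills up on a PVE node, pveproxy typically stops answering while the guests keep running. One possible cleanup sketch, assuming the stray backups landed in the default local dump directory:

```shell
# Inspect what is eating the root filesystem before deleting anything.
df -h /
du -sh /var/lib/vz/dump/*
# Remove only the unwanted backup archives, then restart the GUI services.
rm /var/lib/vz/dump/vzdump-vm-*.vma.zst
systemctl restart pveproxy pvedaemon
```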
Dear all,
I have a privileged Debian 11-based container that is a LAMP web server with a single web app I developed myself, which worked for years without any issues. This app needs to access some Windows shared folders on the PC of the operator who uses the app. To make this as reliable as possible...
I'm building a new Proxmox cluster and I want to use MLAG + separate VLANs for Ceph, LAN and corosync. Everything is working, linked and pingable, but I'm facing random errors, only on my corosync network, similar to
[KNET ] host: host: 3 has no active links 802.3ad bond
[TOTEM ] Retransmit...
I'm using the same configuration as in the Proxmox docs here: https://pve.proxmox.com/wiki/Network_Configuration
Use VLAN 5 with bond0 for the Proxmox VE management IP with traditional Linux bridge
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
auto bond0
iface bond0 inet...
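For reference, the wiki example named above ("Use VLAN 5 with bond0 for the Proxmox VE management IP with traditional Linux bridge") continues roughly as below; the addresses are the wiki's placeholders, not values from this setup:

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0
```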
I have an old 3-node Ceph cluster with HP Gen8 servers: 2x Xeon E5-2680 @ 2.70 GHz (turbo 3.50 GHz), 16 cores and 64 GB DDR3 RAM per node.
We bought some almost-new HP Gen10 servers with 2x Xeon Gold 6138 @ 2.0 GHz (turbo 3.70 GHz) and 128 GB DDR4 RAM per node.
So there is a huge jump in terms...
I have a 3-node Ceph cluster; each node has 4x 600 GB OSDs and I have just one pool with size 3/2.
I was thinking that above 33% of used storage (I mean just data, no replicas) I would have received some warning message, but the cluster seems healthy above 40% and everything is green. I'm attaching some...
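This is expected behaviour: Ceph's health warnings are driven by the OSD nearfull/full ratios (0.85 and 0.95 by default on raw capacity), not by the logical share of a 3/2 pool, so 40% raw usage is still well below any threshold. A quick way to check, plus a sketch for earlier warnings (the 0.66 value is just an example):

```shell
# Show the current nearfull/full thresholds.
ceph osd dump | grep -E 'full_ratio|nearfull_ratio'
# Optionally warn earlier, e.g. at 66% raw OSD usage.
ceph osd set-nearfull-ratio 0.66
```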
I'm planning a 7-node Proxmox cluster. Of those 7 nodes, 3 will have Ceph shared storage. Each node is equipped with 3x RJ45 and 2x SFP+ network interfaces.
I know that it is best to have separate networks for Ceph, the Proxmox cluster and the LAN, but I was wondering if it is a good idea to use a setup with...
Hi everyone,
I have a simple 3-node cluster that has worked well for many years and successfully passed every update starting from Proxmox 4. After updating to Proxmox 7 and Ceph Pacific, the system is affected by this issue:
every time I reboot a node for any reason (i.e. updating to...
Hi to all,
I just updated my 4-node Ceph cluster to the latest Proxmox 6.2, but after that I was receiving in my PVE dashboard some errors related to the space available to Ceph's MONs. Searching with df -h, I found that my root partition was around 75% full on a 136 GB 15k SAS disk. At this point I was...
Hi to all,
in a 3-node Ceph cluster built on 3 identical HP Gen8 DL360p servers, I always receive the attached error every time I reboot a node. Before rebooting the node I always move all the VMs to another node, so when I press reboot there is no running VM. To fix this I have to force...
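Not a fix for the attached error itself, but a common precaution before rebooting a Ceph node is to suppress recovery while the node is down; a sketch:

```shell
# Prevent OSDs from being marked out (and rebalancing from starting)
# while the node reboots; remove the flag once it is back.
ceph osd set noout
# ... reboot the node ...
ceph osd unset noout
```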
Hi to all,
in my KVM Linux servers I have this similar memory usage
so free reports total memory around 8 GB and available around 6 GB
In my Proxmox GUI I have the following usage
as you can see, I think this is showing not the available memory but the free one; is this the correct behaviour...
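For comparison, the kernel reports both values in /proc/meminfo: MemFree counts only untouched pages, while MemAvailable also includes reclaimable cache, which is why it is the larger and usually more meaningful number. A small sketch to print both:

```shell
# Print total, free and available memory in MiB from /proc/meminfo.
awk '/^MemTotal|^MemFree|^MemAvailable/ {printf "%s %d MiB\n", $1, $2/1024}' /proc/meminfo
```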
I need to replace a 10 Gb SFP+ 2-port NIC with a similar NIC that provides 4 ports instead of 2. This particular NIC is serving the inter-node Ceph network in a meshed network configuration, so no switches inside the ring. I'm in a production 3-node cluster with Ceph and the latest Proxmox. Replica...
Hi to all,
after updating from Proxmox 5 to 6 and Ceph Luminous to Nautilus in a 4-node HA cluster environment, the container storage (ceph_ct) is empty and all the container disks are instead shown under the VM storage (ceph_vm). I'm attaching some pics to understand better; any solution to this?
Hi to all,
I'm in a production environment on a 3-node Ceph cluster, so I know I can migrate my VMs to the other nodes, but at this particular moment I prefer not to migrate anything because I don't want to restart the affected node. I have a container with ID 118 that, after a backup failure...
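If the failed backup left the container locked (a common aftermath of an aborted vzdump run), it can usually be released without touching the node or the other guests; a sketch using the ID from the post:

```shell
# Check for a stale lock left by the failed backup, then clear it.
pct config 118 | grep -i lock
pct unlock 118
```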
hi to all, I just installed the latest Proxmox Mail Gateway on top of a Debian LXC container running on Proxmox VE. Everything is running perfectly, but I'm trying to figure out how to change the sender email address used by Administration --> Spam Quarantine --> Deliver; now it is...
I have a FreePBX with 5 trunks and 40 extensions; everything in my network is very fast and I can ping the VoIP provider in around 15 ms. I get random delays when calling or receiving, even on the internal network. I read that virtualized Asterisk is not a good idea, and this is true because on a...
I have this big problem: after weeks of investigating a solution on the routing and FreePBX side, I found that on a physical host my problem doesn't happen, so maybe it is related to Proxmox networking.
I have a FreePBX KVM with around 50 extensions and 4 SIP trunks. Connectivity is...