Search results

  1. Import ZFS pools by device scanning was skipped because of an unmet condition check

    Dear all, I have a couple of HP Gen8 DL360 servers running the latest Proxmox 8.1.3 with the same issue: when they start I can clearly see a critical red error on screen, cannot import 'tank-zfs': no such pool available, but then both start fine without any issue. Both servers (node4 and node5) are using an...
  2. Sync job, transfer-last not working as expected

    Dear all, I have two PBS instances on the same LAN; one syncs backups from the other. So I'm using a remote sync job and I have set the option transfer-last to 7, but every day I see the number of backups increasing instead of staying at seven, and it is not transferring the same number of the...
  3. Monitor space of a second hard drive attached to a guest LXC or KVM

    In the Proxmox GUI, if I click on VM name -> Summary I can see the live Bootdisk size, which is very useful, but is there a way to live-monitor other hard disks added to the same LXC?
  4. [SOLVED] What service to restart after root disk full

    I made a mistake in my 5-node Ceph cluster: for my new backup schedule I selected the root local storage on some nodes, and it went full. Today everything works, but I have no access to the GUI of the affected nodes (I receive connection refused). All VMs and LXCs are working fine. I deleted...
  5. Container random crash with error autofs resource busy

    Dear all, I have a privileged Debian 11 container that is a LAMP web server with a single web app developed by myself that has worked for years without any issues. This app needs to access some Windows shared folders on the PC of the operator using the app; to make this as reliable as possible...
  6. Is 802.3ad bonding still not supported for corosync?

    I'm building a new Proxmox cluster and I want to use MLAG plus separate VLANs for Ceph, LAN, and corosync. Everything is working, linked, and pingable, but I'm facing random errors only in my corosync network, similar to [KNET ] host: host: 3 has no active links 802.3ad bond [TOTEM ] Retransmit...
  7. Unknown interface type in GUI when using VLAN and bond with bridge

    I'm using the same configuration as in the Proxmox docs here https://pve.proxmox.com/wiki/Network_Configuration ("Use VLAN 5 with bond0 for the Proxmox VE management IP with traditional Linux bridge"): auto lo iface lo inet loopback iface eno1 inet manual iface eno2 inet manual auto bond0 iface bond0 inet...
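    The wiki pattern the post refers to combines an 802.3ad bond, a tagged sub-interface for management, and a traditional Linux bridge for guests. A minimal sketch of that /etc/network/interfaces layout, with placeholder interface names (eno1/eno2), VLAN tag, and addresses rather than the poster's actual values:

    ```
    auto lo
    iface lo inet loopback

    iface eno1 inet manual
    iface eno2 inet manual

    # LACP bond over both physical ports
    auto bond0
    iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode 802.3ad

    # VLAN 5 on top of the bond, carrying the management IP
    auto bond0.5
    iface bond0.5 inet manual

    auto vmbr0v5
    iface vmbr0v5 inet static
            address 10.10.10.2/24
            gateway 10.10.10.1
            bridge-ports bond0.5
            bridge-stp off
            bridge-fd 0

    # Untagged bridge for guest traffic
    auto vmbr0
    iface vmbr0 inet manual
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
    ```

    The "unknown" type shown in the GUI is typically cosmetic: the web UI only recognises a fixed set of interface patterns, so hand-written stanzas like the VLAN sub-interface above can still work while being displayed as unknown.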
  8. S

    3node brand new ceph cluster vs 5node mixed ceph cluster

    I have an old 3-node Ceph cluster with HP Gen8 servers, 2x Xeon E5-2680 @ 2.70GHz (turbo 3.50 GHz), 16 cores and 64GB DDR3 RAM per node. We bought some almost-new HP Gen10 servers with 2x Xeon Gold 6138 @ 2.0GHz (turbo 3.70 GHz) and 128GB DDR4 RAM per node. So there is a huge jump in terms...
  9. Newbie question about Ceph GUI in Proxmox

    I have a 3-node Ceph cluster; each node has 4x 600GB OSDs and I have just one pool with size 3/2. I was expecting that above 33% of used storage (I mean just data, no replicas) I would receive some warning message, but the cluster seems healthy above 40% and everything is green. I'm attaching some...
  10. Is it safe to use the same switch for Ceph and cluster networks?

    I'm planning a 7-node Proxmox cluster. Of those 7 nodes, 3 will have Ceph shared storage. Each node is equipped with 3x RJ45 and 2x SFP+ network interfaces. I know that it is best to have separate networks for Ceph, the Proxmox cluster, and the LAN, but I was wondering whether it is a good idea to use a setup with...
  11. Proxmox Ceph Pacific cluster becomes unstable after rebooting a node

    Hi everyone, I have a simple 3-node cluster that has worked for many years and successfully passed every update starting from Proxmox 4. After updating to Proxmox 7 and Ceph Pacific, the system is affected by this issue: every time I reboot a node for any reason (i.e. updating to...
  12. Help freeing inode usage on my servers

    Hi all, I just updated my 4-node Ceph cluster to the latest Proxmox 6.2, but after that my PVE dashboard started showing errors about the space available to Ceph's MONs. Searching with df -h I found that my root partition was around 75% full on a 136GB SAS 15k disk. At this point I was...
  13. Failed deactivating swap /dev/pve/swap

    Hi all, on a 3-node Ceph cluster built on 3 identical HP Gen8 DL360p servers, I receive the attached error every time I reboot a node. Before rebooting the node I always move all the VMs to another node, so when I press reboot there is no running VM. To fix this I have to force...
  14. Proxmox reports free memory, not available memory

    Hi all, on my KVM Linux servers I see similar memory usage: free reports total memory around 8GB and available around 6GB. In my Proxmox GUI I see the following usage; as you can see, I think it is showing not the available memory but the free one. Is this the correct behaviour...
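    The distinction the post is getting at — MemFree (pages the kernel is not using at all) versus MemAvailable (the kernel's estimate of memory reclaimable for new workloads, which `free` reports as "available") — can be checked inside the guest by parsing /proc/meminfo. A minimal sketch; the sample values below are illustrative, not from the poster's system:

    ```python
    def parse_meminfo(text):
        """Return a dict of /proc/meminfo fields, values in kB."""
        fields = {}
        for line in text.splitlines():
            key, _, rest = line.partition(":")
            if rest:
                fields[key.strip()] = int(rest.split()[0])
        return fields

    # Illustrative sample; on a real guest, read open("/proc/meminfo").read()
    sample = """MemTotal:        8167840 kB
    MemFree:         2097152 kB
    MemAvailable:    6291456 kB"""

    info = parse_meminfo(sample)
    free_gib = info["MemFree"] / 1024 / 1024
    avail_gib = info["MemAvailable"] / 1024 / 1024
    print(f"free: {free_gib:.1f} GiB, available: {avail_gib:.1f} GiB")
    # → free: 2.0 GiB, available: 6.0 GiB
    ```

    A GUI that graphs total minus MemFree will look much "fuller" than one that graphs total minus MemAvailable, because page cache and other reclaimable memory count as used in the first case.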
  15. Replace SFP+ NIC in a Ceph cluster

    I need to replace a 10Gb SFP+ 2-port NIC with a similar NIC that provides 4 ports instead of 2. This particular NIC serves the inter-node Ceph network in a full-mesh configuration, so there are no switches inside the ring. I'm on a production 3-node cluster with Ceph and the latest Proxmox. replica...
  16. Container storage content is empty

    Hi all, after updating from Proxmox 5 to 6 and Ceph Luminous to Nautilus in a 4-node HA cluster environment, the container storage (ceph_ct) is empty and all the container disks are instead shown under the VM storage (ceph_vm). I'm attaching some pics to explain better; any solution to this?
  17. Cannot start container after backup failure

    Hi all, I'm in a production environment on a 3-node Ceph cluster, so I know I can migrate my VMs to the other nodes, but at this particular moment I prefer not to migrate anything because I don't want to restart the affected node. I have a container, ID 118, that after a backup failure...
  18. Change postmaster email in an LXC container

    Hi all, I just installed the latest Proxmox Mail Gateway on top of a Debian LXC container, running on Proxmox VE. Everything is running perfectly, but I'm trying to figure out how to change the email address used to send from Administration --> Spam Quarantine --> Deliver now; it is...
  19. FreePBX delay with KVM

    I have a FreePBX with 5 trunks and 40 extensions; everything in my network is very fast and I can ping the VoIP provider at around 15ms. I get random delay when calling or receiving, even through the internal network. I read that virtualized Asterisk is not a good idea, and this is true because on a...
  20. FreePBX and double NAT on KVM

    I have this big problem: after weeks of investigating a solution on the routing and FreePBX side, I found that on a physical host my problem doesn't happen, so maybe it is related to Proxmox networking. I have a FreePBX KVM with around 50 extensions and 4 SIP trunks. Connectivity is...
