Search results

  1. [SOLVED] Ceph - Schedule deep scrubs to prevent service degradation

    The script above has been updated to work with Ceph 14 (Nautilus). The affected line, for Ceph 12 (Luminous), was: $AWK '/^[0-9]+\.[0-9a-z]+/ { if($10 == "active+clean") { print $1,$23,$24 ; }; }' | \
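    The quoted awk line filters `ceph pg dump` output down to clean PGs and their deep-scrub stamps. A hedged sketch of the overall technique the thread describes follows; the column positions ($10 for state, $23/$24 for the stamp) are taken from the Ceph 12 line quoted above and shift between Ceph releases, so verify them against your own `ceph pg dump pgs` output before use.

```shell
# Sketch: list active+clean PGs with their last deep-scrub stamps,
# oldest first, then ask Ceph to deep-scrub each one in turn.
# NOTE: field numbers ($10, $23, $24) match the Ceph 12 output quoted
# in the thread and WILL differ on other releases.
ceph pg dump pgs 2>/dev/null | \
  awk '/^[0-9]+\.[0-9a-z]+/ { if ($10 == "active+clean") print $1, $23, $24 }' | \
  sort -k2 | \
  while read -r pgid date time; do
    ceph pg deep-scrub "$pgid"    # schedule a deep scrub for this PG
  done
```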
  2. PVE 5.4-11 + Corosync 3.x: major issues

    Adding a node to an existing cluster results in all existing nodes being fenced and restarting. Also, this was with secauth: off and token: 10000 /etc/corosync/corosync.conf logging { debug: off to_syslog: yes } nodelist { node { name: kvm7a nodeid: 1 quorum_votes: 1...
  3. TSO offloading problem with igb, whilst ixgbe is fine

    Most of our Proxmox clusters utilise Intel 82599ES 10GbE SFP+ NICs where TSO (TCP/IP Segmentation Offloading) works as expected for VMs whose VirtIO NICs also have TSO enabled. However, we have a cluster made up of 3 x Lenovo RD350 servers where Intel 82599ES 10GbE NICs are used for...
  4. Proxmox 5.0 and OVS with 'dot1q-tunnel'

    Hi, Eric Garver's post details examples on configuring QinQ ports on the OvS side so that the VM simply interacts with untagged packets. This is not something we currently have a requirement for and requires two rules to specifically configure ingress and egress operations for each VM...
  5. PVE 5.4-11 + Corosync 3.x: major issues

    And one of the nodes restarted at 13:18, I've uploaded the syslog entries matching 'coro' from all 3 nodes.
  6. PVE 5.4-11 + Corosync 3.x: major issues

    I'm also experiencing corosync issues on clusters we've upgraded to PVE 6.0. We tried changing protocol from UDP to SCTP and defining netmtu (which we subsequently discovered is no longer used in Corosync 3). We've upgraded one cluster to libknet1_1.10-pve2~test1_amd64.deb but still see...
  7. Proxmox 5.0 and OVS with 'dot1q-tunnel'

    Probably not without customising the GUI and script you've already edited. OvS does support having the host handling the double tag injection. We however haven't made patches as we treat OvS like any other standard switch. We trunk all or selective VLANs to a few virtual routers and attach...
  8. PVE 6.0 - PXE installation fails

    Hi, We've been PXE booting the ISO for several years, it makes it substantially easier and faster to load equipment. We use the steps detailed in the following forum post and presume it has Proxmox's blessing as the ISO is pre-prepared to search for the ISO image in the installation...
  9. Proxmox 5.0 and OVS with 'dot1q-tunnel'

    Everything works as expected, live migration, GUI management, etc... Tested and working in production with untagged (trunks all VLANs), selective trunking or specifying a tag, in which case the OvS port encapsulates all packets received from the VM. This is the same as connecting a firewall or...
  10. Proxmox 5.0 and OVS with 'dot1q-tunnel'

    Proxmox 6 now includes OvS 2.10, you simply need to patch /usr/share/perl5/PVE/Network.pm as detailed above (modify 3 lines). Thanks for upvoting though!
  11. User activity logging?

    Thanks Alwin, I don't believe this records the source IP of the user though...
  12. Fencing and cluster state logging

    We had a node fence itself as the IPMI reset counter reached zero (Motherboard logs) but we can't locate logging information leading up to this event. Surely corosync or pvecm logs events such as losing quorum, occasional heartbeat messages being lost or discarded? We previously appear to have...
  13. User activity logging?

    Where can we review Proxmox GUI logins, specifically source IP and user names? This information should ideally replicate between nodes of a cluster but I can't find log entries on individual nodes either...
  14. Ceph monitor space usage suddenly much higher

    Hi Alwin, The 'mon compact on trim' option appears to be enabled by default already. I can understand the monitor database growing during rebuild operations, but the size of the database went from xx MB to 1600 MB after restarting nodes last Friday. The cluster is virtually completely idle and mon...
  15. Ceph monitor space usage suddenly much higher

    We have a small 3 node cluster where the Ceph monitor store is suddenly many times larger than it was previously. We typically 'systemctl restart ceph.target' when we observe Ceph packages having been updated, and only schedule node restarts to apply newer kernels or Intel microcode updates...
  16. ceph-disk or ceph-volume ?

    Old thread, hopefully this is informational to others: Hard disk manufacturers correctly use the term terabytes (TB), e.g. 8 TB, which is 8,000,000,000,000 bytes. The storage term everyone else has misused is based on powers of 1024 and is called tebibytes (TiB). 8 TB = 8,000,000,000,000 / 1024 / 1024 / 1024...
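    The decimal-vs-binary arithmetic in the snippet can be checked with a one-liner: divide the byte count by 1024 four times to express it in tebibytes.

```shell
# An "8 TB" drive (decimal terabytes) expressed in tebibytes:
# 8,000,000,000,000 / 1024^4 ≈ 7.28 TiB (rounded to two places).
awk 'BEGIN { printf "8 TB = %.2f TiB\n", 8000000000000/1024/1024/1024/1024 }'
# → 8 TB = 7.28 TiB
```

This is why an 8 TB drive shows roughly 7.3 TiB of usable capacity in tools that report binary units.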
  17. Finding guests which generate a lot of disk IO

    Herewith a refined version: #!/bin/bash time='60'; filter='rbd_hdd'; function getstats() { for host in `ls -1A /etc/pve/nodes`; do if [[ "$HOSTNAME" == "$host" ]]; then iostat -xkdy $time 1 | grep '^rbd' > /tmp/"$host"_iostat & else ( ssh -o StrictHostKeyChecking=no...
  18. Ceph Server: why block devices and not partitions ?

    The key objective around Ceph is for it to be an easily managed, reliable and scalable storage architecture. Replacing an OSD should be as simple as replacing the old drive and running a single command which then brings it into service. Typical Ceph deployments have OSD counts in the hundreds...
  19. Ceph - 'bad crc/signature' and 'socket closed'

    We're running Ceph Luminous with the latest updates and no longer observe these errors.
  20. APT CVE-2019-3462 (please read before upgrading!)

    Most systems processed updates without problems but we have one which exhibits the following. Is this possibly due to us being routed to an out-of-sync mirror, or does it necessitate more careful investigation? [admin@kvm2 ~]# apt -o Acquire::http::AllowRedirect=false update Ign:1...