Recent content by tuonoazzurro

  1. [SOLVED] Logs problem after upgrade from 5 to 6

    Great, it's running fine now. I think the post can be marked as solved. Thanks for the support.
  2. [SOLVED] Logs problem after upgrade from 5 to 6

    So, I think we need to find out whether the problem is PMG or the new version of Debian. I tried manually adding a line to syslog with the command #logger test message, but the log is still empty, so I think the problem is on the Debian/rsyslog side (see the rsyslog sketch after this list). I've reinstalled rsyslog with this command: #sudo apt-get...
  3. [SOLVED] Logs problem after upgrade from 5 to 6

    Hi, I have the same problem after the upgrade. I tried rebooting yesterday and it solved the issue, but today I have the same error.
  4. Custom interval in clearing local dns resolver cache

    Thanks for the answer. I had already read that post, but there is no mention of unbound settings besides just installing it. I've found some settings here: https://forum.proxmox.com/threads/how-to-local-dns-resolver-for-proxmox-mail-gateway.41189/post-201416 and I'm trying them (see the unbound sketch after this list). My problem is that...
  5. Custom interval in clearing local dns resolver cache

    Hi, can you share a valid/optimal config for unbound? Thanks
  6. New Hardware Configuration

    Hi, regarding this: are you using the LAG for Ceph and VM traffic also for cluster communication (corosync), or is that on one or two separate networks?
  7. [SOLVED] pve5to6 ceph version mismatch

    Hi Alwin, I rebooted all the nodes one at a time, waiting for Ceph to rebalance between reboots, and now all nodes are on 12.2.12 (see the rolling-restart sketch after this list). Thanks for the support.
  8. [SOLVED] pve5to6 ceph version mismatch

    Hi Alwin, I already tried that on all 3 nodes, but I get the same warning.
  9. [SOLVED] pve5to6 ceph version mismatch

    Hi, I have a 3-node cluster with Ceph, all nodes updated to the latest version, 5.4.11. I ran the pve5to6 script and got the following warnings: WARN: multiple running versions detected for daemon type monitor! WARN: multiple running versions detected for daemon type manager! SKIP: no running...
  10. Proxmox VE Ceph Benchmark 2018/02

    No redundancy on the install disk. What I'm looking for is: 2 small disks in a ZFS mirror for Proxmox, and the other disks for ZFS/Ceph.
  11. Proxmox VE Ceph Benchmark 2018/02

    Thank you very much! I'll try this method.
  12. Proxmox VE Ceph Benchmark 2018/02

    I've been looking for this solution for about a year but never understood how to do it, so I made a RAID 0 out of every single disk. Can you please explain how you did it? Thanks
  13. Proxmox VE Ceph Benchmark 2018/02

    If you put your P420 in HBA mode, what are you booting from? Where did you install Proxmox? (You cannot boot from the P420 when it is in HBA mode.)
  14. ZFS SSD Pool with NVMe Cache (ZIL & L2ARC)

    Is this a problem because write amplification is handled better, or just because enterprise SSDs have more write endurance? If it's the second case, that is not a solution; it's like putting a bigger fuel tank in a car because the engine is losing fuel onto the floor.
  15. VM gone after joining cluster

    https://pve.proxmox.com/wiki/Cluster_Manager is clear that a node should be empty before adding it to a cluster (see the join sketch after this list). Quoting the wiki page: A new node cannot hold any VMs, because you would get conflicts about identical VM IDs. Also, all existing configuration in /etc/pve is overwritten when...
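
For the rsyslog issue in post 2, a minimal troubleshooting sketch, assuming the stock Debian 10 / PMG 6 setup where rsyslog writes to /var/log/syslog; the exact log paths are assumptions, not details from the thread.

    # Send a test line through the syslog socket and check whether rsyslog writes it out.
    logger "test message"
    tail -n 5 /var/log/syslog              # the test line should appear here

    # If journald shows the message but the file stays empty, the problem sits
    # between journald and rsyslog rather than in the application.
    journalctl -n 5 --no-pager

    # Reinstall rsyslog and restart it, as described in the post.
    sudo apt-get install --reinstall rsyslog
    sudo systemctl restart rsyslog
    sudo systemctl status rsyslog --no-pager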
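
For the unbound questions in posts 4 and 5, a minimal caching-resolver sketch, assuming Debian's stock unbound package (whose default config includes /etc/unbound/unbound.conf.d/*.conf); the file name local-cache.conf and the TTL values are illustrative, not the settings from the linked post.

    # /etc/unbound/unbound.conf.d/local-cache.conf
    server:
        interface: 127.0.0.1
        access-control: 127.0.0.0/8 allow
        # Keep entries cached between 5 minutes and 1 day, so the cache is
        # not emptied more often than the upstream TTLs allow.
        cache-min-ttl: 300
        cache-max-ttl: 86400
        # Refresh popular entries before they expire instead of dropping them.
        prefetch: yes

    # Apply it and point the host at the local resolver (PMG uses /etc/resolv.conf):
    sudo systemctl restart unbound
    echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf >/dev/null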
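
For the pve5to6 version-mismatch warnings in posts 7-9, a sketch of the rolling restart that brings every Ceph daemon onto the installed 12.2.12 binaries; it assumes the standard ceph-*.target units shipped with Proxmox, and must be run one node at a time, waiting for Ceph to recover between nodes, exactly as post 7 describes.

    ceph versions                       # lists which daemons still run the old binary

    # On one node at a time (a full reboot, as in the thread, works too):
    systemctl restart ceph-mon.target
    systemctl restart ceph-mgr.target
    systemctl restart ceph-osd.target

    # Wait for HEALTH_OK before moving on to the next node.
    ceph -s

    ceph versions                       # afterwards, all daemons should report 12.2.12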
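
Finally, for the cluster-join rule quoted in post 15, a short sketch of verifying that a node is empty before joining; <cluster-node-ip> is a placeholder for any existing cluster member.

    # On the node that is about to join: it must hold no guests, because
    # /etc/pve is overwritten by the cluster configuration during the join.
    qm list                             # must list no VMs
    pct list                            # must list no containers

    # Only then add the node to the existing cluster.
    pvecm add <cluster-node-ip>
    pvecm status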
