Search results

  1. lxcfs: utils.c: 331: read_file_fuse: Write to cache was truncated

    I have the same messages since I upgraded from 5 to 6. I don't know if it's related, but now snapshotting some of the CTs freezes, leaving the CT unresponsive all night (until I just kill -9 the snapshot process).
  2. ZFS replication gives "out of space"

    Yes, the target is supposed to have 2T more free... I'm assuming there's a background copy process I missed somehow... I created the replication task in the GUI, so I guess it's been automatically queued and my manual run is interfering... what's really weird is that I ran manually because the GUI...
  3. How do I remove Replication jobs from a missing node?

    Hello, is this still considered good practice? I mean manually removing lines from a file inside /etc/pve? Thanks. (A pvesr-based alternative is sketched after this list.)
  4. ZFS replication gives "out of space"

    It's weird! Now a 1.36T ZFS fs exists on the target?!
  5. ZFS replication gives "out of space"

    Thanks for your answer. Here is the info: VM config: arch: amd64 cores: 1 hostname: xxxxx memory: 1000 net0: name=eth1,bridge=vmbr1,hwaddr=xxxx,ip=xxxxx,type=veth net1: name=eth0,bridge=vmbr0,gw=xxxxx,hwaddr=xxxxx,ip=xxxx,type=veth net2...
  6. ZFS replication gives "out of space"

    Greetings, I'm trying to set up a replication job between two nodes of a cluster, like this: pvesr run -id 111-2 -verbose. After transferring 173G, the process stops with "cannot receive incremental stream: out of space". The free space on the receiving end is 1.65T. What am I doing wrong? (A space-check sketch follows this list.) Thanks...
  7. VM disturbing the whole machine

    As a further test, I moved the problematic VM to another node, and it's working. So it's very likely a host-related issue. Any ideas?
  8. VM disturbing the whole machine

    Interesting follow-up. I moved a working VM to the host where the problem occurred, and the VM is running erratically. So it's probably a host-related problem. Both hosts are running PVE 5.4-15; the working one is an Intel Xeon E3-1245 v2 @ 3.4 GHz, the problematic one is an Intel Xeon E5-1650 v2 @ 3.5 GHz...
  9. Understand memory usage

    Host: CPU(s): 12 x Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz (1 Socket); Kernel Version: Linux 4.15.18-12-pve #1 SMP PVE 4.15.18-35 (Wed, 13 Mar 2019 08:24:42 +0100); PVE Manager Version: pve-manager/5.4-15/d0ec33c6. Guest: Debian GNU/Linux 9 (stretch). # free -m total used...
  10. Tutorial - Helping post for upgrade from Proxmox 5 to Proxmox 6 on OVH.

    My personal opinion is that it's not at all a technical issue: SoYouStart is the "low cost" offer, and I suspect OVH wants us to migrate to a more expensive offer, even if the physical machines are the same. The tech support told me that Proxmox 6 is already available at OVH but not at SYS.
  11. Understand memory usage

    Greetings, I'm puzzled by this: Proxmox reports that my CT has a memory usage of 0.08% (1.57 MiB of 1.95 GiB); however, the guest system sees things differently: total used free shared buff/cache available Mem: 1000 664 4... (A host-vs-guest memory sketch follows this list.)
  12. Adding a v6 node to a v5 cluster

    Do you have a procedure document to help me understand how I can do that?
  13. Tutorial - Helping post for upgrade from Proxmox 5 to Proxmox 6 on OVH.

    Yes, I had a similar answer: "hardware compatibility problem", which kind of worried me because it felt like they didn't know what they were talking about. But yesterday I had a slightly different answer: "you can select a premium OVH offer", so I wonder if they are using this as an excuse to make...
  14. VM disturbing the whole machine

    Well, it wasn't the cause... I now have 2T of free disk, the VM is using 1G of the 3G allowed, server load is between 0 and 4 (for 12 cores), and I see this kind of thing: Jun 1 17:18:24 dkr06 kernel: [10071.847256] clear_huge_page+0x110/0x200 Jun 1 17:19:43 dkr06 kernel: [10150.892113]...
  15. VM disturbing the whole machine

    It seems you were right: I carelessly let the ZFS partition go beyond 80% full, and that was probably the root cause. After a lot of cleanup, things seem to be back to normal. Thanks a lot!!
  16. VM disturbing the whole machine

    Only the weird CPU/IRQ message I posted. I'm trying to free a lot of space to test whether space is the problem; I know ZFS deals poorly with low free space. (A pool-capacity check is sketched after this list.)
  17. Adding a v6 node to a v5 cluster

    Thanks for your answers! Hmm, I've never tested that :) sounds fun! Would it help me prepare my cluster update to v6? So I would copy the current hosts as "guest hypervisors" onto one of the current actual nodes?
  18. Adding a v6 node to a v5 cluster

    So would it be safer to first add the new node as v5 and then upgrade everything? I have a 3-node cluster in production and I can't afford a mirror testbed.
  19. Adding a v6 node to a v5 cluster

    Oh I see, I misunderstood and thought it was possible. I'll try to upgrade the cluster then; it's more work but makes more sense than creating a new v5 node. Thanks.
  20. Adding a v6 node to a v5 cluster

    Greetings, I'm trying to add a new node to my cluster. I followed the documentation to update all the current nodes to the latest v5 and corosync v3. It's mostly working: the current nodes see each other; on the GUI of one node I can control the other nodes; in Datacenter/Cluster, I see the node... (The join and verification commands are sketched just below.)
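
For the ZFS replication "out of space" thread (items 2 and 4-6), here is a minimal space-check sketch for the receiving node. The pool and dataset names (rpool, rpool/data) are placeholders; the point is that reservations and refreservations, which zvol-backed disks typically carry, can consume space that a plain "free space" figure does not show.

    # How much space is really available vs. already reserved on the target pool
    zfs list -o space rpool

    # Quotas and (ref)reservations on the dataset that receives the stream
    zfs get quota,refquota,reservation,refreservation rpool/data

    # Re-run the job with verbose output, as in the thread
    pvesr run -id 111-2 -verbose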
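
For "How do I remove Replication jobs from a missing node?" (item 3), a sketch of the pvesr-based alternative to hand-editing /etc/pve; the job ID 111-2 is reused from the thread above purely as a placeholder.

    # Show configured jobs and their current state
    pvesr list
    pvesr status

    # Drop the job; --force removes the config entry even when the target
    # node is gone and the normal cleanup cannot run
    pvesr delete 111-2 --force

    # The jobs live in this cluster-wide file; editing it by hand also works,
    # but pvesr keeps the config and the replication state consistent
    cat /etc/pve/replication.cfg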
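
For the "VM disturbing the whole machine" thread (items 7, 8 and 14-16), where the root cause turned out to be a pool beyond 80% full, a sketch of the capacity checks; the pool name rpool is a placeholder.

    # Allocation, free space, capacity and fragmentation per pool
    zpool list -o name,size,allocated,free,capacity,fragmentation,health

    # Per-dataset breakdown of where the space actually went
    zfs list -o space -r rpool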
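
For the "Understand memory usage" thread (items 9 and 11), a sketch for comparing the guest view with the host-side accounting. The container ID 101 is a placeholder, and the cgroup path assumes a PVE 5.x host with cgroup v1, as in the thread; the GUI figure is derived from the host-side cgroup counters rather than from free inside the guest, which is one reason the two can differ widely.

    # The guest's own view, run from the host
    pct exec 101 -- free -m

    # What the host-side memory cgroup accounts to the container, in bytes
    cat /sys/fs/cgroup/memory/lxc/101/memory.usage_in_bytes
    head /sys/fs/cgroup/memory/lxc/101/memory.stat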
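
For the "Adding a v6 node to a v5 cluster" thread (items 12 and 17-20), a sketch of the usual join and verification commands once every node runs the same major version; 192.0.2.10 is a placeholder for an existing cluster member.

    # On the existing nodes: confirm quorum and that corosync 3 is in use
    pvecm status

    # On the new node (same PVE major version as the cluster): join it
    pvecm add 192.0.2.10

    # Afterwards, check the member list and votes
    pvecm nodes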