Thanks for your answer. Here is the info:
VM config:
arch: amd64
cores: 1
hostname: xxxxx
memory: 1000
net0: name=eth1,bridge=vmbr1,hwaddr=xxxx,ip=xxxxx,type=veth
net1: name=eth0,bridge=vmbr0,gw=xxxxx,hwaddr=xxxxx,ip=xxxx,type=veth
net2...
Greetings
I'm trying to have a replication job between two nodes of a cluster, like this:
pvesr run -id 111-2 -verbose
After transferring 173G, the process stops with "cannot receive incremental stream: out of space". The free space on the receiving end is 1.65T.
What am I doing wrong?
Thanks...
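For anyone hitting the same error: before blaming the sender, it can help to check what the receiving pool really has available, since a zvol's refreservation and existing snapshots also have to fit. A minimal sketch, assuming the pool is called "rpool" (that name and the thresholds are assumptions, not from the original post):

```shell
# Show space accounting on the receiving node (guarded so the commands
# no-op on machines without ZFS installed):
command -v zfs >/dev/null && zfs list -o name,avail,used,usedbysnapshots,refreservation || true
command -v zpool >/dev/null && zpool list -o name,size,alloc,free,cap rpool || true

# The feasibility arithmetic itself, in GB: the incoming stream plus the
# volume's refreservation must fit in what the pool reports as available.
fits_incremental() {
    local avail_gb=$1 stream_gb=$2 refres_gb=$3
    if [ $(( stream_gb + refres_gb )) -le "$avail_gb" ]; then
        echo "fits"
    else
        echo "out of space"
    fi
}

fits_incremental 1650 173 200   # 1.65T avail, 173G stream, 200G refreservation -> fits
```

If the pool is also close to full overall, ZFS can refuse a receive well before `avail` reaches zero.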
Interesting follow-up.
I moved a working VM to the host where the problem occurred, and the VM is running erratically. So it's probably a host-related problem.
Both hosts are running PVE 5.4-15; the working one has an Intel Xeon E3-1245 v2 @ 3.4 GHz, the problematic one an Intel Xeon E5-1650 v2 @ 3.5 GHz...
My personal opinion is that it's not a technical issue at all: SoYouStart is the "low cost" offer, and I suspect OVH wants us to migrate to a more expensive one, even though the physical machines are the same. The tech support told me that Proxmox 6 is already available at OVH but not at SYS.
Greetings
I'm puzzled by this: Proxmox reports that my CT has a memory usage of 0.08% (1.57 MiB of 1.95 GiB), but the guest system sees things differently:
          total   used   free   shared   buff/cache   available
Mem:       1000    664      4...
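As far as I understand it, the gap comes from what each side counts: Proxmox derives the container's figure from the memory cgroup and subtracts page cache, while `free` inside the CT folds cache into "used"/"buff/cache". A sketch of that arithmetic (the paths in the comments are the cgroup v1 layout, and the sample byte counts are made up for illustration):

```shell
# Reproduce a PVE-style "used" figure from raw cgroup byte counts: usage minus cache.
# On a cgroup-v1 host the two inputs would come from
#   /sys/fs/cgroup/memory/memory.usage_in_bytes   and
#   the total_cache line of /sys/fs/cgroup/memory/memory.stat
used_minus_cache_mib() {
    local usage_bytes=$1 cache_bytes=$2
    echo $(( (usage_bytes - cache_bytes) / 1024 / 1024 ))
}

# With ~700 MiB of cgroup usage that is almost entirely page cache, PVE would
# show only a couple of MiB "used" while free(1) inside the CT reports hundreds:
used_minus_cache_mib $(( 700 * 1024 * 1024 )) $(( 698 * 1024 * 1024 ))   # -> 2
```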
Yes, I had a similar answer: "hardware compatibility problem", which kind of worried me because it felt like they don't know what they're talking about.
But yesterday I had a slightly different answer: "you can select a premium OVH offer", so I wonder if they are using this as an excuse to make...
Well, it wasn't the cause... I now have 2T of free disk, the VM is using 1G of its 3G allowance, the server load is between 0 and 4 (for 12 cores), and I still see this kind of thing:
Jun 1 17:18:24 dkr06 kernel: [10071.847256] clear_huge_page+0x110/0x200
Jun 1 17:19:43 dkr06 kernel: [10150.892113]...
It seems you were right: I carelessly let the ZFS partition go beyond 80% full, and that was probably the root cause. After a lot of cleanup, things seem to be back to normal. Thanks a lot!!
Only the weird CPU/irq message I posted.
I'm trying to free a lot of space to test whether space is the problem; I know ZFS deals poorly with low disk space.
Thanks for your answers!
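The ~80% rule of thumb can be turned into a small guard so the pool never silently creeps past it again. A sketch, assuming the pool is named "rpool" (that name and the exact threshold are assumptions):

```shell
# Print the pool's fill level (guarded so it no-ops on machines without ZFS):
command -v zpool >/dev/null && zpool list -H -o name,capacity rpool || true

# The threshold check itself, for a capacity string like "85%":
check_capacity() {
    local cap=${1%\%}    # strip the trailing %
    if [ "$cap" -ge 80 ]; then
        echo "WARN: pool at ${cap}%, expect degraded ZFS performance"
    else
        echo "OK: pool at ${cap}%"
    fi
}

check_capacity 85%   # -> WARN: pool at 85%, expect degraded ZFS performance
check_capacity 62%   # -> OK: pool at 62%
```

Dropped into a cron job, a check like this would have flagged the pool long before replication started failing.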
Hmm, I've never tested that :) Sounds fun! Would it help me prepare my cluster update to v6? So I would copy the current hosts as "guest hypervisors" onto one of the actual current nodes?
So would it be safer to first add the new node as v5 and then upgrade everything? I have a 3-node cluster in production and I can't afford a mirror testbed.
Oh I see, I misunderstood and thought it was possible.
I'll try to update the cluster then; it's more work but makes more sense than creating a new v5 node.
Thanks.
Greetings
I'm trying to add a new node to my cluster. I followed the documentation to update all the current nodes to the latest v5 and corosync v3.
It's mostly working:
- current nodes see each other
- on the GUI of one node I can control the other nodes
- in Datacenter/Cluster, I see the node...
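Before adding the new node, the upgrade docs require every existing node to already be on corosync 3. A quick way to sanity-check what each node reports (a sketch; the sample banner below is hypothetical, on a live node the string would come from `corosync -v`):

```shell
# Extract the major version from a corosync version banner.
corosync_major() {
    echo "$1" | sed -n "s/.*version '\([0-9][0-9]*\)\..*/\1/p"
}

v=$(corosync_major "Corosync Cluster Engine, version '3.0.2'")
echo "corosync major: $v"   # -> corosync major: 3
```

Alongside that, `pvecm status` on each node should show full quorum before the join is attempted.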
Thanks for sharing.
I just noticed that SoYouStart now provides only a non-ZFS Proxmox 5 template. Does anyone have information about that? Will OVH abandon Proxmox?