So here's what I'd suggest:
Don't use 10.13.30.x for corosync at all.
Assign arbitrary addresses to bond0 and bond1 (edit: ON DIFFERENT SUBNETS; ideally they should be on separate VLANs too) and use those addresses as ring0 and ring1.
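As a rough sketch of what that looks like in corosync's config (all names, addresses, and VLANs below are made-up examples, not values from this thread):

```
# /etc/pve/corosync.conf (fragment) -- example addresses only
totem {
  interface {
    linknumber: 0      # ring0 -> bond0, e.g. 10.20.0.0/24 on VLAN 20
  }
  interface {
    linknumber: 1      # ring1 -> bond1, e.g. 10.30.0.0/24 on VLAN 30
  }
}

nodelist {
  node {
    name: node1
    ring0_addr: 10.20.0.1
    ring1_addr: 10.30.0.1
  }
}
```

With kronosnet (corosync 3), ring1 acts as a redundant link that traffic fails over to if ring0's subnet/VLAN goes down, which is the whole point of keeping the two rings on separate L2 domains.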
Hi everyone, I've run into a serious issue while managing a Proxmox VE Ceph environment.
A user created a lot of VMs and ended up filling the entire Ceph cluster. The problem is, when I look at the RBD storage in the WebUI, I can only see the...
Not sure whether this should be posted here or in the PVE forum, but I'm posting it here since the migration will be started via the PDM GUI.
Simplified setup:
Datacenter "Site A"
node1.sitea.company.com
node2.sitea.company.com
node3.sitea.company.com
Datacenter...
Hi,
we have a guide for replacing a failed ZFS device, including the case of a bootable device: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_change_failed_dev
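For reference, the procedure in that guide boils down to the following commands (a sketch only; `<pool>` and the device paths are placeholders you must fill in for your system, and you should read the linked guide before running anything):

```shell
# Non-boot device: simply replace it in the pool
zpool replace -f <pool> <old-device> <new-device>

# Bootable device: first copy the partition table from a healthy disk,
# then randomize the new disk's GUIDs, then replace only the ZFS partition
sgdisk /dev/<healthy-bootable-disk> -R /dev/<new-disk>
sgdisk -G /dev/<new-disk>
zpool replace -f <pool> <old-zfs-partition> <new-zfs-partition>

# Finally make the new disk bootable (ESP partition, usually partition 2)
proxmox-boot-tool format /dev/<new-disk>-part2
proxmox-boot-tool init /dev/<new-disk>-part2
```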
And just to note; these MX500 are really not meant for...
Hello @3n0x, unfortunately it can't be narrowed down like that; we need considerably more info:
Proxmox VE version (pveversion -v)
Network configuration (cat /etc/network/interfaces)
Between which endpoints is it slow? (VM↔VM, VM↔host...
No. It is read completely, to seek and find modified chunks. Look at "write: 0 B/s" in every line ;-)
In a simplified nutshell: the source is read chunk by chunk. The chunk is hashed. That checksum is sent to the PBS. The PBS notices that a...
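The chunk/hash/deduplicate flow described above can be sketched in a few lines. This is an illustrative model only, not PBS internals: `CHUNK_SIZE`, `backup`, and `known_chunks` are made-up names, though PBS does use 4 MiB fixed-size chunks and SHA-256 digests for block-device backups.

```python
# Illustrative sketch of chunk-based deduplication (not the real PBS code).
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # PBS uses 4 MiB fixed-size chunks for block backups

def backup(source: bytes, known_chunks: set) -> tuple:
    """Read every chunk, hash it, upload only unknown chunks.
    Returns (chunks_read, chunks_uploaded)."""
    read = uploaded = 0
    for off in range(0, len(source), CHUNK_SIZE):
        chunk = source[off:off + CHUNK_SIZE]
        read += 1                                  # the source is ALWAYS fully read
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in known_chunks:             # server already has this chunk?
            known_chunks.add(digest)
            uploaded += 1                          # otherwise nothing is written
    return read, uploaded

store = set()
disk = bytes(8 * CHUNK_SIZE)        # 8 identical all-zero chunks
print(backup(disk, store))          # first run: 8 chunks read, 1 unique chunk uploaded
print(backup(disk, store))          # second run: still 8 chunks read, 0 uploaded
```

That second run is exactly the "write: 0 B/s" situation: everything gets read and hashed, but nothing new is sent.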
why I won't see any snapshot being taken of the VM during the backup
As far as I know, QEMU creates its own "snapshot" for backups; no snapshots are created on the storage / file system itself.
snapshot as volume chain
That's for...
Hi, @pulipulichen
You posted only a screenshot, and to make matters worse it doesn't show all of the logs' content (the wrapped parts).
If we could see the log in CODE blocks (the </> icon above), maybe we could help more.
Anyway, what I...
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6...
I have a Proxmox host with a single network interface. Now I want a second interface for my VMs with its own subnet, and I also want to reach the main net from the VMs.
I tried a lot and this seems to work somewhat:
- can remote into a windows VM...
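One common way to get a second, VM-only subnet behind a single physical NIC is a host-internal bridge with masquerading. A sketch only: the interface names, addresses, and subnets below are assumptions, not taken from this post:

```
# /etc/network/interfaces (sketch -- names and subnets are examples)
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Second, host-internal bridge for the VM subnet, NATed out via vmbr0
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
```

VMs on vmbr1 then use 10.10.10.1 as their gateway and can reach the main net, while the main net only sees the host's address.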
A full read of the VM's source disk is required after the VM has been shut down.
From then on, the dirty bitmap lets the next backups skip the full read, but only as long as it survives, because the bitmap is managed by the VM's running QEMU process.
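The behaviour above can be modelled in a few lines. This is purely illustrative: `full_backup`, `incremental_backup`, and the chunk granularity are made-up names for the sketch, not QEMU's actual API.

```python
# Illustrative model of dirty-bitmap backups (not QEMU's real implementation).

def full_backup(disk: list) -> int:
    """First backup, or any backup after the QEMU process was stopped:
    every chunk must be read to rebuild knowledge of the disk."""
    return len(disk)  # chunks read

def incremental_backup(disk: list, dirty: set) -> int:
    """While the VM's QEMU process keeps running, it records written
    chunks in a dirty bitmap, so only those chunks need to be read."""
    read = len(dirty)
    dirty.clear()     # the bitmap is reset after a successful backup
    return read

disk = ["chunk"] * 100
dirty = set()
full_backup(disk)                # 100 chunks read
dirty.update({3, 42})            # guest writes two chunks afterwards
incremental_backup(disk, dirty)  # only 2 chunks read
```

Shutting the VM down destroys the QEMU process and with it the bitmap, which is why the next backup falls back to the full read.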
Welcome, @Zexan
Hard to guess, because you haven't given any details, e.g. what exactly happens at which stage of the installation.
Anyway, check whether the same problem occurs with installers of other systems, like Debian or Ubuntu, or when...
Resurrecting this old thread.... I've come back to the topic and been running SR-IOV for a few weeks now without any issues but with noticeable improvements in latency. This time I kept things clean and simple:
One 25Gb PF for host (including...
I tried several other things as well to identify the issue, but nothing has helped. At one point I considered upgrading, but that is not possible either. The problem is that I can't even find logs for the services. systemctl start pvestatd or systemctl...
@nitrosont, your top output shows the problem quite clearly: it is not a CPU problem but memory:
MiB Mem: 256.0 total, 1.2 free, 253.3 used (RAM practically full)
MiB Swap: 256.0 total, 0.1 free, 255.9 used (swap is full as well)...
The allocated pages confirm the picture: 1,673,851 pages × 4 MiB (default page size on the MSA 2060) = ~6.39 TiB, which matches the ~7.0 TB at pool level.
OCFS2 reports 3.8T used, so roughly 2.5 TiB are stale allocations that the filesystem...
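A quick check of that arithmetic (constants copied from the post; the variable names are mine):

```python
# Re-checking the thin-provisioning arithmetic from the MSA output.
PAGE_SIZE_MIB = 4                 # default page size on the MSA 2060
allocated_pages = 1_673_851

allocated_tib = allocated_pages * PAGE_SIZE_MIB / 1024**2  # MiB -> TiB
print(f"{allocated_tib:.2f} TiB allocated")   # ~6.39 TiB, matching ~7.0 TB at pool level

used_by_fs_tib = 3.8              # what OCFS2 reports as used
stale_tib = allocated_tib - used_by_fs_tib
print(f"{stale_tib:.2f} TiB stale")           # pages allocated but no longer backing live data
```

The gap exists because the array allocates pages on write but never learns when the filesystem frees blocks, unless discard/unmap is issued.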