Sorry for the mistake: I resized a VM disk on the PVE node connected to PBS.
After this operation, on the next backup almost the whole disk is marked as "dirty", so the backup takes more than an hour instead of the few minutes it took before.
Is that normal?
When...
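For context, this matches how PBS incremental backups work: the dirty bitmap QEMU keeps for a drive is dropped on a resize (or a VM stop), so the next backup has to re-read the whole disk to rebuild it. A minimal sketch of the sequence, assuming a hypothetical VM ID 100, drive scsi0, and storage name my-pbs (none of these are from the original post):

```shell
# Hypothetical example: grow a VM disk on PVE by 50G.
# This invalidates QEMU's in-memory dirty bitmap for scsi0.
qm resize 100 scsi0 +50G

# The next backup re-reads the entire disk to rebuild the bitmap.
# Unchanged chunks are still deduplicated on the PBS side, but the
# full read pass is what makes this one run take so long.
vzdump 100 --storage my-pbs
```

Subsequent backups should be fast again once the bitmap has been rebuilt.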
Hi,
after resizing a disk stored on PBS, and an upgrade from 2.3 to 2.4 on a PVE node connected to this PBS, the VM backups start from zero (e.g. VM 1'' with a 250GB drive copies all 250GB again)...
Why did this happen?
Tnx!
All disks pass the smartctl status ...
/dev/sda I intentionally formatted, to see if ZFS responds as I expect. Otherwise the volume was not degraded (but still unable to mount).
It fails as before.
I also tried:
zpool import -XF -m -f -o PVE03 -> same I/O error
root@pve03 ~ # zpool status
no pools available
----
root@pve03 ~ # zpool import
pool: PVE03
id: 9958204538773202748
state: DEGRADED
status: One or more devices contains corrupted data.
action: The pool can be imported despite missing or damaged devices. The
fault tolerance of...
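Since the pool shows up in `zpool import` but the recovery import hits I/O errors, a gentler step worth trying first is a read-only import, which skips log replay and often succeeds where a writable import fails. A hedged sketch (pool name PVE03 from the output above; everything else is standard zpool usage, not a guaranteed fix):

```shell
# Attempt a read-only import first; if it works, copy the data off
# before trying anything destructive like -X.
zpool import -f -o readonly=on PVE03

# Then inspect which device actually carries the corrupted data:
zpool status -v PVE03
```

If even the read-only import fails with I/O errors, the damage is likely below ZFS (controller or cabling), and that is worth checking before further -XF attempts.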
Hi,
I have 3 Proxmox nodes, v6.4.
Each node has 6 SSD drives dedicated to storing VMs.
Each node is configured with ZFS raidz1-0.
On top of this ZFS pool I built a Gluster brick.
So I set up a dispersed Gluster volume with 3 bricks (redundancy 1), and it worked flawlessly for the last 3 years.
Now...
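For reference, the setup described above (one brick per node, dispersed with redundancy 1) corresponds to something like the following GlusterFS commands. The volume name, hostnames, and brick paths are assumptions for illustration, not from the original post:

```shell
# Hypothetical sketch: a 3-brick dispersed volume (2 data + 1
# redundancy), one brick per node on top of each local ZFS pool.
gluster volume create vmstore disperse 3 redundancy 1 \
    pve01:/PVE01/brick pve02:/PVE02/brick pve03:/PVE03/brick
gluster volume start vmstore
```

With redundancy 1, the volume stays available if any single brick (node) goes down.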
I looked into the forum and saw many posts on this matter.
The most important objection is that there's no real hardware solution in cloud or hosting today that would justify an investment in supporting development and testing on another architecture.
But recently Hetzner (one of the players here in Europe)...
No firewall active; the nodes ping each other, and other IPs on the LAN, they just don't ping the gateway (or anything outside) anymore...
But fortunately I found the problem: it's the gateway itself.
It's an old Telecom modem/router: I disabled DHCP on this modem to activate it on a Pi-hole inside my LAN.
This led to the...
Hi,
I have a little 4-node cluster on mini PCs in my house.
Nothing fancy: 2 NICs (one on a USB adapter) on each host.
The physical NICs are dedicated to connecting the internal SSD storage, with 3 bricks, on the storage network.
The USB NIC (1Gb) is for management and the Proxmox cluster.
All worked well, but after a while 3...
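The two-network layout described above would look roughly like this in /etc/network/interfaces. Interface names, subnets, and addresses are illustrative assumptions, not from the original post:

```
# Hypothetical fragment: physical NIC on the storage network,
# USB NIC for management and the Proxmox (corosync) cluster.
auto enp1s0
iface enp1s0 inet static
    address 10.10.10.11/24    # storage network (Gluster bricks)

auto enx00e04c680001
iface enx00e04c680001 inet static
    address 192.168.1.11/24   # management + cluster traffic
    gateway 192.168.1.1
```

Keeping storage traffic off the USB NIC is sensible, since USB adapters are the less reliable of the two links.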
I tried with a smaller VM and the restore goes fine.
The problem affects a much larger VM (250GB hard drive).
Is there a way to avoid this problem?
Tnx.
I have 4 nodes with Gluster and similar problems... also reported in this thread:
In my case, I cannot restore any VM backed up to the Gluster volume.
The complete error is:
Storage 'BACKUP' on node 'pve01'
restore vma...
I have 4 nodes with Gluster and similar problems... I cannot restore any VM backed up on my Gluster volume.
Same error:
"All subvolumes are down. Going offline until at least one of them comes back up"
I'm on the latest PVE version and Proxmox packages.
The fact that I cannot restore ANY of my VMs...
The mail alert still reports that the problem is in the restic directory, but now the datastore points to another directory and the backup is OK.
If I do some tasks manually in the web interface I have no problems at all.
Where are the log files?
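For anyone else looking: on current PVE/PBS releases the usual places to check are the journal and the per-task log directories. A sketch (verify the paths on your version):

```shell
# PBS server log via the journal:
journalctl -u proxmox-backup.service

# PBS per-task logs (one file per task):
ls /var/log/proxmox-backup/tasks/

# PVE per-task logs on the node:
ls /var/log/pve/tasks/
```

The web-UI task viewer reads from these same task files, so a task that never reached the UI may still only appear in the journal.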