I am not sure if it is 7.2 related, but it hangs and needs a hard reboot.
May 23 10:03:02 s7 kernel: [893783.712903] show_signal: 8 callbacks suppressed
May 23 10:03:02 s7 kernel: [893783.712906] traps: pvescheduler[2627784] general protection fault ip:55c0dffe3f94 sp:7ffd3cb21a60 error:0 in...
You are right, but desktop hardware is not bad lately. And if the backups are in a safe place, it is OK to use it. I guess many DCs are using desktop hardware as well.
The other Samsung has those values at 0. And I am sending it in under warranty. We'll see what they do about it.
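For anyone comparing values before an RMA: smartctl can read the NVMe health counters directly. A minimal sketch, assuming the device path is /dev/nvme0 (adjust to your drive); the field names are from smartmontools' NVMe output:

```shell
# Show the error-related NVMe SMART fields (device path is an assumption):
smartctl -a /dev/nvme0 | grep -Ei 'critical warning|percentage used|media.*errors'
```

A non-zero "Media and Data Integrity Errors" or "Critical Warning" value is usually a much stronger warranty argument than badblocks output.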
I chose Samsung to avoid being in this situation and paid more. Next time I may go with Crucial.
Sometimes remote hands at the datacenter might be needed. Instead of shutting down the server after stopping the VMs and then waiting for staff to do their work, which ends up as longer downtime, can I just let the datacenter staff press the power button and shut down the server when they are ready, for less downtime...
I ran another read-only test, and this time there were no bad blocks. Totally confused about whether to return it under warranty now :)
root@s6:~# badblocks -v /dev/nvme0n1 > ~/bad_sectors-2-.txt
Checking blocks 0 to 976762583
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found. (0/0/0 errors)
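For scale: badblocks counts 1024-byte blocks by default, so the range 0 to 976762583 reported above corresponds to roughly a 1 TB drive:

```shell
# 976762583 blocks * 1024 bytes per block, in GB (shell integer arithmetic):
echo $(( 976762583 * 1024 / 1000000000 ))
# -> 1000
```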
Here are the write and read test results. Fewer bad blocks.
root@s6:~# badblocks -vw /dev/nvme0n1 > ~/bad_sectors--.txt
Checking for bad blocks in read-write mode
From block 0 to 976762583
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and...
I have moved all VMs and started testing. First results:
root@s6:~# badblocks -v /dev/nvme0n1 > ~/bad_sectors.txt
Checking blocks 0 to 976762583
Checking for bad blocks (read-only test): done
Pass completed, 96 bad blocks found. (96/0/0 errors)
Now I am retesting with -w (read-write method)...
I have a 4 TB disk mounted as ZFS. I also created another dataset to store disk backups with the command:
zfs create -o mountpoint=/D4BACKUP D4/BACKUPS
But their sizes are different; please see the screenshot. Their used space is also different, probably because it has a ZFS snapshot and disk...
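A sketch for checking where the space actually goes, using the dataset names from the thread (`-Hp` prints exact byte counts without headers):

```shell
# USEDSNAP is space held only by snapshots; USEDDS is the live dataset data
zfs list -o space D4 D4/BACKUPS

# Or format the raw byte counts yourself:
zfs list -Hp -o name,used,usedbysnapshots D4 D4/BACKUPS | \
  awk '{printf "%s: %.1f GiB used, %.1f GiB held by snapshots\n", $1, $2/2^30, $3/2^30}'
```

If USEDSNAP is large, the size difference in the screenshot is explained by snapshot retention rather than by the live data.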
I am a bit confused. You mean I HAVE TO have at least one common snapshot after I move a VM to a new node TO STOP the next pve-zsync run from doing a full sync?
Also, what about the files in the /var/lib/pve-zsync/ directory? Do they matter in any way for checking snapshots before running the command?
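One way to check the "common snapshot" condition yourself: an incremental send is only possible when at least one snapshot name exists on both the source and the destination dataset. A sketch, where the source IP and dataset paths are assumptions based on the thread:

```shell
# Strip the dataset prefix so only the @snapshot names are compared
ssh root@1.2.3.4 "zfs list -H -t snapshot -o name rpool/data/vm-124-disk-0" \
  | sed 's/.*@/@/' | sort > /tmp/src_snaps.txt
zfs list -H -t snapshot -o name D2/vm-124-disk-0 \
  | sed 's/.*@/@/' | sort > /tmp/dst_snaps.txt

# comm -12 prints only names present in BOTH sorted lists;
# empty output means the next pve-zsync run has to do a full sync
comm -12 /tmp/src_snaps.txt /tmp/dst_snaps.txt
```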
OK, how about ignoring the existing snapshots and letting it create a new snapshot and copy it over? It won't take long anyway.
Can I delete all existing snapshots with
zfs list -t snapshot -o name | grep vm-124 | xargs -n 1 zfs destroy -vr
along with the files in /var/lib/pve-zsync/*?
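One caution before running that pipeline: a bare `grep vm-124` also matches vm-1240, vm-1241, and so on, and `zfs list` without `-H` emits a header line. A safer sketch (the vm-1240 name is hypothetical, just to show the collision):

```shell
# A bare 'vm-124' pattern would match both lines below;
# including '-disk' in the pattern keeps only the intended VM:
printf '%s\n' 'D2/vm-124-disk-0@snap' 'D2/vm-1240-disk-0@snap' | grep 'vm-124-disk'
# -> only D2/vm-124-disk-0@snap

# Dry-run first: prefix with 'echo' to see what would be destroyed
zfs list -H -t snapshot -o name | grep 'vm-124-disk' | xargs -n 1 echo zfs destroy -vr
```

Once the dry-run output looks right, drop the `echo` to actually destroy the snapshots.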
OK, I installed a new node and moved the KVM guest with the command:
pve-zsync sync --source 1.2.3.4:124 --dest D2 --verbose
After the move, on the backup server (not a PBS), I have a cron job with this content:
/usr/sbin/pve-zsync sync --source 1.2.3.4:124 --dest D2D3 --name 110 --maxsnap 20 --method ssh;
I then update...
When I move a VM to a new node, pve-zsync starts a new complete sync instead of sending only an incremental snapshot to the backup server (not Proxmox Backup Server, a normal Proxmox node). And this creates a lot of traffic. Is it possible to avoid that?