These results are from my r610 with the h200 in IT mode, consumer SSDs in ZFS. The read test went fast, but the write test nearly crashed it, I thought. I lost all web UI access and the test ETA was counting up, hahah. Not sure what to make of that.
#### R610
root@pve:/home# fio --randrepeat=1...
So here is my testing from the r630 with the h730 in HBA mode w/ Dell 15k disks. I was curious if this only tests the boot disk, or if I can specify the ZFS pool to test its performance somehow. I am working on the other two machines now, which have consumer SSDs. Appreciate the cmds! First one I found...
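In case it helps anyone with the same question: fio can be pointed at any mounted filesystem with `--directory`, so running it against the pool's mountpoint (here assumed to be `/tank` — substitute your own pool name) should exercise the ZFS pool rather than the boot disk. A sketch, not the exact command from this thread:

```shell
# Run a 4k random read/write test against a directory on the ZFS pool
# instead of the boot disk. /tank is a placeholder for your pool's mountpoint.
# --direct is deliberately omitted: O_DIRECT is not supported on older ZFS.
fio --name=zfs-pool-test \
    --directory=/tank \
    --ioengine=psync \
    --rw=randrw --bs=4k \
    --size=4G --numjobs=1 \
    --iodepth=32 \
    --randrepeat=1 --group_reporting
```

Keep in mind ZFS's ARC cache can inflate the read numbers, so the results are best used to compare machines against each other rather than as raw disk speed.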
So far, I rebuilt the r630 with the h730, set it into HBA mode, and set up a single boot disk on LVM. The other 3 disks are running ZFS raidz. I know that's not recommended... but this is just a test. What would my other storage options be? I restored my test VM and got the following results. Much...
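For anyone recreating this layout, a three-disk raidz pool can be built roughly like this (a sketch only; the pool name `tank` and the `/dev/disk/by-id` paths are placeholders for your actual disks):

```shell
# Create a raidz1 pool from three disks (placeholder device paths).
# ashift=12 assumes 4k-sector drives, which is typical for modern disks.
zpool create -o ashift=12 tank raidz \
    /dev/disk/by-id/ata-DISK1 \
    /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3

# Verify the layout and health
zpool status tank
```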
Not to hijack this thread, but in my case I had a common Windows VM that I restored on each machine with a different storage config.
A. PNY CS900
B. SK Hynix S31
Some testing results:
VM disk performance testing
I'm battling this issue as well. I have 2 r630s; in one I switched out the h730 for an HBA330. The performance is absolutely awful: massive IO delay and noticeable slowness in VMs. CrystalDiskMark shows a fraction of the h730 setup's numbers, and it's way slower than my h200 IT-mode setup in my r610. Somehow that is...
On a single node, I restarted the pveproxy and pvedaemon services, and that has worked before. Or rebooting.
And on clusters I've seen it happen when quorum is not made properly, or one machine isn't reporting, or something.
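The two checks described above look roughly like this (standard Proxmox VE commands, sketched from memory):

```shell
# Single node: restart the web UI / API services
systemctl restart pveproxy pvedaemon

# Cluster: check whether quorum is established and which nodes are reporting
pvecm status
pvecm nodes
```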
Hello,
my machine used to run very efficiently, always staying under a few % of IO delay. Nothing really changed that I can tell; it just started one day to max out RAM. I do have two ZFS pools of SSDs, so other than the probably incorrect use of TRIM, they should be fast, and all with...
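One thing worth checking when a ZFS host suddenly appears to max out RAM: by default the ZFS ARC cache is allowed to grow to roughly half of system memory, and it shows up as used RAM. A quick way to see the ARC's current size, and (as an illustration, not a recommendation from this thread) how to cap it — the 4 GiB value here is an example:

```shell
# Show current ARC size (size) and maximum target (c_max), in bytes
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

# Optionally cap the ARC at 4 GiB, persisting across reboots
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
update-initramfs -u
```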
OK, thanks everyone. Some guy on Reddit had the trick: I used the cmd dpkg --configure -a and it picked up the install where I got locked out, finished the install, and ran ifreload -a.
My network came back online and I think I'm good to go.
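For anyone hitting the same interrupted-upgrade state, the recovery described above boils down to two commands, run as root on the host console:

```shell
# Resume any half-configured packages left behind by the interrupted upgrade
dpkg --configure -a

# Re-apply the network configuration (requires ifupdown2)
ifreload -a
```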
When trying this from the upgrade wiki, I get permission denied.
"If ifupdown2 is installed, you can use ifreload -a to apply this change. For the legacy ifupdown, ifreload is not available, so you either need to reboot or use ifdown vmbr0; ifup vmbr0"
F... I wish I woulda read this 5 minutes ago... I did a systemctl restart pveproxy pvedaemon.
I got back into the UI but the update was frozen... rebooted :mad:
Now I can't ping it, and from the shell on the host, when I try to ping 8.8.8.8 it says network unreachable. My /etc/network/interfaces still...
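If anyone lands here in the same spot: with legacy ifupdown (where ifreload isn't available), the bridge can usually be brought back by hand, as the wiki quote above suggests. A sketch, assuming the default Proxmox bridge name vmbr0:

```shell
# Verify the config file survived the upgrade
cat /etc/network/interfaces

# Cycle the management bridge manually (legacy ifupdown)
ifdown vmbr0; ifup vmbr0

# Confirm the address and default route came back
ip addr show vmbr0
ip route
```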
So I was in the middle of upgrading from 6 to 7, and then the login window popped up right as it was asking me what I want to do with the sshd_config file. My password isn't working and the web UI is down from a new window... wth happened and how can I recover? I've been scared to restart the server...
It's unfortunate there hasn't been more information on this. I just added a 10-gig NIC and would also like to use it for backups to my NFS share on Unraid. I'm only getting about 50-70/50-70 MB/s until the end, then it reads 200+ for the last few percent. I want to check my database file to ensure it's...
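Before blaming the backup target, it can help to confirm the 10-gig link itself delivers line rate; an iperf3 run between the Proxmox host and the Unraid box isolates the network from the storage. A sketch (iperf3 must be installed on both ends; the IP address is a placeholder for your NFS server):

```shell
# On the Unraid/NFS side, start a listener:
iperf3 -s

# On the Proxmox host, run a 30-second throughput test:
iperf3 -c 192.168.1.50 -t 30
```

A 10 GbE link should report somewhere near 9+ Gbit/s; much less than that points at the network rather than the backup job.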
So, as in informational, you mean they are nothing to worry about and won't affect performance or features? So far I have added all of my nodes successfully; I just haven't moved further until I knew whether I was going to have to delete everything.
Thanks for your input.
I followed these instructions on my first master node, and while it did allow k3s to start, I still have the error about the modules. Is it recommended to proceed, or are the modules necessary for a clean install?
* k3s.service - Lightweight Kubernetes
Loaded: loaded...
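k3s typically wants the overlay and br_netfilter kernel modules available; a quick generic check for whether they're loaded and loadable (a sketch, not from this thread — in an LXC container modprobe will fail and the modules must be loaded on the host instead):

```shell
# Check whether the modules k3s usually complains about are loaded
lsmod | grep -E 'overlay|br_netfilter'

# Try loading them manually; errors here point at a kernel/container issue
modprobe overlay
modprobe br_netfilter

# Make them load on every boot
printf 'overlay\nbr_netfilter\n' > /etc/modules-load.d/k3s.conf
```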
Here's some more output from testing... I tried to remove daemon.json and restart with dockerd -D
DEBU[2021-02-06T17:33:28.008998835Z] [graphdriver] priority list: [btrfs zfs overlay2 fuse-overlayfs aufs overlay devicemapper vfs]
DEBU[2021-02-06T17:33:28.009180897Z] zfs command is not...
I've got the same issue after a new install of 6.3-3, but my 6.3-2 instance still works just fine. Here's my log output:
root@docker:~# journalctl -fu docker
-- Logs begin at Sat 2021-02-06 03:46:19 UTC. --
Feb 06 16:55:29 docker systemd[1]: Failed to start Docker Application Container Engine...