Search results

  1. System Hang at 'Reached target Reboot'

    FYI, I noticed that on all the servers where we hit this 30-minute reboot time, IPMI also stopped working.
  2. System Hang at 'Reached target Reboot'

    Logs show nothing interesting:
    Sep 10 06:27:45 lic2 systemd[1]: Stopped PVE API Proxy Server.
    Sep 10 06:27:45 lic2 systemd[1]: Stopping OpenBSD Secure Shell server...
    Sep 10 06:27:45 lic2 systemd[1]: Stopped target PVE Storage Target.
    Sep 10 06:27:45 lic2 systemd[1]: Stopped OpenBSD Secure...
  3. System Hang at 'Reached target Reboot'

    I have just had the same experience right now, with the same Supermicro AMD hardware type but on a different server. It took exactly 30 minutes to come back. I failed to access IPMI or attach a screen in time.
  4. ZFS, file or block level storage

    I see you did what I told you, and since rpool/data was already made, all you had to do was add it as a storage in the PM GUI. Hopefully you ticked the thin-provision box as well. :-) Now you are all good; tuning is a separate issue.
  5. ZFS, file or block level storage

    My 5 cents: create a new dataset and add it as a storage option (disk image, etc.) using the PM GUI. Then PM will create ZVOLs for your VMs in that dataset. Usually it is already created (rpool/data, called local-zfs), but I do not know what your provider did in its PM install recipe.
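    The workflow described above can also be sketched from the shell; the dataset name and storage ID below are illustrative, not taken from the thread:

    ```shell
    # Create a new dataset on the root pool (name is an example)
    zfs create rpool/vmdata

    # Register it as a Proxmox "zfspool" storage for disk images and
    # container roots, with thin provisioning (sparse zvols)
    pvesm add zfspool vmdata -pool rpool/vmdata -content images,rootdir -sparse 1
    ```

    After this, PM creates a zvol under rpool/vmdata for each VM disk placed on that storage, just as it does for the default local-zfs.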
  6. Pve-zsync - out of space error. But space is enough.

    What about space at the pool level? Check:
    zpool list
    zfs list -t all
    Also take a look at this:
    zfs list -o name,quota,refquota,reservation,refreservation,volsize,used,available,referenced
    Try removing quotas and reservations..
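    The checks above, plus one hedged way to clear quotas and reservations; the dataset name rpool/data is an example, substitute the affected one:

    ```shell
    # Pool-level free space vs. dataset-level accounting
    zpool list
    zfs list -t all

    # Spot quotas/reservations that can report "out of space"
    # while the pool itself still has room
    zfs list -o name,quota,refquota,reservation,refreservation,volsize,used,available,referenced

    # Clear them on the affected dataset (name is illustrative)
    zfs set quota=none rpool/data
    zfs set refquota=none rpool/data
    zfs set reservation=none rpool/data
    zfs set refreservation=none rpool/data
    ```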
  7. PM 6.* ZFS pools with disk by id, why?

    I guess we should open a feature request to fix this documentation, and also to make the HTTPS GUI's ZFS pool creation reference devices by-id. Does anyone from the Proxmox team want to do it?
  8. very slow sql request on promox

    If you have already set your CPU type to "host", you can use the EXPLAIN SQL statement to see in detail which parts take more time on PM 6.*. Then you can continue investigating in that direction.
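    A minimal sketch of the EXPLAIN suggestion, assuming a MySQL/MariaDB guest; the database, table, and column names are made up for illustration:

    ```shell
    # Show the query plan for the slow statement
    # ("mydb", "orders", "customer_id" are placeholders)
    mysql mydb -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42;"
    ```

    A full table scan in the plan (no usable index) would point at the query rather than the hypervisor.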
  9. PM 6.* ZFS pools with disk by id, why?

    Hi guys, is there any reason (except for races where /dev/sd* device names change) why the PM 6 installer switched from /dev/sdX to /dev/disk/by-id/* names when creating rpool? I guess documentation like the one here: https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks should be updated...
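    For reference, a quick way to compare the two naming schemes on a host; this is a sketch, the actual output depends on the hardware:

    ```shell
    # Stable by-id names and the /dev/sdX devices they currently resolve to
    ls -l /dev/disk/by-id/

    # Which names the pool was created with
    zpool status rpool
    ```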
  10. Enable PM HTTPS GUI on another interface / network (easiest way?)

    Hi @guletz, that is a good idea. And you even gave me an example config. Such a nice guy you are. Tnx!
  11. Enable PM HTTPS GUI on another interface / network (easiest way?)

    Hi, I have a PM cluster on a private network. While I can enable access using DNAT, or a VPN via a VM running on this cluster (or another one) that can reach the PM private network, I still wonder: what is the easiest way to make the HTTPS GUI available on another (WAN) interface? Maybe I can just define...
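    The DNAT option mentioned above could look roughly like this with iptables; the interface names and addresses are assumptions for the sketch (8006 is the standard PVE GUI port):

    ```shell
    # Forward TCP 8006 arriving on the WAN interface to a PVE node
    # on the private network (addresses/interfaces are examples)
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8006 \
      -j DNAT --to-destination 10.0.0.10:8006

    # Ensure return traffic goes back through this gateway
    iptables -t nat -A POSTROUTING -o vmbr1 -p tcp -d 10.0.0.10 --dport 8006 \
      -j MASQUERADE
    ```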
  12. Replication timers with ZFS in regards to reading data from ARC

    Hi guys, I have a few VMs on PM 5 with replication set up using ZFS. I used to set */30 or more for the replication schedule. Just now I set it to */1 for every VM because, counter-intuitively, it might put less load on the source disks. The reasoning is that if we sync often, then the data to be...
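    The schedule change described above can also be done from the CLI with pvesr; the job ID and target node name below are examples:

    ```shell
    # Create a replication job for VM 100 to node "nodeB", running every minute
    pvesr create-local-job 100-0 nodeB --schedule '*/1'

    # Later, relax it back to every 30 minutes
    pvesr update 100-0 --schedule '*/30'

    # List configured jobs and their status
    pvesr list
    pvesr status
    ```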
  13. System Hang at 'Reached target Reboot'

    Hi, FYI, I have had the same experience with PM 5 after fully updating it a month or two ago. I rebooted it remotely and it did not come up. I drove to the office (1 a.m. - the life of sysadmins :-( ). Just as I unlocked the office door, around the 30-minute mark, the monitoring system sent...
  14. High (100%) ZVOL CPU usage when doing VM import from backup.

    Hmm... I don't think I understand correctly. Do you think the high CPU usage with ZVOL on restore is due to the fact that the data disk of the ZFS VM is an actual zvol? And that I should change the data disk of this NFS VM from a ZVOL to a RAW file on a ZFS dataset? When I have some time and I put additional servers...
  15. High (100%) ZVOL CPU usage when doing VM import from backup.

    So the host is:
    24 x Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz (2 sockets)
    62.90 GiB RAM
    Linux 4.15.18-16-pve #1 SMP PVE 4.15.18-41 (Tue, 18 Jun 2019 07:36:54 +0200)
    pve-manager/5.4-7/fc10404a
    2 x Intel DC S3510 Series 1.6TB
    NAME STATE READ WRITE CKSUM
    rpool ONLINE...
  16. High (100%) ZVOL CPU usage when doing VM import from backup.

    Hi, I did some testing yesterday with PM 5 to see how fast I can import some big VMs from PM 4. So I set up an NFS VM in the new cluster on the node where I will import the VMs. I defined it as "exportVM" storage on the PM 4 and PM 5 clusters concurrently. I exported the VMs (backup via the GUI) from PM 4 to the NFS VM...
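    Defining the same NFS storage on both clusters, as described above, could look like this on each side; the storage ID matches the post, but the server address and export path are examples:

    ```shell
    # Register the NFS share served by the helper VM as backup storage
    pvesm add nfs exportVM -server 192.168.1.50 -export /srv/export -content backup
    ```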
  17. Unable to stop container, forced to reboot node manually

    FYI, I experience the exact same symptoms (it never stops) with LXC and NFS network mounts. I decided not to investigate further, because there is a simple solution and it is just my home server. So just run "ps faxuw" in the console and look for a process name with the VM ID in it. Usually it is just one or...
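    A sketch of the "ps faxuw" hunt described above; the container ID 101 is an example:

    ```shell
    # List the process tree and keep lines mentioning the CT/VM ID
    ps faxuw | grep -w 101 | grep -v grep

    # Then kill the stuck process by its PID (second column);
    # left commented so this sketch is safe to paste
    # kill -9 <PID>
    ```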
  18. BUG: (Tagged) vlans do not work with (bridge and) Mellanox Technologies MT26448 10g

    FYI, if one disables VLAN support on the bridge, then it starts working again. Makes sense, right? :-) More details on how to make it work are in the previously linked bug thread.
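    For reference, the non-VLAN-aware bridge from that workaround would look roughly like this in /etc/network/interfaces; the interface names are examples, not taken from the thread:

    ```
    auto vmbr0
    iface vmbr0 inet manual
            bridge-ports enp3s0
            bridge-stp off
            bridge-fd 0
            # VLAN awareness disabled as the workaround for the MT26448
            bridge-vlan-aware no
    ```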
  19. BUG: (Tagged) vlans do not work with (bridge and) Mellanox Technologies MT26448 10g

    Hi guys, as reported here: https://bugzilla.proxmox.com/show_bug.cgi?id=2329 on ProxMox 5.* VMs do not get tagged traffic passed to them if the traffic comes in via an MT26448. Intel 10G cards work fine. On ProxMox 4.* the MT26448 works just fine. Any ideas or hints to solve this are welcome.
