Search results

  1. proxmox 4 beta 2 doesn't boot anymore

    My 3.4 ISO that I had on the remote KVM didn't seem to have a rescue option, but 5.1 does. For some reason, on this particular server (Supermicro X7DB8) it just hangs on a black screen and never boots, so I had to resort to an Ubuntu live image, which did work.
  2. Where is boot/grub in zfs root for grub rescue?

    Yeah, boot/grub is just not there, for whatever reason. Bizarre. This was a PVE 3.4 system with ZFS RAID 10 (12 drives now, I think; I've expanded it, and it was originally 4 at install time). I booted an Ubuntu 16.04 desktop USB into the live session and installed the ZFS packages (apt-get install...
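    A minimal sketch of inspecting the pool from such a live session, assuming the default pool name rpool used by the PVE installer:

        # import the pool under an alternate root so nothing clobbers the live system
        zpool import -f -R /mnt rpool
        ls /mnt/boot/grub            # check whether grub's files actually exist
        zfs list -o name,mountpoint  # see which datasets the pool contains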
  3. proxmox 4 beta 2 doesn't boot anymore

    To save people time in case they hit this thread while searching for grub rescue> + ZFS problems like me: Ubuntu 16.04 desktop has a live boot ('Try Ubuntu') that will do ZFS, provided you can get the machine online to apt-get update && apt-get install zfsutils-linux.
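    The full live-session sequence, sketched out (the package name is the one given above; the modprobe step is usually implicit):

        # from the 'Try Ubuntu' session, with networking up
        sudo apt-get update
        sudo apt-get install -y zfsutils-linux
        sudo modprobe zfs       # load the kernel module if it isn't already
        sudo zpool import       # with no arguments, lists importable pools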
  4. Where is boot/grub in zfs root for grub rescue?

    Read lots of threads about grub rescue, but short of mounting a live ISO to boot from (can't right now, the remote KVM is half busted), we're stuck at grub rescue>. However, looking at (hd0) through (hd7) (the maximum number of drives presented to the BIOS by the JBOD controller), I can see...
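    For anyone else stuck at this prompt, the usual probing sequence looks roughly like this (a generic sketch; with a ZFS root it only succeeds if grub's zfs module and /boot/grub are actually reachable):

        grub rescue> ls                     # list the drives and partitions grub can see
        grub rescue> ls (hd0,gpt2)/         # probe a partition for a readable filesystem
        grub rescue> set prefix=(hd0,gpt2)/boot/grub
        grub rescue> set root=(hd0,gpt2)
        grub rescue> insmod normal
        grub rescue> normal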
  5. [SOLVED] pct start failed after apparmor update

    To be clear, it's the upgraded kernel from that sources list that works, not that the sources list has an old copy of apparmor + libraries to downgrade to. You must add that line, then apt-get update && apt-get dist-upgrade && shutdown -r now to reboot into the new kernel. Thanks.
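    The same sequence spelled out, with a verification step added (the repository line itself is given earlier in the thread and is not reproduced here):

        apt-get update
        apt-get dist-upgrade     # pulls in the fixed pve-kernel
        shutdown -r now          # the fix only takes effect on the new kernel
        # after the reboot:
        uname -r                 # confirm the upgraded kernel is actually running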
  6. PVE Cluster fails every time

    Bump... stuck in this situation, even the web console won't come up; the server is basically functionally wedged for PVE operations.
  7. PVE Cluster fails every time

    Lol, and I just forgot about this and did it to myself again. Anyone figure out the way to back out and restart? Though the multicast querier is ON on the master/first node, so I'm confused why it times out waiting for quorum. Would having the querier on both nodes cause the issue? It seemed to be on -...
  8. Devuan aka Debian8 on SysVInit

    That's OK, SystemD is great: SystemD defaults to 8.8.8.8 for DNS. Ridiculous. Among dozens of other massive issues with it. https://twitter.com/jpmens/status/873878528844017664 And of course https://ewontfix.com/14/ http://without-systemd.org/wiki/index.php/Arguments_against_systemd
  9. ZFS with SSDs: Am I asking for a headache in the near future?

    Yes, you can power down the machine and export and re-import them too. (My method breaks the mirror briefly and re-attaches instead.) Neither method should interfere with the ability to (re)boot the system, of course. BTW, here's a Linux live CD with ZFS support...
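    The export/re-import variant, sketched under the assumption of the default pool name rpool (a root pool has to be exported from a live environment, since it can't be released while it is in use):

        zpool export rpool
        zpool import -d /dev/disk/by-id rpool   # re-import, scanning by-id paths
        zpool status rpool                      # devices should now show by-id names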
  10. ZFS with SSDs: Am I asking for a headache in the near future?

    If you've created by names instead of IDs, can you not then break your mirror (detach sdb3, for example), then zpool attach rpool sda3 /dev/disk/by-id/ata-(id for your disk)-part3, and then once resilvered do that for the first disk as well? Does that not cause an issue with rebooting? (i.e. does grub...
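    That detach/attach flow, one leg at a time (a sketch; ata-<DISK_ID>-part3 stands in for your drive's actual by-id path, and the pool runs unmirrored during each resilver):

        zpool detach rpool sdb3
        zpool attach rpool sda3 /dev/disk/by-id/ata-<DISK_ID>-part3
        zpool status rpool      # wait for the resilver to finish before redoing the other leg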
  11. PVE Cluster fails every time

    I figured I was going to reinstall both nodes anyway, so I started messing around with the /etc/pve/priv auth keys and known_hosts files, as well as removing corosync.conf on the node and re-creating with pvecm. A few reboots, manually pruning out authkeys, and using pvecm add -f to force the add, and I got...
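    A rough reconstruction of that recovery sequence (destructive; only for a node you were going to rejoin or reinstall anyway, and the service names assume PVE 4.x):

        systemctl stop pve-cluster corosync
        pmxcfs -l                      # start the cluster filesystem in local mode
        rm /etc/pve/corosync.conf      # drop the stale cluster config
        killall pmxcfs
        systemctl start pve-cluster
        pvecm add <master-ip> -f       # force-join the cluster again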
  12. Unable to add a previous node to a cluster

    Might be related to no multicast traffic, which is required by corosync. Check your switch. I have the same issue still, though, even while I can see multicast UDP (IP addresses in 224.0.0.0/4).
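    Two quick multicast sanity checks between nodes (omping is the tool the PVE docs suggest for this; vmbr0 is assumed as the bridge name):

        omping -c 600 -i 1 -q node1 node2     # sustained multicast test, run on both nodes
        tcpdump -ni vmbr0 'net 224.0.0.0/4'   # watch raw multicast traffic on the bridge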
  13. PVE Cluster fails every time

    Bump. Same issue. It's a pretty deadly failure mode that briefly losing multicast results in a total shutdown of the web interface and requires a reinstall. I don't know for certain that multicast was blocked, but I modified something that may have allowed it. I now see multicast traffic from the master on...
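    One commonly suggested knob for the querier question raised in these threads (an assumption, not confirmed here; vmbr0 is the default PVE bridge name):

        # enable the IGMP querier on the bridge so snooping switches keep forwarding multicast
        echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
        cat /sys/class/net/vmbr0/bridge/multicast_snooping   # check whether snooping is active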
  14. installing qemu-utils in 4.4 removes most of pve.

    Was trying to convert a qcow2 and accidentally followed old instructions that suggested installing qemu-utils - the question is, why can't I reinstall the removed packages now? I removed qemu-utils and reinstalled, but got the same failure as above.
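    One recovery path worth trying (an assumption, not confirmed in the thread): the proxmox-ve metapackage should pull the removed PVE packages back in.

        apt-get update
        apt-get install proxmox-ve
        pveversion -v            # verify the PVE package stack is complete again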
  15. installing qemu-utils in 4.4 removes most of pve.

    I accidentally didn't pay enough attention while trying to get qemu-utils onto the server. TL;DR: DON'T INSTALL QEMU-UTILS in 4.4. I installed PVE from proxmox-ve_4.4-eb2d6f1e-2.iso.
        # pveversion
        pve-manager/4.4-5/c43015a5 (running kernel: 4.4.19-1-pve)
        # apt-get install qemu-utils
        Reading package...
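    Note that PVE ships qemu-img as part of its own QEMU packages, so installing Debian's qemu-utils is usually unnecessary in the first place (filenames below are placeholders):

        which qemu-img                                      # already present on a stock PVE install
        qemu-img convert -p -O raw input.qcow2 output.raw   # example qcow2-to-raw conversion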
  16. Vmware Migration to Proxmox 4.2

    This page refers to Proxmox 2.0 and describes ZFS in 3.4 as being in a "beta state". Not so useful. Wondering how to install an existing qcow2 VM image on Proxmox 4.3.
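    One way to bring an existing qcow2 into PVE 4.x (a sketch; VM ID 100 and local directory storage are assumptions): create an empty VM in the GUI first, then drop the image into its directory and rescan.

        cp existing.qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2
        qm rescan               # PVE picks up the image and lists it as an unused disk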
  17. CVE-2016-5195 Dirty COW

    To make things easier for all: https://pve.proxmox.com/wiki/Package_Repositories - the PVE 3.x test repo for wheezy is listed lower down, and the pve-kernel package has today's date.
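    The upgrade path that implies, sketched out (the repository line follows the standard PVE 3.x layout as documented on that wiki page; the exact kernel package version is left as a placeholder):

        # /etc/apt/sources.list.d/pvetest.list
        deb http://download.proxmox.com/debian wheezy pvetest

        apt-get update
        apt-get install pve-kernel-<version>   # pick the freshly dated kernel package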
  18. var/lib/vz/private/$CTID is deleted but CT is operational

    I see for some of my containers that /var/lib/vz/private/$CTID is deleted, but the containers are operational since their mounts exist in /var/lib/vz/root. The dirs under private/ are empty (but exist).
        $ grep 114 /proc/mounts
        /var/lib/vz/private/114 /var/lib/vz/root/114 simfs rw,relatime 0 0...
  19. container can't see free disk space

    This is still not solved: anyone running internal monitoring services in the CT will not be able to monitor their disk space without direct cooperation/exposure from the host. Is there any workaround for this?
  20. veth TCP Checksum bug

    This also affects Google's containers and is related to the veth interface structure. So this doesn't affect PVE at all? Consider this: "Our patch has been reviewed and accepted into the kernel, and is currently being backported to -stable releases back to 3.14 in different distributions (such as...
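    A commonly cited interim mitigation while waiting for the backport (an assumption, not taken from this thread; vethXYZ is a placeholder for the container's host-side veth device):

        ethtool -K vethXYZ tx off               # disable TX checksum offload on the veth
        ethtool -k vethXYZ | grep checksum      # confirm the offload state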