Search results

  1. VMWare --> Proxmox Booting, but not all the way

    Looks like it's having a problem with the 0a1f disk, which is the OS partition "/". Not sure if that would help because I'm attempting to use a specific grub/kernel to boot with. Maybe there is a way I can see the full error? I'm not entirely sure how to do that, as it's cut off with the KVM...
  2. VMWare --> Proxmox Booting, but not all the way

    Thanks for the quick reply! I am not able to boot into the system. Is it possible for me to use a live CD of some kind to edit this and check it out or fix it? EDIT: Ended up booting up via an Ubuntu Live CD and here is the output of the fstab...all of the UUIDs are the same as listed here...
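The UUID cross-check described above can be sketched as follows. This is a hedged example: the fstab content below is sample data standing in for the one recovered from the Live CD, and the UUIDs are placeholders, not the poster's real ones.

```shell
# Sample fstab standing in for the one pulled off the broken system.
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-ABCD /     xfs defaults 0 0
UUID=5678-EF01 /boot xfs defaults 0 0
EOF

# List the UUIDs the system expects at boot time...
awk -F'[= ]' '/^UUID/ {print $2}' /tmp/fstab.sample

# ...and on the real hardware, compare them with what the disks report:
# blkid -s UUID -o value /dev/sda1
```

If the two lists disagree, updating /etc/fstab (or the grub root= argument) to the UUIDs that blkid reports is the usual fix.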
  3. VMWare --> Proxmox Booting, but not all the way

    I have a multi-disk VM running RH7 that I'm attempting to move over from VMware Fusion. I have migrated all the files over and converted them from ovf --> vmdk --> .raw files. It is two separate disks. I am able to get the VM to boot up but am getting errors:
    dracut-initqueue[283]: Warning...
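The vmdk --> raw step of the conversion chain above can be sketched with qemu-img. The file names here are placeholders (one invocation per disk of the multi-disk VM):

```shell
# Convert each exported VMware disk to a raw image Proxmox can import.
# -p shows progress; -f/-O give the source and target formats explicitly.
qemu-img convert -p -f vmdk -O raw disk1.vmdk disk1.raw
qemu-img convert -p -f vmdk -O raw disk2.vmdk disk2.raw
```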
  4. Hi mir, do you know if there has been any movement on the ZFS over iSCSI plugin for FreeNAS?

    Hi mir, do you know if there has been any movement on the ZFS over iSCSI plugin for FreeNAS?
  5. Node with question mark

    I'm having issues with this constantly now, not just when backing up. I'm really not able to find any possibilities as to why this is happening. Even simple poweroffs of CTs cause this to happen now. Is there anyone on the Proxmox team who would be able to help us with this?!
  6. Node with question mark

    I've been having this issue as well...whenever I initiate a backup the system faults and is thrown into this state. There really isn't anything in the logs to go on. Restarting services doesn't seem to fix the issue either. My backup is being sent to an NFS share, but it NEVER had issues...
  7. LXC Container stuck on startup, hangs pveproxy

    I'm seeing something similar to this, where all of my LXCs and the node itself show a "?", but the containers are mostly still running. This is after trying to restart one LXC.
    root@pve:~# ps aux | grep pmxcfs
    root 1955 0.1 0.1 710848 43664 ? Ssl Jan29 7:15 /usr/bin/pmxcfs
    root...
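A hedged sketch of the usual first checks when a node and its guests show "?" in the GUI, assuming the standard PVE service unit names (as the posts above note, restarting services does not always resolve it, but it narrows down where the fault is):

```shell
# Is the cluster filesystem process alive, and is /etc/pve mounted?
ps aux | grep '[p]mxcfs'
mount | grep /etc/pve

# Restart the services that feed status to the GUI.
systemctl restart pvestatd pveproxy pvedaemon
```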
  8. Ceph Performance Tuning?

    Thank you for the reply and information. I'll look into it. I might be able to get away with the performance as I'm only doing this for homelab stuff. Do you know if the ceph read/write performance will really impact actual speed of the running LXCs/VMs? Or is it only the replicating to the...
  9. Ceph Performance Tuning?

    I am using Samsung 850 Pros. It is definitely not an ideal setup, but I think I should be getting better throughput than that. You can see I'm able to get good speeds with sequential.
  10. Ceph Performance Tuning?

    I am running 3 nodes, all with 10GbE networking. Only one OSD per machine right now, but the OSDs are SSDs. The filesystem is also on SSDs; benchmarking them has them at around 450 MB/s.
  11. Ceph Performance Tuning?

    Need help tweaking the write performance...hope it might just be my settings. I'm currently getting these performance numbers:
    root@b:~# rados -p ceph bench 60 write --no-cleanup
    hints = 1
    Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0...
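The benchmark above can be paired with a read pass and a cleanup. A minimal sketch, using the pool name "ceph" taken from the post:

```shell
# Write benchmark; --no-cleanup keeps the objects so reads can be tested.
rados -p ceph bench 60 write --no-cleanup
# Sequential read benchmark against the objects written above.
rados -p ceph bench 60 seq
# Remove the leftover benchmark objects when done.
rados -p ceph cleanup
```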
  12. Ceph OSD Formatting help

    I got it working completely...just need help tweaking the write performance. Hope it might just be my settings. I'm currently getting these performance numbers:
    root@b:~# rados -p ceph bench 60 write --no-cleanup
    hints = 1
    Maintaining 16 concurrent writes of 4194304 bytes to objects of size...
  13. Ceph OSD Formatting help

    SCRATCH THAT! sgdisk -Z fixed the issue this time! Going to play around with this a bit more...
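For reference, the fix mentioned above amounts to zapping the old partition structures before retrying OSD creation. /dev/sdX is a placeholder, and the first command is destructive:

```shell
# DESTRUCTIVE: wipes the GPT and MBR structures on the target disk.
sgdisk -Z /dev/sdX
# Retry OSD creation (PVE 5.x syntax).
pveceph createosd /dev/sdX
```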
  14. Ceph OSD Formatting help

    So, on my A & C machines I was able to createosd just fine:
    root@a:~# fdisk -l
    Disk /dev/sda: 233.8 GiB, 251000193024 bytes, 490234752 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk...
  15. Ceph OSD Formatting help

    Here are the commands that I ran so far. I've gotten one re-installed; I will attempt the others now and then install ceph. I believe the zpool command can't be found yet because I haven't installed ceph.
    root@b:~# zpool status
    bash: zpool: command not found
    root@b:~# parted -l
    Model: ATA...
  16. Ceph OSD Formatting help

    The "c" is the hostname of the node. I have 3 nodes: a.mini, b.mini, c.mini. Only the first letter shows at the command prompt. Installed on Mac Minis.
  17. Ceph OSD Formatting help

    Looks like upon trying to change the partitions and such I blew out my installs =/ whoops. I'll post here when I get them rebuilt. Essentially what I'm trying to do is use Mac Minis that are in a case which has PCIe expanders and USB extensions. I have a 10GbE card in each machine and have...
  18. Ceph OSD Formatting help

    No, that is with the new .js shell
  19. Ceph OSD Formatting help

    root@c:~# pveversion -v
    proxmox-ve: 5.1-35 (running kernel: 4.13.13-4-pve)
    pve-manager: 5.1-42 (running version: 5.1-42/724a6cb3)
    pve-kernel-4.13.13-4-pve: 4.13.13-35
    libpve-http-server-perl: 2.0-8
    lvm2: 2.02.168-pve6
    corosync: 2.4.2-pve3
    libqb0: 1.0.1-1
    pve-cluster: 5.0-19
    qemu-server: 5.0-19...
