Hey all,
I was going through the CPU types and came across the "max" type. What exactly is it? I can't seem to find much information on it. However, it does work for an OS that the other types didn't.
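(From what I can tell, "max" is QEMU's catch-all model: it enables every CPU feature the host's QEMU/KVM combination supports, which would explain a picky guest OS booting with it when named models fail. A quick sketch for trying it; the VMID 100 is a placeholder:)

    # Hypothetical VMID; switch the VM's CPU model to QEMU's "max"
    qm set 100 --cpu max
    # List the CPU models this QEMU build knows about, "max" included
    qemu-system-x86_64 -cpu help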
I have a bridge set up with one 10G interface, attached to a single VM. I see about 3% packet loss (SmokePing) that I can only seem to attribute to the bridge itself. Running ip -c -details -statistics addr show vmbr4 shows 72910270 dropped packets out of 2272569168 total. Which falls perfectly...
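(For anyone checking the same thing, a sketch of pulling the counters and turning them into a percentage, using vmbr4 as above:)

    # Per-interface counters, including RX/TX drops
    ip -s link show vmbr4
    # Drop percentage from the figures quoted above
    echo "scale=2; 72910270 * 100 / 2272569168" | bc   # ~3.2%, matching SmokePing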
I am trying to set up IPv6 on a few VMs. I set up a VM with an interface on vmbr1 with no VLAN tag and no firewall. When I try to ping the link-local address of vmbr1, it times out. I also tried to ping something else on that VLAN, same thing: timeout.
If I set the VLAN tag to something other...
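(One thing worth ruling out with link-local pings: fe80:: addresses are scoped per interface, so the ping has to name the outgoing interface or it can time out even on a healthy network. A sketch, with the address and guest interface ens18 as placeholders:)

    # Link-local targets need a zone/interface qualifier
    ping -6 fe80::1%ens18
    # equivalently, bind the source interface explicitly
    ping -6 -I ens18 fe80::1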
No, not as far as I can tell. It is a fresh Debian instance; there does not seem to be any network manager.
I tried running cloud-init clean -r, but that still did not seem to do anything.
Edit: After digging deeper into how cloud-init works, I found the file...
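(For anyone hitting the same thing, a sketch of forcing cloud-init to start over; which file it renders the network config into varies by distro, so I won't guess at the one found above:)

    # Wipe cloud-init state and logs, then reboot so it reruns from scratch
    cloud-init clean --logs --reboot
    # After boot, confirm it actually ran again
    cloud-init status --long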
What I ended up doing is setting the primary affinity to 1 for all the SSDs and 0 for all the HDDs, then creating a pool with a simple host replication rule, a pool size of 3, and a min_size of 2. That doesn't 100% guarantee that a secondary is written to an HDD, but it does make sure the primary is...
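(Roughly the commands involved, assuming OSDs 0-2 are the SSDs and 3-7 the HDDs, with a pool named vmpool; adjust the IDs to your layout:)

    # Primary affinity: 1 = eligible to be primary, 0 = never primary
    for osd in 0 1 2; do ceph osd primary-affinity osd.$osd 1; done
    for osd in 3 4 5 6 7; do ceph osd primary-affinity osd.$osd 0; done
    # Plain replicated pool at host level, size 3 / min_size 2
    ceph osd pool set vmpool size 3
    ceph osd pool set vmpool min_size 2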
Alright, I am switching from a 2-node ZFS "cluster" to a proper 3-node Ceph cluster. I am trying to figure out the best use of the hardware I have. I have 3 identical servers; each has 1 500GB NVMe, 3 2TB SSDs, and 5 2TB HDDs. I have seen that you can have the primary drive be an SSD with a...
All,
I'm sure there is a fairly simple solution to this, but I set a VM up with cloud-init to have a DHCP address initially, then changed it to static after the fact. The static address is given to the VM, but it also pulls a DHCP address. How would one force it to only pull the static...
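(In case it helps, a sketch of what I'd check first, with the VMID and addresses as placeholders; on recent PVE versions the cloud-init drive also needs regenerating before the guest sees the change:)

    # Replace the DHCP setting with a static one on net0
    qm set 100 --ipconfig0 ip=192.0.2.10/24,gw=192.0.2.1
    # Regenerate the cloud-init drive so the guest picks it up
    qm cloudinit update 100

If the guest already rendered a DHCP network config, it may keep requesting a lease until cloud-init is cleaned and rerun inside the VM.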
I am trying to shrink the size of a drive that I accidentally used the defaults for. I tried following this guide, but I always end up back at the same size I started with. It is in a cluster and using cloud-init; however, I can't see why that would matter.
Any help is much appreciated.
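(For anyone else landing here: qm resize can only grow a disk, which might be why the guide kept ending up at the original size. A sketch of the usual offline route for a qcow2 file, with path and size as placeholders, assuming the filesystem and partitions inside the guest have already been shrunk first:)

    # DANGER: shrink the guest's filesystem/partitions before this step
    qemu-img resize --shrink /var/lib/vz/images/100/vm-100-disk-0.qcow2 20G
    # Let Proxmox re-read volume sizes into the VM config
    qm rescan --vmid 100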
I found something interesting on a hunch: when trying to SSH from the problem machine to the other, the SSH host key had changed for the remote host. No idea how that would have happened, so I have some investigating to do. That would explain the broken pipe message.
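(For reference, clearing the stale entry and inspecting the new key, with the hostname as a placeholder:)

    # Drop the old key for the remote host from known_hosts
    ssh-keygen -R host02
    # Show the key(s) the host presents now, for comparison
    ssh-keyscan host02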
So I fixed the drive problem and deleted the VM images on the second machine. Everything seems to run through once, then I get the error
What am I missing?
I've run iperf tests and am right at my 10Gb speed with minimal errors, but to eliminate this as a potential problem I switched the...
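(The sort of test I mean, using iperf3, with host02 standing in for the far node:)

    # On the receiving node
    iperf3 -s
    # On the sending node: 4 parallel streams for 30 seconds
    iperf3 -c host02 -P 4 -t 30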
Interestingly, I was in the middle of doing that already. I was seeing some errors on one drive and decided to reinitialize it through the IPMI and reboot. When the machine came back up, I found that the problem drive had completely dropped offline and was no longer detectable even by the IPMI /...
Oh, like ZFS is overloaded... gotcha. The timeout is on 2 particular VMs; the rest give the 255 error. Is that something different? All the VMs are able to both read and write to the disk without problems.
I recently started getting replication failures. I have 2 servers and a monitor node set up in a sort of poor man's HA; the VMs replicate back and forth every 15 minutes. From host02 to host01 they work fine; however, from host01 to host02 most of the VMs get: command 'set -o pipefail && pvesm...
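(A sketch of the commands I use to dig into the jobs; the job ID format is <vmid>-<job-number>, 100-0 here being an example:)

    # Overview of all replication jobs and their last result
    pvesr status
    # Run a single job by hand with verbose output
    pvesr run --id 100-0 --verbose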
All, I had a node fail in our HA cluster. When the VMs transferred over to another node, the networking failed. After manually rebooting each VM, the networking was up and operational. I am not using any advanced Proxmox features like the firewall or anything. The bridges have the same ID and...
Wait, is there a way to activate it through the GUI? I've always called it through the CLI.
When I run status I always get the same few lines, even in that occurrence.
No, there was nothing in the journal either. I didn't check other files, but I can go back in the log to when it was working and find all the output.
I ended up restarting the servers (which was obviously not ideal), and that did it. I'm not sure what got locked up, but the restart fixed it.
First, if this is too far off topic, let me know / move / delete this.
I've got a problem that may or may not be directly related to Proxmox: I use Net-SNMP (snmpd) to monitor my servers (Dell R720s), and today SNMP stopped responding to requests on both of them. I believe this was around the same time...
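(For context, the basic checks I ran when it stopped answering; the community string and OID here are just the common defaults:)

    # Is the daemon still up, and what did it last log?
    systemctl status snmpd
    journalctl -u snmpd --since today
    # Does it answer locally at all?
    snmpwalk -v2c -c public localhost system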
The OSDs are not being marked as down for some reason. I think there is a setting for the number of reporters needed for an OSD to be marked down. I am going to heed your advice and go with ZFS for production, but am determined to get this to work in testing now. I'll have to work around the non-live...
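(The setting I had in mind is likely mon_osd_min_down_reporters, which controls how many peers must report an OSD dead before the monitors mark it down; a sketch of loosening it for a small test cluster:)

    # Default is 2; on a tiny test cluster a single reporter may have to do
    ceph config set mon mon_osd_min_down_reporters 1
    # Confirm the value took
    ceph config get mon mon_osd_min_down_reporters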