Ran it for 1.5 years and then stepped away from it. Besides the fact that as soon as your host goes down or you do maintenance on it, the network becomes kind of useless (especially if you have multiple segmented nets), the other thing to keep in mind is to pass through physical NICs. Using just the vNICs can potentially lead to security risks. That's the reason I went back to physical firewalls.
The other thing to keep in mind is to pass through physical NICs. Using just the vNICs can potentially lead to security risks. That's the reason I went back to physical firewalls.
I could throw an extra NIC in the server and pass it through, but what are the security risks of using the virtualized NICs? I'm just using virtio to share a dedicated bridge adapter with the router VM.
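(For reference, assuming libvirt/KVM here, that attachment is just this in the router VM's definition; the bridge name is an example:)

```
<!-- router VM NIC: virtio model, attached to a bridge that no other VM uses -->
<interface type='bridge'>
  <source bridge='br-router'/>
  <model type='virtio'/>
</interface>
```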
It works great as long as you have a method to access the server directly when the router machine is down. A laptop set to a static IP on the same subnet will let you access the host when you b0rk something; keep a backup config on that machine. It's pretty great though. Just remember pfSense won't support more than 7 external interfaces when you start getting crazy with VLANs.
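Setting a temporary static IP on the recovery laptop is a one-liner on Linux (the interface name and addressing below are examples; match your management subnet):

```
# assumes the hypervisor's management interface is reachable on 192.168.1.0/24
sudo ip addr add 192.168.1.50/24 dev eth0
ssh root@192.168.1.10   # talk to the host directly, no router needed
```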
Even if the virtualized router is down, I'll still have access to the physical server over the network until the DHCP lease expires. The switch does the work of delivering my packets on the LAN, not the router.
Thanks for the tip about the pfSense limit. After running pfSense for like 8 years, my opinion is that it's flush with features but overall, it's trash. Nobody, not even Netgate, understands how to configure limiters, queues, and QoS properly. The official documentation and all the guides on the internet are contradictory and wrong. I did loads of testing and it worked somewhat, but never as well as it should have on paper (i.e. I got ping spikes if I ran a bandwidth test simultaneously, which shouldn't happen). I don't necessarily think OpenWRT is any better, but I know the Linux kernel has multithreaded PPPoE and I expect some modern basics like SQM to work properly in it.
Even if the virtualized router is down, I'll still have access to the physical server over the network until the DHCP lease expires. The switch does the work of delivering my packets on the LAN, not the router.
Yes, of course it depends on your network topology. If you have a link in the same subnet you're good (and can configure a static IP if need be). But if you're using VLANs you can get in a pickle if the router is down. In my setup everything on the user side is segregated, so if the router goes down I have to take a dedicated management laptop and plug it into the host management network directly, on the management switch where I keep a port empty. This maintains segregation and in practice means I take my ancient Acer Aspire One, used for nothing else, into the server room that looks strangely like a laundry room, and plug it in.
I used the same approach at the family business for years without any major problems. Go for it.
I personally wouldn't do this. You want your network to be on dedicated hardware.
I've been doing it for probably 8 years now without any major issues related to being a VM. In fact, that made recovery extremely easy the two times my pfSense VM shot itself in the head. Just load the backup of the VM taken the day before and off to the races. After switching to OPNsense a couple years ago I haven't had a single issue.
These days I run two identically spec'd hypervisors that constantly sync all my VMs to each other over 10Gb NICs, so even a hardware failure won't take out my routing. That is something to consider if you don't have redundant hypervisors. Not really any different than if your physical router died, just something to plan for.
I would advise against it. Separation of concerns isn't important until it is. If your host server is unavailable for any reason, now EVERYTHING is unavailable. Having your server go down is bad. Being unable to browse the internet when your host is down and you're trying to figure out why is worse.
There are also risks involved in running your firewall on the same host as all your other VMs unless you add a lot of complex network configuration.
I appreciate the advice. I have like 3 spare routers I can swap in if the server fails, plus I have internet on my phone lol. It's a home environment, not mission critical. I'm glad you mentioned this though, as it made me realize I should have one of these routers configured and ready-to-go as a backup.
My logic is partly that I think a VM on an x86 server could potentially be more reliable than some random SBC like a Banana Pi because it'll be running a mainline kernel with common peripherals, plus I can have RAID and ECC, etc (better hardware). I just don't fully buy the "separation of concerns" argument because you can always use that against VMs, and the argument for VMs is cost effectiveness via better utilization of hardware. At home, it can also mean spending money on better hardware instead of redundant hardware (why do I need another Linux box?).
There are also risks involved in running your firewall on the same host as all your other VMs
I don't follow. It's isolated via a dedicated bridge adapter on the host, which is not shared with other VMs. Further, WAN traffic is also isolated by a VLAN, which only the router VM is configured for.
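To illustrate, the WAN side only ever sees the tagged traffic; in Debian-style /etc/network/interfaces terms it's roughly this (VLAN 10 and the interface names are made up):

```
# WAN arrives tagged as VLAN 10; only this subinterface is enslaved to the WAN bridge,
# and only the router VM gets a NIC on br-wan
auto enp3s0.10
iface enp3s0.10 inet manual

auto br-wan
iface br-wan inet manual
    bridge_ports enp3s0.10
    bridge_stp off
    bridge_fd 0
```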
I run OPNsense as a VM and have done so for maybe 5 years now, moved across 3 different sets of hardware.
I DO have a hardware router under the ONT for if/when I feck up Proxmox.
Snapshots are great when you start to play with the firewall settings or upgrades.
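On Proxmox, the snapshot-before-you-touch-anything workflow is just a couple of commands (VM ID 101 is a placeholder):

```
# take a snapshot of the firewall VM before an upgrade or risky config change
qm snapshot 101 pre-upgrade
# roll back if it goes sideways
qm rollback 101 pre-upgrade
```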
I did it for a few years, it looks interesting on paper, but in practice, it's a nightmare.
At home, you'll be getting real sick of asking for change windows to reboot your hypervisor.
At work, you will rue the day you convinced mgmt to let it happen, only to now have hypervisor weirdness to troubleshoot on top of chasing down BGP and TCP header issues. If it's a dedicated router, you can at least narrow the scope of possible problems.
Gotta disagree, for home use at least. I have found it to be the opposite of a nightmare.
Moving my home routing and firewall to a VM saved me hours, and hours, and hours of time in the long run. I have a pretty complex home network and firewall setup with multiple public IPs, multiple outbound gateways, and multiple inbound and outbound VPN setups for various purposes. I'm also one of those loons that does outbound firewalling with deny-by-default on my network, except the isolated guest VLAN. With a complex setup like that, being in a VM means it's so easy to tweak stuff safely and roll back if you mess something up or it just doesn't work the way you expected. It turns what would be a long outage rebuilding from scratch into a 30-second outage while you roll back the VM. And being able to snapshot your setup for backup is incredibly useful when your software doesn't behave properly (looking at you, pfSense).
All that said, I run redundant, synced hypervisors which takes care of a lot of the risk. A person who is not well versed in hypervisor management might not be a good fit for this setup, but if you have any kind of experience with VM management (or want to), I think it's the way to go.
For sure, if your thing is leaning into network configs, nothing wrong with it, especially if you have proper failover set up.
I think virtualized routing looks fun to the learning homelabber, and it is, but it does come with some caveats.
For home use, if used in an HA setup, the change window issue should disappear. Do you see any other issues that might crop up?
HA... Do you mean failover? It would need some consideration, either a second wan link or accepting that a few TCP sessions might reset after the cutover, even with state sync. But it's definitely doable.
I'm currently in a state of ramping down my hardware from a 1U dual Xeon to a more appropriate solution on less power-hungry gear, so I'm not as interested in setting up failover if it means adding to my power consumption simply for the uptime. After 25 years in IT, it's become clear to me that the solutions we put in place at work come with downsides like power consumption, noise, complexity and cost that aren't offset by any meaningful advantage at home.
All that said, I did run that setup for a few years and it does perform very well. The one advantage of having the router virtualized was being able to revert to a snapshot if an upgrade failed, which is a good case for virtualizing a router on its own.
Yea either failover or an active/active virtual switch… I’ve been toying with hyperconverged infrastructure and I wanted to bring my network infra into the fold, been looking at OVS. Not for any particular use case, just to learn how it works and I really like the concept of horizontally scaling out my entire infra just by plugging in another box of commodity hardware. Also been toying with a concept of automatically bootstrapping the whole thing.
OVS is fine, you can make live changes and something like spanning port traffic is a bit less hassle than using tc, but beyond that, it's not really an important component to a failover scenario over any other vswitch, since it has no idea what a TCP stream is.
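For anyone curious, the OVS basics (and the port-span case mentioned above) look roughly like this; bridge and port names are examples, and the mirror incantation follows the pattern from the ovs-vsctl man page:

```
# create a bridge and put a physical port plus a monitoring tap on it
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
ovs-vsctl add-port br0 tap-mon

# mirror traffic seen on eth0 out the tap-mon port (for packet captures)
ovs-vsctl -- set bridge br0 mirrors=@m \
  -- --id=@eth0 get port eth0 \
  -- --id=@out get port tap-mon \
  -- --id=@m create mirror name=span0 select-src-port=@eth0 select-dst-port=@eth0 output-port=@out
```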
I run OPNsense on a 2-node Proxmox cluster and have for a few years now. I have HA set up and have had it fail over gracefully when I've been away, and didn't even notice it had failed over for more than a week. If I want to upgrade it, I snapshot it, and if I upgrade the host I live-migrate it, and I've done all of this remotely more than a few times with no issues.
It takes some planning, and I'd say you'd want a cluster (at least a pair of nodes) where you can do HA. But I wouldn't do it any other way at this point. If you have only one port, you can VLAN it to use it for both LAN and WAN.
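On Proxmox, the HA and live-migration pieces boil down to something like this (VM ID 100 and the node name are placeholders):

```
# register the firewall VM with the HA manager so it restarts/relocates on node failure
ha-manager add vm:100 --state started
# live-migrate it off a node before doing maintenance there
qm migrate 100 pve2 --online
```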
That is pretty sweet. I have a second server I could use for an HA configuration of the router VM. I've been meaning to play around with live migrations (KVM) so this could be a cool use case for testing.
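With plain KVM/libvirt, a first live-migration test can be as simple as this, assuming shared (or pre-synced) storage and SSH access between the hosts; the guest and host names are examples:

```
# push a running guest to the second host without shutting it down
virsh migrate --live --persistent router-vm qemu+ssh://node2/system
```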
It works well. I have my docker hosts on HA as well because they're almost as important as the router.
If you just use 2 nodes, you will need a q-device to make quorum if you have one of the nodes down. I have the tiebreaker running on my Proxmox Backup Server shitbox i3.
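Setting up that external vote on Proxmox is roughly this (the IP is a placeholder; the tiebreaker box just needs the corosync-qnetd package, and the cluster nodes need corosync-qdevice):

```
# on the tiebreaker machine (e.g. the PBS box)
apt install corosync-qnetd
# on the cluster nodes
apt install corosync-qdevice
# then, from one cluster node
pvecm qdevice setup 192.168.1.5
pvecm status   # should now show an extra expected vote
```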
Proxmox is basically just Debian with KVM and a better virt-manager. And it deals with ZFS natively so you can build zpools, which is pretty much necessary if you want snapshotting and replication, which in turn are needed for HA.
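A minimal sketch of what that gives you (pool and dataset names are made up; Proxmox wraps the replication part up in its built-in storage replication so you don't have to script it yourself):

```
# mirrored pool for VM disks
zpool create tank mirror /dev/sda /dev/sdb
# point-in-time snapshot of a VM's disk dataset
zfs snapshot tank/vm-100-disk-0@nightly
# replicate it to the second hypervisor over SSH
zfs send tank/vm-100-disk-0@nightly | ssh pve2 zfs recv tank/vm-100-disk-0
```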
If you just use 2 nodes, you will need a q-device to make quorum if you have one of the nodes down
I could just use VRRP / keepalived instead, no?
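(For reference, a minimal keepalived VRRP instance looks something like this; the interface, router ID, and VIP are examples. The backup box gets the same block with state BACKUP and a lower priority.)

```
# /etc/keepalived/keepalived.conf on the primary
vrrp_instance LAN_GW {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        192.168.1.1/24
    }
}
```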
I should try Proxmox, thanks for the suggestion. I set up ZFS recently on my NAS and I regret not learning it earlier. I can see how the snapshotting would make managing VMs easier!
Proxmox uses a voting system to keep cluster integrity.
Check it out, it's free and does a lot of things out of the box that take a lot of manual work otherwise. And the backup server is stellar. It does take a while to wrap your head around the whole way it does things, but it's really powerful if you spend the time to deep dive it.
So 3+ hosts for clustering, or 2 hosts and a qdevice to fake it.
Yes. You can get by with just 2 devices, but you need to set expected_votes=1 in the cluster config somewhere (don't recall where), and I've encountered stability issues with that solution; it seems like it gets undone. Though I haven't used it in years, so I can't say if that's still the case.
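For the record, the knob lives in corosync: temporarily via pvecm on a surviving node, or persistently in the quorum section of corosync.conf (the two_node flag is corosync's supported way to express it):

```
# one-off, on the surviving node, to regain quorum with a single vote
pvecm expected 1

# persistent variant: quorum section of /etc/pve/corosync.conf
quorum {
    provider: corosync_votequorum
    two_node: 1
}
```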
The q-device can run on anything Linux that's still available when the second node is down. Not having the tie-breaker isn't the end of the world; it just means you have to go in after you bring up the second node and start some things manually. And if you're replacing nodes in a 2-node cluster, it's much nicer to have the q-device.
Wow, I'm planning to do the same thing. Amazing find of a post.
And we have opinions for and against! Wow 🍿
Good luck everyone