This is an automated archive made by the Lemmit Bot.
The original was posted on /r/homelab by /u/xxtoni on 2025-01-31 00:08:44+00:00.
I am trying to simulate a real business in my homelab and want to get at least some of the base right.
Right now I only have an SFF Dell running OPNsense and a Dell R740 with 48GB of RAM (waiting for more to arrive) running Proxmox.
I want to add a second Proxmox node and was thinking about getting an R630, loading it up with RAM plus PCIe NVMe adapters for both servers, and adding SFP+ cards. To me it seems more cost-efficient to go with another server for the second node instead of multiple SFFs.
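From what I've read, joining the second node into a cluster would be roughly this (a sketch, assuming Proxmox VE's pvecm tool; the cluster name and IP are made up):

```shell
# On the existing R740 - create the cluster
pvecm create homelab-cluster

# On the new R630 - join it, pointing at the R740's IP
# (10.0.0.1 is a placeholder for whatever the first node uses)
pvecm add 10.0.0.1

# Verify membership and quorum afterwards
pvecm status
```

With only two nodes there's no quorum if one dies, so a QDevice or third vote would presumably be needed for real HA.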
Do I need more than 10Gbit? Do I need a switch for that? I thought about just connecting the servers directly to each other over SFP+ and using gigabit Ethernet for everything else.
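The direct host-to-host SFP+ link would just be a static point-to-point network, something like this (a sketch for one node; interface name and addresses are assumptions, check yours with `ip link`):

```shell
# /etc/network/interfaces fragment on node 1
auto enp3s0f0
iface enp3s0f0 inet static
    address 10.10.10.1/30
    mtu 9000   # jumbo frames, optional

# Node 2 would get 10.10.10.2/30 on its SFP+ port.
# A DAC cable or fiber directly between the two cards
# needs no switch at all.
```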
What about shared storage? Do I need a separate "box", or should I just pass the disks in the servers through to Proxmox?
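If I did go with a separate box, registering it on the cluster looks like this (a sketch assuming an NFS export; the server IP, export path, and storage ID are all hypothetical):

```shell
# Register a shared NFS export as cluster storage
pvesm add nfs shared-nfs \
    --server 10.10.10.10 \
    --export /tank/proxmox \
    --content images,rootdir
```

Shared storage visible to both nodes is what would let HA restart or live-migrate VMs onto the surviving node.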
I am trying to create an environment similar to what a business would have, so: HA, a bunch of VMs as clients, something like SCCM, something like PRTG, Intune, a few Windows DCs, a database server, an AADC server... stuff like that.