My LTE home internet is actually quite good for most internet usage scenarios - downloading files, streaming video, playing games (as long as they can deal with 80-120ms of network latency). However, as a technical "expert" I have the capability and willingness to provide certain products and services to my group of friends. Among these are dedicated servers for video games - Valheim, Project Zomboid, and Satisfactory - to name a few.
Beyond hardware resources and software knowledge, running servers usually requires a fast internet connection with low latency, consistent uptime, and a relatively static IP. Unfortunately, an LTE connection is not well suited to this application. I've used VPSes to host some servers in the past, but the "hardware resources" portion is either not satisfied or prohibitively expensive (my bar for budgeting is low). Since I have no other internet option available, my next-best solution is colocating a machine somewhere with fast, stable internet.
Initially, I thought of sticking a PC in a corner at work - but that seemed a little sketchy - and while the IT department (me) would approve in theory, in practice the small security concerns and the risk of getting fired ultimately prevented that course of action. I could pay for colocation - but if I'm going to pay for something, I may as well shell out for a VPS. With those options out the window, I opted for something cheap and good enough: shipping an Intel NUC to a buddy in Vancouver, WA with a 1Gbps cable connection. It's not fiber, but with any luck the latency would be minimal.
The PC itself had a few requirements: it needed to be fast enough to run servers, small and efficient enough to not annoy my friend, and set up to allow remote configuration and management. I also wanted to avoid needing to open ports on my friend's network to keep things a little more secure.
The backstory for the NUC is that I picked up a parts-or-repair 11th-gen i5 NUC from a seller on eBay who claimed it had 16GB of RAM and a 512GB SSD. The unit I received had neither of those things, and was also an older 8th-gen model. It was $40 though, and unlike the 11th-gen I thought I was buying, it actually worked. So now I had hardware I didn't have a real use case for, which was kind of the impetus for this whole deal. I replaced the missing components and found that the little guy was pretty capable. The fully-equipped PC has the following specifications:
The hardware is pretty pedestrian - but the real fun was the software side of things. Remember, this machine is getting shipped a 4.5 hour drive south, so there's really no popping in to fix issues. While my friend is smart and fairly tech-competent, ideally the machine gets plugged into power and internet - no display, keyboard, or mouse - and then never gets physically touched again. The variety of server software we might run also requires some flexibility - for instance, Farming Simulator 19 requires Windows and that the whole-ass game be installed and running to host a dedicated server (which is brain-dead, btw).
I chose to run Proxmox as the hypervisor OS. It's based on Debian Linux and uses KVM as the virtualization technology. It's got plenty of community support, and importantly sports a nifty web interface for managing virtual machines. In commercial applications you can have machine clusters and huge storage arrays, all very cool stuff - I'll be under-utilizing all of that in this case. It's great nonetheless. I use Proxmox at home for virtualizing my Plex server and Home Assistant server, so I already had a bit of experience there.
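Most of the VM management happens in that web interface, but everything is also scriptable from the Proxmox shell if you prefer. As a rough sketch (the VM ID, name, storage name, and ISO path below are placeholders, not my actual setup):

# create a small Debian VM attached to the vmbr0 bridge, then start it
qm create 100 --name jumpbox --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:16 \
  --ide2 local:iso/debian-12-netinst.iso,media=cdrom --ostype l26
qm start 100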
One thing to note if you're planning on doing something like this is that you must go into /etc/network/interfaces and change the static IP to a DHCP configuration. That avoids overlapping IPs on the target network (friend's house) and avoids needing to figure out their subnet beforehand. Your file might look something like this (eno1 might be named something else on your hardware - that should be the only thing needing changes):
auto lo
iface lo inet loopback

# the physical NIC stays "manual" - the bridge is what gets the DHCP address
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet dhcp
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
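After editing, the change takes effect on the next boot; on newer Proxmox installs (which ship ifupdown2) you can also apply it live:

ifreload -a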
With Proxmox installed and configured, the next step was to implement remote access - initially for maintenance, but hopefully capable of removing the need to open ports for servers as well. For this, I spun up a small "jump box" VM and installed cloudflared, logged it into my personal Cloudflare Zero Trust account, and configured it to route traffic to 192.168.0.0/16 over the tunnel (remember - I don't know my friend's network layout, and this can be changed in the future).
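For reference, the cloudflared side of that looks roughly like this - the tunnel name here is just a placeholder, and since WARP excludes private ranges by default, you'll likely also need to remove the range from the Split Tunnels exclusions in the Zero Trust dashboard:

cloudflared tunnel login                                  # authenticate against the Cloudflare account
cloudflared tunnel create nuc-jump                        # create a named tunnel
cloudflared tunnel route ip add 192.168.0.0/16 nuc-jump   # advertise the private range through the tunnel
cloudflared tunnel run nuc-jump                           # run it (or install it as a service so it survives reboots)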
The great part about Cloudflare Zero Trust for this application is that the persistent tunnel will pick back up on its own once the machine is connected to the internet anywhere in the world. The Zero Trust dashboard also lets you configure authentication options for users logging into the VPN. For instance, in my case I give all of my friends an Azure AD credential on the cyberiant.com domain and can manage who has access that way - preventing Joe Blow off the street from getting into my friend's network.
Post-install, I took the device to work and found that even on that network I was able to connect to the Cloudflare tunnel, access the Proxmox GUI, and SSH into the jump box. Awesome!
I installed another Linux VM and configured the Dockerized Valheim server before boxing up the NUC and shipping it to Vancouver (BTW, the 8th-gen NUC fits in a USPS flat-rate padded envelope in its retail box - money-saving tip). Once it got plugged in there I could get into the tunnel, and after some nmapping, I found the Proxmox, jump box, and game server IPs. I could connect to and play Valheim over the tunnel as well. As of writing this we haven't gotten around to actually playing any of those games - but when the time comes it should be a fairly straightforward process to get everything set up.
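If you're curious, the Valheim container is nothing fancy - the sketch below uses the popular lloesche/valheim-server community image as an example, and the names, paths, and password are placeholders rather than my exact settings:

# ping-scan to find what landed where on the friend's network (adjust to their subnet)
nmap -sn 192.168.0.0/24

# run the Valheim server container; 2456-2457/udp are the game's default ports
docker run -d --name valheim \
  -p 2456-2457:2456-2457/udp \
  -v /opt/valheim:/config \
  -e SERVER_NAME="Friends Server" \
  -e WORLD_NAME="Midgard" \
  -e SERVER_PASS="changeme" \
  lloesche/valheim-server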
I highly recommend this setup to anyone that has friends and a crap internet connection.