Good day people!
(This paragraph is fluff, feel free to skip) First I’d like to thank everyone who has answered my questions thus far. As of now I’m daily driving CachyOS on my “laptop” and Bazzite on my gaming PC. I’ve settled on Hyprland after running with Sway for a few days, and I’ve been forcing myself to solve problems and do file management using the CLI exclusively (excluding Firefox for duckduckgoosing help). I’ve gotten semi-comfortable manipulating files, but haven’t had to do anything too skill-intensive yet.
On to the question! I am currently looking to set up a home server. My use case is storing media, specifically videos (for watching) and game ROMs (for playing older games in an emulator). With this use case in mind, what’s a good resource for learning how to get started? For those who have home servers set up for similar purposes, how did you arrive at your current setup? What considerations should I take into account before, during, and after setup?
Any feedback is greatly appreciated!
Thanks in advance! Hope to hear from you all soon!
For a long time, I used a Raspberry Pi running stock-standard Raspbian with an external USB drive connected. I just Googled around for how to set up a Samba file share, and that’s all I used. I didn’t really see the value in any media-specific software, myself. For me that was secondary to the LAMP-stack web server anyway.
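For what it’s worth, the share itself is only a few lines in /etc/samba/smb.conf. A minimal sketch (the share name and path below are made up; point it at wherever your drive is mounted):

    [media]
        # Wherever the USB drive is mounted (example path)
        path = /mnt/usbdrive/media
        # Require a Samba account, allow writes
        guest ok = no
        read only = no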
Biggest thing I would recommend is making sure you have, from day one, a full backup of your data and any configuration necessary to read the data.
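Something as simple as a periodic rsync to a second drive covers that. A minimal sketch, assuming a hypothetical backup drive mounted at /mnt/backup:

    # -a preserves permissions/timestamps; --delete makes the copy an exact mirror
    rsync -a --delete /mnt/usbdrive/media/ /mnt/backup/media/
    # Don't forget the config needed to serve the data again
    rsync -a /etc/samba/smb.conf /mnt/backup/config/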
If you’re thinking you’ll want access away from your home network, my recommendation would be to set up a VPN and connect to your local network through it, rather than exposing something directly to the Internet. That is, unless you want to be able to easily share files with other people.
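WireGuard makes this pretty painless these days. A minimal sketch of the server side (the keys, port, and addresses below are all placeholders):

    # /etc/wireguard/wg0.conf on the home server
    [Interface]
    # Placeholder key and VPN subnet
    PrivateKey = <server-private-key>
    Address = 10.0.0.1/24
    ListenPort = 51820

    [Peer]
    # Your laptop/phone; only traffic signed by this key is accepted
    PublicKey = <client-public-key>
    AllowedIPs = 10.0.0.2/32

Forward that one UDP port on your router and the client can dial home from anywhere.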
I use an older HP thin client PC with a 4 TB solid state drive as an SFTP file server (SFTP runs over SSH, so OpenSSH’s built-in sftp subsystem handles it rather than a separate FTP daemon like vsftpd), but if you are local-only then an SMB server using Samba would probably be fine. I use SFTP because I wanted something a bit more secure that I can port-forward through my router on a random higher-numbered port for remote access.
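If you go the SFTP route, OpenSSH can do all of it with no extra software. A sketch of a locked-down setup, assuming a dedicated “media” group (the port and names are examples):

    # /etc/ssh/sshd_config (excerpt)
    # Random higher-numbered port instead of 22
    Port 49152
    # Keys only, no passwords
    PasswordAuthentication no
    # Members of the example "media" group get chrooted, SFTP-only access
    Match Group media
        # Chroot target must be root-owned and not writable by the user
        ChrootDirectory /srv/media
        ForceCommand internal-sftp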
I mostly taught myself how to do this from guides originally meant for the Raspberry Pi, but there is nothing different about running the same programs on Debian or the like. Personally, I would not recommend a Raspberry Pi for a large file server: it does not natively support SSDs without additional hardware, which makes it significantly pricier and less self-contained than a used, older-gen thin client PC, which can be had for relatively little on places like eBay (though they do make some fairly high-capacity microSD cards these days).
Hardware-wise, these types of servers are generally not CPU-intensive, nor do they need a particularly large amount of RAM, so an older-gen or low-power CPU will often work fine. You should, however, make sure to get something with at least gigabit Ethernet, as a 100 Mbit connection on, say, a Raspberry Pi 3 or older will be very slow for transferring large files.
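Rough numbers to make that concrete:

    100 Mbit/s ≈  12 MB/s  →  a 10 GB file takes ~14 minutes
      1 Gbit/s ≈ 117 MB/s  →  the same file takes ~1.5 minutes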
Thank you for your insight! I’ve been playing with the idea of getting a retired tower from my workplace to mess around with, so it’s nice knowing that’s a valid path. Once I’ve secured some hardware and done some research on setup, I’ll be sure to report back (probably with more questions)!
No problem! Yeah, as long as you have the space, I think this would be a good way to repurpose an old junker PC. I imagine a quick search of something along the lines of “how to configure a samba server” should bring up some decent tutorials.
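The short version, on Debian or a derivative (the username is an example):

    # Install the server
    sudo apt install samba
    # Give an existing Linux user a Samba password
    sudo smbpasswd -a yourname
    # Define your share in /etc/samba/smb.conf, then reload
    sudo systemctl restart smbd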
First you do what you’re doing now: reading, learning. Then you start building stuff, repurposing old hardware.
Then you find that something breaks: the old fan in that old system quits, or that Linux version does something horrible to its ext filesystem because you made a configuration error. And then you look at the 100+ W it draws, 24/7.
That’s the moment I just bought a QNAP that fit my budget (or a Synology, or an ASUSTOR).
And these run, in your use case, for decades.
The end.
PS: don’t forget a backup solution.
Edit edit edit: decades? Yes, decades. I’ve a TS-210 to prove it. Still churning along as a backup target. I’ve never had a QNAP die on me, except one time when an external AC/DC power supply died.
Edit edit edit edit: “muh security! QNAP, Synology, ASUSTOR bad!” Yeah, yeah. Don’t use the (QNAP) cloud services. They are neat, though, and quite safe, as long as you keep thinking for yourself.
The safe way would be to buy an existing NAS solution, such as a Synology DS423+. Don’t forget that you want to buy at least one USB drive the NAS can put backups on if the data is valuable and/or unique to you (you can’t redownload the photos from Vacation Summer 2024), and you want to run your NAS disks redundantly (mirrored in some way, e.g. RAID 10).
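On a Synology you’d set the mirroring up through its Storage Manager GUI, but for reference, the DIY equivalent is a single mdadm command. A sketch assuming two blank example disks (destructive; the device names are placeholders):

    # Build a RAID 1 mirror out of two disks, then format it (wipes both disks)
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    sudo mkfs.ext4 /dev/md0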
If you want to expand your home-lab services and the NAS can’t handle the CPU/RAM requirements, you can often still mount the NAS share (and bind-mount it into your services) to keep it as your storage location even when you add a second computer that runs the actual services. This is the way many traditional data centres work, with compute and storage separated onto different hardware.
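In practice that usually means exporting the share over NFS and mounting it on the compute box. A quick sketch, with made-up hostname and paths:

    # /etc/fstab on the compute machine (hostname and paths are examples)
    nas.local:/volume1/media  /mnt/media  nfs  defaults,_netdev  0  0

    # Then bind-mount it wherever a service expects its data, e.g.:
    sudo mount --bind /mnt/media /srv/media-service/data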
Personally I run everything virtualized on a Debian KVM/QEMU server, including my Fedora gaming VM with VFIO GPU passthrough. For me it was a lot of fun learning to set up VFIO passthrough and the like, but I wouldn’t recommend it unless you’re doing it out of curiosity and doing it that way has a value in itself.
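For anyone curious what that involves at its core: you reserve the GPU for the vfio-pci driver at boot, then hand the device to the VM. A rough sketch for an Intel box (the vendor:device IDs below are placeholders; lspci -nn shows your own):

    # /etc/default/grub (excerpt) -- enable the IOMMU, claim the GPU for vfio-pci
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on vfio-pci.ids=10de:1b81,10de:10f0"
    # After editing: sudo update-grub, reboot, then attach the device to the VM,
    # e.g. with: virsh attach-device <vm-name> <gpu-hostdev.xml> --config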
There are a lot of packaged hypervisor solutions, such as Proxmox, that make it easier to get started with virtualization right away and already have built-in backup solutions and so on.