
  • I’m curious where you are from and what hardware you have for self-hosting. I also want to know what you are interested in self-hosting or learning.

    For me, my home lab started with networking, but yours doesn’t have to. I had already reached system administration and was working to become a network engineer. Where are you on your path? In truth, starting with the network is not the best approach; mine required dedicated equipment: a firewall (UDM), switching (Ubiquiti), and access points. That gets expensive, so it’s probably not the best place to start.

    I would say that a good place to start is virtualization and a hypervisor. A hypervisor’s job is to run virtual machines, and I think starting there is a good idea because once you have one, you can experiment with just about anything you want: Windows, Linux, Docker, wherever your exploration takes you.

    Now, the cheapest way to do this kind of depends on you. Do you have a .edu email address? If so, you should be able to receive free licensing for Windows Server through Microsoft Imagine (previously called DreamSpark). If not, do you have Windows 10/11 Pro? I would say that Windows Server may require dedicated hardware, but if you are already running Windows Pro, then your daily-driver PC is capable of running Hyper-V.
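
    If you do have Pro, enabling Hyper-V is a one-liner from an elevated PowerShell prompt (a reboot is required afterward):

        # Run as Administrator; reboot once it finishes
        Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All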

    If you have an old spare computer, you can make it a dedicated hypervisor with either the Windows Server option or, in my opinion preferable, Proxmox. Proxmox may take a little time to get acclimated to since it involves the Linux command line, but you already have experience with that from the Pi-hole.

    Those are my recommended next steps, though there is plenty more you can do. As others have said, Docker is a cool way to make some of this happen. I personally hate Docker on Windows (it’s weird, and I just want the command line, not a UI). But you should easily be able to spin up Windows Subsystem for Linux, install Docker and Docker Compose, and get started there without needing any additional hardware; a rough sketch is below. You could also do the same using Hyper-V if you prefer and have a Pro license.
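
    As a rough sketch of the WSL route (assuming the default Ubuntu distro):

        # From an elevated PowerShell prompt: install WSL with the default Ubuntu distro
        wsl --install

        # Then, inside the WSL shell: install Docker via Docker's convenience script
        curl -fsSL https://get.docker.com | sh
        sudo usermod -aG docker $USER   # run docker without sudo (re-login to apply)

        # Verify it works
        docker run --rm hello-world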

    Regardless of what direction you choose to go, you can go far, you can succeed, and you can thrive. And if you run into any issues, post them here. Selfhosted has your back, and we are all rooting for you.

    Side note: Hyper-V used to be available only on Windows Pro, but if someone knows for sure that it is available on Home, please let me know and I will update my post.



  • oOooo… Quite interesting.

    If you are intending to use it, I have some thoughts about how you should get it set up and running.

    The first thing I would look into is getting the iDRAC reset and working. iDRAC lets you view the server’s display without connecting a monitor, using nothing but a web page, and it also lets you power the server on and off remotely even if it is frozen or shut down; a couple of the relevant commands are sketched below.
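
    If you can still log into an OS on the box, Dell’s racadm tool can reset the iDRAC without touching the hardware (assuming racadm is installed; these are from memory, so double-check them):

        # Soft-reboot the iDRAC itself (the server keeps running)
        racadm racreset soft

        # Or wipe the iDRAC configuration back to factory defaults
        racadm racresetcfg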

    After that, I have some questions about your intentions for this server. If you plan to use it as a hypervisor, I would like to take just a moment to shill for Apache CloudStack. I recently set up a server running it and it is going absolutely wonderfully. The reason I chose it is that it is more open to DevOps workloads: it is compatible with Terraform by default (a sketch is below) and takes literally five minutes to set up an entire Kubernetes cluster. However, the networking behind it is a bit more advanced; if you want more detail, just ask me. For now, suffice it to say that it is capable of running 201 VLANs protected by virtual routers.
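
    To give a feel for the Terraform side, here is a minimal sketch using the CloudStack provider (the URL, keys, and offering/template names are placeholders, not my real config):

        # main.tf
        provider "cloudstack" {
          api_url    = "https://cloudstack.example.com/client/api"
          api_key    = var.api_key
          secret_key = var.secret_key
        }

        # One small VM from an existing template
        resource "cloudstack_instance" "demo" {
          name             = "demo-vm"
          service_offering = "Small Instance"
          template         = "Ubuntu 24.04"
          zone             = "zone-1"
        }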

    If that is too much to bite off at one time, then Proxmox is the way to go. You can find a few videos from Linus Tech Tips involving that software. It has much simpler networking (the typical bridge setup is sketched below) and can get you up and running in no time.
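
    For reference, the usual Proxmox networking boils down to one bridge in /etc/network/interfaces that your VMs attach to (the interface name and addresses here are assumptions; yours will differ):

        auto lo
        iface lo inet loopback

        iface eno1 inet manual

        # vmbr0 is the bridge the VMs plug into
        auto vmbr0
        iface vmbr0 inet static
                address 192.168.1.10/24
                gateway 192.168.1.1
                bridge-ports eno1
                bridge-stp off
                bridge-fd 0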

    Finally, if you are intending to learn something a little more professionally viable, then I would talk to your boss about using an unused VMware license, or perhaps working with Hyper-V (my least favorite option).

    If you do intend to run a hypervisor, then I would highly recommend setting up a RAID array (an example is sketched below). Now, the type of RAID depends highly on what you want. RAID 5 will probably work for a homelab, but I would still recommend RAID 10. RAID 5 gives you more usable space, but I like the performance benefits of RAID 10, and I think those matter when multiple virtual machines are sharing the same storage. You can read more about the various RAID levels here: https://www.prepressure.com/library/technology/raid
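
    If you go the Linux software-RAID route, a four-disk RAID 10 is a one-liner with mdadm (device names are hypothetical, and this destroys whatever is on those disks):

        # Stripe across two mirrored pairs
        mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.ext4 /dev/md0          # put a filesystem on the array
        mdadm --detail /dev/md0     # confirm the array is healthy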








  • I’m going to go with yes, since unlike Star Wars, Stargate has multiple enemies and is closer to Star Trek with its episodic nature. Although some (okay, most) of the enemies are based on ancient fantasy characters, I would say that what actually makes them different is simply technology, as opposed to any legitimate magic powers (the Force). I think that easily takes it from fantasy into science fiction. Perhaps this argument loses support when it comes to the Ori. But what about the Wraith, the Goa’uld, the Ancients, the Jaffa, the Tok’ra, the Nox, the Replicators, and the Asgard?



  • BTRFS is a damn good option too, and I’m happy to hear how easy it is to use. I haven’t used it (yet); I went with ZFS because of its flexible architecture. On a desktop, BTRFS makes sense, but on a server? What is it like under a hypervisor?

    I’m working on standing up a CloudStack host as a hypervisor. I want this host to be able to run five Kubernetes VMs, so it needs quick access to the disks. I do not have a RAID card, only an HBA, and in such a scenario I would typically use RAID 10. But a ZFS RAID 10 outperforms an mdraid 10 anyway (in terms of writing, not necessarily reading), so that is what I’ve decided on; the pool layout is sketched below. It may not be a good idea, it may not even be feasible, but I’m heckin willing to give it a shot.
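
    For the curious, a ZFS “RAID 10” is just a pool of striped mirrors (the by-id names below are made up; use your own disks’ IDs):

        zpool create tank \
            mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
            mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
        zpool status tank   # verify the mirror layout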

    I’m actually jealous that you automatically have built-in kernel support, though. I am a little curious how (or if) BTRFS ties multiple disks together; I’m simply uninformed.

    ZFS Performance Sauce

    Install Ubuntu 24.04 on ZFS RAID 10 - Github Repository

    Edit: There are a few drawbacks to using ZFS, lousy Docker performance being one that I’ve heard about. I’m curious how this will be affected if I have Docker running inside a VM.
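
    My understanding (unverified) is that the Docker penalty mainly applies when /var/lib/docker sits directly on a ZFS dataset, where Docker switches to its zfs storage driver. Inside a VM on a zvol formatted with ext4, Docker should just use overlay2 and sidestep the issue. For reference, the driver lives in /etc/docker/daemon.json:

        {
          "storage-driver": "zfs"
        }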


  • That’s fair. I chose ZFS because I’ve used it before and already understand it fairly well. I know nothing about BTRFS, so perhaps you could educate me a little. I’m working on setting up a CloudStack host using ZFS RAID 10. Does BTRFS have a flexible enough architecture to do something similar?

    Edit: Perhaps you could also inform me about BTRFS speeds. From what I understand, ZFS outperforms BTRFS on large datasets, but I don’t know where the cutoff is. For reference, it would need to run 12 × 10 TB HDDs.
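
    From the little I’ve skimmed, BTRFS does handle multiple disks natively through profiles, something like this (untested by me; device names hypothetical, and this wipes the disks):

        # Data and metadata both in a RAID 10 profile across four disks
        mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        btrfs filesystem show   # confirm how the disks were grouped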






    Here is the exact issue I’m having. I’ve included screenshots of the command I use to list HDDs on the live CD versus the same command run on installed Ubuntu 24.04. I don’t know what is causing this, so perhaps this is a time when someone else can assist. Now, the benefit of using /dev/disk/by-id/ is that you can be more specific about the device, so you can be sure the pool is attached to the proper disk no matter what state your environment is in (see the sketch below); this is something you need for a stable ZFS install. But if I can’t do that with SCSI disks, then that advantage is limited.
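
    For anyone following along, this is the pattern I’m going for (the IDs below are made up):

        # Stable names that survive reboots and controller reordering
        ls -l /dev/disk/by-id/

        # Build the pool against those names instead of sdX devices
        zpool create rpool mirror \
            /dev/disk/by-id/scsi-35000c500a1b2c3d4 \
            /dev/disk/by-id/scsi-35000c500a1b2c3d5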

    Windows Terminal for the win, btw.

    Live CD:

    Ubuntu 24.04 Installed:


    Well… I have to admit my own mistake as well. I assumed it would have faster read and write speeds based on my RAID knowledge and didn’t actually look it up until I was questioned about it. So I appreciate being kept honest.

    While we have agreed on the read/write benefits of a ZFS RAID 10, there are a few disadvantages to a setup like this. For one, I do not get the same level of redundancy. A raidz2 can lose any two hard drives. A ZFS RAID 10 can lose one drive guaranteed and up to two in total: as long as both disks of the same mirror aren’t gone, I can survive two failures. So overall, this setup is less redundant than raidz2; the two layouts are sketched below for comparison.
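
    To make that concrete, here are the two four-disk layouts side by side (short sdX names used only for readability; by-id names are still the better practice):

        # raidz2: any two of the four disks can fail
        zpool create tank raidz2 sdb sdc sdd sde

        # striped mirrors ("RAID 10"): survives one failure per mirror,
        # but losing both sdb and sdc kills the pool
        zpool create tank mirror sdb sdc mirror sdd sde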

    Another drawback is that, for some reason, Ubuntu 24.04 does not recognize the SCSI disks except on the live CD. Perhaps someone can help me with this to give everyone a better solution. The same disks that were visible on the live CD are not visible once the system is installed. It still technically works, but zpool status rpool will show that it is using sdb3 instead of the SCSI by-id names. This is fine in practice, since my HDDs are SATA anyway, so I just switched to the SATA names. But if I could ensure others don’t face this issue, it would mean a more reliable ZFS installation for them.
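
    One thing worth trying, though I haven’t verified it fixes this particular case: exporting the pool and re-importing it with an explicit search path usually rewrites the device labels to by-id names (for a root pool like rpool, this has to be done from a live environment):

        zpool export rpool
        zpool import -d /dev/disk/by-id rpool
        zpool status rpool   # should now show by-id names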