• Build 3 Easy Projects with the Raspberry Pi Zero 2


    With Raspberry Pi 4 and 5 pricing still running high lately (the Pi 4, ironically, now sells for around what the Pi 5 used to cost), I wanted to see if the more affordable Raspberry Pi Zero 2 could still be a practical option for real-world projects. In this guide, we’re going to get the Zero 2 up and running, walk through a quick introduction, and then connect a motion sensor, LED, breadboard, and camera module to build a few simple automation projects using Python.

    And if you’re new here and enjoy Raspberry Pi and homelab tutorials like this, consider subscribing to Mackey Tech so you don’t miss future content.


    Raspberry Pi Zero 2 Overview

    The Raspberry Pi Zero 2 comes with a modest 512MB of RAM, but it’s powered by a quad-core 64-bit ARM Cortex-A53 running at 1GHz—the same CPU found in the Raspberry Pi 3. The Zero 2 W includes built-in Wi-Fi and Bluetooth, and the WH variant adds a pre-soldered 40-pin GPIO header that uses the same layout as the Raspberry Pi 3 through Pi 5, making it fully compatible with most GPIO-based projects.

    The board includes two Micro-USB ports—one for power and one for data. For peripherals like a keyboard or mouse, you’ll need a Micro USB OTG adapter, while the power port should be connected to a reliable 5V 2.5A power supply. If you already have a Raspberry Pi 4 or 5 adapter and a USB-C to Micro-USB cable, that will work just fine. For display output, the Pi Zero 2 uses Mini-HDMI, so you’ll need a Mini-HDMI to HDMI cable.


    Flashing Raspberry Pi OS

    To get started, we’ll flash a MicroSD card using the Raspberry Pi Imager. Select the Raspberry Pi Zero 2 as the device and choose the 32-bit version of Raspberry Pi OS. The 32-bit version is recommended here since it uses less RAM and runs more efficiently on the Zero 2’s limited memory.

    While flashing the OS, it’s a good idea to enable SSH, assign a hostname like ‘pizero2’, configure your Wi-Fi, and create your user account so the system is ready to go immediately after boot.

    Once booted, take a moment to explore the desktop, open a terminal to verify your hostname and IP address, and optionally run tools like htop to monitor system resource usage. You can also open Thonny to see how lightweight Python development is on this system.
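A few quick terminal commands cover those checks (standard Raspberry Pi OS tools; the hostname shown is just the example chosen earlier):

```shell
hostname        # prints the name set in the Imager, e.g. pizero2
hostname -I     # prints the IP address(es) assigned on your network
free -h         # quick look at how the 512MB of RAM is being used
# htop          # optional: full interactive resource monitor
```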


    Installing Dependencies

    Before jumping into the projects, we need to install a few required packages. Start by updating your system:

    'sudo apt update && sudo apt upgrade -y'

    Next, install the Python libraries and camera support:

    'sudo apt install python3-gpiozero python3-picamera2 -y'

    And finally, install FFmpeg for handling video processing:

    'sudo apt install ffmpeg -y'

    These dependencies will allow us to control GPIO pins, interface with the camera, and handle video recording.


    Project Setup: Motion Sensor and LED

    For the first project, we’ll connect a PIR motion sensor and an LED using a breadboard. Since the Pi Zero 2 WH already has GPIO headers, we can use jumper wires to connect everything directly.

    The motion sensor we’re using is the HC-SR501 PIR sensor. PIR stands for Passive Infrared, meaning it detects motion by sensing changes in heat—like when you move your hand in front of it. The sensor has three pins: VCC (power), GND (ground), and OUT (signal). The OUT pin connects to a GPIO input on the Pi, such as GPIO 17, while VCC connects to 5V and GND connects to any ground pin.

    On the breadboard, the LED is connected in series with a current-limiting resistor to protect it. The longer leg (anode) connects to GPIO 27, while the shorter leg (cathode) connects to ground. When motion is detected, the LED will turn on, and after a short delay, it will turn back off.
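In Python, that behavior can be sketched with the gpiozero library installed earlier. The pin numbers follow the wiring above; everything else (the function name, the structure) is illustrative rather than the guide's exact script:

```python
# Minimal sketch: light the LED while the PIR sensor reports motion.
# Assumes the wiring described above: PIR OUT on GPIO 17, LED anode
# (through its resistor) on GPIO 27.
PIR_PIN = 17
LED_PIN = 27

def watch_motion(pir_pin: int = PIR_PIN, led_pin: int = LED_PIN) -> None:
    """Run this on the Pi: turns the LED on during motion, off after."""
    from gpiozero import MotionSensor, LED  # installed via apt earlier
    from signal import pause

    pir = MotionSensor(pir_pin)
    led = LED(led_pin)
    pir.when_motion = led.on       # callback when the sensor sees movement
    pir.when_no_motion = led.off   # callback when movement stops
    pause()                        # keep the script alive for the callbacks

# On the Pi itself, simply call: watch_motion()
```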


    Using Thonny for Python Development

    For all of these projects, we’re using Thonny, which comes pre-installed with Raspberry Pi OS. Thonny is a simple and beginner-friendly Python IDE that makes it easy to write, test, and run scripts directly on the Pi.

    You can open Thonny from the desktop menu or by running thonny from a terminal.

    We’re organizing our scripts into their own directory, and it’s important to make sure the folder has the correct permissions before running them. This helps avoid issues when accessing files or saving images and videos.
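As a sketch of that setup (the folder name here is just an example, not the one used in the video), creating a scripts directory and confirming the current user can write to it looks like:

```shell
mkdir -p ~/pi-projects        # a home for the project scripts
chmod u+rwx ~/pi-projects     # owner can read, write, and enter the folder
ls -ld ~/pi-projects          # verify ownership and permissions
```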

    For the full scripts used in this guide—including LED control, motion detection, and camera automation—be sure to check out my Patreon, where I’ve provided complete working examples.


    Adding a Camera for Motion Detection

    In the second project, we expand on the setup by adding a camera module that captures an image whenever motion is detected. We’re using an Arducam V2 8MP camera, which includes a ribbon cable specifically designed for the Pi Zero 2.

    To install it, power off the Pi, gently lift the CSI connector latch, insert the ribbon cable with the contacts facing the board, and secure it by pressing the latch back down. Once powered back on, the camera is ready to use with the Picamera2 library.
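A hedged sketch of that motion-capture workflow follows; the pin number matches the wiring from the first project, but the save folder and filename pattern are assumptions, not the guide's exact script:

```python
# Capture a still image each time the PIR sensor fires, using the
# gpiozero and Picamera2 packages installed earlier.
from datetime import datetime
from pathlib import Path

SAVE_DIR = Path.home() / "captures"   # assumed output folder

def image_path(base: Path = SAVE_DIR) -> Path:
    """Build a timestamped filename so each capture is unique."""
    return base / f"motion-{datetime.now():%Y%m%d-%H%M%S}.jpg"

def watch(pir_pin: int = 17) -> None:
    """Run on the Pi: snap a photo whenever motion is detected."""
    from gpiozero import MotionSensor
    from picamera2 import Picamera2

    SAVE_DIR.mkdir(exist_ok=True)
    cam = Picamera2()
    cam.configure(cam.create_still_configuration())
    cam.start()
    pir = MotionSensor(pir_pin)
    while True:
        pir.wait_for_motion()            # block until the PIR triggers
        cam.capture_file(str(image_path()))
        pir.wait_for_no_motion()         # wait for things to settle
```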

    If you encounter a “GPIO busy” error while running scripts, it usually means another process is already using the pins. If that process is a systemd service you set up for an earlier script (here named pir.service as an example), you can stop it with:

    'sudo systemctl stop pir.service'
    

    Mounting a NAS for Video Storage

    For the final project, we take things a step further by recording video when motion is detected and saving it to a NAS. This is a great way to avoid filling up the Pi’s SD card.

    First, create a directory for the mount point if needed, then mount your NAS share using:

    'sudo mount -t cifs //NAS-IP/Pizero2 /home/username/shared -o username=NASUSERNAME,password="NASPASSWORD",uid=$(id -u),gid=$(id -g),file_mode=0666,dir_mode=0777'
    

    If your password contains special characters, make sure to wrap it in quotes. This command mounts the share temporarily for the session, which is perfect for testing.

    Once mounted, your Python script can save video files directly to the NAS location.
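A minimal sketch of that idea is below, assuming the mount point used above (/home/username/shared) and the picamera2 package installed earlier; the helper names and file naming are illustrative, not the guide's exact script:

```python
# Record a short video clip straight onto the mounted NAS share
# whenever the motion script decides to, using a timestamped filename
# so clips never overwrite each other.
from datetime import datetime
from pathlib import Path

SHARE = Path("/home/username/shared")  # the CIFS mount point from above

def clip_path(base: Path = SHARE) -> Path:
    """Build a unique, timestamped .mp4 path under the NAS mount."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return base / f"motion-{stamp}.mp4"

def record_clip(seconds: int = 10) -> Path:
    """Run on the Pi: record one clip directly to the NAS."""
    import time
    from picamera2 import Picamera2
    from picamera2.encoders import H264Encoder
    from picamera2.outputs import FfmpegOutput  # uses the ffmpeg we installed

    path = clip_path()
    cam = Picamera2()
    cam.configure(cam.create_video_configuration())
    cam.start_recording(H264Encoder(), FfmpegOutput(str(path)))
    time.sleep(seconds)                 # record for the requested duration
    cam.stop_recording()
    return path
```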


    Bringing It All Together

    By starting with a simple motion sensor and LED, then adding a camera and network storage, we’ve built a basic but functional surveillance system using the Raspberry Pi Zero 2. While it’s not meant to replace a dedicated solution like MotionEye, it’s a great demonstration of how powerful this small board can be when combined with Python and a few components.

    For a device with only 512MB of RAM, the Pi Zero 2 handles these projects surprisingly well and proves to be a capable option for lightweight automation tasks!


    Final Thoughts

    This project is a great introduction to working with GPIO, sensors, cameras, and network storage on the Raspberry Pi. If you want the full scripts and a deeper walkthrough of each project, make sure to check out my Patreon, where everything is available in the free membership tier.

    Thanks for reading, and stay tuned for more Raspberry Pi and homelab content here on Mackey Tech!


  • What’s in my new Homelab for 2026?

    The Original Proxmox Server: From Workhorse to Test Bench

    Storage and Networking: Synology to UGREEN and the Move to 10GbE

    The ZimaBoard 2: Expanding Into a Proxmox Cluster

    Repurposed Hardware and Daily Drivers

    Looking Ahead: Automating the Workflow

    Final Thoughts


  • Which One of These Gadgets Do You Use?

    How I Actually Use My Tech Tools in My Homelab, Test Bench, and Studio

    In this guide, I’m going to walk you through how these tools fit into my workflow across three main areas: my test bench, my homelab, and my studio. These aren’t just random gadgets—they’re tools I rely on regularly to make things more efficient, organized, and reliable.

    Everything featured here was purchased with my own money, with the exception of a portable monitor that was sent out for review. The goal here isn’t to sell you anything—it’s to show you practical use cases so you can decide what might actually improve your own setup.


    Why Setup Matters More Than the Tools Themselves

    Before jumping into specific tools, it’s worth calling out something important: tools are only as valuable as how they’re used.

    A lot of people fall into the trap of buying gear without a clear purpose. In reality, the best setups aren’t necessarily the most expensive—they’re the ones that reduce friction. That means:

    • Less time swapping cables
    • Less clutter
    • Faster workflows
    • More reliability

    Each tool I use is there because it removes a bottleneck or simplifies something that would otherwise slow me down.


    Test Bench: Efficient Hardware Testing Without the Chaos

    When you’re regularly testing different systems—whether it’s single-board computers, mini PCs, or full servers—things can get messy quickly. Constantly swapping cables, keyboards, and monitors is not only inefficient, it’s frustrating.

    KVM Switch: One Control Point for Multiple Systems

    One of the first tools I rely on at my test bench is a KVM switch (Keyboard, Video, Mouse).

    Instead of dedicating separate peripherals to every system I test, the KVM lets me control multiple machines using a single keyboard, monitor, and mouse. Everything plugs into the switch, and I can toggle between systems with the press of a button.

    For example, if I’m working with something like a Raspberry Pi alongside another system, I don’t need to unplug anything or switch monitor inputs. I just hit a button and instantly move between them.

    This does a few key things:

    • Keeps the desk clean and organized
    • Eliminates constant cable swapping
    • Reduces the need for duplicate accessories

    Some KVM switches also include additional USB ports, which means you can share devices like external drives, webcams, or speakers across multiple systems.


    USB Hub: Expanding Limited Ports

    Another essential tool on my test bench is a USB hub, especially because one of my main editing systems is a Mac. Like many modern laptops, it’s limited in terms of ports.

    When I’m testing hardware or capturing footage, I often need to connect multiple devices at once—things like a capture card, external storage, or input devices. Instead of constantly unplugging and replugging cables, the USB hub acts as a central connection point.

    For example, I use it to connect an AverMedia capture card so I can record footage from test systems without disrupting my workflow. It’s a simple tool, but it removes a lot of friction when working across multiple devices.


    Storage and Speed: Why NVMe Matters

    Once everything is connected and running, performance becomes the next bottleneck—and that’s where NVMe storage comes in.

    NVMe Drives: The “Race Cars” of Storage

    NVMe drives are significantly faster than traditional SATA SSDs or hard drives. In my setup, I use NVMe storage in a few different ways.

    In my NAS, NVMe drives act as a high-speed layer for frequently accessed data. You can think of it like keeping your most-used files on the fastest possible storage. This makes a noticeable difference when:

    • Transferring large files
    • Scrubbing through video footage
    • Accessing commonly used data

    Everything just feels more responsive.

    I’ve also used NVMe drives in external enclosures for more specialized tasks. For example, when working with boards that require USB boot—like certain compute modules—I can load operating system images directly from an NVMe enclosure. This speeds up setup and makes the process more flexible compared to traditional methods.


    Power Management: Cleaning Up the Workspace

    Power is one of those things you don’t think about—until it becomes a problem. Between test equipment, studio gear, and computers, cable clutter can get out of control fast.

    Power Station: Centralized Power, Less Clutter

    To simplify things, I use a power station across both my test bench and studio.

    Instead of running multiple power bricks and extension cables, I can plug everything into a single centralized unit. This helps in a few ways:

    • Reduces cable clutter
    • Makes it easier to manage power connections
    • Keeps everything accessible in one place

    It might not be the most exciting piece of gear, but it has a big impact on how clean and functional the workspace feels.


    Homelab: Reliability and Protection

    In a homelab environment—especially one running servers and network equipment—reliability is critical.

    UPS (Uninterruptible Power Supply): Protecting Your Data

    One thing I don’t compromise on is backup power.

    Sudden power loss can be more than just an inconvenience. For systems running spinning drives, it can lead to data corruption or even hardware damage.

    To prevent that, I use two separate UPS units:

    • One dedicated to servers and NAS
    • One dedicated to networking equipment

    Each UPS provides battery backup during a power outage, giving me enough time to safely shut everything down. This controlled shutdown is key to protecting both data and hardware.

    It’s one of those investments that you hopefully never “need”—but when you do, it can save you from major headaches.


    Studio: Smart Control and Automation

    The studio environment is where things shift from raw functionality to workflow optimization and control.

    Smart Devices and Zigbee: Local, Reliable Automation

    Lighting is a big part of any studio setup, and instead of relying on manual switches or cloud-based systems, I use smart devices powered by Zigbee.

    Zigbee is designed specifically for smart home devices, which makes it more reliable and responsive than typical Wi-Fi-based solutions.

    In my setup, I control six different lights using a Zigbee network and a remote. This allows me to create scenes—like turning everything off downstairs while activating specific lighting upstairs.

    What really makes this powerful is that I’m using Home Assistant instead of relying on something like Alexa. This gives me:

    • Full control over automations
    • Local operation (no cloud dependency)
    • Custom configurations tailored to my workflow

    Everything runs locally, which means it’s faster, more private, and not dependent on an internet connection.


    A Tool That Works Everywhere: Portable Monitor

    Some tools are specialized—but others end up being useful everywhere. For me, that’s the portable monitor.

    Portable Monitor: Flexible Display Anywhere

    This has turned out to be one of the most versatile tools in my setup.

    On the test bench, it acts as a secondary display when working with multiple systems. In the studio, it becomes a field monitor for my camera—especially useful for overhead shots and B-roll where the built-in camera screen isn’t enough.

    Even in my homelab, it’s been useful for troubleshooting servers. Because it’s lightweight and uses a full-size HDMI connection, I can quickly hook it up to different systems without needing a permanent monitor setup.

    It also mounts to a tripod, which helps save space while keeping it flexible and portable.


    Bringing It All Together

    At the end of the day, my setup isn’t about having the most advanced or expensive gear—it’s about reducing friction.

    Each tool serves a purpose:

    • The KVM switch eliminates constant cable swapping
    • The USB hub expands connectivity
    • NVMe drives improve speed and responsiveness
    • The power station keeps everything organized
    • The UPS protects critical systems
    • Smart devices streamline control and automation
    • The portable monitor adds flexibility across environments

    Individually, these might seem like small upgrades—but together, they create a setup that’s much smoother and easier to work with.


    Final Thoughts

    This is what my setup actually looks like day to day. Nothing over the top—just practical tools that make everything run more efficiently.

    If you’re building out your own homelab, test bench, or studio, the key takeaway is this: focus on solving problems, not just adding gear.

    Think about what slows you down, what creates clutter, and what could be simplified. Then choose tools that directly address those issues.


    What About Your Setup?

    I’d be interested to hear what tools you rely on in your own setup.

    What devices have made the biggest difference for you? What problems have you solved with them?

    And as always—thanks for reading.


  • Here are 10 Tech Tools You’re Not Using Yet!


    There’s no shortage of “top tech gadgets” lists out there—but most of them don’t really explain how those tools fit into a real workflow. They tend to focus on features instead of real-world use, which makes it hard to know what’s actually worth your time or money.

    So instead of just listing products, this guide walks through a set of tools I personally use and explains why they matter, what problems they solve, and where they actually make a difference in day-to-day use.

    Some of these you’ll recognize. Others might be new. And a few are probably things you already own—but aren’t using to their full potential.

    Everything here was purchased with my own money. No sponsorships—just tools that have proven themselves useful over time.


    UPS (Uninterruptible Power Supply): Protecting What Matters Most

    One of the most underrated pieces of tech in my entire setup is a UPS, or uninterruptible power supply. If you’re running a homelab, a NAS, or even just working from home, this is one of those tools that quietly protects everything behind the scenes.

    A UPS is constantly doing two jobs at once. First, it protects your devices from power spikes and fluctuations. Second, it provides battery backup if the power goes out. Instead of everything shutting off instantly, it switches to battery and gives you a warning—usually a beep—so you have time to safely shut things down.

    This matters more than people realize. Sudden power loss can corrupt data, damage drives (especially spinning disks), and interrupt critical processes. Even a few minutes of backup power can be the difference between a safe shutdown and a major headache.


    Precision Screwdriver Kit: Small Tool, Big Difference

    If you’ve ever worked on tech hardware, you’ve probably had the experience of digging through drawers trying to find the right screwdriver. That’s exactly the problem a precision kit solves.

    Instead of juggling multiple tools, everything is organized in one place with a wide variety of bits, including Torx screws that are common in modern devices and far less likely to strip. The added convenience of a rechargeable handle and built-in lighting makes working in tight or poorly lit spaces much easier.

    It might seem like a small upgrade, but when you’re working on mini PCs, laptops, NAS devices, or general electronics, it speeds things up and removes a lot of frustration.


    NVMe Drive + Enclosure: Turning Old Hardware Into Speed

    If you’ve upgraded a computer recently, there’s a good chance you have an NVMe drive sitting around unused. With a simple enclosure, that drive can be turned into a high-speed external storage device.

    This setup is incredibly useful for transferring large files, creating backups, or even editing directly from external storage. Compared to traditional external drives or USB flash drives, NVMe storage is significantly faster and more responsive.

    While it’s not the cheapest option, it’s one of the most effective ways to add performance to your workflow without buying entirely new hardware.


    Password Manager: Security Without the Hassle

    Most people don’t handle passwords well. Whether it’s reusing the same password, writing them down, or simply forgetting them, it creates unnecessary risk.

    A password manager solves this by generating strong, unique passwords and storing them securely. It also simplifies the login process by autofilling credentials when needed. Beyond passwords, it can store sensitive information like Wi-Fi credentials, account details, and secure notes.

    It’s not flashy, but it’s one of the most practical and impactful upgrades you can make for your digital security.


    KVM Switch: One Desk, Multiple Systems

    If you regularly use more than one computer, a KVM switch can completely change your workflow. It allows multiple systems to share a single keyboard, mouse, and monitor.

    Instead of unplugging cables or switching inputs manually, you can move between systems instantly with the press of a button. It’s similar to switching inputs on a TV—but applied to your entire workstation.

    For anyone working across multiple machines, whether for testing, development, or general productivity, it reduces friction and keeps your workspace clean and efficient.


    USB-C Hub: Restoring Missing Connectivity

    Modern laptops, especially thinner models, often come with limited ports. While that helps with design and portability, it can make everyday tasks more difficult.

    A USB-C hub restores that lost functionality by adding HDMI for external displays, multiple USB ports, SD card readers, and even Ethernet for a more stable connection. Instead of carrying several adapters, everything is consolidated into one device.

    For anyone working with external drives, capture devices, or multiple peripherals, this is an essential tool that makes a laptop far more capable.


    Portable Label Printer: Organization That Pays Off

    Organization might not be the most exciting topic, but it has a huge impact on how efficient your setup is.

    A portable label printer allows you to clearly label cables, storage bins, drawers, and equipment. Because it connects via Bluetooth and is controlled through a phone app, it’s quick and easy to use whenever you need it.

    Over time, this makes troubleshooting easier, reduces confusion, and keeps your setup looking clean and intentional. It’s a small tool that delivers long-term benefits.


    Cloud Storage and Productivity Tools: Access Anywhere

    Not every useful tool is physical. Cloud platforms like Google Drive and Docs provide a simple but powerful way to store, organize, and access your data.

    With a free account, you get a generous amount of storage along with access to document, spreadsheet, and presentation tools. Everything stays synced across devices, and sharing is straightforward.

    Whether you’re managing personal files, collaborating on projects, or storing important records, having access to everything from anywhere adds flexibility and convenience to your workflow.


    Smart Plugs: Simple Automation With Real Impact

    Most people understand the basic function of a smart plug, but they often don’t take advantage of what it can really do.

    Beyond turning devices on and off remotely, smart plugs allow you to create schedules and automate routines. Lights can turn on when you get home, devices can shut off automatically at night, and multiple actions can be triggered with a single command.

    It’s an easy entry point into home automation, but it can scale into something much more powerful as you build out your setup.


    Desk Power Station: A Cleaner, More Functional Workspace

    Cable clutter is one of the most common problems in any workspace. A desk-mounted power station helps solve this by bringing power directly to your desk instead of hiding it underneath.

    With multiple outlets and USB ports built in, everything becomes easier to access, and you don’t have to crawl under your desk to plug things in. The result is a cleaner, more organized workspace that’s easier to use every day.

    It’s not the most exciting upgrade—but it’s one you’ll notice immediately.


    What These Tools Have in Common

    At first glance, this might seem like a random collection of tools. But there’s a clear pattern.

    Each one solves a specific problem—whether that’s protecting your equipment, improving speed, simplifying workflows, or reducing clutter. None of these are about being flashy. They’re about making your setup more efficient, reliable, and easier to use.


    Final Thoughts

    The best tech setups aren’t built on hype—they’re built on practicality.

    If a tool saves time, reduces frustration, or improves reliability, it’s worth considering. You don’t need everything on this list, but chances are there’s at least one or two tools here that could noticeably improve your workflow.


    What Would You Add?

    What’s one piece of tech that has made a real difference in your setup?

    It doesn’t have to be expensive or complex—just something that actually solved a problem. That’s usually where the best tools come from.


  • How To Choose The Right NAS in 2026!

    Competition in the NAS market is great for consumers, but it also makes choosing the right NAS more confusing than it needs to be.

    Turnkey vs DIY: The First Decision You Need to Make

    When it comes to choosing a NAS, everything really starts with one decision—are you going turnkey, or are you building your own?

    Turnkey NAS systems from companies like Synology and QNAP are best thought of as appliances. The hardware and software are designed to work together out of the box, the interface is polished, and most platforms offer app stores where you can install features with just a few clicks. This makes them very approachable, especially if you just want something that works without much effort.

    The tradeoff is flexibility. You’ll usually pay more for that convenience, and you’ll have fewer options when it comes to upgrading hardware or customizing the system.

    On the other hand, building your own NAS puts you in full control. You choose the hardware, the operating system, and how everything is configured. Platforms like TrueNAS and OpenMediaVault are free and open source, while Unraid offers a paid option with a more streamlined experience.

    The benefit here isn’t just cost—it’s flexibility. But that flexibility comes with responsibility. If something breaks, you’re relying on documentation, forums, and your own troubleshooting skills to fix it.

    For some people, that’s part of the appeal. For others, it’s a dealbreaker. If you enjoy tinkering and learning how systems work, DIY can be incredibly rewarding. If you just want something reliable and easy to manage, a turnkey system is usually the better choice.

    Simple Storage and Backups: When You Don’t Need Anything Fancy

    If your needs are straightforward—things like file storage, backups, and archiving—then you don’t need to overthink your NAS choice.

    In this case, simplicity becomes more important than performance. Systems like UniFi’s NAS lineup are designed with this in mind. They focus on clean, reliable storage without trying to do everything at once, which makes them especially appealing if you’re already using other UniFi gear.

    At this level, you’re not worrying about advanced features like virtual machines or media transcoding. You just want something that stores your data safely and is easy to access.

    Media Streaming: Where Hardware Starts to Matter

    As soon as you move into media streaming, your NAS choice becomes more important.

    Streaming to a single TV on your local network that can play your files directly is usually light work for any NAS. But once you introduce smartphones, tablets, or remote streaming, things change. This is where transcoding comes into play, which requires your NAS to convert media files into formats that different devices can handle.

    Some systems are better suited for this than others. Certain Synology models focus more on efficiency and software experience, while brands like UGREEN and QNAP often include Intel-based CPUs that are better equipped for media workloads. TerraMaster models can also perform well, but it depends on whether you choose an Intel-based system or an ARM-based one.

    Surveillance: Choosing the Right Ecosystem

    If you’re planning to use your NAS for surveillance, the software ecosystem becomes just as important as the hardware.

    Synology is often considered a strong option in this category due to its Surveillance Station platform, which is designed specifically for managing cameras and recordings. It also integrates with its own line of cameras, creating a more unified experience.

    Other brands like QNAP, Asustor, and TerraMaster also offer surveillance features, but they tend to focus more on compatibility with third-party cameras.

    UniFi takes a slightly different approach. Its surveillance system is highly regarded, but it operates separately from its NAS products. That means you won’t be using a UniFi NAS to record camera footage in the same way you would with other systems.

    Virtual Machines and Containers: Understanding Resource Needs

    Running virtual machines and containers is where your hardware requirements really start to increase.

    Virtual machines reserve dedicated resources. For example, if you assign 4GB of RAM to a VM, that memory is set aside whether it’s being used or not. Containers, on the other hand, are much more efficient. They share system resources and only use what they need at any given time.

    CPU resources are shared in both cases, but as you add more workloads—whether that’s apps, containers, or virtual machines—everything starts to compete for those resources.

    If you’re planning to go beyond basic usage, starting with at least 8GB of RAM and a multi-core CPU is a good baseline. While CPUs in turnkey NAS systems usually aren’t upgradeable, RAM often is, though some brands restrict support to their own modules.

    How the Number of Users Affects Performance

    The number of users accessing your NAS also plays a major role in determining what kind of system you need.

    A couple of users performing light tasks like file storage and backups won’t require much power. But as you add more users—especially if they’re streaming media, uploading files, or running applications at the same time—the demand increases quickly.

    It’s not just about how many users you have, but what they’re doing and whether those activities overlap. Multiple simultaneous tasks can put significant strain on both your CPU and memory.

    Software Experience vs Hardware Power

    One of the biggest differences between NAS brands comes down to software versus hardware priorities.

    Synology is known for its polished software, intuitive interface, and strong ecosystem of apps. QNAP leans more toward performance and flexibility, often offering better hardware and connectivity options but with a slightly steeper learning curve.

    Brands like TerraMaster and UGREEN tend to focus more on value and hardware performance, while Asustor sits somewhere in between, offering a balance of both.

    Security and ease of use often follow a similar pattern. Some systems are more locked down and easier to manage out of the box, while others offer more control at the cost of requiring more setup.

    Build Quality, Noise, and Placement

    The physical design of a NAS also matters more than you might expect.

    Some brands use more plastic, which helps reduce vibration and noise, making them better suited for living spaces or offices. Others use more metal or aluminum, which improves heat dissipation and gives a more premium feel.

    Where you place your NAS can influence what matters most. If it’s sitting near you, noise levels become more noticeable. If it’s tucked away in a closet or basement, cooling and performance may be a higher priority.

    Power consumption follows a similar logic. More drives, stronger CPUs, and heavier workloads all increase energy usage.

    Understanding Drive Bays and RAID

    Drive bays play a critical role in both your current storage capacity and your future flexibility.

    More bays mean more room to expand and more options for RAID configurations, which allow multiple drives to work together for protection, performance, or both.

    It’s important to remember that RAID is not a backup. It provides redundancy, but it doesn’t replace the need for proper backups.

    With a 2-bay system, you’re typically limited to mirroring your data, which reduces usable capacity. Moving up to 4 bays or more gives you more efficient options that balance storage and protection. Larger systems offer even more flexibility, including configurations designed for higher performance or increased fault tolerance.
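To put rough numbers on that, here’s a quick sketch of the math (8 TB drives are just an assumed example, and real systems lose a bit more to filesystem overhead):

```shell
#!/bin/sh
# Usable-capacity math for two common NAS layouts, assuming 8 TB drives.
size_tb=8

# 2-bay mirror (RAID 1): one full copy of everything, so half the raw space.
echo "2 x ${size_tb}TB RAID 1: $(( 2 * size_tb / 2 )) TB usable of $(( 2 * size_tb )) TB raw"

# 4-bay RAID 5: one drive's worth of space goes to parity.
echo "4 x ${size_tb}TB RAID 5: $(( (4 - 1) * size_tb )) TB usable of $(( 4 * size_tb )) TB raw"
```

The same drives go from 50% usable in a 2-bay mirror to 75% usable in a 4-bay parity setup, which is why stepping up in bays buys you efficiency as well as capacity.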

    Simplifying Your Decision

    At this point, it’s clear there’s no single “best” NAS. The right choice depends entirely on what you value most.

    If you prioritize ease of use and a polished experience, certain brands focus heavily on software and simplicity. If performance is your main concern—especially for media streaming or virtual machines—then hardware becomes more important. And if budget is a factor, there are options that offer solid performance without the premium price tag.

    Ultimately, the goal isn’t to find the perfect NAS—it’s to find the one that fits your needs without overcomplicating things.

    Final Thoughts

    Choosing a NAS doesn’t have to be overwhelming. Once you break it down by how you plan to use it—whether that’s storage, media, surveillance, or virtualization—the options start to make a lot more sense.

    Instead of chasing specs or comparing every model on the market, focus on your use case. That’s what will guide you to the right decision.

    And if you’re still narrowing things down, think about what matters most to you: simplicity, performance, or value. That answer alone will eliminate most of the confusion.


  • How to Access Home Assistant With a Raspberry Pi

    Today we’re installing Home Assistant on a Raspberry Pi with Tailscale! Our goal is to configure Tailscale so we can access our smart devices remotely, and, more importantly, we won’t be forwarding any ports!

    So, picture this! You’re at the airport, about to board a flight for a well-earned two-week vacation in Maui. Then it hits you—you forgot to turn off the TV.

    Now you’ve got two options. You can call someone with a key and hope they answer… or you can pull out your phone and turn it off yourself.

    That’s exactly what we’re setting up today.

    In this guide, I’ll show you how to install Home Assistant on a Raspberry Pi and access it remotely using Tailscale. Once it’s up and running, you’ll be able to control your devices from anywhere—no port forwarding, no complicated networking.


    What Home Assistant Actually Does

    Home Assistant is a platform that lets you control and automate smart devices in your home.

    You can:

    1. Turn devices on or off remotely
    2. Create automations
    3. Group devices by room
    4. Build a fully customized smart home

    For example, you could have a motion sensor turn on lights automatically, or schedule your TV to power on at a specific time.

    This isn’t a deep dive. The goal here is simple: get everything installed and working so you can start controlling your home remotely.


    What You’ll Need

    For this setup, I’m using a Raspberry Pi 4B with 4GB of RAM. That’s a great starting point for Home Assistant.

    You’ll also want:

    1. A Raspberry Pi (Pi 3, 4, or 5)
    2. A 32GB Class A2 microSD card
    3. Ethernet connection (recommended for setup)
    4. Power supply for the Pi

    The A2 rating matters because it guarantees faster random read and write speeds, which improves performance when running apps and services.

    If you’re new to the Raspberry Pi, think of it as a tiny computer about the size of a credit card. Despite its size, it’s powerful enough to run Linux, host services, and handle home automation without breaking a sweat.


    Installing Home Assistant on a Raspberry Pi

    Let’s get this up and running.

    Start by using the Raspberry Pi Imager to flash the Home Assistant operating system to your microSD card. This version is designed specifically for Home Assistant, which makes installing apps and add-ons much easier later on.

    Inside the Imager:

    1. Select “Other OS”
    2. Choose “Automation”
    3. Pick the correct Home Assistant version for your Pi

    If you prefer Balena Etcher, you can flash the image manually.

    Once complete:

    1. Insert the microSD card into the Pi
    2. Connect Ethernet
    3. Plug in power

    No monitor, keyboard, or mouse needed.


    Accessing Home Assistant for the First Time

    From another device on your network, open a browser and go to:

    homeassistant.local:8123

    That’s the default hostname and port.

    If it doesn’t load:

    1. Check your router for the Pi’s IP address
    2. Use that IP instead

    Once inside:

    1. Create an admin account
    2. Set your location
    3. Log in

    Then:

    1. Check for updates under Settings → System
    2. Review hardware and network info

    Adding Your First Devices

    Home Assistant will often detect devices automatically.

    Start by organizing your setup:

    1. Create a room (for example, “TV Room”)
    2. Assign devices to that room
    3. Test basic controls like power and volume

    If your device doesn’t show up:

    1. Go to Integrations
    2. Search for the device (Roku, Fire Stick, etc.)
    3. Follow the setup

    Setting Up Remote Access with Tailscale

    Now let’s make everything accessible from anywhere.

    First:

    1. Create a free Tailscale account
    2. Install Tailscale on your phone or laptop

    Then in Home Assistant:

    1. Go to Settings
    2. Open Add-ons or Apps
    3. Search for Tailscale
    4. Install and start it

    Log into your Tailscale account and connect Home Assistant.


    Understanding What Tailscale Enables

    Tailscale creates a secure private network between your devices.

    Here are two features worth knowing:

    1. Subnet Router: This lets Home Assistant act as a bridge to your entire network. You can access other devices without installing Tailscale on each one.
    2. Exit Node: This routes your internet traffic through your home network. It’s useful when using public Wi-Fi.

    You don’t need to enable these right away, but they’re powerful tools as your setup grows.


    Accessing Your Home Remotely

    Now for the payoff!

    On your smartphone:

    1. Turn off Wi-Fi
    2. Open the Tailscale app
    3. Connect to your Home Assistant device

    Then:

    1. Copy the MagicDNS or IP address
    2. Open it in your browser
    3. Add :8123 at the end

    Log in and control your devices like you’re at home.

    TV still on? Fixed in seconds.


    Why This Setup Works So Well

    This setup removes a lot of common headaches.

    You don’t need:

    1. Port forwarding
    2. Static IPs
    3. Complex firewall rules

    Tailscale handles secure networking. Home Assistant handles automation.

    Together, they give you full control from anywhere.


    Final Thoughts

    This setup is simple, but it opens the door to much more.

    Once you’re up and running, you can:

    1. Add more devices
    2. Build automations
    3. Monitor your home remotely
    4. Expand into a full smart home

    And the next time you forget to turn something off, it’s no longer a problem.

    Just pull out your phone and fix it. If you’re already using Home Assistant, I’d love to hear what you’re running. And if you’re just getting started, this is a great place to begin! 👍


  • This Tiny Computer Surprised Me, in a Good Way!

    I honestly didn’t know what to expect from the ZimaBoard 2. After a week of testing, one question kept coming up: could this actually replace one of my servers? And the timing worked out pretty well for me, because I’ve been planning to add another Proxmox node to my setup. So this felt like the perfect opportunity to see if the ZimaBoard 2 could handle a few virtual machines—or maybe even serve as a dedicated Docker host using ZimaOS.

    Transparency and First Impressions

    For transparency, IceWhale sent over this review kit along with a few accessories. No money changed hands, and as always, they didn’t get to influence this content in any way.

    Right out of the box, the packaging stood out. Instead of foam or plastic inserts, IceWhale uses a layered corrugated cardboard design. With a small modification, that packaging can even be turned into a dock for the board and a couple of SSDs. It’s a small detail, but it sets the tone for a product that feels a bit different from typical hardware.

    What Is the ZimaBoard 2?

    If you haven’t seen one before, the ZimaBoard is kind of what you’d get if you crossed a Raspberry Pi with a server. It’s the size of a deck of cards, but it has some good heft, feeling more substantial thanks to its all-aluminum enclosure.

    It’s completely fanless out of the box, using its enclosure as a passive heatsink. In terms of connectivity, you get dual SATA connections, two 2.5Gb Ethernet ports, two USB 3.0 ports, a mini DisplayPort, and, interestingly, a PCIe 3.0 slot, which opens the door for NVMe expansion or a faster network card!

    The unit I tested came with 16GB of RAM and 64GB of onboard eMMC storage, though there are lower-tier options available. At first glance, it’s clear this isn’t your average single-board computer—it’s something designed to sit somewhere between a lightweight server and a full NAS.

    Where the ZimaBoard 2 Fits in the Zima Ecosystem

    The ZimaBoard 2 sits right in the middle of IceWhale’s lineup. On one end, you have the original ZimaBoard and the ZimaBlade, which lean more toward experimentation and maker projects. On the other end, there’s the ZimaCube, which is positioned as a more complete NAS or home server solution.

    This second-generation board builds on the original concept with faster RAM, improved networking, a newer CPU, and updated PCIe support. All of the boards ship with ZimaOS preinstalled, which makes getting started incredibly quick.

    Setting the ZimaBoard 2 Up as a NAS

    Getting started was straightforward. I connected two 2TB drives using the included SATA Y-cable, mounted them into the drive bay, and powered everything on. Since the system is designed to run headless, I simply scanned my network, found the device, and jumped into the web interface.

    Within ZimaOS, both drives were automatically detected. After a few clicks, I created a RAID 1 mirror—giving me redundancy without much effort.

    The interface itself is clean and browser-based, with an app store that installs services as Docker containers. That means most applications can be deployed with a single click and remain isolated from the rest of the system.

    ZimaOS: Simple, but With a Few Quirks

    Under the hood, everything runs on Docker, which keeps things modular and easy to manage. There’s even a built-in system monitor for checking resource usage in real time.

    That said, a few things felt a bit hidden to me. Features like SMB, SSH, and HTTPS are tucked under “Developer Mode,” which feels like they should be more front-and-center given how commonly they’re used.

    Still, once everything is configured, it’s a very approachable environment—especially for anyone new to self-hosting.

    Real-World Performance and Power Efficiency

    After setting everything up, I installed Jellyfin for media streaming, added a Linux Mint virtual machine, and ran multiple workloads simultaneously. Even with streaming across devices and a VM running in the background, the system remained surprisingly responsive.

    Power consumption was one of the board’s biggest highlights. Under load, the system hovered around 26 to 27 watts. At idle, it dropped down to roughly 7 watts. That’s incredibly efficient, especially considering it was acting as both a NAS and a lightweight server.

    Networking performance was equally impressive. Using a 2.5Gb connection, I was able to sustain speeds around 2.3Gbps—even while streaming and running a VM.

    Thermals and Cooling

    Thermal performance was solid for a fanless system. During stress testing, temperatures settled in the mid-80s Celsius, with occasional peaks into the low 90s. That might sound high, but it’s within safe operating limits for this type of design.

    When I added the optional fan, it brought temperatures down by about 10 degrees and helped prevent heat buildup during extended workloads. While the system runs fine without it, the fan adds a bit of extra headroom if you plan to push it harder.

    Turning the ZimaBoard 2 Into a Proxmox Node

    This is where things got really interesting. Instead of replacing ZimaOS, I installed Proxmox onto an NVMe drive connected via the PCIe slot. The installation took around seven minutes, and once it was up and running, I configured additional storage and started deploying virtual machines.

    I spun up Linux Mint, Zorin OS, and Home Assistant, and even ran updates across multiple VMs simultaneously. Despite allocating a large portion of the available RAM, the system handled everything without noticeable slowdowns.

    At idle with multiple VMs running, power usage stayed in the 7 to 9 watt range, which is incredibly efficient for a virtualization host.

    Storage Performance and Limitations

    Storage performance scaled exactly as expected. The onboard eMMC is fine for the operating system but not ideal for heavy workloads. SATA SSDs provide a noticeable improvement, while NVMe delivers a massive jump in performance.

    However, there is a limitation. The PCIe slot runs at Gen 3 x1, which caps total bandwidth. That means even though you can run multiple NVMe drives, they’re sharing that single lane. You still get improved performance, but not a linear increase.
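For a sense of scale, a PCIe 3.0 lane signals at 8 GT/s with 128b/130b encoding, which works out to just under 1 GB/s of usable bandwidth shared across everything on that slot; a quick sketch of the math:

```shell
#!/bin/sh
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
# 8000 Mb/s * 128/130 = effective line rate; divide by 8 for MB/s.
lane_mbs=$(( 8000 * 128 / 130 / 8 ))
echo "PCIe 3.0 x1 ceiling: ~${lane_mbs} MB/s"   # ~984 MB/s, shared by all drives on the slot
```

A single NVMe drive can saturate that on its own, which is why adding a second drive improves capacity and flexibility but not total throughput.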

    Even with that limitation, NVMe storage still felt fast and responsive!

    So Who Is the ZimaBoard 2 For?

    The ZimaBoard 2 sits in an interesting middle ground. On one hand, ZimaOS makes it feel like a turnkey appliance. You can get up and running quickly without needing deep technical knowledge.

    On the other hand, the hardware and expandability make it feel like something much more flexible. You can run Proxmox, experiment with Docker, and build out a variety of services.

    There are a few quirks. Some features feel slightly hidden, and certain hardware configurations require a more hands-on approach. But once you get past that, the system really opens up.

    Final Thoughts

    After a week of testing, I honestly don’t see the ZimaBoard 2 replacing any of my servers.

    However, what it does incredibly well is fill the gap between a single-board computer and a traditional server. It’s efficient, versatile, and surprisingly capable for its size.

    It feels less like a fixed device and more like a platform you can grow into. Whether you’re offloading services, experimenting with VMs, or building a homelab, it just fits. Honestly, that’s what makes the Zima 2 so interesting!


  • Can This Raspberry Pi Really Replace Your Computer?


    First Impressions: Clean, Capable, and Promising

    Under the hood, the Raspberry Pi 500+ carries over the same quad-core ARM Cortex-A76 processor found in the Raspberry Pi 5, along with a fast 256GB NVMe boot drive, a generous 16GB of RAM, and a solid mix of connectivity including USB ports, Gigabit Ethernet, Wi-Fi, Bluetooth 5.0, and even access to GPIO pins. On paper, it sounds like the perfect balance between convenience and performance, something that could realistically serve both as a desktop machine and a platform for Raspberry Pi enthusiasts.

    Before going further, I should mention that this Raspberry Pi 500+ was sent over by the Raspberry Pi Foundation at no charge for a review I did back in September of last year. As always, though, all opinions here are my own.


    Living With It: Where the Design Starts to Struggle

    With that out of the way, the real question becomes whether this device actually delivers on that promise once you move beyond the initial impression. After spending several months trying to use it as a desktop replacement, I found that while the idea is compelling, the execution feels a bit more complicated.

    The biggest issue shows up in the physical design. At first, having everything built into the keyboard feels clean and efficient, but once you start actually using it day to day, it becomes clear that this design introduces its own set of problems. You’re constantly dealing with cables coming directly out of the keyboard—power, display, peripherals—and that creates a setup that feels more tethered than streamlined. Instead of sitting neatly on a desk like a typical keyboard, it ends up acting as the central hub for everything, which can feel bulky and awkward, especially when you’re working around display cables and adapters.


    Hardware Experience: Strong Foundation, Limited Flexibility

    To be fair, the Raspberry Pi 500+ does a lot right when it comes to the hardware itself. It runs quietly, consumes very little power, and the move to NVMe storage is a noticeable upgrade over the traditional microSD approach. The system feels responsive for everyday tasks, and the ARM processor does a solid job handling light productivity, web browsing, and general use. Even the keyboard, at least in terms of feel, is enjoyable to type on, thanks to its tactile switches.

    But once you move past that initial experience, some limitations start to stand out. One of the biggest is the lack of configuration options. There’s only one version of the 500+, which means no flexibility in RAM or storage at the time of purchase. While you can technically swap out the NVMe drive for something larger, it would have been nice to see multiple configurations available, especially if this is being positioned as a desktop replacement. When you compare that to even entry-level desktops or mini PCs, which often allow upgrades or come in different tiers, the Pi 500+ starts to feel a bit restricted.


    The Keyboard: Great Feel, Questionable Practicality

    The keyboard itself is also a bit of a mixed experience. It uses tactile blue switches that provide a satisfying click, but they are on the louder side, which may not be ideal depending on your environment. While the keycaps can be swapped out, the switches themselves cannot, which limits customization. On top of that, the lettering on the keys is relatively dim and doesn’t contrast well with the white surface, making it harder to read than you might expect.

    The RGB lighting, while visually interesting at first, ends up feeling more like a novelty than a useful feature, and in some cases actually makes the keys harder to see rather than easier. Over time, what initially feels like a fun addition becomes something you’re more likely to turn off.


    Connectivity and Ports: Good, But Still Frustrating

    Connectivity is generally solid, with multiple USB ports and Gigabit Ethernet providing enough flexibility for most setups. However, the continued reliance on micro-HDMI ports is still frustrating. If you’ve used micro-HDMI before, you already know how inconvenient it can be, especially when dealing with adapters and cable management. It’s one of those small details that ends up affecting the overall experience more than you might think.


    Pricing and Value: Where the Math Changes


    Software Reality: ARM Still Has Limits

    This is where software and platform limitations also come into play. ARM has come a long way, and for many tasks, it works perfectly well. You can browse the web, use office applications, write code, and handle general productivity without much trouble. But modern desktop workflows still lean heavily toward x86, and that creates gaps in compatibility that are hard to ignore.

    Many commercial applications simply aren’t available on Linux at all, and others either lack native ARM support or don’t perform as well as their x86 counterparts. Even when you look beyond specific applications, the overall software ecosystem still feels a bit limited compared to more traditional desktop environments. Raspberry Pi OS continues to improve, especially with the newer Trixie-based releases, but it still doesn’t offer the same level of depth or polish as distributions like Linux Mint or Fedora. And even if you install those distributions on the Pi, the underlying limitation remains the same—the ARM architecture itself.


    The Mini PC Comparison: A Tough Benchmark

    When you start comparing the Raspberry Pi 500+ to alternatives, the conversation shifts even more. For roughly $100 more, you can get an x86 mini PC that offers more I/O, full-sized HDMI ports, larger storage options, faster processors, and upgradeable components. You also gain access to a much broader software ecosystem, including full support for Windows or more traditional Linux distributions without the constraints of ARM compatibility. That difference in flexibility makes it harder to justify the Pi 500+ as a long-term desktop solution.


    Where It Doesn’t Fit: Homelab and Tinkering

    Beyond the desktop use case, the Raspberry Pi ecosystem itself has always appealed to two main groups: hardware tinkerers and homelab users. Most people who spend time with Raspberry Pi devices eventually find themselves somewhere between those two worlds. The challenge for the Pi 500+ is that it doesn’t fit neatly into either category.

    From a homelab perspective, you could technically use it as a server, but the built-in keyboard makes that impractical. It takes up more physical space than necessary and doesn’t integrate well into a rack or compact setup. For tinkerers, while the device does include GPIO pins, their placement on the back of the keyboard makes them difficult to use in a clean or stable way. Instead of stacking components neatly, you end up working around the design rather than with it.


    Who It Actually Makes Sense For

    So that brings us back to the core question: what is the Raspberry Pi 500+ actually trying to be? After spending time with it, I think the answer becomes clearer. It’s not really a desktop replacement in the traditional sense, and it’s not the best option for homelab or hardware-focused projects either.

    Where it does make sense is as an entry point for beginners—people who want to explore the Raspberry Pi ecosystem without having to worry about setup, compatibility, or choosing the right components. For someone in that position, the Pi 500+ offers a simple, plug-and-play experience. You don’t need to flash an operating system, figure out power requirements, or assemble anything. You plug it in, connect your display, and you’re ready to go.


    Final Thoughts

    Whether that convenience is worth the price is ultimately going to depend on the individual. For some, it may be exactly what they’re looking for. For others, especially those who want more flexibility or long-term expandability, it may feel limiting.

    In the end, the Raspberry Pi 500+ is a well-intentioned device with a clear vision, but one that doesn’t fully land in practice. It’s an interesting addition to the lineup, and it absolutely has its place, but it’s not the universal solution it might appear to be at first glance.

    So what do you think? Does the Raspberry Pi 500+ work as a desktop replacement, or does it make more sense as an entry-level system for new users?


  • How to Move Your Data from Synology to UGREEN

    You probably have better things to do this weekend than babysit a multi-terabyte NAS migration… or maybe you don’t—and that’s completely fine too. But if you’re moving from Synology to UGREEN, the real question becomes: how do you transfer terabytes of data from one NAS to another without constantly swapping external drives or dealing with painfully slow transfers?

    Now, yes—you can migrate large amounts of data remotely, but in most cases it’s just not worth it. Between ISP data caps, bandwidth limitations, and the occasional disconnect, it ends up being more frustrating than anything else. Remote setups make more sense for incremental backups, not full migrations. So for this, we’re keeping everything local.


    Preparing Both NAS Devices

    Before jumping in, I’m assuming your storage pools are already configured on the UGREEN and you know which folders you want to migrate. The first step is making sure both NAS devices are ready to communicate properly. On my setup, I was running DSM 7.2.3 on the Synology and version 1.13.1 on the UGREEN.

    From there, I verified that rsync was enabled on both systems and created a dedicated user account specifically for the transfer. You’ll also want to make sure port 873 is open, since that’s what rsync uses. While you’re in there, it’s a good idea to enable WebDAV and SSH as well—we’ll be using SSH later on.

    One thing that tripped me up for a while was firewall settings. Make sure each NAS is whitelisted in the other device’s firewall, and if you’re using Synology, double-check the Protection settings to ensure your UGREEN’s IP isn’t blocked. Mine was blocked for weeks, and I couldn’t figure out why rsync wasn’t working. Also confirm that your rsync user has permission to access the folders you want to migrate—otherwise the job will fail before it even starts.
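If you want to confirm port 873 really is reachable before kicking off a job, a plain TCP probe works; this sketch uses bash’s built-in /dev/tcp so it doesn’t depend on extra tools, and 192.168.1.50 is a placeholder for your UGREEN’s address:

```shell
#!/bin/bash
# Probe a TCP port; succeeds only if something is listening and not firewalled.
check_port() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

NAS_IP="192.168.1.50"   # placeholder: substitute your UGREEN's IP
if check_port "$NAS_IP" 873; then
    echo "rsync port 873 is open on $NAS_IP"
else
    echo "port 873 unreachable: check the rsync service and both firewalls"
fi
```

Running this from the Synology side (and the reverse from the UGREEN) would have saved me those weeks of head-scratching over the blocked IP.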


    Migrating Data Using rsync and Hyper Backup

    For the actual migration, rsync is really the most practical option when you’re working locally. On the Synology side, I used Hyper Backup to create a new task. After selecting “Folders and Packages,” you’ll see a variety of destinations like USB, cloud services, and enterprise options—but what you’re looking for is the rsync server under the File Server section.

    I chose a single-version backup since this was just a one-time migration, entered the UGREEN’s IP address, user credentials, and port 873, and left encryption off since everything was staying on my local network. From there, I selected the destination folder on the UGREEN and chose which folders to transfer from the Synology. You can include application data if you want, but I skipped that for now.

    Once everything is set, you name the task, confirm the destination, and let it run. Hyper Backup will create any necessary folders on the UGREEN automatically. Depending on how much data you’re moving and your network speed, this can take a while. In my case, transferring about 15TB over a 1Gb connection took roughly 48 hours. The nice part is that even if something interrupts the process, rsync will pick up where it left off, and both systems keep logs so you can verify everything completed successfully.
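That 48-hour figure lines up with simple arithmetic: gigabit Ethernet tops out at 125 MB/s in theory, and a sustained real-world rate of around 90 MB/s (my assumption below) gets you to roughly two days:

```shell
#!/bin/sh
# Rough transfer-time estimate: 15 TB over gigabit Ethernet.
tb=15
rate_mbs=90    # assumed sustained rate in MB/s (theoretical max is ~125)

seconds=$(( tb * 1000000 / rate_mbs ))   # 1 TB = 1,000,000 MB
echo "Estimated time: $(( seconds / 3600 )) hours"   # ~46 hours
```

Protocol overhead, small files, and drive speed all eat into the sustained rate, so padding the estimate by a few hours is reasonable.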


    Setting Up a Backup Strategy on UGREEN

    Once the migration is done, the next step is setting up a proper backup strategy on the UGREEN. The built-in Backup & Sync options are geared toward backing up to another NAS or services that support WebDAV or rsync, which is great if you have that infrastructure—but if you just want something simple, like backing up to an external USB drive, there isn’t really a straightforward built-in option. That felt a little odd to me, so instead I set up my own solution.


    Creating a Local Backup with rsync over SSH

    To handle local backups, I created a simple script using rsync over SSH. The script looks at a list of source folders, copies them to an external drive, and logs everything so I can confirm it worked. I’ll be sharing that script on my free Patreon, but the core idea is straightforward—you define your source path, your destination path, and the folders you want to mirror. Everything else is just there for safety and logging.

    On the UGREEN, understanding folder paths is important. In the file manager, you’ll see ‘User’, ‘Shared’, and ‘Personal’ folders, but for scripting, you need the absolute path—the exact location on the system. When you log in over SSH, you’ll land in your home directory, which corresponds to your Personal folder. Running pwd will show you the full path.

    If you need to locate something in the ‘Shared’ directory, you can use the find command to reveal its absolute path. For example, finding a folder named “MackeyTech” might return something like ‘/volume1/MackeyTech’, which becomes your source path.

    To identify your external drive, you can use lsblk and look for the mount point. In my case, the drive was mounted at /mnt/@usb/sde1, and I created a folder within it for backups.


    Testing and Running the Backup Script

    Before running the full script, I always recommend doing a dry run with ‘rsync’. This shows you what would be copied without actually making changes, which is a great way to confirm everything is set correctly. Once you’re confident, you can run the full command or execute your script directly.

    After that, it’s just a matter of making the script executable and running it manually once to verify everything works as expected. From there, you’re ready to automate it.
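If you’d like to see the dry-run behavior safely first, here’s a self-contained demo using temporary directories instead of real data (the -n flag is what makes it a dry run):

```shell
#!/bin/sh
# Demonstrate --dry-run: rsync reports what it WOULD copy without copying.
SRC=$(mktemp -d); DEST=$(mktemp -d)
touch "$SRC/example.txt"

rsync -avn "$SRC/" "$DEST/"   # -n (--dry-run): report only, change nothing

ls -A "$DEST"                 # still empty: the dry run copied nothing
```

Once the dry-run output lists exactly the folders you expect, drop the -n and run it for real.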


    Automating Backups with Cron

    To automate the backup process, I used cron, which is Linux’s built-in scheduler. With a simple cron job, you can have your script run daily, weekly, or on whatever schedule you prefer. In my case, I set it to run every day at 3am.

    The format might look a little intimidating at first, but it’s essentially just defining when the script should run, followed by the path to the script itself. Once it’s saved, cron takes over and runs the job in the background without any further input.
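As an example, the five fields are minute, hour, day of month, month, and day of week, so a daily 3 AM job looks like this (the script path is a placeholder for wherever you saved yours):

```
# minute hour day-of-month month day-of-week  command
0 3 * * * /path/to/backup.sh
```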

    Since rsync is incremental, after the first full backup, it only copies new or changed files, which makes ongoing backups much faster and more efficient.


    Final Thoughts

    At this point, your data is migrated, backed up, and running on a schedule without any manual intervention—which means you can finally stop babysitting your NAS and go do something a little more interesting with your weekend.


  • I Finally Tried CachyOS and Here’s What Happened!

    Arch Linux has this long-standing reputation for being intimidating—something reserved for hardcore Linux users who enjoy living in the terminal and configuring everything from scratch. But I wanted to challenge that idea and see if an Arch-based distro could actually work as a practical daily driver. That’s where CachyOS comes in.

    One of the biggest differences is that Arch is a rolling release. Instead of waiting for major version upgrades, you’re always getting the latest updates, which means newer drivers, performance improvements, and security patches arrive continuously. It uses Pacman as its package manager, which is fast and clean, and it also gives you access to the Arch User Repository, or AUR, which is a massive library of community-maintained software. That said, Arch has a reputation for being more hands-on, which can be a barrier for a lot of people. CachyOS aims to smooth out those rough edges while keeping everything that makes Arch powerful.


    My Test Setup

    To test CachyOS, I used my Lenovo gaming laptop with a 14th-gen Intel Core i9, 32GB of RAM, dual 1TB SSDs, and an NVIDIA RTX 4070. So this wasn’t just a casual test—I wanted to see how it handled both everyday tasks and gaming performance.


    Installation and First Impressions

    Booting into the USB drops you straight into the live environment, and one of the first things you see is the CachyOS Hello screen. This is a really nice touch. Instead of being dropped onto a blank desktop with no direction, you get quick access to documentation, forums, software tools, and an overview of the system. When you’re ready to install, there’s a clear entry point right in the middle.

    The installer itself is straightforward but still gives you a lot of control. One thing I liked is that it lets you choose your bootloader. While many distros default to GRUB, CachyOS gives you options like rEFInd, systemd-boot, and Limine. I stuck with GRUB for simplicity, but it’s nice to have the choice.

    When it came to partitioning, I used manual setup since I was installing to my second SSD. I initially tried pointing to my existing Windows EFI partition, but it turned out to be too small. So I created a new 600MB EFI partition within the space I allocated for CachyOS, and then used the remaining space for the main Linux partition. I chose ext4 for simplicity, although Btrfs is also available if you want snapshot functionality.
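    For reference, the disk layout I ended up with looked roughly like this. The device names are hypothetical examples from a dual-SSD machine — yours will differ, so always confirm with lsblk before touching anything:

    ```shell
    # Inspect the current disk layout before and after partitioning
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT

    # Example result (hypothetical device names):
    # nvme0n1          931.5G                    <- first SSD: existing Windows install
    # nvme1n1          931.5G                    <- second SSD: CachyOS
    # |-nvme1n1p1        600M  vfat  /boot/efi   <- new EFI partition
    # `-nvme1n1p2      930.9G  ext4  /           <- main Linux partition
    ```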


    Desktop Experience and Setup

    During setup, you can also install additional desktop environments alongside your primary choice, which is great if you like experimenting. Just keep in mind that running multiple desktops can lead to duplicate apps and a bit of menu clutter.

    After installation, which only took about five minutes, I was dropped back into the CachyOS Hello dashboard. This acts as a central hub where you can install common applications, run system updates, and access documentation without needing to jump straight into the terminal. It’s small touches like this that make CachyOS feel approachable while still being powerful.

    Another thing that stood out was the inclusion of a kernel manager. It’s not something you’ll use every day, but it gives you a safe and simple way to switch kernels if you’re troubleshooting or tuning performance.


    Terminal, Tools, and Usability

    Opening the terminal was a bit of a surprise because CachyOS uses the Fish shell by default instead of Bash. Fish stands for “Friendly Interactive Shell,” and it’s designed to be more user-friendly, with built-in autocomplete and suggestions. If you prefer the traditional experience, you can easily switch back to Bash, but I actually found Fish pretty nice to use.
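    If you do want Bash back, the switch is a single command. A quick sketch (Fish keeps its own configuration under ~/.config/fish/, so nothing in your Bash dotfiles is touched):

    ```shell
    # Show which shell you're currently running
    echo $SHELL

    # Set Bash as your default login shell (takes effect at next login)
    chsh -s /bin/bash

    # Or keep Bash as default and just launch Fish when you want
    # its autocomplete and suggestions; type 'exit' to drop back out
    fish
    ```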

    CachyOS also includes a built-in recovery tool using chroot, which is extremely useful if something goes wrong. If your system fails to boot or you run into issues, you can boot from a live USB, chroot into your existing installation, and attempt repairs. CachyOS makes this process more approachable by guiding you through it instead of expecting you to manually mount everything.
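    If you ever need to do the repair by hand from a live USB, the manual version looks roughly like this. The partition names are examples only — identify yours with lsblk first — and the GRUB commands assume a standard UEFI install with the EFI partition mounted at /boot/efi:

    ```shell
    # Identify your root and EFI partitions
    lsblk -f

    # Mount the installed root partition (example device name)
    sudo mount /dev/nvme1n1p2 /mnt

    # Mount the EFI partition the bootloader lives on
    sudo mount /dev/nvme1n1p1 /mnt/boot/efi

    # Enter the installed system
    sudo arch-chroot /mnt

    # From inside the chroot you can repair things, e.g. reinstall GRUB:
    #   grub-install --target=x86_64-efi --efi-directory=/boot/efi
    #   grub-mkconfig -o /boot/grub/grub.cfg

    # When finished, leave the chroot and unmount everything
    exit
    sudo umount -R /mnt
    ```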


    Gaming Performance on CachyOS

    CachyOS comes with its own Proton build alongside Valve’s standard versions, so you already have multiple compatibility layers ready to go. In my case, the game I tested launched without any issues. My NVIDIA RTX 4070 was recognized right away, and running at 1080p on high settings, I was seeing frame rates in the high 60s to low 70s, with consistent performance overall.


    Switching Desktop Environments

    Out of curiosity, I also tried switching to the Budgie desktop environment. What stood out to me was how consistent everything felt underneath. The CachyOS Hello dashboard was still there, the same tools were available, and even Fish remained the default shell. It really reinforces that the core experience stays the same, regardless of which desktop environment you choose.


    Final Thoughts

    If you’ve been curious about Arch but didn’t want to deal with the complexity, CachyOS feels like a great entry point. It gives you the benefits of a rolling release and access to the Arch ecosystem, without throwing you straight into the deep end.

    That’s going to wrap things up for this overview. If you found this helpful, feel free to share your thoughts or let me know what Linux distro you’re currently using. And as always, make sure you’re subscribed to Mackey Tech—there’s plenty more coming soon.


  • Why I finally decided to leave Synology after 15 years

    Synology: A Longstanding Industry Leader

    My own journey with Synology goes back to around 2009. My first unit was a simple 2-bay NAS that I used for self-hosting websites, cataloging movies, learning virtual machines, and streaming media. In 2014, I upgraded to the DS214play, and then again in 2020 to the DS920+. Over time, I became deeply familiar with the Synology ecosystem—and honestly, a bit of a loyalist.

    Their DiskStation Manager (DSM) software is excellent. It’s smooth, intuitive, and packed with both first-party and third-party apps. But as my needs evolved—especially with growing storage demands for YouTube backups—I started to hit limitations. My 20TB setup was filling up, and I needed more performance, faster networking, and a platform that could scale with me.

    That’s when I realized it was time to look beyond Synology.


    Reconsidering the Upgrade Path

    My first instinct was to stay within the Synology ecosystem and simply upgrade to a newer model, the DS1525+. But digging deeper revealed some concerns.

    The biggest issue was the lack of hardware transcoding. The DS1525+ uses an AMD Ryzen V1500B processor, which, while capable, doesn’t support hardware-accelerated media transcoding. That’s actually a step backward from my older DS920+, which used an Intel Celeron with Quick Sync support. While I wasn’t heavily reliant on transcoding at the time, I didn’t want to lose that capability.

    Then came a more significant concern. In 2025, Synology introduced a policy change for its newer NAS models, limiting full functionality and support to Synology-branded or certified drives. Third-party drives would still work—but with restrictions on features like storage pool creation, drive health monitoring, and firmware updates.

    As someone who values flexibility—especially in a homelab—that was a turning point.


    Discovering UGREEN as an Alternative

    That search led me to the UGREEN DXP6800 Pro, and its hardware advantages went well beyond first impressions.

    The DXP6800 Pro features a 10-core Intel i5-1235U processor with integrated Iris Xe graphics, which supports hardware acceleration for media streaming. In real-world terms, that means significantly better performance for tasks like Plex or Jellyfin, along with stronger multitasking capabilities. Benchmarks consistently show it outperforming the Ryzen V1500B, especially in single-threaded workloads.

    On top of that, it includes dual NVMe slots for SSD storage pools or even running a separate operating system like TrueNAS. Add in Thunderbolt 4, multiple USB ports, and expanded I/O, and the difference in flexibility becomes hard to ignore.


    Performance vs Price: The Growing Gap

    One of the biggest factors in my decision was the widening gap between price and performance.

    Synology has always charged a premium, and historically, that premium made sense because of DSM and overall reliability. But in recent years, that balance has shifted.

    That’s not just a small difference—it’s a pattern.

    Across multiple models, Synology tends to offer older hardware with fewer resources, while competitors like UGREEN deliver more modern components, better networking, and greater expandability at similar price points. And with Synology, many features—like 10GbE, RAM upgrades, or expanded storage—often come as additional costs.

    At some point, you have to ask what that premium is really buying.


    Software Ecosystem: Polish vs Flexibility

    There’s no denying that Synology’s biggest strength is its software.

    DSM is one of the most refined NAS operating systems available. It offers a wide range of integrated applications—from backup solutions and file syncing to virtualization and surveillance—all working seamlessly together. It’s cohesive, reliable, and incredibly user-friendly.

    UGREEN’s software ecosystem, while newer, is already surprisingly capable. It covers the essentials: file sharing, backups, snapshots, Docker containers, virtual machines, and media streaming. But it doesn’t yet match the depth or maturity of Synology’s platform.

    That said, UGREEN leans heavily into openness. You’re not locked into a specific ecosystem—you can run alternative operating systems like TrueNAS or deploy standard Linux containers without restriction. For someone running a homelab, that flexibility is a major advantage.


    Mobile Experience and Real-World Use

    One area where UGREEN genuinely surprised me was the mobile experience.

    With Synology, most setup and configuration is done through a desktop interface. UGREEN, on the other hand, offers a mobile app that handles the entire setup process—from initial configuration to storage pool creation and user management.

    I was able to unbox the NAS, connect it, and complete the entire setup from my phone without ever opening a laptop. That level of convenience was refreshing and felt more modern compared to what I was used to.

    That said, not everything is perfect. UGREEN’s sync and backup tools can be a bit inconsistent across devices, especially when dealing with file naming issues. It’s an area that still needs refinement.


    Final Thoughts: Why UGREEN Made Sense for Me

    At the end of the day, this wasn’t an easy decision. Synology has been a reliable part of my setup for years, and there are still things I genuinely miss—especially the polish and maturity of their ecosystem.

    But when I stepped back and looked at what I actually needed, the decision became clearer.

    UGREEN offered more performance, more flexibility, and better overall value. The hardware is simply more capable, and the platform gives me room to grow without artificial limitations.

    Synology still makes a strong case for users who want a turnkey, polished experience. But for my workflow—and especially in a homelab environment where performance and flexibility matter—UGREEN was the better fit.

    And for the first time in years, stepping outside the Synology ecosystem felt like the right move!
