crow
Build an AI‑Ready Linux Workstation Under $800 in 2024 – Step‑by‑Step Guide

Intro: The Budget AI Dilemma

Ever dreamed of training a neural net or running GPT‑style inference on your own desk‑side machine, but the price tag keeps you from buying that “high‑end” GPU?

I was in the same spot last year. I wanted to experiment with LM Studio and Golem, but my laptop’s 2 GB VRAM felt like a stone wall. After hunting for deals and swapping some components, I landed on an $800 build that actually runs most of the popular AI workloads on Linux.

Quick Snapshot – My $800 Build (2024)

  • CPU: AMD Ryzen 5 5600G 6‑core 3.9 GHz – $110
  • GPU: NVIDIA RTX 3060 (12 GB) – $300
  • Motherboard: MSI B550M PRO‑VDH WIFI – $80
  • RAM: 16 GB DDR4 @3200 MHz – $70
  • Storage: 1 TB NVMe SSD – $90
  • Power Supply: EVGA 500W Gold – $55
  • Case: NZXT H510 – $70

Total: ~$775 (prices vary, but you can find similar bundles for less)
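
As a sanity check on the arithmetic, summing the line items above (prices are snapshots and will vary):

```python
# Snapshot prices from the parts list above (USD).
parts = {
    "CPU": 110, "GPU": 300, "Motherboard": 80,
    "RAM": 70, "Storage": 90, "PSU": 55, "Case": 70,
}
print(sum(parts.values()))  # 775
```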

This guide walks you through exactly how to replicate a build like that, install Ubuntu, set up NVIDIA drivers & CUDA, get LM Studio running, and even hook it into the Golem network—all while keeping your wallet happy.


Step 1: Pick Budget‑Friendly Hardware

CPU

  • Why? AI inference is GPU‑bound; the CPU just needs to keep data moving.
  • Choice: AMD Ryzen 5 5600G – excellent integrated graphics for fallback, low power draw, and a sweet price/performance ratio.

Affiliate: [AFF: Ryzen 5 5600G on Amazon]

Motherboard

  • Must support PCIe 4.0 (for NVMe speed) and have a spare M.2 slot if you plan to add a secondary SSD later.
  • Choice: MSI B550M PRO‑VDH WIFI – Wi‑Fi included, no hidden fees.

Affiliate: [AFF: MSI B550M PRO‑VDH]

GPU

  • The heart of any AI workstation. 12 GB VRAM is more than enough for most mid‑size models and gives you headroom for future projects.
  • Choice: NVIDIA RTX 3060 – excellent CUDA core count, Tensor cores, and price.

Affiliate: [AFF: Amazon RTX 3060 (~$300)]
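
To make "12 GB is more than enough" concrete, here's a back-of-the-envelope sketch. The 20% overhead factor (for activations and KV cache) and the 1 GB ≈ 1e9 bytes simplification are rough assumptions, not measurements:

```python
def fits_in_vram(params_billion, bytes_per_param, vram_gb, overhead=1.2):
    """Rough VRAM estimate: weight size in GB times an overhead factor."""
    needed_gb = params_billion * bytes_per_param * overhead
    return needed_gb, needed_gb <= vram_gb

# Compare a few common model sizes against the RTX 3060's 12 GB.
for params, bits in [(7, 16), (7, 4), (13, 4)]:
    gb, ok = fits_in_vram(params, bits / 8, 12)
    print(f"{params}B @ {bits}-bit: ~{gb:.1f} GB -> {'fits' if ok else 'too big'}")
```

The takeaway: full-precision 7B models overflow 12 GB, but 4-bit quantized 7B and even 13B models fit comfortably.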

RAM

  • 16 GB is the sweet spot for most beginner AI workloads; if you’re training large models, consider 32 GB.
  • Choice: G.Skill Ripjaws V DDR4‑3200 – stable and affordable.

Affiliate: [AFF: G.Skill Ripjaws DDR4]

Storage

  • Speed matters when loading datasets. An NVMe SSD keeps transfer times low.
  • Choice: Crucial P5 1 TB NVMe – balanced price & performance.

Affiliate: [AFF: NVMe SSD]

Power Supply

  • A 500W Gold PSU is plenty for this rig and leaves room for future upgrades.
  • Choice: EVGA 500W G3 Gold – reliable, fully modular.

Affiliate: [AFF: EVGA 500W PSU]
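
A quick power budget shows why 500 W is comfortable. The draws below are ballpark TDP figures (assumptions, not measurements of this exact rig):

```python
# Approximate peak power draw per component, in watts.
draws = {"Ryzen 5 5600G": 65, "RTX 3060": 170, "board+RAM+SSD+fans": 50}
total = sum(draws.values())
print(total, "W peak, vs 500 W PSU ->", f"{total / 500:.0%} load")
```

Running a Gold-rated PSU at roughly half load keeps it in its efficiency sweet spot and leaves headroom for a GPU upgrade later.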

Case & Cooling

  • The NZXT H510 offers good airflow, cable management, and a clean aesthetic without breaking the bank.

Affiliate: [AFF: NZXT H510]


Step 2: Assemble and Power‑On

  1. Insert CPU into AM4 socket, secure with the lever.
  2. Apply thermal paste (if not pre‑applied) and attach the Ryzen cooler.
  3. Mount the motherboard in the case, screw it down.
  4. Install RAM sticks in dual‑channel slots.
  5. Plug the NVMe SSD into the motherboard's M.2 slot; secure it with the standoff screw.
  6. Slot the RTX 3060 into the primary PCIe x16 slot. Connect an 8‑pin PCIe power cable from the PSU to the GPU.
  7. Hook up the remaining power cables (24‑pin ATX, 8‑pin EPS for the CPU, SATA for case accessories).
  8. Close the case, connect peripherals, and hit Power.

If everything POSTs, you'll land on the motherboard's splash or BIOS screen; with no OS installed yet it may report a missing boot device. No worries; we're about to install Ubuntu next.


Step 3: Install Ubuntu 22.04 LTS (or 24.04)

Create Bootable USB

  • Download the ISO from the official Ubuntu website.
  • Use Rufus or BalenaEtcher on Windows, or dd on Linux (replace /dev/sdX with your USB device; dd will overwrite it without asking):

```bash
sudo dd if=ubuntu-22.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress && sync
```
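
Before running dd, double-check which device is the USB stick; the TRAN column shows the transport (usb, sata, nvme):

```bash
# List block devices so you don't overwrite the wrong drive.
lsblk -o NAME,SIZE,MODEL,TRAN
```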

Install

  1. Boot from USB → “Install Ubuntu”.
  2. Choose Erase disk and install (or set up a manual partition scheme if you’re comfortable).
  3. Set timezone, user credentials.
  4. When prompted for third‑party software, tick install updates and install third‑party software for graphics & Wi‑Fi.

Affiliate: [AFF: Ubuntu Installation Guide Book]

After installation, reboot into your fresh desktop.


Step 4: Install NVIDIA Drivers & CUDA Toolkit

Update System

```bash
sudo apt update && sudo apt upgrade -y
```

Add Graphics PPA (for latest drivers)

```bash
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
```

Detect Available Driver Version

```bash
ubuntu-drivers devices
```

You’ll likely see nvidia-driver-535 or newer. Install it:

```bash
sudo ubuntu-drivers autoinstall
```

Reboot after installation.

Verify Driver Installation

```bash
nvidia-smi
```

You should see your RTX 3060, the driver version, and the maximum CUDA version the driver supports (12.x or later).
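
If you'd rather script the check, here's a small sketch; the query flags are standard nvidia-smi options, and on a machine without the driver it degrades gracefully instead of crashing:

```python
import shutil
import subprocess

def gpu_summary() -> str:
    """Return GPU name and VRAM via nvidia-smi, or a fallback message."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found; driver not installed yet"
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or "driver present but no GPU reported"

print(gpu_summary())
```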

Install CUDA Toolkit (Optional if you need specific libraries)

```bash
sudo apt install nvidia-cuda-toolkit -y
```

Note that the Ubuntu-packaged toolkit lags NVIDIA's latest release; for most PyTorch work you can skip it, since the pip wheels bundle their own CUDA runtime.

Step 5: Set Up LM Studio

LM Studio is a desktop app for downloading and running local language models with GPU acceleration; on Linux it ships as an AppImage from lmstudio.ai. The Python environment below is worth setting up alongside it for running models directly with Hugging Face libraries.

  1. Install Python & pip (Ubuntu 22.04 ships with Python 3.10):

```bash
sudo apt install python3-pip python3-venv -y
```

  2. Create a project directory and virtual environment:

```bash
mkdir ~/lmstudio && cd ~/lmstudio
python3 -m venv .venv
source .venv/bin/activate
```

  3. Install GPU‑enabled PyTorch and the usual model libraries:

```bash
pip install --upgrade pip setuptools wheel
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install transformers datasets accelerate bitsandbytes einops sentencepiece tqdm
```

  4. Download the Linux AppImage from the official LM Studio site, make it executable, and run it (the filename varies by version):

```bash
chmod +x LM-Studio-*.AppImage
./LM-Studio-*.AppImage
```

On first launch you can download a small model from the built‑in catalog and swap in larger ones later.

Tip: For GPU‑accelerated inference, ensure that the environment variable CUDA_VISIBLE_DEVICES=0 is set if you have multiple GPUs.
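
In a Python entry point, that tip looks like the following; the variable must be set before torch or transformers are imported, or it has no effect:

```python
import os

# Pin inference to the first GPU. This must run before importing
# torch/transformers, because CUDA device visibility is read at import time.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 0
```

Use `os.environ.setdefault(...)` instead of direct assignment if you want an externally set value to take precedence.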


Step 6: Integrate with Golem Network

Golem lets you rent out your unused compute cycles for distributed computing tasks. The provider tooling changes often, so treat the commands below as a sketch and check docs.golem.network for the current flow; at the time of writing, the golemsp installer is the quickest route:

  1. Install the provider agent:

```bash
curl -sSf https://join.golem.network/as-provider | bash -
```

  2. Start the node (the first run walks you through naming the node and configuring a wallet address):

```bash
golemsp run
```

  3. Monitor status and earnings:

```bash
golemsp status
```

You’ll see tasks coming in, GPU utilization, and earnings (if you’ve set up a wallet on the Golem marketplace).

Note: Running Golem may affect your system’s temperature profile; monitor via nvidia-smi or htop.


Step 7: Optimize & Maintain

| Task | Why It Matters | How To Do It |
| --- | --- | --- |
| Keep drivers updated | New CUDA releases bring performance boosts. | `sudo ubuntu-drivers autoinstall` |
