Building an S3-Compatible Object Storage Server on Bare Metal using MinIO

Learn how to escape massive AWS S3 storage and egress fees by deploying a high-performance, distributed MinIO object storage cluster on unmetered bare-metal servers.

Storing petabytes of unstructured data, like machine learning datasets, massive media libraries, or enterprise backups, on Amazon S3 is incredibly convenient until the monthly bill arrives. The storage itself is expensive, but the real budget-killer is moving that data back out of the cloud for processing: per-gigabyte "egress fees" that can cripple an IT budget.

One of the most compelling tools for cloud repatriation in 2026 is MinIO: an open-source, high-performance object storage server that is 100% API-compatible with Amazon S3. Point your existing applications at a MinIO endpoint instead of AWS, and your developers won't have to rewrite a line of their S3 integration code.

By deploying a distributed MinIO cluster across iDatam’s Storage Dedicated Servers, you pair high-density NVMe storage with unmetered 10Gbps or 100Gbps network uplinks. This means you can transfer massive AI training datasets as fast as the hardware allows, with zero per-gigabyte bandwidth fees.

The Architecture Setup

To build a highly available, distributed object storage cluster, one that uses erasure coding to survive both individual drive failures and the loss of an entire server, MinIO requires a minimum of 4 drives across the deployment. In this tutorial, we will use a robust 4-node bare-metal setup with one NVMe drive per node:

  • minio-node1 (10.0.0.11)

  • minio-node2 (10.0.0.12)

  • minio-node3 (10.0.0.13)

  • minio-node4 (10.0.0.14)

(Note: We assume you are running a fresh installation of Ubuntu 24.04 LTS on all nodes).

Step 1: Prepare the Hardware and Network Environment

Execute this step on all four nodes.

First, ensure your servers are updated.

bash

sudo apt update && sudo apt upgrade -y
                                

MinIO relies heavily on accurate hostname resolution to communicate across the internal network. Edit the /etc/hosts file on every server so they can locate each other:

bash

sudo nano /etc/hosts
                                

Add the following lines to the bottom of the file on all four nodes:

plaintext

10.0.0.11 minio-node1
10.0.0.12 minio-node2
10.0.0.13 minio-node3
10.0.0.14 minio-node4
                                
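With the entries above in place, a quick loop can confirm that every peer hostname resolves before you go further. This is a sanity check of our own, not part of the official MinIO setup; getent queries the same name resolution the MinIO nodes will use:

```shell
#!/usr/bin/env bash
# Confirm every cluster hostname resolves locally.
# Run on each node after editing /etc/hosts.
for i in 1 2 3 4; do
  if getent hosts "minio-node${i}" >/dev/null; then
    echo "minio-node${i}: OK"
  else
    echo "minio-node${i}: NOT RESOLVED"
  fi
done
```

If any node prints NOT RESOLVED, fix its /etc/hosts before continuing; the cluster will not form otherwise.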

Step 2: Format and Mount the NVMe Storage Drives

Execute this step on all four nodes.

MinIO strongly recommends the XFS file system for the underlying storage drives because of its performance with very large files and highly parallel I/O.

Assuming your dedicated server has a raw, unformatted NVMe drive located at /dev/nvme1n1, format it using XFS:

bash

sudo mkfs.xfs -L MINIO_DATA /dev/nvme1n1
                                

Next, create the mount point directory:

bash

sudo mkdir -p /mnt/data1
                                

To ensure the drive mounts automatically if the server reboots, add it to your /etc/fstab file:

bash

echo 'LABEL=MINIO_DATA /mnt/data1 xfs defaults,noatime 0 2' | sudo tee -a /etc/fstab
                                

Mount the drive immediately:

bash

sudo mount -a
                                
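Before moving on, it is worth verifying that the volume actually came up as XFS at the expected path. A quick check of our own:

```shell
# Show usage for the mounted volume.
df -h /mnt/data1

# Print the filesystem type of the mount point; expected output: xfs
findmnt -n -o FSTYPE /mnt/data1
```

If findmnt prints nothing, the drive is not mounted; re-check the /etc/fstab entry and run sudo mount -a again.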

Step 3: Install the MinIO Server Binaries

Execute this step on all four nodes.

MinIO is distributed as a single, highly optimized binary file. Download the latest version directly from the official repository:

bash

wget https://dl.min.io/server/minio/release/linux-amd64/minio
                                

Make the binary executable and move it to the system binary directory:

bash

chmod +x minio
sudo mv minio /usr/local/bin/
                                

For security purposes, you should never run MinIO as the root user. Create a dedicated system user and group for the service:

bash

sudo groupadd -r minio-user
sudo useradd -M -r -g minio-user minio-user
                                

Transfer the ownership of your mounted NVMe drive to this new user:

bash

sudo chown minio-user:minio-user /mnt/data1
                                
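A quick check confirms the binary is on the PATH and the data directory now belongs to the service account (our own verification step, assuming the paths used above):

```shell
# Print the installed MinIO version to confirm the binary works.
minio --version

# Print owner and group of the data mount; expected: minio-user:minio-user
stat -c '%U:%G' /mnt/data1
```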

Step 4: Configure the Distributed MinIO Cluster

Execute this step on all four nodes.

Create the MinIO environment file that the systemd service will read:

bash

sudo nano /etc/default/minio
                                

Add the following configuration. This exact configuration must be identical across all four servers.

plaintext

# Volume locations for the distributed cluster. 
# The {1...4} syntax tells MinIO to span across all 4 nodes.
MINIO_VOLUMES="http://minio-node{1...4}:9000/mnt/data1"

# Set the Root Access Key and Secret Key (These act as your AWS IAM Admin credentials)
MINIO_ROOT_USER="admin-minio-2026"
MINIO_ROOT_PASSWORD="SuperSecretStrongPassword!"

# The address to listen on for S3 API requests
MINIO_OPTS="--address :9000 --console-address :9001"
                                

(Save and exit the file. Replace the example credentials above with strong, unique values of your own).
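Because MinIO will refuse to form a cluster if node configurations diverge, comparing a checksum of the file across nodes is a cheap safeguard. Run this on each node and confirm the hashes match:

```shell
# The hash printed here must be identical on all four nodes.
sha256sum /etc/default/minio
```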

Step 5: Set Up MinIO as a Systemd Service

Execute this step on all four nodes.

To ensure MinIO runs in the background and starts on boot, we will configure a systemd service.

Download the official MinIO systemd script:

bash

cd /tmp
wget https://raw.githubusercontent.com/minio/minio-service/master/linux-systemd/minio.service
                                

Move it to the systemd directory:

bash

sudo mv minio.service /etc/systemd/system/
                                

Reload the system daemon, start the MinIO service, and enable it on boot:

bash

sudo systemctl daemon-reload
sudo systemctl start minio
sudo systemctl enable minio
                                

Verify that the cluster is healthy and running on all nodes:

bash

sudo systemctl status minio
                                

(You should see active (running) and a log message indicating that the server has started in distributed mode).
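Beyond systemctl, the MinIO client (mc) gives a cluster-wide health view. A sketch, assuming the alias name mycluster (our choice) and the root credentials from Step 4:

```shell
# Install the MinIO client (mc) from the official repository.
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc && sudo mv mc /usr/local/bin/

# Register the cluster under an alias, then query its status.
mc alias set mycluster http://10.0.0.11:9000 admin-minio-2026 'SuperSecretStrongPassword!'
mc admin info mycluster
```

The admin info output should list all four nodes as online, along with drive counts and usable capacity.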

Step 6: Access the MinIO Console and Create S3 Buckets

Your massive, distributed object storage cluster is now online!

Open your web browser and navigate to the MinIO Console using any of your node IP addresses on port 9001: http://10.0.0.11:9001

Log in using the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD you defined in Step 4.

From this web console, you can:

  • Click Buckets to create your first S3-compatible bucket (e.g., ai-training-data).

  • Click Identity to generate Access Keys and Secret Keys for your developers.

  • Monitor your cluster's NVMe drive health and network throughput.

To connect your existing applications to this cluster, simply update their AWS S3 SDK configurations to point to your new endpoint (http://10.0.0.11:9000) and replace the AWS credentials with your newly generated MinIO keys.
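For example, with the AWS CLI the switch is a single flag. The access keys below are placeholders for the ones you generate in the console:

```shell
# Placeholder credentials generated in the MinIO console (Identity > Access Keys).
export AWS_ACCESS_KEY_ID="YOUR_MINIO_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_MINIO_SECRET_KEY"

# Standard S3 commands work once --endpoint-url points at the MinIO cluster.
aws s3 ls --endpoint-url http://10.0.0.11:9000
aws s3 cp ./dataset.tar.gz s3://ai-training-data/ --endpoint-url http://10.0.0.11:9000
```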

Conclusion: Scale Without Limits

You have successfully built an enterprise-grade object storage cluster. Because of MinIO's erasure coding, you can safely lose an entire physical server without losing a single byte of your data.

As your data ingestion scales, avoiding network bottlenecks is critical. This is why enterprise architectures pair MinIO's highly parallel I/O capabilities with iDatam’s 100Gbps Dedicated Servers. By keeping your data on high-speed bare metal and routing it through unmetered pipes, you achieve faster-than-cloud performance while completely eliminating variable data egress costs.

Discover iDatam Dedicated Server Locations

iDatam servers are available around the world, providing diverse options for hosting websites. Each region offers unique advantages, making it easier to choose a location that best suits your specific hosting needs.
