
Quick Start

This guide walks you through setting up a complete local development environment for the Enterprise Backend Blueprint β€” a multi-service NestJS monorepo with PostgreSQL, Redis, and MinIO.

| Software   | Version | Notes                                          |
|------------|---------|------------------------------------------------|
| Node.js    | 22 LTS+ | Use NVM for version management                 |
| npm        | 10.x+   | Bundled with Node.js                           |
| PostgreSQL | 17.x+   | Primary relational database                    |
| Redis      | 7.x+    | Sessions, stateful JWT, and BullMQ job queues  |
| MinIO      | Latest  | S3-compatible object storage for file handling |

πŸ’‘ Tip: For local infrastructure (PostgreSQL, Redis, MinIO), the fastest path is Docker Compose. See the docker-compose.yml in the repository root to spin everything up in one command.

Recommended VS Code extensions:

  • ESLint — real-time linting feedback
  • Prettier - Code Formatter — auto-formatting on save
  • Todo Tree — surface TODO and FIXME comments across the codebase
  • Markdown Preview Mermaid Support — render architecture diagrams inline

Clone the repository:

git clone <repository-url>
cd enterprise-backend

This project uses Git submodules for shared libraries. After cloning, initialize them:

git submodule sync --recursive && git submodule update --init --recursive

πŸ”‘ SSH Key Setup: If your submodules are hosted on a separate Git server and require a dedicated SSH key, generate one with ssh-keygen -t ed25519 -C "you@example.com" and add the resulting public key to your Git provider. Then configure ~/.ssh/config with a Host alias pointing to the correct IdentityFile, and update .gitmodules to use that alias instead of the raw hostname.

Install dependencies:

npm install
Create your environment file from the template:

cp .env.example .env

Edit .env with your local values:

# ── Database ─────────────────────────────────────────────
DB_MASTER_HOST=localhost
DB_MASTER_PORT=5432
DB_MASTER_USERNAME=your_db_user
DB_MASTER_PASSWORD=your_db_password
# ── Redis (Sessions & Stateful JWT) ──────────────────────
REDIS_HOST=localhost
REDIS_PORT=6379
# ── Redis (BullMQ Job Queue) ─────────────────────────────
REDIS_BULLMQ_HOST=localhost
REDIS_BULLMQ_PORT=6379
# REDIS_BULLMQ_PASSWORD=your_password # Uncomment if auth is enabled
# ── MinIO (Object Storage) ───────────────────────────────
MINIO_ENDPOINT=localhost
MINIO_PORT=9000
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin
# ── Auth ─────────────────────────────────────────────────
JWT_SECRET=change-me-in-production
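A missing variable here tends to surface later as a confusing connection error, so a quick sanity check can save time. This is a small sketch; the required-key list is an assumption, so align it with your .env.example:

```typescript
import * as fs from 'node:fs';

// Keys the services are expected to need. Illustrative; trim or extend
// to match the variables actually listed in .env.example.
const REQUIRED_KEYS = [
  'DB_MASTER_HOST', 'DB_MASTER_PORT',
  'REDIS_HOST', 'REDIS_PORT',
  'MINIO_ENDPOINT', 'JWT_SECRET',
];

// Return the required keys that do not appear in the given env file,
// ignoring blank lines and # comments.
function missingEnvKeys(envPath: string): string[] {
  const present = new Set(
    fs.readFileSync(envPath, 'utf8')
      .split('\n')
      .filter((line) => line.includes('=') && !line.trim().startsWith('#'))
      .map((line) => line.split('=')[0].trim())
  );
  return REQUIRED_KEYS.filter((key) => !present.has(key));
}
```

Run it with something like `console.log(missingEnvKeys('.env'))`; an empty array means every expected key is present.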

Connect to PostgreSQL and create three databases — one per bounded context:

CREATE DATABASE app_core_db;
CREATE DATABASE app_master_db;
CREATE DATABASE app_iam_db;

Why three databases? Each bounded context owns its data. Separating them at the database level enforces domain boundaries and allows independent scaling and backup strategies.

Apply all pending migrations:

npm run migration:run:all

Start every service:

npm run start:dev:all

This starts all microservices concurrently with hot-reload enabled via Nx. Ideal for active feature development where you need the full system running.

For targeted development, you can start just the services you need:

# Auth Service (handles login, token issuance, session management)
npm run start:dev:auth # HTTP :5001 | Microservice TCP :5000
# IAM Service (roles, permissions, access control)
npm run start:dev:iam # HTTP :5003 | Microservice TCP :5002
# Core Bounded Context A (primary domain logic)
npm run start:dev:service-a # HTTP :3000 | Microservice TCP :4000
# Core Bounded Context B (secondary domain logic)
npm run start:dev:service-b # HTTP :3001 | Microservice TCP :4001
# Master Data Service (reference/lookup data)
npm run start:dev:master # HTTP :6001 | Microservice TCP :6000
# Storage Service (file upload/download via MinIO)
npm run start:dev:storage # HTTP :3002 | Microservice TCP :4002
# System Admin (back-office operations)
npm run start:dev:system-admin # HTTP :3088 | Microservice TCP :4088
# Aggregated API Documentation Gateway
npm run start:dev:docs # HTTP :9999

The system is structured as a dual-port microservice pattern: each bounded context exposes both an HTTP interface (for external clients) and a TCP interface (for internal microservice-to-microservice communication).

graph TB
    Client[Client / Frontend]

    subgraph "HTTP Layer (External)"
        Auth[Auth Service :5001]
        IAM[IAM Service :5003]
        SvcA[Service A :3000]
        SvcB[Service B :3001]
        Master[Master Data :6001]
        Storage[Storage :3002]
        SysAdmin[System Admin :3088]
        Docs[API Docs Gateway :9999]
    end

    subgraph "Microservice Layer (Internal TCP)"
        AuthMS[Auth MS :5000]
        IAMMS[IAM MS :5002]
        SvcAMS[Service A MS :4000]
        SvcBMS[Service B MS :4001]
        MasterMS[Master MS :6000]
        StorageMS[Storage MS :4002]
        SysAdminMS[SysAdmin MS :4088]
    end

    subgraph "Infrastructure"
        DB[(PostgreSQL\nPrimary + Replica)]
        Redis[(Redis\nSessions + Queue)]
        Minio[(MinIO\nObject Storage)]
    end

    Client --> Auth
    Client --> SvcA
    Client --> SvcB
    Client --> Docs

    Auth --> AuthMS
    SvcA --> SvcAMS
    SvcB --> SvcBMS

    SvcBMS --> SvcAMS
    AuthMS --> Redis
    SvcAMS --> DB
    StorageMS --> Minio

Why dual-port? HTTP controllers handle validation, auth guards, and response shaping for external consumers. The TCP microservice layer handles internal RPC calls between services with lower overhead and without re-running auth middleware. This separation keeps internal and external contracts explicit and independently evolvable.
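Stripped of the framework, the dual-port idea is just one process with two listeners. The sketch below uses only Node's standard library; the ports, payloads, and "ack" framing are illustrative stand-ins, not the project's actual NestJS transport or wire protocol:

```typescript
import * as http from 'node:http';
import * as net from 'node:net';

// One process, two listeners: an HTTP port for external clients and a
// raw TCP port standing in for the internal microservice channel.
function startDualPort(httpPort: number, tcpPort: number) {
  // External contract: validated, shaped HTTP responses.
  const httpServer = http.createServer((_req, res) => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok' }));
  });

  // Internal contract: lightweight RPC-style exchange over TCP.
  const tcpServer = net.createServer((socket) => {
    socket.on('data', (chunk) => {
      // Echo a framed reply, a stand-in for an internal RPC response.
      socket.write(`ack:${chunk.toString().trim()}\n`);
    });
  });

  httpServer.listen(httpPort);
  tcpServer.listen(tcpPort);
  return { httpServer, tcpServer };
}
```

In the real services, NestJS manages both listeners for you; the point of the sketch is only that the two ports carry independent contracts, so the internal channel can evolve without touching the public HTTP surface.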


Each service logs its bound address on startup:

[Auth] Listening on http://localhost:5001
[IAM] Listening on http://localhost:5003
[Service-A] Listening on http://localhost:3000
...

The aggregated Swagger documentation for all services is available at:

http://localhost:9999

Individual service docs are also available at their own /api-docs route:

http://localhost:5001/auth/v1/api-docs ← Auth
http://localhost:3000/v1/api-docs ← Service A
http://localhost:3001/v1/api-docs ← Service B
Smoke-test the running services:
# Test that the auth service is alive
curl http://localhost:5001/auth/v1/health
# Test a login endpoint
curl -X POST http://localhost:5001/auth/v1/login \
-H "Content-Type: application/json" \
-d '{"username": "admin", "password": "password"}'
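For scripted checks, the same verification can be done programmatically. A minimal sketch using Node's standard library (the health route is taken from the curl example above; the helper name is our own):

```typescript
import * as http from 'node:http';

// Resolve true if the given endpoint answers with HTTP 200, false on
// any error (connection refused, timeout-free failure, non-200 status).
function isHealthy(url: string): Promise<boolean> {
  return new Promise((resolve) => {
    const req = http.get(url, (res) => {
      res.resume(); // drain the body so the socket is released
      resolve(res.statusCode === 200);
    });
    req.on('error', () => resolve(false));
  });
}
```

Usage: `isHealthy('http://localhost:5001/auth/v1/health').then((ok) => console.log(ok ? 'auth up' : 'auth down'));` — handy in a pre-flight script that loops over all service ports before running integration tests.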

If a port is already taken, you'll see an error like:

Error: EADDRINUSE: address already in use :::3000

Find and kill the conflicting process:

lsof -i :3000 # find the PID
kill -9 <PID>
If the services can't reach PostgreSQL:

  1. Check that PostgreSQL is running: pg_isready
  2. Verify credentials in your .env file
  3. Confirm the databases were created: \l inside psql

If Redis is unreachable, you'll see:

Error: connect ECONNREFUSED 127.0.0.1:6379

Start Redis:

# macOS (Homebrew)
brew services start redis
# Linux (systemd)
sudo systemctl start redis
# Direct
redis-server
If MinIO isn't running, start it manually:

minio server /path/to/data --console-address ":9001"

MinIO’s web console will be at http://localhost:9001 (default credentials: minioadmin / minioadmin).


Other useful commands:
# ── Testing ───────────────────────────────────────────────
npm test # Run all unit tests
npm run test:watch # Watch mode
npm run test:cov # With coverage report
# ── Linting ───────────────────────────────────────────────
npm run lint # Lint all files
npm run lint:fix # Auto-fix lint issues
# ── Build ─────────────────────────────────────────────────
npm run build:all # Build all services
npm run build:auth # Build a single service
# ── Database Migrations ───────────────────────────────────
npm run migration:generate:core -- --name=CreateUsersTable
npm run migration:generate:master -- --name=AddLookupIndex
npm run migration:generate:iam -- --name=AddRolePermissions
npm run migration:run:all # Apply all pending migrations

With your environment running, explore these guides:

  1. Project Structure β€” understand the Nx monorepo layout
  2. Clean Architecture β€” the layering philosophy behind every module
  3. API Response & Error Handling β€” standardized response contracts
  4. Security: Stateful JWT + Redis β€” how authentication works end-to-end
  5. Database Overview β€” primary/replica topology and bounded context data ownership