| Component | Minimum / Recommended | Rationale |
|---|---|---|
| CPU | 2× high‑core Intel Xeon Scalable or AMD EPYC (≥ 32 cores, 8 memory channels) | Memory‑bound solvers need high bandwidth and many threads. |
| System RAM | ≥ 256 GB (512–1 TB for heavy CPU workloads) | Typical scientific workflows use 256–512 GB; 1 TB is common for very large simulations. |
| GPU(s) | • FP64‑required → NVIDIA compute class (L40S, H200, etc.) • FP32‑only → RTX 6000 Ada / RTX 5090 / RTX 5080 (≥ 12–32 GB VRAM each) | Double‑precision workloads must use compute GPUs; single‑precision can use high‑end RTX cards. |
| RAM‑to‑VRAM ratio | System RAM ≥ 2× total GPU VRAM (e.g., 2 × 32 GB VRAM = 64 GB → ≥ 128 GB RAM) | System RAM should be at least twice the total VRAM for staging and CPU‑to‑GPU mapping. |
| Storage | • Primary OS/apps: NVMe SSD ≥ 1 TB • Scratch/data: additional NVMe (≥ 1 TB) or SATA SSD (2–4 TB) • Large datasets: 10 GbE‑connected NAS or local HDDs | NVMe gives the fastest I/O; scratch space is often needed for quantum chemistry and other out‑of‑core tasks. |
| Cooling & Power | Rack‑mounted chassis with adequate airflow (L40S/H200 are passively cooled and rely on chassis fans); power supply ≥ 1200 W (adjust per GPU count) | Compute GPUs are large and passively cooled; high‑end RTX cards need a robust PSU and active cooling. |
| Networking | 10 GbE NIC (or higher) for NAS/cluster connectivity. | Enables fast transfer of large datasets and remote scratch spaces. |
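The RAM‑to‑VRAM rule in the table can be sketched as a tiny Python helper. The function name and everything except the 2× ratio and the 2 × 32 GB example are illustrative assumptions, not part of any vendor tool:

```python
def min_system_ram_gb(num_gpus: int, vram_per_gpu_gb: float, ratio: float = 2.0) -> float:
    """Minimum system RAM: at least `ratio` x total GPU VRAM (the 2x rule above)."""
    total_vram_gb = num_gpus * vram_per_gpu_gb
    return ratio * total_vram_gb

# Two 32 GB GPUs -> 64 GB total VRAM -> at least 128 GB of system RAM.
print(min_system_ram_gb(2, 32))  # 128.0
```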
**Quick Take‑away**
- CPU + RAM: 32‑core CPU, 256–512 GB RAM (1 TB if you can afford it).
- GPU: If FP64 is needed → NVIDIA compute GPU; otherwise a high‑end RTX card with ≥12 GB VRAM per GPU.
- Memory Ratio: System RAM ≥ 2× total GPU VRAM.
- Storage: NVMe 1 TB + extra NVMe or SATA SSD for scratch/data.
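The take‑away minimums can be folded into a quick sanity checker. This is a hypothetical sketch (function and parameter names are mine; thresholds come from the bullets above):

```python
def check_build(cpu_cores: int, ram_gb: int, num_gpus: int,
                vram_per_gpu_gb: int, nvme_tb: float) -> list[str]:
    """Return warnings for any spec below the quick take-away minimums."""
    warnings = []
    if cpu_cores < 32:
        warnings.append(f"CPU: {cpu_cores} cores < 32-core minimum")
    if ram_gb < 256:
        warnings.append(f"RAM: {ram_gb} GB < 256 GB minimum")
    if vram_per_gpu_gb < 12:
        warnings.append(f"GPU: {vram_per_gpu_gb} GB VRAM/GPU < 12 GB minimum")
    if ram_gb < 2 * num_gpus * vram_per_gpu_gb:
        warnings.append("RAM below 2x total GPU VRAM")
    if nvme_tb < 1:
        warnings.append(f"Storage: {nvme_tb} TB NVMe < 1 TB minimum")
    return warnings

# A build meeting every minimum produces no warnings:
print(check_build(cpu_cores=32, ram_gb=256, num_gpus=2,
                  vram_per_gpu_gb=32, nvme_tb=1))  # []
```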