
QAI-h1290FX
A GPU-ready edge AI storage server supporting NVIDIA® GPUs, U.2 NVMe SSDs, and 25GbE connectivity—designed for on-premises AI, virtualization, and compute-intensive workloads.

QAI-h1290FX is a desktop-class edge compute and storage convergence server that combines high-performance computing architecture with ultra-fast storage. It supports configurable NVIDIA® RTX™ PRO Blackwell GPUs, making it ideal for on-premises AI, LLM inference, private RAG search, virtualization, and other demanding compute workloads.

Powered by QuTS hero with the ZFS file system, the platform delivers enterprise-grade data integrity and consistent performance. Whether for AI deployment, research and development, high-performance computing, or enterprise virtualization environments, QAI-h1290FX enables flexible configuration and rapid deployment, ensuring critical workloads run securely and efficiently at the edge.

  • GPU-Ready Architecture with RTX PRO Blackwell Support

    Built with a GPU-ready design, supporting NVIDIA® RTX™ PRO Blackwell GPUs, including options such as the RTX PRO 6000 Blackwell Max-Q Workstation, to meet the demands of AI workloads, image generation, inference, and GPU-accelerated computing.

  • High-Speed All-Flash NVMe Storage Architecture

    Equipped with 12 U.2 NVMe SSD bays and support for SATA SSDs, allowing flexible storage configurations optimized for performance, capacity, or cost. Ideal for AI workloads, virtualization, and real-time data processing.

  • On-Premise LLM & RAG Search

    Enables local deployment of private LLMs and RAG-based search, providing secure semantic document retrieval without sending sensitive data to the cloud.

  • ZFS-based QuTS hero OS

    Powered by QuTS hero with ZFS, offering inline compression, self-healing, snapshots, and SnapSync for enterprise-grade data integrity.

  • GPU Acceleration & AI App Templates

    Leverage GPU acceleration via Container Station, with one-click deployment of Ollama, AnythingLLM, Stable Diffusion, and more, simplifying AI application rollout.

  • 25GbE Connectivity & Expansion Ready

    Built-in dual 25GbE and 2.5GbE ports, upgradable to 100GbE. Scale up with QNAP JBODs to meet growing AI data storage demands.

QNAP QAI-h1290FX

TechRadar Pro Picks Awards Winner for CES 2026

QAI Ideal applications

  • Internal Chatbot & Knowledge Base

    Deploy private ChatGPT-like bots using AnythingLLM or OpenWebUI. Securely connect to internal documents for employee Q&A, policy search, and training support—no internet required.

  • Private RAG Search Engine

    Run Retrieval-Augmented Generation (RAG) locally with full control over data. Enable natural-language document search across contracts, reports, and archives—ideal for legal, finance, and enterprise teams.

  • AI Inference & Content Generation

    Use Stable Diffusion or ComfyUI for image generation, or deploy custom models for video tagging, document summarization, and medical analysis. Benefit from GPU acceleration and all-flash storage.

Enterprise-Grade Edge AI and High-Performance Computing

QAI-h1290FX is more than a storage system — it is a compute-ready, enterprise-grade edge computing platform. Built on a high-performance computing architecture, it supports configurable NVIDIA® RTX™ Pro Blackwell GPUs, making it well-suited for large language model (LLM) inference, image generation, RAG search, and a wide range of compute-intensive and virtualized workloads.

Whether for AI inference, research and development, data analytics, or enterprise applications requiring high core counts and sustained performance, a single desktop-class enterprise platform can deliver outstanding compute efficiency and data security entirely on-premises.

Maximum AI Compute Performance (Optional GPU Configuration)

  • 3511 AI TOPS (FP4)
    333 TFLOPS (RT Core)

GPU-Ready Architecture — Supporting NVIDIA® RTX™ Pro Blackwell

QAI-h1290FX features a GPU-ready architecture designed to support NVIDIA® RTX™ Pro Blackwell GPUs. Built on the Blackwell architecture, it supports acceleration technologies such as CUDA, TensorRT, and the Transformer Engine, making it well-suited for modern AI and GPU-accelerated computing workloads.

From large language model (LLM) inference and computer vision to generative AI and other GPU-accelerated professional applications, workloads can be deployed and executed entirely on-premises—delivering strong performance while maintaining data privacy and full system control. The platform can also operate as a CPU-centric high-performance computing system, supporting virtualization and a wide range of enterprise computing scenarios.

  • Flagship

    NVIDIA® RTX™ Pro 6000 Blackwell

    • 96 GB GDDR7 ECC memory
    • 24,064 CUDA cores, 752 Tensor cores, 188 RT cores
    • 125 TFLOPS (FP32), up to 4000 AI TOPS
    • 1,792 GB/s memory bandwidth
    • 300W power consumption
    • PCIe 5.0 x16 interface
    • Designed for large-scale LLMs (up to 70B+ parameters), multi-model parallel workloads, and high-throughput AI pipelines

  • Performance

    NVIDIA® RTX™ Pro 4500 Blackwell

    • 32 GB GDDR7 ECC memory
    • 10,496 CUDA cores, 328 Tensor cores, 82 RT cores
    • 54.94 TFLOPS (FP32)
    • 896 GB/s memory bandwidth
    • 200W power consumption
    • PCIe 5.0 x16 interface
    • Ideal for mid-sized LLMs (up to ~30B parameters), retrieval-augmented generation (RAG) pipelines, and high-performance graphics applications
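As a rough guide to how these VRAM figures map to model sizes, the sketch below estimates inference memory from parameter count and quantization width. The 4-bit default and the 1.2× overhead factor for KV cache and runtime buffers are illustrative assumptions, not official sizing guidance:

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate for LLM inference.

    Weights take params * bits/8 bytes; `overhead` (an assumed factor)
    covers KV cache, activations, and runtime buffers.
    """
    weight_gb = params_billions * bits_per_weight / 8  # 1B params ≈ 0.5 GB at 4-bit
    return weight_gb * overhead

# A 70B model quantized to 4 bits needs roughly 42 GB,
# comfortably inside the RTX PRO 6000's 96 GB:
print(round(estimate_vram_gb(70, 4), 1))   # → 42.0
# A 30B model fits in the RTX PRO 4500's 32 GB:
print(round(estimate_vram_gb(30, 4), 1))   # → 18.0
```

This back-of-envelope math is why the flagship card is positioned for 70B+ models while the 4500 targets models up to roughly 30B parameters.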

NVIDIA® RTX™ Pro Blackwell Series — Redefining AI and High-Performance Computing Workflows

The NVIDIA® RTX™ Pro Blackwell series GPUs are purpose-built for high-intensity AI, compute, and creative workloads, combining the next-generation Blackwell architecture with ultra-fast GDDR7 ECC memory. This delivers a level of compute performance and VRAM capacity on a single professional GPU that previously required multiple consumer-grade graphics cards.

With support for up to 96GB of VRAM and enhanced AI acceleration capabilities, the RTX™ Pro Blackwell series is ideal for advanced LLMs, generative models, data analytics, and complex 3D visualization and professional compute workflows.

  • 5th Gen Tensor Cores

    Up to 3× faster AI processing with FP4 precision and DLSS 4 acceleration.

  • 4th Gen RT Cores

    2× faster ray tracing for photorealistic rendering and real-time simulations.

  • Up to 96GB GDDR7 Memory

    Massive capacity and bandwidth to handle complex AI and graphics tasks.

  • Balanced power and performance

    Powerful AI performance at only 300W power consumption.

Powered by Server-Class AMD EPYC™ Processors for High-Performance Compute

QAI-h1290FX is built on a server-class AMD EPYC processor platform, delivering high core counts and massive multithreaded performance.
Designed for long-term, stable operation under highly parallel workloads, it is well suited for virtualization, multithreaded computing, data processing, and edge computing scenarios—while also supporting AI inference and a wide range of compute-intensive applications.

  • CPU

    Server-class AMD EPYC™ processors
    High core-count and multithreaded architecture

  • Memory

    128 GB RDIMM DDR4 ECC memory (Expandable up to 1 TB)

Powered by the QuTS hero Operating System

Built for enterprises, the QuTS hero operating system uses the highly reliable ZFS file system, delivering strong data security and system stability for critical data storage. It also includes advanced technologies focused on improving SSD performance and lifespan, meeting strict enterprise requirements for performance and reliability. Explore the QuTS hero operating system. Learn about the latest QuTS hero features.

  • Automatically repairs corrupted data

    ZFS supports self-healing for data at rest, detecting and automatically repairing corrupted data to ensure data integrity.

  • Data immutability

    Guard against ransomware by creating WORM (Write Once, Read Many) folders that prevent data from being overwritten, modified, or deleted, while enabling immutable backups.

  • Write protection against data corruption

    The ZIL power-loss protection mechanism enables write protection after the NAS experiences an unexpected power outage, preventing corruption of data being modified.

  • Data reduction

    Inline data deduplication reduces storage space consumption, keeping usable disk capacity optimized.

  • Prevents simultaneous failure of multiple SSDs

    QNAP's patented QSAL algorithm automatically and periodically monitors SSD lifespan at the RAID level, preventing multiple SSDs from failing at the same time and avoiding data loss from RAID failure.

Remote access to on-prem AI – anytime, anywhere

Create a seamless hybrid work environment with multiple remote access options offered by QNAP. Whether you're managing AI applications or accessing files, the QAI-h1290FX ensures you're always connected—without compromising security.

Direct or relay access options

  • myQNAPcloud DDNS:
    Access your QuTS hero interface from anywhere via a custom domain, without remembering IP addresses.
  • myQNAPcloud Link:
    Establishes a secure relay connection through QNAP servers—no need to open router ports or modify firewall settings.
  • VPN Server Support:
    Set up a private VPN using QVPN Service, enabling secure encrypted tunnels for full network access.

Whether you're fine-tuning your LLM container setup, reviewing inference logs, or collaborating across locations, the QAI-h1290FX offers reliable access to your on-prem AI environment from any device, anytime.

A More Powerful Container Station: A New Experience in AI Application Deployment

To promote practical AI adoption, the QAI series integrates Container Station with a wide selection of AI application templates. These templates support one-click deployment of popular AI tools and frameworks, with regular updates from QNAP to ensure access to the latest technologies.

Whether you're new to AI or looking to move workloads on-premises, QAI makes it easy to explore AI, reduce costs, enhance data security, and even develop custom AI tools to boost business innovation.

Containerized AI deployment made simple

Enhance your AI infrastructure with seamless container integration. Explore Container Station

AI-centric container environment

The QAI-h1290FX comes with Container Station, enabling the deployment of Docker and LXD containers. As most AI tools today are delivered as containerized applications, the QAI-h1290FX provides a direct and efficient way to deploy LLMs, RAG search, image generation tools (e.g., Stable Diffusion), or knowledge base engines such as AnythingLLM—all without complex setup.
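Once a model container such as Ollama is running, applications can consume it over its standard REST API. The minimal Python sketch below assumes an Ollama container listening on its default port 11434; the model name and prompt are placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_generate_request(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the locally running Ollama container, return the reply."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama container):
#   print(ask("qwen3:8b", "Summarize ZFS self-healing in one sentence."))
```

Because the endpoint is local, prompts and documents never leave the NAS, which is the core of the on-premises deployment model described above.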

Persistent data storage for AI models

With Docker Volume support, the QAI-h1290FX allows containers to mount shared folders from the NAS, ensuring persistent storage even after container rebuilds. This is especially beneficial for AI workloads involving large model files, training data, or logs. No more worrying about losing key data during updates or version switching.

Make GPU acceleration simple and accessible for all AI workloads

The QAI-h1290FX removes the complexity of configuring GPU resources in Docker environments. With Container Station’s intuitive interface, when creating a new container, simply select the desired NVIDIA® GPU from the dropdown menu, and the container will instantly gain access to GPU compute capabilities. Whether you're deploying LLMs, running image generation, or executing deep learning inference, the QAI-h1290FX ensures the process is seamless and efficient.

Redefining creativity with AI-powered visual design

ComfyUI empowers artists, designers, and content creators with a powerful, modular interface for AI-driven image and video creation. Through its intuitive node-based design and support for advanced models like Stable Diffusion, users can effortlessly generate, transform, and animate visual content. Combined with GPU acceleration and flexible workflows, ComfyUI lowers the barrier to complex visual design—unlocking unprecedented creative freedom.

Real-world AI performance – measured on QAI-h1290FX

AI deployment performance is validated through real-world benchmark data. Under a high-end GPU test configuration, QAI-h1290FX was fully evaluated with the NVIDIA® RTX™ PRO 6000 Blackwell Max-Q Workstation GPU, verifying its performance in on-premises AI inference and enterprise deployment scenarios.

Ollama LLM Inference Benchmark (Rapid Deployment)

Leveraging the GPU acceleration capabilities of the Blackwell architecture, QAI-h1290FX can run a wide range of large language models locally via Ollama.
Ollama enables rapid deployment and simplified management, making it well suited for proof-of-concept (PoC) projects, single-user environments, and small to mid-scale use cases such as RAG-based search, AI assistants, and offline inference.

Model                      Tokens/sec   VRAM usage
gpt-oss:120b (MXFP4)       90           ~63 GB
deepseek-r1:70b (q4_K_M)   24           ~41 GB
qwen3:32b (q4_K_M)         46           ~21 GB
gemma3:27b (q4_K_M)        54           ~19 GB
deepseek-r1:8b (q4_K_M)    140          ~7 GB
qwen3:8b (q4_K_M)          172          ~7 GB
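A quick way to read these throughput figures is to convert them into response times for a given answer length. The sketch below ignores prompt-processing time, an assumed simplification for illustration:

```python
def response_time_s(answer_tokens: int, tokens_per_sec: float) -> float:
    """Approximate time to generate an answer of a given length,
    ignoring prompt-processing (prefill) time -- an assumed simplification."""
    return answer_tokens / tokens_per_sec

# Using the measured rates above, a typical 500-token answer takes about:
print(round(response_time_s(500, 90), 1))    # gpt-oss:120b    → 5.6 s
print(round(response_time_s(500, 24), 1))    # deepseek-r1:70b → 20.8 s
print(round(response_time_s(500, 172), 1))   # qwen3:8b        → 2.9 s
```

In other words, even the 120B-class model remains interactive for single-user use, while the 8B models answer near-instantly.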



vLLM Concurrent Inference Benchmark (Enterprise-Grade Throughput)

To address multi-user and high-concurrency AI service requirements, QAI-h1290FX also supports deployment with the vLLM inference engine.
Compared to single-request inference approaches, vLLM significantly improves GPU utilization and overall throughput through PagedAttention and efficient scheduling. This makes it particularly suitable for enterprise AI services, multi-user RAG systems, and API-based AI applications.

Under the same GPU configuration, vLLM demonstrates more consistent latency characteristics and higher tokens-per-second throughput in concurrent request scenarios, making it ideal for production environments and long-running enterprise AI deployments.

Tested large language model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B (Hugging Face)

Threads   Total tokens/sec   Avg tokens/sec per thread
1         79                 79
2         166                83
5         410                82
10        688                68.8
20        810                40.5
50        850                17

Tested large language model: openai/gpt-oss-20b (Hugging Face)

Threads   Total tokens/sec   Avg tokens/sec per thread
1         218                218
2         340                170
5         1045               209
10        880                88
20        600                30

Benchmark results are based on the average performance from multiple prompts across various domains, such as mathematics, physics, computer science, and philosophy.
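Two simple ratios make the concurrency tables easier to interpret: the average rate each client sees, and aggregate throughput relative to perfect linear scaling. A sketch using the DeepSeek-R1-Distill-Qwen-7B figures above:

```python
def per_thread_rate(total_tokens_per_sec: float, threads: int) -> float:
    """Average tokens/sec each concurrent client sees."""
    return total_tokens_per_sec / threads

def scaling_efficiency(total_tokens_per_sec: float, threads: int,
                       single_thread_rate: float) -> float:
    """Aggregate throughput relative to perfect linear scaling
    (1.0 = linear; below 1.0 means the GPU is saturating)."""
    return total_tokens_per_sec / (threads * single_thread_rate)

# Values from the DeepSeek-R1-Distill-Qwen-7B table:
print(per_thread_rate(810, 20))                    # → 40.5 tokens/sec per client
print(round(scaling_efficiency(410, 5, 79), 2))    # 5 threads: ~1.04 (near-linear)
print(round(scaling_efficiency(850, 50, 79), 2))   # 50 threads: ~0.22 (saturated)
```

The tables show throughput scaling almost linearly up to around 5 concurrent clients, then flattening as the GPU saturates, which is the expected behavior of a batched inference engine under load.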

Unlock AI potential with practical use cases

From document automation to creative workflows and system-wide automation, the QAI-h1290FX empowers every department to apply AI in meaningful, measurable ways—securely hosted on your own infrastructure.

No cloud lock-in, no complex setup—just real results driven by local LLMs, secure containers, and integrated QNAP features.

Smart HR Assistant – Internal Policy Chatbot

Build an internal Q&A assistant using AnythingLLM + Ollama. Upload HR documents such as employee handbooks, leave policies, and benefits to the QAI-h1290FX. Employees can ask natural questions like:

“How do I apply for family care leave?”

The system performs local RAG search and replies instantly, reducing HR workload and improving response time.

Creative Team AI Studio – Image Generation Hub

Design teams use Stable Diffusion and ComfyUI deployed on the QAI-h1290FX to generate promotional images, mockups, or stylized artwork via prompt input.
Thanks to GPU acceleration and persistent NAS storage, designers get fast and reproducible results—no more starting from scratch.

AI Co-Pilot for Developers – Docs, Code & Summaries

Engineering teams run LLMs like Qwen or Llama in Ollama on the QAI-h1290FX to assist with spec writing, code review, and technical translations.
Upload API docs or whitepapers, then chat with the model for clarifications, summaries, or even markdown formatting—completely offline and secure.

n8n + NAS Automation – Trigger AI from Anywhere

With n8n installed on the QAI-h1290FX, you can automate tasks that integrate AI with business operations. For example:

  • When a support email arrives, trigger an LLM to summarize it and draft suggested responses, saving them to drafts.
  • Connect to QNAP MCP to regularly check device status and give users AI-analyzed suggestions.
  • Analyze emails and conversations backed up to the NAS with an LLM to flag inappropriate content.

AI Docker applications on QAI-h1290FX

Run powerful AI solutions via Container Station & GPU integration.

  • AnythingLLM

    Enables teams to deploy customizable, private Large Language Models in a secure environment. Integrate various data sources and interact with your LLM for tailored Q&A, document parsing, and more—ideal for businesses prioritizing privacy and control.

  • OpenWebUI

    A versatile and user-friendly web interface for interacting with local or remote LLMs. OpenWebUI supports chat, prompt engineering, and conversation management in a sleek UI, simplifying AI adoption for users and developers alike.

  • Ollama

    Ollama makes running and experimenting with open-source language models effortless. With GPU support, developers can efficiently test, fine-tune, and serve powerful LLMs on-premises, keeping data secure and workflows agile.

  • ComfyUI

    An advanced, modular UI for generative AI workflows. ComfyUI allows users to create, visualize, and customize complex AI pipelines—including image synthesis and text-to-image tasks—using an intuitive drag-and-drop interface powered by GPU acceleration.

  • n8n

    n8n is a workflow automation tool that connects to hundreds of services, including AI models. Integrate business operations, trigger AI tasks, and automate repetitive processes—all within a secure, containerized environment, with support for powerful extensions and custom logic.

  • Whisper

    Whisper delivers versatile, multilingual speech processing in a single, open-source model. It accurately transcribes, translates, and identifies languages from diverse audio sources, enabling seamless voice-to-text workflows across industries. With support for on-premises deployment, teams can maintain data security while integrating advanced speech capabilities into their applications.

Transform your NAS into an AI-powered knowledge hub

Empower enterprise users to search, understand, and retrieve information faster than ever—powered by Qsirch and next-generation RAG Search. The QAI-h1290FX brings intelligence to your documents while keeping everything secure and on-premises.

Qsirch – Smart Full-Text Search

Qsirch offers blazing-fast full-text search across all your NAS files—PDFs, Office docs, emails, and more. With advanced filters, indexing, and preview functions, users can locate relevant documents instantly across terabytes of data. Learn more

RAG Search

RAG Search – AI-Powered Contextual Answers

RAG (Retrieval-Augmented Generation) Search takes things further by combining Qsirch with a local Large Language Model (LLM). When users ask a question, the system first retrieves relevant files via Qsirch, then generates an accurate, natural language answer using the on-device AI. Learn more
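The two-stage retrieve-then-generate flow can be sketched in a few lines of pure Python. This is a toy illustration only: a keyword-overlap score stands in for Qsirch's full-text index, and the assembled prompt would then be sent to the local LLM:

```python
def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Toy keyword-overlap retrieval standing in for Qsirch's index
    (the real system uses full-text indexing, not this scoring)."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda name: len(q_terms & set(documents[name].lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, documents: dict[str, str], top_k: int = 2) -> str:
    """Assemble the augmented prompt that would be passed to the local LLM."""
    context = "\n".join(documents[name] for name in retrieve(query, documents, top_k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical document store for illustration:
docs = {
    "leave.txt": "family care leave requires a request form and manager approval",
    "vpn.txt": "remote access uses the company vpn with two factor login",
}
print(build_prompt("how do I apply for family care leave", docs, top_k=1))
```

The key property is that both stages, retrieval and generation, run entirely on the NAS, so the answer is grounded in local documents without any cloud round trip.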

On QAI-h1290FX, RAG Search is fully on-premise:

  • No cloud APIs
  • No data leaks
  • Complete privacy and compliance

5-Year Standard Warranty

The QAI-h1290FX comes with a free warranty of up to 5 years, giving your organization greater peace of mind.

Edge AI Storage Server Comparison:
QAI-h1290FX vs. Other AI NAS vs. AI Workstation

Features | QNAP QAI-h1290FX | Other NAS brand (AI NAS) | AI Workstation
Core AI operating model | On-prem LLM + private RAG (models, data, and inference run locally) | Cloud-dependent AI (cloud search, labeling, or API-based services) | Local compute only (limited integration of data and AI workflows)
AI compute | Supports enterprise NVIDIA RTX PRO 6000 workstation GPU (96GB VRAM) | No GPU, or iGPU / NPU only | Varies by model, typically a single GPU
Model capability | Runs 70B+ LLMs, SDXL, and large open-source models | Limited to smaller models or cloud AI | Large models are costly and VRAM-limited
AI experience | Built-in container management with ready-to-deploy AI templates (LLM / RAG / automation) | Complex setup: Docker/YAML tuning + manual GPU configuration | Fully manual setup: OS, drivers, and container stack required
Core NAS capabilities | Enterprise NAS with ZFS, RAID, snapshots, WORM, and backup | Limited or consumer-oriented NAS features | No full NAS management or data protection
Deployment fit | Quiet tower design, ideal for offices and studios | Varies by model | Large, loud, heat-intensive; often needs a dedicated room/rack
Total value (TCO) | All-in-one AI + NAS with the lowest deployment and management overhead | Separate investments for AI and storage | Compute-focused system with limited data value built in

Need help?

Contact us now to learn more about products and solutions that meet your needs.

Optional Accessories

  • QM2-2P-244A

    QM2-2P-244A

    Dual M.2 22110/2280 PCIe SSD expansion card;

    Dimension (L × W × H): 170.5 × 20.6 × 68.9 (mm)

    Weight: 0.29 (kg)

    Please check the M.2 SSD compatibility list and QM2 Installation Guide

  • QM2-2P-344A

    QM2-2P-344A

    Dual M.2 PCIe SSD expansion card; M.2 2280/22110 PCIe NVMe (Gen3 x4) SSDs; PCIe Gen3 x4 host interface

    Dimension (L × W × H): 170.5 × 19.3 × 68.9 (mm)

    Weight: 0.30 (kg)

  • QM2-2P-384A

    QM2-2P-384A

    Dual M.2 PCIe SSD expansion card; M.2 2280/22110 PCIe NVMe (Gen3 x4) SSDs; PCIe Gen3 x8 host interface

    Dimension (L × W × H): 170.5 × 19.3 × 68.9 (mm)

    Weight: 0.30 (kg)

  • QM2-2P10G1TB

    QM2-2P10G1TB

    QM2 series, 2 x PCIe 2280 M.2 SSD slots, PCIe Gen3 x8, 1 x Marvell AQC113C 10GbE NBASE-T port

    Dimension (L × W × H): 152.65 × 18.9 × 68.9 (mm)

    Weight: 0.30 (kg)

  • QM2-2P2G2T

    QM2-2P2G2T

    QNAP QM2 series, 2 x PCIe 2280 M.2 SSD slots, PCIe Gen3 x4, 2 x Intel I225LM 2.5GbE NBASE-T ports

    Dimension (L × W × H): 152.65 × 20.6 × 68.9 (mm)

    Weight: 0.29 (kg)

  • QM2-2P410G1T

    QM2-2P410G1T (EOL)

    QM2 series, 2 x PCIe 2280 M.2 SSD slots, PCIe Gen4 x4, 1 x AQC113C 10GbE NBASE-T port

    Dimension (L × W × H): 187 × 19.35 × 68.9 (mm)

    Weight: 0.30 (kg)

  • QM2-2P410G2T

    QM2-2P410G2T (EOL)

    QM2 series, 2 x PCIe 2280 M.2 SSD slots, PCIe Gen4 x4, 2 x AQC113C 10GbE NBASE-T ports

    Dimension (L × W × H): 187 × 19.35 × 68.9 (mm)

    Weight: 0.30 (kg)

  • QM2-2S-220A

    QM2-2S-220A

    Dual M.2 22110/2280 SATA SSD expansion card;

    Dimension (L × W × H): 147.15 × 20.6 × 68.9 (mm)

    Weight: 0.30 (kg)

  • QM2-4P-384

    QM2-4P-384

    Quad M.2 PCIe SSD expansion card; supports up to four M.2 2280 form factor PCIe (Gen3 x4) SSDs; PCIe Gen3 x8 host interface; low-profile bracket pre-installed, low-profile flat and full-height brackets bundled

    Dimension (L × W × H): 204.95 × 68.9 × 20.6 (mm)

    Weight: 0.32 (kg)

  • QM2-4S-240

    QM2-4S-240 (EOL)

    Quad M.2 2280 SATA SSD expansion card

    Dimension (L × W × H): 204.95 × 68.9 × 20.6 (mm)

    Weight: 0.32 (kg)

  • QXG-100G2SF-E810

    QXG-100G2SF-E810

    Dual port 100GbE Network adapter; 2 x QSFP28; Intel E810 Ethernet controller

    Dimension (L × W × H): 169.6 × 69 × 18.7 (mm)

    Weight: 0.36 (kg)

  • QXG-10G1T

    QXG-10G1T

    Single-port (10Gbase-T) 10GbE network expansion card, PCIe Gen3 x4, Low-profile bracket pre-loaded, Low-profile flat and Full-height are bundled

    Dimension (L × W × H): 143 × 193 × 52 (mm)

    Weight: 0.53 (kg)

  • QXG-10G2SF-X710

    QXG-10G2SF-X710

    Dual-port SFP+ 10Gb network expansion card; low-profile form factor; PCIe Gen3 x8

    Dimension (L × W × H): 26 × 10.5 × 6 (mm)

    Weight: 0.29 (kg)

  • QXG-10G2T

    QXG-10G2T

    Dual-port 10GBASE-T 10Gb network expansion card; low-profile form factor; PCIe Gen3 x4

    Dimension (L × W × H): 54.5 × 39.5 × 18 (mm)

    Weight: 0.29 (kg)

  • QXG-10G2T-X710

    QXG-10G2T-X710

    Dual-port 10GbE Network Adaptor, Intel 700 series Ethernet Controller

    Dimension (L × W × H): 113.6 × 68.9 × 18.3 (mm)

    Weight: 0.24 (kg)

  • QXG-10G2TB

    QXG-10G2TB (EOL)

    Dual-port 10GbE Network Adaptor, Aquantia AQC113C

    Dimension (L × W × H): 104.7 × 16.1 × 68.9 (mm)

    Weight: 0.28 (kg)

  • QXG-25G2SF-CX6

    QXG-25G2SF-CX6

    Dual-port SFP28 25Gb network expansion card; Mellanox ConnectX-6 Lx; low-profile form factor; PCIe Gen4 x8

    Dimension (L × W × H): 120 × 16.5 × 69 (mm)

    Weight: 0.15 (kg)

  • QXG-25G2SF-E810

    QXG-25G2SF-E810

    Dual-port 25GbE (Intel E810-XXVAM2) network interface card (NIC)

    Dimension (L × W × H): 119.3 × 68.9 × 18.1 (mm)

    Weight: 0.23 (kg)

  • QXG-2G1T-I225

    QXG-2G1T-I225

    Single port 2.5GbE 4-speed Network card

    Dimension (L × W × H): 67.3 × 68.9 × 25.2 (mm)

    Weight: 0.19 (kg)

  • QXG-2G2T-I225

    QXG-2G2T-I225

    Dual port 2.5GbE 4-speed Network card

    Dimension (L × W × H): 81.3 × 68.9 × 25.2 (mm)

    Weight: 0.23 (kg)

  • QXG-2G4T-I225

    QXG-2G4T-I225

    Quad port 2.5GbE 4-speed Network card

    Dimension (L × W × H): 104.6 × 68.9 × 24.1 (mm)

    Weight: 0.24 (kg)

  • QXG-5G1T-111C

    QXG-5G1T-111C

    QNAP 5GbE multi-Gig expansion card; Aquantia AQC111C; PCIe Gen2 x1; low profile

    Dimension (L × W × H): 145 × 190 × 52 (mm)

    Weight: 0.20 (kg)

  • QXG-5G2T-111C

    QXG-5G2T-111C

    QNAP dual-port 5GbE multi-Gig expansion card; Aquantia AQC111C; PCIe Gen2 x2; low profile

    Dimension (L × W × H): 145 × 190 × 52 (mm)

    Weight: 0.20 (kg)

  • QXG-5G4T-111C

    QXG-5G4T-111C

    QNAP quad-port 5GbE multi-Gig expansion card; Aquantia AQC111C; PCIe Gen2 x4; low profile

    Dimension (L × W × H): 145 × 190 × 52 (mm)

    Weight: 0.23 (kg)

  • QXP-16G2FC

    QXP-16G2FC

    QNAP 2-port 16Gbps fiber channel adapter, PCIe 3.0 x8, SFP+, low profile, w/ SFP+ 16G transceivers

    Dimension (L × W × H): 190 × 143 × 50 (mm)

    Weight: 0.40 (kg)

  • QXP-32G2FC

    QXP-32G2FC

    QNAP 2-port 32Gbps fiber channel adapter, PCIe 3.0 x8, SFP+, low profile, w/ SFP+ 32G optical transceivers

    Dimension (L × W × H): 190 × 143 × 50 (mm)

    Weight: 0.40 (kg)

  • QXP-3X4PES

    QXP-3X4PES

    2 ports (SFF-8644) Expansion card; PCIe Gen3 x4 for QNAP PCIe JBOD series

    Dimension (L × W × H): 102.65 × 68.9 × 19 (mm)

    Weight: 0.11 (kg)

  • QXP-3X8PES

    QXP-3X8PES

    2 ports (SFF-8644 1x2) Expansion card; PCIe Gen3 x8 for QNAP PCIe JBOD series

    Dimension (L × W × H): 112.65 × 68.9 × 18.26 (mm)

    Weight: 0.17 (kg)
