We will detail our system architecture for the CNDA, Washington University's flagship installation of XNAT. The current architecture (as of May 2010) responsively handles traffic for more than 400 projects and more than 10,000 imaging sessions. We will also sketch out our future plans to improve the system's reliability and scalability.

Current CNDA Architecture (May 2010)


Figure: current_cnda_architecture.png (original: current_cnda_architecture.svg)

Reverse Proxy / Load Balancer

The Kemp LoadMaster handles SSL communication with clients and forwards unencrypted traffic to the Tomcat and DicomServer instances. The Kemp includes hardware-accelerated SSL, which may be more capability than many installations need; other reverse proxies, such as Apache with mod_proxy, HAProxy, nginx, or Pound, can fill the same role.
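
For installations without a hardware load balancer, a software reverse proxy can serve the same purpose. A minimal sketch using Apache with mod_ssl and mod_proxy is shown below; the hostnames, ports, and certificate paths are placeholders, not our actual configuration.

    # Terminate SSL at Apache and forward plain HTTP to a Tomcat instance
    # (hostnames and paths are hypothetical)
    <VirtualHost *:443>
        ServerName            cnda.example.org
        SSLEngine             on
        SSLCertificateFile    /etc/httpd/ssl/cnda.crt
        SSLCertificateKeyFile /etc/httpd/ssl/cnda.key

        ProxyPass        / http://app1.internal:8080/
        ProxyPassReverse / http://app1.internal:8080/
    </VirtualHost>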

App Servers

We currently run two Tomcat 5.5 web containers, each on a dual-core Xeon server with about 4 GB of RAM. The app servers also run DicomServer. The Kemp is configured to route all traffic to a single app server until that server fails. A single app server comfortably handles our daily traffic.
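
Because SSL is terminated at the proxy, Tomcat should be told that clients actually connect over https so that redirects and generated URLs come out correctly. One way to do this is through the connector attributes in server.xml; the values below are illustrative, not our production settings.

    <!-- server.xml: plain-HTTP connector behind an SSL-terminating proxy
         (hostname and ports are hypothetical) -->
    <Connector port="8080" protocol="HTTP/1.1"
               proxyName="cnda.example.org" proxyPort="443"
               scheme="https" secure="true" />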

Database Server

A single instance of PostgreSQL 8.3 runs on a quad-core Xeon server with 16 GB of RAM. We have tuned PostgreSQL to take advantage of the available memory.
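
The exact values depend on the workload, but memory-related tuning for a dedicated 16 GB PostgreSQL 8.3 host typically centers on parameters like the following; these numbers are illustrative starting points, not our production configuration.

    # postgresql.conf (illustrative values for a dedicated 16 GB server)
    shared_buffers       = 4GB      # roughly 25% of RAM is a common starting point
    effective_cache_size = 12GB     # how much OS file cache the planner can assume
    work_mem             = 32MB     # per-sort / per-hash memory
    maintenance_work_mem = 512MB    # VACUUM, CREATE INDEX
    checkpoint_segments  = 32       # spread out checkpoint I/O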

Storage

XNAT's archive lives on an NFS mount from a BlueArc NAS. The NAS transparently handles replication to a second instance running at a remote site. We have currently dedicated roughly 20 TB of the BlueArc to the CNDA.
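
From the app servers' point of view, the archive is an ordinary NFS mount. An /etc/fstab entry along these lines would do the job; the export path, mount point, and options are hypothetical.

    # /etc/fstab: mount the BlueArc export that holds the XNAT archive
    bluearc.internal:/cnda/archive  /data/CNDA/archive  nfs  rw,hard,intr,rsize=32768,wsize=32768  0 0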

Processing

The CNDA uses a variety of machines for its pipeline processing, dispatched through the Sun Grid Engine. The pool includes both Solaris and Linux hosts, 32-bit and 64-bit, and the appropriate platform is selected as required by each specific pipeline.
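
Pipelines are dispatched as ordinary Sun Grid Engine jobs, so the required platform can be expressed as a resource request at submission time. A hypothetical submit script illustrating the idea (job name, resource values, and paths are examples only):

    #!/bin/bash
    # Hypothetical SGE submit script for one pipeline step
    #$ -N cnda_pipeline_step
    #$ -l arch=lx24-amd64     # request a 64-bit Linux execution host
    #$ -l h_vmem=4G           # per-job memory limit
    #$ -cwd
    #$ -j y
    #$ -o logs/

    /path/to/pipeline/step.sh "$@"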

Planned CNDA Architecture

Our current architecture is more than adequate for the CNDA's existing usage. However, there are a few single points of failure in the design. We are working on a plan to improve resiliency while also improving our ability to scale.

Figure: future_cnda_architecture.png (original: future_cnda_architecture.svg)

Changes we are planning / investigating:
  • Dual Kemp LoadMasters sharing a Virtual IP to provide reverse proxy redundancy
  • PostgreSQL warm standby (hot standby once 9.0 is released) to provide database redundancy; a configuration sketch follows this list
  • Integration of Washington University's supercomputer for pipelines
  • Backup SAN for long-term data archival and emergency BlueArc fallback
  • Automated app server creation and configuration, potentially implemented via virtual machine snapshots or a configuration management system (Puppet, CFEngine)
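
The warm standby approach relies on PostgreSQL's built-in WAL archiving on the primary plus the pg_standby utility replaying shipped segments on the standby. A minimal sketch, with hostnames and paths as placeholders:

    # Primary, postgresql.conf: ship completed WAL segments to the standby
    archive_mode    = on
    archive_command = 'rsync -a %p standby.internal:/var/lib/pgsql/wal_archive/%f'

    # Standby, recovery.conf: replay shipped segments until a trigger file appears
    restore_command = 'pg_standby -t /tmp/pgsql.trigger /var/lib/pgsql/wal_archive %f %p'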
