- Memory Type: REG
- Form factor: 2U
- Brand: Asus
- Model: ESC4000 G2
- Interface Type: Other
- Maximum CPUs supported: 2
- Standard Memory: 8GB
- Hard drive capacity: Other Capacities
- Processor clock speed: 2.1GHz
- Service: Genius
- Server Type: Rack
This product carries the official manufacturer warranty. If you need to change the standard configuration, please contact our customer service.
- One Intel® Xeon® E5-2620 v2 processor as standard (six cores, 2.1GHz, 15MB L3 cache, 7.2GT/s QPI); expandable to two processors;
- Standard 8GB (1x8GB) DDR3 1600 ECC REG memory;
- No hard drive included as standard;
- Standard SATA DVD-RW drive;
- Standard rail kit
Brief specifications:
CPU: supports two Intel® Xeon® E5-2600 v2 series processors;
Memory: 16 DDR3 RECC memory slots, supporting up to 512GB;
Memory slots: 16 (4 channels per CPU, 8 DIMM slots per CPU)
Memory capacity: up to 512GB RDIMM / up to 128GB UDIMM
Memory type: quad-channel DDR3 1866/1600/1333/1066/800 RDIMM;
quad-channel DDR3 1866/1600/1333/1066 UDIMM;
quad-channel DDR3 1866/1600/1333/1066 LRDIMM
Memory module sizes: 32GB, 16GB, 8GB, 4GB, 2GB, 1GB RDIMM;
8GB, 4GB, 2GB, 1GB UDIMM;
32GB, 16GB, 8GB LRDIMM
Storage: 8 hot-swappable 3.5" drive bays; embedded SATA controller with Intel® Rapid Storage Technology Enterprise (RSTe) (Windows only; supports software RAID 0, 1, 5, 10)
or LSI® MegaRAID (Linux/Windows; supports software RAID 0, 1, 10)
Network: integrated Intel® 82574L dual-port Gigabit server adapter;
I/O expansion slots: eight full-length, full-height PCI Express 3.0 x16 slots for GPU / supercomputing cards, plus one PCIe adapter slot;
Parallel Computing Cards:
Optional NVIDIA Tesla M2050 (3GB memory)
Optional NVIDIA Tesla M2070 (6GB memory)
Optional NVIDIA Tesla M2090 (6GB memory)
Optional NVIDIA Tesla K10 (8GB memory)
Supports up to four M2050 / M2070 / M2090 / K10 cards
Power supply: 1620W 80 PLUS Platinum certified, hot-swappable, 1+1 redundant power supplies;
Management software: ASWM 2.0 server management suite;
Other: optional ASMB6-iKVM remote management module; optional ASUS PIKE SAS array card (requires ordering the SKU with the PIKE slot riser card)
Dimensions: 750mm x 444mm x 88mm
Form factor: 2U rackmount
Cooling: seven 8cm system fans
The ESC4000 G2 supercomputer uses a CPU + GPU collaborative computing architecture. Custom-built by ASUS for small and medium cluster customers, it delivers trillions of floating-point operations per second in a single node and is designed for professionals in life sciences, medicine, engineering, financial modeling, electronic design automation, and other industries. Its optimized cooling solution not only ensures reliable operation but also reduces noise, providing users with a high-performance, highly stable supercomputing workstation.
Collaborative Accelerated Computing Architecture
By innovatively introducing GPU computing units, the system breaks with the traditional single-CPU computing model. Using the latest Intel Xeon processors together with NVIDIA Tesla accelerated computing technology, the CPU and GPU each do what they do best: the CPU handles logic, branching, and control flow, while the GPU takes on compute-intensive, highly parallel work. This rational allocation of computing resources fully unlocks computing power, improving performance by anywhere from several times to several hundred times.
Massively parallel computing processing cores
Compared with a multi-core CPU, which runs only a handful of threads at once, a GPU can execute thousands of threads simultaneously, allowing the system to process far more work in parallel. The current Tesla K10 compute card has 3,072 computing cores and a peak processing speed of 4.5 trillion floating-point operations per second. Through its scalable architecture, the number of GPUs can be increased elastically to match computing needs and achieve even higher performance.
Excellent programming environment
The CUDA general-purpose parallel computing architecture enables the GPU to solve complex computational problems; it comprises the CUDA instruction set architecture and the parallel compute engine inside the GPU. Developers can write programs for the CUDA architecture in C, the most widely used high-level programming language, and the resulting programs run with very high performance on CUDA-enabled processors.
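As a rough illustration of the programming model described above (a generic CUDA C sketch, not code specific to this product), the host CPU prepares the data and launches a kernel, and thousands of GPU threads each compute one array element:

```cuda
#include <stdio.h>

/* Minimal CUDA C sketch: element-wise vector addition.
   Each GPU thread computes one element, showing how thousands of
   lightweight threads run the same C function in parallel. */
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                    /* one million elements */
    size_t bytes = n * sizeof(float);

    /* Host-side buffers. */
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    /* Device-side buffers; copy inputs to GPU memory. */
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover all n elements. */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    /* Copy the result back and check one element (1.0 + 2.0 = 3.0). */
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Compiled with `nvcc`, this pattern (allocate, copy in, launch, copy out) is the basic shape of most CUDA C programs; for compute-heavy workloads the massively parallel kernel step is where the speedup comes from.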
Electronic Design Automation (EDA): SPICE, Verilog, 3D EM, etc.
Engineering Science: CAD / CAM / CAE, astrophysics, CFD, etc.
Life Sciences: Molecular dynamics, gene sequencing, protein folding, etc.
Oil and gas: seismic data processing, reservoir simulation
Meteorology: weather and ocean modeling (WRF), etc.