...

In addition to NSF support, Intel (formerly Altera), Nallatech, Xilinx, and Alpha-data have all committed FPGA/board donations. Intel has committed its entire suite of CAD tools and provided funds for the CAD tool servers, IBM has donated POWER8 servers, Microsoft has provided Catapult servers and funds for operating them, Nvidia has donated its GPUs, Bluespec has committed its Bluespec compiler, and Impulse Accelerated Technologies has committed its ImpulseC C-to-gates compiler. We are in active discussions with other companies who are interested in contributing their technologies to FAbRIC.

This material is based on work supported by the National Science Foundation under Grant No. 1205721 and generous donations and technical support from Alpha-data, Bluespec, IBM, ImpulseC, Intel, Microsoft, Nallatech, Nvidia, and Xilinx.

...

If you would like access to Intel (formerly Altera) FPGAs and tools (IBM Power8+CAPI, Microsoft Catapult, Intel Hardware Accelerator Research Platform), please also get a myAltera account (https://www.altera.com/mal-all/mal-signin.html) and forward the confirmation email to account-request@openfabric.org as well.

...

  • Nallatech 385 A7 Altera Stratix V-based FPGA adapter
  • Alpha-data 7V3 Virtex7 Xilinx-based FPGA adapter
  • NVIDIA Tesla K40m GPGPU card

...

The Microsoft Catapult system consists of 432 two-socket Intel Xeon-based nodes, each with 64 GB of memory and an Altera Stratix V D5 FPGA with 8 GB of local DDR3 memory. Each FPGA communicates with its host CPUs via a PCIe Gen3 x8 connection, providing 8 GB/s guaranteed-not-to-exceed bandwidth, and can read and write data stored on its host node over this connection. The FPGAs are connected to one another via a dedicated network using high-speed serial links. This network forms a two-dimensional 6x8 torus within a pod of 48 servers and provides low-latency communication between neighboring FPGAs. This design supports the use of multiple FPGAs to solve a single problem, while adding resilience to server and FPGA failures.
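To make the torus topology above concrete, the sketch below computes the wrap-around neighbors of an FPGA in a 6x8 grid. The coordinate scheme and function names are illustrative assumptions for this document, not the actual Catapult routing firmware; the only facts taken from the text are the 6x8 dimensions, the 48-node pod size, and the four-neighbor (2-D torus) connectivity.

```python
# Illustrative sketch only: neighbor computation for a 2-D 6x8 torus
# of 48 FPGAs. Coordinates and naming are assumptions, not the actual
# Catapult implementation.
ROWS, COLS = 6, 8  # 48 servers per pod

def torus_neighbors(r, c):
    """Return the four wrap-around neighbors of the FPGA at (r, c)."""
    return [
        ((r - 1) % ROWS, c),  # north (wraps from row 0 to row 5)
        ((r + 1) % ROWS, c),  # south
        (r, (c - 1) % COLS),  # west (wraps from column 0 to column 7)
        (r, (c + 1) % COLS),  # east
    ]
```

Because every node has exactly four distinct neighbors and the links wrap at the edges, a message between any two FPGAs in a pod never needs more than a few serial-link hops, which is what keeps neighbor communication low-latency.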

Per Node:

  • Two Intel® Xeon® E5-2450, 2.1GHz, 8-core, 20MB Cache, 95W
  • 64GB RAM
  • Four 2TB 7.2k 3G SATA 3.5"; Two 480GB 6G Micron SATA SSD 2.5"
  • Intel 82599 10GbE Mezz Card
  • Altera Stratix V FPGA Card
  • Operating System: Windows Server 2012

...

The gateway node: Dell R720 server, 64GB memory, 16 cores (Intel® Xeon® CPU E5-2670 @ 2.60GHz)

The compute node is a Convey MX system with 128 GB of RAM, 100 GB/s of memory bandwidth at 64-bit access granularity, and 4 user "application" FPGAs.

...