Intensivate has deep experience in enterprise software, and designed the technology to fit existing, complex code ecosystems. More importantly, we understand the pain caused by disrupting your existing software environment, and the barrier presented by learning new technology and adopting new ways of architecting and implementing your code.
For these reasons, Intensivate has created a new kind of processor that runs your existing application code and the full ecosystem around it. That includes booting standard, upstream OS distributions like Fedora (Red Hat) and Ubuntu, popular management tools like Docker and Kubernetes, and popular cluster computing platforms like Kafka, Hadoop, and Spark.
As seen in the illustration below, an IntenScale card plugs into a Host server. The card itself contains a number of independent servers. The Host's role is simply to route traffic between the network and the IntenScale card; to supply power, cooling, and housing for the card; and to provide disk storage for it.
One difference from other kinds of accelerators is that Intensivate does not split an application between Host and accelerator. Rather, the Host provides infrastructure that supports the 21 servers on the card. Each server runs its own code, in its own address space, with its own network connections. It's as if a shrink ray took half a rack and put it onto the card: each server on the card has its own set of IP addresses, boots its own OS, and provides the full server software environment.
What makes it an accelerator is that the CPU has been specially designed for cluster computing. It can run any source code, but it delivers its full advantage when the application has been architected to run on a cluster and scale across multiple servers. (Multi-threaded applications also take full advantage of the IntenScale processor.) Thus, the hardware is specialized to a certain class of applications, making it an accelerator for that class. On this kind of code, you get 12 times more servers for your money, and 12 times more performance.
This figure shows the software stack that runs on the Host. The Host sees the IntenScale card as a NIC. Behind that NIC, it sees a network of IntenServers. The Host shuttles packets between this sub-net of IntenServers and the outside network.
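As a concrete sketch of this routing role, the Host can be set up with standard Linux forwarding. The interface names and the sub-net address below are illustrative assumptions, not Intensivate's actual device names or addressing scheme:

```shell
# Hypothetical Host-side routing setup -- interface names and addresses
# are assumptions for illustration only.
#   eth0   = Host's connection to the outside network
#   inten0 = the IntenScale card, which the Host sees as a NIC

# Enable IPv4 forwarding so the Host routes packets between interfaces
sysctl -w net.ipv4.ip_forward=1

# Route traffic for the IntenServer sub-net (assumed here to be 10.0.42.0/24)
# through the card's NIC
ip route add 10.0.42.0/24 dev inten0

# If the IntenServers use private addresses, NAT their outbound traffic
iptables -t nat -A POSTROUTING -s 10.0.42.0/24 -o eth0 -j MASQUERADE
```

This is ordinary Linux networking; nothing card-specific is required on the Host beyond the NIC driver.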
The figure below shows the software stack running on an IntenServer. It is the normal Linux software stack. Notice that the file system is NFS (Network File System); iSCSI is also available. This allows the disk to reside on the Host or elsewhere on the network. The TCP/IP stack is the standard, upstream stack that comes with the OS. At the bottom of the TCP stack sits the Intensivate device driver. This driver presents itself to the OS as a normal NIC driver, while internally it uses the on-chip network to talk to the IntenSwitch that is integrated into the same silicon as the IntenCore processors.
An NFS boot disk is supplied for each IntenServer, either on the Host or on the network. Firmware embedded in each server holds a PXE boot implementation, which includes its own TCP stack and its own NFS client. PXE runs at power-on reset and uses that firmware TCP stack to boot from the NFS disk.
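A minimal sketch of what serving such a boot can look like when the Host plays that role, using standard Linux tooling. The use of dnsmasq, and all paths and addresses, are assumptions for illustration, not Intensivate's actual configuration:

```shell
# Hypothetical PXE + NFS-root boot served from the Host.
# All names, paths, and addresses below are illustrative assumptions.

# dnsmasq on the Host answers the servers' PXE requests
# (fragment of /etc/dnsmasq.conf):
#   dhcp-range=10.0.42.10,10.0.42.30
#   dhcp-boot=pxelinux.0
#   enable-tftp
#   tftp-root=/srv/tftp

# One NFS root filesystem exported per IntenServer
# (fragment of /etc/exports):
#   /srv/nfsroot/intenserver01  10.0.42.0/24(rw,no_root_squash,async)

# Kernel command line handed to each server, mounting root over NFS:
#   root=/dev/nfs nfsroot=10.0.42.1:/srv/nfsroot/intenserver01 ip=dhcp rw
```

Because this is the stock Linux network-boot path, the same NFS roots could equally live on a storage server elsewhere on the network rather than on the Host.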
This graphic represents how the IntenScale card achieves 12x performance. Head to head, one IntenCore delivers roughly the same performance as a current-generation Xeon core. However, thanks to lower purchase cost, lower electricity consumption, less cooling, and less floorspace, the IT department gets 12 times more of these cores for the same total cost of ownership. Stacked up, they deliver 12 times more instructions per second and finish the work in 1/12 the time.
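The 12x claim is a cost ratio, not a per-core speedup. A back-of-the-envelope version of the arithmetic makes this concrete; the specific dollar figures below are illustrative assumptions, not published numbers, and only the structure of the calculation follows the text:

```python
# Illustrative TCO arithmetic -- the dollar figures are assumptions.
# The point: equal-performance cores at 1/12 the all-in cost yield
# 12x the cores, hence 12x the throughput, for a fixed budget.

def cores_per_budget(budget, cost_per_core):
    """How many cores a fixed total-cost-of-ownership budget buys."""
    return budget // cost_per_core

budget = 120_000          # fixed TCO budget (purchase + power + cooling + floorspace)
xeon_core_tco = 1_200     # assumed all-in cost per Xeon core
inten_core_tco = 100      # assumed all-in cost per IntenCore (1/12 of the above)

xeon_cores = cores_per_budget(budget, xeon_core_tco)    # 100 cores
inten_cores = cores_per_budget(budget, inten_core_tco)  # 1200 cores

# With roughly equal per-core performance, throughput scales with core count:
speedup = inten_cores / xeon_cores
print(speedup)  # 12.0
```

Equivalently, a fixed amount of work that keeps all cores busy completes in 1/12 the wall-clock time.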