What's DriveScale up to? Mix-'n'-match server and disk storage, for starters
Backgrounder DriveScale is a startup that emerged from a three-year stealth effort earlier this year with hardware and software to dynamically present externally connected JBODs to servers as if they were local. The idea is to provide composable server-storage combos for changing Hadoop-type distributed workloads.
It is meant to give enterprises the characteristics of hyper-scale data centres without that degree of scale or DIY effort. DriveScale says existing server and storage product designs are too rigid for distributed Hadoop-style workloads, leaving server and storage resources variously under- and over-provisioned.
Servers and storage should be managed as separate resource pools. As the company has sterling credibility based on its exec staff's combined Sun and Cisco experience, it is worth a look.
The software composes systems: servers with local (in-rack) disk storage, which can be decomposed when a Big Data Hadoop workload changes and then composed afresh in a new configuration.
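The compose/decompose cycle amounts to allocating servers and drives from two separate pools and returning them later. A minimal sketch of the idea (all names are hypothetical illustrations, not DriveScale's actual software):

```python
# Minimal sketch of composing systems from separate server and drive
# pools. Hypothetical model for illustration, not DriveScale's software.

class RackPools:
    def __init__(self, servers, drives):
        self.free_servers = list(servers)
        self.free_drives = list(drives)
        self.clusters = {}

    def compose(self, name, n_servers, drives_per_server):
        """Allocate servers and drives from the free pools into a cluster."""
        needed = n_servers * drives_per_server
        if len(self.free_servers) < n_servers or len(self.free_drives) < needed:
            raise ValueError("insufficient free resources in rack")
        servers = [self.free_servers.pop() for _ in range(n_servers)]
        drives = [self.free_drives.pop() for _ in range(needed)]
        self.clusters[name] = (servers, drives)
        return servers, drives

    def decompose(self, name):
        """Return a cluster's servers and drives to the free pools."""
        servers, drives = self.clusters.pop(name)
        self.free_servers.extend(servers)
        self.free_drives.extend(drives)
```

When a workload's shape changes, a cluster is decomposed and a new one composed with a different server-to-drive ratio, with no physical recabling.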
It works by DriveScale software virtualising storage in a rack of disk drive enclosures and presenting it across 10GbitE links to servers in the rack. There is a top-of-rack Ethernet switch, and the servers and SAS JBOD storage enclosures in the rack have Ethernet ports. DriveScale adapters (virtualisation cards), which use SAS, are interposed between the server and storage Ethernet ports.
The adapters feature an Ethernet controller (2 x 10GbitE) and a SAS (2 x 12Gbit/s 4-lane) controller. A 1U DriveScale appliance chassis holds four of these adapter cards and links to the TOR Ethernet switch. The adapter enclosure has 80Gbit/s of bandwidth. Server access to disk passes over this Ethernet and DriveScale system. The Ethernet and SAS links inevitably add some latency to disk accesses, around 200 microseconds.
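To put that 200 microseconds in context: a typical 7,200rpm disk access (seek plus rotational latency) is on the order of 10ms, so the fabric adds only a couple of per cent. A back-of-envelope check (the 10ms HDD figure is a general assumption, not DriveScale's number):

```python
# Back-of-envelope: added fabric latency vs. typical HDD access time.
fabric_latency_us = 200      # DriveScale's quoted Ethernet/SAS path overhead
hdd_access_us = 10_000       # ~10ms typical HDD seek + rotational latency (assumption)

overhead_pct = 100 * fabric_latency_us / hdd_access_us
print(f"{overhead_pct:.0f}% added latency")  # prints "2% added latency"
```

For spinning disk, then, the disaggregation penalty is largely lost in the noise; it would matter far more for low-latency flash.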
The servers run Linux and have a DriveScale software agent. DriveScale has a management facility which is used to compose (configure or allocate) storage to one or more servers, and to configure the software agents.
The agent presents storage that has been composed for a server as local (directly attached) to that server, using the DriveScale server agent and adapters to hide the fact that it is actually externally connected over Ethernet.
There is a SaaS overall management facility. Users control DriveScale with a GUI or via a RESTful API. The JBODs could contain SSDs in the future.
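DriveScale hasn't published its API details, but a REST-driven compose request would look something like the following hypothetical sketch, where the endpoint and field names are invented for illustration:

```python
import json

# Hypothetical REST compose request. The endpoint and every field name
# here are invented for illustration, not DriveScale's documented API.
request = {
    "method": "POST",
    "path": "/api/v1/clusters",       # invented endpoint
    "body": {
        "name": "hadoop-prod",
        "servers": 8,
        "drives_per_server": 12,      # drawn from the rack's JBOD pool
    },
}

payload = json.dumps(request["body"])
```

The point is simply that composition becomes an API call rather than a cabling job, so it can be scripted alongside the rest of a Hadoop deployment.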
Founding, founders, and funding
DriveScale received seed funding of $3m when it was founded in 2013 by chief scientist Tom Lyon and CTO Satya Nishtala. Lyon was an early Sun employee, and he and Nishtala worked for Gene Banman there. They were founders of the Nuova Systems spin-in which developed Cisco's UCS server technology. At Sun, Lyon worked on Sparc processor design and SunOS, while Nishtala was involved with Sun storage, UltraSparc workgroup servers and workstation products.
The CEO is Gene Banman and Duane Northcutt, an ex-Sun hardware architect, is VP Engineering. Ex-Sun CEO and co-founder Scott McNealy and Java man James Gosling are its advisers.
The company had a $15m A-round earlier this year, led by Pelion Venture Partners with participation from Nautilus Venture Partners and Foxconn’s Ingrasys. Ingrasys is a wholly owned Foxconn subsidiary, and helped to develop, and manufactures, the DriveScale appliance.
In summary, DriveScale says the time is right for a disaggregated design. Its technology can enable businesses to have a scale-out architecture as seen in web-scale businesses (Amazon, Facebook, Google, etc), giving them far more flexible scaling of server and storage resources. ®