Fermilab Computing Division

CS Document 5523-v1

Big Data Challenges in the Era of Data Deluge

Document #: CS-doc-5523-v1
Document type: Presentation
Submitted by: Tanya Levshina
Updated by: Tanya Levshina
Document Created: 27 Feb 2015, 11:50
Contents Revised: 27 Feb 2015, 11:50
Metadata Revised: 27 Feb 2015, 11:50
Viewable by: Public document
Modifiable by:

Abstract:
For better or worse, the amount of data generated in the world is growing exponentially. 2012 was dubbed the year of Big Data and the Data Deluge; in 2013 the petabyte scale was referenced metaphorically, and by 2014 the exabyte had entered the vocabulary of storage providers and large organizations. Traditional copy-based technology does not scale to this size: relational databases give up at many billions of rows per table, and typical file systems are not designed to store trillions of objects. Disks fail, and networks are not always available. Yet individuals, businesses, and academic institutions demand 100% availability with no data loss. Is this a dead end? This seminar will discuss design principles for storage based on the Information Dispersal Algorithm (IDA): unlimited in scale, with a high level of reliability and availability, and with unbounded, scalable indexing and metadata support. We will describe the practical elements of a viable commercial large-scale storage system, including the dsNet's broad set of operational characteristics: supported interfaces, rebuilding capabilities, performance characteristics, system installation, upgrade, configurability, expansion, security, monitoring, disk management, disk-failure prediction and failure management, limping-hardware handling, and the Cleversafe dsNet hardware option.
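For readers unfamiliar with information dispersal, the Python sketch below illustrates the core idea: data is cut into k slices plus redundancy, so that any k of the n stored slices reconstruct the original and no complete copy is kept anywhere. This toy uses a single XOR parity slice (n = k + 1, tolerating one lost slice); that simplification is our assumption for illustration only, not Cleversafe's implementation, which uses a general (k, n) erasure code to survive multiple simultaneous failures.

    def disperse(data: bytes, k: int) -> list[bytes]:
        """Cut data into k slices plus one XOR parity slice (n = k + 1)."""
        pad = (-len(data)) % k                      # pad to a multiple of k
        padded = data + bytes(pad)
        size = len(padded) // k
        slices = [padded[i * size:(i + 1) * size] for i in range(k)]
        parity = bytearray(size)
        for s in slices:                            # parity = XOR of all data slices
            for j in range(size):
                parity[j] ^= s[j]
        return slices + [bytes(parity)]

    def reconstruct(survivors: dict[int, bytes], k: int, length: int) -> bytes:
        """Rebuild the original from any k of the k + 1 slices.

        survivors maps slice index (0..k, where index k is parity) to its
        bytes; length is the original size, kept as metadata by the store.
        """
        size = len(next(iter(survivors.values())))
        missing = [i for i in range(k + 1) if i not in survivors]
        assert len(missing) <= 1, "one parity slice tolerates only one loss"
        if missing and missing[0] < k:              # a data slice was lost:
            rebuilt = bytearray(size)               # XORing all survivors
            for s in survivors.values():            # recovers it
                for j in range(size):
                    rebuilt[j] ^= s[j]
            survivors = {**survivors, missing[0]: bytes(rebuilt)}
        return b"".join(survivors[i] for i in range(k))[:length]

    # Example: five slices on five "disks"; disk 2 fails, the data survives.
    blob = b"exabyte-scale stores disperse slices instead of keeping copies"
    parts = disperse(blob, k=4)
    survivors = {i: s for i, s in enumerate(parts) if i != 2}
    assert reconstruct(survivors, k=4, length=len(blob)) == blob

Because rebuilding after a failure touches only the affected slices rather than whole replicas, dispersal-based designs avoid the copy traffic that makes replication impractical at exabyte scale.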
Files in Document:
Associated with Events: Big Data Challenges in the Era of Data Deluge, held on 26 Feb 2015 in WH 1 West
