A model designed with 3 factors will obviously be less effective than one designed with 300 factors. This is where VOLUME comes into play in big data. Huge amounts of data are produced daily, but the challenge lies in how to store and process them effectively. What was needed was scalable storage and distributed querying. Because traditional relational databases were unable to cope with this huge volume of data, massively parallel processing (MPP) architectures such as data warehouses and MPP databases provided the technology for structured data, while systems like HDFS and Bigtable did so for unstructured data.
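As a rough illustration of the scalable-storage idea, the sketch below mimics how an HDFS-style system splits a large file into fixed-size blocks and replicates each block across several nodes. This is a toy model, not the real HDFS implementation; the block size, replication factor, and node names are hypothetical, and real HDFS uses a NameNode with rack-aware placement rather than simple round-robin.

```python
# Toy sketch of HDFS-style block splitting and replication (not real HDFS).

BLOCK_SIZE = 4          # bytes per block here; HDFS defaults to 128 MB
REPLICATION = 2         # copies of each block; HDFS defaults to 3
NODES = ["node-0", "node-1", "node-2"]  # hypothetical cluster nodes

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split raw bytes into fixed-size blocks, the unit of storage."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_blocks(blocks, nodes=NODES, replication=REPLICATION):
    """Assign each block to `replication` distinct nodes, round-robin.

    Replication means a node failure loses no data: another node
    still holds a copy of every block the failed node stored.
    """
    placement = {}
    for idx in range(len(blocks)):
        placement[idx] = [nodes[(idx + r) % len(nodes)]
                          for r in range(replication)]
    return placement

data = b"huge volume of data!"       # 20 bytes -> 5 blocks of 4 bytes
blocks = split_into_blocks(data)
placement = place_blocks(blocks)
print(len(blocks))                   # -> 5
print(placement[0])                  # -> ['node-0', 'node-1']
```

The same splitting idea is what lets queries run in parallel: each node processes only the blocks it holds, which is the essence of distributed querying in MPP systems.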
Asked In: Many Interviews