One of the top uses of big data today is IT operations analytics. This makes sense. By nature, IT components are designed to log all of their many status messages, and this information is generated with debugging, tracking, and auditing purposes in mind. The aggregate output, however, can be a logistical problem in itself. For each device, some poor sysadmin has to decide what level of logging is desired, and then live with the consequences of that decision. Set the logging threshold to "errors only" and important context will be missing when it's time to diagnose an issue. Set the threshold to "everything" and staggering amounts of data will be generated, often too much to process and much of it of little or no value. Limit log retention to an hour or a day, and the key information may have been overwritten by the time it's needed, forcing the problem to be reproduced from scratch.
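The tradeoff can be sketched with Python's standard `logging` module. The component name and messages below are hypothetical, but the mechanics are the same on any device: the threshold decides, message by message, what survives.

```python
import logging

class ListHandler(logging.Handler):
    """Collects log messages in a list so we can see what each threshold keeps."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record.getMessage())

log = logging.getLogger("storage.controller")  # hypothetical component
log.propagate = False
handler = ListHandler()
log.addHandler(handler)

# "Errors only": the failure itself is recorded, but the context
# leading up to it (retries, latencies, state changes) is silently dropped.
log.setLevel(logging.ERROR)
log.debug("retrying write, attempt 3, latency 410ms")  # suppressed
log.error("write failed: disk timeout")                # kept

# "Everything": diagnosis is now possible, but every device emits
# every message, and volume grows accordingly.
log.setLevel(logging.DEBUG)
log.debug("retrying write, attempt 3, latency 410ms")  # kept

print(handler.records)
# ['write failed: disk timeout', 'retrying write, attempt 3, latency 410ms']
```

Note that the dropped debug line cannot be recovered after the fact; the sysadmin's threshold choice is made before anyone knows which messages will matter.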