Extracting Knowledge from the Data Deluge to Reveal the Mysteries of the Universe

author: Melanie Johnston-Hollitt, Singularity University
published: Dec. 10, 2019,   recorded: October 2019,   views: 50
Description

Astrophysics is one of the most data-intensive research fields of the modern world and, as such, provides a unique context to drive many of the required innovations in the "big data" regime. Radio astronomy in particular is foremost in the field in terms of big data generation, and thanks to sustained global investment in the discipline over the last decade, present telescopes generate tens of petabytes of data per annum. The pinnacle of this ongoing 'radio renaissance' will be the Square Kilometre Array (SKA), a global observatory tasked with probing the deepest mysteries of the Universe. The SKA will create the highest-resolution, fastest-frame-rate movie of the evolving Universe ever made, and in doing so will generate 160 terabytes of raw data per second, or close to 5 zettabytes of data per annum. These data will be reduced to of order 1 petabyte of image cubes per day, which will be processed, curated, and ultimately distributed via a network of coordinated, tiered compute facilities to the global astronomical community for scientific exploitation. However, this truly data-rich environment will require new automated and semantic processes to fully exploit the vast sea of results generated. In fact, to fully realize the enormous scientific potential of this endeavour, we will need not only better data-tagging and coordination mechanisms, but also improved algorithms, artificial intelligence, semantics, and ontologies to track and extract knowledge in an automated way at a scale not yet attempted in science. In this keynote I will present an overview of the SKA project, outline the "big data" challenges the project faces, and discuss some of the approaches we are taking to tame this astronomical data deluge.
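As a rough sanity check of the volumes quoted above, the following minimal sketch works through the arithmetic. It assumes decimal (SI) unit prefixes and the widely quoted SKA raw rate of 160 terabytes per second; the numbers are order-of-magnitude estimates, not project specifications.

```python
# Back-of-envelope check of the SKA data-volume figures.
TB = 10**12   # bytes in a terabyte (decimal prefix)
PB = 10**15   # bytes in a petabyte
ZB = 10**21   # bytes in a zettabyte

seconds_per_year = 365.25 * 24 * 3600  # ~3.16e7 s

# Raw data: 160 TB/s accumulated over a year.
raw_per_year = 160 * TB * seconds_per_year
print(f"Raw data per annum: {raw_per_year / ZB:.1f} ZB")  # ~5.0 ZB

# Processed products: ~1 PB of image cubes per day.
cubes_per_year = 1 * PB * 365.25
print(f"Image cubes per annum: {cubes_per_year / PB:.0f} PB")  # ~365 PB
```

A raw rate of 160 TB/s does indeed accumulate to roughly 5 zettabytes per annum, consistent with the figures in the abstract.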
