
When disaster hits, whether man-made or natural, the speed at which first responders can reach the scene is often crucial to survival rates. At the same time, many people will be trying to access the network at once: individuals calling home to let loved ones know they're safe, others phoning emergency services or sending information to the authorities.

This creates a problem: a large volume of data originating from the disaster site, a relatively small, clustered area. Often there is not enough local computational power to process all of this data quickly and efficiently, which can lead to life-threatening delays. Meanwhile, authorities have access to a vast amount of information from a wide range of sources.

Being able to process this data quickly and efficiently is crucial to knowing where to send first responders and resources. The data can come from high-resolution video, CCTV cameras and cell phones, as well as aerial footage from unmanned aerial vehicles (UAVs). If processing is delayed, first responders may not know exactly where the worst-affected areas are, which emergency equipment is most needed or whether access roads into the disaster site have been compromised.

A team from the University of Missouri Computer Science Department has been working on a framework that offers a potential solution to this problem by using visual cloud computing.

During a disaster, a vast amount of visual data is often produced, and transmitting it can cause networks to slow down. High-resolution video streams in particular are difficult to process quickly and efficiently. Prasad Calyam, assistant professor of computer science, said that one of the team's aims was "to develop the most efficient way to process data and study how to quickly present visual information to first responders and law enforcement."

The team developed a system that links mobile devices to the cloud. An algorithm helps determine which information should be processed in the cloud and which can be processed on mobile devices close to the disaster zone. The intention is for lightweight processing to take place close to the disaster site, with more complex computation handled further away. This approach spreads the data processing across many devices as well as the cloud, allowing faster dissemination of information.
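The paper does not publish the placement algorithm itself, but the idea of splitting work between nearby devices and the cloud can be sketched in a few lines. The example below is purely illustrative, not the Missouri team's method: it assumes each processing task carries an estimated compute cost, and places cheap tasks on edge devices near the disaster site while offloading heavy ones to the cloud. The `Task` fields and the `compute_threshold` parameter are invented for the sketch.

```python
# Hypothetical edge-vs-cloud task placement, assuming each task carries
# an estimated compute cost (arbitrary units) and an input data size.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    compute_cost: float   # estimated processing load
    data_size_mb: float   # size of input data to transmit if offloaded

def assign(tasks, edge_capacity, compute_threshold=10.0):
    """Place light tasks on nearby edge devices, heavy ones in the cloud.

    A task runs at the edge when it is cheap enough to process locally
    and the edge devices still have spare capacity; everything else is
    offloaded to the cloud for more complex computation.
    """
    placement = {}
    remaining = edge_capacity
    # Fill edge capacity with the cheapest tasks first.
    for task in sorted(tasks, key=lambda t: t.compute_cost):
        if task.compute_cost <= compute_threshold and task.compute_cost <= remaining:
            placement[task.name] = "edge"
            remaining -= task.compute_cost
        else:
            placement[task.name] = "cloud"
    return placement

tasks = [
    Task("motion-detect", 2.0, 5.0),
    Task("frame-thumbnails", 1.0, 2.0),
    Task("face-recognition", 25.0, 40.0),
    Task("3d-scene-reconstruction", 80.0, 500.0),
]
print(assign(tasks, edge_capacity=10.0))
```

Under this toy policy the two lightweight video tasks stay near the disaster site, while the recognition and reconstruction jobs are sent to the cloud, which mirrors the low-end-near, complex-far split described above.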

The intention is to continue this research and to invest in technologies like these, working towards faster disaster response with the ultimate aim of reducing the impact of such events.

Top image: Natural Disaster (Public Domain)

References

http://engineering.missouri.edu/2016/06/mu-study-leads-to-improvements-in-data-transmission-disasters/

http://ieeexplore.ieee.org/document/7466100/

http://munews.missouri.edu/news-releases/2016/0623-visual-cloud-computing-methods-could-help-first-responders-in-disaster-scenarios/


Emma Stenhouse, MSc

Emma qualified with a BSc (Hons) in Equine Science in 2003 and has had a passion for horses since a young age. She continued her academic career with an MSc in Applied Marine Science, gained in 2004. Emma's main scientific focus was the navigational techniques of sea turtles and whether they use the acoustics of the surf-zone as a cue for nesting. She then worked for a sea turtle conservation project on the Pacific coast of Costa Rica before travelling to New Zealand where she worked as a Mari...
