Background Research
At the start of our project, it was important for us to review what others have done with UAS technology in the Search and Rescue (SAR) world. Below are the articles we read that pertain either to unmanned systems in SAR operations or to human identification using various sensors and cameras.
Dufour, L., Owen, K., Mintchev, S., & Floreano, D. (2016, 9-14 Oct. 2016). A drone with insect-inspired folding wings. Paper presented at the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
This article speaks to the potential effectiveness of a new type of portable "origami" wing that folds much as many insect wings do. It gives an overview of insect wings, identifying the characteristics that allow them to be easily deployable yet sturdy. Design challenges and the manufacturability of the new wing are discussed, followed by the construction of experimental prototypes. The article finishes by analyzing the results and proposing future work.
The new wing is able to fold from a deployed surface area of 620 cm^2 down to a mere 160 cm^2. The reduced area makes a large difference in the portability of this UAV, especially if scaled up to slightly larger platforms. Flight tests performed with the folded-wing prototype showed performance very similar to rigid-wing options. A high-framerate camera was used to determine exactly how fast the UAV can be set up: by the 500 ms mark (0.5 seconds) the UAV was fully deployed, making it a fast alternative among fixed-wing platforms. The compact folding design does limit payload capacity substantially, however; only a few hundred grams of weight can be comfortably added.
My main critique of the article is that pre-flight preparation is not accounted for in the claimed ability to take off quickly. The wing can be unfolded and deployed in less than a second, which is great. However, flight plans still need to be created, and variables such as altitude, speed, and coverage area need to be taken into account. Establishing a link between the UAV and the transmitter takes a few seconds, and acquiring satellites takes more time as well. This UAV is certainly quicker to set up than the Bramor C-Astral, for example, but in practice is it really so much faster than other small fixed-wing UAVs such as the eBee, which is mentioned in the article? The extra 1-2 minutes it may take to set up the eBee may still make it the better solution for now, at least until the folding wing can carry heavier payloads.
Karaca, Y., Cicek, M., Tatli, O., Sahin, A., Pasli, S., Beser, M. F., & Turedi, S. (2018). The potential use of unmanned aircraft systems (drones) in mountain search and rescue operations. The American Journal of Emergency Medicine, 36(4), 583-588.
Search and rescue can be aided substantially by UAVs, since time is of the essence and UAVs can cover ground at a much faster pace. In mountain conditions, the first 60 minutes are the most critical due to the likelihood of injury. This article compares two mountain-based SAR approaches: UAVs combined with motorized transportation for SAR providers, and the more classic on-foot search.
The scenario was carried out ten times, and, similar to our simulation method, a mannequin was randomly placed on the mountain to be found by the search teams. The on-foot team consisted of five certified rescuers, while the UAV/snowmobile team consisted of three rescuers and a UAV pilot. A DJI Phantom 3 Pro was used for the mission. On-foot rescue averaged 57.3 minutes, while the UAV rescue averaged 8.9 minutes. While the on-foot team was still able to rescue the person in less than 60 minutes most of the time, every second is precious when dealing with potential hypothermic conditions in the mountains. The UAV team was much quicker and more effective, and this was without using any software other than a live video feed from the UAV.
The main issue with this article is its lack of depth across scenarios. The authors do an incredible job outlining their specific scenario: a person dressed in dark clothes lying unconscious on top of the snow. However, many different scenarios are likely to occur, even just in the mountains. For example, if the person were buried under the snow in an avalanche, an overhead view from a UAV would be of little to no help. The writers do make note of this, though; the limitations of the study are very well thought out, and many are included. If wind speed had been higher, the temperature colder, or visibility poorer, then on-foot rescue could potentially have been better. Platforms larger than the Phantom 3 Pro, ones that stand up to wind and temperature conditions better, could also have been utilized. My last critique is that the deployment time of the UAV was not accounted for, only actual search time. The Phantom 3 Pro should take only a few minutes at most to set up, but that time was not included in the search time. Even so, the UAV team has a very clear and distinct advantage in this specific scenario over on-foot rescue missions.
Rémy, G., Senouci, S.-M., Jan, F., & Gourhant, Y. (2013). SAR.Drones: drones for advanced search and rescue missions. Journées Nationales des Communications dans les Transports, 1, 1-3.
Discussed in this article is a cooperating fleet of UAVs to be used in disaster scenarios and SAR missions, autonomously reporting certain events back to the mission leader. Sections of the article describe the autopilot and the exploration feature of the UAVs, followed by networking details and, finally, test results.
One issue with this proposed framework is that a fleet of UAVs is deployed by a single "pilot," which goes against Part 107 regulations, though a COA could be acquired to bypass this. The next problem I see is the algorithm by which the UAVs report different disaster and rescue scenarios. An "event" that the UAVs should be able to report is mentioned throughout the article, but what is the event? The proposed usage for these UAVs covers earthquakes, tsunamis, SAR, and more, yet the actual test looks for a single "event" (a black line on a white background). How easy would it be to create an algorithm that looks for specific events at specific disasters or SAR missions? Would they all work at once, or would different firmware with a separate algorithm need to be uploaded to the network for each unique disaster?
There are a lot of interesting ideas here, but at the moment this seems very much a proof of concept rather than something that can be implemented to help today. However, the Loc8 software is at a very similar stage right now, so I understand to some extent the user-side challenges that arise with new technology. If this could become a cohesive, one-stop shop for disaster aid, it would be an incredible new technology.
Waharte, S., & Trigoni, N. (2010). Supporting search and rescue operations with UAVs. Paper presented at the 2010 International Conference on Emerging Security Technologies.
This article discusses four main parameters that must be considered when creating SAR UAV algorithms: sensor data quality, energy limitations of the UAV, environmental conditions, and information exchange between the UAV and the team.
One issue with the algorithmic integration is the use of "greedy heuristics," which essentially force the UAV to search certain areas more heavily than others. An example given is when a person is believed to be along a road rather than a river, but both are within the range the person is assumed to be in. Essentially, humans are forcing the UAV to do less rigorous searches of certain areas to allow for more detailed searches of others. But what if the missing person isn't where we believe they are? The UAV may miss the person in that scenario. While weighting the search may be beneficial in some or even most cases given our knowledge of human psychology, people who are lost may be delirious, hallucinating, or facing a whole host of other problems, and if we are wrong we may fail to find the person because we looked in the wrong place.
Another issue is that, regardless of altitude, the UAVs are set to fly at the same speed. In a situation where an area need only be imaged once to find the missing person, we lose valuable time by holding a single speed. At a higher altitude, more area is covered in each frame, so we can fly faster and still achieve the same overlap that a slower speed at a lower altitude would. The UAVs should be able to speed up at higher altitudes in order to cover ground more effectively.
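To make the tradeoff concrete, here is a minimal sketch of the relationship. It is not from the paper; the flat-ground pinhole-camera footprint model and all parameter values are my own assumptions, but it shows why the allowable ground speed scales linearly with altitude when the forward overlap requirement is held fixed:

```python
import math

def ground_footprint_m(altitude_m, fov_deg):
    """Along-track ground footprint of one frame for a nadir-pointing camera
    over flat ground (simple pinhole model)."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)

def max_speed_mps(altitude_m, fov_deg, forward_overlap, frame_interval_s):
    """Fastest ground speed that still achieves the desired forward overlap:
    the UAV may advance at most (1 - overlap) of one footprint per frame."""
    footprint = ground_footprint_m(altitude_m, fov_deg)
    return footprint * (1.0 - forward_overlap) / frame_interval_s

# Doubling altitude doubles the footprint, so the allowable speed doubles too.
low = max_speed_mps(60.0, 70.0, 0.75, 1.0)    # hypothetical 60 m AGL leg
high = max_speed_mps(120.0, 70.0, 0.75, 1.0)  # hypothetical 120 m AGL leg
```

Under this simple model, a search planner could let each UAV's speed setpoint track its assigned altitude instead of imposing one fleet-wide speed.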
Ward, S., Hensler, J., Alsalam, B., & Gonzalez, L. F. (2016). Autonomous UAVs wildlife detection using thermal imaging, predictive navigation and computer vision. Paper presented at the 2016 IEEE Aerospace Conference.
Utilization of thermal cameras onboard UAVs to detect wildlife certainly has its benefits, and the ideas in this paper extend to search and rescue by using thermal imagery to detect humans. The paper discusses thermal imagery being used to locate wild animals and then transmit each animal's GPS position back to a ground station; the same could be done for missing people in SAR. The system is capable of locating animals autonomously and then creating thermal heatmaps of the area to give the team a better idea of where the animal is, in addition to having GPS coordinates. The next logical step would seem to be applying this to SAR.
The major issue I see here is the wilderness scenes these operations may need to be performed in. With the canopy cover found in many wilderness areas, RGB, thermal, and many other sensors would have a tough time locating a person. One fix would be flying at an oblique angle to the tree line rather than above it at nadir. Challenges are still present, though, because trees and brush may provide too much cover to visualize a person even with thermal. Lidar can penetrate canopy cover, so it may be a method worth looking into for SAR missions in heavily forested areas.
Bevacqua, G., Cacace, J., Finzi, A., & Lippiello, V. (2015, April). Mixed-initiative planning and execution for multiple drones in search and rescue missions. In Twenty-Fifth International Conference on Automated Planning and Scheduling.
Similar in concept to the Intel swarm drone show, this article discusses using a multitude of unmanned systems at once in a coordinated effort to reduce search and rescue times. This coordinated effort still requires human input, but the idea is that a single person can control the entire swarm with minimal involvement. The article discusses the varying levels of autonomy that can be achieved and the basic structure of the proposed search and rescue techniques. It also provides multiple diagrams demonstrating different flight plans and patterns the drones could use to cover an area most effectively, along with flow charts showing the different levels of autonomy and how the operator fits into the overall system.
Although this article is an interesting read and proposes ideas that seem very useful, there are many downsides. The biggest issue with this proposal is exactly that: it is a proposal. No hands-on research or testing has been done to prove or support any of the claims; it is all theoretical. "Easier said than done" is a phrase that comes to mind when reading this paper, and with no real evidence that the technique will actually benefit search and rescue, it is difficult to consider. The other significant downside is complexity. In a real natural disaster, action needs to be taken quickly, and adding more platforms, trying to control them all at once, and processing twice the amount of data seems overwhelming for a small team. Overall, the article proposes a very interesting concept, one that is probably worth testing, but for now it seems beneficial to focus on the techniques we have planned.
Câmara, D. (2014, November). Cavalry to the rescue: Drones fleet to help rescuers operations over disasters scenarios. In 2014 IEEE Conference on Antenna Measurements & Applications (CAMA) (pp. 1-4). IEEE.
This article proposes more than just using drones to locate lost persons; it suggests taking the power of unmanned systems to the next level. In a natural disaster, one of the most important needs is communication; without it, trying to find missing persons becomes very difficult. The article discusses using drones to provide a communication structure along with creating up-to-date maps to help rescuers understand where events are happening. This is very helpful because during a natural disaster the usual methods of communication are often unavailable, so using an unmanned system as a kind of cell tower to let rescuers communicate is vital. Providing maps also allows teams to better understand their present location and improves their ability to locate those who are lost.
Similar to the last article, this one offers a solution that is complex and difficult to quickly deploy during a disaster. It involves a multitude of fully autonomous drones flying around, communicating with one another and the ground crew to offer the best possible assistance to the rescue teams. The biggest issue is the financial burden this would put on teams: not only are multiple platforms required, but the equipment and programming needed to provide this level of comms is immense. The level of communication needed between the drones is also very complex. The diagrams provided illustrate an EV as the fixed-wing platform; while this drone does provide a self-sustaining level of autonomy, it does not communicate with other systems or behave in a way that allows for a swarm mentality.
Wolfe, V., Frobe, W., Shrinivasan, V., Hsieh, T. Y., & Gates, H. M. (2014). Feasibility Study of Utilizing 4G LTE Signals in Combination With Unmanned Aerial Vehicles for the Purpose of Search and Rescue of Avalanche Victims (Increment 1). University of Colorado at Boulder, Research Report.
The purpose of this study was to determine whether a drone combined with 4G LTE could be a faster and safer way of finding a person missing in an avalanche than traditional techniques. The drone would be deployed and emit a signal that would turn a smartphone into a homing beacon. One of the biggest issues was determining whether the signal would travel through snow, along with the launch and recovery of an unmanned system in an avalanche environment.
This article mostly focuses on the theoretical application of the idea; little real-world testing was done. Although the project did demonstrate that using LTE to locate someone through their smartphone as a homing device is feasible, in regard to our particular project the paper does not discuss the use of a UAV in depth. It provides some good ideas in terms of frequencies and methods to locate a person, but for a class focused on the use of unmanned systems, it did not offer much help on what kinds of platforms or techniques to try for SAR.
Molina, P., Colomina, I., Victoria, P., Skaloud, J., Kornus, W., Prades, R., & Aguilera, C. (2012). Drones to the Rescue!
This paper is interesting because it specifically focuses on using drone technology for SAR while requiring as little rescuer involvement as possible. It proposes fully autonomous mini drones programmed to recognize certain situations, such as groups of disabled people in need of assistance. The concept of fully autonomous mini drones is intriguing because it gives them the capability to fly both indoors and outdoors, which significantly increases the benefit of the system: it is not limited to those stuck outside. In a flooding situation, for example, it is more than likely that most victims in need of help would be indoors seeking shelter. If perfected, this system could prove extremely beneficial in a natural disaster scenario.
The level of complexity needed to perfect this system is on a different level than those previously discussed. While the article goes into great detail, demonstrating the use of fully autonomous drones indoors and the various sensors that could help locate people, coordinating all of this during a disaster might prove difficult. The article is very beneficial to our project because it specifically discusses ways unmanned systems can be used to track people, locate targets, and work together to improve SAR time. However, this is a very complex system, while our proposed idea of using Loc8 to find a missing person could prove just as fast while being far less complex.
Cacace, J., Finzi, A., & Lippiello, V. (2016). Multimodal interaction with multiple co-located drones in search and rescue missions. arXiv preprint arXiv:1605.07316.
This research paper discusses giving SAR personnel limited but decisive interactions with UAVs, reducing the time needed to give instructions to a UAV while maintaining the effectiveness of a traditional UAV operator. Techniques mentioned in the article include hand gestures and speech recognition. These allow someone performing SAR to remain hands-free and not fully invested in the UAV while still benefiting from the platform. It is a win-win scenario: fewer people are required for SAR efforts, and traditional SAR personnel can stay focused on their tasks while receiving aid from the drones.
The article does not discuss the specifics of UAV operation such as takeoff, landing, charging, or an overall hub where the drones can be worked on. It may be nice for SAR personnel not to be as immersed as a traditional UAV operator, but that does not mean no one needs to watch over the drones; at some point they will need in-field maintenance and attention, and the article does not address these specifics. It would theoretically be relatively easy to operate a UAV in these kinds of environments, but someone on the ground cannot see what the drone is seeing without FPV, so a level of semi-autonomy is needed for this to be successful.
Bejiga, M. B., Zeggada, A., & Melgani, F. (2016). Convolutional neural networks for near real-time object detection from uav imagery in avalanche search and rescue operations. Paper presented at the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS).
This paper outlines the use of UAVs in an avalanche Search and Rescue (SAR) scenario. The proposed methods are intended to decrease the total time spent in search and rescue missions. Images acquired by the UAV are processed through a pre-trained convolutional neural network (CNN) to extract discriminative features. The experiment was conducted at ski resorts and used backpacks, skis, and other equipment typically worn in a skiing accident. 165 training images were collected and run through the CNN. The main factors tested were processing speed and accuracy; false negatives and false positives were a big issue in the experiments.
While this methodology has worked, the data set is still very small and needs to be tested further and in more environments. The software seems to be there and functions well but is missing implementation into actual search and rescue. The data was processed on a standard desktop computer, so most people will be able to run the processing. More flights and experiments need to be conducted for this process to fully work out the kinks. It is a good start, but more field work is needed to prove this will be an effective alternative to typical search and rescue methods.
Ghazali, S. N. A. M., Anuar, H. A., Zakaria, S. N. A. S., & Yusoff, Z. (2016). Determining position of target subjects in maritime search and rescue (msar) operations using rotary wing unmanned aerial vehicles (uavs). Paper presented at the 2016 International Conference on Information and Communication Technology (ICICTM).
This paper proposes possible UAV operations to assist with search and rescue in maritime environments. The proposed process uses a multi-rotor platform for its versatility and its ability to fly vertically, fly horizontally, and hover. The paper discusses how a grid pattern flown by multiple UAVs at once would be an effective method for covering large areas quickly. The main research question is how to determine the exact coordinates of the target. The proposed solution is an algorithm that identifies the target in a specific quadrant of the image, decreases altitude, re-identifies the target, and continues to descend until the position estimate is considered accurate.
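The descend-and-reconfirm loop described above could be sketched roughly as follows. This is my own illustrative Python, not the authors' algorithm; `detect` and `descend` are hypothetical callbacks standing in for the platform's vision system and flight controller:

```python
import math

def localize_target(detect, descend, altitude_m, min_altitude_m=10.0, tol_m=1.0):
    """Iteratively descend toward a detected target until successive position
    estimates agree within tol_m, then treat the estimate as accurate.

    detect(altitude_m)  -> (east_m, north_m) estimate, or None if target lost
    descend(altitude_m) -> new, lower altitude commanded to the platform
    """
    last_estimate = None
    while altitude_m > min_altitude_m:
        estimate = detect(altitude_m)
        if estimate is None:
            break  # target lost: hand control back to the grid search
        if last_estimate is not None and math.dist(estimate, last_estimate) < tol_m:
            return estimate  # estimates converged; report these coordinates
        last_estimate = estimate
        altitude_m = descend(altitude_m)
    return last_estimate
```

The convergence tolerance and minimum altitude are arbitrary placeholders; in practice they would depend on sensor resolution and safe operating limits over water.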
While this paper presents a more well-thought-out workflow for implementing UAVs into search and rescue, no field work has been conducted to confirm that the proposed solution works. As of right now the solution is theoretical; with experiments and field work, I believe there could be potential in it.
Kellenberger, B., Volpi, M., & Tuia, D. (2017). Fast animal detection in UAV images using convolutional neural networks. Paper presented at the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS).
The use of UAVs in animal tracking can directly relate to, or be used in, a search and rescue mission. The method proposed here is for large-animal identification and tracking, intended for wildlife conservation. The authors also used low-cost UAVs to ensure the proposed solution would work for almost anyone with access to a UAV. Through their experiments and software they are able to process 72 images per second, allowing for real-time monitoring and tracking of animals; note that the 72 figure is as reported in our reading, and the experiments were done at the Kuzikus Wildlife Reserve in central Namibia.
This solution has been thoroughly tested and proven to work. It would be possible to use this technology in a search and rescue mission by identifying a human instead of an animal, and the near-real-time operation allows for very fast missions and quick deployment to assist the person of interest. The issue with the data is that it was acquired in 2014; more recent data collection missions should be conducted, as the UAS industry is advancing rather quickly.
Leira, F. S., Johansen, T. A., & Fossen, T. I. (2015). Automatic detection, classification and tracking of objects in the ocean surface from uavs using a thermal camera. Paper presented at the 2015 IEEE aerospace conference.
This paper proposes a solution for object identification with thermal imagery. It uses a custom-made fixed-wing UAV with a Pixhawk flight controller and a FLIR thermal camera. Using complex algorithms, the authors process thermal imagery with a machine vision system that combines the thermal camera and onboard processing power to perform real-time classification, object detection, and tracking of objects on the ocean surface. Flights and experiments were conducted to test the platform and data processing.
Only a few data collection flights were conducted for this experiment; more will be needed to work out issues with the solution. The platform used is also not very accessible to the general public or to first responders looking to use it for search and rescue.
Ranjan, A., Panigrahi, B., Sahu, H. B., & Misra, P. (2018). SkyHelp: Leveraging UAVs for emergency communication support in deep open pit mines. Paper presented at the 2018 10th International Conference on Communication Systems & Networks (COMSNETS).
Open pit mines are extremely dangerous working environments, and the use of UAVs in open pit mine search and rescue allows for a safer work environment. The paper discusses using UAVs for emergency communication between rescuers outside the mine and workers who are trapped. Due to the characteristics of open pit mines, radio waves normally have difficulty propagating from the top to the bottom. The proposed solution is to use one or multiple UAVs as relays, or nodes, to enhance the radio communication.
The main issue with the solution is the interference the UAV may encounter while relaying radio waves. The UAV could lose its connection to the transmitter or its GPS lock and end up crashing, causing more damage. This obstacle is not discussed in the paper and is a big concern.
Andriluka, M., et al. (2010). Vision based victim detection from unmanned aerial vehicles. 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE.
This paper examines the use of UAVs to detect people with two different detection models and an onboard camera. The researchers flew a quadcopter in a closed office environment at an approximate altitude of 1.5 meters and captured 220 images. The models detected various body shapes and positions and highlighted potential targets believed to be humans. The study compared the two body detection models, as well as how performance was affected by combining them.
While this test is certainly useful for determining the effectiveness of body detection models, the environment in which it was flown seems impractical. The authors state that they are focusing on victim detection from a UAV; however, a UAV flying a SAR mission outdoors is not going to fly at 1.5 meters, so the resolution presented in their images is unrealistic, making the study more akin to a person detection study with a typical handheld camera.
Secondly, the study photographed people in an office setting in prone, or lying, positions. While this may work well for the intent of their models, I believe they could have incorporated more positions to test. In a SAR mission, a person might not always be prone; he or she could be standing or sitting as well. Because of this, I believe their dataset is relatively incomplete: they were testing the effectiveness of their models on various body poses yet completely left out the more vertical ones.
Cavaliere, D., et al. (2017). "Semantically enhanced UAVs to increase the aerial scene understanding." IEEE Transactions on Systems, Man, and Cybernetics: Systems 49(3): 555-567.
This paper focuses on using UAVs to increase the understanding of a scene in a video. Throughout the experiment, the testers used video footage and Google APIs to track, identify, and evaluate various objects, ranging from people to vehicles to surrounding objects that provide a georeference. They used this identification and tracking to analyze how objects interact with one another (for example, two people, a person and a vehicle, or multiple vehicles) and to provide analysis such as the potential danger of a collision.
This study mainly uses video and a tracking algorithm to analyze objects within it. One of the variables collected was GPS position, taken from the GPS tag embedded in each video frame. While this can be effective for getting the general area in which an object is located, it quickly becomes fairly inaccurate depending on where in the image the object sits, because the GPS tag reflects the UAS platform location, or the center of the image if the camera points straight down. This is further affected if the camera is mounted at an angle of less than 90 degrees, which matters here because the algorithm relied heavily on GPS position and on the differences in position between objects to determine proximity. The article even noted that a student walking on a sidewalk was detected as crossing the road, which the authors associated with the low location accuracy of GPS and Google Maps.
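As a rough illustration of this error source, a pixel's ground offset from the image center (which is effectively where the platform's GPS tag points for a nadir camera) can be approximated with a flat-ground pinhole model. This sketch is my own, not from the paper, and the field-of-view values in the comments are assumed:

```python
import math

def ground_offset_m(px, py, img_w, img_h, altitude_m, hfov_deg, vfov_deg):
    """Approximate (east, north) ground offset of pixel (px, py) from the
    image center for a nadir-pointing camera over flat ground. Naively
    tagging an object with the platform GPS ignores exactly this offset."""
    half_w = altitude_m * math.tan(math.radians(hfov_deg) / 2.0)
    half_h = altitude_m * math.tan(math.radians(vfov_deg) / 2.0)
    east = (px / img_w - 0.5) * 2.0 * half_w
    north = (0.5 - py / img_h) * 2.0 * half_h
    return east, north
```

Even at a modest 50 m altitude with an assumed 80-degree horizontal field of view, an object near the image edge sits roughly 40 m from the tagged position, easily enough to misplace a sidewalk pedestrian into the road.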
Goodrich, M. A., et al. (2008). "Supporting wilderness search and rescue using a camera‐equipped mini UAV." Journal of Field Robotics 25(1‐2): 89-110.
This article explores the use of a fixed-wing UAV in SAR operations. The researchers mainly use orthomosaics and video in their search process. For the video feed, a person monitoring would freeze the video whenever something of note appeared, then point out the area of interest to field personnel, and the ground searchers would attempt to locate the object, verify it, and report their findings. The researchers found this fairly ineffective due to a lack of coordinated roles.
This article is fairly reflective of what the average person assumes goes on when using UAVs for SAR operations: a person monitors a video feed, reports when something of interest appears, and attempts to direct the field searchers to the location. This is ineffective largely because of the human element. Though a UAS has a better vantage point than ground searchers, the human monitor is by far the limiting factor because of how much he or she could potentially miss. Moreover, the article explained that when a video frame is frozen, the video continues playing in the background, so the monitor misses footage that could contain valuable information.
The article also stated that orthomosaics were often created, as these helped with identifying areas of interest. The issue, however, is that orthomosaics stitch images together, and an area of a picture containing what you are actually looking for can be lost. Orthomosaics are also impractical for SAR operations because of the lengthy processing time involved.
Rudol, P. and P. Doherty (2008). Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery. 2008 IEEE Aerospace Conference, IEEE.
This study combined color and thermal imagery to detect human bodies within a given area for SAR operations. For this to work, the researchers calibrated their cameras so that a given pixel on the thermal camera corresponded to a given pixel on the color camera. Two platforms were used simultaneously, and the mission took around 10 minutes. The algorithm found all 11 targets placed within the area, along with 3 false positives.
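The thermal-to-color pixel correspondence the authors describe is commonly modeled with a planar homography estimated once during calibration. As a hedged sketch (my own, not the paper's actual calibration procedure; the matrix `H` is assumed to come from a prior calibration step, e.g., matched target corners in both images):

```python
import numpy as np

def map_thermal_to_color(pts_thermal, H):
    """Map N thermal-image pixel coordinates into the color image via a
    3x3 homography H, using homogeneous coordinates and a perspective divide."""
    pts = np.asarray(pts_thermal, dtype=float)         # (N, 2)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 3)
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With the identity homography the mapping is a no-op; in practice H could be estimated from point correspondences with a standard tool such as OpenCV's `findHomography`.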
One issue they addressed, but which is inherent to thermal cameras, is resolution: the farther the sensor is from an object, the harder it becomes to accurately measure the object's temperature. Another issue is that a located target's position was determined from the onboard GPS without "differential correction," that is, without correcting for where in the image the target was located. This definitely skews the accuracy, though the researchers did present the GPS measurement error in their results.
Sun, J., et al. (2016). "A camera-based target detection and positioning UAV system for search and rescue (SAR) purposes." Sensors 16(11): 1778.
For this experiment, a fixed-wing UAV was used in conjunction with a GoPro Hero 4 camera that transmitted video to a ground station consisting of small computing devices loaded with object identification algorithms. Once the software identifies a target, the image is transmitted back to the GCS. The main purpose was to determine the system's capability for locating a downed aircraft, although for testing the researchers used a red Z, a red plane, a blue I, a blue V, a blue J, and a red Q. All but two of these targets were located by the software over the entirety of the tests.
Overall the tests performed are performed well. The onboard computer in the UAV is continuously processing images as they are taken and transmitting in real-time to the GCS for further analysis. There remains the GPS inaccuracies due to how GPS data is recorded within an image; however, depending on the altitude of the platform, this inaccuracy could be nominal and not necessarily detrimental to the SAR operation. Another issue I see with a test of this sort is in the variety of targets identified. The test consisted of identifying blue and red objects, all of the same color blue and red. In real SAR operations, an object that needs identified could have a multitude of colors associated, and the test does not do a good job in showing a wider variety of colors and determining any issues associated with various colors.
Dufour, L., Owen, K., Mintchev, S., & Floreano, D. (2016, 9-14 Oct. 2016). A drone with insect-inspired folding wings. Paper presented at the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
This article speaks to the potential effectiveness of a new type of portable “origami” wing, via folding the wings such as many insects do. An overview of insect wings, proposing certain characteristics that allows them to be easily deployable yet sturdy, is given. Design challenges and manufacturability of the new wing is discussed, followed by experimental prototypes being built. The article finishes by analyzing results, and proposing intended work for the future.
The new wing is able to fold from a deployed surface area of 620cm^2 down to a mere 160cm^2. The overall reduced area makes a large difference in the portability of this UAV, especially if scaling up to slightly larger platforms. Actual flight tests were performed with the folded wing prototype, and seemed to perform very similarly to rigid wing options. A high framerate camera was used to determine exactly how fast the UAV can be set up. By the 500ms mark (0.5 seconds), the UAV was fully deployed, making this a faster alternative for fixed wing platforms. The compact folding design does limit the payload capacity substantially, where only a few hundred grams of weight can be comfortably added.
My main critique of the article is how the pre-flight is not accounted for in the ability to quickly take off. They wing can be unfolded and deployed in less than a second, which is great. However, flight plans need to be created, variables such as altitude, speed, coverage area, etc. need to be taken into account. A link between the UAV and the transmitter takes a few seconds, and then to acquire satellites takes more time as well. This UAV is certainly a quicker solution than setting up the Bramor C-Astral for example, but in practicality is it really so much faster than other small fixed wing UAVs such as the eBee which is mentioned in the article? The extra 1-2 minutes it may take to set up the eBee may still be the best solution for now, at least until the folding wing is able to hold heavier payloads.
Karaca, Y., Cicek, M., Tatli, O., Sahin, A., Pasli, S., Beser, M. F., & Turedi, S. (2018). The potential use of unmanned aircraft systems (drones) in mountain search and rescue operations. The American journal of emergency medicine, 36(4), 583-588.
UAVs can aid search and rescue substantially: time is of the essence, and a UAV can cover ground at a much faster pace than searchers on foot. In mountain conditions, the first 60 minutes are the most important due to the likelihood of injury. This article compares two mountain-based SAR approaches: UAVs combined with motorized transportation for the SAR providers, and the more classic on-foot search.
The scenario was carried out ten times and, similar to our simulation method, a mannequin was randomly placed on the mountain to be found by the search teams. The on-foot team consisted of five certified rescuers, while the UAV/snowmobile team consisted of three rescuers and a UAV pilot. A DJI Phantom 3 Pro was used for the mission. On-foot rescue averaged 57.3 minutes, while UAV rescue averaged 8.9 minutes. While the on-foot team was still able to rescue the person in less than 60 minutes most of the time, every second is precious when dealing with potentially hypothermic conditions in the mountains. The UAV team was much quicker and more effective, and this is without utilizing any software beyond a live video feed from the UAV.
The main issue with this article is the lack of depth in scenario variety. The authors do an incredible job outlining their specific scenario: a person dressed in dark clothes, unconscious on top of the snow. However, many different scenarios are likely to occur, even just in the mountains. For example, if the person were stuck under the snow in an avalanche, an overhead view from a UAV would be of little to no help. The writers do make note of this, though. The limitations of the study are very well thought out, and many are included. If wind speed had been higher, the temperature colder, or visibility poorer, on-foot rescue could potentially have been better. Bigger platforms than the Phantom 3 Pro could also have been utilized, ones that stand up to wind and temperature conditions better. The last critique is that deployment time of the UAV was not accounted for, only actual search time. The Phantom 3 Pro should take only a few minutes at most to set up, but that time was not included in the search time. Even so, the UAV team has a very clear and distinct advantage in this specific scenario over on-foot rescue missions.
Rémy, G., Senouci, S.-M., Jan, F., & Gourhant, Y. (2013). SAR.Drones: drones for advanced search and rescue missions. Journées Nationales des Communications dans les Transports, 1, 1-3.
This article discusses a cooperating fleet of UAVs for use in disaster scenarios and SAR missions, autonomously reporting certain events back to the mission leader. Sections of the article describe the autopilot and the exploration feature of the UAVs, followed by networking information and, finally, test results.
One issue with this proposed framework is that a fleet of UAVs is deployed by one “pilot”, which goes against Part 107 regulations; a COA could be acquired to bypass this, though. The next problem I see is with the algorithm by which the UAVs report different disaster and rescue scenarios. An “event” is mentioned throughout the article that the UAVs should be able to report, but what is the event? The proposed usage for these UAVs covers earthquakes, tsunamis, SAR, and more, yet the actual test looked for a single “event” (a black line on a white background). How easy would it be to create an algorithm that looks for specific events at specific disasters or SAR missions? Would they all work at once, or would a different firmware with a separate algorithm need to be uploaded to the network for each unique disaster?
There are a lot of interesting ideas here, but at this moment the work seems very proof-of-concept, rather than something that can be implemented to help today. However, the Loc8 software is in a very similar stage right now, so I understand to some extent the user-side challenges that arise with new technology. If this could become a cohesive, one-stop shop for disaster aid, it would be an incredible new technology.
Waharte, S., & Trigoni, N. (2010). Supporting search and rescue operations with UAVs. Paper presented at the 2010 International Conference on Emerging Security Technologies.
This article discusses four main parameters that need to be considered when creating SAR UAV algorithms: sensor data quality, energy limitations of the UAV, environmental conditions, and information exchange between the UAV and the team.
One issue with the algorithmic integration is the use of “greedy heuristics”, which essentially force the UAV to search certain areas more heavily than others. An example given is when a person is believed to be along a road rather than a river, but both are within the range the person is assumed to be in. Essentially, humans are forcing the UAV to do less rigorous searches of certain areas to allow for more detailed searches of others. But what if the missing person isn’t where we believe they are? The UAV may miss the person entirely. While this prioritization may be beneficial in most cases thanks to our knowledge of human psychology, people who are lost may be delirious, hallucinating, or facing a whole host of other problems based on many factors, and may not be where our assumptions say they should be.
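The risk described above can be made concrete with a minimal sketch of a greedy search ordering. The grid cells, names, and prior probabilities below are hypothetical illustrations, not values from the paper:

```python
def greedy_search_order(cells):
    # Greedy heuristic: visit cells in descending order of assumed
    # find-probability, so low-prior areas are only reached last.
    return sorted(cells, key=lambda c: c["prior"], reverse=True)

# Hypothetical scenario: the person is assumed to be along the road,
# but is actually at the river.
cells = [
    {"name": "river", "prior": 0.1},  # person is actually here
    {"name": "road",  "prior": 0.6},
    {"name": "field", "prior": 0.3},
]
order = [c["name"] for c in greedy_search_order(cells)]
# The river, where the person actually is, ends up searched last.
```

If the prior is wrong, the cell containing the person sorts to the end of the visit order, which is exactly the failure mode the critique raises.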
Another issue is that, regardless of altitude, the UAVs are set to fly at the same speed. In a situation where an area need only be imaged once to find the missing person, valuable time is lost by flying at the same speed. At a higher altitude, more area is covered in one frame; therefore, the UAV can fly faster and still achieve the same overlap that a slower speed at a lower altitude would give. The UAVs should be able to speed up at higher altitudes in order to cover ground more effectively.
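The altitude/speed relationship above can be sketched with simple geometry: for a fixed camera field of view, the ground footprint grows linearly with altitude, so the forward speed can grow proportionally while preserving the same frame overlap. The FOV, frame interval, and overlap values below are hypothetical:

```python
import math

def ground_footprint_m(altitude_m, fov_deg):
    # Width of ground covered by one frame, for a given horizontal FOV.
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def max_speed_for_overlap(altitude_m, fov_deg, frame_interval_s, overlap_frac):
    # Fastest forward speed that still keeps `overlap_frac` overlap
    # between consecutive frames.
    footprint = ground_footprint_m(altitude_m, fov_deg)
    return footprint * (1 - overlap_frac) / frame_interval_s

# Doubling the altitude doubles the footprint, so the same overlap
# permits roughly double the forward speed.
v_low = max_speed_for_overlap(50, 60, 1.0, 0.7)
v_high = max_speed_for_overlap(100, 60, 1.0, 0.7)
```

This is only a nadir-camera approximation that ignores resolution loss at altitude, which is the trade-off a real planner would have to balance.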
Ward, S., Hensler, J., Alsalam, B., & Gonzalez, L. F. (2016). Autonomous UAVs wildlife detection using thermal imaging, predictive navigation and computer vision. Paper presented at the 2016 IEEE Aerospace Conference.
Utilization of thermal cameras onboard UAVs to detect wildlife certainly has its benefits, and the wide-reaching ideas of this paper can be extended into the field of search and rescue by using thermal imagery to detect humans. The paper discusses thermal imagery being used to locate wilderness animals and transmit each animal's GPS information back to a ground station; the same thing can be done in SAR with missing people. The system is capable of locating animals autonomously and then creating thermal heatmaps of the area to help give the team a better idea of where the animal is, in addition to having GPS coordinates. The next logical step would seem to be using this for SAR.
The major issue I see here is the wilderness scenes these operations may need to be performed in. With the canopy cover found in many wilderness areas, RGB, thermal, and many other sensors would have a tough time locating a person. One fix would be flying at an oblique angle to the tree line, rather than above it at nadir. Challenges are still present, though, because trees and brush may still provide too much cover to visualize a person, even with thermal. Lidar is able to penetrate canopy cover, so it may be a method worth looking into for SAR missions in heavily forested areas.
Bevacqua, G., Cacace, J., Finzi, A., & Lippiello, V. (2015, April). Mixed-initiative planning and execution for multiple drones in search and rescue missions. In Twenty-Fifth International Conference on Automated Planning and Scheduling.
Similar in concept to the Intel swarm drone shows, this article discusses the technique of using many unmanned systems at once in a coordinated effort to reduce search and rescue times. This coordinated effort still requires human input, but the idea is that a single person can control the entire swarm with minimal involvement. The article discusses the varying levels of autonomy that can be achieved and the basic structure of the proposed search and rescue techniques, and it provides multiple diagrams demonstrating different flight plans and patterns the drones could use to cover an area most effectively. Flow charts show the different levels of autonomy and how the operator fits into the overall system.
Although this article is an interesting read and proposes ideas that seem very useful, there are many downsides. The biggest issue with this proposal is exactly that: it's a proposal. No hands-on research or testing has been done to prove or support any claims; it is all theoretical. "Easier said than done" is a phrase that comes to mind when reading this paper; with no real evidence that the technique will actually benefit search and rescue, it is difficult to consider it. The other significant downside is that the proposed plan seems very complex. In a real natural disaster, action needs to be taken quickly, and the complexity of adding more platforms, trying to control them all at once, and processing twice the amount of data seems overwhelming for a small team. Overall, the article proposes a very interesting concept, one that is probably worth testing, but for now it seems beneficial to focus on the techniques we have planned.
Câmara, D. (2014, November). Cavalry to the rescue: Drones fleet to help rescuers operations over disasters scenarios. In 2014 IEEE Conference on Antenna Measurements & Applications (CAMA) (pp. 1-4). IEEE.
This article proposes more than just using drones to locate lost persons; it suggests taking the power of unmanned systems to the next level. In a natural disaster, one of the most important needs is communication; without it, trying to find missing persons becomes very difficult. The article discusses using drones to provide a communication structure along with creating up-to-date maps to help rescuers understand where events are happening. This is very helpful because, during a natural disaster, the usual methods of communication are often unavailable, so using an unmanned system as a kind of cell tower to allow rescuers to communicate is vital. Providing maps also allows teams to better understand their present location and improves their ability to locate those who are lost.
Similar to the last article, this one offers a solution that is complex and difficult to deploy quickly during a disaster. It involves a multitude of fully autonomous drones flying around, communicating with one another and with the ground crew to offer the best possible assistance to the rescue teams. The biggest issue with this is the financial burden it would put on teams: not only are multiple platforms required, but the equipment and programming needed to provide this level of comms is immense. The level of communication needed between the drones is also very complex. The diagrams provided illustrate an EV as the fixed-wing platform; while this drone does provide a self-sustaining level of autonomy, it does not communicate with other systems or behave in a way that allows for a swarm mentality.
Wolfe, V., Frobe, W., Shrinivasan, V., Hsieh, T. Y., & Gates, H. M. (2014). Feasibility Study of Utilizing 4G LTE Signals in Combination With Unmanned Aerial Vehicles for the Purpose of Search and Rescue of Avalanche Victims (Increment 1). University of Colorado at Boulder, Research Report.
The purpose of this study was to determine whether a drone combined with 4G LTE could be a faster and safer way of finding a person buried in an avalanche than the traditional techniques. The drone would be deployed and emit a signal that turns a smartphone into a homing beacon. The biggest issues were determining whether the signal would travel through snow, along with the launch and recovery of an unmanned system in an avalanche environment.
This article mostly focuses on the theoretical application of the idea; little real-world testing was done. Although the project did show that using LTE to locate someone via their smartphone as a homing device is feasible, with regard to our particular project this paper does not discuss the use of a UAV in depth. It provides some good ideas in terms of frequencies and methods to locate a person, but for a class focused on the use of unmanned systems, this paper did not provide much help on what kinds of platforms or techniques to try for SAR.
Molina, P., Colomina, I., Victoria, P., Skaloud, J., Kornus, W., Prades, R., & Aguilera, C. (2012). Drones to the Rescue!.
This paper is interesting because it specifically focuses on using drone technology for SAR while requiring as little rescuer involvement as possible. It proposes using mini drones that are fully autonomous and programmed to recognize certain situations, such as groups of disabled people in need of assistance. The concept of fully autonomous mini drones is intriguing because it gives them the capability to fly both indoors and outdoors, which significantly increases the benefit of the system: it is not limited to those stuck outside. In a situation such as flooding, for example, it is more than likely that most of the victims in need of help would be indoors seeking shelter. This system could prove extremely beneficial in a natural disaster scenario if perfected.
The level of complexity needed to perfect this system is on a different level than those previously discussed. While this article goes into great detail, demonstrating the use of fully autonomous drones indoors and the various sensors that could be used to help locate people, trying to coordinate all of this during a disaster might prove difficult. The article is very beneficial to our project because it specifically discusses ways unmanned systems can be used to track people, locate targets, and work together to improve SAR time. However, this is a very complex system, while our proposed idea of using Loc8 to find a missing person could prove to be just as fast while being significantly less complex.
Cacace, J., Finzi, A., & Lippiello, V. (2016). Multimodal interaction with multiple co-located drones in search and rescue missions. arXiv preprint arXiv:1605.07316.
This research paper discusses the ability for SAR personnel to have limited but decisive interactions with UAVs, reducing the time needed to give instructions to a UAV while maintaining the effectiveness of a traditional UAV operator. Techniques mentioned in the article include hand gestures and speech recognition. These allow someone performing SAR to remain hands-free and not fully devoted to the UAV while still benefiting from the platform. This is a win-win scenario: fewer people are required for SAR efforts, and the traditional SAR personnel can stay focused on their tasks while receiving aid from the drones.
The article does not discuss the specifics of UAV operation such as takeoff, landing, charging, or an overall hub where the drones can be worked on. It might be nice for SAR personnel not to have to be as immersed as a traditional UAV operator, but that does not mean no one needs to specifically watch over the drones. At some point they will need in-field maintenance and attention, and the article does not address these specifics. Theoretically it would be relatively easy to operate a UAV in these kinds of environments, but someone on the ground cannot see what the drone is seeing without FPV, so a level of semi-autonomy is needed for this to be successful.
Bejiga, M. B., Zeggada, A., & Melgani, F. (2016). Convolutional neural networks for near real-time object detection from uav imagery in avalanche search and rescue operations. Paper presented at the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS).
This paper outlines the use of UAVs in an avalanche Search and Rescue (SAR) scenario. The proposed methods are intended to decrease the total time spent on search and rescue missions. The images acquired by the UAV are processed through a pre-trained convolutional neural network (CNN) to extract discriminative features. The experiment was conducted at ski resorts and used backpacks, skis, and other equipment typically worn in a skiing accident. 165 training images were collected and run through the CNN. The main factors tested in the experiment were processing speed and accuracy; false negatives and false positives were a big issue in the experiments.
While this methodology worked for the data set used, it is still a very small data set, and the approach needs to be tested more and in more environments. The software seems to be there and functions well but is missing implementation into search and rescue. The data was processed on a standard desktop computer, so most people will be able to run the processing. More flights and more experiments need to be conducted to fully work out the kinks. It is a good start, but more field work is needed to prove this will be an effective alternative to typical search and rescue methods.
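The false-positive/false-negative problem reported above is usually summarized with precision and recall over the detector's outcomes. A minimal sketch; the counts below are hypothetical, not the paper's results:

```python
def precision_recall(tp, fp, fn):
    # Precision: fraction of reported detections that are real targets.
    # Recall: fraction of real targets that were actually detected.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical counts for a detector that over-reports (many false positives):
p, r = precision_recall(tp=40, fp=20, fn=10)
```

For SAR, recall is usually the metric to protect: a false positive costs a ground team a detour, but a false negative can mean a person is never found.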
Ghazali, S. N. A. M., Anuar, H. A., Zakaria, S. N. A. S., & Yusoff, Z. (2016). Determining position of target subjects in maritime search and rescue (MSAR) operations using rotary wing unmanned aerial vehicles (UAVs). Paper presented at the 2016 International Conference on Information and Communication Technology (ICICTM).
This paper proposes possible UAV operations to assist with search and rescue in maritime environments. The proposed process includes the use of a multi-rotor platform for its versatility and ability to fly vertically, horizontally and hover. The paper discusses how a grid pattern flown by multiple UAVs at one time will be an effective method for covering large areas quickly. The main research question they were trying to answer is how to determine the exact coordinates of the target. The proposed solution is to implement an algorithm that would identify the target in a specific quadrant in the image, decrease altitude, reidentify the target and continue to descend until it is considered accurate.
While this paper presents a more well-thought-out workflow for implementing UAVs in search and rescue, no field work has been conducted to confirm that the proposed solution works. As of right now the solution is theoretical; with experiments and field work, I believe this solution could have potential.
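The descend-and-reidentify loop the paper proposes can be sketched as a simple control loop. The detector and descent functions below are hypothetical stand-ins, not the authors' implementation:

```python
def localize_target(detect, descend, altitude_m, min_altitude_m=10.0, step_m=10.0):
    # Repeatedly detect the target, descend, and re-detect until the
    # platform is low enough for the fix to be considered accurate.
    # `detect(alt)` returns the target's offset from image centre, or None.
    while altitude_m > min_altitude_m:
        offset = detect(altitude_m)
        if offset is None:
            return None          # target lost; abort and re-search
        altitude_m = descend(altitude_m, step_m)
    return detect(altitude_m)    # final, highest-resolution fix

# Hypothetical stand-ins: target stays centred, descent is a fixed step.
fake_detect = lambda alt: (0.0, 0.0)
fake_descend = lambda alt, step: alt - step
fix = localize_target(fake_detect, fake_descend, altitude_m=50.0)
```

Even as a sketch, it exposes the open questions the paper leaves untested: what to do when the target is lost mid-descent, and how low the minimum altitude can safely be.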
Kellenberger, B., Volpi, M., & Tuia, D. (2017). Fast animal detection in UAV images using convolutional neural networks. Paper presented at the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS).
The use of UAVs in animal tracking can directly relate to, or be used in, a search and rescue mission. This method is proposed for large-animal identification and tracking, intended for livestock conservation. The researchers also used low-cost UAVs to ensure that the proposed solution would work for almost anyone with access to a UAV. Through their experiments and software they are able to process 75 images per second, allowing for real-time monitoring and tracking of animals. These experiments were done at the Kuzikus Wildlife Reserve Park in central Namibia.
This solution has been thoroughly tested and proven to work. It would be possible to use the technology in a search and rescue mission by identifying a human instead of an animal, and the ability to work in near real time allows for very fast missions and quick deployment to assist the person of interest. The issue with the data is that it was acquired in 2014; more recent collection missions should be conducted to keep the data up to date, as the UAS industry is advancing rather quickly.
Leira, F. S., Johansen, T. A., & Fossen, T. I. (2015). Automatic detection, classification and tracking of objects in the ocean surface from UAVs using a thermal camera. Paper presented at the 2015 IEEE Aerospace Conference.
This paper proposes a solution for object identification with thermal imagery. The solution uses a custom-made fixed-wing UAV with a Pixhawk flight controller and a FLIR thermal camera. A machine vision system combines the thermal camera with onboard processing power to perform real-time classification, object detection, and tracking of objects on the ocean surface. Flights and experiments were conducted to test the platform and data processing.
Only a few data collection flights were conducted for this experiment; more will be needed to work out issues with the solution. The platform used is also not very accessible to the general public or to first responders looking to use it for search and rescue.
Ranjan, A., Panigrahi, B., Sahu, H. B., & Misra, P. (2018). SkyHelp: Leveraging UAVs for emergency communication support in deep open pit mines. Paper presented at the 2018 10th International Conference on Communication Systems & Networks (COMSNETS).
Open pit mines are extremely dangerous working environments, and the use of UAVs in open pit mine search and rescue allows for a safer work environment. The paper discusses using UAVs for emergency communication between the rescuers outside the mine and the workers who are trapped. Due to the characteristics of open pit mines, there is normally difficulty transmitting radio waves from the top to the bottom. The proposed solution is to use one or multiple UAVs to act as relays or nodes to enhance the radio communication.
The main issue with the solution is the interference the UAV will encounter while being used to relay radio waves. It is possible the UAV will encounter connection issues with the transmitter or lose GPS lock and end up crashing, causing more damage. This is an obstacle not discussed in the paper, and it is a big concern.
Andriluka, M., et al. (2010). Vision based victim detection from unmanned aerial vehicles. 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE.
This paper examines the use of UAVs to detect people with an onboard camera, using two different detection models. The researchers flew a quadcopter in a closed office environment at an approximate altitude of 1.5 meters and captured 220 images. The models detected various body shapes and positions and highlighted potential targets thought to be humans. The study compared the two body detection models, as well as how performance was affected by combining them.
While this test is certainly useful in determining effectiveness of body detection models, the environment in which the test was flown seems impractical. They stated that they are focusing on victim detection from a UAV, which is true; however, a UAV flying a SAR mission in an outdoor environment is not going to be flying at 1.5 meters, which means that the resolution presented in their images is impractical, and would be more similar to doing a person detection model study using a typical handheld camera.
Secondly, the study photographed people in an office setting in a prone (lying) position. While this might work well for the models' intended purpose, I believe more positions could have been incorporated into the testing. In a SAR mission, a person might not always be prone; he or she could be standing or sitting as well. Because of this, I believe their dataset is relatively incomplete: they were testing the effectiveness of their models on various body poses, yet completely left out the more vertical ones.
Cavaliere, D., et al. (2017). "Semantically enhanced UAVs to increase the aerial scene understanding." IEEE Transactions on Systems, Man, and Cybernetics: Systems 49(3): 555-567.
This paper focuses on using UAVs to increase the understanding of a scene in a video. Throughout the experiment, the testers used video footage and Google APIs to track, identify, and evaluate various objects, ranging from people to vehicles to surrounding objects that provide a georeference. They used this object identification and tracking to analyze the interactions between objects, for example between two people, a person and a vehicle, or multiple vehicles, and provided analysis such as the potential danger of a collision.
This study mainly uses video and uses a tracking algorithm to analyze objects within the video. One of the variables collected was GPS position, and this was taken from the embedded GPS tag in the video frame. While this can be effective in getting a general area in which an object is located, it quickly becomes fairly inaccurate depending on where in the image the object is located, as the GPS tag is based on the UAS platform location, or the center of the image if the camera is pointed straight down.
This is further affected if the camera is mounted at an angle of less than 90 degrees, since the algorithm relied heavily on GPS position and on analyzing the difference in position between objects to determine proximity. The article even stated that a student walking on a sidewalk was detected as crossing the road, which it associated with the low location accuracy of the GPS and Google Maps.
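The positional error described above can be roughly corrected for a nadir-pointed camera by converting the object's pixel offset from image centre into a ground offset via the ground sample distance. A minimal sketch; the FOV, image width, and altitude values are hypothetical:

```python
import math

def ground_offset_m(pixel_offset, image_width_px, altitude_m, fov_deg):
    # Approximate ground distance (m) from the image centre for a
    # nadir camera, given the target's pixel offset from centre.
    footprint_m = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    gsd = footprint_m / image_width_px   # metres per pixel
    return pixel_offset * gsd

# Hypothetical case: a target 500 px right of centre, seen from 100 m
# with a 60-degree horizontal FOV on a 4000 px wide image.
offset = ground_offset_m(500, 4000, 100, 60)
```

This only holds for a camera pointed straight down over flat ground; an oblique camera angle, as the critique notes, needs a full projection model rather than a per-pixel scale.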
Goodrich, M. A., et al. (2008). "Supporting wilderness search and rescue using a camera‐equipped mini UAV." Journal of Field Robotics 25(1‐2): 89-110.
This article explores the use of a fixed-wing UAV in SAR operations. The researchers mainly used orthomosaics and video in their search process. For the video feed, a person monitoring it would freeze the video whenever something of note appeared, point out the area of interest to the field personnel, and the ground searchers would then attempt to locate the object, verify it, and report their findings. They found this fairly ineffective due to a lack of coordination between roles.
This article is fairly reflective of what the average person would assume goes on when UAVs are used for SAR: a person monitors a video feed, reports when something of interest appears, and attempts to direct the field searchers to the location. This is ineffective due to the human aspect of SAR. Though a UAS has a better vantage point than ground searchers, the human element is by far the limiting factor because of how much the video monitor could potentially miss. Moreover, the article explained that while the video frame is frozen, video continues to play in the background, so the monitor misses footage that could contain valuable information.
The article also stated that orthomosaics were often created, as these were found to help with identifying areas of interest. One issue, however, is that orthomosaics stitch images together, and an area of a picture containing what you are actually looking for could be lost. Orthomosaics are also impractical for SAR operations because of the lengthy processing time involved.
Rudol, P. and P. Doherty (2008). Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery. 2008 IEEE Aerospace Conference, IEEE.
This study combined color imagery and thermal imagery to attempt to detect human bodies within a given area for SAR operations. For this to work, the researchers had to calibrate their cameras so that a given pixel on the thermal camera corresponded to a given pixel on the color camera. Two platforms were used simultaneously, and the mission took around 10 minutes. The algorithm they used found all 11 targets placed within the area, as well as 3 false positives.
One issue that they addressed, but which is inherent to thermal cameras, is resolution: with thermal imagery, it becomes more difficult to accurately measure an object’s temperature the further the sensor is from the object. Another issue is that when a target was located, the location was determined from the onboard GPS without “differential correction”, that is, correction based on where in the image the target was located. This definitely skews the accuracy, but the researchers did present the error in GPS measurements in their results.
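The pixel-to-pixel calibration between the thermal and color cameras can be modelled, under a planar-scene assumption, as a 3x3 homography mapping a thermal pixel to its color-image counterpart. The matrix below is a hypothetical example, not the authors' calibration:

```python
def apply_homography(H, x, y):
    # Map a pixel (x, y) through a 3x3 homography H (row-major nested lists).
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (
        (H[0][0] * x + H[0][1] * y + H[0][2]) / w,
        (H[1][0] * x + H[1][1] * y + H[1][2]) / w,
    )

# Hypothetical calibration: the color camera sees the scene scaled 4x
# (higher resolution) with a small mounting offset.
H = [[4.0, 0.0, 12.0],
     [0.0, 4.0, -8.0],
     [0.0, 0.0,  1.0]]
cx, cy = apply_homography(H, 80, 60)   # thermal pixel -> color pixel
```

In practice such a matrix is estimated from matched point pairs between the two images (e.g. corners of a calibration target visible in both bands), and it is only exact when the scene is approximately planar.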
Sun, J., et al. (2016). "A camera-based target detection and positioning UAV system for search and rescue (SAR) purposes." Sensors 16(11): 1778.
For this experiment, a fixed-wing UAV was used in conjunction with a GoPro Hero 4 camera, which transmitted video to a ground station consisting of small computing devices loaded with object identification algorithms. Once the software identifies a target, the image is transmitted back to the GCS. The main purpose was to determine the capability to locate a downed aircraft, although for testing the researchers used a red Z, a red plane, a blue I, a blue V, a blue J, and a red Q. All but two of these targets were located by the software over the entirety of the tests.
Overall, the tests performed well. The onboard computer in the UAV continuously processes images as they are taken and transmits them in real time to the GCS for further analysis. GPS inaccuracies remain due to how GPS data is recorded within an image; however, depending on the altitude of the platform, this inaccuracy could be nominal and not necessarily detrimental to a SAR operation. Another issue I see with a test of this sort is the limited variety of targets: the test consisted of identifying only blue and red objects. In real SAR operations, an object that needs to be identified could be any of a multitude of colors, and the test does not do a good job of showing a wider variety of colors or determining any issues associated with them.