GIS and Robotics
Annotated Bibliography Project for GIS and Science (GEO 565) at Oregon State University
created by: Rachel Nehmer

This annotated bibliography was created in completion of a course requirement for Geoscience 565 GIS and Science at Oregon State University. The sources relate to automated navigation and/or robotics using geographic information. Two types of papers are presented: those that use GIS as input to a robot (i.e. robot navigation) and those in which GIS is an output of the robot (i.e. a robot that senses and records an environment).
P. Gutierrez, A. Barrientos, J. del Cerro and R. San Martin
Mission planning and simulation of unmanned aerial vehicles with a GIS-based framework
AIAA Guidance, Navigation, and Control Conference and Exhibit, 21 - 24 August 2006, Keystone, Colorado
The paper presents a framework designed to help plan missions for heterogeneous unmanned aerial vehicles (UAVs) carrying a variety of payloads. The framework consists of a data model, software with a graphical user interface for mission planning, and an Aerial Vehicle Control Language (AVCL). The data model starts with GIS data collected from various sources and preprocessed into a single file to be used as the project's GIS. The mission planning software allows the planner to add to the data model by setting dynamic rules, such as no-fly zones, and by creating hotspots that are relevant to the mission. The collected data is then used to run the simulator in 2D and 3D environments to visualize the results of the mission plan. If the plan is satisfactory, it can be sent to any UAV that has the right equipment for the mission and an interpreter for the AVCL. Because the framework is not tied to a particular set of vehicles, the authors also created generic UAVs as a baseline for planning missions that could be completed by any UAV in a given class. The authors conclude that the AVCL-based framework for mixing vehicles at run time is a significant advantage of their mission planning framework.
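To make the planning step concrete, the sketch below checks a list of waypoints against circular no-fly zones before a plan would be handed to a vehicle. The class names, fields, and geometry are illustrative assumptions, not the authors' data model or AVCL.

```python
# Minimal sketch: validate a planned route against dynamic no-fly rules.
# Names and the circular-zone geometry are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float      # easting in metres (projected GIS coordinates assumed)
    y: float      # northing in metres
    alt: float    # altitude in metres

@dataclass
class NoFlyZone:
    cx: float     # zone centre
    cy: float
    radius: float # simple circular zone for illustration

def violates(plan, zones):
    """Return the waypoints that fall inside any no-fly zone."""
    bad = []
    for wp in plan:
        for z in zones:
            if (wp.x - z.cx) ** 2 + (wp.y - z.cy) ** 2 <= z.radius ** 2:
                bad.append(wp)
                break
    return bad

plan = [Waypoint(0, 0, 100), Waypoint(500, 200, 120), Waypoint(900, 900, 120)]
zones = [NoFlyZone(480, 210, 50)]
print(violates(plan, zones))   # -> the second waypoint
```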
H. Mori and S. Kotani
Robotic Travel Aid for the Blind: HARUNOBU-6
Second European Conference on Disability, Virtual Reality, and Assistive Technology, Skövde, Sweden, 1998.
This paper discusses the development of a travel aid for the blind that is built on top of a wheelchair base and uses a combination of GIS, GPS, computer vision, sensors, and detection algorithms for guidance. A GIS is used as the base for the guidance system. The GIS contains robot guide information, which includes road networks, path networks, sign patterns, and landmarks. A sign pattern is anything in the immediate area that the robot's sensors can detect, such as fence posts, sidewalk curbs, and landmarks. The robot works, generally, by using computer vision to match the sign patterns in the GIS to the real-time data that the sensors provide. The robot also has algorithms to detect car shadows to avoid traffic accidents, and rhythm matching to detect human movement for following foot traffic. A significant amount of effort went into the car detection algorithms for obvious safety reasons. The authors set up three courses to test various components of the system. The first course was a university campus, which tested the more general setup. The second course was an open space that relied more heavily on the GPS system, and the third course was inside a hospital, where the sonar range sensor and optical sensor were dominant.
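The matching step can be illustrated with a rough sketch: a sensed feature is projected into map coordinates and compared with nearby sign patterns of the same type stored in the GIS. The names, record structure, and threshold below are invented for illustration and are not taken from the HARUNOBU-6 implementation.

```python
# Rough sketch of matching a sensed feature to a stored sign pattern.
# Field names and the 2 m threshold are hypothetical.
import math

gis_sign_patterns = [
    {"id": "curb-17", "type": "curb", "x": 12.0, "y": 3.5},
    {"id": "fence-02", "type": "fence_post", "x": 14.2, "y": 4.1},
]

def match_detection(detection, robot_pose, patterns, max_dist=2.0):
    """Return the stored sign pattern closest to a sensed feature, if any."""
    # Transform the detection from robot-relative range/bearing to map coordinates.
    dx = detection["range"] * math.cos(robot_pose["heading"] + detection["bearing"])
    dy = detection["range"] * math.sin(robot_pose["heading"] + detection["bearing"])
    px, py = robot_pose["x"] + dx, robot_pose["y"] + dy
    candidates = [p for p in patterns if p["type"] == detection["type"]]
    best = min(candidates, key=lambda p: math.hypot(p["x"] - px, p["y"] - py), default=None)
    if best and math.hypot(best["x"] - px, best["y"] - py) <= max_dist:
        return best
    return None

pose = {"x": 10.0, "y": 3.0, "heading": 0.0}
print(match_detection({"type": "curb", "range": 2.1, "bearing": 0.2}, pose, gis_sign_patterns))
```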
J. Meguro, J. Takiguchi, M. Hatayama, T. Hashizume, et al.
Creating Spatial Temporal Database by Autonomous Mobile Surveillance System
(A Study of Mobile Robot Surveillance System using Spatial Temporal GIS Part1)
Proceedings of IEEE International Workshop on Safety Security Rescue Robotics 2005
The paper presents a mobile robot surveillance system that can be used in areas where GPS signals are not always reliable. The system is evaluated in a factory area with tall buildings that can block or deflect GPS signals. The GIS of the mobile robot starts as a base map of the area that includes the streets and buildings. The mobile robot uses GPS signals and the base map whenever possible and uses road crossings to reorient itself. The mobile robot constantly scans its surroundings to determine whether there are any obstacles, such as parked cars or pedestrians. If there is a high probability that an obstacle is present, the robot adjusts its planned path using the base map and predefined avoidance rules. The perceived obstacle is recorded in the GIS along with the time it was observed. These 'obstacles' are an important part of surveillance because they are not part of the base map or planned path and therefore become features of interest on a real-time environmental map.
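The recording step might look roughly like the sketch below: a detection above a probability threshold is written to a spatial-temporal store with its position and observation time. The record structure and threshold are assumptions for illustration, not the authors' schema.

```python
# Minimal sketch of logging a time-stamped obstacle observation.
# The dictionary layout and 0.8 threshold are hypothetical.
from datetime import datetime, timezone

observations = []   # stands in for the spatial-temporal GIS layer

def record_if_obstacle(x, y, probability, threshold=0.8):
    """Log a detection as a feature of interest if it exceeds the threshold."""
    if probability >= threshold:
        observations.append({
            "x": x,
            "y": y,
            "probability": probability,
            "observed_at": datetime.now(timezone.utc).isoformat(),
        })
        return True    # caller would also replan around (x, y) using the base map
    return False

record_if_obstacle(431.2, 118.7, probability=0.93)
print(observations)
```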
M. Kais, S. Dauvillier, A. De La Fortelle, I. Masaki, C. Laugier, et al.
Towards Outdoor Localization Using GIS, Vision System and Stochastic Error Propagation
2nd International Conference on Autonomous Robots and Agents, December 13-15, 2004, Palmerston North, New Zealand
The paper presents a method for robot localization (the position and attitude of the robot with respect to a model) when a robot is on the move and does not necessarily have good GPS coverage. The robot discussed in the paper is a remote-controlled vehicle with a GIS database and an onboard camera. The method starts with an initial vehicle configuration (steering wheel angle, speed) and an initial point in the GIS mapped to an initial point in the camera's image. Then, for each small displacement of the vehicle, the linear and angular velocities are calculated, and a formula developed in the paper for error adjustment is applied whenever there is a good GPS reading. The result of the calculation is used to determine the uncertainty of the location and can be used along with the 3D GIS data to project areas of uncertainty for features of interest onto the camera image. For example, if the GIS data contains fire hydrants and the calculations show a high degree of location uncertainty, the camera image will have an overlay of large ellipses around the fire hydrants, whereas a small degree of uncertainty produces smaller ellipses. An experiment testing the method is discussed in the paper, and there is also a good review of prior work on localization techniques.
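A heavily simplified sketch of the dead-reckoning idea appears below: the pose is advanced from the linear and angular velocities at each small displacement, a scalar uncertainty grows with distance travelled, and a good GPS fix pulls the estimate back and shrinks the uncertainty. The paper propagates a full error covariance and projects ellipses into the camera image; the growth rule and numbers here are illustrative assumptions only.

```python
# Simplified dead reckoning with a scalar uncertainty term and a GPS correction.
# The 0.05 growth factor and weighting rule are invented for illustration.
import math

def propagate(pose, v, omega, dt):
    """Advance (x, y, heading, uncertainty) by one small displacement."""
    x, y, theta, sigma = pose
    x += v * dt * math.cos(theta)
    y += v * dt * math.sin(theta)
    theta += omega * dt
    sigma += 0.05 * abs(v) * dt      # uncertainty grows as the vehicle moves
    return (x, y, theta, sigma)

def gps_correction(pose, gps_xy, gps_sigma):
    """Pull the estimate toward a good GPS fix and shrink the uncertainty."""
    x, y, theta, sigma = pose
    w = sigma / (sigma + gps_sigma)            # trust GPS more when sigma is large
    x = (1 - w) * x + w * gps_xy[0]
    y = (1 - w) * y + w * gps_xy[1]
    return (x, y, theta, min(sigma, gps_sigma))

pose = (0.0, 0.0, 0.0, 0.1)
for _ in range(100):                            # one second at 100 Hz, 10 m/s
    pose = propagate(pose, v=10.0, omega=0.01, dt=0.01)
pose = gps_correction(pose, gps_xy=(10.1, 0.6), gps_sigma=0.3)
print(pose)
```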
M. Jabbour, V. Cherfaoui, P. Bonnifait
Management of Landmarks in a GIS for an Enhanced Localisation in Urban Areas
IEEE Intelligent Vehicle Symposium, 2006
The authors discuss a localization technique which uses a GIS that starts with only road networks. A route is precalculated for the autonomous vehicle, and the vehicle makes a first pass on the route to 'learn' the way. During this first pass the vehicle records images of the route in the GIS along with the vehicle's state (heading and speed), GPS coordinates (including strength and precision of the signal), and a timestamp. After the vehicle has made the first pass, the GIS has grown quite large, so the images are processed to find the landmarks that are easiest to detect with computer vision. These landmarks are then stored with IDs connecting them to the time and place, and the images are deleted. When the vehicle makes a second pass, it queries the GIS for landmark IDs that correspond to the current road segment and orders them by time. The vehicle then attempts to match the landmarks using computer vision and corrects its route accordingly. This approach is useful for unmanned vehicles that will be repeating a route and cannot rely solely on GPS navigation.
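The second-pass lookup can be sketched as a simple query: fetch the landmarks recorded for the current road segment and order them by the time they were learned, so the vehicle knows what to look for next. The record and field names below are hypothetical, not the authors' database layout.

```python
# Sketch of retrieving first-pass landmarks for one road segment, ordered by time.
# Segment IDs, times, and the "descriptor" field are placeholder values.
landmarks = [
    {"id": 7, "segment": "A12", "t": 43.1, "descriptor": "..."},
    {"id": 3, "segment": "A12", "t": 38.4, "descriptor": "..."},
    {"id": 9, "segment": "B03", "t": 51.0, "descriptor": "..."},
]

def landmarks_for_segment(segment_id, store):
    """Return landmark records for one road segment, ordered by first-pass time."""
    return sorted((lm for lm in store if lm["segment"] == segment_id),
                  key=lambda lm: lm["t"])

for lm in landmarks_for_segment("A12", landmarks):
    print(lm["id"], lm["t"])    # landmark 3 first, then landmark 7
```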
I. Noda, H. Shimora, H. Akiyama
Conceptual Framework to Maintain Multiple and Floating Relationship among Coordinate Reference Systems for Robotics
First International Conference, Simulation Modeling and Programming for Autonomous Robots, 2008
The paper presents a framework for coordinate reference systems that fall outside traditional GIS coordinate reference systems, which are tied to the Earth. This could be the case when doing extraterrestrial mapping, but the paper focuses on robots creating their own coordinate systems when they have limited knowledge of their surroundings, for example a robot that is powered on in an area about which it has no prior information. The robots create their own coordinate system by exploring the area around them with whatever sensing technologies are at their disposal and recording it to a GIS with a coordinate system that makes sense given the surroundings. When the robot meets another robot or a source of traditional GIS data, the presented framework provides a means of transforming between coordinate systems.
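The kind of transformation the framework must manage can be illustrated with a plain 2-D rigid transform between two robot-local frames, assuming the offset and rotation between them are known. This is a generic example, not the paper's framework.

```python
# Generic 2-D rigid transform between two local frames, for illustration only.
import math

def to_other_frame(point, offset, rotation):
    """Map (x, y) from frame A into frame B given B's origin and rotation in A."""
    x, y = point[0] - offset[0], point[1] - offset[1]
    c, s = math.cos(-rotation), math.sin(-rotation)
    return (c * x - s * y, s * x + c * y)

# Robot B's frame sits at (5, 2) in robot A's frame, rotated 90 degrees.
print(to_other_frame((6.0, 2.0), offset=(5.0, 2.0), rotation=math.pi / 2))
# -> approximately (0.0, -1.0)
```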
A. C. Murtra, J. M. Mirats Tur
Map Format for Mobile Robot Map-based Autonomous Navigation
Technical Report IRI-DT 2007/01, Institut de Robotica i Informatica Industrial, Barcelona, Spain, February 2007
This paper presents a map file format to be used by mobile robots for spatial understanding of their environment. Six key requirements are identified for map formats intended for mobile autonomous navigation. Given the requirements, the authors propose a map format that stores data in a 2D vector format with height attributes attached to geometric entities, such as stairs and ramps, that are important to robot navigation. Along with the height information, the robot map format encodes information for path finding, localization, computer vision, and obstacle avoidance. When comparing a robot map to a GIS map, there are many similarities, but the main differences are that a robot map is geometric rather than topological and that the goal of a robot map is to help a robot navigate, not to accurately represent the environment.
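One way to picture such a map entity is sketched below: a 2-D outline with height attributes that a planner could turn into a traversability check. The field names and slope rule are illustrative assumptions, not the authors' file format.

```python
# Sketch of a 2-D vector map entity with height attributes.
# Keys such as "height_start" and "traversable_by" are hypothetical.
ramp = {
    "kind": "ramp",
    "vertices": [(0.0, 0.0), (4.0, 0.0), (4.0, 1.5), (0.0, 1.5)],  # 2-D outline
    "height_start": 0.0,        # metres at the near edge
    "height_end": 0.6,          # metres at the far edge
    "traversable_by": ["wheeled", "legged"],
}

def max_slope(entity):
    """Crude slope estimate a planner might use to decide traversability."""
    rise = abs(entity["height_end"] - entity["height_start"])
    run = max(abs(x2 - x1) for (x1, _), (x2, _) in
              zip(entity["vertices"], entity["vertices"][1:]))
    return rise / run if run else float("inf")

print(max_slope(ramp))   # 0.6 / 4.0 = 0.15
```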
J. Meguro, K. Ishikawa, T. Hashizume, J. Takiguchi, I. Noda, M. Hatayama
Disaster Information Integration into Geographic Information System using Rescue Robots
IEEE/RSJ International Conference on Intelligent Robots and Systems
In this paper a Mitigation Information Sharing Protocol (MISP) is developed to help standardize the information gathered from rescue robots so that it can be easily added to a GIS database, which can then be queried in a standardized way. During rescue missions a variety of robots may be used (e.g. unmanned airplanes or helicopters, snake robots), and they often save their data in different formats, although most rescue robots store disaster information as images and/or movies. The paper proposes that the rescue robot store an additional text file recording position (coordinates, angle of the robot, etc.) and time so that the images can have spatial and temporal attributes. The image and text files are added to the database using an insert protocol that is a subset of the MISP. With the data added in a uniform way, human rescuers can query the GIS to see what an area looked like at a given time to help with decision making during the rescue.
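The pairing of an image with its spatial and temporal attributes might look roughly like the sketch below; the field names are illustrative assumptions, not the MISP specification.

```python
# Sketch of a side-car metadata record paired with a rescue-robot image.
# Field names and the example values are hypothetical.
import json

def make_metadata_record(image_file, lat, lon, heading_deg, timestamp_iso):
    """Build the record stored alongside an image before the database insert."""
    return {
        "image": image_file,
        "position": {"lat": lat, "lon": lon, "heading_deg": heading_deg},
        "time": timestamp_iso,
    }

record = make_metadata_record("rubble_034.jpg", 35.681, 139.767, 212.5,
                              "2005-06-12T09:41:30+09:00")
print(json.dumps(record, indent=2))
```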
Sørensen, C.G., H.J. Olsen, A.P. Ravn, and P. Makowski
Planning and Operation of an Autonomous Vehicle for Weed Inspection
ASAE Ann. Int. Meeting/CIGR XVth World Cong., Chicago, Illinois, USA, 2002. ASAE paper 021177
The authors developed software for creating a weed monitoring plan used to program an autonomous vehicle. First, a sampling plan is developed; this may be influenced by weed densities from previous years or may use expectations of weed occurrences. Either way, once a sampling scheme is decided, a fixed grid is defined and a route is planned to visit each of the sampling points. After the planning stage, the autonomous vehicle travels the route and uses computer vision to determine the density of weeds at each sample point. Once the data is collected, spatial interpolation is used to create a map of weed densities, which can be used for developing a spraying plan. The result is a lower-cost weed monitoring system that allows more precise application of sprays.
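The planning stage can be sketched as laying a fixed grid over the field and ordering the points into a back-and-forth route; the field size, spacing, and route rule below are made-up values, and the real planner also weighs prior weed densities when choosing the sampling scheme.

```python
# Sketch of a fixed sampling grid ordered into a boustrophedon (back-and-forth) route.
# Field dimensions and spacing are placeholder values.
def grid_route(width, height, spacing):
    """Return sample points ordered row by row, alternating direction."""
    route = []
    ys = [y * spacing for y in range(int(height // spacing) + 1)]
    xs = [x * spacing for x in range(int(width // spacing) + 1)]
    for i, y in enumerate(ys):
        row = [(x, y) for x in (xs if i % 2 == 0 else reversed(xs))]
        route.extend(row)
    return route

for point in grid_route(width=60, height=40, spacing=20):
    print(point)
```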
M. Rahimi, R. Pon, W. J. Kaiser, G. S. Sukhatme, D. Estrin, and M. Srivastava
Adaptive sampling for environmental robotics
IEEE International Conference on Robotics and Automation, ICRA, New Orleans, LA, 2004.
This paper describes a new distributed robotic sensor methodology, Networked Infomechanical Systems (NIMS), that enables more accurate sampling. The authors note that distributed sensor networks have become feasible for environmental monitoring but face the challenge of sensing uncertainty that can arise from unexpected physical phenomena. One way to deal with this uncertainty is for the sensor to be able to move and possibly get a better reading. However, with robotics in natural settings there are a number of issues with location uncertainty once the robot is in motion. Also, in difficult terrain the robot may not be able to reach a sampling point. The authors describe how NIMS addresses many of these limitations of mobile robots in natural settings. NIMS consists of robotic sensors that travel along suspended cables, which allows for more accurate localization, a better view, and easier obstacle avoidance. Once the robotic mobility issues were addressed, different adaptive sampling schemes were considered, with the final choice being nested stratified sampling. In the end the authors created a novel robotic sensor system that used adaptive sampling to record environmental information. The collected information was then interpolated to create various environmental maps of the area.
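The stratified sampling idea can be sketched as follows: take a few initial readings in each stratum and allocate extra samples to the strata whose readings vary the most. The allocation rule and values below are illustrative assumptions, not the NIMS implementation.

```python
# Sketch of allocating extra samples in proportion to within-stratum spread.
# Stratum names and readings are placeholder values.
from statistics import pstdev

def allocate_extra_samples(initial_readings, extra_budget):
    """Split a budget of extra samples across strata in proportion to spread."""
    spreads = {name: pstdev(vals) for name, vals in initial_readings.items()}
    total = sum(spreads.values()) or 1.0
    return {name: round(extra_budget * s / total) for name, s in spreads.items()}

initial = {
    "stratum_A": [20.1, 20.3, 20.2],    # low variation -> few extra samples
    "stratum_B": [18.0, 22.5, 25.1],    # high variation -> most of the budget
}
print(allocate_extra_samples(initial, extra_budget=10))
```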