Why field-of-view matters
Mar 23, 2021
The Robot Report
The more an autonomous mobile robot (AMR) can sense, the better it will perform, and the larger each sensor's field of view (FoV), the more practical the overall solution becomes. If a robot cannot sense something, it cannot deal with it. Whether it is a person, another approaching AMR, or an object on the terrain being traversed, a robot needs complete situational understanding. Achieving this can be impractical, however, due to bill-of-material costs, the weight of too many sensors and their wiring, a shortage of inputs on the on-board computer, or the software overhead of preparing, stitching, fusing, and synchronizing all of the sensing data.

Sensors that maximize the field of sensing make logical sense. Yet many sensors in de facto use offer only limited sensing fields. Figure 1 shows multiple stereo vision sensors combined in an attempt to simulate a single large-FoV sensor.

Figure 1: Three standard stereo cameras combined to cover the entire forward range of motion of an AMR. Green represents the depth field, yellow the RGB field, and red the blind spots.

When a large sensing field is needed, LiDAR is usually the first consideration. While LiDARs do provide a large horizontal sensing field, it comes at the expense of the vertical sensing field; even a 3D LiDAR, if the system can afford one, still has a limited sensing field. See Figure 2.

Figure 2: Examples of vertical field of view. Top: standard stereo cameras and 3D LiDAR. Bottom: 2D LiDAR. Red shapes represent objects outside the sensing range.

LiDAR is also often selected for its distance range and accuracy. This is undoubtedly true; however, because LiDARs are scanning sensors, a great deal of contextual understanding is missing between scan lines vertically and, possibly, during the horizontal scans.
Let us now consider a stereo vision system that has not only a large horizontal FoV but a large vertical FoV as well. I am not proposing that this is a perfect sensor; however, it has attributes that can simplify a sensor stack by providing a large sensing field. First, let's introduce the idea that a stereo depth camera can have up to a 360° horizontal field of view with more than 100° in the vertical. Additionally, this stereo camera has a minimum depth distance of zero, which means it has no blind spots anywhere around it within a large field of view from floor to ceiling. And it has no moving parts, thanks to an innovative optics design that requires only a single CMOS sensor.

When we start with a stereo camera possessing these attributes, tasks like obstacle detection become much easier. Few robot designs can take advantage of the entire 360° horizontal FoV, but starting with too much FoV and reducing it through a software command is much more straightforward than trying to increase the FoV with multiple stereo cameras and stitching.

Figure 3 shows a configuration using two of these stereo cameras on opposing corners of an AMR whose payload rides on the top platform. With just two cameras, the robot now has complete situational understanding around it: it can maneuver forward and backward equally well, it knows if obstacles are approaching from any direction, and it knows if anything falls off of the payload.

Figure 3: Two large-FoV stereo cameras provide comprehensive situational awareness completely around an AMR.
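The idea of starting with too much FoV and reducing it in software can be sketched in a few lines. The sketch below assumes a hypothetical panoramic depth layout in which one image row covers 0° to 360° of azimuth at uniform column spacing; a real camera SDK defines its own format, and `crop_horizontal_fov` is an illustrative name, not a vendor API.

```python
def crop_horizontal_fov(pano_row, fov_deg, center_deg=0.0):
    """Crop one row of a 360-degree panoramic depth image to a desired
    horizontal field of view centered on `center_deg`.

    `pano_row` is a list of depth samples covering 0..360 degrees of
    azimuth at uniform spacing (a hypothetical layout for illustration).
    """
    n = len(pano_row)
    deg_per_col = 360.0 / n
    half = fov_deg / 2.0
    out = []
    for i, depth in enumerate(pano_row):
        az = i * deg_per_col  # azimuth of this column
        # Signed angular distance from the desired viewing direction.
        diff = (az - center_deg + 180.0) % 360.0 - 180.0
        if -half <= diff <= half:
            out.append(depth)
    return out

# A 360-sample row (1 degree per column); keep a forward 120-degree wedge.
row = [1.0] * 360
wedge = crop_horizontal_fov(row, 120.0)
print(len(wedge))  # 121 columns: both 60-degree edges are inclusive
```

Because the crop is a cheap index selection, the same full-surround frame can serve several consumers at once, each with its own virtual FoV.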
Figure 4 shows the benefit of vertical FoV. Small obstacles on the ground or hanging above the AMR can easily be detected. In total, this means that any obstacle which could interfere with the AMR will be detected.

Figure 4: The benefit of vertical FoV for detecting obstacles from the floor up to the height of the AMR plus its payload.
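The "obstacles that could interfere" criterion amounts to a height-envelope filter on the 3D points the camera returns. The minimal sketch below assumes points as `(x, y, z)` tuples in meters with z up; the coordinate convention and the `in_height_envelope` helper are assumptions for illustration, not part of any camera's documented API.

```python
def in_height_envelope(points, robot_height_m, floor_z_m=0.0):
    """Keep only 3D points that could physically interfere with the AMR:
    anything between the floor and the top of the robot plus payload.
    Points are (x, y, z) tuples in meters, z up (assumed convention)."""
    return [p for p in points
            if floor_z_m <= p[2] <= floor_z_m + robot_height_m]

cloud = [(1.0, 0.0, 0.05),   # small object on the floor
         (2.0, 0.5, 1.10),   # cable hanging at robot height
         (1.5, 0.2, 2.50)]   # ceiling fixture, safely above the robot
obstacles = in_height_envelope(cloud, robot_height_m=1.4)
print(len(obstacles))  # 2: only points inside the envelope remain
```

A narrow-FoV sensor never sees the floor-level and hanging points in the first place; with a large vertical FoV, the filter is all that is needed.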
This configuration is also beneficial for mapping. The obstacle detection data can be transmitted as 2D or 2.5D data compatible with most SLAM algorithms, and it is an affordable, ready solution for the migration to 3D mapping, which has many benefits and is expected to become more widely adopted later in 2021.

We could consider this an ideal solution, but even these stereo cameras can't do everything. LiDAR may still be needed for safety certifications or to operate with low ambient lighting. The premise remains valid, though: a large-FoV stereo camera can greatly simplify the sensor stack of any robot, especially an AMR.

What is this stereo camera being presented? It is actually a family of omnidirectional stereo depth cameras from DreamVu, Inc. PAL and PAL Mini each use innovative optics and computationally efficient software to de-warp the captured stereo images, so that an immersive, dense stereo RGB pair can feed AI algorithms such as object recognition; so that obstacle detection and obstacle avoidance (ODOA) can be performed efficiently with a comprehensive occupancy map; or so that a complete 3D point cloud can be generated in every frame for digital twinning and 3D mapping.

There is no perfect sensor, but relying on large-FoV stereo cameras is a great opportunity to simplify sensor stacks and improve the contextual understanding of robots. Large-FoV stereo cameras won't get fooled even on plain walls with no texture, and fewer sensors mean simpler calibration and more up-time.

Figure 5: PAL and PAL Mini omnidirectional stereo depth cameras.
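The 2D output mentioned above is commonly produced by collapsing the 3D obstacle points into a planar range scan, the format most 2D SLAM packages consume. The sketch below shows that common projection under assumed conventions (x/y in meters, 1° azimuth bins); it is an illustration of the technique, not DreamVu's documented interface.

```python
import math

def points_to_2d_scan(points, num_bins=360):
    """Collapse a 3D obstacle point cloud into a 2D range scan: for each
    azimuth bin, keep the nearest horizontal range. The result resembles
    what a planar LiDAR outputs, which most 2D SLAM algorithms consume
    (a common projection; conventions here are assumed for illustration)."""
    scan = [float('inf')] * num_bins  # inf = nothing seen in that bin
    for x, y, _z in points:
        r = math.hypot(x, y)          # horizontal range to the point
        if r == 0.0:
            continue                  # a point at the origin has no bearing
        az = math.degrees(math.atan2(y, x)) % 360.0
        b = int(az * num_bins / 360.0) % num_bins
        if r < scan[b]:
            scan[b] = r               # keep the closest obstacle per bearing
    return scan

pts = [(2.0, 0.0, 0.1), (3.0, 0.0, 1.0), (0.0, 1.0, 0.5)]
scan = points_to_2d_scan(pts)
print(scan[0], scan[90])  # 2.0 1.0  (nearest ranges at 0 and 90 degrees)
```

Keeping the nearest range per bearing is what makes the projection "2.5D-safe": an obstacle at any height inside the envelope still blocks that bearing in the flattened scan.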
Interested in learning more? Visit www.dreamvu.com to see for yourself. These cameras are fully qualified with an IP67 rating. They are built for the stringent requirements of industrial applications, but also to meet the cost pressures of domestic and other cost-sensitive applications.

Sponsored content by DreamVu, Inc.