Information processing device and information processing method
10731994 · 2020-08-04
Assignee
Inventors
- Takaaki Kato (Tokyo, JP)
- Shingo Tsurumi (Saitama, JP)
- Masashi Eshima (Chiba, JP)
- Akihiko KAINO (Kanagawa, JP)
- Masaki Fukuchi (Tokyo, JP)
Cpc classification
G06T7/246
PHYSICS
G01C3/14
PHYSICS
G06V20/56
PHYSICS
G01B11/2545
PHYSICS
H04N13/239
ELECTRICITY
G01C21/28
PHYSICS
G06T7/521
PHYSICS
International classification
G01C21/28
PHYSICS
H04N13/239
ELECTRICITY
G06T7/246
PHYSICS
G01C3/14
PHYSICS
G06T7/521
PHYSICS
H04N13/00
ELECTRICITY
G01B11/25
PHYSICS
Abstract
The present disclosure relates to an information processing device and an information processing method that are capable of estimating the self-position by accurately and continuously estimating the self-movement. The information processing device according to an aspect of the present disclosure includes a downward imaging section and a movement estimation section. The downward imaging section is disposed on the bottom of a moving object traveling on a road surface and captures an image of the road surface. The movement estimation section estimates the movement of the moving object in accordance with a plurality of images representing the road surface and captured at different time points by the downward imaging section. The present disclosure can be applied, for example, to a position sensor mounted in an automobile.
Claims
1. An information processing device, comprising: a downward imaging section on a bottom of a moving object, wherein the moving object is movable on a road surface, and the downward imaging section is configured to capture a plurality of images of the road surface at different time points; a movement estimation section configured to estimate movement of the moving object based on the plurality of images; a road surface estimation section configured to estimate a distance from the downward imaging section to the road surface; a first self-position estimation section configured to estimate a first self-position of the moving object based on the estimated movement of the moving object; an outward imaging section on the moving object, wherein the outward imaging section is configured to capture an image of surroundings of the moving object; a second self-position estimation section configured to estimate a second self-position of the moving object based on the image of the surroundings of the moving object; and a correction section configured to correct the first self-position based on the second self-position.
2. The information processing device according to claim 1, wherein the downward imaging section includes stereo cameras configured to capture a pair of images of the road surface at a same time point, and the road surface estimation section is further configured to estimate the distance to the road surface based on the pair of images of the road surface.
3. The information processing device according to claim 1, wherein the road surface estimation section includes a time of flight (ToF) sensor.
4. The information processing device according to claim 1, further comprising a light projection section configured to project a texture pattern onto the road surface, wherein the downward imaging section is further configured to capture an image of the texture pattern, and the road surface estimation section is further configured to estimate the distance to the road surface based on the image of the texture pattern.
5. The information processing device according to claim 4, wherein illumination of an imaging range of the downward imaging section is based on the projection of the texture pattern.
6. The information processing device according to claim 1, further comprising an illumination section configured to illuminate an imaging range of the downward imaging section.
7. The information processing device according to claim 1, wherein the correction section is further configured to compute a weighted average of the first self-position and the second self-position.
8. The information processing device according to claim 1, wherein the correction section is further configured to adopt, based on a vehicle speed of the moving object, the second self-position instead of the first self-position.
9. The information processing device according to claim 1, wherein the correction section is further configured to adopt, based on a number of landmarks derived from the plurality of images of the road surface, the second self-position instead of the first self-position.
10. An information processing method, comprising: in an information processing device that includes a downward imaging section, a road surface estimation section, a first self-position estimation section, an outward imaging section, a second self-position estimation section, a correction section, and a movement estimation section: capturing, by the downward imaging section on a bottom of a moving object, a plurality of images of a road surface at different time points, wherein the moving object is movable on the road surface; estimating, by the movement estimation section, movement of the moving object based on the plurality of images; estimating, by the road surface estimation section, a distance from the downward imaging section to the road surface; estimating, by the first self-position estimation section, a first self-position of the moving object based on the estimated movement of the moving object; capturing, by the outward imaging section on the moving object, an image of surroundings of the moving object; estimating, by the second self-position estimation section, a second self-position of the moving object based on the image of the surroundings of the moving object; and correcting, by the correction section, the first self-position based on the second self-position.
11. A non-transitory computer-readable medium having stored thereon, computer-executable instructions, which when executed by an information processing device, cause the information processing device to execute operations, the operations comprising: capturing, by a downward imaging section of the information processing device, a plurality of images of a road surface at different time points, wherein the downward imaging section is on a bottom of a moving object, and the moving object is movable on the road surface; estimating, by a movement estimation section of the information processing device, movement of the moving object based on the plurality of images; estimating, by a road surface estimation section of the information processing device, a distance from the downward imaging section to the road surface; estimating, by a first self-position estimation section of the information processing device, a first self-position of the moving object based on the estimated movement of the moving object; capturing, by an outward imaging section of the information processing device, an image of surroundings of the moving object, wherein the outward imaging section is on the moving object; estimating, by a second self-position estimation section of the information processing device, a second self-position of the moving object based on the image of the surroundings of the moving object; and correcting, by a correction section of the information processing device, the first self-position based on the second self-position.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENT
(16) The best mode for carrying out the present disclosure (hereinafter referred to as the embodiment) will now be described with reference to the accompanying drawings.
(17) An information processing device according to the embodiment of the present disclosure is mounted in a moving object to estimate its position. Note that the embodiment described below assumes that the moving object is an automobile. However, the information processing device can be mounted not only in an automobile but also in various other moving objects, including autonomous mobile robots and vehicles such as bicycles and motorcycles.
First Exemplary Configuration of Information Processing Device According to Embodiment of Present Disclosure
(19) The information processing device 10 includes a downward imaging section 11, an illumination section 13, a control section 14, and an information processing section 15.
(20) The downward imaging section 11 includes a pair of stereo cameras, namely, a first camera 12L and a second camera 12R (these cameras may also be collectively referred to as the stereo cameras 12), captures a moving image at a predetermined frame rate to obtain chronological frame images, and outputs the chronological frame images to a subsequent stage. A frame image captured at time point t by the first camera 12L is hereinafter referred to as the first image (t), and a frame image captured at time point t by the second camera 12R is hereinafter referred to as the second image (t).
(22) As illustrated in
(23) Further, two or more pairs of stereo cameras 12 may be included in the downward imaging section 11. In such an instance, as illustrated in
(24) Returning to
(25) The information processing section 15 includes a road surface estimation section 16, a movement estimation section 17, a road surface environment estimation section 18, and a self-position estimation section 19.
(26) The road surface estimation section 16 performs road surface estimation in accordance with a first image (t) and a second image (t) that are captured at the same time point t by the stereo cameras 12. The movement estimation section 17 performs movement estimation in accordance with a first image (t) (or a second image (t)) captured at a time point t and a first image (t−1) (or a second image (t−1)) captured at an immediately preceding time point t−1. Both of these estimation operations are described in detail below.
(28) As illustrated in
(29) As illustrated in
(30) Returning to
(31) The self-position estimation section 19 estimates a current self-position (the current position of the own moving object) by adding the estimated movement of the own moving object to a reference point, that is, a known location at a certain time point.
(32) <First Self-Position Estimation Process by Information Processing Device 10>
(34) The first self-position estimation process is repeatedly performed at intervals equivalent to the frame rate of imaging by the stereo cameras 12 of the downward imaging section 11.
(35) Further, it is assumed that the downward imaging section 11 uses a pair of stereo cameras 12 to capture a moving image in advance at a predetermined frame rate in order to obtain chronological frame images, and outputs the chronological frame images to the information processing section 15 at a subsequent stage. It is also assumed that the self-position estimation section 19 acquires geographical coordinates of a location at a certain time point and identifies the current self-position (the position of the own moving object) by regarding such a location as a reference point.
(36) In step S1, the road surface estimation section 16 of the information processing section 15 acquires the first image (t) and the second image (t), which are captured at time point t. In step S2, the road surface estimation section 16 detects feature points from the first image (t) and the second image (t), identifies the same feature points in these images, performs road surface estimation on the basis of parallax matching of these images, and calculates the feature point-to-camera distance.
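The parallax-based distance calculation of step S2 can be illustrated by the standard stereo triangulation relation Z = f·B/d, where f is the focal length in pixels, B is the stereo baseline, and d is the disparity between matched feature points in a rectified image pair. The following is a minimal sketch with hypothetical camera parameters, not the exact computation of the road surface estimation section 16:

```python
def depth_from_disparity(focal_px, baseline_m, x_left, x_right):
    """Distance (m) to a feature point from its horizontal pixel
    coordinates in a rectified stereo pair: Z = f * B / d."""
    disparity = x_left - x_right  # pixels; larger disparity = closer point
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_px * baseline_m / disparity

# A camera pair 0.1 m apart with an 800 px focal length sees the same
# road feature at x = 420 px (first image) and x = 380 px (second image):
z = depth_from_disparity(800.0, 0.1, 420.0, 380.0)
# 800 * 0.1 / 40 = 2.0 m from the cameras
```

In practice the stereo pair must be rectified so that matched feature points differ only in their horizontal coordinate, which is what the parallax matching of step S2 relies on.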
(37) In step S3, the movement estimation section 17 of the information processing section 15 detects the same feature points in the first image (t) captured at the time point t and the first image (t−1) captured at the immediately preceding time point t−1, and estimates the self-movement (the movement of the own automobile) in accordance with the coordinate difference between the feature points in the two images.
(38) In step S4, the road surface environment estimation section 18 estimates the road surface environment, that is, for example, the slope and irregularities of the road surface, in accordance with the result of road surface estimation (the feature point-to-camera distance). At the same time, the self-position estimation section 19 estimates the current self-position by adding the currently estimated self-movement to a known reference point or to the previously estimated self-position. Subsequently, processing returns to step S1 so that steps S1 and beyond are repeated. The description of the first self-position estimation process performed by the information processing device 10 is now completed.
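The self-position update of step S4 amounts to dead reckoning: each newly estimated frame-to-frame movement is added to the previously estimated position, starting from a known reference point. A minimal sketch (the coordinates and per-frame motions are hypothetical):

```python
def integrate_motion(reference_xy, frame_motions):
    """Accumulate per-frame (dx, dy) self-movement estimates onto a
    known reference point to obtain the current self-position."""
    x, y = reference_xy
    for dx, dy in frame_motions:
        x += dx
        y += dy
    return (x, y)

# A reference point plus three frames of estimated road-surface motion:
pos = integrate_motion((100.0, 50.0), [(0.5, 0.0), (0.5, 0.1), (0.4, -0.1)])
# pos is approximately (101.4, 50.0)
```

This accumulation is also why the errors discussed in paragraph (67) build up over time when only road surface images are used.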
(39) <Second Self-Position Estimation Process by Information Processing Device 10>
(40) If it can be assumed that the road surface is a combination of planar triangles, the road surface estimation section 16 and the movement estimation section 17 are capable of performing road surface estimation and movement estimation by using a pixel gradient-based method.
(42) As illustrated in
(43) As illustrated in
(45) The second self-position estimation process is repeatedly performed at intervals equivalent to the frame rate of imaging by the stereo cameras 12 of the downward imaging section 11.
(46) Further, it is assumed that the road surface on which an automobile travels can be expressed by a polygon model including planar surfaces, and that the downward imaging section 11 uses a pair of stereo cameras 12 to capture, in advance, a moving image of a specific region set on the road surface at a predetermined frame rate in order to obtain chronological frame images, and outputs the chronological frame images to the information processing section 15 at a subsequent stage. It is also assumed that the self-position estimation section 19 acquires geographical coordinates of a location at a certain time point and identifies the current self-position by regarding such a location as a reference point.
(47) In step S11, the road surface estimation section 16 of the information processing section 15 acquires the first image (t) and the second image (t), which are captured at time point t. In step S12, the road surface estimation section 16 directly estimates the planar shape of the specific region in accordance with the pixel gradient of the specific region in each of the first image (t) and the second image (t).
(48) In step S13, the movement estimation section 17 of the information processing section 15 directly estimates the self-movement (the movement of the own moving object) in accordance with the pixel gradient of the specific region in each of the first image (t) and the first image (t−1) captured at the immediately preceding time point t−1.
(49) In step S14, the road surface environment estimation section 18 estimates the road surface environment, that is, for example, the slope and irregularities of the road surface, in accordance with the estimated planar shape of the specific region. At the same time, the self-position estimation section 19 estimates the current self-position by adding the currently estimated self-movement to a known reference point or to the previously estimated self-position. Subsequently, processing returns to step S11 so that steps S11 and beyond are repeated. The description of the second self-position estimation process performed by the information processing device 10 is now completed.
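For readers unfamiliar with pixel gradient-based (direct) methods, a single Gauss-Newton step of translation-only image alignment can be sketched as below. This is a toy illustration in the spirit of steps S12 and S13 (pure 2-D translation, small displacement, synthetic images), not the patented procedure:

```python
import numpy as np

def translation_from_gradients(img0, img1):
    """One Gauss-Newton step of direct (pixel-gradient) alignment:
    solve the 2x2 normal equations built from image gradients for the
    small translation (dx, dy) between two grayscale images."""
    Iy, Ix = np.gradient(img0.astype(float))   # spatial gradients
    It = img1.astype(float) - img0.astype(float)  # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # (dx, dy) in pixels

# Synthetic check: a smooth pattern shifted 0.3 px to the right.
xs, ys = np.arange(64), np.arange(64)
X, Y = np.meshgrid(xs, ys)
img0 = np.sin(0.2 * X) + np.cos(0.15 * Y)
img1 = np.sin(0.2 * (X - 0.3)) + np.cos(0.15 * Y)
dx, dy = translation_from_gradients(img0, img1)
# dx comes out close to 0.3 and dy close to 0.0
```

Real direct methods iterate this step over a warp model with more degrees of freedom (for example, per-triangle planar homographies when the road surface is modeled as planar triangles).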
Second Exemplary Configuration of Information Processing Device According to Embodiment of Present Disclosure
(51) This information processing device 20 includes a downward imaging section 21, an illumination section 22, a light projection section 23, a control section 24, and an information processing section 25.
(52) The downward imaging section 21 includes a camera. The camera is mounted on the bottom of a moving object (e.g., an automobile) and oriented downward such that a road surface is an imaging range without being affected by external light. The downward imaging section 21 captures a moving image at a predetermined frame rate to obtain chronological frame images, and outputs the chronological frame images to a subsequent stage. A frame image captured at time point t by the camera of the downward imaging section 21 is hereinafter referred to as the image (t).
(53) The illumination section 22 irradiates an imaging range of the downward imaging section 21 with illumination light. The light projection section 23 projects a texture pattern (structured light) onto the imaging range of the downward imaging section 21. Note that the illumination section 22 may be integral with the light projection section 23. Further, if the projection of the texture pattern provides adequate illuminance for capturing an image of feature points that may exist within the imaging range on the road surface, the illumination light need not be emitted. The control section 24 controls the downward imaging section 21, the illumination section 22, and the light projection section 23.
(54) The information processing section 25 includes a road surface estimation section 26, a movement estimation section 27, a road surface environment estimation section 28, and a self-position estimation section 29.
(55) The road surface estimation section 26 performs road surface estimation in accordance with the image (t) representing the road surface irradiated with the texture pattern. Note that a ToF (Time of Flight) sensor for irradiating the road surface with laser light and measuring the distance to the road surface in accordance with the time required for the laser light to bounce back may be used instead of the light projection section 23 and the road surface estimation section 26.
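The ToF alternative mentioned in paragraph (55) derives the distance from the round-trip time of the laser light, d = c·Δt/2. A generic illustration of that relation (the timing value is hypothetical, not device-specific):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_seconds):
    """Distance to the road surface from the laser round-trip time:
    the light travels to the surface and back, hence the factor 1/2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Light returning after ~3.34 ns indicates a surface ~0.5 m away:
d = tof_distance(3.3356e-9)
```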
(56) The movement estimation section 27 performs movement estimation by tracking the same feature points in an image (t) captured at a time point t and an image (t−1) captured at an immediately preceding time point t−1.
(57) The road surface environment estimation section 28 estimates the road surface environment, that is, for example, the slope and irregularities of the road surface, in accordance with the result of road surface estimation (road surface-to-camera distance).
(58) The self-position estimation section 29 estimates the current self-position by adding the estimated self-movement to a reference point, that is, a known location at a certain time point.
(59) <Self-Position Estimation Process by Information Processing Device 20>
(61) This self-position estimation process is repeatedly performed at intervals equivalent to the frame rate of imaging by the downward imaging section 21.
(62) Further, it is assumed that the downward imaging section 21 captures, in advance, an image of the road surface irradiated with the texture pattern at a predetermined frame rate in order to obtain chronological frame images, and outputs the chronological frame images to the information processing section 25 at a subsequent stage. It is also assumed that the self-position estimation section 29 acquires geographical coordinates of a location at a certain time point and identifies the current self-position by regarding such a location as a reference point.
(63) In step S21, the road surface estimation section 26 of the information processing section 25 acquires an image (t) of the road surface irradiated with the texture pattern that is captured at time point t. In step S22, the road surface estimation section 26 calculates the road surface-to-camera distance in accordance with the shape of the texture pattern in the image (t).
(64) In step S23, the movement estimation section 27 of the information processing section 25 detects the same feature points in the image (t) captured at a time point t and the image (t−1) captured at an immediately preceding time point t−1, and estimates the self-movement (the movement of the own moving object) in accordance with the coordinate difference between the feature points in the two images and with the road surface-to-camera distance calculated in step S22.
(65) In step S24, the road surface environment estimation section 28 estimates the road environment, that is, for example, the slope, irregularities, and slipperiness of the road surface, in accordance with the result of road surface estimation (road surface-to-camera distance). At the same time, the self-position estimation section 29 estimates the current self-position by adding the currently estimated self-movement to a known reference point or to the previously estimated self-position. Subsequently, processing returns to step S21 so that steps S21 and beyond are repeated. The description of the self-position estimation process performed by the information processing device 20 is now completed.
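The conversion performed in step S23 — scaling a feature point's pixel displacement by the road surface-to-camera distance to obtain metric motion — follows from the pinhole model: one pixel spans approximately Z/f meters on the road surface. A minimal sketch with hypothetical parameters:

```python
def metric_motion(pixel_dx, pixel_dy, distance_m, focal_px):
    """Convert a feature point's pixel displacement between frames into
    metric road-surface motion; ground meters per pixel = Z / f."""
    scale = distance_m / focal_px
    return (pixel_dx * scale, pixel_dy * scale)

# A feature moves 40 px between frames; the camera sits 0.4 m above the
# road and has an 800 px focal length:
dx, dy = metric_motion(40.0, 0.0, 0.4, 800.0)
# 40 * 0.4 / 800 = 0.02 m of vehicle motion per frame
```

This is why the road surface estimation of step S22 is needed before step S23: without the distance Z, the pixel displacement has no metric scale.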
(66) The above-described information processing devices 10 and 20 are capable of accurately estimating the self-movement at intervals equivalent to the frame rate of imaging. Therefore, the current self-position can be accurately estimated.
(67) However, when, for example, an automobile is traveling on the road surface of a snowy or muddy road on which feature points cannot easily be detected or is traveling at a high speed that cannot be handled by the frame rate of the downward imaging section 11, the self-movement might not be estimated or the result of estimation might be in error. Therefore, if only road surface images are used, such errors will be accumulated so that the resulting estimated self-position deviates from the correct position.
Third Exemplary Configuration of Information Processing Device According to Embodiment of Present Disclosure
(69) This information processing device 30 is obtained by adding an outward imaging section 31, a vehicle speed detection section 32, and a GPS processing section 33 to the information processing device 10 depicted in
(70) The information processing section 34, which replaces the information processing section 15, is obtained by adding a SLAM processing section 35 and a correction section 36 to the information processing section 15. Note that components of the information processing device 30 that are identical with those of the information processing device 10 are designated by the same reference numerals and will not be redundantly described.
(71) The outward imaging section 31 includes at least one camera that is attached to a moving object and oriented so as to capture an image of the surroundings. The image captured by the outward imaging section 31 is supplied to the SLAM processing section 35.
(73) Returning to
(74) The correction section 36 corrects the result of self-position estimation by the self-position estimation section 19, which is based on images captured by the downward imaging section 11, by using the self-position identified by the GPS processing section 33 in accordance with the GPS signal and using the self-position estimated by the SLAM processing section 35.
(75) <First Self-Position Correction Process by Correction Section 36>
(77) The first self-position correction process corrects a first self-position estimation result wTc.sup.G based on images from the downward imaging section 11 in accordance with a second self-position estimation result wTc.sup.O based on images from the outward imaging section 31.
(78) In step S31, the correction section 36 acquires from the self-position estimation section 19 the first self-position estimation result wTc.sup.G based on images from the downward imaging section 11.
(79) In step S32, the correction section 36 acquires from the SLAM processing section 35 the second self-position estimation result wTc.sup.O based on images from the outward imaging section 31.
(80) In step S33, in accordance with Equation (1) below, the correction section 36 makes corrections by performing weighted averaging of the first self-position estimation result wTc.sup.G acquired in step S31 and the second self-position estimation result wTc.sup.O acquired in step S32, and acquires a corrected self-position wTc.
wTc=αwTc.sup.G+(1−α)wTc.sup.O  (1)
(81) Here, α is a value that satisfies the relational expression 0<α<1. Note that the value of α may be adaptively set, as is the case with the Kalman filter.
(82) Note that a self-position wTc.sup.GPS identified by the GPS processing section 33 in accordance with a GPS signal may be used instead of the self-position estimation result wTc.sup.O based on images from the outward imaging section 31.
(83) The first self-position correction process described above may be performed intermittently at predetermined intervals. When performed in such a manner, the first self-position correction process corrects the accumulation of errors in the self-position wTc.sup.G estimated on the basis of road surface images.
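A minimal sketch of the weighted averaging of Equation (1), with alpha as the blending weight and the self-positions reduced to hypothetical 2-D coordinates (the actual estimates are poses, but the blend is applied componentwise in the same way):

```python
def blend_positions(pos_ground, pos_outward, alpha):
    """Corrected self-position per Equation (1):
    wTc = alpha * wTc_G + (1 - alpha) * wTc_O, with 0 < alpha < 1."""
    return tuple(alpha * g + (1.0 - alpha) * o
                 for g, o in zip(pos_ground, pos_outward))

# Trust the downward-camera estimate with weight 0.7:
wTc = blend_positions((10.0, 4.0), (12.0, 4.4), 0.7)
# roughly (10.6, 4.12)
```

Choosing alpha adaptively, for example from the current estimate covariances, yields Kalman-filter-like behavior as noted in paragraph (81).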
(84) <Second Self-Position Correction Process by Correction Section 36>
(86) When the vehicle speed v.sub.t of an automobile is equal to or higher than a predetermined threshold value, the second self-position correction process adopts the second self-position estimation result wTc.sup.O based on images from the outward imaging section 31 instead of the first self-position estimation result wTc.sup.G based on images from the downward imaging section 11.
(87) In step S41, the correction section 36 acquires from the self-position estimation section 19 the first self-position estimation result wTc.sup.G based on images from the downward imaging section 11.
(88) In step S42, the correction section 36 acquires from the SLAM processing section 35 the second self-position estimation result wTc.sup.O based on images from the outward imaging section 31.
(89) In step S43, the correction section 36 acquires the vehicle speed v.sub.t from the vehicle speed detection section 32, and determines whether the vehicle speed v.sub.t is equal to or higher than the predetermined threshold value.
(90) If the vehicle speed v.sub.t is equal to or higher than the predetermined threshold value, it is conceivable that the first self-position estimation result wTc.sup.G is susceptible to error. Therefore, the correction section 36 proceeds to step S44 and adopts the second self-position estimation result wTc.sup.O as a final self-position estimation result wTc.
(91) If, on the contrary, the vehicle speed v.sub.t is lower than the predetermined threshold value, it is conceivable that the first self-position estimation result wTc.sup.G is not susceptible to error. Therefore, the correction section 36 proceeds to step S45 and adopts the first self-position estimation result wTc.sup.G as the final self-position estimation result wTc.
(92) Note that the self-position wTc.sup.GPS identified by the GPS processing section 33 in accordance with a GPS signal may be used instead of the self-position estimation result wTc.sup.O based on images from the outward imaging section 31.
(93) The second self-position correction process described above may be repeated continuously. As a result, if the first self-position estimation result wTc.sup.G is in error, the second self-position estimation result wTc.sup.O may be adopted.
(94) <Third Self-Position Correction Process by Correction Section 36>
(96) If the number x.sub.t of landmarks identified in the images from the downward imaging section 11 (feature points whose real-space coordinates are identified) is smaller than a predetermined threshold value during the first self-position estimation process based on images from the downward imaging section 11, the third self-position correction process adopts the second self-position estimation result wTc.sup.O based on images from the outward imaging section 31 instead of the first self-position estimation result wTc.sup.G based on images from the downward imaging section 11.
(97) In step S51, the correction section 36 acquires from the self-position estimation section 19 the first self-position estimation result wTc.sup.G based on images from the downward imaging section 11.
(98) In step S52, the correction section 36 acquires from the SLAM processing section 35 the second self-position estimation result wTc.sup.O based on images from the outward imaging section 31.
(99) In step S53, the correction section 36 acquires from the road surface environment estimation section 18 the number x.sub.t of landmarks identified in the images from the downward imaging section 11, and determines whether the number x.sub.t of landmarks is equal to or larger than the predetermined threshold value.
(100) If the number x.sub.t of landmarks is equal to or larger than the predetermined threshold value, it is conceivable that the first self-position estimation result wTc.sup.G is not susceptible to error. Therefore, the correction section 36 proceeds to step S54 and adopts the first self-position estimation result wTc.sup.G as the final self-position estimation result wTc.
(101) If, on the contrary, the number x.sub.t of landmarks is smaller than the predetermined threshold value, it is conceivable that the first self-position estimation result wTc.sup.G is susceptible to error. Therefore, the correction section 36 proceeds to step S55 and adopts the second self-position estimation result wTc.sup.O as the final self-position estimation result wTc.
(102) Note that the self-position wTc.sup.GPS identified by the GPS processing section 33 in accordance with a GPS signal may be used instead of the self-position estimation result wTc.sup.O based on images from the outward imaging section 31.
(103) The third self-position correction process described above may be repeated continuously. As a result, if the first self-position estimation result wTc.sup.G is in error, the second self-position estimation result wTc.sup.O may be adopted.
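The second and third self-position correction processes both reduce to threshold tests that decide which estimate to adopt. The following combined sketch uses hypothetical threshold values; the patent leaves the concrete thresholds unspecified:

```python
def select_position(wTc_G, wTc_O, vehicle_speed, n_landmarks,
                    speed_threshold=25.0, landmark_threshold=20):
    """Adopt the outward-camera estimate wTc_O when the downward-camera
    estimate wTc_G is likely unreliable: at high speed (too little
    frame-to-frame overlap of the road surface) or when too few
    road-surface landmarks were identified."""
    if vehicle_speed >= speed_threshold:   # second correction process
        return wTc_O
    if n_landmarks < landmark_threshold:   # third correction process
        return wTc_O
    return wTc_G

# Slow travel over a feature-rich road keeps the first estimate:
assert select_position("G", "O", vehicle_speed=10.0, n_landmarks=50) == "G"
# A snowy road with few landmarks falls back to the second estimate:
assert select_position("G", "O", vehicle_speed=10.0, n_landmarks=3) == "O"
```

As paragraphs (92) and (102) note, the GPS-derived position wTc.sup.GPS could be substituted for wTc.sup.O in the same selection logic.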
CONCLUSION
(104) As described above, the information processing devices 10, 20, and 30 are capable of accurately and continuously estimating the self-movement and the current self-position.
(105) Further, even if the estimated self-movement and self-position are in error, the information processing device 30 is capable of correcting such errors without allowing them to accumulate.
(106) A series of processes to be performed by the information processing section 15 in the information processing device 10, the information processing section 25 in the information processing device 20, and the information processing section 34 in the information processing device 30 may be executed by hardware or by software. When the series of processes is to be executed by software, a program forming the software is installed in a computer. The computer may be, for example, a computer incorporated into dedicated hardware, or a general-purpose personal computer or other computer capable of executing various functions when various programs are installed.
(108) In the computer 100, a CPU (Central Processing Unit) 101, a ROM (Read Only Memory) 102, and a RAM (Random Access Memory) 103 are interconnected by a bus 104.
(109) The bus 104 is further connected to an input/output interface 105. The input/output interface 105 is connected to an input section 106, an output section 107, a storage section 108, a communication section 109, and a drive 110.
(110) The input section 106 is formed, for example, of a keyboard, a mouse, and a microphone. The output section 107 is formed, for example, of a display and a speaker. The storage section 108 is formed, for example, of a hard disk and a nonvolatile memory. The communication section 109 is formed, for example, of a network interface. The drive 110 drives a removable medium 111 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
(111) In the computer 100 configured as described above, the above-described series of processes are performed by allowing the CPU 101 to load a program stored, for example, in the storage section 108 into the RAM 103 through the input/output interface 105 and the bus 104 and execute the loaded program.
(112) Note that programs executed by the computer 100 may be processed chronologically in the order described in this document, in parallel, or at an appropriate time point, for example, when a call is made.
(113) Note that the present disclosure is not limited to the foregoing embodiment. Various changes and modifications can be made without departing from the spirit of the present disclosure.
(114) The present disclosure may adopt the following configurations.
(115) (1) An information processing device including:
(116) a downward imaging section that is disposed on the bottom of a moving object traveling on a road surface and captures an image of the road surface; and
(117) a movement estimation section that estimates the movement of the moving object in accordance with a plurality of images representing the road surface and captured at different time points by the downward imaging section.
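The movement estimation of configuration (1) amounts to recovering the planar displacement between two consecutive road-surface images. A minimal sketch, not part of the disclosure, assuming a flat road, a downward camera at a known height, and pure translation between frames; it uses phase correlation, and the height and focal-length figures in the usage note are illustrative assumptions:

```python
import numpy as np

def estimate_shift_px(img_a, img_b):
    """Estimate the (dy, dx) pixel shift from img_a to img_b by phase
    correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum marks the translation between the frames."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    spectrum = np.conj(fa) * fb
    spectrum /= np.abs(spectrum) + 1e-12   # normalize to unit magnitude
    corr = np.fft.ifft2(spectrum).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image size wrap around to negative values
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return dy, dx

def shift_to_metres(shift_px, height_m, focal_px):
    """Scale a pixel shift to road-plane metres using the estimated
    camera-to-road distance (cf. configuration (2)) and focal length."""
    return tuple(s * height_m / focal_px for s in shift_px)
```

For example, with the camera 0.3 m above the road and a 500-pixel focal length, a 10-pixel image shift corresponds to 6 mm of vehicle motion, which is why configuration (2) couples movement estimation to a road-surface distance estimate.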
(118) (2) The information processing device as described in (1) above, further including:
(119) a road surface estimation section that estimates the distance to the road surface.
(120) (3) The information processing device as described in (2) above, in which the downward imaging section includes stereo cameras, and the road surface estimation section estimates the distance to the road surface in accordance with a pair of images representing the road surface and captured at the same time point by the stereo cameras.
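Configuration (3) recovers the camera-to-road distance by stereo triangulation. A minimal sketch, not part of the disclosure, assuming rectified images and simple sum-of-squared-differences block matching; the patch size, search range, focal length, and baseline are illustrative assumptions:

```python
import numpy as np

def patch_disparity(left, right, row, col, patch=9, max_disp=16):
    """Match a patch from the left image against horizontally shifted
    positions in the right image (rectified stereo: the search is 1-D)."""
    h = patch // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1]
    best_cost, best_d = np.inf, 0
    for d in range(max_disp):
        cand = right[row - h:row + h + 1, col - d - h:col - d + h + 1]
        cost = np.sum((ref - cand) ** 2)   # SSD matching cost
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo triangulation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px
```

With a 700-pixel focal length and a 0.1 m baseline, a 7-pixel disparity corresponds to a 10 m range; for a downward camera the expected distances are far smaller, so even a short baseline yields large, easily measured disparities.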
(121) (4) The information processing device as described in (2) above, in which the road surface estimation section includes a ToF sensor.
(122) (5) The information processing device as described in (2) above, further including:
(123) a light projection section that projects a texture pattern onto the road surface;
(124) in which the road surface estimation section estimates the distance to the road surface in accordance with an image of the texture pattern projected onto the road surface, the image being captured by the downward imaging section.
(125) (6) The information processing device as described in (5) above, in which the light projection section doubles as an illumination section that illuminates the imaging range of the downward imaging section.
(126) (7) The information processing device as described in any one of (1) to (5) above, further including:
(127) an illumination section that illuminates the imaging range of the downward imaging section.
(128) (8) The information processing device as described in any one of (1) to (7) above, further including:
(129) a first self-position estimation section that estimates the self-position of the moving object in accordance with the movement of the moving object that is estimated by the movement estimation section.
(130) (9) The information processing device as described in any one of (1) to (8) above, further including:
(131) an outward imaging section that is disposed on a moving object traveling on a road surface and captures an image of the surroundings of the moving object; and
(132) a second self-position estimation section that estimates the self-position of the moving object in accordance with the image of the surroundings of the moving object that is captured by the outward imaging section.
(133) (10) The information processing device as described in (9) above, further including:
(134) a correction section that corrects a first self-position estimated by the first self-position estimation section by using a second self-position estimated by the second self-position estimation section.
(135) (11) The information processing device as described in (10) above, in which the correction section performs weighted averaging of the first self-position estimated by the first self-position estimation section and the second self-position estimated by the second self-position estimation section.
(136) (12) The information processing device as described in (10) above, in which the correction section adopts, in accordance with the vehicle speed of the moving object, the second self-position estimated by the second self-position estimation section instead of the first self-position estimated by the first self-position estimation section.
(137) (13) The information processing device as described in (10) above, in which the correction section adopts, in accordance with the number of landmarks derived from the image of the road surface that is captured by the downward imaging section, the second self-position estimated by the second self-position estimation section instead of the first self-position estimated by the first self-position estimation section.
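Configurations (10) to (13) can be read as one correction policy: blend the two self-position estimates by weighted averaging, but adopt the outward (second) estimate outright when the downward camera is likely unreliable, i.e. at high vehicle speed (motion blur) or when too few road-surface landmarks are tracked. A sketch, not part of the disclosure, in which all thresholds and weights are illustrative assumptions:

```python
def correct_self_position(p_first, p_second, speed_mps, n_landmarks,
                          max_speed=25.0, min_landmarks=10, w_first=0.7):
    """Fuse the downward-camera estimate p_first with the outward-camera
    estimate p_second (both given as [x, y] positions)."""
    # Fall back to the second estimate when the downward camera is
    # unreliable: high speed (blur) or too few road-surface landmarks.
    if speed_mps > max_speed or n_landmarks < min_landmarks:
        return list(p_second)
    # Otherwise blend the two estimates by weighted averaging.
    return [w_first * a + (1 - w_first) * b
            for a, b in zip(p_first, p_second)]
```

The weight `w_first` would in practice reflect the relative confidence of each estimator rather than being a fixed constant.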
(138) (14) An information processing method that is used in an information processing device including a downward imaging section and a movement estimation section, the downward imaging section being disposed on the bottom of a moving object traveling on a road surface and capturing an image of the road surface, the movement estimation section estimating the movement of the moving object in accordance with a plurality of road surface images that are captured at different time points by the downward imaging section, the information processing method including:
(139) an acquisition step of acquiring, by the movement estimation section, the image of the road surface that is captured by the downward imaging section; and
(140) a movement estimation step of estimating, by the movement estimation section, the movement of the moving object in accordance with a plurality of acquired images representing the road surface and captured at different time points.
(141) (15) A program that controls an information processing device including a downward imaging section and a movement estimation section, the downward imaging section being disposed on the bottom of a moving object traveling on a road surface and capturing an image of the road surface, the movement estimation section estimating the movement of the moving object in accordance with a plurality of images representing the road surface and captured at different time points by the downward imaging section, the program causing a computer in the information processing device to perform a process including:
(142) an acquisition step of acquiring the image of the road surface that is captured by the downward imaging section; and
(143) a movement estimation step of estimating the movement of the moving object in accordance with the plurality of acquired road surface images that are captured at different time points.
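The method of configuration (14) yields one displacement estimate per frame pair; the self-position of configuration (8) then follows by accumulating those displacements (dead reckoning). A minimal sketch, not part of the disclosure, in which the axis convention and units are assumptions:

```python
import numpy as np

def integrate_motion(deltas, start=(0.0, 0.0)):
    """Dead-reckon the self-position by accumulating per-frame
    road-surface displacement estimates (dx, dy), e.g. in metres.
    Returns the full trajectory, starting at `start`."""
    pos = np.array(start, dtype=float)
    trajectory = [pos.copy()]
    for d in deltas:
        pos += d                 # add this frame pair's displacement
        trajectory.append(pos.copy())
    return trajectory
```

Because each step's error is carried forward, the accumulated position drifts over time, which is precisely why configurations (9) and (10) introduce an independent outward-camera estimate and a correction section.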
REFERENCE SIGNS LIST
(144) 10 Information processing device, 11 Downward imaging section, 12 Stereo camera, 13 Illumination section, 14 Control section, 15 Information processing section, 16 Road surface estimation section, 17 Movement estimation section, 18 Road surface environment estimation section, 19 Self-position estimation section, 20 Information processing device, 21 Downward imaging section, 22 Illumination section, 23 Light projection section, 24 Control section, 25 Information processing section, 26 Road surface estimation section, 27 Movement estimation section, 28 Road surface environment estimation section, 29 Self-position estimation section, 30 Information processing device, 31 Outward imaging section, 32 Vehicle speed detection section, 33 GPS processing section, 34 Information processing section, 35 SLAM processing section, 36 Correction section, 100 Computer, 101 CPU