CN115963485A - Speed detection method, device, equipment and readable storage medium - Google Patents
Fri Apr 14 2023
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
As shown in fig. 1, in the related art, the lateral velocity and the longitudinal velocity of a detection target are detected by transmitting an electromagnetic wave to the detection target with a radar, so that a plurality of reflection points on the detection target reflect the electromagnetic wave and the radar can measure the radial velocity and the azimuth angle of each reflection point. A plurality of corresponding relationships among the radial velocity, the lateral velocity, and the longitudinal velocity are then established through the azimuth angles, and the lateral velocity and the longitudinal velocity are determined from these corresponding relationships, thereby realizing the detection. The corresponding relationships are shown in formula (1):

$$v_{r_i} = v_x \cos(\theta_i) + v_y \sin(\theta_i) \qquad (1)$$

where $v_{r_i}$ is the radial velocity, at the $i$-th reflection point $P_i$, of the movement speed $V$ of the detection target; $\theta_i$ is the azimuth angle of the $i$-th reflection point $P_i$; $v_x$ is the longitudinal velocity of the detection target; and $v_y$ is its lateral velocity. FIG. 1 shows the case in which the detection target generates a first reflection point $P_1$ and a second reflection point $P_2$.
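For illustration only (not part of the patent), the related-art approach can be sketched in a few lines of Python: stacking formula (1) across the reflection points yields a linear system in $(v_x, v_y)$ that can be solved by least squares. All names below are hypothetical.

```python
import numpy as np

def solve_velocity_from_reflections(v_r, theta):
    """Related-art approach (formula (1)): recover (v_x, v_y) from the
    radial velocities and azimuth angles of several reflection points.

    v_r   : array of radial velocities v_{r,i} measured by the radar (m/s)
    theta : array of azimuth angles theta_i of the reflection points (rad)
    """
    A = np.column_stack([np.cos(theta), np.sin(theta)])  # one row per point
    # Least-squares solution of A @ [v_x, v_y] = v_r; needs >= 2 points
    # with distinct azimuths, which is exactly what fails at long range.
    (v_x, v_y), *_ = np.linalg.lstsq(A, v_r, rcond=None)
    return v_x, v_y

# Example: two reflection points on a target moving at v_x=10, v_y=2 m/s
theta = np.array([0.10, 0.14])
v_r = 10 * np.cos(theta) + 2 * np.sin(theta)
print(solve_velocity_from_reflections(v_r, theta))  # ~ (10.0, 2.0)
```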
However, this detection method is not suitable when the detection target is far from the radar or when the azimuth interval between adjacent reflection points is small. For example, when the distance between the detection target and the radar exceeds 80 m, the detection target may not generate a plurality of reflection points; and when the azimuth interval of adjacent reflection points is small, the radar cannot resolve them. The accuracy of speed detection is therefore low.
In order to solve the above technical problem, the present application provides a speed detection method. As shown in fig. 2, the speed detection method may include:
Step S201, establishing a first corresponding relationship, where the first corresponding relationship represents the relationship among the lateral speed, the longitudinal speed, and the speed included angle of the detection target at the k-th time, the speed included angle being the angle between the lateral speed and the movement speed of the detection target.
The lateral speed is the lateral component of the movement speed, and the longitudinal speed is the longitudinal component. Illustratively, as shown in fig. 2B, the first corresponding relationship may be expressed by formula (2):

$$\tan(\alpha_k) = \frac{v_{x,k}}{v_{y,k}} \qquad (2)$$

where $\alpha_k$, $v_{x,k}$, and $v_{y,k}$ are the speed included angle, the longitudinal speed, and the lateral speed of the detection target at the k-th time, respectively.
Step S202, establishing a second corresponding relationship, where the second corresponding relationship represents the relationship among the azimuth angle, the lateral speed, the longitudinal speed, and the radial speed of the detection target at the k-th time.
The radial velocity may be obtained by a radar, such as a millimeter-wave radar or a laser radar (lidar). The second corresponding relationship can be expressed by formula (3):
$$v_{r,k} = v_{x,k}\cos(\theta_k) + v_{y,k}\sin(\theta_k) \qquad (3)$$
where $v_{r,k}$ and $\theta_k$ are the radial velocity and the azimuth angle of the detection target at the k-th time, respectively; $v_{r,k}$ is the radial component of the movement speed $V_k$ of the detection target at the k-th time.
Step S203, determining the lateral speed and the longitudinal speed based on the first corresponding relationship and the second corresponding relationship.
According to this scheme, the lateral speed and the longitudinal speed of the detection target at the k-th time are determined from the first and second corresponding relationships at the k-th time. The detection target therefore needs to generate only one reflection point for speed detection, rather than a plurality of reflection points. This reduces the influence of the target's distance and of the radar's azimuth resolution on detection accuracy, and improves the accuracy of speed detection.
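As a hedged illustration of step S203, under the reconstruction of formula (2) used above ($\tan\alpha_k = v_{x,k}/v_{y,k}$), the two corresponding relationships can be solved in closed form from a single reflection point. The sketch below is ours, not the patent's:

```python
import numpy as np

def speeds_from_single_point(v_r, theta, alpha):
    """Solve formulas (2) and (3) jointly for one reflection point.

    With tan(alpha) = v_x / v_y (formula (2)), write v_x = V*sin(alpha)
    and v_y = V*cos(alpha), where V is the movement speed magnitude.
    Substituting into formula (3) gives v_r = V*sin(alpha + theta).
    """
    V = v_r / np.sin(alpha + theta)   # degenerate if alpha + theta ~ 0
    return V * np.sin(alpha), V * np.cos(alpha)

# Target with v_x = 10 m/s, v_y = 2 m/s seen at azimuth 0.1 rad:
theta = 0.1
alpha = np.arctan2(10.0, 2.0)                       # speed included angle
v_r = 10.0 * np.cos(theta) + 2.0 * np.sin(theta)    # what the radar reports
print(speeds_from_single_point(v_r, theta, alpha))  # ~ (10.0, 2.0)
```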
In one embodiment, as shown in fig. 3, step S201 may include:
Step S301, acquiring a first position of the detection target at the (k-1)-th time and a second position at the k-th time.
Illustratively, the first and second positions may be acquired using a position sensor, which may be a camera, a laser sensor, or the like.
Step S302, determining a displacement included angle between the first position and the second position; and
Step S303, taking the displacement included angle as the speed included angle.
In one example, referring to fig. 2B, the camera and the radar are mounted at the same position, and the displacement included angle $\alpha_k$ of the detection target can be determined by formula (4):

$$\alpha_k = \arctan\left(\frac{x_k - x_{k-1}}{y_k - y_{k-1}}\right) \qquad (4)$$

where $(x_{k-1}, y_{k-1})$ and $(x_k, y_k)$ are the first and second position coordinates.
In another example, if the time interval $\Delta T$ between the k-th time and the (k-1)-th time is greater than the acquisition interval of the camera, n position coordinates $(x_m, y_m)$, $m = 1, 2, \ldots, n$ (n an integer, $n \geq 1$), may be acquired while the detection target moves from the first position to the second position, and the displacement included angle $\alpha_k$ can be determined by formula (5):

$$\alpha_k = \frac{1}{n}\sum_{m=1}^{n} \arctan\left(\frac{x_m - x_{m-1}}{y_m - y_{m-1}}\right) \qquad (5)$$

where $(x_0, y_0)$ denotes the first position; for $n = 1$, formula (5) reduces to formula (4).
Referring to fig. 2B, the included angle $\alpha_k$ so obtained is taken as the speed included angle, as expressed by formula (6):

$$\tan(\alpha_k) = \frac{v_{x,k}}{v_{y,k}} = \frac{x_k - x_{k-1}}{y_k - y_{k-1}} \qquad (6)$$
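A minimal sketch of formulas (4) and (5) as reconstructed above, assuming the camera supplies longitudinal/lateral coordinates; `arctan2` is used instead of `arctan` for numerical robustness (an implementation choice of ours, not from the patent):

```python
import numpy as np

def displacement_angle(xs, ys):
    """Speed included angle from n+1 >= 2 camera positions.

    xs, ys : longitudinal / lateral coordinates sampled while the target
             moves from the first position (time k-1) to the second (time k).
    Returns the displacement angle, measured against the lateral axis and
    averaged over consecutive position pairs to smooth camera noise.
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    # arctan2 keeps the quadrant correct even when the lateral change is ~0
    angles = np.arctan2(np.diff(xs), np.diff(ys))
    return angles.mean()

# Two-position case (formula (4)): straight-line motion
print(displacement_angle([100.0, 100.9], [5.0, 5.3]))  # arctan(0.9/0.3) ~ 1.249 rad
```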
In the related art, the lateral speed of the detection target at the k-th time is generally determined from the lateral distance change $y_k - y_{k-1}$, the longitudinal distance change $x_k - x_{k-1}$, and the time interval $\Delta T$ between the k-th and (k-1)-th times. For example, the lateral speed $v_{y,k}$ can be determined by formula (7):

$$v_{y,k} = \frac{y_k - y_{k-1}}{\Delta T} \qquad (7)$$
However, since $\Delta T$ may be inaccurate, the determination of the lateral speed is susceptible to timing error, which can produce a large error in the lateral speed. For example, if $y_k - y_{k-1} = 0.6\,\mathrm{m}$ and $\Delta T = 30\,\mathrm{ms}$, then $v_{y,k} = 20\,\mathrm{m/s}$; if $\Delta T$ has a timing error of 2 ms, so that the acquired $\Delta T = 28\,\mathrm{ms}$, the computed $v_{y,k} = 21.4\,\mathrm{m/s}$, a speed error of 1.4 m/s.
In this embodiment, the displacement included angle of the detection target moving from the first position to the second position is used as the speed included angle at the k-th time, which eliminates the timing error, reduces the speed error, and improves the accuracy of speed detection. In addition, when more than two position coordinates are acquired during the movement from the first position to the second position, the estimate benefits twice: first, because the time interval between the k-th and (k-1)-th times is greater than the camera's acquisition interval, the measurement window is lengthened, which effectively suppresses the timing error; second, averaging the plurality of included angles smooths the estimate, making the speed detection more accurate.
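The timing-error example above can be checked numerically; this snippet reproduces the 1.4 m/s error figure, while the angle-based estimate does not use $\Delta T$ at all:

```python
# Numeric check of the timing-error example: a 2 ms error in dT shifts
# the distance-based lateral speed estimate by 1.4 m/s.
dy = 0.6                      # lateral displacement y_k - y_{k-1} (m)
for dT in (0.030, 0.028):     # true and mistimed intervals (s)
    print(f"dT={dT*1e3:.0f} ms -> v_y = {dy/dT:.1f} m/s")
# dT=30 ms -> v_y = 20.0 m/s
# dT=28 ms -> v_y = 21.4 m/s
```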
In one embodiment, as shown in fig. 4A, acquiring the first position of the detection target at the (k-1)-th time may include:
Step S401, acquiring a first image of the detection target at the (k-1)-th time.
The first image may be acquired by a camera, where the camera's resolution of the position change of the detection target is greater than that of the radar.
Step S402, inputting the first image into a target detection model to obtain a first preselected detection frame, where the target detection model is obtained by training a deep learning network model on a plurality of sample images, each sample image including an image of a detection target;
Step S403, determining first particle information from the first preselected detection frame;
Step S404, performing coordinate conversion on the first particle information to obtain the first position.
In one example, as shown in fig. 4B, the first image is input into the target detection model, and a detection frame whose Intersection over Union (IoU) with the annotation frame 410 is 0.8 or more is determined as the first preselected detection frame. The first preselected detection frame 420 may be a rectangular frame, with the pixel coordinate of its i-th vertex $A_i$ being $(u_{1i}, v_{1i})$, where $1 \le i \le 4$. The pixel coordinates $(u_1, v_1)$ of the first particle $B_1$ are determined from the vertex pixel coordinates of the first preselected detection frame 420, where

$$u_1 = \frac{1}{4}\sum_{i=1}^{4} u_{1i}, \qquad v_1 = \frac{1}{4}\sum_{i=1}^{4} v_{1i}$$
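For reference, a standard IoU computation matching the 0.8 preselection threshold above (a generic sketch; the patent does not prescribe an implementation):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Keep a candidate box as the first preselected detection frame only if its
# IoU with the annotation frame is at least 0.8, as in the example above.
keep = iou((10, 10, 110, 60), (12, 12, 108, 58)) >= 0.8  # True (IoU ~ 0.88)
```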
Based on the pixel coordinates $(u_1, v_1)$ of the first particle $B_1$ and the mapping relationship between the pixel coordinate system and the world coordinate system, the first position coordinate $(x_{k-1}, y_{k-1})$ of the first particle at the (k-1)-th time is determined. The mapping relationship is shown in formula (8):

$$s\begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = M\begin{bmatrix} x_{k-1} \\ y_{k-1} \\ 1 \end{bmatrix} \qquad (8)$$

where M is a conversion matrix determined by the intrinsic and extrinsic parameters of the camera, and s is a scale factor.
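A sketch combining the vertex-averaging step with the inverse of formula (8), under our assumption that M acts as a ground-plane homography as reconstructed above; the function name and interface are hypothetical:

```python
import numpy as np

def first_position_from_box(vertices_px, M):
    """First particle (vertex average) and pixel -> world mapping.

    vertices_px : (4, 2) array of vertex pixel coordinates (u_1i, v_1i)
    M           : 3x3 conversion matrix from the camera's intrinsic and
                  extrinsic parameters, assumed to map homogeneous
                  ground-plane coordinates to homogeneous pixel
                  coordinates as in formula (8).
    """
    u1, v1 = np.asarray(vertices_px, float).mean(axis=0)  # first particle B1
    # Invert formula (8): [x, y, 1]^T is proportional to M^{-1} [u1, v1, 1]^T
    x, y, w = np.linalg.inv(M) @ np.array([u1, v1, 1.0])
    return x / w, y / w  # first position (x_{k-1}, y_{k-1})
```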
Correspondingly, a second image of the detection target may be acquired by the camera at the k-th time, and the second position coordinate $(x_k, y_k)$ of the detection target at the k-th time can be determined in the same way as the first position coordinate.
In this embodiment, inputting the first image into the target detection model provides fast target detection, so the position information can be determined quickly; it also removes redundant pixels from the first image, so that the first preselected detection frame retains, to the greatest extent, only the pixels of the detection target. This improves the accuracy of the first particle information and hence the accuracy of the obtained first position.
In one embodiment, determining first particle information from a first pre-selected detection box may comprise:
taking the center of the first preselection detection frame as first particle information; or,
and performing semantic segmentation on the first pre-selected detection frame to obtain a semantic segmentation image with a detection target, and determining first particle information from the semantic segmentation image.
Illustratively, as shown in FIG. 4B, where the first preselected detection frame 420 is a rectangular frame, the geometric center of the first preselected detection frame 420 may be taken as the pixel coordinates of the first particle B1.
Alternatively, as shown in fig. 4C, semantic segmentation is performed on the first preselected detection frame to further remove redundant pixels and retain the pixels of the detection target, yielding a semantic segmentation image 430 of the detection target. The pixel coordinate corresponding to the center of the circumscribed-circle contour or the inscribed-circle contour of the semantic segmentation image 430 (not shown in the figure) is then determined as the pixel coordinate of the first particle.
In this way, the accuracy of determining the first particle information is improved, and thus the accuracy of speed detection is improved.
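A possible realization of the circumscribed-circle variant using OpenCV's minimum enclosing circle (one real option among several; the inscribed-circle variant would instead need, e.g., a distance transform):

```python
import cv2
import numpy as np

def particle_from_mask(mask):
    """Pixel coordinates of the first particle as the centre of the
    circumscribed (minimum enclosing) circle of the segmented target.

    mask : 2D boolean/uint8 array, nonzero where a pixel belongs to the
           detection target in the semantic segmentation image.
    """
    vs, us = np.nonzero(mask)                       # row (v), column (u) indices
    pts = np.column_stack([us, vs]).astype(np.float32).reshape(-1, 1, 2)
    (u1, v1), _radius = cv2.minEnclosingCircle(pts)
    return u1, v1                                   # pixel coordinates (u_1, v_1)
```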
In one embodiment, as shown in fig. 5, the azimuth angle of the detection target may be determined by the following steps:
Step S501, acquiring a second position and a radial distance of the detection target at the k-th time; and
Step S502, determining the azimuth angle of the detection target according to the second position and the radial distance.
In one example, referring also to fig. 2B, the second position is determined from the second image acquired by the camera at the k-th time, and the radial distance is obtained by the radar transmitting an electromagnetic wave to a reflection point of the detection target (e.g., the first reflection point $P_1$) and receiving the reflected electromagnetic wave.
Determining the azimuth angle of the detection target at the k-th time may include:
determining the lateral distance between the detection target and the camera from the second position coordinate; and
determining the azimuth angle of the detection target at the k-th time based on the lateral distance and the radial distance measured by the radar. For example, the azimuth angle $\theta_k$ is determined by formula (9):

$$\theta_k = \arcsin\left(\frac{y_k}{r_k}\right) \qquad (9)$$

where $y_k$ is the lateral distance between the detection target and the camera, and $r_k$ is the radial distance of the detection target at the k-th time.
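Formula (9) in code, as reconstructed above; the arcsin form is our assumption, chosen to be consistent with formula (3):

```python
import numpy as np

def azimuth(y_k, r_k):
    """Azimuth of the target at time k: the lateral distance y_k comes
    from the camera and the radial distance r_k from the radar, so the
    angle does not depend on the radar's coarse angular resolution."""
    return np.arcsin(y_k / r_k)

print(azimuth(5.0, 100.0))  # ~0.0500 rad for a target 5 m off-axis at 100 m
```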
In another example, the relationship among the radial velocity $v_{r,k}$, the lateral velocity $v_{y,k}$, the longitudinal velocity $v_{x,k}$, and the azimuth angle $\theta_k$ of the detection target at the k-th time may be expressed by formula (10):
$$v_{r,k} = v_{x,k}\cos(\theta_k) + v_{y,k}\sin(\theta_k) \qquad (10)$$
In this embodiment, the azimuth angle of the detection target is determined from the lateral distance acquired by the camera and the radial distance acquired by the radar. Its accuracy is higher than that of the azimuth angle measured by the radar, so using it in place of the radar-measured azimuth angle improves the accuracy of speed detection.
In one embodiment, as shown in fig. 6, the speed detection method may further include:
Step S601, establishing a measurement model of the detection target based on the first corresponding relationship and the second corresponding relationship;
Step S602, establishing a process model of the detection target based on a preset constant velocity (CV) motion model; and
Step S603, estimating, based on the measurement model and the process model, the optimal lateral speed and the optimal longitudinal speed of the detection target at the k-th time by unscented Kalman filtering (UKF).
On this basis, a measurement model and a process model of the detection target are established, and unscented Kalman filtering is used to fuse the position information acquired by the camera with the radial distance and the radial velocity acquired by the radar, thereby estimating the optimal lateral speed and the optimal longitudinal speed of the detection target at the k-th time. This effectively improves the accuracy of speed detection.
In one embodiment, the measurement model may include:

$$z_k = \begin{bmatrix} x_k \\ y_k \\ r_k \\ v_{r,k} \\ \alpha_k \end{bmatrix} = \begin{bmatrix} x_k \\ y_k \\ \sqrt{x_k^2 + y_k^2} \\ v_{x,k}\cos(\theta_k) + v_{y,k}\sin(\theta_k) \\ \arctan\bigl((x_k - x_{k-1})/(y_k - y_{k-1})\bigr) \end{bmatrix}$$

where $x_{k-1}$ and $y_{k-1}$ are the longitudinal distance and lateral distance of the detection target at the (k-1)-th time, and $x_k$, $y_k$, $r_k$, $v_{r,k}$, $v_{x,k}$, $v_{y,k}$, $\alpha_k$, $\theta_k$ are the longitudinal distance, lateral distance, radial distance, radial velocity, longitudinal velocity, lateral velocity, displacement included angle/speed included angle, and azimuth angle of the detection target at the k-th time, respectively.
In one embodiment, the process model includes:

$$\begin{bmatrix} x_k \\ y_k \\ v_{x,k} \\ v_{y,k} \end{bmatrix} = \begin{bmatrix} 1 & 0 & \Delta T & 0 \\ 0 & 1 & 0 & \Delta T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{k-1} \\ y_{k-1} \\ v_{x,k-1} \\ v_{y,k-1} \end{bmatrix}$$

where $x_k$, $y_k$, $v_{x,k}$, $v_{y,k}$ are the longitudinal distance, lateral distance, longitudinal speed, and lateral speed of the detection target at the k-th time; $x_{k-1}$, $y_{k-1}$, $v_{x,k-1}$, $v_{y,k-1}$ are the same quantities at the (k-1)-th time; and $\Delta T$ is the time interval between the k-th and (k-1)-th times.
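A minimal, hypothetical sketch of steps S601 to S603 using the third-party filterpy library (the patent names UKF but no library); $\Delta T$, the noise matrices, the initial state, and the measurement ordering are all assumptions of ours:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.05  # assumed frame interval (s); only the process model uses it

def fx(state, dt):
    """Process model (constant velocity): state = [x, y, v_x, v_y]."""
    x, y, vx, vy = state
    return np.array([x + vx * dt, y + vy * dt, vx, vy])

def hx(state):
    """Measurement model from the two corresponding relationships.
    z = [x_cam, y_cam, r_radar, v_r_radar, alpha_cam]."""
    x, y, vx, vy = state
    r = np.hypot(x, y)
    v_r = (x * vx + y * vy) / r    # = v_x*cos(theta) + v_y*sin(theta)
    alpha = np.arctan2(vx, vy)     # tan(alpha) = v_x / v_y
    return np.array([x, y, r, v_r, alpha])

points = MerweScaledSigmaPoints(n=4, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=5, dt=dt, hx=hx, fx=fx, points=points)
ukf.x = np.array([80.0, 4.0, -10.0, 0.5])      # assumed initial state
ukf.P *= 10.0
ukf.R = np.diag([0.5, 0.2, 0.1, 0.05, 0.02])   # assumed sensor noise
ukf.Q = np.eye(4) * 1e-2                       # assumed process noise

# Per frame: fuse the camera position/angle with the radar range/range-rate
z = np.array([79.5, 4.02, 79.62, -9.93, np.arctan2(-10.0, 0.5)])
ukf.predict()
ukf.update(z)
v_x_opt, v_y_opt = ukf.x[2], ukf.x[3]  # optimal longitudinal/lateral speed
```

Because $\alpha_k$ is an angle, a production filter should also wrap its residual to $(-\pi, \pi]$, for example via the `residual_z` hook of `UnscentedKalmanFilter`.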
Fig. 7 is a diagram illustrating application scenarios according to an embodiment of the present application. As shown in fig. 7, the speed detection method according to the embodiment of the present application may be applied to an autonomous vehicle, for example in scenes such as lane-change cut-in, lane-change cut-out, and intersection traffic of the target vehicle 710. In these application scenarios, the target vehicle 710 can accurately detect the lateral speed and the longitudinal speed of the detection target (including detected vehicles, pedestrians, etc.), which helps the target vehicle 710 to better perform path planning, obstacle avoidance, and the like in the automatic driving scenario.
Fig. 8 is a block diagram of a speed detection apparatus according to an embodiment of the present application. As shown in fig. 8, the speed detection apparatus 800 may include:

a first establishing module 810, configured to establish a first corresponding relationship, where the first corresponding relationship represents the relationship among the lateral speed, the longitudinal speed, and the speed included angle of the detection target at the k-th time, the speed included angle being the angle between the lateral speed and the movement speed of the detection target at the k-th time;

a second establishing module 820, configured to establish a second corresponding relationship, where the second corresponding relationship represents the relationship among the azimuth angle, the lateral speed, the longitudinal speed, and the radial speed of the detection target at the k-th time; and

a determining module 830, configured to determine the lateral speed and the longitudinal speed based on the first corresponding relationship and the second corresponding relationship.
In one embodiment, the first establishing module 810 may include:
a first acquisition submodule, configured to acquire a first position of the detection target at the (k-1)-th time and a second position at the k-th time;
a first determining submodule, configured to determine a displacement included angle between the first position and the second position; and
a setting submodule, configured to take the displacement included angle as the speed included angle.
In one embodiment, the first acquisition submodule may include:
an acquisition unit, configured to acquire a first image of the detection target at the (k-1)-th time;
an identification unit, configured to input the first image into the target detection model to obtain a first preselected detection frame, where the target detection model is obtained by training a deep learning network model on a plurality of sample images, each sample image including an image of a detection target;
a determining unit, configured to determine first particle information from the first preselected detection frame; and
a conversion unit, configured to perform coordinate conversion on the first particle information to obtain the first position.
In one embodiment, the determining unit may be configured to:
take the center of the first preselected detection frame as the first particle information; or
perform semantic segmentation on the first preselected detection frame to obtain a semantic segmentation image of the detection target, and determine the first particle information from the semantic segmentation image.
In one embodiment, the second establishing module 820 may include:
a second acquisition submodule, configured to acquire a second position and a radial distance of the detection target at the k-th time; and
a second determining submodule, configured to determine the azimuth angle of the detection target according to the second position and the radial distance.
In one embodiment, the speed detection apparatus may further include:
a measurement model establishing module, configured to establish a measurement model of the detection target based on the first corresponding relationship and the second corresponding relationship;
a process model establishing module, configured to establish a process model of the detection target based on a preset constant velocity (uniform motion) model; and
an estimation module, configured to estimate, based on the measurement model and the process model, the optimal lateral speed and the optimal longitudinal speed of the detection target at the k-th time by unscented Kalman filtering.
In one embodiment, the measurement model may include:

$$z_k = \begin{bmatrix} x_k \\ y_k \\ r_{k,R} \\ v_{r,k} \\ \alpha_k \end{bmatrix} = \begin{bmatrix} x_k \\ y_k \\ \sqrt{x_k^2 + y_k^2} \\ v_{x,k}\cos(\theta_k) + v_{y,k}\sin(\theta_k) \\ \arctan\bigl((x_k - x_{k-1})/(y_k - y_{k-1})\bigr) \end{bmatrix}$$

where $x_{k-1}$ and $y_{k-1}$ are the longitudinal distance and lateral distance of the detection target at the (k-1)-th time, and $x_k$, $y_k$, $r_{k,R}$, $v_{r,k}$, $v_{x,k}$, $v_{y,k}$, $\alpha_k$, $\theta_k$ are the longitudinal distance, lateral distance, radar-measured radial distance, radial velocity, longitudinal velocity, lateral velocity, displacement included angle/speed included angle, and azimuth angle of the detection target at the k-th time, respectively.
In one embodiment, the process model includes:

$$\begin{bmatrix} x_k \\ y_k \\ v_{x,k} \\ v_{y,k} \end{bmatrix} = \begin{bmatrix} 1 & 0 & \Delta T & 0 \\ 0 & 1 & 0 & \Delta T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_{k-1} \\ y_{k-1} \\ v_{x,k-1} \\ v_{y,k-1} \end{bmatrix}$$

where $x_k$, $y_k$, $v_{x,k}$, $v_{y,k}$ are the longitudinal distance, lateral distance, longitudinal speed, and lateral speed of the detection target at the k-th time; $x_{k-1}$, $y_{k-1}$, $v_{x,k-1}$, $v_{y,k-1}$ are the same quantities at the (k-1)-th time; and $\Delta T$ is the time interval between the k-th and (k-1)-th times.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
Fig. 9 is a block diagram of an electronic device for implementing a speed detection method according to an embodiment of the present application. As shown in fig. 9, the electronic device includes a memory 910 and a processor 920, the memory 910 having stored therein instructions executable on the processor 920. The processor 920, when executing the instructions, implements the speed detection method in the above-described embodiments. The number of memories 910 and processors 920 may be one or more. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the present application described and/or claimed herein.
The electronic device may further include a communication interface 930 for communicating with external devices for interactive data transmission. The various components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor 920 may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, as desired, along with multiple memories. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
Optionally, in an implementation, if the memory 910, the processor 920, and the communication interface 930 are integrated on one chip, the memory 910, the processor 920, and the communication interface 930 may communicate with each other through an internal interface.
It should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or the like. A general-purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the Advanced RISC Machine (ARM) architecture.
Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 910) storing computer instructions, which when executed by a processor, implement the method provided in embodiments of the present application.
Alternatively, the memory 910 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the electronic device implementing the speed detection method, and the like. Further, the memory 910 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 910 may optionally include memory located remotely from the processor 920, which may be connected over a network to the electronic device implementing the speed detection method. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In the description of the present specification, reference to the description of "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method descriptions in flowcharts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions are performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or a portion of the steps of the method of the above embodiments may be performed by associated hardware that is instructed by a program, which may be stored in a computer-readable storage medium, that when executed, includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.