CN112188082A - High dynamic range image shooting method, shooting device, terminal and storage medium - Google Patents
- Tue Jan 05 2021
Info
- Publication number: CN112188082A (application CN202010883969.3A)
- Authority: CN (China)
- Prior art keywords: original image, image, camera, original, dynamic range
- Prior art date: 2020-08-28
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
Abstract
The invention discloses a high dynamic range image shooting method, a shooting device, a terminal and a storage medium. A first original image of the current scene is shot by a first camera while at least one second camera simultaneously shoots a second original image of the same scene, the exposure parameters of the first and second cameras being different. The second original image is registered with the first original image as the reference image, and the first original image and the registered second original image are fused to obtain a high dynamic range image of the current scene. By capturing differently exposed original images of the scene simultaneously with at least two cameras and fusing them after registration, the method solves the problem that monocular HDR shooting is slow because differently exposed source images must be captured one after another before an HDR image can be synthesized; the imaging speed of HDR images is improved, and user experience is improved.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a high dynamic range image capturing method, a capturing apparatus, a terminal, and a storage medium.
Background
With the rapid development of electronic products and image processing technology, users expect more from rendered images and are no longer satisfied with unprocessed originals. HDR (High Dynamic Range) imaging is sought after by many photographers because it extends dynamic range and preserves image detail by synthesizing images taken at different exposure levels. The existing HDR capture method mainly uses a monocular camera: the exposure is varied to shoot multiple frames at different exposure levels, which are then merged into a single HDR image as the final result. Although shooting several images at different exposures can yield an HDR image with well-preserved detail, a monocular camera must make one capture per required source image, so as many sequential shots are needed as there are exposures, which takes a long time and limits the scenarios in which HDR can be applied. Capturing HDR images is therefore slow, and the user experience suffers.
Disclosure of Invention
The invention aims to solve the technical problem that, in the related art, HDR image shooting is completed by a monocular camera that must capture differently exposed source images over multiple shots before an HDR image can be synthesized, making HDR capture slow and time-consuming.
In order to solve the above technical problem, the present invention provides a high dynamic range image capturing method, including:
shooting a first original image of a current scene through a first camera;
simultaneously shooting a second original image of the current scene through at least one second camera, wherein the exposure parameters of the first camera and the second camera are different;
performing image registration on the second original image by taking the first original image as a reference image;
and fusing the first original image and the registered second original image to obtain a high dynamic range image of the current scene.
Optionally, before the capturing the first original image of the current scene by the first camera, the method further includes:
setting exposure parameters of the first camera;
and setting exposure parameters of the second camera.
Optionally, before the image registration of the second original image with the first original image as a reference image, the method further includes:
and respectively carrying out distortion correction on the first original image and the second original image.
Optionally, the image registration of the second original image with the first original image as a reference image includes:
determining the offset direction and the offset size of the first original image relative to the second original image;
and performing translation compensation on the second original image by taking the first original image as a reference image.
Optionally, the determining the offset direction and the offset size of the first original image relative to the second original image includes:
and determining the offset direction and the offset size of the first original image relative to the second original image according to the relative position relationship between the first camera and the second camera.
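The patent does not give a formula for this step, but for two parallel cameras on the same shooting plane the offset follows the classic stereo disparity relation; the sketch below is illustrative only, and every number in it is hypothetical.

```python
def pixel_offset(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Horizontal image offset (disparity) in pixels between views of a
    point at depth_m, seen by two parallel cameras whose optical centres
    are baseline_m apart: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

# e.g. 1500 px focal length, 12 mm baseline, subject 3 m away -> 6 px offset
offset = pixel_offset(1500, 0.012, 3.0)
```

In practice the baseline is fixed by the camera layout, so this offset can be precomputed per camera pair, which is consistent with the claim's use of the relative position relationship alone.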
Optionally, the image registration of the second original image with the first original image as a reference image includes:
and taking the first original image as a reference image, performing pixel matching, and determining the pixel matching relationship between the second original image and the first original image.
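The claim leaves the pixel-matching method open; one simple realisation (an assumption, not taken from the patent) is a brute-force search for the integer shift that minimises the pixel-wise difference between the two images:

```python
import numpy as np

def match_shift(ref, img, max_shift=5):
    """Find the integer (dy, dx) shift of `img` that best matches the
    reference `ref`, by exhaustive sum-of-absolute-differences search."""
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = float(np.abs(ref - shifted).sum())
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Demo: displace a random image, then recover the aligning shift.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moved = np.roll(np.roll(ref, -2, axis=0), 4, axis=1)
shift = match_shift(ref, moved)  # shift that re-aligns `moved` to `ref`
```

Real implementations would use feature- or correlation-based registration for sub-pixel accuracy; the exhaustive search above only conveys the idea of establishing a pixel correspondence against the reference image.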
Optionally, the fusing the first original image and the registered second original image includes:
respectively acquiring exposure attribute values of the first original image and the registered second original image;
and fusing the first original image and the second original image after registration according to the exposure attribute value.
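The patent does not specify the fusion rule. A common choice, borrowed from the exposure-fusion literature rather than from this document, weights each pixel by how close its exposure attribute value is to mid-grey, so the well-exposed regions of each frame dominate the result:

```python
import numpy as np

def fuse_by_exposure(images, sigma=0.2):
    """Blend same-scene frames (floats scaled to [0, 1]) using per-pixel
    'well-exposedness' weights centred on mid-grey (0.5)."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma**2)) + 1e-12
    weights /= weights.sum(axis=0, keepdims=True)  # normalise across frames
    return (weights * stack).sum(axis=0)

# Demo: an under-exposed and an over-exposed flat frame fuse to mid-grey.
under = np.full((4, 4), 0.1)
over = np.full((4, 4), 0.9)
fused = fuse_by_exposure([under, over])
```

Because the weights are normalised per pixel, the fused image stays in range regardless of how many registered second original images contribute.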
Further, the present invention also provides a photographing apparatus, comprising:
the shooting module is used for shooting a first original image of a current scene through a first camera and shooting a second original image of the current scene through at least one second camera, and exposure parameters of the first camera and the second camera are different;
a registration module for performing image registration on the second original image with the first original image as a reference image;
and the synthesis module is used for fusing the first original image and the registered second original image to obtain a high dynamic range image of the current scene.
Furthermore, the invention also provides a terminal, which comprises a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the high dynamic range image capturing method as described in any one of the above.
Further, the present invention also provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the high dynamic range image capturing method as described in any one of the above.
Advantageous effects
The invention provides a high dynamic range image shooting method, a shooting device, a terminal and a storage medium, addressing the defects of existing monocular HDR capture, in which differently exposed source images must be shot over multiple sequential captures before an HDR image can be synthesized, making shooting slow and time-consuming. A first original image of the current scene is shot by a first camera while at least one second camera, configured with different exposure parameters, simultaneously shoots a second original image of the scene. The second original image is registered with the first original image as the reference image, and the first original image and the registered second original image are fused to obtain a high dynamic range image of the current scene. Because differently exposed original images are captured simultaneously by at least two cameras and fused after registration, the sequential-capture bottleneck of monocular HDR is eliminated; the imaging speed of HDR images is improved, the time spent is reduced, and user experience is improved.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing various embodiments of the present invention;
FIG. 2 is a diagram of a wireless communication system for the mobile terminal shown in FIG. 1;
FIG. 3 is a basic flowchart of a high dynamic range image capturing method according to a first embodiment of the present invention;
FIG. 4 is a flowchart of a high dynamic range image capturing method according to a second embodiment of the present invention;
fig. 5 is a schematic diagram of the basic structure of a camera array according to the second embodiment of the present invention;
fig. 6 is a schematic diagram of the basic structure of a shooting device according to a third embodiment of the invention;
fig. 7 is a schematic structural diagram of a terminal according to the third embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description takes a mobile terminal as an example; those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the configuration according to the embodiments of the present invention can also be applied to fixed terminals.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting; a mobile terminal may include more or fewer components than those shown, some components may be combined, or the components may be arranged differently.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the
radio frequency unit101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the
processor110; in addition, the uplink data is transmitted to the base station. Typically,
radio frequency unit101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the
radio frequency unit101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help a user receive and send e-mails, browse webpages, access streaming media, and the like, providing wireless broadband internet access. Although fig. 1 shows the WiFi module 102, it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into an audio signal and output it as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. The audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes) and, when stationary, can detect the magnitude and direction of gravity; it can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Other sensors such as a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, and infrared sensor may also be configured on the mobile phone and are not described further here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations performed by a user on or near it (e.g., operations performed on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of a user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and can receive and execute commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the touch panel 1071 transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 1 the touch panel 1071 and the display panel 1061 are shown as two separate components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions; this is not limited here.
The interface unit 108 serves as an interface through which at least one external device can connect to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100, or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like, while the data storage area may store data created according to the use of the phone (such as audio data, a phonebook, etc.). Further, the memory 109 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor need not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to the various components. Preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption through the power management system. Although not shown in fig. 1, the mobile terminal 100 may further include a Bluetooth module or the like, which is not described in detail here.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of the universal mobile telecommunications technology, and includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, among others. The eNodeB 2021 may be connected with other eNodeBs 2022 through a backhaul (e.g., the X2 interface); the eNodeB 2021 is connected to the EPC 203 and may provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers such as the home location register (not shown) to manage related functions and holds subscriber-specific information about service characteristics, data rates, etc. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201, among other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, the IMS (IP Multimedia Subsystem), or other IP services.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
First embodiment
In order to solve the problem that, in the related art, HDR image shooting completed by a monocular camera requires multiple shots of differently exposed source images before an HDR image can be synthesized, making capture slow and time-consuming, this embodiment provides a high dynamic range image shooting method.
Fig. 3 is a basic flowchart of a high dynamic range image capturing method provided in this embodiment, where the high dynamic range image capturing method includes:
s301, shooting a first original image of the current scene through a first camera.
S302, shooting a second original image of the current scene through at least one second camera, wherein exposure parameters of the first camera and the second camera are different.
And S303, carrying out image registration on the second original image by taking the first original image as a reference image.
S304, fusing the first original image and the second original image after registration to obtain a high dynamic range image of the current scene.
In this embodiment, after a shooting instruction is received, a first original image of the current scene is shot by the first camera while a second original image of the current scene is simultaneously shot by at least one second camera. It should be understood that a plurality of second cameras may exist on the terminal; when there are several, at least one of them may be selected to shoot simultaneously with the first camera, so that while the first camera captures the first original image of the current scene, each selected second camera obtains a second original image of the same scene. When several second cameras are selected to shoot simultaneously with the first camera, several second original images of the current scene are obtained. It should also be understood that every second camera and the first camera are located on the same shooting plane.
It is understood that the viewing ranges, angles, and shooting parameters other than exposure of the first camera and the second camera are essentially consistent, or differ only in ways that do not affect imaging. Since an HDR image is synthesized from original images with different exposure parameters, the exposure parameters of the original images must differ from one another; that is, the exposure parameters of the original images captured by the first camera and by each second camera are different. Further, when several second cameras are used, their exposure parameters also differ from each other.
In this embodiment, before the first original image of the current scene is captured by the first camera, the method further includes setting the exposure parameters of the first camera and setting the exposure parameters of the second camera. It should be understood that the exposure parameter of each camera may be set by the terminal on receiving a user's setting instruction, or may be an exposure parameter that the terminal acquires from the cloud.
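As a concrete illustration of what "different exposure parameters" might look like in practice (the EV values here are hypothetical and not taken from the patent), the terminal could bracket shutter times around a base exposure, assigning one step per camera:

```python
def bracketed_shutter(base_s, ev_steps):
    """Shutter times for EV offsets relative to base_s; each +1 EV
    doubles the exposure time (aperture and ISO held fixed)."""
    return [base_s * (2.0 ** ev) for ev in ev_steps]

# e.g. first camera at 0 EV, two second cameras at -2 EV and +2 EV
times = bracketed_shutter(1 / 100, [-2, 0, +2])
```

Whether the steps come from a user instruction or from cloud-provided presets, the same mapping from EV offsets to shutter times applies.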
In this embodiment, before performing image registration on the second original image with the first original image as the reference image, the method further includes: performing distortion correction on the first original image and the second original image respectively. It is understood that camera lenses introduce distortion due to manufacturing tolerances and assembly process variations, which distorts the original images. Distortion correction is therefore applied to the first original image and the second original image respectively to obtain the corrected images.
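A minimal sketch of what such a correction might look like, assuming a single-coefficient radial distortion model (the patent does not specify the model; `k1` and the nearest-neighbour resampling are illustrative choices):

```python
import numpy as np

def undistort(img, k1):
    """Resample img so that simple radial distortion x_d = x_u*(1 + k1*r^2)
    is undone (nearest-neighbour inverse mapping)."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = (xx - cx) / cx, (yy - cy) / cy                 # normalised coords
    r2 = xn * xn + yn * yn
    xs = np.clip((xn * (1 + k1 * r2)) * cx + cx, 0, w - 1)  # distorted source x
    ys = np.clip((yn * (1 + k1 * r2)) * cy + cy, 0, h - 1)  # distorted source y
    return img[ys.round().astype(int), xs.round().astype(int)]
```

In practice the coefficients would come from a per-camera calibration; with `k1 = 0` the mapping reduces to the identity.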
In this embodiment, the image registration of the second original image with the first original image as the reference image includes: determining the offset direction and offset size of the first original image relative to the second original image; and performing translation compensation on the second original image with the first original image as the reference image. It should be understood that, because there is a certain distance between the first camera and the second camera, the first original image and the second original image may be offset from each other and cannot overlap completely, so image registration is required.
In this embodiment, determining the offset direction and offset size of the first original image relative to the second original image includes: determining them according to the relative positional relationship of the first camera and the second camera. For example, using the horizontal and vertical relationships between the first camera and each second camera, with the first camera as the coordinate origin, the horizontal displacement difference between the first camera and each second camera is obtained; this determines the offset direction and offset size of the first original image relative to each second original image. Finally, with the first original image as the reference image, translation compensation is applied to the second original image according to the determined offset direction and size.
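The translation compensation described above might be sketched as follows, assuming the per-image offset `(dx, dy)` has already been derived from the camera baseline; zero-filling the uncovered border is an illustrative simplification:

```python
import numpy as np

def shift_image(img, dx, dy):
    """Shift img by (dx, dy) pixels so it overlaps the reference image;
    pixels with no source data are left at zero."""
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    xs0, xs1 = max(0, dx), min(w, w + dx)   # destination x-range
    ys0, ys1 = max(0, dy), min(h, h + dy)   # destination y-range
    out[ys0:ys1, xs0:xs1] = img[ys0 - dy:ys1 - dy, xs0 - dx:xs1 - dx]
    return out
```

After this step, corresponding pixels of the compensated second original image and the reference image line up for fusion.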
In this embodiment, determining the offset direction and offset size of the first original image relative to the second original image may instead include: detecting similar feature points of the first original image and the second original image; extracting at least one group of reference feature point pairs between the first original image shot by the first camera and the second original image shot by the second camera; determining a coordinate mapping relationship between the two images according to the reference coordinates of the feature point pairs on each image; and calculating, from this mapping relationship, the image displacement difference vector produced by the distance between the first camera and the second camera. After the displacement difference vector is determined, the second original image is translation-compensated according to the vector with the first original image as the reference image, yielding a set of overlapping original images and reducing the time and difficulty of fusing the first and second original images.
For example, a first original image of the current scene is shot through the first camera and two second original images are shot through two second cameras. The coordinate mapping relationships between the first original image and each of the two second original images are determined, the image displacement difference vectors produced by the camera spacing are calculated from those mappings, and the two second original images are translation-compensated with the first original image as the reference, reducing the time and difficulty of the fusion processing.
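As an illustrative stand-in for the feature-point procedure (phase correlation is a different, global technique, named here plainly: it recovers a pure translation between two same-size images from their FFTs, which suffices when the cameras differ only by a baseline shift):

```python
import numpy as np

def estimate_shift(ref, moving):
    """Return (dy, dx) such that moving is approximately ref rolled by (dy, dx)."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moving)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12           # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h                              # map wrap-around peaks to signed shifts
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

The negated estimate can then be used as the translation-compensation vector for the second original image.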
In this embodiment, the image registration of the second original image with the first original image as the reference image includes: performing pixel point matching with the first original image as the reference image, and determining the pixel point matching relationship between the second original image and the first original image. It should be understood that when there are second original images shot by a plurality of second cameras, pixel matching is performed between each second original image and the first original image; once each second original image has been matched to the first original image, the second original images are implicitly matched to each other as well.
In this embodiment, fusing the first original image and the registered second original image includes: respectively acquiring exposure attribute values of the first original image and the registered second original image, and fusing the two according to those exposure attribute values. It is to be understood that after shooting and registration are complete, the fusion weights of the first original image and the second original image must be calculated. The weight calculation must consider the image content of all images, simultaneously suppressing the highlight areas of overexposed images while preserving their low-brightness regions. The terminal therefore acquires the exposure attribute values of the first and second original images and synthesizes an HDR image from the registered images according to their differing exposure attribute values. Because the second original image is shot by at least one second camera, at least one second original image exists, so the HDR image is synthesized from at least two source images with different exposure levels. This greatly improves detail in both the dark and bright parts of the image: bright areas stay vivid, dark areas retain more detail, and the outline and depth of objects can be distinguished instead of collapsing into a black mass.
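A minimal sketch of exposure-weighted fusion, assuming a Mertens-style "well-exposedness" weight; the patent does not fix the weighting function, so this Gaussian-around-mid-grey choice is an illustrative assumption:

```python
import numpy as np

def fuse(images, sigma=0.2):
    """Fuse same-size images (values on a 0..1 scale) with a weight that
    prefers pixel values near mid-grey, i.e. well-exposed pixels."""
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12   # normalise per pixel
    return (weights * stack).sum(axis=0)
```

Overexposed pixels receive low weight, so the corresponding region is taken mostly from the darker exposures, and vice versa.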
In the embodiment of the invention, a first original image of the current scene is shot through a first camera while a second original image of the current scene is shot through at least one second camera, the exposure parameters of the first and second cameras being different; image registration is performed on the second original image with the first original image as the reference image; and the first original image and the registered second original image are fused to obtain a high dynamic range image of the current scene. By shooting different original images of the current scene simultaneously with at least two cameras and fusing them after registration, this solves the problem that a monocular camera must shoot differently exposed source images multiple times to synthesize an HDR image, which makes HDR shooting slow and time-consuming. The imaging speed of the HDR image is improved, the time spent is reduced, and the user experience is improved.
Second embodiment
For better explanation, this embodiment provides a specific example of the high dynamic range image capturing method. Referring to fig. 4, which is a detailed flowchart of the method according to the second embodiment of the present invention, the method includes:
S401, setting exposure parameters of each camera.
In this embodiment, a plurality of cameras form a camera array to achieve high dynamic range image shooting; it is understood that at least two cameras exist in the camera array. For example, referring to fig. 5, which is a schematic diagram of a basic camera array, there are five cameras, all located in the same shooting plane.
In this embodiment, the exposure parameters of each camera in the camera array are set respectively; the viewing range, angle, and shooting parameters other than exposure of each camera are essentially consistent, or differ only in ways that do not affect imaging. Since an HDR image is synthesized from original images with different exposure parameters, the exposure parameters of the cameras must differ from one another. When one of the cameras serves as the first camera, the remaining cameras serve as second cameras; for example, when camera 5 in fig. 5 is the first camera, cameras 1 to 4 are second cameras.
S402, receiving an instruction, and shooting original images with different exposure parameters at the same time.
In this embodiment, a shooting instruction is obtained and an original image of the current scene is shot through the first camera and at least one second camera. It is to be understood that the shooting instruction may be issued in many forms, such as clicking a shooting icon or shooting key, a Bluetooth-transmitted instruction, voice-recognition shooting, smiley-face-recognition shooting, and the like. In this embodiment, for example, after receiving the shooting instruction, a first original image and four second original images of the current scene are shot simultaneously by the first camera (camera 5) and four second cameras (cameras 1 to 4).
And S403, distortion correction is carried out on each original image.
In this embodiment, the camera lens introduces distortion due to manufacturing tolerances and assembly process variations, which distorts the original images. Distortion correction is therefore performed on the first original image and each second original image respectively to obtain the corrected images.
And S404, carrying out image registration on each second original image by taking the first original image as a reference image.
In this embodiment, similar feature points of the first original image and each second original image are detected, and at least one group of reference feature point pairs is extracted between the first original image shot by the first camera and each second original image shot by each second camera. From the reference coordinates of these pairs on each image, the coordinate mapping relationship between the first original image and each second original image is determined, and the image displacement difference vector produced by the distance between the first camera and each second camera is calculated from the mapping relationship. Each second original image is then translation-compensated according to its vector, with the first original image as the reference image, yielding a set of approximately overlapping original images and reducing the processing time and difficulty of fusion. For example, the first original image of the current scene is shot by the first camera and four second original images are shot by four second cameras; the coordinate mapping relationships between the first original image and the four second original images are determined, the displacement difference vectors are calculated from those mappings, and the four second original images are translation-compensated with the first original image as the reference, reducing the time and difficulty of the fusion processing.
S405, matching pixel points with the first original image as the reference image.

In this embodiment, the image registration of each second original image with the first original image as the reference image includes: performing pixel point matching with the first original image as the reference image and determining the pixel point matching relationship between each second original image and the first original image. For example, when there are four second original images shot by four second cameras, each second original image is matched against the first original image; once all four have been matched to the first original image, the second original images are implicitly matched to each other as well.
S406, the first original image and each registered second original image are fused to obtain a high dynamic range image of the current scene.
In this embodiment, fusing the first original image with the registered second original images includes: respectively acquiring exposure attribute values of the first original image and each registered second original image, and fusing them according to those exposure attribute values. It is to be understood that after shooting and registration are complete, the fusion weights of the first original image and each second original image must be calculated; the weight calculation must consider the image content of all images, simultaneously suppressing the highlight areas of overexposed images while preserving their low-brightness regions. The terminal therefore acquires the exposure attribute values of the first original image and each second original image and synthesizes an HDR image from the registered images according to their differing exposure attribute values. For example, a first original image is shot by the first camera and four second original images by four second cameras; the HDR image is then synthesized from five source images with different exposure levels, greatly improving detail in both the dark and bright parts of the image: bright areas stay vivid, dark areas retain more detail, and the outline and depth of objects can be distinguished instead of collapsing into a black mass.
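The steps S401 to S406 above can be sketched end to end under simplifying assumptions (offsets known in advance, distortion ignored, circular `np.roll` used for registration, and an assumed mid-grey preference weight for fusion; none of these specifics are fixed by the patent):

```python
import numpy as np

def hdr_pipeline(reference, seconds, offsets, sigma=0.2):
    """reference: (H, W) float image on a 0..1 scale; seconds: list of
    (H, W) images; offsets: per-second-image (dy, dx) shift relative to
    the reference. Returns the fused high-dynamic-range result."""
    aligned = [reference]
    for img, (dy, dx) in zip(seconds, offsets):
        # S404/S405: translation compensation toward the reference image
        aligned.append(np.roll(img, (-dy, -dx), axis=(0, 1)))
    stack = np.stack(aligned)
    # S406: well-exposedness weights, normalised per pixel, then blended
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True)
    return (w * stack).sum(axis=0)
```

With one reference image and four shifted second images this reproduces, in miniature, the five-source fusion described for the camera array of fig. 5.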
In the embodiment of the invention, a first original image of the current scene is shot through a first camera while a second original image of the current scene is shot through at least one second camera, the exposure parameters of the first and second cameras being different; image registration is performed on the second original image with the first original image as the reference image; and the first original image and the registered second original image are fused to obtain a high dynamic range image of the current scene. By shooting different original images of the current scene simultaneously with at least two cameras and fusing them after registration, this solves the problem that a monocular camera must shoot differently exposed source images multiple times to synthesize an HDR image, which makes HDR shooting slow and time-consuming. The imaging speed of the HDR image is improved, the time spent is reduced, and the user experience is improved.
Third embodiment
The present embodiment also provides a photographing apparatus, as shown in fig. 6, including:
the shooting module is used for shooting a first original image of a current scene through a first camera and shooting a second original image of the current scene through at least one second camera, and exposure parameters of the first camera and the second camera are different;
the registration module is used for carrying out image registration on the second original image by taking the first original image as a reference image;
and the synthesis module is used for fusing the first original image and the registered second original image to obtain a high dynamic range image of the current scene.
It should be understood that the modules of the shooting device are combined to realize the steps of the high dynamic range image shooting method in the first and second embodiments.
The present embodiment further provides a terminal, as shown in fig. 7, which includes a processor 71, a memory 72 and a communication bus 73, wherein:
the communication bus 73 is used for realizing connection communication between the processor 71 and the memory 72;
the processor 71 is configured to execute one or more programs stored in the memory 72 to implement the steps of the high dynamic range image capturing method in the first and second embodiments described above.
The present embodiment also provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the high dynamic range image capturing method as in the first and second embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. A high dynamic range image capturing method, characterized by comprising:
shooting a first original image of a current scene through a first camera;
simultaneously shooting a second original image of the current scene through at least one second camera, wherein the exposure parameters of the first camera and the second camera are different;
performing image registration on the second original image by taking the first original image as a reference image;
and fusing the first original image and the registered second original image to obtain a high dynamic range image of the current scene.
2. The high dynamic range image capturing method of claim 1, wherein before the capturing of the first original image of the current scene by the first camera, the method further comprises:
setting exposure parameters of the first camera;
and setting exposure parameters of the second camera.
3. The high dynamic range image capturing method according to any one of claims 1 to 2, wherein before the image-registering the second original image with the first original image as a reference image, further comprising:
and respectively carrying out distortion correction on the first original image and the second original image.
4. The high dynamic range image capturing method according to claim 3, wherein the image-registering the second original image with the first original image as a reference image includes:
determining the offset direction and the offset size of the first original image relative to the second original image;
and performing translation compensation on the second original image by taking the first original image as a reference image.
5. The high dynamic range image capturing method according to claim 4, wherein the determining of the shift direction and the shift size of the first original image with respect to the second original image includes:
and determining the offset direction and the offset size of the first original image relative to the second original image according to the relative position relationship between the first camera and the second camera.
6. The high dynamic range image capturing method according to claim 5, wherein the image-registering the second original image with the first original image as a reference image includes:
and taking the first original image as a reference image, performing pixel matching, and determining the pixel matching relationship between the second original image and the first original image.
7. The high dynamic range image capturing method of any one of claims 4 to 5, wherein said fusing the first raw image with the registered second raw image comprises:
respectively acquiring exposure attribute values of the first original image and the registered second original image;
and fusing the first original image and the second original image after registration according to the exposure attribute value.
8. A photographing apparatus, characterized by comprising:
the shooting module is used for shooting a first original image of a current scene through a first camera and shooting a second original image of the current scene through at least one second camera, and exposure parameters of the first camera and the second camera are different;
a registration module for performing image registration on the second original image with the first original image as a reference image;
and the synthesis module is used for fusing the first original image and the registered second original image to obtain a high dynamic range image of the current scene.
9. A terminal, characterized in that the terminal comprises a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the high dynamic range image capturing method as recited in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the high dynamic range image capturing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010883969.3A CN112188082A (en) | 2020-08-28 | 2020-08-28 | High dynamic range image shooting method, shooting device, terminal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112188082A true CN112188082A (en) | 2021-01-05 |
Family
ID=73925238
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010883969.3A Pending CN112188082A (en) | 2020-08-28 | 2020-08-28 | High dynamic range image shooting method, shooting device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112188082A (en) |
Citations (4)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105578023A (en) * | 2015-05-27 | 2016-05-11 | 宇龙计算机通信科技(深圳)有限公司 | Image quick photographing method and device |
EP3200446A1 (en) * | 2015-12-08 | 2017-08-02 | LE Holdings (Beijing) Co., Ltd. | Method and apparatus for generating high dynamic range image |
CN108337449A (en) * | 2018-04-12 | 2018-07-27 | Oppo广东移动通信有限公司 | High dynamic range image acquisition method, device and equipment based on double cameras |
CN110620873A (en) * | 2019-08-06 | 2019-12-27 | RealMe重庆移动通信有限公司 | Device imaging method and device, storage medium and electronic device |
Cited By (8)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113592922A (en) * | 2021-06-09 | 2021-11-02 | 维沃移动通信(杭州)有限公司 | Image registration processing method and device |
CN113612919A (en) * | 2021-06-22 | 2021-11-05 | 北京迈格威科技有限公司 | Image shooting method and device, electronic equipment and computer readable storage medium |
CN113612919B (en) * | 2021-06-22 | 2023-06-30 | 北京迈格威科技有限公司 | Image shooting method, device, electronic equipment and computer readable storage medium |
CN114466134A (en) * | 2021-08-17 | 2022-05-10 | 荣耀终端有限公司 | Method and electronic device for generating HDR image |
CN114143471A (en) * | 2021-11-24 | 2022-03-04 | 深圳传音控股股份有限公司 | Image processing method, system, mobile terminal and computer readable storage medium |
CN114143471B (en) * | 2021-11-24 | 2024-03-29 | 深圳传音控股股份有限公司 | Image processing method, system, mobile terminal and computer readable storage medium |
CN116452481A (en) * | 2023-04-19 | 2023-07-18 | 北京拙河科技有限公司 | Multi-angle combined shooting method and device |
CN119211464A (en) * | 2024-11-28 | 2024-12-27 | 荣耀终端有限公司 | Video processing method, electronic device, storage medium, chip system and computer program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107820014B (en) | 2020-07-21 | Shooting method, mobile terminal and computer storage medium |
CN112188082A (en) | 2021-01-05 | High dynamic range image shooting method, shooting device, terminal and storage medium |
CN107948360B (en) | 2020-11-03 | Shooting method of flexible screen terminal, terminal and computer readable storage medium |
CN110072061B (en) | 2021-02-09 | Interactive shooting method, mobile terminal and storage medium |
CN107105166B (en) | 2020-12-01 | Image photographing method, terminal, and computer-readable storage medium |
CN110086993B (en) | 2021-09-07 | Image processing method, image processing device, mobile terminal and computer readable storage medium |
CN107707821B (en) | 2020-11-06 | Distortion parameter modeling method and device, correction method, terminal and storage medium |
CN107133939A (en) | 2017-09-05 | A kind of picture synthesis method, equipment and computer-readable recording medium |
CN111327840A (en) | 2020-06-23 | Multi-frame special-effect video acquisition method, terminal and computer readable storage medium |
CN108184052A (en) | 2018-06-19 | A kind of method of video record, mobile terminal and computer readable storage medium |
CN112511741A (en) | 2021-03-16 | Image processing method, mobile terminal and computer storage medium |
CN111885307A (en) | 2020-11-03 | Depth-of-field shooting method and device and computer readable storage medium |
CN111866388B (en) | 2022-07-12 | Multiple exposure shooting method, equipment and computer readable storage medium |
CN108900779B (en) | 2020-10-16 | Initial automatic exposure convergence method, mobile terminal and computer-readable storage medium |
CN111447371A (en) | 2020-07-24 | Automatic exposure control method, terminal and computer readable storage medium |
CN107896304B (en) | 2020-05-26 | Image shooting method and device and computer readable storage medium |
CN112135045A (en) | 2020-12-25 | Video processing method, mobile terminal and computer storage medium |
CN109510941B (en) | 2021-08-03 | Shooting processing method and device and computer readable storage medium |
CN108848321B (en) | 2021-03-19 | Exposure optimization method, device and computer-readable storage medium |
CN107395971B (en) | 2020-06-12 | Image acquisition method, image acquisition equipment and computer-readable storage medium |
CN112135060B (en) | 2022-06-10 | Focusing processing method, mobile terminal and computer storage medium |
CN107493431A (en) | 2017-12-19 | A kind of image taking synthetic method, terminal and computer-readable recording medium |
CN112153305A (en) | 2020-12-29 | Camera starting method, mobile terminal and computer storage medium |
CN111787234A (en) | 2020-10-16 | Shooting control method and device and computer readable storage medium |
CN112532838B (en) | 2023-03-07 | Image processing method, mobile terminal and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2021-01-05 | PB01 | Publication | |
2021-01-05 | PB01 | Publication | |
2021-01-22 | SE01 | Entry into force of request for substantive examination | |
2021-01-22 | SE01 | Entry into force of request for substantive examination | |
2022-06-03 | WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20210105 |
2022-06-03 | WD01 | Invention patent application deemed withdrawn after publication |