CN106331510B - A kind of backlight photographic method and mobile terminal - Google Patents
- Tue Oct 15 2019
Info
Publication number
- CN106331510B (application CN201610942908.3A) Authority
- CN
- China
Prior art keywords
- image
- camera
- target
- human face
- brightness
Prior art date
- 2016-10-31
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/71—Circuitry for evaluating the brightness variation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
Embodiments of the invention provide a backlight photographic method and a mobile terminal. The method comprises: when a first camera and a second camera are in a preview state, performing object detection and brightness detection on the photographed scene; when the photographed scene is detected to be a backlight scene containing a human face, controlling the first camera and the second camera to capture images of the photographed scene with a first exposure mode and a second exposure mode, respectively; receiving a photographing instruction input by the user of the mobile terminal; obtaining a first image shot by the first camera according to the first exposure mode; obtaining a second image shot by the second camera according to the second exposure mode; and generating a target image based on the first image and the second image. Embodiments of the invention solve the problems of incorrect colour and unclear portraits that arise in existing methods when shooting under a backlight scene whose subject includes a human face.
Description
Technical field
The present invention relates to the field of communication technology, and more particularly to a backlight photographic method and a mobile terminal.
Background technique
With the popularity of digital cameras and of the various mobile terminals equipped with cameras, shooting digital images has become a commonplace part of people's lives. When shooting, one often encounters situations in which the photographic subject is backlit. Images shot under such conditions usually suffer a marked loss of quality, because detail is lost in the bright highlights or in the shadows of the image.
Currently, the processing method for backlight-scene shooting is to shoot using an HDR (High-Dynamic Range) algorithm. However, if the scene contains particular elements such as a portrait or a human face, the photo synthesized by the HDR algorithm may alter the colour of the portrait. With existing backlight-scene processing algorithms, when such special elements as portraits are present, the processing result can be abnormal, for example showing incorrect colour or an unclear portrait.
Summary of the invention
Embodiments of the present invention provide a backlight photographic method and a mobile terminal, to solve the problems of incorrect colour and unclear portraits that arise in existing methods when shooting under a backlight scene whose subject includes a human face.
In a first aspect, a backlight photographic method is provided, applied to a mobile terminal having a first camera and a second camera, the method comprising:
when the first camera and the second camera are in a preview state, performing object detection and brightness detection on the photographed scene;
when the photographed scene is detected to be a backlight scene containing a human face, controlling the first camera and the second camera to capture images of the photographed scene with a first exposure mode and a second exposure mode, respectively;
receiving a photographing instruction input by the user of the mobile terminal;
obtaining a first image shot by the first camera according to the first exposure mode;
obtaining a second image shot by the second camera according to the second exposure mode;
generating a target image based on the first image and the second image.
In a second aspect, a mobile terminal is provided, comprising a first camera and a second camera, the mobile terminal further comprising:
a preview mode entry module, configured to perform object detection and brightness detection on the photographed scene when the first camera and the second camera are in a preview state;
an exposure parameter acquisition module, configured to, when the photographed scene is detected to be a backlight scene containing a human face, control the first camera and the second camera to capture images of the photographed scene with a first exposure mode and a second exposure mode, respectively;
a photographing instruction receiving module, configured to receive a photographing instruction input by the user of the mobile terminal;
a first image obtaining module, configured to obtain a first image shot by the first camera according to the first exposure mode;
a second image obtaining module, configured to obtain a second image shot by the second camera according to the second exposure mode;
a target image generation module, configured to generate a target image based on the first image and the second image.
In this way, in embodiments of the present invention, at least two cameras of the mobile terminal are controlled to enter a preview mode when shooting. When it is detected in the preview mode that the current photographic subject includes a human face and the current scene is a backlight scene, the at least two cameras capture images with a first exposure mode and a second exposure mode. When a photographing instruction input by the user of the mobile terminal is received, a first image shot by the first camera according to the first exposure mode and a second image shot by the second camera according to the second exposure mode are obtained, and a target image is finally generated from the first image and the second image. Since the first image is exposed for the human face, the face in it is clear; since the second image is exposed for the whole photographed scene, its background is clear overall. The target image generated from the first image and the second image therefore has correct colour and is clear throughout.
The above is merely an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, specific embodiments of the present invention are set forth below.
Detailed description of the invention
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings may be obtained from them without any creative labour.
Fig. 1 is a flowchart of a backlight photographic method according to Embodiment one of the present invention;
Fig. 2 is a flowchart of a backlight photographic method according to Embodiment two of the present invention;
Fig. 3 is a structural block diagram of a mobile terminal according to Embodiment three of the present invention;
Fig. 3a is the first of the structural block diagrams of a mobile terminal according to Embodiment three of the present invention;
Fig. 3b is the second of the structural block diagrams of a mobile terminal according to Embodiment three of the present invention;
Fig. 3c is the third of the structural block diagrams of a mobile terminal according to Embodiment three of the present invention;
Fig. 3d is the fourth of the structural block diagrams of a mobile terminal according to Embodiment three of the present invention;
Fig. 3e is the fifth of the structural block diagrams of a mobile terminal according to Embodiment three of the present invention;
Fig. 3f is the sixth of the structural block diagrams of a mobile terminal according to Embodiment three of the present invention;
Fig. 4 is a block diagram of a mobile terminal according to Embodiment four of the present invention;
Fig. 5 is a structural schematic diagram of a mobile terminal according to Embodiment five of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative labour shall fall within the protection scope of the present invention.
Embodiment one
Referring to Fig. 1, a flowchart of a backlight photographic method provided by Embodiment one of the present invention is shown, applied to a mobile terminal having a first camera and a second camera. The method may specifically include the following steps:
101. When the first camera and the second camera are in a preview state, perform object detection and brightness detection on the photographed scene.
In a concrete implementation, the mobile terminal is equipped with at least two cameras, and multiple images can be obtained synchronously or asynchronously based on the at least two cameras. Some mobile terminals may carry three or more cameras; embodiments of the present invention place no restriction on this. For ease of description, the embodiments of the present invention are mainly illustrated below taking two cameras as an example.
In a concrete application, when taking a picture, the first camera and the second camera of the mobile terminal are controlled to enter a preview mode. In the preview mode, the cameras can be controlled to perform object detection and brightness detection. Here, the object refers to the photographic subject at which the camera is aimed, such as a landscape or a person, while the brightness refers to the brightness of the current shooting conditions.
102. When the photographed scene is detected to be a backlight scene containing a human face, control the first camera and the second camera to capture images of the photographed scene with a first exposure mode and a second exposure mode, respectively.
In embodiments of the present invention, when the photographed scene is detected to be a backlight scene containing a human face, the cameras of the mobile terminal are controlled to capture images according to different exposure modes, so as to collect multiple images in which the clarity of the photographic subject differs.
It will be appreciated that, in practice, the photographic subject of greatest concern to the photographer is the human face, so embodiments of the present invention can be set to perform the corresponding processing when a human face is detected. Preferably, if the photographer pays more attention to other parts of the picture, such as the whole portrait, the corresponding processing can also be performed on the part the photographer attends to; embodiments of the present invention are not limited in this respect.
103. Receive a photographing instruction input by the user of the mobile terminal.
When the user clicks the shooting button on the mobile terminal, or clicks a designated position on the screen, the photographing instruction input by the user can be received on the mobile terminal.
104. Obtain a first image shot by the first camera according to the first exposure mode.
When the user shoots with the mobile terminal, the first camera is controlled to capture an image based on the first exposure mode, obtaining at this time a first image in which the human face is clear.
105. Obtain a second image shot by the second camera according to the second exposure mode.
When the user shoots with the mobile terminal, the second camera can simultaneously be controlled to capture an image based on the second exposure mode, obtaining at this time a second image in which the background is clear but the human face is darker.
106. Generate a target image based on the first image and the second image.
In a concrete implementation, after the first image and the second image are respectively obtained from the first camera and the second camera, the first image and the second image can be synthesized into a target image according to the depth-of-field characteristics of the dual cameras. The target image thus synthesized combines the advantages of the clear face of the first image and the clear background of the second image, so both the face part and the background part of the target image are clear, and the colour is correct.
When the target image has been obtained, the current shot can be ended and the image-capture operation of the next round of exposure parameters entered, so as to carry out the next shot.
In this way, in embodiments of the present invention, at least two cameras of the mobile terminal are controlled to enter a preview mode when shooting. When it is detected in the preview mode that the current photographic subject includes a human face and the current scene is a backlight scene, the at least two cameras capture images with a first exposure mode and a second exposure mode. When a photographing instruction input by the user of the mobile terminal is received, a first image shot by the first camera according to the first exposure mode and a second image shot by the second camera according to the second exposure mode are obtained, and a target image is finally generated from the first image and the second image. Since the first image is exposed for the human face, the face in it is clear; since the second image is exposed for the whole photographed scene, its background is clear overall. The target image generated from the first image and the second image therefore has correct colour and is clear throughout.
Embodiment two
Referring to Fig. 2, a flowchart of a backlight photographic method provided by Embodiment two of the present invention is shown, applied to a mobile terminal having a first camera and a second camera. The method may specifically include the following steps:
201. When the first camera and the second camera are in a preview state, perform object detection and brightness detection on the photographed scene.
The mobile terminal of the embodiment of the present invention enters a preview state when the user takes a picture. In the preview state, at least two cameras of the mobile terminal can be controlled to perform object detection and brightness detection.
202. When the photographed scene is detected to be a backlight scene containing a human face, control the first camera and the second camera to capture images of the photographed scene with a first exposure mode and a second exposure mode, respectively.
In the preview mode, the first camera can meter and expose based on the human face to obtain a first exposure parameter, while the second camera can meter and expose based on the whole current photographic subject to obtain a second exposure parameter.
In a preferred embodiment of the present invention, step 202 may include the following sub-steps:
controlling the first camera, when capturing each frame of image, to meter the light of the image region where the human face is located, determining the first exposure parameter;
controlling the second camera, when capturing each frame of image, to meter the light of the whole image region, determining the second exposure parameter.
In embodiments of the present invention, when the user takes a picture with the mobile terminal, light metering is performed for the current photographic subject and the current scene. If the current photographic subject is detected to include a human face and the current scene is a backlight scene, the first exposure parameter obtained for the human face in the preview mode, and the second exposure parameter for the whole region, are acquired. The first image and the second image are then obtained based on the first exposure parameter and the second exposure parameter, respectively.
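As an illustrative sketch only (not the patent's implementation), the difference between face-region metering and whole-frame metering can be shown with a simple average-luma model; the target luma of 118 and the multiplicative-gain exposure model are assumptions introduced here:

```python
import numpy as np

def meter_region(frame, region=None, target_luma=118.0):
    """Estimate a multiplicative exposure gain from the mean luma of a region.

    frame  : H x W x 3 uint8 RGB preview frame.
    region : (x, y, w, h) rectangle to meter, or None for the whole frame.
    """
    if region is not None:
        x, y, w, h = region
        patch = frame[y:y + h, x:x + w]
    else:
        patch = frame
    # ITU-R BT.601 luma approximation
    luma = 0.299 * patch[..., 0] + 0.587 * patch[..., 1] + 0.114 * patch[..., 2]
    return target_luma / (float(luma.mean()) + 1e-6)

# Synthetic backlit preview: bright background, dark face patch.
frame = np.full((120, 160, 3), 200, dtype=np.uint8)
frame[40:80, 60:100] = 40
gain_face = meter_region(frame, (60, 40, 40, 40))  # first camera: face region
gain_full = meter_region(frame)                    # second camera: whole frame
```

Because the face region of a backlit frame is much darker than the frame as a whole, the face-metered gain comes out higher than the whole-frame gain, which is why the two cameras end up with different exposure parameters.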
203. Receive a photographing instruction input by the user of the mobile terminal.
When the user clicks the shooting button on the mobile terminal, or clicks a designated position on the screen, the photographing instruction input by the user can be received on the mobile terminal.
204. Obtain a first image shot by the first camera according to the first exposure mode.
When the user shoots with the mobile terminal, the first camera is controlled to capture an image based on the first exposure mode, obtaining at this time a first image in which the human face is clear.
205. Obtain a second image shot by the second camera according to the second exposure mode.
When the user shoots with the mobile terminal, the second camera can simultaneously be controlled to capture an image based on the second exposure mode, obtaining at this time a second image in which the background is clear but the human face is darker.
206. Generate a target image based on the first image and the second image.
In embodiments of the present invention, a target image in which both the face part and the background are clear can be obtained by synthesis from the first image and the second image. In a preferred embodiment of the present invention, step 206 may include the following sub-steps:
extracting the human face region image of the first image;
recording the target position of the human face region image in the first image;
replacing the image region at the same position as the target position in the second image with the human face region image, generating the target image.
When the user shoots with the mobile terminal, the camera responds to the user's request and obtains a first image at the focal plane. This first image carries depth-of-field information, the depth of field being the clear region of the subject at the time of shooting.
After the first camera captures the first image in which the face is clear, the depth-of-field information of that image is obtained, where the depth-of-field information may be the distance range within which the face part is clear at the time of shooting. According to this distance range, the clear face part in the first image can be extracted. At the same time, the target position of the face part in the first image is recorded, and the corresponding position in the second image is then replaced based on the target position, obtaining the target image.
Specifically, for the embodiment of the present invention, suppose the first image is image A and the second image is image B; the image of the face part extracted according to the depth-of-field characteristics is then image C. According to the previously recorded target position of image C in image A, image C can be overlaid on the corresponding position of image B, obtaining the target image. At this point, the face part of the resulting image is clear, and the background is also clear.
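A minimal sketch of this overlay step, assuming a rectangular target position for simplicity (the box coordinates, array shapes and helper name are illustrative, not the patent's code):

```python
import numpy as np

def composite_face(face_exposed, background_exposed, face_box):
    """Overlay the face region of the face-exposed image (image A) onto the
    background-exposed image (image B) at the recorded target position."""
    x, y, w, h = face_box
    out = background_exposed.copy()
    out[y:y + h, x:x + w] = face_exposed[y:y + h, x:x + w]
    return out

img_a = np.full((100, 100, 3), 180, dtype=np.uint8)  # face correctly exposed
img_b = np.full((100, 100, 3), 90, dtype=np.uint8)   # background correctly exposed
target = composite_face(img_a, img_b, (30, 30, 40, 40))
```

Inside the recorded box the target takes its pixels from image A; everywhere else it keeps the background-exposed pixels of image B.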
In a preferred embodiment of the present invention, the step of extracting the human face region image of the first image may include the following sub-steps:
identifying a human face from the first image, and marking the human face with a rectangular frame;
obtaining the depth-of-field information of the first image;
extracting from the depth-of-field information the first distance from the central pixel of the rectangular frame to the camera;
extracting from the depth-of-field information the second distances from the other pixels in the rectangular frame, excluding the central pixel, to the camera;
calculating the differences between the second distances and the first distance;
extracting from the first image the pixels whose differences are less than a preset threshold;
taking the extracted pixels as the human face image region.
First, according to the result of face recognition, the coordinates of the human face on the whole two-dimensional image are available (for example, the mobile terminal performs face recognition when taking a picture, and when a face is recognized the face part can be marked with a rectangular frame; this rectangular frame is delimited according to the face coordinates on the first image). This rectangular frame is defined as P(X, Y).
Then, the depth-of-field information of the first image is obtained. Based on the depth-of-field information, the distances of all objects in the first image to the camera are available; the distance D from the central point of the rectangular frame (which must be in the human face region) to the camera can be obtained from the depth-of-field information.
Finally, for all pixels within the rectangular frame coordinates P(X, Y), if the distance of a pixel differs from the distance D by a small amount (for example, less than 5 cm), the pixel in the rectangular frame may be considered to be part of the face, and these pixels can be extracted as the face part. In this way, the face part and the background part can be separated.
It should be noted that, in implementing embodiments of the present invention, other methods may also be used to separate the face or portrait part from the first image; embodiments of the present invention are not limited in this respect.
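Under the stated assumptions (a per-pixel depth map and a face rectangle from face recognition; the 5 cm threshold is the example figure given above), the pixel-selection rule can be sketched as:

```python
import numpy as np

def face_mask_from_depth(depth, face_box, threshold=0.05):
    """Mark pixels inside the face rectangle whose distance to the camera is
    within `threshold` (here metres, i.e. 5 cm) of the rectangle's centre pixel."""
    x, y, w, h = face_box
    cx, cy = x + w // 2, y + h // 2
    d_centre = depth[cy, cx]                      # first distance D
    mask = np.zeros(depth.shape, dtype=bool)
    box = depth[y:y + h, x:x + w]                 # second distances
    mask[y:y + h, x:x + w] = np.abs(box - d_centre) < threshold
    return mask

depth = np.full((60, 60), 3.0)   # background roughly 3 m from the camera
depth[20:40, 20:40] = 0.8        # face roughly 0.8 m from the camera
mask = face_mask_from_depth(depth, (15, 15, 30, 30))
```

Pixels at the face's depth are kept, while background pixels that happen to fall inside the rectangle are rejected, separating the face part from the background part.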
207. Judge whether the brightness of the background region image and of the human face region image in the target image is coordinated.
In a concrete implementation, the synthesized target image may exhibit an image brightness of the face part that is not coordinated with the overall brightness. Therefore, in embodiments of the present invention, whether the face part is coordinated in the target image can be judged according to brightness.
In a preferred embodiment of the present invention, step 207 may include the following sub-steps:
calculating a first average brightness value of the background region image in the target image;
calculating a second average brightness value of the human face region image in the target image;
calculating the difference between the first average brightness value and the second average brightness value;
comparing the difference with a preset threshold;
if the difference is greater than the preset threshold, determining that the brightness of the background region image and of the human face region image in the target image is uncoordinated.
In embodiments of the present invention, the image brightness S of the whole background and the image brightness SF of the face part are detected, where the image brightness S may be the average image brightness of the second image and the image brightness SF may be the average image brightness of the portrait part in the second image. Of course, the image brightness S may also be the average image brightness of the background part of the target image; embodiments of the present invention place no restriction on this.
The difference between the image brightness S and the image brightness SF is calculated, and it is judged from this difference whether S and SF differ by more than a threshold H. If so, the image brightness of the face part and of the target image differ too much, and the image brightness of the target image is uncoordinated; if not, the image brightness of the portrait part and of the target image differ little, and the image brightness of the target image is relatively coordinated.
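The S versus SF comparison described above amounts to a threshold test on mean grey levels; a sketch under assumed array shapes (the value of threshold H is illustrative):

```python
import numpy as np

def brightness_uncoordinated(target, face_box, threshold_h=30.0):
    """Return True when the mean brightness of the face region (SF) and of the
    rest of the image (S) differ by more than threshold H (8-bit grey levels)."""
    x, y, w, h = face_box
    grey = target.astype(float).mean(axis=2)
    face_mask = np.zeros(grey.shape, dtype=bool)
    face_mask[y:y + h, x:x + w] = True
    sf = float(grey[face_mask].mean())        # face-region brightness SF
    s = float(grey[~face_mask].mean())        # background brightness S
    return abs(s - sf) > threshold_h

img = np.full((80, 80, 3), 60, dtype=np.uint8)
img[20:50, 20:50] = 150                       # composited face patch is brighter
uncoordinated = brightness_uncoordinated(img, (20, 20, 30, 30))
```

A larger threshold H tolerates a bigger brightness gap before step 208's adjustment is triggered.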
208. If the brightness of the background region image and of the human face region image in the target image is uncoordinated, adjust the brightness of the human face region image in the target image.
In a concrete implementation, the synthesized target image may exhibit an image brightness of the portrait part that is not coordinated with the overall brightness. If the image brightness is uncoordinated, a further adjustment can be made to the image brightness of the target image.
Specifically, in embodiments of the present invention, if the image brightness of the face part does not match the image brightness of the target image, the image brightness of the face part can be adjusted.
In a preferred embodiment of the present invention, the step of adjusting the brightness of the human face region image in the target image may include the following sub-steps:
obtaining the grey-level histogram information corresponding to the whole image region of the target image;
based on the grey-level histogram information, performing histogram specification processing on the human face region image in the target image.
In embodiments of the present invention, if the image brightness of the target image is uncoordinated, the grey-level histogram information can be obtained and histogram specification processing performed on the face part, so that the face part matches the brightness of the target image. The target image at this point not only has a clear face and a clear background, but its overall image brightness is also coordinated.
Specifically, the detailed process of histogram specification processing for the face part of the target image is as follows: a grey-level histogram A1 is obtained based on the whole image P1 (the target image), and a grey-level histogram A2 is obtained from the image P2 of the face part. If the difference between the average brightness of P1 and P2 exceeds a certain threshold (the aforementioned threshold H), the grey-level histogram information of the target image is used to perform histogram specification processing on the face part, so that the face part of the target image obtained after processing is coordinated. The final image is then saved.
It should be noted that histogram specification is one of the processing means of image processing: the grey-level histogram A2 is adjusted according to the grey-level histogram A1, so that the brightness distribution of the histogram of A2 approaches the brightness distribution of the histogram of A1, thereby bringing the brightness of the image P2 of the face part close to that of the whole image P1. After the processing is complete, a complete image is obtained whose brightness is relatively uniform, with neither over-exposure nor overly dark regions.
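A compact CDF-matching sketch of histogram specification, in the standard formulation (the uniform test patches are illustrative, not from the patent):

```python
import numpy as np

def match_histogram(source, reference):
    """Histogram specification: remap the grey levels of `source` so that its
    cumulative distribution (histogram A2) follows that of `reference` (A1)."""
    src_hist = np.bincount(source.ravel(), minlength=256).astype(float)
    ref_hist = np.bincount(reference.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # Map each source level to the first reference level whose CDF reaches it.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]

face_patch = np.full((40, 40), 40, dtype=np.uint8)    # dark face part P2
whole_image = np.full((40, 40), 160, dtype=np.uint8)  # brighter whole image P1
adjusted = match_histogram(face_patch, whole_image)
```

The look-up table pulls the dark face patch's levels up toward the distribution of the whole image, which is the brightness-coordination effect described above.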
The processed image can be saved as the final image, or presented on the screen of the mobile terminal for the user to view.
It should be noted that the traditional approach to the backlight scene of a portrait uses an HDR algorithm, taking exposures different from those of a general scene and synthesizing three images. However, the HDR algorithm does not process the portrait, especially the face part, separately, so the treatment of the face is poor (for example, the colour is incorrect or unnatural), or the image brightness is darker or uncoordinated. Applying embodiments of the present invention, the face can be processed correspondingly, so that the colour of the face part is normal and natural, a notable effect is obtained, and the image brightness is coordinated.
In addition, applying embodiments of the present invention also improves the shooting performance of the camera to a large extent. Specifically, the traditional HDR algorithm needs to capture three images at different exposures and synthesize them, whereas embodiments of the present invention only need to shoot after detection, without repeatedly setting exposure parameters, and can use the images directly, so the shooting speed increases substantially.
It should be noted that, for simplicity of description, the method embodiments are expressed as a series of action combinations, but those skilled in the art should understand that the embodiments of the present invention are not limited by the described sequence of actions, because according to the embodiments of the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
In embodiments of the present invention, when the first camera and the second camera are in a preview state, object detection and brightness detection are performed on the photographed scene. When the photographed scene is detected to be a backlight scene containing a human face, the first camera and the second camera are controlled to capture images of the photographed scene with a first exposure mode and a second exposure mode, respectively. A photographing instruction input by the user of the mobile terminal is received, a first image shot by the first camera according to the first exposure mode is obtained, a second image shot by the second camera according to the second exposure mode is obtained, and a target image is generated based on the first image and the second image. By judging whether the brightness of the background region image and of the human face region image in the target image is coordinated, and adjusting the brightness of the human face region image in the target image when it is uncoordinated, the overall brightness of the target image is made coordinated, further optimizing the display effect of backlight photography.
Embodiment three
Referring to Fig. 3, a structural block diagram of a mobile terminal 300 according to an embodiment of the present invention is shown. The mobile terminal 300 can implement the details of the backlight photographing method of Embodiment One and Embodiment Two and achieve the same effect. The mobile terminal includes a first camera and a second camera, and the mobile terminal 300 may include a preview mode entry module 301, an exposure parameter acquisition module 302, a photographing instruction receiving module 303, a first image acquisition module 304, a second image acquisition module 305, and a target image generation module 306.
The preview mode entry module 301 is configured to perform object detection and brightness detection on the photographed scene when the first camera and the second camera are in the preview state.
The exposure parameter acquisition module 302 is configured to, when it is detected that the photographed scene is a backlight scene and includes a human face, control the first camera and the second camera to acquire images of the photographed scene with a first exposure mode and a second exposure mode, respectively.
The photographing instruction receiving module 303 is configured to receive a photographing instruction input by the user of the mobile terminal.
The first image acquisition module 304 is configured to obtain a first image captured by the first camera according to the first exposure mode.
The second image acquisition module 305 is configured to obtain a second image captured by the second camera according to the second exposure mode.
The target image generation module 306 is configured to generate a target image based on the first image and the second image.
Referring to Fig. 3a, in the mobile terminal 300 provided in one embodiment of the present invention, the exposure parameter acquisition module 302 includes a first exposure parameter acquisition submodule 3021 and a second exposure parameter acquisition submodule 3022.
Referring to Fig. 3b, in the mobile terminal 300 provided in one embodiment of the present invention, the target image generation module 306 includes a human face region image extraction submodule 3061, a target position recording submodule 3062, and a target image generation submodule 3063.
The human face region image extraction submodule 3061 is configured to extract the human face region image of the first image.
The target position recording submodule 3062 is configured to record the target position of the human face region image in the first image.
The target image generation submodule 3063 is configured to replace the image region at the same position as the target position in the second image with the human face region image, to generate the target image.
Referring to Fig. 3c, in the mobile terminal 300 provided in one embodiment of the present invention, the human face region image extraction submodule 3061 includes a face identification unit 30611, a depth-of-field information acquisition unit 30612, a first distance extraction unit 30613, a second distance extraction unit 30614, a difference calculation unit 30615, a pixel extraction unit 30616, and a facial image region acquisition unit 30617.
The face identification unit 30611 is configured to identify a human face from the first image and mark the face with a rectangular frame.
The depth-of-field information acquisition unit 30612 is configured to obtain the depth-of-field information of the first image.
The first distance extraction unit 30613 is configured to extract, from the depth-of-field information, the first distance from the center pixel of the rectangular frame to the camera.
The second distance extraction unit 30614 is configured to extract, from the depth-of-field information, the second distances from the other pixels in the rectangular frame, other than the center pixel, to the camera.
The difference calculation unit 30615 is configured to calculate the difference between each second distance and the first distance.
The pixel extraction unit 30616 is configured to extract, from the first image, the pixels whose differences are less than a preset threshold.
The facial image region acquisition unit 30617 is configured to use the extracted pixels as the facial image region.
Referring to Fig. 3d, in the mobile terminal 300 provided in one embodiment of the present invention, the mobile terminal further includes an image brightness judgment module 307 and an image brightness adjustment module 308.
The image brightness judgment module 307 is configured to judge whether the brightness of the background region image and the human face region image in the target image is coordinated.
The image brightness adjustment module 308 is configured to adjust the brightness of the human face region image in the target image if the brightness of the background region image and the human face region image in the target image is uncoordinated.
Referring to Fig. 3e, in the mobile terminal 300 provided in one embodiment of the present invention, the image brightness judgment module 307 includes a first average brightness value calculation submodule 3071, a second average brightness value calculation submodule 3072, a difference calculation submodule 3073, a difference comparison submodule 3074, and an uncoordinated determination submodule 3075.
The first average brightness value calculation submodule 3071 is configured to calculate the first average brightness value of the background region image in the target image.
The second average brightness value calculation submodule 3072 is configured to calculate the second average brightness value of the human face region image in the target image.
The difference calculation submodule 3073 is configured to calculate the difference between the first average brightness value and the second average brightness value.
The difference comparison submodule 3074 is configured to compare the difference with a preset threshold.
The uncoordinated determination submodule 3075 is configured to determine that the brightness of the background region image and the human face region image in the target image is uncoordinated if the difference is greater than the preset threshold.
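As an illustrative sketch only (the patent does not supply code), the brightness-coordination judgment performed by submodules 3071 through 3075 can be modeled in a few lines, assuming the face region is described by a boolean mask over the target image:

```python
import numpy as np

def brightness_uncoordinated(target_image, face_mask, threshold):
    """Judge whether face and background brightness are uncoordinated.

    Computes the mean brightness of the background region (the first
    average brightness value) and of the face region (the second average
    brightness value); if their absolute difference exceeds the preset
    threshold, the brightness is judged uncoordinated.
    """
    gray = target_image.mean(axis=2) if target_image.ndim == 3 else target_image
    bg_mean = gray[~face_mask].mean()    # first average brightness value
    face_mean = gray[face_mask].mean()   # second average brightness value
    return abs(bg_mean - face_mean) > threshold

# Typical backlight result: bright background (200) with a dark face patch (40).
img = np.full((4, 4), 200.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
img[mask] = 40.0
uncoordinated = brightness_uncoordinated(img, mask, threshold=50)
```

The mask representation and the specific threshold value are assumptions; the patent only specifies "greater than the preset threshold".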
Referring to Fig. 3f, in the mobile terminal 300 provided in one embodiment of the present invention, the image brightness adjustment module 308 includes a gray histogram information acquisition submodule 3081 and an equalization processing submodule 3082.
The gray histogram information acquisition submodule 3081 is configured to obtain the gray histogram information corresponding to the whole image region of the target image.
The equalization processing submodule 3082 is configured to perform histogram specification processing on the human face region image in the target image based on the gray histogram information.
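One plausible reading of this histogram specification step — offered as a hedged sketch, not the patent's actual implementation — is classic histogram matching: remap the gray levels of the face region so that its cumulative distribution follows that of the whole target image:

```python
import numpy as np

def match_histogram(region, reference):
    """Histogram specification: remap the gray levels of `region` so its
    cumulative histogram follows that of `reference` (here, the whole
    target image), pulling an under-exposed face toward the global
    brightness distribution."""
    r_vals, r_counts = np.unique(region, return_counts=True)
    ref_vals, ref_counts = np.unique(reference, return_counts=True)
    r_cdf = np.cumsum(r_counts) / region.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, pick the reference gray level at that quantile.
    mapped = np.interp(r_cdf, ref_cdf, ref_vals)
    lookup = dict(zip(r_vals, mapped))
    return np.vectorize(lookup.get)(region)

# Dark face region (levels 10-40) specified against a brighter whole image.
face = np.array([[10, 20], [30, 40]], dtype=np.uint8)
whole = np.array([[100, 150], [200, 250]], dtype=np.uint8)
adjusted = match_histogram(face, whole)
```

The toy 2x2 arrays are for demonstration; in practice the reference would be the full target image's gray histogram as described above.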
The mobile terminal 300 can realize each process realized by the mobile terminal in the method embodiments; to avoid repetition, details are not described here again.
In this way, in the embodiment of the present invention, when shooting, at least two cameras of the mobile terminal are controlled to enter the preview mode. When it is detected that the current shooting object includes a portrait and the current shooting scene is a backlight scene, the portrait exposure parameter and the overall exposure parameter acquired in the preview mode are obtained; the first camera shoots based on the portrait exposure parameter to obtain the first image, and the second camera shoots based on the overall exposure parameter to obtain the second image; finally, the first image and the second image are synthesized to obtain the target image. Since the first image is acquired with exposure based on the portrait, the portrait part of the resulting image is clear; and since the second image is acquired with exposure based on the whole photographed object, the background of the resulting image is clear. Therefore, the target image synthesized from the first image and the second image has correct colors and is clear.
Embodiment Four
Fig. 4 is the block diagram of the mobile terminal of another embodiment of the present invention.
The mobile terminal 700 shown in Fig. 4 includes: at least one processor 701, a memory 702, at least one network interface 704, a user interface 703, and a picture shooting assembly 706, where the picture shooting assembly 706 includes at least two cameras. The components in the mobile terminal 700 are coupled through a bus system 705. It can be understood that the bus system 705 is used to realize connection and communication between these components. In addition to a data bus, the bus system 705 further includes a power bus, a control bus, and a status signal bus. However, for clarity of explanation, the various buses are all designated as the bus system 705 in Fig. 4.
The user interface 703 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch-sensitive pad, or a touch screen).
It can be understood that the memory 702 in the embodiment of the present invention may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 702 of the systems and methods described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
In some embodiments, the memory 702 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 7021 and an application program 7022.
The operating system 7021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for realizing various basic services and processing hardware-based tasks. The application program 7022 includes various application programs, such as a media player and a browser, for realizing various application services. A program implementing the method of the embodiment of the present invention may be included in the application program 7022.
In the embodiment of the present invention, by calling the program or instruction stored in the memory 702, specifically, the program or instruction stored in the application program 7022, the processor 701 is configured to: perform object detection and brightness detection on the photographed scene when the first camera and the second camera are in the preview state; when it is detected that the photographed scene is a backlight scene and includes a human face, control the first camera and the second camera to acquire images of the photographed scene with the first exposure mode and the second exposure mode, respectively; receive a photographing instruction input by the user of the mobile terminal; obtain a first image captured by the first camera according to the first exposure mode; obtain a second image captured by the second camera according to the second exposure mode; and generate a target image based on the first image and the second image.
The method disclosed in the embodiments of the present invention may be applied to the processor 701, or implemented by the processor 701. The processor 701 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described in the embodiments of the present invention may be implemented with hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For software implementation, the techniques described in the embodiments of the present invention may be implemented by modules (such as procedures and functions) that perform the functions described in the embodiments of the present invention. The software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or outside the processor.
Optionally, the processor 701 is further configured to: control the first camera to meter light on the image region where the human face is located when acquiring each frame of image, to determine the first exposure parameter; and control the second camera to meter light on the whole image region when acquiring each frame of image, to determine the second exposure parameter.
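A minimal metering sketch, under stated assumptions: the patent does not specify the metering formula, so the version below uses simple average metering toward an assumed mid-gray target of 118 and returns an exposure gain rather than real shutter/ISO values. With a face mask it plays the role of the first exposure parameter; over the whole frame, the second:

```python
import numpy as np

def meter_exposure(frame, region_mask=None, target_level=118):
    """Average-metering sketch: choose an exposure gain that brings the
    metered region's mean brightness to an assumed mid-gray target."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame
    metered = gray[region_mask].mean() if region_mask is not None else gray.mean()
    return target_level / max(metered, 1e-6)     # exposure gain

# Backlit preview frame: dark face (30) on a bright background (220).
frame = np.full((4, 4), 220.0)
face_mask = np.zeros((4, 4), dtype=bool)
face_mask[1:3, 1:3] = True
frame[face_mask] = 30.0
face_gain = meter_exposure(frame, face_mask)     # first exposure parameter
global_gain = meter_exposure(frame)              # second exposure parameter
```

As expected for a backlight scene, face metering demands a much larger gain (brighter exposure) than whole-frame metering.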
Optionally, the processor 701 is further configured to: extract the human face region image of the first image; record the target position of the human face region image in the first image; and replace the image region at the same position as the target position in the second image with the human face region image, to generate the target image.
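As an illustrative sketch (not the patent's implementation), this extract-record-replace step can be modeled with NumPy arrays, assuming the target position is recorded as the top-left corner of the face bounding box:

```python
import numpy as np

def generate_target_image(second_image, face_region, target_pos):
    """Paste the face region extracted from the first image into the
    second image at the recorded target position.

    target_pos is assumed to be (top, left) of the face bounding box;
    both images are H x W x C uint8 arrays.
    """
    top, left = target_pos
    h, w = face_region.shape[:2]
    target = second_image.copy()                  # keep the second image intact
    target[top:top + h, left:left + w] = face_region
    return target

# Tiny demonstration: a 4x4 gray "second image" and a 2x2 bright "face".
second = np.full((4, 4, 3), 50, dtype=np.uint8)
face = np.full((2, 2, 3), 200, dtype=np.uint8)
target = generate_target_image(second, face, (1, 1))
```

A real implementation would paste only the per-pixel face mask (see the depth-based extraction step) rather than a full rectangle, and would blend the seam; the rectangle version is kept for brevity.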
Optionally, the processor 701 is further configured to: identify a human face from the first image and mark the face with a rectangular frame; obtain the depth-of-field information of the first image; extract, from the depth-of-field information, the first distance from the center pixel of the rectangular frame to the camera; extract, from the depth-of-field information, the second distances from the other pixels in the rectangular frame, other than the center pixel, to the camera; calculate the differences between the second distances and the first distance; extract, from the first image, the pixels whose differences are less than the preset threshold; and use the extracted pixels as the facial image region.
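The depth-based segmentation described here can be sketched as follows (hypothetical helper, assuming the depth-of-field information is available as a per-pixel distance map and the rectangle as (top, left, height, width)):

```python
import numpy as np

def extract_face_pixels(depth_map, rect, threshold):
    """Depth-based face segmentation inside a detected rectangle.

    Keeps the pixels whose distance to the camera differs from the
    rectangle's center-pixel distance (the first distance) by less than
    the preset threshold, i.e. the presumed face region.
    Returns a boolean mask with the same shape as depth_map.
    """
    top, left, h, w = rect
    center_dist = depth_map[top + h // 2, left + w // 2]   # first distance
    mask = np.zeros(depth_map.shape, dtype=bool)
    window = depth_map[top:top + h, left:left + w]         # second distances
    mask[top:top + h, left:left + w] = np.abs(window - center_dist) < threshold
    return mask

# Face at ~1.0 m, background at ~3.0 m inside a 6x6 depth map.
depth = np.full((6, 6), 3.0)
depth[1:4, 1:4] = 1.0
mask = extract_face_pixels(depth, (0, 0, 5, 5), 0.5)
```

This works because, in a backlit portrait, the face sits at a roughly constant distance from the camera while the background lies much farther away, so thresholding the distance difference isolates the face pixels.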
Optionally, the processor 701 is further configured to: judge whether the brightness of the background region image and the human face region image in the target image is coordinated; and if the brightness of the background region image and the human face region image in the target image is uncoordinated, adjust the brightness of the human face region image in the target image.
Optionally, the processor 701 is further configured to: calculate the first average brightness value of the background region image in the target image; calculate the second average brightness value of the human face region image in the target image; calculate the difference between the first average brightness value and the second average brightness value; compare the difference with the preset threshold; and if the difference is greater than the preset threshold, determine that the brightness of the background region image and the human face region image in the target image is uncoordinated.
Optionally, the processor 701 is further configured to: obtain the gray histogram information corresponding to the whole image region of the target image; and perform histogram specification processing on the human face region image in the target image based on the gray histogram information.
The mobile terminal 700 can realize each process realized by the mobile terminal in the foregoing embodiments; to avoid repetition, details are not described here again.
In this way, in the embodiment of the present invention, when shooting, at least two cameras of the mobile terminal are controlled to enter the preview mode. When it is detected that the current shooting object includes a human face and the current shooting scene is a backlight scene, the at least two cameras acquire images with the first exposure mode and the second exposure mode. When a photographing instruction input by the user of the mobile terminal is received, the first image captured by the first camera according to the first exposure mode and the second image captured by the second camera according to the second exposure mode are obtained, and the target image is finally generated based on the first image and the second image. Since the first image is acquired with exposure based on the human face, the face in the resulting image is clear; and since the second image is acquired with exposure based on the whole photographed scene, the background in the resulting image is clear. Therefore, the target image generated from the first image and the second image has correct colors and is clear.
Embodiment Five
Fig. 5 is the structural schematic diagram of the mobile terminal of another embodiment of the present invention.
Specifically, the mobile terminal 800 in Fig. 5 may be a mobile phone, a tablet computer, a personal digital assistant (PDA), a vehicle-mounted computer, or the like.
The mobile terminal 800 in Fig. 5 includes a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a processor 860, an audio circuit 870, a WiFi (Wireless Fidelity) module 880, a power supply 890, and a picture shooting assembly 850.
The input unit 830 may be used to receive number or character information input by the user and to generate signal inputs related to user settings and function control of the mobile terminal 800. Specifically, in the embodiment of the present invention, the input unit 830 may include a touch panel 831. The touch panel 831, also referred to as a touch screen, collects touch operations by the user on or near it (for example, operations performed by the user on the touch panel 831 with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connection device according to a preset program. Optionally, the touch panel 831 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 860, and receives and executes commands sent by the processor 860. In addition, the touch panel 831 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 831, the input unit 830 may further include other input devices 832, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, and a joystick.
The display unit 840 may be used to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal 800. The display unit 840 may include a display panel 841; optionally, the display panel 841 may be configured in the form of an LCD, an organic light-emitting diode (OLED), or the like.
It should be noted that the touch panel 831 may cover the display panel 841 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 860 to determine the type of the touch event, and the processor 860 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application program interface display area and a common control display area. The arrangement of these two display areas is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application program interface display area may be used to display the interfaces of application programs. Each interface may contain interface elements such as icons of at least one application program and/or widget desktop controls, or may be an empty interface containing no content. The common control display area is used to display controls with a high usage rate, for example, application program icons such as setting buttons, interface numbers, scroll bars, and phone book icons.
The picture shooting assembly 850 includes a first camera 8051 and a second camera 8052. The first camera 8051 captures the first image according to the first exposure mode, the second camera 8052 captures the second image according to the second exposure mode, and the first image and the second image are sent to the processor 860.
The processor 860 is the control center of the mobile terminal 800. It uses various interfaces and lines to connect the various parts of the whole mobile phone, and performs the various functions of the mobile terminal 800 and processes data by running or executing the software programs and/or modules stored in the first memory 821 and calling the data stored in the second memory 822, thereby monitoring the mobile terminal 800 as a whole. Optionally, the processor 860 may include one or more processing units.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 821 and/or the data stored in the second memory 822, the processor 860 is configured to: perform object detection and brightness detection on the photographed scene when the first camera and the second camera are in the preview state; when it is detected that the photographed scene is a backlight scene and includes a human face, control the first camera and the second camera to acquire images of the photographed scene with the first exposure mode and the second exposure mode, respectively; receive a photographing instruction input by the user of the mobile terminal; obtain a first image captured by the first camera according to the first exposure mode; obtain a second image captured by the second camera according to the second exposure mode; and generate a target image based on the first image and the second image.
Optionally, the processor 860 is further configured to: control the first camera to meter light on the image region where the human face is located when acquiring each frame of image, to determine the first exposure parameter; and control the second camera to meter light on the whole image region when acquiring each frame of image, to determine the second exposure parameter.
Optionally, the processor 860 is further configured to: extract the human face region image of the first image; record the target position of the human face region image in the first image; and replace the image region at the same position as the target position in the second image with the human face region image, to generate the target image.
Optionally, the processor 860 is further configured to: identify a human face from the first image and mark the face with a rectangular frame; obtain the depth-of-field information of the first image; extract, from the depth-of-field information, the first distance from the center pixel of the rectangular frame to the camera; extract, from the depth-of-field information, the second distances from the other pixels in the rectangular frame, other than the center pixel, to the camera; calculate the differences between the second distances and the first distance; extract, from the first image, the pixels whose differences are less than the preset threshold; and use the extracted pixels as the facial image region.
Optionally, the processor 860 is further configured to: judge whether the brightness of the background region image and the human face region image in the target image is coordinated; and if the brightness of the background region image and the human face region image in the target image is uncoordinated, adjust the brightness of the human face region image in the target image.
Optionally, the processor 860 is further configured to: calculate the first average brightness value of the background region image in the target image; calculate the second average brightness value of the human face region image in the target image; calculate the difference between the first average brightness value and the second average brightness value; compare the difference with the preset threshold; and if the difference is greater than the preset threshold, determine that the brightness of the background region image and the human face region image in the target image is uncoordinated.
Optionally, the processor 860 is further configured to: obtain the gray histogram information corresponding to the whole image region of the target image; and perform histogram specification processing on the human face region image in the target image based on the gray histogram information.
Since the mobile terminal 800 embodiment is basically similar to the method embodiments, its description is relatively simple; for related parts, refer to the description of the method embodiments.
In this way, in the embodiment of the present invention, when shooting, at least two cameras of the mobile terminal are controlled to enter the preview mode. When it is detected that the current shooting object includes a human face and the current shooting scene is a backlight scene, the at least two cameras acquire images with the first exposure mode and the second exposure mode. When a photographing instruction input by the user of the mobile terminal is received, the first image captured by the first camera according to the first exposure mode and the second image captured by the second camera according to the second exposure mode are obtained, and the target image is finally generated based on the first image and the second image. Since the first image is acquired with exposure based on the human face, the face in the resulting image is clear; and since the second image is acquired with exposure based on the whole photographed scene, the background in the resulting image is clear. Therefore, the target image generated from the first image and the second image has correct colors and is clear.
Those of ordinary skill in the art may realize that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Professionals may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the embodiments provided in the present application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely exemplary. For example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can easily think of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A backlight photographing method, applied to a mobile terminal having a first camera and a second camera, wherein the method comprises:
when the first camera and the second camera are in a preview state, performing object detection and brightness detection on a photographed scene;
when it is detected that the photographed scene is a backlight scene and includes a human face, controlling the first camera to meter light on the image region where the human face is located when acquiring each frame of image, to determine a first exposure parameter;
controlling the second camera to meter light on the whole image region when acquiring each frame of image, to determine a second exposure parameter;
receiving a photographing instruction input by a user of the mobile terminal;
obtaining a first image captured by the first camera according to the first exposure parameter;
obtaining a second image captured by the second camera according to the second exposure parameter;
extracting a human face region image of the first image;
recording a target position of the human face region image in the first image; and
replacing the image region at the same position as the target position in the second image with the human face region image, to generate a target image.
2. the method according to claim 1, wherein the human face region image for extracting the first image Step, comprising:
Face is identified from the first image, and one rectangle frame of the face is identified;
Obtain the depth of view information of the first image;
The central pixel point of the rectangle frame is extracted from the depth of view information to the first distance of camera;
From other pixels extracted in the depth of view information in the rectangle frame in addition to central pixel point to camera Two distances;
Calculate the difference between the second distance and the first distance;
The pixel that all differences are less than preset threshold is extracted from the first image;
Using the pixel extracted as the facial image region.
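The depth-based segmentation of claim 2 — keeping only the pixels in the face rectangle whose camera distance is close to that of the central pixel — can be sketched as below. All names are illustrative assumptions, not from the patent: `depth_map` stands for the per-pixel depth-of-field information, and `face_box` for the detected rectangular frame.

```python
import numpy as np

def face_region_from_depth(depth_map, face_box, threshold):
    """Return a boolean mask of face-region pixels: those inside the
    rectangle whose distance to the camera differs from the central
    pixel's distance by less than a preset threshold."""
    top, left, h, w = face_box
    rect = depth_map[top:top + h, left:left + w]
    first_distance = rect[h // 2, w // 2]        # central pixel's distance to the camera
    diff = np.abs(rect - first_distance)         # second distance minus first distance
    mask = np.zeros_like(depth_map, dtype=bool)
    mask[top:top + h, left:left + w] = diff < threshold
    return mask                                  # True marks face-region pixels
```

The idea is that the face sits at roughly one depth plane, while backlit background inside the rectangle lies much farther away, so a simple depth difference separates the two.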
3. the method according to claim 1, wherein it is described by second image with the target position phase After the step of replacing with the human face region image with the image-region of position, generate the target image, the method is also Include:
Judge whether background area image and the brightness of the human face region image are coordinated in the target image;
If background area image and the brightness of the human face region image are uncoordinated in the target image, the target is adjusted The brightness of human face region image described in image.
4. The method according to claim 3, characterized in that the step of judging whether the brightness of the background region image and of the human face region image in the target image is coordinated comprises:
calculating a first average brightness value of the background region image in the target image;
calculating a second average brightness value of the human face region image in the target image;
calculating the difference between the first average brightness value and the second average brightness value;
comparing the difference with a preset threshold;
if the difference is greater than the preset threshold, determining that the brightness of the background region image and of the human face region image in the target image is not coordinated.
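The coordination test of claim 4 — comparing the average brightness of the face region against the background and flagging the composite when the gap exceeds a preset threshold — can be sketched as below. The single-channel luminance array and boolean face mask are assumptions for illustration.

```python
import numpy as np

def brightness_coordinated(target_image, face_mask, threshold):
    """Return True when the face/background brightness gap is within the
    preset threshold. target_image is an assumed single-channel luminance
    array; face_mask marks the face-region pixels."""
    background_mean = target_image[~face_mask].mean()  # first average brightness value
    face_mean = target_image[face_mask].mean()         # second average brightness value
    return abs(background_mean - face_mean) <= threshold
```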
5. The method according to claim 3, characterized in that the step of adjusting the brightness of the human face region image in the target image comprises:
obtaining grey-level histogram information corresponding to the entire image region of the target image;
performing, based on the grey-level histogram information, histogram specification processing on the human face region image in the target image.
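Claim 5's adjustment uses histogram specification: remapping the face region's grey levels so that its distribution follows the whole image's grey-level histogram. A classic CDF-matching sketch is below; the function name and uint8 array inputs are assumptions, not the patent's implementation.

```python
import numpy as np

def specify_histogram(face_pixels, reference_pixels, levels=256):
    """Remap face-region grey levels so their histogram follows the
    reference (whole-image) grey-level histogram via CDF matching.
    Both inputs are assumed to be uint8 arrays."""
    # Cumulative distributions of the source (face) and reference (whole image).
    src_hist = np.bincount(face_pixels.ravel(), minlength=levels)
    ref_hist = np.bincount(reference_pixels.ravel(), minlength=levels)
    src_cdf = np.cumsum(src_hist) / face_pixels.size
    ref_cdf = np.cumsum(ref_hist) / reference_pixels.size
    # For each source grey level, pick the reference level with the closest CDF.
    mapping = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1).astype(np.uint8)
    return mapping[face_pixels]
```

Unlike plain histogram equalization, specification pulls the face region toward the background's tonal distribution, which is what makes the composite look coordinated.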
6. A mobile terminal comprising a first camera and a second camera, characterized in that the mobile terminal further comprises:
a preview detection module, configured to perform object detection and brightness detection on the photographed scene while the first camera and the second camera are in a preview state;
an exposure parameter acquisition module, configured to, when the photographed scene is detected to be a backlit scene containing a human face, control the first camera and the second camera to acquire images of the photographed scene with a first exposure parameter and a second exposure parameter respectively;
the exposure parameter acquisition module comprising:
a first exposure parameter acquisition submodule, configured to control the first camera to meter on the image region where the human face is located each time a frame is acquired, thereby determining the first exposure parameter;
a second exposure parameter acquisition submodule, configured to control the second camera to meter on the entire image region each time a frame is acquired, thereby determining the second exposure parameter;
a photographing instruction receiving module, configured to receive a photographing instruction input by a user of the mobile terminal;
a first image acquisition module, configured to obtain a first image shot by the first camera according to the first exposure parameter;
a second image acquisition module, configured to obtain a second image shot by the second camera according to the second exposure parameter;
a target image generation module, configured to generate a target image based on the first image and the second image;
the target image generation module comprising:
a human face region image extraction submodule, configured to extract a human face region image from the first image;
a target position recording submodule, configured to record a target position of the human face region image within the first image;
a target image generation submodule, configured to replace the image region at the same position as the target position in the second image with the human face region image, thereby generating the target image.
7. The mobile terminal according to claim 6, characterized in that the human face region image extraction submodule comprises:
a face marking unit, configured to identify a human face in the first image and mark the human face with a rectangular frame;
a depth-of-field information acquisition unit, configured to obtain depth-of-field information of the first image;
a first distance extraction unit, configured to extract, from the depth-of-field information, a first distance from the central pixel of the rectangular frame to the camera;
a second distance extraction unit, configured to extract, from the depth-of-field information, second distances from the other pixels in the rectangular frame, excluding the central pixel, to the camera;
a difference calculation unit, configured to calculate the difference between each second distance and the first distance;
a pixel extraction unit, configured to extract from the first image the pixels whose differences are less than a preset threshold;
a human face region acquisition unit, configured to take the extracted pixels as the human face region image.
8. The mobile terminal according to claim 6, characterized in that the mobile terminal further comprises:
an image brightness judgment module, configured to judge whether the brightness of the background region image and of the human face region image in the target image is coordinated;
an image brightness adjustment module, configured to adjust the brightness of the human face region image in the target image if the brightness of the background region image and of the human face region image in the target image is not coordinated.
9. The mobile terminal according to claim 8, characterized in that the image brightness judgment module comprises:
a first average brightness value calculation submodule, configured to calculate a first average brightness value of the background region image in the target image;
a second average brightness value calculation submodule, configured to calculate a second average brightness value of the human face region image in the target image;
a difference calculation submodule, configured to calculate the difference between the first average brightness value and the second average brightness value;
a difference comparison submodule, configured to compare the difference with a preset threshold;
an uncoordinated determination submodule, configured to determine, if the difference is greater than the preset threshold, that the brightness of the background region image and of the human face region image in the target image is not coordinated.
10. The mobile terminal according to claim 8, characterized in that the image brightness adjustment module comprises:
a grey-level histogram information acquisition submodule, configured to obtain grey-level histogram information corresponding to the entire image region of the target image;
a histogram specification submodule, configured to perform, based on the grey-level histogram information, histogram specification processing on the human face region image in the target image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610942908.3A CN106331510B (en) | 2016-10-31 | 2016-10-31 | A kind of backlight photographic method and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106331510A CN106331510A (en) | 2017-01-11 |
CN106331510B true CN106331510B (en) | 2019-10-15 |
Family
ID=57818106
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610942908.3A Active CN106331510B (en) | 2016-10-31 | 2016-10-31 | A kind of backlight photographic method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106331510B (en) |
Families Citing this family (40)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN106657798A (en) * | 2017-02-28 | 2017-05-10 | 上海传英信息技术有限公司 | Photographing method for intelligent terminal |
CN106803920B (en) * | 2017-03-17 | 2020-07-10 | 广州视源电子科技股份有限公司 | An image processing method, device and intelligent conference terminal |
CN106851125B (en) * | 2017-03-31 | 2020-10-16 | 努比亚技术有限公司 | Mobile terminal and multiple exposure shooting method |
CN106878625B (en) * | 2017-04-19 | 2019-10-11 | 宇龙计算机通信科技(深圳)有限公司 | The synchronous exposure method of dual camera and system |
CN107360361B (en) * | 2017-06-14 | 2020-07-31 | 中科创达软件科技(深圳)有限公司 | Method and device for shooting people in backlight mode |
CN107241557A (en) * | 2017-06-16 | 2017-10-10 | 广东欧珀移动通信有限公司 | Image exposure method, image exposure device, image pickup apparatus, and storage medium |
CN107358593B (en) * | 2017-06-16 | 2020-06-26 | Oppo广东移动通信有限公司 | Image forming method and apparatus |
CN107220953A (en) | 2017-06-16 | 2017-09-29 | 广东欧珀移动通信有限公司 | image processing method, device and terminal |
CN107370940B (en) * | 2017-06-16 | 2019-08-02 | Oppo广东移动通信有限公司 | Image acquiring method, device and terminal device |
CN107172353B (en) * | 2017-06-16 | 2019-08-20 | Oppo广东移动通信有限公司 | Automatic exposure method and device and computer equipment |
CN107241559B (en) * | 2017-06-16 | 2020-01-10 | Oppo广东移动通信有限公司 | Portrait photographing method and device and camera equipment |
CN107343188A (en) * | 2017-06-16 | 2017-11-10 | 广东欧珀移动通信有限公司 | image processing method, device and terminal |
CN107516289A (en) * | 2017-08-29 | 2017-12-26 | 努比亚技术有限公司 | A kind of certificate image acquisition methods and equipment |
CN107592453B (en) * | 2017-09-08 | 2019-11-05 | 维沃移动通信有限公司 | A kind of image pickup method and mobile terminal |
CN108875485A (en) * | 2017-09-22 | 2018-11-23 | 北京旷视科技有限公司 | A kind of base map input method, apparatus and system |
CN107613221B (en) * | 2017-10-19 | 2020-09-01 | 浪潮金融信息技术有限公司 | Image processing method and device, computer readable storage medium and terminal |
CN107592473A (en) * | 2017-10-31 | 2018-01-16 | 广东欧珀移动通信有限公司 | Exposure parameter adjustment method, device, electronic device and readable storage medium |
CN107995418B (en) * | 2017-11-21 | 2020-07-21 | 维沃移动通信有限公司 | Shooting method and device and mobile terminal |
CN108156366A (en) * | 2017-11-30 | 2018-06-12 | 维沃移动通信有限公司 | A kind of image capturing method and mobile device based on dual camera |
CN108234880B (en) * | 2018-02-02 | 2020-11-24 | 成都西纬科技有限公司 | Image enhancement method and device |
CN108319940A (en) * | 2018-04-12 | 2018-07-24 | Oppo广东移动通信有限公司 | Face recognition processing method, device, equipment and storage medium |
CN110443102B (en) * | 2018-05-04 | 2022-05-24 | 北京眼神科技有限公司 | Living body face detection method and device |
CN108764139B (en) * | 2018-05-29 | 2021-01-29 | Oppo(重庆)智能科技有限公司 | Face detection method, mobile terminal and computer readable storage medium |
TWI684165B (en) * | 2018-07-02 | 2020-02-01 | 華晶科技股份有限公司 | Image processing method and electronic device |
CN109544486A (en) * | 2018-10-18 | 2019-03-29 | 维沃移动通信(杭州)有限公司 | A kind of image processing method and terminal device |
CN109951637B (en) * | 2019-03-19 | 2020-09-11 | 河北川谷信息技术有限公司 | Security monitoring probe analysis processing method based on big data |
CN110136166B (en) * | 2019-04-09 | 2021-04-30 | 深圳锐取信息技术股份有限公司 | Automatic tracking method for multi-channel pictures |
CN110264413B (en) | 2019-05-17 | 2021-08-31 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110166705B (en) * | 2019-06-06 | 2021-04-23 | Oppo广东移动通信有限公司 | High dynamic range HDR image generation method and device, electronic equipment and computer readable storage medium |
CN110677584A (en) * | 2019-09-30 | 2020-01-10 | 联想(北京)有限公司 | Processing method and electronic equipment |
CN110730310A (en) * | 2019-10-18 | 2020-01-24 | 青岛海信移动通信技术股份有限公司 | Camera exposure control method and terminal |
CN113099101B (en) * | 2019-12-23 | 2023-03-24 | 杭州宇泛智能科技有限公司 | Camera shooting parameter adjusting method and device and electronic equipment |
CN111107281B (en) * | 2019-12-30 | 2022-04-12 | 维沃移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and medium |
CN111479059B (en) * | 2020-04-15 | 2021-08-13 | Oppo广东移动通信有限公司 | Photographic processing method, device, electronic device and storage medium |
CN112085686A (en) * | 2020-08-21 | 2020-12-15 | 北京迈格威科技有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
CN112243091B (en) * | 2020-10-16 | 2022-12-16 | 上海微创医疗机器人(集团)股份有限公司 | Three-dimensional endoscope system, control method, and storage medium |
CN112399078B (en) * | 2020-10-30 | 2022-09-02 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN112887612B (en) * | 2021-01-27 | 2022-10-04 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN113411498B (en) * | 2021-06-17 | 2023-04-28 | 深圳传音控股股份有限公司 | Image shooting method, mobile terminal and storage medium |
CN114418914A (en) * | 2022-01-18 | 2022-04-29 | 上海闻泰信息技术有限公司 | Image processing method, device, electronic device and storage medium |
Citations (5)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US8144214B2 (en) * | 2007-04-18 | 2012-03-27 | Panasonic Corporation | Imaging apparatus, imaging method, integrated circuit, and storage medium |
CN104917950A (en) * | 2014-03-10 | 2015-09-16 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN105049726A (en) * | 2015-08-05 | 2015-11-11 | 广东欧珀移动通信有限公司 | Mobile terminal shooting method and mobile terminal |
CN105100491A (en) * | 2015-08-11 | 2015-11-25 | 努比亚技术有限公司 | Device and method for processing photo |
CN105450932A (en) * | 2015-12-31 | 2016-03-30 | 华为技术有限公司 | Backlight photographing method and device |
Family Cites Families (3)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
JP5409829B2 (en) * | 2012-02-17 | 2014-02-05 | キヤノン株式会社 | Image processing apparatus, imaging apparatus, image processing method, and program |
KR101952684B1 (en) * | 2012-08-16 | 2019-02-27 | 엘지전자 주식회사 | Mobile terminal and controlling method therof, and recording medium thereof |
KR102130756B1 (en) * | 2013-12-24 | 2020-07-06 | 한화테크윈 주식회사 | Auto focus adjusting method and auto focus adjusting apparatus |
2016
- 2016-10-31 CN CN201610942908.3A patent/CN106331510B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN106331510A (en) | 2017-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106331510B (en) | 2019-10-15 | A kind of backlight photographic method and mobile terminal |
CN105827964B (en) | 2019-05-17 | A kind of image processing method and mobile terminal |
CN106131449B (en) | 2019-11-29 | A kind of photographic method and mobile terminal |
CN105872148B (en) | 2019-05-17 | A kind of generation method and mobile terminal of high dynamic range images |
CN105827990B (en) | 2018-12-04 | A kind of automatic explosion method and mobile terminal |
CN107205120B (en) | 2019-04-09 | A kind of processing method and mobile terminal of image |
CN105898143B (en) | 2019-05-17 | A kind of grasp shoot method and mobile terminal of moving object |
CN107277387B (en) | 2019-11-05 | High dynamic range images image pickup method, terminal and computer readable storage medium |
CN107231530B (en) | 2019-11-26 | A kind of photographic method and mobile terminal |
CN106254682B (en) | 2019-07-26 | A kind of photographic method and mobile terminal |
CN105847674B (en) | 2019-06-07 | A kind of preview image processing method and mobile terminal based on mobile terminal |
CN106060419B (en) | 2019-05-17 | A kind of photographic method and mobile terminal |
CN105227858B (en) | 2019-03-05 | A kind of image processing method and mobile terminal |
CN106027907B (en) | 2019-08-20 | A kind of method and mobile terminal of adjust automatically camera |
CN107659769B (en) | 2019-07-26 | A kind of image pickup method, first terminal and second terminal |
CN107222680A (en) | 2017-09-29 | The image pickup method and mobile terminal of a kind of panoramic picture |
CN107395976B (en) | 2019-11-19 | A kind of acquisition parameters setting method and mobile terminal |
CN106170058B (en) | 2019-05-17 | A kind of exposure method and mobile terminal |
CN108322646A (en) | 2018-07-24 | Image processing method, device, storage medium and electronic equipment |
CN107592453B (en) | 2019-11-05 | A kind of image pickup method and mobile terminal |
CN107395898A (en) | 2017-11-24 | A kind of image pickup method and mobile terminal |
CN106161967A (en) | 2016-11-23 | A kind of backlight scene panorama shooting method and mobile terminal |
CN107395998A (en) | 2017-11-24 | A kind of image capturing method and mobile terminal |
CN106416222A (en) | 2017-02-15 | Real-time capture exposure adjust gestures |
CN106454085B (en) | 2019-09-27 | A kind of image processing method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2017-01-11 | PB01 | Publication | |
2017-02-08 | C10 | Entry into substantive examination | |
2017-02-08 | SE01 | Entry into force of request for substantive examination | |
2019-10-15 | GR01 | Patent grant | |
2021-12-28 | TR01 | Transfer of patent right |
Effective date of registration: 20211215
Address after: 311121 Room 305, Building 20, Longquan Road, Cangqian Street, Yuhang District, Hangzhou City, Zhejiang Province
Patentee after: VIVO MOBILE COMMUNICATION (HANGZHOU) Co.,Ltd.
Address before: 523860 No. 283 BBK Avenue, Changan Town, Changan, Guangdong
Patentee before: VIVO MOBILE COMMUNICATION Co.,Ltd.