
CN111107281B - Image processing method, image processing apparatus, electronic device, and medium - Google Patents


Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

When an image captured by a single camera is processed, the shooting angle of view is single and limited; the captured image cannot be processed from other viewing angles, so the processing effect is unnatural. For example, a subject in the captured image is distorted by beautification, making the processing effect appear stiff.

To address the above situation, embodiments of the present invention provide an image processing method, apparatus, device, and computer storage medium. The image processing method provided by an embodiment of the present invention is described first below.

Fig. 1 shows a flowchart of an image processing method according to an embodiment of the present invention. The image processing method is applied to an electronic device, and as shown in fig. 1, the image processing method comprises the following steps:

Step 101: the electronic device acquires at least two captured images, where the at least two captured images are obtained by shooting the same shooting scene from different shooting angles of view.

The at least two captured images may be captured images captured by at least two cameras of the electronic device respectively. Alternatively, the at least two captured images may be images captured by the same camera of the electronic device from different capturing perspectives respectively at least twice for the same capturing scene.

The at least two captured images may be photographs that have been captured, or the at least two captured images may be framed views of at least two cameras of the electronic device.

Step 102: the electronic device beautifies a second captured image of the at least two captured images according to image information of a first captured image of the at least two captured images and a preset image beautification processing mode, to obtain a processed second captured image.

The number of the first captured images may be one or at least two, and the second captured image may be any one of the at least two captured images. Alternatively, the second captured image may be an image captured by a predetermined camera of the at least two captured images. For example, the predetermined camera is a main camera.

Optionally, in one or more embodiments of the invention, step 102 comprises: while beautifying the second captured image according to the preset image beautification processing mode, the electronic device beautifies the second captured image according to the image information of the first captured image. The preset image beautification processing mode may be an image beautification mode; for example, it includes a beauty (face-beautification) mode or a filter processing mode.

In the embodiment of the present invention, the second captured image of the at least two captured images is beautified based on the first captured image of the at least two captured images. Because the at least two captured images are taken from different shooting angles of view, the second captured image can be beautified from multiple viewing angles, making its beautification effect more three-dimensional and natural.

The embodiment of the invention is applicable to photographing scenarios. For example, when the electronic device takes photos, each camera's own photo is processed as the primary image, with the photos taken by the other cameras serving as auxiliary images.

The embodiment of the invention is also applicable to video-shooting scenarios. For example, while the electronic device records video, at least two cameras work simultaneously; for each camera, the video it captures is processed as the primary video, with the videos captured by the other cameras serving as auxiliary videos.

In addition, image distortion may occur during image processing such as face slimming or body slimming with a beauty algorithm: pushing contour lines inward generates a predetermined content area, such as a blank area or an area with specific image content. To compensate for this predetermined content area, the scene around the face or body is stretched or otherwise deformed to fill it, which in turn deforms the area around the face or body.

To solve the above technical problem, optionally, in one or more embodiments of the present invention, beautifying a second captured image of the at least two captured images according to image information of a first captured image of the at least two captured images and a preset image beautification processing mode includes:

the electronic device processes first area content in the second captured image into second area content according to the image beautification processing mode;

when a predetermined content area is generated after the first area content is processed into the second area content, the electronic device acquires third area content associated with the first captured image, where the third area content corresponds to the first area content;

and the electronic device repairs the predetermined content area according to the third area content.

For example, the image beautification processing mode is a face slimming processing mode or a body slimming processing mode. Referring to fig. 2, when the face area (i.e., the above-described first area content) in the second captured image 001 is thinned, a blank area 002 is formed between the outline of the face before thinning and the outline of the face after thinning. A first captured image in which the face is captured from the side is acquired, and the blank area 002 is restored from the face area in that first captured image.

In the embodiment of the present invention, when a predetermined content area is generated during the beautification processing of the second captured image, the predetermined content area is restored based on the first captured image, which was captured at a different shooting angle from the second captured image. The content around the first area therefore does not need to be stretched or distorted to repair the predetermined content area, so stretching or distortion of the processed image is avoided and the processing effect is better and more natural.
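As a purely illustrative sketch of this repair step, the Python/OpenCV code below warps the side-view image onto the front view with a homography and blends its pixels into the blank region. The landmark matching, the homography-based alignment, and all function and variable names are assumptions made for the example; the patent does not prescribe this implementation.

```python
import cv2
import numpy as np

def repair_blank_region(second_img, first_img, blank_mask, pts_second, pts_first):
    """Fill the blank area left by face slimming in `second_img` (front view)
    with content from `first_img` (side view of the same scene).

    blank_mask            : uint8 mask, 255 inside the predetermined content area
    pts_second, pts_first : matched 2D keypoints (Nx2 float32), e.g. facial
                            landmarks detected in both views
    """
    # Estimate a homography mapping the side view into the front view's frame.
    H, _ = cv2.findHomography(pts_first, pts_second, cv2.RANSAC, 3.0)

    # Warp the side view so its face region overlays the front view.
    h, w = second_img.shape[:2]
    warped_first = cv2.warpPerspective(first_img, H, (w, h))

    # Blend only the blank area from the warped view; seamlessClone hides seams.
    x, y, bw, bh = cv2.boundingRect(blank_mask)
    center = (x + bw // 2, y + bh // 2)
    return cv2.seamlessClone(warped_first, second_img, blank_mask, center,
                             cv2.NORMAL_CLONE)
```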

Optionally, in one or more embodiments of the present invention, acquiring the third area content associated with the first captured image may be implemented in either of the following two ways.

The first implementation:

Before acquiring the third area content associated with the first captured image, the image processing method further includes:

constructing a three-dimensional stereo image model from the at least two captured images;

and acquiring the third area content associated with the first captured image comprises:

acquiring, in the three-dimensional stereo image model, the third area content corresponding to the first area content. For example, if the first area content is the chin of a face, the chin area content is obtained from the three-dimensional stereo image model and the repair is performed according to the chin area content in the model.

In this implementation, since the third area content corresponding to the first area content is acquired from the three-dimensional stereo image model, the third area content is more stereoscopic. Therefore, when the predetermined content area is repaired using the third area content, the repair of the predetermined content area is also more stereoscopic.
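For illustration, a sparse version of such a model can be obtained by triangulating matched landmarks from the two views, as sketched below; the projection matrices P1 and P2, the landmark correspondences, and the function names are assumptions for the example, since the patent does not specify how the three-dimensional stereo image model is constructed.

```python
import cv2
import numpy as np

def build_sparse_face_model(pts_img1, pts_img2, P1, P2):
    """Triangulate matched landmarks (2xN float32 arrays) from two calibrated
    views (3x4 projection matrices P1, P2) into Nx3 3D points -- a minimal
    stand-in for the patent's three-dimensional stereo image model."""
    pts4d = cv2.triangulatePoints(P1, P2, pts_img1, pts_img2)  # 4xN homogeneous
    return (pts4d[:3] / pts4d[3]).T                            # Nx3 Euclidean

def project_region(pts3d, P):
    """Project a 3D region (e.g. the chin) into a target view so its content
    can be used to repair the predetermined content area there."""
    pts3d_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])     # Nx4 homogeneous
    proj = (P @ pts3d_h.T).T
    return proj[:, :2] / proj[:, 2:3]                          # Nx2 pixel coords
```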

The second implementation:

Acquiring the third area content associated with the first captured image includes: acquiring, in the first captured image, the third area content corresponding to the first area content.

Optionally, in one or more embodiments of the present invention, beautifying a second captured image of the at least two captured images according to image information of a first captured image of the at least two captured images and the preset image beautification processing mode includes:

the electronic device obtains light source direction information at the time the at least two captured images were taken; for example, the light source direction information indicates a direction 40° to the west of a predetermined reference direction;

the electronic device acquires, according to the light source direction information, a backlight area of the second captured image that lies outside the illumination range of the light source;

the electronic device performs at least one of the following on the backlight area: reducing its luminance and deepening its chrominance.

The light source direction information at the time the at least two images were captured may be determined from the variation of luminance values across areas of the captured images. Alternatively, after the three-dimensional stereo image model is constructed from the at least two captured images, the light source direction information is determined from the variation of luminance values across areas of the model.

For example, the luminance value of area A in a captured image is greater than a first luminance threshold, and the luminance value of area B is less than a second luminance threshold. In this case, area A is a region illuminated by the light source and area B is a region not illuminated by it. The light source direction information can therefore be determined from the illuminated and non-illuminated areas.
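One possible way to turn this bright/dark comparison into a direction is sketched below; the centroid-difference heuristic and the concrete threshold values are a simplification of my own, not the patent's stated method.

```python
import cv2
import numpy as np

def estimate_light_direction(img_bgr, t_bright=200, t_dark=60):
    """Return a unit 2D vector pointing from the dark (non-illuminated) area
    toward the bright (illuminated) area, or None if either area is missing."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    ys_b, xs_b = np.nonzero(gray > t_bright)   # area A: lit by the light source
    ys_d, xs_d = np.nonzero(gray < t_dark)     # area B: not lit
    if len(xs_b) == 0 or len(xs_d) == 0:
        return None
    bright_c = np.array([xs_b.mean(), ys_b.mean()])
    dark_c = np.array([xs_d.mean(), ys_d.mean()])
    d = bright_c - dark_c                      # from shadow toward light
    return d / (np.linalg.norm(d) + 1e-9)
```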

The electronic device lowers the luminance of the backlight area and deepens its chrominance, thereby creating a shadow in the backlight area. A color is commonly represented by luminance and chrominance; chrominance is the property of a color excluding luminance and reflects its hue and saturation.

For example, the backlight area is part of the nose, and the electronic device shades that part of the nose, forming a stereoscopic beautification effect.
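A minimal sketch of the shading operation itself, assuming the backlight mask has already been derived from the light source direction, is shown below; performing the adjustment in LAB space with the scale factors given is an illustrative choice, since the patent only requires lowering the luminance and deepening the chrominance.

```python
import cv2
import numpy as np

def shade_backlit_area(img_bgr, backlight_mask, l_scale=0.85, chroma_scale=1.10):
    """Lower luminance and deepen chroma inside `backlight_mask` (uint8, 255 = backlit)."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    m = backlight_mask.astype(bool)

    # Reduce lightness in the backlit region to create a shadow.
    lab[..., 0][m] *= l_scale
    # Push a/b channels away from neutral (128) to deepen the chroma.
    lab[..., 1:][m] = 128 + (lab[..., 1:][m] - 128) * chroma_scale

    lab = np.clip(lab, 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```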

Compared with prior-art beautification by purely digital means, the embodiment of the invention beautifies the face in a captured image according to images taken from different shooting angles, adds shadows to the captured image, and produces a genuinely three-dimensional beautification effect.

Optionally, in one or more embodiments of the present invention, the electronic device has at least two cameras; before acquiring at least two captured images, the image processing method further includes:

the electronic device receives a first input on a shooting preview interface, where the first input may be an input to a shooting control on the shooting preview interface, or a predetermined gesture input;

in response to the first input, the electronic device controls the at least two cameras to shoot respectively, obtaining images captured by each of the at least two cameras.

Before receiving the first input on the shooting preview interface, the image processing method may further include: with the camera started, the electronic device receives an input selecting a beauty shooting mode, and in response to that input, activates at least two cameras of the electronic device and displays the shooting preview interface in the beauty shooting mode.

In the embodiment of the invention, at least two cameras of the electronic device can be controlled to shoot simultaneously, so that at least two images of the same shooting scene are obtained, which effectively improves shooting efficiency.
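For illustration only, the sketch below triggers all cameras on a single input; the per-camera `capture()` method is a placeholder, since the patent does not name a particular camera API.

```python
from concurrent.futures import ThreadPoolExecutor

def shoot_all(cameras):
    """cameras: list of objects exposing a blocking capture() -> image method.
    Returns one captured image per camera, triggered concurrently."""
    with ThreadPoolExecutor(max_workers=len(cameras)) as pool:
        futures = [pool.submit(cam.capture) for cam in cameras]
        return [f.result() for f in futures]
```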

As one example, activating at least two cameras of the electronic device includes: activating the at least two cameras while the flexible or foldable screen of the electronic device is being unfolded. The at least two cameras may include cameras disposed at both ends of the electronic device and a camera disposed at its center.

As another example, activating at least two cameras of the electronic device includes: when the electronic device has retractable cameras, controlling the cameras hidden inside the electronic device to pop up from the device body and activating them. For example, cameras pop up on the left and right sides of the electronic device.

Optionally, in one or more embodiments of the present invention, beautifying a second captured image of the at least two captured images according to image information of a first captured image of the at least two captured images and the preset image beautification processing mode includes:

when the first captured image and the second captured image both include a target face and the camera that captured the first captured image is the camera closest to the target face, processing the target face in the second captured image according to the target face in the first captured image and the preset image beautification processing mode.

The at least two captured images may be frames captured at the same point in time in at least two videos. While shooting video, the electronic device may detect the distance between each camera and the captured face through a distance sensor.

For example, three cameras A, B and C of the electronic device respectively capture videos, and the captured videos include a first video captured by camera a, a second video captured by camera B, and a third video captured by camera C. The first video, the second video and the third video comprise the face of the first user, the camera closest to the face of the first user is a camera B, and then the face of the first user in the second video shot by the camera B is used for beautifying the face of the first user in the first video and the face of the first user in the third video. The videos respectively shot by the three cameras A, B and C may respectively include faces of a plurality of users, and the scheme of the embodiment of the present invention may be adopted to perform face beautifying processing for the face of each user.

In the embodiment of the invention, because the camera closest to the target face captures more detail of the target face, beautifying the image containing the target face according to the image captured by that closest camera yields a better beautification effect for the target face.
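Selecting the reference camera for a face then reduces to picking the camera with the smallest measured distance, as in the illustrative sketch below (the camera ids and distance values are hypothetical).

```python
from typing import Dict

def pick_reference_camera(face_distances: Dict[str, float]) -> str:
    """face_distances maps camera id -> measured distance to the target face.
    The closest camera supplies the reference image whose face detail drives
    beautification of the other cameras' frames."""
    return min(face_distances, key=face_distances.get)

# Example with three cameras A, B, C as in the text:
ref = pick_reference_camera({"A": 0.9, "B": 0.4, "C": 1.3})  # -> "B"
```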

Optionally, in one or more embodiments of the present invention, the image processing method further includes:

the electronic equipment receives a second input for selecting the first camera from the at least two cameras;

and the electronic equipment responds to the second input and displays a framing picture of the first camera in the shooting preview interface.

In the embodiment of the present invention, the user may select the first camera, and the first camera may be any one of the at least two cameras. Therefore, the picture displayed in the shooting preview interface by the electronic equipment is the framing picture of the first camera, and the shooting preview requirement of the user is met. In addition, if the user does not select a camera, a finder screen of a predetermined camera is displayed in the photographing preview interface.

Optionally, in one or more embodiments of the present invention, the screen displayed in the shooting preview interface is a finder screen of a camera closest to the target photographic subject.

For example, in photo mode or while recording video, the electronic device displays the shooting preview interface. If the camera closest to the target photographic subject switches from camera A to camera B, the picture displayed in the shooting preview interface switches from the viewfinder picture of camera A to that of camera B.

In the embodiment of the invention, because the camera closest to the target photographic subject can capture it most clearly, displaying that camera's viewfinder picture in the shooting preview interface presents the picture with the best framing of the target subject and makes it convenient for the user to preview the shooting effect.

Optionally, in one or more embodiments of the present invention, before controlling at least two cameras to respectively perform shooting, the image processing method further includes:

acquiring brightness information of at least two shooting objects in a shooting preview interface under the condition that a shooting scene is a backlight shooting scene and the current time is within a preset time range; for example, the predetermined time range may be a sunrise time range and/or a sunset time range;

determining an exposure object corresponding to each camera from the at least two shot objects according to the brightness information of the at least two shot objects;

wherein controlling the at least two cameras to shoot respectively includes:

and for each camera, exposing an exposure object corresponding to the camera in a framing picture of the camera, and controlling the camera to shoot.

As one example, whether the current shooting scene is a backlit shooting scene may optionally be detected through High-Dynamic Range (HDR) detection.

As an example, optionally, determining an exposure object corresponding to each camera from the at least two photographic objects according to the brightness information of the at least two photographic objects includes:

dividing the at least two photographic subjects into at least two groups according to their brightness information, where each group contains at least one subject and the number of groups equals the number of cameras; each camera is assigned one group of photographic subjects.

For example, the brightness information includes a brightness value, and the electronic device identifies the photographic subjects in the shooting preview interface as a person, a beach, sea water, a blue sky, and the sun. In the shooting preview interface, the brightness value of the person falls in a first preset range, the brightness values of the beach, the sea water, and the blue sky fall in a second preset range, and the brightness value of the sun falls in a third preset range. Accordingly, the person is classified into a first group of subjects, the beach, sea water, and blue sky into a second group, and the sun into a third group. Camera A corresponds to the first group, camera B to the second group, and camera C to the third group.

In this case, when the camera a performs shooting, a person in a finder screen of the camera a is exposed and shot; when the camera B shoots, the beach, the seawater and the blue sky in a view-finding picture of the camera B are exposed and shot; when the camera C performs shooting, the sun in the finder screen of the camera C is exposed and shot.
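The grouping described above can be sketched as follows; the brightness bands and the example luminance values are illustrative, the only constraint taken from the text being that the number of groups equals the number of cameras.

```python
def assign_exposure_groups(subject_luma, cameras, bands):
    """subject_luma: dict subject -> mean luminance (0-255)
    cameras: list of camera ids, one per band
    bands: list of (low, high) luminance ranges, same length as cameras"""
    assert len(cameras) == len(bands)
    assignment = {cam: [] for cam in cameras}
    for subject, y in subject_luma.items():
        for cam, (lo, hi) in zip(cameras, bands):
            if lo <= y < hi:
                assignment[cam].append(subject)  # this camera exposes for it
                break
    return assignment

groups = assign_exposure_groups(
    {"person": 60, "beach": 150, "sea": 160, "sky": 170, "sun": 240},
    cameras=["A", "B", "C"],
    bands=[(0, 100), (100, 200), (200, 256)],
)
# -> {"A": ["person"], "B": ["beach", "sea", "sky"], "C": ["sun"]}
```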

As another example, optionally, determining an exposure object corresponding to each camera from the at least two photographic objects according to the brightness information of the at least two photographic objects includes:

in the case where the number of the at least two photographic subjects is the same as the number of the cameras and the luminance information of the at least two photographic subjects is different from each other, one exposure subject is assigned to each camera among the at least two cameras.

The at least two photographic subjects include the target photographic subject. Subjects are assigned to cameras in order: the cameras, ordered from nearest to farthest from the target subject, are paired with the subjects, ordered from lowest to highest brightness value, and the subject assigned to a camera is that camera's exposure subject. No exposure subject is shared between the at least two cameras.

For example, three photographic subjects with different brightness values in the photographic preview interface are obtained, and the three photographic subjects are a person, seawater and the sun respectively. Since the person is the target photographic subject, the exposure subject of the camera B closest to the person is the person. The exposure object of the camera a next to the person is seawater. The exposure object of the camera C farthest from the person is the sun.

In this case, when the camera B performs shooting, a person in a finder screen of the camera B is exposed and shot; when the camera A shoots, exposing the seawater in a view-finding picture of the camera A and shooting; when the camera C performs shooting, the sun in the finder screen of the camera C is exposed and shot.
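This one-subject-per-camera assignment amounts to pairing the distance ordering with the brightness ordering, as in the sketch below; the numeric distances and luminance values are made up to reproduce the example.

```python
def pair_cameras_to_subjects(camera_distance, subject_luma):
    """Pair cameras ordered near -> far from the target subject with subjects
    ordered dark -> bright; each camera gets exactly one exposure subject."""
    cams = sorted(camera_distance, key=camera_distance.get)
    subjects = sorted(subject_luma, key=subject_luma.get)
    return dict(zip(cams, subjects))

pairs = pair_cameras_to_subjects(
    {"A": 0.8, "B": 0.4, "C": 1.5},           # B is closest to the person
    {"person": 60, "sea": 150, "sun": 240},
)
# -> {"B": "person", "A": "sea", "C": "sun"}
```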

In the embodiment of the invention, different cameras expose for different photographic subjects, so each camera can be exposed in a targeted manner; the cameras thus achieve distinct shooting effects and capture images with different effects.

Optionally, in one or more embodiments of the present invention, after the at least two cameras respectively perform shooting, the shot images of the at least two cameras are synthesized to obtain a synthesized image.

In this case, if the images captured by the at least two cameras include a human face, the synthesized image can, in addition to adjusting the chrominance of the face region, form a softer picture with a lower color temperature. The face and the environment can thus be handled separately in sunrise and sunset scenes, and such scenes can be optimized.
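The patent does not specify how the differently exposed shots are merged; as one plausible sketch, OpenCV's Mertens exposure fusion combines them into a single balanced image.

```python
import cv2
import numpy as np

def fuse_exposures(images_bgr):
    """images_bgr: list of uint8 BGR frames of identical size, each exposed for
    a different subject (person / scenery / sun). Returns one fused uint8 image."""
    fused = cv2.createMergeMertens().process(images_bgr)  # float output ~[0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```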

Optionally, in one or more embodiments of the present invention, the image processing method further includes:

while displaying the at least two captured images, the electronic device receives a third input selecting at least one of the captured images;

the electronic device saves the selected at least one captured image in response to the third input.

For example, when the electronic device takes multiple photos from different angles, the user can select the preferred photos to store on the electronic device, and the unselected photos are deleted. The user therefore does not need to know in advance which angle suits them best; the actual pictures are presented for the user to choose from.
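The keep-selected / discard-the-rest behaviour amounts to simple file handling, sketched below with hypothetical paths; whether unselected shots are removed immediately is an implementation choice, not something the patent mandates.

```python
from pathlib import Path
import shutil

def keep_selected(all_shots, selected, save_dir="DCIM/beauty"):
    """Persist the photos the user selected and delete the unselected ones."""
    Path(save_dir).mkdir(parents=True, exist_ok=True)
    for shot in all_shots:
        if shot in selected:
            shutil.copy2(shot, save_dir)            # keep the chosen photo
        else:
            Path(shot).unlink(missing_ok=True)      # discard the rest
```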

Fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. The image processing apparatus is applied to an electronic device, and as shown in fig. 3, the image processing apparatus 200 includes:

the image acquisition module 201, configured to acquire at least two captured images, where the at least two captured images are images obtained by capturing the same shooting scene from different capturing perspectives;

the image processing module 202, configured to perform beautification processing on a second captured image of the at least two captured images according to image information of a first captured image of the at least two captured images and a preset image beautification processing mode, to obtain a processed second captured image.

In the embodiment of the present invention, the second captured image of the at least two captured images is beautified based on the first captured image of the at least two captured images. Because the at least two captured images are taken from different shooting angles of view, the second captured image can be beautified from multiple viewing angles, making its beautification effect more three-dimensional and natural.

Optionally, in one or more embodiments of the present invention, the image processing module 202 includes:

the area processing module is used for processing the first area content in the second shot image into second area content according to an image beautification processing mode;

the area acquisition module is used for acquiring third area content related to the first shot image under the condition that a preset content area is generated after the first area content is processed into second area content, wherein the third area content corresponds to the first area content;

and the area repairing module is used for repairing the preset content area according to the third area content.

Optionally, the image processing apparatus 200 further includes:

the image model building module is used for building a three-dimensional image model according to at least two shot images;

the area acquisition module includes:

and the model area acquisition module is used for acquiring third area content corresponding to the first area content in the three-dimensional stereo image model.

Optionally, the image processing module 202 comprises:

the light source direction acquisition module is used for acquiring light source direction information when at least two shot images are shot;

the backlight area acquisition module is used for acquiring a backlight area which is not in the illumination range of the light source in the second shot image according to the light source direction information;

the backlight area processing module is used for performing at least one of the following processing on the backlight area: reducing the luminance and deepening the chrominance.

Optionally, in one or more embodiments of the invention, the electronic device has at least two cameras;

the image processing apparatus 200 further includes:

the first input receiving module is used for receiving first input of a shooting preview interface;

and the first input response module is used for responding to the first input and controlling the at least two cameras to shoot respectively to obtain shot images shot by the at least two cameras respectively.

Optionally, in one or more embodiments of the present invention, the image processing module 202 includes:

and the face processing module is used for processing the target face in the second captured image according to the target face in the first captured image and the preset image beautification processing mode when the first captured image and the second captured image both include the target face and the camera that captured the first captured image is the camera closest to the target face.

Optionally, in one or more embodiments of the present invention, the image processing apparatus 200 further includes:

the second input receiving module is used for receiving a second input for selecting the first camera from the at least two cameras;

and the second input response module is used for responding to a second input and displaying a framing picture of the first camera in the shooting preview interface.

Optionally, in one or more embodiments of the present invention, the screen displayed in the shooting preview interface is a finder screen of a camera closest to the target photographic subject.

Optionally, in one or more embodiments of the present invention, the image processing apparatus 200 further includes:

the device comprises a brightness value acquisition module, a brightness value acquisition module and a brightness value display module, wherein the brightness value acquisition module is used for acquiring the brightness information of at least two shooting objects in a shooting preview interface under the condition that a shooting scene is a backlight shooting scene and the current time is within a preset time range;

the exposure object determining module is used for determining an exposure object corresponding to each camera from the at least two shot objects according to the brightness information of the at least two shot objects;

wherein the first input response module comprises:

and the camera shooting control module is used for, for each camera, exposing for the exposure subject corresponding to that camera in its viewfinder picture and controlling the camera to shoot.

Fig. 4 shows a schematic diagram of a hardware structure of an electronic device 300 according to an embodiment of the present invention. The electronic device 300 includes, but is not limited to: a radio frequency unit 301, a network module 302, an audio output unit 303, an input unit 304, a sensor 305, a display unit 306, a user input unit 307, an interface unit 308, a memory 309, a processor 310, and a power supply 311. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, some components may be combined, or the components may be arranged differently. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.

The processor 310 is configured to acquire at least two captured images, where the at least two captured images are images captured from different capturing angles of the same capturing scene; and to beautify a second captured image of the at least two captured images according to the image information of a first captured image of the at least two captured images and a preset image beautification processing mode, to obtain the processed second captured image.

In the embodiment of the present invention, the second captured image of the at least two captured images is beautified based on the first captured image of the at least two captured images. Because the at least two captured images are taken from different shooting angles of view, the second captured image can be beautified from multiple viewing angles, making its beautification effect more three-dimensional and natural.

It should be understood that, in the embodiment of the present invention, the radio frequency unit 301 may be used for receiving and sending signals during message transmission/reception or a call; specifically, it receives downlink data from a base station and delivers it to the processor 310 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 301 can also communicate with a network and other devices through a wireless communication system.

The electronic device provides wireless broadband internet access to the user via the network module 302, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.

The audio output unit 303 may convert audio data received by the radio frequency unit 301 or the network module 302, or stored in the memory 309, into an audio signal and output it as sound. The audio output unit 303 may also provide audio output related to a specific function performed by the electronic device 300 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 303 includes a speaker, a buzzer, a receiver, and the like.

The input unit 304 is used to receive audio or video signals. The input unit 304 may include a graphics processing unit (GPU) 3041 and a microphone 3042. The graphics processor 3041 processes image data of still pictures or video obtained by an image capturing apparatus (e.g., a camera) in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 306. The image frames processed by the graphics processor 3041 may be stored in the memory 309 (or other storage medium) or transmitted via the radio frequency unit 301 or the network module 302. The microphone 3042 may receive sounds and process them into audio data; in the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 301 and output.

The electronic device 300 also includes at least one sensor 305, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 3061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 3061 and/or the backlight when the electronic device 300 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as landscape/portrait switching, related games, magnetometer posture calibration) and vibration-recognition functions (such as pedometer, tapping); the sensor 305 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.

The display unit 306 is used to display information input by the user or information provided to the user. The display unit 306 may include a display panel 3061, and the display panel 3061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.

The user input unit 307 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 307 includes a touch panel 3071 and other input devices 3072. The touch panel 3071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 3071 using a finger, a stylus, or any suitable object or attachment). The touch panel 3071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 310, and receives and executes commands sent by the processor 310. In addition, the touch panel 3071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 3071, the user input unit 307 may include other input devices 3072. Specifically, the other input devices 3072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein.

Further, the touch panel 3071 may be overlaid on the display panel 3061. When the touch panel 3071 detects a touch operation on or near it, the operation is transmitted to the processor 310 to determine the type of the touch event, and the processor 310 then provides a corresponding visual output on the display panel 3061 according to the type of the touch event. Although in fig. 4 the touch panel 3071 and the display panel 3061 are shown as two separate components implementing the input and output functions of the electronic device, in some embodiments the touch panel 3071 and the display panel 3061 may be integrated to implement the input and output functions, which is not limited herein.

The interface unit 308 is an interface for connecting an external device to the electronic device 300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 308 may be used to receive input (e.g., data information, power, etc.) from the external device and transmit the received input to one or more elements within the electronic device 300, or may be used to transmit data between the electronic device 300 and the external device.

The memory 309 may be used to store software programs as well as various data. The memory 309 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phonebook), and the like. Further, the memory 309 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.

The processor 310 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 309 and calling data stored in the memory 309, thereby performing overall monitoring of the electronic device.

The processor 310 may include one or more processing units; optionally, the processor 310 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 310.

The electronic device 300 may further include a power supply 311 (such as a battery) for supplying power to each component. Optionally, the power supply 311 may be logically connected to the processor 310 through a power management system, so as to implement functions such as charging, discharging, and power consumption management through the power management system.

In addition, the electronic device 300 includes some functional modules that are not shown, which are not described in detail herein.

An embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the image processing method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.

The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.

While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.