CN118450234B - Image generation method, medium and electronic device

Disclosure of Invention

In view of this, the present application provides an image generation method, medium, and electronic apparatus.

In a first aspect, an image generation method is provided. The method includes: first acquiring first shooting parameters used by an image sensor of a shooting device to acquire a first image signal, then determining that the first shooting parameters correspond to a first image format, and sending a first output instruction to the image sensor, where the first output instruction instructs the image sensor to output the acquired first image signal as first image data having the first image format. Correspondingly, when the image sensor of the shooting device acquires a second image signal using second shooting parameters, it is determined that the second shooting parameters correspond to a second image format, and a second output instruction is sent to the image sensor, where the second output instruction instructs the image sensor to output the acquired second image signal as second image data having the second image format.

In the above scheme, the electronic device may control the camera to output image data in different formats according to the current shooting parameters. For example, the first shooting parameter and the second shooting parameter may be different shooting parameters, so that different image formats can be determined for different scenes and the image with the best effect in the current scene can be generated. This improves the overall quality of the image, spares the user the time spent debugging image parameters, and improves the user's shooting experience.

With reference to the first aspect, in some implementations, the first photographing parameter and the second photographing parameter include one or more of a photographing mode, a zoom factor, and an ambient light parameter.

In the above scheme, the electronic device may further determine the corresponding shooting environment according to the current shooting parameters, for example the shooting mode, and determine the requirements of the shooting environment for definition and/or photosensitivity according to the zoom factor and the ambient light parameter, so as to control the camera to output image data in different formats. In this way, images with the best effect can be generated in different scenes, image quality is improved, the user is spared the time spent debugging image parameters, and the photographing experience is improved. In some implementations, the shooting parameters may also include an exposure rate set by the user, whether a filter is added, and so on.

In combination with the first aspect, in some implementations, the first shooting parameter includes a shooting mode that is a normal mode or a live mode, the second shooting parameter includes a shooting mode that is a portrait mode or a large aperture mode, and the definition of the first image format is lower than the definition of the second image format, or the first shooting parameter includes a shooting mode that is a video mode, and the second shooting parameter includes a shooting mode that is a normal mode, a live mode, a portrait mode, or a large aperture mode, and the definition of the first image format is lower than the definition of the second image format.

In the above scheme, the electronic device may determine the corresponding image data format according to the photographing mode. For example, when the shooting mode is "normal" or "live", it may be determined to output an image with higher photosensitivity and a certain sharpness, such as a bayer image; this reduces the phenomenon of uneven brightness in the image while ensuring a certain sharpness. When the shooting mode is the "portrait" or "large aperture" mode, it may be determined to output an image with higher definition, such as a four-in-one bayer image, to ensure that the output image has higher definition.

With reference to the first aspect, in some implementations, the zoom factor included in the first shooting parameter is smaller than the zoom factor included in the second shooting parameter, and the number of pixels in the first image format is smaller than the number of pixels in the second image format.

In the above-described scheme, for a scene where zooming exists, it is necessary to output full-size images, such as a four-in-one bayer image and a bayer image, so as to be able to complete digital zooming, and thus to ensure a certain sharpness after digital zooming.

With reference to the first aspect, in some implementations, the first shooting parameter includes an ambient light parameter that is smaller than an ambient light parameter included in the second shooting parameter, and the brightness contrast parameter of the first image format is higher than the brightness contrast parameter of the second image format.

In the above-described scheme, for an environment with a darker brightness level, it may be determined to output an image with better brightness rendering capability, such as a pixel combination image. For a medium-high brightness environment, it may be determined to output a bayer image, a four-in-one bayer image, or the like. This ensures that object contours are better presented in the image in darker scenes.

With reference to the first aspect, in some implementations: the first image format is a bayer image format and the second image format is a four-in-one bayer image format or a pixel-binning image format; or the first image format is a four-in-one bayer image format and the second image format is a bayer image format or a pixel-binning image format; or the first image format is a pixel-binning image format and the second image format is a bayer image format or a four-in-one bayer image format.

In this scheme, the four-in-one bayer image and the bayer image are both full size and contain more pixels, so they are better suited to digital zoom scenes, and the definition of the four-in-one bayer image is higher than that of the bayer image. The pixel-binning image has better photosensitivity, but contains fewer pixels, so it is not suitable for a zoom scene.

In combination with the first aspect, in some implementations, an image format configuration table is obtained, where the image format configuration table includes correspondences among shooting modes, capability values, and image formats. A first capability value corresponding to the first shooting mode in the first shooting parameters is determined based on the image format configuration table. When the first capability value is a first preset value, the first image format is determined to be the image format corresponding to the first shooting mode in the image format configuration table; when the first capability value is not the first preset value, the first image format is determined to be a preset image format.

In the above scheme, if the corresponding capability value is a preset value, it indicates that the corresponding image data format needs to be selected according to the image format configuration table. If the corresponding capability value is not a preset value, the image may be output in a default image data format. The capability values for "normal", "live", "portrait", "large aperture", etc. scenes, for example, may be preset values. The electronic equipment can determine the corresponding shooting environment according to the current shooting parameters, and further control the camera to output image data in different formats. The method has the advantages that images with the best effect can be generated under different scenes, the image quality is improved, the time spent by a user for debugging the parameters of the images is avoided, and the photographing experience of the user is improved.

In combination with the first aspect, in some implementations, a first identification bit corresponding to the first shooting parameters is determined based on a first zoom factor and a first ambient light parameter in the first shooting parameters, where the first identification bit indicates whether the scene corresponding to the first shooting parameters is a zoom scene. When the first capability value is the first preset value and the first identification bit is a second preset value, the first image format is determined to be the image format corresponding to the first shooting mode in the configuration table; when the first capability value is the first preset value and the first identification bit is not the second preset value, the first image format is determined to be a preset image format.

In the above scheme, when the electronic device determines from the status identification bit that the scene is a digital zoom scene, it determines the corresponding image data format according to the setting in the image format configuration table; otherwise, the image can be output in a default image data format. Whether the scene is a digital zoom scene is determined through the status identification bit, the requirements of the shooting environment for definition and/or photosensitivity are then determined, and the camera is controlled to output image data in different formats accordingly. In this way, images with the best effect can be generated in different scenes, image quality is improved, the user is spared the time spent debugging image parameters, and the photographing experience is improved.
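Purely as an illustrative sketch (in C++, the language later described for the HAL layer), the capability-value and identification-bit check described above might look roughly as follows; the preset values, format names, and table layout are assumptions made only for this example, not the actual configuration.

```cpp
#include <map>
#include <string>

// Hypothetical image formats named after the formats discussed in the text.
enum class ImageFormat { BayerRaw, QuadBayerRaw, Binning };

// One (assumed) entry of the image format configuration table.
struct FormatConfigEntry {
    int capabilityValue;     // capability value bound to the shooting mode
    ImageFormat zoomFormat;  // format to use when the identification bit marks a zoom scene
};

constexpr int kFirstPresetValue = 1;       // assumed "first preset value" for the capability value
constexpr bool kSecondPresetValue = true;  // assumed "second preset value" for the identification bit

ImageFormat SelectFormat(const std::map<std::string, FormatConfigEntry>& table,
                         const std::string& shootingMode,
                         bool identificationBit,
                         ImageFormat defaultFormat) {
    auto it = table.find(shootingMode);
    if (it == table.end()) return defaultFormat;                                 // mode not configured
    if (it->second.capabilityValue != kFirstPresetValue) return defaultFormat;   // not the preset capability value
    if (identificationBit == kSecondPresetValue) return it->second.zoomFormat;   // zoom scene: use table entry
    return defaultFormat;                                                        // otherwise, default image format
}
```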

With reference to the first aspect, in some implementations, the electronic device includes a hardware abstraction layer that includes a camera platform architecture, and the camera hardware interface and the camera development kit chi-cdk in the camera platform architecture include an image format configuration table.

With reference to the first aspect, in some implementations, the electronic device further includes an application layer, the camera platform architecture further includes a camx interface, and the chi-cdk is configured to obtain a first photographing mode and a first zoom factor of the camera application from the application layer, and obtain a first ambient light parameter from the camx interface, determine a first image format based on the first photographing mode, the first zoom factor, and the first ambient light parameter, and send a first output instruction to the image sensor through the camx interface.

In the above scheme, the chi-cdk is used to determine the format that the image sensor needs to output, and an instruction is then sent to the image sensor through the camx interface, so that the image sensor can output image data in different formats. In this way, images with the best effect can be generated in different scenes, image quality is improved, the user is spared the time spent debugging image parameters, and the photographing experience is improved.

In a second aspect, the application provides an electronic device, which comprises a processor and an image sensor, wherein the image sensor is used for acquiring image signals and outputting image data corresponding to the image signals in an image format indicated by an output instruction according to the output instruction sent by the processor, and the processor is used for acquiring shooting parameters used by the image sensor for acquiring the image signals, determining the image format corresponding to the shooting parameters and sending the output instruction to the image sensor.

With reference to the second aspect, in some implementations, the processor determines the image format of the image data output by the image sensor as follows: when the shooting parameter matches the first shooting parameter in the image format configuration table, the output format of the image data is determined to be the first image format corresponding to the first shooting parameter in the image format configuration table; when the shooting parameter matches the second shooting parameter in the image format configuration table, the output format of the image data is determined to be the second image format corresponding to the second shooting parameter in the image format configuration table.

With reference to the second aspect, in some implementations, the first shooting parameter includes a shooting mode that is a normal mode or a live mode, the second shooting parameter includes a shooting mode that is a portrait mode or a large aperture mode, and the definition of the first image format is lower than the definition of the second image format, or the first shooting parameter includes a shooting mode that is a video mode, and the second shooting parameter includes a shooting mode that is a normal mode, a live mode, a portrait mode, or a large aperture mode, and the definition of the first image format is lower than the definition of the second image format.

With reference to the second aspect, in some implementations, the processor is one or more of an image signal processor, a central processing unit, a digital signal processor, and a graphics processor.

In a third aspect, the application provides a computer readable storage medium having instructions stored therein which, when run on an electronic device, cause the electronic device to perform the method described in the first aspect.

In a fourth aspect, the present application provides a computer program product comprising computer instructions which, when executed by a computing device, cause the computing device to perform the method described in the first aspect.

Detailed Description

Illustrative embodiments of the application include, but are not limited to, image generation methods, media, and electronic devices.

As described above, a user can take a picture using an electronic device with a camera, such as a mobile phone. Taking the mobile phone 10 as an example, the following describes an interface of a camera application of the electronic device in some embodiments. As shown in (a) in fig. 1, the camera application interface of the mobile phone 10 may include a preview screen area 11, a zoom magnification area 12, a shooting mode area 13, an album button 14, a shutter button 15, and a camera switching button 16.

The zoom magnification area 12 provides a variety of selectable zoom magnifications, including but not limited to 0.6, 1, 2, and so on. Illustratively, when the user selects the "1.0×" zoom magnification, the preview screen area 11 shows the picture captured at 1× zoom. When the user selects another zoom factor, the camera can capture objects at different distances. For example, as shown in fig. 1 (B), when the user adjusts the zoom magnification to "2.0×", the preview screen area 11 of the mobile phone 10 displays the picture enlarged 2 times.

It should be appreciated that the camera may implement a zoom function and may employ optical zoom and/or digital zoom techniques. Optical zoom may be implemented by changing lenses; for example, the mobile phone 10 may include a plurality of cameras such as a short-focus (wide-angle) camera, a mid-focus (main) camera, and a long-focus camera. The digital zoom technology may also be called lossless zoom (ISZ): ISZ generates a full size image, then crops the picture, and finally upscales the cropped picture and/or interpolates pixel values into it, thereby achieving the zoom effect of enlarging or reducing the shot picture.
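The crop-and-upscale idea behind digital zoom can be sketched as follows; this is a simplified illustration on a single-channel image using nearest-neighbor interpolation, not the actual ISZ pipeline of the shooting device.

```cpp
#include <cstdint>
#include <vector>

// Minimal sketch of digital zoom: crop the center of a full-size image, then
// upscale the crop back to the original size (nearest-neighbor, for brevity).
// A single-channel 8-bit image is assumed; the real pipeline works on raw data.
std::vector<uint8_t> DigitalZoom(const std::vector<uint8_t>& fullSize,
                                 int width, int height, double zoom) {
    int cropW = static_cast<int>(width / zoom);
    int cropH = static_cast<int>(height / zoom);
    int x0 = (width - cropW) / 2;   // top-left corner of the centered crop
    int y0 = (height - cropH) / 2;

    std::vector<uint8_t> out(static_cast<size_t>(width) * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Map each output pixel back into the cropped region.
            int srcX = x0 + x * cropW / width;
            int srcY = y0 + y * cropH / height;
            out[static_cast<size_t>(y) * width + x] =
                fullSize[static_cast<size_t>(srcY) * width + srcX];
        }
    }
    return out;
}
```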

The photographing mode area 13 provides a variety of selectable photographing modes including, but not limited to, "large aperture", "night view", "portrait", "photograph", "video" and "multi-mirror video". The mobile phone 10 may adopt different setting parameters and image compensation parameters for different shooting modes, such as aperture size, shutter speed, sensitivity (ISO), focusing mode, white balance, and exposure compensation. It should be appreciated that in some embodiments, the electronic device may also have other shooting modes, such as "live (livephoto)", "slow-motion" or "panoramic", etc., as the application is not limited in this regard.

The compensation of the captured image by the mobile phone 10 may be processed by an image signal processor (image signal processor, ISP) or a digital signal processor (digital signal processor, DSP).

For example, fig. 2 shows a schematic structural diagram of a camera 20, where the camera 20 may be disposed in an electronic device such as the mobile phone 10, or may be a device independent of the electronic device that is connected to it via Bluetooth or a wired connection. The camera 20 may also be referred to as a photographing device.

The camera 20 includes a lens 21, an image sensor (hereinafter referred to as sensor) 22, and an ISP23. The sensor22 is specifically a photosensitive element, which may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor.

In some embodiments, ISP23 may be built into camera 20 or a module external to camera 20. ISP23 may also be integrated with sensor22, as the application is not particularly limited in this regard.

In the process of capturing still images or video with the camera 20, an object is imaged through the lens 21 and the optical image is projected onto the sensor22; the photosensitive element of the sensor22 converts the optical signal into image data, which is then transferred to the ISP23 for processing. The digital image signal not yet processed by the ISP23 may also be referred to as raw data (raw). The ISP23 may also output the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.

In some cases, when the lens 21 focuses light onto the sensor22, light attenuation occurs; in particular, the light attenuation at the edge region of the lens 21 is greatest, which may be referred to as the lens shading phenomenon. As a result, due to the optical characteristics of the lens 21 and the sensor22, the brightness of the image output by the ISP23 is not uniform, for example the center of the image is bright while the edges are dark.

For the lens shading phenomenon, the ISP23 or the DSP performs lens shading correction (LSC) on the image data. For example, an ideal flat illumination image created by software is acquired first, and an image under uniform illumination is then captured with the lens 21 and the sensor22 as a reference image. A correction table is generated based on the ideal flat illumination image and the reference image, with each element in the table representing the brightness correction coefficient for the corresponding pixel location. During photographing, the ISP23 or the DSP then uses the correction table to perform brightness correction on the output image.
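A minimal sketch of the correction-table idea described above, assuming a full-resolution gain table (real ISPs typically store a coarse grid and interpolate it); the function and variable names are illustrative only.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// The correction table can be derived from an ideal flat-illumination image and
// a reference image captured under uniform illumination, as described above:
// each entry is the gain needed to bring the reference pixel up to the ideal one.
std::vector<float> BuildCorrectionTable(const std::vector<uint16_t>& ideal,
                                        const std::vector<uint16_t>& reference) {
    std::vector<float> gains(ideal.size(), 1.0f);
    for (size_t i = 0; i < ideal.size(); ++i) {
        if (reference[i] > 0) {
            gains[i] = static_cast<float>(ideal[i]) / reference[i];
        }
    }
    return gains;
}

// Lens shading correction: multiply each pixel by the gain stored at the same
// position in the correction table (gains > 1.0 brighten the darker edges).
void ApplyLensShadingCorrection(std::vector<uint16_t>& image,
                                const std::vector<float>& gainTable) {
    for (size_t i = 0; i < image.size(); ++i) {
        float corrected = image[i] * gainTable[i];
        image[i] = static_cast<uint16_t>(std::min(corrected, 65535.0f));  // clamp to 16-bit range
    }
}
```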

In practical application, the same manufacturer may adopt the same correction table for the same type of lens and sensor so as to overcome lens shading. However, in the actual production and assembly of cameras, there may be production differences among the modules in a camera, and the mounting angles of the lens 21 and the sensor22 may also differ. As a result, even with the correction table, the ISP23 cannot obtain an image with a good brightness correction effect.

In addition, the angle of the incident light also influences the lens shading phenomenon during shooting; for example, when light is incident from certain angles, the light attenuation at the edge of the lens is more severe, causing a more obvious shading effect. In particular, when there is movement during shooting, the lens shading phenomenon is more obvious; for example, when a user holds the mobile phone 10 and shoots while moving, the uneven image brightness under light from different angles is more noticeable.

It should be appreciated that the above correction method of the ISP23 is described only by taking the lens shading phenomenon as an example. In practical applications, the lens 21 and the sensor22 may exhibit other phenomena, and the ISP23 may perform other corresponding corrections, such as black level correction (BLC), green balance (GB), and the like.

In summary, because of the difference of calibration parameters and manufacturing processes of each module in the camera, the images output by the camera have larger quality differences, such as the problem of uneven brightness distribution of the images, and the conditions of uneven brightness are different in different shooting environments.

The formats of image data that the sensor22 can output include the bayer image (bayer array), the four-in-one bayer image (quad bayer array, also referred to as quad array or quad raw), and the image output after pixel binning (binning image). Both the bayer raw and the quad raw are full size images, and the quad raw may be reprocessed (remosaic) into a higher-definition bayer-format image (also referred to as a remosaic image). The binning image, in contrast, combines the readout values of adjacent pixels in the full size image, adding the induced charges of adjacent or same-color pixels and reading them out as one analog pixel.

Both quad raw and bayer raw are full size and contain more pixels, so they are better suited to ISZ scenes, and the definition of quad raw is higher than that of bayer raw. The binning image has better photosensitivity, but fewer pixels, and is therefore not suitable for an ISZ scene.
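To illustrate why the binning image has fewer pixels, the following sketch performs a digital 2×2 combination of same-color pixels; real sensors perform this combination in the analog domain as described above, so this is only an approximation for illustration.

```cpp
#include <cstdint>
#include <vector>

// Sketch of 2x2 pixel binning on a quad bayer sensor: in a quad bayer array each
// 2x2 block shares one color, so its four readouts can be merged into a single
// value. A digital average is used here only to show that the binned image has
// half the width and half the height of the full size image.
std::vector<uint16_t> Bin2x2(const std::vector<uint16_t>& fullSize,
                             int width, int height) {
    int outW = width / 2, outH = height / 2;
    std::vector<uint16_t> binned(static_cast<size_t>(outW) * outH);
    for (int y = 0; y < outH; ++y) {
        for (int x = 0; x < outW; ++x) {
            int sx = 2 * x, sy = 2 * y;
            uint32_t sum = fullSize[static_cast<size_t>(sy) * width + sx] +
                           fullSize[static_cast<size_t>(sy) * width + sx + 1] +
                           fullSize[static_cast<size_t>(sy + 1) * width + sx] +
                           fullSize[static_cast<size_t>(sy + 1) * width + sx + 1];
            binned[static_cast<size_t>(y) * outW + x] = static_cast<uint16_t>(sum / 4);
        }
    }
    return binned;
}
```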

Currently, the sensor22 generally outputs image data in only one format in different scenes; for example, in an ISZ scene, that is, at a zoom factor of "2×" or more, it outputs only a quad raw or bayer raw image. Although the quad raw has higher definition, its photosensitivity is poor, that is, its brightness rendering capability is poor, so uneven brightness caused by module calibration parameters and the manufacturing process is more obvious. If only a bayer array image can be output, the image definition is lower and scenes with higher definition requirements cannot be satisfied. As a result, in different shooting scenes there are image quality problems such as uneven brightness or low sharpness.

Even though some electronic devices provide adjustment functions for various setting parameters such as white balance and exposure compensation in order to achieve a certain shooting effect, this process requires repeated manual adjustment and combination by the user; the whole process is tedious, time-consuming, and inaccurate, which degrades the user experience. In addition, adjusting the setting parameters requires considerable expertise, and most users can hardly obtain satisfactory photos in this way.

In order to solve the problem of large differences in image quality, the present application provides an image generation method, in which a processor of an electronic device may control the sensor22 to output image data in different formats, for example a bayer raw image, a binning image, or a quad raw image, under different shooting environments according to a preset correspondence between shooting parameters and image data formats. Specifically, the electronic device obtains shooting parameters, such as one or more of the shooting mode, zoom factor, and ambient light parameter, when the camera 20 shoots, and then sends an instruction to output image data in the corresponding format to the sensor22 according to the shooting parameters, so that the sensor22 can output image data in different formats for different shooting environments.

For example, corresponding to a scene requiring higher definition and/or a case where there is a zoom, such as a photographing mode of "portrait" or "large aperture", and/or a case where the zoom magnification is within "2×" to "4×", image data having higher definition, such as a quad raw image, may be selected to be output to ensure that the output image has higher definition.

Corresponding to a scene with high photosensitivity requirements and/or a zoom situation, for example when the shooting mode is "normal" or "livephoto" and/or the zoom factor is "4×" or more, image data with good photosensitivity, such as a bayer image, can be adopted, which reduces the phenomenon of uneven brightness in the image while ensuring a certain definition.

For a scene with lower definition requirements, higher photosensitivity requirements, or higher image frame rate requirements, for example when the photographing mode is "video", the zoom factor is "1×", and/or the ambient light is darker, image data with higher photosensitivity, for example a binning image, can be selected for output to reduce the phenomenon of uneven brightness in the image. In addition, because the binning image combines pixels, the data volume of the image is reduced, so the frame rate of the output image can be improved.

It should be understood that the photographing parameters may also include an exposure rate set by a user, whether a filter is added, and the like. For example, if the user sets a higher exposure rate, output of a binning image or the like can be selected. The format of the image data may also include a 4K ultra high definition (UHD) image, a nine-in-one binning image (nona pixel), a 3F raw image, or the like. The present application is not particularly limited as to the type of photographing parameters and the type of image data format.

In some embodiments, the terminal device may pre-store the correspondence between multiple shooting parameters and image data formats, for example, generate an extensible markup language (extensible markup language, XML) configuration table of the correspondence, where the XML configuration table may include capability values corresponding to different shooting modes, identification bits corresponding to different zoom multiples and/or ambient light parameters, and image data formats corresponding to different capability values and identification bits.
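A possible in-memory representation of such an XML configuration table is sketched below; the field names and example rows are hypothetical and only mirror the kinds of entries described in the text, not an actual table.

```cpp
#include <string>
#include <vector>

// Hypothetical in-memory model of the XML configuration table described above.
// Field names and example values are illustrative only (scene configuration
// parameter, capability value, identification bit, image data format).
struct FormatRule {
    std::string shootingMode;    // e.g. "normal", "portrait", "video"
    std::string sceneConfig;     // scene configuration parameter for the mode (placeholder values)
    int capabilityValue;         // capability value bound to the mode
    bool iszIdentificationBit;   // identification bit derived from zoom / ambient light
    std::string imageFormat;     // format setting value, e.g. "quad_raw", "bayer_raw", "binning"
};

const std::vector<FormatRule> kFormatTable = {
    {"normal",   "0", 1, true,  "bayer_raw"},
    {"portrait", "1", 1, true,  "quad_raw"},
    {"video",    "2", 0, false, "binning"},
};
```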

After the camera application is started, a processor of the terminal device, such as the ISP23 or a central processing unit (central processing unit, CPU), may obtain parameters such as the shooting mode and zoom factor selected by the user in the camera application, and/or read the ambient light parameter of the current shot through the sensor22 or an ambient light sensor. The processor of the terminal device can then determine the capability value and the identification bit corresponding to the current shooting environment according to shooting parameters such as the shooting mode, zoom factor, and ambient light parameter, and determine the corresponding image data format in the XML configuration table. The sensor can then be controlled to output image data in the corresponding format.

In other embodiments, the terminal device may also send the shooting parameters to other processors of the terminal device, or the processors of other devices may process the shooting parameters. The other processor determines the image data format corresponding to the shooting parameters and sends an instruction to output the corresponding image data format to the sensor 22.

Furthermore, based on this image generation method, the terminal device can determine the corresponding shooting environment according to the current shooting parameters, as well as the requirements of the shooting environment for definition and/or photosensitivity, so as to control the camera to output image data in different formats. In this way, images with the best effect can be generated in different scenes, image quality is improved, the user is spared the time spent debugging image parameters, and the photographing experience is improved.

The terminal device may be any device with a camera, such as the aforementioned mobile phone 10, or a tablet computer, a wearable device, a vehicle-mounted device, an AR/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or a special camera (e.g., a single-lens reflex camera, a card-type camera), etc. The present application does not particularly limit the type of the terminal device.

The terminal device according to the embodiment of the present application will be described first.

Referring to fig. 3A, fig. 3A shows a schematic structural diagram of an exemplary terminal device 100 according to an embodiment of the present application.

The terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. Wherein the sensor module 180 may include an ambient light sensor 180A or the like.

It is to be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the terminal device 100. In other embodiments of the application, the terminal device 100 may include more or fewer components than illustrated, or certain components may be combined, or certain components may be split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an ISP, a controller, a memory, a video codec, a DSP, a baseband processor, and/or a neural network processor (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.

The controller may be a neural center and a command center of the terminal device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.

A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.

In some embodiments of the present application, the memory in the processor 110 may store the correspondence between shooting parameters and image data formats, such as an XML configuration table. The processor 110 may obtain parameters such as the shooting mode and zoom factor selected by the user in the camera application, and/or read the ambient light parameter of the current shot through the sensor22 or the ambient light sensor 180A. It then determines the capability value and the identification bit corresponding to the current shooting environment according to shooting parameters such as the shooting mode, zoom factor, and ambient light parameter, and determines the corresponding image data format in the XML configuration table. The sensor22 may then be controlled to output an image in the corresponding image data format.

The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger.

The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.

The wireless communication function of the terminal device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.

The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the terminal device 100 may be used to cover a single or multiple communication bands.

The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the terminal device 100. The wireless communication module 160 may provide solutions for wireless communication applied to the terminal device 100, including UWB, wireless local area network (WLAN) (for example, a wireless fidelity (WiFi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.

The terminal device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.

The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active matrix organic light-emitting diode (AMOLED), or the like. In some embodiments, the terminal device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.

In some embodiments of the application, the display 194 displays the interface content that is currently output by the system. For example, the interface content may be an interface provided by a camera application, and reference may be made specifically to (a) in fig. 1 and (B) in fig. 1 and their related descriptions.

The terminal device 100 may implement a photographing function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.

The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.

The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a CCD or CMOS phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, the terminal device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.

In some embodiments of the present application, the structure of the camera 193 may also refer to fig. 2 and the related description, and will not be described herein.

The digital signal processor is used for processing digital signals, and can process other digital signals in addition to digital image signals. For example, when the terminal device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, or the like.

Video codecs are used to compress or decompress digital video. The terminal device 100 may support one or more video codecs. In this way, the terminal device 100 can play or record video in various encoding formats, such as moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.

The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to realize expansion of the memory capability of the terminal device 100.

The internal memory 121 may be used to store computer executable program code including instructions.

The terminal device 100 may implement audio functions through an audio module 170, an application processor, and the like. Such as music playing, recording, etc.

The ambient light sensor 180A is used to sense ambient light level. The terminal device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180A may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180A may also cooperate with the proximity light sensor to detect whether terminal device 100 is in a pocket to prevent false touches.

In some embodiments of the present application, the ambient light sensor 180A may also be used to obtain ambient light parameters of the current shooting environment, and the processor may be used to control the sensor to output image data in different formats.

The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc. The SIM card interface 195 is used to connect a SIM card. The SIM card may be contacted and separated from the terminal apparatus 100 by being inserted into the SIM card interface 195 or by being withdrawn from the SIM card interface 195.

Fig. 3B shows a software configuration block diagram of the terminal device 100 of the embodiment of the present application.

The layered architecture divides the software into several layers, each with distinct roles and responsibilities. The layers communicate with each other through software interfaces. In some embodiments, the system is divided, from top to bottom, into an application layer, an application framework layer, a hardware abstraction layer (HAL), and a kernel layer.

The application layer may include a series of application packages. As shown in fig. 3B, the application package may include a camera application (camera APK). In addition, the application package may further include applications such as gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, game, shopping, travel, and instant messaging (e.g., short message), which are not shown in the figure.

The application framework layer provides an application programming interface (application programming interface, API) and a programming framework for the applications of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 3B, the application framework layer may include a camera service. In addition, the application framework layer may also include an input manager, a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, a display manager, an activity manager, and the like. It should be noted that any two modules among the camera service, the input manager, the window manager, the content provider, the view system, and the activity manager may call each other.

The camera service is used to receive instructions or requests reported by lower layers such as the kernel layer and the hardware abstraction layer, for example pictures collected by the sensor, and to send the collected pictures to the camera application for display.

The hardware abstraction layer (HAL) is an interface layer between operating system software and hardware components that provides an interface for interaction between upper-layer software and lower-layer hardware. The HAL layer abstracts the underlying hardware into software containing corresponding hardware interfaces, and settings of the underlying hardware devices can be made by accessing the HAL layer; for example, related hardware components can be enabled or disabled at the HAL layer. In some embodiments, the core architecture of the HAL layer is composed of at least one of C++ or C.

Fig. 3B shows the camera hardware interface (CHI) and camera development kit (CDK), hereinafter collectively referred to as chi-cdk, that the HAL layer involves in an embodiment of the present application. The HAL layer also includes the camx architecture, a self-developed camera system (os_camera), and a kernel mode driver for the linux video device driver framework (video for linux 2 kernel mode driver, V4L2 KMD). os_camera includes a plurality of custom camera modes, such as ForceSensorMode.

The chi-cdk contains a code implementation set for customizable requirements, that is, it may include a plurality of function-driving features, for example a feature for implementing the image generation method or a feature for implementing an artificial intelligence photographing mode. Fig. 3B illustrates the feature of the image generation method as chifeature. Chiframework is used to process acquired photographing requests and the like and to call camx to acquire images. The sensor XML includes configuration information corresponding to the features, for example the XML configuration table of correspondences between shooting parameters and image data formats required by chifeature.

Camx contains a code implementation set of general-purpose camera functional interfaces. The sensor node in camx includes interfaces that can obtain parameters from the sensor, such as interfaces for obtaining auto focus (AF), auto exposure (AE), and auto white balance (AWB) parameters, sensor image signals, optical image stabilization (OIS) parameters, and the like. AF, AE, and AWB may also be collectively referred to as 3A. The 3A status module may be configured to perform status recognition on the parameters acquired through the 3A interfaces, for example to determine the current ambient light parameter according to the AE value and then determine the current ambient brightness level, for example whether the current environment is a dark environment, a brighter environment, or a medium-high brightness environment.
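The kind of state recognition performed by the 3A status module might be sketched as follows; the brightness thresholds are arbitrary placeholders and not values used by any real implementation.

```cpp
// Sketch of the state recognition described for the 3A status module: mapping
// an auto-exposure (AE) derived brightness value to a coarse ambient light
// level. The threshold values are arbitrary placeholders.
enum class AmbientLightLevel { Dark, Brighter, MediumHighBright };

AmbientLightLevel ClassifyAmbientLight(double aeBrightness) {
    if (aeBrightness < 20.0) return AmbientLightLevel::Dark;       // dark environment
    if (aeBrightness < 100.0) return AmbientLightLevel::Brighter;  // brighter environment
    return AmbientLightLevel::MediumHighBright;                    // medium-high brightness environment
}
```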

The V4L2 KMD enables communication between modules such as camx and the sensor in the underlying kernel layer; for example, camx and the sensor may communicate via the ioctl function, for example to send an instruction to the sensor to output image data in the corresponding format.
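For illustration, a generic user-space V4L2 call that negotiates a raw output format with a capture driver is sketched below; the actual camx/V4L2 KMD path is vendor specific, and the device node and pixel format shown are placeholders.

```cpp
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>

// Generic V4L2 sketch of asking a capture device for a particular raw output
// format via ioctl. The device node and pixel format below are placeholders.
int RequestBayerOutput(const char* devNode, unsigned width, unsigned height) {
    int fd = open(devNode, O_RDWR);
    if (fd < 0) { perror("open"); return -1; }

    v4l2_format fmt{};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SBGGR10;  // 10-bit bayer, as an example

    int ret = ioctl(fd, VIDIOC_S_FMT, &fmt);  // negotiate the format with the driver
    if (ret < 0) perror("VIDIOC_S_FMT");
    close(fd);
    return ret;
}
```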

The kernel layer is a layer between hardware and software. The kernel layer at least comprises an image sensor (sensor) driver, and can also comprise a display driver, an audio driver, a sensor driver, a touch chip driver and input (input) system and the like.

Through the software architecture, when a user triggers a photographing request through a photographing button provided by a camera application program, chiframework can send the photographing request to the camx architecture, and the camx architecture acquires an image signal generated by a sensor through the V4L2 KMD and feeds the image signal back to chiframework so as to display a preview image in the camera application program.

In some embodiments of the present application, the chi-cdk may also obtain information such as shooting mode and zoom parameters in the camera application in the application layer, and obtain the ambient brightness value in 3A through camx. The chi-cdk can determine the corresponding image data format in an XML configuration table according to the acquired shooting mode, zoom parameters and environment brightness values, wherein each shooting parameter has corresponding set image data format information. Then, the sensor is controlled to output image data of a corresponding format.

For example, the chi-cdk may first obtain the currently used shooting modes from the camera APK, and each shooting mode has a corresponding scene configuration parameter in the XML configuration table, for example, the scene configuration parameter of the normal mode is "0". Corresponding to different shooting modes, corresponding capability values are also configured in the XML configuration table. For example, the shooting mode is a normal mode, a live mode, a portrait mode, or a large aperture mode, and the corresponding capability value is a preset value of "1", which indicates that the corresponding image data needs to be selected according to the XML configuration table. Further, corresponding capability values may be determined in an XML configuration table corresponding to the currently used shooting mode.

If the chi-cdk determines that the capability value corresponding to the currently used shooting mode is the preset value, the chi-cdk can also acquire the zoom parameter used by the current shooting from the camera APK, and read the brightness value in 3A from camx. It then determines whether the current scene is an ISZ scene based on the zoom factor and the ambient light parameter; for example, if the current zoom factor is greater than "2×" and/or the current ambient light parameter indicates a medium-high brightness scene, the current identification bit may be determined to indicate an ISZ scene. Then, through the image data format setting value (setting) preset in the XML configuration table, output of image data in the corresponding format can be controlled through the V4L2 KMD.
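As an illustration, the identification-bit determination just described might be expressed as follows; treating the "and/or" condition as a logical AND is an assumption made only for this sketch.

```cpp
// Sketch of determining the ISZ identification bit from the zoom factor and the
// ambient light level as described above. Treating the "and/or" as a logical
// AND is an assumption made only for this example.
bool IsIszScene(double zoomFactor, bool mediumHighBrightScene) {
    return zoomFactor > 2.0 && mediumHighBrightScene;
}
```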

In some embodiments, the chi-cdk may determine the corresponding image data format based only on the correspondence between the scene configuration parameters and setting in the XML configuration table, without determining whether the current scene is an ISZ scene.

It is to be understood that the structure illustrated in the present application does not constitute a specific limitation on the terminal device 100. In other embodiments, terminal device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

An image generating method according to an embodiment of the present application is described below, and the method may be applied to the terminal device 100, for example, the mobile phone 10. As shown in FIG. 4A, the steps in FIG. 4A may be specifically performed by ISP23 as in FIG. 2 or other processor that may send instructions to sensor22, hereinafter collectively referred to as processor 110.

S401, acquiring shooting parameters.

The user can open a camera application through the terminal device 100 to take an image or video. When the user opens the camera application or triggers photographing through the photographing button provided by the camera application, for example when the user clicks the shutter button 15 in the interface shown in (a) of fig. 1, the processor 110 acquires the shooting parameters used for the current shooting, for example one or more of the shooting mode, zoom magnification, ambient light parameter, and the like.

The shooting modes include, but are not limited to, shooting modes such as "large aperture", "night view", "portrait", "photograph", "video" and "multi-mirror video". The zoom magnification includes, but is not limited to, "1.0×", "2.0×", and the like. The ambient light parameter may specifically be a current ambient light level, or may be a brightness level determined according to the current ambient light level, for example, the current environment is a dark environment, a brighter environment, or a medium-high bright environment.

In some embodiments, the processor 110 may obtain the shooting mode and zoom factor from the camera application (camera APK) shown in fig. 3B, and may obtain the ambient light value in 3A from camx.

In other embodiments, the processor 110 may acquire the identifiers corresponding to the shooting parameters, in addition to the specific values of the shooting mode, the zoom magnification, and the ambient light parameters. For example, the chi-cdk may also determine a corresponding capability value according to the photographing mode, and a corresponding ISZ identification bit according to the zoom parameter and the ambient brightness value.

It should be understood that the photographing parameters may also include an exposure rate set by a user, whether a filter is added, etc., which is not particularly limited by the present application.

In other embodiments, when the terminal device 100 has a plurality of cameras, the processor 110 may further obtain the corresponding shooting parameters for each camera and generate a private variable corresponding to each camera. Alternatively, the processor 110 may generate the corresponding private variable only for the currently active camera. This avoids the problem that, when different cameras lead to different image data formats being determined from their shooting parameters, the processor 110 cannot determine which image data format each sensor should adopt.

And S402, determining a corresponding image data format according to the shooting parameters.

The processor 110 may determine the image data format corresponding to the current shooting parameters according to the obtained shooting parameters. The processor 110 may then issue to the sensor22 an instruction to output image data in the corresponding format, such as the above-mentioned bayer raw, binning image, quad raw image, UHD image, or nona pixel image; the type of the image data format is not particularly limited in the present application.

In some embodiments, the processor 110 may determine the corresponding image data format under the current shooting parameter according to the preset correspondence between the shooting parameter and the image data format, for example, the correspondence may be an XML configuration table, and the XML configuration table may include capability values corresponding to different shooting modes, identification bits corresponding to different zoom multiples and/or ambient light parameters, and image data formats corresponding to different capability values and identification bits.

The image data formats corresponding to different shooting parameters are exemplified below.

The processor 110 may determine the corresponding image data format based solely on the ambient light parameters or the corresponding brightness level. For example, for a dark environment at a brightness level, an output binning image may be determined. The output bayer raw image may be determined corresponding to a medium-high brightness environment. The output quad raw image may be determined corresponding to a brighter environment of brightness level.

The processor 110 may determine the corresponding image data format according to the photographing mode only. For example, for a shooting mode such as "video" or "night view", the output of a binning image may be determined. The output of a bayer raw image may be determined when the shooting mode is "normal" or "live". The output of a quad raw image may be determined when the shooting mode is the "portrait" or "large aperture" mode.

The processor 110 may determine the corresponding image data format based solely on the zoom factor. For example, for a zoom magnification of "1.0×", the output of a binning image may be determined. For a zoom magnification between "2.0×" and "4.0×", the output of a full size image such as a bayer raw image or a quad raw image may be specified. For a zoom magnification of "4.0×" or more, the output of a full size image such as a bayer image or a quad image may likewise be specified, ensuring that the number of pixels is sufficient for digital zooming.

In other embodiments, the processor 110 may also determine the corresponding image data format based on a plurality of ambient light parameters, a photographing mode, and a zoom factor. When determining the image data format according to the plurality of photographing parameters, priorities may be set for the different photographing parameters, and if the corresponding image data formats determined by the plurality of photographing parameters are different, the image data format may be determined according to the photographing parameters having a high priority. Alternatively, the processor 110 may count image data formats corresponding to different shooting parameters, and determine a final image data format according to image data formats corresponding to more shooting parameters.
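The two resolution strategies described above (priority order, or counting the most common result) can be sketched as follows; the priority ordering and format names are assumptions made only for illustration.

```cpp
#include <algorithm>
#include <map>
#include <vector>

enum class Format { BayerRaw, QuadRaw, Binning };

// Strategy 1 (sketch): each shooting parameter proposes a format; proposals are
// passed in descending priority order and the highest-priority proposal wins.
Format ResolveByPriority(const std::vector<Format>& proposalsByPriority) {
    return proposalsByPriority.front();  // assumes at least one proposal
}

// Strategy 2 (sketch): count how often each format is proposed by the individual
// shooting parameters and pick the most frequent one.
Format ResolveByMajority(const std::vector<Format>& proposals) {
    std::map<Format, int> counts;
    for (Format f : proposals) ++counts[f];  // assumes proposals is non-empty
    return std::max_element(counts.begin(), counts.end(),
                            [](const auto& a, const auto& b) {
                                return a.second < b.second;
                            })->first;
}
```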

In other embodiments, the processor 110 may also preset correspondences between combinations of shooting parameters and image data formats. For example, a quad raw image may be selected for shooting modes such as "portrait" or "large aperture", zoom factors within "2×" to "4×", and medium-high brightness scenes. A bayer array image may be used when the photographing mode is "normal" or "livephoto" or the like, the zoom magnification is "4×" or more, and the scene is a medium-high brightness scene. The output of a binning image may be selected for the other remaining scenes.

It should be understood that the foregoing merely illustrates, by way of example, the correspondence between shooting parameters and image data formats; in practical applications, different terminal devices may set different correspondences as required, which is not particularly limited in the present application.

In other embodiments, the processor 110 may obtain the image data formats corresponding to different zoom factors, ambient light parameters and shooting modes according to the correspondence between the various shooting parameters and the image data formats pre-stored in the chi-cdk, for example, an XML configuration table. Reference is also made in particular to fig. 3B and its associated description.

S403, controlling the sensor22 to output image data in the corresponding image data format.

The processor 110 determines a corresponding image data format based on the zoom multiple, the ambient light parameter, the photographing mode, and other photographing parameters, and may send a corresponding instruction to the sensor22, so that the sensor22 may output the acquired image signal as image data according to the determined image data format.

In some embodiments, the image data output by the sensor22 may be used to generate a preview image on the camera interface, or to generate the image that is stored on the terminal device after the user taps the shutter button.

In other embodiments, the image used for preview and the finally saved image may be images in the same image data format, for example, images in the image data format determined based on step S402 described above. Alternatively, the image used for preview and the finally saved image may be images in different image data formats; for example, the preview image may be a binning image, and the finally saved image may be an image in the image data format determined based on step S402 described above. The present application is not particularly limited thereto.

Furthermore, based on the image generation method, the terminal device can determine the corresponding shooting environment according to the current shooting parameters, as well as the requirements of that shooting environment for definition and/or photosensitivity, so as to control the camera to output image data in different formats. Furthermore, images with the best effect can be generated under different scenes, the image quality is improved, the time spent by a user for debugging the parameters of the images is avoided, and the photographing experience of the user is improved.

It should be appreciated that in some embodiments, the terminal device may also repeat the steps shown in fig. 4A described above. Specific processes may refer to fig. 4B, including:

S410, detecting a first shooting parameter.

When the user opens the camera application or triggers a photographing request through a photographing button provided by the camera application, the processor 110 acquires the first shooting parameter used for the current shooting, such as one or more of the shooting mode, the zoom factor, and the ambient light parameter. Reference may be made specifically to the aforementioned step S401, and details are not repeated here.

S420, controlling the sensor22 to output the image according to the first image format.

The processor 110 may determine, according to the obtained first shooting parameter, the first image format corresponding to the current first shooting parameter, and control the sensor22 to output the currently acquired first image signal as an image in the determined first image format. Reference may be made specifically to the foregoing steps S402 to S403, and details are not repeated here.

S430, detecting a second shooting parameter.

The processor 110 may execute the above steps again when it detects that the user has changed the shooting mode or the zoom factor, or when it detects that the ambient light parameter has changed significantly. Alternatively, the processor 110 may acquire the current shooting parameters periodically at a preset interval. The processor 110 then acquires again the second shooting parameter used for the current shooting; reference may be made to the aforementioned step S401, and details are not repeated here.

S440, controlling the sensor22 to output the image according to the second image format.

The processor 110 may determine, according to the obtained second shooting parameter, the second image format corresponding to the current second shooting parameter, and control the sensor22 to output the acquired second image signal as an image in the determined second image format. Reference may be made specifically to the foregoing steps S402 to S403, and details are not repeated here.
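
The repeated flow of fig. 4B could be sketched as the loop below; ReadCurrentParams, DecideFormat, and ApplyFormat stand in for steps S401 to S403 and are assumptions, as is the lux-change threshold used to decide that the ambient light has "changed significantly".

```cpp
#include <cmath>

enum class SensorOutputFormat { kBinning, kBayerRaw, kQuadRaw };

struct ShootingParams {
    int   mode = 0;           // shooting-mode identifier
    float zoomFactor = 1.0f;  // zoom factor
    float ambientLux = 0.0f;  // ambient light parameter
};

// Stubs standing in for S401-S403; real implementations would query the camera
// application / camx interface and program sensor22.
ShootingParams ReadCurrentParams() { return {}; }
SensorOutputFormat DecideFormat(const ShootingParams&) { return SensorOutputFormat::kBinning; }
void ApplyFormat(SensorOutputFormat) {}

// A change is "significant" when the mode or zoom changes, or when the ambient
// light moves by more than an assumed threshold.
bool ParamsChangedSignificantly(const ShootingParams& a, const ShootingParams& b) {
    return a.mode != b.mode || a.zoomFactor != b.zoomFactor ||
           std::fabs(a.ambientLux - b.ambientLux) > 50.0f;
}

// Fig. 4B as a loop: S410/S420 once, then S430/S440 whenever the parameters change.
void RunCaptureSession(const bool& keepRunning) {
    ShootingParams current = ReadCurrentParams();   // S410: detect first shooting parameter
    ApplyFormat(DecideFormat(current));              // S420: output in the first image format
    while (keepRunning) {
        ShootingParams next = ReadCurrentParams();  // S430: event-driven or periodic re-check
        if (ParamsChangedSignificantly(current, next)) {
            ApplyFormat(DecideFormat(next));         // S440: output in the second image format
            current = next;
        }
    }
}
```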

Furthermore, based on the image generation method, the terminal device can acquire the current shooting scene based on shooting parameters when a user opens a camera application or triggers a shooting request, so as to control the camera to output image data in different formats. Furthermore, images with the best effect can be generated under different scenes, the image quality is improved, the time spent by a user for debugging the parameters of the images is avoided, and the photographing experience of the user is improved.

In some embodiments, the processor 110 may determine the capability value corresponding to the currently used shooting mode; if the capability value is the preset value, determine whether the current scene is an ISZ scene according to the zoom factor and the ambient light parameter; and, if the current scene is an ISZ scene, control the sensor22 to output an image in the image data format set in the XML configuration table. Reference may be made specifically to the steps shown in fig. 5, including:

S510, acquiring a current scene.

When the user opens the camera application or triggers a photographing request through a photographing button provided by the camera application, the processor 110 may acquire the currently used shooting mode from the camera APK, such as "large aperture", "night scene", "portrait", "photo", "video", or "multi-lens video". Each shooting mode has a corresponding scene configuration parameter; for example, the scene configuration parameter of the normal "photo" mode is "0".

In addition, corresponding to a plurality of cameras, the ID of each camera, that is, the camera ID, is also acquired. When a plurality of cameras are opened, each camera may acquire its corresponding shooting mode, and a corresponding private variable is generated.

S520, matching the XML configuration item of the product.

The processor 110 may find the configuration item of the corresponding scene in the sensor XML according to the scene configuration parameter corresponding to the shooting mode. For example, fig. 6 shows a schematic diagram of an XML configuration table named "forceSelectSensorModeConfig", which includes a sensor name "sensorName", a scene configuration parameter "sceneMode", a frame rate "fps", a capability value "capability", a setting serial number "forceSelectSensorMode" of the corresponding image format, and the like. For example, one entry has a scene configuration parameter of "0", a frame rate of 30, a capability value of 1, and a setting serial number of 14; another entry has a scene configuration parameter of 23, a frame rate of 30, a capability value of 1, and a setting serial number of 3. A sub-scene configuration parameter (subSceneMode) may also be configured, which is used to further subdivide a scene into multiple sub-scenes.
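
A C++ representation of one configuration item and the matching of step S520 might look as follows; the struct fields mirror the attributes named for fig. 6, while the C++ identifiers and the lookup function are illustrative assumptions.

```cpp
#include <optional>
#include <string>
#include <vector>

// One entry of the "forceSelectSensorModeConfig" table; field names mirror the
// XML attributes listed above, the C++ identifiers are illustrative assumptions.
struct SensorModeConfig {
    std::string sensorName;                 // "sensorName"
    int         sceneMode = 0;              // "sceneMode": scene configuration parameter
    int         subSceneMode = -1;          // "subSceneMode": -1 when not configured
    int         fps = 30;                   // "fps"
    int         capability = 0;             // "capability"
    int         forceSelectSensorMode = 0;  // setting serial number of the image format
};

// S520: look up the configuration item matching the current scene configuration
// parameter; an empty result corresponds to the "no" branch of S530.
std::optional<SensorModeConfig> MatchScene(const std::vector<SensorModeConfig>& table,
                                           int sceneMode) {
    for (const SensorModeConfig& entry : table) {
        if (entry.sceneMode == sceneMode) return entry;
    }
    return std::nullopt;
}
```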

S530, whether the scene is successfully matched. If yes, go to step S540; if no, end.

If the processor 110 finds the configuration item of the corresponding scene in the XML configuration table according to the currently used shooting mode, the scene matching is successful, and step S540 is further performed. If no corresponding configuration item is found, the process ends, and an image is output in a default image data format, for example.

S540, acquiring the capability value of the current scene configuration.

As shown in fig. 6, corresponding capability values are also configured in the XML configuration table for different shooting modes. The ISP23 may determine the corresponding capability value based on the corresponding configuration item; see fig. 6 for details.

S550, whether the capability value is 0. If yes, end; if no, go to step S560.

If the corresponding capability value is the preset value "1", the corresponding image data format needs to be selected according to the XML configuration table, and step S560 is further performed. If the capability value is "0", the processor 110 may output the image in a default image data format.

In some embodiments, the capability value may be "1" for a scene such as "normal," "live," "portrait," "large aperture," or the like.

In some embodiments, corresponding to a capability value of "0", the processor 110 may also determine the corresponding image data format from setting in the XML configuration table. Alternatively, the processor 110 may not perform step 550, i.e., determine the corresponding image data format according to setting in the XML configuration table, regardless of the capability value.

It should be appreciated that the capability values described above are by way of example, and that in some embodiments, the capability values may be provided as other values or symbols, as the application is not limited in particular.

S560, obtaining the current state identification bit.

The chi-cdk in the processor 110 may determine whether the current scene is an ISZ scene based on the zoom factor and the ambient light parameter. For example, if the current zoom factor is greater than "2×" and/or the current ambient light parameter indicates a medium-to-high-brightness scene, the current status identification bit may be determined to be an ISZ scene.
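
A minimal sketch of this determination follows; the "greater than 2×" condition comes from the text, while the lux threshold for a medium-to-high-brightness scene and all names are purely assumptions.

```cpp
enum class StatusFlag { kDefault, kISZ };

// S560: derive the status identification bit from the zoom factor and the ambient
// light parameter. Either condition alone is assumed to be sufficient ("and/or").
StatusFlag DetermineStatusFlag(float zoomFactor, float ambientLux) {
    const bool zoomedIn    = zoomFactor > 2.0f;    // "greater than 2x", from the text
    const bool brightScene = ambientLux > 300.0f;  // assumed medium-to-high-brightness threshold
    return (zoomedIn || brightScene) ? StatusFlag::kISZ : StatusFlag::kDefault;
}
```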

It should be appreciated that the chi-cdk may determine the current status identification bit at step S510, for example, when the user opens the camera application or triggers a photographing request through a photographing button provided by the camera application. The present application does not specifically limit the time at which the status identification bit is generated.

S570, whether the status identification bit is ISZ. If yes, go to step S580; if no, end.

Upon determining that the status identification bit is ISZ, the processor 110 determines the corresponding image data format according to the setting in the XML configuration table. If not, the image may be output in a default image data format for this scene.

S580, obtaining the setting serial number of the current scene configuration.

Different settings in the XML configuration table correspond to different image data formats. The processor 110 may send the determined setting value to the sensor22, so that the sensor22 outputs an image in the image data format corresponding to that setting.
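
Pulling steps S530 to S580 together, the overall decision might be sketched as below; the SensorModeConfig and StatusFlag types repeat the earlier sketches and the preset capability value "1" follows the text, while everything else is an assumption.

```cpp
#include <optional>

struct SensorModeConfig { int capability = 0; int forceSelectSensorMode = 0; };
enum class StatusFlag { kDefault, kISZ };

// Returns the setting serial number to send to sensor22, or an empty value when
// the default image data format should be used instead.
std::optional<int> SelectSettingSerial(const std::optional<SensorModeConfig>& matched,
                                       StatusFlag flag) {
    if (!matched) return std::nullopt;                  // S530: no matching configuration item
    if (matched->capability != 1) return std::nullopt;  // S550: capability value is not "1"
    if (flag != StatusFlag::kISZ) return std::nullopt;  // S570: status identification bit is not ISZ
    return matched->forceSelectSensorMode;              // S580: setting serial number for sensor22
}
```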

Furthermore, based on the image generation method, the terminal device can acquire the current shooting scene based on shooting parameters when a user opens a camera application or triggers a shooting request, so as to control the camera to output image data in different formats. Furthermore, images with the best effect can be generated under different scenes, the image quality is improved, the time spent by a user for debugging the parameters of the images is avoided, and the photographing experience of the user is improved.

In order to solve the above-mentioned problem of large difference in image quality, the present application provides an image generating apparatus 700 including an acquisition unit 710, a determination unit 720, and a transmission unit 730.

The acquisition unit 710 is configured to acquire a first photographing parameter for an image sensor of a photographing apparatus to acquire a first image signal. The determining unit 720 is configured to determine that the first capturing parameter corresponds to a first image format, and the transmitting unit 730 is configured to transmit a first output instruction to the image sensor, where the first output instruction is configured to instruct the image sensor to output the acquired first image signal as first image data having the first image format. The acquisition unit 710 is further configured to acquire a second photographing parameter for the image sensor of the photographing apparatus to acquire a second image signal. The determining unit 720 is further configured to determine that the second capturing parameter corresponds to a second image format, and the sending unit 730 is further configured to send a second output instruction to the image sensor, where the second output instruction is configured to instruct the image sensor to output the acquired second image signal as second image data having the second image format.

In some embodiments, the first photographing parameter and the second photographing parameter include one or more of a photographing mode, a zoom factor, and an ambient light parameter.

In other embodiments, the first photographing parameter includes a photographing mode that is a normal mode or a live mode, the second photographing parameter includes a photographing mode that is a portrait mode or a large aperture mode, and the sharpness of the first image format is lower than the sharpness of the second image format, or the first photographing parameter includes a photographing mode that is a video mode, and the second photographing parameter includes a photographing mode that is a normal mode, a live mode, a portrait mode, or a large aperture mode, and the sharpness of the first image format is lower than the sharpness of the second image format.

In other embodiments, the first photographing parameter includes a zoom factor that is less than a zoom factor included in the second photographing parameter, and the number of pixels in the first image format is greater than the number of pixels in the second image format.

In other embodiments, the first photographing parameter includes an ambient light parameter that is less than an ambient light parameter included in the second photographing parameter, and the brightness contrast parameter of the first image format is higher than the brightness contrast parameter of the second image format.

In other embodiments, corresponding to the first image format being a bayer image format, the second image format is a four-in-one bayer image format or a pixel-by-pixel image format; corresponding to the first image format being a four-in-one bayer image format, the second image format is a bayer image format or a pixel-by-pixel image format; and corresponding to the first image format being a pixel-by-pixel image format, the second image format is a bayer image format or a four-in-one bayer image format.

In other embodiments, the obtaining unit 710 is further configured to obtain an image format configuration table, where the image format configuration table includes correspondences among shooting modes, capability values, and image formats. The determining unit 720 is further configured to determine, based on the image format configuration table, a first capability value corresponding to the first shooting mode in the first shooting parameter. The determining unit 720 is further configured to determine, corresponding to the first capability value being a first preset value, that the first image format is the image format corresponding to the first shooting mode in the image format configuration table, and the determining unit 720 is further configured to determine, corresponding to the first capability value not being the first preset value, that the first image format is a preset image format.

In other embodiments, the determining unit 720 is further configured to determine a first identification bit corresponding to the first shooting parameter based on the first zoom multiple and the first ambient light parameter in the first shooting parameter, where the first identification bit is used to indicate whether the scene corresponding to the first shooting parameter is a zoom scene, the determining unit 720 is further configured to determine that the first image format is an image format corresponding to the first shooting mode in the configuration table based on the first capability value being a first preset value and the first identification bit being a second preset value, and the determining unit 720 is further configured to determine that the first image format is a preset image format based on the first capability value being the first preset value and the first identification bit not being the second preset value.

In other embodiments, the terminal device includes a hardware abstraction layer, the hardware abstraction layer includes a camera platform architecture, and the camera hardware interface in the camera platform architecture and the camera development kit chi-cdk include an image format configuration table.

In other embodiments, the terminal device further comprises an application layer, the camera platform architecture further comprises a camx interface, the chi-cdk is configured to obtain a first shooting mode and a first zoom factor of the camera application from the application layer, obtain a first ambient light parameter from the camx interface, determine a first image format based on the first shooting mode, the first zoom factor and the first ambient light parameter, and send a first output instruction to the image sensor through the camx interface.

Further, based on the image generating apparatus 700, the current shooting scene may be acquired based on the shooting parameters when the user opens the camera application or triggers the shooting request, so as to control the camera to output image data in different formats. Furthermore, images with the best effect can be generated under different scenes, the image quality is improved, the time spent by a user for debugging the parameters of the images is avoided, and the photographing experience of the user is improved.

In the above embodiments, the implementation may be carried out in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a semiconductor medium (e.g., solid state disk), or the like.

Those of ordinary skill in the art will appreciate that all or part of the above-described method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when the program is executed, the flows of the above-described method embodiments may be included. The storage medium includes any medium that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.