CN112200726B - Urinary sediment visible component detection method and system based on lensless microscopic imaging - Google Patents
- Grant date: Fri Apr 07 2023
Info
- Publication number: CN112200726B (application CN202011177179.XA)
- Authority: CN (China)
- Prior art keywords: image, network, sub, urinary sediment, microscopic
- Prior art date: 2020-10-29
- Legal status: Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10056—Microscopic image
- G06T2207/10061—Microscopic image from scanning electron microscope
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The invention discloses a urinary sediment visible component detection method and system based on lensless microscopic imaging. The method replaces a traditional optical microscope with lensless microscopic imaging and simplifies the illumination and imaging light paths, so the microscopic equipment is compact, lightweight, low-cost and simple to operate, which in turn reduces medical costs. An artificial intelligence algorithm performs phase recovery and super-resolution reconstruction on the holographic intensity image acquired by lensless microscopic imaging, realizing wide-field, ultrahigh-resolution microscopy; this greatly shortens imaging time, increases imaging speed, improves imaging quality, and makes the method applicable to clinical medical examination. In addition, the invention uses a deep neural network to combine detection, segmentation and identification of the urinary sediment formed components in one model for multi-task learning, realizing fully automatic, end-to-end detection, segmentation and identification of the urinary sediment formed components.
Description
Technical Field
The invention relates to the technical field of microscopic imaging and the technical field of medical image processing, in particular to a urinary sediment visible component detection method and system based on lensless microscopic imaging.
Background
A routine urine examination generally covers the color, transparency, specific gravity, pH value, protein, sugar and sediment formed components of urine. Examination of the urinary sediment formed components is one of the routine test items in hospitals: some components of the sediment, such as bacteria, red blood cells and white blood cells, have definite pathological significance and high diagnostic value. They help clinicians understand changes in each part of the urinary system and play an important role in localization diagnosis, differential diagnosis and prognosis assessment of urinary system diseases.
The conventional urinary sediment microscopy analysis method is to collect urinary sediment images through a microscope and then identify them by manual observation. On the one hand, manual observation and analysis involve a heavy workload and complicated operation, and are easily affected by subjective factors, leading to missed and false detections. On the other hand, to obtain a high-resolution, wide-field microscopic image, urinary sediment microscopy must employ a high-magnification imaging system and enlarge the field of view and depth of field through mechanical scanning, image stitching and focal-length adjustment, which not only complicates the imaging process but also significantly increases the overall cost of the microscopic imaging system.
Disclosure of Invention
The invention provides a urinary sediment visible component detection method and system based on lens-free microscopic imaging, which are used for overcoming the defects of manual analysis, complicated imaging process, high cost and the like in the prior art.
In order to achieve the purpose, the invention provides a urinary sediment visible component detection method based on lens-free microscopic imaging, which comprises the following steps:
collecting a urinary sediment holographic intensity map by using a lens-free microscopic imaging mode;
inputting the holographic intensity map into a trained urinary sediment visible component detection model, wherein the urinary sediment visible component detection model comprises a first image conversion network, a second image conversion network and an image classification network;
carrying out phase recovery and automatic focusing on the holographic intensity image by using a first image conversion network to obtain an original microscopic image;
performing super-resolution reconstruction on the original microscopic image by using a second image conversion network to obtain a super-resolution microscopic image;
and obtaining the type and the quantity of the formed components of each urinary sediment by using an image classification network according to the super-resolution microscopic image.
In order to achieve the above object, the present invention further provides a urinary sediment visible component detection system based on lens-free microscopic imaging, comprising:
the image acquisition module is used for acquiring a urinary sediment holographic intensity map by using a lensless microscopic imaging mode;
the urine sediment visible component detection module is used for inputting the holographic intensity map into a trained urine sediment visible component detection model, and the urine sediment visible component detection model comprises a first image conversion network, a second image conversion network and an image classification network; carrying out phase recovery and automatic focusing on the holographic intensity image by using a first image conversion network to obtain an original microscopic image; performing super-resolution reconstruction on the original microscopic image by using a second image conversion network to obtain a super-resolution microscopic image; and obtaining the type and the quantity of the formed components of each urinary sediment by using an image classification network according to the super-resolution microscopic image.
To achieve the above object, the present invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
To achieve the above object, the present invention further proposes a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method described above.
Compared with the prior art, the invention has the beneficial effects that:
according to the urine sediment visible component detection method based on lensless microscopic imaging, lensless microscopic imaging is adopted to replace a traditional optical microscope, an illumination and imaging light path is simplified, an expensive and heavy optical lens is abandoned, the size of microscopic equipment is miniaturized, the weight is light, the cost is low, the operation is simplified, and the medical cost is further saved. Meanwhile, the invention adopts an artificial intelligence algorithm to carry out phase recovery and super-resolution reconstruction on the holographic intensity image acquired by the lens-free microscopic imaging, realizes the microscopic imaging with wide field of view and ultrahigh resolution, does not need to rely on mechanical scanning, image splicing and focusing adjustment to enlarge the field of view and the depth of field, greatly shortens the time of the microscopic imaging, improves the imaging speed of the microscope, improves the imaging quality of the microscope, and can be applied to clinical medical examination. In addition, the invention adopts the deep neural network to concentrate the detection, the segmentation and the identification of the urinary sediment tangible components in one model for multi-task learning, realizes the end-to-end full-automatic detection, the segmentation and the identification of the urinary sediment tangible components, greatly improves the detection speed and the detection precision and lightens the workload of doctors.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a urinary sediment visible component detection method based on lensless microscopy imaging, provided by the invention;
FIG. 2 is a block diagram of a first image transformation network according to an embodiment of the present invention;
FIG. 3 is a block diagram of a second image conversion network according to an embodiment of the present invention;
fig. 4 is a structural diagram of an image classification network according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the technical solutions in the embodiments of the present invention may be combined with each other, provided the combination can be realized by those skilled in the art; where technical solutions are contradictory or cannot be realized, such a combination should be deemed not to exist and falls outside the protection scope of the present invention.
As shown in fig. 1, the present invention provides a method for detecting urinary sediment visible components based on lensless microscopy, comprising:
101: collecting a urinary sediment holographic intensity map by using a lens-free microscopic imaging mode;
102: inputting the holographic intensity map into a trained urinary sediment visible component detection model, wherein the urinary sediment visible component detection model comprises a first image conversion network, a second image conversion network and an image classification network;
the first image conversion network, the second image conversion network and the image classification network are all convolutional neural networks, but the network structures are different.
103: carrying out phase recovery and automatic focusing on the holographic intensity image by using a first image conversion network to obtain an original microscopic image;
104: performing super-resolution reconstruction on the original microscopic image by using a second image conversion network to obtain a super-resolution microscopic image;
105: and obtaining the type and the quantity of the formed components of each urinary sediment by using an image classification network according to the super-resolution microscopic image.
The urinary sediment formed components fall into 7 types: bacteria, crystals, casts, mucus filaments, red blood cells, white blood cells, and squamous epithelial cells.
According to the urinary sediment visible component detection method based on lensless microscopic imaging, lensless microscopic imaging replaces a traditional optical microscope and simplifies the illumination and imaging light paths: the expensive and heavy optical lens is abandoned, so the microscopic equipment is compact, lightweight, low-cost and simple to operate, which further reduces medical costs. Meanwhile, the invention uses an artificial intelligence algorithm to perform phase recovery and super-resolution reconstruction on the holographic intensity image acquired by lensless microscopic imaging, realizing wide-field, ultrahigh-resolution microscopic imaging without relying on mechanical scanning, image stitching or focus adjustment to enlarge the field of view and depth of field; this greatly shortens imaging time, increases imaging speed, improves imaging quality, and makes the method applicable to clinical medical examination. In addition, the invention uses a deep neural network to combine detection, segmentation and identification of the urinary sediment formed components in one model for multi-task learning, realizing end-to-end, fully automatic detection, segmentation and identification of the urinary sediment formed components, which greatly improves detection speed and accuracy and reduces the workload of doctors.
In one embodiment, for step 101, a urine sample of a patient is collected and centrifuged, the supernatant is removed, and the tube is shaken to mix the urinary sediment components; a predetermined amount of the sample is then deposited on a slide and covered with a cover slip. A lens-free microscopic imaging system is then used to capture a urinary sediment holographic intensity map.
The working principle of the lens-free microscopic imaging system is as follows: light emitted by the light source is projected onto the sample; the light transmitted through the sample is called the reference light wave, and the light scattered by the sample is called the scattered light wave. The reference and scattered light waves interfere on the imaging sensor, producing interference fringes that are recorded by the sensor to form a holographic intensity map.
The lens-free microscopic imaging system images without any imaging lens. Based on the Gabor in-line holography principle, it uses an imaging sensor to sample the interference pattern between the light transmitted through the sample and the light scattered by it, and performs holographic reconstruction through digital image processing to obtain a digital microscopic image of the sample. The lens-free microscopic imaging system breaks through the space-bandwidth-product limitation imposed by the lenses of a traditional optical microscope, realizes high-resolution imaging over a large field of view, and can provide rapid diagnosis and accurate detection of clinical samples across that wide field.
In a next embodiment, for step 102, the urinary sediment visible component detection model includes a first image conversion network, a second image conversion network, and an image classification network.
The first image transformation network is shown in fig. 2 and comprises an encoder sub-network, a decoder sub-network and a frequency-time domain transformation sub-network;
the encoder sub-network is composed of 5 convolutional layers and is used for extracting high-level feature representations of an input initial phase image and an input initial amplitude image; the convolution kernel size of each convolution layer is 5 × 5 with a step size of 1.
The decoder sub-network consists of 5 deconvolution layers, each connected to the corresponding convolutional layer in the encoder sub-network through a skip connection, and is used for decoding a clear phase image and amplitude image in the frequency domain after phase recovery and automatic focusing; each deconvolution layer uses a 5 × 5 kernel with a stride of 1.
The frequency-time domain transform sub-network comprises an inverse Fourier transform convolutional layer for converting the clear phase image and amplitude image from the frequency domain to the time domain by an inverse Fourier transform, yielding the original microscopic image of the urinary sediment.
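To make the architecture concrete, the following is a minimal PyTorch sketch of the first image conversion network as described above. The channel width, padding, activation and the exact way the recovered amplitude and phase are recombined before the inverse Fourier transform are illustrative assumptions; the patent only fixes the layer counts, the 5 × 5 kernels, the stride of 1, the skip connections and the inverse Fourier transform output layer.

```python
import torch
import torch.nn as nn

class PhaseRecoveryNet(nn.Module):
    """Encoder-decoder with skip connections followed by an inverse-FFT output layer."""
    def __init__(self, width=32):
        super().__init__()
        chans = [2, width, width, width, width, width]        # 2 = initial phase + amplitude
        self.enc = nn.ModuleList([                             # encoder: 5 conv layers, 5x5, stride 1
            nn.Conv2d(chans[i], chans[i + 1], kernel_size=5, stride=1, padding=2)
            for i in range(5)])
        self.dec = nn.ModuleList([                             # decoder: 5 deconv layers, 5x5, stride 1
            nn.ConvTranspose2d(chans[5 - i], chans[4 - i], kernel_size=5, stride=1, padding=2)
            for i in range(5)])
        self.act = nn.ReLU(inplace=True)

    def forward(self, phase, amplitude):                       # each input: B x 1 x H x W
        x = torch.cat([phase, amplitude], dim=1)
        skips = []
        for conv in self.enc:                                   # extract high-level features
            x = self.act(conv(x))
            skips.append(x)
        for i, deconv in enumerate(self.dec):                   # decode with jump (skip) connections
            x = deconv(x + skips[4 - i])
            if i < 4:
                x = self.act(x)
        rec_phase, rec_amp = x[:, :1], x[:, 1:]                 # recovered frequency-domain pair
        spectrum = torch.polar(rec_amp, rec_phase)              # amplitude * exp(i * phase)
        micro = torch.fft.ifft2(spectrum)                       # frequency domain -> time (spatial) domain
        return micro.abs()                                      # original microscopic image
```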
In another embodiment, because the pixel pitch of the imaging sensor is limited, the original microscopic image of the sample acquired by lens-free microscopic imaging has a low resolution. To obtain a high-resolution microscopic image for detection and identification by the subsequent computer-aided analysis system, super-resolution image reconstruction is required. This embodiment constructs the second image conversion network based on deep learning, which reconstructs a high-resolution microscopic image from the input low-resolution microscopic image.
The second image conversion network is shown in fig. 3 and comprises a feature extraction sub-network, a nonlinear mapping sub-network and an up-sampling sub-network;
the feature extraction sub-network comprises 4 convolutional layers, wherein the first three convolutional layers are used for carrying out high-dimensional feature extraction, and the fourth convolutional layer is used for carrying out feature dimension reduction; the convolution kernel size for the first three convolutional layers is 3 × 3, and the convolution kernel size for the fourth convolutional layer is 1 × 1.
The nonlinear mapping sub-network comprises 3 convolutional layers, wherein the first two convolutional layers are used for carrying out nonlinear processing on input features, and the third convolutional layer is used for carrying out feature dimension expansion; the convolution kernel sizes of the first two convolutional layers are 3 × 3, and the convolution kernel size of the third convolutional layer is 1 × 1.
The up-sampling sub-network comprises two deconvolution layers and is used for up-sampling the input features to obtain the super-resolution microscopic image. Each deconvolution layer uses a 3 × 3 kernel with a stride of 2.
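The following is a minimal PyTorch sketch of the second image conversion network as described above. The channel widths and the ReLU activations are assumptions for illustration; the patent only fixes the kernel sizes, the layer counts of the three sub-networks, and the stride-2 deconvolutions.

```python
import torch
import torch.nn as nn

class SuperResolutionNet(nn.Module):
    """Feature extraction -> nonlinear mapping -> deconvolution up-sampling (4x overall)."""
    def __init__(self, wide=64, narrow=16):
        super().__init__()
        relu = nn.ReLU(inplace=True)
        self.features = nn.Sequential(                        # feature extraction sub-network
            nn.Conv2d(1, wide, 3, padding=1), relu,
            nn.Conv2d(wide, wide, 3, padding=1), relu,
            nn.Conv2d(wide, wide, 3, padding=1), relu,
            nn.Conv2d(wide, narrow, 1))                        # 1x1 feature dimension reduction
        self.mapping = nn.Sequential(                          # nonlinear mapping sub-network
            nn.Conv2d(narrow, narrow, 3, padding=1), relu,
            nn.Conv2d(narrow, narrow, 3, padding=1), relu,
            nn.Conv2d(narrow, wide, 1))                        # 1x1 feature dimension expansion
        self.upsample = nn.Sequential(                         # up-sampling sub-network
            nn.ConvTranspose2d(wide, wide, 3, stride=2, padding=1, output_padding=1), relu,
            nn.ConvTranspose2d(wide, 1, 3, stride=2, padding=1, output_padding=1))

    def forward(self, low_res):                                # B x 1 x H x W original microscopic image
        return self.upsample(self.mapping(self.features(low_res)))
```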
In a certain embodiment, the image classification network is shown in fig. 4 and includes a feature extraction sub-network, a feature suggestion sub-network, a region of interest alignment sub-network, a mask generation sub-network, and a classification sub-network;
the feature extraction sub-network comprises a series of convolution layers and is used for extracting high-dimensional feature representation of an input image and outputting a feature map;
the feature proposing sub-network comprises a plurality of convolution layers and is used for distributing a fixed surrounding frame to each pixel position in the feature map, and performing binary classification and coordinate regression on pixels in the surrounding frame to obtain a proposing surrounding frame; the enclosure containing the visible components of the urinary sediment is the proposed enclosure.
The interested region alignment sub-network comprises a plurality of convolution layers and is used for mapping the proposed bounding box to the high-dimensional feature and obtaining the interested region feature with fixed size in a bilinear interpolation mode;
the mask generation sub-network adopts a full convolution network structure and is used for performing convolution and deconvolution operations on the characteristics of the region of interest, accurately classifying each pixel and generating an image segmentation mask;
the classification sub-network comprises a plurality of convolution layers and is used for carrying out full connection operation on the characteristics of the region of interest and outputting a final surrounding frame in a regression mode and the category of the urinary sediment visible components in the surrounding frame.
During model training, the mask generation sub-network and the classification sub-network supervise each other until an optimum is reached: the image segmentation mask output by the mask generation sub-network helps optimize the parameters of the classification sub-network, while the classification categories and bounding boxes output by the classification sub-network help optimize the parameters of the mask generation sub-network, so that both finally reach their optimal parameters and the image classification network achieves higher accuracy.
The image classification network is a two-stage framework: the first stage extracts features from the input super-resolution microscopic image and generates proposal bounding boxes, and the second stage classifies the generated proposals and obtains the corresponding segmentation masks and bounding boxes. The quantity and position of each kind of urinary sediment formed component can be obtained from the bounding boxes.
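The sub-networks described above (feature extraction backbone, proposal generation, region-of-interest alignment, mask head and classification head) follow the familiar two-stage instance-segmentation pattern. As an illustration only, and not the patent's exact network, a comparable pipeline can be assembled from torchvision's off-the-shelf Mask R-CNN implementation; the ResNet-50 backbone, the anchor settings and the 0.5 score threshold below are assumptions.

```python
from collections import Counter
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

CLASSES = ["background", "bacteria", "crystals", "casts", "mucus filaments",
           "red blood cells", "white blood cells", "squamous epithelial cells"]

# 7 sediment types + background; may download ImageNet backbone weights on first use
model = maskrcnn_resnet50_fpn(num_classes=len(CLASSES))
model.eval()

@torch.no_grad()
def count_formed_components(super_res_image, score_thresh=0.5):
    """super_res_image: float tensor, 3 x H x W, values in [0, 1]."""
    output = model([super_res_image])[0]                     # dict: boxes, labels, scores, masks
    keep = output["scores"] >= score_thresh
    labels = output["labels"][keep].tolist()
    counts = Counter(CLASSES[i] for i in labels)             # type -> quantity
    return counts, output["boxes"][keep], output["masks"][keep]

# Example call on a dummy super-resolution image
counts, boxes, masks = count_formed_components(torch.rand(3, 512, 512))
print(dict(counts))
```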
In one embodiment, in order to ensure that the image classification network correctly segments and identifies each tangible component in the super-resolution microscopic image, a loss function as shown below is used to guide the training process of the network:
L = L_c + L_b + L_m
wherein L represents the total loss error, L_c represents the classification error, L_b represents the detection error, and L_m represents the segmentation error.
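The sketch below shows how the three loss terms could be combined during training, again using the torchvision Mask R-CNN stand-in from the previous sketch rather than the patent's own network; the dummy image and annotation exist only to make the example runnable, and the stand-in additionally produces RPN losses (loss_objectness, loss_rpn_box_reg) that the formula above does not list.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(num_classes=8)                 # stand-in model, 7 types + background
model.train()

image = torch.rand(3, 256, 256)
target = {                                                   # one dummy annotated object
    "boxes": torch.tensor([[30.0, 30.0, 90.0, 90.0]]),
    "labels": torch.tensor([1]),                             # e.g. "bacteria"
    "masks": torch.zeros(1, 256, 256, dtype=torch.uint8),
}
target["masks"][0, 30:90, 30:90] = 1

loss_dict = model([image], [target])                         # per-task losses in training mode
loss = (loss_dict["loss_classifier"]                         # L_c: classification error
        + loss_dict["loss_box_reg"]                          # L_b: detection (bounding-box) error
        + loss_dict["loss_mask"])                            # L_m: segmentation error
loss.backward()
```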
In a next embodiment, for step 103, performing phase recovery and automatic focusing on the holographic intensity map using the first image conversion network to obtain the original microscopic image comprises:
301: reversely propagating the holographic intensity image for a distance by utilizing an angular spectrum reverse propagation mode in a free space to obtain a group of twin images of a frequency domain, wherein the twin images comprise an initial phase image and an initial amplitude image;
the distance of back propagation is very short, approximately 1mm.
302: and inputting the initial phase image and the initial amplitude image into a first image conversion network, performing phase recovery and automatic focusing to obtain a clear phase image and an amplitude image in a frequency domain, and converting the clear phase image and the clear amplitude image from the frequency domain to a time domain to obtain an original microscopic image of the urinary sediment.
In a certain embodiment, for step 301, reversely propagating the holographic intensity map by a distance using a backward propagation manner of angular spectrum in free space to obtain a set of twin images in the frequency domain, includes:
3011: calculating a light field E (x) obtained by reversely propagating the holographic intensity pattern I (x, y) by a distance-z by utilizing an angular spectrum reverse propagation mode in free space 0 ,y 0 ),
E(x 0 ,y 0 )=f{I(x,y);λ,-z}
Wherein f { } represents a transfer function, and λ represents a wavelength of the light source;
3012: according to said light field E (x) 0 ,y 0 ) And obtaining a set of twin images of the frequency domain.
The invention also provides a urinary sediment visible component detection system based on lensless microscopic imaging, which comprises:
the image acquisition module is used for acquiring a urinary sediment holographic intensity map by using a lensless microscopic imaging mode;
the urine sediment visible component detection module is used for inputting the holographic intensity map into a trained urine sediment visible component detection model, and the urine sediment visible component detection model comprises a first image conversion network, a second image conversion network and an image classification network; carrying out phase recovery and automatic focusing on the holographic intensity image by using a first image conversion network to obtain an original microscopic image; performing super-resolution reconstruction on the original microscopic image by using a second image conversion network to obtain a super-resolution microscopic image; and obtaining the type and the quantity of the formed components of each urinary sediment by using an image classification network according to the super-resolution microscopic image.
In one embodiment, the image acquisition module further comprises:
reversely propagating the holographic intensity image for a distance by utilizing an angular spectrum reverse propagation mode in a free space to obtain a group of twin images of a frequency domain, wherein the twin images comprise an initial phase image and an initial amplitude image;
and inputting the initial phase image and the initial amplitude image into a first image conversion network, performing phase recovery and automatic focusing to obtain a clear phase image and an amplitude image in a frequency domain, and converting the clear phase image and the clear amplitude image from the frequency domain to a time domain to obtain an original microscopic image of the urinary sediment.
In a next embodiment, the image acquisition module further comprises:
calculating, by free-space angular spectrum back-propagation, the light field E(x₀, y₀) obtained by back-propagating the holographic intensity map I(x, y) over a distance −z,
E(x₀, y₀) = f{I(x, y); λ, −z}
wherein f{ } represents a transfer function, and λ represents the wavelength of the light source;
and obtaining a set of twin images in the frequency domain from the light field E(x₀, y₀).
In another embodiment, for the urinary sediment visible component detection module, the first image conversion network includes an encoder subnetwork, a decoder subnetwork, and a frequency-time domain transform subnetwork;
the encoder sub-network is composed of 5 convolutional layers and is used for extracting high-level feature representations of an input initial phase image and an input initial amplitude image;
the decoder sub-network consists of 5 deconvolution layers, and the 5 deconvolution layers are respectively connected with corresponding convolution layers in the encoder sub-network in a jump connection mode and are used for decoding to obtain a clear phase image and an amplitude image in a frequency domain after phase recovery and automatic focusing;
the sub-network comprises an inverse Fourier transform convolutional layer for converting the clear phase image and the clear amplitude image from a frequency domain to a time domain through inverse Fourier transform to obtain an original microscopic image of the urinary sediment.
In a next embodiment, for the urinary sediment visible component detection module, the second image conversion network comprises a feature extraction sub-network, a nonlinear mapping sub-network, and an up-sampling sub-network;
the feature extraction sub-network comprises 4 convolutional layers, wherein the first three convolutional layers are used for carrying out high-dimensional feature extraction, and the fourth convolutional layer is used for carrying out feature dimension reduction;
the nonlinear mapping sub-network comprises 3 convolutional layers, the first two convolutional layers are used for carrying out nonlinear processing on input features, and the third convolutional layer is used for carrying out feature dimension expansion;
the up-sampling sub-network comprises two deconvolution layers and is used for performing up-sampling operation on input features to obtain a super-resolution microscopic image.
In another embodiment, for the urinary sediment tangible element detection module, the image classification network comprises a feature extraction sub-network, a feature proposal sub-network, a region of interest alignment sub-network, a mask generation sub-network and a classification sub-network;
the feature extraction sub-network comprises a series of convolution layers and is used for extracting high-dimensional feature representation of an input image and outputting a feature map;
the feature proposing sub-network comprises a plurality of convolution layers and is used for distributing a fixed surrounding frame to each pixel position in the feature map, and performing binary classification and coordinate regression on pixels in the surrounding frame to obtain a proposing surrounding frame;
the interested region alignment sub-network comprises a plurality of convolution layers and is used for mapping the proposed bounding box to the high-dimensional feature and obtaining the interested region feature with fixed size in a bilinear interpolation mode;
the mask generation sub-network adopts a full convolution network structure and is used for performing convolution and deconvolution operations on the characteristics of the region of interest and accurately classifying each pixel to generate an image segmentation mask;
the classification sub-network comprises a plurality of convolution layers and is used for carrying out full connection operation on the characteristics of the region of interest and outputting a final surrounding frame in a regression mode and the category of the urinary sediment visible components in the surrounding frame.
In one embodiment, for the urinary sediment formed component detection module, the image classification network uses a loss function of,
L = L_c + L_b + L_m
wherein L represents the total loss error, L_c represents the classification error, L_b represents the detection error, and L_m represents the segmentation error.
The invention further provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
The invention also proposes a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method described above.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (9)
1. A urinary sediment visible component detection method based on lens-free microscopic imaging is characterized by comprising the following steps:
collecting a urinary sediment holographic intensity map by using a lens-free microscopic imaging mode;
inputting the holographic intensity map into a trained urinary sediment visible component detection model, wherein the urinary sediment visible component detection model comprises a first image conversion network, a second image conversion network and an image classification network;
carrying out phase recovery and automatic focusing on the holographic intensity image by using a first image conversion network to obtain an original microscopic image, and reversely propagating the holographic intensity image for a distance by using an angular spectrum reverse propagation mode in a free space to obtain a group of twin images in a frequency domain, wherein the twin images comprise an initial phase image and an initial amplitude image; inputting the initial phase image and the initial amplitude image into a first image conversion network, performing phase recovery and automatic focusing to obtain a clear phase image and an amplitude image in a frequency domain, and converting the clear phase image and the clear amplitude image from the frequency domain to a time domain to obtain an original microscopic image of the urinary sediment;
performing super-resolution reconstruction on the original microscopic image by using a second image conversion network to obtain a super-resolution microscopic image;
obtaining the type and the quantity of the visible components of each urinary sediment by using an image classification network according to the super-resolution microscopic image; the types of the urine sediment visible components include 7 types, respectively bacteria, crystals, casts, mucous filaments, red blood cells, white blood cells, and squamous epithelial cells.
2. The method for detecting urinary sediment tangible elements based on lensless microscopy as claimed in claim 1, wherein the step of obtaining a set of twin images in the frequency domain by backward propagating the holographic intensity map for a distance using the angular spectrum backward propagation in free space comprises:
calculating, by free-space angular spectrum back-propagation, the light field E(x₀, y₀) obtained by back-propagating the holographic intensity map I(x, y) over a distance −z,
E(x₀, y₀) = f{I(x, y); λ, −z}
wherein f{ } represents a transfer function, and λ represents a wavelength of the light source;
and obtaining a set of twin images in the frequency domain from the light field E(x₀, y₀).
3. The urinary sediment tangible element detection method based on lensless microscopy as defined in claim 1, wherein the first image transformation network comprises an encoder sub-network, a decoder sub-network, and a frequency-time domain transformation sub-network;
the encoder sub-network is composed of 5 convolutional layers and is used for extracting high-level feature representations of an input initial phase image and an input initial amplitude image;
the decoder sub-network consists of 5 deconvolution layers, and the 5 deconvolution layers are respectively connected with corresponding convolution layers in the encoder sub-network in a jump connection mode and are used for decoding to obtain a clear phase image and an amplitude image in a frequency domain after phase recovery and automatic focusing;
the frequency-time domain transform sub-network comprises an inverse Fourier transform convolutional layer for converting the clear phase image and the clear amplitude image from the frequency domain to the time domain through an inverse Fourier transform to obtain the original microscopic image of the urinary sediment.
4. The urinary sediment tangible element detection method based on lensless microscopy as defined in claim 1, wherein the second image transformation network comprises a feature extraction sub-network, a non-linear mapping sub-network and an up-sampling sub-network;
the feature extraction sub-network comprises 4 convolutional layers, wherein the first three convolutional layers are used for carrying out high-dimensional feature extraction, and the fourth convolutional layer is used for carrying out feature dimension reduction;
the nonlinear mapping sub-network comprises 3 convolution layers, the first two convolution layers are used for carrying out nonlinear processing on input features, and the third convolution layer is used for carrying out feature dimension expansion;
the up-sampling sub-network comprises two deconvolution layers and is used for performing up-sampling operation on input features to obtain a super-resolution microscopic image.
5. The urinary sediment tangible element detection method based on lensless microscopy imaging as claimed in claim 1, wherein the image classification network comprises a feature extraction sub-network, a feature proposal sub-network, a region of interest alignment sub-network, a mask generation sub-network and a classification sub-network;
the feature extraction sub-network comprises a series of convolution layers and is used for extracting high-dimensional feature representation of an input image and outputting a feature map;
the feature proposing sub-network comprises a plurality of convolution layers and is used for distributing a fixed surrounding frame to each pixel position in the feature map, and performing binary classification and coordinate regression on pixels in the surrounding frame to obtain a proposing surrounding frame;
the interested region alignment sub-network comprises a plurality of convolution layers and is used for mapping the proposed bounding box to the high-dimensional feature and obtaining the interested region feature with fixed size in a bilinear interpolation mode;
the mask generation sub-network adopts a full convolution network structure and is used for performing convolution and deconvolution operations on the characteristics of the region of interest and accurately classifying each pixel to generate an image segmentation mask;
the classification sub-network comprises a plurality of convolution layers and is used for carrying out full connection operation on the characteristics of the region of interest and outputting a final surrounding frame in a regression mode and the category of the urinary sediment visible components in the surrounding frame.
6. The urinary sediment tangible element detection method based on lensless microscopy as defined in any one of claims 1 to 5, wherein the image classification network employs a loss function of,
L = L_c + L_b + L_m
wherein L represents the total loss error, L_c represents a classification error, L_b represents a detection error, and L_m represents a segmentation error.
7. A urinary sediment visible component detection system based on lensless microscopic imaging is characterized by comprising:
the image acquisition module is used for acquiring a urinary sediment holographic intensity map by using a lensless microscopic imaging mode;
the urine sediment visible component detection module is used for inputting the holographic intensity map into a trained urine sediment visible component detection model, and the urine sediment visible component detection model comprises a first image conversion network, a second image conversion network and an image classification network; carrying out phase recovery and automatic focusing on the holographic intensity image by using a first image conversion network to obtain an original microscopic image, and reversely propagating the holographic intensity image for a distance by using an angular spectrum reverse propagation mode in a free space to obtain a group of twin images in a frequency domain, wherein the twin images comprise an initial phase image and an initial amplitude image; inputting the initial phase image and the initial amplitude image into a first image conversion network, performing phase recovery and automatic focusing to obtain a clear phase image and an amplitude image in a frequency domain, and converting the clear phase image and the clear amplitude image from the frequency domain to a time domain to obtain an original microscopic image of the urinary sediment; performing super-resolution reconstruction on the original microscopic image by using a second image conversion network to obtain a super-resolution microscopic image; obtaining the type and the quantity of the formed components of each urinary sediment by using an image classification network according to the super-resolution microscopic image; the types of the urine sediment visible components include 7 types, respectively bacteria, crystals, casts, mucous filaments, red blood cells, white blood cells, and squamous epithelial cells.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011177179.XA CN112200726B (en) | 2020-10-29 | 2020-10-29 | Urinary sediment visible component detection method and system based on lensless microscopic imaging |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011177179.XA CN112200726B (en) | 2020-10-29 | 2020-10-29 | Urinary sediment visible component detection method and system based on lensless microscopic imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112200726A CN112200726A (en) | 2021-01-08 |
CN112200726B (en) | 2023-04-07
Family
ID=74011880
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011177179.XA Active CN112200726B (en) | 2020-10-29 | 2020-10-29 | Urinary sediment visible component detection method and system based on lensless microscopic imaging |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112200726B (en) |
Families Citing this family (3)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113822845A (en) * | 2021-05-31 | 2021-12-21 | 腾讯科技(深圳)有限公司 | Method, apparatus, device and medium for hierarchical segmentation of tissue structure in medical image |
CN113408545B (en) * | 2021-06-17 | 2024-03-01 | 浙江光仑科技有限公司 | End-to-end photoelectric detection system and method based on micro-optical device |
CN113554636B (en) * | 2021-07-30 | 2024-06-28 | 西安电子科技大学 | Chip defect detection method based on generation of countermeasure network and calculation hologram |
Citations (2)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108051930A (en) * | 2017-12-29 | 2018-05-18 | 南京理工大学 | Big visual field super-resolution dynamic phasing is without lens microscopic imaging device and reconstructing method |
CN110473166A (en) * | 2019-07-09 | 2019-11-19 | 哈尔滨工程大学 | A kind of urinary formed element recognition methods based on improvement Alexnet model |
Family Cites Families (10)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11514325B2 (en) * | 2018-03-21 | 2022-11-29 | The Regents Of The University Of California | Method and system for phase recovery and holographic image reconstruction using a neural network |
CN108508588B (en) * | 2018-04-23 | 2019-11-15 | 南京大学 | A lensless holographic microscopic phase recovery method and device with multi-constraint information |
US11222415B2 (en) * | 2018-04-26 | 2022-01-11 | The Regents Of The University Of California | Systems and methods for deep learning microscopy |
CN109740697B (en) * | 2019-03-05 | 2023-04-14 | 重庆大学 | Recognition method of formed components in microscopic images of urinary sediment based on deep learning |
CN109884018B (en) * | 2019-03-22 | 2020-09-18 | 华中科技大学 | A method and system for submicron lensless microscopic imaging based on neural network |
CN110473167B (en) * | 2019-07-09 | 2022-06-17 | 哈尔滨工程大学 | Urine sediment image recognition system and method based on deep learning |
CN110308547B (en) * | 2019-08-12 | 2021-09-07 | 青岛联合创智科技有限公司 | Dense sample lens-free microscopic imaging device and method based on deep learning |
CN111189828B (en) * | 2020-01-14 | 2020-12-22 | 北京大学 | A rotating lensless pixel super-resolution imaging system and imaging method thereof |
CN111414880B (en) * | 2020-03-26 | 2022-10-14 | 电子科技大学 | Method for detecting target of active component in microscopic image based on improved RetinaNet |
CN111524138B (en) * | 2020-07-06 | 2020-09-22 | 湖南国科智瞳科技有限公司 | A method and device for microscopic image cell recognition based on multi-task learning |
- 2020-10-29: CN application CN202011177179.XA / patent CN112200726B (active)
Patent Citations (2)
* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108051930A (en) * | 2017-12-29 | 2018-05-18 | 南京理工大学 | Big visual field super-resolution dynamic phasing is without lens microscopic imaging device and reconstructing method |
CN110473166A (en) * | 2019-07-09 | 2019-11-19 | 哈尔滨工程大学 | A kind of urinary formed element recognition methods based on improvement Alexnet model |
Non-Patent Citations (1)
* Cited by examiner, † Cited by third party
Title |
---|
沈美丽 (Shen Meili); 陈殿仁 (Chen Dianren). Application of support vector machine in the classification of urinary sediment formed components. 电子器件 (Electronic Devices). 2006, (01), 98-101. *
Also Published As
Publication number | Publication date |
---|---|
CN112200726A (en) | 2021-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7459336B2 (en) | 2024-04-01 | Systems, devices and methods for image processing to generate images with predictive tagging - Patents.com |
Kang et al. | 2021 | Stainnet: a fast and robust stain normalization network |
Liu et al. | 2022 | Recovery of continuous 3D refractive index maps from discrete intensity-only measurements using neural fields |
CN112200726B (en) | 2023-04-07 | Urinary sediment visible component detection method and system based on lensless microscopic imaging |
CN109670510B (en) | 2023-05-26 | Deep learning-based gastroscope biopsy pathological data screening system |
Viedma et al. | 2022 | Deep learning in retinal optical coherence tomography (OCT): A comprehensive survey |
CN110288597B (en) | 2021-04-02 | Video saliency detection method for wireless capsule endoscopy based on attention mechanism |
CN105182514B (en) | 2018-03-09 | Based on LED light source without lens microscope and its image reconstructing method |
EP2544141A1 (en) | 2013-01-09 | Diagnostic information distribution device and pathology diagnosis system |
Pradhan et al. | 2021 | Computational tissue staining of non-linear multimodal imaging using supervised and unsupervised deep learning |
CN107993229B (en) | 2021-11-19 | Tissue classification method and device based on cardiovascular IVOCT image |
Wu et al. | 2022 | Learned end-to-end high-resolution lensless fiber imaging towards real-time cancer diagnosis |
CN110728666A (en) | 2020-01-24 | Typing method and system for chronic nasosinusitis based on digital pathological slide |
CN108564570A (en) | 2018-09-21 | A kind of method and apparatus of intelligentized pathological tissues positioning |
Kromp et al. | 2019 | Deep Learning architectures for generalized immunofluorescence based nuclear image segmentation |
Combalia Escudero et al. | 2019 | Digitally stained confocal microscopy through deep learning |
CN116843974A (en) | 2023-10-03 | Breast cancer pathological image classification method based on residual neural network |
Sengupta et al. | 2020 | FunSyn-Net: enhanced residual variational auto-encoder and image-to-image translation network for fundus image synthesis |
WO2021198247A1 (en) | 2021-10-07 | Optimal co-design of hardware and software for virtual staining of unlabeled tissue |
US20230105073A1 (en) | 2023-04-06 | Method and apparatus for removing honeycomb artifacts from optical fiber bundle images based on artificial intelligence |
Eadie et al. | 2023 | Fiber bundle image reconstruction using convolutional neural networks and bundle rotation in endomicroscopy |
CN113724150B (en) | 2024-08-02 | Structured light microscopic reconstruction method and device without high signal-to-noise ratio truth image |
JP2022126373A (en) | 2022-08-30 | Information processing device, information processing method, computer program, and medical diagnostic system |
CN113327233A (en) | 2021-08-31 | Cell image detection method based on transfer learning |
Chen et al. | 2024 | Single color digital H&E staining with In-and-Out Net |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2021-01-08 | PB01 | Publication | |
2021-01-26 | SE01 | Entry into force of request for substantive examination | |
2023-04-07 | GR01 | Patent grant | |