CN110807732B - Panoramic stitching system and method for microscopic images - Google Patents
Panoramic stitching system and method for microscopic images
- Publication number: CN110807732B
- Application number: CN201910963671.0A
- Authority: CN (China)
- Prior art keywords: block, view, field, sub, matching
- Prior art date: 2019-10-11
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2200/32—Indexing scheme for image data processing or generation involving image mosaicing
- G06T2207/10056—Microscopic image
- G06T2207/20021—Dividing image into blocks, subimages or windows
- G06T2207/20221—Image fusion; image merging
- G06T2207/30024—Cell structures in vitro; tissue sections in vitro
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The application provides a panoramic stitching system and method for microscopic images. The system comprises a field-of-view sub-block matching module, a field-of-view position fitting module and a block extraction module. The field-of-view sub-block matching module identifies the overlapping areas between images and determines the adjacent-position relations between sub-images, so that the sub-images acquired by a microscopic scanning device are automatically arranged in stitching order. The field-of-view position fitting module fine-tunes positions according to the overlapping areas between sub-images, so that cell positions are stitched accurately. The block extraction module automatically extracts the stitched complete image. With this scheme, stitching is automatic and high-definition images are obtained; the stitching process does not require an explicit scan ordering, so randomly ordered pictures can be handled, giving high compatibility and greatly reducing labor intensity. The stitching process completes within seconds, matching the data-processing intensity of large-scale cloud diagnosis.
Description
Technical Field
The application relates to the field of computer-aided intelligent cell identification, and in particular to a panoramic stitching system and method for microscopic images.
Background
Cytological examination can rapidly distinguish diseases based on the number and morphology of cells, and is of particular significance for identifying and detecting neoplastic diseases. However, manual examination is extremely inefficient in the clinic: even a highly experienced doctor can examine only about 50 samples per day. With the development of technology, fully automatic image analyzers have come into use in cytopathology and can greatly improve efficiency. A high-magnification microscopic scanning device produces many independently focused images of one sample, and these images need to be stitched together. In the prior art, stitching generally combines software and hardware; for example, a scaling ratio is calculated from the displacement distance of the hardware, and the images are then stitched into a complete sample image. However, this demands very high precision from the displacement mechanism of the scanning device, resulting in high hardware cost. Chinese patent document CN100433060C describes a method for stitching, storing and browsing fully automatic microscopic images. In the stitched-images section of its embodiments: "a weighted stitching graph G is constructed; each node in G represents one captured unit image to be stitched, and the weight on the edge between two adjacent nodes is the stitching probability value between the two unit images. A maximum spanning tree T of the stitching graph is generated starting from any node, and the traversal order of the graph is the optimal stitching order.
The maximum spanning tree is generated as follows: first, any node is selected as the current node; among the nodes outside the current tree, the node whose edge to the current node has the largest weight is found, that edge is taken as an edge of the spanning tree, and the node is added to the current tree and becomes the current node; this is repeated until no node remains. Two adjacent unit images are stitched as follows: assuming unit image 1 and unit image 2 overlap, with overlapping range R1 in image 1 and R2 in image 2, a picture PT of a region RT1 in R1 is selected as a template, PT is matched against R2, and the best matching position RT2 is found, so that the stitched relative position between images 1 and 2 can be obtained from RT1 and RT2. All unit images are stitched as follows: the position of the unit image at the first node is set to (0, 0); adjacent unit images are stitched in the optimal stitching order, the relative position of each next unit image to the previous one is acquired, and the absolute position of each unit image is calculated. Memory use does not grow with the size of the whole large image, so memory occupation never affects system performance as the large image grows. Finally, the absolute position of each unit image within the target large image is obtained."
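The greedy maximum-spanning-tree ordering quoted above can be sketched in a few lines. This is a hypothetical illustration (the function name, the dict-of-weights representation, and the choice of node 0 as the start node are my assumptions, not code from either patent):

```python
def max_spanning_order(n, weights):
    """Prim-style maximum spanning tree over n unit images, as described in
    CN100433060C: starting from one node, repeatedly attach the outside node
    whose edge into the current tree has the largest stitching probability.
    weights[(i, j)] = stitching probability between unit images i and j.
    Returns the (parent, child) stitch steps in traversal order."""
    def w(i, j):
        # edge weights are symmetric; accept either key order
        return weights.get((i, j), weights.get((j, i), 0.0))

    in_tree = {0}          # node 0 plays the role of the arbitrary start node
    order = []
    while len(in_tree) < n:
        # heaviest edge from the current tree to any outside node
        best = max(
            ((i, j, w(i, j)) for i in in_tree for j in range(n) if j not in in_tree),
            key=lambda t: t[2],
        )
        order.append((best[0], best[1]))
        in_tree.add(best[1])
    return order
```

With three unit images where images 0-1 and 1-2 overlap strongly, the traversal stitches 1 onto 0, then 2 onto 1, which is exactly the "optimal stitching order" the quoted passage describes.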
In that scheme, a tree structure must first be generated, and automatic identification is difficult with the node-selection scheme; hardware and software must cooperate, i.e. parameters such as the row and column counts of the tree structure and the approximate positions of the image units within it must be known in advance. Without hardware support, the workload of that solution increases significantly: for example, with manually displaced microscopes, or when different models of scanning device set the displacement direction and path inconsistently, the images can often only be ordered manually, greatly increasing the workload. Furthermore, the stitched images in that scheme may have small position differences, which affects image precision.
Disclosure of Invention
The technical problem addressed by the application is to provide a panoramic stitching system and method for microscopic images that further improve the degree of automation of panoramic stitching, improve fault tolerance, and are compatible with as many types of microscopic scanning device as possible.
To solve the above technical problems, the application adopts the following technical scheme. A panoramic stitching system for microscopic images comprises a field-of-view sub-block matching module, a field-of-view position fitting module and a block extraction module;
the field-of-view sub-block matching module identifies the overlapping areas between images and determines the adjacent-position relations between sub-images, so that sub-images acquired by the microscopic scanning device are automatically arranged in stitching order;
the field-of-view position fitting module fine-tunes positions according to the overlapping areas between sub-images, so that cell positions are stitched accurately;
the block extraction module automatically extracts the stitched complete image.
Pixel values at the overlapping positions are analyzed so that the field-of-view sub-block matching module automatically matches the images to their corresponding positions, and the initial value of the two-dimensional transformation matrix from platform offset to pixel offset is calculated from the matched feature points in adjacent fields of view, giving the stitching parameters; specifically, the adjacent position of every field-of-view sub-block relative to the other sub-blocks is determined.
The common part of adjacent fields of view is cut out and divided into a number of small blocks; template matching is applied to search the common overlap area, and matching blocks whose matching score exceeds a threshold of 0.9 are selected.
The template-matching correlation is calculated for all fields of view.
After position matching succeeds, cell positions are stitched accurately by the field-of-view position fitting algorithm. After template matching, the approximate pixel position of each field of view is available, and the maximum pixel deviation is calculated from the initial stitching parameters and the maximum displacement deviation of the platform.
The matching points between each field of view and its neighbours are filtered with the maximum pixel deviation: points whose deviation exceeds it are removed, the stitching parameters are recalculated from the remaining points, and the pixel positions of the fields of view are recalculated with the latest parameters. By iterating this filtering and recalculation, the picture position of each field of view is continuously updated and refined.
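The iterative filter-and-refit loop described above can be sketched with numpy. This is a minimal sketch under stated assumptions (a 2x2 linear stage-to-pixel transform, least-squares refitting, and the function name are illustrative, not the patent's implementation):

```python
import numpy as np

def refit_stitch_params(stage_offsets, pixel_offsets, max_pixel_dev, n_iter=10):
    """Iteratively refit the 2x2 stage-offset -> pixel-offset transform,
    dropping match points whose residual exceeds max_pixel_dev."""
    A = np.asarray(stage_offsets, float)   # (n, 2) platform displacements
    B = np.asarray(pixel_offsets, float)   # (n, 2) measured pixel displacements
    keep = np.ones(len(A), bool)
    T = np.eye(2)
    for _ in range(n_iter):
        # least-squares fit of the transform on the currently kept points
        T, *_ = np.linalg.lstsq(A[keep], B[keep], rcond=None)
        resid = np.linalg.norm(A @ T - B, axis=1)
        new_keep = resid <= max_pixel_dev   # remove points deviating too far
        if np.array_equal(new_keep, keep):  # converged: kept set is stable
            break
        keep = new_keep
    return T, keep
```

On synthetic data with one corrupted match, the loop discards the outlier and recovers the true transform; in the patent's pipeline the recomputed parameters would then be used to update every field of view's pixel position.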
In a preferred scheme, the operation process of the field-of-view sub-block matching module is as follows:
Sa01, input: initialize the result set M;
Sa02, set the current field of view i to the first field of view;
Sa03, find the set J of all fields of view adjacent to the current field of view i;
Sa04, set the current adjacent field of view j to the first field of view in J;
Sa05, find the possible overlapping areas Ri and Rj of field of view i and field of view j;
Sa06, rasterize the template region Ri into a set Pi of template sub-blocks;
Sa07, sort the template sub-block set Pi in descending order of sub-block dynamic range;
Sa08, set the current template sub-block p to the first sub-block in Pi;
Sa09, find the possible overlapping area s of template sub-block p in field of view j;
Sa10, perform a template-matching search with sub-block p as the template and s as the search area;
Sa11, add the best match m to the result set M;
Sa12, find in the result set M the set N of all matches consistent with m;
Sa13, judge whether the sum of the weights in N is larger than a threshold v;
if not, set the current template sub-block p to the next sub-block in Pi, and return to Sa09;
if yes, proceed to the next step;
Sa14, judge whether field of view j is the last field of view in the set J;
if not, set j to the next field of view in J, and return to Sa05;
if yes, proceed to the next step;
Sa15, judge whether field of view i is the last field of view;
if not, set i to the next field of view, and return to Sa03;
if yes, output the result.
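Steps Sa01-Sa15 hinge on the template-matching search of Sa10. Below is a minimal normalized-cross-correlation matcher in plain numpy as a stand-in sketch (the function name and the brute-force scan are illustrative assumptions; a practical implementation would more likely use OpenCV's cv2.matchTemplate with the TM_CCOEFF_NORMED method):

```python
import numpy as np

def ncc_match(template, search):
    """Slide `template` over `search`; return (best_score, (row, col)) of the
    normalized cross-correlation peak. Scores lie in [-1, 1]; the patent's
    scheme keeps matches whose score exceeds 0.9."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best = (-2.0, (0, 0))
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            w = search[r:r + th, c:c + tw]
            wz = w - w.mean()
            d = np.linalg.norm(wz) * tn
            score = float((t * wz).sum() / d) if d > 0 else 0.0
            if score > best[0]:
                best = (score, (r, c))
    return best
```

Cutting a template directly out of the search image recovers its own location with a score of 1.0, which is how the overlap position between two neighbouring fields of view is located in Sa10.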
In a preferred scheme, the field-of-view position fitting process is as follows:
Sa16, input: initialize all field-of-view positions Xi and Yi;
Sa17, set the current field of view i to the first field of view;
Sa18, from the sub-block matching set M, obtain the matching subset Mi that involves field of view i;
Sa19, recalculate the position Xi, Yi of field of view i from the matching subset Mi;
Sa20, judge whether all fields of view have been updated;
if not, set field of view i to the next field of view, and return to Sa18;
if yes, proceed to the next step;
Sa21, calculate the average deviation L between the current-round and previous-round field-of-view positions;
Sa22, judge whether the average deviation L is smaller than a given threshold;
if not, return to Sa17;
if yes, proceed to the next step;
Sa23, normalize and adjust the field-of-view positions;
output all field-of-view positions.
In a preferred scheme, the block extraction process is as follows:
Sa24, obtain the size W and H of the full image;
Sa25, divide the full image into a set B of blocks according to the block size;
Sa26, calculate the positions of all blocks b in the set B;
Sa27, set block b to the first block in the set B;
Sa28, calculate the set Fb of all fields of view that overlap block b;
Sa29, set field of view f to the first field of view in Fb;
Sa30, compute the overlapping areas Rb and Rf of field of view f and block b;
Sa31, copy the image in Rf to Rb;
Sa32, judge whether field of view f is the last field of view in the set Fb;
if not, set f to the next field of view in Fb, and return to Sa30;
if yes, proceed to the next step;
Sa33, save the block b image;
Sa34, judge whether block b is the last block in the set B;
if not, set b to the next block in the set B, and return to Sa28;
if yes, output the result.
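The block extraction of Sa24-Sa34 is essentially a tiling pass: for each fixed-size block, every overlapping field of view copies its intersecting pixels in. A minimal sketch, assuming grayscale arrays and top-left view positions in full-image coordinates (the function name and generator interface are illustrative):

```python
import numpy as np

def extract_blocks(views, positions, full_w, full_h, block=256):
    """views: list of 2-D arrays; positions[i] = (x, y) top-left of view i in
    full-image coordinates. Yields ((bx, by), tile): fixed-size tiles with the
    overlapping field-of-view pixels copied in."""
    for by in range(0, full_h, block):
        for bx in range(0, full_w, block):
            tile = np.zeros((block, block), views[0].dtype)
            for img, (x, y) in zip(views, positions):
                h, w = img.shape
                # intersection of this view with the tile (full-image coords)
                x0, x1 = max(x, bx), min(x + w, bx + block)
                y0, y1 = max(y, by), min(y + h, by + block)
                if x0 < x1 and y0 < y1:
                    tile[y0 - by:y1 - by, x0 - bx:x1 - bx] = img[y0 - y:y1 - y, x0 - x:x1 - x]
            yield (bx, by), tile
```

Later views simply overwrite earlier ones in the overlap region here; the patent's fusion step could replace this with blending, but the tiling bookkeeping is the same.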
A method for panoramic stitching of microscopic images comprises the following steps:
S1, field-of-view sub-block matching: the field-of-view sub-block matching module identifies the overlapping areas between images and determines the adjacent-position relations between sub-images, so that sub-images acquired by the microscopic scanning device are automatically arranged in stitching order;
S2, field-of-view position fitting: the field-of-view position fitting module fine-tunes positions according to the overlapping areas between sub-images, so that cell positions are stitched accurately;
S3, block extraction: the block extraction module automatically extracts the stitched complete image.
In a preferred scheme, the operation process of field-of-view sub-block matching is as follows:
Sa01, input: initialize the result set M;
Sa02, set the current field of view i to the first field of view;
Sa03, find the set J of all fields of view adjacent to the current field of view i;
Sa04, set the current adjacent field of view j to the first field of view in J;
Sa05, find the possible overlapping areas Ri and Rj of field of view i and field of view j;
Sa06, rasterize the template region Ri into a set Pi of template sub-blocks;
Sa07, sort the template sub-block set Pi in descending order of sub-block dynamic range;
Sa08, set the current template sub-block p to the first sub-block in Pi;
Sa09, find the possible overlapping area s of template sub-block p in field of view j;
Sa10, perform a template-matching search with sub-block p as the template and s as the search area;
Sa11, add the best match m to the result set M;
Sa12, find in the result set M the set N of all matches consistent with m;
Sa13, judge whether the sum of the weights in N is larger than a threshold v;
if not, set the current template sub-block p to the next sub-block in Pi, and return to Sa09;
if yes, proceed to the next step;
Sa14, judge whether field of view j is the last field of view in the set J;
if not, set j to the next field of view in J, and return to Sa05;
if yes, proceed to the next step;
Sa15, judge whether field of view i is the last field of view;
if not, set i to the next field of view, and return to Sa03;
if yes, output the result.
In a preferred scheme, the field-of-view position fitting process is as follows:
Sa16, input: initialize all field-of-view positions Xi and Yi;
Sa17, set the current field of view i to the first field of view;
Sa18, from the sub-block matching set M, obtain the matching subset Mi that involves field of view i;
Sa19, recalculate the position Xi, Yi of field of view i from the matching subset Mi;
Sa20, judge whether all fields of view have been updated;
if not, set field of view i to the next field of view, and return to Sa18;
if yes, proceed to the next step;
Sa21, calculate the average deviation L between the current-round and previous-round field-of-view positions;
Sa22, judge whether the average deviation L is smaller than a given threshold;
if not, return to Sa17;
if yes, proceed to the next step;
Sa23, normalize and adjust the field-of-view positions;
output all field-of-view positions.
In a preferred scheme, the block extraction process is as follows:
Sa24, obtain the size W and H of the full image;
Sa25, divide the full image into a set B of blocks according to the block size;
Sa26, calculate the positions of all blocks b in the set B;
Sa27, set block b to the first block in the set B;
Sa28, calculate the set Fb of all fields of view that overlap block b;
Sa29, set field of view f to the first field of view in Fb;
Sa30, compute the overlapping areas Rb and Rf of field of view f and block b;
Sa31, copy the image in Rf to Rb;
Sa32, judge whether field of view f is the last field of view in the set Fb;
if not, set f to the next field of view in Fb, and return to Sa30;
if yes, proceed to the next step;
Sa33, save the block b image;
Sa34, judge whether block b is the last block in the set B;
if not, set b to the next block in the set B, and return to Sa28;
if yes, output the result.
The application provides a panoramic stitching system and method for microscopic images. With this scheme, stitching is automatic and high-definition images are obtained; the stitching process does not require an explicit scan ordering, so randomly ordered pictures can be handled, giving high compatibility and greatly reducing labor intensity. The stitching process completes within seconds, matching the data-processing intensity of large-scale cloud-based diagnosis.
Drawings
The application is further illustrated by the following examples in conjunction with the accompanying drawings:
Fig. 1 is a schematic flow chart of the image stitching process in the present application.
Fig. 2 is a schematic flow chart of field-of-view sub-block matching in the present application.
Fig. 3 is a schematic flow chart of field-of-view position fitting in the present application.
Fig. 4 is a schematic flow chart of block extraction in the present application.
Fig. 5 is a schematic diagram of the field-of-view position fitting operation in the present application.
Fig. 6 is a schematic diagram of the field-of-view sub-block matching operation in the present application.
Detailed Description
Example 1:
As shown in fig. 1, a panoramic stitching system for microscopic images comprises a field-of-view sub-block matching module, a field-of-view position fitting module and a block extraction module;
the field-of-view sub-block matching module identifies the overlapping areas between images and determines the adjacent-position relations between sub-images, so that sub-images acquired by the microscopic scanning device are automatically arranged in stitching order;
the field-of-view position fitting module fine-tunes positions according to the overlapping areas between sub-images, so that cell positions are stitched accurately;
the block extraction module automatically extracts the stitched complete image.
In a preferred scheme, the operation process of the field-of-view sub-block matching module is as follows:
Sa01, input: initialize the result set M;
Sa02, set the current field of view i to the first field of view;
Sa03, find the set J of all fields of view adjacent to the current field of view i;
Sa04, set the current adjacent field of view j to the first field of view in J;
Sa05, find the possible overlapping areas Ri and Rj of field of view i and field of view j;
Sa06, rasterize the template region Ri into a set Pi of template sub-blocks;
Sa07, sort the template sub-block set Pi in descending order of sub-block dynamic range;
Sa08, set the current template sub-block p to the first sub-block in Pi;
Sa09, find the possible overlapping area s of template sub-block p in field of view j;
Sa10, perform a template-matching search with sub-block p as the template and s as the search area;
Sa11, add the best match m to the result set M;
Sa12, find in the result set M the set N of all matches consistent with m;
Sa13, judge whether the sum of the weights in N is larger than a threshold v;
if not, set the current template sub-block p to the next sub-block in Pi, and return to Sa09;
if yes, proceed to the next step;
Sa14, judge whether field of view j is the last field of view in the set J;
if not, set j to the next field of view in J, and return to Sa05;
if yes, proceed to the next step;
Sa15, judge whether field of view i is the last field of view;
if not, set i to the next field of view, and return to Sa03;
if yes, output the result.
In a preferred scheme, the field-of-view position fitting process is as follows:
Sa16, input: initialize all field-of-view positions Xi and Yi;
Sa17, set the current field of view i to the first field of view;
Sa18, from the sub-block matching set M, obtain the matching subset Mi that involves field of view i;
Sa19, recalculate the position Xi, Yi of field of view i from the matching subset Mi;
Sa20, judge whether all fields of view have been updated;
if not, set field of view i to the next field of view, and return to Sa18;
if yes, proceed to the next step;
Sa21, calculate the average deviation L between the current-round and previous-round field-of-view positions;
Sa22, judge whether the average deviation L is smaller than a given threshold;
if not, return to Sa17;
if yes, proceed to the next step;
Sa23, normalize and adjust the field-of-view positions;
output all field-of-view positions.
In a preferred scheme, the block extraction process is as follows:
Sa24, obtain the size W and H of the full image;
Sa25, divide the full image into a set B of blocks according to the block size;
Sa26, calculate the positions of all blocks b in the set B;
Sa27, set block b to the first block in the set B;
Sa28, calculate the set Fb of all fields of view that overlap block b;
Sa29, set field of view f to the first field of view in Fb;
Sa30, compute the overlapping areas Rb and Rf of field of view f and block b;
Sa31, copy the image in Rf to Rb;
Sa32, judge whether field of view f is the last field of view in the set Fb;
if not, set f to the next field of view in Fb, and return to Sa30;
if yes, proceed to the next step;
Sa33, save the block b image;
Sa34, judge whether block b is the last block in the set B;
if not, set b to the next block in the set B, and return to Sa28;
if yes, output the result.
Example 2:
a method for panoramic stitching of microscopic images, comprising the steps of:
S1, field-of-view sub-block matching: the field-of-view sub-block matching module identifies the overlapping areas between images and determines the adjacent-position relations between sub-images, so that sub-images acquired by the microscopic scanning device are automatically arranged in stitching order;
S2, field-of-view position fitting: the field-of-view position fitting module fine-tunes positions according to the overlapping areas between sub-images, so that cell positions are stitched accurately;
S3, block extraction: the block extraction module automatically extracts the stitched complete image.
In a preferred scheme, the operation process of field-of-view sub-block matching is as follows:
Sa01, input: initialize the result set M;
Sa02, set the current field of view i to the first field of view;
Sa03, find the set J of all fields of view adjacent to the current field of view i;
Sa04, set the current adjacent field of view j to the first field of view in J;
Sa05, find the possible overlapping areas Ri and Rj of field of view i and field of view j;
Sa06, rasterize the template region Ri into a set Pi of template sub-blocks;
Sa07, sort the template sub-block set Pi in descending order of sub-block dynamic range;
Sa08, set the current template sub-block p to the first sub-block in Pi;
Sa09, find the possible overlapping area s of template sub-block p in field of view j;
Sa10, perform a template-matching search with sub-block p as the template and s as the search area;
Sa11, add the best match m to the result set M;
Sa12, find in the result set M the set N of all matches consistent with m;
Sa13, judge whether the sum of the weights in N is larger than a threshold v;
if not, set the current template sub-block p to the next sub-block in Pi, and return to Sa09;
if yes, proceed to the next step;
Sa14, judge whether field of view j is the last field of view in the set J;
if not, set j to the next field of view in J, and return to Sa05;
if yes, proceed to the next step;
Sa15, judge whether field of view i is the last field of view;
if not, set i to the next field of view, and return to Sa03;
if yes, output the result.
In a preferred scheme, the field-of-view position fitting process is as follows:
Sa16, input: initialize all field-of-view positions Xi and Yi;
Sa17, set the current field of view i to the first field of view;
Sa18, from the sub-block matching set M, obtain the matching subset Mi that involves field of view i;
Sa19, recalculate the position Xi, Yi of field of view i from the matching subset Mi;
Sa20, judge whether all fields of view have been updated;
if not, set field of view i to the next field of view, and return to Sa18;
if yes, proceed to the next step;
Sa21, calculate the average deviation L between the current-round and previous-round field-of-view positions;
Sa22, judge whether the average deviation L is smaller than a given threshold;
if not, return to Sa17;
if yes, proceed to the next step;
Sa23, normalize and adjust the field-of-view positions;
output all field-of-view positions.
In a preferred scheme, the block extraction process is as follows:
Sa24, obtain the size W and H of the full image;
Sa25, divide the full image into a set B of blocks according to the block size;
Sa26, calculate the positions of all blocks b in the set B;
Sa27, set block b to the first block in the set B;
Sa28, calculate the set Fb of all fields of view that overlap block b;
Sa29, set field of view f to the first field of view in Fb;
Sa30, compute the overlapping areas Rb and Rf of field of view f and block b;
Sa31, copy the image in Rf to Rb;
Sa32, judge whether field of view f is the last field of view in the set Fb;
if not, set f to the next field of view in Fb, and return to Sa30;
if yes, proceed to the next step;
Sa33, save the block b image;
Sa34, judge whether block b is the last block in the set B;
if not, set b to the next block in the set B, and return to Sa28;
if yes, output the result.
Example 3:
based on examples 1-2, as shown in fig. 5-6, a certain cytopathology analysis example is taken as an example: the images automatically acquired from the microscanning device are, as shown in the upper diagram of fig. 5, ordered irregularly for each sub-image, depending on the microscanning device automated acquisition path, ensuring that each image has a position of overlap with each other during acquisition. And analyzing pixel values of overlapping positions, automatically matching the images to corresponding positions through a vision sub-block matching intelligent algorithm, and calculating initial values of a two-dimensional transformation matrix from platform offset to pixel offset according to the matched characteristic points in adjacent vision to obtain splicing parameters. In particular, the neighboring positions of the respective view sub-blocks, i.e. sub-images, with respect to the other sub-images are determined. Cutting the public part of the adjacent visual field, dividing the public part into a plurality of small blocks, adopting template matching, searching a public coincidence area, and selecting a matching block with a matching threshold value larger than 0.9. And calculating the correlation of template matching of all the fields of view. As shown in fig. 6, after the position matching is successful, the positions of the cells are slightly deviated, and the positions of the cells are accurately spliced through a visual field position fitting intelligent algorithm. 
Specifically, template matching yields the approximate pixel position of each field of view. The maximum pixel deviation is calculated from the initial stitching parameters and the maximum displacement deviation of the platform; the matching points between each field of view and its neighbors are filtered with this maximum pixel deviation, points whose deviation exceeds it are removed, the stitching parameters are recalculated from the remaining points, and the pixel positions of the fields of view are recalculated with the latest stitching parameters. By iterating this filtering and recalculation, the picture position of each field of view is continuously refined, the error shrinks, and the stitching becomes more accurate. After the picture position of each field of view has been calculated, the brightness of each field of view is corrected with the background image computed during scanning, which improves the visual perception for the doctor viewing each field image; a complete slide picture can then be stitched, and the whole stitched image is extracted as blocks. The large map is finally cut as required into pictures of the desired width and height, since a single map with all fields of view stitched together would be unnecessarily large.
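The screen-and-refit loop described above can be sketched as follows. This is a minimal sketch under the assumption that the stitching parameters are a 2x2 matrix mapping platform offsets to pixel offsets, fitted by least squares; the function name and the convergence criterion (the kept set no longer changing) are illustrative, not taken from the patent.

```python
import numpy as np

def fit_stitch_params(stage_offsets, pixel_offsets, max_dev, max_iter=10):
    """Iteratively fit a 2x2 matrix A with pixel_i ~= A @ stage_i, discarding
    matches whose residual exceeds max_dev and refitting on the rest."""
    stage = np.asarray(stage_offsets, float)   # (n, 2) platform offsets
    pixel = np.asarray(pixel_offsets, float)   # (n, 2) measured pixel offsets
    keep = np.ones(len(stage), dtype=bool)
    X = np.eye(2)
    for _ in range(max_iter):
        # least-squares fit on the currently kept matches: stage @ X ~= pixel
        X, *_ = np.linalg.lstsq(stage[keep], pixel[keep], rcond=None)
        resid = np.linalg.norm(pixel - stage @ X, axis=1)
        new_keep = resid <= max_dev            # filter by max pixel deviation
        if (new_keep == keep).all():
            break                              # kept set is stable: converged
        keep = new_keep
    return X.T, keep                           # A such that pixel_i ~= A @ stage_i
```

Note that the residuals are recomputed over all matches each round, so a point wrongly discarded by an early, outlier-skewed fit is recovered once the fit improves.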
The above embodiments are merely preferred embodiments of the present application and should not be construed as limiting it; the embodiments and the features of the embodiments may be combined with each other as long as they do not conflict. The protection scope of the present application is defined by the claims and includes equivalent alternatives to the technical features of the claims; that is, equivalent replacements and modifications within the scope of this application also fall within its protection scope.
Claims (8)
1. A panoramic stitching system for microscopic images, comprising: a view sub-block matching module, a view position fitting module and a block extraction module;
the view sub-block matching module is used for identifying the overlapping areas among the images and determining the adjacent position relations among the sub-images, so that the sub-images acquired by the micro-scanning device are automatically arranged in the stitching order of the images;
the view position fitting module is used for finely adjusting the positions according to the overlapping areas among the sub-images so as to stitch the cell positions accurately;
the block extraction module is used for automatically extracting the stitched complete image;
the pixel values of the overlapping positions are analyzed, the images are automatically matched to their corresponding positions by the view sub-block matching module, and the initial values of a two-dimensional transformation matrix from platform offset to pixel offset are calculated from the matched feature points in adjacent views to obtain the stitching parameters; specifically, the adjacent positions of each view sub-block relative to the other view sub-blocks are determined;
the common parts of adjacent views are cut out and divided into several small blocks, template matching is used to search the common overlap area, and matching blocks whose matching score is larger than 0.9 are selected;
the template-matching correlation of all views is calculated;
after the position matching succeeds, the cell positions are stitched accurately by the view position fitting intelligent algorithm; after template matching, the approximate pixel position of each view is obtained, and the maximum pixel deviation is calculated from the initial stitching parameters and the maximum displacement deviation of the platform;
the matching points between each view and its adjacent views are filtered with the maximum pixel deviation, points whose deviation is larger than the maximum pixel deviation are removed, the stitching parameters are recalculated from the filtered points, the pixel positions of the views are recalculated with the latest stitching parameters, and the picture positions of the views are continuously updated and refined by iterative filtering and recalculation.
2. The panoramic stitching system for microscopic images according to claim 1, wherein the view sub-block matching module operates as follows:
Sa01, inputting, and initializing a result set M;
Sa02, setting the current field of view i as the first field of view;
Sa03, finding the set J of all fields of view adjacent to the current field of view i;
Sa04, setting the current adjacent field of view j as the first field of view in J;
Sa05, finding the possible overlapping areas Ri and Rj of the field of view i and the field of view j;
Sa06, rasterizing the template region Ri into a template sub-block set Pi;
Sa07, sorting the template sub-block set Pi in descending order of sub-block dynamic range;
Sa08, setting the current template sub-block P as the first sub-block in the set Pi;
Sa09, finding the possible overlapping area s of the template sub-block P in the field of view j;
Sa10, performing a template matching search with the template sub-block P as the template and s as the search area;
Sa11, adding the best match m to the result set M;
Sa12, finding the set N of all matches in the result set M that are consistent with m;
Sa13, judging whether the sum of the weights in N is larger than a threshold v;
if not, setting the current template sub-block P as the next sub-block in the set Pi, and returning to Sa09;
if yes, performing the next step;
Sa14, judging whether the field of view j is the last field of view in the set J;
if not, setting j as the next field of view in J, and returning to Sa05;
if yes, performing the next step;
Sa15, judging whether the field of view i is the last field of view;
if not, setting i as the next field of view, and returning to Sa03;
if yes, outputting the result.
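The nested loop of steps Sa01-Sa15 above can be sketched as a skeleton. This is purely illustrative: the helper names, the fixed sub-block size of 16, and the `(weight, position)` match format are assumptions, and `match` stands in for the template-matching search of step Sa10.

```python
import numpy as np

def dynamic_range(block):
    """Dynamic range of a sub-block, used as the sort key of step Sa07."""
    return float(block.max() - block.min())

def rasterize(region, size):
    """Sa06: split a 2-D region into square sub-blocks of the given size."""
    h, w = region.shape
    return [((r, c), region[r:r + size, c:c + size])
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def match_fields(fields, neighbors, overlap, match, weight_threshold):
    """Sa01-Sa15: for every field i and each neighbor j, try template
    sub-blocks (highest dynamic range first) until the accumulated weight
    of the matches for the pair (i, j) exceeds weight_threshold."""
    results = []                                   # Sa01: result set M
    for i in fields:                               # Sa02/Sa15: loop over fields
        for j in neighbors(i):                     # Sa03/Sa14: loop over neighbors
            Ri, Rj = overlap(i, j)                 # Sa05: possible overlap areas
            blocks = rasterize(Ri, 16)             # Sa06
            blocks.sort(key=lambda b: dynamic_range(b[1]), reverse=True)  # Sa07
            for pos, P in blocks:                  # Sa08-Sa13
                m = match(P, Rj)                   # Sa10: template-matching search
                if m is None:
                    continue
                results.append((i, j, pos, m))     # Sa11: add best match
                consistent = [r for r in results if r[:2] == (i, j)]  # Sa12
                if sum(r[3][0] for r in consistent) > weight_threshold:  # Sa13
                    break                          # enough evidence for (i, j)
    return results
```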
3. The panoramic stitching system for microscopic images according to claim 1, wherein the view position fitting process is as follows:
Sa16, inputting, and initializing all view positions Xi and Yi;
Sa17, setting the current field of view i as the first field of view;
Sa18, obtaining from the sub-block matching set M the matching subset Mi that contains the field of view i;
Sa19, recalculating the positions Xi and Yi of the field of view i according to the matching subset Mi;
Sa20, judging whether all fields of view have been updated;
if not, setting the field of view i as the next field of view, and returning to Sa18;
if yes, performing the next step;
Sa21, calculating the average deviation L between the current-round and previous-round view positions;
Sa22, judging whether the average deviation L is smaller than a set threshold;
if not, returning to Sa17;
if yes, performing the next step;
Sa23, performing view position normalization adjustment;
and outputting all the view positions.
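The fitting loop of steps Sa16-Sa23 above can be sketched as follows. The averaging update rule (re-estimating each position as the mean of the positions implied by its pairwise match offsets) is an assumption, since the claim does not specify how the positions are recalculated from Mi; the normalization of Sa23 is likewise assumed to be a shift of the minimum to the origin.

```python
import numpy as np

def fit_positions(init, matches, tol=0.5, max_rounds=100):
    """Sa16-Sa23: iteratively re-estimate each field-of-view position from
    its pairwise match offsets until the mean per-round movement of the
    positions falls below `tol`, then shift so the minimum is at (0, 0)."""
    pos = {i: np.asarray(p, float) for i, p in init.items()}   # Sa16
    for _ in range(max_rounds):                                # Sa17-Sa22 loop
        prev = {i: p.copy() for i, p in pos.items()}
        for i in pos:                                          # Sa18-Sa20
            # matches: {(a, b): offset} meaning pos[b] - pos[a] ~= offset
            est = [pos[b] - off for (a, b), off in matches.items() if a == i]
            est += [pos[a] + off for (a, b), off in matches.items() if b == i]
            if est:
                pos[i] = np.mean(est, axis=0)                  # Sa19: refit i
        L = np.mean([np.linalg.norm(pos[i] - prev[i]) for i in pos])  # Sa21
        if L < tol:                                            # Sa22: converged
            break
    origin = np.min(list(pos.values()), axis=0)                # Sa23: normalize
    return {i: p - origin for i, p in pos.items()}
```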
4. The panoramic stitching system for microscopic images according to claim 1, wherein the block extraction process is as follows:
Sa24, extracting the width W and height H of the full image;
Sa25, dividing the full image into a set B of blocks according to the block size;
Sa26, calculating the positions of all blocks b in the set B;
Sa27, setting the block b as the first block in the set B;
Sa28, calculating the set Fb of all fields of view overlapping with the block b;
Sa29, setting the field of view f as the first field of view in Fb;
Sa30, finding the overlapping areas Rb and Rf of the field of view f and the block b;
Sa31, copying the image in Rf to Rb;
Sa32, judging whether the field of view f is the last field of view in the set Fb;
if not, setting f as the next field of view in Fb, and returning to Sa29;
if yes, performing the next step;
Sa33, saving the image of the block b;
Sa34, judging whether the block b is the last block in the set B;
if not, setting b as the next block in the set B, and returning to Sa28;
if yes, outputting the result.
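The block extraction of steps Sa24-Sa34 above can be sketched as follows; a minimal sketch assuming integer pixel positions for the fitted fields of view and a last-writer-wins copy in overlap regions, with illustrative names throughout.

```python
import numpy as np

def extract_blocks(fields, full_w, full_h, block_size):
    """Sa24-Sa34: tile the W x H mosaic into blocks and, for every field of
    view overlapping a block, copy the overlapping pixels into the block.
    `fields` maps an (x, y) top-left position to a 2-D image array."""
    blocks = {}
    for by in range(0, full_h, block_size):              # Sa25-Sa27: block grid
        for bx in range(0, full_w, block_size):
            h = min(block_size, full_h - by)             # edge blocks may be smaller
            w = min(block_size, full_w - bx)
            block = np.zeros((h, w))
            for (fx, fy), img in fields.items():         # Sa28-Sa32: fields in Fb
                # Sa30: overlap rectangle of this field and this block
                x0, x1 = max(bx, fx), min(bx + w, fx + img.shape[1])
                y0, y1 = max(by, fy), min(by + h, fy + img.shape[0])
                if x0 < x1 and y0 < y1:                  # Sa31: copy Rf -> Rb
                    block[y0 - by:y1 - by, x0 - bx:x1 - bx] = \
                        img[y0 - fy:y1 - fy, x0 - fx:x1 - fx]
            blocks[(bx, by)] = block                     # Sa33: save block b
    return blocks                                        # Sa34: output result
```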
5. A panoramic stitching method for microscopic images, characterized by comprising the following steps, performed by a system comprising a view sub-block matching module, a view position fitting module and a block extraction module:
S1, view sub-block matching: the view sub-block matching module identifies the overlapping areas among the images and determines the adjacent position relations among the sub-images, so that the sub-images acquired by the micro-scanning device are automatically arranged in the stitching order of the images;
S2, view position fitting: the view position fitting module finely adjusts the positions according to the overlapping areas among the sub-images so as to stitch the cell positions accurately;
S3, block extraction: the block extraction module automatically extracts the stitched complete image;
the pixel values of the overlapping positions are analyzed, the images are automatically matched to their corresponding positions by the view sub-block matching module, and the initial values of a two-dimensional transformation matrix from platform offset to pixel offset are calculated from the matched feature points in adjacent views to obtain the stitching parameters; specifically, the adjacent positions of each view sub-block relative to the other view sub-blocks are determined;
the common parts of adjacent views are cut out and divided into several small blocks, template matching is used to search the common overlap area, and matching blocks whose matching score is larger than 0.9 are selected;
the template-matching correlation of all views is calculated;
after the position matching succeeds, the cell positions are stitched accurately by the view position fitting intelligent algorithm; after template matching, the approximate pixel position of each view is obtained, and the maximum pixel deviation is calculated from the initial stitching parameters and the maximum displacement deviation of the platform;
the matching points between each view and its adjacent views are filtered with the maximum pixel deviation, points whose deviation is larger than the maximum pixel deviation are removed, the stitching parameters are recalculated from the filtered points, the pixel positions of the views are recalculated with the latest stitching parameters, and the picture positions of the views are continuously updated and refined by iterative filtering and recalculation.
6. The panoramic stitching method for microscopic images according to claim 5, wherein the view sub-block matching operates as follows:
Sa01, inputting, and initializing a result set M;
Sa02, setting the current field of view i as the first field of view;
Sa03, finding the set J of all fields of view adjacent to the current field of view i;
Sa04, setting the current adjacent field of view j as the first field of view in J;
Sa05, finding the possible overlapping areas Ri and Rj of the field of view i and the field of view j;
Sa06, rasterizing the template region Ri into a template sub-block set Pi;
Sa07, sorting the template sub-block set Pi in descending order of sub-block dynamic range;
Sa08, setting the current template sub-block P as the first sub-block in the set Pi;
Sa09, finding the possible overlapping area s of the template sub-block P in the field of view j;
Sa10, performing a template matching search with the template sub-block P as the template and s as the search area;
Sa11, adding the best match m to the result set M;
Sa12, finding the set N of all matches in the result set M that are consistent with m;
Sa13, judging whether the sum of the weights in N is larger than a threshold v;
if not, setting the current template sub-block P as the next sub-block in the set Pi, and returning to Sa09;
if yes, performing the next step;
Sa14, judging whether the field of view j is the last field of view in the set J;
if not, setting j as the next field of view in J, and returning to Sa05;
if yes, performing the next step;
Sa15, judging whether the field of view i is the last field of view;
if not, setting i as the next field of view, and returning to Sa03;
if yes, outputting the result.
7. The panoramic stitching method for microscopic images according to claim 5, wherein the view position fitting process is as follows:
Sa16, inputting, and initializing all view positions Xi and Yi;
Sa17, setting the current field of view i as the first field of view;
Sa18, obtaining from the sub-block matching set M the matching subset Mi that contains the field of view i;
Sa19, recalculating the positions Xi and Yi of the field of view i according to the matching subset Mi;
Sa20, judging whether all fields of view have been updated;
if not, setting the field of view i as the next field of view, and returning to Sa18;
if yes, performing the next step;
Sa21, calculating the average deviation L between the current-round and previous-round view positions;
Sa22, judging whether the average deviation L is smaller than a set threshold;
if not, returning to Sa17;
if yes, performing the next step;
Sa23, performing view position normalization adjustment;
and outputting all the view positions.
8. The panoramic stitching method for microscopic images according to claim 5, wherein the block extraction process is as follows:
Sa24, extracting the width W and height H of the full image;
Sa25, dividing the full image into a set B of blocks according to the block size;
Sa26, calculating the positions of all blocks b in the set B;
Sa27, setting the block b as the first block in the set B;
Sa28, calculating the set Fb of all fields of view overlapping with the block b;
Sa29, setting the field of view f as the first field of view in Fb;
Sa30, finding the overlapping areas Rb and Rf of the field of view f and the block b;
Sa31, copying the image in Rf to Rb;
Sa32, judging whether the field of view f is the last field of view in the set Fb;
if not, setting f as the next field of view in Fb, and returning to Sa29;
if yes, performing the next step;
Sa33, saving the image of the block b;
Sa34, judging whether the block b is the last block in the set B;
if not, setting b as the next block in the set B, and returning to Sa28;
if yes, outputting the result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910963671.0A CN110807732B (en) | 2019-10-11 | 2019-10-11 | Panoramic stitching system and method for microscopic images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110807732A CN110807732A (en) | 2020-02-18 |
CN110807732B true CN110807732B (en) | 2023-08-29 |
Family
ID=69488279
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910963671.0A Active CN110807732B (en) | 2019-10-11 | 2019-10-11 | Panoramic stitching system and method for microscopic images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110807732B (en) |
Families Citing this family (3)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113469863B (en) * | 2021-06-28 | 2024-04-26 | 平湖莱顿光学仪器制造有限公司 | Method and equipment for acquiring microscopic image |
WO2023070409A1 (en) * | 2021-10-27 | 2023-05-04 | 华为技术有限公司 | Image splicing method and apparatus |
CN117422633B (en) * | 2023-11-15 | 2024-07-30 | 珠海横琴圣澳云智科技有限公司 | Sample visual field image processing method and device |
Citations (23)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101093280A (en) * | 2006-06-22 | 2007-12-26 | 北京普利生仪器有限公司 | Method for preparing microscopic image of holographic digitalized sliced sheet |
CN101288585A (en) * | 2007-04-17 | 2008-10-22 | 天津市索维电子技术有限公司 | Method for panoramic imaging ophthalmology protomerite detected by ultrasound biological microscopes |
CN102129704A (en) * | 2011-02-23 | 2011-07-20 | 山东大学 | SURF operand-based microscope image splicing method |
CN103226822A (en) * | 2013-05-15 | 2013-07-31 | 清华大学 | Medical image stitching method |
CN104123708A (en) * | 2014-08-19 | 2014-10-29 | 中国科学院自动化研究所 | Splicing structure of microscopic scattering dark field image on surface of optical element |
JP2015207998A (en) * | 2014-04-22 | 2015-11-19 | 蘇州比特速浪電子科技有限公司 | Image processing system and image processing method |
CN105181538A (en) * | 2015-10-20 | 2015-12-23 | 丹东百特仪器有限公司 | Granularity and particle form analyzer with scanning and splicing functions for dynamic particle image and method |
CN105574815A (en) * | 2015-12-21 | 2016-05-11 | 湖南优象科技有限公司 | Image splicing method and device used for scanning mouse |
CN106707484A (en) * | 2016-12-16 | 2017-05-24 | 上海理工大学 | Super-resolution optical microscopic imaging method based on particle scattered light near-field lighting |
CN106709868A (en) * | 2016-12-14 | 2017-05-24 | 云南电网有限责任公司电力科学研究院 | Image stitching method and apparatus |
CN106886977A (en) * | 2017-02-08 | 2017-06-23 | 徐州工程学院 | A kind of many figure autoregistrations and anastomosing and splicing method |
CN107451985A (en) * | 2017-08-01 | 2017-12-08 | 中国农业大学 | A kind of joining method of the micro- sequence image of mouse tongue section |
CN107833179A (en) * | 2017-09-05 | 2018-03-23 | 云南电网有限责任公司昆明供电局 | The quick joining method and system of a kind of infrared image |
CN107958442A (en) * | 2017-12-07 | 2018-04-24 | 中国科学院自动化研究所 | Gray correction method and device in several Microscopic Image Mosaicings |
WO2018157682A1 (en) * | 2017-02-28 | 2018-09-07 | 浙江大学 | Wide-field, multi-scale and high-resolution microimaging system and method |
CN108765285A (en) * | 2018-05-08 | 2018-11-06 | 北京科技大学 | A kind of large scale micro-image generation method based on video definition fusion |
CN108776823A (en) * | 2018-07-06 | 2018-11-09 | 武汉兰丁医学高科技有限公司 | Cervical carcinoma lesion analysis method based on cell image recognition |
CN108830788A (en) * | 2018-04-25 | 2018-11-16 | 安徽师范大学 | A kind of plain splice synthetic method of histotomy micro-image |
CN109191380A (en) * | 2018-09-10 | 2019-01-11 | 广州鸿琪光学仪器科技有限公司 | Joining method, device, computer equipment and the storage medium of micro-image |
CN109559275A (en) * | 2018-11-07 | 2019-04-02 | 苏州迈瑞科技有限公司 | A kind of Urine Analyzer MIcrosope image joining method |
CN109584156A (en) * | 2018-10-18 | 2019-04-05 | 中国科学院自动化研究所 | Micro- sequence image splicing method and device |
CN109740697A (en) * | 2019-03-05 | 2019-05-10 | 重庆大学 | A deep learning-based method for identifying formed components in microscopic images of urine sediment |
CN110288528A (en) * | 2019-06-25 | 2019-09-27 | 山东大学 | An image mosaic system and method for micro-nano visual observation |
Family Cites Families (2)
* Cited by examiner, † Cited by third partyPublication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5434621B2 (en) * | 2010-01-19 | 2014-03-05 | ソニー株式会社 | Information processing apparatus, information processing method, and program thereof |
JP2014066788A (en) * | 2012-09-25 | 2014-04-17 | Sony Corp | Image display device, and image display system |
Non-Patent Citations (1)
* Cited by examiner, † Cited by third partyTitle |
---|
Chen Jin; Sun Xinghua; Lu Jianfeng. Application of integral images in panorama technology. Computer Engineering and Applications, 2007, No. 25, full text. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
2020-02-18 | PB01 | Publication | |
2020-03-13 | SE01 | Entry into force of request for substantive examination | |
2023-08-18 | CB02 | Change of applicant information | |
Address after: Floor 1 and 2, unit B, C and D, building B7, medical instrument Park, 818 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province, 430000
Applicant after: Wuhan Lanting intelligent Medicine Co.,Ltd.
Address before: 430073 floor 1 and 2, unit B, C and D, building B7, medical instrument Park, 818 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province
Applicant before: WUHAN LANDING MEDICAL HI-TECH Ltd.
2023-08-29 | GR01 | Patent grant | |