{"id":6518,"date":"2022-08-28T20:01:28","date_gmt":"2022-08-28T12:01:28","guid":{"rendered":"http:\/\/139.9.1.231\/?p=6518"},"modified":"2022-09-07T15:26:57","modified_gmt":"2022-09-07T07:26:57","slug":"unetpaper","status":"publish","type":"post","link":"http:\/\/139.9.1.231\/index.php\/2022\/08\/28\/unetpaper\/","title":{"rendered":"U-Net Paper Collection (to be updated) &#8211; Medical Imaging"},"content":{"rendered":"\n<p>Since 2015, U-Net has achieved major breakthroughs in medical image segmentation, ushering in the era of deep learning. Researchers have since proposed many improvements on top of U-Net to boost semantic segmentation performance.<\/p>\n\n\n\n<p>Source: https:\/\/github.com\/ShawnBIT\/UNet-family<\/p>\n\n\n\n<p>How to find code: simply search for the paper at <a rel=\"noreferrer noopener\" href=\"https:\/\/paperswithcode.com\/\" target=\"_blank\">https:\/\/paperswithcode.com\/<\/a><\/p>\n\n\n\n<h1>UNet-family<\/h1>\n\n\n\n<h2><a href=\"https:\/\/github.com\/ShawnBIT\/UNet-family#2015\"><\/a>2015<\/h2>\n\n\n\n<ul><li>U-Net: Convolutional Networks for Biomedical Image Segmentation (MICCAI) [<a href=\"https:\/\/arxiv.org\/pdf\/1505.04597.pdf\">paper<\/a>] [<a href=\"https:\/\/github.com\/ShawnBIT\/UNet-family\/blob\/master\/networks\/UNet.py\">my-pytorch<\/a>][<a href=\"https:\/\/github.com\/zhixuhao\/unet\">keras<\/a>]<\/li><\/ul>\n\n\n\n<h2><a href=\"https:\/\/github.com\/ShawnBIT\/UNet-family#2016\"><\/a>2016<\/h2>\n\n\n\n<ul><li>V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation [<a href=\"http:\/\/campar.in.tum.de\/pub\/milletari2016Vnet\/milletari2016Vnet.pdf\">paper<\/a>] [<a href=\"https:\/\/github.com\/faustomilletari\/VNet\">caffe<\/a>][<a 
href=\"https:\/\/github.com\/mattmacy\/vnet.pytorch\">pytorch<\/a>]<\/li><li>3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation [<a href=\"https:\/\/arxiv.org\/pdf\/1606.06650.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/wolny\/pytorch-3dunet\">pytorch<\/a>]<\/li><\/ul>\n\n\n\n<h2><a href=\"https:\/\/github.com\/ShawnBIT\/UNet-family#2017\"><\/a>2017<\/h2>\n\n\n\n<ul><li>H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes (IEEE Transactions on Medical Imaging)[<a href=\"https:\/\/arxiv.org\/pdf\/1709.07330.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/xmengli999\/H-DenseUNet\">keras<\/a>]<\/li><li>GP-Unet: Lesion Detection from Weak Labels with a 3D Regression Network (MICCAI) [<a href=\"https:\/\/arxiv.org\/pdf\/1705.07999.pdf\">paper<\/a>]<\/li><\/ul>\n\n\n\n<h2><a href=\"https:\/\/github.com\/ShawnBIT\/UNet-family#2018\"><\/a>2018<\/h2>\n\n\n\n<ul><li>UNet++: A Nested U-Net Architecture for Medical Image Segmentation (MICCAI) [<a href=\"https:\/\/arxiv.org\/pdf\/1807.10165.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/ShawnBIT\/UNet-family\/blob\/master\/networks\/UNet_Nested.py\" target=\"_blank\" rel=\"noreferrer noopener\">my-pytorch<\/a>][<a href=\"https:\/\/github.com\/MrGiovanni\/UNetPlusPlus\">keras<\/a>]<\/li><li>MDU-Net: Multi-scale Densely Connected U-Net for biomedical image segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1812.00352.pdf\">paper<\/a>]<\/li><li>DUNet: A deformable network for retinal vessel segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1811.01206.pdf\">paper<\/a>]<\/li><li>RA-UNet: A hybrid deep attention-aware network to extract liver and tumor in CT scans [<a href=\"https:\/\/arxiv.org\/pdf\/1811.01328.pdf\">paper<\/a>]<\/li><li>Dense Multi-path U-Net for Ischemic Stroke Lesion Segmentation in Multiple Image Modalities [<a href=\"https:\/\/arxiv.org\/pdf\/1810.07003.pdf\">paper<\/a>]<\/li><li>Stacked Dense U-Nets with Dual Transformers for Robust Face 
Alignment [<a href=\"https:\/\/arxiv.org\/pdf\/1812.01936.pdf\">paper<\/a>]<\/li><li>Prostate Segmentation using 2D Bridged U-net [<a href=\"https:\/\/arxiv.org\/pdf\/1807.04459.pdf\">paper<\/a>]<\/li><li>nnU-Net: Self-adapting Framework for U-Net-Based Medical Image Segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1809.10486.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/MIC-DKFZ\/nnUNet\">pytorch<\/a>]<\/li><li>SUNet: a deep learning architecture for acute stroke lesion segmentation and outcome prediction in multimodal MRI [<a href=\"https:\/\/arxiv.org\/pdf\/1810.13304.pdf\">paper<\/a>]<\/li><li>IVD-Net: Intervertebral disc localization and segmentation in MRI with a multi-modal UNet [<a href=\"https:\/\/arxiv.org\/pdf\/1811.08305.pdf\">paper<\/a>]<\/li><li>LADDERNET: Multi-Path Networks Based on U-Net for Medical Image Segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1810.07810.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/juntang-zhuang\/LadderNet\">pytorch<\/a>]<\/li><li>Glioma Segmentation with Cascaded Unet [<a href=\"https:\/\/arxiv.org\/pdf\/1810.04008.pdf\">paper<\/a>]<\/li><li>Attention U-Net: Learning Where to Look for the Pancreas [<a href=\"https:\/\/arxiv.org\/pdf\/1804.03999.pdf\">paper<\/a>]<\/li><li>Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1802.06955.pdf\">paper<\/a>]<\/li><li>Concurrent Spatial and Channel \u2018Squeeze &amp; Excitation\u2019 in Fully Convolutional Networks\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/1803.02579.pdf\">[paper]<\/a><\/li><li>A Probabilistic U-Net for Segmentation of Ambiguous Images (NIPS) [<a href=\"https:\/\/arxiv.org\/pdf\/1806.05034.pdf\">paper<\/a>] [<a href=\"https:\/\/github.com\/SimonKohl\/probabilistic_unet\">tensorflow<\/a>]<\/li><li>AnatomyNet: Deep Learning for Fast and Fully Automated Whole-volume Segmentation of Head and Neck Anatomy [<a 
href=\"https:\/\/arxiv.org\/pdf\/1808.05238.pdf\">paper<\/a>]<\/li><li>3D RoI-aware U-Net for Accurate and Efficient Colorectal Cancer Segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1806.10342.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/huangyjhust\/3D-RU-Net\">pytorch<\/a>]<\/li><li>Detection and Delineation of Acute Cerebral Infarct on DWI Using Weakly Supervised Machine Learning (Y-Net) (MICCAI) [<a href=\"https:\/\/link.springer.com\/content\/pdf\/10.1007%2F978-3-030-00931-1.pdf\">paper<\/a>](Page 82)<\/li><li>Fully Dense UNet for 2D Sparse Photoacoustic Tomography Artifact Removal [<a href=\"https:\/\/arxiv.org\/pdf\/1808.10848.pdf\">paper<\/a>]<\/li><\/ul>\n\n\n\n<h2><a href=\"https:\/\/github.com\/ShawnBIT\/UNet-family#2019\"><\/a>2019<\/h2>\n\n\n\n<ul><li>MultiResUNet : Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1902.04049v1.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/nibtehaz\/MultiResUNet\">keras<\/a>]<\/li><li>U-NetPlus: A Modified Encoder-Decoder U-Net Architecture for Semantic and Instance Segmentation of Surgical Instrument [<a href=\"https:\/\/arxiv.org\/pdf\/1902.08994.pdf\">paper<\/a>]<\/li><li>Probability Map Guided Bi-directional Recurrent UNet for Pancreas Segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1903.00923.pdf\">paper<\/a>]<\/li><li>CE-Net: Context Encoder Network for 2D Medical Image Segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1903.02740.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/Guzaiwang\/CE-Net\">pytorch<\/a>]<\/li><li>Graph U-Net [<a href=\"https:\/\/openreview.net\/pdf?id=HJePRoAct7\">paper<\/a>]<\/li><li>A Novel Focal Tversky Loss Function with Improved Attention U-Net for Lesion Segmentation (ISBI) [<a href=\"https:\/\/arxiv.org\/pdf\/1810.07842.pdf\">paper<\/a>]<\/li><li>ST-UNet: A Spatio-Temporal U-Network for Graph-structured Time Series Modeling [<a 
href=\"https:\/\/arxiv.org\/pdf\/1903.05631.pdf\">paper<\/a>]<\/li><li>Connection Sensitive Attention U-NET for Accurate Retinal Vessel Segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1903.05558.pdf\">paper<\/a>]<\/li><li>CIA-Net: Robust Nuclei Instance Segmentation with Contour-aware Information Aggregation [<a href=\"https:\/\/arxiv.org\/pdf\/1903.05358.pdf\">paper<\/a>]<\/li><li>W-Net: Reinforced U-Net for Density Map Estimation [<a href=\"https:\/\/arxiv.org\/pdf\/1903.11249.pdf\">paper<\/a>]<\/li><li>Automated Segmentation of Pulmonary Lobes using Coordination-guided Deep Neural Networks (ISBI oral) [<a href=\"https:\/\/arxiv.org\/pdf\/1904.09106.pdf\">paper<\/a>]<\/li><li>U2-Net: A Bayesian U-Net Model with Epistemic Uncertainty Feedback for Photoreceptor Layer Segmentation in Pathological OCT Scans [<a href=\"https:\/\/arxiv.org\/pdf\/1901.07929.pdf\">paper<\/a>]<\/li><li>ScleraSegNet: an Improved U-Net Model with Attention for Accurate Sclera Segmentation (ICB Honorable Mention Paper Award) [<a href=\"https:\/\/github.com\/ShawnBIT\/Paper-Reading\/blob\/master\/ScleraSegNet.pdf\">paper<\/a>]<\/li><li>AHCNet: An Application of Attention Mechanism and Hybrid Connection for Liver Tumor Segmentation in CT Volumes [<a href=\"https:\/\/github.com\/ShawnBIT\/Paper-Reading\/blob\/master\/AHCNet.pdf\">paper<\/a>]<\/li><li>A Hierarchical Probabilistic U-Net for Modeling Multi-Scale Ambiguities [<a href=\"https:\/\/arxiv.org\/pdf\/1905.13077.pdf\">paper<\/a>]<\/li><li>Recurrent U-Net for Resource-Constrained Segmentation [<a href=\"https:\/\/arxiv.org\/pdf\/1906.04913.pdf\">paper<\/a>]<\/li><li>MFP-Unet: A Novel Deep Learning Based Approach for Left Ventricle Segmentation in Echocardiography [<a href=\"https:\/\/arxiv.org\/pdf\/1906.10486.pdf\">paper<\/a>]<\/li><li>A Partially Reversible U-Net for Memory-Efficient Volumetric Image Segmentation (MICCAI 2019) [<a href=\"https:\/\/arxiv.org\/pdf\/1906.06148.pdf\">paper<\/a>][<a 
href=\"https:\/\/github.com\/RobinBruegger\/PartiallyReversibleUnet\">pytorch<\/a>]<\/li><li>ResUNet-a: a deep learning framework for semantic segmentation of remotely sensed data [<a href=\"https:\/\/arxiv.org\/pdf\/1904.00592v2.pdf\">paper<\/a>]<\/li><li>A multi-task U-net for segmentation with lazy labels [<a href=\"https:\/\/arxiv.org\/pdf\/1906.12177.pdf\">paper<\/a>]<\/li><li>RAUNet: Residual Attention U-Net for Semantic Segmentation of Cataract Surgical Instruments [<a href=\"http:\/\/xxx.itp.ac.cn\/pdf\/1909.10360v1\">paper<\/a>]<\/li><li>3D U2-Net: A 3D Universal U-Net for Multi-Domain Medical Image Segmentation (MICCAI 2019) [<a href=\"https:\/\/arxiv.org\/pdf\/1909.06012.pdf\">paper<\/a>] [<a href=\"https:\/\/github.com\/huangmozhilv\/u2net_torch\/\">pytorch<\/a>]<\/li><li>SegNAS3D: Network Architecture Search with Derivative-Free Global Optimization for 3D Image Segmentation (MICCAI 2019) [<a href=\"https:\/\/arxiv.org\/pdf\/1909.05962.pdf\">paper<\/a>]<\/li><li>3D Dilated Multi-Fiber Network for Real-time Brain Tumor Segmentation in MRI [<a href=\"https:\/\/arxiv.org\/pdf\/1904.03355.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/China-LiuXiaopeng\/BraTS-DMFNet\">pytorch<\/a>] (MICCAI 2019)<\/li><li>The Domain Shift Problem of Medical Image Segmentation and Vendor-Adaptation by Unet-GAN [<a href=\"https:\/\/arxiv.org\/pdf\/1910.13681.pdf\">paper<\/a>]<\/li><li>Recurrent U-Net for Resource-Constrained Segmentation [<a href=\"http:\/\/openaccess.thecvf.com\/content_ICCV_2019\/papers\/Wang_Recurrent_U-Net_for_Resource-Constrained_Segmentation_ICCV_2019_paper.pdf\">paper<\/a>] (ICCV 2019)<\/li><li>Siamese U-Net with Healthy Template for Accurate Segmentation of Intracranial Hemorrhage (MICCAI 2019)<\/li><\/ul>\n\n\n\n<h2><a href=\"https:\/\/github.com\/ShawnBIT\/UNet-family#2020\"><\/a>2020<\/h2>\n\n\n\n<ul><li>U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection (Pattern Recognition 2020) [<a 
href=\"https:\/\/arxiv.org\/pdf\/2005.09007v1.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/NathanUA\/U-2-Net\">pytorch<\/a>]<\/li><li>UNET 3+: A Full-Scale Connected UNet for Medical Image Segmentation (ICASSP 2020) [<a href=\"https:\/\/arxiv.org\/pdf\/2004.08790.pdf\">paper<\/a>][<a href=\"https:\/\/github.com\/ZJUGiveLab\/UNet-Version\">pytorch<\/a>]<\/li><\/ul>\n\n\n\n<h2>2021<\/h2>\n\n\n\n<ul><li>TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation [<a rel=\"noreferrer noopener\" href=\"https:\/\/arxiv.org\/pdf\/2102.04306.pdf\" target=\"_blank\">paper<\/a>][<a rel=\"noreferrer noopener\" href=\"https:\/\/github.com\/Beckschen\/TransUNet\" target=\"_blank\">pytorch<\/a>]<\/li><li>Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation <a rel=\"noreferrer noopener\" href=\"https:\/\/arxiv.org\/abs\/2105.05537\" target=\"_blank\">[paper]<\/a>[<a rel=\"noreferrer noopener\" href=\"https:\/\/github.com\/HuCaoFighting\/Swin-Unet\" target=\"_blank\">pytorch<\/a>]<\/li><li>UCTransNet: Rethinking the Skip Connections in U-Net from a Channel-wise Perspective with Transformer [<a href=\"https:\/\/arxiv.org\/abs\/2109.04335\" target=\"_blank\" rel=\"noreferrer noopener\">paper<\/a>][<a href=\"https:\/\/github.com\/mcgregorwwww\/uctransnet\" target=\"_blank\" rel=\"noreferrer noopener\">pytorch<\/a>]<\/li><\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Since 2015, U-Net has achieved major breakthroughs in medical image segmentation, ushering in the era of deep learning. Researchers have since proposed many improvements on top of U-Net &hellip; <a href=\"http:\/\/139.9.1.231\/index.php\/2022\/08\/28\/unetpaper\/\" class=\"more-link\">Continue reading<span 
class=\"screen-reader-text\">U-Net Paper Collection (to be updated) &#8211; Medical Imaging<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[24],"tags":[],"_links":{"self":[{"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/posts\/6518"}],"collection":[{"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/comments?post=6518"}],"version-history":[{"count":13,"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/posts\/6518\/revisions"}],"predecessor-version":[{"id":7465,"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/posts\/6518\/revisions\/7465"}],"wp:attachment":[{"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/media?parent=6518"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/categories?post=6518"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/139.9.1.231\/index.php\/wp-json\/wp\/v2\/tags?post=6518"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}