<h1>3D Photography Papers</h1>

<p><strong><em>A collection of 3D machine-learning resources: <a href="https://github.com/timzhang642/3D-Machine-Learning">https://github.com/timzhang642/3D-Machine-Learning</a></em></strong></p>

<p>A paper list of 3D photography and cinemagraphs.</p>

<p>This list is non-exhaustive. Feel free to open pull requests or create issues to add papers.</p>

<p>Following <a href="https://github.com/timzhang642/3D-Machine-Learning">this repo</a>, I use some icons to (imprecisely) differentiate the 3D representations:</p>

<ul>
<li>🍃 Layered Depth Image</li>
<li>💎 Mesh</li>
<li>✈️ Multiplane Images</li>
<li>🚕 NeRF</li>
<li>☁️ Point Cloud</li>
<li>👾 Voxel</li>
</ul>

<h2>3D Photography from a Single Image</h2>

<p>Here I include papers on novel view synthesis from <strong>a single input image</strong> based on <strong>3D geometry</strong>.</p>

<ul>
<li><code>[ECCV 2022]</code> InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images <a href="https://infinite-nature-zero.github.io/static/pdfs/InfiniteNatureZero.pdf">[paper]</a> <a href="https://infinite-nature-zero.github.io/">[project page]</a></li>
<li><code>[SIGGRAPH 2022]</code> Single-View View Synthesis in the Wild with Learned Adaptive Multiplane Images <a href="https://arxiv.org/pdf/2205.11733.pdf">[paper]</a> <a href="https://github.com/yxuhan/AdaMPI">[code]</a> <a href="https://yxuhan.github.io/AdaMPI/">[project page]</a> ✈️</li>
<li><code>[CVPR 2022]</code> Efficient Geometry-aware 3D Generative Adversarial Networks <a href="https://arxiv.org/pdf/2112.07945.pdf">[paper]</a> <a href="https://github.com/NVlabs/eg3d">[code]</a> <a href="https://matthew-a-chan.github.io/EG3D/">[project page]</a></li>
<li><code>[CVPRW 2022]</code> Artistic Style Novel View Synthesis Based on A Single Image <a href="https://openaccess.thecvf.com/content/CVPR2022W/CVFAD/papers/Tseng_Artistic_Style_Novel_View_Synthesis_Based_on_a_Single_Image_CVPRW_2022_paper.pdf">[paper]</a> <a href="https://github.com/Kuan-Wei-Tseng/ArtNV">[code]</a> <a href="https://kuan-wei-tseng.github.io/ArtNV">[project page]</a> ☁️</li>
<li><code>[CVPR 2022]</code> 3D Photo Stylization: Learning to Generate Stylized Novel Views from a Single Image <a href="https://arxiv.org/pdf/2112.00169.pdf">[paper]</a> <a href="https://github.com/fmu2/3d_photo_stylization">[code]</a> <a href="http://pages.cs.wisc.edu/~fmu/style3d/">[project page]</a> ☁️</li>
<li><code>[ICCV 2021]</code> Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image <a href="https://arxiv.org/pdf/2012.09855.pdf">[paper]</a> <a href="https://github.com/google-research/google-research/tree/master/infinite_nature">[code]</a> <a href="https://infinite-nature.github.io/">[project page]</a> 💎</li>
<li><code>[ICCV 2021]</code> MINE: Towards Continuous Depth MPI with NeRF for Novel View Synthesis <a href="https://arxiv.org/pdf/2103.14910.pdf">[paper]</a> <a href="https://github.com/vincentfung13/MINE">[code]</a> <a href="https://vincentfung13.github.io/projects/mine/">[project page]</a> ✈️ 🚕</li>
<li><code>[ICCV 2021]</code> PixelSynth: Generating a 3D-Consistent Experience from a Single Image <a href="https://arxiv.org/pdf/2108.05892.pdf">[paper]</a> <a href="https://github.com/crockwell/pixelsynth">[code]</a> <a href="https://crockwell.github.io/pixelsynth/">[project page]</a> ☁️</li>
<li><code>[ICCV 2021]</code> SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting <a href="https://arxiv.org/pdf/2109.01068.pdf">[paper]</a> <a href="https://varunjampani.github.io/slide/">[project page]</a> 💎</li>
<li><code>[ICCV 2021]</code> Video Autoencoder: self-supervised disentanglement of static 3D structure and motion <a href="https://arxiv.org/pdf/2110.02951.pdf">[paper]</a> <a href="https://github.com/zlai0/VideoAutoencoder/">[code]</a> <a href="https://zlai0.github.io/VideoAutoencoder/">[project page]</a> 👾</li>
<li><code>[ICCV 2021]</code> Worldsheet: Wrapping the World in a 3D Sheet for View Synthesis from a Single Image <a href="https://arxiv.org/pdf/2012.09854.pdf">[paper]</a> <a href="https://github.com/facebookresearch/worldsheet">[code]</a> <a href="https://worldsheet.github.io/">[project page]</a> 💎</li>
<li><code>[CVPR 2021]</code> Layout-Guided Novel View Synthesis from a Single Indoor Panorama <a href="https://openaccess.thecvf.com/content/CVPR2021/papers/Xu_Layout-Guided_Novel_View_Synthesis_From_a_Single_Indoor_Panorama_CVPR_2021_paper.pdf">[paper]</a> <a href="https://github.com/bluestyle97/PNVS">[dataset]</a></li>
<li><code>[WACV 2021]</code> Adaptive Multiplane Image Generation from a Single Internet Picture <a href="https://openaccess.thecvf.com/content/WACV2021/papers/Luvizon_Adaptive_Multiplane_Image_Generation_From_a_Single_Internet_Picture_WACV_2021_paper.pdf">[paper]</a> ✈️</li>
<li><code>[CVPR 2020]</code> Single-View View Synthesis with Multiplane Images <a href="https://single-view-mpi.github.io/single_view_mpi.pdf">[paper]</a> <a href="https://github.com/google-research/google-research/tree/master/single_view_mpi">[code]</a> <a href="https://single-view-mpi.github.io/">[project page]</a> ✈️</li>
<li><code>[CVPR 2020]</code> SynSin: End-to-end View Synthesis from a Single Image <a href="https://arxiv.org/pdf/1912.08804.pdf">[paper]</a> <a href="https://github.com/facebookresearch/synsin">[code]</a> <a href="https://www.robots.ox.ac.uk/~ow/synsin.html">[project page]</a> ☁️</li>
<li><code>[CVPR 2020]</code> 3D Photography using Context-aware Layered Depth Inpainting <a href="https://arxiv.org/pdf/2004.04727.pdf">[paper]</a> <a href="https://github.com/vt-vl-lab/3d-photo-inpainting">[code]</a> <a href="https://shihmengli.github.io/3D-Photo-Inpainting/">[project page]</a> 🍃</li>
<li><code>[Trans. Graph. 2020]</code> One Shot 3D Photography <a href="https://arxiv.org/pdf/2008.12298.pdf">[paper]</a> <a href="https://github.com/facebookresearch/one_shot_3d_photography">[code]</a> <a href="https://facebookresearch.github.io/one_shot_3d_photography/">[project page]</a> 🍃 💎</li>
<li><code>[Trans. Graph. 2019]</code> 3D Ken Burns Effect from a Single Image <a href="https://arxiv.org/pdf/1909.05483.pdf">[paper]</a> <a href="https://github.com/sniklaus/3d-ken-burns">[code]</a> ☁️</li>
<li><code>[ICCV 2019]</code> Monocular Neural Image-based Rendering with Continuous View Control <a href="https://arxiv.org/pdf/1901.01880.pdf">[paper]</a> <a href="https://github.com/xuchen-ethz/continuous_view_synthesis">[code]</a></li>
<li><code>[ECCV 2018]</code> Layer-structured 3D Scene Inference via View Synthesis <a href="https://arxiv.org/pdf/1807.10264.pdf">[paper]</a> <a href="https://github.com/google/layered-scene-inference">[code]</a> <a href="https://shubhtuls.github.io/lsi/">[project page]</a> 🍃</li>
<li><code>[SIGGRAPH Posters 2011]</code> Layered Photo Pop-Up <a href="https://richardt.name/publications/photopopup/LayeredPhotoPopup-poster.pdf">[poster]</a> <a href="https://richardt.name/publications/photopopup/LayeredPhotoPopup-abstract.pdf">[abstract]</a> <a href="https://richardt.name/publications/photopopup/">[project page]</a></li>
</ul>

<h2>Binocular-Input Novel View Synthesis</h2>

<p>Not a complete list.</p>

<ul>
<li><code>[CVPR 2022]</code> 3D Moments from Near-Duplicate Photos <a href="https://3d-moments.github.io/static/pdfs/3d_moments.pdf">[paper]</a> <a href="https://github.com/google-research/3d-moments">[code]</a> <a href="https://3d-moments.github.io/">[project page]</a> 🍃 ☁️</li>
<li><code>[CVPR 2022]</code> Stereo Magnification with Multi-Layer Images <a href="https://arxiv.org/pdf/2201.05023.pdf">[paper]</a> <a href="https://github.com/SamsungLabs/StereoLayers">[code]</a> <a href="https://samsunglabs.github.io/StereoLayers/">[project page]</a> ✈️ 💎</li>
<li><code>[ICCV 2019]</code> Extreme View Synthesis <a href="https://arxiv.org/pdf/1812.04777">[paper]</a> <a href="https://github.com/NVlabs/extreme-view-synth">[code]</a></li>
<li><code>[CVPR 2019]</code> Pushing the Boundaries of View Extrapolation with Multiplane Images <a href="https://openaccess.thecvf.com/content_CVPR_2019/papers/Srinivasan_Pushing_the_Boundaries_of_View_Extrapolation_With_Multiplane_Images_CVPR_2019_paper.pdf">[paper]</a> ✈️</li>
<li><code>[SIGGRAPH 2018]</code> Stereo Magnification: Learning View Synthesis using Multiplane Images <a href="https://dl.acm.org/doi/pdf/10.1145/3197517.3201323">[paper]</a> <a href="https://github.com/google/stereo-magnification">[code]</a> <a href="https://tinghuiz.github.io/projects/mpi/">[project page]</a> ✈️</li>
</ul>

<h2>Landscape Animation</h2>

<p>Animating landscapes: running water, moving clouds, etc.</p>

<ul>
<li><code>[SIGGRAPH Asia 2022]</code> Water Simulation and Rendering from a Still Photograph <a href="https://dl.acm.org/doi/pdf/10.1145/3550469.3555415">[paper]</a> <a href="https://rsugimoto.net/WaterAnimationProject/">[project page]</a></li>
<li><code>[arXiv 2022]</code> DiffDreamer: Consistent Single-view Perpetual View Generation with Conditional Diffusion Models <a href="https://arxiv.org/abs/2211.12131">[paper]</a> <a href="https://primecai.github.io/diffdreamer">[project page]</a></li>
<li><code>[arXiv 2022]</code> Towards Smooth Video Composition <a href="https://arxiv.org/abs/2212.07413">[paper]</a> <a href="https://genforce.github.io/StyleSV">[project page]</a></li>
<li><code>[arXiv 2022]</code> Simulating Fluids in Real-World Still Images <a href="https://arxiv.org/pdf/2204.11335.pdf">[paper]</a> <a href="https://github.com/simon3dv/SLR-SFS">[code]</a> <a href="https://slr-sfs.github.io/">[project page]</a></li>
<li><code>[CVPR 2022]</code> Controllable Animation of Fluid Elements in Still Images <a href="https://arxiv.org/pdf/2112.03051v1.pdf">[paper]</a> <a href="https://controllable-cinemagraphs.github.io/">[project page]</a></li>
<li><code>[CVPR 2022]</code> StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2 <a href="https://kaust-cair.s3.amazonaws.com/stylegan-v/stylegan-v-paper.pdf">[paper]</a> <a href="https://github.com/universome/stylegan-v">[code]</a> <a href="https://universome.github.io/stylegan-v">[project page]</a></li>
<li><code>[CVPR 2021]</code> Animating Pictures with Eulerian Motion Fields <a href="https://eulerian.cs.washington.edu/animating_pictures_2020.pdf">[paper]</a> <a href="https://eulerian.cs.washington.edu/">[project page]</a></li>
<li><code>[ACM Multimedia 2021]</code> Learning Fine-Grained Motion Embedding for Landscape Animation <a href="https://arxiv.org/pdf/2109.02216.pdf">[paper]</a></li>
<li><code>[ECCV 2020]</code> DeepLandscape: Adversarial Modeling of Landscape Videos <a href="https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123680256.pdf">[paper]</a> <a href="https://github.com/saic-mdal/deep-landscape">[code]</a> <a href="https://saic-mdal.github.io/deep-landscape/">[project page]</a></li>
<li><code>[ECCV 2020]</code> DTVNet: Dynamic Time-lapse Video Generation via Single Still Image <a href="https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123500290.pdf">[paper]</a> <a href="https://github.com/zhangzjn/dtvnet">[code]</a></li>
<li><code>[SIGGRAPH Asia 2019]</code> Animating Landscape: Self-Supervised Learning of Decoupled Motion and Appearance for Single-Image Video Synthesis <a href="https://arxiv.org/pdf/1910.07192.pdf">[paper]</a> <a href="https://github.com/endo-yuki-t/Animating-Landscape">[code]</a> <a href="http://www.cgg.cs.tsukuba.ac.jp/~endo/projects/AnimatingLandscape/">[project page]</a></li>
<li><code>[CVPR 2018]</code> Learning to Generate Time-lapse Videos Using Multi-stage Dynamic Generative Adversarial Networks <a href="https://arxiv.org/pdf/1709.07592.pdf">[paper]</a> <a href="https://github.com/weixiong-ur/mdgan">[code]</a> <a href="https://sites.google.com/site/whluoimperial/mdgan">[project page]</a></li>
</ul>

<h2>Some Other Papers</h2>

<p>Some other interesting papers on novel view synthesis or cinemagraphs.</p>

<ul>
<li><code>[arXiv 2022]</code> Make-A-Video: Text-to-Video Generation without Text-Video Data <a href="https://arxiv.org/abs/2209.14792">[paper]</a> <a href="https://make-a-video.github.io/">[project page]</a></li>
<li><code>[ECCV 2022]</code> SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image <a href="https://arxiv.org/pdf/2204.00928.pdf">[paper]</a> <a href="https://github.com/Ir1d/SinNeRF">[code]</a> <a href="https://vita-group.github.io/SinNeRF/">[project page]</a> 🚕</li>
<li><code>[CVPR 2022]</code> Look Outside the Room: Synthesizing A Consistent Long-Term 3D Scene Video from A Single Image <a href="https://arxiv.org/abs/2203.09457">[paper]</a> <a href="https://github.com/xrenaa/Look-Outside-Room">[code]</a> <a href="https://xrenaa.github.io/look-outside-room/">[project page]</a></li>
<li><code>[ICCV 2021]</code> Geometry-Free View Synthesis: Transformers and no 3D Priors <a href="https://arxiv.org/pdf/2104.07652.pdf">[paper]</a> <a href="https://github.com/CompVis/geometry-free-view-synthesis">[code]</a> <a href="https://compvis.github.io/geometry-free-view-synthesis/">[project page]</a></li>
<li><code>[ICCV 2021]</code> iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis <a href="https://arxiv.org/pdf/2107.02790.pdf">[paper]</a> <a href="https://github.com/CompVis/ipoke">[code]</a> <a href="https://compvis.github.io/ipoke/">[project page]</a></li>
<li><code>[ICCV 2021]</code> Learning to Stylize Novel Views <a href="https://arxiv.org/pdf/2105.13509.pdf">[paper]</a> <a href="https://github.com/hhsinping/stylescene">[code]</a> <a href="https://hhsinping.github.io/3d_scene_stylization/">[project page]</a> ☁️</li>
<li><code>[ICCV 2021]</code> Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis <a href="https://arxiv.org/pdf/2104.00677.pdf">[paper]</a> <a href="https://github.com/ajayjain/DietNeRF">[code]</a> <a href="https://www.ajayj.com/dietnerf">[project page]</a> 🚕</li>
<li><code>[CVPR 2021]</code> Stochastic Image-to-Video Synthesis Using cINNs <a href="https://arxiv.org/pdf/2105.04551.pdf">[paper]</a> <a href="https://github.com/CompVis/image2video-synthesis-using-cINNs">[code]</a> <a href="https://compvis.github.io/image2video-synthesis-using-cINNs/">[project page]</a></li>
<li><code>[CVPR 2021]</code> Understanding Object Dynamics for Interactive Image-to-Video Synthesis <a href="https://arxiv.org/pdf/2106.11303.pdf">[paper]</a> <a href="https://github.com/CompVis/interactive-image2video-synthesis">[code]</a> <a href="https://compvis.github.io/interactive-image2video-synthesis/">[project page]</a></li>
<li><code>[SIGGRAPH 2021]</code> Endless Loops: Detecting and Animating Periodic Patterns in Still Images <a href="https://storage.googleapis.com/ltx-public-images/Endless_Loops__Detecting_and_animating_periodic_patterns_in_still_images.pdf">[paper]</a> <a href="https://pub.res.lightricks.com/endless-loops/">[project page]</a></li>
<li><code>[ECCV 2018]</code> Flow-Grounded Spatial-Temporal Video Prediction from Still Images <a href="https://arxiv.org/pdf/1807.09755.pdf">[paper]</a> <a href="https://github.com/Yijunmaverick/FlowGrounded-VideoPrediction">[code]</a></li>
<li><code>[CVPR 2018]</code> Controllable Video Generation with Sparse Trajectories <a href="https://vision.cornell.edu/se3/wp-content/uploads/2018/03/1575.pdf">[paper]</a> <a href="https://github.com/zekunhao1995/ControllableVideoGen">[code]</a> <a href="http://www.cs.cornell.edu/~xhuang/publication/videogen/">[project page]</a></li>
<li><code>[CVPR 2018]</code> MoCoGAN: Decomposing Motion and Content for Video Generation <a href="https://arxiv.org/pdf/1707.04993.pdf">[paper]</a> <a href="https://github.com/sergeytulyakov/mocogan">[code]</a></li>
<li><code>[ICCV 2017]</code> Personalized Cinemagraphs using Semantic Understanding and Collaborative Learning <a href="https://openaccess.thecvf.com/content_ICCV_2017/papers/Oh_Personalized_Cinemagraphs_Using_ICCV_2017_paper.pdf">[paper]</a></li>
</ul>