Сергей Сергеевич Марков. «Охота на электроовец. Большая книга искусственного интеллекта» [Sergey Markov, "Hunting for Electric Sheep: The Big Book of Artificial Intelligence"]

Goodfellow I. J., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. (2014). Generative Adversarial Networks // https://arxiv.org/abs/1406.2661

2763

Alberge D. (2021). Was famed Samson and Delilah really painted by Rubens? No, says AI / The Guardian, 26 Sep 2021 // https://www.theguardian.com/artanddesign/2021/sep/26/was-famed-samson-and-delilah-really-painted-by-rubens-no-says-ai

2764

Schmidhuber J. (1992). Learning factorial codes by predictability minimization / Neural Computation, Vol. 4 (6), pp. 863—879 // https://doi.org/10.1162/neco.1992.4.6.863

2765

Mirza M., Osindero S. (2014). Conditional Generative Adversarial Nets // https://arxiv.org/abs/1411.1784

2766

Isola P., Zhu J.-Y., Zhou T., Efros A. A. (2016). Image-to-Image Translation with Conditional Adversarial Networks // https://arxiv.org/abs/1611.07004

2767

Zhu J.-Y., Park T., Isola P., Efros A. A. (2017). Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks // https://arxiv.org/abs/1703.10593

2768

Shrivastava A., Pfister T., Tuzel O., Susskind J., Wang W., Webb R. (2016). Learning from Simulated and Unsupervised Images through Adversarial Training // https://arxiv.org/abs/1612.07828

2769

Isola P., Zhu J.-Y., Zhou T., Efros A. A. (2016). Image-to-Image Translation with Conditional Adversarial Networks // https://arxiv.org/abs/1611.07004

2770

Choi Y., Choi M., Kim M., Ha J.-W., Kim S., Choo J. (2017). StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation // https://arxiv.org/abs/1711.09020

2771

Iizuka S., Simo-Serra E., Ishikawa H. (2017). Globally and Locally Consistent Image Completion / ACM Transactions on Graphics, Vol. 36, Iss. 4, Article 107, July 2017 // https://doi.org/10.1145/3072959.3073659

2772

Sagong M.-C., Shin Y.-G., Kim S.-W., Park S., Ko S.-J. (2019). PEPSI: Fast Image Inpainting With Parallel Decoding Network / 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) // https://doi.org/10.1109/CVPR.2019.01162

2773

Shin Y.-G., Sagong M.-C., Yeo Y.-J., Kim S.-W., Ko S.-J. (2019). PEPSI++: Fast and Lightweight Network for Image Inpainting // https://arxiv.org/abs/1905.09010

2774

DeepCreamPy: Decensoring Hentai with Deep Neural Networks // https://github.com/deeppomf/DeepCreamPy

2775

Radford A., Metz L., Chintala S. (2015). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks // https://arxiv.org/abs/1511.06434

2776

Chen X., Duan Y., Houthooft R., Schulman J., Sutskever I., Abbeel P. (2016). InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets // https://arxiv.org/abs/1606.03657

2777

Kim T., Cha M., Kim H., Lee J. K., Kim J. (2017). Learning to Discover Cross-Domain Relations with Generative Adversarial Networks // https://arxiv.org/abs/1703.05192

2778

Karras T., Aila T., Laine S., Lehtinen J. (2017). Progressive Growing of GANs for Improved Quality, Stability, and Variation // https://arxiv.org/abs/1710.10196

2779

Arjovsky M., Chintala S., Bottou L. (2017). Wasserstein GAN // https://arxiv.org/abs/1701.07875

2780

Gulrajani I., Ahmed F., Arjovsky M., Dumoulin V., Courville A. (2017). Improved Training of Wasserstein GANs // https://arxiv.org/abs/1704.00028

2781

Karras T., Laine S., Aila T. (2018). A Style-Based Generator Architecture for Generative Adversarial Networks // https://arxiv.org/abs/1812.04948

2782

Karras T., Laine S., Aittala M., Hellsten J., Lehtinen J., Aila T. (2019). Analyzing and Improving the Image Quality of StyleGAN // https://arxiv.org/abs/1912.04958

2783

Karras T., Aittala M., Laine S., Härkönen E., Hellsten J., Lehtinen J., Aila T. (2021). Alias-Free Generative Adversarial Networks // https://arxiv.org/abs/2106.12423

2784

Choi Y., Uh Y., Yoo J., Ha J.-W. (2019). StarGAN v2: Diverse Image Synthesis for Multiple Domains // https://arxiv.org/abs/1912.01865

2785

Mokady R., Yarom M., Tov O., Lang O., Cohen-Or D., Dekel T., Irani M., Mosseri I. (2022). Self-Distilled StyleGAN: Towards Generation from Internet Photos // https://arxiv.org/abs/2202.12211

2786

Stanford Human-Centered Artificial Intelligence (HAI) (2021). Artificial Intelligence Index Report 2021 // https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report_Master.pdf

2787

Akbari H., Yuan L., Qian R., Chuang W.-H., Chang S.-F., Cui Y., Gong B. (2021). VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text // https://arxiv.org/abs/2104.11178

2788

Baevski A., Hsu W.-N., Xu Q., Babu A., Gu J., Auli M. (2022). The first high-performance self-supervised algorithm that works for speech, vision, and text / Meta AI, January 20, 2022

2789

Mitrovic J., McWilliams B., Walker J., Buesing L., Blundell C. (2020). Representation Learning via Invariant Causal Mechanisms // https://arxiv.org/abs/2010.07922

2790

Tomasev N., Bica I., McWilliams B., Buesing L., Pascanu R., Blundell C., Mitrovic J. (2022). Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? // https://arxiv.org/abs/2201.05119

2791

* In machine learning, "autoregressive" usually refers to models that predict the next element of a sequence from its preceding elements.
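The idea in the note above can be illustrated with a minimal sketch (a hypothetical toy example, not from the book): an autoregressive model generates a sequence one element at a time, conditioning each prediction only on what has already been generated. Here a hand-written bigram lookup table stands in for a trained network such as PixelRNN.

```python
# Toy autoregressive generation: each next element is "predicted"
# from the previous one. A fixed bigram table plays the role of
# a learned model; real systems would use a neural network here.

def autoregressive_generate(bigram_next, start, length):
    """Generate a sequence step by step, conditioning each step
    on the previously generated element (greedy decoding)."""
    seq = [start]
    for _ in range(length - 1):
        prev = seq[-1]
        seq.append(bigram_next.get(prev, start))
    return seq

# Trivial deterministic "model": after 'a' comes 'b', after 'b' comes 'c', ...
bigram_next = {"a": "b", "b": "c", "c": "a"}
print("".join(autoregressive_generate(bigram_next, "a", 6)))  # abcabc
```

PixelRNN and PixelCNN (notes 2792–2794) apply exactly this factorization to images, predicting each pixel from the pixels generated before it.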

2792

van den Oord A., Kalchbrenner N., Kavukcuoglu K. (2016). Pixel Recurrent Neural Networks // https://arxiv.org/abs/1601.06759

2793

van den Oord A., Kalchbrenner N., Vinyals O., Espeholt L., Graves A., Kavukcuoglu K. (2016). Conditional Image Generation with PixelCNN Decoders // https://arxiv.org/abs/1606.05328

2794

Salimans T., Karpathy A., Chen X., Kingma D. P. (2017). PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications // https://arxiv.org/abs/1701.05517

2795

Sohl-Dickstein J., Weiss E. A., Maheswaranathan N., Ganguli S. (2015). Deep Unsupervised Learning using Nonequilibrium Thermodynamics // https://arxiv.org/abs/1503.03585

2796

Ho J., Jain A., Abbeel P. (2020). Denoising Diffusion Probabilistic Models // https://arxiv.org/abs/2006.11239

2797

Nichol A., Dhariwal P. (2021). Improved denoising diffusion probabilistic models // https://arxiv.org/abs/2102.09672

2798

Dhariwal P., Nichol A. (2021). Diffusion Models Beat GANs on Image Synthesis // https://arxiv.org/abs/2105.05233

2799

Jiang Y.,
