Охота на электроовец. Большая книга искусственного интеллекта - Сергей Сергеевич Марков

G. (2020). MIT removes huge dataset that teaches AI systems to use racist, misogynistic slurs / TheNextWeb, July 1, 2020 // https://thenextweb.com/neural/2020/07/01/mit-removes-huge-dataset-that-teaches-ai-systems-to-use-racist-misogynistic-slurs/

1845

Gorey C. (2020). 80m images used to train AI pulled after researchers find string of racist terms / siliconrepublic, 13 Jul 2020 // https://www.siliconrepublic.com/machines/mit-database-racist-misogynist-discovery-abeba-birhane

1846

Quach K. (2020). MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs. Top uni takes action after El Reg highlights concerns by academics / The Register, 1 Jul 2020 // https://www.theregister.com/2020/07/01/mit_dataset_removed/

1847

Krizhevsky A., Sutskever I., Hinton G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks / Advances in Neural Information Processing Systems 25 (NIPS 2012) // https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

1848

Bai K. (2019). A Comprehensive Introduction to Different Types of Convolutions in Deep Learning: Towards intuitive understanding of convolutions through visualizations / Towards Data Science, Feb 12, 2019 // https://towardsdatascience.com/a-comprehensive-introduction-to-different-types-of-convolutions-in-deep-learning-669281e58215

1849

Hahnloser R. H. R., Sarpeshkar R., Mahowald M. A., Douglas R. J., Seung H. S. (2000). Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit / Nature, Vol. 405, pp. 947—951 // https://doi.org/10.1038/35016072

1850

Glorot X., Bordes A., Bengio Y. (2011). Deep Sparse Rectifier Neural Networks / Journal of Machine Learning Research, Vol. 15 (2011), pp. 315—323 // https://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf

1851

Liu D. (2017). A Practical Guide to ReLU: Start using and understanding ReLU without BS or fancy equations // https://medium.com/@danqing/a-practical-guide-to-relu-b83ca804f1f7

1852

Krizhevsky A., Sutskever I., Hinton G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks (Slides) // http://image-net.org/challenges/LSVRC/2012/supervision.pdf

1853

Godoy D. (2018). Hyper-parameters in Action! Part II — Weight Initializers / Towards Data Science, Jun 18, 2018 // https://towardsdatascience.com/hyper-parameters-in-action-part-ii-weight-initializers-35aee1a28404

1854

Glorot X., Bengio Y. (2010). Understanding the difficulty of training deep feedforward neural networks / Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Journal of Machine Learning Research, Vol. 9, pp. 249—256 // http://www.jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf

1855

He K., Zhang X., Ren S., Sun J. (2015). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification / Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1026—1034 // https://doi.org/10.1109/ICCV.2015.123

1856

Liang X. (2019). Understand Kaiming Initialization and Implementation Detail in PyTorch: Initialization Matters! Know how to set the fan_in and fan_out mode with kaiming_uniform_ function / Towards Data Science, Aug 7, 2019 // https://towardsdatascience.com/understand-kaiming-initialization-and-implementation-detail-in-pytorch-f7aa967e9138

1857

Godoy D. (2018). Hyper-parameters in Action! Part II — Weight Initializers / Towards Data Science, Jun 18, 2018 // https://towardsdatascience.com/hyper-parameters-in-action-part-ii-weight-initializers-35aee1a28404

1858

Zhu C., Ni R., Xu Z., Kong K., Huang W. R., Goldstein T. (2021). GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training // https://arxiv.org/abs/2102.08098

1859

Krizhevsky A., Sutskever I., Hinton G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks / Advances in Neural Information Processing Systems 25 (NIPS 2012) // https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

1860

Krizhevsky A., Sutskever I., Hinton G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks (Slides) // http://image-net.org/challenges/LSVRC/2012/supervision.pdf

1861

Karpathy A. CS231n Convolutional Neural Networks for Visual Recognition (Stanford CS class) // http://cs231n.github.io/convolutional-networks/

1862

Girard R. (2015). How does Krizhevsky's '12 CNN get 253,440 neurons in the first layer? / StackExchange // https://stats.stackexchange.com/questions/132897/how-does-krizhevskys-12-cnn-get-253-440-neurons-in-the-first-layer

1863

Chellapilla K., Puri S., Simard P. (2006). High performance convolutional neural networks for document processing / International Workshop on Frontiers in Handwriting Recognition, 2006 // https://hal.inria.fr/inria-00112631

1864

Nasse F., Thurau C., Fink G. A. (2009). Face Detection Using GPU-Based Convolutional Neural Networks / International Conference on Computer Analysis of Images and Patterns, CAIP 2009 // https://doi.org/10.1007/978-3-642-03767-2_10

1865

* In machine learning, an ensemble is a combination of several models used to solve the same task, which makes it possible to achieve a better result than any of the constituent models on its own; to obtain the ensemble's final prediction, the outputs of its constituent models may be averaged or combined in some more elaborate way.
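The averaging scheme mentioned in the note can be sketched in a few lines; the two models, their probability outputs, and the two-class setup below are illustrative assumptions, not taken from the book:

```python
import numpy as np

# Hypothetical per-class probability predictions of two models
# for three inputs (values are made up for illustration).
preds_model_a = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
preds_model_b = np.array([[0.7, 0.3], [0.6, 0.4], [0.3, 0.7]])

# The simplest ensemble: average the models' probabilities,
# then pick the class with the highest averaged probability.
ensemble = (preds_model_a + preds_model_b) / 2
labels = ensemble.argmax(axis=1)
print(labels)  # final class index for each of the three inputs
```

More elaborate combination rules (weighted averaging, majority voting, stacking) follow the same pattern: each model predicts independently, and only the combination step differs.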

1866

Cireșan D., Meier U., Masci J., Schmidhuber J. (2012). Multi-Column Deep Neural Network for Traffic Sign Classification // http://people.idsia.ch/~juergen/nn2012traffic.pdf

1867

Schmidhuber J. 2011: First Superhuman Visual Pattern Recognition. IJCNN 2011 competition in Silicon Valley: twice better than humans, three times better than the closest artificial competitor, six times better than the best non-neural method // http://people.idsia.ch/~juergen/superhumanpatternrecognition.html

1868

Tsang S.-H. (2018). Review: ZFNet — Winner of ILSVRC 2013 (Image Classification) // https://medium.com/coinmonks/paper-review-of-zfnet-the-winner-of-ilsvlc-2013-image-classification-d1a5a0c45103

1869

Tsang S.-H. (2018). Review: ZFNet — Winner of ILSVRC 2013 (Image Classification) // https://medium.com/coinmonks/paper-review-of-zfnet-the-winner-of-ilsvlc-2013-image-classification-d1a5a0c45103

1870

* Many popular articles on the ILSVRC-2014 results report the final error as 6.67%. In fact, the exact value of the error is 0.06656, i.e. 6.66%. One wonders who "rounded" the result like that. And was it done for the glory of the Lord?
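The arithmetic in the note is easy to check: formatting 0.06656 as a percentage with two decimal places gives 6.66%, not 6.67% (a quick sanity check, not from the book):

```python
err = 0.06656  # the exact ILSVRC-2014 error value cited in the note

# The '%' format type multiplies by 100 and rounds to the given precision,
# so conventional rounding yields 6.66%, not 6.67%.
as_percent = f"{err:.2%}"
print(as_percent)  # -> 6.66%
```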

1871

Das S. (2017). CNN Architectures: LeNet, AlexNet, VGG, GoogLeNet, ResNet and more… // https://medium.com/analytics-vidhya/cnns-architectures-lenet-alexnet-vgg-googlenet-resnet-and-more-666091488df5

1872

Tsang S.-H. (2018). Review: GoogLeNet (Inception v1) — Winner of ILSVRC 2014 (Image Classification) // https://medium.com/coinmonks/paper-review-of-googlenet-inception-v1-winner-of-ilsvlc-2014-image-classification-c2b3565a64e7

1873

Simonyan K., Zisserman A. (2015). Very deep convolutional networks for large-scale image recognition // https://arxiv.org/abs/1409.1556

1874

Shao J., Zhang X., Ding Z., Zhao Y., Chen Y., Zhou J., Wang W., Mei L., Hu C. (2016). Good Practices for Deep Feature Fusion // http://image-net.org/challenges/talks/2016/[email protected]

1875

Hu J., Shen

