3265. Kaiser L., Gomez A. N., Shazeer N., Vaswani A., Parmar N., Jones L., Uszkoreit J. (2017). One Model To Learn Them All // https://arxiv.org/abs/1706.05137
3266. Zoph B., Vasudevan V., Shlens J., Le Q. V. (2017). Learning Transferable Architectures for Scalable Image Recognition // https://arxiv.org/abs/1707.07012
3267. Chen L.-C., Collins M. D., Zhu Y., Papandreou G., Zoph B., Schroff F., Adam H., Shlens J. (2018). Searching for Efficient Multi-Scale Architectures for Dense Image Prediction // https://arxiv.org/abs/1809.04184
3268. Liu H., Simonyan K., Yang Y. (2018). DARTS: Differentiable Architecture Search // https://arxiv.org/abs/1806.09055
3269. Howard A., Sandler M., Chu G., Chen L.-C., Chen B., Tan M., Wang W., Zhu Y., Pang R., Vasudevan V., Le Q. V., Adam H. (2019). Searching for MobileNetV3 // https://arxiv.org/abs/1905.02244v5
3270. Xiong Y., Liu H., Gupta S., Akin B., Bender G., Kindermans P.-J., Tan M., Singh V., Chen B. (2020). MobileDets: Searching for Object Detection Architectures for Mobile Accelerators // https://arxiv.org/abs/2004.14525v2
3271. Abdelfattah M. S., Mehrotra A., Dudziak Ł., Lane N. D. (2021). Zero-Cost Proxies for Lightweight NAS // https://arxiv.org/abs/2101.08134
3272. Dudziak Ł., Chau T., Abdelfattah M. S., Lee R., Kim H., Lane N. D. (2020). BRP-NAS: Prediction-based NAS using GCNs // https://arxiv.org/abs/2007.08668
3273. Zhang Y., Zhang Q., Yang Y. (2020). How Does Supernet Help in Neural Architecture Search? // https://arxiv.org/abs/2010.08219
3274. Dai X., Zhang P., Wu B., Yin H., Sun F., Wang Y., Dukhan M., Hu Y., Wu Y., Jia Y., Vajda P., Uyttendaele M., Jha N. K. (2018). ChamNet: Towards Efficient Network Design through Platform-Aware Model Adaptation // https://arxiv.org/abs/1812.08934
3275. Wan A., Dai X., Zhang P., He Z., Tian Y., Xie S., Wu B., Yu M., Xu T., Chen K., Vajda P., Gonzalez J. E. (2020). FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions // https://arxiv.org/abs/2004.05565
3276. Awad N., Mallik N., Hutter F. (2020). Differential Evolution for Neural Architecture Search // https://arxiv.org/abs/2012.06400
3277. Jie R., Gao J. (2021). Differentiable Neural Architecture Search with Morphism-based Transformable Backbone Architectures // https://arxiv.org/abs/2106.07211
3278. Tian Y., Shen L., Shen L., Su G., Li Z., Liu W. (2020). AlphaGAN: Fully Differentiable Architecture Search for Generative Adversarial Networks // https://arxiv.org/abs/2006.09134
3279. Ding M., Lian X., Yang L., Wang P., Jin X., Lu Z., Luo P. (2021). HR-NAS: Searching Efficient High-Resolution Neural Architectures with Lightweight Transformers // https://arxiv.org/abs/2106.06560
3280. Yang Y., You S., Li H., Wang F., Qian C., Lin Z. (2021). Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search // https://arxiv.org/abs/2101.11342
3281. Jin H., Song Q., Hu X. (2018). Auto-Keras: An Efficient Neural Architecture Search System // https://arxiv.org/abs/1806.10282
3282. Ying C., Klein A., Real E., Christiansen E., Murphy K., Hutter F. (2019). NAS-Bench-101: Towards Reproducible Neural Architecture Search // https://arxiv.org/abs/1902.09635
3283. Zela A., Siems J., Hutter F. (2020). NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search // https://arxiv.org/abs/2001.10422
3284. Dong X., Yang Y. (2020). NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search // https://arxiv.org/abs/2001.00326
3285. Tu R., Khodak M., Roberts N., Talwalkar A. (2021). NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search // https://arxiv.org/abs/2110.05668
3286. Yan S., White C., Savani Y., Hutter F. (2021). NAS-Bench-x11 and the Power of Learning Curves // https://arxiv.org/abs/2111.03602
3287. Li C., Yu Z., Fu Y., Zhang Y., Zhao Y., You H., Yu Q., Wang Y., Lin Y. (2021). HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark // https://arxiv.org/abs/2103.10584
3288. Mehrotra A., Ramos A. G. C. P., Bhattacharya S., Dudziak Ł., Vipperla R., Chau T., Abdelfattah M. S., Ishtiaq S., Lane N. D. (2020). NAS-Bench-ASR: Reproducible Neural Architecture Search for Speech Recognition // https://openreview.net/forum?id=CU0APx9LMaL
3289. Dong X., Liu L., Musial K., Gabrys B. (2020). NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size // https://arxiv.org/abs/2009.00437
3290. Klein A., Hutter F. (2019). Tabular Benchmarks for Joint Architecture and Hyperparameter Optimization // https://arxiv.org/abs/1905.04970
3291. Hirose Y., Yoshinari N., Shirakawa S. (2021). NAS-HPO-Bench-II: A Benchmark Dataset on Joint Optimization of Convolutional Neural Network Architecture and Training Hyperparameters // https://arxiv.org/abs/2110.10165
3292. Tan M., Le Q. V. (2019). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks // https://arxiv.org/abs/1905.11946
3293. Arora A. (2020). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks // https://amaarora.github.io/2020/08/13/efficientnet.html
3294. Tan M., Le Q. V. (2019). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks // https://arxiv.org/abs/1905.11946
3295. Huang Y., Cheng Y., Bapna A., Firat O., Chen M. X., Chen D., Lee H. J., Ngiam J., Le Q. V., Wu Y., Chen Z. (2018). GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism // https://arxiv.org/abs/1811.06965
3296. Pham H., Dai Z., Xie Q., Luong M.-T., Le Q. V. (2020). Meta Pseudo Labels // https://arxiv.org/abs/2003.10580
3297. Wang Z., Yang E., Shen L., Huang H. (2023). A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning // https://arxiv.org/abs/2307.09218
3298. Kirkpatrick J., Pascanu R., Rabinowitz N.,