Охота на электроовец. Большая книга искусственного интеллекта - Сергей Сергеевич Марков

Qiu J., Ma H., Levy O., Yih W.-t., Wang S., Tang J. (2019). Blockwise Self-Attention for Long Document Understanding / ICLR 2020 Conference Blind Submission // https://openreview.net/forum?id=H1gpET4YDB

2553

Wang S., Li B. Z., Khabsa M., Fang H., Ma H. (2020). Linformer: Self-Attention with Linear Complexity // https://arxiv.org/abs/2006.04768

2554

Zaheer M., Guruganesh G., Dubey A., Ainslie J., Alberti C., Ontanon S., Pham P., Ravula A., Wang Q., Yang L., Ahmed A. (2020). Big Bird: Transformers for Longer Sequences // https://arxiv.org/abs/2007.14062

2555

Choromanski K., Likhosherstov V., Dohan D., Song X., Gane A., Sarlos T., Hawkins P., Davis J., Mohiuddin A., Kaiser L., Belanger D., Colwell L., Weller A. (2020). Rethinking Attention with Performers // https://arxiv.org/abs/2009.14794

2556

Martins P. H., Marinho Z., Martins A. F. T. (2021). ∞-former: Infinite Memory Transformer // https://arxiv.org/abs/2109.00301

2557

Ding J., Ma S., Dong L., Zhang X., Huang S., Wang W., Zheng N., Wei F. (2023). LongNet: Scaling Transformers to 1,000,000,000 Tokens // https://arxiv.org/abs/2307.02486

2558

Tay Y., Bahri D., Yang L., Metzler D., Juan D.-C. (2020). Sparse Sinkhorn Attention // https://arxiv.org/abs/2002.11296

2559

Tay Y., Bahri D., Metzler D., Juan D.-C., Zhao Z., Zheng C. (2020). Synthesizer: Rethinking Self-Attention in Transformer Models // https://arxiv.org/abs/2005.00743

2560

Ma X., Zhou C., Kong X., He J., Gui L., Neubig G., May J., Zettlemoyer L. (2022). Mega: Moving Average Equipped Gated Attention // https://arxiv.org/abs/2209.10655

2561

Yu L., Simig D., Flaherty C., Aghajanyan A., Zettlemoyer L., Lewis M. (2023). MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers // https://arxiv.org/abs/2305.07185

2562

Tay Y., Dehghani M., Abnar S., Shen Y., Bahri D., Pham P., Rao J., Yang L., Ruder S., Metzler D. (2020). Long Range Arena: A Benchmark for Efficient Transformers // https://arxiv.org/abs/2011.04006

2563

Long-range modeling on LRA (2023) // https://paperswithcode.com/sota/long-range-modeling-on-lra

2564

An C., Gong S., Zhong M., Zhao X., Li M., Zhang J., Kong L., Qiu X. (2023). L-Eval: Instituting Standardized Evaluation for Long Context Language Models // https://arxiv.org/abs/2307.11088

2565

Bai Y., Lv X., Zhang J., Lyu H., Tang J., Huang Z., Du Z., Liu X., Zeng A., Hou L., Dong Y., Tang J., Li J. (2023). LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding // https://arxiv.org/abs/2308.14508

2566

Li Y., Cai T., Zhang Y., Chen D., Dey D. (2022). What Makes Convolutional Models Great on Long Sequence Modeling? // https://arxiv.org/abs/2210.09298

2567

Poli M., Massaroli S., Nguyen E., Fu D. Y., Dao T., Baccus S., Bengio Y., Ermon S., Ré C. (2023). Hyena Hierarchy: Towards Larger Convolutional Language Models // https://arxiv.org/abs/2302.10866

2568

Brown T. B., Mann B., Ryder N., Subbiah M., Kaplan J., Dhariwal P., Neelakantan A., Shyam P., Sastry G., Askell A., Agarwal S., Herbert-Voss A., Krueger G., Henighan T., Child R., Ramesh A., Ziegler D. M., Wu J., Winter C., Hesse C., Chen M., Sigler E., Litwin M., Gray S., Chess B., Clark J., Berner C., McCandlish S., Radford A., Sutskever I., Amodei D. (2020). Language Models are Few-Shot Learners // https://arxiv.org/abs/2005.14165

2569

Karpathy A. (2020) / Twitter // https://twitter.com/karpathy/status/1273788774422441984

2570

Branwen G. (2020). GPT-3 Creative Fiction // https://www.gwern.net/GPT-3

2571

Reynolds L., McDonell K. (2021). Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm // https://arxiv.org/abs/2102.07350

2572

Rebuffi S.-A., Bilen H., Vedaldi A. (2017). Learning multiple visual domains with residual adapters // https://arxiv.org/abs/1705.08045

2573

Houlsby N., Giurgiu A., Jastrzebski S., Morrone B., de Laroussilhe Q., Gesmundo A., Attariyan M., Gelly S. (2019). Parameter-Efficient Transfer Learning for NLP // https://arxiv.org/abs/1902.00751

2574

Hu E. J., Shen Y., Wallis P., Allen-Zhu Z., Li Y., Wang S., Wang L., Chen W. (2021). LoRA: Low-Rank Adaptation of Large Language Models // https://arxiv.org/abs/2106.09685

2575

Xu R., Luo F., Zhang Z., Tan C., Chang B., Huang S., Huang F. (2021). Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning // https://arxiv.org/abs/2109.05687

2576

Duan Z., Zhang H., Wang C., Wang Z., Chen B., Zhou M. (2021). EnsLM: Ensemble Language Model for Data Diversity by Semantic Clustering / Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 2954—2967 // https://doi.org/10.18653/v1/2021.acl-long.230

2577

Conneau A., Kruszewski G., Lample G., Barrault L., Baroni M. (2018). What you can cram into a single vector: Probing sentence embeddings for linguistic properties // https://arxiv.org/abs/1805.01070

2578

Şahin G. G., Vania C., Kuznetsov I., Gurevych I. (2019). LINSPECTOR: Multilingual Probing Tasks for Word Representations // https://arxiv.org/abs/1903.09442

2579

Kim N., Patel R., Poliak A., Wang A., Xia P., McCoy R. T., Tenney I., Ross A., Linzen T., Durme B. V., Bowman S. R., Pavlick E. (2019). Probing What Different NLP Tasks Teach Machines about Function Word Comprehension // https://arxiv.org/abs/1904.11544

2580

Shi X., Padhi I., Knight K. (2016). Does String-Based Neural MT Learn Source Syntax? / Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1526—1534 // https://doi.org/10.18653/v1/D16-1159

2581

Lee J., Tang R., Lin J. (2019). What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning // https://arxiv.org/abs/1911.03090

2582

Li X. L., Liang P. (2021). Prefix-Tuning: Optimizing Continuous Prompts for Generation // https://arxiv.org/abs/2101.00190
