2636
Anil R., Dai A. M., Firat O., Johnson M., Lepikhin D., Passos A., Shakeri S., Taropa E., Bailey P., Chen Z., Chu E., Clark J. H., Shafey L. E., Huang Y., Meier-Hellstern K., Mishra G., Moreira E., Omernick M., Robinson K., Ruder S., Tay Y., Xiao K., Xu Y., Zhang Y., Abrego G. H., Ahn J., Austin J., Barham P., Botha J., Bradbury J., Brahma S., Brooks K., Catasta M., Cheng Y., Cherry C., Choquette-Choo C. A., Chowdhery A., Crepy C., Dave S., Dehghani M., Dev S., Devlin J., Díaz M., Du N., Dyer E., Feinberg V., Feng F., Fienber V., Freitag M., Garcia X., Gehrmann S., Gonzalez L., Gur-Ari G., Hand S., Hashemi H., Hou L., Howland J., Hu A., Hui J., Hurwitz J., Isard M., Ittycheriah A., Jagielski M., Jia W., Kenealy K., Krikun M., Kudugunta S., Lan C., Lee K., Lee B., Li E., Li M., Li W., Li Y., Li J., Lim H., Lin H., Liu Z., Liu F., Maggioni M., Mahendru A., Maynez J., Misra V., Moussalem M., Nado Z., Nham J., Ni E., Nystrom A., Parrish A., Pellat M., Polacek M., Polozov A., Pope R., Qiao S., Reif E., Richter B., Riley P., Ros A. C., Roy A., Saeta B., Samuel R., Shelby R., Slone A., Smilkov D., So D. R., Sohn D., Tokumine S., Valter D., Vasudevan V., Vodrahalli K., Wang X., Wang P., Wang Z., Wang T., Wieting J., Wu Y., Xu K., Xu Y., Xue L., Yin P., Yu J., Zhang Q., Zheng S., Zheng C., Zhou W., Zhou D., Petrov S., Wu Y. (2023). PaLM 2 Technical Report // https://arxiv.org/abs/2305.10403
2637
Chen X., Liang C., Huang D., Real E., Wang K., Liu Y., Pham H., Dong X., Luong T., Hsieh C.-J., Lu Y., Le Q. V. (2023). Symbolic Discovery of Optimization Algorithms // https://arxiv.org/abs/2302.06675
2638
Liu H., Li Z., Hall D., Liang P., Ma T. (2023). Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training // https://arxiv.org/abs/2305.14342
2639
Tay Y., Dehghani M., Tran V. Q., Garcia X., Wei J., Wang X., Chung H. W., Shakeri S., Bahri D., Schuster T., Zheng H. S., Zhou D., Houlsby N., Metzler D. (2022). UL2: Unifying Language Learning Paradigms // https://arxiv.org/abs/2205.05131
2640
Zmitrovich D. (2023). FRED-T5. Новая SOTA модель для русского языка от SberDevices [FRED-T5: A new SOTA model for the Russian language from SberDevices] / Habr, April 19, 2023 // https://habr.com/ru/companies/sberdevices/articles/730088/
2641
Bavarian M., Jun H., Tezak N., Schulman J., McLeavey C., Tworek J., Chen M. (2022). Efficient Training of Language Models to Fill in the Middle // https://arxiv.org/abs/2207.14255
2642
Ouyang L., Wu J., Jiang X., Almeida D., Wainwright C. L., Mishkin P., Zhang C., Agarwal S., Slama K., Ray A., Schulman J., Hilton J., Kelton F., Miller L., Simens M., Askell A., Welinder P., Christiano P., Leike J., Lowe R. (2022). Training language models to follow instructions with human feedback // https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf
2643
Branwen G. (2022). GPT-3 2nd Anniversary / Reddit, May 28, 2022 // https://www.reddit.com/r/mlscaling/comments/uznkhw/gpt3_2nd_anniversary/
2644
OpenAI (2023). GPT-4 Technical Report // https://arxiv.org/abs/2303.08774
2645
Pichai S. (2023). An important next step on our AI journey // https://blog.google/technology/ai/bard-google-ai-search-updates/
2646
Anthropic PBC (2023). Introducing Claude // https://www.anthropic.com/index/introducing-claude
2647
SambaNova Systems, Together Computer (2023). BLOOMChat: a New Open Multilingual Chat LLM // https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1
2648
Taori R., Gulrajani I., Zhang T., Dubois Y., Li X., Guestrin C., Liang P., Hashimoto T. B. (2023). Stanford Alpaca: An Instruction-following LLaMA model // https://github.com/tatsu-lab/stanford_alpaca
2649
Touvron H., Lavril T., Izacard G., Martinet X., Lachaux M.-A., Lacroix T., Rozière B., Goyal N., Hambro E., Azhar F., Rodriguez A., Joulin A., Grave E., Lample G. (2023). LLaMA: Open and Efficient Foundation Language Models // https://arxiv.org/abs/2302.13971
2650
Zhang S., Roller S., Goyal N., Artetxe M., Chen M., Chen S., Dewan C., Diab M., Li X., Lin X. V., Mihaylov T., Ott M., Shleifer S., Shuster K., Simig D., Koura P. S., Sridhar A., Wang T., Zettlemoyer L. (2022). OPT: Open Pre-trained Transformer Language Models // https://arxiv.org/abs/2205.01068
2651
Taori R., Gulrajani I., Zhang T., Dubois Y., Li X., Guestrin C., Liang P., Hashimoto T. B. (2023). Stanford Alpaca: An Instruction-following LLaMA model // https://github.com/tatsu-lab/stanford_alpaca
2652
Vicuna Team (2023). Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality // https://lmsys.org/blog/2023-03-30-vicuna/
2653
Dettmers T., Pagnoni A., Holtzman A., Zettlemoyer L. (2023). QLoRA: Efficient Finetuning of Quantized LLMs // https://arxiv.org/abs/2305.14314
2654
Geng X., Gudibande A., Liu H., Wallace E., Abbeel P., Levine S., Song D. (2023). Koala: A Dialogue Model for Academic Research // https://bair.berkeley.edu/blog/2023/04/03/koala/
2655
Patil S. G., Zhang T., Wang X., Gonzalez J. E. (2023). Gorilla: Large Language Model Connected with Massive APIs // https://arxiv.org/abs/2305.15334
2656
Mukherjee S., Mitra A., Jawahar G., Agarwal S., Palangi H., Awadallah A. (2023). Orca: Progressive