for language models, for example Direct Preference Optimization (DPO) and even reinforcement learning from AI feedback (RL from AI Feedback, RLAIF).
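For context, DPO (cited in note 2677) replaces the usual RLHF pipeline of a separate reward model plus PPO with a single classification-style loss over preference pairs. Below is a minimal PyTorch sketch of that loss; the tensor names and the default beta value are illustrative assumptions, not taken from any particular implementation. Each argument is the summed per-token log-probability of a completion under the trained policy or the frozen reference model.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratios of the trained policy to the frozen reference model
    # for the preferred (chosen) and dispreferred (rejected) answers.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # DPO trains the policy so that the implicit reward of the chosen
    # answer exceeds that of the rejected one; beta scales the implicit
    # KL penalty keeping the policy close to the reference model.
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()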

2677

Rafailov R., Sharma A., Mitchell E., Ermon S., Manning C. D., Finn C. (2023). Direct Preference Optimization: Your Language Model is Secretly a Reward Model // https://arxiv.org/abs/2305.18290

2678

Bai Y., Kadavath S., Kundu S., Askell A., Kernion J., Jones A., Chen A., Goldie A., Mirhoseini A., McKinnon C., Chen C., Olsson C., Olah C., Hernandez D., Drain D., Ganguli D., Li D., Tran-Johnson E., Perez E., Kerr J., Mueller J., Ladish J., Landau J., Ndousse K., Lukosuite K., Lovitt L., Sellitto M., Elhage N., Schiefer N., Mercado N., DasSarma N., Lasenby R., Larson R., Ringer S., Johnston S., Kravec S., Showk S. E., Fort S., Lanham T., Telleen-Lawton T., Conerly T., Henighan T., Hume T., Bowman S. R., Hatfield-Dodds Z., Mann B., Amodei D., Joseph N., McCandlish S., Brown T., Kaplan J. (2022). Constitutional AI: Harmlessness from AI Feedback // https://arxiv.org/abs/2212.08073

2679

Averkiev S. (2023). It's Not a Chat, It's GigaChat: A Russian-Language ChatGPT from Sber / Habr, April 24, 2023 // https://habr.com/ru/companies/sberbank/articles/730108/

2680

Bommasani R., Hudson D. A., Adeli E., Altman R., Arora S., von Arx S., Bernstein M. S., Bohg J., Bosselut A., Brunskill E., Brynjolfsson E., Buch S., Card D., Castellon R., Chatterji N., Chen A., Creel K., Davis J. Q., Demszky D., Donahue C., Doumbouya M., Durmus E., Ermon S., Etchemendy J., Ethayarajh K., Fei-Fei L., Finn C., Gale T., Gillespie L., Goel K., Goodman N., Grossman S., Guha N., Hashimoto T., Henderson P., Hewitt J., Ho D. E., Hong J., Hsu K., Huang J., Icard T., Jain S., Jurafsky D., Kalluri P., Karamcheti S., Keeling G., Khani F., Khattab O., Koh P. W., Krass M., Krishna R., Kuditipudi R., Kumar A., Ladhak F., Lee M., Lee T., Leskovec J., Levent I., Li X. L., Li X., Ma T., Malik A., Manning C. D., Mirchandani S., Mitchell E., Munyikwa Z., Nair S., Narayan A., Narayanan D., Newman B., Nie A., Niebles J. C., Nilforoshan H., Nyarko J., Ogut G., Orr L., Papadimitriou I., Park J. S., Piech C., Portelance E., Potts C., Raghunathan A., Reich R., Ren H., Rong F., Roohani Y., Ruiz C., Ryan J., Ré C., Sadigh D., Sagawa S., Santhanam K., Shih A., Srinivasan K., Tamkin A., Taori R., Thomas A. W., Tramèr F., Wang R. E., Wang W., Wu B., Wu J., Wu Y., Xie S. M., Yasunaga M., You J., Zaharia M., Zhang M., Zhang T., Zhang X., Zhang Y. (2021). On the Opportunities and Risks of Foundation Models // https://arxiv.org/abs/2108.07258

2681

Dao T., Fu D. Y., Ermon S., Rudra A., Ré C. (2022). FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness // https://arxiv.org/abs/2205.14135

2682

Dao T. (2023). FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning // https://arxiv.org/abs/2307.08691

2683

Shang Y., Yuan Z., Wu Q., Dong Z. (2023). PB-LLM: Partially Binarized Large Language Models // https://arxiv.org/abs/2310.00034

2684

Nagel M., Fournarakis M., Amjad R. A., Bondarenko Y., van Baalen M., Blankevoort T. (2021). A White Paper on Neural Network Quantization // https://arxiv.org/abs/2106.08295

2685

Gholami A., Kim S., Dong Z., Yao Z., Mahoney M. W., Keutzer K. (2021). A Survey of Quantization Methods for Efficient Neural Network Inference // https://arxiv.org/abs/2103.13630

2686

Dettmers T., Pagnoni A., Holtzman A., Zettlemoyer L. (2023). QLoRA: Efficient Finetuning of Quantized LLMs // https://arxiv.org/abs/2305.14314

2687

Rush A. (2023). llama2.rs // https://github.com/srush/llama2.rs

2688

Li X., Yao Y., Jiang X., Fang X., Meng X., Fan S., Han P., Li J., Du L., Qin B., Zhang Z., Sun A., Wang Y. (2023). FLM-101B: An Open LLM and How to Train It with $100K Budget // https://arxiv.org/abs/2309.03852

2689

Bengio Y., Louradour J., Collobert R., Weston J. (2009). Curriculum Learning / ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41–48. // https://doi.org/10.1145/1553374.1553380

2690

Graves A., Bellemare M. G., Menick J., Munos R., Kavukcuoglu K. (2017). Automated Curriculum Learning for Neural Networks // https://arxiv.org/abs/1704.03003

2691

Li C., Zhang M., He Y. (2022). The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models // https://openreview.net/forum?id=JpZ5du_Kdh

2692

Li S. (2023). Variable Sequence Length Training for Long-Context Large Language Models / Cerebras Developer Blog, July 22, 2023 // https://www.cerebras.net/blog/variable-sequence-length-training-for-long-context-large-language-models/

2693

DeepSpeed Data Efficiency: A composable library that makes better use of data, increases training efficiency, and improves model quality (2023). / deepspeed.ai, September 26, 2023. // https://www.deepspeed.ai/tutorials/data-efficiency/

2694

Fernandez J., Downey D. (2018). Sampling Informative Training Data for RNN Language Models / Proceedings of ACL 2018, Student Research Workshop, pp. 9–13. // https://doi.org/10.18653/v1/P18-3002

2695

Wang H., Huang M., Huang R., Hong L., Xu H., Hu T., Liang X., Li Z. (2023). Boosting Visual-Language Models by Exploiting Hard Samples // https://arxiv.org/abs/2305.05208

2696

Keles F. D., Wijewardena P. M., Hegde C. (2023). On the Computational Complexity of Self-Attention / Proceedings of Machine Learning Research, Vol. 201, pp. 1–23 // https://proceedings.mlr.press/v201/duman-keles23a/duman-keles23a.pdf

2697

* Silver bullet: a metaphor for a simple solution to a complex problem.

2698

Tay Y., Dehghani M., Abnar S., Chung H. W., Fedus W., Rao J., Narang S., Tran V. Q., Yogatama D., Metzler D. (2022). Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling? // https://arxiv.org/abs/2207.10551
