1968
Wu D. J. (2019). Accelerating Self-Play Learning in Go // https://arxiv.org/abs/1902.10565
1969
Boardsize 19x19 - 15 minutes per side (2021) / Computer Go Server, Last Update: 2021-03-14 14:43:24 UTC // http://www.yss-aya.com/cgos/19x19/standings.html
1970
Nasu Y. (2018). ƎUИИ: Efficiently Updatable Neural-Network-based Evaluation Functions for Computer Shogi // https://www.apply.computer-shogi.org/wcsc28/appeal/the_end_of_genesis_T.N.K.evolution_turbo_type_D/nnue.pdf
1971
Chess Programming Wiki contributors. (2020, August 31). Stockfish NNUE. In Chess Programming Wiki. Retrieved 08:00, September 2, 2020, from https://www.chessprogramming.org/Stockfish_NNUE
1972
Poundstone W. (2011). Prisoner's Dilemma. Knopf Doubleday Publishing Group // https://books.google.ru/books?id=twNXXfYVB1UC
1973
Bowling M., Burch N., Johanson M., Tammelin O. (2015). Heads-up Limit Hold’em Poker is Solved / Science, Vol. 347, Iss. 6218, pp. 145—149 // https://doi.org/10.1126/science.1259433
1974
Moravčík M., Schmid M., Burch N., Lisý V., Morrill D., Bard N., Davis T., Waugh K., Johanson M., Bowling M. (2017). DeepStack: Expert-level artificial intelligence in heads-up no-limit poker / Science, Vol. 356, Iss. 6337, pp. 508—513 // https://doi.org/10.1126/science.aam6960
1975
Metz C. (2017). Inside Libratus, the Poker AI That Out-Bluffed the Best Humans / Wired, 02.01.17 // https://www.wired.com/2017/02/libratus/
1976
Rodriguez J. (2019). Inside Pluribus: Facebook’s New AI That Just Mastered the World’s Most Difficult Poker Game / KDnuggets // https://www.kdnuggets.com/2019/08/inside-pluribus-facebooks-new-ai-poker.html
1977
Blair A., Saffidine A. (2019). AI surpasses humans at six-player poker / Science, Vol. 365, Iss. 6456, pp. 864–865 // https://doi.org/10.1126/science.aay7774
1978
Brown N., Lerer A., Gross S., Sandholm T. (2019). Deep Counterfactual Regret Minimization / Proceedings of the 36th International Conference on Machine Learning, PMLR 97:793-802 // http://proceedings.mlr.press/v97/brown19b.html
1979
Ontañón S., Synnaeve G., Uriarte A., Richoux F., Churchill D., Preuss M. (2013). A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft / IEEE Transactions on Computational Intelligence and AI in Games, Vol. 5, No. 4, pp. 293—311 // https://doi.org/10.1109/TCIAIG.2013.2286295
1980
Schulman J., Klimov O., Wolski F., Dhariwal P., Radford A. (2017). Proximal Policy Optimization / OpenAI blog, July 20, 2017 // https://openai.com/blog/openai-baselines-ppo/
1981
Chan B., Tang J., Pondé H., Raiman J., Wolski F., Petrov M., Zhang S., Dennison C., Farhi D., Sidor S., Dębiak P., Pachocki J., Brockman G. (2018). OpenAI Five: Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2 / OpenAI blog // https://openai.com/blog/openai-five/
1982
Matiisen T. (2018). The use of Embeddings in OpenAI Five / Computational Neuroscience Lab, Institute of Computer Science, University of Tartu, September 9, 2018 // https://neuro.cs.ut.ee/the-use-of-embeddings-in-openai-five/
1983
Chan B., Tang J., Pondé H., Raiman J., Wolski F., Petrov M., Zhang S., Dennison C., Farhi D., Sidor S., Dębiak P., Pachocki J., Brockman G. (2018). OpenAI Five: Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2 / OpenAI blog // https://openai.com/blog/openai-five/
1984
OpenAI Five Defeats Dota 2 World Champions (2019) / OpenAI blog, April 15, 2019 // https://openai.com/blog/openai-five-defeats-dota-2-world-champions/
1985
Vinyals O., Babuschkin I., Chung J., Mathieu M., Jaderberg M., Czarnecki W., Dudzik A., Huang A., Georgiev P., Powell R., Ewalds T., Horgan D., Kroiss M., Danihelka I., Agapiou J., Oh J., Dalibard V., Choi D., Sifre L., Sulsky Y., Vezhnevets S., Molloy J., Cai T., Budden D., Paine T., Gulcehre C., Wang Z., Pfaff T., Pohlen T., Yogatama D., Cohen J., McKinney K., Smith O., Schaul T., Lillicrap T., Apps C., Kavukcuoglu K., Hassabis D., Silver D. (2019). AlphaStar: Mastering the Real-Time Strategy Game StarCraft II / DeepMind blog, 24 Jan 2019 // https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/
1986
Wünsch D. (2019) / Twitter // https://twitter.com/liquidtlo/status/1088524496246657030
1987
Solimito S. (2019). Is Alphastar really impressive? // https://medium.com/@stefano.solimito/is-alphastar-really-impressive-31ab02bf0882
1988
Kosker S. (2019). Künstliche Intelligenz gegen Mensch: DeepMind AlphaStar [Artificial intelligence vs. humans: DeepMind AlphaStar] // https://stefankosker.com/alphastar-starcraft-deepmind-kuenstliche-intelligenz/#Prominente_Meinungen_zu_AlphaStar
1989
Lee T. B. (2019). An AI crushed two human pros at StarCraft—but it wasn’t a fair fight / Ars Technica // https://arstechnica.com/gaming/2019/01/an-ai-crushed-two-human-pros-at-starcraft-but-it-wasnt-a-fair-fight/
1990
u/SoulDrivenOlives (2019). DeepMind's PR regarding Alphastar is unbelievably baffling / Reddit // https://www.reddit.com/r/MachineLearning/comments/dr2vir/d_deepminds_pr_regarding_alphastar_is/
1991
Lee T. B. (2019). An AI crushed two human pros at StarCraft—but it wasn’t a fair fight. Superhuman speed and precision helped a StarCraft AI defeat two top players / Ars Technica, 1/30/2019 // https://arstechnica.com/gaming/2019/01/an-ai-crushed-two-human-pros-at-starcraft-but-it-wasnt-a-fair-fight/
1992
u/SoulDrivenOlives (2019). [D] An analysis on how AlphaStar's superhuman speed is a band-aid fix for the limitations of imitation learning / Reddit // https://www.reddit.com/r/MachineLearning/comments/ak3v4i/d_an_analysis_on_how_alphastars_superhuman_speed/
1993
Vinyals O., Babuschkin I., Czarnecki W. M., Mathieu M., Dudzik A., Chung J., Choi D. H., Powell R., Ewalds T., Georgiev P., Oh J., Horgan D., Kroiss M., Danihelka I., Huang A., Sifre L., Cai T., Agapiou J. P., Jaderberg M., Vezhnevets A. S., Leblond R., Pohlen T., Dalibard V., Budden D., Sulsky Y., Molloy J., Paine T. L., Gulcehre C., Wang Z., Pfaff T., Wu Y., Ring R., Yogatama D., Wünsch D., McKinney K., Smith O., Schaul T., Lillicrap T., Kavukcuoglu K., Hassabis D., Apps C., Silver D. (2019). Grandmaster level in StarCraft II using multi-agent reinforcement learning / Nature, Vol. 575, pp. 350–354 // https://doi.org/10.1038/s41586-019-1724-z
1994
* Translated by M. Lozinsky.
1995
Pandya D. A., Dennis B. H., Russell R. D. (2017). A computational fluid dynamics based artificial neural network model to predict solid particle erosion / Wear, Vol. 378—379, 15 May 2017, pp. 198—210 // https://doi.org/10.1016/j.wear.2017.02.028