Large language model



A large language model (LLM) is a type of machine learning model designed for natural language processing (NLP) tasks such as language generation. LLMs are language models with many parameters, and they are trained with self-supervised learning on large amounts of text.
The largest and most capable LLMs are generative pretrained transformers (GPTs). Modern models can be fine-tuned for specific tasks or guided by prompt engineering.[1] These models acquire predictive power over the syntax, semantics, and ontologies[2] inherent in human language corpora, but they also inherit the inaccuracies and biases present in the data they are trained on.[3]
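The self-supervised training objective mentioned above can be illustrated with a short sketch. The example below is not part of the article: it assumes the openly available GPT-2 checkpoint and the Hugging Face transformers library, and shows, under those assumptions, how a next-token-prediction loss and prompt-driven generation are typically computed.

```python
# Minimal sketch (illustrative only): self-supervised next-token prediction
# with a small, openly available GPT-style model via Hugging Face transformers.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # assumed example checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Large language models are trained on large amounts of text."
inputs = tokenizer(text, return_tensors="pt")

# Using the input tokens themselves as labels makes the model predict each
# token from the ones before it; the loss is the cross-entropy of those
# predictions, so no human-annotated labels are required.
outputs = model(**inputs, labels=inputs["input_ids"])
print("training loss:", outputs.loss.item())

# The same model can be guided by a prompt to generate a continuation.
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

During actual pre-training this loss is minimized over a very large text corpus; fine-tuning repeats the same procedure on a smaller, task-specific dataset.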
See also
References
- ^ Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (December 2020). Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.F.; Lin, H. (eds.). "Language Models are Few-Shot Learners" (PDF). Advances in Neural Information Processing Systems. Curran Associates, Inc. 33: 1877-1901. Archived (PDF) from the original 2023-11-17. Retrieved 2023-03-14.
- ^ Fathallah, Nadeen; Das, Arunav; De Giorgis, Stefano; Poltronieri, Andrea; Haase, Peter; Kovriguina, Liubov (2024-05-26). NeOn-GPT: A Large Language Model-Powered Pipeline for Ontology Learning (PDF). Extended Semantic Web Conference 2024. Hersonissos, Greece.
- ^ Manning, Christopher D. (2022). "Human Language Understanding & Reasoning". Daedalus. 151 (2): 127-138. doi:10.1162/daed_a_01905. S2CID 248377870. Archived from the original 2023-11-17. Retrieved 2023-03-09.
Further reading
- Jurafsky, Dan; Martin, James H. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 3rd Edition draft, 2023.
- Zhao, Wayne Xin; et al. (2023). "A Survey of Large Language Models". arXiv:2303.18223 [cs.CL].
- Kaddour, Jean; et al. (2023). "Challenges and Applications of Large Language Models". arXiv:2307.10169 [cs.CL].
- Yin, Shukang; Fu, Chaoyou; Zhao, Sirui; Li, Ke; Sun, Xing; Xu, Tong; Chen, Enhong (2024). "A Survey on Multimodal Large Language Models". National Science Review. 11 (12): nwae403. arXiv:2306.13549. doi:10.1093/nsr/nwae403. PMC 11645129. PMID 39679213.
- "AI Index Report 2024 – Artificial Intelligence Index". aiindex.stanford.edu. Hentet 2024-05-05.
- Frank, Michael C. (27. juni 2023). "Baby steps in evaluating the capacities of large language models". Nature Reviews Psychology. 2 (8): 451-452. doi:10.1038/s44159-023-00211-x. ISSN 2731-0574. S2CID 259713140. Hentet 2. juli 2023.