Advanced Search Results For "LANGUAGE models"

1 - 10 of 26,042 results for "LANGUAGE models"

A Computational Inflection for Scientific Discovery. (cover story)

Publication Type: Periodical

Source(s): Communications of the ACM. Aug2023, Vol. 66 Issue 8, p62-73. 12p. 2 Color Photographs, 2 Diagrams, 2 Charts, 2 Graphs.

Abstract: This article presents an overview of task-guided scientific knowledge retrieval as a way for researchers to overcome the limitations of human cognitive capacity, which in the age of explosive digital information creates a cognitive bottleneck. Topics inc...

F-ALBERT: A Distilled Model from a Two-Time Distillation System for Reduced Computational Complexity in ALBERT Model.

Publication Type: Academic Journal

Source(s): Applied Sciences (2076-3417). Sep2023, Vol. 13 Issue 17, p9530. 16p.

Abstract: Recently, language models based on the Transformer architecture have been predominantly used in AI natural language processing. These models, which have been proven to perform better with more parameters, have led to a significant increase in model siz...

Cargo Cult AI: Is the ability to think scientifically the defining essence of intelligence?

Publication Type: Periodical

Source(s): Communications of the ACM. Sep2023, Vol. 66 Issue 9, p46-51. 6p. 1 Black and White Photograph.

Abstract: This article discusses large language models (LLMs) and artificial general intelligence (AGI), the ability to perform any general task a human is capable of doing; in this article, scientific thinking is the task of interest. Topics include a look at de...

Molecular Descriptors Property Prediction Using Transformer-Based Approach.

Publication Type: Academic Journal

Source(s): International Journal of Molecular Sciences. Aug2023, Vol. 24 Issue 15, p11948. 15p.

Abstract: In this study, we introduce semi-supervised machine learning models designed to predict molecular properties. Our model employs a two-stage approach, involving pre-training and fine-tuning. Particularly, our model leverages a substantial amount of labe...

The Smallness of Large Language Models: There is so much more to language and human beings than large language models can possibly master.

Publication Type: Periodical

Source(s): Communications of the ACM. Sep2023, Vol. 66 Issue 9, p24-27. 4p. 1 Color Photograph.

Abstract: In this article, the author discusses the limitations and dangers associated with large language models (LLMs), such as the GPT series. The author argues that LLMs, despite being touted as capable of containing all human knowledge, are actually limited...

Parameter-Efficient Fine-Tuning Method for Task-Oriented Dialogue Systems.

Publication Type: Academic Journal

Source(s): Mathematics (2227-7390). Jul2023, Vol. 11 Issue 14, p3048. 14p.

Abstract: The use of Transformer-based pre-trained language models has become prevalent in enhancing the performance of task-oriented dialogue systems. These models, which are pre-trained on large text data to grasp the language syntax and semantics, fine-tune t...

Hire Education.

Publication Type: Periodical

Source(s): Fast Company. Sep2023, Issue 258, p30-36. 4p. 2 Color Photographs.

Abstract: A few of Davis's friends turned him on to the idea of coding boot camps - intensive programs that teach essential skills to people seeking jobs in computer science. In July 2022, Davis enrolled in a boot camp through Lighthouse Labs, a Canadian for-pro...

Exploring Prompts in Few-Shot Cross-Linguistic Topic Classification Scenarios.

Publication Type: Academic Journal

Source(s): Applied Sciences (2076-3417). Sep2023, Vol. 13 Issue 17, p9944. 15p.

Abstract: In recent years, large-scale pretrained language models have become widely used in natural language processing tasks. On this basis, prompt learning has achieved excellent performance in specific few-shot classification scenarios. The core idea of prom...

THE FRONT-RUNNER.

Publication Type: Periodical

Source(s): Fast Company. Sep2023, Issue 258, p50-102. 9p. 10 Color Photographs.

Multi-Intent Natural Language Understanding Framework for Automotive Applications: A Heterogeneous Parallel Approach.

Publication Type: Academic Journal

Source(s): Applied Sciences (2076-3417). Sep2023, Vol. 13 Issue 17, p9919. 21p.

Abstract: Natural language understanding (NLU) is an important aspect of achieving human–machine interaction in the automotive application field, consisting of two core subtasks: multiple-intent detection and slot filling (ID-SF). However, existing joint multi...
