Nest AI Research - LLM & Tasks

Posted by Riino on

Selected Key Research on LLMs
 

Survey

LLM family survey (LLM Summary)

Tools:

PLMs

Colossal (based on LLaMA)
Falcon
LaMini-Flan-T5
Stable
Vicuna: reported to reach ~90% of ChatGPT (GPT-3.5) quality
Dolly V2 (based on Pythia): a commercially licensed alternative to LLaMA

Factuality

CONNER: COmpreheNsive kNowledge Evaluation fRamework
Are You Sure That This Happened? Assessing the Factuality Degree of Events in Text
Abstract. Identifying the veracity, or factuality, of event mentions in text is fundamental for reasoning about eventualities in discourse. Inferences derived from events judged as not having happened, or as being only possible, are different from those derived from events evaluated as factual. Event factuality involves two separate levels of information. On the one hand, it deals with polarity, which distinguishes between positive and negative instantiations of events. On the other, it has to do with degrees of certainty (e.g., possible, probable), an information level generally subsumed under the category of epistemic modality. This article aims at contributing to a better understanding of how event factuality is articulated in natural language. For that purpose, we put forward a linguistic-oriented computational model which has at its core an algorithm articulating the effect of factuality relations across levels of syntactic embedding. As a proof of concept, this model has been implemented in De Facto, a factuality profiler for eventualities mentioned in text, and tested against a corpus built specifically for the task, yielding an F1 of 0.70 (macro-averaging) and 0.80 (micro-averaging). These two measures mutually compensate for an over-emphasis present in the other (either on the lesser or greater populated categories), and can therefore be interpreted as the lower and upper bounds of De Facto's performance.
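The abstract describes factuality as two crossed levels: polarity (positive/negative) and epistemic certainty (certain/probable/possible). A minimal sketch of that label space, assuming FactBank-style label names (CT+, PS-, ...) that are a convention of that line of work, not taken from this post:

```python
# Hypothetical sketch of the two-level factuality scheme: polarity
# (+ / -) crossed with epistemic certainty (certain / probable /
# possible). Label names follow the FactBank-style convention and are
# an assumption here.

POLARITIES = {"+": "positive", "-": "negative"}
CERTAINTIES = {"CT": "certain", "PR": "probable", "PS": "possible"}

def factuality_label(certainty: str, polarity: str) -> str:
    """Combine a certainty degree and a polarity into one factuality value."""
    return f"{certainty}{polarity} ({CERTAINTIES[certainty]} {POLARITIES[polarity]})"

# A factual event is certain-positive; a counterfactual one certain-negative.
print(factuality_label("CT", "+"))  # CT+ (certain positive)
print(factuality_label("PS", "-"))  # PS- (possible negative)
```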

Papers - LLM Enhancement

[Text-to-SQL] GNN to Transformer
[Multimodal] Single Diffuser for Video/Voice/Text/Image
[Model Structure] RNN + Transformer
[Multimodal] Output Keyboard/Mouse Operation Commands (Recursive Criticism and Improvement (RCI))
[Multimodal] HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face (JARVIS)
[Model Structure] ‘My story has several episodes, kindly watch the “previously on”’ (Amplify input length)
‘GPT-4 will judge you, rather than me.’ (LLM benchmark strategy)
‘You guys talk with each other’ (GAN-like LLM tuning)
‘Look at what you just said.’ (Key concept of the AutoGPT family)
‘You can join the exam many times’ (Self-consistency optimization)
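A minimal sketch of the self-consistency idea behind ‘you can join the exam many times’: sample several reasoning chains, extract each final answer, and keep the majority answer. `sample_answers` stands in for answers extracted from an LLM sampled with temperature > 0 and is an assumption here:

```python
from collections import Counter

# Self-consistency decoding, sketched: majority vote over the final
# answers from multiple independently sampled reasoning chains.

def self_consistent_answer(sample_answers):
    """Return the most frequent answer across sampled chains."""
    counts = Counter(sample_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Five sampled chains disagree; the vote recovers the consensus answer.
samples = ["42", "42", "17", "42", "17"]
print(self_consistent_answer(samples))  # 42
```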

Backbones

[Interpretability] Preliminary evidence of relationships between neurons in LLMs and knowledge relations (selected relations in a knowledge graph), obtained by suppressing or amplifying ‘knowledge neurons’ while observing knowledge attribution
notion image
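The probe above can be sketched as: scale one hidden unit in a feed-forward layer down (suppress) or up (amplify) and measure how the output shifts. All shapes and weights below are made-up toy values, not from the paper:

```python
import numpy as np

# Toy sketch of a knowledge-neuron probe: zero out (suppress) or scale
# up (amplify) one FFN activation and compare the resulting outputs.

rng = np.random.default_rng(0)
W_in = rng.normal(size=(8, 16))   # hidden state -> FFN activations
W_out = rng.normal(size=(16, 8))  # FFN activations -> hidden state

def ffn(h, neuron=None, scale=1.0):
    a = np.maximum(W_in.T @ h, 0.0)   # ReLU activations
    if neuron is not None:
        a[neuron] *= scale            # suppress (scale=0.0) or amplify (>1)
    return W_out.T @ a

h = rng.normal(size=8)
base = ffn(h)
suppressed = ffn(h, neuron=3, scale=0.0)
amplified = ffn(h, neuron=3, scale=2.0)
# The size of the shift shows how much that single neuron contributes.
print(np.linalg.norm(base - suppressed), np.linalg.norm(base - amplified))
```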
A new gradient-descent method
[Model Structure] Soft Tuning
[Model Structure] Attention
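The attention building block referenced above is scaled dot-product attention, softmax(QKᵀ/√d)V. A minimal NumPy sketch with toy shapes:

```python
import numpy as np

# Scaled dot-product attention: each query attends over all keys, and
# the softmax weights mix the corresponding value vectors.

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (n_q, n_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (n_q, d_v)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(5, 8))   # 5 keys
V = rng.normal(size=(5, 8))   # 5 values
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```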