This repository aims to keep track of the most worthwhile reading materials related to LLMs. Rather than including everything available, we focus on curating a collection of the most valuable, useful, and interesting materials.
We are eager to find more collaborators, so if you would like to contribute, you are more than WELCOME!
There are tons of materials on LLMs. To keep from getting lost in them, we built this repository. We personally read every material to make sure it is worth including here. Since this takes a long time, we also list materials that look promising for future reading. If you think a good material is missing, OPEN A PULL REQUEST! If you find this useful or want to follow our progress, we look forward to your ⭐!
✅ indicates that we have read a material and believe it is worth preserving here
🚩 indicates that we think a material is excellent and probably worth reading
🔥 indicates that a material is well-known in its field or popular now
- Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs
- (Video) OpenAI Talk: A Survey of Techniques for Maximizing LLM Performance ✅🚩
  A good talk that describes the specific steps of the LLM development process
- Decomposed Prompting: A Modular Approach for Solving Complex Tasks (ICLR2023)
- Step-Back Prompting, useful for RAG: Step-Back Prompting Enables Reasoning Via Abstraction in Large Language Models (ICLR2024, high review scores 8/8/8)
- Large Language Models and Search
- ACL 2023 Tutorial: Retrieval-based Language Models and Applications
- LlamaIndex talk
- Retrieval-Augmented Generation (RAG): From Theory to LangChain Implementation
- A Guide on 12 Tuning Strategies for Production-Ready RAG Applications
- A survey: Retrieval-Augmented Generation for Large Language Models: A Survey, with an analysis in Chinese ✅🚩
- 🔥Facebook introduces the concept of RAG: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (NeurIPS2020)
- 🔥HyDE: Precise Zero-Shot Dense Retrieval without Relevance Labels (ACL2023)
- Benchmarking Large Language Models in Retrieval-Augmented Generation (AAAI2024)
- REALM: Retrieval-Augmented Language Model Pre-Training (ICML2020)
- may be useful: RA-DIT: Retrieval-Augmented Dual Instruction Tuning (ICLR2024)
- 🔥Went viral on Twitter: Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection (ICLR2024, high review scores 6/8/8/8)
- Interesting, may be useful: Lift Yourself Up: Retrieval-augmented Text Generation with Self-Memory (NeurIPS2023)
- Retrieve from training data: Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data (ACL2022)
- Retrieve from recitation: Recitation-Augmented Language Models (ICLR2023)
- Generate rather than retrieve: Large language models are strong context generators (ICLR2023)
- Promptagator: Few-shot Dense Retrieval From 8 Examples (ICLR2023)
- PRCA: Fitting Black-Box Large Language Models for Retrieval Question Answering via Pluggable Reward-Driven Contextual Adapter (EMNLP2023)
- Augmentation-Adapted Retriever Improves Generalization of Language Models as Generic Plug-In (ACL2023)
- Understanding Retrieval Augmentation for Long-Form Question Answering (ICLR2024)
- Diversify Question Generation with Retrieval-Augmented Style Transfer (EMNLP2023)
- Rewrite query: Query Rewriting for Retrieval-Augmented Large Language Models (EMNLP2023)
- Retrieve prompt for 0-shot task: UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation (EMNLP2023)
- Large Language Models Can Be Easily Distracted by Irrelevant Context (ICML2023)
- Making Retrieval-Augmented Language Models Robust to Irrelevant Context (ICLR2024)
- Few-shot Learning with Retrieval Augmented Language Models (JMLR2022)
- Retrieval-Generation Synergy Augmented Large Language Models (arXiv)
- Enabling Large Language Models to Generate Text with Citations (arXiv)
- Dense X Retrieval: What Retrieval Granularity Should We Use? (arXiv)
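The RAG papers above all build on the same retrieve-then-read backbone: rank documents against the query, stuff the top hits into the prompt, and let the LLM answer from that context. A minimal sketch of the pattern (the token-overlap retriever, toy corpus, and prompt template here are illustrative stand-ins, not from any particular paper; a real system would use BM25 or dense embeddings plus an actual LLM call):

```python
import re

def tokenize(text):
    # Lowercased word tokens; a crude stand-in for a real text pipeline.
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank documents by token overlap with the query (toy replacement
    for a BM25 or dense-embedding retriever) and return the top-k."""
    scored = sorted(corpus,
                    key=lambda doc: len(tokenize(query) & tokenize(doc)),
                    reverse=True)
    return scored[:k]

def build_prompt(query, passages):
    """Assemble the retrieved passages into the context of an LLM prompt."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer using the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")

corpus = [
    "LoRA adapts large models by training low-rank update matrices.",
    "Retrieval-augmented generation grounds answers in retrieved passages.",
    "Self-RAG lets the model critique its own retrievals.",
]
query = "What is retrieval-augmented generation?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

Most of the papers in this section vary exactly one of these stages: what to retrieve from (training data, recitations), how to form the query (rewriting, HyDE, step-back), or how the reader handles irrelevant context.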
- 🔥LoRA: Low-Rank Adaptation of Large Language Models (ICLR2022)
- LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention (ICLR2024)
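The core idea shared by the two fine-tuning papers above: keep the pretrained weights frozen and train only a small, zero-initialized addition, so training starts exactly at the pretrained model. For LoRA, the addition is a low-rank product B·A. A pure-Python sketch (the sizes and scaling here follow the LoRA paper's h = Wx + (α/r)·BAx; matrices as nested lists for illustration only):

```python
import random

def matvec(M, x):
    # Plain matrix-vector product over nested lists.
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """h = W x + (alpha / r) * B (A x).
    W is frozen; only the small matrices A (r x d) and B (d x r) train."""
    base = matvec(W, x)
    update = matvec(B, matvec(A, x))
    return [b + (alpha / r) * u for b, u in zip(base, update)]

d, r = 8, 2
random.seed(0)
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]    # frozen
A = [[random.gauss(0, 0.02) for _ in range(d)] for _ in range(r)]  # random init
B = [[0.0] * r for _ in range(d)]                                  # zero init
x = [1.0] * d

# Because B starts at zero, the adapted layer initially matches the
# frozen layer exactly; training then moves only A and B.
assert lora_forward(W, A, B, x, r=r) == matvec(W, x)
```

The zero-initialization is the same trick LLaMA-Adapter applies to its attention gates: the adapter contributes nothing at step 0 and is learned gradually.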
- A survey: The Rise and Potential of Large Language Model Based Agents: A Survey (arXiv) A nice survey of LLM agents ✅🚩
- Another survey: A Survey on Large Language Model based Autonomous Agents (arXiv)
- Cognitive Architectures for Language Agents (arXiv) An architectural paradigm for agents ✅
- War agent: War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars (arXiv)
- Game agent: Human-level play in the game of Diplomacy by combining language models with strategic reasoning (Science) ✅
  An example of an agent participating in a game
- Agent society: CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society (NeurIPS2023)
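The agent papers above, from cognitive architectures to game-playing and society simulations, share one control structure: a policy (the LLM) repeatedly picks an action, a tool or environment executes it, and the observation is appended to the agent's memory. A toy version of that loop (the lookup-table policy and single `search` tool are purely illustrative stand-ins for an LLM and real tools):

```python
def toy_policy(history):
    """Stand-in for an LLM: chooses the next action from the trajectory."""
    if not history:
        return ("search", "capital of France")
    # Once we have an observation, answer with it.
    return ("finish", history[-1][1])

# A single toy "tool"; real agents wire in search APIs, code executors, etc.
TOOLS = {"search": lambda q: {"capital of France": "Paris"}.get(q, "unknown")}

def run_agent(max_steps=5):
    history = []  # (action, observation) pairs: the agent's working memory
    for _ in range(max_steps):
        action, arg = toy_policy(history)
        if action == "finish":
            return arg
        history.append((action, TOOLS[action](arg)))
    return None  # step budget exhausted

print(run_agent())  # -> Paris
```

Multi-agent systems such as CAMEL and WarAgent run several of these loops and route each agent's messages into the others' histories.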