Papers
The following are the latest papers on prompt engineering, sorted by release date. We update this list daily as new papers become available, and we incorporate summaries of these papers into the guides above every week.
Overviews
- Natural Language Reasoning, A Survey (Mar 2023)
- Augmented Language Models: a Survey (Feb 2023)
- A Survey for In-context Learning (Dec 2022)
- Towards Reasoning in Large Language Models: A Survey (Dec 2022)
- Reasoning with Language Model Prompting: A Survey (Dec 2022)
- Emergent Abilities of Large Language Models (Jun 2022)
- A Taxonomy of Prompt Modifiers for Text-To-Image Generation (Apr 2022)
- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing (Jul 2021)
Approaches
- Self-Refine: Iterative Refinement with Self-Feedback (Mar 2023)
- kNN Prompting: Beyond-Context Learning with Calibration-Free Nearest Neighbor Inference (Mar 2023)
- Visual-Language Prompt Tuning with Knowledge-guided Context Optimization (Mar 2023)
- Fairness-guided Few-shot Prompting for Large Language Models (Mar 2023)
- Context-faithful Prompting for Large Language Models (Mar 2023)
- Is Prompt All You Need? No. A Comprehensive and Broader View of Instruction Learning (Mar 2023)
- UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation (Mar 2023)
- Model-tuning Via Prompts Makes NLP Models Adversarially Robust (Mar 2023)
- Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer (Mar 2023)
- CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification (Mar 2023)
- Larger language models do in-context learning differently (Mar 2023)
- OpenICL: An Open-Source Framework for In-context Learning (Mar 2023)
- Dynamic Prompting: A Unified Framework for Prompt Tuning (Mar 2023)
- Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning (Mar 2023)
- Effectiveness of Data Augmentation for Prefix Tuning with Limited Data (Mar 2023)
- Mixture of Soft Prompts for Controllable Data Generation (Mar 2023)
- Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners (Mar 2023)
- How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks (Mar 2023)
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT (Feb 2023)
- EvoPrompting: Language Models for Code-Level Neural Architecture Search (Feb 2023)
- In-Context Instruction Learning (Feb 2023)
- Chain of Hindsight Aligns Language Models with Feedback (Feb 2023)
- Language Is Not All You Need: Aligning Perception with Language Models (Feb 2023)
- Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data (Feb 2023)
- Active Prompting with Chain-of-Thought for Large Language Models (Feb 2023)
- More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models (Feb 2023)
- A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT (Feb 2023)
- Guiding Large Language Models via Directional Stimulus Prompting (Feb 2023)
- How Does In-Context Learning Help Prompt Tuning? (Feb 2023)
- Scalable Prompt Generation for Semi-supervised Learning with Language Models (Feb 2023)
- Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints (Feb 2023)
- À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting (Feb 2023)
- GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks (Feb 2023)
- The Capacity for Moral Self-Correction in Large Language Models (Feb 2023)
- SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains (Feb 2023)
- Evaluating the Robustness of Discrete Prompts (Feb 2023)
- Compositional Exemplars for In-context Learning (Feb 2023)
- Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery (Feb 2023)
- Multimodal Chain-of-Thought Reasoning in Language Models (Feb 2023)
- Large Language Models Can Be Easily Distracted by Irrelevant Context (Feb 2023)
- Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models (Feb 2023)
- Progressive Prompts: Continual Learning for Language Models (Jan 2023)
- Batch Prompting: Efficient Inference with LLM APIs (Jan 2023)
- Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP (Dec 2022)
- On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning (Dec 2022)
- Constitutional AI: Harmlessness from AI Feedback (Dec 2022)
- Successive Prompting for Decomposing Complex Questions (Dec 2022)
- Large Language Models are reasoners with Self-Verification (Dec 2022)
- Discovering Language Model Behaviors with Model-Written Evaluations (Dec 2022)
- Structured Prompting: Scaling In-Context Learning to 1,000 Examples (Dec 2022)
- PAL: Program-aided Language Models (Nov 2022)
- Large Language Models Are Human-Level Prompt Engineers (Nov 2022)
- Ignore Previous Prompt: Attack Techniques For Language Models (Nov 2022)
- Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods (Nov 2022)
- Teaching Algorithmic Reasoning via In-context Learning (Nov 2022)
- Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference (Nov 2022)
- Ask Me Anything: A simple strategy for prompting language models (Oct 2022)
- Recitation-Augmented Language Models (Oct 2022)
- ReAct: Synergizing Reasoning and Acting in Language Models (Oct 2022)
- Prompting GPT-3 To Be Reliable (Oct 2022)
- Decomposed Prompting: A Modular Approach for Solving Complex Tasks (Oct 2022)
- Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought (Oct 2022)
- Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples (Sep 2022)
- Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning (Sep 2022)
- Promptagator: Few-shot Dense Retrieval From 8 Examples (Sep 2022)
- Atlas: Few-shot Learning with Retrieval Augmented Language Models (Nov 2022)
- DocPrompting: Generating Code by Retrieving the Docs (Jul 2022)
- On the Advance of Making Language Models Better Reasoners (Jun 2022)
- Large Language Models are Zero-Shot Reasoners (May 2022)
- Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations (May 2022)
- MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning (May 2022)
- PPT: Pre-trained Prompt Tuning for Few-shot Learning (May 2022)
- Toxicity Detection with Generative Prompt-based Inference (May 2022)
- Learning to Transfer Prompts for Text Generation (May 2022)
- The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning (May 2022)
- A Taxonomy of Prompt Modifiers for Text-To-Image Generation (Apr 2022)
- PromptChainer: Chaining Large Language Model Prompts through Visual Programming (Mar 2022)
- Self-Consistency Improves Chain of Thought Reasoning in Language Models (Mar 2022)
- Training language models to follow instructions with human feedback (Mar 2022)
- Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? (Feb 2022)
- Chain of Thought Prompting Elicits Reasoning in Large Language Models (Jan 2022)
- Show Your Work: Scratchpads for Intermediate Computation with Language Models (Nov 2021)
- AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts (Oct 2021)
- Generated Knowledge Prompting for Commonsense Reasoning (Oct 2021)
- Multitask Prompted Training Enables Zero-Shot Task Generalization (Oct 2021)
- Reframing Instructional Prompts to GPTk's Language (Sep 2021)
- Design Guidelines for Prompt Engineering Text-to-Image Generative Models (Sep 2021)
- Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity (Apr 2021)
- BERTese: Learning to Speak to BERT (Apr 2021)
- The Power of Scale for Parameter-Efficient Prompt Tuning (Apr 2021)
- Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm (Feb 2021)
- Calibrate Before Use: Improving Few-Shot Performance of Language Models (Feb 2021)
- Prefix-Tuning: Optimizing Continuous Prompts for Generation (Jan 2021)
- Learning to Generate Task-Specific Adapters from Task Description (Jan 2021)
- Making Pre-trained Language Models Better Few-shot Learners (Dec 2020)
- Learning from Task Descriptions (Nov 2020)
- AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts (Oct 2020)
- How Can We Know What Language Models Know? (Jul 2020)
- Language Models are Few-Shot Learners (May 2020)
- Scaling Laws for Neural Language Models (Jan 2020)
Applications
- PaLM 2 Technical Report (May 2023)
- BloombergGPT: A Large Language Model for Finance (Mar 2023)
- Medical Intervention Duration Estimation Using Language-enhanced Transformer Encoder with Medical Prompts (Mar 2023)
- Soft-prompt tuning to predict lung cancer using primary care free-text Dutch medical notes (Mar 2023)
- TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs (Mar 2023)
- Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning (Mar 2023)
- Linguistically Informed ChatGPT Prompts to Enhance Japanese-Chinese Machine Translation: A Case Study on Attributive Clauses (Mar 2023)
- Knowledge-augmented Frame Semantic Parsing with Hybrid Prompt-tuning (Mar 2023)
- Debiasing Scores and Prompts of 2D Diffusion for Robust Text-to-3D Generation (Mar 2023)
- Zero-shot Model Diagnosis (Mar 2023)
- Prompting Large Language Models to Generate Code-Mixed Texts: The Case of South East Asian Languages (Mar 2023)
- SPeC: A Soft Prompt-Based Calibration on Mitigating Performance Variability in Clinical Notes Summarization (Mar 2023)
- Large Language Models and Simple, Stupid Bugs (Mar 2023)
- Can Generative Pre-trained Transformers (GPT) Pass Assessments in Higher Education Programming Courses? (Mar 2023)
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (Mar 2023)
- ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction (Mar 2023)
- MathPrompter: Mathematical Reasoning using Large Language Models (Mar 2023)
- Prompt-Based Learning for Thread Structure Prediction in Cybersecurity Forums (Mar 2023)
- Choice Over Control: How Users Write with Large Language Models using Diegetic and Non-Diegetic Prompting (Mar 2023)
- Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering (Mar 2023)
- Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis (Mar 2023)
- SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks (Mar 2023)
- Goal Driven Discovery of Distributional Differences via Language Descriptions (Feb 2023)
- Navigating the Grey Area: Expressions of Overconfidence and Uncertainty in Language Models (Feb 2023)
- TabGenie: A Toolkit for Table-to-Text Generation (Feb 2023)
- SGL-PT: A Strong Graph Learner with Graph Prompt Tuning (Feb 2023)
- Few-Shot Table-to-Text Generation with Prompt-based Adapter (Feb 2023)
- Language Models Are Few-shot Learners for Prognostic Prediction (Feb 2023)
- STA: Self-controlled Text Augmentation for Improving Text Classifications (Feb 2023)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback (Feb 2023)
- How Generative AI models such as ChatGPT can be (Mis)Used in SPC Practice, Education, and Research? An Exploratory Study (Feb 2023)
- Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales (Feb 2023)
- LabelPrompt: Effective Prompt-based Learning for Relation Classification (Feb 2023)
- Language Model Crossover: Variation through Few-Shot Prompting (Feb 2023)
- Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition (Feb 2023)
- The Capacity for Moral Self-Correction in Large Language Models (Feb 2023)
- Prompting for Multimodal Hateful Meme Classification (Feb 2023)
- PLACES: Prompting Language Models for Social Conversation Synthesis (Feb 2023)
- Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation (Feb 2023)
- Crawling the Internal Knowledge-Base of Language Models (Jan 2023)
- Legal Prompt Engineering for Multilingual Legal Judgement Prediction (Dec 2022)
- Investigating Prompt Engineering in Diffusion Models (Nov 2022)
- Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language (Oct 2022)
- Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic? (Oct 2022)
- Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering (Sep 2022)
- Plot Writing From Pre-Trained Language Models (Jul 2022)
- Survey of Hallucination in Natural Language Generation (Feb 2022)