Phuong-Hang Le - Lightweight Adapter Tuning for Multilingual Speech ...
Adapter Tuning Overview: Representative Work in CV, NLP, and Multimodal Domains (Multimodal Fusion Adapters) - CSDN Blog
Continual Adapter Tuning (CAT): A Parameter-Efficient Machine Learning ...
Prompt Tuning vs Adapter Tuning | AI Tutorial | Next Electronics
Summary Of Adapter Based Performance Efficient Fine Tuning (PEFT ...
[Paper Review] Budget-Adaptive Adapter Tuning in Orthogonal Subspaces for ...
Fulcrum Adapter Tuning Protocol with the transmission of ...
COX Tuning Adapter Instructions
Pre-trained Model Fine-Tuning | An Introduction to Adapter Tuning - Zhihu
Adapter Layers
depicts these three representative parameter efficient tuning models ...
Figure 1 from LLM-Adapters: An Adapter Family for Parameter-Efficient ...
Ensemble of the proposed fine-tuning, adapter and proposed ...
Adapter Tuning: An Efficient Transfer Learning Method for NLP (Algorithm Principles) - CSDN Blog
Embedding Adapter Fine-Tuning Guide for RAG | LlamaIndex
LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of ...
"Adapter is All You Need for Tuning Visual Tasks" Paper Explained Sentence by Sentence - CSDN Blog
[2106.03164] On the Effectiveness of Adapter-based Tuning for ...
[Large Language Models] LLM-Adapters: An Adapter Family for Parameter-Efficient Fine ...
The overall architecture of Adapter Tuning, Note that the original ...
Pre-trained Model Fine-Tuning | An Introduction to Adapter Tuning - Tencent Cloud Developer Community
Parameter-Efficient Transfer Learning: A Roundup of Adapter Tuning Papers (Part 1) - Zhihu
Understanding Parameter-Efficient Finetuning of Large Language Models ...
Fully Understanding Fine-tuning: Parameter-Efficient Fine-Tuning - 53AI AI Knowledge ...
Finetuning LLMs Efficiently with Adapters
A Survey of Parameter-Efficient Fine-Tuning Techniques for Large Models (Part 4): Adapter Tuning and Its Variants - Zhihu
AdapterHub - Adapter-Transformers v3 - Unifying Efficient Fine-Tuning
[Paper Notes] Parameter-Efficient Transfer Learning for NLP - CSDN Blog
[Field Summary] [PEFT] A Brief Discussion of Adapter-tuning - Zhihu
Making Large Models Easy to Tune: An Introduction to PEFT Techniques - Zhihu
Large Model Fine-Tuning Techniques (Part 6): MAM Adapter (A Unified Framework Covering Adapter-Tuning, Prefix-Tuning, and LoRA) [Freezing Large ...
[1902.00751] Parameter-Efficient Transfer Learning for NLP
[2305.15036] Exploring Adapter-based Transfer Learning for Recommender ...
Parameter-Efficient Transfer Learning Series: Adapter - CSDN Blog
Adapters: A Compact and Extensible Transfer Learning Method for NLP ...
Paper Reading: Parameter-Efficient Transfer Learning for NLP (2019) - Zhihu
Finetuning Falcon LLMs More Efficiently With LoRA and Adapters ...
Large Model Fine-Tuning Techniques (Adapter-Tuning, Prefix-Tuning, Prompt-Tuning (P-Tuning), P-Tuning ...
Understanding LoRA, Prompt Tuning, P-Tuning, Adapter, Prefix, and Other Large-Model Fine-Tuning Methods - Zhihu
Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA ...
Large Language Models (15): Adapter Tuning - Community - Miyoushe
Parameter-Efficient LLM Finetuning With Low-Rank Adaptation (LoRA)
Parameter Efficient LLM Fine-Tuning
Results of adapter-tuning RoBERTa with different strategies | Download ...
PEFT Techniques for Large Models (Part 3): Adapter Tuning and Its Variants - Zhihu
AdapterEM: Pre-trained Language Model Adaptation for Generalized Entity ...
[NLP Learning] An Introduction to Adapters - Zhihu
LLM Fine-tuning Study Notes (Part 1): trl + peft - 云野Winfield - cnblogs
Overview: Efficient Fine-Tuning Methods — adapter-transformers ...
VoiceLab
A Summary of Three Fine-Tuning Techniques for Pre-trained Large Language Models: fine-tuning, parameter-efficient fine-tuning, and prompt ...
Adapting LLMs to Downstream Tasks Using Federated Learning on ...
Meet LLaMA-Adapter: A Lightweight Adaption Method For Fine-Tuning ...
Training Large Language Models: From Pre-training to Fine-tuning and ...
INTERSPEECH 2023 | Overview of Accepted Papers from the DAMO Academy Speech Lab (CAM++: a fast and efficient network ...)
Parameter-Efficient Fine-Tuning Guide for LLM | Towards Data Science
Finetuning Generative AI Large Language Model (LLM) Falcon (40B,7B ...
Boost Fine-Tuning Performance of LLM: Optimal Architecture w/ PEFT LoRA ...
Large Language Model Fine-Tuning Techniques: Principles, Use Cases, and a Practical Guide to 14 Mainstream Methods - CSDN Blog
A Survey of Efficient Fine-Tuning for Large Models (Part 1): Adapter Tuning, AdaMix, PET, Prefix-Tuning, Prompt Tuning, P ...
Domain Adaptation for Large Models: Parameter-Efficient Fine-Tuning (PEFT) - Zhihu
Performance of prefix-tuning, adapter-tuning and PLM fine-tuning on ...
AdapterFusion: Non-Destructive Task Composition for Transfer Learning ...
Efficient LLM Fine-Tuning Explained: From Adapter and Prefix-Tuning to LoRA (Efficient LLM Fine-Tuning Techniques, from Beginner to Mastery) - CSDN Blog