Title:

Optimizing Token Usage on Large Language Model Conversations Using the Design Structure Matrix

Document type:
Conference paper
Type of conference contribution:
Talk / presentation
Author(s):
Ramon Maria Garcia Alarcia and Alessandro Golkar
Pages:
069-078
Abstract:
As Large Language Models become ubiquitous in many sectors and tasks, there is a need to reduce token usage, overcoming challenges such as short context windows, limited output sizes, and costs associated with token intake and generation, especially in API-served LLMs. This work brings the Design Structure Matrix from the engineering design discipline into LLM conversation optimization. Applied to a use case in which the LLM conversation is about the design of a spacecraft and its subsystems, th...
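The truncated abstract does not state the paper's exact algorithm, but the core idea of applying a DSM to a multi-topic LLM conversation can be sketched as follows. This is an illustrative example only, assuming a binary DSM over conversation topics is used to prune which earlier turns must be re-sent as context; the topic names, dependency matrix, and token counts below are hypothetical.

```python
# Illustrative sketch (not the paper's implementation): a binary DSM over
# spacecraft-design conversation topics, used to (a) sequence the topics so
# dependencies are discussed first and (b) re-send only dependent turns as
# context instead of the full history. All values are hypothetical.

TOPICS = ["mission", "power", "thermal", "comms", "structure"]

# DSM[i][j] = 1 means topic i depends on (reads the outputs of) topic j.
DSM = [
    [0, 0, 0, 0, 0],  # mission depends on nothing
    [1, 0, 0, 0, 0],  # power depends on mission
    [1, 1, 0, 0, 0],  # thermal depends on mission, power
    [1, 1, 0, 0, 0],  # comms depends on mission, power
    [1, 0, 1, 0, 0],  # structure depends on mission, thermal
]

# Hypothetical size of each topic's conversation turn, in tokens.
TOKENS_PER_TOPIC = {"mission": 800, "power": 600, "thermal": 500,
                    "comms": 450, "structure": 700}

def sequence(dsm):
    """DSM sequencing: topologically order topics so every topic is
    discussed after its dependencies. Assumes the matrix is acyclic."""
    n = len(dsm)
    order, placed = [], set()
    while len(order) < n:
        for i in range(n):
            if i not in placed and all(j in placed
                                       for j in range(n) if dsm[i][j]):
                order.append(i)
                placed.add(i)
    return order

def context_tokens(dsm, order):
    """Compare context tokens re-sent per turn: full conversation history
    versus only the turns the DSM marks as dependencies."""
    full = pruned = 0
    history = []
    for i in order:
        full += sum(TOKENS_PER_TOPIC[TOPICS[j]] for j in history)
        pruned += sum(TOKENS_PER_TOPIC[TOPICS[j]]
                      for j in history if dsm[i][j])
        history.append(i)
    return full, pruned

order = sequence(DSM)
full, pruned = context_tokens(DSM, order)
print("topic order:", [TOPICS[i] for i in order])
print(f"full-history context: {full} tokens, DSM-pruned: {pruned} tokens")
```

With these hypothetical numbers the DSM-pruned context is noticeably smaller than naively re-sending the whole history, which is the kind of token saving the abstract targets; the actual paper should be consulted for the method it evaluates.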
Keywords:
Large Language Models, token usage optimization, context window, output tokens, Design Structure Matrix
Conference / book title:
Proceedings of the 26th International DSM Conference (DSM 2024)
Conference / additional information:
26th International Dependency and Structure Modeling Conference, DSM 2024
Volume:
DS 134
Conference date:
26.09.2024
Year:
2024
Quarter:
3rd quarter
Year / month:
2024-09
Month:
Sep
Reviewed:
yes
Language:
en
Full text / DOI:
doi:10.35199/dsm2024.08
WWW:
https://www.designsociety.org/publication/47698/Optimizing%2BToken%2BUsage%2Bon%2BLarge%2BLanguage%2BModel%2BConversations%2BUsing%2Bthe%2BDesign%2BStructure%2BMatrix
CC license:
by-nc, http://creativecommons.org/licenses/by-nc/4.0