Title:

Optimizing Token Usage on Large Language Model Conversations Using the Design Structure Matrix

Document type:
Conference paper
Contribution type:
Lecture / Presentation
Author(s):
Ramon Maria Garcia Alarcia and Alessandro Golkar
Pages contribution:
069-078
Abstract:
As Large Language Models become ubiquitous in many sectors and tasks, there is a need to reduce token usage, overcoming challenges such as short context windows, limited output sizes, and costs associated with token intake and generation, especially in API-served LLMs. This work brings the Design Structure Matrix from the engineering design discipline into LLM conversation optimization. Applied to a use case in which the LLM conversation is about the design of a spacecraft and its subsystems, th...
Keywords:
Large Language Models, token usage optimization, context window, output tokens, Design Structure Matrix
Book / Congress title:
Proceedings of the 26th International DSM Conference (DSM 2024)
Congress (additional information):
26th International Dependency and Structure Modeling Conference, DSM 2024
Volume:
DS 134
Date of congress:
26.09.2024
Year:
2024
Quarter:
3rd quarter
Year / month:
2024-09
Month:
Sep
Reviewed:
yes
Language:
en
Fulltext / DOI:
doi:10.35199/dsm2024.08
WWW:
https://www.designsociety.org/publication/47698/Optimizing%2BToken%2BUsage%2Bon%2BLarge%2BLanguage%2BModel%2BConversations%2BUsing%2Bthe%2BDesign%2BStructure%2BMatrix
CC license:
by-nc, http://creativecommons.org/licenses/by-nc/4.0