In the context of transformer based LLMs, a sequence refers to an ordered list of tokens.
A sequence contains the entire “window” of context that the model can see.
LLMs are constrained in the sequences they can ingest by the model's maximum sequence length.
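The constraint above can be sketched in a few lines. This is a minimal illustration, not a real LLM pipeline: the tokenizer is a hypothetical whitespace splitter (real models use subword tokenizers such as BPE), and `MAX_SEQ_LEN` is an arbitrary small value chosen for the example.

```python
MAX_SEQ_LEN = 8  # illustrative limit; production models use e.g. 4096 or more


def tokenize(text: str) -> list[str]:
    """Hypothetical tokenizer: splits on whitespace (stand-in for BPE)."""
    return text.split()


def truncate(tokens: list[str], max_len: int = MAX_SEQ_LEN) -> list[str]:
    """Keep only the most recent tokens that fit inside the context window."""
    return tokens[-max_len:]


tokens = tokenize("the quick brown fox jumps over the lazy sleeping dog")
window = truncate(tokens)
print(len(tokens), len(window))  # 10 tokens in, only 8 fit the window
```

Truncating from the left (keeping the most recent tokens) is one common strategy; real systems may instead truncate from the right or refuse over-length inputs entirely.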