
Fine tuned models for summarisation?

I have a DB with a lot of data that all needs precise summarisation; I would do it myself if it weren't 20 thousand fields long

It is about 300k tokens, and Gemini 2.5 struggles, missing points and making up facts

Separating them into smaller sections is not an option, because even when separated they can take up 30k tokens, and the info that needs summarisation may span 100k-token ranges

I learnt that fine-tuning may give better results than general-purpose models, and now I'm wondering if there is anything with a high token count for summarisation.

Any help would be appreciated, even if it's to suggest another general-purpose model that has better coherence

6 comments
  • From my personal experience, I'd say generative AI isn't the best tool for summarization. It also frequently misses the point when I try, or makes up additional facts that weren't in the input text. (Or starts going off on (wrong) tangents despite the task being to keep it short and concise.) And I'd say all(?) models do that, even the ones that are supposed to be big and clever.

    Edit: Lots of people use ChatGPT etc. for summarization, though, so I really don't know who's right here. Maybe my standards are too high, but what I've read as output from small to big models like ChatGPT wasn't great.

    There are other approaches in NLP, for example dedicated summarization models like BART from Facebook (abstractive, it rewrites the text), or extractive summarization, which only pulls sentences straight from the source, so it can't invent facts. Some Lemmy bot uses LsaSummarizer, which is one of those extractive methods; there's a sketch of it below. Or maybe you can rethink what you're trying to do and use RAG instead of summarization.
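
    A minimal sketch of the extractive route in Python, assuming the sumy package (the sentence count is just illustrative):

    ```python
    # Extractive summarisation with sumy's LsaSummarizer: it scores and
    # returns sentences from the source text itself, so nothing is made up.
    from sumy.parsers.plaintext import PlaintextParser
    from sumy.nlp.tokenizers import Tokenizer
    from sumy.summarizers.lsa import LsaSummarizer

    text = "...your source text..."
    parser = PlaintextParser.from_string(text, Tokenizer("english"))
    for sentence in LsaSummarizer()(parser.document, sentences_count=5):
        print(sentence)
    ```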

  • As the other commenter said, your workflow requires more than what LLMs are currently capable of.

    Summarization capability in LLMs is an equation of the LLM's capacity for coherence over long conversational scaling, operated on by its ability to navigate and distill internal structural mappings of conceptual & contextual archetype patterns as discrete objects across a continuous ambiguity sheaf.

    That's technical jargon that boils down to this: an LLM's summarization capability depends on its parameter size and on having enough VRAM for long context lengths. Higher-parameter and less quantized models maintain more coherence over long conversations/datasets.

    While enterprise LLMs can get up to 128k tokens while maintaining some level of coherence, local models at medium quantization handle 16-32k reliably. Theoretically a 70B could maybe handle around 64k tokens, but even that's stretching it.

    Then comes the problem of transformer attention. You can't just put a whole book's worth of text into an LLM's input and expect it to inspect any part in real detail. For best results you have to chunk it section by section, chapter by chapter, then merge the partial summaries (rough sketch below).
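
    A rough sketch of that chunk-then-merge ("map-reduce") pattern. `llm` is a stand-in for whatever model call you end up using, and the chunk/overlap sizes are placeholders to tune:

    ```python
    # Chunked summarisation: summarise each chunk, then summarise the summaries.
    def chunk_text(text: str, chunk_words: int = 8000, overlap: int = 500):
        words = text.split()  # crude stand-in for real tokenisation
        step = chunk_words - overlap
        for i in range(0, len(words), step):
            yield " ".join(words[i:i + chunk_words])

    def summarise(llm, text: str) -> str:
        # "map" step: one focused summary per chunk
        partials = [llm("Summarise precisely, do not invent facts:\n" + c)
                    for c in chunk_text(text)]
        # "reduce" step: merge the partials into one coherent summary
        return llm("Merge these partial summaries into one summary:\n\n"
                   + "\n\n".join(partials))
    ```

    The overlap between chunks is there so information sitting on a chunk boundary isn't lost outright, though it won't fully fix points that span 100k-token ranges.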

    So local LLMs may not be what you're looking for. If you are willing to go enterprise, then Claude Sonnet and DeepSeek R1 might be good, especially if you set up an API interface; a sketch of that is below.
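
    For the API route, a minimal sketch using the OpenAI Python client, which DeepSeek's endpoint is compatible with (the base URL and model id here are assumptions, check the provider's docs):

    ```python
    # Minimal chat-completions call against an OpenAI-compatible endpoint.
    from openai import OpenAI

    client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")
    resp = client.chat.completions.create(
        model="deepseek-reasoner",  # R1; swap in your provider's model id
        messages=[{"role": "user",
                   "content": "Summarise this precisely:\n...chunk..."}],
    )
    print(resp.choices[0].message.content)
    ```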
