If it uses a pruned model, it would be hard to give anything better than a rough percentage based on the model's size and how many neurons were pruned.
If my semi-educated guess below is right, then technically all the training data is recallable to some degree, but in practice it's luck-based unless you have a practically infinite record of how the neuron weightings were nudged up or down by each input.
I really want to know how this works. It's not like the training data is sitting there in nicely formatted plain text waiting to be spat out; it's all tangled up in the neurons. I can't even begin to conceptualise what is going on here.
Maybe... maybe with each iteration of the word, it loses its own weighting, until there is nothing left but the raw neurons, which start to reinforce themselves until they reach more coherence. Once a single piece like 'phone' by chance becomes the dominant weighted piece of the output, the 'related' parts are reinforced in turn because they are actually tied to that 'phone' neuron.
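If that guess is roughly right, a toy way to picture it is a repetition-penalty-style decay: every emission of the repeated word knocks its own score down until some memorized fragment becomes the strongest option, and that fragment then drags its neighbours up with it. The sketch below is purely illustrative; the scores, the 'related' links, and the penalty/boost numbers are all made up, not anything from a real model.

```python
# Toy sketch of the guess above, NOT how the real model works: each emission of
# the repeated word knocks its own score down (like a repetition penalty),
# until some "memorized" fragment becomes the strongest option and then boosts
# the fragments tied to it. All scores and links here are invented.
scores = {"phone": 10.0, "call me at": 1.5, "555-0123": 1.2, "the": 1.0}
related = {"call me at": ["555-0123"]}   # hypothetical 'tied to that neuron' links

PENALTY = 0.4   # how much each repetition weakens the repeated word
BOOST = 0.8     # how much a fragment reinforces the fragments tied to it

output = []
for _ in range(30):
    word = max(scores, key=scores.get)       # greedy pick of the strongest fragment
    output.append(word)
    scores[word] -= PENALTY                  # repeating it erodes its own weighting
    for neighbour in related.get(word, []):  # ...while pulling its neighbours up
        scores[neighbour] += BOOST

print(" ".join(output))
# "phone" repeats for a while, then the output drifts into the "memorized" bits
```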
Hyperphantasia
A unique thinking style characterized by an extraordinary ability to visualize, enabling a vivid and immersive inner world.
I can pretty clearly create images in my mind, but I just learned I really suck at doing people. Lake with mountains and trees? Well, what kind of trees? Little ripples in the wind or small waves? Snowy or rocky mountains? Easy as. My sister's face, even though I just saw her? ... uh... kinda?
Note that there isn't a Linux version of the Proton Drive app. ... I know! What the fuck, right?
Secondly, I would just shove Linux Mint onto a USB and use it as a live distro with persistence for a while, just to get used to things. I'm not a fan of Debian(-based distros) or apt, but it works.
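For the "shove it onto a USB" part, here's a minimal sketch of what a raw image write does (the same thing dd does), assuming a hypothetical linuxmint.iso and /dev/sdX target; persistence itself still needs a separate overlay partition, which a tool like mkusb can set up, and this sketch doesn't do that.

```python
import os
import sys

# Minimal sketch: write the live ISO straight onto the USB stick, like dd.
# ISO_PATH and DEVICE are placeholders; writing to DEVICE destroys its contents.
ISO_PATH = "linuxmint.iso"   # hypothetical filename
DEVICE = "/dev/sdX"          # replace with the real device node
CHUNK = 4 * 1024 * 1024      # copy 4 MiB at a time

def write_image(iso_path: str, device: str) -> None:
    with open(iso_path, "rb") as src, open(device, "wb") as dst:
        while True:
            block = src.read(CHUNK)
            if not block:
                break
            dst.write(block)
        dst.flush()
        os.fsync(dst.fileno())  # make sure everything hits the stick before unplugging

if __name__ == "__main__":
    if os.geteuid() != 0:
        sys.exit("run as root, and double-check DEVICE first")
    write_image(ISO_PATH, DEVICE)
```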
I think you mean this one