• 0 Posts
  • 43 Comments
Joined 1 year ago
Cake day: June 11th, 2023



  • Increase the context length, and probably enable flash attention in Ollama too. Llama 3.1 supports up to 128k context length, for example. That’s in tokens, and a token is on average a bit under 4 letters.

    Note that a higher context length requires more RAM and is slower, so you ideally want to find a sweet spot for your use case and hardware. Flash attention makes this more efficient.

    Oh, and the model needs to have been trained at larger contexts, otherwise it tends to handle them poorly. So you should check what maximum length the model you want to use was trained to handle.
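    A minimal sketch of bumping the context length from Python, assuming the ollama client library and a local Ollama server; the model name and the 32k value are just examples. Flash attention is toggled on the server side (the OLLAMA_FLASH_ATTENTION environment variable on recent Ollama versions), not per request.

    ```python
    import ollama  # pip install ollama; assumes a local Ollama server is running

    # Ask for a 32k-token context window instead of the default.
    # The model must have been trained for at least this length to use it well.
    response = ollama.generate(
        model="llama3.1",            # example model with long-context support
        prompt="Summarize the following notes: ...",
        options={"num_ctx": 32768},  # context length in tokens
    )
    print(response["response"])
    ```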







  • I remember back in the day this automated downloader program… the links had a limit of one download at a time, and you had to solve a captcha to start each download.

    So the downloader had a built-in “solve others’ captchas” system, where you could build up credit.

    So when you had, say, 20 links to download, you spent some minutes solving others’ captchas to earn some credit, and then the program would use that crowdsourcing to solve yours as they popped up.







  • It’s less about the calculations and more about memory bandwidth. To generate a token you need to read through all the model data, and that’s usually many, many gigabytes. So the time it takes to stream the weights through memory is usually longer than the compute time. GPUs have gigabytes of RAM that is many times faster than the CPU’s RAM, which is the main reason they’re faster for LLMs (some back-of-the-envelope numbers below).

    Most TPUs don’t have much RAM, especially the cheap ones.
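    A back-of-the-envelope sketch of that; the model size and bandwidth figures below are illustrative assumptions, not measurements:

    ```python
    # Rough upper bound on tokens/second for a dense model: every generated token
    # has to stream (roughly) the whole model through memory once.
    def max_tokens_per_second(model_gb: float, bandwidth_gb_s: float) -> float:
        return bandwidth_gb_s / model_gb

    model_gb = 8.0  # e.g. a ~8 GB quantized model

    print(max_tokens_per_second(model_gb, 60.0))   # ~7 tok/s on ~60 GB/s CPU RAM
    print(max_tokens_per_second(model_gb, 900.0))  # ~112 tok/s on ~900 GB/s GPU VRAM
    ```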


  • Reasonably smart… that would preferably be a 70B model, but maybe phi3-14b or llama3 8b could work. They’re rather impressive for their size.

    For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For a 70B you need roughly 40 GB.

    And then there’s the context. Most models are optimized for around 4k to 8k tokens. One token is roughly 3-4 letters. The VRAM needed for the context varies a bit, but it’s not trivial. For 4k I’d say roughly half a gig to a gig of VRAM.

    As you go to higher context sizes, the VRAM requirement for the context starts to eclipse the model’s VRAM cost, and you need models specifically trained for that large a context to handle it without going off the rails.

    So no, you’re not loading all the notes directly, and you won’t have a smart model.

    For your hardware and use case… try phi3-mini with a RAG system as a start. (Rough VRAM math and a minimal RAG sketch below.)
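    Rough math behind those VRAM numbers; the ~0.56 bytes per parameter assumes ~4-bit quantization, and the attention shape (32 layers, 8 KV heads, head dim 128) is Llama 3 8B’s configuration, used only as an example:

    ```python
    # Model weights: parameters (in billions) * bytes per parameter (Q4 ~= 0.56 B/param).
    def model_vram_gb(params_b: float, bytes_per_param: float = 0.56) -> float:
        return params_b * bytes_per_param

    # KV cache: 2 (K and V) * layers * KV heads * head dim * context * 2 bytes (fp16).
    def kv_cache_gb(layers: int, kv_heads: int, head_dim: int, ctx: int) -> float:
        return 2 * layers * kv_heads * head_dim * ctx * 2 / 1024**3

    print(model_vram_gb(8))     # ~4.5 GB for an 8B model at Q4
    print(model_vram_gb(70))    # ~39 GB for a 70B model at Q4
    print(kv_cache_gb(32, 8, 128, 4096))    # ~0.5 GB of KV cache at 4k tokens
    print(kv_cache_gb(32, 8, 128, 131072))  # ~16 GB at 128k, eclipsing the model
    ```

    On top of that there’s some overhead for activations and the runtime, which is why 6+ GB is a comfortable floor for the small models.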
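    And a minimal RAG sketch along those lines, assuming the ollama Python client with phi3 and an embedding model (nomic-embed-text here) already pulled; the notes, the question and the top-3 cutoff are placeholders:

    ```python
    import ollama  # pip install ollama; assumes a local Ollama server

    notes = ["First note text...", "Second note text..."]  # your notes, pre-chunked

    def embed(text: str) -> list[float]:
        return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

    note_vectors = [embed(n) for n in notes]  # index once, reuse for every question

    question = "What did I write about backups?"
    q_vec = embed(question)

    # Put only the few most relevant notes into the context window,
    # instead of trying to load everything at once.
    ranked = sorted(zip(notes, note_vectors), key=lambda nv: cosine(q_vec, nv[1]), reverse=True)
    context = "\n\n".join(n for n, _ in ranked[:3])

    answer = ollama.chat(
        model="phi3:mini",
        messages=[{"role": "user", "content": f"Using these notes:\n{context}\n\n{question}"}],
    )
    print(answer["message"]["content"])
    ```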