https://llm.mlc.ai/docs/deploy/android.html
Or does it have to be on the play store or some other BS you use to backpedal?
Are you familiar with the difference between a title and a paragraph? Apparently not.
Answered the same question here
Feel free to not respond when you realize you are wrong and you have no clue what I’m talking about.
IEEE defines it as any software whose actions automate a human behavior. All of those fall under that definition.
What does that have to do with CACHING? That's client-server.
No clue what you’re talking about
MLC LLM does the exact same thing. Lots of chat apps have low-quality LLMs embedded in them. Low-res image generation apps using diffusion models similar to DALL-E mini have been around for a while.
Also, Qualcomm used its AI stack to deploy Stable Diffusion to mobile back in February. And that's not the low-res one.
Think before you write.
Why would that matter?
I was talking about the title, not the 10th paragraph way down. Use your reading skills and tell me where the fuck “generative” is in the title.
No. Autocomplete is a feature. The model behind it can be gen AI, and it has been for a number of years. IDGAF if it's not general purpose.
The point is you have no fucking clue what you're defending. LLMs and diffusion models have been in apps for months. You can say that general-purpose LLMs embedded into mobile OS functions are novel; the rest of it is bullshit.
That’s my point. AI includes features that were added years ago. Even ML is too broad. Autocomplete uses small ML models. Spam filters as well.
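To make that concrete: a spam filter can literally be a handful of word weights. A toy naive-Bayes-style scorer in C; the words and log-odds weights below are invented for illustration, not taken from any real filter:

    /* Toy spam scorer: sum per-word log-odds, positive means spam.
     * The vocabulary and weights are made up for this sketch. */
    #include <stdio.h>
    #include <string.h>

    struct feature { const char *word; double log_odds; };

    /* Hypothetical per-word log(P(w|spam)/P(w|ham)) weights. */
    static const struct feature model[] = {
        { "free",    1.9 },
        { "winner",  2.3 },
        { "meeting", -1.4 },
        { "invoice", -0.8 },
    };

    static double spam_score(const char *msg) {
        double score = 0.0; /* implicit prior of 0 for simplicity */
        for (size_t i = 0; i < sizeof model / sizeof model[0]; i++)
            if (strstr(msg, model[i].word))
                score += model[i].log_odds;
        return score;
    }

    int main(void) {
        const char *m = "you are a winner claim your free prize";
        double s = spam_score(m);
        printf("score = %.1f -> %s\n", s, s > 0 ? "spam" : "ham");
        return 0;
    }

That's the whole model. Nobody would call a struct of four weights "a phone with AI built in", which is the point.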
I think they mean LLMs, and specifically distilled Bard models. So a subset of a subset of a subset of AI.
Neckbeard marketing
It says AI, not genAI. Anyway, autocomplete is genAI, even if it's just simple GloVe embeddings and Markov chains.
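To that point, here's a minimal sketch in C of a bigram Markov next-word predictor, which is the bones of old-school autocomplete. The toy corpus and fixed table sizes are assumptions for brevity:

    /* Bigram Markov "autocomplete": count word->word transitions,
     * then greedily emit the most frequent successor. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_WORDS 64
    #define MAX_LEN   16

    static char vocab[MAX_WORDS][MAX_LEN];
    static int  nvocab = 0;
    static int  counts[MAX_WORDS][MAX_WORDS]; /* counts[i][j]: j followed i */

    static int word_id(const char *w) {
        for (int i = 0; i < nvocab; i++)
            if (strcmp(vocab[i], w) == 0) return i;
        strncpy(vocab[nvocab], w, MAX_LEN - 1);
        return nvocab++;
    }

    static void train(const char *corpus) {
        char buf[256];
        strncpy(buf, corpus, sizeof buf - 1);
        buf[sizeof buf - 1] = '\0';
        int prev = -1;
        for (char *tok = strtok(buf, " "); tok; tok = strtok(NULL, " ")) {
            int cur = word_id(tok);
            if (prev >= 0) counts[prev][cur]++;
            prev = cur;
        }
    }

    /* Greedy generation: most frequent successor of w. */
    static const char *predict(const char *w) {
        int i = word_id(w), best = -1, best_n = 0;
        for (int j = 0; j < nvocab; j++)
            if (counts[i][j] > best_n) { best_n = counts[i][j]; best = j; }
        return best >= 0 ? vocab[best] : "?";
    }

    int main(void) {
        train("the cat sat on the mat the cat ate the fish");
        printf("after 'the': %s\n", predict("the")); /* -> cat */
        return 0;
    }

It generates text, so by any honest definition it's generative. It's been shipping in keyboards since before anyone cared.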
You don’t know what the fuck you’re talking about.
“The first phone with AI built in.”
LOL Google are delirious
What about autocomplete? Face detection? Virtual assistants?
Write a matrix multiplication program in C. Then make it at least 10x faster with asm.
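For reference, the naive version is a triple loop. A minimal C sketch, with N fixed at 256 purely for illustration:

    /* Naive matrix multiplication, C = A * B, square N x N,
     * row-major layout. N is an arbitrary choice for this sketch. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 256

    static void matmul(const double *a, const double *b, double *c) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double sum = 0.0;
                for (int k = 0; k < N; k++)
                    sum += a[i * N + k] * b[k * N + j];
                c[i * N + j] = sum;
            }
    }

    int main(void) {
        double *a = malloc(N * N * sizeof *a);
        double *b = malloc(N * N * sizeof *b);
        double *c = malloc(N * N * sizeof *c);
        if (!a || !b || !c) return 1;
        for (int i = 0; i < N * N; i++) { a[i] = 1.0; b[i] = 2.0; }
        matmul(a, b, c);
        printf("c[0] = %f\n", c[0]); /* expect N * 1.0 * 2.0 = 512 */
        free(a); free(b); free(c);
        return 0;
    }

In practice most of the 10x comes from a cache-friendly loop order (i-k-j), blocking, and letting the compiler vectorize, long before you write a line of asm.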
Who said anything about production and non-garbage? We're not talking about quality of responses or spread. You can use distilled RoBERTa for all I give a fuck. We're talking about whether they're the first. They're not.
Are they the first to embed an LLM in an OS? Yes. A model with over x Bn params? Maybe, probably.
But they ARE NOT the first to deploy gen AI on mobile.