• 0 Posts
  • 3 Comments
Joined 2 years ago
Cake day: August 6th, 2023

  • You can do this by replacing an existing scope or creating a new one. In some cases I’ve needed to replace an existing scope with custom mappings and add the extra information to it. For example, I created a custom scope of ‘profile’, added the relevant claim alongside the standard scope information, and then associated it with the provider.

    To do that, you add an OAuth scope mapping. That mapping will then add the desired claim information. These mappings are small Python scripts: set them to add the relevant claim when a condition is matched (e.g. the user is in the group “Admins”). Name the scope “profile”, though it could be a new scope (preferred) if ownCloud lets you specify them. There’s a rough sketch of such a mapping at the end of this comment.

    In the provider for ownCloud, add that new or replacement scope. In the provider’s Edit settings it’s found under Advanced Protocol Settings; add the named scope that correlates to the claim you just created.

    Then verify everything is working as expected: go to Preview for that provider. While it won’t show you scope names, it will combine the claims into the JWT preview, which is convenient for validating that you did everything correctly. It also keeps ownCloud out of the picture as an extra variable until you get to that point.
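
    As a rough illustration, assuming an authentik-style setup where scope mappings are Python expressions that return extra claims (the group name and claim key below are just placeholders), the mapping script might look something like this:

    ```python
    # Hypothetical scope-mapping expression: the returned dict is merged into
    # the token for any client that requests the scope this mapping is bound to.
    claims = {}

    # Only add the claim when the condition matches, e.g. membership in "Admins".
    if ak_is_group_member(request.user, name="Admins"):
        claims["roles"] = ["admin"]

    return claims
    ```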


  • That’s the approach I take. I use Proxmox for a Windows VM that runs Ollama. That VM can also be used for gaming on the off chance an LLM isn’t loaded (it usually is). I use only one 3090 because of the power draw of my two servers on top of my [many] HDDs; the extra load of a second card isn’t something I want to worry about.

    I point to that machine through LiteLLM*, which is in turn accessed through nginx that allows only local IPs (a rough client-side sketch is at the end of this comment). Those two run in a different VM that hosts most of my Docker containers.

    *I found that using Ollama and Open WebUI together caused the model to get unloaded, since they send slightly different calls. LiteLLM reduces that variance.
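
    For what it’s worth, here’s a minimal sketch of that routing from a client’s point of view, assuming LiteLLM’s OpenAI-compatible proxy on its default port; the hostname, API key, and model alias are placeholders:

    ```python
    # Hypothetical client pointed at the LiteLLM proxy. Sending every request
    # through the same endpoint keeps the calls Ollama sees consistent, so the
    # model is less likely to be unloaded between callers.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://litellm.lan:4000/v1",  # LiteLLM's OpenAI-compatible endpoint (placeholder host)
        api_key="sk-placeholder",               # whatever key the proxy is configured with
    )

    response = client.chat.completions.create(
        model="llama3.1:8b",  # model alias defined in the LiteLLM proxy config
        messages=[{"role": "user", "content": "Hello from the LAN"}],
    )
    print(response.choices[0].message.content)
    ```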