You should probably have a requirements.txt file instead of just a list of requirements. It's often hard to tell which combination of package versions will 'actually' work when running these things
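For reference, a minimal `requirements.txt` for a project like this might look like the following; the exact packages and version bounds are guesses based on the libraries mentioned later in this thread (`requests`, `numpy`, and optionally the `ollama` bindings):

```text
requests>=2.31
numpy>=1.26
ollama>=0.3
```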
punnerud 52 days ago [-]
Forgot that. Fixed now
Patrick_Devine 52 days ago [-]
If you don't want to make direct API calls, there are actual official Ollama python bindings[1]. Cool project though!
Nice, thanks for the feedback. I have a prototype that also uses the embeddings for categorizing the steps, with "tags/labels". I almost take it as a challenge to reason better with a smaller model than those >70B ones you can't run on your own laptop.
Patrick_Devine 52 days ago [-]
I actually built something similar to this a couple days ago for finding duplicate bugs in our gh repo. Some differences:
* I used json to store the blobs in sqlite instead of converting it to byte form (I think they're roughly equivalent in the end?)
* For the distance calculations I use `numpy.linalg.norm(a-b)` to subtract the two vectors and then take the norm (i.e., the Euclidean distance)
* `ollama.embed()` and `ollama.generate()` will cut down on the requests code
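The JSON-in-SQLite storage and `numpy.linalg.norm(a-b)` distance described above can be sketched roughly like this; the table name, column names, and IDs are invented for illustration:

```python
import json
import sqlite3

import numpy as np

# In-memory DB for illustration; a real tool would open a file instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE embeddings (id TEXT PRIMARY KEY, vector TEXT)")

def store_embedding(doc_id, vector):
    # Store the vector as a JSON string instead of a raw byte blob.
    conn.execute(
        "INSERT INTO embeddings (id, vector) VALUES (?, ?)",
        (doc_id, json.dumps(vector)),
    )

def load_embedding(doc_id):
    row = conn.execute(
        "SELECT vector FROM embeddings WHERE id = ?", (doc_id,)
    ).fetchone()
    return np.array(json.loads(row[0]))

def distance(a, b):
    # Euclidean distance: the norm of the difference vector.
    return np.linalg.norm(a - b)

store_embedding("bug-1", [1.0, 0.0, 0.0])
store_embedding("bug-2", [0.0, 1.0, 0.0])
d = distance(load_embedding("bug-1"), load_embedding("bug-2"))
```

Storing JSON text versus packed bytes is indeed roughly equivalent for small collections; JSON is easier to inspect, bytes are more compact.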
Switching to a low-level integration will probably not improve the speed; the waiting is primarily on the llama generation of text.
Should be easy to switch embeddings.
Already playing with adding different tags to previous answers using embeddings, then using that to improve the reasoning.
gunalx 51 days ago [-]
Does this utilize the knowledge graph features, or is it just for tracking?
punnerud 51 days ago [-]
I have a new version that utilizes the graph. I haven't pushed it yet.
Then I use the embeddings to tag the answers, and use the tags + graph to try to understand whether the reasoning is good or bad.
Hope to have it out next week.
A bit too many bugs right now.
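One way the "tag answers using embeddings" idea could work is nearest-tag assignment by cosine similarity. This is only a sketch of that approach, not the author's actual code; the tag names and toy vectors stand in for real embedding output (e.g. from `ollama.embed()`):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_tag(answer_vec, tag_vecs):
    # Return the tag whose embedding is most similar to the answer's.
    return max(tag_vecs, key=lambda t: cosine_similarity(answer_vec, tag_vecs[t]))

# Toy 2-d vectors standing in for real embeddings of tag descriptions.
tag_vecs = {
    "good-reasoning": np.array([1.0, 0.1]),
    "bad-reasoning": np.array([0.1, 1.0]),
}
answer = np.array([0.9, 0.2])
tag = nearest_tag(answer, tag_vecs)
```

Cosine similarity is used here instead of Euclidean distance because it ignores vector magnitude, which varies between embedding models.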
[1] https://github.com/ollama/ollama-python
Speaking of embeddings, have you seen https://jina.ai/news/jina-embeddings-v3-a-frontier-multiling... ?