Error "Group(s) not found: ui (via --with)" installing PrivateGPT 0.4 on WSL

Note: the below assumes you have already followed some other guide and hit an error at this point. It picks up directly where you left off in your original guide.

As of a few days ago, when you try to install PrivateGPT with Poetry on WSL following various guides, the commands "cd privateGPT", "poetry install --with ui", and "poetry install --with local" make Poetry return the following errors:

Group(s) not found: ui (via --with)
Group(s) not found: local (via --with)

Going through the commit (which can be found here: 45f0571), it turns out that ui has been moved out of its own dependency group and into the "extras" section. It's unclear to me where "local" went, though.

Now, before running the commands below, I would suggest first installing the CUDA toolkit from the official Nvidia download portal, following Nvidia's installation steps. Most guides tell you to do this afterwards; I've had better results doing it first.

Note: when exporting your path, make sure you have the right version number in the path, as Nvidia updates their versions regularly. This article was updated on 17/03/24 to reflect a newer version (the previous version was 12.3).

export PATH="/usr/local/cuda-12.4/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH"
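If you'd rather not hard-code the version number every time Nvidia ships an update, here is a small sketch that picks the newest toolkit installed under the default /usr/local layout (the cuda-* directory pattern is an assumption based on Nvidia's standard install location; adjust if yours differs):

```shell
# Resolve the highest-versioned CUDA directory (assumes the default /usr/local/cuda-* layout)
CUDA_DIR=$(ls -d /usr/local/cuda-* 2>/dev/null | sort -V | tail -n 1)
if [ -n "$CUDA_DIR" ]; then
    export PATH="$CUDA_DIR/bin:$PATH"
    export LD_LIBRARY_PATH="$CUDA_DIR/lib64:$LD_LIBRARY_PATH"
    echo "Using $CUDA_DIR"
else
    echo "No CUDA toolkit found under /usr/local"
fi
```

The sort -V (version sort) is what makes cuda-12.4 win over cuda-12.3 even though a plain alphabetical sort would also happen to do so here; it stays correct once versions like 12.10 appear.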

Going through the commit (under the Dockerfile section), I came to the conclusion that the complete new Poetry command to successfully install PrivateGPT is:

cd privateGPT
poetry install --extras "ui embeddings-huggingface llms-llama-cpp vector-stores-qdrant"

After you run this last poetry --extras command, you can finish installing the llama.cpp CUDA libraries and the Python bindings with:

CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python

Then finish with:
poetry run python scripts/setup
make run

To check if GPU offload is working, scroll through the startup output to a line like the one below and check for BLAS = 1. If you see BLAS = 0, GPU offloading is not working. This could be for various reasons, including, as mentioned above, a path that was not properly exported.

llama_new_context_with_model: graph splits (measure): 2
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
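If you'd rather not scroll through the whole log, you can filter the startup output for just the BLAS flag. The echo below simulates one of those log lines so the sketch is self-contained; in practice, pipe the output of make run through the same grep:

```shell
# Simulated startup line; in practice: make run 2>&1 | grep -o "BLAS = [01]"
echo "AVX = 1 | AVX2 = 1 | BLAS = 1 | SSE3 = 1" | grep -o "BLAS = [01]"
# → BLAS = 1
```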

In your browser, go to http://127.0.0.1:8001 and you should see the PrivateGPT screen.
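If the page doesn't load, a quick check from another terminal can tell you whether the server is answering at all (8001 is the port used above; change it if you overrode the default):

```shell
# Quick reachability check for the PrivateGPT port; prints a message either way
if curl -s -o /dev/null --max-time 2 http://127.0.0.1:8001; then
    echo "PrivateGPT is answering on port 8001"
else
    echo "Nothing answering on port 8001 yet"
fi
```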


