Got it. For logging in a docker-compose deployment, there is a logs folder under the nebula-docker-compose folder for you to check :-).
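For example, something along these lines (a sketch only; the exact service names and whether logs/ sits at the repo root depend on your nebula-docker-compose version):

    cd nebula-docker-compose
    ls logs/                                  # per-service log folders, if mounted here
    docker-compose ps                         # which services are (still) up
    docker-compose logs --tail=100 storaged0  # recent output of one storaged instance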
From the query you provided, it is a full-scan-like query, which could end up consuming a large amount of RAM.
Could you check whether storaged exited/stopped (from the pod status and logs)?
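For example (the pod/container names below are placeholders; take the real ones from kubectl get pods or docker ps -a):

    # on K8s
    kubectl get pods
    kubectl describe pod <storaged-pod>       # look for OOMKilled under Last State
    # on docker-compose
    docker ps -a                              # any Exited storaged container?
    docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}}' <container>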
If no pod/process exited due to OOM or other reasons, this could just be a timeout-caused RPC failure.
Then only changing the timer in nebula-graphd.conf would help: modify or add the item
--storage_client_timeout_ms=60000
to change the timeout (in ms).
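That is, something like this (assuming the conf file is the one your graphd actually reads; in a docker-compose deployment it may be mounted into the container):

    # in nebula-graphd.conf -- add the line if it is not there yet
    --storage_client_timeout_ms=60000

    # then restart graphd so the new flag takes effect
    docker-compose restart graphd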
ref: FAQ - Nebula Graph Database Manual
However, if it exited due to reasons like insufficient RAM, the only ways forward are either revising the query or scaling up the node.
Using more resources via a larger, clustered node could help, but I'm not sure how well that goes on Windows (Docker/K8s on Windows is not that native).
For containerised multi-server deployment, there are actually three options. For production on K8s, the operator solution is recommended and is what we are focusing on now. For your case, though (test purposes with limited resources), the operator comes with more footprint and may not be the best fit; you could try K8s with helm, or swarm, instead (see the sketch below).
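As a rough sketch of the helm route (the repo name, chart URL, and chart name here are placeholders, not the real ones; please take them from the official docs):

    helm repo add <repo-name> <chart-repo-url>   # placeholder URL -- see the docs
    helm repo update
    helm install nebula <repo-name>/nebula --namespace nebula --create-namespace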
Also, if your other PC/node has a higher spec than your laptop and runs Linux, you can deploy the cluster on that node alone first.