crantpy.utils.neuroglancer module#

Neuroglancer scene generation and URL encoding/decoding for CRANT datasets.

This module provides tools to create, manipulate, and share neuroglancer scenes for visualizing CRANT neurons, annotations, and connectivity data.

Key Features:

  • Create neuroglancer URLs with selected segments, annotations, and skeletons
  • Decode existing neuroglancer URLs to extract information
  • Build custom scenes with different layer combinations
  • Add annotations (points, lines, ellipsoids) to scenes
  • Color and group segments for better visualization

Examples

>>> import crantpy as crt
>>> from crantpy.utils.neuroglancer import encode_url, decode_url, construct_scene
>>>
>>> # Create a simple scene with some neurons
>>> url = encode_url(segments=[720575940621039145, 720575940621039146])
>>>
>>> # Decode an existing URL
>>> info = decode_url(url, format='brief')
>>> print(info['selected'])
>>>
>>> # Create a custom scene with specific layers
>>> scene = construct_scene(image=True, segmentation=True, brain_mesh=True)
>>> url = encode_url(scene=scene, segments=[720575940621039145])
crantpy.utils.neuroglancer.add_annotation_layer(annotations, scene, name=None, connected=False)[source]#

Add annotations as a new layer to the scene.

Parameters:
  • annotations (array or list) – Coordinates for annotations (in voxel space):
    - (N, 3): Point annotations at x/y/z coordinates
    - (N, 2, 3): Line segments with start and end points
    - (N, 4): Ellipsoids with x/y/z center and radius

  • scene (dict) – Scene to add annotation layer to.

  • name (str, optional) – Name for the annotation layer.

  • connected (bool, default False) – If True, point annotations will be connected as a path (TODO).

Returns:

Modified scene with annotation layer added.

Return type:

dict

Examples

>>> # Add point annotations
>>> points = np.array([[100, 200, 50], [150, 250, 60]])
>>> scene = add_annotation_layer(points, scene, name="my_points")
>>>
>>> # Add line annotations
>>> lines = np.array([
...     [[100, 200, 50], [150, 250, 60]],
...     [[150, 250, 60], [200, 300, 70]]
... ])
>>> scene = add_annotation_layer(lines, scene, name="my_lines")
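The (N, 4) ellipsoid format listed above is the one shape not covered by the examples. A minimal sketch of building such an array (pure NumPy; the layer name is illustrative):

```python
import numpy as np

# Ellipsoids: x/y/z center plus radius, one row per annotation
ellipsoids = np.array([
    [100, 200, 50, 10],   # centered at (100, 200, 50), radius 10
    [150, 250, 60, 15],   # centered at (150, 250, 60), radius 15
])
assert ellipsoids.shape == (2, 4)  # matches the (N, 4) ellipsoid format

# scene = add_annotation_layer(ellipsoids, scene, name="my_ellipsoids")
```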
crantpy.utils.neuroglancer.add_skeleton_layer(skeleton, scene, name=None)[source]#

Add a skeleton as a line-annotation layer to the scene.

Parameters:
  • skeleton (TreeNeuron or DataFrame) – Neuron skeleton to add. Coordinates must be in nanometers. Will be automatically converted to voxel space.

  • scene (dict) – Scene to add skeleton layer to.

  • name (str, optional) – Name for the skeleton layer.

Returns:

Modified scene with skeleton layer added.

Return type:

dict

Examples

>>> skeleton = crt.viz.get_skeletons([720575940621039145])[0]
>>> scene = construct_scene()
>>> scene = add_skeleton_layer(skeleton, scene)
crantpy.utils.neuroglancer.construct_scene(*, image=True, segmentation=True, brain_mesh=True, merge_biased_seg=False, nuclei=False, base_neuroglancer=False, layout='xy-3d', dataset=None)[source]#

Construct a basic neuroglancer scene for CRANT data.

Parameters:
  • image (bool, default True) – Whether to add the aligned EM image layer.

  • segmentation (bool, default True) – Whether to add the proofreadable segmentation layer.

  • brain_mesh (bool, default True) – Whether to add the brain mesh layer.

  • merge_biased_seg (bool, default False) – Whether to add the merge-biased segmentation layer (for proofreading).

  • nuclei (bool, default False) – Whether to add the nuclei segmentation layer.

  • base_neuroglancer (bool, default False) – Whether to use base neuroglancer (affects segmentation layer format).

  • layout (str, default "xy-3d") – Layout to show. Options: "3d", "xy-3d", "xy", "4panel".

  • dataset (str, optional) – Which dataset to use ("latest" or "sandbox"). If None, uses default.

Returns:

Neuroglancer scene dictionary with requested layers.

Return type:

dict

Examples

>>> # Create a minimal visualization scene
>>> scene = construct_scene(image=True, segmentation=True, brain_mesh=True)
>>>
>>> # Create a full proofreading scene
>>> scene = construct_scene(
...     image=True,
...     segmentation=True,
...     brain_mesh=True,
...     merge_biased_seg=True,
...     nuclei=True
... )
crantpy.utils.neuroglancer.decode_url(url, format='json')[source]#

Decode neuroglancer URL to extract information.

Parameters:
  • url (str or list of str) – Neuroglancer URL(s) to decode.

  • format (str, default "json") – Output format:
    - "json": Full scene dictionary
    - "brief": Dict with position, selected segments, and annotations
    - "dataframe": DataFrame with segment IDs and their layers

Returns:

Decoded information in requested format.

Return type:

dict or DataFrame

Examples

>>> url = "https://spelunker.cave-explorer.org/#!{...}"
>>> info = decode_url(url, format='brief')
>>> print(info['selected'])  # List of selected segment IDs
>>> print(info['position'])  # [x, y, z] coordinates
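The `#!` fragment of a neuroglancer URL carries the scene state as (possibly percent-encoded) JSON, which is what `decode_url` parses. A rough illustration of that round trip using only the standard library (the state shown is made up, not a real CRANT scene):

```python
import json
from urllib.parse import quote, unquote

# A made-up minimal scene state
state = {"position": [24899, 14436, 3739], "layout": "xy-3d"}

# Encode the state into a URL fragment, as neuroglancer does
url = "https://spelunker.cave-explorer.org/#!" + quote(json.dumps(state))

# Decode it back: split off the fragment, unquote, parse JSON
decoded = json.loads(unquote(url.split("#!", 1)[1]))
assert decoded["position"] == [24899, 14436, 3739]
```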
crantpy.utils.neuroglancer.encode_url(segments=None, annotations=None, coords=None, skeletons=None, skeleton_names=None, seg_colors=None, seg_groups=None, invis_segs=None, scene=None, base_neuroglancer=False, layout='xy-3d', open=False, to_clipboard=False, shorten=False, *, dataset=None)[source]#

Encode data as CRANT neuroglancer scene URL.

Parameters:
  • segments (int or list of int, optional) – Segment IDs (root IDs) to have selected in the scene.

  • annotations (array or dict, optional) – Coordinates for annotations:
    - (N, 3) array: Point annotations at x/y/z coordinates (in voxels)
    - dict: Multiple annotation layers {name: (N, 3) array}

  • coords ((3,) array, optional) – X, Y, Z coordinates (in voxels) to center the view on.

  • skeletons (TreeNeuron or NeuronList, optional) – Skeleton(s) to add as annotation layer(s). Must be in nanometers.

  • skeleton_names (str or list of str, optional) – Names for the skeleton(s) to display in the UI. If a single string is provided, it will be used for all skeletons. If a list is provided, its length must match the number of skeletons.

  • seg_colors (str, tuple, list, dict, or array, optional) – Colors for segments:
    - str or tuple: Single color for all segments
    - list: List of colors matching segments
    - dict: Mapping of segment IDs to colors
    - array: Labels that will be converted to colors

  • seg_groups (list or dict, optional) – Group segments into separate layers:
    - list: Group labels matching segments
    - dict: {group_name: [seg_id1, seg_id2, …]}

  • invis_segs (int or list, optional) – Segment IDs to select but keep invisible.

  • scene (dict or str, optional) – Existing scene to modify (as dict or URL string).

  • base_neuroglancer (bool, default False) – Whether to use base neuroglancer instead of CAVE Spelunker.

  • layout (str, default "xy-3d") – Layout to show. Options: "3d", "xy-3d", "xy", "4panel".

  • open (bool, default False) – If True, opens the URL in a web browser.

  • to_clipboard (bool, default False) – If True, copies the URL to clipboard (requires pyperclip).

  • shorten (bool, default False) – If True, creates a shortened URL (requires state server).

  • dataset (str, optional) – Which dataset to use. If None, uses default.

Returns:

Neuroglancer URL.

Return type:

str

Examples

>>> # Simple scene with segments
>>> url = encode_url(segments=[720575940621039145, 720575940621039146])
>>>
>>> # Scene with colored segments
>>> url = encode_url(
...     segments=[720575940621039145, 720575940621039146],
...     seg_colors={720575940621039145: 'red', 720575940621039146: 'blue'}
... )
>>>
>>> # Scene with skeleton and centered view
>>> skeleton = crt.viz.get_skeletons([720575940621039145])[0]
>>> url = encode_url(
...     segments=[720575940621039145],
...     skeletons=skeleton,
...     coords=[24899, 14436, 3739]
... )
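The seg_groups parameter accepts a plain {group_name: [segment IDs]} mapping, with each group rendered as its own layer. A small sketch of building such a mapping (the group names here are illustrative):

```python
# Group segments into named layers: {group_name: [seg_id, ...]}
seg_groups = {
    "inputs": [720575940621039145],
    "outputs": [720575940621039146, 720575940621039147],
}
all_segments = [s for ids in seg_groups.values() for s in ids]
assert len(all_segments) == 3

# url = encode_url(segments=all_segments, seg_groups=seg_groups)
```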
crantpy.utils.neuroglancer.neurons_to_url(neurons, include_skeleton=True, downsample=None, **kwargs)[source]#

Create neuroglancer URLs for a list of neurons.

Parameters:
  • neurons (NeuronList) – List of neurons to create URLs for. Each neuron must have a root_id attribute.

  • include_skeleton (bool, default True) – Whether to include the skeleton in the URL.

  • downsample (int, optional) – Factor by which to downsample skeletons before adding to scene.

  • **kwargs – Additional arguments passed to encode_url().

Returns:

DataFrame with columns: id, name, url

Return type:

DataFrame

Examples

>>> neurons = crt.viz.get_skeletons([720575940621039145, 720575940621039146])
>>> urls = neurons_to_url(neurons)
>>> print(urls[['id', 'url']])
crantpy.utils.neuroglancer.scene_to_url(scene, base_neuroglancer=False, shorten=False, open=False, to_clipboard=False)[source]#

Convert neuroglancer scene dictionary to URL.

Parameters:
  • scene (dict) – Neuroglancer scene dictionary.

  • base_neuroglancer (bool, default False) – Whether to use base neuroglancer instead of CAVE Spelunker.

  • shorten (bool, default False) – Whether to create a shortened URL (requires state server).

  • open (bool, default False) – If True, opens URL in web browser.

  • to_clipboard (bool, default False) – If True, copies URL to clipboard.

Returns:

Neuroglancer URL.

Return type:

str
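Unlike the other functions above, scene_to_url has no example, so here is a hedged sketch of the expected input: a plain neuroglancer-style scene dictionary. The layer contents shown are illustrative, not the exact CRANT layer sources (in practice, construct_scene builds this dictionary for you):

```python
# A minimal neuroglancer-style scene dictionary (illustrative values)
scene = {
    "layers": [
        {
            "type": "segmentation",
            "name": "seg",
            "segments": ["720575940621039145"],
        },
    ],
    "layout": "xy-3d",
}
assert scene["layout"] == "xy-3d" and len(scene["layers"]) == 1

# url = scene_to_url(scene)  # would return the encoded neuroglancer URL
```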