crantpy package#

Subpackages#

Module contents#

exception crantpy.FilteringError(message=None)[source]#

Bases: ValueError

Raised if a filtering operation fails.

Parameters:

message (str, optional) – The error message.

Return type:

None

exception crantpy.NoMatchesError(message=None)[source]#

Bases: ValueError

Raised if no matches are found.

Parameters:

message (str, optional) – The error message.

Return type:

None

crantpy.add_annotation_layer(annotations, scene, name=None, connected=False)[source]#

Add annotations as new layer to scene.

Parameters:
  • annotations (array or list) – Coordinates for annotations (in voxel space):
      - (N, 3): Point annotations at x/y/z coordinates
      - (N, 2, 3): Line segments with start and end points
      - (N, 4): Ellipsoids with x/y/z center and radius

  • scene (dict) – Scene to add annotation layer to.

  • name (str, optional) – Name for the annotation layer.

  • connected (bool, default False) – If True, point annotations will be connected as a path (TODO).

Returns:

Modified scene with annotation layer added.

Return type:

dict

Examples

>>> # Add point annotations
>>> points = np.array([[100, 200, 50], [150, 250, 60]])
>>> scene = add_annotation_layer(points, scene, name="my_points")
>>>
>>> # Add line annotations
>>> lines = np.array([
...     [[100, 200, 50], [150, 250, 60]],
...     [[150, 250, 60], [200, 300, 70]]
... ])
>>> scene = add_annotation_layer(lines, scene, name="my_lines")
crantpy.add_skeleton_layer(skeleton, scene, name=None)[source]#

Add skeleton as line annotation layer to scene.

Parameters:
  • skeleton (TreeNeuron or DataFrame) – Neuron skeleton to add. Coordinates must be in nanometers. Will be automatically converted to voxel space.

  • scene (dict) – Scene to add skeleton layer to.

  • name (str, optional) – Name for the skeleton layer.

Returns:

Modified scene with skeleton layer added.

Return type:

dict

Examples

>>> skeleton = crt.viz.get_skeletons([720575940621039145])[0]
>>> scene = construct_scene()
>>> scene = add_skeleton_layer(skeleton, scene)
crantpy.attach_synapses(neurons, pre=True, post=True, threshold=1, min_size=None, materialization='latest', clean=True, max_distance=10000.0, update_ids=True, dataset=None)[source]#

Attach synapses as connectors to skeleton neurons.

This function fetches synapses for the given neuron(s) and maps them to the closest node on each skeleton using a KD-tree. The synapses are attached as a .connectors table with columns for connector_id, x, y, z, type (pre/post), partner_id, and node_id.

Adapted from fafbseg-py (Philipp Schlegel) to work with CRANTb data.

Parameters:
  • neurons (navis.TreeNeuron or navis.NeuronList) – Skeleton neuron(s) to attach synapses to. Must be TreeNeuron objects with node coordinates.

  • pre (bool, default True) – Whether to fetch and attach presynapses (outputs) for the given neurons.

  • post (bool, default True) – Whether to fetch and attach postsynapses (inputs) for the given neurons.

  • threshold (int, default 1) – Minimum number of synapses required between neuron pairs to be included.

  • min_size (int, optional) – Minimum synapse size for filtering.

  • materialization (str, default 'latest') – Materialization version to use. Either 'latest' or 'live'.

  • clean (bool, default True) – Whether to perform cleanup of synapse data:
      - Remove autapses (self-connections)
      - Remove connections involving neuron ID 0 (background)
      - Remove synapses that are too far from skeleton nodes (see max_distance)

  • max_distance (float, default 10000.0) – Maximum distance (in nanometers) between a synapse and its nearest skeleton node. Synapses further than this are removed if clean=True. The default of 10um helps filter out spurious synapse annotations far from the actual neuron.

  • update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).

  • dataset (str, optional) – Dataset to use for queries.

Returns:

The same neuron(s) with .connectors table attached. The connectors table includes columns:
  - connector_id: Unique ID for each synapse (sequential)
  - x, y, z: Synapse coordinates in nanometers
  - type: 'pre' for presynapses, 'post' for postsynapses
  - partner_id: Root ID of the partner neuron
  - node_id: ID of the skeleton node closest to this synapse

Note: The input neurons are modified in place and also returned.

Return type:

navis.TreeNeuron or navis.NeuronList

Raises:
  • TypeError – If neurons is not a TreeNeuron or NeuronList of TreeNeurons.

  • ValueError – If both pre and post are False.

Examples

>>> import crantpy as cp
>>> # Get a skeleton neuron
>>> skeleton = cp.get_l2_skeleton(576460752664524086)
>>>
>>> # Attach synapses to it
>>> skeleton = cp.attach_synapses(skeleton)
>>>
>>> # View the connectors table
>>> print(skeleton.connectors.head())
>>>
>>> # Get only presynapses
>>> skeleton = cp.attach_synapses(skeleton, post=False)
>>>
>>> # Filter distant synapses more aggressively
>>> skeleton = cp.attach_synapses(skeleton, max_distance=5000)
>>>
>>> # Skip ID updates for faster queries (use only if IDs are known to be current)
>>> skeleton = cp.attach_synapses(skeleton, update_ids=False)

See also

get_synapses

Fetch synapse data without attaching to neurons.

Notes

  • This function modifies the input neurons in place by adding/updating the .connectors attribute.

  • Synapses are mapped to skeleton nodes using scipy's KDTree for efficient nearest-neighbor search.

  • The connector_id is a sequential integer starting from 0, not the original synapse ID from the database.

  • If a neuron already has a .connectors table, it will be overwritten.

  • Synapse coordinates are automatically converted from pixels to nanometers to match skeleton coordinate system (using SCALE_X=8, SCALE_Y=8, SCALE_Z=42).

  • When update_ids=True (default), IDs are automatically updated with efficient caching
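
The coordinate conversion and nearest-node mapping described in the notes can be sketched in plain Python. This is an illustrative stand-in, not crantpy's implementation: the real function uses scipy's KDTree and operates on whole tables, and to_nm / map_synapse_to_node are hypothetical names.

```python
import math

# Pixel -> nanometer scale factors quoted in the notes above.
SCALE_X, SCALE_Y, SCALE_Z = 8, 8, 42

def to_nm(xyz_px):
    """Convert a pixel-space coordinate to nanometers."""
    x, y, z = xyz_px
    return (x * SCALE_X, y * SCALE_Y, z * SCALE_Z)

def map_synapse_to_node(syn_nm, nodes_nm, max_distance=10000.0):
    """Return (node_id, distance) for the nearest skeleton node, or None
    if the nearest node is further than max_distance (the clean step).
    Brute force here; crantpy uses scipy's KDTree for the same query."""
    best_id, best_d = None, float("inf")
    for node_id, coords in nodes_nm.items():
        d = math.dist(syn_nm, coords)
        if d < best_d:
            best_id, best_d = node_id, d
    return None if best_d > max_distance else (best_id, best_d)

nodes = {1: (0.0, 0.0, 0.0), 2: (1000.0, 0.0, 0.0)}
syn = to_nm((130, 0, 0))  # 130 px * 8 nm/px = 1040 nm along x
print(map_synapse_to_node(syn, nodes))  # -> (2, 40.0)
```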

crantpy.cached_per_id(cache_name, id_param='x', max_age=7200, result_id_column='old_id')[source]#

Decorator for caching function results on a per-ID basis.

This decorator caches results for individual IDs rather than entire function calls. When the function is called with a list of IDs, it will:

  1. Check which IDs have valid cached results

  2. Only call the function for uncached IDs

  3. Merge cached and new results

  4. Cache the new results

This is particularly useful for functions like update_ids() where we want to avoid re-computing results for IDs we've already processed.

Parameters:
  • cache_name (str) – Name of the global cache to use.

  • id_param (str, default 'x') – Name of the parameter containing the IDs to cache.

  • max_age (int, default MAXIMUM_CACHE_DURATION) – Maximum age of cached results in seconds.

  • result_id_column (str, default 'old_id') – Column name in the result DataFrame that contains the ID.

Returns:

The decorated function with per-ID caching capabilities.

Return type:

callable

Notes

  • The decorated function must return a pandas DataFrame

  • The ID parameter can be a list, array, or single ID

  • Cache entries are stored with timestamps for staleness checking

  • The function gains a clear_cache method to manually clear the cache

Examples

>>> @cached_per_id(cache_name="update_ids_cache", id_param="x")
... def update_ids(x, dataset=None):
...     # Process IDs
...     return pd.DataFrame({'old_id': x, 'new_id': x, 'changed': False})
>>>
>>> # First call - computes all IDs
>>> result1 = update_ids([1, 2, 3])
>>>
>>> # Second call - uses cached results for IDs 2 and 3
>>> result2 = update_ids([2, 3, 4])
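
A minimal, dict-based sketch of that four-step workflow (illustrative only: the real decorator returns pandas DataFrames, keys results by result_id_column, and stores timestamps in named global caches):

```python
import time
from functools import wraps

_GLOBAL_CACHES = {}  # stand-in for crantpy's named global caches

def per_id_cache(cache_name, max_age=7200):
    """Cache results per ID: reuse fresh entries, compute only the misses."""
    def decorator(func):
        cache = _GLOBAL_CACHES.setdefault(cache_name, {})
        @wraps(func)
        def wrapper(ids):
            now = time.time()
            # 1. Check which IDs have valid cached results.
            fresh = {i: v for i, (v, t) in cache.items()
                     if i in ids and now - t < max_age}
            # 2. Only call the function for uncached IDs.
            missing = [i for i in ids if i not in fresh]
            if missing:
                for i, v in func(missing).items():
                    cache[i] = (v, now)  # 4. Cache the new results.
                    fresh[i] = v
            # 3. Merge cached and new results.
            return {i: fresh[i] for i in ids}
        wrapper.clear_cache = cache.clear
        return wrapper
    return decorator

calls = []

@per_id_cache("update_ids_cache")
def update_ids(ids):
    calls.append(list(ids))     # record which IDs were actually computed
    return {i: i for i in ids}

update_ids([1, 2, 3])  # computes 1, 2, 3
update_ids([2, 3, 4])  # computes only 4; 2 and 3 come from the cache
print(calls)           # -> [[1, 2, 3], [4]]
```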
crantpy.cached_result(cache_name, max_age=7200, key_fn=None, should_cache_fn=None, validate_cache_fn=None)[source]#

Decorator for caching function results.

WARNING: This decorator is not thread-safe. It is recommended to use threading.Lock() to ensure thread safety when using this decorator in a multi-threaded environment.

This decorator provides a flexible caching mechanism for function results. It supports custom cache keys, validation, and conditional caching, making it suitable for a variety of use cases.

The cache stores entries in a dictionary structure:

    {
        'result': original_function_result,
        'metadata': {
            '_created_at': timestamp
        }
    }

This approach avoids modifying the original result objects directly, ensuring compatibility with immutable types.

Parameters:
  • cache_name (str) – Name of the global cache to use. This is used to group cached results under a specific namespace.

  • max_age (int, default MAXIMUM_CACHE_DURATION) – Maximum age of cached result in seconds. Cached results older than this duration are considered stale and will be refreshed.

  • key_fn (callable, optional) – Function to generate a unique cache key based on the function's arguments. Defaults to using the first positional argument or the 'dataset' keyword argument. If key_fn returns None, an error will be raised.

  • should_cache_fn (callable, optional) – Function to determine whether the result of the function should be cached. It takes the function result and arguments as input and returns a boolean.

  • validate_cache_fn (callable, optional) – Function to validate if a cached result is still valid beyond the age check. It takes the cached result and the function arguments as input and returns a boolean.

Returns:

The decorated function with caching capabilities.

Return type:

callable

Examples

>>> # Basic Caching:
>>> @cached_result(cache_name="example_cache")
... def expensive_function(x):
...     return x ** 2
>>> # Custom Cache Key:
>>> @cached_result(cache_name="example_cache", key_fn=lambda x: f"key_{x}")
... def expensive_function(x):
...     return x ** 2
>>> # Conditional Caching:
>>> @cached_result(cache_name="example_cache", should_cache_fn=lambda result, *args: result > 10)
... def expensive_function(x):
...     return x ** 2
>>> # Cache Validation:
>>> def validate_cache(result, *args):
...     return result is not None
>>> @cached_result(cache_name="example_cache", validate_cache_fn=validate_cache)
... def expensive_function(x):
...     return x ** 2

Notes

  • The decorated function gains a clear_cache method to manually clear the cache for the specified cache_name.

  • The check_stale parameter can be used to skip staleness checks when calling the decorated function.

crantpy.chunks_to_nm(xyz_ch, vol, voxel_resolution=[4, 4, 40])[source]#

Map a chunk location to Euclidean space. This is a CloudVolume workaround, implemented following Giacomo's suggestion.

Parameters:
  • xyz_ch (array-like) – (N, 3) array of chunk indices.

  • vol (cloudvolume.CloudVolume) – CloudVolume object associated with the chunked space.

  • voxel_resolution (list, optional) – Voxel resolution.

Returns:

(N, 3) array of spatial points.

Return type:

np.array

crantpy.clear_all_caches()[source]#

Clears all caches.

Return type:

None

crantpy.clear_cave_client_cache()[source]#

Clears the CAVE client cache.

Return type:

None

crantpy.clear_cloudvolume_cache()[source]#

Clears the cloudvolume cache.

Return type:

None

crantpy.clear_global_cache(cache_name)[source]#

Clear a named global cache.

Parameters:

cache_name (str) – Name of the cache to clear

Return type:

None

crantpy.configure_urllib3_warning_suppression(enable=None)[source]#

Enable suppression of known cosmetic urllib3 warnings.

Trade-offs: Hiding warnings can make it harder to notice real connectivity issues. When enabled, only the specific connection pool message is filtered; other warnings remain visible. Does not call urllib3.disable_warnings().

Control via enable or environment variable CRANTPY_SUPPRESS_URLLIB3_WARNINGS.

Returns True if suppression is enabled, False otherwise.

Parameters:

enable (bool | None)

Return type:

bool

crantpy.construct_scene(*, image=True, segmentation=True, brain_mesh=True, merge_biased_seg=False, nuclei=False, base_neuroglancer=False, layout='xy-3d', dataset=None)[source]#

Construct a basic neuroglancer scene for CRANT data.

Parameters:
  • image (bool, default True) – Whether to add the aligned EM image layer.

  • segmentation (bool, default True) – Whether to add the proofreadable segmentation layer.

  • brain_mesh (bool, default True) – Whether to add the brain mesh layer.

  • merge_biased_seg (bool, default False) – Whether to add the merge-biased segmentation layer (for proofreading).

  • nuclei (bool, default False) – Whether to add the nuclei segmentation layer.

  • base_neuroglancer (bool, default False) – Whether to use base neuroglancer (affects segmentation layer format).

  • layout (str, default "xy-3d") – Layout to show. Options: "3d", "xy-3d", "xy", "4panel".

  • dataset (str, optional) – Which dataset to use ("latest" or "sandbox"). If None, uses default.

Returns:

Neuroglancer scene dictionary with requested layers.

Return type:

dict

Examples

>>> # Create a minimal visualization scene
>>> scene = construct_scene(image=True, segmentation=True, brain_mesh=True)
>>>
>>> # Create a full proofreading scene
>>> scene = construct_scene(
...     image=True,
...     segmentation=True,
...     brain_mesh=True,
...     merge_biased_seg=True,
...     nuclei=True
... )
crantpy.create_sql_query(table_name, fields, condition=None, limit=None, start=None)[source]#

Creates a SQL query to get the specified fields from the specified table.

Parameters:
  • table_name (str) – The name of the table to query.

  • fields (List[str]) – The list of field names to include in the query.

  • condition (str, optional) – The WHERE clause of the query.

  • limit (int, optional) – The maximum number of rows to return.

  • start (int, optional) – The number of rows to skip (OFFSET).

Returns:

The constructed SQL query string.

Return type:

str
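
The pieces assemble in the usual SELECT/WHERE/LIMIT/OFFSET order. A sketch of the idea (an illustrative reimplementation, not crantpy's actual code; the real function's output formatting may differ):

```python
def build_query(table_name, fields, condition=None, limit=None, start=None):
    """Assemble a SELECT statement from the given parts, appending the
    optional clauses only when they are provided."""
    query = f"SELECT {', '.join(fields)} FROM {table_name}"
    if condition:
        query += f" WHERE {condition}"
    if limit is not None:
        query += f" LIMIT {limit}"
    if start is not None:
        query += f" OFFSET {start}"
    return query

print(build_query("neurons", ["root_id", "cell_type"],
                  condition="cell_type = 'KC'", limit=10))
# -> SELECT root_id, cell_type FROM neurons WHERE cell_type = 'KC' LIMIT 10
```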

crantpy.decode_url(url, format='json')[source]#

Decode neuroglancer URL to extract information.

Parameters:
  • url (str or list of str) – Neuroglancer URL(s) to decode.

  • format (str, default "json") – Output format:
      - "json": Full scene dictionary
      - "brief": Dict with position, selected segments, and annotations
      - "dataframe": DataFrame with segment IDs and their layers

Returns:

Decoded information in requested format.

Return type:

dict or DataFrame

Examples

>>> url = "https://spelunker.cave-explorer.org/#!{...}"
>>> info = decode_url(url, format='brief')
>>> print(info['selected'])  # List of selected segment IDs
>>> print(info['position'])  # [x, y, z] coordinates
crantpy.detect_soma_mesh(mesh)[source]#

Try detecting the soma based on vertex clusters.

Identifies dense vertex clusters that likely represent the soma.

Parameters:

mesh (trimesh.Trimesh) – Coordinates in nanometers. Mesh must not be downsampled for accurate detection.

Returns:

Array of vertex indices that belong to the detected soma region. Returns empty array if no soma is detected.

Return type:

np.ndarray

crantpy.detect_soma_skeleton(s, min_rad=800, N=3)[source]#

Try detecting the soma based on radii.

Looks for consecutive nodes with large radii to identify soma. Includes additional checks to ensure the skeleton is valid.

Parameters:
  • s (navis.TreeNeuron) – The skeleton to analyze for soma detection.

  • min_rad (int, default 800) – Minimum radius for a node to be considered a soma candidate (in nm).

  • N (int, default 3) – Number of consecutive nodes with radius > min_rad needed to consider them soma candidates.

Returns:

Node ID of the detected soma, or None if no soma found.

Return type:

int or None
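
The core heuristic, a run of N consecutive nodes whose radii exceed min_rad, can be sketched over a linear sequence of radii. This is a simplification: the real function walks the skeleton's node table and applies extra validity checks, and find_radius_run is an illustrative name.

```python
def find_radius_run(radii, min_rad=800, N=3):
    """Return the index of the first run of N consecutive radii above
    min_rad, or None if no such run exists."""
    run = 0
    for i, r in enumerate(radii):
        run = run + 1 if r > min_rad else 0
        if run >= N:
            return i - N + 1  # start of the qualifying run
    return None

radii = [120, 150, 900, 1100, 950, 140]
print(find_radius_run(radii))  # -> 2 (nodes 2-4 all exceed 800 nm)
```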

crantpy.divide_local_neighbourhood(mesh, radius)[source]#

Divide the mesh into locally connected patches of a given size (overlapping).

Parameters:
  • mesh (trimesh.Trimesh) – The mesh to divide.

  • radius (float) – The radius (in mesh units) for local neighborhoods.

Returns:

Each set contains vertex indices belonging to a local patch.

Return type:

list of sets

crantpy.encode_url(segments=None, annotations=None, coords=None, skeletons=None, skeleton_names=None, seg_colors=None, seg_groups=None, invis_segs=None, scene=None, base_neuroglancer=False, layout='xy-3d', open=False, to_clipboard=False, shorten=False, *, dataset=None)[source]#

Encode data as CRANT neuroglancer scene URL.

Parameters:
  • segments (int or list of int, optional) – Segment IDs (root IDs) to have selected in the scene.

  • annotations (array or dict, optional) – Coordinates for annotations:
      - (N, 3) array: Point annotations at x/y/z coordinates (in voxels)
      - dict: Multiple annotation layers {name: (N, 3) array}

  • coords ((3,) array, optional) – X, Y, Z coordinates (in voxels) to center the view on.

  • skeletons (TreeNeuron or NeuronList, optional) – Skeleton(s) to add as annotation layer(s). Must be in nanometers.

  • skeleton_names (str or list of str, optional) – Names for the skeleton(s) to display in the UI. If a single string is provided, it will be used for all skeletons. If a list is provided, its length must match the number of skeletons.

  • seg_colors (str, tuple, list, dict, or array, optional) – Colors for segments:
      - str or tuple: Single color for all segments
      - list: List of colors matching segments
      - dict: Mapping of segment IDs to colors
      - array: Labels that will be converted to colors

  • seg_groups (list or dict, optional) – Group segments into separate layers:
      - list: Group labels matching segments
      - dict: {group_name: [seg_id1, seg_id2, …]}

  • invis_segs (int or list, optional) – Segment IDs to select but keep invisible.

  • scene (dict or str, optional) – Existing scene to modify (as dict or URL string).

  • base_neuroglancer (bool, default False) – Whether to use base neuroglancer instead of CAVE Spelunker.

  • layout (str, default "xy-3d") – Layout to show. Options: "3d", "xy-3d", "xy", "4panel".

  • open (bool, default False) – If True, opens the URL in a web browser.

  • to_clipboard (bool, default False) – If True, copies the URL to clipboard (requires pyperclip).

  • shorten (bool, default False) – If True, creates a shortened URL (requires state server).

  • dataset (str, optional) – Which dataset to use. If None, uses default.

Returns:

Neuroglancer URL.

Return type:

str

Examples

>>> # Simple scene with segments
>>> url = encode_url(segments=[720575940621039145, 720575940621039146])
>>>
>>> # Scene with colored segments
>>> url = encode_url(
...     segments=[720575940621039145, 720575940621039146],
...     seg_colors={720575940621039145: 'red', 720575940621039146: 'blue'}
... )
>>>
>>> # Scene with skeleton and centered view
>>> import navis
>>> skeleton = crt.viz.get_skeletons([720575940621039145])[0]
>>> url = encode_url(
...     segments=[720575940621039145],
...     skeletons=skeleton,
...     coords=[24899, 14436, 3739]
... )
crantpy.filter_df(df, column, value, regex=False, case=False, match_all=False, exact=True)[source]#

Filters a DataFrame based on a column and a value. Handles string, numeric, and list-containing columns.

Parameters:
  • df (pandas.DataFrame) – The DataFrame to filter.

  • column (str) – The column to filter on.

  • value (Any) – The value(s) to filter by.

  • regex (bool, default False) – Whether to interpret value as a regular expression.

  • case (bool, default False) – Whether string matching is case-sensitive.

  • match_all (bool, default False) – For list-containing columns: if True, requires all filter values to be present in the cell's list; if False, requires at least one filter value to be present.

  • exact (bool, default True) – Whether to require exact matches rather than substring matches.

Returns:

The filtered DataFrame.

Return type:

pd.DataFrame
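
The match_all semantics for list-containing cells can be sketched for a single cell (illustrative only: cell_matches is a hypothetical helper, and the real function operates on whole DataFrame columns):

```python
def cell_matches(cell, values, match_all=False, case=False):
    """Decide whether one cell passes the filter. For a list-valued cell,
    match_all=True requires every filter value to be present in the list;
    match_all=False accepts any one of them."""
    norm = (lambda v: v) if case else (
        lambda v: v.lower() if isinstance(v, str) else v)
    values = [norm(v) for v in values]
    if isinstance(cell, list):
        cell = [norm(c) for c in cell]
        hits = [v in cell for v in values]
        return all(hits) if match_all else any(hits)
    return norm(cell) in values

row = ["Kenyon_cell", "intrinsic"]  # a list-valued cell
print(cell_matches(row, ["kenyon_cell"]))                           # -> True
print(cell_matches(row, ["kenyon_cell", "motor"], match_all=True))  # -> False
```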

crantpy.generate_cave_token(save=False)[source]#

Generates a token for the CAVE client. If save is True, the token will be saved (overwriting any existing token).

Parameters:

save (bool, default False) – Whether to save the token after generation.

Return type:

None

crantpy.get_adjacency(pre_ids=None, post_ids=None, threshold=1, min_size=None, materialization='latest', symmetric=False, clean=True, update_ids=True, dataset=None)[source]#

Construct an adjacency matrix from synaptic connections between neurons.

This function queries the synapses table to get connections between specified pre- and post-synaptic neurons, then constructs an adjacency matrix showing the number of synapses between each pair.

Parameters:
  • pre_ids (int, str, list, NeuronCriteria, optional) – Pre-synaptic neuron root IDs or criteria. If None, all pre-synaptic neurons in the dataset will be included.

  • post_ids (int, str, list, NeuronCriteria, optional) – Post-synaptic neuron root IDs or criteria. If None, all post-synaptic neurons in the dataset will be included.

  • threshold (int, default 1) – Minimum number of synapses required between a pair to be included in the adjacency matrix.

  • min_size (int, optional) – Minimum size for filtering synapses before constructing adjacency matrix.

  • materialization (str, default 'latest') – Materialization version to use. 'latest' (default) or 'live' for live table.

  • symmetric (bool, default False) – If True, return a symmetric adjacency matrix with the same set of IDs on both rows and columns. The neuron set includes all neurons that appear in the filtered synapses data (union of all pre- and post-synaptic neurons). This provides a complete view of connectivity among all neurons involved in the queried connections. If False (default), rows represent pre-synaptic neurons and columns represent post-synaptic neurons from the actual synapses data.

  • clean (bool, default True) – Whether to perform cleanup of the underlying synapse data:
      - Remove autapses (self-connections)
      - Remove connections involving neuron ID 0 (background)
      This parameter is passed to get_synapses().

  • update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).

  • dataset (str, optional) – Dataset to use for the query.

Returns:

An adjacency matrix where each entry [i, j] represents the number of synapses from neuron i (pre-synaptic) to neuron j (post-synaptic). Rows are pre-synaptic neurons, columns are post-synaptic neurons.

Return type:

pd.DataFrame

Examples

>>> import crantpy as cp
>>> # Get adjacency between specific neurons
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050])
>>>
>>> # Get adjacency with minimum threshold
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050], threshold=3)
>>>
>>> # Get symmetric adjacency matrix
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050], symmetric=True)
>>>
>>> # Get adjacency matrix with autapses included
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050], clean=False)
>>>
>>> # Skip ID updates for faster queries (use only if IDs are known to be current)
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050], update_ids=False)

Notes

  • This function uses get_synapses() internally to retrieve synaptic connections

  • If both pre_ids and post_ids are None, this will query all synapses in the dataset

  • The threshold parameter filters connection pairs, not individual synapses

  • When symmetric=True, the resulting matrix includes all neurons that appear in the filtered synapses data, ensuring complete connectivity visualization

  • When symmetric=False, the matrix may be rectangular with different neuron sets for rows (pre-synaptic) and columns (post-synaptic)

  • When clean=True (default), autapses and background connections are removed

  • When update_ids=True (default), IDs are automatically updated with efficient caching
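
The aggregation step can be sketched from a plain edge list (illustrative: the real function retrieves synapses via get_synapses() and returns a pandas DataFrame matrix rather than a dict):

```python
from collections import Counter

def adjacency_from_synapses(edges, threshold=1, clean=True):
    """Count synapses per (pre, post) pair, optionally drop autapses and
    background (ID 0), then keep pairs at or above the threshold. Note the
    threshold filters pairs, not individual synapses."""
    counts = Counter(edges)
    if clean:
        counts = {pair: n for pair, n in counts.items()
                  if pair[0] != pair[1] and 0 not in pair}
    return {pair: n for pair, n in counts.items() if n >= threshold}

# One tuple per synapse: (pre_root_id, post_root_id).
edges = [(1, 2), (1, 2), (1, 2), (2, 3), (1, 1), (0, 3)]
print(adjacency_from_synapses(edges, threshold=2))  # -> {(1, 2): 3}
```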

crantpy.get_cave_client(dataset=None, clear_cache=False, check_stale=True)[source]#

Returns a CAVE client instance. If a token is already set, it will be used for authentication. Otherwise, a new token will be generated.

Parameters:
  • clear_cache (bool, default False) – If True, bypasses the cache and fetches a new client.

  • check_stale (bool, default True) – If True, checks if the cached client is stale based on materialization and maximum cache duration.

  • dataset (str, optional) – The dataset to use. If not provided, uses the default dataset.

Returns:

A CAVE client instance authenticated with the token.

Return type:

CAVEclient

Raises:

ValueError – If no token is found after attempting to generate one.

crantpy.get_cave_datastacks()[source]#

Get available CAVE datastacks.

Return type:

list

crantpy.get_cloudvolume(dataset=None, clear_cache=False, check_stale=True, **kwargs)[source]#

Returns a cloudvolume instance.

Parameters:
  • dataset (str | None)

  • clear_cache (bool)

  • check_stale (bool)

Return type:

cloudvolume.CloudVolume

crantpy.get_connectivity(neuron_ids, upstream=True, downstream=True, threshold=1, min_size=None, materialization='latest', clean=True, update_ids=True, dataset=None)[source]#

Fetch connectivity information for given neuron(s) in CRANTb.

This function retrieves synaptic connections for the specified neurons, returning a table of connections with pre-synaptic neurons, post-synaptic neurons, and synapse counts.

Parameters:
  • neuron_ids (int, str, list, NeuronCriteria) – Neuron root ID(s) to query connectivity for. Can be a single ID, list of IDs, or NeuronCriteria object.

  • upstream (bool, default True) – Whether to fetch upstream (incoming) connectivity to the query neurons.

  • downstream (bool, default True) – Whether to fetch downstream (outgoing) connectivity from the query neurons.

  • threshold (int, default 1) – Minimum number of synapses required between a pair to be included in the results.

  • min_size (int, optional) – Minimum size for filtering synapses before aggregating connections.

  • materialization (str, default 'latest') – Materialization version to use. 'latest' (default) or 'live' for live table.

  • clean (bool, default True) – Whether to perform cleanup of the underlying synapse data:
      - Remove autapses (self-connections)
      - Remove connections involving neuron ID 0 (background)
      This parameter is passed to get_synapses().

  • update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).

  • dataset (str, optional) – Dataset to use for the query.

Returns:

Connectivity table with columns:
  - 'pre': pre-synaptic neuron ID
  - 'post': post-synaptic neuron ID
  - 'weight': number of synapses between the pair

Return type:

pd.DataFrame

Raises:

ValueError – If both upstream and downstream are False.

Examples

>>> import crantpy as cp
>>> # Get all connections for a neuron
>>> conn = cp.get_connectivity(576460752641833774)
>>>
>>> # Get only downstream connections with threshold
>>> conn = cp.get_connectivity(576460752641833774, upstream=False, threshold=3)
>>>
>>> # Get connectivity for multiple neurons
>>> conn = cp.get_connectivity([576460752641833774, 576460752777916050])
>>>
>>> # Skip ID updates for faster queries (use only if IDs are known to be current)
>>> conn = cp.get_connectivity(576460752641833774, update_ids=False)

Notes

  • This function uses get_synapses() internally to retrieve synaptic connections

  • Results are aggregated by pre-post neuron pairs and sorted by synapse count

  • When clean=True, autapses and background connections are removed

  • When update_ids=True (default), IDs are automatically updated with efficient caching

crantpy.get_current_cave_token()[source]#

Retrieves the current token from the CAVE client.

Returns:

The current CAVE token.

Return type:

str

Raises:

ValueError – If no token is found.

crantpy.get_dataset_segmentation_source(dataset)[source]#

Get segmentation source for given dataset.

Parameters:

dataset (str)

Return type:

str

crantpy.get_datastack_segmentation_source(datastack)[source]#

Get segmentation source for given CAVE datastack.

Return type:

str

crantpy.get_global_cache(cache_name)[source]#

Get a named global cache dictionary.

Parameters:

cache_name (str) – Name of the cache to retrieve

Returns:

The requested cache dictionary

Return type:

dict

crantpy.get_skeletons(root_ids, dataset='latest', progress=True, omit_failures=None, max_threads=6, **kwargs)[source]#

Fetch skeletons for multiple neurons.

Tries to get precomputed skeletons first, then falls back to on-demand skeletonization if needed. If more than one root ID is given, skeletons are fetched via the parallel skeletonization function.

Parameters:
  • root_ids (list of int or np.ndarray) – Root IDs of neurons to fetch skeletons for.

  • dataset (str, default 'latest') – Dataset to query against.

  • progress (bool, default True) – Show progress during fetching.

  • omit_failures (bool, optional) – How to handle failures:
      - None: raise an exception on failure
      - True: skip failed neurons
      - False: return an empty TreeNeuron for failed cases

  • max_threads (int, default 6) – Number of parallel threads for fetching skeletons.

  • **kwargs – Additional arguments passed to skeletonization if needed.

Returns:

List of successfully fetched/generated skeletons.

Return type:

navis.NeuronList

crantpy.get_soma_from_annotations(root_id, client, dataset=None)[source]#

Try to get soma location from nucleus annotations.

Parameters:
  • root_id (int) – Root ID of the neuron to get soma information for.

  • client (CAVEclient) – CAVE client for data access.

  • dataset (str, optional) – Dataset identifier (handled by decorators if not provided).

Returns:

(x, y, z) coordinates of the soma in nanometers, or None if not found.

Return type:

tuple or None

crantpy.get_synapse_counts(neuron_ids, threshold=1, min_size=None, materialization='latest', clean=True, update_ids=True, dataset=None)[source]#

Get synapse counts (pre and post) for given neuron IDs in CRANTb.

This function returns the total number of presynaptic and postsynaptic connections for each specified neuron, aggregated across all their partners.

Parameters:
  • neuron_ids (int, str, list, NeuronCriteria) – Neuron root ID(s) to get synapse counts for. Can be a single ID, list of IDs, or NeuronCriteria object.

  • threshold (int, default 1) – Minimum number of synapses required between a pair to be counted towards the total. Pairs with fewer synapses are excluded.

  • min_size (int, optional) – Minimum size for filtering individual synapses before counting.

  • materialization (str, default 'latest') – Materialization version to use. 'latest' (default) or 'live' for live table.

  • clean (bool, default True) – Whether to perform cleanup of the underlying synapse data:
      - Remove autapses (self-connections)
      - Remove connections involving neuron ID 0 (background)
      This parameter is passed to get_connectivity().

  • update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).

  • dataset (str, optional) – Dataset to use for the query.

Returns:

DataFrame with columns:
  - index: neuron IDs
  - 'pre': number of presynaptic connections (outgoing)
  - 'post': number of postsynaptic connections (incoming)

Return type:

pd.DataFrame

Examples

>>> import crantpy as cp
>>> # Get synapse counts for a single neuron
>>> counts = cp.get_synapse_counts(576460752641833774)
>>>
>>> # Get counts for multiple neurons with threshold
>>> counts = cp.get_synapse_counts([576460752641833774, 576460752777916050], threshold=3)
>>>
>>> # Skip ID updates for faster queries (use only if IDs are known to be current)
>>> counts = cp.get_synapse_counts(576460752641833774, update_ids=False)

Notes

  • This function uses get_connectivity() internally to get connection data

  • Counts represent the number of distinct synaptic partners, not individual synapses

  • The threshold is applied at the connection level (pairs of neurons)

  • When update_ids=True (default), IDs are automatically updated with efficient caching
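The per-pair thresholding and aggregation described above can be sketched with pandas. Column names here are illustrative, not necessarily the actual schema returned by get_connectivity():

```python
import pandas as pd

# Hypothetical connectivity table: one row per neuron pair with a synapse count.
conn = pd.DataFrame({
    "pre":    [1, 1, 2, 3],
    "post":   [2, 3, 1, 1],
    "weight": [5, 2, 1, 4],
})

threshold = 3
# The threshold is applied at the connection level (pairs of neurons).
strong = conn[conn["weight"] >= threshold]

neuron_ids = [1, 2, 3]
counts = pd.DataFrame({
    # 'pre': outgoing connections; 'post': incoming connections
    "pre":  [int((strong["pre"] == n).sum()) for n in neuron_ids],
    "post": [int((strong["post"] == n).sum()) for n in neuron_ids],
}, index=neuron_ids)
```

With threshold=3, the pair 2→1 (weight 1) and 1→3 (weight 2) drop out before counting.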

crantpy.get_synapses(pre_ids=None, post_ids=None, threshold=1, min_size=None, materialization='latest', return_pixels=True, clean=True, update_ids=True, dataset=None)[source]#

Fetch synapses for a given set of pre- and/or post-synaptic neuron IDs in CRANTb.

Parameters:
  • pre_ids (int, str, list of int/str, NeuronCriteria, optional) – Pre-synaptic neuron root ID(s) to include. Can be a single ID, list of IDs, or NeuronCriteria object.

  • post_ids (int, str, list of int/str, NeuronCriteria, optional) – Post-synaptic neuron root ID(s) to include. Can be a single ID, list of IDs, or NeuronCriteria object.

  • threshold (int, default 1) – Minimum number of synapses required for a partner to be retained. Currently we don't know what a good threshold is.

  • min_size (int, optional) – Minimum size for filtering synapses. Currently we don't know what a good size is.

  • materialization (str, default 'latest') – Materialization version to use. 'latest' (default) or 'live' for live table.

  • return_pixels (bool, default True) – Whether to convert coordinate columns from nanometers to pixels. If True (default), coordinates in ctr_pt_position, pre_pt_position, and post_pt_position are converted using dataset scale factors. If False, coordinates remain in nanometer units.

  • clean (bool, default True) – Whether to perform cleanup of the synapse data: remove autapses (self-connections) and remove connections involving neuron ID 0 (background).

  • update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).

  • dataset (str, optional) – Dataset to use for the query.

Returns:

DataFrame of synaptic connections.

Return type:

pd.DataFrame

Raises:

ValueError – If neither pre_ids nor post_ids are provided.

Notes

  • When update_ids=True (default), outdated root IDs are automatically updated using supervoxel IDs from annotations when available for fast, reliable updates

  • ID updates are cached per-ID, so repeated queries with overlapping IDs are efficient

  • Updated IDs are used for the query, but the original IDs are not modified in place

See also

update_ids

Manually update root IDs to their latest versions
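The clean=True behaviour described above amounts to two row filters; a minimal pandas sketch using the documented pre/post position-ID column names:

```python
import pandas as pd

# Hypothetical synapse table with the documented root-ID columns.
syn = pd.DataFrame({
    "pre_pt_root_id":  [10, 11, 0, 12],
    "post_pt_root_id": [11, 11, 12, 0],
})

# Remove autapses (self-connections)...
syn = syn[syn["pre_pt_root_id"] != syn["post_pt_root_id"]]
# ...and connections involving neuron ID 0 (background).
syn = syn[(syn["pre_pt_root_id"] != 0) & (syn["post_pt_root_id"] != 0)]
```

Of the four rows above, only the genuine 10 → 11 connection survives cleanup.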

crantpy.inject_dataset(allowed=None, disallowed=None, param_name='dataset')[source]#

Inject current default dataset.

Parameters:
  • allowed (List[str] or str, optional) – List of allowed datasets or a single allowed dataset.

  • disallowed (List[str] or str, optional) – List of disallowed datasets or a single disallowed dataset.

  • param_name (str, default 'dataset') – Name of the parameter to inject the dataset into.

Returns:

Decorator function that injects the dataset.

Return type:

Callable

crantpy.is_latest_roots(x, timestamp=None, dataset=None, progress=True, batch_size=100000, validate_ids=True, use_http_session=True)[source]#

Check if the given root IDs are the latest based on the timestamp.

Parameters:
  • x (IDs = str | int | np.int64) – The root IDs to check.

  • timestamp (Timestamp = str | int | np.int64 | datetime | np.datetime64 | pd.Timestamp) – The timestamp to compare against. Can also be "mat" for the latest materialization timestamp.

  • dataset (str, optional) – The dataset to use.

  • progress (bool, default True) – Whether to show progress bar for large batches.

  • batch_size (int, default 100_000) – Batch size for processing large numbers of IDs.

  • validate_ids (bool, default True) – Whether to validate root IDs before processing.

  • use_http_session (bool, default True) – Whether to use direct HTTP session for better performance.

Returns:

A boolean array indicating whether each root ID is the latest.

Return type:

np.ndarray

Examples

>>> from crantpy.utils.cave.helpers import is_latest_roots
>>> is_latest_roots([123456789, 987654321])
array([ True, False])
>>> # Check against latest materialization
>>> is_latest_roots([123456789], timestamp="mat")
array([ True])
crantpy.is_valid_root(x, dataset=None, raise_exc=False)[source]#

Check if ID is (potentially) valid root ID.

Parameters:
  • x (IDs = str | int | np.int64) – The root IDs to check.

  • dataset (str, optional) – The dataset to use.

  • raise_exc (bool, default False) – Whether to raise an exception if invalid IDs are found.

Returns:

A boolean array indicating whether each root ID is valid.

Return type:

np.ndarray

Raises:

ValueError – If raise_exc is True and invalid IDs are found.

crantpy.is_valid_supervoxel(x, dataset=None, raise_exc=False)[source]#

Check if ID is (potentially) valid supervoxel ID.

Parameters:
  • x (IDs = str | int | np.int64) – The supervoxel IDs to check.

  • dataset (str, optional) – The dataset to use.

  • raise_exc (bool, default False) – Whether to raise an exception if invalid IDs are found.

Returns:

If x is a single ID, returns bool. If x is iterable, returns array.

Return type:

bool or np.ndarray

Raises:

ValueError – If raise_exc is True and invalid IDs are found.

See also

is_valid_root

Use this function to check if a root ID is valid.

crantpy.make_iterable(x, force_type=None)[source]#

Convert input to a NumPy array.

Parameters:
  • x (Any) – The input to convert.

  • force_type (Optional[type]) – If specified, the input will be cast to this type.

Returns:

The converted numpy array.

Return type:

np.ndarray
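The wrapping behaviour can be illustrated with a small sketch (a hypothetical re-implementation, not crantpy's actual code): scalars and strings become single-element arrays, sequences pass through, and force_type casts the result.

```python
import numpy as np

def make_iterable_sketch(x, force_type=None):
    """Minimal sketch of make_iterable: wrap scalars, pass sequences through."""
    if isinstance(x, (str, bytes)) or not hasattr(x, "__iter__"):
        x = [x]  # treat strings and scalars as single elements
    arr = np.asarray(list(x))
    if force_type is not None:
        arr = arr.astype(force_type)  # e.g. cast IDs to str
    return arr
```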

crantpy.map_position_to_node(neuron, position, return_distance=False)[source]#

Map a spatial position to the nearest node in a skeleton.

This utility function finds the closest node in a skeleton to a given position using a KDTree for efficient spatial lookup. Useful for soma detection, synapse attachment, and other spatial queries.

Parameters:
  • neuron (navis.TreeNeuron) – The skeleton neuron to search.

  • position (list or np.ndarray) – The [x, y, z] coordinates to map. Should be in the same coordinate system as the neuron (typically nanometers).

  • return_distance (bool, optional) – If True, also return the Euclidean distance to the nearest node. Default is False.

Returns:

  • node_id (int) – The node_id of the nearest node.

  • distance (float (optional)) – The Euclidean distance to the nearest node in nanometers. Only returned if return_distance=True.

Return type:

int | tuple[int, float]

Examples

>>> import crantpy as cp
>>> import numpy as np
>>> skel = cp.get_l2_skeleton(576460752664524086)
>>> # Map a position to nearest node
>>> node_id = cp.map_position_to_node(skel, [240000, 85000, 96000])
>>> print(f"Nearest node: {node_id}")
>>> # Get distance too
>>> node_id, dist = cp.map_position_to_node(skel, [240000, 85000, 96000], return_distance=True)
>>> print(f"Nearest node: {node_id} at distance {dist:.2f} nm")

See also

reroot_at_soma

Reroot a skeleton at its soma location.

detect_soma

Detect soma location in a neuron.
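The nearest-node lookup can be illustrated with a brute-force NumPy version (crantpy uses a KDTree for efficiency; the node-table layout here is hypothetical):

```python
import numpy as np

# Hypothetical node table: one row per node as (node_id, x, y, z) in nm.
nodes = np.array([
    [1,   0.0,   0.0, 0.0],
    [2, 100.0,   0.0, 0.0],
    [3,   0.0, 200.0, 0.0],
])

def nearest_node(nodes, position):
    """Return (node_id, distance) of the node closest to `position`."""
    coords = nodes[:, 1:4]
    dists = np.linalg.norm(coords - np.asarray(position, dtype=float), axis=1)
    i = int(np.argmin(dists))
    return int(nodes[i, 0]), float(dists[i])
```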

crantpy.match_dtype(value, dtype)[source]#

Match the dtype of a value to a given dtype.

Parameters:
  • value (Any) – The value to convert.

  • dtype (str or type) – The target dtype to convert to.

Returns:

The converted value.

Return type:

Any

Raises:

ValueError – If the dtype is not supported.
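A sketch of what such a dtype match might look like using NumPy's dtype machinery (a hypothetical re-implementation; unsupported dtypes are surfaced as ValueError per the docstring):

```python
import numpy as np

def match_dtype_sketch(value, dtype):
    """Cast `value` to the target dtype, raising ValueError if unsupported."""
    try:
        np_dtype = np.dtype(dtype)  # accepts strings like "int64" or types
    except TypeError as exc:
        raise ValueError(f"Unsupported dtype: {dtype!r}") from exc
    return np_dtype.type(value)
```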

crantpy.neurons_to_url(neurons, include_skeleton=True, downsample=None, **kwargs)[source]#

Create neuroglancer URLs for a list of neurons.

Parameters:
  • neurons (NeuronList) – List of neurons to create URLs for. Must have root_id attribute.

  • include_skeleton (bool, default True) – Whether to include the skeleton in the URL.

  • downsample (int, optional) – Factor by which to downsample skeletons before adding to scene.

  • **kwargs – Additional arguments passed to encode_url().

Returns:

DataFrame with columns: id, name, url

Return type:

DataFrame

Examples

>>> neurons = crt.viz.get_skeletons([720575940621039145, 720575940621039146])
>>> urls = neurons_to_url(neurons)
>>> print(urls[['id', 'url']])
crantpy.parse_neuroncriteria(allow_empty=False)[source]#

Parse all NeuronCriteria arguments into an array of root IDs.

This decorator automatically converts any NeuronCriteria objects in function arguments to arrays of root IDs. It uses type checking by class name to avoid circular imports.

Parameters:

allow_empty (bool, default False) – Whether to allow the NeuronCriteria to not match any neurons.

Returns:

Decorator function that processes NeuronCriteria arguments.

Return type:

Callable

Examples

>>> @parse_neuroncriteria()
>>> def process_neurons(neurons):
>>>     # neurons will be an array of root IDs
>>>     return neurons
>>>
>>> # Can be called with a NeuronCriteria object
>>> result = process_neurons(NeuronCriteria(cell_class='example'))
crantpy.parse_root_ids(neurons)[source]#

Parse various neuron input types to a list of root ID strings.

Parameters:

neurons (Union[int, str, List[Union[int, str]], NeuronCriteria]) – The neuron(s) to parse. Can be a single root ID (int or str), a list of root IDs, or a NeuronCriteria object.

Returns:

A list of root ID strings.

Return type:

List[str]

crantpy.parse_timestamp(x)[source]#

Parse a timestamp string to Unix timestamp.

Parameters:

x (Timestamp) – The timestamp to parse. An int must be a Unix timestamp. A string must be ISO 8601, e.g. '2021-11-15'. datetime, np.datetime64, and pd.Timestamp are also accepted.

Returns:

The Unix timestamp.

Return type:

str
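The accepted input formats can be illustrated with a stdlib-only sketch (a hypothetical re-implementation: it returns an int where the documented return type is str, and naive datetimes are assumed UTC here):

```python
from datetime import datetime, timezone

def parse_timestamp_sketch(x):
    """Normalise int / ISO 8601 string / datetime input to a Unix timestamp."""
    if isinstance(x, (int, float)):
        return int(x)  # already a Unix timestamp
    if isinstance(x, str):
        dt = datetime.fromisoformat(x)  # e.g. '2021-11-15'
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)  # assumption: naive == UTC
        return int(dt.timestamp())
    if isinstance(x, datetime):
        return int(x.timestamp())
    raise TypeError(f"Cannot parse timestamp from {type(x)}")
```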

crantpy.plot_em_image(x, y, z, size=1000)[source]#

Fetch and return an EM image slice from the precomputed CloudVolume. Currently only supports slices through the Z axis (i.e. XY plane).

Parameters:
  • x (int) – The x coordinate of the center of the image slice.

  • y (int) – The y coordinate of the center of the image slice.

  • z (int) – The z coordinate of the image slice.

  • size (int, optional) – The size of the image slice (default is 1000).

Returns:

The EM image slice as a numpy array.

Return type:

np.ndarray
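The centre-plus-size windowing (without the network fetch) can be illustrated on a toy array; the real function pulls the data from the precomputed CloudVolume instead:

```python
import numpy as np

# Toy stand-in for the EM volume, indexed (x, y, z).
vol = np.arange(100 * 100 * 4).reshape(100, 100, 4)

def em_window(vol, x, y, z, size=10):
    """Extract a size-by-size XY window centred on (x, y) at depth z,
    clamped to the volume bounds."""
    half = size // 2
    x0, x1 = max(x - half, 0), min(x + half, vol.shape[0])
    y0, y1 = max(y - half, 0), min(y + half, vol.shape[1])
    return vol[x0:x1, y0:y1, z]
```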

crantpy.reroot_at_soma(neurons, soma_coords=None, detect_soma_kwargs=None, inplace=True, progress=False)[source]#

Reroot skeleton(s) at their soma location.

This convenience function combines soma detection and rerooting. If soma coordinates are not provided, they will be automatically detected using detect_soma(). The skeleton is then rerooted at the node nearest to the soma location.

Parameters:
  • neurons (TreeNeuron | NeuronList) – Single neuron or list of neurons to reroot.

  • soma_coords (np.ndarray or list of np.ndarray, optional) – Soma coordinates in pixels [x, y, z]. If not provided, soma will be automatically detected using detect_soma(). For multiple neurons, provide a list of coordinates in the same order as neurons.

  • detect_soma_kwargs (dict, optional) – Additional keyword arguments to pass to detect_soma() if soma coordinates are not provided.

  • inplace (bool, optional) – If True, reroot neurons in place. If False, return rerooted copies. Default is True.

  • progress (bool, optional) – If True, show progress bar when processing multiple neurons or detecting somas. Default is False.

Returns:

Rerooted neuron(s). Same as input if inplace=True, otherwise copies.

Return type:

TreeNeuron | NeuronList

Examples

>>> import crantpy as cp
>>> # Get skeleton
>>> skel = cp.get_l2_skeleton(576460752664524086)
>>> # Reroot at automatically detected soma
>>> skel_rerooted = cp.reroot_at_soma(skel)
>>> print(f"Root node: {skel_rerooted.root}")
>>> # Reroot with provided soma coordinates
>>> soma = [28000, 9000, 2200]  # in pixels
>>> skel_rerooted = cp.reroot_at_soma(skel, soma_coords=soma)
>>> # Process multiple neurons
>>> skels = cp.get_l2_skeleton([576460752664524086, 576460752590602315])
>>> skels_rerooted = cp.reroot_at_soma(skels, progress=True)

See also

map_position_to_node

Map a position to the nearest node.

detect_soma

Detect soma location in a neuron.

crantpy.retry(func, retries=5, cooldown=2)[source]#

Retry function on HTTPError.

This also suppresses UserWarnings (commonly raised by l2 cache requests)

Parameters:
  • cooldown (int | float) – Cooldown period in seconds between attempts.

  • retries (int) – Number of retries before we give up. Each subsequent retry waits an additional cooldown period before the next attempt.

crantpy.retry_func(retries=5, cooldown=2)[source]#

Retry decorator for functions on HTTPError.

This also suppresses UserWarnings (commonly raised by l2 cache requests).

Parameters:
  • retries (int) – Number of retries before we give up. Each subsequent retry waits an additional cooldown period before the next attempt.

  • cooldown (int | float) – Cooldown period in seconds between attempts.
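The retry-with-growing-delay pattern can be sketched as a decorator. This is a hypothetical re-implementation using the stdlib urllib HTTPError; the real function may catch a different HTTPError class (e.g. from requests):

```python
import time
import warnings
from functools import wraps
from urllib.error import HTTPError

def retry_func_sketch(retries=5, cooldown=2):
    """On HTTPError, wait an increasing multiple of `cooldown` and retry,
    suppressing UserWarnings while the wrapped function runs."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with warnings.catch_warnings():
                warnings.simplefilter("ignore", UserWarning)
                for attempt in range(retries):
                    try:
                        return func(*args, **kwargs)
                    except HTTPError:
                        if attempt == retries - 1:
                            raise  # out of retries: propagate the error
                        time.sleep(cooldown * (attempt + 1))
        return wrapper
    return decorator
```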

crantpy.scene_to_url(scene, base_neuroglancer=False, shorten=False, open=False, to_clipboard=False)[source]#

Convert neuroglancer scene dictionary to URL.

Parameters:
  • scene (dict) – Neuroglancer scene dictionary.

  • base_neuroglancer (bool, default False) – Whether to use base neuroglancer instead of CAVE Spelunker.

  • shorten (bool, default False) – Whether to create a shortened URL (requires state server).

  • open (bool, default False) – If True, opens URL in web browser.

  • to_clipboard (bool, default False) – If True, copies URL to clipboard.

Returns:

Neuroglancer URL.

Return type:

str
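For the unshortened case, converting a scene dictionary to a URL amounts to JSON-encoding it into the URL fragment after "#!". A sketch with a placeholder base URL (the actual viewer base used by crantpy may differ):

```python
import json
from urllib.parse import quote

def scene_to_url_sketch(scene, base="https://neuroglancer-demo.appspot.com"):
    """Encode a neuroglancer scene dict into the URL fragment."""
    return f"{base}/#!{quote(json.dumps(scene))}"

url = scene_to_url_sketch({"layers": []})
```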

crantpy.set_cave_token(token)[source]#

Sets the CAVE token for the CAVE client.

Parameters:

token (str) – The CAVE token to set.

Return type:

None

crantpy.set_default_dataset(dataset)[source]#

Set the default dataset to use for subsequent queries.

Parameters:

dataset (str)

crantpy.set_logging_level(level)[source]#

Sets the logging level for the logger.

Parameters:

level (str) – The logging level to set. Options are 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'.

Return type:

None

crantpy.skeletonize_neuron(client, root_id, shave_skeleton=True, remove_soma_hairball=False, assert_id_match=False, threads=2, save_to=None, progress=True, use_pcg_skel=False, **kwargs)[source]#

Skeletonize a neuron. This is the main skeletonization function.

Parameters:
  • client (CAVEclient) – CAVE client for data access.

  • root_id (int) – Root ID of the neuron to skeletonize.

  • shave_skeleton (bool, default True) – Remove small protrusions and bristles from the skeleton.

  • remove_soma_hairball (bool, default False) – Remove the hairball mesh from the soma.

  • assert_id_match (bool, default False) – Verify skeleton nodes map to correct segment ID.

  • threads (int, default 2) – Number of parallel threads for mesh processing.

  • save_to (str, optional) – Save skeleton as SWC file to this path.

  • progress (bool, default True) – Show progress bars during processing.

  • use_pcg_skel (bool, default False) – Try pcg_skel first before skeletor (CAVE-client skeletonization).

  • **kwargs – Additional arguments for skeletonization algorithms.

Returns:

The skeletonized neuron.

Return type:

navis.TreeNeuron | navis.NeuronList

Notes

Planned improvements (TODOs carried over from fafbseg):

  • Use synapse locations as constraints

  • Mesh preprocessing options

  • Chunked skeletonization for large meshes

  • Use soma annotations from external sources

  • Better error handling/logging

  • Allow user-supplied soma location/radius

  • Option to return intermediate results

  • Support more skeletonization algorithms

  • Merge disconnected skeletons

  • Custom node/edge attributes

crantpy.skeletonize_neurons_parallel(client, root_ids, n_cores=None, progress=True, color_map=None, **kwargs)[source]#

Skeletonize multiple neurons in parallel.

Parameters:
  • client (CAVEclient) – CAVE client for data access.

  • root_ids (list of int or np.ndarray) – Root IDs of neurons to skeletonize.

  • n_cores (int, optional) – Number of cores to use. If None, uses half of available cores.

  • progress (bool, default True) – Show progress bars during processing.

  • color_map (str, optional) – Generate colors for each neuron using this colormap. Returns tuple of (neurons, colors) instead of just neurons.

  • **kwargs – Additional arguments passed to skeletonize_neuron.

Returns:

NeuronList of skeletonized neurons, or tuple of (NeuronList, colors) if color_map is specified.

Return type:

navis.NeuronList or tuple

crantpy.suppress_urllib3_connectionpool_warnings()[source]#

Context manager to temporarily suppress urllib3 connectionpool messages.

Only filters the specific "Connection pool is full, discarding connection" warnings while inside the context.

crantpy.validate_cave_client(client, *args, **kwargs)[source]#

Validate if a cached CAVE client is still valid.