crantpy package#
Subpackages#
- crantpy.queries package
- crantpy.utils package
- Subpackages
- Submodules
- Module contents
FilteringError, NoMatchesError, add_annotation_layer(), add_skeleton_layer(), cached_per_id(), cached_result(), clear_all_caches(), clear_cave_client_cache(), clear_cloudvolume_cache(), clear_global_cache(), construct_scene(), create_sql_query(), decode_url(), encode_url(), filter_df(), generate_cave_token(), get_cave_client(), get_cave_datastacks(), get_cloudvolume(), get_current_cave_token(), get_dataset_segmentation_source(), get_datastack_segmentation_source(), get_global_cache(), inject_dataset(), is_latest_roots(), is_valid_root(), is_valid_supervoxel(), make_iterable(), map_position_to_node(), match_dtype(), neurons_to_url(), parse_neuroncriteria(), parse_root_ids(), parse_timestamp(), plot_em_image(), reroot_at_soma(), retry(), retry_func(), scene_to_url(), set_cave_token(), set_default_dataset(), set_logging_level(), validate_cave_client()
- crantpy.viz package
Module contents#
- exception crantpy.FilteringError(message=None)[source]#
Bases: ValueError
Raised if a filtering operation fails.
- Parameters:
message (str, optional) – The error message.
- Return type:
None
- exception crantpy.NoMatchesError(message=None)[source]#
Bases: ValueError
Raised if no matches are found.
- Parameters:
message (str, optional) – The error message.
- Return type:
None
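Both exceptions subclass ValueError, so existing ValueError handlers keep working. A minimal sketch of how this hierarchy behaves (classes and the find_neurons helper are redefined here purely for illustration):

```python
# Illustrative re-definitions mirroring the documented hierarchy:
# both exceptions derive from ValueError.
class FilteringError(ValueError):
    """Raised if a filtering operation fails."""

class NoMatchesError(ValueError):
    """Raised if no matches are found."""

def find_neurons(names):
    # Hypothetical query helper used only for this example.
    if not names:
        raise NoMatchesError("no matches found")
    return names

# A generic ValueError handler catches both crantpy exceptions.
try:
    find_neurons([])
except ValueError as e:
    print(type(e).__name__, e)
```

Catching `ValueError` is the broadest option; catch `NoMatchesError` or `FilteringError` directly to handle each case separately.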
- crantpy.add_annotation_layer(annotations, scene, name=None, connected=False)[source]#
Add annotations as new layer to scene.
- Parameters:
annotations (array or list) – Coordinates for annotations (in voxel space):
- (N, 3): Point annotations at x/y/z coordinates
- (N, 2, 3): Line segments with start and end points
- (N, 4): Ellipsoids with x/y/z center and radius
scene (dict) – Scene to add annotation layer to.
name (str, optional) – Name for the annotation layer.
connected (bool, default False) – If True, point annotations will be connected as a path (TODO).
- Returns:
Modified scene with annotation layer added.
- Return type:
dict
Examples
>>> # Add point annotations
>>> points = np.array([[100, 200, 50], [150, 250, 60]])
>>> scene = add_annotation_layer(points, scene, name="my_points")

>>> # Add line annotations
>>> lines = np.array([
...     [[100, 200, 50], [150, 250, 60]],
...     [[150, 250, 60], [200, 300, 70]]
... ])
>>> scene = add_annotation_layer(lines, scene, name="my_lines")
- crantpy.add_skeleton_layer(skeleton, scene, name=None)[source]#
Add skeleton as line annotation layer to scene.
- Parameters:
skeleton (navis.TreeNeuron) – Skeleton to add as line annotations.
scene (dict) – Scene to add the skeleton layer to.
name (str, optional) – Name for the skeleton layer.
- Returns:
Modified scene with skeleton layer added.
- Return type:
dict
Examples
>>> skeleton = crt.viz.get_skeletons([720575940621039145])[0]
>>> scene = construct_scene()
>>> scene = add_skeleton_layer(skeleton, scene)
- crantpy.attach_synapses(neurons, pre=True, post=True, threshold=1, min_size=None, materialization='latest', clean=True, max_distance=10000.0, update_ids=True, dataset=None)[source]#
Attach synapses as connectors to skeleton neurons.
This function fetches synapses for the given neuron(s) and maps them to the closest node on each skeleton using a KD-tree. The synapses are attached as a .connectors table with columns for connector_id, x, y, z, type (pre/post), partner_id, and node_id.
Adapted from fafbseg-py (Philipp Schlegel) to work with CRANTb data.
- Parameters:
neurons (navis.TreeNeuron or navis.NeuronList) – Skeleton neuron(s) to attach synapses to. Must be TreeNeuron objects with node coordinates.
pre (bool, default True) – Whether to fetch and attach presynapses (outputs) for the given neurons.
post (bool, default True) – Whether to fetch and attach postsynapses (inputs) for the given neurons.
threshold (int, default 1) – Minimum number of synapses required between neuron pairs to be included.
min_size (int, optional) – Minimum synapse size for filtering.
materialization (str, default 'latest') – Materialization version to use. Either 'latest' or 'live'.
clean (bool, default True) – Whether to perform cleanup of synapse data:
- Remove autapses (self-connections)
- Remove connections involving neuron ID 0 (background)
- Remove synapses that are too far from skeleton nodes (see max_distance)
max_distance (float, default 10000.0) – Maximum distance (in nanometers) between a synapse and its nearest skeleton node. Synapses further than this are removed if clean=True. The default of 10 µm helps filter out spurious synapse annotations far from the actual neuron.
update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).
dataset (str, optional) – Dataset to use for queries.
- Returns:
The same neuron(s) with .connectors table attached. The connectors table includes columns:
- connector_id: Unique ID for each synapse (sequential)
- x, y, z: Synapse coordinates in nanometers
- type: 'pre' for presynapses, 'post' for postsynapses
- partner_id: Root ID of the partner neuron
- node_id: ID of the skeleton node closest to this synapse
Note: The input neurons are modified in place and also returned.
- Return type:
navis.TreeNeuron or navis.NeuronList
- Raises:
TypeError – If neurons is not a TreeNeuron or NeuronList of TreeNeurons.
ValueError – If both pre and post are False.
Examples
>>> import crantpy as cp
>>> # Get a skeleton neuron
>>> skeleton = cp.get_l2_skeleton(576460752664524086)

>>> # Attach synapses to it
>>> skeleton = cp.attach_synapses(skeleton)

>>> # View the connectors table
>>> print(skeleton.connectors.head())

>>> # Get only presynapses
>>> skeleton = cp.attach_synapses(skeleton, post=False)

>>> # Filter distant synapses more aggressively
>>> skeleton = cp.attach_synapses(skeleton, max_distance=5000)

>>> # Skip ID updates for faster queries (use only if IDs are known to be current)
>>> skeleton = cp.attach_synapses(skeleton, update_ids=False)
See also
get_synapses – Fetch synapse data without attaching to neurons.
Notes
This function modifies the input neurons in place by adding/updating the .connectors attribute.
Synapses are mapped to skeleton nodes using scipy's KDTree for efficient nearest neighbor search.
The connector_id is a sequential integer starting from 0, not the original synapse ID from the database.
If a neuron already has a .connectors table, it will be overwritten.
Synapse coordinates are automatically converted from pixels to nanometers to match skeleton coordinate system (using SCALE_X=8, SCALE_Y=8, SCALE_Z=42).
When update_ids=True (default), IDs are automatically updated with efficient caching.
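The nearest-node mapping described in these notes can be sketched with scipy's KDTree on toy data (coordinates, column names, and the max_distance value here are illustrative, not the actual crantpy internals):

```python
import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

# Toy skeleton node table (coordinates in nm) and synapse locations.
nodes = pd.DataFrame({
    "node_id": [10, 11, 12],
    "x": [0.0, 100.0, 200.0],
    "y": [0.0, 0.0, 0.0],
    "z": [0.0, 0.0, 0.0],
})
synapses = np.array([[5.0, 0.0, 0.0],     # nearest to node 10
                     [190.0, 0.0, 0.0]])  # nearest to node 12

# Build a KD-tree over node coordinates, query the nearest node per synapse.
tree = cKDTree(nodes[["x", "y", "z"]].values)
dist, idx = tree.query(synapses)
node_ids = nodes["node_id"].values[idx]

# With clean=True, synapses further than max_distance from any node are dropped.
max_distance = 50.0
keep = dist <= max_distance
print(node_ids, keep)
```

The resulting node_ids become the connectors table's node_id column; the distance filter is what max_distance controls.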
- crantpy.cached_per_id(cache_name, id_param='x', max_age=7200, result_id_column='old_id')[source]#
Decorator for caching function results on a per-ID basis.
This decorator caches results for individual IDs rather than entire function calls. When the function is called with a list of IDs, it will:
1. Check which IDs have valid cached results
2. Only call the function for uncached IDs
3. Merge cached and new results
4. Cache the new results
This is particularly useful for functions like update_ids() where we want to avoid re-computing results for IDs we've already processed.
- Parameters:
cache_name (str) – Name of the global cache to use.
id_param (str, default 'x') – Name of the parameter containing the IDs to cache.
max_age (int, default MAXIMUM_CACHE_DURATION) – Maximum age of cached results in seconds.
result_id_column (str, default 'old_id') – Column name in the result DataFrame that contains the ID.
- Returns:
The decorated function with per-ID caching capabilities.
- Return type:
callable
Notes
The decorated function must return a pandas DataFrame
The ID parameter can be a list, array, or single ID
Cache entries are stored with timestamps for staleness checking
The function gains a clear_cache method to manually clear the cache
Examples
>>> @cached_per_id(cache_name="update_ids_cache", id_param="x")
... def update_ids(x, dataset=None):
...     # Process IDs
...     return pd.DataFrame({'old_id': x, 'new_id': x, 'changed': False})

>>> # First call - computes all IDs
>>> result1 = update_ids([1, 2, 3])

>>> # Second call - uses cached results for IDs 2 and 3
>>> result2 = update_ids([2, 3, 4])
- crantpy.cached_result(cache_name, max_age=7200, key_fn=None, should_cache_fn=None, validate_cache_fn=None)[source]#
Decorator for caching function results.
WARNING: This decorator is not thread-safe. It is recommended to use threading.Lock() to ensure thread safety when using this decorator in a multi-threaded environment.
This decorator provides a flexible caching mechanism for function results. It supports custom cache keys, validation, and conditional caching, making it suitable for a variety of use cases.
The cache stores entries in a dictionary structure:
{
    'result': original_function_result,
    'metadata': {
        '_created_at': timestamp
    }
}
This approach avoids modifying the original result objects directly, ensuring compatibility with immutable types.
- Parameters:
cache_name (str) – Name of the global cache to use. This is used to group cached results under a specific namespace.
max_age (int, default MAXIMUM_CACHE_DURATION) – Maximum age of cached result in seconds. Cached results older than this duration are considered stale and will be refreshed.
key_fn (callable, optional) – Function to generate a unique cache key based on the function's arguments. Defaults to using the first positional argument or the 'dataset' keyword argument. If the function returns None, an error will be raised.
should_cache_fn (callable, optional) – Function to determine whether the result of the function should be cached. It takes the function result and arguments as input and returns a boolean.
validate_cache_fn (callable, optional) – Function to validate if a cached result is still valid beyond the age check. It takes the cached result and the function arguments as input and returns a boolean.
- Returns:
The decorated function with caching capabilities.
- Return type:
callable
Examples
>>> # Basic Caching:
>>> @cached_result(cache_name="example_cache")
... def expensive_function(x):
...     return x ** 2

>>> # Custom Cache Key:
>>> @cached_result(cache_name="example_cache", key_fn=lambda x: f"key_{x}")
... def expensive_function(x):
...     return x ** 2

>>> # Conditional Caching:
>>> @cached_result(cache_name="example_cache", should_cache_fn=lambda result, *args: result > 10)
... def expensive_function(x):
...     return x ** 2

>>> # Cache Validation:
>>> def validate_cache(result, *args):
...     return result is not None

>>> @cached_result(cache_name="example_cache", validate_cache_fn=validate_cache)
... def expensive_function(x):
...     return x ** 2
Notes
The decorated function gains a clear_cache method to manually clear the cache for the specified cache_name.
The check_stale parameter can be used to skip staleness checks when calling the decorated function.
- crantpy.chunks_to_nm(xyz_ch, vol, voxel_resolution=[4, 4, 40])[source]#
Map a chunk location to Euclidean space. CloudVolume workaround, implemented after Giacomo's suggestion.
- Parameters:
xyz_ch (array-like) – (N, 3) array of chunk indices.
vol (cloudvolume.CloudVolume) – CloudVolume object associated with the chunked space.
voxel_resolution (list, optional) – Voxel resolution.
- Returns:
(N, 3) array of spatial points.
- Return type:
np.array
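In spirit, the mapping scales chunk indices by the chunk size (voxels per chunk) and the voxel resolution (nm per voxel). A simplified sketch of that arithmetic follows; the helper below is hypothetical, and the real chunks_to_nm reads chunk size (and possibly offsets) from the CloudVolume metadata rather than taking them as arguments:

```python
import numpy as np

def chunks_to_nm_sketch(xyz_ch, chunk_size, voxel_resolution=(4, 4, 40)):
    """Simplified chunk-index -> nanometer mapping (illustrative only).

    Multiplies chunk indices by the chunk size (voxels per chunk) and the
    voxel resolution (nm per voxel). The actual implementation derives these
    values from the CloudVolume object passed as `vol`.
    """
    xyz_ch = np.atleast_2d(xyz_ch)
    return xyz_ch * np.asarray(chunk_size) * np.asarray(voxel_resolution)

# Chunk (1, 2, 0) with 256x256x32-voxel chunks at 4x4x40 nm resolution:
print(chunks_to_nm_sketch([1, 2, 0], chunk_size=(256, 256, 32)))
```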
- crantpy.clear_global_cache(cache_name)[source]#
Clear a named global cache.
- Parameters:
cache_name (str) – Name of the cache to clear.
- Return type:
None
- crantpy.configure_urllib3_warning_suppression(enable=None)[source]#
Enable suppression of known cosmetic urllib3 warnings.
Trade-offs: Hiding warnings can make it harder to notice real connectivity issues. When enabled, only the specific connection pool message is filtered; other warnings remain visible. Does not call urllib3.disable_warnings().
Control via enable or environment variable CRANTPY_SUPPRESS_URLLIB3_WARNINGS.
- Returns:
True if suppression is enabled, False otherwise.
- Return type:
bool
- crantpy.construct_scene(*, image=True, segmentation=True, brain_mesh=True, merge_biased_seg=False, nuclei=False, base_neuroglancer=False, layout='xy-3d', dataset=None)[source]#
Construct a basic neuroglancer scene for CRANT data.
- Parameters:
image (bool, default True) – Whether to add the aligned EM image layer.
segmentation (bool, default True) – Whether to add the proofreadable segmentation layer.
brain_mesh (bool, default True) – Whether to add the brain mesh layer.
merge_biased_seg (bool, default False) – Whether to add the merge-biased segmentation layer (for proofreading).
nuclei (bool, default False) – Whether to add the nuclei segmentation layer.
base_neuroglancer (bool, default False) – Whether to use base neuroglancer (affects segmentation layer format).
layout (str, default "xy-3d") – Layout to show. Options: '3d', 'xy-3d', 'xy', '4panel'.
dataset (str, optional) – Which dataset to use ('latest' or 'sandbox'). If None, uses default.
- Returns:
Neuroglancer scene dictionary with requested layers.
- Return type:
dict
Examples
>>> # Create a minimal visualization scene
>>> scene = construct_scene(image=True, segmentation=True, brain_mesh=True)

>>> # Create a full proofreading scene
>>> scene = construct_scene(
...     image=True,
...     segmentation=True,
...     brain_mesh=True,
...     merge_biased_seg=True,
...     nuclei=True
... )
- crantpy.create_sql_query(table_name, fields, condition=None, limit=None, start=None)[source]#
Creates a SQL query to get the specified fields from the specified table.
- Parameters:
table_name (str) – The name of the table to query.
fields (List[str]) – The list of field names to include in the query.
condition (str, optional) – The WHERE clause of the query.
limit (int, optional) – The maximum number of rows to return.
start (int, optional) – The number of rows to skip (OFFSET).
- Returns:
The constructed SQL query string.
- Return type:
str
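The clause assembly can be sketched in plain Python. This is an approximation of what create_sql_query likely produces from the documented parameters, not the actual implementation; table and field names in the example are made up:

```python
from typing import List, Optional

def create_sql_query_sketch(table_name: str, fields: List[str],
                            condition: Optional[str] = None,
                            limit: Optional[int] = None,
                            start: Optional[int] = None) -> str:
    # SELECT <fields> FROM <table> [WHERE ...] [LIMIT n] [OFFSET m]
    parts = [f"SELECT {', '.join(fields)} FROM {table_name}"]
    if condition:
        parts.append(f"WHERE {condition}")
    if limit is not None:
        parts.append(f"LIMIT {limit}")
    if start is not None:
        parts.append(f"OFFSET {start}")
    return " ".join(parts)

print(create_sql_query_sketch("neurons", ["root_id", "cell_type"],
                              condition="cell_type = 'KC'", limit=10, start=20))
```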
- crantpy.decode_url(url, format='json')[source]#
Decode neuroglancer URL to extract information.
- Parameters:
url (str) – Neuroglancer URL to decode.
format (str, default 'json') – Requested output format (e.g. 'json' or 'brief').
- Returns:
Decoded information in requested format.
- Return type:
dict or DataFrame
Examples
>>> url = "https://spelunker.cave-explorer.org/#!{...}"
>>> info = decode_url(url, format='brief')
>>> print(info['selected'])  # List of selected segment IDs
>>> print(info['position'])  # [x, y, z] coordinates
- crantpy.detect_soma_mesh(mesh)[source]#
Try detecting the soma based on vertex clusters.
Identifies dense vertex clusters that likely represent the soma.
- Parameters:
mesh (trimesh.Trimesh) – Coordinates in nanometers. Mesh must not be downsampled for accurate detection.
- Returns:
Array of vertex indices that belong to the detected soma region. Returns empty array if no soma is detected.
- Return type:
np.ndarray
- crantpy.detect_soma_skeleton(s, min_rad=800, N=3)[source]#
Try detecting the soma based on radii.
Looks for consecutive nodes with large radii to identify soma. Includes additional checks to ensure the skeleton is valid.
- Parameters:
s (navis.TreeNeuron) – Skeleton to check.
min_rad (int, default 800) – Minimum radius for a node to be considered part of the soma.
N (int, default 3) – Number of consecutive large-radius nodes required to flag a soma.
- Returns:
Node ID of the detected soma, or None if no soma found.
- Return type:
int or None
- crantpy.divide_local_neighbourhood(mesh, radius)[source]#
Divide the mesh into locally connected patches of a given size (overlapping).
- crantpy.encode_url(segments=None, annotations=None, coords=None, skeletons=None, skeleton_names=None, seg_colors=None, seg_groups=None, invis_segs=None, scene=None, base_neuroglancer=False, layout='xy-3d', open=False, to_clipboard=False, shorten=False, *, dataset=None)[source]#
Encode data as CRANT neuroglancer scene URL.
- Parameters:
segments (int or list of int, optional) – Segment IDs (root IDs) to have selected in the scene.
annotations (array or dict, optional) – Coordinates for annotations:
- (N, 3) array: Point annotations at x/y/z coordinates (in voxels)
- dict: Multiple annotation layers {name: (N, 3) array}
coords ((3,) array, optional) – X, Y, Z coordinates (in voxels) to center the view on.
skeletons (TreeNeuron or NeuronList, optional) – Skeleton(s) to add as annotation layer(s). Must be in nanometers.
skeleton_names (str or list of str, optional) – Names for the skeleton(s) to display in the UI. If a single string is provided, it will be used for all skeletons. If a list is provided, its length must match the number of skeletons.
seg_colors (str, tuple, list, dict, or array, optional) – Colors for segments:
- str or tuple: Single color for all segments
- list: List of colors matching segments
- dict: Mapping of segment IDs to colors
- array: Labels that will be converted to colors
seg_groups (list or dict, optional) – Group segments into separate layers:
- list: Group labels matching segments
- dict: {group_name: [seg_id1, seg_id2, …]}
invis_segs (int or list, optional) – Segment IDs to select but keep invisible.
scene (dict or str, optional) – Existing scene to modify (as dict or URL string).
base_neuroglancer (bool, default False) – Whether to use base neuroglancer instead of CAVE Spelunker.
layout (str, default "xy-3d") – Layout to show. Options: '3d', 'xy-3d', 'xy', '4panel'.
open (bool, default False) – If True, opens the URL in a web browser.
to_clipboard (bool, default False) – If True, copies the URL to clipboard (requires pyperclip).
shorten (bool, default False) – If True, creates a shortened URL (requires state server).
dataset (str, optional) – Which dataset to use. If None, uses default.
- Returns:
Neuroglancer URL.
- Return type:
str
Examples
>>> # Simple scene with segments
>>> url = encode_url(segments=[720575940621039145, 720575940621039146])

>>> # Scene with colored segments
>>> url = encode_url(
...     segments=[720575940621039145, 720575940621039146],
...     seg_colors={720575940621039145: 'red', 720575940621039146: 'blue'}
... )

>>> # Scene with skeleton and centered view
>>> import navis
>>> skeleton = crt.viz.get_skeletons([720575940621039145])[0]
>>> url = encode_url(
...     segments=[720575940621039145],
...     skeletons=skeleton,
...     coords=[24899, 14436, 3739]
... )
- crantpy.filter_df(df, column, value, regex=False, case=False, match_all=False, exact=True)[source]#
Filter a DataFrame based on a column and a value. Handles string, numeric, and list-containing columns.
- Parameters:
df (pandas.DataFrame) – The DataFrame to filter.
column (str) – The column to filter on.
value (Any) – The value(s) to filter for.
regex (bool) – If True, treat value as a regular expression. Defaults to False.
case (bool) – If True, string matching is case-sensitive. Defaults to False.
match_all (bool) – For list-containing columns: if True, requires all filter values to be present in the cell's list. If False, requires at least one filter value to be present. Defaults to False.
exact (bool) – If True, requires exact matches. Defaults to True.
- Returns:
The filtered df.
- Return type:
pd.DataFrame
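The matching modes can be illustrated with plain pandas on a toy table. This is a re-implementation of the semantics described above for illustration, not the actual crantpy code, and the column names are made up:

```python
import pandas as pd

df = pd.DataFrame({
    "cell_type": ["KC", "kc", "MBON", "PN"],
    "status": [["proofread"], ["seed"], ["proofread", "seed"], []],
})

# exact=True, case=False: case-insensitive exact string match.
exact = df[df["cell_type"].str.lower() == "kc"]

# exact=False: substring match instead of exact equality.
substr = df[df["cell_type"].str.contains("BO", case=False)]

# List-containing column, match_all=False: keep rows where at least one
# filter value appears in the cell's list.
wanted = {"proofread"}
any_match = df[df["status"].apply(lambda cell: bool(wanted & set(cell)))]

print(len(exact), len(substr), len(any_match))
```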
- crantpy.generate_cave_token(save=False)[source]#
Generates a token for the CAVE client. If save is True, the token will be saved (overwriting any existing token).
- Parameters:
save (bool, default False) – Whether to save the token after generation.
- Return type:
None
- crantpy.get_adjacency(pre_ids=None, post_ids=None, threshold=1, min_size=None, materialization='latest', symmetric=False, clean=True, update_ids=True, dataset=None)[source]#
Construct an adjacency matrix from synaptic connections between neurons.
This function queries the synapses table to get connections between specified pre- and post-synaptic neurons, then constructs an adjacency matrix showing the number of synapses between each pair.
- Parameters:
pre_ids (int, str, list, NeuronCriteria, optional) – Pre-synaptic neuron root IDs or criteria. If None, all pre-synaptic neurons in the dataset will be included.
post_ids (int, str, list, NeuronCriteria, optional) – Post-synaptic neuron root IDs or criteria. If None, all post-synaptic neurons in the dataset will be included.
threshold (int, default 1) – Minimum number of synapses required between a pair to be included in the adjacency matrix.
min_size (int, optional) – Minimum size for filtering synapses before constructing adjacency matrix.
materialization (str, default 'latest') – Materialization version to use. 'latest' (default) or 'live' for live table.
symmetric (bool, default False) – If True, return a symmetric adjacency matrix with the same set of IDs on both rows and columns. The neuron set includes all neurons that appear in the filtered synapses data (union of all pre- and post-synaptic neurons). This provides a complete view of connectivity among all neurons involved in the queried connections. If False (default), rows represent pre-synaptic neurons and columns represent post-synaptic neurons from the actual synapses data.
clean (bool, default True) – Whether to perform cleanup of the underlying synapse data:
- Remove autapses (self-connections)
- Remove connections involving neuron ID 0 (background)
This parameter is passed to get_synapses().
update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).
dataset (str, optional) – Dataset to use for the query.
- Returns:
An adjacency matrix where each entry [i, j] represents the number of synapses from neuron i (pre-synaptic) to neuron j (post-synaptic). Rows are pre-synaptic neurons, columns are post-synaptic neurons.
- Return type:
pd.DataFrame
Examples
>>> import crantpy as cp
>>> # Get adjacency between specific neurons
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050])

>>> # Get adjacency with minimum threshold
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050], threshold=3)

>>> # Get symmetric adjacency matrix
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050], symmetric=True)

>>> # Get adjacency matrix with autapses included
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050], clean=False)

>>> # Skip ID updates for faster queries (use only if IDs are known to be current)
>>> adj = cp.get_adjacency(pre_ids=[576460752641833774], post_ids=[576460752777916050], update_ids=False)
Notes
This function uses get_synapses() internally to retrieve synaptic connections
If both pre_ids and post_ids are None, this will query all synapses in the dataset
The threshold parameter filters connection pairs, not individual synapses
When symmetric=True, the resulting matrix includes all neurons that appear in the filtered synapses data, ensuring complete connectivity visualization
When symmetric=False, the matrix may be rectangular with different neuron sets for rows (pre-synaptic) and columns (post-synaptic)
When clean=True (default), autapses and background connections are removed
When update_ids=True (default), IDs are automatically updated with efficient caching
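The matrix construction these notes describe can be illustrated in plain pandas: count synapses per (pre, post) pair, apply the pair-level threshold, then pivot. Toy IDs and a toy synapse table are used; this is not the actual implementation:

```python
import pandas as pd

# Toy synapse table: one row per synapse.
syn = pd.DataFrame({
    "pre":  [1, 1, 1, 2, 2, 3],
    "post": [2, 2, 3, 3, 3, 1],
})

# Count synapses per connection and apply the pair-level threshold.
counts = syn.groupby(["pre", "post"]).size().reset_index(name="weight")
counts = counts[counts["weight"] >= 2]

# Pivot into an adjacency matrix (rows: pre-synaptic, columns: post-synaptic).
adj = counts.pivot(index="pre", columns="post", values="weight").fillna(0).astype(int)
print(adj)
```

Note how the threshold removes weak pairs before the pivot, which is why it filters connections rather than individual synapses.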
- crantpy.get_cave_client(dataset=None, clear_cache=False, check_stale=True)[source]#
Returns a CAVE client instance. If a token is already set, it will be used for authentication. Otherwise, a new token will be generated.
- Parameters:
clear_cache (bool, default False) – If True, bypasses the cache and fetches a new client.
check_stale (bool, default True) – If True, checks if the cached client is stale based on materialization and maximum cache duration.
dataset (str, optional) – The dataset to use. If not provided, uses the default dataset.
- Returns:
A CAVE client instance authenticated with the token.
- Return type:
CAVEclient
- Raises:
ValueError – If no token is found after attempting to generate one.
- crantpy.get_cloudvolume(dataset=None, clear_cache=False, check_stale=True, **kwargs)[source]#
Returns a cloudvolume instance.
- crantpy.get_connectivity(neuron_ids, upstream=True, downstream=True, threshold=1, min_size=None, materialization='latest', clean=True, update_ids=True, dataset=None)[source]#
Fetch connectivity information for given neuron(s) in CRANTb.
This function retrieves synaptic connections for the specified neurons, returning a table of connections with pre-synaptic neurons, post-synaptic neurons, and synapse counts.
- Parameters:
neuron_ids (int, str, list, NeuronCriteria) – Neuron root ID(s) to query connectivity for. Can be a single ID, list of IDs, or NeuronCriteria object.
upstream (bool, default True) – Whether to fetch upstream (incoming) connectivity to the query neurons.
downstream (bool, default True) – Whether to fetch downstream (outgoing) connectivity from the query neurons.
threshold (int, default 1) – Minimum number of synapses required between a pair to be included in the results.
min_size (int, optional) – Minimum size for filtering synapses before aggregating connections.
materialization (str, default 'latest') – Materialization version to use. 'latest' (default) or 'live' for live table.
clean (bool, default True) – Whether to perform cleanup of the underlying synapse data:
- Remove autapses (self-connections)
- Remove connections involving neuron ID 0 (background)
This parameter is passed to get_synapses().
update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).
dataset (str, optional) – Dataset to use for the query.
- Returns:
Connectivity table with columns:
- 'pre': pre-synaptic neuron ID
- 'post': post-synaptic neuron ID
- 'weight': number of synapses between the pair
- Return type:
pd.DataFrame
- Raises:
ValueError – If both upstream and downstream are False.
Examples
>>> import crantpy as cp
>>> # Get all connections for a neuron
>>> conn = cp.get_connectivity(576460752641833774)

>>> # Get only downstream connections with threshold
>>> conn = cp.get_connectivity(576460752641833774, upstream=False, threshold=3)

>>> # Get connectivity for multiple neurons
>>> conn = cp.get_connectivity([576460752641833774, 576460752777916050])

>>> # Skip ID updates for faster queries (use only if IDs are known to be current)
>>> conn = cp.get_connectivity(576460752641833774, update_ids=False)
Notes
This function uses get_synapses() internally to retrieve synaptic connections
Results are aggregated by pre-post neuron pairs and sorted by synapse count
When clean=True, autapses and background connections are removed
When update_ids=True (default), IDs are automatically updated with efficient caching
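The aggregation into a pre/post/weight edge list can be sketched in plain pandas (toy data, illustrating the documented output columns; not the actual implementation):

```python
import pandas as pd

query_ids = {10}

# Toy synapse table: one row per synapse.
syn = pd.DataFrame({
    "pre":  [10, 10, 10, 20, 30],
    "post": [20, 20, 30, 10, 10],
})

# Keep synapses touching the query neurons (upstream=True and downstream=True),
# then aggregate by pre/post pair and sort by synapse count.
mask = syn["pre"].isin(query_ids) | syn["post"].isin(query_ids)
conn = (syn[mask]
        .groupby(["pre", "post"]).size().reset_index(name="weight")
        .sort_values("weight", ascending=False, ignore_index=True))
print(conn)
```

Setting upstream=False would drop the `post`-side mask; the threshold parameter would then filter rows of `conn` by weight.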
- crantpy.get_current_cave_token()[source]#
Retrieves the current token from the CAVE client.
- Returns:
The current CAVE token.
- Return type:
str
- Raises:
ValueError – If no token is found.
- crantpy.get_dataset_segmentation_source(dataset)[source]#
Get segmentation source for given dataset.
- crantpy.get_datastack_segmentation_source(datastack)[source]#
Get segmentation source for given CAVE datastack.
- Return type:
str
- crantpy.get_skeletons(root_ids, dataset='latest', progress=True, omit_failures=None, max_threads=6, **kwargs)[source]#
Fetch skeletons for multiple neurons.
Tries to get precomputed skeletons first, then falls back to on-demand skeletonization if needed. If more than one root ID is given, the parallel skeletonization function is used.
- Parameters:
root_ids (list of int or np.ndarray) – Root IDs of neurons to fetch skeletons for.
dataset (str, default 'latest') – Dataset to query against.
progress (bool, default True) – Show progress during fetching.
omit_failures (bool, optional) – None: raise exception on failures. True: skip failed neurons. False: return empty TreeNeuron for failed cases.
max_threads (int, default 6) – Number of parallel threads for fetching skeletons.
**kwargs – Additional arguments passed to skeletonization if needed.
- Returns:
List of successfully fetched/generated skeletons.
- Return type:
navis.NeuronList
- crantpy.get_soma_from_annotations(root_id, client, dataset=None)[source]#
Try to get soma location from nucleus annotations.
- Parameters:
root_id (int) – Root ID of the neuron.
client (CAVEclient) – CAVE client instance used to query the nucleus annotations.
dataset (str, optional) – Dataset to use.
- Returns:
(x, y, z) coordinates of the soma in nanometers, or None if not found.
- Return type:
tuple or None
- crantpy.get_synapse_counts(neuron_ids, threshold=1, min_size=None, materialization='latest', clean=True, update_ids=True, dataset=None)[source]#
Get synapse counts (pre and post) for given neuron IDs in CRANTb.
This function returns the total number of presynaptic and postsynaptic connections for each specified neuron, aggregated across all their partners.
- Parameters:
neuron_ids (int, str, list, NeuronCriteria) – Neuron root ID(s) to get synapse counts for. Can be a single ID, list of IDs, or NeuronCriteria object.
threshold (int, default 1) – Minimum number of synapses required between a pair to be counted towards the total. Pairs with fewer synapses are excluded.
min_size (int, optional) – Minimum size for filtering individual synapses before counting.
materialization (str, default 'latest') – Materialization version to use. 'latest' (default) or 'live' for live table.
clean (bool, default True) – Whether to perform cleanup of the underlying synapse data:
- Remove autapses (self-connections)
- Remove connections involving neuron ID 0 (background)
This parameter is passed to get_connectivity().
update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).
dataset (str, optional) – Dataset to use for the query.
- Returns:
DataFrame with columns:
- index: neuron IDs
- 'pre': number of presynaptic connections (outgoing)
- 'post': number of postsynaptic connections (incoming)
- Return type:
pd.DataFrame
Examples
>>> import crantpy as cp
>>> # Get synapse counts for a single neuron
>>> counts = cp.get_synapse_counts(576460752641833774)

>>> # Get counts for multiple neurons with threshold
>>> counts = cp.get_synapse_counts([576460752641833774, 576460752777916050], threshold=3)

>>> # Skip ID updates for faster queries (use only if IDs are known to be current)
>>> counts = cp.get_synapse_counts(576460752641833774, update_ids=False)
Notes
This function uses get_connectivity() internally to get connection data
Counts represent the number of distinct synaptic partners, not individual synapses
The threshold is applied at the connection level (pairs of neurons)
When update_ids=True (default), IDs are automatically updated with efficient caching
- crantpy.get_synapses(pre_ids=None, post_ids=None, threshold=1, min_size=None, materialization='latest', return_pixels=True, clean=True, update_ids=True, dataset=None)[source]#
Fetch synapses for a given set of pre- and/or post-synaptic neuron IDs in CRANTb.
- Parameters:
pre_ids (int, str, list of int/str, NeuronCriteria, optional) – Pre-synaptic neuron root ID(s) to include. Can be a single ID, list of IDs, or NeuronCriteria object.
post_ids (int, str, list of int/str, NeuronCriteria, optional) – Post-synaptic neuron root ID(s) to include. Can be a single ID, list of IDs, or NeuronCriteria object.
threshold (int, default 1) – Minimum number of synapses required for a partner to be retained. Currently we don't know what a good threshold is.
min_size (int, optional) – Minimum size for filtering synapses. Currently we don't know what a good size is.
materialization (str, default 'latest') – Materialization version to use. 'latest' (default) or 'live' for live table.
return_pixels (bool, default True) – Whether to convert coordinate columns from nanometers to pixels. If True (default), coordinates in ctr_pt_position, pre_pt_position, and post_pt_position are converted using dataset scale factors. If False, coordinates remain in nanometer units.
clean (bool, default True) – Whether to perform cleanup of the synapse data:
- Remove autapses (self-connections)
- Remove connections involving neuron ID 0 (background)
update_ids (bool, default True) – Whether to automatically update outdated root IDs to their latest versions before querying. This ensures accurate results even after segmentation edits. Uses efficient per-ID caching to minimize overhead for repeated queries. Set to False only if you're certain all IDs are current (faster but risky).
dataset (str, optional) – Dataset to use for the query.
- Returns:
DataFrame of synaptic connections.
- Return type:
pd.DataFrame
- Raises:
ValueError – If neither pre_ids nor post_ids is provided.
Notes
- When update_ids=True (default), outdated root IDs are automatically updated, using supervoxel IDs from annotations when available for fast, reliable updates.
- ID updates are cached per ID, so repeated queries with overlapping IDs are efficient.
- Updated IDs are used for the query, but the original IDs are not modified in place.
See also
update_ids – Manually update root IDs to their latest versions.
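The clean=True behavior described above can be sketched in pandas. The column names follow common CAVE synapse-table conventions but are assumptions here, not the verified output schema:

```python
import pandas as pd

# Hypothetical synapse table rows (one row per synapse).
syn = pd.DataFrame({
    "pre_pt_root_id": [1, 2, 0, 3],
    "post_pt_root_id": [2, 2, 4, 0],
})

# clean=True removes autapses (pre == post)...
syn = syn[syn["pre_pt_root_id"] != syn["post_pt_root_id"]]

# ...and connections involving the background ID 0.
syn = syn[(syn["pre_pt_root_id"] != 0) & (syn["post_pt_root_id"] != 0)]
```

Of the four rows above, only the 1→2 connection survives: row two is an autapse and the last two rows touch segment 0.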
- crantpy.inject_dataset(allowed=None, disallowed=None, param_name='dataset')[source]#
Inject current default dataset.
- Parameters:
allowed (list of str, optional) – Datasets allowed for the decorated function.
disallowed (list of str, optional) – Datasets disallowed for the decorated function.
param_name (str, default 'dataset') – Name of the keyword argument to inject the dataset into.
- Returns:
Decorator function that injects the dataset.
- Return type:
Callable
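The decorator pattern behind inject_dataset() can be sketched as follows. This is a simplified stand-in (it only handles keyword calls, and DEFAULT_DATASET is a hypothetical placeholder for the package-level default), not the actual implementation:

```python
import functools

DEFAULT_DATASET = "latest"  # stand-in for the package-level default dataset

def inject_dataset(param_name="dataset"):
    """Sketch: fill in the default dataset when the caller omits it."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Only keyword usage is handled in this simplified sketch.
            if kwargs.get(param_name) is None:
                kwargs[param_name] = DEFAULT_DATASET
            return func(*args, **kwargs)
        return wrapper
    return decorator

@inject_dataset()
def query(dataset=None):
    return dataset
```

With this in place, `query()` returns the injected default while `query(dataset="crantb")` passes through untouched.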
- crantpy.is_latest_roots(x, timestamp=None, dataset=None, progress=True, batch_size=100000, validate_ids=True, use_http_session=True)[source]#
Check if the given root IDs are the latest based on the timestamp.
- Parameters:
x (IDs = str | int | np.int64) – The root IDs to check.
timestamp (Timestamp = str | int | np.int64 | datetime | np.datetime64 | pd.Timestamp) – The timestamp to compare against. Can also be 'mat' for the latest materialization timestamp.
dataset (str, optional) – The dataset to use.
progress (bool, default True) – Whether to show a progress bar for large batches.
batch_size (int, default 100_000) – Batch size for processing large numbers of IDs.
validate_ids (bool, default True) – Whether to validate root IDs before processing.
use_http_session (bool, default True) – Whether to use a direct HTTP session for better performance.
- Returns:
A boolean array indicating whether each root ID is the latest.
- Return type:
np.ndarray
Examples
>>> from crantpy.utils.cave.helpers import is_latest_roots
>>> is_latest_roots([123456789, 987654321])
array([ True, False])
>>> # Check against latest materialization
>>> is_latest_roots([123456789], timestamp="mat")
array([ True])
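The batch_size parameter suggests large ID lists are processed in slices. A minimal sketch of that batching step (the helper name is hypothetical):

```python
import numpy as np

def batched(ids, batch_size):
    """Yield successive slices of at most batch_size IDs."""
    ids = np.asarray(ids)
    for start in range(0, len(ids), batch_size):
        yield ids[start:start + batch_size]

# 250 IDs with batch_size=100 -> three batches of 100, 100, and 50.
chunks = list(batched(range(250), batch_size=100))
```

Each chunk would then be checked against the chunkedgraph in one request, keeping memory and request sizes bounded.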
- crantpy.is_valid_root(x, dataset=None, raise_exc=False)[source]#
Check if ID is (potentially) valid root ID.
- Parameters:
x (str | int, or iterable thereof) – The root ID(s) to check.
dataset (str, optional) – The dataset to use.
raise_exc (bool, default False) – Whether to raise an exception if invalid IDs are found.
- Returns:
A boolean array indicating whether each root ID is valid.
- Return type:
np.ndarray
- Raises:
ValueError – If raise_exc is True and invalid IDs are found.
- crantpy.is_valid_supervoxel(x, dataset=None, raise_exc=False)[source]#
Check if ID is (potentially) valid supervoxel ID.
- Parameters:
x (str | int, or iterable thereof) – The supervoxel ID(s) to check.
dataset (str, optional) – The dataset to use.
raise_exc (bool, default False) – Whether to raise an exception if invalid IDs are found.
- Returns:
If x is a single ID, returns bool. If x is iterable, returns array.
- Return type:
bool or np.ndarray
- Raises:
ValueError – If raise_exc is True and invalid IDs are found.
See also
is_valid_root – Use this function to check if a root ID is valid.
- crantpy.make_iterable(x, force_type=None)[source]#
Convert input to a numpy array.
- Parameters:
x (Any) – The input to convert.
force_type (type, optional) – If specified, the input will be cast to this type.
- Returns:
The converted numpy array.
- Return type:
np.ndarray
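The behavior described above can be sketched in a few lines. This is an illustrative stand-in, not the package's actual implementation:

```python
import numpy as np

def make_iterable(x, force_type=None):
    """Sketch: wrap scalars (including strings) so they become 1-D arrays."""
    # Strings are scalars here: "abc" -> array(["abc"]), not array(["a", "b", "c"]).
    if not isinstance(x, (list, tuple, set, np.ndarray)):
        x = [x]
    arr = np.asarray(list(x))
    if force_type is not None:
        arr = arr.astype(force_type)
    return arr
```

Treating strings as scalars matters for root IDs, which are often passed as single numeric strings.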
- crantpy.map_position_to_node(neuron, position, return_distance=False)[source]#
Map a spatial position to the nearest node in a skeleton.
This utility function finds the closest node in a skeleton to a given position using a KDTree for efficient spatial lookup. Useful for soma detection, synapse attachment, and other spatial queries.
- Parameters:
neuron (navis.TreeNeuron) – The skeleton neuron to search.
position (list or np.ndarray) – The [x, y, z] coordinates to map. Should be in the same coordinate system as the neuron (typically nanometers).
return_distance (bool, optional) – If True, also return the Euclidean distance to the nearest node. Default is False.
- Returns:
node_id (int) – The node ID of the nearest node.
distance (float, optional) – The Euclidean distance to the nearest node in nanometers. Only returned if return_distance=True.
- Return type:
int or tuple of (int, float)
Examples
>>> import crantpy as cp
>>> import numpy as np
>>> skel = cp.get_l2_skeleton(576460752664524086)
>>> # Map a position to nearest node
>>> node_id = cp.map_position_to_node(skel, [240000, 85000, 96000])
>>> print(f"Nearest node: {node_id}")
>>> # Get distance too
>>> node_id, dist = cp.map_position_to_node(skel, [240000, 85000, 96000], return_distance=True)
>>> print(f"Nearest node: {node_id} at distance {dist:.2f} nm")
See also
reroot_at_soma – Reroot a skeleton at its soma location.
detect_soma – Detect the soma location in a neuron.
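The nearest-node lookup described above (the docstring mentions a KDTree) reduces to an argmin over Euclidean distances. A brute-force numpy stand-in, with hypothetical node coordinates:

```python
import numpy as np

def nearest_node(node_coords, node_ids, position):
    """Brute-force stand-in for the KDTree lookup described above."""
    diffs = node_coords - np.asarray(position, dtype=float)
    dists = np.linalg.norm(diffs, axis=1)
    i = int(np.argmin(dists))
    return node_ids[i], float(dists[i])

# Three hypothetical skeleton nodes and a query position.
coords = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
ids = [101, 102, 103]
node_id, dist = nearest_node(coords, ids, [9.0, 1.0, 0.0])
```

A KDTree gives the same answer in O(log n) per query instead of O(n), which is why the real implementation prefers it for large skeletons.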
- crantpy.match_dtype(value, dtype)[source]#
Match the dtype of a value to a given dtype.
- Parameters:
value (Any) – The value to convert.
dtype (np.dtype or type) – The dtype to match.
- Returns:
The converted value.
- Return type:
Any
- Raises:
ValueError – If the dtype is not supported.
- crantpy.neurons_to_url(neurons, include_skeleton=True, downsample=None, **kwargs)[source]#
Create neuroglancer URLs for a list of neurons.
- Parameters:
neurons (NeuronList) – List of neurons to create URLs for. Must have a root_id attribute.
include_skeleton (bool, default True) – Whether to include the skeleton in the URL.
downsample (int, optional) – Factor by which to downsample skeletons before adding them to the scene.
**kwargs – Additional arguments passed to encode_url().
- Returns:
DataFrame with columns: id, name, url
- Return type:
DataFrame
Examples
>>> neurons = crt.viz.get_skeletons([720575940621039145, 720575940621039146])
>>> urls = neurons_to_url(neurons)
>>> print(urls[['id', 'url']])
- crantpy.parse_neuroncriteria(allow_empty=False)[source]#
Parse all NeuronCriteria arguments into an array of root IDs.
This decorator automatically converts any NeuronCriteria objects in function arguments to arrays of root IDs. It uses type checking by class name to avoid circular imports.
- Parameters:
allow_empty (bool, default False) – Whether to allow the NeuronCriteria to match no neurons.
- Returns:
Decorator function that processes NeuronCriteria arguments.
- Return type:
Callable
Examples
>>> @parse_neuroncriteria()
... def process_neurons(neurons):
...     # neurons will be an array of root IDs
...     return neurons
>>>
>>> # Can be called with a NeuronCriteria object
>>> result = process_neurons(NeuronCriteria(cell_class='example'))
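The "type checking by class name to avoid circular imports" technique can be sketched as below. The NeuronCriteria stand-in and its root_ids attribute are simplified assumptions; the real class resolves its criteria via the annotation tables:

```python
import functools

class NeuronCriteria:  # minimal stand-in for the real class
    def __init__(self, root_ids):
        self.root_ids = root_ids

def parse_neuroncriteria():
    """Sketch: replace NeuronCriteria args with their root IDs.

    Matching on type(a).__name__ rather than isinstance() means the
    decorator module never has to import NeuronCriteria itself.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            args = [
                a.root_ids if type(a).__name__ == "NeuronCriteria" else a
                for a in args
            ]
            return func(*args, **kwargs)
        return wrapper
    return decorator

@parse_neuroncriteria()
def process_neurons(neurons):
    return neurons
```

Plain ID lists pass through unchanged, while NeuronCriteria objects are unwrapped before the decorated function runs.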
- crantpy.parse_root_ids(neurons)[source]#
Parse various neuron input types to a list of root ID strings.
- Parameters:
neurons (int, str, list of int/str, or NeuronCriteria) – The neuron(s) to parse. Can be a single root ID (int or str), a list of root IDs, or a NeuronCriteria object.
- crantpy.parse_timestamp(x)[source]#
Parse a timestamp string to Unix timestamp.
- Parameters:
x (Timestamp) – The timestamp to parse. An int must be a Unix timestamp. A string must be ISO 8601, e.g. '2021-11-15'. datetime, np.datetime64, and pd.Timestamp are also accepted.
- Returns:
The Unix timestamp.
- Return type:
int
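The core of the conversion can be sketched with the standard library (a simplified stand-in handling only ints and ISO 8601 strings, and assuming naive timestamps are UTC):

```python
from datetime import datetime, timezone

def parse_timestamp(x):
    """Sketch: ints pass through; ISO 8601 strings are parsed as UTC."""
    if isinstance(x, int):
        return x
    dt = datetime.fromisoformat(x)
    if dt.tzinfo is None:
        # Assumption: naive timestamps are interpreted as UTC.
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp())
```

The real function additionally accepts datetime, np.datetime64, and pd.Timestamp inputs, which this sketch omits.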
- crantpy.plot_em_image(x, y, z, size=1000)[source]#
Fetch and return an EM image slice from the precomputed CloudVolume. Currently only supports slices through the Z axis (i.e. XY plane).
- Parameters:
- Returns:
The EM image slice as a numpy array.
- Return type:
np.ndarray
- crantpy.reroot_at_soma(neurons, soma_coords=None, detect_soma_kwargs=None, inplace=True, progress=False)[source]#
Reroot skeleton(s) at their soma location.
This convenience function combines soma detection and rerooting. If soma coordinates are not provided, they will be automatically detected using detect_soma(). The skeleton is then rerooted at the node nearest to the soma location.
- Parameters:
neurons (TreeNeuron | NeuronList) – Single neuron or list of neurons to reroot.
soma_coords (np.ndarray or list of np.ndarray, optional) – Soma coordinates in pixels [x, y, z]. If not provided, the soma will be automatically detected using detect_soma(). For multiple neurons, provide a list of coordinates in the same order as the neurons.
detect_soma_kwargs (dict, optional) – Additional keyword arguments passed to detect_soma() if soma coordinates are not provided.
inplace (bool, optional) – If True, reroot neurons in place. If False, return rerooted copies. Default is True.
progress (bool, optional) – If True, show a progress bar when processing multiple neurons or detecting somas. Default is False.
- Returns:
Rerooted neuron(s). Same as input if inplace=True, otherwise copies.
- Return type:
TreeNeuron | NeuronList
Examples
>>> import crantpy as cp
>>> # Get skeleton
>>> skel = cp.get_l2_skeleton(576460752664524086)
>>> # Reroot at automatically detected soma
>>> skel_rerooted = cp.reroot_at_soma(skel)
>>> print(f"Root node: {skel_rerooted.root}")
>>> # Reroot with provided soma coordinates
>>> soma = [28000, 9000, 2200]  # in pixels
>>> skel_rerooted = cp.reroot_at_soma(skel, soma_coords=soma)
>>> # Process multiple neurons
>>> skels = cp.get_l2_skeleton([576460752664524086, 576460752590602315])
>>> skels_rerooted = cp.reroot_at_soma(skels, progress=True)
See also
map_position_to_node – Map a position to the nearest node.
detect_soma – Detect the soma location in a neuron.
- crantpy.retry(func, retries=5, cooldown=2)[source]#
Retry function on HTTPError.
This also suppresses UserWarnings (commonly raised by l2 cache requests).
- crantpy.retry_func(retries=5, cooldown=2)[source]#
Retry decorator for functions on HTTPError. This also suppresses UserWarnings (commonly raised by l2 cache requests).
- Parameters:
retries (int) – Number of retries before giving up. Every subsequent retry delays by an additional cooldown period.
cooldown (int | float) – Cooldown period in seconds between attempts.
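The retry-with-growing-delay pattern can be sketched as follows. This is a simplified stand-in that catches any exception; the real decorator targets HTTPError and also suppresses UserWarnings:

```python
import functools
import time

def retry_func(retries=5, cooldown=2):
    """Sketch: retry on failure, sleeping longer after each attempt."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == retries - 1:
                        raise  # out of retries: re-raise the last error
                    # Each subsequent retry waits an additional cooldown.
                    time.sleep(cooldown * (attempt + 1))
        return wrapper
    return decorator

# Usage: a function that fails twice before succeeding.
calls = {"n": 0}

@retry_func(retries=3, cooldown=0)
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient failure")
    return "ok"
```

With retries=3, flaky() is attempted three times and succeeds on the final attempt.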
- crantpy.scene_to_url(scene, base_neuroglancer=False, shorten=False, open=False, to_clipboard=False)[source]#
Convert neuroglancer scene dictionary to URL.
- Parameters:
scene (dict) – Neuroglancer scene dictionary.
base_neuroglancer (bool, default False) – Whether to use base neuroglancer instead of CAVE Spelunker.
shorten (bool, default False) – Whether to create a shortened URL (requires a state server).
open (bool, default False) – If True, opens the URL in a web browser.
to_clipboard (bool, default False) – If True, copies the URL to the clipboard.
- Returns:
Neuroglancer URL.
- Return type:
str
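The basic scene-to-URL encoding used by neuroglancer viewers is URL-quoted JSON appended after "#!". A minimal sketch (the base URL here is an illustrative placeholder, not necessarily the viewer crantpy uses):

```python
import json
from urllib.parse import quote

def scene_to_url(scene, base="https://neuroglancer-demo.appspot.com"):
    """Sketch: encode the scene dict as URL-quoted JSON after '#!'."""
    return base + "/#!" + quote(json.dumps(scene, separators=(",", ":")))

url = scene_to_url({"layers": []})
```

Shortened URLs instead POST the JSON state to a state server and embed only the returned state ID, which is why shorten=True requires one.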
- crantpy.set_cave_token(token)[source]#
Sets the CAVE token for the CAVE client.
- Parameters:
token (str) – The CAVE token to set.
- Return type:
None
- crantpy.set_logging_level(level)[source]#
Sets the logging level for the logger.
- Parameters:
level (str) – The logging level to set. Options are 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'.
- Return type:
None
- crantpy.skeletonize_neuron(client, root_id, shave_skeleton=True, remove_soma_hairball=False, assert_id_match=False, threads=2, save_to=None, progress=True, use_pcg_skel=False, **kwargs)[source]#
Skeletonize a neuron. This is the main skeletonization function.
- Parameters:
client (CAVEclient) – CAVE client for data access.
root_id (int) – Root ID of the neuron to skeletonize.
shave_skeleton (bool, default True) – Remove small protrusions and bristles from the skeleton.
remove_soma_hairball (bool, default False) – Remove the hairball mesh from the soma.
assert_id_match (bool, default False) – Verify that skeleton nodes map to the correct segment ID.
threads (int, default 2) – Number of parallel threads for mesh processing.
save_to (str, optional) – Save the skeleton as an SWC file to this path.
progress (bool, default True) – Show progress bars during processing.
use_pcg_skel (bool, default False) – Try pcg_skel (CAVE-client skeletonization) before falling back to skeletor.
**kwargs – Additional arguments for skeletonization algorithms.
- Returns:
The skeletonized neuron.
- Return type:
navis.TreeNeuron | navis.NeuronList
Notes
TODOs (carried over from fafbseg):
- Use synapse locations as constraints
- Mesh preprocessing options
- Chunked skeletonization for large meshes
- Use soma annotations from external sources
- Better error handling/logging
- Allow user-supplied soma location/radius
- Option to return intermediate results
- Support more skeletonization algorithms
- Merge disconnected skeletons
- Custom node/edge attributes
- crantpy.skeletonize_neurons_parallel(client, root_ids, n_cores=None, progress=True, color_map=None, **kwargs)[source]#
Skeletonize multiple neurons in parallel.
- Parameters:
client (CAVEclient) – CAVE client for data access.
root_ids (list of int or np.ndarray) – Root IDs of neurons to skeletonize.
n_cores (int, optional) – Number of cores to use. If None, uses half of the available cores.
progress (bool, default True) – Show progress bars during processing.
color_map (str, optional) – Generate colors for each neuron using this colormap. Returns a tuple of (neurons, colors) instead of just neurons.
**kwargs – Additional arguments passed to skeletonize_neuron.
- Returns:
NeuronList of skeletonized neurons, or tuple of (NeuronList, colors) if color_map is specified.
- Return type:
navis.NeuronList or tuple