Deep Dive: From CRANTpy to Neuroglancer#

This comprehensive tutorial will guide you through creating, manipulating, and sharing neuroglancer visualizations for CRANT neurons. We’ll cover:

  1. Basic scene creation and URL generation

  2. Adding and coloring neurons

  3. Working with annotations (points, lines, ellipsoids)

  4. Adding neuron skeletons

  5. Grouping and organizing neurons

  6. Scene customization (layouts, layers, coordinates)

  7. URL shortening and sharing

  8. Integration with NeuronCriteria queries

  9. Advanced connectivity visualization

  10. Tips and best practices

# Import CRANTpy and other necessary libraries
import crantpy as cp
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import IPython

# Import neuroglancer utilities
from crantpy.utils import neuroglancer as ngl

# Set up logging to see progress
cp.set_logging_level("WARNING")

print("CRANTpy loaded successfully!")
print(f"Default dataset: {cp.CRANT_DEFAULT_DATASET}")
CRANTpy loaded successfully!
Default dataset: latest

Part 1: Basic URL Creation#

Let’s start with the simplest case: creating a neuroglancer URL to visualize some neurons.

# Example neuron IDs (these may be outdated root IDs,
# so we resolve them to the latest IDs below)
example_neurons = [576460752773799604, 576460752722405178]

# update to the latest neuron IDs
example_neurons = cp.update_ids(example_neurons)['new_id'].values
2025-10-07 07:38:24 - WARNING - Multiple supervoxel IDs found for 129 root IDs. Using first occurrence for each.
# Create a simple neuroglancer URL
url = ngl.encode_url(segments=example_neurons)

print("Generated neuroglancer URL:")
# Print as a clickable IPython link
url_view = IPython.display.HTML(f'<a href="{url}" target="_blank">Open Neuroglancer View</a>')
display(url_view)
print(f"URL length: {len(url)} characters")
Generated neuroglancer URL:
URL length: 2053 characters

Understanding the URL Structure#

The URL contains:

  • Base URL: Either Spelunker (default) or base neuroglancer

  • Scene JSON: Encoded scene description including layers, segments, settings

  • Layers: EM image, segmentation, brain mesh, etc.

Let’s decode this URL to see what’s inside:

# Decode the URL to see its contents
info = ngl.decode_url(url, format='brief')

print("Position (voxel coordinates):", info['position'])
print("Selected segments:", info['selected'])
print("Number of annotations:", len(info['annotations']))

# Get full scene as JSON
scene = ngl.decode_url(url, format='json')
print(f"\nScene has {len(scene['layers'])} layers:")
for i, layer in enumerate(scene['layers']):
    print(f"  {i+1}. {layer['name']} ({layer['type']})")
Position (voxel coordinates): [25148.333984375, 19334.7421875, 1531.5]
Selected segments: ['576460752773799604', '576460752722405178', '1']
Number of annotations: 0

Scene has 3 layers:
  1. aligned (image)
  2. proofreadable seg (segmentation)
  3. brain mesh (segmentation)
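
Under the hood, everything after the #! delimiter is just percent-encoded JSON, so you can also peek at a scene without decode_url. A minimal sketch (this works for full URLs only; shortened URLs instead point at a state server):

import json
import urllib.parse

# The fragment after '#!' holds the percent-encoded scene JSON
fragment = url.split('#!', 1)[1]
raw_scene = json.loads(urllib.parse.unquote(fragment))
print("Top-level keys:", list(raw_scene.keys()))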

Part 2: Scene Construction#

You can build custom scenes with specific layer combinations. This is useful for creating visualization-only or proofreading scenes.

# Minimal visualization scene (default)
viz_scene = ngl.construct_scene(
    image=True,           # Aligned EM
    segmentation=True,    # Proofreadable segmentation
    brain_mesh=True       # Brain outline
)

print("Visualization scene layers:")
for layer in viz_scene['layers']:
    print(f"  - {layer['name']}")

# Full proofreading scene with all layers
proof_scene = ngl.construct_scene(
    image=True,
    segmentation=True,
    brain_mesh=True,
    merge_biased_seg=True,  # Alternative segmentation
    nuclei=True             # Nuclear segmentation
)

print(f"\nProofreading scene layers:")
for layer in proof_scene['layers']:
    visible = layer.get('visible', True)
    print(f"  - {layer['name']} {'(visible)' if visible else '(hidden)'}")

# Segmentation-only scene
seg_only_scene = ngl.construct_scene(
    image=False,
    segmentation=True,
    brain_mesh=False
)

print(f"\nSegmentation-only scene has {len(seg_only_scene['layers'])} layer(s)")
Visualization scene layers:
  - aligned
  - proofreadable seg
  - brain mesh

Proofreading scene layers:
  - aligned (visible)
  - proofreadable seg (visible)
  - brain mesh (visible)
  - merge-biased seg (hidden)
  - nuclei (hidden)

Segmentation-only scene has 1 layer(s)

Converting scenes to URLs#

We can convert our scenes to shareable URLs using the scene_to_url function.

# get the url for each scene and present it as a clickable link
for scene, name in zip([viz_scene, proof_scene, seg_only_scene],
                       ["Visualization", "Proofreading", "Segmentation-only"]):
    # use scene_to_url to get the url
    scene_url = ngl.scene_to_url(scene)
    scene_url_view = IPython.display.HTML(f'<a href="{scene_url}" target="_blank">{name} Scene URL</a>')
    display(scene_url_view)

Part 3: Coloring Neurons#

Neuroglancer supports various color schemes for visualizing neurons. Let’s explore the options:

# Sample neurons for coloring demos
demo_neurons = [576460752722405178, 576460752773799604, 576460752681552812, 
                576460752679088143, 576460752679088399]
# Update to latest IDs
demo_neurons = cp.update_ids(demo_neurons)['new_id'].values

# 1. Single color for all neurons
url1 = ngl.encode_url(
    segments=demo_neurons,
    seg_colors='red'
)
if url1:
    url1_view = IPython.display.HTML(f'<a href="{url1}" target="_blank">Single Color URL</a>')
    display(url1_view)

# 2. Dictionary mapping neurons to colors
color_dict = {
    demo_neurons[0]: 'red',
    demo_neurons[1]: 'blue',
    demo_neurons[2]: 'green'
}
url2 = ngl.encode_url(
    segments=demo_neurons[:3],
    seg_colors=color_dict
)
if url2:
    url2_view = IPython.display.HTML(f'<a href="{url2}" target="_blank">Dictionary Color URL</a>')
    display(url2_view)


# 3. List of colors (matches order of segments)
color_list = ['#FF0000', '#00FF00', '#0000FF', '#FFFF00', '#FF00FF']
url3 = ngl.encode_url(
    segments=demo_neurons,
    seg_colors=color_list
)
if url3:
    url3_view = IPython.display.HTML(f'<a href="{url3}" target="_blank">List Color URL</a>')
    display(url3_view)

# 4. RGB tuples
url4 = ngl.encode_url(
    segments=demo_neurons[:2],
    seg_colors=(1.0, 0.5, 0.0)  # Orange
)
if url4:
    url4_view = IPython.display.HTML(f'<a href="{url4}" target="_blank">RGB Tuple Color URL</a>')
    display(url4_view)

# 5. Automatic color palette from labels
# Useful when you have categorical data (e.g., cell types)
labels = np.array([0, 0, 0, 1, 1])  # Group labels
url5 = ngl.encode_url(
    segments=demo_neurons,
    seg_colors=labels  # Automatically generates colors
)
if url5:
    url5_view = IPython.display.HTML(f'<a href="{url5}" target="_blank">Label-based Color URL</a>')
    display(url5_view)
2025-10-07 07:38:25 - WARNING - Multiple supervoxel IDs found for 129 root IDs. Using first occurrence for each.

Color Palette Selection#

When using label-based coloring, the system automatically selects appropriate palettes:

  • ≤10 unique labels: Uses tab10 palette

  • 11-20 unique labels: Uses tab20 palette

  • >20 unique labels: Uses shuffled hls palette
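
For reference, these rules can be approximated with seaborn directly. This is a sketch of the selection logic, not CRANTpy's internal implementation; the hex conversion via matplotlib is an assumption:

import matplotlib.colors as mcolors

def pick_palette(n_labels):
    # Mirror the rules above: tab10, then tab20, then shuffled hls
    if n_labels <= 10:
        colors = sns.color_palette('tab10', n_labels)
    elif n_labels <= 20:
        colors = sns.color_palette('tab20', n_labels)
    else:
        colors = sns.color_palette('hls', n_labels)
        np.random.shuffle(colors)
    return [mcolors.to_hex(c) for c in colors]

print(pick_palette(5))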

Let’s demonstrate:

# Generate many neurons with categorical labels
many_neurons = [
    576460752716464275,
    576460752728712007,
    576460752739393783,
    576460752762063694,
    576460752677921294,
    576460752690086420,
    576460752655523147,
    576460752645333811,
    576460752677918734,
    576460752659856798,
    576460752621523276,
    576460752652793881,
    576460752684257937,
    576460752843984962,
    576460752661953002,
    576460752705394516,
    576460752707407425,
    576460752714526325,
    576460752716050638,
    576460752621554764,
    576460752680502120,
    576460752722618617,
    576460752673707308,
    576460752726670303,
    576460752688481327
]
# update to latest IDs
many_neurons = cp.update_ids(many_neurons)['new_id'].values
many_labels = np.random.randint(0, 8, size=25)  # 8 categories

url_many = ngl.encode_url(
    segments=many_neurons,
    seg_colors=many_labels
)

# Decode to see the colors
scene = ngl.decode_url(url_many, format='json')
seg_layer = [l for l in scene['layers'] if 'segmentation' in l.get('type', '')][0]

if 'segmentColors' in seg_layer:
    print(f"Generated {len(seg_layer['segmentColors'])} segment colors")
    print("First 5 colors:")
    for seg_id, color in list(seg_layer['segmentColors'].items())[:5]:
        # display segment ID and color in HTML
        display(IPython.display.HTML(f'Segment {seg_id}: <span style="color:{color}">{color}</span>'))

# Display the URL as a clickable link
if url_many:
    url_many_view = IPython.display.HTML(f'<a href="{url_many}" target="_blank">Many Neurons with Labels URL</a>')
    display(url_many_view)
2025-10-07 07:38:26 - WARNING - Multiple supervoxel IDs found for 129 root IDs. Using first occurrence for each.
Generated 25 segment colors
First 5 colors:
Segment 576460752716464275: #2ca02c
Segment 576460752728712007: #1f77b4
Segment 576460752739393783: #d62728
Segment 576460752734522875: #ff7f0e
Segment 576460752691493411: #d62728

Part 4: Grouping Neurons into Layers#

You can organize neurons into separate annotation layers for better visualization. This is especially useful for comparing different cell types or regions.

# Method 1: Using a dictionary {group_name: [neuron_ids]}
groups_dict = {
    'Olfactory Projection Neurons': [demo_neurons[0], demo_neurons[1], demo_neurons[2]],
    'Central Complex Ring Neurons': [demo_neurons[3], demo_neurons[4]]
}

url_groups1 = ngl.encode_url(
    segments=demo_neurons,
    seg_groups=groups_dict,
    seg_colors={n: c for n, c in zip(demo_neurons, ['red', 'red', 'red', 'blue', 'blue'])}
)
print("Method 1 creates separate layers per group")
url_view = IPython.display.HTML(f'<a href="{url_groups1}" target="_blank">Grouped Neurons URL</a>')
display(url_view)

scene = ngl.decode_url(url_groups1, format='json')
print("Created layers:")
for layer in scene['layers']:
    if 'segmentation' in layer.get('type', ''):
        segs = layer.get('segments', [])
        visible = layer.get('visible', True)
        print(f"  - {layer['name']}: {len(segs)} segments {'(visible)' if visible else '(hidden)'}")



# Method 2: Using a list of group labels (matches segment order)
group_labels = ['oPNs', 'oPNs', 'oPNs', 'exRs', 'exRs']

url_groups2 = ngl.encode_url(
    segments=demo_neurons,
    seg_groups=group_labels
)

print("\nMethod 2 also creates separate layers per group")
url_view = IPython.display.HTML(f'<a href="{url_groups2}" target="_blank">Grouped Neurons URL (Method 2)</a>')
display(url_view)
Method 1 creates separate layers per group
Created layers:
  - proofreadable seg: 0 segments (visible)
  - brain mesh: 1 segments (visible)
  - Olfactory Projection Neurons: 3 segments (hidden)
  - Central Complex Ring Neurons: 2 segments (hidden)

Method 2 also creates separate layers per group

Part 5: Working with Coordinates#

Neuroglancer uses voxel coordinates (8 nm × 8 nm × 42 nm per voxel). CRANTpy handles coordinate conversions for you.

# Center view on specific coordinates (voxel space)
coords_voxel = [25148, 19334, 1531]

url_centered = ngl.encode_url(
    segments=[demo_neurons[0]],
    coords=coords_voxel
)
url_view = IPython.display.HTML(f'<a href="{url_centered}" target="_blank">Centered View URL</a>')
display(url_view)

# Verify the position
info = ngl.decode_url(url_centered, format='brief')
print(f"Scene centered at: {info['position']}")

# Convert between coordinate systems
from crantpy.utils.config import SCALE_X, SCALE_Y, SCALE_Z

# If you have nanometer coordinates, convert to voxels:
coords_nm = np.array([201186.67, 154672.0, 64302.0])
coords_voxel = coords_nm / [SCALE_X, SCALE_Y, SCALE_Z]
print(f"\nNanometers: {coords_nm}")
print(f"Voxels: {coords_voxel}")

# And back to nanometers:
coords_nm_back = coords_voxel * [SCALE_X, SCALE_Y, SCALE_Z]
print(f"Back to nm: {coords_nm_back}")
Scene centered at: [25148, 19334, 1531]

Nanometers: [201186.67 154672.    64302.  ]
Voxels: [25148.33375 19334.       1531.     ]
Back to nm: [201186.67 154672.    64302.  ]

Part 6: Adding Annotations#

Annotations are markers you can add to highlight specific locations. Points, lines, and ellipsoids are supported.

# Example 1: Point annotations (e.g., synapse locations)
synapse_locations = np.array([
    [25000, 19000, 1500],
    [25100, 19100, 1510],
    [25200, 19200, 1520],
    [25300, 19300, 1530]
])

url_points = ngl.encode_url(
    segments=[demo_neurons[0]],
    annotations=synapse_locations,
)
print("Added point annotations for synapses")
if url_points:
    url_points_view = IPython.display.HTML(f'<a href="{url_points}" target="_blank">Point Annotations URL</a>')
    display(url_points_view)

# Example 2: Multiple annotation layers with names
soma_locations = np.array([[25148, 19334, 1531]])
dendrite_points = np.array([
    [25200, 19400, 1540],
    [25250, 19450, 1550]
])

annotations_dict = {
    'soma': soma_locations,
    'dendrites': dendrite_points,
    'synapses': synapse_locations
}

# Calculate center of all annotations
all_points = np.vstack([soma_locations, dendrite_points, synapse_locations])

url_multi_annotations = ngl.encode_url(
    segments=[demo_neurons[0]],
    annotations=annotations_dict,
    coords=all_points.mean(axis=0),
)

scene = ngl.decode_url(url_multi_annotations, format='json')
ann_layers = [l for l in scene['layers'] if l['type'] == 'annotation']
print(f"\nCreated {len(ann_layers)} annotation layers:")
for layer in ann_layers:
    print(f"  - {layer['name']}: {len(layer['annotations'])} annotations")



if url_multi_annotations:
    url_multi_view = IPython.display.HTML(f'<a href="{url_multi_annotations}" target="_blank">Multi-layer Annotations URL</a>')
    display(url_multi_view)
Added point annotations for synapses
Created 3 annotation layers:
  - soma: 1 annotations
  - dendrites: 2 annotations
  - synapses: 4 annotations

Advanced Annotations: Lines and Ellipsoids#

# Line annotations - useful for showing connections
# Format: (N, 2, 3) array where each row is [start_point, end_point]
connections = np.array([
    [[25000, 19000, 1500], [25100, 19100, 1550]],  # Connection 1
    [[25100, 19100, 1510], [25200, 19200, 1560]],  # Connection 2
    [[25200, 19200, 1520], [25300, 19300, 1570]]   # Connection 3
])

# Calculate center from all line endpoints for better view
all_line_points = connections.reshape(-1, 3)

scene = ngl.construct_scene()
scene = ngl.add_annotation_layer(connections, scene, name="connections")

url_lines = ngl.scene_to_url(scene)
# Re-encode, centering the view on the mean of the line endpoints
url_lines = ngl.encode_url(
    scene=url_lines,
    coords=all_line_points.mean(axis=0),
)
print("Added line annotations showing connections")

# Ellipsoid annotations - useful for highlighting regions
# Format: (N, 4) array where each row is [x, y, z, radius]
regions = np.array([
    [25148, 19334, 1531, 50],   # Soma region
    [25300, 19500, 1550, 30],   # Dendritic field
    [25000, 19000, 1480, 25]    # Axon terminal
])


scene = ngl.construct_scene()
scene = ngl.add_annotation_layer(regions, scene, name="regions_of_interest")

url_ellipsoids = ngl.scene_to_url(scene)
# Re-encode, centering the view on the mean ellipsoid center
url_ellipsoids = ngl.encode_url(
    scene=url_ellipsoids,
    coords=regions[:, :3].mean(axis=0),
)
print("Added ellipsoid annotations for regions of interest")

# You can combine all types in one scene
# Calculate overall center from all annotations
all_annotation_points = np.vstack([
    synapse_locations,
    all_line_points,
    regions[:, :3]
])

scene = ngl.construct_scene()
scene = ngl.add_annotation_layer(synapse_locations, scene, name="synapses")
scene = ngl.add_annotation_layer(connections, scene, name="connections")
scene = ngl.add_annotation_layer(regions, scene, name="regions")

url_combined = ngl.scene_to_url(scene)
# Re-encode, centering the view on the mean of all annotation points
url_combined = ngl.encode_url(
    scene=url_combined,
    coords=all_annotation_points.mean(axis=0),
)
print(f"\nCombined scene has {len([l for l in scene['layers'] if l['type'] == 'annotation'])} annotation layers")

if url_lines:
    url_lines_view = IPython.display.HTML(f'<a href="{url_lines}" target="_blank">Line Annotations URL</a>')
    display(url_lines_view)

if url_ellipsoids:
    url_ellipsoids_view = IPython.display.HTML(f'<a href="{url_ellipsoids}" target="_blank">Ellipsoid Annotations URL</a>')
    display(url_ellipsoids_view)

if url_combined:
    url_combined_view = IPython.display.HTML(f'<a href="{url_combined}" target="_blank">Combined Annotations URL</a>')
    display(url_combined_view)
Added line annotations showing connections
Added ellipsoid annotations for regions of interest

Combined scene has 3 annotation layers

Part 7: Adding Neuron Skeletons#

Skeletons provide a detailed structural view of neurons. CRANTpy can fetch skeletons and add them to your scene.

# Note: skeletonization can take a while, so this demo
# uses only a handful of neurons

# Get skeleton for a neuron
skeleton = cp.viz.get_skeletons([demo_neurons[0]])[0]

# Add skeleton to scene
url_with_skeleton = ngl.encode_url(
    segments=[demo_neurons[0]],
    skeletons=skeleton
)
print("Added skeleton to scene")

# Add multiple skeletons
skeletons = cp.viz.get_skeletons(demo_neurons[:3])
url_multi_skeletons = ngl.encode_url(
    segments=demo_neurons[:3],
    skeletons=skeletons,
    seg_colors=['red', 'blue', 'green']
)
print("Added multiple skeletons with colors")

# Manual skeleton addition to existing scene
scene = ngl.construct_scene()

# Skeletons are in nanometers and automatically converted to voxels
scene = ngl.add_skeleton_layer(skeleton, scene, name="neuron_skeleton")

url = ngl.scene_to_url(scene)
print("Skeleton added as line annotations")

if url_with_skeleton:
    url_skel_view = IPython.display.HTML(f'<a href="{url_with_skeleton}" target="_blank">Single Skeleton URL</a>')
    display(url_skel_view)
Added skeleton to scene
Added multiple skeletons with colors
Skeleton added as line annotations

Batch URL Creation for Multiple Neurons#

For creating URLs for many neurons at once:

# Get multiple skeletons
neurons = cp.viz.get_skeletons(demo_neurons[:5])

# Create individual URLs for each
urls_df = ngl.neurons_to_url(
    neurons,
    include_skeleton=True,
    downsample=5  # Downsample for faster loading
)

# Results in a DataFrame with columns: id, name, url
print(urls_df)

# Save to CSV for sharing
# urls_df.to_csv('neuron_urls.csv', index=False)
                   id  name                                                url
0  576460752722405178  None  https://spelunker.cave-explorer.org/#!%7B%22di...
1  576460752773799604  None  https://spelunker.cave-explorer.org/#!%7B%22di...
2  576460752681552812  None  https://spelunker.cave-explorer.org/#!%7B%22di...
3  576460752679088143  None  https://spelunker.cave-explorer.org/#!%7B%22di...
4  576460752679088399  None  https://spelunker.cave-explorer.org/#!%7B%22di...

Part 8: Layout Options#

Neuroglancer supports different viewing layouts. Choose based on your analysis needs.

# Available layouts:
layouts = {
    '3d': '3D view only (best for structure)',
    'xy-3d': 'XY slice + 3D (default, balanced)',
    'xy': 'XY slice only (best for tracing)',
    '4panel': 'XY, XZ, YZ slices + 3D (comprehensive)'
}

print("Available layouts:")
for layout, description in layouts.items():
    print(f"  {layout:8s} - {description}")

# Create URLs with different layouts
urls_by_layout = {}
for layout in layouts.keys():
    urls_by_layout[layout] = ngl.encode_url(
        segments=[demo_neurons[0]],
        layout=layout
    )
    
# Verify layout is set correctly
for layout in ['3d', '4panel']:
    scene = ngl.decode_url(urls_by_layout[layout], format='json')
    actual_layout = scene['layout']
    if isinstance(actual_layout, dict):
        actual_layout = actual_layout['type']
    print(f"\nRequested: {layout}, Got: {actual_layout}")

print("\nLayout URLs:")
for layout, url in urls_by_layout.items():
    url_view = IPython.display.HTML(f'<a href="{url}" target="_blank">{layout} Layout URL</a>')
    display(url_view)
Available layouts:
  3d       - 3D view only (best for structure)
  xy-3d    - XY slice + 3D (default, balanced)
  xy       - XY slice only (best for tracing)
  4panel   - XY, XZ, YZ slices + 3D (comprehensive)

Requested: 3d, Got: 3d

Requested: 4panel, Got: 4panel

Layout URLs:

Part 9: URL Shortening and Sharing#

Long URLs can be unwieldy. CRANTpy supports URL shortening through a state server, though this feature may not always be available.

Note: The global state server at https://global.daf-apis.com/nglstate is sometimes unavailable or may return errors. If URL shortening fails, CRANTpy will automatically fall back to using the full URL.

# Create a regular (full) URL
full_url = ngl.encode_url(segments=demo_neurons)
print(f"Full URL length: {len(full_url)} characters")
print(f"Full URL: {full_url[:100]}...")

# Create a shortened URL
print("\nAttempting to create shortened URL...")
try:
    short_url = ngl.encode_url(
        segments=demo_neurons,
        shorten=True  # Use state server
    )
    print(f"✓ Shortened URL length: {len(short_url)} characters")
    print(f"✓ Shortened URL: {short_url}")
    
    # Display as clickable link
    short_url_view = IPython.display.HTML(f'<a href="{short_url}" target="_blank">Open Shortened URL</a>')
    display(short_url_view)
    
except Exception as e:
    print(f"✗ URL shortening failed: {e}")
    print(f"  Using full URL instead ({len(full_url)} characters)")
    print("  Note: The state server may be unavailable or misconfigured.")
Full URL length: 2143 characters
Full URL: https://spelunker.cave-explorer.org/#!%7B%22dimensions%22%3A%20%7B%22x%22%3A%20%5B8e-09%2C%20%22m%22...

Attempting to create shortened URL...
/Users/neurorishika/Projects/Rockefeller/Kronauer/crantpy/src/crantpy/utils/neuroglancer.py:563: RuntimeWarning: Primary state server rejected authentication; falling back to CAVE upload.
  url = _shorten_url(scene, state_url)
✓ Shortened URL length: 109 characters
✓ Shortened URL: https://spelunker.cave-explorer.org/#!middleauth+https://proofreading.zetta.ai/nglstate/api/v1/5766460328640512

Other options for URLs#

You can also open URLs directly in your browser.

# Option 1: Open in browser automatically
print("\n1. Open URL in browser:")
ngl.encode_url(segments=demo_neurons, open=True)
1. Open URL in browser:
'https://spelunker.cave-explorer.org/#!%7B%22dimensions%22%3A%20%7B%22x%22%3A%20%5B8e-09%2C%20%22m%22%5D%2C%20%22y%22%3A%20%5B8e-09%2C%20%22m%22%5D%2C%20%22z%22%3A%20%5B4.2e-08%2C%20%22m%22%5D%7D%2C%20%22position%22%3A%20%5B25148.333984375%2C%2019334.7421875%2C%201531.5%5D%2C%20%22crossSectionScale%22%3A%2085.66190459506113%2C%20%22projectionScale%22%3A%2078956.25548751229%2C%20%22projectionDepth%22%3A%20541651.3969244945%2C%20%22layers%22%3A%20%5B%7B%22type%22%3A%20%22image%22%2C%20%22source%22%3A%20%22precomputed%3A//gs%3A//dkronauer-ant-001-alignment-final/aligned%22%2C%20%22tab%22%3A%20%22source%22%2C%20%22name%22%3A%20%22aligned%22%7D%2C%20%7B%22type%22%3A%20%22segmentation%22%2C%20%22source%22%3A%20%7B%22url%22%3A%20%22graphene%3A//middleauth%2Bhttps%3A//data.proofreading.zetta.ai/segmentation/table/kronauer_ant_x1%22%2C%20%22subsources%22%3A%20%7B%22default%22%3A%20true%2C%20%22graph%22%3A%20true%2C%20%22bounds%22%3A%20true%2C%20%22mesh%22%3A%20true%7D%2C%20%22enableDefaultSubsources%22%3A%20false%2C%20%22state%22%3A%20%7B%22multicut%22%3A%20%7B%22sinks%22%3A%20%5B%5D%2C%20%22sources%22%3A%20%5B%5D%7D%2C%20%22merge%22%3A%20%7B%22merges%22%3A%20%5B%5D%7D%2C%20%22findPath%22%3A%20%7B%7D%7D%7D%2C%20%22tab%22%3A%20%22source%22%2C%20%22segments%22%3A%20%5B%22576460752722405178%22%2C%20%22576460752773799604%22%2C%20%22576460752681552812%22%2C%20%22576460752679088143%22%2C%20%22576460752679088399%22%5D%2C%20%22colorSeed%22%3A%201212430833%2C%20%22name%22%3A%20%22proofreadable%20seg%22%7D%2C%20%7B%22type%22%3A%20%22segmentation%22%2C%20%22source%22%3A%20%22precomputed%3A//gs%3A//dkronauer-ant-001-alignment-final/tissue_mesh/mesh%23type%3Dmesh%22%2C%20%22tab%22%3A%20%22segments%22%2C%20%22objectAlpha%22%3A%200.11%2C%20%22hoverHighlight%22%3A%20false%2C%20%22segments%22%3A%20%5B%221%22%5D%2C%20%22segmentQuery%22%3A%20%221%22%2C%20%22name%22%3A%20%22brain%20mesh%22%7D%5D%2C%20%22showSlices%22%3A%20false%2C%20%22selectedLayer%22%3A%20%7B%22visible%22%3A%20true%2C%20%22layer%22%3A%20%22proofreadable%20seg%22%7D%2C%20%22layout%22%3A%20%7B%22type%22%3A%20%22xy-3d%22%2C%20%22orthographicProjection%22%3A%20true%7D%7D'

Or directly copy to clipboard.

# Option 2: Copy to clipboard
print("\n2. Copy URL to clipboard (requires pyperclip):")
test_url = ngl.encode_url(segments=demo_neurons[:2], to_clipboard=True, shorten=True)
print("   ✓ URL copied to clipboard!")
2. Copy URL to clipboard (requires pyperclip):
URL copied to clipboard.
   ✓ URL copied to clipboard!

Or, save to a text file for later use.

# Option 3: Save to file
print("\n3. Save URL to file:")
# with open('neuroglancer_url.txt', 'w') as f:
#     f.write(full_url)

# Display full URL as clickable link
full_url_view = IPython.display.HTML(f'<a href="{full_url}" target="_blank">Open Full URL</a>')
display(full_url_view)
3. Save URL to file:

Part 10: Integration with NeuronCriteria#

The real power comes from combining queries with visualization!

# Example: Query neurons and visualize them

# Find all olfactory projection neurons
nc = cp.NeuronCriteria(cell_class='olfactory_projection_neuron')
neurons = nc.get_roots()

# Create URL directly from NeuronCriteria
url = nc.to_neuroglancer(seg_colors=np.arange(len(neurons)))
print(f"Visualizing {len(neurons)} olfactory projection neurons")
# Display as clickable link
url_view = IPython.display.HTML(f'<a href="{url}" target="_blank">Olfactory Projection Neurons URL</a>')
display(url_view) 

# With custom settings
url = nc.to_neuroglancer(
    layout='4panel',
    shorten=False,
    seg_colors='red'
)

print(f"Visualizing {len(neurons)} olfactory projection neurons (custom settings)")
# Display as clickable link
url_view = IPython.display.HTML(f'<a href="{url}" target="_blank">Olfactory Projection Neurons URL (Custom)</a>')
display(url_view)
Visualizing 107 olfactory projection neurons
Visualizing 107 olfactory projection neurons (custom settings)

Part 11: Connectivity Visualization#

Visualize connectivity patterns with colors and annotations.

# Example connectivity visualization workflow
root_id = demo_neurons[0]

# Get downstream partners
partners = cp.get_connectivity(int(root_id))

# get downstream partners only
downstream = partners[partners['pre']==root_id]
# sort by weight and take top one
downstream = downstream.sort_values(by='weight', ascending=False)['post'].values[0]

# Get synaptic locations
synapses = cp.get_synapses(pre_ids=int(root_id), post_ids=int(downstream))

# presynaptic coordinates
presyn_coords = np.array(synapses['pre_pt_position'].tolist())
postsyn_coords = np.array(synapses['post_pt_position'].tolist())

# Create visualization
all_neurons = [root_id, downstream]

# Color: source neuron in red, partners in blue
colors = {root_id: 'red'}
colors.update({n: 'blue' for n in all_neurons if n != root_id})

# Add point annotations at the pre- and postsynaptic sites
url = ngl.encode_url(
    segments=all_neurons,
    annotations={
        'presynaptic': presyn_coords,
        'postsynaptic': postsyn_coords
    },
    seg_colors=colors,
    layout='xy-3d'
)
print(f"Visualizing connectivity from neuron {root_id} to its top partner {downstream}")
# Display as clickable link
url_view = IPython.display.HTML(f'<a href="{url}" target="_blank">Connectivity Visualization URL</a>')
display(url_view)
Visualizing connectivity from neuron 576460752722405178 to its top partner 576460752721057863
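
The scene above marks pre- and postsynaptic sites as separate point layers. If you instead want to draw each synapse as a line from its presynaptic to its postsynaptic site, you can reuse the (N, 2, 3) line format from Part 6. A minimal sketch:

# Stack matched pre/post coordinates into an (N, 2, 3) line array
syn_lines = np.stack([presyn_coords, postsyn_coords], axis=1)

scene = ngl.construct_scene()
scene = ngl.add_annotation_layer(syn_lines, scene, name="synapse_lines")
url_syn_lines = ngl.scene_to_url(scene)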

Part 12: Advanced Features#

Invisible Segments#

You can add neurons to the selection but keep them invisible:

# Add some neurons as visible, others as invisible
url_invisible = ngl.encode_url(
    segments=demo_neurons[:2],      # Visible neurons
    invis_segs=demo_neurons[2:4],   # Invisible but selected
    seg_colors={
        demo_neurons[0]: 'red',
        demo_neurons[1]: 'blue'
    }
)

scene = ngl.decode_url(url_invisible, format='json')
seg_layer = [l for l in scene['layers'] if 'segmentation' in l.get('type', '')][0]

print("Visible segments:", seg_layer.get('segments', []))
print("Hidden segments:", seg_layer.get('hiddenSegments', []))

print("\nNeurons with visibility settings:")
for layer in scene['layers']:
    if 'segmentation' in layer.get('type', ''):
        segs = layer.get('segments', [])
        hidden_segs = layer.get('hiddenSegments', [])
        visible = layer.get('visible', True)
        print(f"  - {layer['name']}: {len(segs)} segments, {len(hidden_segs)} hidden {'(visible)' if visible else '(hidden)'}")

print("\nNeuroglancer URL generation and visualization complete!")
if url_invisible:
    url_invisible_view = IPython.display.HTML(f'<a href="{url_invisible}" target="_blank">Visibility Settings URL</a>')
    display(url_invisible_view)
Visible segments: ['576460752722405178', '576460752773799604']
Hidden segments: ['576460752681552812', '576460752679088143']

Neurons with visibility settings:
  - proofreadable seg: 2 segments, 2 hidden (visible)
  - brain mesh: 1 segments, 0 hidden (visible)

Neuroglancer URL generation and visualization complete!

Modifying Existing Scenes#

You can decode a URL, modify it, and re-encode:

# Start with a URL
original_url = ngl.encode_url(segments=[demo_neurons[0]])

# Decode it
scene = ngl.decode_url(original_url, format='json')

# Add more layers
scene = ngl.add_annotation_layer(
    np.array([[25000, 19000, 1500]]),
    scene,
    name="new_point"
)

# Re-encode
modified_url = ngl.scene_to_url(scene)

print("Original scene layers:", len(ngl.decode_url(original_url, format='json')['layers']))
print("Modified scene layers:", len(ngl.decode_url(modified_url, format='json')['layers']))

# Or pass scene directly to encode_url to add more neurons
new_url = ngl.encode_url(
    scene=original_url,  # Can pass URL string
    segments=[demo_neurons[1]]  # Adds to existing neurons
)

info = ngl.decode_url(new_url, format='brief')
print(f"Total neurons now: {len(info['selected'])}")

if original_url:
    original_url_view = IPython.display.HTML(f'<a href="{original_url}" target="_blank">Original Scene URL</a>')
    display(original_url_view)

if modified_url:
    modified_url_view = IPython.display.HTML(f'<a href="{modified_url}" target="_blank">Modified Scene URL</a>')
    display(modified_url_view)

if new_url:
    new_url_view = IPython.display.HTML(f'<a href="{new_url}" target="_blank">New URL with Added Neuron</a>')
    display(new_url_view)
Original scene layers: 3
Modified scene layers: 4
Total neurons now: 3

Part 13: Dataset Selection#

CRANTpy supports multiple datasets (latest and sandbox):

# Check current default
print(f"Current default dataset: {cp.CRANT_DEFAULT_DATASET}")
print(f"Available datasets: {cp.CRANT_VALID_DATASETS}")

# Use specific dataset for this URL
url_latest = ngl.encode_url(
    segments=[demo_neurons[0]],
    dataset='latest'
)
if url_latest:
    url_latest_view = IPython.display.HTML(f'<a href="{url_latest}" target="_blank">Latest Dataset URL</a>')
    display(url_latest_view)

url_sandbox = ngl.encode_url(
    segments=[demo_neurons[0]],
    dataset='sandbox'
)
if url_sandbox:
    url_sandbox_view = IPython.display.HTML(f'<a href="{url_sandbox}" target="_blank">Sandbox Dataset URL</a>')
    display(url_sandbox_view)

# Check which dataset is used
scene_latest = ngl.decode_url(url_latest, format='json')
scene_sandbox = ngl.decode_url(url_sandbox, format='json')

# Find segmentation layer and check URL
for scene, name in [(scene_latest, 'latest'), (scene_sandbox, 'sandbox')]:
    seg_layer = [l for l in scene['layers'] if 'segmentation' in l.get('type', '')][0]
    source_url = seg_layer['source']['url'] if isinstance(seg_layer['source'], dict) else seg_layer['source']
    print(f"\n{name.upper()} dataset URL: {source_url}")
Current default dataset: latest
Available datasets: ['latest', 'sandbox']
LATEST dataset URL: graphene://middleauth+https://data.proofreading.zetta.ai/segmentation/table/kronauer_ant_x1

SANDBOX dataset URL: graphene://middleauth+https://data.proofreading.zetta.ai/segmentation/table/kronauer_ant_sandbox_x1

Part 14: URL Analysis and Comparison#

Extract and compare information from different URLs:

# Create several URLs
urls = []
for i in range(3):
    url = ngl.encode_url(
        segments=[demo_neurons[i]],
        coords=[25000 + i*100, 19000 + i*100, 1500 + i*10]
    )
    urls.append(url)

# Decode to DataFrame for comparison
df = ngl.decode_url(urls, format='dataframe')

print("Segments across all URLs:")
print(df)
print(f"\nTotal unique segments: {df['segment'].nunique()}")
print(f"Segments per layer:")
print(df.groupby('layer')['segment'].count())

# Brief analysis of each URL
print("\nDetailed analysis:")
for i, url in enumerate(urls):
    info = ngl.decode_url(url, format='brief')
    if url:
        url_view = IPython.display.HTML(f'<a href="{url}" target="_blank">URL {i+1}</a>')
        display(url_view)
    print(f"\nURL {i+1}:")
    print(f"  Position: {info['position']}")
    print(f"  Neurons: {len(info['selected'])}")
    print(f"  Annotations: {len(info['annotations'])}")
Segments across all URLs:
              segment              layer  visible
0  576460752722405178  proofreadable seg     True
1                   1         brain mesh     True
0  576460752773799604  proofreadable seg     True
1                   1         brain mesh     True
0  576460752681552812  proofreadable seg     True
1                   1         brain mesh     True

Total unique segments: 4
Segments per layer:
layer
brain mesh           3
proofreadable seg    3
Name: segment, dtype: int64

Detailed analysis:
URL 1:
  Position: [25000, 19000, 1500]
  Neurons: 2
  Annotations: 0
URL 2:
  Position: [25100, 19100, 1510]
  Neurons: 2
  Annotations: 0
URL 3:
  Position: [25200, 19200, 1520]
  Neurons: 2
  Annotations: 0

Part 15: Best Practices and Tips#

Performance Tips#

1. URL Length Management:#

  • Full URLs can be very long (>10,000 chars)

  • Use shorten=True for sharing: ngl.encode_url(..., shorten=True)

  • Shortened URLs are easier to share and don’t break in emails

2. Performance with Many Neurons:#

  • Limit colored neurons to <50 for best performance

  • Use groups instead of coloring all neurons

  • Consider using invisible segments for context

3. Skeleton Performance:#

  • Downsample skeletons: neurons_to_url(..., downsample=5)

  • Use skeletons only when detailed structure is needed

  • For overview, mesh visualization is faster

4. Coordinate System:#

  • Always verify coordinate units (voxels vs nanometers)

  • Use SCALE_X, SCALE_Y, SCALE_Z constants for conversion

  • Skeletons are auto-converted from nm to voxels
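
Wrapping the conversion from point 4 in small helpers avoids unit mistakes in either direction. A minimal sketch using the same constants as Part 5:

import numpy as np
from crantpy.utils.config import SCALE_X, SCALE_Y, SCALE_Z

def nm_to_voxel(xyz_nm):
    """Convert nanometer coordinates to voxel coordinates."""
    return np.asarray(xyz_nm) / [SCALE_X, SCALE_Y, SCALE_Z]

def voxel_to_nm(xyz_voxel):
    """Convert voxel coordinates to nanometer coordinates."""
    return np.asarray(xyz_voxel) * [SCALE_X, SCALE_Y, SCALE_Z]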

5. Layer Selection:#

  • Use minimal layers for performance

  • Visualization: image + segmentation + brain_mesh

  • Proofreading: add merge_biased_seg + nuclei

6. Sharing URLs:#

  • Use shortened URLs for publications

  • Include dataset info in documentation

  • Test URLs before sharing

7. Annotation Guidelines:#

  • Keep annotation counts reasonable (<1000 points)

  • Use meaningful layer names

  • Group related annotations in named layers
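
For the first guideline, a simple subsampling step keeps annotation layers responsive. A sketch, with synapse_locations standing in for any large (N, 3) point array:

max_points = 1000  # rule-of-thumb cap from the guideline above
points = np.asarray(synapse_locations)
if len(points) > max_points:
    keep = np.random.choice(len(points), max_points, replace=False)
    points = points[keep]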

Part 16: Complete Workflow Example#

Putting it all together in a realistic analysis workflow:

print("COMPLETE WORKFLOW: Analyzing and Visualizing a Neural Circuit")
print("="*60)

# Step 1: Query neurons of interest
print("\n1. Query neurons...")
nc = cp.NeuronCriteria(
    cell_class='olfactory_projection_neuron',
    side='left',
)
neurons = nc.get_roots()[:1].astype(int).tolist()
print(f"   Found {len(neurons)} neurons")

# Step 2: Get connectivity
print("\n2. Get connectivity...")

partners = cp.get_connectivity(neurons, threshold=5)
# Filter for downstream partners only
partners = partners[partners['pre'].isin(neurons)]
# Sort by weight
partners = partners.sort_values(by='weight', ascending=False)

all_neurons = neurons + partners['post'].tolist()
skeletons = cp.get_l2_skeleton(all_neurons, omit_failures=True)
skeleton_names = [str(skeletons[i].id) for i in range(len(skeletons))]

print(f"   Found {len(partners)} downstream partners")
# Get synapses between target neuron and its partners
synapses = cp.get_synapses(pre_ids=neurons, post_ids=partners['post'].tolist())
print(f"   Found {len(synapses)} synapses")
# Extract synaptic coordinates
syn_coords = np.array(synapses['pre_pt_position'].tolist())
print(f"   Example synapse coordinates (first 3): {syn_coords[:3]}")

# Step 3: Create colored groups
print("\n3. Organize neurons...")
groups = {
    'Source Neuron': neurons,
    'Downstream Partners': partners['post'].tolist()
}
colors = {n: 'red' for n in neurons}
colors.update({n: 'blue' for n in partners['post'].tolist()})

# Step 4: Create comprehensive visualization
print("\n4. Create neuroglancer scene...")
url = ngl.encode_url(
    segments=all_neurons,
    seg_groups=groups,
    seg_colors=colors,
    annotations={'synapses': syn_coords},
    skeletons=skeletons,
    skeleton_names=skeleton_names,
    layout='xy-3d',
    shorten=False
)

print(f"\n5. Share URL:")
# Display as clickable link
url_view = IPython.display.HTML(f'<a href="{url}" target="_blank">Open Circuit Visualization</a>')
display(url_view)

# Step 6: Save for publication
print("\n6. Save results...")
results = {
    'target_neurons': neurons,
    'n_partners': len(partners),
    'n_synapses': len(synapses),
    'neuroglancer_url': url
}
# pd.DataFrame([results]).to_csv('circuit_analysis.csv')
COMPLETE WORKFLOW: Analyzing and Visualizing a Neural Circuit
============================================================

1. Query neurons...
   Found 1 neurons

2. Get connectivity...
   Found 25 downstream partners
   Found 180 synapses
   Example synapse coordinates (first 3): [[32276 10002  1809]
 [30062 12238  2844]
 [29320 11868  2955]]

3. Organize neurons...

4. Create neuroglancer scene...

5. Share URL:
6. Save results...

Summary#

This tutorial covered all aspects of neuroglancer integration in CRANTpy:

Core Functions#

  • ✅ encode_url() - Create URLs with segments, colors, annotations

  • ✅ decode_url() - Extract information from URLs

  • ✅ construct_scene() - Build custom scenes

  • ✅ add_annotation_layer() - Add points, lines, ellipsoids

  • ✅ add_skeleton_layer() - Add neuron skeletons

  • ✅ neurons_to_url() - Batch URL creation

  • ✅ scene_to_url() - Convert scenes to URLs

Key Features#

  • ✅ Multiple color schemes (single, dict, list, palette)

  • ✅ Neuron grouping into layers

  • ✅ Three annotation types (points, lines, ellipsoids)

  • ✅ Four layout options (3d, xy-3d, xy, 4panel)

  • ✅ URL shortening via state server

  • ✅ Dataset selection (latest/sandbox)

  • ✅ NeuronCriteria integration

  • ✅ Coordinate conversion (nm ↔ voxels)