Allow create_sorting_analyzer to accept dicts
#4037
Conversation
Thank you very much.
alejoe91
left a comment
A few small suggestions :)
This is ultra smart and useful.
zm711
left a comment
Two small points: (1) a rendering issue, and (2) a discussion of whether we should explain the dict layout better for beginners trying to use this strategy (at the level of inputs to the analyzer creation).
The code above generates a dictionary of recording objects and a dictionary of sorting objects.
When making a :ref:`SortingAnalyzer <modules/core:SortingAnalyzer>`, we can pass these dictionaries and
a single analyzer will be created, with the recordings and sortings appropriately aggregated.
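As a rough illustration of the dict layout the docs describe (the group keys are hypothetical, and plain strings stand in for real Recording/Sorting objects), the two dicts share one key per channel group:

```python
# Hypothetical group keys; strings stand in for Recording/Sorting objects.
recording_dict = {"group0": "recording_for_group0", "group1": "recording_for_group1"}
sorting_dict = {"group0": "sorting_for_group0", "group1": "sorting_for_group1"}

# The two dicts are expected to describe the same set of groups.
assert recording_dict.keys() == sorting_dict.keys()
```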
How? Do keys of the dict need to match up? Will the user know how the sorting dict looks vs the recording dict?
Maybe we could check that the keys are the same and in the same order, no?
Yeah we either do that for the users internally or need to explain it here. So I think doing it ourselves is fine. Chris is doing a dict key comparison below but I haven't tested it to see if it checks the order or just the presence of the same keys.
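On the "presence vs order" question: a set-based comparison checks only that the same keys exist, while a list comparison would also check insertion order. A minimal sketch of such a check (a hypothetical helper, not the PR's actual code):

```python
def check_matching_keys(recording_dict, sorting_dict):
    """Hypothetical helper: require identical key sets.

    Comparing as sets checks key *presence* only; comparing
    list(recording_dict) == list(sorting_dict) would also check order.
    """
    if set(recording_dict) != set(sorting_dict):
        raise ValueError(
            f"Recording keys {sorted(recording_dict)} do not match "
            f"sorting keys {sorted(sorting_dict)}"
        )
```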
Updated to hopefully make things clearer.
The order of keys doesn't matter: when you aggregate, the link between the recording channels and unit ids is independent of this dict stuff I'm adding. Added a test to check this is true.
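The order-independence claim can be sketched in plain Python (hypothetical pairing step; the real aggregation links channels and units rather than strings): pairing entries by key gives the same result whatever order the dicts were built in.

```python
def pair_by_key(recordings, sortings):
    # Hypothetical pairing step: match each recording with the sorting
    # sharing its key, iterating in a deterministic (sorted) key order.
    return {key: (recordings[key], sortings[key]) for key in sorted(recordings)}

recs_ordered = {"shank0": "rec0", "shank1": "rec1"}
recs_shuffled = {"shank1": "rec1", "shank0": "rec0"}  # same content, different order
sorts = {"shank0": "sort0", "shank1": "sort1"}

assert pair_by_key(recs_ordered, sorts) == pair_by_key(recs_shuffled, sorts)
```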
Hello. Hmm, I'm not sure how to handle this. Like you say: you defo want this for tetrodes. You probably don't want this for NP since you want an extra sparsity calc on each shank. You maybe want this for 32 channel silicon probes?? Ideally, you would compute sparsity per group, but this seems fairly complicated code-wise. Maybe we can add a note to the tetrode docs, and these docs, that if you're using tetrodes you can pass the sparsity explicitly? Maybe easiest to discuss "in person".
You can control `estimate_sparsity()`: all extra arguments are propagated to it (including job_kwargs)
sparsity : ChannelSparsity or None, default: None
    The sparsity used to compute extensions. If this is given, `sparse` is ignored.
set_sparsity_by_dict_key : bool, default: False
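The propagation pattern the docstring describes can be sketched with stand-in functions (not SpikeInterface internals; the bodies just echo their inputs): extra keyword arguments flow through to `estimate_sparsity`, and an explicit `sparsity` bypasses estimation entirely.

```python
def estimate_sparsity(recording, sorting, method="radius", radius_um=100.0):
    # Stand-in: echo the parameters so the sketch can show kwargs arriving here.
    return {"method": method, "radius_um": radius_um}

def create_sorting_analyzer(recording, sorting, sparse=True, sparsity=None, **kwargs):
    if sparsity is not None:
        return sparsity  # explicit sparsity given: `sparse` and extra kwargs ignored
    if sparse:
        return estimate_sparsity(recording, sorting, **kwargs)  # kwargs propagated
    return None  # dense case
```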
I support this.
What about setting it to True?
If it's False by default, some tetrode people will have un-sparsified tetrode bundles (not great, but ok)
If True by default, some silicon people will have badly-sparsified probes (worse!)
So I vote False!
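For the tetrode case mentioned above, "pass the sparsity explicitly" could look like the following plain-Python sketch (hypothetical layout: two tetrodes of four channels each; the boolean mask is the kind of per-unit/per-channel structure a sparsity object encodes):

```python
# Hypothetical layout: 8 channels arranged as two tetrodes.
n_channels = 8
channel_group = [0, 0, 0, 0, 1, 1, 1, 1]   # tetrode id for each channel
unit_group = {"u0": 0, "u1": 0, "u2": 1}   # tetrode id for each unit

# Each unit is sparse to exactly the channels of its own tetrode.
mask = {
    unit: [channel_group[ch] == group for ch in range(n_channels)]
    for unit, group in unit_group.items()
}
```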
Updated docstring of
Co-authored-by: Chris Halcrow <[email protected]>
Allows `create_sorting_analyzer` to accept a dict of recordings and a dict of sortings. Now the following code ... works for both grouped and un-grouped recordings! So the user doesn't have to include any logic about grouping in their pipeline for e.g. Neuropixels 2.0 recordings.

For now, I've implemented the simple solution: internally, the `create_sorting_analyzer` function aggregates the recordings and sortings and combines them into a single sorting analyzer. The actual code inside `create_sorting_analyzer` is super simple! Also updated the `units_aggregation` function to accept dicts.

(We've also discussed allowing it to output a dictionary of analyzers. This is slightly more painful: the user will have to write awkward code when they want to `compute` or `export`. We'll need to write and test new `save` and `load` options etc. I think we should add this in the future if there's a need for it.)

Internally, we need to keep track of the splitting and aggregation (e.g. which units in the analyzer belonged to which sorting?). I've put that logic in the `aggregate_channels` and `aggregate_units` functions, which attach new properties to the channels and units, depending on the underlying group, called "aggregate_channels_key" and "aggregate_units_key". Happy to hear other name ideas! So you can get the grouped analyzer by doing something like:

Docs: https://spikeinterface--4037.org.readthedocs.build/en/4037/how_to/process_by_channel_group.html#
Plus a comment here: https://spikeinterface--4037.org.readthedocs.build/en/4037/modules/sorters.html#spike-sorting-by-group
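The "aggregate_units_key" bookkeeping described above can be sketched in plain Python (hypothetical data; the real property lives on the aggregated sorting, not a dict): each unit records which input sorting it came from, so per-group unit ids can be recovered by filtering on that property.

```python
# Hypothetical per-unit properties after aggregation.
unit_properties = {
    "unit0": {"aggregate_units_key": "shank0"},
    "unit1": {"aggregate_units_key": "shank0"},
    "unit2": {"aggregate_units_key": "shank1"},
}

def units_in_group(properties, key):
    # Recover the unit ids that came from the sorting stored under `key`.
    return [u for u, p in properties.items() if p["aggregate_units_key"] == key]
```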