Script to generate and validate goldens for an import #1905
ajaits wants to merge 5 commits into datacommonsorg:master
Conversation
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a new framework for validating data imports against predefined "golden" datasets. It provides utilities both to generate these golden sets from existing data, with sampling and must-include rules, and to compare outputs against them to identify discrepancies. Additionally, it includes minor but important fixes to data processing logic, ensuring accurate handling of zero-value evaluations and proper CSV data interpretation during node loading.
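As a rough illustration of the "must-include" sampling idea described in the summary, a sketch like the one below could be used. The function name, parameters, and behavior here are assumptions for illustration only, not the PR's actual code.

```python
import random

def sample_with_must_include(values, sample_size, must_include=None, seed=0):
    """Pick a sample of values, always keeping any must-include values that are present.

    Illustrative sketch only; the sampler in this PR may work differently.
    """
    must_include = set(must_include or [])
    # Values that must appear in the sample, if they exist in the data.
    required = [v for v in values if v in must_include]
    remaining = [v for v in values if v not in must_include]
    # Fill the rest of the sample randomly from the remaining values.
    k = max(sample_size - len(required), 0)
    rng = random.Random(seed)
    sampled = rng.sample(remaining, min(k, len(remaining)))
    return required + sampled
```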
Code Review
This pull request introduces a new tool for generating and validating 'golden' files for data imports, which is a great addition for ensuring data quality. The changes also include support for 'must-include' values during sampling and a fix for handling zero as a valid evaluation result.
My review has identified a critical bug in validator_goldens.py that would cause a NameError. I've also included suggestions to improve code quality by addressing mutable default arguments, removing leftover debug code, and refactoring duplicated logic. The new tests are comprehensive, but adding a test case for the load_must_include_values function would have caught the aforementioned bug.
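For context, the mutable-default-argument issue the review refers to typically looks like the sketch below. The function names and parameters are illustrative, not taken from validator_goldens.py.

```python
def add_golden_key_buggy(key, goldens=[]):
    # The default list is created once and shared across calls,
    # so keys from earlier calls leak into later ones.
    goldens.append(key)
    return goldens

def add_golden_key_fixed(key, goldens=None):
    # Use None as the default and create a fresh list per call.
    if goldens is None:
        goldens = []
    goldens.append(key)
    return goldens

print(add_golden_key_buggy('a'))  # ['a']
print(add_golden_key_buggy('b'))  # ['a', 'b']  <- surprising shared state
print(add_golden_key_fixed('a'))  # ['a']
print(add_golden_key_fixed('b'))  # ['b']
```

The diff hunk below is the code the review's "leftover debug code" comment refers to: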
        strip_namespaces=strip_namespaces)
    # Initialize match count to 0.
    golden_matches[key] = 0
    logging.debug(f'DELETE: matching golden keys: {golden_matches.keys()}')
Adding support for comparing output files against goldens.
Goldens can have a subset of columns. validate_goldens verifies that the output contains all the combinations present in the golden file.
Expected usage: generate goldens into a folder called golden_data (more in PR#1916).
For more details, please refer to the design doc.
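A rough sketch of what "goldens with a subset of columns" validation could look like is shown below. This is an assumption for illustration; the actual validate_goldens implementation in this PR may differ.

```python
import csv

def validate_goldens(golden_csv: str, output_csv: str) -> list:
    """Check that every row (column-value combination) in the golden CSV appears in the output.

    The golden file may contain only a subset of the output's columns; rows are
    compared on those shared columns only. Returns the list of missing golden rows.
    Illustrative sketch, not the actual script in this PR.
    """
    with open(golden_csv, newline='') as f:
        golden_rows = list(csv.DictReader(f))
    if not golden_rows:
        return []
    golden_cols = list(golden_rows[0].keys())

    with open(output_csv, newline='') as f:
        # Project each output row onto the golden columns.
        output_keys = {
            tuple(row.get(col, '') for col in golden_cols)
            for row in csv.DictReader(f)
        }

    # A golden row is missing if its combination of values never appears in the output.
    return [
        row for row in golden_rows
        if tuple(row[col] for col in golden_cols) not in output_keys
    ]
```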