feat: ta processor handles placeholder storage #983
base: main
Conversation
These two new tasks are meant to replace the test results processor and test results finisher tasks. The difference is that the new tasks:
- use the new parse_raw_upload function provided by the test results parser library
- no longer write to the database in the processor; instead, the finisher takes care of writing to the database
- have the processor write out the results of its parsing, which the finisher then pulls from (see the sketch after this list)
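A minimal sketch of that split, assuming the intermediate parse results are serialized to Redis under a per-upload key; the key scheme, the parse_raw_upload stand-in, and persist_test_results are illustrative, not the worker's actual API:

```python
import json

import redis

r = redis.Redis()


def parse_raw_upload(raw_upload: bytes) -> list[dict]:
    # Stand-in for the parse_raw_upload function provided by the test results
    # parser library; its real signature and return type may differ.
    return [{"name": "test_a", "outcome": "pass"}]


def persist_test_results(upload_id: int, results: list[dict]) -> None:
    # Stand-in for the single DB write that now lives in the finisher.
    print(f"persisting {len(results)} results for upload {upload_id}")


def ta_processor(upload_id: int, raw_upload: bytes) -> None:
    """Parse the raw upload and stash the result; no DB writes happen here."""
    results = parse_raw_upload(raw_upload)
    # Intermediate results go to Redis keyed by upload so the finisher can pull them.
    r.set(f"ta/intermediate/{upload_id}", json.dumps(results))


def ta_finisher(upload_ids: list[int]) -> None:
    """Pull every processor's intermediate results and persist them in one place."""
    for upload_id in upload_ids:
        raw = r.get(f"ta/intermediate/{upload_id}")
        if raw is not None:
            persist_test_results(upload_id, json.loads(raw))
```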
The upload task will check the new TA tasks feature flag to determine whether to use the newly introduced TA processor and TA finisher tasks. We also call the new TA processor and TA finisher tasks via a chord, since we removed the concurrent DB writes from the processors (sketched below).
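Roughly how that dispatch could look with a Celery chord; the task names, the feature-flag argument, and the fallback branch are assumptions for illustration:

```python
from celery import chord, shared_task


@shared_task
def ta_processor_task(upload_id: int) -> int:
    # Parse the upload and write intermediate results; no concurrent DB writes.
    return upload_id


@shared_task
def ta_finisher_task(upload_ids: list[int]) -> None:
    # Runs once all processors have finished; receives their return values,
    # then persists the results and triggers notification.
    ...


def dispatch_ta_tasks(upload_ids: list[int], new_ta_tasks_enabled: bool) -> None:
    # new_ta_tasks_enabled stands in for the real feature-flag check.
    if new_ta_tasks_enabled:
        # One processor signature per upload; the finisher is the chord callback.
        chord(ta_processor_task.s(uid) for uid in upload_ids)(ta_finisher_task.s())
    else:
        ...  # fall back to the existing test results processor/finisher tasks
```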
This also updates the test results parser version.
We don't need to chunk uploads anymore, so each upload can get its own processing task.
I want to introduce the new finished state in the test results upload pipeline. To do so safely, I'm adding the new v2_processed and v2_finished states. The reason is that processed and v2_processed mean different things: processed means that test instances are persisted in the DB, but in the new pipeline v2_finished carries that meaning, and v2_processed just means that the intermediate results are in Redis for now.
This commit changes the TA upload states to add the v2_persisted state, which represents the test run data being persisted to the DB, and the v2_finished state, which represents the upload having been taken into account when making the latest comment on the PR. It also removes the v2_failed state: I want an upload to be able to have both valid test runs and some failed parsing, and the v2_failed state made it seem like an upload was either processed or failed when it could be both. The failures related to an upload will instead be represented by upload errors (see the state sketch below).
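The resulting set of states, as an illustrative enum based on these commit messages; the actual enum in the worker (its name, values, and whether v2_processed survives this change) may differ:

```python
from enum import Enum


class TAUploadState(Enum):
    # Hypothetical enum mirroring the states described above.
    PROCESSED = "processed"        # legacy: test instances are persisted in the DB
    V2_PROCESSED = "v2_processed"  # transitional: intermediate results are in Redis
    V2_PERSISTED = "v2_persisted"  # test run data has been persisted to the DB
    V2_FINISHED = "v2_finished"    # upload was taken into account in the latest PR comment
    # v2_failed was removed: an upload can have both valid test runs and some
    # failed parsing; parse failures are represented by UploadError rows instead.
```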
We want to start persisting errors so we can display them in the relevant comments (coverage or TA).
We want to be able to pass UploadError objects to the test results notifier so it can display the specific parser error to the user, giving them more information about the issue.
In the finisher, we want to fetch any relevant upload errors and pass them to the test results notifier so it can display them (sketched below).
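A rough sketch of that finisher step; the UploadError shape, the query helper, and the notifier interface are assumptions, not the worker's real models or API:

```python
from dataclasses import dataclass, field


@dataclass
class UploadError:
    # Illustrative shape; the real UploadError is a DB model in the worker.
    upload_id: int
    error_code: str
    error_params: dict = field(default_factory=dict)


def fetch_upload_errors(upload_ids: list[int]) -> list[UploadError]:
    # Stand-in for the DB query selecting UploadError rows for these uploads.
    return []


def finish_and_notify(upload_ids: list[int], notifier) -> None:
    errors = fetch_upload_errors(upload_ids)
    # Hand the errors to the test results notifier so it can show the specific
    # parser error to the user in the PR comment.
    notifier.notify(upload_errors=errors)
```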
- removes the TestResultsProcessingError enum
- ComparisonContext.test_results_error is now a str | None
- notify writers consume ComparisonContext.test_results_error as the error message to display
- the notify task checks UploadError objects to populate test_results_error (see the sketch after this list)
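How that could look, as a minimal sketch; ComparisonContext surely has more fields than shown here, and the way the message string is derived from an UploadError is made up:

```python
from dataclasses import dataclass


@dataclass
class ComparisonContext:
    # Previously a TestResultsProcessingError enum member; now just the message
    # string that the notify writers display, or None when there is no error.
    test_results_error: str | None = None


def populate_test_results_error(upload_errors: list) -> ComparisonContext:
    # Hypothetical notify-task step: derive the displayed message from the
    # first relevant UploadError (the message formatting here is illustrative).
    if upload_errors:
        return ComparisonContext(test_results_error=str(upload_errors[0]))
    return ComparisonContext()
```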
This commit makes it so there is a single flow for TA finishing: we will always attempt to comment if there is relevant data to comment on, and we will only persist data if there is new data (uploads) to persist. In the new test results flow the error comment will no longer be used; instead there is a single notify function that can choose which sections of the comment to show: if there are errors, show them, and if there are relevant failures, show them (sketched below).
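An illustrative version of that single comment-building path; the section titles and return convention are assumptions:

```python
def build_ta_comment(errors: list[str], failed_tests: list[str]) -> str | None:
    # Each section is included only when it has something to show; returning
    # None means there is nothing worth commenting on.
    sections: list[str] = []
    if errors:
        sections.append("### Errors\n" + "\n".join(f"- {e}" for e in errors))
    if failed_tests:
        sections.append("### Failed tests\n" + "\n".join(f"- {t}" for t in failed_tests))
    return "\n\n".join(sections) if sections else None
```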
We want to be able to mark an upload that was not able to find a file, so we can make a comment on the PR letting the user know that a file was not found. Such an upload is represented using the "placeholder" storage path (see the sketch below).
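A minimal sketch of how the processor might treat such an upload; the "placeholder" literal comes from this PR, but the check and the comment text are assumptions:

```python
PLACEHOLDER_STORAGE_PATH = "placeholder"  # sentinel path meaning "no file was found"


def file_not_found_message(storage_path: str) -> str | None:
    # An upload whose storage path is the "placeholder" sentinel means no test
    # results file could be located, so the PR comment should say so instead of
    # trying to download and parse it.
    if storage_path == PLACEHOLDER_STORAGE_PATH:
        return "No test results file was found for this upload."
    return None
```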
❌ 3 Tests Failed:
View the top 3 failed tests by shortest run time
To view more test analytics, go to the Test Analytics Dashboard