
Improve reliability (and speed) of UTs by transitioning to rely on a mock of Invoke-WebRequest #267


Open · HowardWolosky opened this issue Jul 17, 2020 · 2 comments
Labels: discussion, enhancement, help wanted, tests, up for grabs

Comments

@HowardWolosky (Member)

Feature Idea Summary

Trying to fix #264 has made me start to rethink how we're performing tests within this module.
I think it is imperative that we continue to have tests that perform end-to-end testing of the module against the API; however, I think we can probably reduce our reliance on those tests from 100% of our tests to closer to 20%. For the other 80%, I think we can switch to using mocks. My current thinking is that we'd only need to mock out Invoke-WebRequest.

Feature Idea Additional Details

  • I think that for any given endpoint that we are contacting, we need to have at least one true end-to-end test to verify that we're interacting correctly with the API.
  • For the rest of the APIs, we should be verifying that what we're sending to the API is correct. So, we would mock out Invoke-WebRequest, and the mock should expose the Uri, Method, Headers, InFile, OutFile, and Body so that they can each be verified by the tests. The mock will need to be able to return back a valid-looking success object, but it doesn't need to have any real content (it could just be a 202 with no body). If OutFile was specified, it will need to create an empty text file at that location. (See the Pester sketch after this list.)
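
Below is a minimal Pester sketch of what such a mock might look like. The `Mock`/`Assert-MockCalled` usage and the Invoke-WebRequest parameter names are real; the `Describe`/`It` structure, the `-ModuleName` value, the `Get-GitHubRepository` call, and the example owner/repo names are illustrative assumptions rather than the module's actual test code.

```powershell
# Illustrative sketch only; not the module's actual tests.
Describe 'Get-GitHubRepository (mocked)' {
    It 'sends the expected request to the API' {
        # The module calls Invoke-WebRequest internally, so the mock must be
        # scoped to the module (module name assumed here).
        Mock -CommandName Invoke-WebRequest -ModuleName PowerShellForGitHub -MockWith {
            # If the caller asked for the response to be written to disk,
            # create an empty file so downstream code finds something there.
            if ($OutFile) {
                New-Item -Path $OutFile -ItemType File -Force | Out-Null
            }

            # Return a minimal, valid-looking success object with no real content.
            [PSCustomObject]@{
                StatusCode = 202
                Headers    = @{}
                Content    = ''
            }
        }

        Get-GitHubRepository -OwnerName 'octocat' -RepositoryName 'Hello-World'

        # Verify what we sent, rather than relying on what GitHub returned.
        Assert-MockCalled -CommandName Invoke-WebRequest -ModuleName PowerShellForGitHub -Times 1 -ParameterFilter {
            $Uri -like '*repos/octocat/Hello-World*' -and $Method -eq 'Get'
        }
    }
}
```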

Taking this approach should dramatically speed up the overall runtime of the UTs, while reducing the unreliability we've been seeing lately when running the existing UTs live against GitHub in our CI build.

Requested Assignment

I'm just suggesting this idea, but don't want to implement it.

Operating System

OsName               : Microsoft Windows 10 Pro
OsOperatingSystemSKU : 48
OsArchitecture       : 64-bit
WindowsVersion       : 1909
WindowsBuildLabEx    : 18362.1.amd64fre.19h1_release.190318-1202
OsLanguage           : en-US
OsMuiLanguages       : {en-US}

PowerShell Version

Name                           Value
----                           -----
PSVersion                      5.1.18362.752
PSEdition                      Desktop
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
BuildVersion                   10.0.18362.752
CLRVersion                     4.0.30319.42000
WSManStackVersion              3.0
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1

Module Version

Running: 0.14.0
Installed:

HowardWolosky added the enhancement, up for grabs, discussion, tests, and help wanted labels Jul 17, 2020
HowardWolosky added a commit that referenced this issue Jul 18, 2020
We're seeing some inconsistent failures in some of the Pester tests.

The hypothesis is that GitHub may need a little more time after the creation of objects before performing certain operations on them (like renaming repos), or may need more time after deleting them before it will successfully return a 404 on a subsequent Get request.

I have added a number of `Start-Sleep` calls throughout the test codebase wherever we've encountered inconsistent failures, and that appears to have resolved the problem.  We may need to continue to add more of these if additional failures pop up.

The duration of the sleep itself is controlled by `$script:defaultSleepSecondsForReliability` which is defined in `Tests/Common.ps1`.
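
As a rough illustration of the pattern (the variable comes from `Tests/Common.ps1` as noted above; the commands are real module functions, but the scenario shown is an assumed example rather than a specific test from the codebase):

```powershell
# Hypothetical example of the delay pattern; not a specific test from the codebase.
$repoName = [Guid]::NewGuid().Guid
$repo = New-GitHubRepository -RepositoryName $repoName

# ... exercise the repository ...

Remove-GitHubRepository -Uri $repo.svn_url

# Give GitHub a moment to fully process the deletion before verifying that a
# follow-up Get now fails.
Start-Sleep -Seconds $script:defaultSleepSecondsForReliability

{ Get-GitHubRepository -Uri $repo.svn_url } | Should -Throw
```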

Long term, I have opened #267, which poses the idea of switching over to mocking out `Invoke-WebRequest` for the majority of the tests, focusing instead on validating that the data being sent matches the expected values per the API documentation, and keeping just a limited number of tests that do actual end-to-end testing.

Fixes #264
@X-Guardian (Contributor)

It would seem a shame to lose the depth of the integration tests that are currently being run, as these will always be better than any mock-based tests. Have you thought about adding a Set-GitHubConfiguration parameter that would globally add a delay after any state-changing API call (i.e. POST, PUT, PATCH, DELETE)? This could then be configured for the CI only and wouldn't require any changes to the tests.

@HowardWolosky (Member, Author)

> It would seem a shame to lose the depth of the integration tests that are currently being run, as these will always be better than any mock-based tests.

Agreed. It's about balance. We still need end-to-end integration tests, especially for the APIs that target endpoints using experimentation headers (since we won't be notified ahead of time if the behavior under experimentation changes). I just figured that a smaller set of those, paired with a larger percentage of tests that simply verify we're calling the API correctly, should still be sufficient. I'm trying to balance the fact that our tests' job isn't to supply code coverage against GitHub's actual API implementation...it's to verify that our module is performing as it was intended to, and along the way make sure that it's still actually working with the API.

As the tags indicate, this is still a discussion at the moment, as we try to think through the full set of positives/negatives with the proposed change. In the meantime though, regarding your second point...

> Have you thought about adding a Set-GitHubConfiguration parameter that would globally add a delay after any state-changing API call (i.e. POST, PUT, PATCH, DELETE)? This could then be configured for the CI only and wouldn't require any changes to the tests.

I hadn't. That's a brilliant idea. I'll work on getting that in within the next day or two, and then I'll revert the sleeps that were added to the tests in the interim.

HowardWolosky added a commit that referenced this issue Jul 19, 2020
This is re-implementing #265 in a more robust way, thanks to a suggestion from @X-Guardian in #267.

Instead of adding in one-off sleeps throughout the test codebase, there is now a new configuration value (`StateChangeDelaySeconds`) that will allow us to insert a delay before returning the result of _any_ state change request.

This should consistently improve reliability throughout the entire test codebase.
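
A minimal sketch of how such a delay could be applied, assuming the request pipeline knows the HTTP method in use. `Get-GitHubConfiguration` and the `StateChangeDelaySeconds` value come from the module and commit above; the helper function name and where it would be called from are assumptions, not the module's actual implementation.

```powershell
# Illustrative sketch; not the module's actual code.
function Invoke-StateChangeDelay
{
    param([string] $Method)

    # Only state-changing verbs should incur the delay.
    $stateChangingMethods = @('Post', 'Put', 'Patch', 'Delete')
    if ($stateChangingMethods -notcontains $Method) { return }

    $delay = Get-GitHubConfiguration -Name StateChangeDelaySeconds
    if ($delay -gt 0)
    {
        # Give GitHub time to make the state change fully visible before the
        # caller issues a follow-up request against the same object.
        Start-Sleep -Seconds $delay
    }
}

# Hypothetical usage, just before returning the result of a web request:
# Invoke-StateChangeDelay -Method $Method
```

In CI, the delay could then presumably be enabled once via something like `Set-GitHubConfiguration -StateChangeDelaySeconds <n>` without any changes to individual tests, which is the point of the suggestion above.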