
Add a tests workflow upon a release in the action repo #5

Closed
wants to merge 2 commits

Conversation

@SuperKogito SuperKogito requested a review from vsoch April 18, 2020 15:44
@SuperKogito
Member Author

I am not sure how to set the action version to the latest release, and using @latest caused an error. See this.

@vsoch
Contributor

vsoch commented Apr 18, 2020

@SuperKogito the tests for the action should go in urlchecker-action so they are run whenever there is a change. Just curious, why did you put them here? This repo should just be a collection of static files that are used there. Then for "latest" you just refer to ./.

@vsoch
Contributor

vsoch commented Apr 18, 2020

I'm going to close here... looking forward to seeing the PR for the urlchecker-action!

@vsoch vsoch closed this Apr 18, 2020
@SuperKogito
Member Author

Well, as far as I understood, to be able to test we need to make a release, and based on that I assumed the test repo would be good for testing because it is similar to any user setup. But to your point, having the tests in a different repo can be very confusing :/

@SuperKogito
Member Author

I don't get the part about "latest"?

@vsoch
Contributor

vsoch commented Apr 18, 2020

We would actually want to test the action before release, which is standard in software (or action) development. How would it work to release and then find out it’s broken?

@vsoch
Contributor

vsoch commented Apr 18, 2020

You made a comment that you couldn't figure out how to test "latest", which I assume refers to the current state of the repository. The answer is to reference the present working directory, like ./. Take a look at the current action yml file; it's already being done there for the brief tests that I wrote.
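A minimal sketch of what this looks like in a workflow file (the workflow name, trigger, and step names here are illustrative, not taken from the actual repo): `uses: ./` runs the action from the checked-out working directory rather than a tagged release, so the current state of the repository is what gets tested.

```yaml
# Illustrative workflow sketch: test the action from the current checkout.
name: test-action
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # Check out the action repo itself.
      - uses: actions/checkout@v2
      # Reference the action by path instead of a version tag or release.
      - name: Run the action from the present working directory
        uses: ./
```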

@vsoch
Contributor

vsoch commented Apr 18, 2020

[screenshot attachment]

@vsoch
Contributor

vsoch commented Apr 18, 2020

Latest is a tag specifically for Docker containers; it doesn't apply here unless we explicitly created it (we did not).

@SuperKogito
Member Author

Okay, that's actually way better. Silly me thought we could only test upon release, which is horrible. I will check it out and make the needed fixes ;)

@vsoch
Contributor

vsoch commented Apr 18, 2020

No worries! It’s been a long quarantine, most of us are a little silly at this point 😜

@SuperKogito
Member Author

I don't think it is the quarantine in my case lol but that is nice of you to say xD

@SuperKogito
Member Author

Btw, I have another silly question: I don't know if you noticed, but I am currently using force_pass to avoid having the tests fail. Do you have a better alternative, or should I keep it the way it is for now?

@vsoch
Contributor

vsoch commented Apr 18, 2020

Hmm that's a good point - so let's break down the testing into a few cases:

  • urlchecker-python: is where we test the specifics of the library, via functions or the command line client urlchecker. This would be where we would want to assert that failures happen.
  • urlchecker-action: we should be relatively sure about failures, but we need to test general command line utils. What I would do here is use force_pass, but then inspect other output to see if there is a failure. For example, if you set force_pass to true but expect a certain number of failing urls, you can add a --save, then grep and count the number of "failed" entries in the output file, and in a step following the particular test run assert that expected fails == actual fails.

So here is how I would go about this:

  • for each version/release, start with the basic test case. Since we have been using latest, you are probably safe just writing the tests for the present working directory (the current release) ./
  • create a list of tests that you want to write. E.g.,:
    • different cases for whitelisting, grepping the results file to check what was found
    • extensions
    • saving a file with results
    • different subfolders

TLDR: I don't think there is anything we can do other than force_pass, and then use other methods (e.g., the output file) to validate that only some extension was checked, etc. For each test that you write, you'll likely have a run block after the run of urlchecker to validate something. And each test in urlchecker-action should correspond to some subfolder here (e.g., I currently added test_action, and you can consider that a base and add specific folders like test_pyextension, test_whitelist, etc.).
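The force_pass-plus-grep validation step described above can be sketched as a shell snippet. This is a hypothetical sketch, not the repo's actual test script: the results file name (urls.csv) and its "failed" marker format are assumptions stood in for whatever the --save output actually looks like.

```shell
#!/bin/sh
# Sketch: after running the checker with force_pass and --save, count the
# "failed" entries in the saved results file and compare against the number
# of failures the test case expects.

# Stand-in for a real --save output file (format is an assumption).
cat > urls.csv <<'EOF'
https://example.com,passed
https://bad.example,failed
https://worse.example,failed
EOF

expected_fails=2
# grep -c counts the lines containing "failed".
actual_fails=$(grep -c "failed" urls.csv)

if [ "$actual_fails" -ne "$expected_fails" ]; then
    echo "Expected $expected_fails failed urls, found $actual_fails"
    exit 1
fi
echo "OK: $actual_fails failed urls as expected"
```

A step like this would follow the urlchecker run inside each test's workflow job, so the job fails even though the checker itself was told to force-pass.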

@SuperKogito
Member Author

That's a nice, elaborate explanation <3 You raise many valid points, and I agree with the approach and the suggestions. I will get on it :)

Development

Successfully merging this pull request may close these issues.

trigger a workflow (tests) on urlchecker-action release