test(unit): workaround for function definition in command offset test #1312

Draft · wants to merge 1 commit into main

Conversation

inconstante
Contributor

In test/t/unit/test_unit_command_offset.py, if the first item in test2 is not executed, for example with the following diff:

  @@ -55,7 +55,6 @@ class TestUnitCommandOffset:
       @pytest.mark.parametrize(
           "cmd,expected_completion",
           [
  -            ("cmd2", wordlist),
               ("cmd3", wordlist),
               ("cmd4", []),
               ("cmd5", ["0"]),

test_cmd_quoted fails with the following error message:

    def test_cmd_quoted(self, bash, functions):
>       assert assert_complete(bash, "meta 'cmd2' ") == self.wordlist
E       AssertionError: assert <CompletionResult []> == ['bar', 'foo']
E
E         Full diff:
E         + <CompletionResult []>
E         - [
E         -     'bar',
E         -     'foo',
E         - ]

This means that test_cmd_quoted depends on the previous execution of test2. When executed serially, this issue does not manifest itself. However, with parallel execution it might, depending on how the tests are scheduled.

This patch adds a workaround to test_cmd_quoted so that it executes the required parametrization of test2 (the cmd2 case) before running its own assertion.

This is probably not the right fix, so I'm opening this pull request as a draft. I've hit a wall and can't make progress, so I'm asking for your help.
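For illustration, here is a minimal sketch of the kind of workaround described above. It assumes the bash and functions fixtures and the assert_complete helper from test/t/conftest.py; the exact command lines being completed (e.g. "meta cmd2 ") are an assumption and are not taken from the real test file or from this patch:

    from conftest import assert_complete


    class TestUnitCommandOffset:
        wordlist = ["bar", "foo"]

        def test_cmd_quoted(self, bash, functions):
            # Workaround sketch: first run the completion that
            # test2("cmd2", ...) would have run, so this test no longer
            # depends on test2 having been scheduled earlier in the same
            # bash session.  The "meta cmd2 " command line is an assumption.
            assert assert_complete(bash, "meta cmd2 ") == self.wordlist
            # Original assertion shown in the failure above.
            assert assert_complete(bash, "meta 'cmd2' ") == self.wordlist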

@akinomyoga
Collaborator

I think I have a related branch in my fork repository. Do you think this is fixed by 409ec8a?

@inconstante
Contributor Author

I think I have a related branch in my fork repository. Do you think this is fixed by 409ec8a?

It doesn't.

Also, it looks like this commit breaks another test (test_1) in the same file:

________________________________________________________________ TestUnitCommandOffset.test_1 ________________________________________________________________

self = <test_unit_command_offset.TestUnitCommandOffset object at 0x7f5071985d10>, bash = <pexpect.pty_spawn.spawn object at 0x7f5071907e00>, functions = None

    def test_1(self, bash, functions):
        assert_complete(bash, 'cmd1 "/tmp/aaa bbb" ')
>       assert_bash_exec(bash, "! complete -p aaa", want_output=None)

/home/gabriel/upstream/bash-completion/test/t/unit/test_unit_command_offset.py:53: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

bash = <pexpect.pty_spawn.spawn object at 0x7f5071907e00>, cmd = '! complete -p aaa', want_output = None, want_newline = True

    def assert_bash_exec(
        bash: pexpect.spawn,
        cmd: str,
        want_output: Optional[bool] = False,
        want_newline=True,
    ) -> str:
        """
        :param want_output: if None, don't care if got output or not
        """
    
        # Send command
        bash.sendline(cmd)
        bash.expect_exact(cmd)
    
        # Find prompt, output is before it
        bash.expect_exact("%s%s" % ("\r\n" if want_newline else "", PS1))
        output = bash.before
    
        # Retrieve exit status
        echo = "echo $?"
        bash.sendline(echo)
        got = bash.expect(
            [
                r"^%s\r\n(\d+)\r\n%s" % (re.escape(echo), re.escape(PS1)),
                PS1,
                pexpect.EOF,
                pexpect.TIMEOUT,
            ]
        )
        status = bash.match.group(1) if got == 0 else "unknown"
    
>       assert status == "0", 'Error running "%s": exit status=%s, output="%s"' % (
            cmd,
            status,
            output,
        )
E       AssertionError: Error running "! complete -p aaa": exit status=1, output="
E         complete -F _comp_complete_minimal aaa"
E       assert '1' == '0'
E         
E         - 0
E         + 1

/home/gabriel/upstream/bash-completion/test/t/conftest.py:422: AssertionError
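For context, the failing check boils down to the following minimal sketch (the test name is hypothetical; it assumes the bash fixture and the assert_bash_exec helper quoted above). Since "! complete -p aaa" exits 0 only while no completion specification is registered for aaa, the output "complete -F _comp_complete_minimal aaa" means the earlier completion run left a stray compspec behind:

    from conftest import assert_bash_exec


    def test_no_compspec_for_aaa(bash):
        # Hypothetical sketch: this passes only while "aaa" has no
        # completion specification registered in the shared bash session.
        # test_1 fails here because a previous completion run registered
        # "complete -F _comp_complete_minimal aaa".
        assert_bash_exec(bash, "! complete -p aaa", want_output=None)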

@akinomyoga
Collaborator

akinomyoga commented Jan 12, 2025

OK, thanks for trying. I haven't actually tested that branch. I'll later take a look at it.

@inconstante
Contributor Author

OK, thanks for trying. I haven't actually tested that branch. I'll later take a look at it.

You're very welcome. By the way, I did not test a branch... I applied your patch on top of master. Maybe I should try your branch instead.

@inconstante
Contributor Author

The branch has the same issue.
