Fix tokenizer: Closing parentheses in the wrong place should not lead to strange behavior

This commit is contained in:
Dave Halter
2019-01-13 14:51:34 +01:00
parent e10802ab09
commit dd1761da96
2 changed files with 16 additions and 1 deletion

@@ -285,3 +285,17 @@ def test_error_token_after_dedent():
        ERRORTOKEN, NAME, NEWLINE, ENDMARKER
    ]
    assert [t.type for t in lst] == expected


def test_brackets_no_indentation():
    """
    There used to be an issue that the parentheses counting would go below
    zero. This should not happen.
    """
    code = dedent("""\
        }
        {
        }
    """)
    lst = _get_token_list(code)
    assert [t.type for t in lst] == [OP, NEWLINE, OP, OP, NEWLINE, ENDMARKER]
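
The bug being tested is a bracket-depth counter dropping below zero when a closing bracket appears with no matching opener. The following is a minimal, hypothetical sketch of that kind of guard (not parso's actual tokenizer code): the counter is clamped at zero so a stray closer cannot make it negative.

```python
# Hypothetical sketch of clamping a bracket-depth counter so that a
# stray closing bracket never pushes it below zero. This is an
# illustration of the idea, not parso's real implementation.
OPENERS = "([{"
CLOSERS = ")]}"

def bracket_depths(code):
    """Return the depth after each bracket character, never below zero."""
    depth = 0
    depths = []
    for ch in code:
        if ch in OPENERS:
            depth += 1
        elif ch in CLOSERS:
            # Guard: a closer with no matching opener keeps the
            # counter at zero instead of going negative.
            depth = max(depth - 1, 0)
        else:
            continue
        depths.append(depth)
    return depths

# Same input shape as the test above: }, {, } on separate lines.
print(bracket_depths("}\n{\n}\n"))  # [0, 1, 0]
```

With the guard, the lone `}` at the start leaves the depth at zero, so the later `{`/`}` pair still balances normally, which is why the test expects plain `OP` tokens rather than errors.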