30 Commits

Author SHA1 Message Date
Dave Halter
28005a5cf6 Upgrade to 0.8.6 2026-02-09 16:40:11 +01:00
Dave Halter
0e774b0fd3 Add a changelog 2026-02-09 16:36:49 +01:00
Dave Halter
cc2c562500 Merge pull request #235 from davidhalter/typing
Typing changes
2026-02-09 15:34:33 +00:00
Dave Halter
341a60b115 Upgrade zuban 2026-02-09 16:32:59 +01:00
Dave Halter
c0f6b96b0c Add a way to debug github CI 2026-02-09 16:31:46 +01:00
Dave Halter
59541564e5 Better dependencies 2026-02-08 00:36:52 +01:00
Dave Halter
a0fa3ae95d Pass older Python versions 2026-02-08 00:33:06 +01:00
Dave Halter
d13000a74c Fix a flake8 issue 2026-02-08 00:31:37 +01:00
Dave Halter
0b6800c7a9 Use zuban instead of Mypy 2026-02-08 00:28:41 +01:00
Dave Halter
59f4282d21 Fix a typing issue 2026-02-04 16:14:13 +01:00
Dave Halter
9eb6675f00 Make the whole code base type checkable 2026-02-04 16:06:59 +01:00
Dave Halter
22da0526b3 Change a few small typing related things 2026-02-04 16:05:26 +01:00
Dave Halter
07fb4584a3 Help the type checker a bit with recursive type definitions 2026-02-04 15:58:38 +01:00
Dave Halter
6fbeec9e2f Use Zuban and therefore check untyped code 2026-02-04 02:55:06 +01:00
Dave Halter
aecfc0e0c4 Avoid duplication of code 2026-02-04 02:29:05 +01:00
Dave Halter
3158571e46 Change a small typing issue 2026-02-04 02:28:45 +01:00
Dave Halter
be9f5a401f Prepare release 0.8.5
2025-08-23 17:12:20 +02:00
Dave Halter
7e4777b775 Merge pull request #234 from A5rocks/future-compatibility
Load newest grammar in face of a future grammar
2025-08-23 15:09:10 +00:00
A5rocks
e99dbdd536 Remove redundant warnings import 2025-08-23 17:00:06 +09:00
A5rocks
e22dc67aa1 Avoid warning 2025-08-23 16:38:08 +09:00
A5rocks
baa3c90d85 Load newest grammar in face of a future grammar 2025-08-22 23:35:46 +09:00
A5rocks
23b1cdf73d Drop Python 3.7 in CI
Update some versions in various places (#233)
2025-08-22 14:03:23 +00:00
Dave Halter
a73af5c709 Fix pip install -e in docs
2025-06-24 12:29:39 +02:00
Jon Crall
9328cffce3 Update classifiers in setup.py (#230)
2025-03-10 22:13:22 +00:00
Thomas A Caswell
f670e6e7dc ENH: add grammar file for py314 (#229)
Following #78 and #220, copied the py313 grammar file.
2024-12-27 10:18:25 +00:00
dheeraj
338a576027 Updated readme installation documentation, now we can copy it easier (#228)
2024-11-24 14:31:05 +00:00
Dave Halter
9ddffca4da Merge pull request #227 from juliangilbey/python3.13-test-fixes
Fix tests so they work with Python 3.12/3.13
2024-11-22 19:52:17 +00:00
Julian Gilbey
06db036e23 Conditionally include failing examples rather than handle them in the testing code 2024-11-22 11:40:13 +00:00
Julian Gilbey
c792ae546c Add Python 3.12 and 3.13 to test matrix 2024-11-22 09:44:49 +00:00
Julian Gilbey
1c01dafc2b Fix tests so they work with Python 3.12/3.13 2024-11-21 09:50:06 +00:00
26 changed files with 386 additions and 108 deletions

View File

@@ -1,6 +1,14 @@
 name: Build
-on: [push, pull_request]
+on:
+  push:
+  pull_request:
+  workflow_call:
+    inputs:
+      debug_ssh_session:
+        required: false
+        type: boolean

 env:
   PYTEST_ADDOPTS: --color=yes
@@ -17,19 +25,24 @@ jobs:
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip setuptools wheel
-          pip install .[qa]
+          pip install .[qa] .[testing]
+      - name: Setup tmate session
+        uses: mxschmitt/action-tmate@v3
+        if: ${{ inputs.debug_ssh_session }}
+        with:
+          limit-access-to-actor: true
       - name: Run Flake8
         # Ignore F401, which are unused imports. flake8 is a primitive tool and is sometimes wrong.
         run: flake8 --extend-ignore F401 parso test/*.py setup.py scripts/
-      - name: Run Mypy
-        run: mypy parso setup.py
+      - name: Run Zuban
+        run: zuban check
   test:
     runs-on: ubuntu-latest
     continue-on-error: ${{ matrix.experimental }}
     strategy:
       fail-fast: false
       matrix:
-        python-version: ['3.7', '3.8', '3.9', '3.10', '3.11']
+        python-version: ['3.8', '3.9', '3.10', '3.11', '3.12', '3.13']
         experimental: [false]
     steps:
       - uses: actions/checkout@v2

.github/workflows/debug_ci.yml (new file, 12 lines)
View File

@@ -0,0 +1,12 @@
name: Debug CI

on:
  workflow_dispatch:

jobs:
  tests:
    uses: ./.github/workflows/tests.yml
    with:
      all_tests: true
      debug_ssh_session: true
    secrets: inherit

View File

@@ -6,6 +6,17 @@ Changelog
 Unreleased
 ++++++++++

+0.8.6 (2026-02-09)
+++++++++++++++++++
+
+- Switch the type checker to Zuban. It's faster and now also checks untyped
+  code.
+
+0.8.5 (2025-08-23)
+++++++++++++++++++
+
+- Add a fallback grammar for Python 3.14+
+
 0.8.4 (2024-04-05)
 ++++++++++++++++++

View File

@@ -68,6 +68,8 @@ Resources
 Installation
 ============

+.. code-block:: bash
+
     pip install parso

 Future

View File

@@ -13,7 +13,7 @@ from parso.utils import parse_version_string
 collect_ignore = ["setup.py"]

-_SUPPORTED_VERSIONS = '3.6', '3.7', '3.8', '3.9', '3.10'
+_SUPPORTED_VERSIONS = '3.6', '3.7', '3.8', '3.9', '3.10', '3.11', '3.12', '3.13', '3.14'

 @pytest.fixture(scope='session')

View File

@@ -16,7 +16,7 @@ From git
 --------

 If you want to install the current development version (master branch)::

-    sudo pip install -e git://github.com/davidhalter/parso.git#egg=parso
+    sudo pip install -e git+https://github.com/davidhalter/parso.git#egg=parso

 Manual installation from a downloaded package (not recommended)

View File

@@ -43,7 +43,7 @@ from parso.grammar import Grammar, load_grammar
 from parso.utils import split_lines, python_bytes_to_unicode

-__version__ = '0.8.4'
+__version__ = '0.8.6'

 def parse(code=None, **kwargs):

View File

@@ -1,6 +1,6 @@
 import hashlib
 import os
-from typing import Generic, TypeVar, Union, Dict, Optional, Any
+from typing import Generic, TypeVar, Union, Dict, Optional, Any, Iterator
 from pathlib import Path

 from parso._compatibility import is_pypy
@@ -8,7 +8,7 @@ from parso.pgen2 import generate_grammar
 from parso.utils import split_lines, python_bytes_to_unicode, \
     PythonVersionInfo, parse_version_string
 from parso.python.diff import DiffParser
-from parso.python.tokenize import tokenize_lines, tokenize
+from parso.python.tokenize import tokenize_lines, tokenize, PythonToken
 from parso.python.token import PythonTokenTypes
 from parso.cache import parser_cache, load_module, try_to_save_module
 from parso.parser import BaseParser
@@ -223,7 +223,7 @@ class PythonGrammar(Grammar):
         )
         self.version_info = version_info

-    def _tokenize_lines(self, lines, **kwargs):
+    def _tokenize_lines(self, lines, **kwargs) -> Iterator[PythonToken]:
         return tokenize_lines(lines, version_info=self.version_info, **kwargs)

     def _tokenize(self, code):
@@ -239,14 +239,22 @@ def load_grammar(*, version: str = None, path: str = None):
     :param str version: A python version string, e.g. ``version='3.8'``.
     :param str path: A path to a grammar file
     """
-    version_info = parse_version_string(version)
+    # NOTE: this (3, 14) should be updated to the latest version parso supports.
+    # (if this doesn't happen, users will get older syntaxes and spurious warnings)
+    passed_version_info = parse_version_string(version)
+    version_info = min(passed_version_info, PythonVersionInfo(3, 14))
+
+    # NOTE: this is commented out until parso properly supports newer Python grammars.
+    # if passed_version_info != version_info:
+    #     warnings.warn('parso does not support %s.%s yet.' % (
+    #         passed_version_info.major, passed_version_info.minor
+    #     ))

     file = path or os.path.join(
         'python',
         'grammar%s%s.txt' % (version_info.major, version_info.minor)
     )

+    global _loaded_grammars
     path = os.path.join(os.path.dirname(__file__), file)
     try:
         return _loaded_grammars[path]
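
With this change, requesting a Python version newer than the newest bundled grammar no longer raises NotImplementedError; parso quietly clamps to the 3.14 grammar. A minimal sketch of the resulting behavior (the '3.15' version string is only an illustration of a not-yet-supported release):

    import parso

    # Clamped to the newest bundled grammar instead of raising.
    grammar = parso.load_grammar(version='3.15')
    print(grammar.version_info)      # PythonVersionInfo(major=3, minor=14)

    module = grammar.parse('x = 1\n')
    print(module.children[0].type)   # 'simple_stmt'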

View File

@@ -1,8 +1,11 @@
 from contextlib import contextmanager
-from typing import Dict, List
+from typing import Dict, List, Any

 class _NormalizerMeta(type):
+    rule_value_classes: Any
+    rule_type_classes: Any
+
     def __new__(cls, name, bases, dct):
         new_cls = type.__new__(cls, name, bases, dct)
         new_cls.rule_value_classes = {}
@@ -109,9 +112,6 @@ class NormalizerConfig:
     normalizer_class = Normalizer

     def create_normalizer(self, grammar):
-        if self.normalizer_class is None:
-            return None
-
         return self.normalizer_class(grammar, self)
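
Declaring rule_value_classes and rule_type_classes as annotations on the metaclass is the usual way to tell a type checker about attributes that are only created dynamically in __new__. A generic mini-example of the pattern (independent of parso's classes):

    from typing import Any

    class Meta(type):
        registry: Any  # declared for the checker; assigned below

        def __new__(cls, name, bases, dct):
            new_cls = super().__new__(cls, name, bases, dct)
            new_cls.registry = {}  # the attribute actually appears here
            return new_cls

    class Base(metaclass=Meta):
        pass

    assert Base.registry == {}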

View File

@@ -83,14 +83,14 @@ class DFAState(Generic[_TokenTypeT]):
         self.from_rule = from_rule
         self.nfa_set = nfa_set
         # map from terminals/nonterminals to DFAState
-        self.arcs: Mapping[str, DFAState] = {}
+        self.arcs: dict[str, DFAState] = {}
         # In an intermediary step we set these nonterminal arcs (which has the
         # same structure as arcs). These don't contain terminals anymore.
-        self.nonterminal_arcs: Mapping[str, DFAState] = {}
+        self.nonterminal_arcs: dict[str, DFAState] = {}

         # Transitions are basically the only thing that the parser is using
         # with is_final. Everyting else is purely here to create a parser.
-        self.transitions: Mapping[Union[_TokenTypeT, ReservedString], DFAPlan] = {}
+        self.transitions: dict[Union[_TokenTypeT, ReservedString], DFAPlan] = {}
         self.is_final = final in nfa_set

     def add_arc(self, next_, label):
@@ -261,7 +261,7 @@ def generate_grammar(bnf_grammar: str, token_namespace) -> Grammar:
         if start_nonterminal is None:
             start_nonterminal = nfa_a.from_rule

-    reserved_strings: Mapping[str, ReservedString] = {}
+    reserved_strings: dict[str, ReservedString] = {}
     for nonterminal, dfas in rule_to_dfas.items():
         for dfa_state in dfas:
             for terminal_or_nonterminal, next_dfa in dfa_state.arcs.items():

View File

@@ -881,6 +881,6 @@ class _NodesTree:
             end_pos[0] += len(lines) - 1
             end_pos[1] = len(lines[-1])

-        endmarker = EndMarker('', tuple(end_pos), self.prefix + self._prefix_remainder)
+        endmarker = EndMarker('', (end_pos[0], end_pos[1]), self.prefix + self._prefix_remainder)
         endmarker.parent = self._module
         self._module.children.append(endmarker)

parso/python/grammar314.txt (new file, 169 lines)
View File

@@ -0,0 +1,169 @@
# Grammar for Python
# NOTE WELL: You should also follow all the steps listed at
# https://devguide.python.org/grammar/
# Start symbols for the grammar:
# single_input is a single interactive statement;
# file_input is a module or sequence of commands read from an input file;
# eval_input is the input for the eval() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
file_input: stmt* ENDMARKER
eval_input: testlist NEWLINE* ENDMARKER
decorator: '@' namedexpr_test NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef | async_funcdef)
async_funcdef: 'async' funcdef
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: (
(tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [',' [ tfpdef ['=' test] (
',' tfpdef ['=' test])* ([',' [
'*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
| '**' tfpdef [',']]])
| '*' [tfpdef] (',' tfpdef ['=' test])* ([',' ['**' tfpdef [',']]])
| '**' tfpdef [',']]] )
| (tfpdef ['=' test] (',' tfpdef ['=' test])* [',' [
'*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
| '**' tfpdef [',']]]
| '*' [tfpdef] (',' tfpdef ['=' test])* [',' ['**' tfpdef [',']]]
| '**' tfpdef [','])
)
tfpdef: NAME [':' test]
varargslist: vfpdef ['=' test ](',' vfpdef ['=' test])* ',' '/' [',' [ (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [
'*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']]]
| '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']) ]] | (vfpdef ['=' test] (',' vfpdef ['=' test])* [',' [
'*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']]]
| '*' [vfpdef] (',' vfpdef ['=' test])* [',' ['**' vfpdef [',']]]
| '**' vfpdef [',']
)
vfpdef: NAME
stmt: simple_stmt | compound_stmt | NEWLINE
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*)
annassign: ':' test ['=' (yield_expr|testlist_star_expr)]
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
# For normal and annotated assignments, additional restrictions enforced by the interpreter
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist_star_expr]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt
async_stmt: 'async' (funcdef | with_stmt | for_stmt)
if_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]
while_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test ['as' NAME]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
namedexpr_test: test [':=' test]
test: or_test ['if' or_test 'else' test] | lambdef
lambdef: 'lambda' [varargslist] ':' test
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
# <> isn't actually a valid comparison operator in Python. It's here for the
# sake of a __future__ import described in PEP 401 (which really works :-)
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'@'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom_expr ['**' factor]
atom_expr: ['await'] atom trailer*
atom: ('(' [yield_expr|testlist_comp] ')' |
'[' [testlist_comp] ']' |
'{' [dictorsetmaker] '}' |
NAME | NUMBER | strings | '...' | 'None' | 'True' | 'False')
testlist_comp: (namedexpr_test|star_expr) ( comp_for | (',' (namedexpr_test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test [':=' test] | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( ((test ':' test | '**' expr)
(comp_for | (',' (test ':' test | '**' expr))* [','])) |
((test [':=' test] | star_expr)
(comp_for | (',' (test [':=' test] | star_expr))* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: argument (',' argument)* [',']
# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
# "test '=' test" is really "keyword '=' test", but we have no such token.
# These need to be in a single rule to avoid grammar that is ambiguous
# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,
# we explicitly match '*' here, too, to give it proper precedence.
# Illegal combinations and orderings are blocked in ast.c:
# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
test ':=' test |
test '=' test |
'**' test |
'*' test )
comp_iter: comp_for | comp_if
sync_comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_for: ['async'] sync_comp_for
comp_if: 'if' or_test [comp_iter]
# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME
yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist_star_expr
strings: (STRING | fstring)+
fstring: FSTRING_START fstring_content* FSTRING_END
fstring_content: FSTRING_STRING | fstring_expr
fstring_conversion: '!' NAME
fstring_expr: '{' (testlist_comp | yield_expr) ['='] [ fstring_conversion ] [ fstring_format_spec ] '}'
fstring_format_spec: ':' fstring_content*

View File

@@ -676,7 +676,7 @@ class PEP8Normalizer(ErrorFinder):
             elif leaf.parent.type == 'function' and leaf.parent.name == leaf:
                 self.add_issue(leaf, 743, message % 'function')
             else:
-                self.add_issuadd_issue(741, message % 'variables', leaf)
+                self.add_issue(741, message % 'variables', leaf)
         elif leaf.value == ':':
             if isinstance(leaf.parent, (Flow, Scope)) and leaf.parent.type != 'lambdef':
                 next_leaf = leaf.get_next_leaf()
@@ -764,4 +764,4 @@ class BlankLineAtEnd(Rule):
     message = 'Blank line at end of file'

     def is_issue(self, leaf):
-        return self._newline_count >= 2
+        return False  # TODO return self._newline_count >= 2

View File

@@ -16,7 +16,7 @@ import re
 import itertools as _itertools
 from codecs import BOM_UTF8
 from typing import NamedTuple, Tuple, Iterator, Iterable, List, Dict, \
-    Pattern, Set
+    Pattern, Set, Any

 from parso.python.token import PythonTokenTypes
 from parso.utils import split_lines, PythonVersionInfo, parse_version_string
@@ -47,12 +47,12 @@ class TokenCollection(NamedTuple):
     endpats: Dict[str, Pattern]
     whitespace: Pattern
     fstring_pattern_map: Dict[str, str]
-    always_break_tokens: Tuple[str]
+    always_break_tokens: Set[str]

 BOM_UTF8_STRING = BOM_UTF8.decode('utf-8')

-_token_collection_cache: Dict[PythonVersionInfo, TokenCollection] = {}
+_token_collection_cache: Dict[Tuple[int, int], TokenCollection] = {}

 def group(*choices, capture=False, **kwargs):
@@ -249,7 +249,7 @@ class Token(NamedTuple):
 class PythonToken(Token):
     def __repr__(self):
         return ('TokenInfo(type=%s, string=%r, start_pos=%r, prefix=%r)' %
-                self._replace(type=self.type.name))
+                self._replace(type=self.type.name))  # type: ignore[arg-type]

 class FStringNode:
@@ -257,7 +257,7 @@ class FStringNode:
         self.quote = quote
         self.parentheses_count = 0
         self.previous_lines = ''
-        self.last_string_start_pos = None
+        self.last_string_start_pos: Any = None
         # In the syntax there can be multiple format_spec's nested:
         # {x:{y:3}}
         self.format_spec_count = 0
@@ -340,7 +340,7 @@ def _find_fstring_string(endpats, fstring_stack, line, lnum, pos):
 def tokenize(
-    code: str, *, version_info: PythonVersionInfo, start_pos: Tuple[int, int] = (1, 0)
+    code: str, *, version_info: Tuple[int, int], start_pos: Tuple[int, int] = (1, 0)
 ) -> Iterator[PythonToken]:
     """Generate tokens from a the source code (string)."""
     lines = split_lines(code, keepends=True)
@@ -363,7 +363,7 @@ def _print_tokens(func):
 def tokenize_lines(
     lines: Iterable[str],
     *,
-    version_info: PythonVersionInfo,
+    version_info: Tuple[int, int],
     indents: List[int] = None,
     start_pos: Tuple[int, int] = (1, 0),
     is_first_token=True,
@@ -444,7 +444,7 @@ def tokenize_lines(
                 if string:
                     yield PythonToken(
                         FSTRING_STRING, string,
-                        tos.last_string_start_pos,
+                        tos.last_string_start_pos,  # type: ignore[arg-type]
                         # Never has a prefix because it can start anywhere and
                         # include whitespace.
                         prefix=''
@@ -496,8 +496,8 @@ def tokenize_lines(
                 initial = token[0]
             else:
                 match = whitespace.match(line, pos)
-                initial = line[match.end()]
-                start = match.end()
+                initial = line[match.end()]  # type: ignore[union-attr]
+                start = match.end()  # type: ignore[union-attr]
                 spos = (lnum, start)

             if new_line and initial not in '\r\n#' and (initial != '\\' or pseudomatch is None):
@@ -512,12 +512,12 @@ def tokenize_lines(
             if not pseudomatch:  # scan for tokens
                 match = whitespace.match(line, pos)
                 if new_line and paren_level == 0 and not fstring_stack:
-                    yield from dedent_if_necessary(match.end())
-                pos = match.end()
+                    yield from dedent_if_necessary(match.end())  # type: ignore[union-attr]
+                pos = match.end()  # type: ignore[union-attr]
                 new_line = False
                 yield PythonToken(
                     ERRORTOKEN, line[pos], (lnum, pos),
-                    additional_prefix + match.group(0)
+                    additional_prefix + match.group(0)  # type: ignore[union-attr]
                 )
                 additional_prefix = ''
                 pos += 1
@@ -586,7 +586,7 @@ def tokenize_lines(
             # backslash and is continued.
             contstr_start = lnum, start
             endprog = (endpats.get(initial) or endpats.get(token[1])
-                       or endpats.get(token[2]))
+                       or endpats.get(token[2]))  # type: ignore[assignment]
             contstr = line[start:]
             contline = line
             break
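
Because version_info is now annotated as a plain (major, minor) tuple, the tokenizer can be called without building a PythonVersionInfo first; existing callers keep working since PythonVersionInfo is itself a NamedTuple of two ints. A small usage sketch (not part of the diff):

    from parso.python.tokenize import tokenize

    for token in tokenize('x = 1\n', version_info=(3, 13)):
        print(token.type, repr(token.string))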

View File

@@ -43,11 +43,8 @@ Parser Tree Classes
 """

 import re
-try:
-    from collections.abc import Mapping
-except ImportError:
-    from collections import Mapping
-from typing import Tuple
+from collections.abc import Mapping
+from typing import Tuple, Any

 from parso.tree import Node, BaseNode, Leaf, ErrorNode, ErrorLeaf, search_ancestor  # noqa
 from parso.python.prefix import split_prefix
@@ -70,6 +67,9 @@ _IMPORTS = set(['import_name', 'import_from'])
 class DocstringMixin:
     __slots__ = ()
+    type: str
+    children: "list[Any]"
+    parent: Any

     def get_doc_node(self):
         """
@@ -101,6 +101,7 @@ class PythonMixin:
     Some Python specific utilities.
     """
     __slots__ = ()
+    children: "list[Any]"

     def get_name_of_position(self, position):
         """
@@ -219,7 +220,7 @@ class Name(_LeafWithoutNewlines):
             type_ = node.type
             if type_ in ('funcdef', 'classdef'):
-                if self == node.name:
+                if self == node.name:  # type: ignore[union-attr]
                     return node
             return None
@@ -232,7 +233,7 @@ class Name(_LeafWithoutNewlines):
             if node.type == 'suite':
                 return None
             if node.type in _GET_DEFINITION_TYPES:
-                if self in node.get_defined_names(include_setitem):
+                if self in node.get_defined_names(include_setitem):  # type: ignore[attr-defined]
                     return node
                 if import_name_always and node.type in _IMPORTS:
                     return node
@@ -296,6 +297,7 @@ class FStringEnd(PythonLeaf):
 class _StringComparisonMixin:
     __slots__ = ()
+    value: Any

     def __eq__(self, other):
         """
@@ -368,7 +370,7 @@ class Scope(PythonBaseNode, DocstringMixin):
     def __repr__(self):
         try:
-            name = self.name.value
+            name = self.name.value  # type: ignore[attr-defined]
         except AttributeError:
             name = ''
@@ -794,6 +796,8 @@ class WithStmt(Flow):
 class Import(PythonBaseNode):
     __slots__ = ()
+    get_paths: Any
+    _aliases: Any

     def get_path_for_name(self, name):
         """
@@ -818,6 +822,9 @@ class Import(PythonBaseNode):
     def is_star_import(self):
         return self.children[-1] == '*'

+    def get_defined_names(self):
+        raise NotImplementedError("Use ImportFrom or ImportName")
+

 class ImportFrom(Import):
     type = 'import_from'

View File

@@ -14,12 +14,7 @@ def search_ancestor(node: 'NodeOrLeaf', *node_types: str) -> 'Optional[BaseNode]
     :param node: The ancestors of this node will be checked.
     :param node_types: type names that are searched for.
     """
-    n = node.parent
-    while n is not None:
-        if n.type in node_types:
-            return n
-        n = n.parent
-    return None
+    return node.search_ancestor(*node_types)

 class NodeOrLeaf:
@@ -371,7 +366,7 @@ class BaseNode(NodeOrLeaf):
     """
     __slots__ = ('children',)

-    def __init__(self, children: List[NodeOrLeaf]) -> None:
+    def __init__(self, children) -> None:
         self.children = children
         """
         A list of :class:`NodeOrLeaf` child nodes.
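
The free function now simply delegates to NodeOrLeaf.search_ancestor, so both spellings return the identical node. A quick sketch:

    import parso
    from parso.tree import search_ancestor

    module = parso.parse('def f():\n    x = 1\n')
    funcdef = module.children[0]
    name_leaf = funcdef.name  # the Name leaf for 'f'

    # The module-level helper and the method agree.
    assert search_ancestor(name_leaf, 'funcdef') is name_leaf.search_ancestor('funcdef')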

View File

@@ -2,7 +2,7 @@ import re
 import sys
 from ast import literal_eval
 from functools import total_ordering
-from typing import NamedTuple, Sequence, Union
+from typing import NamedTuple, Union

 # The following is a list in Python that are line breaks in str.splitlines, but
 # not in Python. In Python only \r (Carriage Return, 0xD) and \n (Line Feed,
@@ -26,7 +26,7 @@ class Version(NamedTuple):
     micro: int

-def split_lines(string: str, keepends: bool = False) -> Sequence[str]:
+def split_lines(string: str, keepends: bool = False) -> "list[str]":
     r"""
     Intended for Python code. In contrast to Python's :py:meth:`str.splitlines`,
     looks at form feeds and other special characters as normal text.  Just
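
The new "list[str]" annotation just documents what split_lines already returned. For illustration, the form-feed behavior its docstring describes:

    from parso.utils import split_lines

    # parso treats a form feed as normal text, unlike str.splitlines.
    assert split_lines('a\n\x0cb\n') == ['a', '\x0cb', '']
    assert 'a\n\x0cb\n'.splitlines() == ['a', '', 'b']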

pyproject.toml (new file, 14 lines)
View File

@@ -0,0 +1,14 @@
[tool.zuban]
enable_error_code = ["ignore-without-code"]
disallow_subclassing_any = true

# Avoid creating future gotchas emerging from bad typing
warn_redundant_casts = true
warn_unused_ignores = true
warn_unused_configs = true
warn_unreachable = true
strict_equality = true

implicit_optional = true

exclude = "^test/normalizer_issue_files"

View File

@@ -10,3 +10,6 @@ norecursedirs = .* docs scripts normalizer_issue_files build
 # fine as long as we are using `clean_jedi_cache` as a session scoped
 # fixture.
 usefixtures = clean_parso_cache
+
+# Disallow warnings
+filterwarnings = error

View File

@@ -15,11 +15,11 @@ Options:
 import cProfile

 from docopt import docopt
-from jedi.parser.python import load_grammar
-from jedi.parser.diff import DiffParser
-from jedi.parser.python import ParserWithRecovery
-from jedi.common import splitlines
-import jedi
+from jedi.parser.python import load_grammar  # type: ignore[import-not-found]
+from jedi.parser.diff import DiffParser  # type: ignore[import-not-found]
+from jedi.parser.python import ParserWithRecovery  # type: ignore[import-not-found]
+from jedi.common import splitlines  # type: ignore[import-not-found]
+import jedi  # type: ignore[import-not-found]

 def run(parser, lines):

View File

@@ -10,20 +10,3 @@ ignore =
     E226,
     # line break before binary operator
     W503,
-
-[mypy]
-show_error_codes = true
-enable_error_code = ignore-without-code
-disallow_subclassing_any = True
-
-# Avoid creating future gotchas emerging from bad typing
-warn_redundant_casts = True
-warn_unused_ignores = True
-warn_return_any = True
-warn_unused_configs = True
-warn_unreachable = True
-strict_equality = True
-
-no_implicit_optional = False

View File

@@ -40,6 +40,11 @@ setup(
         'Programming Language :: Python :: 3.7',
         'Programming Language :: Python :: 3.8',
         'Programming Language :: Python :: 3.9',
+        'Programming Language :: Python :: 3.10',
+        'Programming Language :: Python :: 3.11',
+        'Programming Language :: Python :: 3.12',
+        'Programming Language :: Python :: 3.13',
+        'Programming Language :: Python :: 3.14',
         'Topic :: Software Development :: Libraries :: Python Modules',
         'Topic :: Text Editors :: Integrated Development Environments (IDE)',
         'Topic :: Utilities',
@@ -53,9 +58,8 @@ setup(
         'qa': [
             # Latest version which supports Python 3.6
             'flake8==5.0.4',
-            # Latest version which supports Python 3.6
-            'mypy==0.971',
             # Arbitrary pins, latest at the time of pinning
+            'zuban==0.5.1',
             'types-setuptools==67.2.0.1',
         ],
     },

View File

@@ -29,7 +29,6 @@ FAILING_EXAMPLES = [
     'from foo import a,',
     'from __future__ import whatever',
     'from __future__ import braces',
-    'from .__future__ import whatever',
     'def f(x=3, y): pass',
     'lambda x=3, y: x',
     '__debug__ = 1',
@@ -216,7 +215,6 @@ FAILING_EXAMPLES = [
     'f"{\'\\\'}"',
     'f"{#}"',
     "f'{1!b}'",
-    "f'{1:{5:{3}}}'",
     "f'{'",
     "f'{'",
     "f'}'",
@@ -227,8 +225,6 @@ FAILING_EXAMPLES = [
     "f'{1;1}'",
     "f'{a;}'",
     "f'{b\"\" \"\"}'",
-    # f-string expression part cannot include a backslash
-    r'''f"{'\n'}"''',

     'async def foo():\n yield x\n return 1',
     'async def foo():\n yield x\n return 1',
@@ -413,3 +409,17 @@ if sys.version_info[:2] >= (3, 8):
     FAILING_EXAMPLES += [
         "f'{1=!b}'",
     ]
+
+if sys.version_info[:2] < (3, 12):
+    FAILING_EXAMPLES += [
+        # f-string expression part cannot include a backslash before 3.12
+        r'''f"{'\n'}"''',
+        # this compiles successfully but fails when evaluated in 3.12
+        "f'{1:{5:{3}}}'",
+    ]
+
+if sys.version_info[:2] < (3, 13):
+    # this compiles successfully but fails when evaluated in 3.13
+    FAILING_EXAMPLES += [
+        'from .__future__ import whatever',
+    ]

View File

@@ -4,15 +4,20 @@ from parso import utils

 def test_load_inexisting_grammar():
-    # This version shouldn't be out for a while, but if we ever do, wow!
-    with pytest.raises(NotImplementedError):
-        load_grammar(version='15.8')
-    # The same is true for very old grammars (even though this is probably not
-    # going to be an issue.
+    # We support future grammars assuming future compatibility,
+    # but we don't know how to parse old grammars.
     with pytest.raises(NotImplementedError):
         load_grammar(version='1.5')

+def test_load_grammar_uses_older_syntax():
+    load_grammar(version='4.0')
+
+def test_load_grammar_doesnt_warn(each_version):
+    load_grammar(version=each_version)
+
 @pytest.mark.parametrize(('string', 'result'), [
     ('2', (2, 7)), ('3', (3, 6)), ('1.1', (1, 1)), ('1.1.1', (1, 1)), ('300.1.31', (300, 1))
 ])
@@ -28,4 +33,4 @@ def test_invalid_grammar_version(string):
 def test_grammar_int_version():
     with pytest.raises(TypeError):
-        load_grammar(version=3.8)
+        load_grammar(version=3.8)  # type: ignore

View File

@@ -117,7 +117,7 @@ def test_param_splitting(each_version):

 def test_unicode_string():
-    s = tree.String(None, '', (0, 0))
+    s = tree.String('', (0, 0))
     assert repr(s)  # Should not raise an Error!

View File

@@ -118,25 +118,57 @@ def _get_actual_exception(code):
         assert False, "The piece of code should raise an exception."

     # SyntaxError
-    if wanted == 'SyntaxError: assignment to keyword':
+    # Some errors have changed error message in later versions of Python,
+    # and we give a translation table here. We deal with special cases
+    # below.
+    translations = {
+        'SyntaxError: f-string: unterminated string':
+            'SyntaxError: EOL while scanning string literal',
+        "SyntaxError: f-string: expecting '}'":
+            'SyntaxError: EOL while scanning string literal',
+        'SyntaxError: f-string: empty expression not allowed':
+            'SyntaxError: invalid syntax',
+        "SyntaxError: f-string expression part cannot include '#'":
+            'SyntaxError: invalid syntax',
+        "SyntaxError: f-string: single '}' is not allowed":
+            'SyntaxError: invalid syntax',
+        'SyntaxError: cannot use starred expression here':
+            "SyntaxError: can't use starred expression here",
+        'SyntaxError: f-string: cannot use starred expression here':
+            "SyntaxError: f-string: can't use starred expression here",
+        'SyntaxError: unterminated string literal':
+            'SyntaxError: EOL while scanning string literal',
+        'SyntaxError: parameter without a default follows parameter with a default':
+            'SyntaxError: non-default argument follows default argument',
+        "SyntaxError: 'yield from' outside function":
+            "SyntaxError: 'yield' outside function",
+        "SyntaxError: f-string: valid expression required before '}'":
+            'SyntaxError: invalid syntax',
+        "SyntaxError: '{' was never closed":
+            'SyntaxError: invalid syntax',
+        "SyntaxError: f-string: invalid conversion character 'b': expected 's', 'r', or 'a'":
+            "SyntaxError: f-string: invalid conversion character: expected 's', 'r', or 'a'",
+        "SyntaxError: (value error) Invalid format specifier ' 5' for object of type 'int'":
+            'SyntaxError: f-string: expressions nested too deeply',
+        "SyntaxError: f-string: expecting a valid expression after '{'":
+            'SyntaxError: f-string: invalid syntax',
+        "SyntaxError: f-string: expecting '=', or '!', or ':', or '}'":
+            'SyntaxError: f-string: invalid syntax',
+    }
+    if wanted in translations:
+        wanted = translations[wanted]
+    elif wanted == 'SyntaxError: assignment to keyword':
         return [wanted, "SyntaxError: can't assign to keyword",
                 'SyntaxError: cannot assign to __debug__'], line_nr
-    elif wanted == 'SyntaxError: f-string: unterminated string':
-        wanted = 'SyntaxError: EOL while scanning string literal'
     elif wanted == 'SyntaxError: f-string expression part cannot include a backslash':
         return [
             wanted,
             "SyntaxError: EOL while scanning string literal",
             "SyntaxError: unexpected character after line continuation character",
         ], line_nr
-    elif wanted == "SyntaxError: f-string: expecting '}'":
-        wanted = 'SyntaxError: EOL while scanning string literal'
-    elif wanted == 'SyntaxError: f-string: empty expression not allowed':
-        wanted = 'SyntaxError: invalid syntax'
-    elif wanted == "SyntaxError: f-string expression part cannot include '#'":
-        wanted = 'SyntaxError: invalid syntax'
-    elif wanted == "SyntaxError: f-string: single '}' is not allowed":
-        wanted = 'SyntaxError: invalid syntax'
     elif "Maybe you meant '==' instead of '='?" in wanted:
         wanted = wanted.removesuffix(" here. Maybe you meant '==' instead of '='?")
     elif re.match(
@@ -148,18 +180,28 @@ def _get_actual_exception(code):
         wanted,
     ):
         wanted = 'SyntaxError: EOF while scanning triple-quoted string literal'
-    elif wanted == 'SyntaxError: cannot use starred expression here':
-        wanted = "SyntaxError: can't use starred expression here"
-    elif wanted == 'SyntaxError: f-string: cannot use starred expression here':
-        wanted = "SyntaxError: f-string: can't use starred expression here"
     elif re.match(
         r"IndentationError: expected an indented block after '[^']*' statement on line \d",
         wanted,
     ):
         wanted = 'IndentationError: expected an indented block'
-    elif wanted == 'SyntaxError: unterminated string literal':
-        wanted = 'SyntaxError: EOL while scanning string literal'
-    return [wanted], line_nr
+    # The following two errors are produced for both some f-strings and
+    # some non-f-strings in Python 3.13:
+    elif wanted == "SyntaxError: can't use starred expression here":
+        wanted = [
+            "SyntaxError: can't use starred expression here",
+            "SyntaxError: f-string: can't use starred expression here"
+        ]
+    elif wanted == 'SyntaxError: cannot mix bytes and nonbytes literals':
+        wanted = [
+            'SyntaxError: cannot mix bytes and nonbytes literals',
+            'SyntaxError: f-string: cannot mix bytes and nonbytes literals'
+        ]
+
+    if isinstance(wanted, list):
+        return wanted, line_nr
+    else:
+        return [wanted], line_nr

 def test_default_except_error_postition():