Mirror of https://github.com/davidhalter/jedi.git (synced 2026-02-24 01:28:27 +08:00)

Merge branch 'dev' into unicode_tokenize_fix2

Conflicts:
    AUTHORS.txt
AUTHORS.txt
@@ -29,6 +29,9 @@ Syohei Yoshida (@syohex) <syohex@gmail.com>
 ppalucky (@ppalucky)
 immerrr (@immerrr) <immerrr@gmail.com>
 Albertas Agejevas (@alga)
 Savor d'Isavano (@KenetJervet) <newelevenken@163.com>
+Phillip Berndt (@phillipberndt) <phillip.berndt@gmail.com>
+Ian Lee (@IanLee1521) <IanLee1521@gmail.com>
+Farkhad Khatamov (@hatamov) <comsgn@gmail.com>

 Note: (@user) means a github user name.
MANIFEST.in
@@ -8,6 +8,7 @@ include conftest.py
 include pytest.ini
 include tox.ini
 include jedi/evaluate/compiled/fake/*.pym
+include jedi/parser/grammar*.txt
 recursive-include test *
 recursive-include docs *
 recursive-exclude * *.pyc
README.rst
@@ -34,7 +34,7 @@ me.
 Jedi can be used with the following editors:

 - Vim (jedi-vim_, YouCompleteMe_)
-- Emacs (Jedi.el_, elpy_, anaconda-mode_)
+- Emacs (Jedi.el_, elpy_, anaconda-mode_, ycmd_)
 - Sublime Text (SublimeJEDI_ [ST2 + ST3], anaconda_ [only ST3])
 - SynWrite_
 - TextMate_ (Not sure if it's actually working)
@@ -65,8 +65,8 @@ There is also support for goto and renaming.
 Get the latest version from `github <https://github.com/davidhalter/jedi>`_
 (master branch should always be kind of stable/working).

-Docs are available at `https://jedi.jedidjah.ch/
-<https://jedi.jedidjah.ch/>`_. Pull requests with documentation
+Docs are available at `https://jedi.readthedocs.org/en/latest/
+<https://jedi.readthedocs.org/en/latest/>`_. Pull requests with documentation
 enhancements and/or fixes are awesome and most welcome. Jedi uses `semantic
 versioning <http://semver.org/>`_.
@@ -81,7 +81,7 @@ information about how to make it work with your editor, refer to the
 corresponding documentation.

 You don't want to use ``pip``? Please refer to the `manual
-<https://jedi.jedidjah.ch/en/latest/docs/installation.html>`_.
+<https://jedi.readthedocs.org/en/latest/docs/installation.html>`_.


 Feature Support and Caveats
@@ -89,21 +89,21 @@ Feature Support and Caveats

 Jedi really understands your Python code. For a comprehensive list what Jedi
 can do, see: `Features
-<https://jedi.jedidjah.ch/en/latest/docs/features.html>`_. A list of
+<https://jedi.readthedocs.org/en/latest/docs/features.html>`_. A list of
 caveats can be found on the same page.

-You can run Jedi on cPython 2.6, 2.7, 3.2 or 3.3, but it should also
+You can run Jedi on cPython 2.6, 2.7, 3.2, 3.3 or 3.4, but it should also
 understand/parse code older than those versions.

 Tips on how to use Jedi efficiently can be found `here
-<https://jedi.jedidjah.ch/en/latest/docs/recipes.html>`_.
+<https://jedi.readthedocs.org/en/latest/docs/recipes.html>`_.


 API for IDEs
 ============

 It's very easy to create an editor plugin that uses Jedi. See `Plugin API
-<https://jedi.jedidjah.ch/en/latest/docs/plugin-api.html>`_ for more
+<https://jedi.readthedocs.org/en/latest/docs/plugin-api.html>`_ for more
 information.

 If you have specific questions, please add an issue or ask on `stackoverflow
@@ -114,7 +114,7 @@ Development
 ===========

 There's a pretty good and extensive `development documentation
-<https://jedi.jedidjah.ch/en/latest/docs/development.html>`_.
+<https://jedi.readthedocs.org/en/latest/docs/development.html>`_.


 Testing
@@ -137,7 +137,7 @@ Tests are also run automatically on `Travis CI
 <https://travis-ci.org/davidhalter/jedi/>`_.

 For more detailed information visit the `testing documentation
-<https://jedi.jedidjah.ch/en/latest/docs/testing.html>`_
+<https://jedi.readthedocs.org/en/latest/docs/testing.html>`_


 .. _jedi-vim: https://github.com/davidhalter/jedi-vim
@@ -145,6 +145,7 @@ For more detailed information visit the `testing documentation
 .. _Jedi.el: https://github.com/tkf/emacs-jedi
 .. _elpy: https://github.com/jorgenschaefer/elpy
 .. _anaconda-mode: https://github.com/proofit404/anaconda-mode
+.. _ycmd: https://github.com/abingham/emacs-ycmd
 .. _sublimejedi: https://github.com/srusskih/SublimeJEDI
 .. _anaconda: https://github.com/DamnWidget/anaconda
 .. _SynWrite: http://uvviewsoft.com/synjedi/
docs/docs/installation.rst
@@ -30,18 +30,20 @@ System-wide installation via a package manager
 Arch Linux
 ~~~~~~~~~~

-You can install |jedi| directly from official AUR packages:
+You can install |jedi| directly from official Arch Linux packages:

-- `python-jedi <https://aur.archlinux.org/packages/python-jedi/>`__ (Python 3)
-- `python2-jedi <https://aur.archlinux.org/packages/python2-jedi/>`__ (Python 2)
+- `python-jedi <https://www.archlinux.org/packages/community/any/python-jedi/>`__
+  (Python 3)
+- `python2-jedi <https://www.archlinux.org/packages/community/any/python2-jedi/>`__
+  (Python 2)

 The specified Python version just refers to the *runtime environment* for
 |jedi|. Use the Python 2 version if you're running vim (or whatever editor you
 use) under Python 2. Otherwise, use the Python 3 version. But whatever version
 you choose, both are able to complete both Python 2 and 3 *code*.

-(There is also a packaged version of the vim plugin available: `vim-jedi at AUR
-<https://aur.archlinux.org/packages/vim-jedi/>`__.)
+(There is also a packaged version of the vim plugin available: `vim-jedi at
+Arch Linux <https://www.archlinux.org/packages/community/any/vim-jedi/>`__.)

 Debian
 ~~~~~~
jedi/__init__.py
@@ -34,7 +34,7 @@ As you see Jedi is pretty simple and allows you to concentrate on writing a
 good text editor, while still having very good IDE features for Python.
 """

-__version__ = '0.8.1-final0'
+__version__ = '0.9.0'

 from jedi.api import Script, Interpreter, NotFoundError, set_debug_function
 from jedi.api import preload_module, defined_names, names
jedi/_compatibility.py
@@ -65,23 +65,6 @@ or the name of the module if it is a builtin one and a boolean indicating
 if the module is contained in a package.
 """

-# next was defined in python 2.6, in python 3 obj.next won't be possible
-# anymore
-try:
-    next = next
-except NameError:
-    _raiseStopIteration = object()
-
-    def next(iterator, default=_raiseStopIteration):
-        if not hasattr(iterator, 'next'):
-            raise TypeError("not an iterator")
-        try:
-            return iterator.next()
-        except StopIteration:
-            if default is _raiseStopIteration:
-                raise
-            else:
-                return default
-
 # unicode function
 try:
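Note: the builtin ``next`` has shipped since CPython 2.6 and already supports
the default argument, which is presumably why the shim can simply go::

    >>> it = iter([1])
    >>> next(it)
    1
    >>> next(it, 'default')
    'default'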
@@ -125,18 +108,6 @@ Usage::

 """

-# hasattr function used because python
-if is_py3:
-    hasattr = hasattr
-else:
-    def hasattr(obj, name):
-        try:
-            getattr(obj, name)
-            return True
-        except AttributeError:
-            return False
-
-
 class Python3Method(object):
     def __init__(self, func):
         self.func = func
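Note: this fallback existed because Python 2's builtin ``hasattr`` swallows
*all* exceptions, not just ``AttributeError``; Python 3's builtin only catches
``AttributeError``, so the shim is redundant there. A sketch of the
difference::

    class C(object):
        @property
        def attr(self):
            raise ValueError()

    hasattr(C(), 'attr')  # Python 2 builtin: False; Python 3: raises ValueError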
@@ -197,3 +168,15 @@ try:
     from itertools import zip_longest
 except ImportError:
     from itertools import izip_longest as zip_longest  # Python 2
+
+
+def no_unicode_pprint(dct):
+    """
+    Python 2/3 dict __repr__ may be different, because of unicode differens
+    (with or without a `u` prefix). Normally in doctests we could use `pprint`
+    to sort dicts and check for equality, but here we have to write a separate
+    function to do that.
+    """
+    import pprint
+    s = pprint.pformat(dct)
+    print(re.sub("u'", "'", s))
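Note: a minimal sketch of what ``no_unicode_pprint`` is for (Python 2 output
shown; the dict literal is illustrative)::

    >>> no_unicode_pprint({u'a': 1})
    {'a': 1}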
jedi/api/__init__.py
@@ -2,9 +2,7 @@
 The API basically only provides one class. You can create a :class:`Script` and
 use its methods.

-Additionally you can add a debug function with :func:`set_debug_function` and
-catch :exc:`NotFoundError` which is being raised if your completion is not
-possible.
+Additionally you can add a debug function with :func:`set_debug_function`.

 .. warning:: Please, note that Jedi is **not thread safe**.
 """
@@ -14,10 +12,10 @@ import warnings
 import sys
 from itertools import chain

-from jedi._compatibility import next, unicode, builtins
-from jedi.parser import Parser
+from jedi._compatibility import unicode, builtins
+from jedi.parser import Parser, load_grammar
 from jedi.parser.tokenize import source_tokens
-from jedi.parser import representation as pr
+from jedi.parser import tree as pr
 from jedi.parser.user_context import UserContext, UserContextParser
 from jedi import debug
 from jedi import settings
@@ -28,13 +26,13 @@ from jedi.api import classes
 from jedi.api import interpreter
 from jedi.api import usages
 from jedi.api import helpers
-from jedi.evaluate import Evaluator, filter_private_variable
+from jedi.evaluate import Evaluator
 from jedi.evaluate import representation as er
 from jedi.evaluate import compiled
 from jedi.evaluate import imports
-from jedi.evaluate.helpers import FakeName, get_module_name_parts
-from jedi.evaluate.finder import get_names_of_scope
-from jedi.evaluate.helpers import search_call_signatures
+from jedi.evaluate.cache import memoize_default
+from jedi.evaluate.helpers import FakeName, get_module_names
+from jedi.evaluate.finder import global_names_dict_generator, filter_definition_names
 from jedi.evaluate import analysis

 # Jedi uses lots and lots of recursion. By setting this a little bit higher, we
@@ -43,7 +41,13 @@ sys.setrecursionlimit(2000)


 class NotFoundError(Exception):
-    """A custom error to avoid catching the wrong exceptions."""
+    """A custom error to avoid catching the wrong exceptions.
+
+    .. deprecated:: 0.9.0
+       Not in use anymore, Jedi just returns no goto result if you're not on a
+       valid name.
+    .. todo:: Remove!
+    """


 class Script(object):
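Note: with ``NotFoundError`` deprecated, API calls on a position without a
valid name are expected to return an empty result rather than raise. A
hypothetical snippet::

    import jedi
    script = jedi.Script('x = 1\n', 1, 0, 'example.py')
    assert isinstance(script.goto_assignments(), list)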
@@ -98,11 +102,13 @@ class Script(object):
             raise ValueError('`column` parameter is not in a valid range.')
         self._pos = line, column

-        cache.clear_caches()
+        cache.clear_time_caches()
         debug.reset_time()
+        self._grammar = load_grammar('grammar%s.%s' % sys.version_info[:2])
         self._user_context = UserContext(self.source, self._pos)
-        self._parser = UserContextParser(self.source, path, self._pos, self._user_context)
-        self._evaluator = Evaluator()
+        self._parser = UserContextParser(self._grammar, self.source, path,
+                                         self._pos, self._user_context)
+        self._evaluator = Evaluator(self._grammar)
         debug.speed('init')

     @property
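Note: the grammar file is now picked per running interpreter;
``'grammar%s.%s' % sys.version_info[:2]`` names one of the bundled
``jedi/parser/grammar*.txt`` files added to MANIFEST.in above::

    >>> import sys
    >>> 'grammar%s.%s' % sys.version_info[:2]  # e.g. on CPython 3.4
    'grammar3.4'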
@@ -127,34 +133,54 @@ class Script(object):
         :rtype: list of :class:`classes.Completion`
         """
         def get_completions(user_stmt, bs):
-            if isinstance(user_stmt, pr.Import):
-                context = self._user_context.get_context()
-                next(context)  # skip the path
+            # TODO this closure is ugly. it also doesn't work with
+            # simple_complete (used for Interpreter), somehow redo.
+            module = self._parser.module()
+            names, level, only_modules, unfinished_dotted = \
+                helpers.check_error_statements(module, self._pos)
+            completion_names = []
+            if names is not None:
+                imp_names = [n for n in names if n.end_pos < self._pos]
+                i = imports.get_importer(self._evaluator, imp_names, module, level)
+                completion_names = i.completion_names(self._evaluator, only_modules)
+
+            # TODO this paragraph is necessary, but not sure it works.
+            context = self._user_context.get_context()
+            if not next(context).startswith('.'):  # skip the path
                 if next(context) == 'from':
-                    # completion is just "import" if before stands from ..
-                    return ((k, bs) for k in keywords.keyword_names('import'))
-            return self._simple_complete(path, like)
+                    if unfinished_dotted:
+                        return completion_names
+                    else:
+                        return keywords.keyword_names('import')

-        def completion_possible(path):
-            """
-            The completion logic is kind of complicated, because we strip the
-            last word part. To ignore certain strange patterns with dots, just
-            use regex.
-            """
-            if re.match('\d+\.\.$|\.{4}$', path):
-                return True  # check Ellipsis and float literal `1.`
+            if isinstance(user_stmt, pr.Import):
+                module = self._parser.module()
+                completion_names += imports.completion_names(self._evaluator,
+                                                             user_stmt, self._pos)
+                return completion_names

-            return not re.search(r'^\.|^\d\.$|\.\.$', path)
+            if names is None and not isinstance(user_stmt, pr.Import):
+                if not path and not dot:
+                    # add keywords
+                    completion_names += keywords.keyword_names(all=True)
+                # TODO delete? We should search for valid parser
+                # transformations.
+            completion_names += self._simple_complete(path, dot, like)
+            return completion_names

         debug.speed('completions start')
         path = self._user_context.get_path_until_cursor()
-        if not completion_possible(path):
+        # Dots following an int are not the start of a completion but a float
+        # literal.
+        if re.search(r'^\d\.$', path):
             return []
         path, dot, like = helpers.completion_parts(path)

         user_stmt = self._parser.user_stmt_with_whitespace()

         b = compiled.builtin
-        completions = get_completions(user_stmt, b)
+        completion_names = get_completions(user_stmt, b)

         if not dot:
             # add named params
@@ -167,36 +193,29 @@ class Script(object):
                     # Allow access on _definition here, because it's a
                     # public API and we don't want to make the internal
                     # Name object public.
-                    if p._name.get_definition().stars == 0:  # no *args/**kwargs
-                        completions.append((p._name.parent, p))
-
-        if not path and not isinstance(user_stmt, pr.Import):
-            # add keywords
-            completions += ((k, b) for k in keywords.keyword_names(all=True))
+                    if p._definition.stars == 0:  # no *args/**kwargs
+                        completion_names.append(p._name)

         needs_dot = not dot and path

         comps = []
         comp_dct = {}
-        for c, s in set(completions):
-            # TODO Remove this line. c should be a namepart even before that.
-            c = c.names[-1]
+        for c in set(completion_names):
             n = str(c)
             if settings.case_insensitive_completion \
                     and n.lower().startswith(like.lower()) \
                     or n.startswith(like):
-                if not filter_private_variable(s, user_stmt or self._parser.user_scope(), n):
-                    if isinstance(c.parent.parent, (pr.Function, pr.Class)):
-                        # TODO I think this is a hack. It should be an
-                        # er.Function/er.Class before that.
-                        c = er.wrap(self._evaluator, c.parent.parent).name.names[-1]
-                    new = classes.Completion(self._evaluator, c, needs_dot, len(like), s)
-                    k = (new.name, new.complete)  # key
-                    if k in comp_dct and settings.no_completion_duplicates:
-                        comp_dct[k]._same_name_completions.append(new)
-                    else:
-                        comp_dct[k] = new
-                        comps.append(new)
+                if isinstance(c.parent, (pr.Function, pr.Class)):
+                    # TODO I think this is a hack. It should be an
+                    # er.Function/er.Class before that.
+                    c = er.wrap(self._evaluator, c.parent).name
+                new = classes.Completion(self._evaluator, c, needs_dot, len(like))
+                k = (new.name, new.complete)  # key
+                if k in comp_dct and settings.no_completion_duplicates:
+                    comp_dct[k]._same_name_completions.append(new)
+                else:
+                    comp_dct[k] = new
+                    comps.append(new)

         debug.speed('completions end')
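Note: the deduplication keys completions on ``(name, complete)`` and, when
``settings.no_completion_duplicates`` is set, folds duplicates into the first
completion's ``_same_name_completions``. A toy standalone illustration of the
pattern (plain tuples, not jedi's real classes)::

    comp_dct = {}
    comps = []
    for key in [('upper', 'per'), ('upper', 'per'), ('union', 'ion')]:
        if key in comp_dct:
            pass  # duplicate: recorded on the first completion instead
        else:
            comp_dct[key] = key
            comps.append(key)
    assert comps == [('upper', 'per'), ('union', 'ion')]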
@@ -204,42 +223,35 @@ class Script(object):
                                  x.name.startswith('_'),
                                  x.name.lower()))

-    def _simple_complete(self, path, like):
-        try:
-            scopes = list(self._prepare_goto(path, True))
-        except NotFoundError:
-            scopes = []
-            scope_names_generator = get_names_of_scope(self._evaluator,
-                                                       self._parser.user_scope(),
-                                                       self._pos)
-            completions = []
-            for scope, name_list in scope_names_generator:
-                for c in name_list:
-                    completions.append((c, scope))
+    def _simple_complete(self, path, dot, like):
+        if not path and not dot:
+            scope = self._parser.user_scope()
+            if not scope.is_scope():  # Might be a flow (if/while/etc).
+                scope = scope.get_parent_scope()
+            names_dicts = global_names_dict_generator(
+                self._evaluator,
+                er.wrap(self._evaluator, scope),
+                self._pos
+            )
+            completion_names = []
+            for names_dict, pos in names_dicts:
+                names = list(chain.from_iterable(names_dict.values()))
+                if not names:
+                    continue
+                completion_names += filter_definition_names(names, self._parser.user_stmt(), pos)
+        elif self._get_under_cursor_stmt(path) is None:
+            return []
         else:
-            completions = []
+            scopes = list(self._prepare_goto(path, True))
+            completion_names = []
             debug.dbg('possible completion scopes: %s', scopes)
             for s in scopes:
                 if s.isinstance(er.Function):
                     names = s.get_magic_function_names()
-                elif isinstance(s, imports.ImportWrapper):
-                    under = like + self._user_context.get_path_after_cursor()
-                    if under == 'import':
-                        current_line = self._user_context.get_position_line()
-                        if not current_line.endswith('import import'):
-                            continue
-                    a = s.import_stmt.alias
-                    if a and a.start_pos <= self._pos <= a.end_pos:
-                        continue
-                    names = s.get_defined_names(on_import_stmt=True)
                 else:
-                    names = []
-                    for _, new_names in s.scope_names_generator():
-                        names += new_names
+                    names = []
+                    for names_dict in s.names_dicts(search_global=False):
+                        names += chain.from_iterable(names_dict.values())

-                for c in names:
-                    completions.append((c, s))
-        return completions
+                completion_names += filter_definition_names(names, self._parser.user_stmt())
+        return completion_names

     def _prepare_goto(self, goto_path, is_completion=False):
         """
@@ -256,108 +268,51 @@ class Script(object):
             return []

         if isinstance(user_stmt, pr.Import):
-            scopes = [helpers.get_on_import_stmt(self._evaluator, self._user_context,
-                                                 user_stmt, is_completion)[0]]
+            i, _ = helpers.get_on_import_stmt(self._evaluator, self._user_context,
+                                              user_stmt, is_completion)
+            if i is None:
+                return []
+            scopes = [i]
         else:
             # just parse one statement, take it and evaluate it
             eval_stmt = self._get_under_cursor_stmt(goto_path)
+            if eval_stmt is None:
+                return []

-            if not is_completion:
-                # goto_definition returns definitions of its statements if the
-                # cursor is on the assignee. By changing the start_pos of our
-                # "pseudo" statement, the Jedi evaluator can find the assignees.
-                if user_stmt is not None:
-                    eval_stmt.start_pos = user_stmt.end_pos
-            scopes = self._evaluator.eval_statement(eval_stmt)
+            module = self._parser.module()
+            names, level, _, _ = helpers.check_error_statements(module, self._pos)
+            if names:
+                i = imports.get_importer(self._evaluator, names, module, level)
+                return i.follow(self._evaluator)
+
+            scopes = self._evaluator.eval_element(eval_stmt)

         return scopes

-    def _get_under_cursor_stmt(self, cursor_txt):
-        tokenizer = source_tokens(cursor_txt, line_offset=self._pos[0] - 1)
-        r = Parser(cursor_txt, no_docstr=True, tokenizer=tokenizer)
+    @memoize_default()
+    def _get_under_cursor_stmt(self, cursor_txt, start_pos=None):
+        tokenizer = source_tokens(cursor_txt)
+        r = Parser(self._grammar, cursor_txt, tokenizer=tokenizer)
         try:
-            # Take the last statement available.
-            stmt = r.module.statements[-1]
-        except IndexError:
-            raise NotFoundError()
-        if isinstance(stmt, pr.KeywordStatement):
-            stmt = stmt.stmt
-        if not isinstance(stmt, pr.ExprStmt):
-            raise NotFoundError()
+            # Take the last statement available that is not an endmarker.
+            # And because it's a simple_stmt, we need to get the first child.
+            stmt = r.module.children[-2].children[0]
+        except (AttributeError, IndexError):
+            return None

         user_stmt = self._parser.user_stmt()
         if user_stmt is None:
-            # Set the start_pos to a pseudo position, that doesn't exist but works
-            # perfectly well (for both completions in docstrings and statements).
-            stmt.start_pos = self._pos
+            # Set the start_pos to a pseudo position, that doesn't exist but
+            # works perfectly well (for both completions in docstrings and
+            # statements).
+            pos = start_pos or self._pos
         else:
-            stmt.start_pos = user_stmt.start_pos
+            pos = user_stmt.start_pos

+        stmt.move(pos[0] - 1, pos[1])  # Moving the offset.
         stmt.parent = self._parser.user_scope()
         return stmt

-    def complete(self):
-        """
-        .. deprecated:: 0.6.0
-           Use :attr:`.completions` instead.
-        .. todo:: Remove!
-        """
-        warnings.warn("Use completions instead.", DeprecationWarning)
-        return self.completions()
-
-    def goto(self):
-        """
-        .. deprecated:: 0.6.0
-           Use :attr:`.goto_assignments` instead.
-        .. todo:: Remove!
-        """
-        warnings.warn("Use goto_assignments instead.", DeprecationWarning)
-        return self.goto_assignments()
-
-    def definition(self):
-        """
-        .. deprecated:: 0.6.0
-           Use :attr:`.goto_definitions` instead.
-        .. todo:: Remove!
-        """
-        warnings.warn("Use goto_definitions instead.", DeprecationWarning)
-        return self.goto_definitions()
-
-    def get_definition(self):
-        """
-        .. deprecated:: 0.5.0
-           Use :attr:`.goto_definitions` instead.
-        .. todo:: Remove!
-        """
-        warnings.warn("Use goto_definitions instead.", DeprecationWarning)
-        return self.goto_definitions()
-
-    def related_names(self):
-        """
-        .. deprecated:: 0.6.0
-           Use :attr:`.usages` instead.
-        .. todo:: Remove!
-        """
-        warnings.warn("Use usages instead.", DeprecationWarning)
-        return self.usages()
-
-    def get_in_function_call(self):
-        """
-        .. deprecated:: 0.6.0
-           Use :attr:`.call_signatures` instead.
-        .. todo:: Remove!
-        """
-        return self.function_definition()
-
-    def function_definition(self):
-        """
-        .. deprecated:: 0.6.0
-           Use :attr:`.call_signatures` instead.
-        .. todo:: Remove!
-        """
-        warnings.warn("Use call_signatures instead.", DeprecationWarning)
-        sig = self.call_signatures()
-        return sig[0] if sig else None
-
     def goto_definitions(self):
         """
         Return the definitions of a the path under the cursor. goto function!
@@ -377,7 +332,6 @@ class Script(object):
                 scopes.update(resolve_import_paths(set(s.follow())))
             return scopes

-        user_stmt = self._parser.user_stmt_with_whitespace()
         goto_path = self._user_context.get_path_under_cursor()
         context = self._user_context.get_context()
         definitions = set()
@@ -386,19 +340,24 @@ class Script(object):
         else:
             # Fetch definition of callee, if there's no path otherwise.
             if not goto_path:
-                call, _, _ = search_call_signatures(user_stmt, self._pos)
-                if call is not None:
-                    definitions = set(self._evaluator.eval_call(call))
+                definitions = set(signature._definition
+                                  for signature in self.call_signatures())

-        if not definitions:
-            if goto_path:
-                definitions = set(self._prepare_goto(goto_path))
+            if re.match('\w[\w\d_]*$', goto_path) and not definitions:
+                user_stmt = self._parser.user_stmt()
+                if user_stmt is not None and user_stmt.type == 'expr_stmt':
+                    for name in user_stmt.get_defined_names():
+                        if name.start_pos <= self._pos <= name.end_pos:
+                            # TODO scaning for a name and then using it should be
+                            # the default.
+                            definitions = set(self._evaluator.goto_definition(name))
+
+        if not definitions and goto_path:
+            definitions = set(self._prepare_goto(goto_path))

         definitions = resolve_import_paths(definitions)
-        names = [s if isinstance(s, pr.Name) else s.name for s in definitions
-                 if s is not imports.ImportWrapper.GlobalNamespace]
-        defs = [classes.Definition(self._evaluator, name.names[-1])
-                for name in names]
+        names = [s.name for s in definitions]
+        defs = [classes.Definition(self._evaluator, name) for name in names]
         return helpers.sorted_definitions(set(defs))

     def goto_assignments(self):
@@ -411,8 +370,7 @@ class Script(object):
         :rtype: list of :class:`classes.Definition`
         """
         results = self._goto()
-        d = [classes.Definition(self._evaluator, d) for d in set(results)
-             if d is not imports.ImportWrapper.GlobalNamespace]
+        d = [classes.Definition(self._evaluator, d) for d in set(results)]
         return helpers.sorted_definitions(d)

     def _goto(self, add_import_name=False):
@@ -432,30 +390,38 @@ class Script(object):
                         and d.start_pos == (0, 0):
                     i = imports.ImportWrapper(self._evaluator, d.parent).follow(is_goto=True)
                     definitions.remove(d)
-                    definitions |= follow_inexistent_imports(i.names[-1])
+                    definitions |= follow_inexistent_imports(i)
             return definitions

         goto_path = self._user_context.get_path_under_cursor()
         context = self._user_context.get_context()
         user_stmt = self._parser.user_stmt()
+        user_scope = self._parser.user_scope()

+        # TODO restructure this. stmt is not always needed.
         stmt = self._get_under_cursor_stmt(goto_path)
-        expression_list = stmt.expression_list()
-        if len(expression_list) == 0:
+        if stmt is None:
             return []
-        # The reverse tokenizer only generates parses call.
-        assert len(expression_list) == 1
-        call = expression_list[0]
-        if isinstance(call, pr.Call):
-            call_path = list(call.generate_call_path())
+
+        if user_scope is None:
+            last_name = None
         else:
-            # goto_assignments on Operator returns nothing.
-            return []
+            # Try to use the parser if possible.
+            last_name = user_scope.name_for_position(self._pos)
+
+        if last_name is None:
+            last_name = stmt
+            while not isinstance(last_name, pr.Name):
+                try:
+                    last_name = last_name.children[-1]
+                except AttributeError:
+                    # Doesn't have a name in it.
+                    return []

         if next(context) in ('class', 'def'):
             # The cursor is on a class/function name.
-            user_scope = self._parser.user_scope()
-            definitions = set([user_scope.name.names[-1]])
+            definitions = set([user_scope.name])
         elif isinstance(user_stmt, pr.Import):
             s, name_part = helpers.get_on_import_stmt(self._evaluator,
                                                       self._user_context, user_stmt)
@@ -467,20 +433,20 @@ class Script(object):
             if add_import_name:
                 import_name = user_stmt.get_defined_names()
                 # imports have only one name
-                np = import_name[0].names[-1]
-                if not user_stmt.star and unicode(name_part) == unicode(np):
+                np = import_name[0]
+                if not user_stmt.is_star_import() and unicode(name_part) == unicode(np):
                     definitions.append(np)
         else:
             # The Evaluator.goto function checks for definitions, but since we
             # use a reverse tokenizer, we have new name_part objects, so we
             # have to check the user_stmt here for positions.
-            if isinstance(user_stmt, pr.ExprStmt):
+            if isinstance(user_stmt, pr.ExprStmt) \
+                    and isinstance(last_name.parent, pr.ExprStmt):
                 for name in user_stmt.get_defined_names():
-                    if name.start_pos <= self._pos <= name.end_pos \
-                            and len(name.names) == 1:
-                        return [name.names[0]]
+                    if name.start_pos <= self._pos <= name.end_pos:
+                        return [name]

-        defs = self._evaluator.goto(stmt, call_path)
+        defs = self._evaluator.goto(last_name)
         definitions = follow_inexistent_imports(defs)
         return definitions
@@ -504,16 +470,6 @@ class Script(object):
             # Without a definition for a name we cannot find references.
             return []

-        # Once Script._goto works correct, we can probably remove this
-        # branch.
-        if isinstance(user_stmt, pr.ExprStmt):
-            c = user_stmt.expression_list()[0]
-            if not isinstance(c, unicode) and self._pos < c.start_pos:
-                # The lookup might be before `=`
-                definitions = [v.names[-1] for v in user_stmt.get_defined_names()
-                               if unicode(v.names[-1]) ==
-                               list(definitions)[0].get_code()]
-
         if not isinstance(user_stmt, pr.Import):
             # import case is looked at with add_import_name option
             definitions = usages.usages_add_import_modules(self._evaluator,
@@ -524,12 +480,7 @@ class Script(object):
             names = usages.usages(self._evaluator, definitions, module)

             for d in set(definitions):
-                try:
-                    name_part = d.names[-1]
-                except AttributeError:
-                    names.append(classes.Definition(self._evaluator, d))
-                else:
-                    names.append(classes.Definition(self._evaluator, name_part))
+                names.append(classes.Definition(self._evaluator, d))
         finally:
             settings.dynamic_flow_information = temp
@@ -551,43 +502,43 @@ class Script(object):

         :rtype: list of :class:`classes.CallSignature`
         """
-        user_stmt = self._parser.user_stmt_with_whitespace()
-        call, execution_arr, index = search_call_signatures(user_stmt, self._pos)
-        if call is None:
+        call_txt, call_index, key_name, start_pos = self._user_context.call_signature()
+        if call_txt is None:
             return []

+        stmt = self._get_under_cursor_stmt(call_txt, start_pos)
+        if stmt is None:
+            return []
+
         with common.scale_speed_settings(settings.scale_call_signatures):
-            _callable = lambda: self._evaluator.eval_call(call)
-            origins = cache.cache_call_signatures(_callable, self.source,
-                                                  self._pos, user_stmt)
+            origins = cache.cache_call_signatures(self._evaluator, stmt,
+                                                  self.source, self._pos)
         debug.speed('func_call followed')

-        key_name = None
-        try:
-            detail = execution_arr[index].assignment_details[0]
-        except IndexError:
-            pass
-        else:
-            try:
-                key_name = unicode(detail[0][0].name)
-            except (IndexError, AttributeError):
-                pass
-        return [classes.CallSignature(self._evaluator, o.name.names[-1], call, index, key_name)
+        return [classes.CallSignature(self._evaluator, o.name, stmt, call_index, key_name)
                 for o in origins if hasattr(o, 'py__call__')]

     def _analysis(self):
-        stmts, imps = analysis.get_module_statements(self._parser.module())
+        def check_types(types):
+            for typ in types:
+                try:
+                    f = typ.iter_content
+                except AttributeError:
+                    pass
+                else:
+                    check_types(f())
+
+        #statements = set(chain(*self._parser.module().used_names.values()))
+        nodes, imp_names, decorated_funcs = \
+            analysis.get_module_statements(self._parser.module())
         # Sort the statements so that the results are reproducible.
-        for i in imps:
-            iw = imports.ImportWrapper(self._evaluator, i,
-                                       nested_resolve=True).follow()
-            if i.is_nested() and any(not isinstance(i, pr.Module) for i in iw):
-                analysis.add(self._evaluator, 'import-error', i.namespace.names[-1])
-        for stmt in sorted(stmts, key=lambda obj: obj.start_pos):
-            if not (isinstance(stmt.parent, pr.ForFlow)
-                    and stmt.parent.set_stmt == stmt):
-                self._evaluator.eval_statement(stmt)
+        for n in imp_names:
+            imports.ImportWrapper(self._evaluator, n).follow()
+        for node in sorted(nodes, key=lambda obj: obj.start_pos):
+            check_types(self._evaluator.eval_element(node))
+
+        for dec_func in decorated_funcs:
+            er.Function(self._evaluator, dec_func).get_decorated_func()

         ana = [a for a in self._evaluator.analysis if self.path == a.path]
         return sorted(set(ana), key=lambda x: x.line)
@@ -609,7 +560,7 @@ class Interpreter(Script):
         upper
     """

-    def __init__(self, source, namespaces=[], **kwds):
+    def __init__(self, source, namespaces, **kwds):
        """
        Parse `source` and mixin interpreted Python objects from `namespaces`.
@@ -623,17 +574,28 @@ class Interpreter(Script):
         If `line` and `column` are None, they are assumed be at the end of
         `source`.
         """
+        if type(namespaces) is not list or len(namespaces) == 0 or \
+                any([type(x) is not dict for x in namespaces]):
+            raise TypeError("namespaces must be a non-empty list of dict")
+
         super(Interpreter, self).__init__(source, **kwds)
         self.namespaces = namespaces

-        # Here we add the namespaces to the current parser.
-        interpreter.create(self._evaluator, namespaces[0], self._parser.module())
+        # Don't use the fast parser, because it does crazy stuff that we don't
+        # need in our very simple and small code here (that is always
+        # changing).
+        self._parser = UserContextParser(self._grammar, self.source,
+                                         self._orig_path, self._pos,
+                                         self._user_context,
+                                         use_fast_parser=False)
+        interpreter.add_namespaces_to_parser(self._evaluator, namespaces,
+                                             self._parser.module())

-    def _simple_complete(self, path, like):
+    def _simple_complete(self, path, dot, like):
         user_stmt = self._parser.user_stmt_with_whitespace()
         is_simple_path = not path or re.search('^[\w][\w\d.]*$', path)
         if isinstance(user_stmt, pr.Import) or not is_simple_path:
-            return super(Interpreter, self)._simple_complete(path, like)
+            return super(Interpreter, self)._simple_complete(path, dot, like)
         else:
             class NamespaceModule(object):
                 def __getattr__(_, name):
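Note: with the stricter signature, ``namespaces`` must be a non-empty list of
dicts; passing ``[locals()]`` is the typical call. A minimal sketch in the
spirit of the class docstring (the ``upper`` context line above)::

    >>> import jedi
    >>> a = ['foo', 'bar']
    >>> script = jedi.Interpreter('a[0].uppe', [locals()])
    >>> script.completions()[0].name
    'upper'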
@@ -659,14 +621,14 @@ class Interpreter(Script):
                 except AttributeError:
                     pass

-        completions = []
+        completion_names = []
         for namespace in namespaces:
             for name in dir(namespace):
                 if name.lower().startswith(like.lower()):
                     scope = self._parser.module()
                     n = FakeName(name, scope)
-                    completions.append((n, scope))
-        return completions
+                    completion_names.append(n)
+        return completion_names


 def defined_names(source, path=None, encoding='utf-8'):
@@ -680,12 +642,13 @@ def defined_names(source, path=None, encoding='utf-8'):
     (e.g., methods in class).

     :rtype: list of classes.Definition
+
+    .. deprecated:: 0.9.0
+       Use :func:`names` instead.
+    .. todo:: Remove!
     """
-    parser = Parser(
-        common.source_to_unicode(source, encoding),
-        module_path=path,
-    )
-    return classes.defined_names(Evaluator(), parser.module)
+    warnings.warn("Use call_signatures instead.", DeprecationWarning)
+    return names(source, path, encoding)
@@ -711,7 +674,7 @@ def names(source=None, path=None, encoding='utf-8', all_scopes=False,
     # Set line/column to a random position, because they don't matter.
     script = Script(source, line=1, column=0, path=path, encoding=encoding)
     defs = [classes.Definition(script._evaluator, name_part)
-            for name_part in get_module_name_parts(script._parser.module())]
+            for name_part in get_module_names(script._parser.module(), all_scopes)]
     return sorted(filter(def_ref_filter, defs), key=lambda x: (x.line, x.column))
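Note: with ``all_scopes=True``, ``names()`` now also yields names nested in
functions and classes. A small illustrative example (exact output depends on
position sorting)::

    >>> import jedi
    >>> defs = jedi.names('def f():\n    x = 1\n', all_scopes=True)
    >>> sorted(d.name for d in defs)
    ['f', 'x']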
jedi/api/classes.py
@@ -5,19 +5,19 @@ the interesting information about completion and goto operations.
 """
 import warnings
 from itertools import chain
+import re

-from jedi._compatibility import next, unicode, use_metaclass
+from jedi._compatibility import unicode, use_metaclass
 from jedi import settings
 from jedi import common
-from jedi.parser import representation as pr
-from jedi.evaluate.helpers import statement_elements_in_statement
+from jedi.parser import tree as pr
 from jedi.evaluate.cache import memoize_default, CachedMetaClass
 from jedi.evaluate import representation as er
 from jedi.evaluate import iterable
 from jedi.evaluate import imports
 from jedi.evaluate import compiled
 from jedi.api import keywords
-from jedi.evaluate.finder import get_names_of_scope
+from jedi.evaluate.finder import filter_definition_names


 def defined_names(evaluator, scope):
@@ -27,17 +27,10 @@ def defined_names(evaluator, scope):
     :type scope: Scope
     :rtype: list of Definition
     """
-    # Calling get_names_of_scope doesn't make sense always. It might include
-    # star imports or inherited stuff. Wanted?
-    # TODO discuss!
-    if isinstance(scope, pr.Module):
-        pair = scope, scope.get_defined_names()
-    else:
-        pair = next(get_names_of_scope(evaluator, scope, star_search=False,
-                                       include_builtin=False), None)
-    names = pair[1] if pair else []
-    names = [n for n in names if isinstance(n, pr.Import) or (len(n) == 1)]
-    return [Definition(evaluator, d.names[-1]) for d in sorted(names, key=lambda s: s.start_pos)]
+    dct = scope.names_dict
+    names = list(chain.from_iterable(dct.values()))
+    names = filter_definition_names(names, scope)
+    return [Definition(evaluator, d) for d in sorted(names, key=lambda s: s.start_pos)]


 class BaseDefinition(object):
@@ -68,7 +61,7 @@ class BaseDefinition(object):
         """
         An instance of :class:`jedi.parser.reprsentation.Name` subclass.
         """
-        self._definition = self._name.get_definition()
+        self._definition = er.wrap(evaluator, self._name.get_definition())
         self.is_keyword = isinstance(self._definition, keywords.Keyword)

         # generate a path to the definition
@@ -156,9 +149,12 @@ class BaseDefinition(object):
             stripped = stripped.var

         if isinstance(stripped, compiled.CompiledObject):
-            return stripped.type()
-        if isinstance(stripped, iterable.Array):
+            return stripped.api_type()
+        elif isinstance(stripped, iterable.Array):
             return 'instance'
+        elif isinstance(stripped, pr.Import):
+            return 'import'

         string = type(stripped).__name__.lower().replace('wrapper', '')
+        if string == 'exprstmt':
+            return 'statement'
@@ -168,18 +164,11 @@ class BaseDefinition(object):
     def _path(self):
         """The module path."""
         path = []
-
-        def insert_nonnone(x):
-            if x:
-                path.insert(0, x)
-
         par = self._definition
         while par is not None:
             if isinstance(par, pr.Import):
-                insert_nonnone(par.namespace)
-                insert_nonnone(par.from_ns)
-                if par.relative_count == 0:
-                    break
+                path += imports.ImportWrapper(self._evaluator, self._name).import_path
+                break
             with common.ignored(AttributeError):
                 path.insert(0, par.name)
             par = par.parent
@@ -203,16 +192,6 @@ class BaseDefinition(object):
         """Whether this is a builtin module."""
         return isinstance(self._module, compiled.CompiledObject)

-    @property
-    def line_nr(self):
-        """
-        .. deprecated:: 0.5.0
-           Use :attr:`.line` instead.
-        .. todo:: Remove!
-        """
-        warnings.warn("Use line instead.", DeprecationWarning)
-        return self.line
-
     @property
     def line(self):
         """The line where the definition occurs (starting with 1)."""
@@ -241,7 +220,7 @@ class BaseDefinition(object):
         >>> script = Script(source, 1, len('def f'), 'example.py')
         >>> doc = script.goto_definitions()[0].docstring()
         >>> print(doc)
-        f(a, b = 1)
+        f(a, b=1)
         <BLANKLINE>

         Document for function f.
@@ -320,24 +299,8 @@ class BaseDefinition(object):
         return '.'.join(path if path[0] else path[1:])

     def goto_assignments(self):
-        def call_path_for_name_part(stmt_or_imp, name_part):
-            if isinstance(stmt_or_imp, pr.Import):
-                return [name_part]
-            else:
-                for stmt_el in statement_elements_in_statement(stmt_or_imp):
-                    call_path = list(stmt_el.generate_call_path())
-                    for i, element in enumerate(call_path):
-                        if element is name_part:
-                            return call_path[:i + 1]
-
-        if self.type not in ('statement', 'import'):
-            # Functions, classes and modules are already fixed definitions, we
-            # cannot follow them anymore.
-            return [self]
-        stmt_or_imp = self._name.get_parent_until((pr.Statement, pr.Import))
-        call_path = call_path_for_name_part(stmt_or_imp, self._name)
-        names = self._evaluator.goto(stmt_or_imp, call_path)
-        return [Definition(self._evaluator, n) for n in names]
+        defs = self._evaluator.goto(self._name)
+        return [Definition(self._evaluator, d) for d in defs]

     @memoize_default()
     def _follow_statements_imports(self):
@@ -347,7 +310,7 @@ class BaseDefinition(object):
         if self._definition.isinstance(pr.ExprStmt):
             return self._evaluator.eval_statement(self._definition)
         elif self._definition.isinstance(pr.Import):
-            return imports.follow_imports(self._evaluator, [self._definition])
+            return imports.ImportWrapper(self._evaluator, self._name).follow()
         else:
             return [self._definition]
@@ -376,12 +339,12 @@ class BaseDefinition(object):
                 params = sub.params[1:]  # ignore self
             except KeyError:
                 return []
-        return [_Param(self._evaluator, p.get_name().names[-1]) for p in params]
+        return [_Param(self._evaluator, p.name) for p in params]

     def parent(self):
         scope = self._definition.get_parent_scope()
-        non_flow = scope.get_parent_until(pr.Flow, reverse=True)
-        return Definition(self._evaluator, non_flow.name.names[-1])
+        scope = er.wrap(self._evaluator, scope)
+        return Definition(self._evaluator, scope.name)

     def __repr__(self):
         return "<%s %s>" % (type(self).__name__, self.description)
@@ -392,12 +355,11 @@ class Completion(BaseDefinition):
     `Completion` objects are returned from :meth:`api.Script.completions`. They
     provide additional information about a completion.
     """
-    def __init__(self, evaluator, name, needs_dot, like_name_length, base):
+    def __init__(self, evaluator, name, needs_dot, like_name_length):
         super(Completion, self).__init__(evaluator, name)

         self._needs_dot = needs_dot
         self._like_name_length = like_name_length
-        self._base = base

         # Completion objects with the same Completion name (which means
         # duplicate items in the completion)
@@ -411,9 +373,9 @@ class Completion(BaseDefinition):
             append = '('

         if settings.add_dot_after_module:
-            if isinstance(self._base, pr.Module):
+            if isinstance(self._definition, pr.Module):
                 append += '.'
-        if isinstance(self._base, pr.Param):
+        if isinstance(self._definition, pr.Param):
             append += '='

         name = str(self._name)
@@ -445,16 +407,6 @@ class Completion(BaseDefinition):
         """
         return self._complete(False)

-    @property
-    def word(self):
-        """
-        .. deprecated:: 0.6.0
-           Use :attr:`.name` instead.
-        .. todo:: Remove!
-        """
-        warnings.warn("Use name instead.", DeprecationWarning)
-        return self.name
-
     @property
     def description(self):
         """Provide a description of the completion object."""
@@ -462,7 +414,7 @@ class Completion(BaseDefinition):
             return ''
         t = self.type
         if t == 'statement' or t == 'import':
-            desc = self._definition.get_code(False)
+            desc = self._definition.get_code()
         else:
             desc = '.'.join(unicode(p) for p in self._path())
@@ -482,7 +434,7 @@ class Completion(BaseDefinition):
         """
         definition = self._definition
         if isinstance(definition, pr.Import):
-            i = imports.ImportWrapper(self._evaluator, definition)
+            i = imports.ImportWrapper(self._evaluator, self._name)
             if len(i.import_path) > 1 or not fast:
                 followed = self._follow_statements_imports()
                 if followed:
@@ -501,7 +453,7 @@ class Completion(BaseDefinition):
         description, look at :attr:`jedi.api.classes.BaseDefinition.type`.
         """
         if isinstance(self._definition, pr.Import):
-            i = imports.ImportWrapper(self._evaluator, self._definition)
+            i = imports.ImportWrapper(self._evaluator, self._name)
             if len(i.import_path) <= 1:
                 return 'module'
@@ -518,14 +470,9 @@ class Completion(BaseDefinition):
         # imports completion is very complicated and needs to be treated
         # separately in Completion.
         definition = self._definition
-        if definition.isinstance(pr.Import) and definition.alias is None:
-            i = imports.ImportWrapper(self._evaluator, definition, True)
-            import_path = i.import_path + (unicode(self._name),)
-            try:
-                return imports.get_importer(self._evaluator, import_path,
-                                            i._importer.module).follow(self._evaluator)
-            except imports.ModuleNotFound:
-                pass
+        if definition.isinstance(pr.Import):
+            i = imports.ImportWrapper(self._evaluator, self._name)
+            return i.follow()
         return super(Completion, self)._follow_statements_imports()

     @memoize_default()
@@ -539,7 +486,7 @@ class Completion(BaseDefinition):
         it's just PITA-slow.
         """
         defs = self._follow_statements_imports()
-        return [Definition(self._evaluator, d.name.names[-1]) for d in defs]
+        return [Definition(self._evaluator, d.name) for d in defs]


 class Definition(use_metaclass(CachedMetaClass, BaseDefinition)):
@@ -583,7 +530,7 @@ class Definition(use_metaclass(CachedMetaClass, BaseDefinition)):
             d = d.var

         if isinstance(d, compiled.CompiledObject):
-            typ = d.type()
+            typ = d.api_type()
             if typ == 'instance':
                 typ = 'class'  # The description should be similar to Py objects.
             d = typ + ' ' + d.name.get_code()
@@ -596,9 +543,25 @@ class Definition(use_metaclass(CachedMetaClass, BaseDefinition)):
         elif isinstance(d, pr.Module):
             # only show module name
             d = 'module %s' % self.module_name
-        else:
-            d = d.get_code().replace('\n', '').replace('\r', '')
-        return d
+        elif isinstance(d, pr.Param):
+            d = d.get_code()
+        else:  # ExprStmt
+            try:
+                first_leaf = d.first_leaf()
+            except AttributeError:
+                # `d` is already a Leaf (Name).
+                first_leaf = d
+            # Remove the prefix, because that's not what we want for get_code
+            # here.
+            old, first_leaf.prefix = first_leaf.prefix, ''
+            try:
+                d = d.get_code()
+            finally:
+                first_leaf.prefix = old
+        # Delete comments:
+        d = re.sub('#[^\n]+\n', ' ', d)
+        # Delete multi spaces/newlines
+        return re.sub('\s+', ' ', d).strip()

     @property
     def desc_with_module(self):
@@ -633,13 +596,7 @@ class Definition(use_metaclass(CachedMetaClass, BaseDefinition)):
         Returns True, if defined as a name in a statement, function or class.
         Returns False, if it's a reference to such a definition.
         """
-        _def = self._name.get_parent_until((pr.ExprStmt, pr.Import,
-                                            pr.Function, pr.Class, pr.Module))
-        if isinstance(_def, pr.ExprStmt):
-            exp_list = _def.expression_list()
-            return not exp_list or self._name.start_pos < exp_list[0].start_pos
-        else:
-            return True
+        return self._name.is_definition()

     def __eq__(self, other):
         return self._name.start_pos == other._name.start_pos \
@@ -660,17 +617,17 @@ class CallSignature(Definition):
     It knows what functions you are currently in. e.g. `isinstance(` would
     return the `isinstance` function. without `(` it would return nothing.
     """
-    def __init__(self, evaluator, executable_name, call, index, key_name):
+    def __init__(self, evaluator, executable_name, call_stmt, index, key_name):
         super(CallSignature, self).__init__(evaluator, executable_name)
         self._index = index
         self._key_name = key_name
-        self._call = call
+        self._call_stmt = call_stmt

     @property
     def index(self):
         """
         The Param index of the current call.
-        Returns None if the index doesn't is not defined.
+        Returns None if the index cannot be found in the curent call.
         """
         if self._key_name is not None:
             for i, param in enumerate(self.params):
@@ -696,10 +653,7 @@ class CallSignature(Definition):
         The indent of the bracket that is responsible for the last function
         call.
         """
-        c = self._call
-        while c.next is not None:
-            c = c.next
-        return c.name.end_pos
+        return self._call_stmt.end_pos

     @property
     def call_name(self):
jedi/api/helpers.py
@@ -3,6 +3,7 @@ Helpers for the API
 """
 import re

+from jedi.parser import tree as pt
 from jedi.evaluate import imports
@@ -25,22 +26,53 @@ def get_on_import_stmt(evaluator, user_context, user_stmt, is_like_search=False)
     Resolve the user statement, if it is an import. Only resolve the
     parts until the user position.
     """
-    import_names = user_stmt.get_all_import_names()
-    kill_count = -1
-    cur_name_part = None
-    for i in import_names:
-        if user_stmt.alias == i:
-            continue
-        for name_part in i.names:
-            if name_part.end_pos >= user_context.position:
-                if not cur_name_part:
-                    cur_name_part = name_part
-                kill_count += 1
+    name = user_stmt.name_for_position(user_context.position)
+    if name is None:
+        return None, None

     context = user_context.get_context()
     just_from = next(context) == 'from'
+    i = imports.ImportWrapper(evaluator, name)
+    return i, name

-    i = imports.ImportWrapper(evaluator, user_stmt, is_like_search,
-                              kill_count=kill_count, nested_resolve=True,
-                              is_just_from=just_from)
-    return i, cur_name_part
+
+def check_error_statements(module, pos):
+    for error_statement in module.error_statement_stacks:
+        if error_statement.first_type in ('import_from', 'import_name') \
+                and error_statement.first_pos < pos <= error_statement.next_start_pos:
+            return importer_from_error_statement(error_statement, pos)
+    return None, 0, False, False
+
+
+def importer_from_error_statement(error_statement, pos):
+    def check_dotted(children):
+        for name in children[::2]:
+            if name.start_pos <= pos:
+                yield name
+
+    names = []
+    level = 0
+    only_modules = True
+    unfinished_dotted = False
+    for typ, nodes in error_statement.stack:
+        if typ == 'dotted_name':
+            names += check_dotted(nodes)
+            if nodes[-1] == '.':
+                # An unfinished dotted_name
+                unfinished_dotted = True
+        elif typ == 'import_name':
+            if nodes[0].start_pos <= pos <= nodes[0].end_pos:
+                # We are on the import.
+                return None, 0, False, False
+        elif typ == 'import_from':
+            for node in nodes:
+                if node.start_pos >= pos:
+                    break
+                elif isinstance(node, pt.Node) and node.type == 'dotted_name':
+                    names += check_dotted(node.children)
+                elif node in ('.', '...'):
+                    level += len(node.value)
+                elif isinstance(node, pt.Name):
+                    names.append(node)
+                elif node == 'import':
+                    only_modules = False
+
+    return names, level, only_modules, unfinished_dotted
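Note: for intuition, on an unfinished import such as ``from jedi import `` the
parser records an error statement, and ``check_error_statements`` hands back
what was already typed. Hypothetical shapes, for illustration only::

    # source = 'from jedi import '
    # names             == [<Name: jedi>]
    # level             == 0      (no leading dots)
    # only_modules      == False  (the `import` keyword was seen)
    # unfinished_dotted == False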
jedi/api/interpreter.py
@@ -1,3 +1,6 @@
+"""
+TODO Some parts of this module are still not well documented.
+"""
 import inspect
 import re
@@ -7,50 +10,45 @@ from jedi.common import source_to_unicode
 from jedi.cache import underscore_memoization
 from jedi.evaluate import compiled
 from jedi.evaluate.compiled.fake import get_module
-from jedi.parser import representation as pr
+from jedi.parser import tree as pt
+from jedi.parser import load_grammar
 from jedi.parser.fast import FastParser
 from jedi.evaluate import helpers
+from jedi.evaluate import iterable
+from jedi.evaluate import representation as er


-class InterpreterNamespace(pr.Module):
-    def __init__(self, evaluator, namespace, parser_module):
-        self.namespace = namespace
-        self.parser_module = parser_module
-        self._evaluator = evaluator
-
-    @underscore_memoization
-    def get_defined_names(self):
-        for name in self.parser_module.get_defined_names():
-            yield name
-        for key, value in self.namespace.items():
-            yield LazyName(self._evaluator, key, value)
-
-    def scope_names_generator(self, position=None):
-        yield self, list(self.get_defined_names())
-
-    def __getattr__(self, name):
-        return getattr(self.parser_module, name)
+def add_namespaces_to_parser(evaluator, namespaces, parser_module):
+    for namespace in namespaces:
+        for key, value in namespace.items():
+            # Name lookups in an ast tree work by checking names_dict.
+            # Therefore we just add fake names to that and we're done.
+            arr = parser_module.names_dict.setdefault(key, [])
+            arr.append(LazyName(evaluator, parser_module, key, value))


 class LazyName(helpers.FakeName):
-    def __init__(self, evaluator, name, value):
+    def __init__(self, evaluator, module, name, value):
         super(LazyName, self).__init__(name)
+        self._module = module
         self._evaluator = evaluator
         self._value = value
         self._name = name

+    def is_definition(self):
+        return True
+
     @property
     @underscore_memoization
     def parent(self):
         """
         Creating fake statements for the interpreter.
         """
         obj = self._value
         parser_path = []
         if inspect.ismodule(obj):
             module = obj
         else:
-            class FakeParent(pr.Base):
-                parent = None  # To avoid having no parent for NamePart.
-                path = None
-
             names = []
             try:
                 o = obj.__objclass__
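Note: the rewrite drops the ``InterpreterNamespace`` proxy module entirely;
tree name lookup only consults ``names_dict``, so appending a ``LazyName`` per
interpreter object is enough. A toy analogy of the injection (plain dicts, not
jedi's classes)::

    names_dict = {}
    namespace = {'answer': 42}
    for key, value in namespace.items():
        names_dict.setdefault(key, []).append(('lazy', value))
    assert 'answer' in names_dict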
@@ -66,11 +64,12 @@ class LazyName(helpers.FakeName):
             # Unfortunately in some cases like `int` there's no __module__
             module = builtins
         else:
+            # TODO this import is wrong. Yields x for x.y.z instead of z
             module = __import__(module_name)
-        fake_name = helpers.FakeName(names, FakeParent())
-        parser_path = fake_name.names
+        parser_path = names
         raw_module = get_module(self._value)

+        found = []
         try:
             path = module.__file__
         except AttributeError:
@@ -81,28 +80,30 @@ class LazyName(helpers.FakeName):
|
||||
# cut the `c` from `.pyc`
|
||||
with open(path) as f:
|
||||
source = source_to_unicode(f.read())
|
||||
mod = FastParser(source, path[:-1]).module
|
||||
if not parser_path:
|
||||
return mod
|
||||
found = self._evaluator.eval_call_path(iter(parser_path), mod, None)
|
||||
if found:
|
||||
return found[0]
|
||||
debug.warning('Interpreter lookup for Python code failed %s',
|
||||
mod)
|
||||
mod = FastParser(load_grammar(), source, path[:-1]).module
|
||||
if parser_path:
|
||||
assert len(parser_path) == 1
|
||||
found = self._evaluator.find_types(mod, parser_path[0], search_global=True)
|
||||
else:
|
||||
found = [er.wrap(self._evaluator, mod)]
|
||||
|
||||
module = compiled.CompiledObject(raw_module)
|
||||
if raw_module == builtins:
|
||||
# The builtins module is special and always cached.
|
||||
module = compiled.builtin
|
||||
return compiled.create(self._evaluator, self._value, module, module)
|
||||
if not found:
|
||||
debug.warning('Possibly an interpreter lookup for Python code failed %s',
|
||||
parser_path)
|
||||
|
||||
if not found:
|
||||
evaluated = compiled.CompiledObject(obj)
|
||||
if evaluated == builtins:
|
||||
# The builtins module is special and always cached.
|
||||
evaluated = compiled.builtin
|
||||
found = [evaluated]
|
||||
|
||||
content = iterable.AlreadyEvaluated(found)
|
||||
stmt = pt.ExprStmt([self, pt.Operator(pt.zero_position_modifier,
|
||||
'=', (0, 0), ''), content])
|
||||
stmt.parent = self._module
|
||||
return stmt
|
||||
|
||||
@parent.setter
|
||||
def parent(self, value):
|
||||
"""Needed because of the ``representation.Simple`` super class."""
|
||||
|
||||
|
||||
def create(evaluator, namespace, parser_module):
|
||||
ns = InterpreterNamespace(evaluator, namespace, parser_module)
|
||||
for attr_name in pr.SCOPE_CONTENTS:
|
||||
for something in getattr(parser_module, attr_name):
|
||||
something.parent = ns
|
||||
"""Needed because the super class tries to set parent."""
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
import pydoc
|
||||
import keyword
|
||||
|
||||
from jedi.parser.representation import NamePart
|
||||
from jedi._compatibility import is_py3
|
||||
from jedi import common
|
||||
from jedi.evaluate import compiled
|
||||
|
||||
@@ -1,14 +1,12 @@
|
||||
from jedi._compatibility import u, unicode
|
||||
from jedi import common
|
||||
from jedi._compatibility import unicode
|
||||
from jedi.api import classes
|
||||
from jedi.parser import representation as pr
|
||||
from jedi.parser import tree as pr
|
||||
from jedi.evaluate import imports
|
||||
from jedi.evaluate import helpers
|
||||
|
||||
|
||||
def usages(evaluator, definitions, mods):
|
||||
def usages(evaluator, definition_names, mods):
|
||||
"""
|
||||
:param definitions: list of NameParts
|
||||
:param definitions: list of Name
|
||||
"""
|
||||
def compare_array(definitions):
|
||||
""" `definitions` are being compared by module/start_pos, because
|
||||
@@ -20,72 +18,24 @@ def usages(evaluator, definitions, mods):
|
||||
result.append((module, d.start_pos))
|
||||
return result
|
||||
|
||||
def check_call_for_usage(call):
|
||||
stmt = call.parent
|
||||
while not stmt.parent.is_scope():
|
||||
stmt = stmt.parent
|
||||
# New definition, call cannot be a part of stmt
|
||||
if len(call.name) == 1 and call.next is None \
|
||||
and call.name in stmt.get_defined_names():
|
||||
# Class params are not definitions (like function params). They
|
||||
# are super classes, that need to be resolved.
|
||||
if not (isinstance(stmt, pr.Param) and isinstance(stmt.parent, pr.Class)):
|
||||
return
|
||||
|
||||
follow = [] # There might be multiple search_name's in one call_path
|
||||
call_path = list(call.generate_call_path())
|
||||
for i, name in enumerate(call_path):
|
||||
# name is `pr.NamePart`.
|
||||
if u(name) == search_name:
|
||||
follow.append(call_path[:i + 1])
|
||||
|
||||
for call_path in follow:
|
||||
follow_res = evaluator.goto(call.parent, call_path)
|
||||
search = call_path[-1]
|
||||
# names can change (getattr stuff), therefore filter names that
|
||||
# don't match `search`.
|
||||
|
||||
# TODO add something like that in the future - for now usages are
|
||||
# completely broken anyway.
|
||||
#follow_res = [r for r in follow_res if str(r) == search]
|
||||
#print search.start_pos,search_name.start_pos
|
||||
#print follow_res, search, search_name, [(r, r.start_pos) for r in follow_res]
|
||||
follow_res = usages_add_import_modules(evaluator, follow_res)
|
||||
|
||||
compare_follow_res = compare_array(follow_res)
|
||||
# compare to see if they match
|
||||
if any(r in compare_definitions for r in compare_follow_res):
|
||||
yield classes.Definition(evaluator, search)
|
||||
|
||||
search_name = unicode(list(definitions)[0])
|
||||
compare_definitions = compare_array(definitions)
|
||||
mods |= set([d.get_parent_until() for d in definitions])
|
||||
names = []
|
||||
for m in imports.get_modules_containing_name(mods, search_name):
|
||||
search_name = unicode(list(definition_names)[0])
|
||||
compare_definitions = compare_array(definition_names)
|
||||
mods |= set([d.get_parent_until() for d in definition_names])
|
||||
definitions = []
|
||||
for m in imports.get_modules_containing_name(evaluator, mods, search_name):
|
||||
try:
|
||||
stmts = m.used_names[search_name]
|
||||
check_names = m.used_names[search_name]
|
||||
except KeyError:
|
||||
continue
|
||||
for stmt in stmts:
|
||||
if isinstance(stmt, pr.Import):
|
||||
count = 0
|
||||
imps = []
|
||||
for i in stmt.get_all_import_names():
|
||||
for name_part in i.names:
|
||||
count += 1
|
||||
if unicode(name_part) == search_name:
|
||||
imps.append((count, name_part))
|
||||
for name in check_names:
|
||||
|
||||
for used_count, name_part in imps:
|
||||
i = imports.ImportWrapper(evaluator, stmt, kill_count=count - used_count,
|
||||
nested_resolve=True)
|
||||
f = i.follow(is_goto=True)
|
||||
if set(f) & set(definitions):
|
||||
names.append(classes.Definition(evaluator, name_part))
|
||||
else:
|
||||
for call in helpers.scan_statement_for_calls(stmt, search_name, assignment_details=True):
|
||||
names += check_call_for_usage(call)
|
||||
return names
|
||||
result = evaluator.goto(name)
|
||||
if [c for c in compare_array(result) if c in compare_definitions]:
|
||||
definitions.append(classes.Definition(evaluator, name))
|
||||
# Previous definitions might be imports, so include them
|
||||
# (because goto might return that import name).
|
||||
compare_definitions += compare_array([name])
|
||||
return definitions
|
||||
|
||||
|
||||
def usages_add_import_modules(evaluator, definitions):
|
||||
@@ -94,7 +44,6 @@ def usages_add_import_modules(evaluator, definitions):
|
||||
for d in definitions:
|
||||
imp_or_stmt = d.get_definition()
|
||||
if isinstance(imp_or_stmt, pr.Import):
|
||||
s = imports.ImportWrapper(evaluator, imp_or_stmt, nested_resolve=True)
|
||||
with common.ignored(IndexError):
|
||||
new.add(s.follow(is_goto=True)[0])
|
||||
s = imports.ImportWrapper(evaluator, d)
|
||||
new |= set(s.follow(is_goto=True))
|
||||
return set(definitions) | new
|
||||
|
||||
@@ -31,9 +31,7 @@ from jedi import settings
|
||||
from jedi import common
|
||||
from jedi import debug
|
||||
|
||||
_time_caches = []
|
||||
|
||||
_star_import_cache = {}
|
||||
_time_caches = {}
|
||||
|
||||
# for fast_parser, should not be deleted
|
||||
parser_cache = {}
|
||||
@@ -47,7 +45,7 @@ class ParserCacheItem(object):
|
||||
self.change_time = change_time
|
||||
|
||||
|
||||
def clear_caches(delete_all=False):
|
||||
def clear_time_caches(delete_all=False):
|
||||
""" Jedi caches many things, that should be completed after each completion
|
||||
finishes.
|
||||
|
||||
@@ -57,12 +55,12 @@ def clear_caches(delete_all=False):
|
||||
global _time_caches
|
||||
|
||||
if delete_all:
|
||||
_time_caches = []
|
||||
_star_import_cache.clear()
|
||||
for cache in _time_caches.values():
|
||||
cache.clear()
|
||||
parser_cache.clear()
|
||||
else:
|
||||
# normally just kill the expired entries, not all
|
||||
for tc in _time_caches:
|
||||
for tc in _time_caches.values():
|
||||
# check time_cache for expired entries
|
||||
for key, (t, value) in list(tc.items()):
|
||||
if t < time.time():
|
||||
@@ -71,23 +69,28 @@ def clear_caches(delete_all=False):
|
||||
|
||||
|
||||
def time_cache(time_add_setting):
|
||||
""" This decorator works as follows: Call it with a setting and after that
|
||||
"""
|
||||
s
|
||||
This decorator works as follows: Call it with a setting and after that
|
||||
use the function with a callable that returns the key.
|
||||
But: This function is only called if the key is not available. After a
|
||||
certain amount of time (`time_add_setting`) the cache is invalid.
|
||||
"""
|
||||
def _temp(key_func):
|
||||
dct = {}
|
||||
_time_caches.append(dct)
|
||||
_time_caches[time_add_setting] = dct
|
||||
|
||||
def wrapper(optional_callable, *args, **kwargs):
|
||||
key = key_func(*args, **kwargs)
|
||||
value = None
|
||||
if key in dct:
|
||||
def wrapper(*args, **kwargs):
|
||||
generator = key_func(*args, **kwargs)
|
||||
key = next(generator)
|
||||
try:
|
||||
expiry, value = dct[key]
|
||||
if expiry > time.time():
|
||||
return value
|
||||
value = optional_callable()
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
value = next(generator)
|
||||
time_add = getattr(settings, time_add_setting)
|
||||
if key is not None:
|
||||
dct[key] = time.time() + time_add, value
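
[Editor's note] The rewritten ``time_cache`` changes its calling convention: the decorated function is now a generator that first yields the cache key and then yields the value to store. A minimal usage sketch, assuming one of the existing validity settings in ``jedi.settings``; the function name and ``expensive_analysis`` are illustrative, not part of this patch::

    @time_cache("call_signatures_validity")
    def cached_lookup(node):
        yield node.start_pos             # first yield: the cache key
        yield expensive_analysis(node)   # second yield: only computed on a miss

On a cache hit the second ``next(generator)`` is never reached, so the expensive part is skipped entirely.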

@@ -97,18 +100,19 @@ def time_cache(time_add_setting):


@time_cache("call_signatures_validity")
def cache_call_signatures(source, user_pos, stmt):
def cache_call_signatures(evaluator, call, source, user_pos):
    """This function calculates the cache key."""
    index = user_pos[0] - 1
    lines = common.splitlines(source)

    before_cursor = lines[index][:user_pos[1]]
    other_lines = lines[stmt.start_pos[0]:index]
    other_lines = lines[call.start_pos[0]:index]
    whole = '\n'.join(other_lines + [before_cursor])
    before_bracket = re.match(r'.*\(', whole, re.DOTALL)

    module_path = stmt.get_parent_until().path
    return None if module_path is None else (module_path, before_bracket, stmt.start_pos)
    module_path = call.get_parent_until().path
    yield None if module_path is None else (module_path, before_bracket, call.start_pos)
    yield evaluator.eval_element(call)
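
[Editor's note] The cache key above is essentially (module path, text up to the opening bracket, call position), so typing inside an argument list keeps hitting the cache while edits before the bracket invalidate it. A standalone illustration of the key computation, mirroring the ``re.match`` above (the sample strings are ours)::

    import re

    lines = ['foo(1,', '    2']        # cursor sits after the '2'
    before_cursor = lines[1][:5]
    whole = '\n'.join([lines[0]] + [before_cursor])
    before_bracket = re.match(r'.*\(', whole, re.DOTALL)
    print(before_bracket.group(0))     # 'foo(' - stable while editing arguments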


def underscore_memoization(func):
@@ -145,52 +149,36 @@ def underscore_memoization(func):
    return wrapper


def memoize(func):
def memoize_method(method):
    """A normal memoize function."""
    dct = {}

    def wrapper(*args, **kwargs):
    def wrapper(self, *args, **kwargs):
        dct = self.__dict__.setdefault('_memoize_method_dct', {})
        key = (args, frozenset(kwargs.items()))
        try:
            return dct[key]
        except KeyError:
            result = func(*args, **kwargs)
            result = method(self, *args, **kwargs)
            dct[key] = result
            return result
    return wrapper
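
[Editor's note] ``memoize_method`` keeps the memo dict on the instance (``self._memoize_method_dct``) rather than in the decorator's closure, so cached results are garbage-collected together with the object. A hedged usage sketch (the class is ours)::

    class Example(object):
        @memoize_method
        def square(self, n):
            print('computed')   # runs once per instance and argument set
            return n * n

    e = Example()
    e.square(3)   # prints 'computed', returns 9
    e.square(3)   # cache hit: returns 9 silently

Note that all arguments must be hashable, since the key is ``(args, frozenset(kwargs.items()))``.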


def cache_star_import(func):
    def wrapper(evaluator, scope, *args, **kwargs):
        with common.ignored(KeyError):
            mods = _star_import_cache[scope]
            if mods[0] + settings.star_import_cache_validity > time.time():
                return mods[1]
        # cache is too old and therefore invalid or not available
        _invalidate_star_import_cache_module(scope)
        mods = func(evaluator, scope, *args, **kwargs)
        _star_import_cache[scope] = time.time(), mods

        return mods
    @time_cache("star_import_cache_validity")
    def wrapper(self):
        yield self.base  # The cache key
        yield func(self)
    return wrapper


def _invalidate_star_import_cache_module(module, only_main=False):
    """ Important if some new modules are being reparsed """
    with common.ignored(KeyError):
        t, mods = _star_import_cache[module]

        del _star_import_cache[module]

        for m in mods:
            _invalidate_star_import_cache_module(m, only_main=True)

    if not only_main:
        # We need a list here because otherwise the list is being changed
        # during the iteration in py3k: iteritems -> items.
        for key, (t, mods) in list(_star_import_cache.items()):
            if module in mods:
                _invalidate_star_import_cache_module(key)
    try:
        t, modules = _time_caches['star_import_cache_validity'][module]
    except KeyError:
        pass
    else:
        del _time_caches['star_import_cache_validity'][module]


def invalidate_star_import_cache(path):
@@ -198,10 +186,9 @@ def invalidate_star_import_cache(path):
    try:
        parser_cache_item = parser_cache[path]
    except KeyError:
        return False
        pass
    else:
        _invalidate_star_import_cache_module(parser_cache_item.parser.module)
        return True


def load_parser(path, name):
@@ -243,7 +230,7 @@ def save_parser(path, name, parser, pickling=True):

class ParserPickling(object):

    version = 17
    version = 24
    """
    Version number (integer) for file system cache.


@@ -5,16 +5,10 @@ import functools
import re
from ast import literal_eval

from jedi._compatibility import unicode, next, reraise
from jedi._compatibility import unicode, reraise
from jedi import settings


class MultiLevelStopIteration(Exception):
    """
    StopIterations get caught pretty easily by for loops; let errors propagate.
    """


class UncaughtAttributeError(Exception):
    """
    Important, because `__getattr__` and `hasattr` catch AttributeErrors

@@ -1,8 +1,9 @@
"""
Evaluation of Python code in |jedi| is based on three assumptions:

* Code is recursive (to weaken this assumption, the
  :mod:`jedi.evaluate.dynamic` module exists).
* The code uses as few side effects as possible. Jedi understands certain
  list/tuple/set modifications, but there's no guarantee that Jedi detects
  everything (list.append in different modules for example).
* No magic is being used:

  - metaclasses
@@ -11,11 +12,12 @@ Evaluation of Python code in |jedi| is based on three assumptions:
* The programmer is not a total dick, e.g. like `this
  <https://github.com/davidhalter/jedi/issues/24>`_ :-)

That said, there's mainly one entry point in this script: ``eval_statement``.
This is where autocompletion starts. Everything you want to complete is either
a ``Statement`` or some special name like ``class``, which is easy to complete.
The actual algorithm is based on a principle called lazy evaluation. If you
don't know about it, google it. That said, the typical entry point for static
analysis is calling ``eval_statement``. There's separate logic for
autocompletion in the API, the evaluator is all about evaluating an expression.

Therefore you need to understand what follows after ``eval_statement``. Let's
Now you need to understand what follows after ``eval_statement``. Let's
make an example::

    import datetime
@@ -23,58 +25,45 @@ make an example::

First of all, this module doesn't care about completion. It really just cares
about ``datetime.date``. At the end of the procedure ``eval_statement`` will
return the ``datetime`` class.
return the ``date`` class.

To *visualize* this (simplified):

- ``eval_statement`` - ``<Statement: datetime.date>``
- ``Evaluator.eval_statement`` doesn't do much, because there's no assignment.
- ``Evaluator.eval_element`` cares for resolving the dotted path
- ``Evaluator.find_types`` searches for global definitions of datetime, which
  it finds in the definition of an import, by scanning the syntax tree.
- Using the import logic, the datetime module is found.
- Now ``find_types`` is called again by ``eval_element`` to find ``date``
  inside the datetime module.

- Unpacking of the statement into ``[[<Call: datetime.date>]]``
- ``eval_expression_list``, calls ``eval_call`` with ``<Call: datetime.date>``
- ``eval_call`` - searches the ``datetime`` name within the module.
Now what would happen if we wanted ``datetime.date.foo.bar``? Two more
calls to ``find_types``. However the second call would be ignored, because the
first one would return nothing (there's no foo attribute in ``date``).

This is exactly where it starts to get complicated. Now recursions start to
kick in. The statement has not been resolved fully, but now we need to resolve
the datetime import. So it continues

- follow import, which happens in the :mod:`jedi.evaluate.imports` module.
- now the same ``eval_call`` as above calls ``follow_path`` to follow the
  second part of the statement ``date``.
- After ``follow_path`` returns with the desired ``datetime.date`` class, the
  result is being returned and the recursion finishes.

Now what would happen if we wanted ``datetime.date.foo.bar``? Just two more
calls to ``follow_path`` (which calls itself with a recursion). What if the
import would contain another Statement like this::
What if the import would contain another ``ExprStmt`` like this::

    from foo import bar
    Date = bar.baz

Well... You get it. Just another ``eval_statement`` recursion. It's really
easy. Just that Python is not that easy sometimes. To understand tuple
assignments and different class scopes, a lot more code had to be written. Yet
we're still not talking about Descriptors and Nested List Comprehensions, just
the simple stuff.
easy. Python can obviously get way more complicated than this. To understand
tuple assignments, list comprehensions and everything else, a lot more code had
to be written.

So if you want to change something, write a test and then just change what you
want. This module has been tested by about 600 tests. Don't be afraid to break
something. The tests are good enough.
Jedi has been tested very well, so you can just start modifying code. It's best
to write your own test first for your "new" feature. Don't be scared of
breaking stuff. As long as the tests pass, you're most likely to be fine.

I need to mention now that this recursive approach is really good because it
I need to mention now that lazy evaluation is really good because it
only *evaluates* what needs to be *evaluated*. All the statements and modules
that are not used are just being ignored. It's a little bit similar to the
backtracking algorithm.


.. todo:: nonlocal statement, needed or can be ignored? (py3k)
that are not used are just being ignored.
"""
import copy
import itertools

from jedi._compatibility import next, hasattr, unicode
from jedi.parser import representation as pr
from jedi.parser.tokenize import Token
from jedi.parser import fast
import copy
from itertools import chain

from jedi.parser import tree as pr
from jedi import debug
from jedi.evaluate import representation as er
from jedi.evaluate import imports
@@ -85,11 +74,13 @@ from jedi.evaluate import stdlib
from jedi.evaluate import finder
from jedi.evaluate import compiled
from jedi.evaluate import precedence
from jedi.evaluate.helpers import FakeStatement, deep_ast_copy
from jedi.evaluate import param
from jedi.evaluate import helpers


class Evaluator(object):
    def __init__(self):
    def __init__(self, grammar):
        self.grammar = grammar
        self.memoize_cache = {}  # for memoize decorators
        self.import_cache = {}  # like `sys.modules`.
        self.compiled_cache = {}  # see `compiled.create()`
@@ -98,7 +89,7 @@ class Evaluator(object):
        self.analysis = []

    def find_types(self, scope, name_str, position=None, search_global=False,
                   is_goto=False, resolve_decorator=True):
                   is_goto=False):
        """
        This is the search function. The most important part to debug.
        `remove_statements` and `filter_statements` really are the core part of
@@ -111,7 +102,7 @@ class Evaluator(object):
        scopes = f.scopes(search_global)
        if is_goto:
            return f.filter_name(scopes)
        return f.find(scopes, resolve_decorator, search_global)
        return f.find(scopes, search_global)

    @memoize_default(default=[], evaluator_is_first_arg=True)
    @recursion.recursion_decorator
@@ -126,186 +117,161 @@ class Evaluator(object):
        :param stmt: A `pr.ExprStmt`.
        """
        debug.dbg('eval_statement %s (%s)', stmt, seek_name)
        expression_list = stmt.expression_list()
        if isinstance(stmt, FakeStatement):
            return expression_list  # Already contains the results.
        types = self.eval_element(stmt.get_rhs())

        result = self.eval_expression_list(expression_list)
        if seek_name:
            types = finder.check_tuple_assignments(types, seek_name)

        ass_details = stmt.assignment_details
        if ass_details and ass_details[0][1] != '=' and not isinstance(stmt, er.InstanceElement):  # TODO don't check for this.
            expr_list, _operator = ass_details[0]
        first_operation = stmt.first_operation()
        if first_operation not in ('=', None) and not isinstance(stmt, er.InstanceElement):  # TODO don't check for this.
            # `=` is always the last character in aug assignments -> -1
            operator = copy.copy(_operator)
            operator.string = operator.string[:-1]
            name = str(expr_list[0].name)
            parent = stmt.parent.get_parent_until(pr.Flow, reverse=True)
            if isinstance(parent, (pr.SubModule, fast.Module)):
                parent = er.ModuleWrapper(self, parent)
            left = self.find_types(parent, name, stmt.start_pos)
            if isinstance(stmt.parent, pr.ForFlow):
                # iterate through result and add the values, that's possible
            operator = copy.copy(first_operation)
            operator.value = operator.value[:-1]
            name = str(stmt.get_defined_names()[0])
            parent = er.wrap(self, stmt.get_parent_scope())
            left = self.find_types(parent, name, stmt.start_pos, search_global=True)
            if isinstance(stmt.get_parent_until(pr.ForStmt), pr.ForStmt):
                # Iterate through result and add the values, that's possible
                # only in for loops without clutter, because they are
                # predictable.
                for r in result:
                for r in types:
                    left = precedence.calculate(self, left, operator, [r])
                result = left
                types = left
            else:
                result = precedence.calculate(self, left, operator, result)
        elif len(stmt.get_defined_names()) > 1 and seek_name and ass_details:
            # Assignment checking is only important if the statement defines
            # multiple variables.
            new_result = []
            for ass_expression_list, op in ass_details:
                new_result += finder.find_assignments(ass_expression_list[0], result, seek_name)
            result = new_result
        return result
                types = precedence.calculate(self, left, operator, types)
        debug.dbg('eval_statement result %s', types)
        return types

    def eval_expression_list(self, expression_list):
        """
        `expression_list` can be either `pr.Array` or `list of list`.
        It is used to evaluate a two dimensional object, that has calls, arrays and
        operators in it.
        """
        debug.dbg('eval_expression_list: %s', expression_list)
        p = precedence.create_precedence(expression_list)
        return precedence.process_precedence_element(self, p) or []
    @memoize_default(evaluator_is_first_arg=True)
    def eval_element(self, element):
        if isinstance(element, iterable.AlreadyEvaluated):
            return list(element)
        elif isinstance(element, iterable.MergedNodes):
            return iterable.unite(self.eval_element(e) for e in element)

    def eval_statement_element(self, element):
        if pr.Array.is_type(element, pr.Array.NOARRAY):
        debug.dbg('eval_element %s@%s', element, element.start_pos)
        if isinstance(element, (pr.Name, pr.Literal)) or pr.is_node(element, 'atom'):
            return self._eval_atom(element)
        elif isinstance(element, pr.Keyword):
            # For False/True/None
            if element.value in ('False', 'True', 'None'):
                return [compiled.builtin.get_by_name(element.value)]
            else:
                return []
        elif element.isinstance(pr.Lambda):
            return [er.LambdaWrapper(self, element)]
        elif element.isinstance(er.LambdaWrapper):
            return [element]  # TODO this is no real evaluation.
        elif element.type == 'expr_stmt':
            return self.eval_statement(element)
        elif element.type == 'power':
            types = self._eval_atom(element.children[0])
            for trailer in element.children[1:]:
                if trailer == '**':  # has a power operation.
                    raise NotImplementedError
                types = self.eval_trailer(types, trailer)

            return types
        elif element.type in ('testlist_star_expr', 'testlist',):
            # The implicit tuple in statements.
            return [iterable.ImplicitTuple(self, element)]
        elif element.type in ('not_test', 'factor'):
            types = self.eval_element(element.children[-1])
            for operator in element.children[:-1]:
                types = list(precedence.factor_calculate(self, types, operator))
            return types
        elif element.type == 'test':
            # `x if foo else y` case.
            return (self.eval_element(element.children[0]) +
                    self.eval_element(element.children[-1]))
        elif element.type == 'operator':
            # Must be an ellipsis, other operators are not evaluated.
            return []  # Ignore for now.
        elif element.type == 'dotted_name':
            types = self._eval_atom(element.children[0])
            for next_name in element.children[2::2]:
                types = list(chain.from_iterable(self.find_types(typ, next_name)
                                                 for typ in types))
            return types
        else:
            return precedence.calculate_children(self, element.children)

    def _eval_atom(self, atom):
        """
        Basically to process ``atom`` nodes. The parser sometimes doesn't
        generate the node (because it has just one child). In that case an atom
        might be a name or a literal as well.
        """
        if isinstance(atom, pr.Name):
            # This is the first global lookup.
            stmt = atom.get_definition()
            scope = stmt.get_parent_until(pr.IsScope, include_current=True)
            if isinstance(stmt, pr.CompFor):
                stmt = stmt.get_parent_until((pr.ClassOrFunc, pr.ExprStmt))
            if stmt.type != 'expr_stmt':
                # We only need to adjust the start_pos for statements, because
                # there the name cannot be used.
                stmt = atom
            return self.find_types(scope, atom, stmt.start_pos, search_global=True)
        elif isinstance(atom, pr.Literal):
            return [compiled.create(self, atom.eval())]
        else:
            c = atom.children
            # Parentheses without commas are not tuples.
            if c[0] == '(' and not len(c) == 2 \
                    and not(pr.is_node(c[1], 'testlist_comp')
                            and len(c[1].children) > 1):
                return self.eval_element(c[1])
            try:
                lst_cmp = element[0].expression_list()[0]
                if not isinstance(lst_cmp, pr.ListComprehension):
                    raise IndexError
            except IndexError:
                r = list(itertools.chain.from_iterable(self.eval_statement(s)
                                                       for s in element))
                comp_for = c[1].children[1]
            except (IndexError, AttributeError):
                pass
            else:
                r = [iterable.GeneratorComprehension(self, lst_cmp)]
                call_path = element.generate_call_path()
                next(call_path, None)  # the first one has been used already
                return self.follow_path(call_path, r, element.parent)
        elif isinstance(element, pr.ListComprehension):
            return self.eval_statement(element.stmt)
        elif isinstance(element, pr.Lambda):
            return [er.Function(self, element)]
        # With things like params, these can also be functions...
        elif isinstance(element, pr.Base) and element.isinstance(
                er.Function, er.Class, er.Instance, iterable.ArrayInstance):
            return [element]
        # The string tokens are just operations (+, -, etc.)
        elif isinstance(element, compiled.CompiledObject):
            return [element]
        elif isinstance(element, Token):
            return []
        else:
            return self.eval_call(element)
                if isinstance(comp_for, pr.CompFor):
                    return [iterable.Comprehension.from_atom(self, atom)]
            return [iterable.Array(self, atom)]

    def eval_call(self, call):
        """Following a call means following a function, variable, string, etc."""
        path = call.generate_call_path()
    def eval_trailer(self, types, trailer):
        trailer_op, node = trailer.children[:2]
        if node == ')':  # `arglist` is optional.
            node = ()
        new_types = []
        for typ in types:
            debug.dbg('eval_trailer: %s in scope %s', trailer, typ)
            if trailer_op == '.':
                new_types += self.find_types(typ, node)
            elif trailer_op == '(':
                new_types += self.execute(typ, node, trailer)
            elif trailer_op == '[':
                try:
                    get = typ.get_index_types
                except AttributeError:
                    debug.warning("TypeError: '%s' object is not subscriptable"
                                  % typ)
                else:
                    new_types += get(self, node)
        return new_types
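
[Editor's note] ``eval_trailer`` is the heart of the new dotted-path evaluation: a ``power`` node such as ``a.b(c)[0]`` is an atom followed by trailers, and each trailer either looks an attribute up (``.``), executes (``(``), or subscripts (``[``). A rough sketch of the shape of the loop, not runnable as-is since the trailers are tree nodes rather than strings::

    # power node for a.b(c)[0]: atom 'a', trailers '.b', '(c)', '[0]'
    types = evaluator._eval_atom(atom)          # definitions of `a`
    for trailer in trailers:                    # strictly left to right
        types = evaluator.eval_trailer(types, trailer)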

        # find the statement of the Scope
        s = call
        while not s.parent.is_scope():
            s = s.parent
        scope = s.parent
        return self.eval_call_path(path, scope, s.start_pos)

    def eval_call_path(self, path, scope, position):
    def execute_evaluated(self, obj, *args):
        """
        Follows a path generated by `pr.StatementElement.generate_call_path()`.
        Execute a function with already executed arguments.
        """
        current = next(path)

        if isinstance(current, pr.Array):
            types = [iterable.Array(self, current)]
        else:
            if isinstance(current, pr.NamePart):
                # This is the first global lookup.
                types = self.find_types(scope, current, position=position,
                                        search_global=True)
            else:
                # for pr.Literal
                types = [compiled.create(self, current.value)]
            types = imports.follow_imports(self, types)

        return self.follow_path(path, types, scope)

    def follow_path(self, path, types, call_scope):
        """
        Follows a path like::

            self.follow_path(iter(['Foo', 'bar']), [a_type], from_somewhere)

        to follow a call like ``module.a_type.Foo.bar`` (in ``from_somewhere``).
        """
        results_new = []
        iter_paths = itertools.tee(path, len(types))

        for i, typ in enumerate(types):
            fp = self._follow_path(iter_paths[i], typ, call_scope)
            if fp is not None:
                results_new += fp
            else:
                # This means stop iteration.
                return types
        return results_new

    def _follow_path(self, path, typ, scope):
        """
        Uses a generator and tries to complete the path, e.g.::

            foo.bar.baz

        `_follow_path` is only responsible for completing `.bar.baz`, the rest
        is done in the `follow_call` function.
        """
        # current is either an Array or a Scope.
        try:
            current = next(path)
        except StopIteration:
            return None
        debug.dbg('_follow_path: %s in scope %s', current, typ)

        result = []
        if isinstance(current, pr.Array):
            # This must be an execution, either () or [].
            if current.type == pr.Array.LIST:
                if hasattr(typ, 'get_index_types'):
                    if isinstance(typ, compiled.CompiledObject):
                        # CompiledObject doesn't contain an evaluator instance.
                        result = typ.get_index_types(self, current)
                    else:
                        result = typ.get_index_types(current)
            elif current.type not in [pr.Array.DICT]:
                # Scope must be a class or func - make an instance or execution.
                result = self.execute(typ, current)
            else:
                # Curly braces are not allowed, because they make no sense.
                debug.warning('strange function call with {} %s %s', current, typ)
        else:
            # The function must not be decorated with something else.
            if typ.isinstance(er.Function):
                typ = typ.get_magic_function_scope()
            else:
                # This is the typical lookup while chaining things.
                if filter_private_variable(typ, scope, current):
                    return []
            types = self.find_types(typ, current)
            result = imports.follow_imports(self, types)
        return self.follow_path(path, result, scope)
        args = [iterable.AlreadyEvaluated([arg]) for arg in args]
        return self.execute(obj, args)

    @debug.increase_indent
    def execute(self, obj, params=()):
    def execute(self, obj, arguments=(), trailer=None):
        if not isinstance(arguments, param.Arguments):
            arguments = param.Arguments(self, arguments, trailer)

        if obj.isinstance(er.Function):
            obj = obj.get_decorated_func()

        debug.dbg('execute: %s %s', obj, params)
        debug.dbg('execute: %s %s', obj, arguments)
        try:
            # Some stdlib functions like super(), namedtuple(), etc. have been
            # hard-coded in Jedi to support them.
            return stdlib.execute(self, obj, params)
            return stdlib.execute(self, obj, arguments)
        except stdlib.NotInStdLib:
            pass

@@ -315,64 +281,82 @@ class Evaluator(object):
            debug.warning("no execution possible %s", obj)
            return []
        else:
            types = func(self, params)
            types = func(self, arguments)
            debug.dbg('execute result: %s in %s', types, obj)
            return types

    def goto(self, stmt, call_path):
        # Return the name defined in the call_path, if it's part of the
        # statement name definitions. Only return, if it's one name and one
        # name only. Otherwise it's a mixture between a definition and a
        # reference. In this case it's just a definition. So we stay on it.
        if len(call_path) == 1 and isinstance(call_path[0], pr.NamePart) \
                and call_path[0] in [d.names[-1] for d in stmt.get_defined_names()]:
            # Named params should get resolved to their param definitions.
            if pr.Array.is_type(stmt.parent, pr.Array.TUPLE, pr.Array.NOARRAY) \
                    and stmt.parent.previous:
                call = deep_ast_copy(stmt.parent.previous)
                # We have made a copy, so we're fine to change it.
                call.next = None
                while call.previous is not None:
                    call = call.previous
    def goto_definition(self, name):
        def_ = name.get_definition()
        if def_.type == 'expr_stmt' and name in def_.get_defined_names():
            return self.eval_statement(def_, name)
        call = helpers.call_of_name(name)
        return self.eval_element(call)
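
[Editor's note] Through the public API this surfaces as ``goto_definitions``. A hedged end-to-end example against the 0.8-era ``Script(source, line, column, path)`` signature::

    import jedi

    source = '''import datetime
datetime.date'''
    script = jedi.Script(source, 2, 13, 'example.py')
    print(script.goto_definitions())   # definitions of datetime.date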

    def goto(self, name):
        def resolve_implicit_imports(names):
            for name in names:
                if isinstance(name.parent, helpers.FakeImport):
                    # Those are implicit imports.
                    s = imports.ImportWrapper(self, name)
                    for n in s.follow(is_goto=True):
                        yield n
                yield name

        stmt = name.get_definition()
        par = name.parent
        if par.type == 'argument' and par.children[1] == '=' and par.children[0] == name:
            # Named param goto.
            trailer = par.parent
            if trailer.type == 'arglist':
                trailer = trailer.parent
            if trailer.type != 'classdef':
                if trailer.type == 'decorator':
                    types = self.eval_element(trailer.children[1])
                else:
                    i = trailer.parent.children.index(trailer)
                    to_evaluate = trailer.parent.children[:i]
                    types = self.eval_element(to_evaluate[0])
                    for trailer in to_evaluate[1:]:
                        types = self.eval_trailer(types, trailer)
                param_names = []
            named_param_name = stmt.get_defined_names()[0]
            for typ in self.eval_call(call):
                for param in typ.params:
                    if unicode(param.get_name()) == unicode(named_param_name):
                        param_names.append(param.get_name().names[-1])
                for typ in types:
                    try:
                        params = typ.params
                    except AttributeError:
                        pass
                    else:
                        param_names += [param.name for param in params
                                        if param.name.value == name.value]
                return param_names
            return [call_path[0]]
        elif isinstance(par, pr.ExprStmt) and name in par.get_defined_names():
            # Only take the parent, because if it's more complicated than just
            # a name it's something you can "goto" again.
            return [name]
        elif isinstance(par, (pr.Param, pr.Function, pr.Class)) and par.name is name:
            return [name]
        elif isinstance(stmt, pr.Import):
            return imports.ImportWrapper(self, name).follow(is_goto=True)
        elif par.type == 'dotted_name':  # Is a decorator.
            index = par.children.index(name)
            if index > 0:
                new_dotted = helpers.deep_ast_copy(par)
                new_dotted.children[index - 1:] = []
                types = self.eval_element(new_dotted)
                return resolve_implicit_imports(iterable.unite(
                    self.find_types(typ, name, is_goto=True) for typ in types
                ))

        scope = stmt.get_parent_scope()
        pos = stmt.start_pos
        first_part, search_name_part = call_path[:-1], call_path[-1]

        if first_part:
            scopes = self.eval_call_path(iter(first_part), scope, pos)
            search_global = False
            pos = None
        scope = name.get_parent_scope()
        if pr.is_node(name.parent, 'trailer'):
            call = helpers.call_of_name(name, cut_own_trailer=True)
            types = self.eval_element(call)
            return resolve_implicit_imports(iterable.unite(
                self.find_types(typ, name, is_goto=True) for typ in types
            ))
        else:
            scopes = [scope]
            search_global = True

        follow_res = []
        for s in scopes:
            follow_res += self.find_types(s, search_name_part, pos,
                                          search_global=search_global, is_goto=True)
        return follow_res


def filter_private_variable(scope, call_scope, var_name):
    """Private variables begin with a double underscore `__`."""
    var_name = str(var_name)  # var_name could be a NamePart
    if isinstance(var_name, (str, unicode)) and isinstance(scope, er.Instance)\
            and var_name.startswith('__') and not var_name.endswith('__'):
        s = call_scope.get_parent_until((pr.Class, er.Instance, compiled.CompiledObject))
        if s != scope:
            if isinstance(scope.base, compiled.CompiledObject):
                if s != scope.base:
                    return True
            else:
                if s != scope.base.base:
                    return True
    return False
        if stmt.type != 'expr_stmt':
            # We only need to adjust the start_pos for statements, because
            # there the name cannot be used.
            stmt = name
        return self.find_types(scope, name, stmt.start_pos,
                               search_global=True, is_goto=True)
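
[Editor's note] One behavior worth calling out in the rewritten ``goto``: for a named argument like ``f(x=3)``, goto on ``x`` now resolves to the ``x`` parameter in the signature of whatever ``f`` evaluates to. An illustration of the code it handles (plain Python, nothing jedi-specific)::

    def f(x, y=0):
        return x + y

    f(x=3)   # goto on this `x` lands on the `x` in f's signature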

@@ -1,9 +1,8 @@
"""
Module for static analysis.
"""

from jedi import debug
from jedi.parser import representation as pr
from jedi.parser import tree as pr
from jedi.evaluate.compiled import CompiledObject


@@ -64,8 +63,8 @@ class Error(object):

    def __repr__(self):
        return '<%s %s: %s@%s,%s>' % (self.__class__.__name__,
                                      self.name, self.path,
                                      self._start_pos[0], self._start_pos[1])
                                      self.name, self.path,
                                      self._start_pos[0], self._start_pos[1])


class Warning(Error):
@@ -73,6 +72,13 @@ class Warning(Error):


def add(evaluator, name, jedi_obj, message=None, typ=Error, payload=None):
    from jedi.evaluate.iterable import MergedNodes
    while isinstance(jedi_obj, MergedNodes):
        if len(jedi_obj) != 1:
            # TODO is this kosher?
            return
        jedi_obj = list(jedi_obj)[0]

    exception = CODES[name][1]
    if _check_for_exception_catch(evaluator, jedi_obj, exception, payload):
        return
@@ -97,8 +103,8 @@ def _check_for_setattr(instance):
               for stmt in stmts)


def add_attribute_error(evaluator, scope, name_part):
    message = ('AttributeError: %s has no attribute %s.' % (scope, name_part))
def add_attribute_error(evaluator, scope, name):
    message = ('AttributeError: %s has no attribute %s.' % (scope, name))
    from jedi.evaluate.representation import Instance
    # Check for __getattr__/__getattribute__ existence and issue a warning
    # instead of an error, if that happens.
@@ -115,8 +121,8 @@ def add_attribute_error(evaluator, scope, name_part):
    else:
        typ = Error

    payload = scope, name_part
    add(evaluator, 'attribute-error', name_part, message, typ, payload)
    payload = scope, name
    add(evaluator, 'attribute-error', name, message, typ, payload)


def _check_for_exception_catch(evaluator, jedi_obj, exception, payload=None):
@@ -127,64 +133,77 @@ def _check_for_exception_catch(evaluator, jedi_obj, exception, payload=None):
    it.
    Returns True if the exception was caught.
    """
    def check_match(cls):
    def check_match(cls, exception):
        try:
            return isinstance(cls, CompiledObject) and issubclass(exception, cls.obj)
        except TypeError:
            return False

    def check_try_for_except(obj):
        while obj.next is not None:
            obj = obj.next
            if not obj.inputs:
                # No import implies a `except:` catch, which catches
                # everything.
                return True
    def check_try_for_except(obj, exception):
        # Only nodes in try
        iterator = iter(obj.children)
        for branch_type in iterator:
            colon = next(iterator)
            suite = next(iterator)
            if branch_type == 'try' \
                    and not (branch_type.start_pos < jedi_obj.start_pos <= suite.end_pos):
                return False

            for i in obj.inputs:
                except_classes = evaluator.eval_statement(i)
        for node in obj.except_clauses():
            if node is None:
                return True  # An exception block that catches everything.
            else:
                except_classes = evaluator.eval_element(node)
                for cls in except_classes:
                    from jedi.evaluate import iterable
                    if isinstance(cls, iterable.Array) and cls.type == 'tuple':
                        # multiple exceptions
                        for c in cls.values():
                            if check_match(c):
                            if check_match(c, exception):
                                return True
                    else:
                        if check_match(cls):
                        if check_match(cls, exception):
                            return True
        return False

    def check_hasattr(stmt):
        expression_list = stmt.expression_list()
    def check_hasattr(node, suite):
        try:
            assert len(expression_list) == 1
            call = expression_list[0]
            assert isinstance(call, pr.Call) and str(call.name) == 'hasattr'
            assert call.next_is_execution()
            execution = call.next
            assert execution and len(execution) == 2
            assert suite.start_pos <= jedi_obj.start_pos < suite.end_pos
            assert node.type == 'power'
            base = node.children[0]
            assert base.type == 'name' and base.value == 'hasattr'
            trailer = node.children[1]
            assert trailer.type == 'trailer'
            arglist = trailer.children[1]
            assert arglist.type == 'arglist'
            from jedi.evaluate.param import Arguments
            args = list(Arguments(evaluator, arglist).unpack())
            # Arguments should be very simple
            assert len(args) == 2

            # check if the names match
            names = evaluator.eval_statement(execution[1])
            # Check name
            key, values = args[1]
            assert len(values) == 1
            names = evaluator.eval_element(values[0])
            assert len(names) == 1 and isinstance(names[0], CompiledObject)
            assert names[0].obj == str(payload[1])

            objects = evaluator.eval_statement(execution[0])
            # Check objects
            key, values = args[0]
            assert len(values) == 1
            objects = evaluator.eval_element(values[0])
            return payload[0] in objects
        except AssertionError:
            pass
        return False
            return False

    obj = jedi_obj
    while obj is not None and not obj.isinstance(pr.Function, pr.Class):
        if obj.isinstance(pr.Flow):
            # try/except catch check
            if obj.command == 'try' and check_try_for_except(obj):
            if obj.isinstance(pr.TryStmt) and check_try_for_except(obj, exception):
                return True
            # hasattr check
            if exception == AttributeError and obj.command in ('if', 'while'):
                if obj.inputs and check_hasattr(obj.inputs[0]):
            if exception == AttributeError and obj.isinstance(pr.IfStmt, pr.WhileStmt):
                if check_hasattr(obj.children[1], obj.children[3]):
                    return True
        obj = obj.parent
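
[Editor's note] Taken together, these checks mean the analysis stays quiet when a questionable access is syntactically guarded. Both patterns below would suppress an attribute-error report, per ``check_try_for_except`` and ``check_hasattr`` respectively (the example code is ours)::

    import json

    try:
        json.dumpz          # typo, but caught right here
    except AttributeError:
        pass

    if hasattr(json, 'dumpz'):
        json.dumpz()        # guarded by hasattr, so not reported either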

@@ -196,41 +215,88 @@ def get_module_statements(module):
    Returns the statements used in a module. All these statements should be
    evaluated to check for potential exceptions.
    """
    def add_stmts(stmts):
    def check_children(node):
        try:
            children = node.children
        except AttributeError:
            return []
        else:
            nodes = []
            for child in children:
                nodes += check_children(child)
                if child.type == 'trailer':
                    c = child.children
                    if c[0] == '(' and c[1] != ')':
                        if c[1].type != 'arglist':
                            if c[1].type == 'argument':
                                nodes.append(c[1].children[-1])
                            else:
                                nodes.append(c[1])
                        else:
                            for argument in c[1].children:
                                if argument.type == 'argument':
                                    nodes.append(argument.children[-1])
                                elif argument.type != 'operator':
                                    nodes.append(argument)
            return nodes

    def add_nodes(nodes):
        new = set()
        for stmt in stmts:
            if isinstance(stmt, pr.Flow):
                while stmt is not None:
                    new |= add_stmts(stmt.inputs)
                    stmt = stmt.next
                continue
            if isinstance(stmt, pr.KeywordStatement):
                stmt = stmt.stmt
                if stmt is None:
                    continue
        for node in nodes:
            if isinstance(node, pr.Flow):
                children = node.children
                if node.type == 'for_stmt':
                    children = children[2:]  # Don't want to include the names.
                # Pick the suite/simple_stmt.
                new |= add_nodes(children)
            elif node.type in ('simple_stmt', 'suite'):
                new |= add_nodes(node.children)
            elif node.type in ('return_stmt', 'yield_expr'):
                try:
                    new.add(node.children[1])
                except IndexError:
                    pass
            elif node.type not in ('whitespace', 'operator', 'keyword',
                                   'parameters', 'decorated', 'except_clause') \
                    and not isinstance(node, (pr.ClassOrFunc, pr.Import)):
                new.add(node)

            for expression in stmt.expression_list():
                if isinstance(expression, pr.Array):
                    new |= add_stmts(expression.values)

                if isinstance(expression, pr.StatementElement):
                    for element in expression.generate_call_path():
                        if isinstance(element, pr.Array):
                            new |= add_stmts(element.values)
            new.add(stmt)
                try:
                    children = node.children
                except AttributeError:
                    pass
                else:
                    for next_node in children:
                        new.update(check_children(node))
                        if next_node.type != 'keyword' and node.type != 'expr_stmt':
                            new.add(node)
        return new

    stmts = set()
    imports = set()
    nodes = set()
    import_names = set()
    decorated_funcs = []
    for scope in module.walk():
        imports |= set(scope.imports)
        stmts |= add_stmts(scope.statements)
        stmts |= add_stmts(r for r in scope.returns if r is not None)
        for imp in set(scope.imports):
            import_names |= set(imp.get_defined_names())
            if imp.is_nested():
                import_names |= set(path[-1] for path in imp.paths())

        children = scope.children
        if isinstance(scope, pr.ClassOrFunc):
            children = children[2:]  # We don't want to include the class name.
        nodes |= add_nodes(children)

        for flow in scope.flows:
            if flow.type == 'for_stmt':
                nodes.add(flow.children[3])
            elif flow.type == 'try_stmt':
                nodes.update(e for e in flow.except_clauses() if e is not None)

        try:
            decorators = scope.decorators
            decorators = scope.get_decorators()
        except AttributeError:
            pass
        else:
            stmts |= add_stmts(decorators)
    return stmts, imports
            if decorators:
                decorated_funcs.append(scope)
    return nodes, import_names, decorated_funcs

@@ -4,10 +4,12 @@
- ``CachedMetaClass`` uses ``memoize_default`` to do the same with classes.
"""

import inspect

NO_DEFAULT = object()


def memoize_default(default=None, evaluator_is_first_arg=False, second_arg_is_evaluator=False):
def memoize_default(default=NO_DEFAULT, evaluator_is_first_arg=False, second_arg_is_evaluator=False):
    """ This is a typical memoization decorator, BUT there is one difference:
    To prevent recursion it sets defaults.

@@ -37,6 +39,8 @@ def memoize_default(default=None, evaluator_is_first_arg=False, second_arg_is_ev
            if default is not NO_DEFAULT:
                memo[key] = default
            rv = function(obj, *args, **kwargs)
            if inspect.isgenerator(rv):
                rv = list(rv)
            memo[key] = rv
            return rv
    return wrapper
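
[Editor's note] Pre-seeding ``memo[key] = default`` is what breaks evaluation cycles: a recursive re-entry with the same key returns the default instead of recursing forever, and the new ``inspect.isgenerator`` check materializes generators so the cached value can be iterated more than once. A self-contained simplification of the recursion-breaking idea — this is not the jedi implementation, which also threads the evaluator through the key::

    NO_DEFAULT = object()

    def memoize_default(default=NO_DEFAULT):
        def decorator(func):
            memo = {}
            def wrapper(*args):
                if args in memo:
                    return memo[args]
                if default is not NO_DEFAULT:
                    memo[args] = default   # re-entrant calls see this
                result = func(*args)
                memo[args] = result
                return result
            return wrapper
        return decorator

    @memoize_default(default=[])
    def follow(node):
        # without the default, this self-reference would never terminate
        return follow(node) if node else [node]

    print(follow(1))   # [] - the inner call hit the pre-seeded default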

@@ -46,8 +50,8 @@ def memoize_default(default=None, evaluator_is_first_arg=False, second_arg_is_ev
class CachedMetaClass(type):
    """
    This is basically almost the same as the decorator above, it just caches
    class initializations. I haven't found any other way, so I'm doing it with
    meta classes.
    class initializations. Either you do it this way or with decorators, but
    with decorators you lose class access (isinstance, etc).
    """
    @memoize_default(None, second_arg_is_evaluator=True)
    def __call__(self, *args, **kwargs):

@@ -9,9 +9,9 @@ from functools import partial

from jedi._compatibility import builtins as _builtins, unicode
from jedi import debug
from jedi.cache import underscore_memoization, memoize
from jedi.cache import underscore_memoization, memoize_method
from jedi.evaluate.sys_path import get_sys_path
from jedi.parser.representation import Param, SubModule, Base, Operator
from jedi.parser.tree import Param, Base, Operator, zero_position_modifier
from jedi.evaluate.helpers import FakeName
from . import fake

@@ -39,8 +39,8 @@ class CheckAttribute(object):
class CompiledObject(Base):
    # comply with the parser
    start_pos = 0, 0
    asserts = []
    path = None  # modules have this attribute - set it to None.
    used_names = {}  # To be consistent with modules.

    def __init__(self, obj, parent=None):
        self.obj = obj
@@ -65,7 +65,7 @@ class CompiledObject(Base):

    @CheckAttribute
    def py__mro__(self, evaluator):
        return tuple(create(evaluator, cls) for cls in self.obj.__mro__)
        return tuple(create(evaluator, cls, self.parent) for cls in self.obj.__mro__)

    @CheckAttribute
    def py__bases__(self, evaluator):
@@ -86,16 +86,11 @@ class CompiledObject(Base):
        params_str, ret = self._parse_function_doc()
        tokens = params_str.split(',')
        params = []
        module = SubModule(self.get_parent_until().name)
        # it seems like start_pos/end_pos is always (0, 0) for a compiled
        # object
        start_pos, end_pos = (0, 0), (0, 0)
        for p in tokens:
            parts = [FakeName(part) for part in p.strip().split('=')]
            if len(parts) >= 2:
                parts.insert(1, Operator(module, '=', module, (0, 0)))
            params.append(Param(module, parts, start_pos,
                                end_pos, builtin))
            if len(parts) > 1:
                parts.insert(1, Operator(zero_position_modifier, '=', (0, 0)))
            params.append(Param(parts, self))
        return params

    def __repr__(self):
@@ -108,7 +103,7 @@ class CompiledObject(Base):

        return _parse_function_doc(self.doc)

    def type(self):
    def api_type(self):
        if fake.is_class_instance(self.obj):
            return 'instance'

@@ -121,6 +116,18 @@ class CompiledObject(Base):
                or inspect.ismethoddescriptor(cls):
            return 'function'

    @property
    def type(self):
        """Imitate the tree.Node.type values."""
        cls = self._cls().obj
        if inspect.isclass(cls):
            return 'classdef'
        elif inspect.ismodule(cls):
            return 'file_input'
        elif inspect.isbuiltin(cls) or inspect.ismethod(cls) \
                or inspect.ismethoddescriptor(cls):
            return 'funcdef'
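
[Editor's note] With this property a ``CompiledObject`` answers ``.type`` with the same strings a tree node would (``'classdef'``, ``'file_input'``, ``'funcdef'``), so evaluator code can branch on ``.type`` without caring whether a definition came from source or from a compiled module. The mapping rests on plain ``inspect`` predicates, e.g.::

    import inspect

    print(inspect.isclass(int))       # True  -> 'classdef'
    print(inspect.ismodule(inspect))  # True  -> 'file_input'
    print(inspect.isbuiltin(len))     # True  -> 'funcdef'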
|
||||
|
||||
@underscore_memoization
|
||||
def _cls(self):
|
||||
# Ensures that a CompiledObject is returned that is not an instance (like list)
|
||||
@@ -134,22 +141,21 @@ class CompiledObject(Base):
|
||||
return CompiledObject(c, self.parent)
|
||||
return self
|
||||
|
||||
def get_defined_names(self):
|
||||
if inspect.ismodule(self.obj):
|
||||
return self.instance_names()
|
||||
else:
|
||||
return type_names + self.instance_names()
|
||||
@property
|
||||
def names_dict(self):
|
||||
# For compatibility with `representation.Class`.
|
||||
return self.names_dicts(False)[0]
|
||||
|
||||
def scope_names_generator(self, position=None, add_class_vars=True):
|
||||
yield self, self.get_defined_names()
|
||||
def names_dicts(self, search_global, is_instance=False):
|
||||
return self._names_dict_ensure_one_dict(is_instance)
|
||||
|
||||
@underscore_memoization
|
||||
def instance_names(self):
|
||||
names = []
|
||||
cls = self._cls()
|
||||
for name in dir(cls.obj):
|
||||
names.append(CompiledName(cls, name))
|
||||
return names
|
||||
@memoize_method
|
||||
def _names_dict_ensure_one_dict(self, is_instance):
|
||||
"""
|
||||
search_global shouldn't change the fact that there's one dict, this way
|
||||
there's only one `object`.
|
||||
"""
|
||||
return [LazyNamesDict(self._cls(), is_instance)]
|
||||
|
||||
def get_subscope_by_name(self, name):
|
||||
if name in dir(self._cls().obj):
|
||||
@@ -157,7 +163,7 @@ class CompiledObject(Base):
|
||||
else:
|
||||
raise KeyError("CompiledObject doesn't have an attribute '%s'." % name)
|
||||
|
||||
def get_index_types(self, evaluator, index_array):
|
||||
def get_index_types(self, evaluator, index_array=()):
|
||||
# If the object doesn't have `__getitem__`, just raise the
|
||||
# AttributeError.
|
||||
if not hasattr(self.obj, '__getitem__'):
|
||||
@@ -194,7 +200,7 @@ class CompiledObject(Base):
|
||||
return FakeName(self._cls().obj.__name__, self)
|
||||
|
||||
def _execute_function(self, evaluator, params):
|
||||
if self.type() != 'function':
|
||||
if self.type != 'funcdef':
|
||||
return
|
||||
|
||||
for name in self._parse_function_doc()[1].split():
|
||||
@@ -235,12 +241,47 @@ class CompiledObject(Base):
|
||||
return [] # Builtins don't have imports
|
||||
|
||||
|
||||
class LazyNamesDict(object):
|
||||
"""
|
||||
A names_dict instance for compiled objects, resembles the parser.tree.
|
||||
"""
|
||||
def __init__(self, compiled_obj, is_instance):
|
||||
self._compiled_obj = compiled_obj
|
||||
self._is_instance = is_instance
|
||||
|
||||
def __iter__(self):
|
||||
return (v[0].value for v in self.values())
|
||||
|
||||
@memoize_method
|
||||
def __getitem__(self, name):
|
||||
try:
|
||||
getattr(self._compiled_obj.obj, name)
|
||||
except AttributeError:
|
||||
raise KeyError('%s in %s not found.' % (name, self._compiled_obj))
|
||||
return [CompiledName(self._compiled_obj, name)]
|
||||
|
||||
def values(self):
|
||||
obj = self._compiled_obj.obj
|
||||
|
||||
values = []
|
||||
for name in dir(obj):
|
||||
try:
|
||||
values.append(self[name])
|
||||
except KeyError:
|
||||
# The dir function can be wrong.
|
||||
pass
|
||||
|
||||
# dir doesn't include the type names.
|
||||
if not inspect.ismodule(obj) and obj != type and not self._is_instance:
|
||||
values += _type_names_dict.values()
|
||||
return values
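
# The idea of LazyNamesDict in miniature (an illustrative sketch, independent
# of the classes above; ``_LazyDirDict`` is a made-up name): a mapping that
# defers to ``getattr``/``dir`` instead of materializing every member.
class _LazyDirDict(object):
    def __init__(self, obj):
        self._obj = obj

    def __getitem__(self, name):
        # Probe with getattr and translate AttributeError into KeyError,
        # mirroring LazyNamesDict.__getitem__ above.
        try:
            getattr(self._obj, name)
        except AttributeError:
            raise KeyError(name)
        return name

    def values(self):
        # ``dir()`` can report names that fail on access, so filter through
        # ``self[...]`` just like LazyNamesDict.values does.
        result = []
        for name in dir(self._obj):
            try:
                result.append(self[name])
            except KeyError:
                pass
        return result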


class CompiledName(FakeName):
    def __init__(self, obj, name):
        super(CompiledName, self).__init__(name)
        self._obj = obj
        self.name = name
        self.start_pos = 0, 0  # an illegal start_pos, to make sorting easy.

    def __repr__(self):
        try:
@@ -249,6 +290,9 @@ class CompiledName(FakeName):
            name = None
        return '<%s: (%s).%s>' % (type(self).__name__, name, self.name)

    def is_definition(self):
        return True

    @property
    @underscore_memoization
    def parent(self):
@@ -323,7 +367,14 @@ def load_module(path, name):
            sys_path.insert(0, p)

    temp, sys.path = sys.path, sys_path
    __import__(dotted_path)
    try:
        __import__(dotted_path)
    except RuntimeError:
        if 'PySide' in dotted_path or 'PyQt' in dotted_path:
            # RuntimeError: the PyQt4.QtCore and PyQt5.QtCore modules both wrap
            # the QObject class.
            # See https://github.com/davidhalter/jedi/pull/483
            return None
    # Just access the cache after import, because of #59 as well as the very
    # complicated import structure of Python.
    module = sys.modules[dotted_path]
@@ -400,14 +451,10 @@ def _parse_function_doc(doc):
    return param_str, ret


class Builtin(CompiledObject, Base):
    @memoize
class Builtin(CompiledObject):
    @memoize_method
    def get_by_name(self, name):
        item = [n for n in self.get_defined_names() if n.get_code() == name][0]
        return item.parent

    def is_scope(self):
        return True
        return self.names_dict[name][0].parent


def _a_generator(foo):
@@ -436,8 +483,7 @@ def _create_from_name(module, parent, name):
builtin = Builtin(_builtins)
magic_function_class = CompiledObject(type(load_module), parent=builtin)
generator_obj = CompiledObject(_a_generator(1.0))
type_names = []  # Need this, because it's returned in get_defined_names.
type_names = builtin.get_by_name('type').get_defined_names()
_type_names_dict = builtin.get_by_name('type').names_dict
none_obj = builtin.get_by_name('None')
false_obj = builtin.get_by_name('False')
true_obj = builtin.get_by_name('True')
@@ -483,7 +529,7 @@ def create(evaluator, obj, parent=builtin, module=None):
        return faked

    try:
        if obj.__module__ in ('builtins', '__builtin__'):
        if parent == builtin and obj.__module__ in ('builtins', '__builtin__'):
            return builtin.get_by_name(obj.__name__)
    except AttributeError:
        pass

@@ -8,9 +8,8 @@ import os
import inspect

from jedi._compatibility import is_py3, builtins, unicode
from jedi.parser import Parser
from jedi.parser import tokenize
from jedi.parser.representation import Class
from jedi.parser import Parser, load_grammar
from jedi.parser import tree as pt
from jedi.evaluate.helpers import FakeName

modules = {}
@@ -31,16 +30,17 @@ def _load_faked_module(module):
    except IOError:
        modules[module_name] = None
        return
    module = Parser(unicode(source), module_name).module
    grammar = load_grammar('grammar3.4')
    module = Parser(grammar, unicode(source), module_name).module
    modules[module_name] = module

    if module_name == 'builtins' and not is_py3:
        # There are two implementations of `open` for either python 2/3.
        # -> Rename the python2 version (`look at fake/builtins.pym`).
        open_func = search_scope(module, 'open')
        open_func.name = FakeName('open_python3')
        open_func.children[1] = FakeName('open_python3')
        open_func = search_scope(module, 'open_python2')
        open_func.name = FakeName('open')
        open_func.children[1] = FakeName('open')
    return module


@@ -100,11 +100,16 @@ def _faked(module, obj, name):
def get_faked(module, obj, name=None):
    obj = obj.__class__ if is_class_instance(obj) else obj
    result = _faked(module, obj, name)
    if not isinstance(result, Class) and result is not None:
    # TODO can this ever happen? result None? If so, document it!
    if not isinstance(result, pt.Class) and result is not None:
        # Set the docstr which was previously not set (faked modules don't
        # contain it).
        doc = '''"""%s"""''' % obj.__doc__  # TODO need escapes.
        result.add_docstr(tokenize.Token(tokenize.STRING, doc, (0, 0)))
        doc = '"""%s"""' % obj.__doc__  # TODO need escapes.
        suite = result.children[-1]
        string = pt.String(pt.zero_position_modifier, doc, (0, 0), '')
        new_line = pt.Whitespace('\n', (0, 0), '')
        docstr_node = pt.Node('simple_stmt', [string, new_line])
        suite.children.insert(2, docstr_node)
    return result


@@ -5,5 +5,5 @@ class partial():
        self.__keywords = keywords

    def __call__(self, *args, **kwargs):
        # I know this doesn't work in Python, but in Jedi it does ;-)
        return self.__func(*self.__args, *args, **self.__keywords, **kwargs)
        # TODO should be **dict(self.__keywords, **kwargs)
        return self.__func(*(self.__args + args), **self.__keywords)
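
# What the fake above buys (a sketch; ``_add`` and ``inc`` are made-up names):
# stored positional args are prepended to the call args, so Jedi infers the
# call below just like a direct ``_add(1, 2)``.
def _add(a, b):
    return a + b

inc = partial(_add, 1)
inc(2)  # inferred as an int, the same as ``_add(1, 2)``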

@@ -156,7 +156,7 @@ class set():
            yield i

    def pop(self):
        return self.__iterable.pop()
        return list(self.__iterable)[-1]

    def copy(self):
        return self

@@ -20,15 +20,14 @@ from itertools import chain
from textwrap import dedent

from jedi.evaluate.cache import memoize_default
from jedi.parser import Parser
from jedi.parser import Parser, load_grammar
from jedi.common import indent_block
from jedi.evaluate.iterable import Array
from jedi.evaluate import helpers
from jedi.evaluate.iterable import Array, FakeSequence, AlreadyEvaluated


DOCSTRING_PARAM_PATTERNS = [
    r'\s*:type\s+%s:\s*([^\n]+)',  # Sphinx
    r'\s*:param\s+(\w+)\s+%s:[^\n]+',  # Sphinx param with type
    r'\s*:param\s+(\w+)\s+%s:[^\n]+',  # Sphinx param with type
    r'\s*@type\s+%s:\s*([^\n]+)',  # Epydoc
]
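
# How these patterns are meant to be used (a sketch; ``foo`` is a
# hypothetical parameter name): ``%s`` is filled with the parameter name
# before matching, and the captured group is the declared type string.
import re

_match = re.search(DOCSTRING_PARAM_PATTERNS[0] % 'foo', ':type foo: str')
assert _match.group(1) == 'str'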

@@ -117,7 +116,7 @@ def _strip_rst_role(type_str):
def _evaluate_for_statement_string(evaluator, string, module):
    code = dedent("""
    def pseudo_docstring_stuff():
        '''Create a pseudo function for docstring statements.'''
        # Create a pseudo function for docstring statements.
        %s
    """)
    if string is None:
@@ -128,11 +127,16 @@ def _evaluate_for_statement_string(evaluator, string, module):
        # (e.g., 'threading' in 'threading.Thread').
        string = 'import %s\n' % element + string

    p = Parser(code % indent_block(string), no_docstr=True)
    pseudo_cls = p.module.subscopes[0]
    # Take the default grammar here; if we load the Python 2.7 grammar, it
    # will be impossible to use `...` (Ellipsis) as a token. Docstring types
    # don't need to conform with the current grammar.
    p = Parser(load_grammar(), code % indent_block(string))
    try:
        stmt = pseudo_cls.statements[-1]
    except IndexError:
        pseudo_cls = p.module.subscopes[0]
        # First pick suite, then simple_stmt (-2 for DEDENT) and then the node,
        # which is also not the last item, because there's a newline.
        stmt = pseudo_cls.children[-1].children[-2].children[-2]
    except (AttributeError, IndexError):
        return []

    # Use the module of the param.
@@ -149,7 +153,7 @@ def _execute_types_in_stmt(evaluator, stmt):
    doesn't include tuple, list and dict literals, because the stuff they
    contain is executed. (Used as type information).
    """
    definitions = evaluator.eval_statement(stmt)
    definitions = evaluator.eval_element(stmt)
    return chain.from_iterable(_execute_array_values(evaluator, d) for d in definitions)


@@ -162,10 +166,8 @@ def _execute_array_values(evaluator, array):
        values = []
        for typ in array.values():
            objects = _execute_array_values(evaluator, typ)
            values.append(helpers.FakeStatement(objects))
        arr = helpers.FakeArray(values, array.parent, array.type)
        # Wrap it, because that's what the evaluator knows.
        return [Array(evaluator, arr)]
            values.append(AlreadyEvaluated(objects))
        return [FakeSequence(evaluator, values, array.type)]
    else:
        return evaluator.execute(array)

@@ -176,7 +178,7 @@ def follow_param(evaluator, param):

    return [p
            for param_str in _search_param_in_docstr(func.raw_doc,
                                                     str(param.get_name()))
                                                     str(param.name))
            for p in _evaluate_for_statement_string(evaluator, param_str,
                                                    param.get_parent_until())]


@@ -16,18 +16,15 @@ It works as follows:
- search for function calls named ``foo``
- execute these calls and check the input. This works with a ``ParamListener``.
"""
from itertools import chain

from jedi._compatibility import unicode
from jedi.parser import representation as pr
from jedi.parser import tree as pr
from jedi import settings
from jedi.evaluate import helpers
from jedi import debug
from jedi.evaluate.cache import memoize_default
from jedi.evaluate import imports

# This is something like the sys.path, but only for searching params. It means
# that this is the order in which Jedi searches params.
search_param_modules = ['.']


class ParamListener(object):
    """
@@ -37,25 +34,43 @@ class ParamListener(object):
        self.param_possibilities = []

    def execute(self, params):
        self.param_possibilities.append(params)
        self.param_possibilities += params


@memoize_default([], evaluator_is_first_arg=True)
@debug.increase_indent
def search_params(evaluator, param):
    """
    This is a dynamic search for params. If you try to complete a type:
    A dynamic search for param values. If you try to complete a type:

    >>> def func(foo):
    ...     foo
    >>> func(1)
    >>> func("")

    It is not known what the type is, because it cannot be guessed with
    recursive madness. Therefore one has to analyse the statements that are
    calling the function, as well as analyzing the incoming params.
    It is not known what the type of ``foo`` is without analysing the whole
    code. You have to look for all calls to ``func`` to find out what ``foo``
    possibly is.
    """
    if not settings.dynamic_params:
        return []
    debug.dbg('Dynamic param search for %s', param)

    func = param.get_parent_until(pr.Function)
    # Compare the param names.
    names = [n for n in search_function_call(evaluator, func)
             if n.value == param.name.value]
    # Evaluate the ExecutedParams to types.
    result = list(chain.from_iterable(n.parent.eval(evaluator) for n in names))
    debug.dbg('Dynamic param result %s', result)
    return result
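
# What the dynamic search enables in practice (a sketch against the public
# API of this era; the Script signature is hedged, it changed in later
# releases): completing after ``foo.`` works because the call ``func("")``
# is found and evaluated.
import jedi

_source = '''\
def func(foo):
    foo.
func("")
'''
# Line 2, column 8 is right after ``foo.``; with dynamic params enabled,
# ``foo`` is inferred as ``str``, so str methods are offered.
_completions = jedi.Script(_source, 2, 8, 'example.py').completions()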


@memoize_default([], evaluator_is_first_arg=True)
def search_function_call(evaluator, func):
    """
    Returns a list of param names.
    """
    from jedi.evaluate import representation as er

    def get_params_for_module(module):
        """
@@ -64,82 +79,54 @@ def search_params(evaluator, param):
    @memoize_default([], evaluator_is_first_arg=True)
    def get_posibilities(evaluator, module, func_name):
        try:
            possible_stmts = module.used_names[func_name]
            names = module.used_names[func_name]
        except KeyError:
            return []

        for stmt in possible_stmts:
            if isinstance(stmt, pr.Import):
                continue
            calls = helpers.scan_statement_for_calls(stmt, func_name)
            for c in calls:
                # no execution means that params cannot be set
                call_path = list(c.generate_call_path())
                pos = c.start_pos
                scope = stmt.parent
        for name in names:
            parent = name.parent
            if pr.is_node(parent, 'trailer'):
                parent = parent.parent

                # All of this is just to avoid executing certain parts
                # (speed improvement); basically we could just call
                # ``eval_call_path`` on the call_path and it would also
                # work.
                def listRightIndex(lst, value):
                    return len(lst) - lst[-1::-1].index(value) - 1
            trailer = None
            if pr.is_node(parent, 'power'):
                for t in parent.children[1:]:
                    if t == '**':
                        break
                    if t.start_pos > name.start_pos and t.children[0] == '(':
                        trailer = t
                        break
            if trailer is not None:
                types = evaluator.goto_definition(name)

                # Need to take right index, because there could be a
                # func usage before.
                call_path_simple = [unicode(d) if isinstance(d, pr.NamePart)
                                    else d for d in call_path]
                i = listRightIndex(call_path_simple, func_name)
                before, after = call_path[:i], call_path[i + 1:]
                if not after and not call_path_simple.index(func_name) != i:
                    continue
                scopes = [scope]
                if before:
                    scopes = evaluator.eval_call_path(iter(before), c.parent, pos)
                    pos = None
                from jedi.evaluate import representation as er
                for scope in scopes:
                    # Not resolving decorators is a speed hack:
                    # By ignoring them, we get the function that is
                    # probably called really fast. If it's not called, it
                    # doesn't matter. But this is a way to get potential
                    # candidates for calling that function really quick!
                    s = evaluator.find_types(scope, func_name, position=pos,
                                             search_global=not before,
                                             resolve_decorator=False)
                # We have to remove decorators, because they are not the
                # "original" functions; this way we can easily compare.
                # At the same time we also have to remove InstanceElements.
                undec = []
                for escope in types:
                    if escope.isinstance(er.Function, er.Instance) \
                            and escope.decorates is not None:
                        undec.append(escope.decorates)
                    elif isinstance(escope, er.InstanceElement):
                        undec.append(escope.var)
                    else:
                        undec.append(escope)

                    c = [getattr(escope, 'base_func', None) or escope.base
                         for escope in s
                         if escope.isinstance(er.Function, er.Class)]
                    if compare in c:
                        # Only execute it if we have the correct function;
                        # otherwise just ignore it.
                        evaluator.follow_path(iter(after), s, scope)
                if er.wrap(evaluator, compare) in undec:
                    # Only execute it if we have the correct function;
                    # otherwise just ignore it.
                    evaluator.eval_trailer(types, trailer)
        return listener.param_possibilities
        return get_posibilities(evaluator, module, func_name)

        result = []
        for params in get_posibilities(evaluator, module, func_name):
            for p in params:
                if str(p) == param_name:
                    result += evaluator.eval_statement(p.parent)
        return result

    func = param.get_parent_until(pr.Function)
    current_module = param.get_parent_until()
    current_module = func.get_parent_until()
    func_name = unicode(func.name)
    compare = func
    if func_name == '__init__' and isinstance(func.parent, pr.Class):
        func_name = unicode(func.parent.name)
        compare = func.parent

    # get the param name
    if param.assignment_details:
        # first assignment details, others would be a syntax error
        expression_list, op = param.assignment_details[0]
    else:
        expression_list = param.expression_list()
    offset = 1 if expression_list[0] in ['*', '**'] else 0
    param_name = str(expression_list[offset].name)
    if func_name == '__init__':
        cls = func.get_parent_scope()
        if isinstance(cls, pr.Class):
            func_name = unicode(cls.name)
            compare = cls

    # add the listener
    listener = ParamListener()
@@ -148,7 +135,7 @@ def search_params(evaluator, param):
    try:
        result = []
        # This is like backtracking: Get the first possible result.
        for mod in imports.get_modules_containing_name([current_module], func_name):
        for mod in imports.get_modules_containing_name(evaluator, [current_module], func_name):
            result = get_params_for_module(mod)
            if result:
                break

@@ -13,9 +13,8 @@ check for -> a is a string). There's big potential in these checks.
"""
from itertools import chain

from jedi._compatibility import hasattr, unicode, u
from jedi.parser import representation as pr, tokenize
from jedi.parser import fast
from jedi._compatibility import unicode, u
from jedi.parser import tree as pr
from jedi import debug
from jedi import common
from jedi import settings
@@ -27,22 +26,68 @@ from jedi.evaluate import iterable
from jedi.evaluate import imports
from jedi.evaluate import analysis
from jedi.evaluate import flow_analysis
from jedi.evaluate import param
from jedi.evaluate import helpers
from jedi.evaluate.cache import memoize_default


def filter_after_position(names, position):
    """
    Removes all names after a certain position. If position is None, just
    returns the names list.
    """
    if position is None:
        return names

    names_new = []
    for n in names:
        # Filter positions and also allow list comprehensions and lambdas.
        if n.start_pos[0] is not None and n.start_pos < position \
                or isinstance(n.get_definition(), (pr.CompFor, pr.Lambda)):
            names_new.append(n)
    return names_new


def filter_definition_names(names, origin, position=None):
    """
    Filter names that are actual definitions in a scope. Names that are just
    used will be ignored.
    """
    # Just calculate the scope from the first name.
    stmt = names[0].get_definition()
    scope = stmt.get_parent_scope()

    if not (isinstance(scope, er.FunctionExecution)
            and isinstance(scope.base, er.LambdaWrapper)):
        names = filter_after_position(names, position)
    names = [name for name in names if name.is_definition()]

    # Private name mangling (compile.c) disallows access on names
    # preceded by two underscores `__` if used outside of the class. Names
    # that also end with two underscores (e.g. __id__) are not affected.
    for name in list(names):
        if name.value.startswith('__') and not name.value.endswith('__'):
            if filter_private_variable(scope, origin):
                names.remove(name)
    return names
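
# The mangling rule being modeled, in plain Python (a sketch; ``_Secretive``
# is a made-up class): two leading underscores without two trailing ones make
# a name class-private, while dunder names stay accessible.
class _Secretive(object):
    def __init__(self):
        self.__token = 42   # mangled to ``_Secretive__token``
        self.__id__ = 'ok'  # dunder-style, not mangled

_s = _Secretive()
_s.__id__  # fine from outside; ``_s.__token`` would raise AttributeError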


class NameFinder(object):
    def __init__(self, evaluator, scope, name_str, position=None):
        self._evaluator = evaluator
        self.scope = scope
        # Make sure that it's not just a syntax tree node.
        self.scope = er.wrap(evaluator, scope)
        self.name_str = name_str
        self.position = position

    @debug.increase_indent
    def find(self, scopes, resolve_decorator=True, search_global=False):
    def find(self, scopes, search_global=False):
        # TODO rename scopes to names_dicts
        names = self.filter_name(scopes)
        types = self._names_to_types(names, resolve_decorator)
        types = self._names_to_types(names, search_global)

        if not names and not types \
                and not (isinstance(self.name_str, pr.NamePart)
                and not (isinstance(self.name_str, pr.Name)
                         and isinstance(self.name_str.parent.parent, pr.Param)):
            if not isinstance(self.name_str, (str, unicode)):  # TODO Remove?
                if search_global:
@@ -55,81 +100,86 @@ class NameFinder(object):
                                  self.scope, self.name_str)

        debug.dbg('finder._names_to_types: %s -> %s', names, types)
        return self._resolve_descriptors(types)
        return types

    def scopes(self, search_global=False):
        if search_global:
            return get_names_of_scope(self._evaluator, self.scope, self.position)
            return global_names_dict_generator(self._evaluator, self.scope, self.position)
        else:
            return self.scope.scope_names_generator(self.position)
            return ((n, None) for n in self.scope.names_dicts(search_global))

    def filter_name(self, scope_names_generator):
    def names_dict_lookup(self, names_dict, position):
        def get_param(scope, el):
            if isinstance(el.get_parent_until(pr.Param), pr.Param):
                return scope.param_by_name(str(el))
            return el

        search_str = str(self.name_str)
        try:
            names = names_dict[search_str]
            if not names:  # We want names, otherwise stop.
                return []
        except KeyError:
            return []

        names = filter_definition_names(names, self.name_str, position)

        name_scope = None
        # Only the names defined in the last position are valid definitions.
        last_names = []
        for name in reversed(sorted(names, key=lambda name: name.start_pos)):
            stmt = name.get_definition()
            name_scope = er.wrap(self._evaluator, stmt.get_parent_scope())

            if isinstance(self.scope, er.Instance) and not isinstance(name_scope, er.Instance):
                # Instances should not be checked for positioning, because we
                # don't know in which order the functions are called.
                last_names.append(name)
                continue

            if isinstance(name_scope, compiled.CompiledObject):
                # Let's test this. TODO need comment. shouldn't this be
                # filtered before?
                last_names.append(name)
                continue

            if isinstance(name, compiled.CompiledName) \
                    or isinstance(name, er.InstanceName) and isinstance(name._origin_name, compiled.CompiledName):
                last_names.append(name)
                continue

            if isinstance(self.name_str, pr.Name):
                origin_scope = self.name_str.get_parent_until(pr.Scope, reverse=True)
            else:
                origin_scope = None
            if isinstance(stmt.parent, compiled.CompiledObject):
                # TODO seriously? this is stupid.
                continue
            check = flow_analysis.break_check(self._evaluator, name_scope,
                                              stmt, origin_scope)
            if check is not flow_analysis.UNREACHABLE:
                last_names.append(name)
            if check is flow_analysis.REACHABLE:
                break

        if isinstance(name_scope, er.FunctionExecution):
            # Replace params
            return [get_param(name_scope, n) for n in last_names]
        return last_names

    def filter_name(self, names_dicts):
        """
        Filters all variables of a scope (which are defined in the
        `scope_names_generator`), until the name fits.
        Searches names that are defined in a scope (the different
        `names_dicts`), until a name fits.
        """
        # TODO Now this import is really ugly. Try to remove it.
        # It's possibly the only api dependency.
        from jedi.api.interpreter import InterpreterNamespace
        names = []
        self.maybe_descriptor = isinstance(self.scope, er.Class)
        for name_list_scope, name_list in scope_names_generator:
            break_scopes = []
            if not isinstance(name_list_scope, compiled.CompiledObject):
                # This is where the position handling happens (sorting of
                # variables). Compiled objects don't need that, because
                # there's only one reference.
                name_list = sorted(name_list, key=lambda n: n.start_pos, reverse=True)

            for name in name_list:
                if unicode(self.name_str) != name.get_code():
                    continue

                stmt = name.get_definition()
                scope = stmt.parent
                if scope in break_scopes:
                    continue

                # Exclude `arr[1] =` from the result set.
                if not self._name_is_array_assignment(name, stmt):
                    # TODO we ignore a lot of elements here that should not be
                    # ignored. But then again flow_analysis also stops when the
                    # input scope is reached. This is not correct: variables
                    # might still have conditions if defined outside of the
                    # current scope.
                    if isinstance(stmt, (pr.Param, pr.Import)) \
                            or isinstance(name_list_scope, (pr.Lambda, pr.ListComprehension, er.Instance, InterpreterNamespace)) \
                            or isinstance(scope, compiled.CompiledObject) \
                            or isinstance(stmt, pr.ExprStmt) and stmt.is_global():
                        # Always reachable.
                        names.append(name.names[-1])
                    else:
                        check = flow_analysis.break_check(self._evaluator,
                                                          name_list_scope,
                                                          er.wrap(self._evaluator, scope),
                                                          self.scope)
                        if check is not flow_analysis.UNREACHABLE:
                            names.append(name.names[-1])
                        if check is flow_analysis.REACHABLE:
                            break

            if names and self._is_name_break_scope(name, stmt):
                if self._does_scope_break_immediately(scope, name_list_scope):
                    break
                else:
                    break_scopes.append(scope)
        for names_dict, position in names_dicts:
            names = self.names_dict_lookup(names_dict, position)
            if names:
                break

        if isinstance(self.scope, er.Instance):
            # After checking the dictionary of an instance (self
            # attributes), an attribute may be a descriptor.
            self.maybe_descriptor = True

        scope_txt = (self.scope if self.scope == name_list_scope
                     else '%s-%s' % (self.scope, name_list_scope))
        debug.dbg('finder.filter_name "%s" in (%s): %s@%s', self.name_str,
                  scope_txt, u(names), self.position)
                  self.scope, u(names), self.position)
        return list(self._clean_names(names))

    def _clean_names(self, names):
@@ -139,236 +189,193 @@ class NameFinder(object):
        evaluation, so remove them already here!
        """
        for n in names:
            definition = n.parent.parent
            definition = n.parent
            if isinstance(definition, (pr.Function, pr.Class, pr.Module)):
                yield er.wrap(self._evaluator, definition).name.names[-1]
                yield er.wrap(self._evaluator, definition).name
            else:
                yield n

    def _check_getattr(self, inst):
        """Checks for both __getattr__ and __getattribute__ methods"""
        result = []
        # str is important to lose the NamePart!
        # str is important, because it shouldn't be `Name`!
        name = compiled.create(self._evaluator, str(self.name_str))
        with common.ignored(KeyError):
            result = inst.execute_subscope_by_name('__getattr__', [name])
            result = inst.execute_subscope_by_name('__getattr__', name)
        if not result:
            # this is a little bit special. `__getattribute__` is executed
            # before anything else. But: I know of no use case where this
            # could be practical and Jedi would return wrong types. If
            # you ever have something, let me know!
            with common.ignored(KeyError):
                result = inst.execute_subscope_by_name('__getattribute__', [name])
                result = inst.execute_subscope_by_name('__getattribute__', name)
        return result

    def _is_name_break_scope(self, name, stmt):
        """
        Returns True except for nested imports and instance variables.
        """
        if stmt.isinstance(pr.ExprStmt):
            if isinstance(name, er.InstanceElement) and not name.is_class_var:
                return False
        elif isinstance(stmt, pr.Import) and stmt.is_nested():
            return False
        return True

    def _does_scope_break_immediately(self, scope, name_list_scope):
        """
        In comparison to everything else, if/while/etc don't break directly,
        because there are multiple different places in which a variable can be
        defined.
        """
        if isinstance(scope, pr.Flow) \
                or isinstance(scope, pr.KeywordStatement) and scope.name == 'global':

            if isinstance(name_list_scope, er.Class):
                name_list_scope = name_list_scope.base
            return scope == name_list_scope
        else:
            return True

    def _name_is_array_assignment(self, name, stmt):
        if stmt.isinstance(pr.ExprStmt):
            def is_execution(calls):
                for c in calls:
                    if isinstance(c, (unicode, str, tokenize.Token)):
                        continue
                    if c.isinstance(pr.Array):
                        if is_execution(c):
                            return True
                    elif c.isinstance(pr.Call):
                        # Compare start_pos, because names may be different
                        # because of executions.
                        if c.name.start_pos == name.start_pos \
                                and isinstance(c.next, pr.Array):
                            return True
                return False

            is_exe = False
            for assignee, op in stmt.assignment_details:
                is_exe |= is_execution(assignee)

            if is_exe:
                # filter array[3] = ...
                # TODO check executions for dict contents
                return True
        return False

    def _names_to_types(self, names, resolve_decorator):
    def _names_to_types(self, names, search_global):
        types = []
        evaluator = self._evaluator

        # Add isinstance and other if/assert knowledge.
        if isinstance(self.name_str, pr.NamePart):
            flow_scope = self.name_str.parent.parent
        if isinstance(self.name_str, pr.Name):
            # Ignore FunctionExecution parents for now.
            flow_scope = self.name_str
            until = flow_scope.get_parent_until(er.FunctionExecution)
            while flow_scope and not isinstance(until, er.FunctionExecution):
            while not isinstance(until, er.FunctionExecution):
                flow_scope = flow_scope.get_parent_scope(include_flows=True)
                if flow_scope is None:
                    break
                # TODO check if result is in scope -> no evaluation necessary
                n = check_flow_information(evaluator, flow_scope,
                n = check_flow_information(self._evaluator, flow_scope,
                                           self.name_str, self.position)
                if n:
                    return n
                flow_scope = flow_scope.parent

        for name in names:
            typ = name.get_definition()
            if typ.isinstance(pr.ForFlow):
                types += self._handle_for_loops(typ)
            elif isinstance(typ, pr.Param):
                types += self._eval_param(typ)
            elif typ.isinstance(pr.ExprStmt):
                if typ.is_global():
                    # global keyword handling.
                    types += evaluator.find_types(typ.parent.parent, str(name))
                else:
                    types += self._remove_statements(typ, name)
            new_types = _name_to_types(self._evaluator, name, self.scope)
            if isinstance(self.scope, (er.Class, er.Instance)) and not search_global:
                types += self._resolve_descriptors(name, new_types)
            else:
                if typ.isinstance(er.Function) and resolve_decorator:
                    typ = typ.get_decorated_func()
                types.append(typ)

                types += new_types
        if not names and isinstance(self.scope, er.Instance):
            # handling __getattr__ / __getattribute__
            types = self._check_getattr(self.scope)

        return types

    def _remove_statements(self, stmt, name):
        """
        This is the part where statements are being stripped.

        Due to lazy evaluation, statements like a = func; b = a; b() have to be
        evaluated.
        """
        evaluator = self._evaluator
        types = []
        # Remove the statement docstr stuff for now, that has to be
        # implemented with the evaluator class.
        #if stmt.docstr:
            #res_new.append(stmt)

        check_instance = None
        if isinstance(stmt, er.InstanceElement) and stmt.is_class_var:
            check_instance = stmt.instance
            stmt = stmt.var

        types += evaluator.eval_statement(stmt, seek_name=unicode(self.name_str))

        # check for `except X as y` usages, because y needs to be instantiated.
        p = stmt.parent
        # TODO this looks really hacky, improve parser representation!
        if isinstance(p, pr.Flow) and p.command == 'except' and p.inputs:
            as_names = p.inputs[0].as_names
            try:
                if as_names[0].names[-1] == name:
                    # TODO check for types that are not classes and add it to
                    # the static analysis report.
                    types = list(chain.from_iterable(
                        evaluator.execute(t) for t in types))
            except IndexError:
                pass

        if check_instance is not None:
            # class renames
            types = [er.get_instance_el(evaluator, check_instance, a, True)
                     if isinstance(a, (er.Function, pr.Function))
                     else a for a in types]
        return types

    def _eval_param(self, param):
        evaluator = self._evaluator
        res_new = []
        func = param.parent

        cls = func.parent.get_parent_until((pr.Class, pr.Function))

        from jedi.evaluate.param import ExecutedParam
        if isinstance(cls, pr.Class) and param.position_nr == 0 \
                and not isinstance(param, ExecutedParam):
            # This is where we add self - if it has never been
            # instantiated.
            if isinstance(self.scope, er.InstanceElement):
                res_new.append(self.scope.instance)
            else:
                for inst in evaluator.execute(er.Class(evaluator, cls)):
                    inst.is_generated = True
                    res_new.append(inst)
            return res_new

        # Instances are typically faked, if the instance is not called from
        # outside. Here we check it for __init__ functions and return.
        if isinstance(func, er.InstanceElement) \
                and func.instance.is_generated and str(func.name) == '__init__':
            param = func.var.params[param.position_nr]

        # Add docstring knowledge.
        doc_params = docstrings.follow_param(evaluator, param)
        if doc_params:
            return doc_params

        if not param.is_generated:
            # Param owns no information itself.
            res_new += dynamic.search_params(evaluator, param)
            if not res_new:
                if param.stars:
                    t = 'tuple' if param.stars == 1 else 'dict'
                    typ = evaluator.find_types(compiled.builtin, t)[0]
                    res_new = evaluator.execute(typ)
            if not param.assignment_details:
                # this means that there are no default params,
                # so just ignore it.
                return res_new
        return res_new + evaluator.eval_statement(param, seek_name=unicode(self.name_str))

    def _handle_for_loops(self, loop):
        # Take the first statement (a for loop always has only one `in`).
        if not loop.inputs:
            return []
        result = iterable.get_iterator_types(self._evaluator.eval_statement(loop.inputs[0]))
        if len(loop.set_vars) > 1:
            expression_list = loop.set_stmt.expression_list()
            # loops with loop.set_vars > 0 only have one command
            result = _assign_tuples(expression_list[0], result, unicode(self.name_str))
        return result

    def _resolve_descriptors(self, types):
        """Processes descriptors"""
        if not self.maybe_descriptor:
    def _resolve_descriptors(self, name, types):
        # The name must not be in the dictionary, but part of the class
        # definition. __get__ is only called if the descriptor is defined in
        # the class dictionary.
        name_scope = name.get_definition().get_parent_scope()
        if not isinstance(name_scope, (er.Instance, pr.Class)):
            return types

        result = []
        for r in types:
            if isinstance(self.scope, (er.Instance, er.Class)) \
                    and hasattr(r, 'get_descriptor_return'):
                # handle descriptors
                with common.ignored(KeyError):
                    result += r.get_descriptor_return(self.scope)
                    continue
            result.append(r)
            try:
                desc_return = r.get_descriptor_returns
            except AttributeError:
                result.append(r)
            else:
                result += desc_return(self.scope)
        return result


def check_flow_information(evaluator, flow, search_name_part, pos):
@memoize_default([], evaluator_is_first_arg=True)
def _name_to_types(evaluator, name, scope):
    types = []
    typ = name.get_definition()
    if typ.isinstance(pr.ForStmt):
        for_types = evaluator.eval_element(typ.children[3])
        for_types = iterable.get_iterator_types(for_types)
        types += check_tuple_assignments(for_types, name)
    elif typ.isinstance(pr.CompFor):
        for_types = evaluator.eval_element(typ.children[3])
        for_types = iterable.get_iterator_types(for_types)
        types += check_tuple_assignments(for_types, name)
    elif isinstance(typ, pr.Param):
        types += _eval_param(evaluator, typ, scope)
    elif typ.isinstance(pr.ExprStmt):
        types += _remove_statements(evaluator, typ, name)
    elif typ.isinstance(pr.WithStmt):
        types += evaluator.eval_element(typ.node_from_name(name))
    elif isinstance(typ, pr.Import):
        types += imports.ImportWrapper(evaluator, name).follow()
    elif isinstance(typ, pr.GlobalStmt):
        # TODO theoretically we shouldn't be using search_global here; it
        # doesn't make sense, because it's a local search (for that name)!
        # However, globals are not that important and resolving them doesn't
        # guarantee correctness in any way, because we don't check for when
        # something is executed.
        types += evaluator.find_types(typ.get_parent_scope(), str(name),
                                      search_global=True)
    elif isinstance(typ, pr.TryStmt):
        # TODO an exception can also be a tuple. Check for those.
        # TODO check for types that are not classes and add it to
        # the static analysis report.
        exceptions = evaluator.eval_element(name.prev_sibling().prev_sibling())
        types = list(chain.from_iterable(
            evaluator.execute(t) for t in exceptions))
    else:
        if typ.isinstance(er.Function):
            typ = typ.get_decorated_func()
        types.append(typ)
    return types
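
# One concrete construct per branch above (a sketch): looking up each name
# below dispatches through a different ``typ`` case in ``_name_to_types``.
for item in [1, 2]:            # pr.ForStmt -> iterator types for ``item``
    pass
squares = [n for n in [1]]     # pr.CompFor -> iterator types for ``n``
with open('data.txt') as f:    # pr.WithStmt -> the context manager's value
    pass
import os                      # pr.Import -> followed via ImportWrapper
try:
    pass
except ValueError as err:      # pr.TryStmt -> the exception class, executed
    pass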


def _remove_statements(evaluator, stmt, name):
    """
    This is the part where statements are being stripped.

    Due to lazy evaluation, statements like a = func; b = a; b() have to be
    evaluated.
    """
    types = []
    # Remove the statement docstr stuff for now, that has to be
    # implemented with the evaluator class.
    #if stmt.docstr:
        #res_new.append(stmt)

    check_instance = None
    if isinstance(stmt, er.InstanceElement) and stmt.is_class_var:
        check_instance = stmt.instance
        stmt = stmt.var

    types += evaluator.eval_statement(stmt, seek_name=name)

    if check_instance is not None:
        # class renames
        types = [er.get_instance_el(evaluator, check_instance, a, True)
                 if isinstance(a, (er.Function, pr.Function))
                 else a for a in types]
    return types


def _eval_param(evaluator, param, scope):
    res_new = []
    func = param.get_parent_scope()

    cls = func.parent.get_parent_until((pr.Class, pr.Function))

    from jedi.evaluate.param import ExecutedParam, Arguments
    if isinstance(cls, pr.Class) and param.position_nr == 0 \
            and not isinstance(param, ExecutedParam):
        # This is where we add self - if it has never been
        # instantiated.
        if isinstance(scope, er.InstanceElement):
            res_new.append(scope.instance)
        else:
            inst = er.Instance(evaluator, er.wrap(evaluator, cls),
                               Arguments(evaluator, ()), is_generated=True)
            res_new.append(inst)
        return res_new

    # Instances are typically faked, if the instance is not called from
    # outside. Here we check it for __init__ functions and return.
    if isinstance(func, er.InstanceElement) \
            and func.instance.is_generated and str(func.name) == '__init__':
        param = func.var.params[param.position_nr]

    # Add docstring knowledge.
    doc_params = docstrings.follow_param(evaluator, param)
    if doc_params:
        return doc_params

    if isinstance(param, ExecutedParam):
        return res_new + param.eval(evaluator)
    else:
        # Param owns no information itself.
        res_new += dynamic.search_params(evaluator, param)
        if not res_new:
            if param.stars:
                t = 'tuple' if param.stars == 1 else 'dict'
                typ = evaluator.find_types(compiled.builtin, t)[0]
                res_new = evaluator.execute(typ)
        if param.default:
            res_new += evaluator.eval_element(param.default)
        return res_new


def check_flow_information(evaluator, flow, search_name, pos):
    """ Try to find out the type of a variable just with the information that
    is given by the flows: e.g. It is also responsible for assert checks.::

@@ -382,61 +389,68 @@ def check_flow_information(evaluator, flow, search_name_part, pos):

    result = []
    if flow.is_scope():
        for ass in reversed(flow.asserts):
            if pos is None or ass.start_pos > pos:
                continue
            result = _check_isinstance_type(evaluator, ass, search_name_part)
            if result:
                break
        # Check for asserts.
        try:
            names = reversed(flow.names_dict[search_name.value])
        except (KeyError, AttributeError):
            names = []

    if isinstance(flow, pr.Flow) and not result:
        if flow.command in ['if', 'while'] and len(flow.inputs) == 1:
            result = _check_isinstance_type(evaluator, flow.inputs[0], search_name_part)
        for name in names:
            ass = name.get_parent_until(pr.AssertStmt)
            if isinstance(ass, pr.AssertStmt) and pos is not None and ass.start_pos < pos:
                result = _check_isinstance_type(evaluator, ass.assertion(), search_name)
                if result:
                    break

    if isinstance(flow, (pr.IfStmt, pr.WhileStmt)):
        element = flow.children[1]
        result = _check_isinstance_type(evaluator, element, search_name)
    return result


def _check_isinstance_type(evaluator, stmt, search_name_part):
def _check_isinstance_type(evaluator, element, search_name):
    try:
        expression_list = stmt.expression_list()
        assert element.type == 'power'
        # this might be removed if we analyze `and`, etc.
        assert len(expression_list) == 1
        call = expression_list[0]
        assert isinstance(call, pr.Call) and str(call.name) == 'isinstance'
        assert call.next_is_execution()
        assert len(element.children) == 2
        first, trailer = element.children
        assert isinstance(first, pr.Name) and first.value == 'isinstance'
        assert trailer.type == 'trailer' and trailer.children[0] == '('
        assert len(trailer.children) == 3

        # isinstance check
        isinst = call.next.values
        assert len(isinst) == 2  # has two params
        obj, classes = [statement.expression_list() for statement in isinst]
        assert len(obj) == 1
        assert len(classes) == 1
        assert isinstance(obj[0], pr.Call)

        # names fit?
        assert unicode(obj[0].name) == unicode(search_name_part.parent)
        assert isinstance(classes[0], pr.StatementElement)  # can be type or tuple
        # arglist stuff
        arglist = trailer.children[1]
        args = param.Arguments(evaluator, arglist, trailer)
        lst = list(args.unpack())
        # Disallow keyword arguments
        assert len(lst) == 2 and lst[0][0] is None and lst[1][0] is None
        name = lst[0][1][0]  # first argument, values, first value
        # Do a simple get_code comparison. They should just have the same code,
        # and everything will be all right.
        classes = lst[1][1][0]
        call = helpers.call_of_name(search_name)
        assert name.get_code() == call.get_code()
    except AssertionError:
        return []

    result = []
    for c in evaluator.eval_call(classes[0]):
        for typ in (c.values() if isinstance(c, iterable.Array) else [c]):
    for typ in evaluator.eval_element(classes):
        for typ in (typ.values() if isinstance(typ, iterable.Array) else [typ]):
            result += evaluator.execute(typ)
    return result
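
# The shape this function recognizes (a sketch): an ``isinstance`` call whose
# first argument matches the searched name lets Jedi narrow the type inside
# the guarded block.
def describe(value):
    if isinstance(value, str):
        # Here Jedi treats ``value`` as ``str``, so completions on
        # ``value.`` offer str methods such as ``upper``.
        return value.upper()
    return repr(value)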


def get_names_of_scope(evaluator, scope, position=None, star_search=True, include_builtin=True):
def global_names_dict_generator(evaluator, scope, position):
    """
    Get all completions (names) possible for the current scope. The star search
    option is only here to provide an optimization. Otherwise the whole thing
    would probably start a little recursive madness.
    For global name lookups. Yields tuples of (names_dict, position). If the
    position is None, the position does not matter anymore in that scope.

    This function is used to include names from outer scopes. For example, when
    the current scope is a function:

    >>> from jedi._compatibility import u
    >>> from jedi.parser import Parser
    >>> parser = Parser(u('''
    >>> from jedi._compatibility import u, no_unicode_pprint
    >>> from jedi.parser import Parser, load_grammar
    >>> parser = Parser(load_grammar(), u('''
    ... x = ['a', 'b', 'c']
    ... def func():
    ...     y = None
@@ -445,145 +459,89 @@ def get_names_of_scope(evaluator, scope, position=None, star_search=True, includ
    >>> scope
    <Function: func@3-5>

    `get_names_of_scope` is a generator. First it yields names from the most
    inner scope.
    `global_names_dict_generator` is a generator. First it yields names from
    the most inner scope.

    >>> from jedi.evaluate import Evaluator
    >>> pairs = list(get_names_of_scope(Evaluator(), scope))
    >>> pairs[0]
    (<Function: func@3-5>, [<Name: y@4,4>])
    >>> evaluator = Evaluator(load_grammar())
    >>> scope = er.wrap(evaluator, scope)
    >>> pairs = list(global_names_dict_generator(evaluator, scope, (4, 0)))
    >>> no_unicode_pprint(pairs[0])
    ({'func': [], 'y': [<Name: y@4,4>]}, (4, 0))

    Then it yields the names from the scope one level further out. For this
    example, this is the most outer scope.
    Then it yields the names from one level "lower". In this example, this
    is the most outer scope. As you can see, the position in the tuple is now
    None, because typically the whole module is loaded before the function is
    called.

    >>> pairs[1]
    (<ModuleWrapper: <SubModule: None@1-5>>, [<Name: x@2,0>, <Name: func@3,4>])
    >>> no_unicode_pprint(pairs[1])
    ({'func': [<Name: func@3,4>], 'x': [<Name: x@2,0>]}, None)

    After that we have a few underscore names that have been defined

    >>> pairs[2]
    (<ModuleWrapper: <SubModule: None@1-5>>, [<LazyName: __file__@0,0>, ...])
    After that we have a few underscore names that are part of the module.

    >>> sorted(pairs[2][0].keys())
    ['__doc__', '__file__', '__name__', '__package__']
    >>> pairs[3]  # global names -> there are none in our example.
    ({}, None)
    >>> pairs[4]  # package modules -> Also none.
    ({}, None)

    Finally, it yields names from builtin, if `include_builtin` is
    true (default).

    >>> pairs[3]  #doctest: +ELLIPSIS
    (<Builtin: ...builtin...>, [<CompiledName: ...>, ...])

    :rtype: [(pr.Scope, [pr.Name])]
    :return: Return a generator that yields a pair of scope and names.
    >>> pairs[5][0].values()  #doctest: +ELLIPSIS
    [[<CompiledName: ...>], ...]
    """
    if isinstance(scope, pr.ListComprehension):
        position = scope.parent.start_pos
    in_func = False
    while scope is not None:
        if not (scope.type == 'classdef' and in_func):
            # Names in methods cannot be resolved within the class.

    in_func_scope = scope
    non_flow = scope.get_parent_until(pr.Flow, reverse=True)
    while scope:
        # We don't want submodules to report if we have modules.
        # As well as some non-scopes, which are parents of list comprehensions.
        if isinstance(scope, pr.SubModule) and scope.parent or not scope.is_scope():
            scope = scope.parent
            continue
        # `pr.Class` is used, because the parent is never `Class`.
        # Ignore the Flows, because the classes and functions care for that.
        # InstanceElement of Class is ignored, if it is not the start scope.
        if not (scope != non_flow and scope.isinstance(pr.Class)
                or scope.isinstance(pr.Flow)
                or scope.isinstance(er.Instance)
                and non_flow.isinstance(er.Function)
                or isinstance(scope, compiled.CompiledObject)
                and scope.type() == 'class' and in_func_scope != scope):
            for names_dict in scope.names_dicts(True):
                yield names_dict, position
            if scope.type == 'funcdef':
                # The position should be reset if the current scope is a function.
                in_func = True
                position = None
        scope = er.wrap(evaluator, scope.get_parent_scope())

            if isinstance(scope, (pr.SubModule, fast.Module)):
                scope = er.ModuleWrapper(evaluator, scope)

            for g in scope.scope_names_generator(position):
                yield g
        if scope.isinstance(pr.ListComprehension):
            # is a list comprehension
            yield scope, scope.get_defined_names(is_internal_call=True)

        scope = scope.parent
        # This is used, because subscopes (Flow scopes) would distort the
        # results.
        if scope and scope.isinstance(er.Function, pr.Function, er.FunctionExecution):
            in_func_scope = scope
        if in_func_scope != scope \
                and isinstance(in_func_scope, (pr.Function, er.FunctionExecution)):
            position = None

    # Add star imports.
    if star_search:
        for s in imports.remove_star_imports(evaluator, non_flow.get_parent_until()):
            for g in get_names_of_scope(evaluator, s, star_search=False):
                yield g

        # Add builtins to the global scope.
        if include_builtin:
            yield compiled.builtin, compiled.builtin.get_defined_names()
    # Add builtins to the global scope.
    for names_dict in compiled.builtin.names_dicts(True):
        yield names_dict, None


def _assign_tuples(tup, results, seek_name):
def check_tuple_assignments(types, name):
    """
    This is a normal assignment checker. In python functions and other things
    can return tuples:
    >>> a, b = 1, ""
    >>> a, (b, c) = 1, ("", 1.0)

    Here, if `seek_name` is "a", the number type will be returned.
    The first part (before `=`) is the param tuples, the second one the result.

    :type tup: pr.Array
    Checks if tuples are assigned.
    """
    def eval_results(index):
        types = []
        for r in results:
    for index in name.assignment_indexes():
        new_types = []
        for r in types:
            try:
                func = r.get_exact_index_types
            except AttributeError:
                debug.warning("invalid tuple lookup %s of result %s in %s",
                              tup, results, seek_name)
                debug.warning("Invalid tuple lookup #%s of result %s in %s",
                              index, types, name)
            else:
                with common.ignored(IndexError):
                    types += func(index)
        return types

    result = []
    for i, stmt in enumerate(tup):
        # Used in assignments. There is just one call and no other things,
        # therefore we can just assume that the first part is important.
        command = stmt.expression_list()[0]

        if tup.type == pr.Array.NOARRAY:

            # unnecessary braces -> just remove.
            r = results
        else:
            r = eval_results(i)

        # LHS of tuples can be nested, so resolve it recursively
        result += find_assignments(command, r, seek_name)
    return result
                try:
                    new_types += func(index)
                except IndexError:
                    pass
        types = new_types
    return types
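
# The assignment shapes handled above (a sketch): each name records the
# indexes it sits at, and those indexes are applied to the evaluated
# right-hand side, one per nesting level.
a, b = 1, ""              # ``a`` -> index 0 -> int; ``b`` -> index 1 -> str
x, (y, z) = 1, ("", 1.0)  # ``z`` -> indexes [1, 1] -> float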


def find_assignments(lhs, results, seek_name):
    """
    Check if `seek_name` is in the left hand side `lhs` of an assignment.
def filter_private_variable(scope, origin_node):
    """Check if a variable is defined inside the same class or outside."""
    instance = scope.get_parent_scope()
    coming_from = origin_node
    while coming_from is not None \
            and not isinstance(coming_from, (pr.Class, compiled.CompiledObject)):
        coming_from = coming_from.get_parent_scope()

    `lhs` can simply be a variable (`pr.Call`) or a tuple/list (`pr.Array`)
    representing the following cases::

        a = 1        # lhs is pr.Call
        (a, b) = 2   # lhs is pr.Array

    :type lhs: pr.Call
    :type results: list
    :type seek_name: str
    """
    if isinstance(lhs, pr.Array):
        return _assign_tuples(lhs, results, seek_name)
    elif unicode(lhs.name.names[-1]) == seek_name:
        return results
    # CompiledObjects don't have double underscore attributes, but Jedi abuses
    # those for fakes (builtins.pym -> list).
    if isinstance(instance, compiled.CompiledObject):
        return instance != coming_from
    else:
        return []
    return isinstance(instance, er.Instance) and instance.base.base != coming_from

@@ -1,4 +1,4 @@
from jedi.parser.representation import Flow
from jedi.parser import tree as pr


class Status(object):
@@ -32,52 +32,55 @@ UNREACHABLE = Status(False, 'unreachable')
UNSURE = Status(None, 'unsure')


def break_check(evaluator, base_scope, element_scope, origin_scope=None):
def break_check(evaluator, base_scope, stmt, origin_scope=None):
    from jedi.evaluate.representation import wrap
    base_scope = wrap(evaluator, base_scope)
    element_scope = wrap(evaluator, element_scope)

    element_scope = wrap(evaluator, stmt.get_parent_scope(include_flows=True))
    # Direct parents get resolved; we filter scopes that are separate branches.
    # This makes sense for autocompletion and static analysis. For actual
    # Python it doesn't matter, because we're talking about potentially
    # unreachable code.
    s = origin_scope
    while s is not None:
        if element_scope == s:
    # e.g. `if 0:` would make all name lookups within the flow inaccessible.
    # This is not a "problem" in Python, because the code is never called. In
    # Jedi though, we still want to infer types.
    while origin_scope is not None:
        if element_scope == origin_scope:
            return REACHABLE
        s = s.parent
        origin_scope = origin_scope.parent
    return _break_check(evaluator, stmt, base_scope, element_scope)


def _break_check(evaluator, stmt, base_scope, element_scope):
    from jedi.evaluate.representation import wrap
    element_scope = wrap(evaluator, element_scope)
    base_scope = wrap(evaluator, base_scope)

    reachable = REACHABLE
    if isinstance(element_scope, Flow):
        if element_scope.command == 'else':
            check_scope = element_scope
            while check_scope.previous is not None:
                check_scope = check_scope.previous
            reachable = _check_flow(evaluator, check_scope)
    if isinstance(element_scope, pr.IfStmt):
        if element_scope.node_after_else(stmt):
            for check_node in element_scope.check_nodes():
                reachable = _check_if(evaluator, check_node)
                if reachable in (REACHABLE, UNSURE):
                    break
            reachable = reachable.invert()
        else:
            reachable = _check_flow(evaluator, element_scope)
            node = element_scope.node_in_which_check_node(stmt)
            reachable = _check_if(evaluator, node)
    elif isinstance(element_scope, (pr.TryStmt, pr.WhileStmt)):
        return UNSURE

    # Only reachable branches need to be examined further.
    if reachable in (UNREACHABLE, UNSURE):
        return reachable

    if base_scope != element_scope and base_scope != element_scope.parent:
        return reachable & break_check(evaluator, base_scope, element_scope.parent)
        return reachable & _break_check(evaluator, stmt, base_scope, element_scope.parent)
    return reachable


def _check_flow(evaluator, flow):
    if flow.command in ('elif', 'if') and flow.inputs:
        types = evaluator.eval_statement(flow.inputs[0])
        values = set(x.py__bool__() for x in types)
        if len(values) == 1:
            return Status.lookup_table[values.pop()]
        else:
            return UNSURE
    elif flow.command in ('try', 'except', 'finally', 'while'):
def _check_if(evaluator, node):
    types = evaluator.eval_element(node)
    values = set(x.py__bool__() for x in types)
    if len(values) == 1:
        return Status.lookup_table[values.pop()]
    else:
        return UNSURE
    else:  # for loop
        return REACHABLE
|
||||
|
||||
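The hunk above hinges on ``Status`` being a three-valued truth type. The following is a minimal sketch of that idea, reconstructed only from the names used in the diff (``lookup_table``, ``invert``, ``&``) rather than copied from jedi's source::

    class Status(object):
        lookup_table = {}  # maps True/False/None to the singleton statuses

        def __init__(self, value, name):
            self._value = value  # True, False or None (= unsure)
            self._name = name
            Status.lookup_table[value] = self

        def invert(self):
            # The `else` branch is reachable exactly when the condition
            # is not; "unsure" stays unsure.
            if self._value is None:
                return UNSURE
            return UNREACHABLE if self._value else REACHABLE

        def __and__(self, other):
            # Combining nested flows: any "unsure" makes the result unsure.
            if None in (self._value, other._value):
                return UNSURE
            return REACHABLE if self._value and other._value else UNREACHABLE

        def __repr__(self):
            return '<Status: %s>' % self._name

    REACHABLE = Status(True, 'reachable')
    UNREACHABLE = Status(False, 'unreachable')
    UNSURE = Status(None, 'unsure')

    # `if 0:` evaluates to exactly one value, False, so the branch is
    # unreachable -- but inverting that makes the `else` branch reachable.
    assert Status.lookup_table[False] is UNREACHABLE
    assert UNREACHABLE.invert() is REACHABLE
    assert (REACHABLE & UNSURE) is UNSURE
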
@@ -1,323 +1,173 @@
import copy
from itertools import chain

from jedi.parser import representation as pr
from jedi import debug
from jedi.parser import tree as pr


def deep_ast_copy(obj, new_elements_default=None):
def deep_ast_copy(obj, parent=None, new_elements=None):
    """
    Much, much faster than copy.deepcopy, but just for Parser elements (Doesn't
    copy parents).
    """
    def sort_stmt(key_value):
        return key_value[0] not in ('_expression_list', '_assignment_details')

    new_elements = new_elements_default or {}
    accept = (pr.Simple, pr.NamePart, pr.KeywordStatement)
    if new_elements is None:
        new_elements = {}

    def recursion(obj):
    def copy_node(obj):
        # If it's already in the cache, just return it.
        try:
            return new_elements[obj]
        except KeyError:
            pass
        # Actually copy and set attributes.
        new_obj = copy.copy(obj)
        new_elements[obj] = new_obj

        if isinstance(obj, pr.Statement):
            # Need to set _set_vars, otherwise the cache is not working
            # correctly, don't know exactly why.
            obj.get_defined_names()
        # Copy children
        new_children = []
        for child in obj.children:
            typ = child.type
            if typ in ('whitespace', 'operator', 'keyword', 'number', 'string'):
                # At the moment we're not actually copying those primitive
                # elements, because there's really no need to. The parents are
                # obviously wrong, but that's not an issue.
                new_child = child
            elif typ == 'name':
                new_elements[child] = new_child = copy.copy(child)
                new_child.parent = new_obj
            else:  # Is a BaseNode.
                new_child = copy_node(child)
                new_child.parent = new_obj
            new_children.append(new_child)
        new_obj.children = new_children

        # Gather items
        # Copy the names_dict (if there is one).
        try:
            items = list(obj.__dict__.items())
            names_dict = obj.names_dict
        except AttributeError:
            # __dict__ not available, because of __slots__
            items = []

        before = ()
        for cls in obj.__class__.__mro__:
            pass
        else:
            try:
                if before == cls.__slots__:
                    continue
                before = cls.__slots__
                items += [(n, getattr(obj, n)) for n in before]
            except AttributeError:
            new_obj.names_dict = new_names_dict = {}
        except AttributeError:  # Impossible to set CompFor.names_dict
            pass

        if isinstance(obj, pr.Statement):
            # We need to process something with priority for statements,
            # because there are several references that don't walk the whole
            # tree in there.
            items = sorted(items, key=sort_stmt)

        # Actually copy and set attributes.
        new_obj = copy.copy(obj)
        new_elements[obj] = new_obj

        for key, value in items:
            # replace parent (first try _parent and then parent)
            if key in ['parent', '_parent'] and value is not None:
                if key == 'parent' and '_parent' in items:
                    # parent can be a property
                    continue
                try:
                    setattr(new_obj, key, new_elements[value])
                except KeyError:
                    pass
            elif key in ['parent_function', 'use_as_parent', '_sub_module']:
                continue
            elif isinstance(value, (list, tuple)):
                setattr(new_obj, key, list_or_tuple_rec(value))
            elif isinstance(value, accept):
                setattr(new_obj, key, recursion(value))
            else:
            for string, names in names_dict.items():
                new_names_dict[string] = [new_elements[n] for n in names]
        return new_obj

    def list_or_tuple_rec(array_obj):
        if isinstance(array_obj, tuple):
            copied_array = list(array_obj)
        else:
            copied_array = array_obj[:]  # lists, tuples, strings, unicode
        for i, el in enumerate(copied_array):
            if isinstance(el, accept):
                copied_array[i] = recursion(el)
            elif isinstance(el, (tuple, list)):
                copied_array[i] = list_or_tuple_rec(el)

        if isinstance(array_obj, tuple):
            return tuple(copied_array)
        return copied_array

    return recursion(obj)
    if obj.type == 'name':
        # Special case of a Name object.
        new_elements[obj] = new_obj = copy.copy(obj)
        if parent is not None:
            new_obj.parent = parent
    elif isinstance(obj, pr.BaseNode):
        new_obj = copy_node(obj)
        if parent is not None:
            for child in new_obj.children:
                if isinstance(child, (pr.Name, pr.BaseNode)):
                    child.parent = parent
    else:  # String literals and so on.
        new_obj = obj  # Good enough, don't need to copy anything.
    return new_obj

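The new copy strategy -- shallow-copy each node once, memoize it in ``new_elements``, share leaf tokens, and recurse only over ``children`` -- is what makes this faster than ``copy.deepcopy``. A toy reconstruction of that pattern, with a hypothetical ``Node`` class standing in for jedi's parser trees::

    import copy

    class Node(object):
        def __init__(self, children):
            self.children = children
            self.parent = None

    def fast_copy(obj, memo=None):
        if memo is None:
            memo = {}
        if id(obj) in memo:
            return memo[id(obj)]      # every node is copied exactly once
        if not isinstance(obj, Node):
            return obj                # leaf tokens stay shared, as above
        new = copy.copy(obj)          # shallow copy, no __deepcopy__ machinery
        memo[id(obj)] = new
        new.children = [fast_copy(c, memo) for c in obj.children]
        for child in new.children:
            if isinstance(child, Node):
                child.parent = new    # re-point parents inside the copy
        return new

    tree = Node([Node(['x']), '+', Node(['1'])])
    clone = fast_copy(tree)
    assert clone is not tree
    assert clone.children[1] is tree.children[1]   # '+' is shared
    assert clone.children[0].parent is clone
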
def call_signature_array_for_pos(stmt, pos):
def call_of_name(name, cut_own_trailer=False):
    """
    Searches for the array and position of a tuple.
    Returns a tuple of (array, index-in-the-array, call).
    Creates a "call" node that consists of all ``trailer`` and ``power``
    objects. E.g. if you call it with ``append``::

        list([]).append(3) or None

    You would get a node with the content ``list([]).append`` back.

    This generates a copy of the original ast node.
    """
    def search_array(arr, pos, origin_call=None):
        accepted_types = pr.Array.TUPLE, pr.Array.NOARRAY
        if arr.type == 'dict':
            for stmt in arr.values + arr.keys:
                tup = call_signature_array_for_pos(stmt, pos)
                if tup[0] is not None:
                    return tup
        else:
            for i, stmt in enumerate(arr):
                tup = call_signature_array_for_pos(stmt, pos)
                if tup[0] is not None:
                    return tup
    par = name
    if pr.is_node(par.parent, 'trailer'):
        par = par.parent

        # Since we need the index, we duplicate efforts (with empty
        # arrays).
        if arr.start_pos < pos <= stmt.end_pos:
            if arr.type in accepted_types and origin_call:
                return arr, i, origin_call
    power = par.parent
    if pr.is_node(power, 'power') and power.children[0] != name \
            and not (power.children[-2] == '**' and
                     name.start_pos > power.children[-1].start_pos):
        par = power
        # Now the name must be part of a trailer
        index = par.children.index(name.parent)
        if index != len(par.children) - 1 or cut_own_trailer:
            # Now we have to cut the other trailers away.
            par = deep_ast_copy(par)
            if not cut_own_trailer:
                # Normally we would remove just the stuff after the index, but
                # if the option is set remove the index as well. (for goto)
                index = index + 1
            par.children[index:] = []

        if len(arr) == 0 and arr.start_pos < pos < arr.end_pos:
            if arr.type in accepted_types and origin_call:
                return arr, 0, origin_call
        return None, 0, None

    def search_call(call, pos, origin_call=None):
        tup = None, 0, None
        while call.next is not None and tup[0] is None:
            method = search_array if isinstance(call.next, pr.Array) else search_call
            # TODO This is wrong, don't call search_call again, because it will
            # automatically be called by call.next.
            tup = method(call.next, pos, origin_call or call)
            call = call.next
        return tup

    if stmt.start_pos >= pos >= stmt.end_pos:
        return None, 0, None

    tup = None, 0, None
    for command in stmt.expression_list():
        if isinstance(command, pr.Array):
            tup = search_array(command, pos)
        elif isinstance(command, pr.StatementElement):
            tup = search_call(command, pos, command)
        if tup[0] is not None:
            break
    return tup
    return par

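The new ``call_of_name`` works by copying the ``power`` node and truncating its ``children`` after the trailer that holds the name. The slicing idea in isolation, on a hypothetical token list rather than jedi's real node type::

    # children of a `power` node for `list([]).append(3)`:
    children = ['list', '([])', '.append', '(3)']

    def cut_trailers(children, index, cut_own_trailer=False):
        # Keep everything up to trailer `index`, inclusive by default.
        end = index if cut_own_trailer else index + 1
        return children[:end]

    assert cut_trailers(children, 2) == ['list', '([])', '.append']
    assert cut_trailers(children, 2, cut_own_trailer=True) == ['list', '([])']
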
def search_call_signatures(user_stmt, position):
    """
    Returns the function Call that matches the position before.
    """
    debug.speed('func_call start')
    call, arr, index = None, None, 0
    if user_stmt is not None and isinstance(user_stmt, pr.ExprStmt):
        # some parts of the statement will be removed
        user_stmt = deep_ast_copy(user_stmt)
        arr, index, call = call_signature_array_for_pos(user_stmt, position)

        # Now remove the part after the call. Including the array from the
        # statement.
        stmt_el = call
        while isinstance(stmt_el, pr.StatementElement):
            if stmt_el.next == arr:
                stmt_el.next = None
                break
            stmt_el = stmt_el.next

    debug.speed('func_call parsed')
    return call, arr, index


def scan_statement_for_calls(stmt, search_name, assignment_details=False):
    """ Returns the function Calls that match search_name in an Array. """
    def scan_array(arr, search_name):
        result = []
        if arr.type == pr.Array.DICT:
            for key_stmt, value_stmt in arr.items():
                result += scan_statement_for_calls(key_stmt, search_name)
                result += scan_statement_for_calls(value_stmt, search_name)
        else:
            for stmt in arr:
                result += scan_statement_for_calls(stmt, search_name)
        return result

    check = list(stmt.expression_list())
    if assignment_details:
        for expression_list, op in stmt.assignment_details:
            check += expression_list

    result = []
    for c in check:
        if isinstance(c, pr.Array):
            result += scan_array(c, search_name)
        elif isinstance(c, pr.Call):
            s_new = c
            while s_new is not None:
                if isinstance(s_new, pr.Array):
                    result += scan_array(s_new, search_name)
                else:
                    n = s_new.name
                    if isinstance(n, pr.Name) \
                            and search_name in [str(x) for x in n.names]:
                        result.append(c)

                s_new = s_new.next
        elif isinstance(c, pr.ListComprehension):
            for s in c.stmt, c.middle, c.input:
                result += scan_statement_for_calls(s, search_name)

    return result


def get_module_name_parts(module):
def get_module_names(module, all_scopes):
    """
    Returns a dictionary with name parts as keys and their call paths as
    values.
    """
    def scope_name_parts(scope):
        for s in scope.subscopes:
            # Yield the name parts, not names.
            yield s.name.names[0]
            for need_yield_from in scope_name_parts(s):
                yield need_yield_from

    statements_or_imports = set(chain(*module.used_names.values()))
    name_parts = set(scope_name_parts(module))
    for stmt_or_import in statements_or_imports:
        if isinstance(stmt_or_import, pr.Import):
            for name in stmt_or_import.get_all_import_names():
                name_parts.update(name.names)
        else:
            # Running this ensures that all the expression lists are generated
            # and the parents are all set. (Important for Lambdas) However,
            # this is only necessary because of the weird fault-tolerant
            # structure of the parser. I hope to get rid of such behavior in
            # the future.
            stmt_or_import.expression_list()
            # For now this is ok, but this could change if we don't have a
            # token_list anymore, but for now this is the easiest way to get
            # all the name_parts.
            for tok in stmt_or_import._token_list:
                if isinstance(tok, pr.Name):
                    name_parts.update(tok.names)

    return name_parts
    if all_scopes:
        dct = module.used_names
    else:
        dct = module.names_dict
    return chain.from_iterable(dct.values())

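The new ``get_module_names`` no longer walks the tree itself; it just flattens one of the module's name dictionaries. The flattening step is plain ``itertools.chain``, shown here on toy data (the node strings are hypothetical stand-ins)::

    from itertools import chain

    # Both `used_names` and `names_dict` map a string to a list of Name nodes.
    names_dict = {'foo': ['foo@3', 'foo@7'], 'bar': ['bar@1']}
    all_names = sorted(chain.from_iterable(names_dict.values()))
    assert all_names == ['bar@1', 'foo@3', 'foo@7']
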
def statement_elements_in_statement(stmt):
    """
    Returns a list of statements. Statements can contain statements again in
    Arrays.
    """
    def search_stmt_el(stmt_el, stmt_els):
        stmt_els.append(stmt_el)
        while stmt_el is not None:
            if isinstance(stmt_el, pr.Array):
                for stmt in stmt_el.values + stmt_el.keys:
                    stmt_els.extend(statement_elements_in_statement(stmt))
            stmt_el = stmt_el.next

    stmt_els = []
    for as_name in stmt.as_names:
        # TODO This creates a custom pr.Call, we shouldn't do that.
        stmt_els.append(pr.Call(as_name._sub_module, as_name,
                                as_name.start_pos, as_name.end_pos))

    ass_items = chain.from_iterable(items for items, op in stmt.assignment_details)
    for item in stmt.expression_list() + list(ass_items):
        if isinstance(item, pr.StatementElement):
            search_stmt_el(item, stmt_els)
        elif isinstance(item, pr.ListComprehension):
            for stmt in (item.stmt, item.middle, item.input):
                stmt_els.extend(statement_elements_in_statement(stmt))
        elif isinstance(item, pr.Lambda):
            for stmt in item.params + item.returns:
                stmt_els.extend(statement_elements_in_statement(stmt))

    return stmt_els


class FakeSubModule():
    line_offset = 0


class FakeArray(pr.Array):
    def __init__(self, values, parent=None, arr_type=pr.Array.LIST):
        p = (0, 0)
        super(FakeArray, self).__init__(FakeSubModule, p, arr_type, parent)
        self.values = values


class FakeStatement(pr.ExprStmt):
    def __init__(self, expression_list, start_pos=(0, 0), parent=None):
        p = start_pos
        super(FakeStatement, self).__init__(FakeSubModule, expression_list, p, p)
        self.set_expression_list(expression_list)
        self.parent = parent


class FakeImport(pr.Import):
class FakeImport(pr.ImportName):
    def __init__(self, name, parent, level=0):
        p = 0, 0
        super(FakeImport, self).__init__(FakeSubModule, p, p, name,
                                         relative_count=level)
        super(FakeImport, self).__init__([])
        self.parent = parent
        self._level = level
        self.name = name

    def get_defined_names(self):
        return [self.name]

    def aliases(self):
        return {}

    @property
    def level(self):
        return self._level

    @property
    def start_pos(self):
        return 0, 0

    def paths(self):
        return [[self.name]]

    def is_definition(self):
        return True


class FakeName(pr.Name):
    def __init__(self, name_or_names, parent=None, start_pos=(0, 0)):
        if isinstance(name_or_names, list):
            names = [(n, start_pos) for n in name_or_names]
        else:
            names = [(name_or_names, start_pos)]
        super(FakeName, self).__init__(FakeSubModule, names, start_pos, start_pos, parent)
    def __init__(self, name_str, parent=None, start_pos=(0, 0), is_definition=None):
        """
        In case is_definition is defined (not None), that bool value will be
        returned.
        """
        super(FakeName, self).__init__(pr.zero_position_modifier, name_str, start_pos)
        self.parent = parent
        self._is_definition = is_definition

    def get_definition(self):
        return self.parent

    def is_definition(self):
        if self._is_definition is None:
            return super(FakeName, self).is_definition()
        else:
            return self._is_definition


class LazyName(FakeName):
    def __init__(self, name, parent_callback):
        super(LazyName, self).__init__(name)
    def __init__(self, name, parent_callback, is_definition=None):
        super(LazyName, self).__init__(name, is_definition=is_definition)
        self._parent_callback = parent_callback

    @property

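``FakeName`` lets synthetic names report a fixed ``is_definition()`` result. A self-contained sketch of that override mechanism; the ``_TreeName`` base class here is a hypothetical stand-in for jedi's real parser-tree superclass::

    class _TreeName(object):
        # Stand-in for the real superclass logic in jedi's parser tree.
        def is_definition(self):
            return False

    class FakeName(_TreeName):
        def __init__(self, name_str, parent=None, is_definition=None):
            self.value = name_str
            self.parent = parent
            self._is_definition = is_definition

        def is_definition(self):
            if self._is_definition is None:
                return super(FakeName, self).is_definition()
            return self._is_definition

    assert FakeName('path', is_definition=True).is_definition()
    assert not FakeName('x').is_definition()   # falls back to tree logic
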
@@ -326,15 +176,4 @@ class LazyName(FakeName):

    @parent.setter
    def parent(self, value):
        pass  # Do nothing, lower level can try to set the parent.


def stmts_to_stmt(statements):
    """
    Sometimes we want to have something like a result_set and unite some
    statements in one.
    """
    if len(statements) == 1:
        return statements[0]
    array = FakeArray(statements, arr_type=pr.Array.NOARRAY)
    return FakeStatement([array])
        pass  # Do nothing, super classes can try to set the parent.

@@ -21,7 +21,7 @@ from jedi import common
from jedi import debug
from jedi import cache
from jedi.parser import fast
from jedi.parser import representation as pr
from jedi.parser import tree as pr
from jedi.evaluate.sys_path import get_sys_path, sys_path_with_modifications
from jedi.evaluate import helpers
from jedi import settings
@@ -29,206 +29,101 @@ from jedi.common import source_to_unicode
from jedi.evaluate import compiled
from jedi.evaluate import analysis
from jedi.evaluate.cache import memoize_default, NO_DEFAULT
from jedi.evaluate.helpers import FakeSubModule


class ModuleNotFound(Exception):
    def __init__(self, name_part):
    def __init__(self, name):
        super(ModuleNotFound, self).__init__()
        self.name_part = name_part
        self.name = name


def completion_names(evaluator, imp, pos):
    name = imp.name_for_position(pos)
    module = imp.get_parent_until()
    if name is None:
        level = 0
        for node in imp.children:
            if node.end_pos <= pos:
                if node in ('.', '...'):
                    level += len(node.value)
        import_path = []
    else:
        # Completion on an existing name.

        # The import path needs to be reduced by one, because we're completing.
        import_path = imp.path_for_name(name)[:-1]
        level = imp.level

    importer = get_importer(evaluator, tuple(import_path), module, level)
    if isinstance(imp, pr.ImportFrom):
        c = imp.children
        only_modules = c[c.index('import')].start_pos >= pos
    else:
        only_modules = True
    return importer.completion_names(evaluator, only_modules)

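For a completion inside a relative import, the level is just the number of leading dots before the cursor; the parser emits both ``.`` and ``...`` tokens, and ``len(node.value)`` counts 1 or 3 accordingly. The counting rule on its own, with plain strings standing in for parser nodes::

    def relative_level(nodes_before_cursor):
        # '.' contributes 1 and '...' contributes 3, exactly len(node.value).
        return sum(len(tok) for tok in nodes_before_cursor if tok in ('.', '...'))

    assert relative_level(['from', '.', '.']) == 2   # from .. import |
    assert relative_level(['from', '...']) == 3      # from ... import |
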
class ImportWrapper(pr.Base):
    """
    An ImportWrapper is the path of a `pr.Import` object.
    """
    class GlobalNamespace(object):
        def __init__(self):
            self.line_offset = 0

    GlobalNamespace = GlobalNamespace()

    def __init__(self, evaluator, import_stmt, is_like_search=False, kill_count=0,
                 nested_resolve=False, is_just_from=False):
        """
        :param is_like_search: If the wrapper is used for autocompletion.
        :param kill_count: Placement of the import, sometimes we only want to
            resolve a part of the import.
        :param nested_resolve: Resolves nested imports fully.
        :param is_just_from: Bool if the second part is missing.
        """
    def __init__(self, evaluator, name):
        self._evaluator = evaluator
        self.import_stmt = import_stmt
        self.is_like_search = is_like_search
        self.nested_resolve = nested_resolve
        self.is_just_from = is_just_from
        self._name = name

        self.is_partial_import = bool(max(0, kill_count))

        # rest is import_path resolution
        import_path = []
        if import_stmt.from_ns:
            import_path += import_stmt.from_ns.names
        if import_stmt.namespace:
            if self.import_stmt.is_nested() and not nested_resolve:
                import_path.append(import_stmt.namespace.names[0])
            else:
                import_path += import_stmt.namespace.names

        for i in range(kill_count + int(is_like_search)):
            if import_path:
                import_path.pop()

        module = import_stmt.get_parent_until()
        self._importer = get_importer(self._evaluator, tuple(import_path), module,
                                      import_stmt.relative_count)

    def __repr__(self):
        return '<%s: %s>' % (type(self).__name__, self.import_stmt)

    @property
    def import_path(self):
        return self._importer.str_import_path()

    def get_defined_names(self, on_import_stmt=False):
        names = []
        for scope in self.follow():
            if scope is ImportWrapper.GlobalNamespace:
                if not self._is_relative_import():
                    names += self._get_module_names()

                if self._importer.file_path is not None:
                    path = os.path.abspath(self._importer.file_path)
                    for i in range(self.import_stmt.relative_count - 1):
                        path = os.path.dirname(path)
                    names += self._get_module_names([path])

                if self._is_relative_import():
                    rel_path = os.path.join(self._importer.get_relative_path(),
                                            '__init__.py')
                    if os.path.exists(rel_path):
                        m = _load_module(rel_path)
                        names += m.get_defined_names()
            else:
                if self.import_path == ('flask', 'ext'):
                    # List Flask extensions like ``flask_foo``
                    for mod in self._get_module_names():
                        modname = str(mod)
                        if modname.startswith('flask_'):
                            extname = modname[len('flask_'):]
                            names.append(self._generate_name(extname))
                    # Now the old style: ``flaskext.foo``
                    for dir in self._importer.sys_path_with_modifications():
                        flaskext = os.path.join(dir, 'flaskext')
                        if os.path.isdir(flaskext):
                            names += self._get_module_names([flaskext])
                if on_import_stmt and isinstance(scope, pr.Module) \
                        and scope.path.endswith('__init__.py'):
                    pkg_path = os.path.dirname(scope.path)
                    paths = self._importer.namespace_packages(pkg_path,
                                                              self.import_path)
                    names += self._get_module_names([pkg_path] + paths)
                if self.is_just_from:
                    # In the case of an import like `from x.` we don't need to
                    # add all the variables.
                    if ('os',) == self.import_path and not self._is_relative_import():
                        # os.path is a hardcoded exception, because it's a
                        # ``sys.modules`` modification.
                        names.append(self._generate_name('path'))
                    continue
                from jedi.evaluate import finder
                for s, scope_names in finder.get_names_of_scope(self._evaluator,
                                                                scope, include_builtin=False):
                    for n in scope_names:
                        if self.import_stmt.from_ns is None \
                                or self.is_partial_import:
                            # from_ns must be defined to access module
                            # values plus a partial import means that there
                            # is something after the import, which
                            # automatically implies that there must not be
                            # any non-module scope.
                            continue
                        names.append(n)
        return names

    def _generate_name(self, name):
        return helpers.FakeName(name, parent=self.import_stmt)

    def _get_module_names(self, search_path=None):
        """
        Get the names of all modules in the search_path. This means file names
        and not names defined in the files.
        """

        names = []
        # add builtin module names
        if search_path is None:
            names += [self._generate_name(name) for name in sys.builtin_module_names]

        if search_path is None:
            search_path = self._importer.sys_path_with_modifications()
        for module_loader, name, is_pkg in pkgutil.iter_modules(search_path):
            names.append(self._generate_name(name))
        return names

    def _is_relative_import(self):
        return bool(self.import_stmt.relative_count)
        self._import = name.get_parent_until(pr.Import)
        self.import_path = self._import.path_for_name(name)

    @memoize_default()
    def follow(self, is_goto=False):
        if self._evaluator.recursion_detector.push_stmt(self.import_stmt):
        if self._evaluator.recursion_detector.push_stmt(self._import):
            # check recursion
            return []

        if self.import_path:
        try:
            module = self._import.get_parent_until()
            import_path = self._import.path_for_name(self._name)
            importer = get_importer(self._evaluator, tuple(import_path),
                                    module, self._import.level)
            try:
                module, rest = self._importer.follow_file_system()
                module, rest = importer.follow_file_system()
            except ModuleNotFound as e:
                analysis.add(self._evaluator, 'import-error', e.name_part)
                analysis.add(self._evaluator, 'import-error', e.name)
                return []

            if self.import_stmt.is_nested() and not self.nested_resolve:
                scopes = [NestedImportModule(module, self.import_stmt)]
            else:
                scopes = [module]
            if module is None:
                return []

            star_imports = remove_star_imports(self._evaluator, module)
            if star_imports:
                scopes = [StarImportModule(scopes[0], star_imports)]
            #if self._import.is_nested() and not self.nested_resolve:
            #    scopes = [NestedImportModule(module, self._import)]
            scopes = [module]

            # goto only accepts Names or NameParts
            # goto only accepts `Name`
            if is_goto and not rest:
                scopes = [s.name.names[-1] for s in scopes]
                scopes = [s.name for s in scopes]

            # follow the rest of the import (not FS -> classes, functions)
            if len(rest) > 1 or rest and self.is_like_search:
                scopes = []
                if ('os', 'path') == self.import_path[:2] \
                        and not self._is_relative_import():
                    # This is a huge exception, we follow a nested import
                    # ``os.path``, because it's a very important one in Python
                    # that is being achieved by messing with ``sys.modules`` in
                    # ``os``.
                    scopes = self._evaluator.follow_path(iter(rest), [module], module)
            elif rest:
            if rest:
                if is_goto:
                    scopes = list(chain.from_iterable(
                        self._evaluator.find_types(s, rest[0], is_goto=True)
                        for s in scopes))
                else:
                    scopes = list(chain.from_iterable(
                        self._evaluator.follow_path(iter(rest), [s], s)
                        for s in scopes))
        else:
            scopes = [ImportWrapper.GlobalNamespace]
        debug.dbg('after import: %s', scopes)
        if not scopes:
            analysis.add(self._evaluator, 'import-error',
                         self._importer.import_path[-1])
        self._evaluator.recursion_detector.pop_stmt()
                if self._import.type == 'import_from' \
                        or importer.str_import_path == ('os', 'path'):
                    scopes = importer.follow_rest(scopes[0], rest)
                else:
                    scopes = []
            debug.dbg('after import: %s', scopes)
            if not scopes:
                analysis.add(self._evaluator, 'import-error', importer.import_path[-1])
        finally:
            self._evaluator.recursion_detector.pop_stmt()
        return scopes


class NestedImportModule(pr.Module):
    """
    TODO while there's no use case for nested import modules right now, we
    might be able to use them for static analysis checks later on.
    """
    def __init__(self, module, nested_import):
        self._module = module
        self._nested_import = nested_import
@@ -241,21 +136,12 @@ class NestedImportModule(pr.Module):
        # This is not an existing Import statement. Therefore, set position to
        # 0 (0 is not a valid line number).
        zero = (0, 0)
        names = [unicode(name_part) for name_part in i.namespace.names[1:]]
        names = [unicode(name) for name in i.namespace_names[1:]]
        name = helpers.FakeName(names, self._nested_import)
        new = pr.Import(i._sub_module, zero, zero, name)
        new.parent = self._module
        debug.dbg('Generated a nested import: %s', new)
        return helpers.FakeName(str(i.namespace.names[1]), new)

    def _get_defined_names(self):
        """
        NestedImportModule doesn't seem to be actively used right now.
        However, it might be in the future, if we do more sophisticated
        static analysis checks.
        """
        nested = self._get_nested_import_name()
        return self._module.get_defined_names() + [nested]
        return helpers.FakeName(str(i.namespace_names[1]), new)

    def __getattr__(self, name):
        return getattr(self._module, name)
@@ -265,32 +151,12 @@ class NestedImportModule(pr.Module):
                               self._nested_import)


class StarImportModule(pr.Module):
    """
    Used if a module contains star imports.
    """
    def __init__(self, module, star_import_modules):
        self._module = module
        self.star_import_modules = star_import_modules

    def scope_names_generator(self, position=None):
        for module, names in self._module.scope_names_generator(position):
            yield module, names
        for s in self.star_import_modules:
            yield s, s.get_defined_names()

    def __getattr__(self, name):
        return getattr(self._module, name)

    def __repr__(self):
        return "<%s: %s>" % (self.__class__.__name__, self._module)


def get_importer(evaluator, import_path, module, level=0):
    """
    Checks the evaluator caches first, which resembles the ``sys.modules``
    cache and speeds up libraries like ``numpy``.
    """
    import_path = tuple(import_path)  # We use it as hash in the import cache.
    if level != 0:
        # Only absolute imports should be cached. Otherwise we have a mess.
        # TODO Maybe calculate the absolute import and save it here?

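The cache in ``get_importer`` keys on the import path as a tuple, much like ``sys.modules`` keys on dotted names, and relative imports bypass it because their meaning depends on the importing module. A minimal sketch of that shape; the cache dict and the ``_Importer`` stub are hypothetical::

    class _Importer(object):  # stub standing in for the real class
        def __init__(self, evaluator, import_path, module, level):
            self.import_path = import_path

    _importer_cache = {}

    def get_importer(evaluator, import_path, module, level=0):
        import_path = tuple(import_path)  # hashable, like a sys.modules key
        if level != 0:
            # Relative imports depend on the importing module; don't cache.
            return _Importer(evaluator, import_path, module, level)
        try:
            return _importer_cache[import_path]
        except KeyError:
            imp = _Importer(evaluator, import_path, module, level)
            _importer_cache[import_path] = imp
            return imp

    assert get_importer(None, ('numpy',), None) is get_importer(None, ('numpy',), None)
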
@@ -326,9 +192,10 @@ class _Importer(object):
        # TODO abspath
        self.file_path = os.path.dirname(path) if path is not None else None

    @property
    def str_import_path(self):
        """Returns the import path as pure strings instead of NameParts."""
        return tuple(str(name_part) for name_part in self.import_path)
        """Returns the import path as pure strings instead of `Name`."""
        return tuple(str(name) for name in self.import_path)

    def get_relative_path(self):
        path = self.file_path
@@ -352,31 +219,45 @@ class _Importer(object):
        return in_path + sys_path_with_modifications(self._evaluator, self.module)

    def follow(self, evaluator):
        scope, rest = self.follow_file_system()
        try:
            scope, rest = self.follow_file_system()
        except ModuleNotFound:
            return []
        if rest:
            # follow the rest of the import (not FS -> classes, functions)
            return evaluator.follow_path(iter(rest), [scope], scope)
            return self.follow_rest(scope, rest)
        return [scope]

    def follow_rest(self, module, rest):
        # Either os.path or path length is smaller.
        if len(rest) < 2 or len(self.str_import_path) < 4 \
                and ('os', 'path') == self.str_import_path[:2] and self.level == 0:
            # This is a huge exception, we follow a nested import
            # ``os.path``, because it's a very important one in Python
            # that is being achieved by messing with ``sys.modules`` in
            # ``os``.
            scopes = [module]
            for r in rest:
                scopes = list(chain.from_iterable(
                    self._evaluator.find_types(s, r)
                    for s in scopes))
            return scopes
        else:
            return []

    @memoize_default(NO_DEFAULT)
    def follow_file_system(self):
        # Handle "magic" Flask extension imports:
        # ``flask.ext.foo`` is really ``flask_foo`` or ``flaskext.foo``.
        if len(self.import_path) > 2 and \
                [str(part) for part in self.import_path[:2]] == ['flask', 'ext']:
        if len(self.import_path) > 2 and self.str_import_path[:2] == ('flask', 'ext'):
            orig_path = tuple(self.import_path)
            part = orig_path[2]
            pos = (part._line, part._column)
            try:
                self.import_path = (
                    pr.NamePart(FakeSubModule, 'flask_' + str(part), part.parent, pos),
                ) + orig_path[3:]
                self.import_path = ('flask_' + str(orig_path[2]),) + orig_path[3:]
                return self._real_follow_file_system()
            except ModuleNotFound as e:
                self.import_path = (
                    pr.NamePart(FakeSubModule, 'flaskext', part.parent, pos),
                ) + orig_path[2:]
            except ModuleNotFound:
                self.import_path = ('flaskext',) + orig_path[2:]
                return self._real_follow_file_system()

        return self._real_follow_file_system()

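The Flask special case tries two rewrites of the import path in order: ``flask.ext.foo`` becomes ``flask_foo``, and if that module is missing, ``flaskext.foo``. The rewrite itself is simple tuple surgery; the helper name below is invented for illustration::

    def flask_ext_candidates(import_path):
        # ('flask', 'ext', 'foo', ...) -> the real module paths to try, in order.
        assert import_path[:2] == ('flask', 'ext')
        yield ('flask_' + import_path[2],) + import_path[3:]
        yield ('flaskext',) + import_path[2:]

    assert list(flask_ext_candidates(('flask', 'ext', 'sqlalchemy'))) == \
        [('flask_sqlalchemy',), ('flaskext', 'sqlalchemy')]
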
    def _real_follow_file_system(self):
@@ -430,7 +311,7 @@ class _Importer(object):
            options = ('declare_namespace(__name__)', 'extend_path(__path__')
            if options[0] in content or options[1] in content:
                # It is a namespace, now try to find the rest of the modules.
                return follow_path(iter(import_path), sys.path)
                return follow_path((str(i) for i in import_path), sys.path)
        return []

    def _follow_sys_path(self, sys_path):
@@ -438,7 +319,7 @@ class _Importer(object):
        Find a module with a path (of the module, like usb.backend.libusb10).
        """
        def follow_str(ns_path, string):
            debug.dbg('follow_module %s %s', ns_path, string)
            debug.dbg('follow_module %s in %s', string, ns_path)
            path = None
            if ns_path:
                path = ns_path
@@ -448,7 +329,7 @@ class _Importer(object):
            if path is not None:
                importing = find_module(string, [path])
            else:
                debug.dbg('search_module %s %s', string, self.file_path)
                debug.dbg('search_module %s in %s', string, self.file_path)
                # Override the sys.path. It only works well that way.
                # Injecting the path directly into `find_module` did not work.
                sys.path, temp = sys_path, sys.path
@@ -473,7 +354,7 @@ class _Importer(object):
                    with common.ignored(ImportError):
                        current_namespace = follow_str(rel_path, '__init__')
                elif current_namespace[2]:  # is a package
                    path = self.str_import_path()[:i]
                    path = self.str_import_path[:i]
                    for n in self.namespace_packages(current_namespace[1], path):
                        try:
                            current_namespace = follow_str(n, unicode(s))
@@ -485,7 +366,7 @@ class _Importer(object):

            if not _continue:
                if current_namespace[1]:
                    rest = self.str_import_path()[i:]
                    rest = self.str_import_path[i:]
                    break
                else:
                    raise ModuleNotFound(s)
@@ -503,49 +384,102 @@ class _Importer(object):
        else:
            source = current_namespace[0].read()
            current_namespace[0].close()
            return _load_module(path, source, sys_path=sys_path), rest
            return _load_module(self._evaluator, path, source, sys_path=sys_path), rest
        else:
            return _load_module(name=path, sys_path=sys_path), rest
            return _load_module(self._evaluator, name=path, sys_path=sys_path), rest

    def _generate_name(self, name):
        return helpers.FakeName(name, parent=self.module)

def follow_imports(evaluator, scopes):
    """
    Here we strip the imports - they don't necessarily get resolved.
    Is this really used anymore? Merge with remove_star_imports?
    """
    result = []
    for s in scopes:
        if isinstance(s, pr.Import):
            for r in ImportWrapper(evaluator, s).follow():
                result.append(r)
    def _get_module_names(self, search_path=None):
        """
        Get the names of all modules in the search_path. This means file names
        and not names defined in the files.
        """

        names = []
        # add builtin module names
        if search_path is None:
            names += [self._generate_name(name) for name in sys.builtin_module_names]

        if search_path is None:
            search_path = self.sys_path_with_modifications()
        for module_loader, name, is_pkg in pkgutil.iter_modules(search_path):
            names.append(self._generate_name(name))
        return names

    def completion_names(self, evaluator, only_modules=False):
        """
        :param only_modules: Indicates whether it's possible to import a
            definition that is not defined in a module.
        """
        from jedi.evaluate import finder, representation as er
        names = []
        if self.import_path:
            # flask
            if self.str_import_path == ('flask', 'ext'):
                # List Flask extensions like ``flask_foo``
                for mod in self._get_module_names():
                    modname = str(mod)
                    if modname.startswith('flask_'):
                        extname = modname[len('flask_'):]
                        names.append(self._generate_name(extname))
                # Now the old style: ``flaskext.foo``
                for dir in self.sys_path_with_modifications():
                    flaskext = os.path.join(dir, 'flaskext')
                    if os.path.isdir(flaskext):
                        names += self._get_module_names([flaskext])

            for scope in self.follow(evaluator):
                # Non-modules are not completable.
                if not scope.type == 'file_input':  # not a module
                    continue

                # namespace packages
                if isinstance(scope, pr.Module) and scope.path.endswith('__init__.py'):
                    pkg_path = os.path.dirname(scope.path)
                    paths = self.namespace_packages(pkg_path, self.import_path)
                    names += self._get_module_names([pkg_path] + paths)

                if only_modules:
                    # In the case of an import like `from x.` we don't need to
                    # add all the variables.
                    if ('os',) == self.str_import_path and not self.level:
                        # os.path is a hardcoded exception, because it's a
                        # ``sys.modules`` modification.
                        names.append(self._generate_name('path'))

                    continue

                for names_dict in scope.names_dicts(search_global=False):
                    _names = list(chain.from_iterable(names_dict.values()))
                    if not _names:
                        continue
                    _names = finder.filter_definition_names(_names, scope)
                    names += _names
        else:
            result.append(s)
    return result
            # Empty import path -> completion directly after `import`.
            if not self.level:
                names += self._get_module_names()

            if self.file_path is not None:
                path = os.path.abspath(self.file_path)
                for i in range(self.level - 1):
                    path = os.path.dirname(path)
                names += self._get_module_names([path])

            if self.level:
                rel_path = os.path.join(self.get_relative_path(),
                                        '__init__.py')
                if os.path.exists(rel_path):
                    module = _load_module(self._evaluator, rel_path)
                    module = er.wrap(self._evaluator, module)
                    for names_dict in module.names_dicts(search_global=False):
                        names += chain.from_iterable(names_dict.values())
        return names


@cache.cache_star_import
def remove_star_imports(evaluator, scope, ignored_modules=()):
    """
    Check a module for star imports::

        from module import *

    and follow these modules.
    """
    if isinstance(scope, StarImportModule):
        return scope.star_import_modules
    modules = follow_imports(evaluator, (i for i in scope.get_imports() if i.star))
    new = []
    for m in modules:
        if m not in ignored_modules:
            new += remove_star_imports(evaluator, m, modules)
    modules += new

    # Filter duplicate modules.
    return set(modules)

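``remove_star_imports`` follows ``from module import *`` transitively, with ``ignored_modules`` (plus the ``cache_star_import`` decorator) guarding against import cycles. Stripped of jedi specifics this is an ordinary graph walk; in the sketch below an explicit ``_seen`` set plays the role of the cache decorator::

    def transitive_star_imports(module, star_imports_of, _seen=None):
        # `star_imports_of` maps a module to the modules it star-imports.
        # `_seen` expands each module at most once, which breaks cycles.
        if _seen is None:
            _seen = set()
        result = set()
        for m in star_imports_of.get(module, ()):
            if m not in _seen:
                _seen.add(m)
                result.add(m)
                result |= transitive_star_imports(m, star_imports_of, _seen)
        return result

    graph = {'a': ['b'], 'b': ['c'], 'c': ['b']}   # note the b <-> c cycle
    assert transitive_star_imports('a', graph) == {'b', 'c'}
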
def _load_module(path=None, source=None, name=None, sys_path=None):
def _load_module(evaluator, path=None, source=None, name=None, sys_path=None):
    def load(source):
        dotted_path = path and compiled.dotted_from_fs_path(path, sys_path)
        if path is not None and path.endswith('.py') \
@@ -556,7 +490,7 @@ def _load_module(path=None, source=None, name=None, sys_path=None):
        else:
            return compiled.load_module(path, name)
        p = path or name
        p = fast.FastParser(common.source_to_unicode(source), p)
        p = fast.FastParser(evaluator.grammar, common.source_to_unicode(source), p)
        cache.save_parser(path, name, p)
        return p.module

@@ -564,7 +498,7 @@ def _load_module(path=None, source=None, name=None, sys_path=None):
    return load(source) if cached is None else cached.module


def get_modules_containing_name(mods, name):
def get_modules_containing_name(evaluator, mods, name):
    """
    Search a name in the directories of modules.
    """
@@ -581,7 +515,7 @@ def get_modules_containing_name(mods, name):
        with open(path, 'rb') as f:
            source = source_to_unicode(f.read())
        if name in source:
            return _load_module(path, source)
            return _load_module(evaluator, path, source)

    # skip non python modules
    mods = set(m for m in mods if not isinstance(m, compiled.CompiledObject))
@@ -603,5 +537,5 @@ def get_modules_containing_name(mods, name):
    for p in sorted(paths):
        # make testing easier, sort it - same results on every interpreter
        c = check_python_file(p)
        if c is not None and c not in mods:
        if c is not None and c not in mods and not isinstance(c, compiled.CompiledObject):
            yield c

@@ -26,53 +26,38 @@ from jedi import common
from jedi import debug
from jedi import settings
from jedi._compatibility import use_metaclass, is_py3, unicode
from jedi.parser import representation as pr
from jedi.parser import tree as pr
from jedi.evaluate import compiled
from jedi.evaluate import helpers
from jedi.evaluate import precedence
from jedi.evaluate.cache import CachedMetaClass, memoize_default, NO_DEFAULT
from jedi.cache import underscore_memoization
from jedi.evaluate.cache import CachedMetaClass, memoize_default
from jedi.evaluate import analysis


def unite(iterable):
    """Turns a two dimensional array into a one dimensional."""
    return list(chain.from_iterable(iterable))


class IterableWrapper(pr.Base):
    def is_class(self):
        return False


class Generator(use_metaclass(CachedMetaClass, IterableWrapper)):
    """Handling of `yield` functions."""
    def __init__(self, evaluator, func, var_args):
        super(Generator, self).__init__()
        self._evaluator = evaluator
        self.func = func
        self.var_args = var_args

    @underscore_memoization
    def _get_defined_names(self):
        """
        Returns a list of names that define a generator, which can return the
        content of a generator.
        """
class GeneratorMixin(object):
    @memoize_default()
    def names_dicts(self, search_global=False):  # is always False
        dct = {}
        executes_generator = '__next__', 'send', 'next'
        for name in compiled.generator_obj.get_defined_names():
            if name.name in executes_generator:
                parent = GeneratorMethod(self, name.parent)
                yield helpers.FakeName(name.name, parent)
            else:
                yield name
        for names in compiled.generator_obj.names_dict.values():
            for name in names:
                if name.value in executes_generator:
                    parent = GeneratorMethod(self, name.parent)
                    dct[name.value] = [helpers.FakeName(name.name, parent, is_definition=True)]
                else:
                    dct[name.value] = [name]
        yield dct

    def scope_names_generator(self, position=None):
        yield self, self._get_defined_names()

    def iter_content(self):
        """ returns the content of __iter__ """
        # Directly execute it, because with a normal call to py__call__ a
        # Generator will be returned.
        from jedi.evaluate.representation import FunctionExecution
        return FunctionExecution(self._evaluator, self.func, self.var_args).get_return_types()

    def get_index_types(self, index_array):
    def get_index_types(self, evaluator, index_array):
        #debug.warning('Tried to get array access on a generator: %s', self)
        analysis.add(self._evaluator, 'type-error-generator', index_array)
        return []
@@ -84,9 +69,29 @@ class Generator(use_metaclass(CachedMetaClass, IterableWrapper)):
        """
        return [self.iter_content()[index]]

    def py__bool__(self):
        return True


class Generator(use_metaclass(CachedMetaClass, IterableWrapper, GeneratorMixin)):
    """Handling of `yield` functions."""
    def __init__(self, evaluator, func, var_args):
        super(Generator, self).__init__()
        self._evaluator = evaluator
        self.func = func
        self.var_args = var_args

    def iter_content(self):
        """ returns the content of __iter__ """
        # Directly execute it, because with a normal call to py__call__ a
        # Generator will be returned.
        from jedi.evaluate.representation import FunctionExecution
        f = FunctionExecution(self._evaluator, self.func, self.var_args)
        return f.get_return_types(check_yields=True)

    def __getattr__(self, name):
        if name not in ['start_pos', 'end_pos', 'parent', 'get_imports',
                        'asserts', 'doc', 'docstr', 'get_parent_until',
                        'doc', 'docstr', 'get_parent_until',
                        'get_code', 'subscopes']:
            raise AttributeError("Accessing %s of %s is not allowed."
                                 % (self, name))
@@ -110,46 +115,115 @@ class GeneratorMethod(IterableWrapper):
        return getattr(self._builtin_func, name)


class GeneratorComprehension(Generator):
    def __init__(self, evaluator, comprehension):
        super(GeneratorComprehension, self).__init__(evaluator, comprehension, None)
        self.comprehension = comprehension
class Comprehension(IterableWrapper):
    @staticmethod
    def from_atom(evaluator, atom):
        mapping = {
            '(': GeneratorComprehension,
            '[': ListComprehension
        }
        return mapping[atom.children[0]](evaluator, atom)

    def iter_content(self):
        return self._evaluator.eval_statement_element(self.comprehension)


class Array(use_metaclass(CachedMetaClass, IterableWrapper)):
    """
    Used as a mirror to pr.Array, if needed. It defines some getter
    methods which are important in this module.
    """
    def __init__(self, evaluator, array):
    def __init__(self, evaluator, atom):
        self._evaluator = evaluator
        self._array = array
        self._atom = atom

    @property
    def name(self):
        return helpers.FakeName(self._array.type, parent=self)
    @memoize_default()
    def eval_node(self):
        """
        The first part `x + 1` of the list comprehension:

            [x + 1 for x in foo]
        """
        comprehension = self._atom.children[1]
        # For nested comprehensions we need to search the last one.
        last = comprehension.children[-1]
        last_comp = comprehension.children[1]
        while True:
            if isinstance(last, pr.CompFor):
                last_comp = last
            elif not pr.is_node(last, 'comp_if'):
                break
            last = last.children[-1]

        return helpers.deep_ast_copy(comprehension.children[0], parent=last_comp)

    def get_exact_index_types(self, index):
        return [self._evaluator.eval_element(self.eval_node())[index]]

    def __repr__(self):
        return "<e%s of %s>" % (type(self).__name__, self._atom)

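``eval_node`` digs down to the innermost ``comp_for`` so that, in a nested comprehension, the copied element expression gets the clause that actually binds its variable as its parent. Python's standard ``ast`` module exposes the same tree shape, so it makes a convenient illustration here (jedi's own node classes differ, this is only an analogy)::

    import ast

    tree = ast.parse('[x + 1 for l in lists for x in l]', mode='eval')
    comp = tree.body                       # the ListComp node
    assert [g.target.id for g in comp.generators] == ['l', 'x']
    # The element `x + 1` must be evaluated in the scope of the *last*
    # generator, the one that binds `x`.
    last_comp = comp.generators[-1]
    assert last_comp.target.id == 'x'
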
class ArrayMixin(object):
    @memoize_default()
    def names_dicts(self, search_global=False):  # Always False.
        # `array.type` is a string with the type, e.g. 'list'.
        scope = self._evaluator.find_types(compiled.builtin, self.type)[0]
        # builtins only have one class -> [0]
        scope = self._evaluator.execute(scope, (AlreadyEvaluated((self,)),))[0]
        return scope.names_dicts(search_global)

    def py__bool__(self):
        return None  # We don't know the length, because of appends.

    @memoize_default(NO_DEFAULT)
    def get_index_types(self, index_array=()):

class ListComprehension(Comprehension, ArrayMixin):
    type = 'list'

    def get_index_types(self, evaluator, index):
        return self.iter_content()

    def iter_content(self):
        return self._evaluator.eval_element(self.eval_node())

    @property
    def name(self):
        return FakeSequence(self._evaluator, [], 'list').name


class GeneratorComprehension(Comprehension, GeneratorMixin):
    def iter_content(self):
        return self._evaluator.eval_element(self.eval_node())


class Array(IterableWrapper, ArrayMixin):
    mapping = {'(': 'tuple',
               '[': 'list',
               '{': 'dict'}

    def __init__(self, evaluator, atom):
        self._evaluator = evaluator
        self.atom = atom
        self.type = Array.mapping[atom.children[0]]
        """The builtin name of the array (list, set, tuple or dict)."""

        c = self.atom.children
        array_node = c[1]
        if self.type == 'dict' and array_node != '}' \
                and (not hasattr(array_node, 'children')
                     or ':' not in array_node.children):
            self.type = 'set'

    @property
    def name(self):
        return helpers.FakeName(self.type, parent=self)

    @memoize_default()
    def get_index_types(self, evaluator, index=()):
        """
        Get the types of a specific index or all, if not given.

        :param indexes: The index input types.
        :param index: A subscriptlist node (or subnode).
        """
        indexes = create_indexes_or_slices(self._evaluator, index_array)
        if [index for index in indexes if isinstance(index, Slice)]:
            return [self]

        indexes = create_indexes_or_slices(evaluator, index)
        lookup_done = False
        types = []
        for index in indexes:
            if isinstance(index, compiled.CompiledObject) \
            if isinstance(index, Slice):
                types += [self]
                lookup_done = True
            elif isinstance(index, compiled.CompiledObject) \
                    and isinstance(index.obj, (int, str, unicode)):
                with common.ignored(KeyError, IndexError, TypeError):
                    types += self.get_exact_index_types(index.obj)
@@ -157,50 +231,31 @@ class Array(use_metaclass(CachedMetaClass, IterableWrapper)):

        return types if lookup_done else self.values()

    @memoize_default(NO_DEFAULT)
    @memoize_default()
    def values(self):
        result = list(_follow_values(self._evaluator, self._array.values))
        result = unite(self._evaluator.eval_element(v) for v in self._values())
        result += check_array_additions(self._evaluator, self)
        return result

    def get_exact_index_types(self, mixed_index):
        """ Here the index is an int/str. Raises IndexError/KeyError """
        index = mixed_index
        if self.type == pr.Array.DICT:
            index = None
            for i, key_statement in enumerate(self._array.keys):
        if self.type == 'dict':
            for key, values in self._items():
                # Because we only want the key to be a string.
                key_expression_list = key_statement.expression_list()
                if len(key_expression_list) != 1:  # cannot deal with complex strings
                    continue
                key = key_expression_list[0]
                if isinstance(key, pr.Literal):
                    key = key.value
                elif isinstance(key, pr.Name):
                    key = str(key)
                else:
                    continue
                keys = self._evaluator.eval_element(key)

                if mixed_index == key:
                    index = i
                    break
            if index is None:
                raise KeyError('No key found in dictionary')
                for k in keys:
                    if isinstance(k, compiled.CompiledObject) \
                            and mixed_index == k.obj:
                        for value in values:
                            return self._evaluator.eval_element(value)
            raise KeyError('No key found in dictionary %s.' % self)

        # Can raise an IndexError
        values = [self._array.values[index]]
        return _follow_values(self._evaluator, values)
        return self._evaluator.eval_element(self._items()[mixed_index])

    def scope_names_generator(self, position=None):
        """
        This method generates all `ArrayMethod` for one pr.Array.
        It returns e.g. for a list: append, pop, ...
        """
        # `array.type` is a string with the type, e.g. 'list'.
        scope = self._evaluator.find_types(compiled.builtin, self._array.type)[0]
        scope = self._evaluator.execute(scope)[0]  # builtins only have one class
        for _, names in scope.scope_names_generator():
            yield self, [ArrayMethod(n) for n in names]
    def iter_content(self):
        return self.values()

    @common.safe_property
    def parent(self):
@@ -210,60 +265,113 @@ class Array(use_metaclass(CachedMetaClass, IterableWrapper)):
        return compiled.builtin

    def __getattr__(self, name):
        if name not in ['type', 'start_pos', 'get_only_subelement', 'parent',
        if name not in ['start_pos', 'get_only_subelement', 'parent',
                        'get_parent_until', 'items']:
            raise AttributeError('Strange access on %s: %s.' % (self, name))
        return getattr(self._array, name)
        return getattr(self.atom, name)

    def _values(self):
        """Returns a list of value nodes."""
        if self.type == 'dict':
            return list(chain.from_iterable(v for k, v in self._items()))
        else:
            return self._items()

    def _items(self):
        c = self.atom.children
        array_node = c[1]
        if array_node in (']', '}', ')'):
            return []  # Direct closing bracket, doesn't contain items.

        if pr.is_node(array_node, 'testlist_comp'):
            return array_node.children[::2]
        elif pr.is_node(array_node, 'dictorsetmaker'):
            kv = []
            iterator = iter(array_node.children)
            for key in iterator:
                op = next(iterator, None)
                if op is None or op == ',':
                    kv.append(key)  # A set.
                elif op == ':':  # A dict.
                    kv.append((key, [next(iterator)]))
                    next(iterator, None)  # Possible comma.
                else:
                    raise NotImplementedError('dict/set comprehensions')
            return kv
        else:
            return [array_node]

    def __iter__(self):
        return iter(self._array)

    def __len__(self):
        return len(self._array)
        return iter(self._items())

    def __repr__(self):
        return "<e%s of %s>" % (type(self).__name__, self._array)
        return "<%s of %s>" % (type(self).__name__, self.atom)

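In ``_items`` a ``dictorsetmaker``'s children alternate between keys, ``:`` operators, values, and commas; the loop pairs each key with its value and treats a bare key as a set element. The pairing logic on a plain token list (a hypothetical stand-in for the real child nodes; the diff also wraps each value in a list and rejects comprehension operators, both omitted here for brevity)::

    def pair_dict_items(children):
        kv = []
        iterator = iter(children)
        for key in iterator:
            op = next(iterator, None)
            if op is None or op == ',':
                kv.append(key)                 # a set element
            elif op == ':':                    # a dict entry
                kv.append((key, next(iterator)))
                next(iterator, None)           # swallow a possible comma
        return kv

    assert pair_dict_items(['a', ':', '1', ',', 'b', ':', '2']) == \
        [('a', '1'), ('b', '2')]
    assert pair_dict_items(['a', ',', 'b']) == ['a', 'b']
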
class ArrayMethod(IterableWrapper):
    """
    A name, e.g. ``list.append``, that is used to access the original array
    methods.
    """
    def __init__(self, name):
        super(ArrayMethod, self).__init__()
        self.name = name

    @property
    @underscore_memoization
    def names(self):
        # TODO remove this method, we need the ArrayMethod input to be a NamePart.
        return [pr.NamePart(self.name._sub_module, unicode(n), self, n.start_pos) for n in self.name.names]

    def __getattr__(self, name):
        # Set access privileges:
        if name not in ['parent', 'start_pos', 'end_pos', 'get_code', 'get_definition']:
            raise AttributeError('Strange access on %s: %s.' % (self, name))
        return getattr(self.name, name)

    def get_parent_until(self):
        return compiled.builtin

    def __repr__(self):
        return "<%s of %s>" % (type(self).__name__, self.name)


class _FakeArray(Array):
    def __init__(self, evaluator, container, type):
        self.type = type
        self._evaluator = evaluator
        self.atom = container


class MergedArray(Array):
class ImplicitTuple(_FakeArray):
    def __init__(self, evaluator, testlist):
        super(ImplicitTuple, self).__init__(evaluator, testlist, 'tuple')
        self._testlist = testlist

    def _items(self):
        return self._testlist.children[::2]


class FakeSequence(_FakeArray):
    def __init__(self, evaluator, sequence_values, type):
        super(FakeSequence, self).__init__(evaluator, sequence_values, type)
        self._sequence_values = sequence_values

    def _items(self):
        return self._sequence_values

    def get_exact_index_types(self, index):
        value = self._sequence_values[index]
        return self._evaluator.eval_element(value)


class AlreadyEvaluated(frozenset):
    """A simple container to add already evaluated objects to an array."""
    def get_code(self):
        # For debugging purposes.
        return str(self)


class MergedNodes(frozenset):
    pass


class FakeDict(_FakeArray):
    def __init__(self, evaluator, dct):
        super(FakeDict, self).__init__(evaluator, dct, 'dict')
        self._dct = dct

    def get_exact_index_types(self, index):
        return list(chain.from_iterable(self._evaluator.eval_element(v)
                                        for v in self._dct[index]))

    def _items(self):
        return self._dct.items()


class MergedArray(_FakeArray):
    def __init__(self, evaluator, arrays):
        super(MergedArray, self).__init__(evaluator, arrays[-1]._array)
        super(MergedArray, self).__init__(evaluator, arrays, arrays[-1].type)
        self._arrays = arrays

    def get_index_types(self, mixed_index):
        return list(chain(*(a.values() for a in self._arrays)))

    def get_exact_index_types(self, mixed_index):
        raise IndexError

    def values(self):
        return list(chain(*(a.values() for a in self._arrays)))

    def __iter__(self):
        for array in self._arrays:
            for a in array:
@@ -279,7 +387,7 @@ def get_iterator_types(inputs):
    # Take the first statement (for has always only
    # one, remember `in`). And follow it.
    for it in inputs:
        if isinstance(it, (Generator, Array, ArrayInstance)):
        if isinstance(it, (Generator, Array, ArrayInstance, Comprehension)):
            iterators.append(it)
        else:
            if not hasattr(it, 'execute_subscope_by_name'):
@@ -294,9 +402,8 @@ def get_iterator_types(inputs):
    from jedi.evaluate.representation import Instance
    for it in iterators:
        if isinstance(it, Array):
            # Array is a little bit special, since this is an internal
            # array, but there's also the list builtin, which is
            # another thing.
            # Array is a little bit special, since this is an internal array,
            # but there's also the list builtin, which is another thing.
            result += it.values()
        elif isinstance(it, Instance):
            # __iter__ returned an instance.
@@ -314,128 +421,128 @@ def get_iterator_types(inputs):

def check_array_additions(evaluator, array):
    """ Just a mapper function for the internal _check_array_additions """
    if not pr.Array.is_type(array._array, pr.Array.LIST, pr.Array.SET):
    if array.type not in ('list', 'set'):
        # TODO also check for dict updates
        return []

    is_list = array._array.type == 'list'
    current_module = array._array.get_parent_until()
    res = _check_array_additions(evaluator, array, current_module, is_list)
    return res
is_list = array.type == 'list'
|
||||
try:
|
||||
current_module = array.atom.get_parent_until()
|
||||
except AttributeError:
|
||||
# If there's no get_parent_until, it's a FakeSequence or another Fake
|
||||
# type. Those fake types are used inside Jedi's engine. No values may
|
||||
# be added to those after their creation.
|
||||
return []
|
||||
return _check_array_additions(evaluator, array, current_module, is_list)
|
||||
|
||||
|
||||
@memoize_default([], evaluator_is_first_arg=True)
|
||||
def _check_array_additions(evaluator, compare_array, module, is_list):
|
||||
"""
|
||||
Checks if a `pr.Array` has "add" statements:
|
||||
Checks if a `Array` has "add" (append, insert, extend) statements:
|
||||
|
||||
>>> a = [""]
|
||||
>>> a.append(1)
|
||||
"""
|
||||
if not settings.dynamic_array_additions or isinstance(module, compiled.CompiledObject):
|
||||
return []
|
||||
|
||||
def check_calls(calls, add_name):
|
||||
"""
|
||||
Calls are processed here. The part before the call is searched and
|
||||
compared with the original Array.
|
||||
"""
|
||||
def check_additions(arglist, add_name):
|
||||
params = list(param.Arguments(evaluator, arglist).unpack())
|
||||
result = []
|
||||
for c in calls:
|
||||
call_path = list(c.generate_call_path())
|
||||
call_path_simple = [unicode(n) if isinstance(n, pr.NamePart) else n
|
||||
for n in call_path]
|
||||
separate_index = call_path_simple.index(add_name)
|
||||
if add_name == call_path_simple[-1] or separate_index == 0:
|
||||
# this means that there is no execution -> [].append
|
||||
# or the keyword is at the start -> append()
|
||||
continue
|
||||
backtrack_path = iter(call_path[:separate_index])
|
||||
|
||||
position = c.start_pos
|
||||
scope = c.get_parent_scope()
|
||||
|
||||
found = evaluator.eval_call_path(backtrack_path, scope, position)
|
||||
if not compare_array in found:
|
||||
continue
|
||||
|
||||
params = call_path[separate_index + 1]
|
||||
if not params.values:
|
||||
continue # no params: just ignore it
|
||||
if add_name in ['append', 'add']:
|
||||
for param in params:
|
||||
result += evaluator.eval_statement(param)
|
||||
elif add_name in ['insert']:
|
||||
try:
|
||||
second_param = params[1]
|
||||
except IndexError:
|
||||
continue
|
||||
else:
|
||||
result += evaluator.eval_statement(second_param)
|
||||
elif add_name in ['extend', 'update']:
|
||||
for param in params:
|
||||
iterators = evaluator.eval_statement(param)
|
||||
if add_name in ['insert']:
|
||||
params = params[1:]
|
||||
if add_name in ['append', 'add', 'insert']:
|
||||
for key, nodes in params:
|
||||
result += unite(evaluator.eval_element(node) for node in nodes)
|
||||
elif add_name in ['extend', 'update']:
|
||||
for key, nodes in params:
|
||||
iterators = unite(evaluator.eval_element(node) for node in nodes)
|
||||
result += get_iterator_types(iterators)
|
||||
return result
|
||||
|
||||
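The machinery above exists so that completions see values added after a list's creation, as the docstring's doctest hints. A plain-Python illustration of the effect the evaluator has to reproduce (not Jedi API — just the user-visible scenario):

# The element types of `a` come from the literal *and* from later
# append/insert/extend calls; Jedi's dynamic-addition search recovers these.
a = ['']
a.append(1)
a.insert(0, 1.0)
a.extend([object()])
element_types = {type(x) for x in a}
assert element_types == {str, int, float, object}
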
from jedi.evaluate import representation as er
from jedi.evaluate import representation as er, param

def get_execution_parent(element, *stop_classes):
def get_execution_parent(element):
""" Used to get an Instance/FunctionExecution parent """
if isinstance(element, Array):
stmt = element._array.parent
node = element.atom
else:
# is an Instance with an ArrayInstance inside
stmt = element.var_args[0].var_args.parent
if isinstance(stmt, er.InstanceElement):
stop_classes = list(stop_classes) + [er.Function]
return stmt.get_parent_until(stop_classes)
# Is an Instance with an
# Arguments([AlreadyEvaluated([ArrayInstance])]) inside
# Yeah... I know... It's complicated ;-)
node = list(element.var_args.argument_node[0])[0].var_args.trailer
if isinstance(node, er.InstanceElement):
return node
return node.get_parent_until(er.FunctionExecution)

temp_param_add = settings.dynamic_params_for_other_modules
settings.dynamic_params_for_other_modules = False
temp_param_add, settings.dynamic_params_for_other_modules = \
settings.dynamic_params_for_other_modules, False

search_names = ['append', 'extend', 'insert'] if is_list else \
['add', 'update']
comp_arr_parent = get_execution_parent(compare_array, er.FunctionExecution)
search_names = ['append', 'extend', 'insert'] if is_list else ['add', 'update']
comp_arr_parent = get_execution_parent(compare_array)

possible_stmts = []
res = []
for n in search_names:
added_types = []
for add_name in search_names:
try:
possible_stmts += module.used_names[n]
possible_names = module.used_names[add_name]
except KeyError:
continue
for stmt in possible_stmts:
# Check if the original scope is an execution. If it is, one
# can search for the same statement, that is in the module
# dict. Executions are somewhat special in jedi, since they
# literally copy the contents of a function.
if isinstance(comp_arr_parent, er.FunctionExecution):
stmt = comp_arr_parent. \
get_statement_for_position(stmt.start_pos)
if stmt is None:
else:
for name in possible_names:
# Check if the original scope is an execution. If it is, one
# can search for the same statement, that is in the module
# dict. Executions are somewhat special in jedi, since they
# literally copy the contents of a function.
if isinstance(comp_arr_parent, er.FunctionExecution):
if comp_arr_parent.start_pos < name.start_pos < comp_arr_parent.end_pos:
name = comp_arr_parent.name_for_position(name.start_pos)
else:
# Don't check definitions that are not defined in the
# same function. This is not "proper" anyway. It also
# improves Jedi's speed for array lookups, since we
# don't have to check the whole source tree anymore.
continue
trailer = name.parent
power = trailer.parent
trailer_pos = power.children.index(trailer)
try:
execution_trailer = power.children[trailer_pos + 1]
except IndexError:
continue
# InstanceElements are special, because they don't get copied,
# but have this wrapper around them.
if isinstance(comp_arr_parent, er.InstanceElement):
stmt = er.get_instance_el(comp_arr_parent.instance, stmt)
else:
if execution_trailer.type != 'trailer' \
or execution_trailer.children[0] != '(' \
or execution_trailer.children[1] == ')':
continue
power = helpers.call_of_name(name, cut_own_trailer=True)
# InstanceElements are special, because they don't get copied,
# but have this wrapper around them.
if isinstance(comp_arr_parent, er.InstanceElement):
power = er.get_instance_el(evaluator, comp_arr_parent.instance, power)

if evaluator.recursion_detector.push_stmt(stmt):
# check recursion
continue
if evaluator.recursion_detector.push_stmt(power):
# Check for recursion. Possible by using 'extend' in
# combination with function calls.
continue
if compare_array in evaluator.eval_element(power):
# The arrays match. Now add the results
added_types += check_additions(execution_trailer.children[1], add_name)

res += check_calls(helpers.scan_statement_for_calls(stmt, n), n)
evaluator.recursion_detector.pop_stmt()
evaluator.recursion_detector.pop_stmt()
# reset settings
settings.dynamic_params_for_other_modules = temp_param_add
return res
return added_types


def check_array_instances(evaluator, instance):
"""Used for set() and list() instances."""
if not settings.dynamic_arrays_instances:
if not settings.dynamic_array_additions:
return instance.var_args

ai = ArrayInstance(evaluator, instance)
return [ai]
from jedi.evaluate import param
return param.Arguments(evaluator, [AlreadyEvaluated([ai])])


class ArrayInstance(IterableWrapper):
@@ -443,6 +550,11 @@ class ArrayInstance(IterableWrapper):
Used for the usage of set() and list().
This is definitely a hack, but a good one :-)
It makes it possible to use set/list conversions.

In contrast to Array, ListComprehension and all other iterable types, this
is something that is only used inside `evaluate/compiled/fake/builtins.py`
and therefore doesn't need `names_dicts`, `py__bool__` and so on, because
we don't use these operations in `builtins.py`.
"""
def __init__(self, evaluator, instance):
self._evaluator = evaluator
@@ -455,21 +567,10 @@ class ArrayInstance(IterableWrapper):
lists/sets are too complicated to handle that.
"""
items = []
from jedi.evaluate.representation import Instance
for stmt in self.var_args:
for typ in self._evaluator.eval_statement(stmt):
if isinstance(typ, Instance) and len(typ.var_args):
array = typ.var_args[0]
if isinstance(array, ArrayInstance):
# Certain combinations can cause recursions, see tests.
if not self._evaluator.recursion_detector.push_stmt(self.var_args):
items += array.iter_content()
self._evaluator.recursion_detector.pop_stmt()
items += get_iterator_types([typ])

# TODO check if exclusion of tuple is a problem here.
if isinstance(self.var_args, tuple) or self.var_args.parent is None:
return [] # generated var_args should not be checked for arrays
for key, nodes in self.var_args.unpack():
for node in nodes:
for typ in self._evaluator.eval_element(node):
items += get_iterator_types([typ])

module = self.var_args.get_parent_until()
is_list = str(self.instance.name) == 'list'
@@ -477,11 +578,6 @@ class ArrayInstance(IterableWrapper):
return items


def _follow_values(evaluator, values):
""" helper function for the index getters """
return list(chain.from_iterable(evaluator.eval_statement(v) for v in values))


class Slice(object):
def __init__(self, evaluator, start, stop, step):
self._evaluator = evaluator
@@ -500,7 +596,7 @@ class Slice(object):
if element is None:
return None

result = precedence.process_precedence_element(self._evaluator, element)
result = self._evaluator.eval_element(element)
if len(result) != 1:
# We want slices to be clearly defined with just one type.
# Otherwise we will return an empty slice object.
@@ -516,26 +612,20 @@ class Slice(object):
return slice(None, None, None)


def create_indexes_or_slices(evaluator, index_array):
if not index_array:
return ()
def create_indexes_or_slices(evaluator, index):
if pr.is_node(index, 'subscript'): # subscript is a slice operation.
start, stop, step = None, None, None
result = []
for el in index.children:
if el == ':':
if not result:
result.append(None)
elif pr.is_node(el, 'sliceop'):
if len(el.children) == 2:
result.append(el.children[1])
else:
result.append(el)
result += [None] * (3 - len(result))

# Just take the first part of the "array", because this is Python stdlib
# behavior. Numpy et al. perform differently, but Jedi won't understand
# that anyway.
expression_list = index_array[0].expression_list()
prec = precedence.create_precedence(expression_list)

# check for slices
if isinstance(prec, precedence.Precedence) and prec.operator == ':':
start = prec.left
if isinstance(start, precedence.Precedence) and start.operator == ':':
stop = start.right
start = start.left
step = prec.right
else:
stop = prec.right
step = None
return (Slice(evaluator, start, stop, step),)
else:
return tuple(precedence.process_precedence_element(evaluator, prec))
return (Slice(evaluator, *result),)
return evaluator.eval_element(index)

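A sketch of the start/stop/step extraction in the new `create_indexes_or_slices`, run over a flat token list standing in for a `subscript` node's children (real inputs are parser objects; a `sliceop` node is simplified here to an (op, value) tuple):

def split_subscript(children):
    # `a[1:2:3]` parses to children like [1, ':', 2, (':', 3)];
    # a leading ':' means the start is omitted, e.g. `a[:2]` -> [':', 2].
    result = []
    for el in children:
        if el == ':':
            if not result:
                result.append(None)   # missing start
        elif isinstance(el, tuple):   # stands in for a 'sliceop' node
            result.append(el[1])      # the step, if present
        else:
            result.append(el)
    result += [None] * (3 - len(result))
    return result  # [start, stop, step]

assert split_subscript([1, ':', 2, (':', 3)]) == [1, 2, 3]
assert split_subscript([':', 2]) == [None, 2, None]
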
@@ -1,49 +1,202 @@
import copy
from collections import defaultdict
from itertools import chain

from jedi._compatibility import unicode, zip_longest
from jedi.parser import representation as pr
from jedi.evaluate import iterable
from jedi import debug
from jedi import common
from jedi.evaluate import helpers
from jedi.parser import tree as pr
from jedi.evaluate import iterable
from jedi.evaluate import analysis
from jedi.evaluate import precedence
from jedi.evaluate.helpers import FakeName
from jedi.cache import underscore_memoization


class Arguments(pr.Base):
def __init__(self, evaluator, argument_node, trailer=None):
"""
The argument_node is either a parser node or a list of evaluated
objects. Those evaluated objects may be lists of evaluated objects
themselves (one list for the first argument, one for the second, etc).

:param argument_node: May be an argument_node or a list of nodes.
"""
self.argument_node = argument_node
self._evaluator = evaluator
self.trailer = trailer # Can be None, e.g. in a class definition.

def _split(self):
if isinstance(self.argument_node, (tuple, list)):
for el in self.argument_node:
yield 0, el
else:
if not pr.is_node(self.argument_node, 'arglist'):
yield 0, self.argument_node
return

iterator = iter(self.argument_node.children)
for child in iterator:
if child == ',':
continue
elif child in ('*', '**'):
yield len(child.value), next(iterator)
else:
yield 0, child

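`_split` reduces an arglist to (star_count, node) pairs. A standalone sketch over plain strings (the real code reads `len(child.value)` from an operator node; here the operator is the string itself):

def split_arglist(children):
    # 'f(a, *b, **c)' has arglist children like
    # ['a', ',', '*', 'b', ',', '**', 'c']
    iterator = iter(children)
    for child in iterator:
        if child == ',':
            continue
        elif child in ('*', '**'):
            # star count: 1 for *args, 2 for **kwargs
            yield len(child), next(iterator)
        else:
            yield 0, child

assert list(split_arglist(['a', ',', '*', 'b', ',', '**', 'c'])) == \
    [(0, 'a'), (1, 'b'), (2, 'c')]
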
def get_parent_until(self, *args, **kwargs):
if self.trailer is None:
try:
element = self.argument_node[0]
from jedi.evaluate.iterable import AlreadyEvaluated
if isinstance(element, AlreadyEvaluated):
element = self._evaluator.eval_element(element)[0]
except IndexError:
return None
else:
return element.get_parent_until(*args, **kwargs)
else:
return self.trailer.get_parent_until(*args, **kwargs)

def as_tuple(self):
for stars, argument in self._split():
if pr.is_node(argument, 'argument'):
argument, default = argument.children[::2]
else:
default = None
yield argument, default, stars

def unpack(self, func=None):
named_args = []
for stars, el in self._split():
if stars == 1:
arrays = self._evaluator.eval_element(el)
iterators = [_iterate_star_args(self._evaluator, a, el, func)
for a in arrays]
iterators = list(iterators)
for values in list(zip_longest(*iterators)):
yield None, [v for v in values if v is not None]
elif stars == 2:
arrays = self._evaluator.eval_element(el)
dicts = [_star_star_dict(self._evaluator, a, el, func)
for a in arrays]
for dct in dicts:
for key, values in dct.items():
yield key, values
else:
if pr.is_node(el, 'argument'):
c = el.children
if len(c) == 3: # Keyword argument.
named_args.append((c[0].value, (c[2],)))
else: # Generator comprehension.
# Include the brackets with the parent.
comp = iterable.GeneratorComprehension(
self._evaluator, self.argument_node.parent)
yield None, (iterable.AlreadyEvaluated([comp]),)
elif isinstance(el, (list, tuple)):
yield None, el
else:
yield None, (el,)

# Reordering var_args is necessary, because star args sometimes appear
# after named argument, but in the actual order it's prepended.
for key_arg in named_args:
yield key_arg

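When several possible arrays flow into one `*args`, `unpack` zips them position-wise so each positional slot carries every candidate value. The same idea with plain lists (assuming two inferred candidates for the starred expression):

from itertools import zip_longest

candidates = [[1, 2, 3], ['a', 'b']]      # two possible values of *args
slots = [[v for v in values if v is not None]
         for values in zip_longest(*candidates)]
# position 0 may be 1 or 'a', position 1 may be 2 or 'b', position 2 only 3
assert slots == [[1, 'a'], [2, 'b'], [3]]
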
def _reorder_var_args(var_args):
named_index = None
new_args = []
for i, stmt in enumerate(var_args):
if isinstance(stmt, pr.ExprStmt):
if named_index is None and stmt.assignment_details:
named_index = i

if named_index is not None:
expression_list = stmt.expression_list()
if expression_list and expression_list[0] == '*':
new_args.insert(named_index, stmt)
named_index += 1
continue

new_args.append(stmt)
return new_args

def eval_argument_clinic(self, arguments):
"""Uses a list with argument clinic information (see PEP 436)."""
iterator = self.unpack()
for i, (name, optional, allow_kwargs) in enumerate(arguments):
key, va_values = next(iterator, (None, []))
if key is not None:
raise NotImplementedError
if not va_values and not optional:
debug.warning('TypeError: %s expected at least %s arguments, got %s',
name, len(arguments), i)
raise ValueError
values = list(chain.from_iterable(self._evaluator.eval_element(el)
for el in va_values))
if not values and not optional:
# For the stdlib we always want values. If we don't get them,
# that's ok, maybe something is too hard to resolve, however,
# we will not proceed with the evaluation of that function.
debug.warning('argument_clinic "%s" not resolvable.', name)
raise ValueError
yield values

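`eval_argument_clinic` walks an argument-clinic style spec of (name, optional, allow_kwargs) triples and yields the evaluated values per parameter. A toy version over plain values instead of parser nodes (hypothetical helper, only the control flow is the same):

def eval_argument_clinic(spec, args):
    # spec: [(name, optional, allow_kwargs), ...]; args: positional values.
    iterator = iter(args)
    for name, optional, allow_kwargs in spec:
        value = next(iterator, None)
        if value is None and not optional:
            raise ValueError('%s is required' % name)
        yield value

spec = [('obj', False, False), ('default', True, False)]
assert list(eval_argument_clinic(spec, ['x'])) == ['x', None]
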
def scope(self):
# Returns the scope in which the arguments are used.
return (self.trailer or self.argument_node).get_parent_until(pr.IsScope)

def eval_args(self):
# TODO this method doesn't work with named args and a lot of other
# things. Use unpack.
return [self._evaluator.eval_element(el) for stars, el in self._split()]

def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self.argument_node)

def get_calling_var_args(self):
if pr.is_node(self.argument_node, 'arglist', 'argument') \
or self.argument_node == () and self.trailer is not None:
return _get_calling_var_args(self._evaluator, self)
else:
return None


class ExecutedParam(pr.Param):
def __init__(self):
"""Don't use this method, it's just here to overwrite the old one."""
pass
"""Fake a param and give it values."""
def __init__(self, original_param, var_args, values):
self._original_param = original_param
self.var_args = var_args
self._values = values

@classmethod
def from_param(cls, param, parent, var_args):
instance = cls()
before = ()
for cls in param.__class__.__mro__:
with common.ignored(AttributeError):
if before == cls.__slots__:
continue
before = cls.__slots__
for name in before:
setattr(instance, name, getattr(param, name))
def eval(self, evaluator):
types = []
for v in self._values:
types += evaluator.eval_element(v)
return types

instance.original_param = param
instance.is_generated = True
instance.parent = parent
instance.var_args = var_args
return instance
@property
def position_nr(self):
# Need to use the original logic here, because it uses the parent.
return self._original_param.position_nr

@property
@underscore_memoization
def name(self):
return FakeName(str(self._original_param.name), self, self.start_pos)

def __getattr__(self, name):
return getattr(self._original_param, name)


def _get_calling_var_args(evaluator, var_args):
old_var_args = None
while var_args != old_var_args:
old_var_args = var_args
for argument in reversed(var_args):
if not isinstance(argument, pr.Statement):
continue
exp_list = argument.expression_list()
if len(exp_list) != 2 or exp_list[0] not in ('*', '**'):
for name, default, stars in reversed(list(var_args.as_tuple())):
if not stars or not isinstance(name, pr.Name):
continue

names = evaluator.goto(argument, [exp_list[1].get_code()])
names = evaluator.goto(name)
if len(names) != 1:
break
param = names[0].get_definition()
@@ -55,41 +208,43 @@ def _get_calling_var_args(evaluator, var_args):
break
# We never want var_args to be a tuple. This should be enough for
# now, we can change it later, if we need to.
if isinstance(param.var_args, pr.Array):
if isinstance(param.var_args, Arguments):
var_args = param.var_args
return var_args
return var_args.argument_node or var_args.trailer


def get_params(evaluator, func, var_args):
result = []
param_names = []
param_dict = {}
for param in func.params:
param_dict[str(param.get_name())] = param
# There may be calls, which don't fit all the params, this just ignores it.
unpacked_va = _unpack_var_args(evaluator, var_args, func)
param_dict[str(param.name)] = param
unpacked_va = list(var_args.unpack(func))
from jedi.evaluate.representation import InstanceElement
if isinstance(func, InstanceElement):
# Include self at this place.
unpacked_va.insert(0, (None, [iterable.AlreadyEvaluated([func.instance])]))
var_arg_iterator = common.PushBackIterator(iter(unpacked_va))

non_matching_keys = []
keys_used = set()
non_matching_keys = defaultdict(lambda: [])
keys_used = {}
keys_only = False
va_values = None
had_multiple_value_error = False
for param in func.params:
# The value and key can both be null. There, the defaults apply.
# args / kwargs will just be empty arrays / dicts, respectively.
# Wrong value count is just ignored. If you try to test cases that are
# not allowed in Python, Jedi will maybe not show any completions.
key, va_values = next(var_arg_iterator, (None, []))
while key:
default = [] if param.default is None else [param.default]
key, va_values = next(var_arg_iterator, (None, default))
while key is not None:
keys_only = True
k = unicode(key)
try:
key_param = param_dict[unicode(key)]
except KeyError:
non_matching_keys.append((key, va_values))
non_matching_keys[key] += va_values
else:
result.append(_gen_param_name_copy(func, var_args, key_param,
values=va_values))
param_names.append(ExecutedParam(key_param, var_args, va_values).name)

if k in keys_used:
had_multiple_value_error = True
@@ -100,70 +255,61 @@ def get_params(evaluator, func, var_args):
analysis.add(evaluator, 'type-error-multiple-values',
calling_va, message=m)
else:
keys_used.add(k)
key, va_values = next(var_arg_iterator, (None, []))
try:
keys_used[k] = param_names[-1]
except IndexError:
# TODO this is wrong stupid and whatever.
pass
key, va_values = next(var_arg_iterator, (None, ()))

keys = []
values = []
array_type = None
has_default_value = False
if param.stars == 1:
# *args param
array_type = pr.Array.TUPLE
lst_values = [va_values]
lst_values = [iterable.MergedNodes(va_values)] if va_values else []
for key, va_values in var_arg_iterator:
# Iterate until a key argument is found.
if key:
var_arg_iterator.push_back((key, va_values))
break
lst_values.append(va_values)
if lst_values[0]:
values = [helpers.stmts_to_stmt(v) for v in lst_values]
if va_values:
lst_values.append(iterable.MergedNodes(va_values))
seq = iterable.FakeSequence(evaluator, lst_values, 'tuple')
values = [iterable.AlreadyEvaluated([seq])]
elif param.stars == 2:
# **kwargs param
array_type = pr.Array.DICT
if non_matching_keys:
keys, values = zip(*non_matching_keys)
values = [helpers.stmts_to_stmt(list(v)) for v in values]
non_matching_keys = []
dct = iterable.FakeDict(evaluator, dict(non_matching_keys))
values = [iterable.AlreadyEvaluated([dct])]
non_matching_keys = {}
else:
# normal param
if va_values:
values = va_values
else:
if param.assignment_details:
# No value: Return the default values.
has_default_value = True
result.append(param.get_name())
# TODO is this allowed? it changes it long time.
param.is_generated = True
else:
# No value: Return an empty container
values = []
if not keys_only and isinstance(var_args, pr.Array):
calling_va = _get_calling_var_args(evaluator, var_args)
if calling_va is not None:
m = _error_argument_count(func, len(unpacked_va))
analysis.add(evaluator, 'type-error-too-few-arguments',
calling_va, message=m)
# No value: Return an empty container
values = []
if not keys_only:
calling_va = var_args.get_calling_var_args()
if calling_va is not None:
m = _error_argument_count(func, len(unpacked_va))
analysis.add(evaluator, 'type-error-too-few-arguments',
calling_va, message=m)

# Now add to result if it's not one of the previously covered cases.
if not has_default_value and (not keys_only or param.stars == 2):
keys_used.add(unicode(param.get_name()))
result.append(_gen_param_name_copy(func, var_args, param,
keys=keys, values=values,
array_type=array_type))
if (not keys_only or param.stars == 2):
param_names.append(ExecutedParam(param, var_args, values).name)
keys_used[unicode(param.name)] = param_names[-1]

if keys_only:
# All arguments should be handed over to the next function. It's not
# about the values inside, it's about the names. Jedi needs to know that
# there's nothing to find for certain names.
for k in set(param_dict) - keys_used:
for k in set(param_dict) - set(keys_used):
param = param_dict[k]
result.append(_gen_param_name_copy(func, var_args, param))
values = [] if param.default is None else [param.default]
param_names.append(ExecutedParam(param, var_args, values).name)

if not (non_matching_keys or had_multiple_value_error
or param.stars or param.assignment_details):
or param.stars or param.default):
# add a warning only if there's not another one.
calling_va = _get_calling_var_args(evaluator, var_args)
if calling_va is not None:
@@ -171,202 +317,83 @@ def get_params(evaluator, func, var_args):
analysis.add(evaluator, 'type-error-too-few-arguments',
calling_va, message=m)

for key, va_values in non_matching_keys:
for key, va_values in non_matching_keys.items():
m = "TypeError: %s() got an unexpected keyword argument '%s'." \
% (func.name, key)
for value in va_values:
analysis.add(evaluator, 'type-error-keyword-argument', value, message=m)
analysis.add(evaluator, 'type-error-keyword-argument', value.parent, message=m)

remaining_params = list(var_arg_iterator)
if remaining_params:
m = _error_argument_count(func, len(unpacked_va))
for p in remaining_params[0][1]:
analysis.add(evaluator, 'type-error-too-many-arguments',
p, message=m)
return result


def _unpack_var_args(evaluator, var_args, func):
"""
Yields a key/value pair, the key is None, if it's not a named arg.
"""
argument_list = []
from jedi.evaluate.representation import InstanceElement
if isinstance(func, InstanceElement):
# Include self at this place.
argument_list.append((None, [helpers.FakeStatement([func.instance])]))

# `var_args` is typically an Array, and not a list.
for stmt in _reorder_var_args(var_args):
if not isinstance(stmt, pr.Statement):
if stmt is None:
argument_list.append((None, []))
# TODO generate warning?
continue
old = stmt
# generate a statement if it's not already one.
stmt = helpers.FakeStatement([old])

expression_list = stmt.expression_list()
if not len(expression_list):
continue
# *args
if expression_list[0] == '*':
arrays = evaluator.eval_expression_list(expression_list[1:])
iterators = [_iterate_star_args(evaluator, a, expression_list[1:], func)
for a in arrays]
for values in list(zip_longest(*iterators)):
argument_list.append((None, [v for v in values if v is not None]))
# **kwargs
elif expression_list[0] == '**':
dct = {}
for array in evaluator.eval_expression_list(expression_list[1:]):
# Merge multiple kwargs dictionaries, if used with dynamic
# parameters.
s = _star_star_dict(evaluator, array, expression_list[1:], func)
for name, (key, value) in s.items():
try:
dct[name][1].add(value)
except KeyError:
dct[name] = key, set([value])

for key, values in dct.values():
# merge **kwargs/*args also for dynamic parameters
for i, p in enumerate(func.params):
if str(p.get_name()) == str(key) and not p.stars:
try:
k, vs = argument_list[i]
except IndexError:
pass
else:
if k is None: # k would imply a named argument
# Don't merge if they originate at the same
# place. -> type-error-multiple-values
if [v.parent for v in values] != [v.parent for v in vs]:
vs.extend(values)
break
# Just report an error for the first param that is not needed (like
# cPython).
first_key, first_values = remaining_params[0]
for v in first_values:
if first_key is not None:
# Is a keyword argument, return the whole thing instead of just
# the value node.
v = v.parent
try:
non_kw_param = keys_used[first_key]
except KeyError:
pass
else:
# default is to merge
argument_list.append((key, values))
# Normal arguments (including key arguments).
else:
if stmt.assignment_details:
key_arr, op = stmt.assignment_details[0]
# Filter error tokens
key_arr = [x for x in key_arr if isinstance(x, pr.Call)]
# named parameter
if key_arr and isinstance(key_arr[0], pr.Call):
argument_list.append((key_arr[0].name, [stmt]))
else:
argument_list.append((None, [stmt]))
return argument_list
origin_args = non_kw_param.parent.var_args.argument_node
# TODO calculate the var_args tree and check if it's in
# the tree (if not continue).
# print('\t\tnonkw', non_kw_param.parent.var_args.argument_node, )
if origin_args not in [f.parent.parent for f in first_values]:
continue
analysis.add(evaluator, 'type-error-too-many-arguments',
v, message=m)
return param_names

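Stripped of Jedi's node types, the matching loop in `get_params` is ordinary call-binding: walk the unpacked (key, values) stream, bind positionals to free parameters, reject duplicate keywords, and collect stray keywords for a potential `**kwargs`. A compact sketch (hypothetical helper, not the function above; no *args handling):

def bind(param_names, unpacked):
    # param_names: declared parameter names, in order.
    # unpacked: (key, value) pairs as produced by Arguments.unpack();
    # key is None for positional arguments.
    bound, non_matching = {}, {}
    for key, value in unpacked:
        if key is None:
            name = next((n for n in param_names if n not in bound), None)
            if name is not None:
                bound[name] = value
        elif key in param_names:
            if key in bound:
                raise TypeError('got multiple values for %r' % key)
            bound[key] = value
        else:
            non_matching[key] = value   # would feed **kwargs, if declared
    return bound, non_matching

assert bind(['a', 'b'], [(None, 1), ('b', 2)]) == ({'a': 1, 'b': 2}, {})
assert bind(['a'], [(None, 1), ('x', 9)]) == ({'a': 1}, {'x': 9})
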
def _reorder_var_args(var_args):
"""
Reordering var_args is necessary, because star args sometimes appear after
named argument, but in the actual order it's prepended.
"""
named_index = None
new_args = []
for i, stmt in enumerate(var_args):
if isinstance(stmt, pr.Statement):
if named_index is None and stmt.assignment_details:
named_index = i

if named_index is not None:
expression_list = stmt.expression_list()
if expression_list and expression_list[0] == '*':
new_args.insert(named_index, stmt)
named_index += 1
continue

new_args.append(stmt)
return new_args


def _iterate_star_args(evaluator, array, expression_list, func):
def _iterate_star_args(evaluator, array, input_node, func=None):
from jedi.evaluate.representation import Instance
if isinstance(array, iterable.Array):
for field_stmt in array: # yield from plz!
yield field_stmt
elif isinstance(array, iterable.Generator):
for field_stmt in array.iter_content():
yield helpers.FakeStatement([field_stmt])
yield iterable.AlreadyEvaluated([field_stmt])
elif isinstance(array, Instance) and array.name.get_code() == 'tuple':
pass
debug.warning('Ignored a tuple *args input %s' % array)
else:
if expression_list:
if func is not None:
m = "TypeError: %s() argument after * must be a sequence, not %s" \
% (func.name.get_code(), array)
analysis.add(evaluator, 'type-error-star',
expression_list[0], message=m)
% (func.name.value, array)
analysis.add(evaluator, 'type-error-star', input_node, message=m)


def _star_star_dict(evaluator, array, expression_list, func):
dct = {}
def _star_star_dict(evaluator, array, input_node, func):
dct = defaultdict(lambda: [])
from jedi.evaluate.representation import Instance
if isinstance(array, Instance) and array.name.get_code() == 'dict':
# For now ignore this case. In the future add proper iterators and just
# make one call without crazy isinstance checks.
return {}

if isinstance(array, iterable.Array) and array.type == pr.Array.DICT:
for key_stmt, value_stmt in array.items():
# The first index is the key, if syntactically correct.
call = key_stmt.expression_list()[0]
if isinstance(call, pr.Name):
key = call
elif isinstance(call, pr.Call):
key = call.name
else:
continue # We ignore complicated statements here, for now.
if isinstance(array, iterable.FakeDict):
return array._dct
elif isinstance(array, iterable.Array) and array.type == 'dict':
# TODO bad call to non-public API
for key_node, values in array._items():
for key in evaluator.eval_element(key_node):
if precedence.is_string(key):
dct[key.obj] += values

# If the string is a duplicate, we don't care it's illegal Python
# anyway.
dct[str(key)] = key, value_stmt
else:
if expression_list:
if func is not None:
m = "TypeError: %s argument after ** must be a mapping, not %s" \
% (func.name.get_code(), array)
analysis.add(evaluator, 'type-error-star-star',
expression_list[0], message=m)
return dct


def _gen_param_name_copy(func, var_args, param, keys=(), values=(), array_type=None):
"""
Create a param with the original scope (of varargs) as parent.
"""
if isinstance(var_args, pr.Array):
parent = var_args.parent
start_pos = var_args.start_pos
else:
parent = func
start_pos = 0, 0

new_param = ExecutedParam.from_param(param, parent, var_args)

# create an Array (-> needed for *args/**kwargs tuples/dicts)
arr = pr.Array(helpers.FakeSubModule, start_pos, array_type, parent)
arr.values = list(values) # Arrays only work with list.
key_stmts = []
for key in keys:
key_stmts.append(helpers.FakeStatement([key], start_pos))
arr.keys = key_stmts
arr.type = array_type

new_param.set_expression_list([arr])

name = copy.copy(param.get_name())
name.names = [copy.copy(name.names[0])]
name.names[0].parent = name
name.parent = new_param
return name
% (func.name.value, array)
analysis.add(evaluator, 'type-error-star-star', input_node, message=m)
return dict(dct)

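The new `_star_star_dict` only keeps `**` keys that evaluate to plain strings, merging every candidate value under that key. The same idea with ordinary dicts (a hypothetical stand-in for evaluated key/value nodes):

from collections import defaultdict

def star_star_dict(candidate_items):
    # candidate_items: (key, values) pairs where key may be any evaluated
    # object; only string keys can match a parameter name.
    dct = defaultdict(list)
    for key, values in candidate_items:
        if isinstance(key, str):
            dct[key] += values
    return dict(dct)

assert star_star_dict([('a', [1]), (2, [9]), ('a', [3])]) == {'a': [1, 3]}
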
def _error_argument_count(func, actual_count):
default_arguments = sum(1 for p in func.params if p.assignment_details or p.stars)
default_arguments = sum(1 for p in func.params if p.default or p.stars)

if default_arguments == 0:
before = 'exactly '

@@ -4,9 +4,8 @@ Handles operator precedence.
import operator

from jedi._compatibility import unicode
from jedi.parser import representation as pr
from jedi.parser import tree as pr
from jedi import debug
from jedi.common import PushBackIterator
from jedi.evaluate.compiled import (CompiledObject, create, builtin,
keyword_from_value, true_obj, false_obj)
from jedi.evaluate import analysis
@@ -24,188 +23,6 @@ COMPARISON_OPERATORS = {
}


class PythonGrammar(object):
"""
Some kind of mirror of http://docs.python.org/3/reference/grammar.html.
"""

class MultiPart(str):
def __new__(cls, first, second):
self = str.__new__(cls, first)
self.second = second
return self

def __str__(self):
return str.__str__(self) + ' ' + self.second

FACTOR = '+', '-', '~'
POWER = '**',
TERM = '*', '/', '%', '//'
ARITH_EXPR = '+', '-'

SHIFT_EXPR = '<<', '>>'
AND_EXPR = '&',
XOR_EXPR = '^',
EXPR = '|',

COMPARISON = ('<', '>', '==', '>=', '<=', '!=', 'in',
MultiPart('not', 'in'), MultiPart('is', 'not'), 'is')

NOT_TEST = 'not',
AND_TEST = 'and',
OR_TEST = 'or',

#TEST = or_test ['if' or_test 'else' test] | lambdef

TERNARY = 'if',
SLICE = ':',

ORDER = (POWER, TERM, ARITH_EXPR, SHIFT_EXPR, AND_EXPR, XOR_EXPR,
EXPR, COMPARISON, AND_TEST, OR_TEST, TERNARY, SLICE)

FACTOR_PRIORITY = 0 # highest priority
LOWEST_PRIORITY = len(ORDER)
NOT_TEST_PRIORITY = LOWEST_PRIORITY - 4 # priority only lower for `and`/`or`
SLICE_PRIORITY = LOWEST_PRIORITY - 1 # priority only lower for `and`/`or`

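The ORDER tuple (removed by this commit) doubles as a priority scale: an operator's binding strength is its index, and `_check_operator` only folds operators whose index is below the priority bound it was called with. A small lookup sketch over a simplified table (plain strings, fewer levels than the original):

ORDER = (('**',), ('*', '/', '%', '//'), ('+', '-'),
         ('<<', '>>'), ('&',), ('^',), ('|',))

def priority_of(op):
    # lower index = binds tighter, mirroring PythonGrammar.ORDER
    for prio, ops in enumerate(ORDER):
        if op in ops:
            return prio
    raise ValueError(op)

assert priority_of('**') < priority_of('*') < priority_of('+')
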
class Precedence(object):
def __init__(self, left, operator, right):
self.left = left
self.operator = operator
self.right = right

def parse_tree(self, strip_literals=False):
def process(which):
try:
which = which.parse_tree(strip_literals)
except AttributeError:
pass
if strip_literals and isinstance(which, pr.Literal):
which = which.value
return which

return (process(self.left), self.operator.string, process(self.right))

def __repr__(self):
return '(%s %s %s)' % (self.left, self.operator, self.right)


class TernaryPrecedence(Precedence):
def __init__(self, left, operator, right, check):
super(TernaryPrecedence, self).__init__(left, operator, right)
self.check = check


def create_precedence(expression_list):
iterator = PushBackIterator(iter(expression_list))
return _check_operator(iterator)


def _syntax_error(element, msg='SyntaxError in precedence'):
debug.warning('%s: %s, %s' % (msg, element, element.start_pos))


def _get_number(iterator, priority=PythonGrammar.LOWEST_PRIORITY):
el = next(iterator)
if isinstance(el, pr.Operator):
if el in PythonGrammar.FACTOR:
right = _get_number(iterator, PythonGrammar.FACTOR_PRIORITY)
elif el in PythonGrammar.NOT_TEST \
and priority >= PythonGrammar.NOT_TEST_PRIORITY:
right = _get_number(iterator, PythonGrammar.NOT_TEST_PRIORITY)
elif el in PythonGrammar.SLICE \
and priority >= PythonGrammar.SLICE_PRIORITY:
iterator.push_back(el)
return None
else:
_syntax_error(el)
return _get_number(iterator, priority)
return Precedence(None, el, right)
elif isinstance(el, pr.tokenize.Token):
return _get_number(iterator, priority)
else:
return el


class MergedOperator(pr.Operator):
"""
A way to merge the two operators `is not` and `not in`, which are two
words instead of one.
Maybe there's a better way (directly in the tokenizer/parser?), but for
now this is fine.
"""
def __init__(self, first, second):
string = first.string + ' ' + second.string
super(MergedOperator, self).__init__(first._sub_module, string,
first.parent, first.start_pos)
self.first = first
self.second = second


def _check_operator(iterator, priority=PythonGrammar.LOWEST_PRIORITY):
try:
left = _get_number(iterator, priority)
except StopIteration:
return None

for el in iterator:
if not isinstance(el, pr.Operator):
_syntax_error(el)
continue

operator = None
for check_prio, check in enumerate(PythonGrammar.ORDER):
if check_prio >= priority:
# respect priorities.
iterator.push_back(el)
return left

try:
match_index = check.index(el)
except ValueError:
continue

match = check[match_index]
if isinstance(match, PythonGrammar.MultiPart):
next_tok = next(iterator)
if next_tok == match.second:
el = MergedOperator(el, next_tok)
else:
iterator.push_back(next_tok)
if el == 'not':
continue

operator = el
break

if operator is None:
_syntax_error(el)
continue

if operator in PythonGrammar.POWER:
check_prio += 1 # `**` is right-associative
elif operator in PythonGrammar.TERNARY:
try:
middle = []
for each in iterator:
if each == 'else':
break
middle.append(each)
middle = create_precedence(middle)
except StopIteration:
_syntax_error(operator, 'SyntaxError ternary incomplete')
right = _check_operator(iterator, check_prio)
if right is None and not operator in PythonGrammar.SLICE:
_syntax_error(iterator.current, 'SyntaxError operand missing')
else:
if operator in PythonGrammar.TERNARY:
left = TernaryPrecedence(left, operator, right, middle)
else:
left = Precedence(left, operator, right)
return left

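`_check_operator` is a precedence-climbing parser: it folds a flat token stream into a left-leaning tree, recursing with a tighter priority bound for each operator's right-hand side. A self-contained numeric version of the same loop (evaluating directly instead of building Precedence nodes):

PRIORITY = {'+': 2, '-': 2, '*': 1, '/': 1, '**': 0}

def climb(tokens, pos=0, priority=3):
    left, pos = tokens[pos], pos + 1
    while pos < len(tokens):
        op = tokens[pos]
        prio = PRIORITY[op]
        if prio >= priority:
            break                      # caller binds tighter; hand back
        # '**' is right-associative, so allow equal priority on the right
        right, pos = climb(tokens, pos + 1,
                           prio + 1 if op == '**' else prio)
        left = {'+': left + right, '-': left - right,
                '*': left * right, '/': left / right,
                '**': left ** right}[op]
    return left, pos

assert climb([1, '+', 2, '*', 3])[0] == 7
assert climb([2, '**', 3, '**', 2])[0] == 512
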
def _literals_to_types(evaluator, result):
# Changes literals ('a', 1, 1.0, etc) to its type instances (str(),
# int(), float(), etc).
@@ -218,72 +35,68 @@ def _literals_to_types(evaluator, result):
return list(set(result))


def process_precedence_element(evaluator, precedence):
if precedence is None:
return None
else:
if isinstance(precedence, Precedence):
left_objs = process_precedence_element(evaluator, precedence.left)
operator = precedence.operator
lazy_right = lambda: process_precedence_element(evaluator, precedence.right)
# handle lazy evaluation of and/or here.
if operator in ('and', 'or'):
left_bools = set([left.py__bool__() for left in left_objs])
if left_bools == set([True]):
if operator == 'and':
return lazy_right()
else:
return left_objs
elif left_bools == set([False]):
if operator == 'and':
return left_objs
else:
return lazy_right()
# Otherwise continue, because of uncertainty.
return calculate(evaluator, left_objs, precedence.operator,
lazy_right())
def calculate_children(evaluator, children):
"""
Calculate a list of children with operators.
"""
iterator = iter(children)
types = evaluator.eval_element(next(iterator))
for operator in iterator:
right = next(iterator)
if pr.is_node(operator, 'comp_op'): # not in / is not
operator = ' '.join(str(c.value) for c in operator.children)

# handle lazy evaluation of and/or here.
if operator in ('and', 'or'):
left_bools = set([left.py__bool__() for left in types])
if left_bools == set([True]):
if operator == 'and':
types = evaluator.eval_element(right)
elif left_bools == set([False]):
if operator != 'and':
types = evaluator.eval_element(right)
# Otherwise continue, because of uncertainty.
else:
# normal element, no operators
return evaluator.eval_statement_element(precedence)
types = calculate(evaluator, types, operator,
evaluator.eval_element(right))
debug.dbg('calculate_children types %s', types)
return types

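Because an inferred object can be definitely truthy, definitely falsy, or unknown, `calculate_children` short-circuits `and`/`or` only when every left-hand type agrees. A sketch of that tri-state decision over plain bools (on uncertainty the real code falls through to `calculate`; here both sides are simply kept):

def short_circuit(left_bools, operator, left, right):
    # left_bools: set of py__bool__() results for all left-hand types.
    if left_bools == {True}:
        return right if operator == 'and' else left
    if left_bools == {False}:
        return left if operator == 'and' else right
    return left + right   # uncertain: keep both possibilities

assert short_circuit({True}, 'and', ['L'], ['R']) == ['R']
assert short_circuit({False}, 'or', ['L'], ['R']) == ['R']
assert short_circuit({True, False}, 'and', ['L'], ['R']) == ['L', 'R']
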
def calculate(evaluator, left_result, operator, right_result):
result = []
if left_result is None and right_result:
# Cases like `-1`, `1 + ~1` or `not X`.
for right in right_result:
obj = _factor_calculate(evaluator, operator, right)
if obj is not None:
result.append(obj)
return result
if not left_result or not right_result:
# illegal slices e.g. cause left/right_result to be None
result = (left_result or []) + (right_result or [])
result = _literals_to_types(evaluator, result)
else:
if not left_result or not right_result:
# illegal slices e.g. cause left/right_result to be None
result = (left_result or []) + (right_result or [])
result = _literals_to_types(evaluator, result)
# I don't think there's a reasonable chance that a string
# operation is still correct, once we pass something like six
# objects.
if len(left_result) * len(right_result) > 6:
result = _literals_to_types(evaluator, left_result + right_result)
else:
# I don't think there's a reasonable chance that a string
# operation is still correct, once we pass something like six
# objects.
if len(left_result) * len(right_result) > 6:
result = _literals_to_types(evaluator, left_result + right_result)
else:
for left in left_result:
for right in right_result:
result += _element_calculate(evaluator, left, operator, right)
for left in left_result:
for right in right_result:
result += _element_calculate(evaluator, left, operator, right)
return result


def _factor_calculate(evaluator, operator, right):
if _is_number(right):
def factor_calculate(evaluator, types, operator):
"""
Calculates `+`, `-`, `~` and `not` prefixes.
"""
for typ in types:
if operator == '-':
return create(evaluator, -right.obj)
if operator == 'not':
value = right.py__bool__()
if value is None: # Uncertainty.
return None
return keyword_from_value(not value)
return right
if _is_number(typ):
yield create(evaluator, -typ.obj)
elif operator == 'not':
value = typ.py__bool__()
if value is None: # Uncertainty.
return
yield keyword_from_value(not value)
else:
yield typ


def _is_number(obj):
@@ -302,12 +115,12 @@ def is_literal(obj):

def _is_tuple(obj):
from jedi.evaluate import iterable
return isinstance(obj, iterable.Array) and obj.type == pr.Array.TUPLE
return isinstance(obj, iterable.Array) and obj.type == 'tuple'


def _is_list(obj):
from jedi.evaluate import iterable
return isinstance(obj, iterable.Array) and obj.type == pr.Array.LIST
return isinstance(obj, iterable.Array) and obj.type == 'list'


def _element_calculate(evaluator, left, operator, right):
@@ -344,6 +157,8 @@ def _element_calculate(evaluator, left, operator, right):
except TypeError:
# Could be True or False.
return [true_obj, false_obj]
elif operator == 'in':
return []

def check(obj):
"""Checks if a Jedi object is either a float or an int."""

@@ -7,7 +7,7 @@ Next to :mod:`jedi.evaluate.cache` this module also makes
thread-safe. Why? ``execution_recursion_decorator`` uses class variables to
count the function calls.
"""
from jedi.parser import representation as pr
from jedi.parser import tree as pr
from jedi import debug
from jedi import settings
from jedi.evaluate import compiled
@@ -88,24 +88,18 @@ class _RecursionNode(object):
if not other:
return None

# List Comprehensions start on the same line as its statement.
# Therefore we have the unfortunate situation of the same start_pos for
# two statements.
is_list_comp = lambda x: isinstance(x, pr.ListComprehension)
return self.script == other.script \
and self.position == other.position \
and not is_list_comp(self.stmt.parent) \
and not is_list_comp(other.parent) \
and not self.is_ignored and not other.is_ignored


def execution_recursion_decorator(func):
def run(execution):
def run(execution, **kwargs):
detector = execution._evaluator.execution_recursion_detector
if detector.push_execution(execution):
result = []
else:
result = func(execution)
result = func(execution, **kwargs)
detector.pop_execution()
return result

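The decorator wraps each execution in a push/pop pair on a detector, so a recursive execution returns an empty result instead of looping forever. The same pattern in miniature, with a plain set standing in for the detector:

import functools

_active = set()

def recursion_guard(func):
    @functools.wraps(func)
    def run(key, **kwargs):
        if key in _active:         # already executing: cut the cycle
            return []
        _active.add(key)           # push_execution
        try:
            return func(key, **kwargs)
        finally:
            _active.discard(key)   # pop_execution
    return run

@recursion_guard
def evaluate(key):
    # Without the guard this would recurse forever.
    return evaluate(key) + [key]

assert evaluate('f') == ['f']
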
@@ -1,5 +1,5 @@
|
||||
"""
|
||||
Like described in the :mod:`jedi.parser.representation` module,
|
||||
Like described in the :mod:`jedi.parser.tree` module,
|
||||
there's a need for an ast like module to represent the states of parsed
|
||||
modules.
|
||||
|
||||
@@ -22,21 +22,22 @@ py__bool__() Returns True/False/None; None means that
|
||||
there's no certainty.
|
||||
py__bases__(evaluator) Returns a list of base classes.
|
||||
py__mro__(evaluator) Returns a list of classes (the mro).
|
||||
py__getattribute__(evaluator, name) Returns a list of attribute values. The
|
||||
name can be str or Name.
|
||||
====================================== ========================================
|
||||
|
||||
__
|
||||
"""
|
||||
import copy
|
||||
import os
|
||||
import pkgutil
|
||||
from itertools import chain
|
||||
|
||||
from jedi._compatibility import use_metaclass, unicode, Python3Method
|
||||
from jedi.parser import representation as pr
|
||||
from jedi.parser.tokenize import Token
|
||||
from jedi.parser import tree as pr
|
||||
from jedi import debug
|
||||
from jedi import common
|
||||
from jedi.cache import underscore_memoization
|
||||
from jedi.evaluate.cache import memoize_default, CachedMetaClass
|
||||
from jedi.cache import underscore_memoization, cache_star_import
|
||||
from jedi.evaluate.cache import memoize_default, CachedMetaClass, NO_DEFAULT
|
||||
from jedi.evaluate import compiled
|
||||
from jedi.evaluate import recursion
|
||||
from jedi.evaluate import iterable
|
||||
@@ -44,13 +45,17 @@ from jedi.evaluate import docstrings
|
||||
from jedi.evaluate import helpers
|
||||
from jedi.evaluate import param
|
||||
from jedi.evaluate import flow_analysis
|
||||
from jedi.evaluate import imports
|
||||
|
||||
|
||||
def wrap(evaluator, element):
|
||||
if isinstance(element, pr.Class):
|
||||
return Class(evaluator, element)
|
||||
elif isinstance(element, pr.Function):
|
||||
return Function(evaluator, element)
|
||||
if isinstance(element, pr.Lambda):
|
||||
return LambdaWrapper(evaluator, element)
|
||||
else:
|
||||
return Function(evaluator, element)
|
||||
elif isinstance(element, (pr.Module)) \
|
||||
and not isinstance(element, ModuleWrapper):
|
||||
return ModuleWrapper(evaluator, element)
|
||||
@@ -61,7 +66,7 @@ def wrap(evaluator, element):
|
||||
class Executed(pr.Base):
|
||||
"""
|
||||
An instance is also an executable - because __init__ is called
|
||||
:param var_args: The param input array, consist of `pr.Array` or list.
|
||||
:param var_args: The param input array, consist of a parser node or a list.
|
||||
"""
|
||||
def __init__(self, evaluator, base, var_args=()):
|
||||
self._evaluator = evaluator
|
||||
@@ -72,7 +77,7 @@ class Executed(pr.Base):
         return True

     def get_parent_until(self, *args, **kwargs):
-        return self.base.get_parent_until(*args, **kwargs)
+        return pr.Base.get_parent_until(self, *args, **kwargs)

     @common.safe_property
     def parent(self):

@@ -83,20 +88,26 @@ class Instance(use_metaclass(CachedMetaClass, Executed)):
     """
     This class is used to evaluate instances.
     """
-    def __init__(self, evaluator, base, var_args=()):
+    def __init__(self, evaluator, base, var_args, is_generated=False):
         super(Instance, self).__init__(evaluator, base, var_args)
         self.decorates = None
+        # Generated instances are classes that are just generated by self
+        # (No var_args) used.
+        self.is_generated = is_generated

         if base.name.get_code() in ['list', 'set'] \
                 and compiled.builtin == base.get_parent_until():
             # compare the module path with the builtin name.
             self.var_args = iterable.check_array_instances(evaluator, self)
-        else:
+        elif not is_generated:
             # Need to execute the __init__ function, because the dynamic param
             # searching needs it.
-            with common.ignored(KeyError):
-                self.execute_subscope_by_name('__init__', self.var_args)
-        # Generated instances are classes that are just generated by self
-        # (No var_args) used.
-        self.is_generated = False
+            try:
+                method = self.get_subscope_by_name('__init__')
+            except KeyError:
+                pass
+            else:
+                evaluator.execute(method, self.var_args)

     @property
     def py__call__(self):

@@ -129,23 +140,12 @@ class Instance(use_metaclass(CachedMetaClass, Executed)):
         normally self.
         """
         try:
-            return str(func.params[0].get_name())
+            return str(func.params[0].name)
         except IndexError:
             return None

-    @memoize_default([])
-    def get_self_attributes(self):
-        def add_self_dot_name(name):
-            """
-            Need to copy and rewrite the name, because names are now
-            ``instance_usage.variable`` instead of ``self.variable``.
-            """
-            n = copy.copy(name)
-            n.names = n.names[1:]
-            n._get_code = unicode(n.names[-1])
-            names.append(get_instance_el(self._evaluator, self, n))
-
-        names = []
+    def _self_names_dict(self, add_mro=True):
+        names = {}
         # This loop adds the names of the self object, copies them and removes
         # the self.
         for sub in self.base.subscopes:

@@ -153,58 +153,62 @@ class Instance(use_metaclass(CachedMetaClass, Executed)):
                 continue
             # Get the self name, if there's one.
             self_name = self._get_func_self_name(sub)
-            if not self_name:
+            if self_name is None:
                 continue

-            if sub.name.get_code() == '__init__':
+            if sub.name.value == '__init__' and not self.is_generated:
                 # ``__init__`` is special because the params need to be injected
                 # this way. Therefore an execution is necessary.
-                if not sub.decorators:
+                if not sub.get_decorators():
                     # __init__ decorators should generally just be ignored,
                     # because to follow them and their self variables is too
                     # complicated.
                     sub = self._get_method_execution(sub)
-            for n in sub.get_defined_names():
-                # Only names with the selfname are being added.
-                # It is also important, that they have a len() of 2,
-                # because otherwise, they are just something else
-                if unicode(n.names[0]) == self_name and len(n.names) == 2:
-                    add_self_dot_name(n)
-
-        for s in self.base.py__bases__(self._evaluator):
-            if not isinstance(s, compiled.CompiledObject):
-                for inst in self._evaluator.execute(s):
-                    names += inst.get_self_attributes()
+            for name_list in sub.names_dict.values():
+                for name in name_list:
+                    if name.value == self_name and name.prev_sibling() is None:
+                        trailer = name.next_sibling()
+                        if pr.is_node(trailer, 'trailer') \
+                                and len(trailer.children) == 2 \
+                                and trailer.children[0] == '.':
+                            name = trailer.children[1]  # After dot.
+                            if name.is_definition():
+                                arr = names.setdefault(name.value, [])
+                                arr.append(get_instance_el(self._evaluator, self, name))
         return names

     def get_subscope_by_name(self, name):
         sub = self.base.get_subscope_by_name(name)
         return get_instance_el(self._evaluator, self, sub, True)

-    def execute_subscope_by_name(self, name, args=()):
+    def execute_subscope_by_name(self, name, *args):
         method = self.get_subscope_by_name(name)
-        return self._evaluator.execute(method, args)
+        return self._evaluator.execute_evaluated(method, *args)

-    def get_descriptor_return(self, obj):
+    def get_descriptor_returns(self, obj):
         """ Throws a KeyError if there's no method. """
         # Arguments in __get__ descriptors are obj, class.
         # `method` is the new parent of the array, don't know if that's good.
         args = [obj, obj.base] if isinstance(obj, Instance) else [compiled.none_obj, obj]
-        return self.execute_subscope_by_name('__get__', args)
+        try:
+            return self.execute_subscope_by_name('__get__', *args)
+        except KeyError:
+            return [self]
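
`get_descriptor_returns` models Python's descriptor protocol: when an attribute found on a class defines `__get__`, attribute access is routed through it with the instance (or a none-like object) and the owner. For reference, a runnable plain-Python picture of the behaviour being modelled:

    # Plain-Python illustration of the descriptor protocol modelled above.
    class Ten(object):
        def __get__(self, obj, objtype=None):
            # obj is the instance (None for class access), objtype the class.
            return 10

    class A(object):
        x = Ten()

    assert A().x == 10   # instance access: Ten.__get__(instance, A)
    assert A.x == 10     # class access: Ten.__get__(None, A)
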

-    def scope_names_generator(self, position=None):
-        """
-        An Instance has two scopes: The scope with self names and the class
-        scope. Instance variables have priority over the class scope.
-        """
-        yield self, self.get_self_attributes()
+    @memoize_default()
+    def names_dicts(self, search_global):
+        yield self._self_names_dict()

-        for scope, names in self.base.scope_names_generator(add_class_vars=False):
-            yield self, [get_instance_el(self._evaluator, self, var, True)
-                         for var in names]
+        for s in self.base.py__mro__(self._evaluator)[1:]:
+            if not isinstance(s, compiled.CompiledObject):
+                # Compiled objects don't have `self.` names.
+                for inst in self._evaluator.execute(s):
+                    yield inst._self_names_dict(add_mro=False)

-    def get_index_types(self, index_array):
+        for names_dict in self.base.names_dicts(search_global=False, is_instance=True):
+            yield LazyInstanceDict(self._evaluator, self, names_dict)

+    def get_index_types(self, evaluator, index_array):
         indexes = iterable.create_indexes_or_slices(self._evaluator, index_array)
         if any([isinstance(i, iterable.Slice) for i in indexes]):
             # Slice support in Jedi is very marginal, at the moment, so just

@@ -212,12 +216,13 @@ class Instance(use_metaclass(CachedMetaClass, Executed)):
             # TODO support slices in a more general way.
             indexes = []

-        index = helpers.FakeStatement(indexes, parent=compiled.builtin)
         try:
-            return self.execute_subscope_by_name('__getitem__', [index])
+            method = self.get_subscope_by_name('__getitem__')
         except KeyError:
             debug.warning('No __getitem__, cannot access the array.')
             return []
+        else:
+            return self._evaluator.execute(method, [iterable.AlreadyEvaluated(indexes)])
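
`get_index_types` mirrors what subscription does at runtime: `obj[index]` is dispatched to `__getitem__` on the object's type, and without one, subscription simply fails (here Jedi just warns and returns no types). The runtime behaviour, in isolation:

    # Plain-Python counterpart of the __getitem__ lookup performed above.
    class Squares(object):
        def __getitem__(self, index):
            return index ** 2

    s = Squares()
    assert s[4] == 16          # dispatches to type(s).__getitem__
    try:
        object()[0]            # no __getitem__ -> subscription fails
    except TypeError as e:
        print('no __getitem__:', e)
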

     @property
     @underscore_memoization

@@ -226,15 +231,44 @@ class Instance(use_metaclass(CachedMetaClass, Executed)):
         return helpers.FakeName(unicode(name), self, name.start_pos)

     def __getattr__(self, name):
-        if name not in ['start_pos', 'end_pos', 'get_imports',
-                        'doc', 'raw_doc', 'asserts']:
+        if name not in ['start_pos', 'end_pos', 'get_imports', 'type',
+                        'doc', 'raw_doc']:
             raise AttributeError("Instance %s: Don't touch this (%s)!"
                                  % (self, name))
         return getattr(self.base, name)

     def __repr__(self):
-        return "<e%s of %s (var_args: %s)>" % \
-            (type(self).__name__, self.base, len(self.var_args or []))
+        dec = ''
+        if self.decorates is not None:
+            dec = " decorates " + repr(self.decorates)
+        return "<e%s of %s(%s)%s>" % (type(self).__name__, self.base,
+                                      self.var_args, dec)


+class LazyInstanceDict(object):
+    def __init__(self, evaluator, instance, dct):
+        self._evaluator = evaluator
+        self._instance = instance
+        self._dct = dct

+    def __getitem__(self, name):
+        return [get_instance_el(self._evaluator, self._instance, var, True)
+                for var in self._dct[name]]

+    def values(self):
+        return [self[key] for key in self._dct]


+class InstanceName(pr.Name):
+    def __init__(self, origin_name, parent):
+        super(InstanceName, self).__init__(pr.zero_position_modifier,
+                                           origin_name.value,
+                                           origin_name.start_pos)
+        self._origin_name = origin_name
+        self.parent = parent

+    def is_definition(self):
+        return self._origin_name.is_definition()


 def get_instance_el(evaluator, instance, var, is_class_var=False):

@@ -242,8 +276,11 @@ def get_instance_el(evaluator, instance, var, is_class_var=False):
     Returns an InstanceElement if it makes sense, otherwise leaves the object
     untouched.
     """
-    if isinstance(var, (Instance, compiled.CompiledObject, pr.Operator, Token,
-                        pr.Module, FunctionExecution)):
+    if isinstance(var, pr.Name):
+        parent = get_instance_el(evaluator, instance, var.parent, is_class_var)
+        return InstanceName(var, parent)
+    elif isinstance(var, (Instance, compiled.CompiledObject, pr.Leaf,
+                          pr.Module, FunctionExecution)):
         return var

     var = wrap(evaluator, var)

@@ -275,7 +312,7 @@ class InstanceElement(use_metaclass(CachedMetaClass, pr.Base)):
         return par

     def get_parent_until(self, *args, **kwargs):
-        return pr.Simple.get_parent_until(self, *args, **kwargs)
+        return pr.BaseNode.get_parent_until(self, *args, **kwargs)

     def get_definition(self):
         return self.get_parent_until((pr.ExprStmt, pr.IsScope, pr.Import))

@@ -286,19 +323,21 @@ class InstanceElement(use_metaclass(CachedMetaClass, pr.Base)):
             func = get_instance_el(self._evaluator, self.instance, func)
         return func

-    def expression_list(self):
+    def get_rhs(self):
+        return get_instance_el(self._evaluator, self.instance,
+                               self.var.get_rhs(), self.is_class_var)

+    def is_definition(self):
+        return self.var.is_definition()

+    @property
+    def children(self):
         # Copy and modify the array.
         return [get_instance_el(self._evaluator, self.instance, command, self.is_class_var)
-                for command in self.var.expression_list()]
+                for command in self.var.children]

-    @property
-    @underscore_memoization
-    def names(self):
-        return [pr.NamePart(helpers.FakeSubModule, unicode(n), self, n.start_pos)
-                for n in self.var.names]

     @property
-    @underscore_memoization
+    @memoize_default()
     def name(self):
         name = self.var.name
         return helpers.FakeName(unicode(name), self, name.start_pos)

@@ -366,68 +405,70 @@ class Class(use_metaclass(CachedMetaClass, Wrapper)):
         for cls in self.py__bases__(self._evaluator):
             # TODO detect for TypeError: duplicate base class str,
             # e.g. `class X(str, str): pass`
-            add(cls)
-            for cls_new in cls.py__mro__(evaluator):
-                add(cls_new)
+            try:
+                mro_method = cls.py__mro__
+            except AttributeError:
+                # TODO add a TypeError like:
+                """
+                >>> class Y(lambda: test): pass
+                Traceback (most recent call last):
+                  File "<stdin>", line 1, in <module>
+                TypeError: function() argument 1 must be code, not str
+                >>> class Y(1): pass
+                Traceback (most recent call last):
+                  File "<stdin>", line 1, in <module>
+                TypeError: int() takes at most 2 arguments (3 given)
+                """
+                pass
+            else:
+                add(cls)
+                for cls_new in mro_method(evaluator):
+                    add(cls_new)
         return tuple(mro)
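
`py__mro__` approximates Python's own method resolution order while tolerating bases that are not classes at all (which CPython rejects with a TypeError at class-creation time, as the doctest-style comment above sketches). The runtime behaviour it approximates:

    # What py__mro__ approximates: Python's own method resolution order.
    class A(object):
        pass

    class B(A):
        pass

    class C(B, A):
        pass

    print([cls.__name__ for cls in C.__mro__])  # ['C', 'B', 'A', 'object']

    try:
        class Y(1):            # a non-class base
            pass
    except TypeError as e:
        print('rejected:', e)
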

     @memoize_default(default=())
     def py__bases__(self, evaluator):
-        supers = []
-        for s in self.base.supers:
-            # Super classes are statements.
-            for cls in self._evaluator.eval_statement(s):
-                if not isinstance(cls, (Class, compiled.CompiledObject)):
-                    debug.warning('Received non class as a super class.')
-                    continue  # Just ignore other stuff (user input error).
-                supers.append(cls)

-        if not supers:
-            # Add `object` to classes (implicit in Python 3.)
-            supers.append(compiled.object_obj)
-        return supers
+        arglist = self.base.get_super_arglist()
+        if arglist:
+            args = param.Arguments(self._evaluator, arglist)
+            return list(chain.from_iterable(args.eval_args()))
+        else:
+            return [compiled.object_obj]

     def py__call__(self, evaluator, params):
         return [Instance(evaluator, self, params)]

-    def scope_names_generator(self, position=None, add_class_vars=True):
-        def in_iterable(name, iterable):
-            """ checks if the name is in the variable 'iterable'. """
-            for i in iterable:
-                # Only the last name is important, because these names have a
-                # maximal length of 2, with the first one being `self`.
-                if unicode(i.names[-1]) == unicode(name.names[-1]):
-                    return True
-            return False
+    def py__getattribute__(self, name):
+        return self._evaluator.find_types(self, name)

-        all_names = []
-        for cls in self.py__mro__(self._evaluator):
-            names = []
-            if isinstance(cls, compiled.CompiledObject):
-                x = cls.instance_names()
-            else:
-                x = reversed(cls.base.get_defined_names())
-            for n in x:
-                if not in_iterable(n, all_names):
-                    names.append(n)
-            yield cls, names
-        if add_class_vars:
-            yield self, compiled.type_names
+    @property
+    def params(self):
+        return self.get_subscope_by_name('__init__').params

+    def names_dicts(self, search_global, is_instance=False):
+        if search_global:
+            yield self.names_dict
+        else:
+            for scope in self.py__mro__(self._evaluator):
+                if isinstance(scope, compiled.CompiledObject):
+                    yield scope.names_dicts(False, is_instance)[0]
+                else:
+                    yield scope.names_dict

     def is_class(self):
         return True

     def get_subscope_by_name(self, name):
-        for s in [self] + self.py__bases__(self._evaluator):
+        for s in self.py__mro__(self._evaluator):
             for sub in reversed(s.subscopes):
-                if sub.name.get_code() == name:
-                    return sub
+                if sub.name.value == name:
+                    return sub
         raise KeyError("Couldn't find subscope.")

     def __getattr__(self, name):
-        if name not in ['start_pos', 'end_pos', 'parent', 'asserts', 'raw_doc',
+        if name not in ['start_pos', 'end_pos', 'parent', 'raw_doc',
                         'doc', 'get_imports', 'get_parent_until', 'get_code',
-                        'subscopes']:
+                        'subscopes', 'names_dict', 'type']:
             raise AttributeError("Don't touch this: %s of %s !" % (name, self))
         return getattr(self.base, name)

@@ -444,66 +485,71 @@ class Function(use_metaclass(CachedMetaClass, Wrapper)):
         self._evaluator = evaluator
         self.base = self.base_func = func
         self.is_decorated = is_decorated
+        # A property that is set by the decorator resolution.
+        self.decorates = None

     @memoize_default()
-    def _decorated_func(self):
+    def get_decorated_func(self):
         """
-        Returns the function, that is to be executed in the end.
+        Returns the function that should be executed in the end.
         This is also the place where the decorators are processed.
         """
         f = self.base_func
+        decorators = self.base_func.get_decorators()

+        if not decorators or self.is_decorated:
+            return self

-        # Only enter it, if has not already been processed.
-        if not self.is_decorated:
-            for dec in reversed(self.base_func.decorators):
+        for dec in reversed(decorators):
             debug.dbg('decorator: %s %s', dec, f)
-            dec_results = self._evaluator.eval_statement(dec)
+            dec_results = self._evaluator.eval_element(dec.children[1])
+            trailer = dec.children[2:-1]
+            if trailer:
+                # Create a trailer and evaluate it.
+                trailer = pr.Node('trailer', trailer)
+                dec_results = self._evaluator.eval_trailer(dec_results, trailer)

             if not len(dec_results):
                 debug.warning('decorator not found: %s on %s', dec, self.base_func)
-                return None
+                return self
             decorator = dec_results.pop()
             if dec_results:
                 debug.warning('multiple decorators found %s %s',
                               self.base_func, dec_results)
-            # Create param array.
-            old_func = Function(self._evaluator, f, is_decorated=True)

-            wrappers = self._evaluator.execute(decorator, (old_func,))
+            # Create param array.
+            if isinstance(f, Function):
+                old_func = f  # TODO this is just hacky. change.
+            else:
+                old_func = Function(self._evaluator, f, is_decorated=True)

+            wrappers = self._evaluator.execute_evaluated(decorator, old_func)
             if not len(wrappers):
                 debug.warning('no wrappers found %s', self.base_func)
-                return None
+                return self
             if len(wrappers) > 1:
                 # TODO resolve issue with multiple wrappers -> multiple types
                 debug.warning('multiple wrappers found %s %s',
                               self.base_func, wrappers)
             f = wrappers[0]
+            if isinstance(f, (Instance, Function)):
+                f.decorates = self

             debug.dbg('decorator end %s', f)

-        if isinstance(f, pr.Function):
-            f = Function(self._evaluator, f, True)
         return f
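
`get_decorated_func` evaluates each decorator expression, calls it with the undecorated function, and continues with whatever it returns, i.e. it statically simulates what the interpreter does at `def` time. The runtime substitution being simulated:

    # A decorated def is just function = decorator(function).
    def trace(func):
        def wrapper(*args, **kwargs):
            print('calling', func.__name__)
            return func(*args, **kwargs)
        return wrapper

    @trace
    def add(a, b):
        return a + b

    # Equivalent to: add = trace(add). Following the same substitution is
    # how an evaluator can infer that completing `add` means completing
    # `wrapper`.
    assert add(1, 2) == 3
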

-    def get_decorated_func(self):
-        """
-        This function exists for the sole purpose of returning itself if the
-        decorator doesn't turn out to "work".

-        We just ignore the decorator here, because sometimes decorators are
-        just really complicated and Jedi cannot understand them.
-        """
-        return self._decorated_func() \
-            or Function(self._evaluator, self.base_func, True)

-    def get_magic_function_names(self):
-        return compiled.magic_function_class.get_defined_names()

-    def get_magic_function_scope(self):
-        return compiled.magic_function_class
+    def names_dicts(self, search_global):
+        if search_global:
+            yield self.names_dict
+        else:
+            for names_dict in compiled.magic_function_class.names_dicts(False):
+                yield names_dict

     @Python3Method
     def py__call__(self, evaluator, params):
-        if self.is_generator:
+        if self.base.is_generator():
             return [iterable.Generator(evaluator, self, params)]
         else:
             return FunctionExecution(evaluator, self, params).get_return_types()

@@ -515,13 +561,17 @@ class Function(use_metaclass(CachedMetaClass, Wrapper)):
         return getattr(self.base_func, name)

     def __repr__(self):
-        dec_func = self._decorated_func()
         dec = ''
-        if not self.is_decorated and self.base_func.decorators:
-            dec = " is " + repr(dec_func)
+        if self.decorates is not None:
+            dec = " decorates " + repr(self.decorates)
         return "<e%s of %s%s>" % (type(self).__name__, self.base_func, dec)


+class LambdaWrapper(Function):
+    def get_decorated_func(self):
+        return self


 class FunctionExecution(Executed):
     """
     This class is used to evaluate functions and their returns.

@@ -531,16 +581,23 @@ class FunctionExecution(Executed):
     multiple calls to functions and recursion has to be avoided. But this is
     responsibility of the decorators.
     """
+    type = 'funcdef'

     def __init__(self, evaluator, base, *args, **kwargs):
         super(FunctionExecution, self).__init__(evaluator, base, *args, **kwargs)
-        # for deep_ast_copy
-        self._copy_dict = {base.base_func: self}
+        self._copy_dict = {}
+        new_func = helpers.deep_ast_copy(base.base_func, self, self._copy_dict)
+        self.children = new_func.children
+        self.names_dict = new_func.names_dict

     @memoize_default(default=())
     @recursion.execution_recursion_decorator
-    def get_return_types(self):
+    def get_return_types(self, check_yields=False):
         func = self.base

+        if func.isinstance(LambdaWrapper):
+            return self._evaluator.eval_element(self.children[-1])

         if func.listeners:
             # Feed the listeners, with the params.
             for listener in func.listeners:

@@ -550,27 +607,28 @@ class FunctionExecution(Executed):
             # inserted params, not in the actual execution of the function.
             return []

-        types = list(docstrings.find_return_types(self._evaluator, func))
-        for r in self.returns:
-            if isinstance(r, pr.KeywordStatement):
-                stmt = r.stmt
-            else:
-                stmt = r  # Lambdas
+        if check_yields:
+            types = []
+            returns = self.yields
+        else:
+            returns = self.returns
+            types = list(docstrings.find_return_types(self._evaluator, func))

-            if stmt is None:
-                continue

-            check = flow_analysis.break_check(self._evaluator, self, r.parent)
+        for r in returns:
+            check = flow_analysis.break_check(self._evaluator, self, r)
             if check is flow_analysis.UNREACHABLE:
                 debug.dbg('Return unreachable: %s', r)
             else:
-                types += self._evaluator.eval_statement(stmt)
+                types += self._evaluator.eval_element(r.children[1])
             if check is flow_analysis.REACHABLE:
                 debug.dbg('Return reachable: %s', r)
                 break
         return types

-    @memoize_default(default=())
+    def names_dicts(self, search_global):
+        yield self.names_dict

+    @memoize_default(default=NO_DEFAULT)
     def _get_params(self):
         """
         This returns the params for a TODO and is injected as a

@@ -580,26 +638,19 @@ class FunctionExecution(Executed):
         """
         return param.get_params(self._evaluator, self.base, self.var_args)

-    def get_defined_names(self):
-        """
-        Call the default method with the own instance (self implements all
-        the necessary functions). Add also the params.
-        """
-        return self._get_params() + pr.Scope.get_defined_names(self)
+    def param_by_name(self, name):
+        return [n for n in self._get_params() if str(n) == name][0]

-    def scope_names_generator(self, position=None):
-        names = pr.filter_after_position(pr.Scope.get_defined_names(self), position)
-        yield self, self._get_params() + names
+    def name_for_position(self, position):
+        return pr.Function.name_for_position(self, position)

-    def _copy_list(self, list_name):
+    def _copy_list(self, lst):
         """
         Copies a list attribute of a parser Function. Copying is very
         expensive, because it is something like `copy.deepcopy`. However, these
         copied objects can be used for the executions, as if they were in the
         execution.
         """
-        # Copy all these lists into this local function.
-        lst = getattr(self.base, list_name)
         objects = []
         for element in lst:
             self._scope_copy(element.parent)

@@ -608,11 +659,12 @@ class FunctionExecution(Executed):
         return objects

     def __getattr__(self, name):
-        if name not in ['start_pos', 'end_pos', 'imports', '_sub_module']:
+        if name not in ['start_pos', 'end_pos', 'imports', 'name', 'type']:
             raise AttributeError('Tried to access %s: %s. Why?' % (name, self))
         return getattr(self.base, name)

     def _scope_copy(self, scope):
+        raise NotImplementedError
         """ Copies a scope (e.g. `if foo:`) in an execution """
         if scope != self.base.base_func:
             # Just make sure the parents have been copied.

@@ -622,67 +674,105 @@ class FunctionExecution(Executed):
     @common.safe_property
     @memoize_default([])
     def returns(self):
-        return self._copy_list('returns')
+        return pr.Scope._search_in_scope(self, pr.ReturnStmt)

     @common.safe_property
     @memoize_default([])
-    def asserts(self):
-        return self._copy_list('asserts')
+    def yields(self):
+        return pr.Scope._search_in_scope(self, pr.YieldExpr)

     @common.safe_property
     @memoize_default([])
     def statements(self):
-        return self._copy_list('statements')
+        return pr.Scope._search_in_scope(self, pr.ExprStmt)

     @common.safe_property
     @memoize_default([])
     def subscopes(self):
-        return self._copy_list('subscopes')

-    def get_statement_for_position(self, pos):
-        return pr.Scope.get_statement_for_position(self, pos)
+        return pr.Scope._search_in_scope(self, pr.Scope)

     def __repr__(self):
         return "<%s of %s>" % (type(self).__name__, self.base)


+class GlobalName(helpers.FakeName):
+    def __init__(self, name):
+        """
+        We need to mark global names somehow. Otherwise they are just normal
+        names that are not definitions.
+        """
+        super(GlobalName, self).__init__(name.value, name.parent,
+                                         name.start_pos, is_definition=True)


 class ModuleWrapper(use_metaclass(CachedMetaClass, pr.Module, Wrapper)):
     def __init__(self, evaluator, module):
         self._evaluator = evaluator
         self.base = self._module = module

-    def scope_names_generator(self, position=None):
-        yield self, pr.filter_after_position(self._module.get_defined_names(), position)
-        yield self, self._module_attributes()
-        sub_modules = self._sub_modules()
-        if sub_modules:
-            yield self, self._sub_modules()
+    def names_dicts(self, search_global):
+        yield self.base.names_dict
+        yield self._module_attributes_dict()

+        for star_module in self.star_imports():
+            yield star_module.names_dict

+        yield dict((str(n), [GlobalName(n)]) for n in self.base.global_names)
+        yield self._sub_modules_dict()

+    @cache_star_import
+    @memoize_default([])
+    def star_imports(self):
+        modules = []
+        for i in self.base.imports:
+            if i.is_star_import():
+                name = i.star_import_name()
+                new = imports.ImportWrapper(self._evaluator, name).follow()
+                for module in new:
+                    if isinstance(module, pr.Module):
+                        modules += module.star_imports()
+                modules += new
+        return modules

     @memoize_default()
-    def _module_attributes(self):
+    def _module_attributes_dict(self):
         def parent_callback():
-            return Instance(self._evaluator, compiled.create(self._evaluator, str))
+            return self._evaluator.execute(compiled.create(self._evaluator, str))[0]

-        names = ['__file__', '__package__', '__doc__', '__name__', '__version__']
+        names = ['__file__', '__package__', '__doc__', '__name__']
         # All the additional module attributes are strings.
-        return [helpers.LazyName(n, parent_callback) for n in names]
+        return dict((n, [helpers.LazyName(n, parent_callback, is_definition=True)])
+                    for n in names)

     @property
     @memoize_default()
     def name(self):
         return helpers.FakeName(unicode(self.base.name), self, (1, 0))

     @memoize_default()
-    def _sub_modules(self):
+    def _sub_modules_dict(self):
         """
         Lists modules in the directory of this module (if this module is a
         package).
         """
         path = self._module.path
-        names = []
+        names = {}
         if path is not None and path.endswith(os.path.sep + '__init__.py'):
             mods = pkgutil.iter_modules([os.path.dirname(path)])
             for module_loader, name, is_pkg in mods:
-                name = helpers.FakeName(name)
+                fake_n = helpers.FakeName(name)
                 # It's obviously a relative import to the current module.
-                imp = helpers.FakeImport(name, self, level=1)
-                name.parent = imp
-                names.append(name)
+                imp = helpers.FakeImport(fake_n, self, level=1)
+                fake_n.parent = imp
+                names[name] = [fake_n]

         # TODO add something like this in the future, it's cleaner than the
         # import hacks.
         # ``os.path`` is a hardcoded exception, because it's a
         # ``sys.modules`` modification.
         #if str(self.name) == 'os':
         #    names.append(helpers.FakeName('path', parent=self))

         return names
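
`_sub_modules_dict` relies on `pkgutil.iter_modules` to list the sibling modules of a package, so relative imports can be completed. The underlying call works stand-alone; here it is run against a standard-library package (using `email` as the scanned package is just an assumption for illustration):

    import os
    import pkgutil

    import email
    pkg_dir = os.path.dirname(email.__file__)
    for module_loader, name, is_pkg in pkgutil.iter_modules([pkg_dir]):
        print(name, '(package)' if is_pkg else '(module)')
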

     def __getattr__(self, name):

@@ -7,14 +7,17 @@ To add a new implementation, create a function and add it to the

 """
 import collections
+import re

 from jedi._compatibility import unicode
 from jedi.evaluate import compiled
 from jedi.evaluate import representation as er
 from jedi.evaluate import iterable
-from jedi.evaluate.helpers import FakeArray, FakeStatement
 from jedi.parser import Parser
-from jedi.parser import representation as pr
+from jedi.parser import tree as pr
 from jedi import debug
+from jedi.evaluate import precedence
+from jedi.evaluate import param


 class NotInStdLib(LookupError):

@@ -44,42 +47,77 @@ def execute(evaluator, obj, params):

 def _follow_param(evaluator, params, index):
     try:
-        stmt = params[index]
+        key, values = list(params.unpack())[index]
     except IndexError:
         return []
     else:
-        if isinstance(stmt, pr.Statement):
-            return evaluator.eval_statement(stmt)
-        else:
-            return [stmt]  # just some arbitrary object
+        return iterable.unite(evaluator.eval_element(v) for v in values)


-def builtins_getattr(evaluator, obj, params):
-    stmts = []
+def argument_clinic(string, want_obj=False, want_scope=False):
+    """
+    Works like Argument Clinic (PEP 436), to validate function params.
+    """
+    clinic_args = []
+    allow_kwargs = False
+    optional = False
+    while string:
+        # Optional arguments have to begin with a bracket. And should always be
+        # at the end of the arguments. This is therefore not a proper argument
+        # clinic implementation. `range()` for example allows an optional start
+        # value at the beginning.
+        match = re.match('(?:(?:(\[),? ?|, ?|)(\w+)|, ?/)\]*', string)
+        string = string[len(match.group(0)):]
+        if not match.group(2):  # A slash -> allow named arguments
+            allow_kwargs = True
+            continue
+        optional = optional or bool(match.group(1))
+        word = match.group(2)
+        clinic_args.append((word, optional, allow_kwargs))

+    def f(func):
+        def wrapper(evaluator, obj, arguments):
+            try:
+                lst = list(arguments.eval_argument_clinic(clinic_args))
+            except ValueError:
+                return []
+            else:
+                kwargs = {}
+                if want_scope:
+                    kwargs['scope'] = arguments.scope()
+                if want_obj:
+                    kwargs['obj'] = obj
+                return func(evaluator, *lst, **kwargs)

+        return wrapper
+    return f
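
The regex consumes one parameter token per loop iteration: an optional opening bracket (which marks the word and everything after it as optional), a plain word, or a slash. How a spec string decomposes can be checked with the same pattern outside the decorator:

    import re

    string = 'object, name[, default], /'
    clinic_args = []
    allow_kwargs = False
    optional = False
    while string:
        match = re.match(r'(?:(?:(\[),? ?|, ?|)(\w+)|, ?/)\]*', string)
        string = string[len(match.group(0)):]
        if not match.group(2):      # matched the slash
            allow_kwargs = True
            continue
        optional = optional or bool(match.group(1))
        clinic_args.append((match.group(2), optional, allow_kwargs))

    print(clinic_args)
    # [('object', False, False), ('name', False, False), ('default', True, False)]

The third spec entry is why `builtins_getattr` below can declare `defaults=None`: `default` was parsed as optional.
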


+@argument_clinic('object, name[, default], /')
+def builtins_getattr(evaluator, objects, names, defaults=None):
+    types = []
     # follow the first param
-    objects = _follow_param(evaluator, params, 0)
-    names = _follow_param(evaluator, params, 1)
     for obj in objects:
         if not isinstance(obj, (er.Instance, er.Class, pr.Module, compiled.CompiledObject)):
             debug.warning('getattr called without instance')
             continue

         for name in names:
-            s = unicode, str
-            if isinstance(name, compiled.CompiledObject) and isinstance(name.obj, s):
-                stmts += evaluator.follow_path(iter([name.obj]), [obj], obj)
+            if precedence.is_string(name):
+                return evaluator.find_types(obj, name.obj)
             else:
                 debug.warning('getattr called without str')
                 continue
-    return stmts
+    return types


-def builtins_type(evaluator, obj, params):
-    if len(params) == 1:
-        # otherwise it would be a metaclass... maybe someday...
-        objects = _follow_param(evaluator, params, 0)
+@argument_clinic('object[, bases, dict], /')
+def builtins_type(evaluator, objects, bases, dicts):
+    if bases or dicts:
+        # metaclass... maybe someday...
+        return []
+    else:
         return [o.base for o in objects if isinstance(o, er.Instance)]
-    return []


 class SuperInstance(er.Instance):

@@ -89,14 +127,14 @@ class SuperInstance(er.Instance):
         super().__init__(evaluator, su and su[0] or self)


-def builtins_super(evaluator, obj, params):
+@argument_clinic('[type[, obj]], /', want_scope=True)
+def builtins_super(evaluator, types, objects, scope):
     # TODO make this able to detect multiple inheritance super
     accept = (pr.Function, er.FunctionExecution)
-    func = params.get_parent_until(accept)
-    if func.isinstance(*accept):
+    if scope.isinstance(*accept):
         wanted = (pr.Class, er.Instance)
-        cls = func.get_parent_until(accept + wanted,
-                                    include_current=False)
+        cls = scope.get_parent_until(accept + wanted,
+                                     include_current=False)
         if isinstance(cls, wanted):
             if isinstance(cls, pr.Class):
                 cls = er.Class(evaluator, cls)

@@ -108,27 +146,25 @@ def builtins_super(evaluator, obj, params):
     return []


-def builtins_reversed(evaluator, obj, params):
-    objects = tuple(_follow_param(evaluator, params, 0))
-    if objects:
-        # unpack the iterator values
-        objects = tuple(iterable.get_iterator_types(objects))
-        if objects:
-            rev = reversed(objects)
-            # Repack iterator values and then run it the normal way. This is
-            # necessary, because `reversed` is a function and autocompletion
-            # would fail in certain cases like `reversed(x).__iter__` if we
-            # just returned the result directly.
-            stmts = [FakeStatement([r]) for r in rev]
-            objects = (iterable.Array(evaluator, FakeArray(stmts, objects[0].parent)),)
-    return [er.Instance(evaluator, obj, objects)]
+@argument_clinic('sequence, /', want_obj=True)
+def builtins_reversed(evaluator, sequences, obj):
+    # Unpack the iterator values
+    objects = tuple(iterable.get_iterator_types(sequences))
+    rev = [iterable.AlreadyEvaluated([o]) for o in reversed(objects)]
+    # Repack iterator values and then run it the normal way. This is
+    # necessary, because `reversed` is a function and autocompletion
+    # would fail in certain cases like `reversed(x).__iter__` if we
+    # just returned the result directly.
+    rev = iterable.AlreadyEvaluated(
+        [iterable.FakeSequence(evaluator, rev, 'list')]
+    )
+    return [er.Instance(evaluator, obj, param.Arguments(evaluator, [rev]))]


-def builtins_isinstance(evaluator, obj, params):
-    obj = _follow_param(evaluator, params, 0)
-    raw_classes = _follow_param(evaluator, params, 1)
+@argument_clinic('obj, type, /')
+def builtins_isinstance(evaluator, objects, types):
     bool_results = set([])
-    for o in obj:
+    for o in objects:
         try:
             mro_func = o.py__class__(evaluator).py__mro__
         except AttributeError:

@@ -139,7 +175,7 @@ def builtins_isinstance(evaluator, obj, params):

         mro = mro_func(evaluator)

-        for cls_or_tup in raw_classes:
+        for cls_or_tup in types:
             if cls_or_tup.is_class():
                 bool_results.add(cls_or_tup in mro)
             else:

@@ -189,14 +225,13 @@ def collections_namedtuple(evaluator, obj, params):
     )

     # Parse source
-    generated_class = Parser(unicode(source)).module.subscopes[0]
+    generated_class = Parser(evaluator.grammar, unicode(source)).module.subscopes[0]
     return [er.Class(evaluator, generated_class)]


-def _return_first_param(evaluator, obj, params):
-    if len(params) == 1:
-        return _follow_param(evaluator, params, 0)
-    return []
+@argument_clinic('first, /')
+def _return_first_param(evaluator, firsts):
+    return firsts


 _implemented = {

@@ -1,8 +1,9 @@
+import glob
 import os
 import sys

 from jedi._compatibility import exec_function, unicode
-from jedi.parser import representation as pr
+from jedi.parser import tree as pr
 from jedi.parser import Parser
 from jedi.evaluate.cache import memoize_default
 from jedi import debug

@@ -16,17 +17,26 @@ def get_sys_path():
         if not venv:
             return
         venv = os.path.abspath(venv)
-        if os.name == 'nt':
-            p = os.path.join(venv, 'lib', 'site-packages')
-        else:
-            p = os.path.join(venv, 'lib', 'python%d.%d' % sys.version_info[:2],
-                             'site-packages')
+        p = _get_venv_sitepackages(venv)
         if p not in sys_path:
             sys_path.insert(0, p)

+        # Add all egg-links from the virtualenv.
+        for egg_link in glob.glob(os.path.join(p, '*.egg-link')):
+            with open(egg_link) as fd:
+                sys_path.insert(0, fd.readline().rstrip())

     check_virtual_env(sys.path)
     return [p for p in sys.path if p != ""]

+
+def _get_venv_sitepackages(venv):
+    if os.name == 'nt':
+        p = os.path.join(venv, 'lib', 'site-packages')
+    else:
+        p = os.path.join(venv, 'lib', 'python%d.%d' % sys.version_info[:2],
+                         'site-packages')
+    return p
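
The site-packages location differs by platform: on Windows it sits directly under the virtualenv, while POSIX layouts embed the Python version in the path. The helper's behaviour, runnable on its own:

    import os
    import sys

    def get_venv_sitepackages(venv):
        if os.name == 'nt':
            return os.path.join(venv, 'lib', 'site-packages')
        return os.path.join(venv, 'lib',
                            'python%d.%d' % sys.version_info[:2],
                            'site-packages')

    print(get_venv_sitepackages('/home/user/venv'))
    # e.g. /home/user/venv/lib/python2.7/site-packages
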


 def _execute_code(module_path, code):
     c = "import os; from os.path import *; result=%s"

@@ -35,18 +45,17 @@ def _execute_code(module_path, code):
         exec_function(c % code, variables)
     except Exception:
         debug.warning('sys.path manipulation detected, but failed to evaluate.')
-        return None
-    try:
-        res = variables['result']
-        if isinstance(res, str):
-            return os.path.abspath(res)
-        else:
-            return None
-    except KeyError:
-        return None
+    else:
+        try:
+            res = variables['result']
+            if isinstance(res, str):
+                return [os.path.abspath(res)]
+        except KeyError:
+            pass
+    return []


-def _paths_from_assignment(evaluator, statement):
+def _paths_from_assignment(evaluator, expr_stmt):
     """
     Extracts the assigned strings from an assignment that looks as follows::

@@ -57,74 +66,82 @@ def _paths_from_assignment(evaluator, statement):
     because it will only affect Jedi in very random situations and by adding
     more paths than necessary, it usually benefits the general user.
     """
-    for exp_list, operator in statement.assignment_details:
-        if len(exp_list) != 1 or not isinstance(exp_list[0], pr.Call):
+    for assignee, operator in zip(expr_stmt.children[::2], expr_stmt.children[1::2]):
+        try:
+            assert operator in ['=', '+=']
+            assert pr.is_node(assignee, 'power') and len(assignee.children) > 1
+            c = assignee.children
+            assert c[0].type == 'name' and c[0].value == 'sys'
+            trailer = c[1]
+            assert trailer.children[0] == '.' and trailer.children[1].value == 'path'
+            # TODO Essentially we're not checking details on sys.path
+            # manipulation. Both assignment of the sys.path and changing/adding
+            # parts of the sys.path are the same: They get added to the current
+            # sys.path.
+            """
+            execution = c[2]
+            assert execution.children[0] == '['
+            subscript = execution.children[1]
+            assert subscript.type == 'subscript'
+            assert ':' in subscript.children
+            """
+        except AssertionError:
             continue
-        if unicode(exp_list[0].name) != 'sys.path':
-            continue
         # TODO at this point we ignore all the ways something could be assigned
         # to sys.path or an execution of it. Here we could do way more
         # complicated checks.

        from jedi.evaluate.iterable import get_iterator_types
        from jedi.evaluate.precedence import is_string
-        for val in get_iterator_types(evaluator.eval_statement(statement)):
+        for val in get_iterator_types(evaluator.eval_statement(expr_stmt)):
             if is_string(val):
                 yield val.obj
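
The same kind of check, written against Python's stdlib `ast` module rather than Jedi's parse tree, for comparison (an illustrative sketch only, not the code Jedi runs):

    import ast

    source = "sys.path = ['/foo'] + sys.path\nsys.path += ['/bar']\n"
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Assign, ast.AugAssign)):
            targets = node.targets if isinstance(node, ast.Assign) else [node.target]
            for t in targets:
                if (isinstance(t, ast.Attribute) and t.attr == 'path'
                        and isinstance(t.value, ast.Name) and t.value.id == 'sys'):
                    strings = [s.s for s in ast.walk(node.value)
                               if isinstance(s, ast.Str)]
                    print(strings)
    # ['/foo']
    # ['/bar']
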


-def _paths_from_insert(module_path, exe):
-    """ extract the inserted module path from an "sys.path.insert" statement
-    """
-    exe_type, exe.type = exe.type, pr.Array.NOARRAY
-    try:
-        exe_pop = exe.values.pop(0)
-        res = _execute_code(module_path, exe.get_code())
-    finally:
-        exe.type = exe_type
-        exe.values.insert(0, exe_pop)
-    return res


-def _paths_from_call_expression(module_path, call):
+def _paths_from_list_modifications(module_path, trailer1, trailer2):
     """ extract the path from either "sys.path.append" or "sys.path.insert" """
-    if not call.next_is_execution():
-        return
+    # Guarantee that both are trailers, the first one a name and the second one
+    # a function execution with at least one param.
+    if not (pr.is_node(trailer1, 'trailer') and trailer1.children[0] == '.'
+            and pr.is_node(trailer2, 'trailer') and trailer2.children[0] == '('
+            and len(trailer2.children) == 3):
+        return []

-    n = call.name
-    if not isinstance(n, pr.Name) or len(n.names) != 3:
-        return
-    names = [unicode(x) for x in n.names]
-    if names[:2] != ['sys', 'path']:
-        return
-    cmd = names[2]
-    exe = call.next
-    if cmd == 'insert' and len(exe) == 2:
-        path = _paths_from_insert(module_path, exe)
-    elif cmd == 'append' and len(exe) == 1:
-        path = _execute_code(module_path, exe.get_code())
-    return path and [path] or []
+    name = trailer1.children[1].value
+    if name not in ['insert', 'append']:
+        return []

+    arg = trailer2.children[1]
+    if name == 'insert' and len(arg.children) in (3, 4):  # Possible trailing comma.
+        arg = arg.children[2]
+    return _execute_code(module_path, arg.get_code())


 def _check_module(evaluator, module):
-    try:
-        possible_stmts = module.used_names['path']
-    except KeyError:
-        return get_sys_path()
+    def get_sys_path_powers(names):
+        for name in names:
+            power = name.parent.parent
+            if pr.is_node(power, 'power'):
+                c = power.children
+                if isinstance(c[0], pr.Name) and c[0].value == 'sys' \
+                        and pr.is_node(c[1], 'trailer'):
+                    n = c[1].children[1]
+                    if isinstance(n, pr.Name) and n.value == 'path':
+                        yield name, power

     sys_path = list(get_sys_path())  # copy
-    statements = (p for p in possible_stmts if isinstance(p, pr.ExprStmt))
-    for stmt in statements:
-        expressions = stmt.expression_list()
-        if len(expressions) == 1 and isinstance(expressions[0], pr.Call):
-            sys_path.extend(
-                _paths_from_call_expression(module.path, expressions[0]) or [])
-        elif hasattr(stmt, 'assignment_details') \
-                and len(stmt.assignment_details) == 1:
-            sys_path.extend(_paths_from_assignment(evaluator, stmt))
+    try:
+        possible_names = module.used_names['path']
+    except KeyError:
+        pass
+    else:
+        for name, power in get_sys_path_powers(possible_names):
+            stmt = name.get_definition()
+            if len(power.children) >= 4:
+                sys_path.extend(_paths_from_list_modifications(module.path, *power.children[2:4]))
+            elif name.get_definition().type == 'expr_stmt':
+                sys_path.extend(_paths_from_assignment(evaluator, stmt))
     return sys_path


-@memoize_default(evaluator_is_first_arg=True)
+@memoize_default(evaluator_is_first_arg=True, default=[])
 def sys_path_with_modifications(evaluator, module):
     if module.path is None:
         # Support for modules without a path is bad, therefore return the

@@ -147,7 +164,7 @@ def sys_path_with_modifications(evaluator, module):
         except IOError:
             pass
         else:
-            p = Parser(common.source_to_unicode(source), module_path)
+            p = Parser(evaluator.grammar, common.source_to_unicode(source), module_path)
             for path in _check_module(p.module):
                 if path not in buildout_paths:
                     buildout_paths.add(path)

@@ -1,7 +1,7 @@
 """
 The ``Parser`` tries to convert the available Python code into an easy to read
 format, something like an abstract syntax tree. The classes that represent this
-tree, are sitting in the :mod:`jedi.parser.representation` module.
+tree are sitting in the :mod:`jedi.parser.tree` module.

 The Python module ``tokenize`` is a very important part in the ``Parser``,
 because it splits the code into different words (tokens). Sometimes it looks a

@@ -15,13 +15,16 @@ within the statement. This lowers memory usage and cpu time and reduces the
 complexity of the ``Parser`` (there's another parser sitting inside
 ``Statement``, which produces ``Array`` and ``Call``).
 """
-import keyword
+import os
 import re

 from jedi._compatibility import next, unicode
-from jedi import debug
 from jedi import common
-from jedi.parser import representation as pr
+from jedi.parser import tree as pt
 from jedi.parser import tokenize
+from jedi.parser import token
+from jedi.parser.token import (DEDENT, INDENT, ENDMARKER, NEWLINE, NUMBER,
+                               STRING, OP, ERRORTOKEN)
+from jedi.parser.pgen2.pgen import generate_grammar
+from jedi.parser.pgen2.parse import PgenParser

 OPERATOR_KEYWORDS = 'and', 'for', 'if', 'else', 'in', 'is', 'lambda', 'not', 'or'
 # Not used yet. In the future I intend to add something like KeywordStatement

@@ -29,623 +32,362 @@ STATEMENT_KEYWORDS = 'assert', 'del', 'global', 'nonlocal', 'raise', \
     'return', 'yield', 'pass', 'continue', 'break'


+_loaded_grammars = {}


+def load_grammar(file='grammar3.4'):
+    # For now we only support two different Python syntax versions: The latest
+    # Python 3 and Python 2. This may change.
+    if file.startswith('grammar3'):
+        file = 'grammar3.4'
+    else:
+        file = 'grammar2.7'
+
+    global _loaded_grammars
+    path = os.path.join(os.path.dirname(__file__), file) + '.txt'
+    try:
+        return _loaded_grammars[path]
+    except KeyError:
+        return _loaded_grammars.setdefault(path, generate_grammar(path))
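
`load_grammar` normalizes the requested version to one of the two bundled grammar files and memoizes the generated grammar per path; `dict.setdefault` makes the insert-and-return a single step. The caching idiom in isolation:

    _cache = {}

    def load(path):
        try:
            return _cache[path]
        except KeyError:
            # Compute once, store, and return the stored value.
            return _cache.setdefault(path, expensive_parse(path))

    def expensive_parse(path):
        print('parsing', path)   # Runs only on the first call per path.
        return object()

    a = load('grammar3.4.txt')
    b = load('grammar3.4.txt')
    assert a is b
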


+class ErrorStatement(object):
+    def __init__(self, stack, next_token, position_modifier, next_start_pos):
+        self.stack = stack
+        self._position_modifier = position_modifier
+        self.next_token = next_token
+        self._next_start_pos = next_start_pos

+    @property
+    def next_start_pos(self):
+        s = self._next_start_pos
+        return s[0] + self._position_modifier.line, s[1]

+    @property
+    def first_pos(self):
+        first_type, nodes = self.stack[0]
+        return nodes[0].start_pos

+    @property
+    def first_type(self):
+        first_type, nodes = self.stack[0]
+        return first_type


+class ParserSyntaxError(object):
+    def __init__(self, message, position):
+        self.message = message
+        self.position = position


 class Parser(object):
     """
     This class is used to parse a Python file. It then divides it into a
     class structure of different scopes.

-    :param source: The codebase for the parser.
-    :type source: str
+    :param grammar: The grammar object of pgen2. Loaded by load_grammar.
+    :param source: The codebase for the parser. Must be unicode.
     :param module_path: The path of the module in the file system, may be None.
     :type module_path: str
-    :param no_docstr: If True, a string at the beginning is not a docstr.
-    :param top_module: Use this module as a parent instead of `self.module`.
     """
-    def __init__(self, source, module_path=None, no_docstr=False,
-                 tokenizer=None, top_module=None):
-        self.no_docstr = no_docstr
+    def __init__(self, grammar, source, module_path=None, tokenizer=None):
+        self._ast_mapping = {
+            'expr_stmt': pt.ExprStmt,
+            'classdef': pt.Class,
+            'funcdef': pt.Function,
+            'file_input': pt.Module,
+            'import_name': pt.ImportName,
+            'import_from': pt.ImportFrom,
+            'break_stmt': pt.KeywordStatement,
+            'continue_stmt': pt.KeywordStatement,
+            'return_stmt': pt.ReturnStmt,
+            'raise_stmt': pt.KeywordStatement,
+            'yield_expr': pt.YieldExpr,
+            'del_stmt': pt.KeywordStatement,
+            'pass_stmt': pt.KeywordStatement,
+            'global_stmt': pt.GlobalStmt,
+            'nonlocal_stmt': pt.KeywordStatement,
+            'assert_stmt': pt.AssertStmt,
+            'if_stmt': pt.IfStmt,
+            'with_stmt': pt.WithStmt,
+            'for_stmt': pt.ForStmt,
+            'while_stmt': pt.WhileStmt,
+            'try_stmt': pt.TryStmt,
+            'comp_for': pt.CompFor,
+            'decorator': pt.Decorator,
+            'lambdef': pt.Lambda,
+            'old_lambdef': pt.Lambda,
+            'lambdef_nocond': pt.Lambda,
+        }

+        self.syntax_errors = []

+        self._global_names = []
+        self._omit_dedent_list = []
+        self._indent_counter = 0
+        self._last_failed_start_pos = (0, 0)

+        # TODO do print absolute import detection here.
+        #try:
+        #    del python_grammar_no_print_statement.keywords["print"]
+        #except KeyError:
+        #    pass  # Doesn't exist in the Python 3 grammar.

+        #if self.options["print_function"]:
+        #    python_grammar = pygram.python_grammar_no_print_statement
+        #else:
         self._used_names = {}
+        self._scope_names_stack = [{}]
+        self._error_statement_stacks = []

         added_newline = False
         # The Python grammar needs a newline at the end of each statement.
         if not source.endswith('\n'):
             source += '\n'
             added_newline = True

+        # For the fast parser.
+        self.position_modifier = pt.PositionModifier()
+        p = PgenParser(grammar, self.convert_node, self.convert_leaf,
+                       self.error_recovery)
         tokenizer = tokenizer or tokenize.source_tokens(source)
-        self._gen = PushBackTokenizer(tokenizer)
+        self.module = p.parse(self._tokenize(tokenizer))
+        if self.module.type != 'file_input':
+            # If there's only one statement, we get back a non-module. That's
+            # not what we want, we want a module, so we add it here:
+            self.module = self.convert_node(grammar,
+                                            grammar.symbol2number['file_input'],
+                                            [self.module])

-        # initialize global Scope
-        start_pos = next(self._gen).start_pos
-        self._gen.push_last_back()
-        self.module = pr.SubModule(module_path, start_pos, top_module)
-        self._scope = self.module
-        self._top_module = top_module or self.module
+        if added_newline:
+            self.remove_last_newline()
         self.module.used_names = self._used_names
         self.module.path = module_path
+        self.module.global_names = self._global_names
+        self.module.error_statement_stacks = self._error_statement_stacks

+    def convert_node(self, grammar, type, children):
+        """
+        Convert raw node information to a Node instance.

+        This is passed to the parser driver which calls it whenever a reduction of a
+        grammar rule produces a new complete node, so that the tree is built
+        strictly bottom-up.
+        """
+        symbol = grammar.number2symbol[type]
         try:
-            self._parse()
-        except (common.MultiLevelStopIteration, StopIteration):
-            # StopIteration needs to be added as well, because python 2 has a
-            # strange way of handling StopIterations.
-            # sometimes StopIteration isn't catched. Just ignore it.
+            new_node = self._ast_mapping[symbol](children)
+        except KeyError:
+            new_node = pt.Node(symbol, children)

-        # on finish, set end_pos correctly
-        pass
-        s = self._scope
-        while s is not None:
-            s.end_pos = self._gen.current.end_pos
-            s = s.parent
+        # We need to check raw_node always, because the same node can be
+        # returned by convert multiple times.
+        if symbol == 'global_stmt':
+            self._global_names += new_node.get_global_names()
+        elif isinstance(new_node, pt.Lambda):
+            new_node.names_dict = self._scope_names_stack.pop()
+        elif isinstance(new_node, (pt.ClassOrFunc, pt.Module)) \
+                and symbol in ('funcdef', 'classdef', 'file_input'):
+            # scope_name_stack handling
+            scope_names = self._scope_names_stack.pop()
+            if isinstance(new_node, pt.ClassOrFunc):
+                n = new_node.name
+                scope_names[n.value].remove(n)
+                # Set the func name of the current node
+                arr = self._scope_names_stack[-1].setdefault(n.value, [])
+                arr.append(n)
+            new_node.names_dict = scope_names
+        elif isinstance(new_node, pt.CompFor):
+            # The name definitions of comprehensions shouldn't be part of the
+            # current scope. They are part of the comprehension scope.
+            for n in new_node.get_defined_names():
+                self._scope_names_stack[-1][n.value].remove(n)
+        return new_node
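
`convert_node` is the pgen2 callback that turns each reduced grammar rule into a typed tree node: a dict dispatch on the rule's symbol, with a generic node as fallback. The dispatch shape, reduced to its essentials:

    class Node(object):
        def __init__(self, symbol, children):
            self.symbol, self.children = symbol, children

    class Function(Node):
        pass

    _ast_mapping = {'funcdef': Function}

    def convert_node(symbol, children):
        try:
            return _ast_mapping[symbol](symbol, children)
        except KeyError:
            return Node(symbol, children)   # generic fallback node

    assert isinstance(convert_node('funcdef', []), Function)
    assert type(convert_node('expr_stmt', [])) is Node
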
|
||||
|
||||
# clean up unused decorators
|
||||
for d in self._decorators:
|
||||
# set a parent for unused decorators, avoid NullPointerException
|
||||
# because of `self.module.used_names`.
|
||||
d.parent = self.module
|
||||
def convert_leaf(self, grammar, type, value, prefix, start_pos):
|
||||
#print('leaf', value, pytree.type_repr(type))
|
||||
if type == tokenize.NAME:
|
||||
if value in grammar.keywords:
|
||||
if value in ('def', 'class', 'lambda'):
|
||||
self._scope_names_stack.append({})
|
||||
|
||||
self.module.end_pos = self._gen.current.end_pos
|
||||
if self._gen.current.type == tokenize.NEWLINE:
|
||||
# This case is only relevant with the FastTokenizer, because
|
||||
# otherwise there's always an ENDMARKER.
|
||||
# we added a newline before, so we need to "remove" it again.
|
||||
#
|
||||
# NOTE: It should be keep end_pos as-is if the last token of
|
||||
# a source is a NEWLINE, otherwise the newline at the end of
|
||||
# a source is not included in a ParserNode.code.
|
||||
if self._gen.previous.type != tokenize.NEWLINE:
|
||||
self.module.end_pos = self._gen.previous.end_pos
|
||||
return pt.Keyword(self.position_modifier, value, start_pos, prefix)
|
||||
else:
|
||||
name = pt.Name(self.position_modifier, value, start_pos, prefix)
|
||||
# Keep a listing of all used names
|
||||
arr = self._used_names.setdefault(name.value, [])
|
||||
arr.append(name)
|
||||
arr = self._scope_names_stack[-1].setdefault(name.value, [])
|
||||
arr.append(name)
|
||||
return name
|
||||
elif type == STRING:
|
||||
return pt.String(self.position_modifier, value, start_pos, prefix)
|
||||
elif type == NUMBER:
|
||||
return pt.Number(self.position_modifier, value, start_pos, prefix)
|
||||
elif type in (NEWLINE, ENDMARKER):
|
||||
return pt.Whitespace(self.position_modifier, value, start_pos, prefix)
|
||||
else:
|
||||
return pt.Operator(self.position_modifier, value, start_pos, prefix)
|
||||
|
||||
del self._gen
|
||||
    def error_recovery(self, grammar, stack, typ, value, start_pos, prefix,
                       add_token_callback):
        """
        This parser is written in a dynamic way, meaning that this parser
        allows using different grammars (even non-Python). However, error
        recovery is purely written for Python.
        """
        def current_suite(stack):
            # For now just discard everything that is not a suite or
            # file_input, if we detect an error.
            for index, (dfa, state, (typ, nodes)) in reversed(list(enumerate(stack))):
                # `suite` can sometimes be only simple_stmt, not stmt.
                symbol = grammar.number2symbol[typ]
                if symbol == 'file_input':
                    break
                elif symbol == 'suite' and len(nodes) > 1:
                    # suites without an indent in them get discarded.
                    break
                elif symbol == 'simple_stmt' and len(nodes) > 1:
                    # simple_stmt can just be turned into a Node, if there are
                    # enough statements. Ignore the rest after that.
                    break
            return index, symbol, nodes

        index, symbol, nodes = current_suite(stack)
        if symbol == 'simple_stmt':
            index -= 2
            (_, _, (typ, suite_nodes)) = stack[index]
            symbol = grammar.number2symbol[typ]
            suite_nodes.append(pt.Node(symbol, list(nodes)))
            # Remove
            nodes[:] = []
            nodes = suite_nodes
            stack[index]

        #print('err', token.tok_name[typ], repr(value), start_pos, len(stack), index)
        self._stack_removal(grammar, stack, index + 1, value, start_pos)
        if typ == INDENT:
            # For every deleted INDENT we have to delete a DEDENT as well.
            # Otherwise the parser will get into trouble and DEDENT too early.
            self._omit_dedent_list.append(self._indent_counter)

        if value in ('import', 'from', 'class', 'def', 'try', 'while', 'return'):
            # Those can always be new statements.
            add_token_callback(typ, value, prefix, start_pos)
        elif typ == DEDENT and symbol == 'suite':
            # Close the current suite, with DEDENT.
            # Note that this may cause some suites to not contain any
            # statements at all. This is contrary to valid Python syntax. We
            # keep incomplete suites in Jedi to be able to complete param names
            # or `with ... as foo` names. If we want to use this parser for
            # syntax checks, we have to check in a separate turn if suites
            # contain statements or not. However, a second check is necessary
            # anyway (compile.c does that for Python), because Python's grammar
            # doesn't stop you from defining `continue` in a module, etc.
            add_token_callback(typ, value, prefix, start_pos)

    def _stack_removal(self, grammar, stack, start_index, value, start_pos):
        def clear_names(children):
            for c in children:
                try:
                    clear_names(c.children)
                except AttributeError:
                    if isinstance(c, pt.Name):
                        try:
                            self._scope_names_stack[-1][c.value].remove(c)
                            self._used_names[c.value].remove(c)
                        except ValueError:
                            pass  # This may happen with CompFor.

        for dfa, state, node in stack[start_index:]:
            clear_names(children=node[1])

        failed_stack = []
        found = False
        for dfa, state, (typ, nodes) in stack[start_index:]:
            if nodes:
                found = True
            if found:
                symbol = grammar.number2symbol[typ]
                failed_stack.append((symbol, nodes))
                if nodes and nodes[0] in ('def', 'class', 'lambda'):
                    self._scope_names_stack.pop()
        if failed_stack:
            err = ErrorStatement(failed_stack, value, self.position_modifier, start_pos)
            self._error_statement_stacks.append(err)

        self._last_failed_start_pos = start_pos

        stack[start_index:] = []

    def _tokenize(self, tokenizer):
        for typ, value, start_pos, prefix in tokenizer:
            #print(tokenize.tok_name[typ], repr(value), start_pos, repr(prefix))
            if typ == DEDENT:
                # We need to count indents, because if we just omit any DEDENT,
                # we might omit them in the wrong place.
                o = self._omit_dedent_list
                if o and o[-1] == self._indent_counter:
                    o.pop()
                    continue

                self._indent_counter -= 1
            elif typ == INDENT:
                self._indent_counter += 1
            elif typ == ERRORTOKEN:
                self._add_syntax_error('Strange token', start_pos)
                continue

            if typ == OP:
                typ = token.opmap[value]
            yield typ, value, prefix, start_pos

    def _add_syntax_error(self, message, position):
        self.syntax_errors.append(ParserSyntaxError(message, position))

    def __repr__(self):
        return "<%s: %s>" % (type(self).__name__, self.module)

    def _check_user_stmt(self, simple):
        # This is not user checking; it just updates the used_names.
        for tok_name in self.module.temp_used_names:
            try:
                self.module.used_names[tok_name].add(simple)
            except KeyError:
                self.module.used_names[tok_name] = set([simple])
        self.module.temp_used_names = []

    def _parse_dot_name(self, pre_used_token=None):
    def remove_last_newline(self):
        """
        The dot name parser parses a name, variable or function and returns
        their names.

        :return: tuple of Name, next_token
        In all of this we need to work with _start_pos, because if we worked
        with start_pos, we would need to check the position_modifier as well
        (which is accounted for in the start_pos property).
        """
        def append(el):
            names.append(el)
            self.module.temp_used_names.append(el[0])

        names = []
        tok = next(self._gen) if pre_used_token is None else pre_used_token

        if tok.type != tokenize.NAME and tok.string != '*':
            return None, tok

        first_pos = tok.start_pos
        append((tok.string, first_pos))
        while True:
            end_pos = tok.end_pos
            tok = next(self._gen)
            if tok.string != '.':
                break
            tok = next(self._gen)
            if tok.type != tokenize.NAME:
                break
            append((tok.string, tok.start_pos))

        n = pr.Name(self.module, names, first_pos, end_pos) if names else None
        return n, tok
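
A minimal standalone sketch of the same dotted-name loop, using the stdlib
tokenizer instead of Jedi's (the helper name and the example call are
assumptions for illustration only):

    import io
    import tokenize

    def read_dotted_name(source):
        # Accumulate NAME tokens joined by '.' until something else follows.
        toks = tokenize.generate_tokens(io.StringIO(source).readline)
        tok = next(toks)
        if tok.type != tokenize.NAME:
            return None
        names = [tok.string]
        while True:
            tok = next(toks)
            if tok.string != '.':
                break
            tok = next(toks)
            if tok.type != tokenize.NAME:
                break
            names.append(tok.string)
        return names

    print(read_dotted_name('os.path.join(a, b)'))  # ['os', 'path', 'join']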

    def _parse_import_list(self):
        """
        The parser for the imports. Unlike the class and function parse
        functions, this returns no Import class, but rather an import list,
        which is then added later on.
        The reason why this is not done in the same class lies in the nature
        of imports. There are two ways to write them:

        - from ... import ...
        - import ...

        To distinguish between them, this has to be processed after the parser.

        :return: List of imports.
        :rtype: list
        """
        imports = []
        brackets = False
        continue_kw = [",", ";", "\n", '\r\n', ')'] \
            + list(set(keyword.kwlist) - set(['as']))
        while True:
            defunct = False
            tok = next(self._gen)
            if tok.string == '(':  # Python allows only one `(` in the statement.
                brackets = True
                tok = next(self._gen)
            if brackets and tok.type == tokenize.NEWLINE:
                tok = next(self._gen)
            i, tok = self._parse_dot_name(tok)
            if not i:
                defunct = True
            name2 = None
            if tok.string == 'as':
                name2, tok = self._parse_dot_name()
            imports.append((i, name2, defunct))
            while tok.string not in continue_kw:
                tok = next(self._gen)
            if not (tok.string == "," or brackets and tok.type == tokenize.NEWLINE):
                break
        return imports

    def _parse_parentheses(self, is_class):
        """
        Functions and classes have params (which means for classes
        super-classes). They are parsed here and returned as Statements.

        :return: List of Statements
        :rtype: list
        """
        params = []
        tok = None
        pos = 0
        breaks = [',', ':']
        while tok is None or tok.string not in (')', ':'):
            # Classes don't have params; a class works more like a function
            # call.
            param, tok = self._parse_statement(added_breaks=breaks,
                                               stmt_class=pr.ExprStmt
                                               if is_class else pr.Param)
            if is_class:
                if param is not None:
                    params.append(param)
            else:
                if param is not None and tok.string == ':':
                    # parse annotations
                    annotation, tok = self._parse_statement(added_breaks=breaks)
                    if annotation:
                        param.add_annotation(annotation)

                # Function params without vars are usually syntax errors;
                # expressions are valid in superclass declarations.
                if param is not None and param.get_defined_names():
                    param.position_nr = pos
                    params.append(param)
                    pos += 1

        return params

    def _parse_function(self):
        """
        The parser for a text function. Processes the tokens that follow a
        function definition.

        :return: Return a Scope representation of the tokens.
        :rtype: Function
        """
        first_pos = self._gen.current.start_pos
        tok = next(self._gen)
        if tok.type != tokenize.NAME:
            return None

        fname = pr.Name(self.module, [(tok.string, tok.start_pos)], tok.start_pos,
                        tok.end_pos)

        tok = next(self._gen)
        if tok.string != '(':
            return None
        params = self._parse_parentheses(is_class=False)

        colon = next(self._gen)
        annotation = None
        if colon.string in ('-', '->'):
            # parse annotations
            if colon.string == '-':
                # The Python 2 tokenizer doesn't understand this
                colon = next(self._gen)
                if colon.string != '>':
                    return None
            annotation, colon = self._parse_statement(added_breaks=[':'])

        if colon.string != ':':
            return None

        # Because of two-line function param definitions.
        return pr.Function(self.module, fname, params, first_pos, annotation)

    def _parse_class(self):
        """
        The parser for a text class. Processes the tokens that follow a
        class definition.

        :return: Return a Scope representation of the tokens.
        :rtype: Class
        """
        first_pos = self._gen.current.start_pos
        cname = next(self._gen)
        if cname.type != tokenize.NAME:
            debug.warning("class: syntax err, token is not a name@%s (%s: %s)",
                          cname.start_pos[0], tokenize.tok_name[cname.type], cname.string)
            return None

        cname = pr.Name(self.module, [(cname.string, cname.start_pos)],
                        cname.start_pos, cname.end_pos)

        superclasses = []
        _next = next(self._gen)
        if _next.string == '(':
            superclasses = self._parse_parentheses(is_class=True)
            _next = next(self._gen)

        if _next.string != ':':
            debug.warning("class syntax: %s@%s", cname, _next.start_pos[0])
            return None

        return pr.Class(self.module, cname, superclasses, first_pos)

    def _parse_statement(self, pre_used_token=None, added_breaks=None,
                         stmt_class=pr.ExprStmt, names_are_set_vars=False,
                         maybe_docstr=False):
        """
        Parses statements like::

            a = test(b)
            a += 3 - 2 or b

        and so on. One line at a time.

        :param pre_used_token: The pre-parsed token.
        :type pre_used_token: set
        :return: ExprStmt + last parsed token.
        :rtype: (ExprStmt, str)
        """
        set_vars = []
        level = 0  # The level of parentheses

        if pre_used_token:
            tok = pre_used_token
        endmarker = self.module.children[-1]
        # The newline is either in the endmarker as a prefix or the previous
        # leaf as a newline token.
        if endmarker.prefix.endswith('\n'):
            endmarker.prefix = endmarker.prefix[:-1]
            last_line = re.sub('.*\n', '', endmarker.prefix)
            endmarker._start_pos = endmarker._start_pos[0] - 1, len(last_line)
        else:
            tok = next(self._gen)

        while tok.type == tokenize.COMMENT:
            # remove newline and comment
            next(self._gen)
            tok = next(self._gen)

        first_pos = tok.start_pos
        opening_brackets = ['{', '(', '[']
        closing_brackets = ['}', ')', ']']

        # The difference between "break" and "always break" is that the latter
        # will even break in parentheses. This is true for typical flow
        # commands like def and class and the imports, which will never be used
        # in a statement.
        breaks = set(['\n', '\r\n', ':', ')'])
        always_break = [';', 'import', 'from', 'class', 'def', 'try', 'except',
                        'finally', 'while', 'return', 'yield']
        not_first_break = ['del', 'raise']
        if added_breaks:
            breaks |= set(added_breaks)

        tok_list = []
        as_names = []
        in_lambda_param = False
        while not (tok.string in always_break
                   or tok.string in not_first_break and not tok_list
                   or tok.string in breaks and level <= 0
                   and not (in_lambda_param and tok.string in ',:')):
            try:
                is_kw = tok.string in OPERATOR_KEYWORDS
                if tok.type == tokenize.OP or is_kw:
                    tok_list.append(
                        pr.Operator(self.module, tok.string, self._scope, tok.start_pos)
                    )
                else:
                    tok_list.append(tok)

                if tok.string == 'as':
                    tok = next(self._gen)
                    if tok.type == tokenize.NAME:
                        n, tok = self._parse_dot_name(self._gen.current)
                        if n:
                            set_vars.append(n)
                            as_names.append(n)
                        tok_list.append(n)
                    continue
                elif tok.string == 'lambda':
                    breaks.discard(':')
                    in_lambda_param = True
                elif in_lambda_param and tok.string == ':':
                    in_lambda_param = False
                elif tok.type == tokenize.NAME and not is_kw:
                    n, tok = self._parse_dot_name(self._gen.current)
                    # Remove the last entry, because we add a Name.
                    tok_list.pop()
                    if n:
                        tok_list.append(n)
                    continue
                elif tok.string in opening_brackets:
                    level += 1
                elif tok.string in closing_brackets:
                    level -= 1

                tok = next(self._gen)
            except (StopIteration, common.MultiLevelStopIteration):
                # comes from tokenizer
                break

        if not tok_list:
            return None, tok

        first_tok = tok_list[0]
        # docstrings
        if len(tok_list) == 1 and isinstance(first_tok, tokenize.Token) \
                and first_tok.type == tokenize.STRING and maybe_docstr:
            # Normal docstring check
            if self.freshscope and not self.no_docstr:
                self._scope.add_docstr(first_tok)
                return None, tok

            # Attribute docstring (PEP 224) support (sphinx uses it, e.g.)
            # If a string literal is being parsed...
            else:
                with common.ignored(IndexError, AttributeError):
                    # ...then set it as a docstring
                    self._scope.statements[-1].add_docstr(first_tok)
                    return None, tok

        stmt = stmt_class(self.module, tok_list, first_pos, tok.end_pos,
                          as_names=as_names,
                          names_are_set_vars=names_are_set_vars)

        stmt.parent = self._top_module
        self._check_user_stmt(stmt)

        if tok.string in always_break + not_first_break:
            self._gen.push_last_back()
        return stmt, tok

    def _parse(self):
        """
        The main part of the program. It analyzes the given code-text and
        returns a tree-like scope. For a more detailed description, see the
        class description.

        :param text: The code which should be parsed.
        :param type: str

        :raises: IndentationError
        """
        extended_flow = ['else', 'elif', 'except', 'finally']
        statement_toks = ['{', '[', '(', '`']

        self._decorators = []
        self.freshscope = True
        for tok in self._gen:
            token_type = tok.type
            tok_str = tok.string
            first_pos = tok.start_pos
            self.module.temp_used_names = []
            # debug.dbg('main: tok=[%s] type=[%s] indent=[%s]', \
            #           tok, tokenize.tok_name[token_type], start_position[0])

            # Check again for unindented stuff. This is true for syntax
            # errors. Only check for names, because that's what is relevant
            # here. If some docstrings are not indented, I don't care.
            while first_pos[1] <= self._scope.start_pos[1] \
                    and (token_type == tokenize.NAME or tok_str in ('(', '['))\
                    and self._scope != self.module:
                self._scope.end_pos = first_pos
                self._scope = self._scope.parent
                if isinstance(self._scope, pr.Module) \
                        and not isinstance(self._scope, pr.SubModule):
                    self._scope = self.module

            if isinstance(self._scope, pr.SubModule):
                use_as_parent_scope = self._top_module
            else:
                use_as_parent_scope = self._scope
            if tok_str == 'def':
                func = self._parse_function()
                if func is None:
                    debug.warning("function: syntax error@%s", first_pos[0])
                    continue
                self.freshscope = True
                self._scope = self._scope.add_scope(func, self._decorators)
                self._decorators = []
            elif tok_str == 'class':
                cls = self._parse_class()
                if cls is None:
                    debug.warning("class: syntax error@%s" % first_pos[0])
                    continue
                self.freshscope = True
                self._scope = self._scope.add_scope(cls, self._decorators)
                self._decorators = []
            # import stuff
            elif tok_str == 'import':
                imports = self._parse_import_list()
                for count, (m, alias, defunct) in enumerate(imports):
                    e = (alias or m or self._gen.previous).end_pos
                    end_pos = self._gen.previous.end_pos if count + 1 == len(imports) else e
                    i = pr.Import(self.module, first_pos, end_pos, m,
                                  alias, defunct=defunct)
                    self._check_user_stmt(i)
                    self._scope.add_import(i)
                if not imports:
                    i = pr.Import(self.module, first_pos, self._gen.current.end_pos,
                                  None, defunct=True)
                    self._check_user_stmt(i)
                self.freshscope = False
            elif tok_str == 'from':
                defunct = False
                # Take care of relative imports.
                relative_count = 0
                while True:
                    tok = next(self._gen)
                    if tok.string != '.':
                        break
                    relative_count += 1
                # the from import
                mod, tok = self._parse_dot_name(self._gen.current)
                tok_str = tok.string
                if str(mod) == 'import' and relative_count:
                    self._gen.push_last_back()
                    tok_str = 'import'
                    mod = None
                if not mod and not relative_count or tok_str != "import":
                    debug.warning("from: syntax error@%s", tok.start_pos[0])
                    defunct = True
                    if tok_str != 'import':
                        self._gen.push_last_back()
                names = self._parse_import_list()
                for count, (name, alias, defunct2) in enumerate(names):
                    star = name is not None and unicode(name.names[0]) == '*'
                    if star:
                        name = None
                    e = (alias or name or self._gen.previous).end_pos
                    end_pos = self._gen.previous.end_pos if count + 1 == len(names) else e
                    i = pr.Import(self.module, first_pos, end_pos, name,
                                  alias, mod, star, relative_count,
                                  defunct=defunct or defunct2)
                    self._check_user_stmt(i)
                    self._scope.add_import(i)
                self.freshscope = False
            # loops
            elif tok_str == 'for':
                set_stmt, tok = self._parse_statement(added_breaks=['in'],
                                                      names_are_set_vars=True)
                if tok.string != 'in':
                    debug.warning('syntax err, for flow incomplete @%s', tok.start_pos[0])

                try:
                    statement, tok = self._parse_statement()
                except StopIteration:
                    statement, tok = None, None
                s = [] if statement is None else [statement]
                f = pr.ForFlow(self.module, s, first_pos, set_stmt)
                self._scope = self._scope.add_statement(f)
                if tok is None or tok.string != ':':
                    debug.warning('syntax err, for flow started @%s', first_pos[0])
            elif tok_str in ['if', 'while', 'try', 'with'] + extended_flow:
                added_breaks = []
                command = tok_str
                if command in ('except', 'with'):
                    added_breaks.append(',')
                # multiple inputs because of with
                inputs = []
                first = True
                while first or command == 'with' and tok.string not in (':', '\n', '\r\n'):
                    statement, tok = \
                        self._parse_statement(added_breaks=added_breaks)
                    if command == 'except' and tok.string == ',':
                        # The except statement defines a var. This is only
                        # true for Python 2.
                        n, tok = self._parse_dot_name()
                        if n:
                            n.parent = statement
                            statement.as_names.append(n)
                    if statement:
                        inputs.append(statement)
                    first = False

                f = pr.Flow(self.module, command, inputs, first_pos)
                if command in extended_flow:
                    # The last statement has to be another part of the flow
                    # statement, because a dedent releases the main scope, so
                    # just take the last statement.
                    newline = endmarker.get_previous()
                    except IndexError:
                        return  # This means that the parser is empty.
                    while True:
                        if newline.value == '':
                            # Must be a DEDENT, just continue.
                    try:
                        s = self._scope.statements[-1].set_next(f)
                    except (AttributeError, IndexError):
                        # If set_next doesn't exist, just add it.
                        s = self._scope.add_statement(f)
                            newline = newline.get_previous()
                            except IndexError:
                                # If there's a statement that fails to be parsed, there
                                # will be no previous leaf. So just ignore it.
                                break
                        elif newline.value != '\n':
                            # This may happen if error correction strikes and removes
                            # a whole statement including '\n'.
                            break
                else:
                    s = self._scope.add_statement(f)
                self._scope = s
                if tok.string != ':':
                    debug.warning('syntax err, flow started @%s', tok.start_pos[0])
            # returns
            elif tok_str in ('return', 'yield'):
                s = tok.start_pos
                self.freshscope = False
                # Add returns to the scope.
                # Should be a function, otherwise just add it to a module!
                func = self._scope.get_parent_until((pr.Function, pr.Module))
                if tok_str == 'yield':
                    func.is_generator = True

                stmt, tok = self._parse_statement()
                if stmt is not None:
                    stmt.parent = use_as_parent_scope
                try:
                    kw_stmt = pr.KeywordStatement(tok_str, s,
                                                  use_as_parent_scope, stmt)
                    self._scope.statements.append(kw_stmt)
                    func.returns.append(kw_stmt)
                    # start_pos is the one of the return statement
                    stmt.start_pos = s
                except AttributeError:
                    debug.warning('return in non-function')
                    stmt = None
            elif tok_str == 'assert':
                stmt, tok = self._parse_statement()
                if stmt is not None:
                    stmt.parent = use_as_parent_scope
                    self._scope.statements.append(stmt)
                    self._scope.asserts.append(stmt)
            elif tok_str in STATEMENT_KEYWORDS:
                stmt, _ = self._parse_statement()
                kw = pr.KeywordStatement(tok_str, tok.start_pos,
                                         use_as_parent_scope, stmt)
                self._scope.add_statement(kw)
                if stmt is not None and tok_str == 'global':
                    for t in stmt._token_list:
                        if isinstance(t, pr.Name):
                            # Add the global to the top module, it counts there.
                            self.module.add_global(t)
            # decorator
            elif tok_str == '@':
                stmt, tok = self._parse_statement()
                if stmt is not None:
                    self._decorators.append(stmt)
            elif tok_str == 'pass':
                continue
            # default
            elif token_type in (tokenize.NAME, tokenize.STRING,
                                tokenize.NUMBER, tokenize.OP) \
                    or tok_str in statement_toks:
                # This is the main part: a name can be a function or a
                # normal var, which can follow anything. But this is done
                # by the statement parser.
                stmt, tok = self._parse_statement(self._gen.current,
                                                  maybe_docstr=True)
                if stmt:
                    self._scope.add_statement(stmt)
                self.freshscope = False
            else:
                if token_type not in (tokenize.COMMENT, tokenize.NEWLINE, tokenize.ENDMARKER):
                    debug.warning('Token not used: %s %s %s', tok_str,
                                  tokenize.tok_name[token_type], first_pos)
                continue
            self.no_docstr = False


class PushBackTokenizer(object):
    def __init__(self, tokenizer):
        self._tokenizer = tokenizer
        self._push_backs = []
        self.current = self.previous = tokenize.Token(None, '', (0, 0))

    def push_last_back(self):
        self._push_backs.append(self.current)

    def next(self):
        """ Python 2 Compatibility """
        return self.__next__()

    def __next__(self):
        if self._push_backs:
            return self._push_backs.pop(0)

        previous = self.current
        self.current = next(self._tokenizer)
        self.previous = previous
        return self.current

    def __iter__(self):
        return self
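
The same push-back idea, reduced to a self-contained sketch over plain values
(the class name and demo values are assumptions for illustration):

    class PushBack(object):
        def __init__(self, iterable):
            self._it = iter(iterable)
            self._push_backs = []
            self.current = None

        def push_last_back(self):
            # The current value will be yielded once more.
            self._push_backs.append(self.current)

        def __next__(self):
            if self._push_backs:
                return self._push_backs.pop(0)
            self.current = next(self._it)
            return self.current

        next = __next__  # Python 2 compatibility

        def __iter__(self):
            return self

    pb = PushBack('abc')
    print(next(pb))            # 'a'
    pb.push_last_back()
    print(next(pb), next(pb))  # 'a b' -- 'a' is replayed before 'b'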
                newline.value = ''
                if self._last_failed_start_pos > newline._start_pos:
                    # It may be the case that there was a syntax error in a
                    # function. In that case error correction removes the
                    # right newline. So we use the previously assigned
                    # _last_failed_start_pos variable to account for that.
                    endmarker._start_pos = self._last_failed_start_pos
                else:
                    endmarker._start_pos = newline._start_pos
                break

@@ -4,65 +4,106 @@ anything changes, it only reparses the changed parts. But because it's not
finished (and still not working as I want), I won't document it any further.
"""
import re
from itertools import chain

from jedi._compatibility import use_metaclass, unicode
from jedi._compatibility import use_metaclass
from jedi import settings
from jedi import common
from jedi.parser import Parser
from jedi.parser import representation as pr
from jedi.parser import tokenize
from jedi.parser import tree as pr
from jedi import cache
from jedi.parser.tokenize import (source_tokens, Token, FLOWS, NEWLINE,
                                  COMMENT, ENDMARKER)
from jedi import debug
from jedi.parser.tokenize import (source_tokens, NEWLINE,
                                  ENDMARKER, INDENT, DEDENT)

FLOWS = 'if', 'else', 'elif', 'while', 'with', 'try', 'except', 'finally', 'for'


class Module(pr.Module, pr.Simple):
    def __init__(self, parsers):
        super(Module, self).__init__(self, (1, 0))
        self.parsers = parsers
class FastModule(pr.Module):
    type = 'file_input'

    def __init__(self, module_path):
        super(FastModule, self).__init__([])
        self.modules = []
        self.reset_caches()

        self.start_pos = 1, 0
        self.end_pos = None, None
        self.names_dict = {}
        self.path = module_path

    def reset_caches(self):
        """ This module does a whole lot of caching, because it uses different
        parsers. """
        with common.ignored(AttributeError):
            del self._used_names

    def __getattr__(self, name):
        if name.startswith('__'):
            raise AttributeError('Not available!')
        else:
            return getattr(self.parsers[0].module, name)
        self.modules = []
        try:
            del self._used_names  # Remove the used names cache.
        except AttributeError:
            pass  # It was never used.

    @property
    @cache.underscore_memoization
    def used_names(self):
        used_names = {}
        for p in self.parsers:
            for k, statement_set in p.module.used_names.items():
                if k in used_names:
                    used_names[k] |= statement_set
                else:
                    used_names[k] = set(statement_set)
        return used_names
        return MergedNamesDict([m.used_names for m in self.modules])

    @property
    def global_names(self):
        return [name for m in self.modules for name in m.global_names]

    @property
    def error_statement_stacks(self):
        return [e for m in self.modules for e in m.error_statement_stacks]

    def __repr__(self):
        return "<fast.%s: %s@%s-%s>" % (type(self).__name__, self.name,
                                        self.start_pos[0], self.end_pos[0])

    # To avoid issues with the `parser.Parser`, we need setters that do
    # nothing, because pickle comes along and sets those values.
    @global_names.setter
    def global_names(self, value):
        pass

    @error_statement_stacks.setter
    def error_statement_stacks(self, value):
        pass

    @used_names.setter
    def used_names(self, value):
        pass


class MergedNamesDict(object):
    def __init__(self, dicts):
        self.dicts = dicts

    def __iter__(self):
        return iter(set(key for dct in self.dicts for key in dct))

    def __getitem__(self, value):
        return list(chain.from_iterable(dct.get(value, []) for dct in self.dicts))

    def items(self):
        dct = {}
        for d in self.dicts:
            for key, values in d.items():
                try:
                    dct_values = dct[key]
                    dct_values += values
                except KeyError:
                    dct[key] = list(values)
        return dct.items()

    def values(self):
        lst = []
        for dct in self.dicts:
            lst += dct.values()
        return lst
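
A quick usage sketch of MergedNamesDict with plain dicts standing in for the
per-module names_dicts (the keys and list values below are made up):

    d1 = {'foo': ['foo@1'], 'bar': ['bar@2']}
    d2 = {'foo': ['foo@10']}
    merged = MergedNamesDict([d1, d2])
    sorted(merged)      # ['bar', 'foo']
    merged['foo']       # ['foo@1', 'foo@10'] -- values are concatenated
    merged['missing']   # [] -- missing keys simply merge to an empty list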


class CachedFastParser(type):
    """ This is a metaclass for caching `FastParser`. """
    def __call__(self, source, module_path=None):
    def __call__(self, grammar, source, module_path=None):
        if not settings.fast_parser:
            return Parser(source, module_path)
            return Parser(grammar, source, module_path)

        pi = cache.parser_cache.get(module_path, None)
        if pi is None or isinstance(pi.parser, Parser):
            p = super(CachedFastParser, self).__call__(source, module_path)
            p = super(CachedFastParser, self).__call__(grammar, source, module_path)
        else:
            p = pi.parser  # pi is a `cache.ParserCacheItem`
            p.update(source)
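
The shape of this pattern in isolation: a metaclass whose __call__ returns a
cached instance per key instead of always constructing a new one (a minimal
sketch; the names are illustrative, not Jedi's API):

    class Cached(type):
        _cache = {}

        def __call__(cls, key, *args):
            try:
                return Cached._cache[key]
            except KeyError:
                # Construct and remember the instance for this key.
                inst = super(Cached, cls).__call__(key, *args)
                Cached._cache[key] = inst
                return inst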
@@ -70,397 +111,470 @@ class CachedFastParser(type):


class ParserNode(object):
    def __init__(self, parser, code, parent=None):
        self.parent = parent
    def __init__(self, fast_module, parser, source):
        self._fast_module = fast_module
        self.parent = None
        self._node_children = []

        self.children = []
        # Must be created before new things are added to it.
        self.save_contents(parser, code)

    def save_contents(self, parser, code):
        self.code = code
        self.hash = hash(code)
        self.source = source
        self.hash = hash(source)
        self.parser = parser

        try:
            # with fast_parser we have either 1 subscope or only statements.
            self.content_scope = parser.module.subscopes[0]
            # With fast_parser we have either 1 subscope or only statements.
            self._content_scope = parser.module.subscopes[0]
        except IndexError:
            self.content_scope = parser.module
            self._content_scope = parser.module
        else:
            self._rewrite_last_newline()

        scope = self.content_scope
        self._contents = {}
        for c in pr.SCOPE_CONTENTS:
            self._contents[c] = list(getattr(scope, c))
        self._is_generator = scope.is_generator
        # We need to be able to reset the original children of a parser.
        self._old_children = list(self._content_scope.children)

        self.old_children = self.children
        self.children = []
    def _rewrite_last_newline(self):
        """
        The ENDMARKER can contain a newline in the prefix. However this prefix
        really belongs to the function - respectively to the next function or
        parser node. If we don't rewrite that newline, we end up with a newline
        in the wrong position, i.e. at the end of the file instead of in the
        middle.
        """
        c = self._content_scope.children
        if pr.is_node(c[-1], 'suite'):  # In a simple_stmt there's no DEDENT.
            end_marker = self.parser.module.children[-1]
            # Set the DEDENT prefix instead of the ENDMARKER.
            c[-1].children[-1].prefix = end_marker.prefix
            end_marker.prefix = ''

    def reset_contents(self):
        scope = self.content_scope
        for key, c in self._contents.items():
            setattr(scope, key, list(c))
        scope.is_generator = self._is_generator
    def __repr__(self):
        module = self.parser.module
        try:
            return '<%s: %s-%s>' % (type(self).__name__, module.start_pos, module.end_pos)
        except IndexError:
            # There's no module yet.
            return '<%s: empty>' % type(self).__name__

        if self.parent is None:
            # Global vars of the first one can be deleted, in the global scope
            # they make no sense.
            self.parser.module.global_vars = []
    def reset_node(self):
        """
        Removes changes that were applied in this class.
        """
        self._node_children = []
        scope = self._content_scope
        scope.children = list(self._old_children)
        try:
            # This works if it's a MergedNamesDict.
            # We are correcting it, because the MergedNamesDicts are artificial
            # and can change after closing a node.
            scope.names_dict = scope.names_dict.dicts[0]
        except AttributeError:
            pass

        for c in self.children:
            c.reset_contents()
    def close(self):
        """
        Closes the current parser node. This means that after this no further
        nodes should be added anymore.
        """
        # We only need to replace the dict if multiple dictionaries are used:
        if self._node_children:
            dcts = [n.parser.module.names_dict for n in self._node_children]
            # Need to insert the own node as well.
            dcts.insert(0, self._content_scope.names_dict)
            self._content_scope.names_dict = MergedNamesDict(dcts)

    def parent_until_indent(self, indent=None):
        if indent is None or self.indent >= indent and self.parent:
            self.old_children = []
            if self.parent is not None:
                return self.parent.parent_until_indent(indent)
        if (indent is None or self._indent >= indent) and self.parent is not None:
            self.close()
            return self.parent.parent_until_indent(indent)
        return self

    @property
    def indent(self):
    def _indent(self):
        if not self.parent:
            return 0
        module = self.parser.module
        try:
            el = module.subscopes[0]
        except IndexError:
            try:
                el = module.statements[0]
            except IndexError:
                try:
                    el = module.imports[0]
                except IndexError:
                    try:
                        el = [r for r in module.returns if r is not None][0]
                    except IndexError:
                        return self.parent.indent + 1
        return el.start_pos[1]

    def _set_items(self, parser, set_parent=False):
        # insert parser objects into current structure
        scope = self.content_scope
        for c in pr.SCOPE_CONTENTS:
            content = getattr(scope, c)
            items = getattr(parser.module, c)
            if set_parent:
                for i in items:
                    if i is None:
                        continue  # happens with empty returns
                    i.parent = scope.use_as_parent
                    if isinstance(i, (pr.Function, pr.Class)):
                        for d in i.decorators:
                            d.parent = scope.use_as_parent
            content += items
        return self.parser.module.children[0].start_pos[1]

        # global_vars
        cur = self
        while cur.parent is not None:
            cur = cur.parent
        cur.parser.module.global_vars += parser.module.global_vars

        scope.is_generator |= parser.module.is_generator

    def add_node(self, node, set_parent=False):
    def add_node(self, node, line_offset):
        """Adding a node means adding a node that was already added earlier."""
        self.children.append(node)
        self._set_items(node.parser, set_parent=set_parent)
        node.old_children = node.children  # TODO potential memory leak?
        node.children = []
        # Changing the line offsets is very important, because if they don't
        # fit, all the start_pos values will be wrong.
        m = node.parser.module
        node.parser.position_modifier.line = line_offset
        self._fast_module.modules.append(m)
        node.parent = self

        self._node_children.append(node)

        # Insert parser objects into the current structure. We only need to
        # set the parents and children in a good way.
        scope = self._content_scope
        for child in m.children:
            child.parent = scope
            scope.children.append(child)

        scope = self.content_scope
        while scope is not None:
            #print('x', scope)
            if not isinstance(scope, pr.SubModule):
                # TODO This seems like a strange thing. Check again.
                scope.end_pos = node.content_scope.end_pos
            scope = scope.parent
        return node

    def add_parser(self, parser, code):
        return self.add_node(ParserNode(parser, code, self), True)
    def all_sub_nodes(self):
        """
        Returns all nodes including nested ones.
        """
        for n in self._node_children:
            yield n
            for y in n.all_sub_nodes():
                yield y

    @cache.underscore_memoization  # Should only happen once!
    def remove_last_newline(self):
        self.parser.remove_last_newline()


class FastParser(use_metaclass(CachedFastParser)):
    _FLOWS_NEED_SPACE = 'if', 'elif', 'while', 'with', 'except', 'for'
    _FLOWS_NEED_COLON = 'else', 'try', 'except', 'finally'
    _keyword_re = re.compile('^[ \t]*(def |class |@|(?:%s)|(?:%s)\s*:)'
                             % ('|'.join(_FLOWS_NEED_SPACE),
                                '|'.join(_FLOWS_NEED_COLON)))

    _keyword_re = re.compile('^[ \t]*(def|class|@|%s)' % '|'.join(tokenize.FLOWS))

    def __init__(self, code, module_path=None):
    def __init__(self, grammar, source, module_path=None):
        # set values like `pr.Module`.
        self._grammar = grammar
        self.module_path = module_path
        self._reset_caches()
        self.update(source)

        self.current_node = None
        self.parsers = []
        self.module = Module(self.parsers)
        self.reset_caches()
    def _reset_caches(self):
        self.module = FastModule(self.module_path)
        self.current_node = ParserNode(self.module, self, '')

    def update(self, source):
        # For testing purposes: It is important that the number of parsers used
        # can be minimized. With these variables we can test against that.
        self.number_parsers_used = 0
        self.number_of_splits = 0
        self.number_of_misses = 0
        self.module.reset_caches()
        try:
            self._parse(code)
            self._parse(source)
        except:
            # FastParser is cached, be careful with exceptions
            del self.parsers[:]
            # FastParser is cached, be careful with exceptions.
            self._reset_caches()
            raise

    def update(self, code):
        self.reset_caches()

        try:
            self._parse(code)
        except:
            # FastParser is cached, be careful with exceptions
            del self.parsers[:]
            raise

    def _split_parts(self, code):
    def _split_parts(self, source):
        """
        Split the code into different parts. This makes it possible to parse
        each part separately and therefore cache parts of the file and not
        everything.
        Split the source code into different parts. This makes it possible to
        parse each part separately and therefore cache parts of the file and
        not everything.
        """
        def gen_part():
            text = '\n'.join(current_lines)
            text = ''.join(current_lines)
            del current_lines[:]
            self.number_of_splits += 1
            return text

        def just_newlines(current_lines):
            for line in current_lines:
                line = line.lstrip('\t \n\r')
                if line and line[0] != '#':
                    return False
            return True

        # Split only new lines. Distinction between \r\n is the tokenizer's
        # job.
        self._lines = code.split('\n')
        # It seems like there's no problem with form feed characters here,
        # because we're not counting lines.
        self._lines = source.splitlines(True)
        current_lines = []
        is_decorator = False
        current_indent = 0
        old_indent = 0
        # Use -1, because that indent is always smaller than any other.
        indent_list = [-1, 0]
        new_indent = False
        in_flow = False
        parentheses_level = 0
        flow_indent = None
        previous_line = None
        # All things within flows are simply being ignored.
        for l in self._lines:
        for i, l in enumerate(self._lines):
            # Handle backslash newline escaping.
            if l.endswith('\\\n') or l.endswith('\\\r\n'):
                if previous_line is not None:
                    previous_line += l
                else:
                    previous_line = l
                continue
            if previous_line is not None:
                l = previous_line + l
                previous_line = None

            # check for dedents
            s = l.lstrip('\t ')
            s = l.lstrip('\t \n\r')
            indent = len(l) - len(s)
            if not s or s[0] in ('#', '\r'):
                current_lines.append(l)  # just ignore comments and blank lines
            if not s or s[0] == '#':
                current_lines.append(l)  # Just ignore comments and blank lines
                continue

            if indent < current_indent:  # -> dedent
                current_indent = indent
                new_indent = False
                if not in_flow or indent < old_indent:
                    if current_lines:
                        yield gen_part()
                in_flow = False
            elif new_indent:
                current_indent = indent
            if new_indent:
                if indent > indent_list[-2]:
                    # Set the actual indent, not just the random old indent + 1.
                    indent_list[-1] = indent
                new_indent = False

            while indent <= indent_list[-2]:  # -> dedent
                indent_list.pop()
                # This automatically resets the flow_indent if there was a
                # dedent or a flow just on one line (with one simple_stmt).
                new_indent = False
                if flow_indent is None and current_lines and not parentheses_level:
                    yield gen_part()
                flow_indent = None

            # Check lines for functions/classes and split the code there.
            if not in_flow:
            if flow_indent is None:
                m = self._keyword_re.match(l)
                if m:
                    in_flow = m.group(1) in tokenize.FLOWS
                    if not is_decorator and not in_flow:
                        if current_lines:
                    # Strip whitespace and colon from flows as a check.
                    if m.group(1).strip(' \t\r\n:') in FLOWS:
                        if not parentheses_level:
                            flow_indent = indent
                    else:
                        if not is_decorator and not just_newlines(current_lines):
                            yield gen_part()
                        is_decorator = '@' == m.group(1)
                        if not is_decorator:
                            old_indent = current_indent
                            current_indent += 1  # it must be higher
                            parentheses_level = 0
                            # The new indent needs to be higher
                            indent_list.append(indent + 1)
                            new_indent = True
                elif is_decorator:
                    is_decorator = False

            parentheses_level = \
                max(0, (l.count('(') + l.count('[') + l.count('{')
                        - l.count(')') - l.count(']') - l.count('}')))

            current_lines.append(l)
        if current_lines:
            yield gen_part()
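
The core of the split heuristic in isolation: start a new cacheable part at
def/class/@ lines (a simplified sketch; the regex and function name are
stand-ins for the real _keyword_re logic, which also handles flows and
indentation):

    import re

    KEYWORD_RE = re.compile(r'^[ \t]*(def |class |@)')

    def split_parts(source):
        parts, current = [], []
        for line in source.splitlines(True):
            if KEYWORD_RE.match(line) and current:
                parts.append(''.join(current))
                current = []
            current.append(line)
        if current:
            parts.append(''.join(current))
        return parts

    src = 'import os\n\ndef f():\n    pass\n\ndef g():\n    pass\n'
    print(len(split_parts(src)))  # 3 parts: the imports, f() and g()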

    def _parse(self, code):
        """ :type code: str """
        def empty_parser():
            new, temp = self._get_parser(unicode(''), unicode(''), 0, [], False)
            return new
    def _parse(self, source):
        """ :type source: str """
        added_newline = False
        if not source or source[-1] != '\n':
            # To be compatible with Python's grammar, we need a newline at the
            # end. The parser would handle it, but since the fast parser abuses
            # the normal parser in various ways, we need to care for this
            # ourselves.
            source += '\n'
            added_newline = True

        del self.parsers[:]

        line_offset = 0
        next_line_offset = line_offset = 0
        start = 0
        p = None
        is_first = True
        for code_part in self._split_parts(code):
            if is_first or line_offset >= p.module.end_pos[0]:
                indent = len(code_part) - len(code_part.lstrip('\t '))
                if is_first and self.current_node is not None:
                    nodes = [self.current_node]
                else:
                    nodes = []
                if self.current_node is not None:
                    self.current_node = \
                        self.current_node.parent_until_indent(indent)
                    nodes += self.current_node.old_children
        nodes = list(self.current_node.all_sub_nodes())
        # Now we can reset the node, because we have all the old nodes.
        self.current_node.reset_node()
        last_end_line = 1

                # check if code_part has already been parsed
                # print '#'*45,line_offset, p and p.module.end_pos, '\n', code_part
                p, node = self._get_parser(code_part, code[start:],
                                           line_offset, nodes, not is_first)
        for code_part in self._split_parts(source):
            next_line_offset += code_part.count('\n')
            # If the last code part parsed isn't equal to the current end_pos,
            # we know that the parser went further (a `def` start in a
            # docstring). So just parse the next part.
            if line_offset + 1 == last_end_line:
                self.current_node = self._get_node(code_part, source[start:],
                                                   line_offset, nodes)
            else:
                # Means that some lines were not fully parsed. Parse them now.
                # This is a very rare case. Should only happen with very
                # strange code bits.
                self.number_of_misses += 1
                while last_end_line < next_line_offset + 1:
                    line_offset = last_end_line - 1
                    # We could calculate the src in a more complicated way to
                    # make caching here possible as well. However, this is
                    # complicated and error-prone. Since this is not very often
                    # called - just ignore it.
                    src = ''.join(self._lines[line_offset:])
                    self.current_node = self._get_node(code_part, src,
                                                       line_offset, nodes)
                    last_end_line = self.current_node.parser.module.end_pos[0]

                # The actually used code_part is different from the given code
                # part, because of docstrings, for example, there's a chance
                # that splits are wrong.
                used_lines = self._lines[line_offset:p.module.end_pos[0]]
                code_part_actually_used = '\n'.join(used_lines)
                debug.dbg('While parsing %s, line %s slowed down the fast parser.',
                          self.module_path, line_offset + 1)

                if is_first and p.module.subscopes:
                    # Special case: we cannot use a function subscope as a
                    # base scope; subscopes would save all the other contents.
                    new = empty_parser()
                    if self.current_node is None:
                        self.current_node = ParserNode(new, '')
                    else:
                        self.current_node.save_contents(new, '')
                    self.parsers.append(new)
                    is_first = False
            line_offset = next_line_offset
            start += len(code_part)

                if is_first:
                    if self.current_node is None:
                        self.current_node = ParserNode(p, code_part_actually_used)
                    else:
                        self.current_node.save_contents(p, code_part_actually_used)
                else:
                    if node is None:
                        self.current_node = \
                            self.current_node.add_parser(p, code_part_actually_used)
                    else:
                        self.current_node = self.current_node.add_node(node)
        last_end_line = self.current_node.parser.module.end_pos[0]

                self.parsers.append(p)
        if added_newline:
            self.current_node.remove_last_newline()

                is_first = False
            #else:
                #print '#'*45, line_offset, p.module.end_pos, 'theheck\n', repr(code_part)
        # Now that the for loop is finished, we still want to close all nodes.
        self.current_node = self.current_node.parent_until_indent()
        self.current_node.close()

            line_offset += code_part.count('\n') + 1
            start += len(code_part) + 1  # +1 for newline
        debug.dbg('Parsed %s, with %s parsers in %s splits.'
                  % (self.module_path, self.number_parsers_used,
                     self.number_of_splits))

        if self.parsers:
            self.current_node = self.current_node.parent_until_indent()
        else:
            self.parsers.append(empty_parser())
    def _get_node(self, source, parser_code, line_offset, nodes):
        """
        Side effect: Alters the list of nodes.
        """
        indent = len(source) - len(source.lstrip('\t '))
        self.current_node = self.current_node.parent_until_indent(indent)

        self.module.end_pos = self.parsers[-1].module.end_pos

        # print(self.parsers[0].module.get_code())

    def _get_parser(self, code, parser_code, line_offset, nodes, no_docstr):
        h = hash(code)
        h = hash(source)
        for index, node in enumerate(nodes):
            if node.hash != h or node.code != code:
                continue

            if node != self.current_node:
                offset = int(nodes[0] == self.current_node)
                self.current_node.old_children.pop(index - offset)
            p = node.parser
            m = p.module
            m.line_offset += line_offset + 1 - m.start_pos[0]
            break
            if node.hash == h and node.source == source:
                node.reset_node()
                nodes.remove(node)
                break
        else:
            tokenizer = FastTokenizer(parser_code, line_offset)
            p = Parser(parser_code, self.module_path, tokenizer=tokenizer,
                       top_module=self.module, no_docstr=no_docstr)
            p.module.parent = self.module
            node = None
            tokenizer = FastTokenizer(parser_code)
            self.number_parsers_used += 1
            p = Parser(self._grammar, parser_code, self.module_path, tokenizer=tokenizer)

        return p, node
            end = line_offset + p.module.end_pos[0]
            used_lines = self._lines[line_offset:end - 1]
            code_part_actually_used = ''.join(used_lines)

    def reset_caches(self):
        self.module.reset_caches()
        if self.current_node is not None:
            self.current_node.reset_contents()
            node = ParserNode(self.module, p, code_part_actually_used)

        self.current_node.add_node(node, line_offset)
        return node


class FastTokenizer(object):
    """
    Breaks when certain conditions are met, i.e. a new function or class opens.
    """
    def __init__(self, source, line_offset=0):
    def __init__(self, source):
        self.source = source
        self.gen = source_tokens(source, line_offset)
        self.closed = False
        self._gen = source_tokens(source)
        self._closed = False

        # fast parser options
        self.current = self.previous = Token(None, '', (0, 0))
        self.in_flow = False
        self.new_indent = False
        self.parser_indent = self.old_parser_indent = 0
        self.is_decorator = False
        self.first_stmt = True
        self.parentheses_level = 0
        self.current = self.previous = NEWLINE, '', (0, 0)
        self._in_flow = False
        self._is_decorator = False
        self._first_stmt = True
        self._parentheses_level = 0
        self._indent_counter = 0
        self._flow_indent_counter = 0
        self._returned_endmarker = False
        self._expect_indent = False

    def __iter__(self):
        return self

    def next(self):
        """ Python 2 Compatibility """
        return self.__next__()

    def __next__(self):
        if self.closed:
            raise common.MultiLevelStopIteration()
        if self._closed:
            return self._finish_dedents()

        current = next(self.gen)
        tok_type = current.type
        tok_str = current.string
        if tok_type == ENDMARKER:
            raise common.MultiLevelStopIteration()
        typ, value, start_pos, prefix = current = next(self._gen)
        if typ == ENDMARKER:
            self._closed = True
            self._returned_endmarker = True
            return current

        self.previous = self.current
        self.current = current

        # This is exactly the same check as in fast_parser, but this time with
        # tokenize and therefore precise.
        breaks = ['def', 'class', '@']
        if typ == INDENT:
            self._indent_counter += 1
            if not self._expect_indent and not self._first_stmt and not self._in_flow:
                # This does not mean that there is an actual flow; it means
                # that the INDENT is syntactically wrong.
                self._flow_indent_counter = self._indent_counter - 1
                self._in_flow = True
            self._expect_indent = False
        elif typ == DEDENT:
            self._indent_counter -= 1
            if self._in_flow:
                if self._indent_counter == self._flow_indent_counter:
                    self._in_flow = False
            else:
                self._closed = True
            return current

        def close():
            if not self.first_stmt:
                self.closed = True
                raise common.MultiLevelStopIteration()
        if value in ('def', 'class') and self._parentheses_level \
                and re.search(r'\n[ \t]*\Z', prefix):
            # Account for the fact that an open parenthesis before a function
            # will reset the parentheses counter, but new lines before will
            # still be ignored. So check the prefix.

            # Ignore comments/newlines, irrelevant for indentation.
            if self.previous.type in (None, NEWLINE) \
                    and tok_type not in (COMMENT, NEWLINE):
                # print c, tok_name[c[0]]
                indent = current.start_pos[1]
                if self.parentheses_level:
                    # parentheses ignore the indentation rules.
                    pass
                elif indent < self.parser_indent:  # -> dedent
                    self.parser_indent = indent
                    self.new_indent = False
                    if not self.in_flow or indent < self.old_parser_indent:
                        close()
            # TODO what about flow parentheses counter resets in the tokenizer?
            self._parentheses_level = 0
            return self._close()

            self.in_flow = False
        elif self.new_indent:
            self.parser_indent = indent
            self.new_indent = False
        # Parentheses ignore the indentation rules. The other three stand for
        # new lines.
        if self.previous[0] in (NEWLINE, INDENT, DEDENT) \
                and not self._parentheses_level and typ not in (INDENT, DEDENT):
            if not self._in_flow:
                if value in FLOWS:
                    self._flow_indent_counter = self._indent_counter
                    self._first_stmt = False
                elif value in ('def', 'class', '@'):
                    # The values here are exactly the same check as in
                    # _split_parts, but this time with tokenize and therefore
                    # precise.
                    if not self._first_stmt and not self._is_decorator:
                        return self._close()

        if not self.in_flow:
            if tok_str in FLOWS or tok_str in breaks:
                self.in_flow = tok_str in FLOWS
                if not self.is_decorator and not self.in_flow:
                    close()
                    self._is_decorator = '@' == value
                    if not self._is_decorator:
                        self._first_stmt = False
                        self._expect_indent = True
                elif self._expect_indent:
                    return self._close()
                else:
                    self._first_stmt = False

                self.is_decorator = '@' == tok_str
                if not self.is_decorator:
                    self.old_parser_indent = self.parser_indent
                    self.parser_indent += 1  # new scope: must be higher
                    self.new_indent = True

        if tok_str != '@':
            if self.first_stmt and not self.new_indent:
                self.parser_indent = indent
            self.first_stmt = False

        # Ignore closing parentheses, because they are all
        # irrelevant for the indentation.

        if tok_str in '([{':
            self.parentheses_level += 1
        elif tok_str in ')]}':
            self.parentheses_level = max(self.parentheses_level - 1, 0)
        if value in '([{' and value:
            self._parentheses_level += 1
        elif value in ')]}' and value:
            # Ignore closing parentheses, because they are all
            # irrelevant for the indentation.
            self._parentheses_level = max(self._parentheses_level - 1, 0)
        return current

    def _close(self):
        if self._first_stmt:
            # Continue like nothing has happened, because we want to enter
            # the first class/function.
            if self.current[1] != '@':
                self._first_stmt = False
            return self.current
        else:
            self._closed = True
            return self._finish_dedents()

    def _finish_dedents(self):
        if self._indent_counter:
            self._indent_counter -= 1
            return DEDENT, '', self.current[2], ''
        elif not self._returned_endmarker:
            self._returned_endmarker = True
            return ENDMARKER, '', self.current[2], self._get_prefix()
        else:
            raise StopIteration
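
The shutdown sequence in miniature: once closed, the tokenizer drains one
synthetic DEDENT per open indent level and then yields a single ENDMARKER
(a toy sketch with strings in place of token tuples):

    def finish(indent_counter):
        for _ in range(indent_counter):
            yield 'DEDENT'
        yield 'ENDMARKER'

    print(list(finish(2)))  # ['DEDENT', 'DEDENT', 'ENDMARKER']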

    def _get_prefix(self):
        """
        We're using the current prefix for the endmarker to not lose any
        information. However we do care about "lost" lines. The prefix of the
        current line (indent) will always be included in the current line.
        """
        cur = self.current
        while cur[0] == DEDENT:
            cur = next(self._gen)
        prefix = cur[3]

        # \Z matches the end of the string. $ is bugged, because it has the
        # same behavior with or without re.MULTILINE.
        return re.sub(r'[^\n]+\Z', '', prefix)
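
Why \Z rather than $ here: $ also matches just before a trailing newline, so
it would strip the final line even when the prefix already ends in '\n',
while \Z only matches at the very end of the string. A quick check:

    import re
    re.sub(r'[^\n]+\Z', '', '# comment\n    ')  # '# comment\n' (indent stripped)
    re.sub(r'[^\n]+\Z', '', '# comment\n')      # '# comment\n' (unchanged)
    re.sub(r'[^\n]+$', '', '# comment\n')       # '\n'          (comment stripped too)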

163
jedi/parser/grammar2.7.txt
Normal file
@@ -0,0 +1,163 @@
# Grammar for 2to3. This grammar supports Python 2.x and 3.x.

# Note: Changing the grammar specified in this file will most likely
# require corresponding changes in the parser module
# (../Modules/parsermodule.c). If you can't make the changes to
# that module yourself, please co-ordinate the required changes
# with someone who can; ask around on python-dev for help. Fred
# Drake <fdrake@acm.org> will probably be listening there.

# NOTE WELL: You should also follow all the steps listed in PEP 306,
# "How to Change Python's Grammar"

# Commands for Kees Blom's railroad program
#diagram:token NAME
#diagram:token NUMBER
#diagram:token STRING
#diagram:token NEWLINE
#diagram:token ENDMARKER
#diagram:token INDENT
#diagram:output\input python.bla
#diagram:token DEDENT
#diagram:output\textwidth 20.04cm\oddsidemargin 0.0cm\evensidemargin 0.0cm
#diagram:rules

# Start symbols for the grammar:
# file_input is a module or sequence of commands read from an input file;
# single_input is a single interactive statement;
# eval_input is the input for the eval() and input() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER

decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef)
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: ((tfpdef ['=' test] ',')*
                ('*' [tname] (',' tname ['=' test])* [',' '**' tname] | '**' tname)
                | tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
tname: NAME [':' test]
tfpdef: tname | '(' tfplist ')'
tfplist: tfpdef (',' tfpdef)* [',']
varargslist: ((vfpdef ['=' test] ',')*
              ('*' [vname] (',' vname ['=' test])* [',' '**' vname] | '**' vname)
              | vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
vname: NAME
vfpdef: vname | '(' vfplist ')'
vfplist: vfpdef (',' vfpdef)* [',']

stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt |
             import_stmt | global_stmt | exec_stmt | assert_stmt)
expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) |
                               ('=' (yield_expr|testlist_star_expr))*)
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
            '<<=' | '>>=' | '**=' | '//=')
# For normal assignments, additional restrictions enforced by the interpreter
print_stmt: 'print' ( [ test (',' test)* [','] ] |
                      '>>' test [ (',' test)+ [','] ] )
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
              'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: ('global' | 'nonlocal') NAME (',' NAME)*
exec_stmt: 'exec' expr ['in' test [',' test]]
assert_stmt: 'assert' test [',' test]

compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
           ((except_clause ':' suite)+
            ['else' ':' suite]
            ['finally' ':' suite] |
            'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
with_var: 'as' expr
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test [(',' | 'as') test]]
# Edit by David Halter: The stmt is now optional. This reflects how Jedi allows
# classes and functions to be empty, which is beneficial for autocompletion.
suite: simple_stmt | NEWLINE INDENT stmt* DEDENT

# Backward compatibility cruft to support:
# [ x for x in lambda: True, lambda: False if x() ]
# even while also allowing:
# lambda x: 5 if x else 2
# (But not a mix of the two)
testlist_safe: old_test [(',' old_test)+ [',']]
old_test: or_test | old_lambdef
old_lambdef: 'lambda' [varargslist] ':' old_test

test: or_test ['if' or_test 'else' test] | lambdef
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
       '[' [testlist_comp] ']' |
       '{' [dictorsetmaker] '}' |
       '`' testlist1 '`' |
       NAME | NUMBER | STRING+ | '.' '.' '.')
# Modification by David Halter, remove `testlist_gexp` and `listmaker`
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
lambdef: 'lambda' [varargslist] ':' test
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
# Modification by David Halter, dictsetmaker -> dictorsetmaker (so that it's
# the same as in the 3.4 grammar).
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
                  (test (comp_for | (',' test)* [','])) )

classdef: 'class' NAME ['(' [arglist] ')'] ':' suite

arglist: (argument ',')* (argument [',']
                          |'*' test (',' argument)* [',' '**' test]
                          |'**' test)
argument: test [comp_for] | test '=' test  # Really [keyword '='] test

comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' testlist_safe [comp_iter]
comp_if: 'if' old_test [comp_iter]

testlist1: test (',' test)*

# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME

yield_expr: 'yield' [testlist]
135
jedi/parser/grammar3.4.txt
Normal file
@@ -0,0 +1,135 @@
# Grammar for Python

# Note: Changing the grammar specified in this file will most likely
# require corresponding changes in the parser module
# (../Modules/parsermodule.c). If you can't make the changes to
# that module yourself, please co-ordinate the required changes
# with someone who can; ask around on python-dev for help. Fred
# Drake <fdrake@acm.org> will probably be listening there.

# NOTE WELL: You should also follow all the steps listed in PEP 306,
# "How to Change Python's Grammar"

# Start symbols for the grammar:
# single_input is a single interactive statement;
# file_input is a module or sequence of commands read from an input file;
# eval_input is the input for the eval() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER

decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef)
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: (tfpdef ['=' test] (',' tfpdef ['=' test])* [','
                ['*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef]]
                | '*' [tfpdef] (',' tfpdef ['=' test])* [',' '**' tfpdef] | '**' tfpdef)
tfpdef: NAME [':' test]
varargslist: (vfpdef ['=' test] (',' vfpdef ['=' test])* [','
              ['*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef]]
              | '*' [vfpdef] (',' vfpdef ['=' test])* [',' '**' vfpdef] | '**' vfpdef)
vfpdef: NAME

stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | del_stmt | pass_stmt | flow_stmt |
             import_stmt | global_stmt | nonlocal_stmt | assert_stmt)
expr_stmt: testlist_star_expr (augassign (yield_expr|testlist) |
                               ('=' (yield_expr|testlist_star_expr))*)
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '/=' | '%=' | '&=' | '|=' | '^=' |
            '<<=' | '>>=' | '**=' | '//=')
# For normal assignments, additional restrictions enforced by the interpreter
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
# note below: the ('.' | '...') is necessary because '...' is tokenized as ELLIPSIS
import_from: ('from' (('.' | '...')* dotted_name | ('.' | '...')+)
              'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: 'global' NAME (',' NAME)*
nonlocal_stmt: 'nonlocal' NAME (',' NAME)*
assert_stmt: 'assert' test [',' test]

compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
           ((except_clause ':' suite)+
            ['else' ':' suite]
            ['finally' ':' suite] |
            'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test ['as' NAME]]
# Edit by David Halter: The stmt is now optional. This reflects how Jedi allows
# classes and functions to be empty, which is beneficial for autocompletion.
suite: simple_stmt | NEWLINE INDENT stmt* DEDENT

test: or_test ['if' or_test 'else' test] | lambdef
test_nocond: or_test | lambdef_nocond
lambdef: 'lambda' [varargslist] ':' test
lambdef_nocond: 'lambda' [varargslist] ':' test_nocond
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
# <> isn't actually a valid comparison operator in Python. It's here for the
# sake of a __future__ import described in PEP 401
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_comp] ')' |
       '[' [testlist_comp] ']' |
       '{' [dictorsetmaker] '}' |
       NAME | NUMBER | STRING+ | '...' | 'None' | 'True' | 'False')
testlist_comp: (test|star_expr) ( comp_for | (',' (test|star_expr))* [','] )
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictorsetmaker: ( (test ':' test (comp_for | (',' test ':' test)* [','])) |
                  (test (comp_for | (',' test)* [','])) )

classdef: 'class' NAME ['(' [arglist] ')'] ':' suite

arglist: (argument ',')* (argument [',']
                          |'*' test (',' argument)* [',' '**' test]
                          |'**' test)
# The reason that keywords are test nodes instead of NAME is that using NAME
# results in an ambiguity. ast.c makes sure it's a NAME.
argument: test [comp_for] | test '=' test  # Really [keyword '='] test
comp_iter: comp_for | comp_if
comp_for: 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' test_nocond [comp_iter]

# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME

yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist
8
jedi/parser/pgen2/__init__.py
Normal file
@@ -0,0 +1,8 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# Modifications:
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.
125
jedi/parser/pgen2/grammar.py
Normal file
@@ -0,0 +1,125 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# Modifications:
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.

"""This module defines the data structures used to represent a grammar.

These are a bit arcane because they are derived from the data
structures used by Python's 'pgen' parser generator.

There's also a table here mapping operators to their names in the
token module; the Python tokenize module reports all operators as the
fallback token code OP, but the parser needs the actual token code.

"""

# Python imports
import pickle


class Grammar(object):
    """Pgen parsing tables conversion class.

    Once initialized, this class supplies the grammar tables for the
    parsing engine implemented by parse.py. The parsing engine
    accesses the instance variables directly. The class here does not
    provide initialization of the tables; several subclasses exist to
    do this (see the conv and pgen modules).

    The load() method reads the tables from a pickle file, which is
    much faster than the other ways offered by subclasses. The pickle
    file is written by calling dump() (after loading the grammar
    tables using a subclass). The report() method prints a readable
    representation of the tables to stdout, for debugging.

    The instance variables are as follows:

    symbol2number -- a dict mapping symbol names to numbers. Symbol
                     numbers are always 256 or higher, to distinguish
                     them from token numbers, which are between 0 and
                     255 (inclusive).

    number2symbol -- a dict mapping numbers to symbol names;
                     these two are each other's inverse.

    states        -- a list of DFAs, where each DFA is a list of
                     states, each state is a list of arcs, and each
                     arc is a (i, j) pair where i is a label and j is
                     a state number. The DFA number is the index into
                     this list. (This name is slightly confusing.)
                     Final states are represented by a special arc of
                     the form (0, j) where j is its own state number.

    dfas          -- a dict mapping symbol numbers to (DFA, first)
                     pairs, where DFA is an item from the states list
                     above, and first is a set of tokens that can
                     begin this grammar rule (represented by a dict
                     whose values are always 1).

    labels        -- a list of (x, y) pairs where x is either a token
                     number or a symbol number, and y is either None
                     or a string; the strings are keywords. The label
                     number is the index in this list; label numbers
                     are used to mark state transitions (arcs) in the
                     DFAs.

    start         -- the number of the grammar's start symbol.

    keywords      -- a dict mapping keyword strings to arc labels.

    tokens        -- a dict mapping token numbers to arc labels.

    """

    def __init__(self):
        self.symbol2number = {}
        self.number2symbol = {}
        self.states = []
        self.dfas = {}
        self.labels = [(0, "EMPTY")]
        self.keywords = {}
        self.tokens = {}
        self.symbol2label = {}
        self.start = 256

    def dump(self, filename):
        """Dump the grammar tables to a pickle file."""
        with open(filename, "wb") as f:
            pickle.dump(self.__dict__, f, 2)

    def load(self, filename):
        """Load the grammar tables from a pickle file."""
        with open(filename, "rb") as f:
            d = pickle.load(f)
        self.__dict__.update(d)

    def copy(self):
        """
        Copy the grammar.
        """
        new = self.__class__()
        for dict_attr in ("symbol2number", "number2symbol", "dfas", "keywords",
                          "tokens", "symbol2label"):
            setattr(new, dict_attr, getattr(self, dict_attr).copy())
        new.labels = self.labels[:]
        new.states = self.states[:]
        new.start = self.start
        return new

    def report(self):
        """Dump the grammar tables to standard output, for debugging."""
        from pprint import pprint
        print("s2n")
        pprint(self.symbol2number)
        print("n2s")
        pprint(self.number2symbol)
        print("states")
        pprint(self.states)
        print("dfas")
        pprint(self.dfas)
        print("labels")
        pprint(self.labels)
        print("start", self.start)
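For readers new to pgen2: dump() and load() above are nothing more than a pickle round-trip of the instance __dict__. A minimal sketch of the idea (the file name and the toy table entry are made up for illustration):

from jedi.parser.pgen2.grammar import Grammar

g = Grammar()
g.symbol2number['file_input'] = 256      # toy entry, not a real table
g.dump('grammar-example.pickle')         # pickles self.__dict__

g2 = Grammar()
g2.load('grammar-example.pickle')        # updates __dict__ from the pickle
assert g2.symbol2number == g.symbol2number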
205
jedi/parser/pgen2/parse.py
Normal file
@@ -0,0 +1,205 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# Modifications:
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.

"""
Parser engine for the grammar tables generated by pgen.

The grammar table must be loaded first.

See Parser/parser.c in the Python distribution for additional info on
how this parsing engine works.
"""

# Local imports
from jedi.parser import tokenize


class ParseError(Exception):
    """Exception to signal the parser is stuck."""

    def __init__(self, msg, type, value, start_pos):
        Exception.__init__(self, "%s: type=%r, value=%r, start_pos=%r" %
                           (msg, tokenize.tok_name[type], value, start_pos))
        self.msg = msg
        self.type = type
        self.value = value
        self.start_pos = start_pos


class PgenParser(object):
    """Parser engine.

    The proper usage sequence is:

    p = Parser(grammar, [converter])  # create instance
    p.setup([start])                  # prepare for parsing
    <for each input token>:
        if p.addtoken(...):           # parse a token; may raise ParseError
            break
    root = p.rootnode                 # root of abstract syntax tree

    A Parser instance may be reused by calling setup() repeatedly.

    A Parser instance contains state pertaining to the current token
    sequence, and should not be used concurrently by different threads
    to parse separate token sequences.

    See driver.py for how to get input tokens by tokenizing a file or
    string.

    Parsing is complete when addtoken() returns True; the root of the
    abstract syntax tree can then be retrieved from the rootnode
    instance variable. When a syntax error occurs, addtoken() raises
    the ParseError exception. There is no error recovery; the parser
    cannot be used after a syntax error was reported (but it can be
    reinitialized by calling setup()).

    """

    def __init__(self, grammar, convert_node, convert_leaf, error_recovery):
        """Constructor.

        The grammar argument is a grammar.Grammar instance; see the
        grammar module for more information.

        The parser is not ready yet for parsing; you must call the
        setup() method to get it started.

        The optional convert argument is a function mapping concrete
        syntax tree nodes to abstract syntax tree nodes. If not
        given, no conversion is done and the syntax tree produced is
        the concrete syntax tree. If given, it must be a function of
        two arguments, the first being the grammar (a grammar.Grammar
        instance), and the second being the concrete syntax tree node
        to be converted. The syntax tree is converted from the bottom
        up.

        A concrete syntax tree node is a (type, nodes) tuple, where
        type is the node type (a token or symbol number) and nodes
        is a list of children for symbols, and None for tokens.

        An abstract syntax tree node may be anything; this is entirely
        up to the converter function.

        """
        self.grammar = grammar
        self.convert_node = convert_node
        self.convert_leaf = convert_leaf

        # Prepare for parsing.
        start = self.grammar.start
        # Each stack entry is a tuple: (dfa, state, node).
        # A node is a tuple: (type, children),
        # where children is a list of nodes or None
        newnode = (start, [])
        stackentry = (self.grammar.dfas[start], 0, newnode)
        self.stack = [stackentry]
        self.rootnode = None
        self.error_recovery = error_recovery

    def parse(self, tokenizer):
        for type, value, prefix, start_pos in tokenizer:
            if self.addtoken(type, value, prefix, start_pos):
                break
        else:
            # We never broke out -- EOF is too soon -- Unfinished statement.
            self.error_recovery(self.grammar, self.stack, type, value,
                                start_pos, prefix, self.addtoken)
            # Add the ENDMARKER again.
            if not self.addtoken(type, value, prefix, start_pos):
                raise ParseError("incomplete input", type, value, start_pos)
        return self.rootnode

    def addtoken(self, type, value, prefix, start_pos):
        """Add a token; return True if this is the end of the program."""
        # Map from token to label
        if type == tokenize.NAME:
            # Check for reserved words (keywords)
            try:
                ilabel = self.grammar.keywords[value]
            except KeyError:
                ilabel = self.grammar.tokens[type]
        else:
            ilabel = self.grammar.tokens[type]

        # Loop until the token is shifted; may raise exceptions
        while True:
            dfa, state, node = self.stack[-1]
            states, first = dfa
            arcs = states[state]
            # Look for a state with this label
            for i, newstate in arcs:
                t, v = self.grammar.labels[i]
                if ilabel == i:
                    # Look it up in the list of labels
                    assert t < 256
                    # Shift a token; we're done with it
                    self.shift(type, value, newstate, prefix, start_pos)
                    # Pop while we are in an accept-only state
                    state = newstate
                    while states[state] == [(0, state)]:
                        self.pop()
                        if not self.stack:
                            # Done parsing!
                            return True
                        dfa, state, node = self.stack[-1]
                        states, first = dfa
                    # Done with this token
                    return False
                elif t >= 256:
                    # See if it's a symbol and if we're in its first set
                    itsdfa = self.grammar.dfas[t]
                    itsstates, itsfirst = itsdfa
                    if ilabel in itsfirst:
                        # Push a symbol
                        self.push(t, itsdfa, newstate)
                        break  # To continue the outer while loop
            else:
                if (0, state) in arcs:
                    # An accepting state, pop it and try something else
                    self.pop()
                    if not self.stack:
                        # Done parsing, but another token is input
                        raise ParseError("too much input", type, value, start_pos)
                else:
                    self.error_recovery(self.grammar, self.stack, type,
                                        value, start_pos, prefix, self.addtoken)
                    break

    def shift(self, type, value, newstate, prefix, start_pos):
        """Shift a token. (Internal)"""
        dfa, state, node = self.stack[-1]
        newnode = self.convert_leaf(self.grammar, type, value, prefix, start_pos)
        node[-1].append(newnode)
        self.stack[-1] = (dfa, newstate, node)

    def push(self, type, newdfa, newstate):
        """Push a nonterminal. (Internal)"""
        dfa, state, node = self.stack[-1]
        newnode = (type, [])
        self.stack[-1] = (dfa, newstate, node)
        self.stack.append((newdfa, 0, newnode))

    def pop(self):
        """Pop a nonterminal. (Internal)"""
        popdfa, popstate, (type, children) = self.stack.pop()
        # If there's exactly one child, return that child instead of creating a
        # new node. We still create expr_stmt and file_input though, because a
        # lot of Jedi depends on its logic.
        if len(children) == 1:
            newnode = children[0]
        else:
            newnode = self.convert_node(self.grammar, type, children)

        try:
            # Equal to:
            # dfa, state, node = self.stack[-1]
            # symbol, children = node
            self.stack[-1][2][1].append(newnode)
        except IndexError:
            # Stack is empty, set the rootnode.
            self.rootnode = newnode
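The usage sequence from the PgenParser docstring translates into only a few lines of driver code. A minimal sketch under stated assumptions: `grammar` is a loaded grammar.Grammar, `tokens` yields the (type, value, prefix, start_pos) tuples used in this file, and the converters and error-recovery callback are placeholder stand-ins, not Jedi's real ones:

from jedi.parser.pgen2.parse import PgenParser

def convert_node(grammar, type, children):
    return (type, children)            # keep the concrete (type, nodes) shape

def convert_leaf(grammar, type, value, prefix, start_pos):
    return (type, value)               # leaves keep only type and value

def error_recovery(grammar, stack, type, value, start_pos, prefix, addtoken):
    raise SyntaxError('parser stuck at %r, %r' % (value, start_pos))

parser = PgenParser(grammar, convert_node, convert_leaf, error_recovery)
root = parser.parse(tokens)            # root of the resulting tree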
394
jedi/parser/pgen2/pgen.py
Normal file
@@ -0,0 +1,394 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

# Modifications:
# Copyright 2014 David Halter. Integration into Jedi.
# Modifications are dual-licensed: MIT and PSF.

# Pgen imports
from . import grammar
from jedi.parser import token
from jedi.parser import tokenize


class ParserGenerator(object):
    def __init__(self, filename, stream=None):
        close_stream = None
        if stream is None:
            stream = open(filename)
            close_stream = stream.close
        self.filename = filename
        self.stream = stream
        self.generator = tokenize.generate_tokens(stream.readline)
        self.gettoken()  # Initialize lookahead
        self.dfas, self.startsymbol = self.parse()
        if close_stream is not None:
            close_stream()
        self.first = {}  # map from symbol name to set of tokens
        self.addfirstsets()

    def make_grammar(self):
        c = grammar.Grammar()
        names = list(self.dfas.keys())
        names.sort()
        names.remove(self.startsymbol)
        names.insert(0, self.startsymbol)
        for name in names:
            i = 256 + len(c.symbol2number)
            c.symbol2number[name] = i
            c.number2symbol[i] = name
        for name in names:
            dfa = self.dfas[name]
            states = []
            for state in dfa:
                arcs = []
                for label, next in state.arcs.items():
                    arcs.append((self.make_label(c, label), dfa.index(next)))
                if state.isfinal:
                    arcs.append((0, dfa.index(state)))
                states.append(arcs)
            c.states.append(states)
            c.dfas[c.symbol2number[name]] = (states, self.make_first(c, name))
        c.start = c.symbol2number[self.startsymbol]
        return c

    def make_first(self, c, name):
        rawfirst = self.first[name]
        first = {}
        for label in rawfirst:
            ilabel = self.make_label(c, label)
            ##assert ilabel not in first # XXX failed on <> ... !=
            first[ilabel] = 1
        return first

    def make_label(self, c, label):
        # XXX Maybe this should be a method on a subclass of converter?
        ilabel = len(c.labels)
        if label[0].isalpha():
            # Either a symbol name or a named token
            if label in c.symbol2number:
                # A symbol name (a non-terminal)
                if label in c.symbol2label:
                    return c.symbol2label[label]
                else:
                    c.labels.append((c.symbol2number[label], None))
                    c.symbol2label[label] = ilabel
                    return ilabel
            else:
                # A named token (NAME, NUMBER, STRING)
                itoken = getattr(token, label, None)
                assert isinstance(itoken, int), label
                assert itoken in token.tok_name, label
                if itoken in c.tokens:
                    return c.tokens[itoken]
                else:
                    c.labels.append((itoken, None))
                    c.tokens[itoken] = ilabel
                    return ilabel
        else:
            # Either a keyword or an operator
            assert label[0] in ('"', "'"), label
            value = eval(label)
            if value[0].isalpha():
                # A keyword
                if value in c.keywords:
                    return c.keywords[value]
                else:
                    c.labels.append((token.NAME, value))
                    c.keywords[value] = ilabel
                    return ilabel
            else:
                # An operator (any non-numeric token)
                itoken = token.opmap[value]  # Fails if unknown token
                if itoken in c.tokens:
                    return c.tokens[itoken]
                else:
                    c.labels.append((itoken, None))
                    c.tokens[itoken] = ilabel
                    return ilabel

    def addfirstsets(self):
        names = list(self.dfas.keys())
        names.sort()
        for name in names:
            if name not in self.first:
                self.calcfirst(name)
            #print name, self.first[name].keys()

    def calcfirst(self, name):
        dfa = self.dfas[name]
        self.first[name] = None  # dummy to detect left recursion
        state = dfa[0]
        totalset = {}
        overlapcheck = {}
        for label, next in state.arcs.items():
            if label in self.dfas:
                if label in self.first:
                    fset = self.first[label]
                    if fset is None:
                        raise ValueError("recursion for rule %r" % name)
                else:
                    self.calcfirst(label)
                    fset = self.first[label]
                totalset.update(fset)
                overlapcheck[label] = fset
            else:
                totalset[label] = 1
                overlapcheck[label] = {label: 1}
        inverse = {}
        for label, itsfirst in overlapcheck.items():
            for symbol in itsfirst:
                if symbol in inverse:
                    raise ValueError("rule %s is ambiguous; %s is in the"
                                     " first sets of %s as well as %s" %
                                     (name, symbol, label, inverse[symbol]))
                inverse[symbol] = label
        self.first[name] = totalset

    def parse(self):
        dfas = {}
        startsymbol = None
        # MSTART: (NEWLINE | RULE)* ENDMARKER
        while self.type != token.ENDMARKER:
            while self.type == token.NEWLINE:
                self.gettoken()
            # RULE: NAME ':' RHS NEWLINE
            name = self.expect(token.NAME)
            self.expect(token.OP, ":")
            a, z = self.parse_rhs()
            self.expect(token.NEWLINE)
            #self.dump_nfa(name, a, z)
            dfa = self.make_dfa(a, z)
            #self.dump_dfa(name, dfa)
            # oldlen = len(dfa)
            self.simplify_dfa(dfa)
            # newlen = len(dfa)
            dfas[name] = dfa
            #print name, oldlen, newlen
            if startsymbol is None:
                startsymbol = name
        return dfas, startsymbol

    def make_dfa(self, start, finish):
        # To turn an NFA into a DFA, we define the states of the DFA
        # to correspond to *sets* of states of the NFA. Then do some
        # state reduction. Let's represent sets as dicts with 1 for
        # values.
        assert isinstance(start, NFAState)
        assert isinstance(finish, NFAState)

        def closure(state):
            base = {}
            addclosure(state, base)
            return base

        def addclosure(state, base):
            assert isinstance(state, NFAState)
            if state in base:
                return
            base[state] = 1
            for label, next in state.arcs:
                if label is None:
                    addclosure(next, base)

        states = [DFAState(closure(start), finish)]
        for state in states:  # NB states grows while we're iterating
            arcs = {}
            for nfastate in state.nfaset:
                for label, next in nfastate.arcs:
                    if label is not None:
                        addclosure(next, arcs.setdefault(label, {}))
            for label, nfaset in arcs.items():
                for st in states:
                    if st.nfaset == nfaset:
                        break
                else:
                    st = DFAState(nfaset, finish)
                    states.append(st)
                state.addarc(st, label)
        return states  # List of DFAState instances; first one is start

    def dump_nfa(self, name, start, finish):
        print("Dump of NFA for", name)
        todo = [start]
        for i, state in enumerate(todo):
            print("  State", i, state is finish and "(final)" or "")
            for label, next in state.arcs:
                if next in todo:
                    j = todo.index(next)
                else:
                    j = len(todo)
                    todo.append(next)
                if label is None:
                    print("    -> %d" % j)
                else:
                    print("    %s -> %d" % (label, j))

    def dump_dfa(self, name, dfa):
        print("Dump of DFA for", name)
        for i, state in enumerate(dfa):
            print("  State", i, state.isfinal and "(final)" or "")
            for label, next in state.arcs.items():
                print("    %s -> %d" % (label, dfa.index(next)))

    def simplify_dfa(self, dfa):
        # This is not theoretically optimal, but works well enough.
        # Algorithm: repeatedly look for two states that have the same
        # set of arcs (same labels pointing to the same nodes) and
        # unify them, until things stop changing.

        # dfa is a list of DFAState instances
        changes = True
        while changes:
            changes = False
            for i, state_i in enumerate(dfa):
                for j in range(i + 1, len(dfa)):
                    state_j = dfa[j]
                    if state_i == state_j:
                        #print "  unify", i, j
                        del dfa[j]
                        for state in dfa:
                            state.unifystate(state_j, state_i)
                        changes = True
                        break

    def parse_rhs(self):
        # RHS: ALT ('|' ALT)*
        a, z = self.parse_alt()
        if self.value != "|":
            return a, z
        else:
            aa = NFAState()
            zz = NFAState()
            aa.addarc(a)
            z.addarc(zz)
            while self.value == "|":
                self.gettoken()
                a, z = self.parse_alt()
                aa.addarc(a)
                z.addarc(zz)
            return aa, zz

    def parse_alt(self):
        # ALT: ITEM+
        a, b = self.parse_item()
        while (self.value in ("(", "[") or
               self.type in (token.NAME, token.STRING)):
            c, d = self.parse_item()
            b.addarc(c)
            b = d
        return a, b

    def parse_item(self):
        # ITEM: '[' RHS ']' | ATOM ['+' | '*']
        if self.value == "[":
            self.gettoken()
            a, z = self.parse_rhs()
            self.expect(token.OP, "]")
            a.addarc(z)
            return a, z
        else:
            a, z = self.parse_atom()
            value = self.value
            if value not in ("+", "*"):
                return a, z
            self.gettoken()
            z.addarc(a)
            if value == "+":
                return a, z
            else:
                return a, a

    def parse_atom(self):
        # ATOM: '(' RHS ')' | NAME | STRING
        if self.value == "(":
            self.gettoken()
            a, z = self.parse_rhs()
            self.expect(token.OP, ")")
            return a, z
        elif self.type in (token.NAME, token.STRING):
            a = NFAState()
            z = NFAState()
            a.addarc(z, self.value)
            self.gettoken()
            return a, z
        else:
            self.raise_error("expected (...) or NAME or STRING, got %s/%s",
                             self.type, self.value)

    def expect(self, type, value=None):
        if self.type != type or (value is not None and self.value != value):
            self.raise_error("expected %s/%s, got %s/%s",
                             type, value, self.type, self.value)
        value = self.value
        self.gettoken()
        return value

    def gettoken(self):
        tup = next(self.generator)
        while tup[0] in (token.COMMENT, token.NL):
            tup = next(self.generator)
        self.type, self.value, self.begin, prefix = tup
        #print tokenize.tok_name[self.type], repr(self.value)

    def raise_error(self, msg, *args):
        if args:
            try:
                msg = msg % args
            except:
                msg = " ".join([msg] + list(map(str, args)))
        line = open(self.filename).readlines()[self.begin[0]]
        raise SyntaxError(msg, (self.filename, self.begin[0],
                                self.begin[1], line))


class NFAState(object):
    def __init__(self):
        self.arcs = []  # list of (label, NFAState) pairs

    def addarc(self, next, label=None):
        assert label is None or isinstance(label, str)
        assert isinstance(next, NFAState)
        self.arcs.append((label, next))


class DFAState(object):
    def __init__(self, nfaset, final):
        assert isinstance(nfaset, dict)
        assert isinstance(next(iter(nfaset)), NFAState)
        assert isinstance(final, NFAState)
        self.nfaset = nfaset
        self.isfinal = final in nfaset
        self.arcs = {}  # map from label to DFAState

    def addarc(self, next, label):
        assert isinstance(label, str)
        assert label not in self.arcs
        assert isinstance(next, DFAState)
        self.arcs[label] = next

    def unifystate(self, old, new):
        for label, next in self.arcs.items():
            if next is old:
                self.arcs[label] = new

    def __eq__(self, other):
        # Equality test -- ignore the nfaset instance variable
        assert isinstance(other, DFAState)
        if self.isfinal != other.isfinal:
            return False
        # Can't just return self.arcs == other.arcs, because that
        # would invoke this method recursively, with cycles...
        if len(self.arcs) != len(other.arcs):
            return False
        for label, next in self.arcs.items():
            if next is not other.arcs.get(label):
                return False
        return True

    __hash__ = None  # For Py3 compatibility.


def generate_grammar(filename="Grammar.txt"):
    p = ParserGenerator(filename)
    return p.make_grammar()
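Putting the generator and the pickle cache together: a minimal sketch of how these pieces could be wired up (the pickle file name is illustrative; the grammar files are the ones added by this commit under jedi/parser/):

from jedi.parser.pgen2.pgen import generate_grammar

# Build the parsing tables from the EBNF description ...
g = generate_grammar('jedi/parser/grammar3.4.txt')

# ... and cache them, so later runs can use the much faster Grammar.load().
g.dump('grammar3.4.pickle')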
File diff suppressed because it is too large
84
jedi/parser/token.py
Normal file
@@ -0,0 +1,84 @@
from __future__ import absolute_import

from jedi._compatibility import is_py3
from token import *


COMMENT = N_TOKENS
tok_name[COMMENT] = 'COMMENT'
N_TOKENS += 1

NL = N_TOKENS
tok_name[NL] = 'NL'
N_TOKENS += 1

if is_py3:
    BACKQUOTE = N_TOKENS
    tok_name[BACKQUOTE] = 'BACKQUOTE'
    N_TOKENS += 1
else:
    RARROW = N_TOKENS
    tok_name[RARROW] = 'RARROW'
    N_TOKENS += 1
    ELLIPSIS = N_TOKENS
    tok_name[ELLIPSIS] = 'ELLIPSIS'
    N_TOKENS += 1


# Map from operator to number (since tokenize doesn't do this)

opmap_raw = """\
( LPAR
) RPAR
[ LSQB
] RSQB
: COLON
, COMMA
; SEMI
+ PLUS
- MINUS
* STAR
/ SLASH
| VBAR
& AMPER
< LESS
> GREATER
= EQUAL
. DOT
% PERCENT
` BACKQUOTE
{ LBRACE
} RBRACE
@ AT
== EQEQUAL
!= NOTEQUAL
<> NOTEQUAL
<= LESSEQUAL
>= GREATEREQUAL
~ TILDE
^ CIRCUMFLEX
<< LEFTSHIFT
>> RIGHTSHIFT
** DOUBLESTAR
+= PLUSEQUAL
-= MINEQUAL
*= STAREQUAL
/= SLASHEQUAL
%= PERCENTEQUAL
&= AMPEREQUAL
|= VBAREQUAL
^= CIRCUMFLEXEQUAL
<<= LEFTSHIFTEQUAL
>>= RIGHTSHIFTEQUAL
**= DOUBLESTAREQUAL
// DOUBLESLASH
//= DOUBLESLASHEQUAL
-> RARROW
... ELLIPSIS
"""

opmap = {}
for line in opmap_raw.splitlines():
    op, name = line.split()
    opmap[op] = globals()[name]
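The opmap table is what lets the parser distinguish operators that the stdlib tokenize module lumps together as OP. A quick, illustrative sanity check in an interpreter session:

>>> from jedi.parser.token import opmap, tok_name
>>> tok_name[opmap['**=']]
'DOUBLESTAREQUAL'
>>> tok_name[opmap['<>']]   # the legacy Python 2 inequality maps to NOTEQUAL
'NOTEQUAL'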
@@ -14,95 +14,26 @@ from __future__ import absolute_import

import string
import re
from io import StringIO
from token import (tok_name, N_TOKENS, ENDMARKER, STRING, NUMBER, NAME, OP,
                   ERRORTOKEN, NEWLINE)
from jedi.parser.token import (tok_name, N_TOKENS, ENDMARKER, STRING, NUMBER,
                               NAME, OP, ERRORTOKEN, NEWLINE, INDENT, DEDENT)
from jedi._compatibility import is_py3

from jedi._compatibility import u

cookie_re = re.compile("coding[:=]\s*([-\w.]+)")


# From here on we have custom stuff (everything before was originally Python
# internal code).
FLOWS = ['if', 'else', 'elif', 'while', 'with', 'try', 'except', 'finally']


namechars = string.ascii_letters + '_'
if is_py3:
    # Python 3 has str.isidentifier() to check if a char is a valid identifier
    is_identifier = str.isidentifier
else:
    namechars = string.ascii_letters + '_'
    is_identifier = lambda s: s in namechars


COMMENT = N_TOKENS
tok_name[COMMENT] = 'COMMENT'


class Token(object):
    """
    The token object is an efficient representation of the structure
    (type, token, (start_pos_line, start_pos_col)). It has indexer
    methods that maintain compatibility with existing code that expects the
    above structure.

    >>> repr(Token(1, "test", (1, 1)))
    "<Token: ('NAME', 'test', (1, 1))>"
    >>> Token(1, 'bar', (3, 4)).__getstate__()
    (1, 'bar', 3, 4)
    >>> a = Token(0, 'baz', (0, 0))
    >>> a.__setstate__((1, 'foo', 3, 4))
    >>> a
    <Token: ('NAME', 'foo', (3, 4))>
    >>> a.start_pos
    (3, 4)
    >>> a.string
    'foo'
    >>> a._start_pos_col
    4
    >>> Token(1, u("😷"), (1, 1)).string + "p" == u("😷p")
    True
    """
    __slots__ = ("type", "string", "_start_pos_line", "_start_pos_col")

    def __init__(self, type, string, start_pos):
        self.type = type
        self.string = string
        self._start_pos_line = start_pos[0]
        self._start_pos_col = start_pos[1]

    def __repr__(self):
        typ = tok_name[self.type]
        content = typ, self.string, (self._start_pos_line, self._start_pos_col)
        return "<%s: %s>" % (type(self).__name__, content)

    @property
    def start_pos(self):
        return (self._start_pos_line, self._start_pos_col)

    @property
    def end_pos(self):
        """Returns end position respecting multiline tokens."""
        end_pos_line = self._start_pos_line
        lines = self.string.split('\n')
        if self.string.endswith('\n'):
            lines = lines[:-1]
            lines[-1] += '\n'
        end_pos_line += len(lines) - 1
        end_pos_col = self._start_pos_col
        # Check for multiline token
        if self._start_pos_line == end_pos_line:
            end_pos_col += len(lines[-1])
        else:
            end_pos_col = len(lines[-1])
        return (end_pos_line, end_pos_col)

    # Make cache footprint smaller for faster unpickling
    def __getstate__(self):
        return (self.type, self.string, self._start_pos_line, self._start_pos_col)

    def __setstate__(self, state):
        self.type = state[0]
        self.string = state[1]
        self._start_pos_line = state[2]
        self._start_pos_col = state[3]

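The end_pos property is the only non-trivial part of the class: for a multiline token the line number advances and the column restarts on the last line. A small doctest-style illustration matching the class above (3 is the stdlib STRING token number):

>>> t = Token(3, '"""x\ny"""', (5, 4))   # a STRING spanning two lines
>>> t.end_pos                            # line advances, column restarts
(6, 4)
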
def group(*choices):
    return '(' + '|'.join(choices) + ')'

@@ -158,7 +89,8 @@ cont_str = group(r"[bBuU]?[rR]?'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
                 r'[bBuU]?[rR]?"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
                 group('"', r'\\\r?\n'))
pseudo_extras = group(r'\\\r?\n', comment, triple)
pseudo_token = whitespace + group(pseudo_extras, number, funny, cont_str, name)
pseudo_token = group(whitespace) + \
    group(pseudo_extras, number, funny, cont_str, name)


def _compile(expr):
@@ -167,6 +99,7 @@ def _compile(expr):

pseudoprog, single3prog, double3prog = map(
    _compile, (pseudo_token, single3, double3))

endprogs = {"'": _compile(single), '"': _compile(double),
            "'''": single3prog, '"""': double3prog,
            "r'''": single3prog, 'r"""': double3prog,
@@ -202,28 +135,43 @@ del _compile

tabsize = 8

ALWAYS_BREAK_TOKENS = (';', 'import', 'from', 'class', 'def', 'try', 'except',
                       'finally', 'while', 'return')

def source_tokens(source, line_offset=0):

def source_tokens(source):
    """Generate tokens from the source code (string)."""
    source = source + '\n'  # end with \n, because the parser needs it
    readline = StringIO(source).readline
    return generate_tokens(readline, line_offset)
    return generate_tokens(readline)


def generate_tokens(readline, line_offset=0):
def generate_tokens(readline):
    """
    The original stdlib Python version with minor modifications.
    Modified to not care about dedents.
    A heavily modified Python standard library tokenizer.

    Additionally to the default information, yields also the prefix of each
    token. This idea comes from lib2to3. The prefix contains all information
    that is irrelevant for the parser like newlines in parentheses or comments.
    """
    lnum = line_offset
    paren_level = 0  # count parentheses
    indents = [0]
    lnum = 0
    numchars = '0123456789'
    contstr = ''
    contline = None
    while True:  # loop over lines in stream
        line = readline()  # readline returns empty if it's finished. See StringIO
    # We start with a newline. This makes indent at the first position
    # possible. It's not valid Python, but still better than an INDENT in the
    # second line (and not in the first). This makes quite a few things in
    # Jedi's fast parser possible.
    new_line = True
    prefix = ''  # Should never be required, but here for safety
    additional_prefix = ''
    while True:  # loop over lines in stream
        line = readline()  # readline returns empty when finished. See StringIO
        if not line:
            if contstr:
                yield Token(ERRORTOKEN, contstr, contstr_start)
                yield ERRORTOKEN, contstr, contstr_start, prefix
            break

        lnum += 1
@@ -233,7 +181,7 @@ def generate_tokens(readline, line_offset=0):
            endmatch = endprog.match(line)
            if endmatch:
                pos = endmatch.end(0)
                yield Token(STRING, contstr + line[:pos], contstr_start)
                yield STRING, contstr + line[:pos], contstr_start, prefix
                contstr = ''
                contline = None
            else:
@@ -248,32 +196,48 @@ def generate_tokens(readline, line_offset=0):
                if line[pos] in '"\'':
                    # If a literal starts but doesn't end the whole rest of the
                    # line is an error token.
                    txt = txt = line[pos:]
                    yield Token(ERRORTOKEN, txt, (lnum, pos))
                    txt = line[pos:]
                    yield ERRORTOKEN, txt, (lnum, pos), prefix
                pos += 1
                continue

            start, pos = pseudomatch.span(1)
            prefix = additional_prefix + pseudomatch.group(1)
            additional_prefix = ''
            start, pos = pseudomatch.span(2)
            spos = (lnum, start)
            token, initial = line[start:pos], line[start]

            if new_line and initial not in '\r\n#':
                new_line = False
                if paren_level == 0:
                    if start > indents[-1]:
                        yield INDENT, '', spos, ''
                        indents.append(start)
                    while start < indents[-1]:
                        yield DEDENT, '', spos, ''
                        indents.pop()

            if (initial in numchars or  # ordinary number
                    (initial == '.' and token != '.' and token != '...')):
                yield Token(NUMBER, token, spos)
                yield NUMBER, token, spos, prefix
            elif initial in '\r\n':
                yield Token(NEWLINE, token, spos)
            elif initial == '#':
                if not new_line and paren_level == 0:
                    yield NEWLINE, token, spos, prefix
                else:
                    additional_prefix = prefix + token
                new_line = True
            elif initial == '#':  # Comments
                assert not token.endswith("\n")
                yield Token(COMMENT, token, spos)
                additional_prefix = prefix + token
            elif token in triple_quoted:
                endprog = endprogs[token]
                endmatch = endprog.match(line, pos)
                if endmatch:  # all on one line
                    pos = endmatch.end(0)
                    token = line[start:pos]
                    yield Token(STRING, token, spos)
                    yield STRING, token, spos, prefix
                else:
                    contstr_start = (lnum, start)  # multiple lines
                    contstr_start = (lnum, start)  # multiple lines
                    contstr = line[start:]
                    contline = line
                    break
@@ -288,12 +252,28 @@ def generate_tokens(readline, line_offset=0):
                    contline = line
                    break
                else:  # ordinary string
                    yield Token(STRING, token, spos)
            elif initial in namechars:  # ordinary name
                yield Token(NAME, token, spos)
                    yield STRING, token, spos, prefix
            elif is_identifier(initial):  # ordinary name
                if token in ALWAYS_BREAK_TOKENS:
                    paren_level = 0
                    while True:
                        indent = indents.pop()
                        if indent > start:
                            yield DEDENT, '', spos, ''
                        else:
                            indents.append(indent)
                            break
                yield NAME, token, spos, prefix
            elif initial == '\\' and line[start:] == '\\\n':  # continued stmt
                continue
                additional_prefix += prefix + line[start:]
                break
            else:
                yield Token(OP, token, spos)
                if token in '([{':
                    paren_level += 1
                elif token in ')]}':
                    paren_level -= 1
                yield OP, token, spos, prefix

    yield Token(ENDMARKER, '', (lnum, 0))
    for indent in indents[1:]:
        yield DEDENT, '', spos, ''
    yield ENDMARKER, '', spos, prefix
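The prefix idea borrowed from lib2to3 is easiest to see on a concrete token stream. A minimal sketch using the source_tokens() API introduced in this diff (the tuple order matches the yields above; exact token numbers depend on the token module):

from jedi.parser.tokenize import source_tokens

for typ, value, start_pos, prefix in source_tokens('x = 1  # c'):
    print(typ, repr(value), start_pos, repr(prefix))
# The '1' token would carry the prefix ' ', and the trailing comment ends
# up in the prefix of a following token (NEWLINE/ENDMARKER) instead of
# being yielded as a token of its own.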
1205
jedi/parser/tree.py
Normal file
File diff suppressed because it is too large
@@ -1,16 +1,23 @@
import re
import os
import keyword

from jedi import cache
from jedi import common
from jedi.parser import tokenize
from jedi.parser import tokenize, Parser
from jedi._compatibility import u
from jedi.parser.fast import FastParser
from jedi.parser import representation
from jedi.parser import tree as pr
from jedi import debug
from jedi.common import PushBackIterator


REPLACE_STR = r"[bBuU]?[rR]?" + (r"(?:(')[^\n'\\]*(?:\\.[^\n'\\]*)*(?:'|$)" +
                                 '|' +
                                 r'(")[^\n"\\]*(?:\\.[^\n"\\]*)*(?:"|$))')
REPLACE_STR = re.compile(REPLACE_STR)


class UserContext(object):
    """
    :param source: The source code of the file.
@@ -22,8 +29,6 @@ class UserContext(object):
        self.position = position
        self._line_cache = None

        # these two are only used because there is no nonlocal in Python 2
        self._line_temp = None
        self._relevant_temp = None

    @cache.underscore_memoization
@@ -32,62 +37,64 @@ class UserContext(object):
        path, self._start_cursor_pos = self._calc_path_until_cursor(self.position)
        return path

    def _calc_path_until_cursor(self, start_pos=None):
    def _backwards_line_generator(self, start_pos):
        self._line_temp, self._column_temp = start_pos
        first_line = self.get_line(start_pos[0])[:self._column_temp]

        self._line_length = self._column_temp
        yield first_line[::-1] + '\n'

        while True:
            self._line_temp -= 1
            line = self.get_line(self._line_temp)
            self._line_length = len(line)
            yield line[::-1] + '\n'

    def _get_backwards_tokenizer(self, start_pos, line_gen=None):
        if line_gen is None:
            line_gen = self._backwards_line_generator(start_pos)
        token_gen = tokenize.generate_tokens(lambda: next(line_gen))
        for typ, tok_str, tok_start_pos, prefix in token_gen:
            line = self.get_line(self._line_temp)
            # Calculate the real start_pos of the token.
            if tok_start_pos[0] == 1:
                # We are in the first checked line
                column = start_pos[1] - tok_start_pos[1]
            else:
                column = len(line) - tok_start_pos[1]
            # Multi-line docstrings must be accounted for.
            first_line = common.splitlines(tok_str)[0]
            column -= len(first_line)
            # Reverse the token again, so that it is in normal order again.
            yield typ, tok_str[::-1], (self._line_temp, column), prefix[::-1]
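The trick above is that each line is reversed before it is fed to the ordinary forward tokenizer, and every token string is reversed back afterwards, so the tokens closest to the cursor come out first. A standalone sketch of the idea, using only the stdlib tokenizer (independent of the jedi classes):

import io
import tokenize as std_tokenize

text = 'foo = abc.def'
backwards = text[::-1]                       # 'fed.cba = oof'
tokens = std_tokenize.generate_tokens(io.StringIO(backwards).readline)
for tok in tokens:
    if tok.type == std_tokenize.NAME:
        print(tok.string[::-1])              # prints: def, abc, foo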
    def _calc_path_until_cursor(self, start_pos):
        """
        Something like a reverse tokenizer that tokenizes the reversed strings.
        """
        def fetch_line():
            if self._is_first:
                self._is_first = False
                self._line_length = self._column_temp
                line = first_line
            else:
                line = self.get_line(self._line_temp)
                self._line_length = len(line)
                line = '\n' + line

            # add lines with a backslash at the end
            while True:
                self._line_temp -= 1
                last_line = self.get_line(self._line_temp)
                if last_line and last_line[-1] == '\\':
                    line = last_line[:-1] + ' ' + line
                    self._line_length = len(last_line)
                else:
                    break
            return line[::-1]

        self._is_first = True
        self._line_temp, self._column_temp = start_cursor = start_pos
        first_line = self.get_line(self._line_temp)[:self._column_temp]

        open_brackets = ['(', '[', '{']
        close_brackets = [')', ']', '}']

        gen = PushBackIterator(tokenize.generate_tokens(fetch_line))
        start_cursor = start_pos
        gen = PushBackIterator(self._get_backwards_tokenizer(start_pos))
        string = u('')
        level = 0
        force_point = False
        last_type = None
        is_first = True
        for tok in gen:
            tok_type = tok.type
            tok_str = tok.string
            end = tok.end_pos
            self._column_temp = self._line_length - end[1]
        for tok_type, tok_str, tok_start_pos, prefix in gen:
            if is_first:
                if tok.start_pos != (1, 0):  # whitespace is not a path
                if prefix:  # whitespace is not a path
                    return u(''), start_cursor
                is_first = False

            # print 'tok', token_type, tok_str, force_point
            if last_type == tok_type == tokenize.NAME:
                string += ' '
                string = ' ' + string

            if level > 0:
            if level:
                if tok_str in close_brackets:
                    level += 1
                if tok_str in open_brackets:
                elif tok_str in open_brackets:
                    level -= 1
            elif tok_str == '.':
                force_point = False
@@ -96,7 +103,7 @@ class UserContext(object):
                # floating point number.
                # The same is true for string prefixes -> represented as a
                # combination of string and name.
                if tok_type == tokenize.NUMBER and tok_str[0] == '.' \
                if tok_type == tokenize.NUMBER and tok_str[-1] == '.' \
                        or tok_type == tokenize.NAME and last_type == tokenize.STRING:
                    force_point = False
                else:
@@ -104,28 +111,29 @@ class UserContext(object):
            elif tok_str in close_brackets:
                level += 1
            elif tok_type in [tokenize.NAME, tokenize.STRING]:
                if keyword.iskeyword(tok_str) and string:
                    # If there's already something in the string, a keyword
                    # never adds any meaning to the current statement.
                    break
                force_point = True
            elif tok_type == tokenize.NUMBER:
                pass
            else:
                if tok_str == '-':
                    next_tok = next(gen)
                    if next_tok.string == 'e':
                    if next_tok[1] == 'e':
                        gen.push_back(next_tok)
                    else:
                        break
                else:
                    break

            x = start_pos[0] - end[0] + 1
            l = self.get_line(x)
            l = first_line if x == start_pos[0] else l
            start_cursor = x, len(l) - end[1]
            string += tok_str
            start_cursor = tok_start_pos
            string = tok_str + prefix + string
            last_type = tok_type

        # string can still contain spaces at the end
        return string[::-1].strip(), start_cursor
        # Don't need whitespace around a statement.
        return string.strip(), start_cursor

    def get_path_under_cursor(self):
        """
@@ -145,6 +153,62 @@ class UserContext(object):
        return (before.group(0) if before is not None else '') \
            + (after.group(0) if after is not None else '')

    def call_signature(self):
        """
        :return: Tuple of string of the call and the index of the cursor.
        """
        def get_line(pos):
            def simplify_str(match):
                """
                To avoid having strings without end marks (error tokens) and
                strings that just screw up all the call signatures, just
                simplify everything.
                """
                mark = match.group(1) or match.group(2)
                return mark + ' ' * (len(match.group(0)) - 2) + mark

            line_gen = self._backwards_line_generator(pos)
            for line in line_gen:
                # We have to switch the already backwards lines twice, because
                # we scan them from the start.
                line = line[::-1]
                modified = re.sub(REPLACE_STR, simplify_str, line)
                yield modified[::-1]

        index = 0
        level = 0
        next_must_be_name = False
        next_is_key = False
        key_name = None
        generator = self._get_backwards_tokenizer(self.position, get_line(self.position))
        for tok_type, tok_str, start_pos, prefix in generator:
            if tok_str in tokenize.ALWAYS_BREAK_TOKENS:
                break
            elif next_must_be_name:
                if tok_type == tokenize.NAME:
                    end_pos = start_pos[0], start_pos[1] + len(tok_str)
                    call, start_pos = self._calc_path_until_cursor(start_pos=end_pos)
                    return call, index, key_name, start_pos
                index = 0
                next_must_be_name = False
            elif next_is_key:
                if tok_type == tokenize.NAME:
                    key_name = tok_str
                next_is_key = False

            if tok_str == '(':
                level += 1
                if level == 1:
                    next_must_be_name = True
                    level = 0
            elif tok_str == ')':
                level -= 1
            elif tok_str == ',':
                index += 1
            elif tok_str == '=':
                next_is_key = True
        return None, 0, None, (0, 0)

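Stripped of the tokenizer plumbing, the bracket/comma walk in `call_signature` is easy to state on its own. A rough, hypothetical character-based sketch of the same idea (unlike the real method, it ignores string literals and break tokens):

    def call_info(source, cursor):
        # Walk backwards from the cursor, tracking bracket nesting; at the
        # first unmatched '(' the name before it is the callee, and the
        # commas seen at depth 0 give the argument index.
        level = 0
        index = 0
        for i in range(cursor - 1, -1, -1):
            ch = source[i]
            if ch in ')]}':
                level += 1
            elif ch in '([{':
                if level == 0 and ch == '(':
                    j = i
                    while j > 0 and (source[j - 1].isalnum() or source[j - 1] in '._'):
                        j -= 1
                    return source[j:i], index
                level -= 1
            elif ch == ',' and level == 0:
                index += 1
        return None, 0

    assert call_info('isinstance(x, in', 16) == ('isinstance', 1)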
    def get_context(self, yield_positions=False):
        self.get_path_until_cursor()  # In case _start_cursor_pos is undefined.
        pos = self._start_cursor_pos
@@ -197,25 +261,31 @@ class UserContext(object):


class UserContextParser(object):
    def __init__(self, source, path, position, user_context):
    def __init__(self, grammar, source, path, position, user_context,
                 use_fast_parser=True):
        self._grammar = grammar
        self._source = source
        self._path = path and os.path.abspath(path)
        self._position = position
        self._user_context = user_context
        self._use_fast_parser = use_fast_parser

    @cache.underscore_memoization
    def _parser(self):
        cache.invalidate_star_import_cache(self._path)
        parser = FastParser(self._source, self._path)
        # Don't pickle that module, because the main module is changing quickly
        cache.save_parser(self._path, None, parser, pickling=False)
        if self._use_fast_parser:
            parser = FastParser(self._grammar, self._source, self._path)
            # Don't pickle that module, because the main module is changing quickly
            cache.save_parser(self._path, None, parser, pickling=False)
        else:
            parser = Parser(self._grammar, self._source, self._path)
        return parser

    @cache.underscore_memoization
    def user_stmt(self):
        module = self.module()
        debug.speed('parsed')
        return module.get_statement_for_position(self._position, include_imports=True)
        return module.get_statement_for_position(self._position)

    @cache.underscore_memoization
    def user_stmt_with_whitespace(self):
@@ -234,22 +304,29 @@ class UserContextParser(object):
            debug.warning('No statement under the cursor.')
            return
        pos = next(self._user_context.get_context(yield_positions=True))
        user_stmt = self.module().get_statement_for_position(pos, include_imports=True)
        user_stmt = self.module().get_statement_for_position(pos)
        return user_stmt

    @cache.underscore_memoization
    def user_scope(self):
        """
        Returns the scope in which the user resides. This includes flows.
        """
        user_stmt = self.user_stmt()
        if user_stmt is None:
            def scan(scope):
                for s in scope.statements + scope.subscopes:
                    if isinstance(s, representation.Scope):
                        if s.start_pos <= self._position <= s.end_pos:
                            return scan(s) or s
                for s in scope.children:
                    if s.start_pos <= self._position <= s.end_pos:
                        if isinstance(s, (pr.Scope, pr.Flow)):
                            if isinstance(s, pr.Flow):
                                return s
                            return scan(s) or s
                        elif s.type in ('suite', 'decorated'):
                            return scan(s)

            return scan(self.module()) or self.module()
        else:
            return user_stmt.parent
            return user_stmt.get_parent_scope(include_flows=True)

    def module(self):
        return self._parser().module

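The recursive `scan` above descends into any child whose span contains the cursor and keeps the innermost scope it finds. The same shape on a toy tree (hypothetical `Node` class, not jedi's parser tree):

    class Node:
        def __init__(self, start_pos, end_pos, children=(), is_scope=False):
            self.start_pos, self.end_pos = start_pos, end_pos
            self.children, self.is_scope = list(children), is_scope

    def scope_for(node, pos):
        # Innermost scope whose (row, column) span contains pos.
        for child in node.children:
            if child.start_pos <= pos <= child.end_pos:
                found = scope_for(child, pos)
                if found is not None:
                    return found
        return node if node.is_scope else None

    func = Node((2, 0), (5, 0), is_scope=True)
    module = Node((1, 0), (10, 0), children=[func], is_scope=True)
    assert scope_for(module, (3, 0)) is func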
@@ -16,7 +16,7 @@ import difflib

from jedi import common
from jedi.evaluate import helpers
from jedi.parser import representation as pr
from jedi.parser import tree as pt


class Refactoring(object):

@@ -37,7 +37,6 @@ Parser
Dynamic stuff
~~~~~~~~~~~~~

.. autodata:: dynamic_arrays_instances
.. autodata:: dynamic_array_additions
.. autodata:: dynamic_params
.. autodata:: dynamic_params_for_other_modules
@@ -148,14 +147,9 @@ function is being reparsed.
# dynamic stuff
# ----------------

dynamic_arrays_instances = True
"""
Check for `append`, etc. on array instances like list()
"""

dynamic_array_additions = True
"""
check for `append`, etc. on arrays: [], {}, ()
check for `append`, etc. on arrays: [], {}, () as well as list/set calls.
"""

dynamic_params = True

@@ -108,7 +108,7 @@ def version_info():
    Returns a namedtuple of Jedi's version, similar to Python's
    ``sys.version_info``.
    """
    Version = namedtuple('Version', 'major, minor, micro, releaselevel, serial')
    Version = namedtuple('Version', 'major, minor, micro')
    from jedi import __version__
    tupl = re.findall('[a-z]+|\d+', __version__)
    return Version(*[x if i == 3 else int(x) for i, x in enumerate(tupl)])

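Read as plain Python, the hunk above parses the version string with a regex and builds the (now three-field) namedtuple. A standalone sketch, assuming a plain 'X.Y.Z' version string (the `i == 3` branch in the original only matters for longer, pre-release-style strings):

    import re
    from collections import namedtuple

    Version = namedtuple('Version', 'major, minor, micro')

    def parse_version(version_string):
        parts = re.findall(r'[a-z]+|\d+', version_string)[:3]
        return Version(*[int(x) for x in parts])

    assert parse_version('0.8.1') == Version(0, 8, 1)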
@@ -2,7 +2,7 @@
addopts = --doctest-modules

# Ignore broken files in blackbox test directories
norecursedirs = .* docs completion refactor absolute_import namespace_package scripts extensions speed static_analysis
norecursedirs = .* docs completion refactor absolute_import namespace_package scripts extensions speed static_analysis not_in_sys_path buildout_project egg-link

# Activate `clean_jedi_cache` fixture for all tests. This should be
# fine as long as we are using `clean_jedi_cache` as a session scoped
@@ -12,7 +12,9 @@ Note: This requires the psutil library, available on PyPI.
"""
import time
import sys
import os
import psutil
sys.path.insert(0, os.path.abspath(os.path.dirname(__file__) + '/..'))
import jedi


setup.py
@@ -11,6 +11,8 @@ __AUTHOR__ = 'David Halter'
__AUTHOR_EMAIL__ = 'davidhalter88@gmail.com'

readme = open('README.rst').read() + '\n\n' + open('CHANGELOG.rst').read()
packages = ['jedi', 'jedi.parser', 'jedi.parser.pgen2',
            'jedi.evaluate', 'jedi.evaluate.compiled', 'jedi.api']

import jedi

@@ -25,8 +27,8 @@ setup(name='jedi',
      license='MIT',
      keywords='python completion refactoring vim',
      long_description=readme,
      packages=['jedi', 'jedi.parser', 'jedi.evaluate', 'jedi.evaluate.compiled', 'jedi.api'],
      package_data={'jedi': ['evaluate/compiled/fake/*.pym']},
      packages=packages,
      package_data={'jedi': ['evaluate/compiled/fake/*.pym', 'parser/grammar*.txt']},
      platforms=['any'],
      classifiers=[
          'Development Status :: 4 - Beta',
@@ -40,6 +42,7 @@ setup(name='jedi',
          'Programming Language :: Python :: 3',
          'Programming Language :: Python :: 3.2',
          'Programming Language :: Python :: 3.3',
          'Programming Language :: Python :: 3.4',
          'Topic :: Software Development :: Libraries :: Python Modules',
          'Topic :: Text Editors :: Integrated Development Environments (IDE)',
          'Topic :: Utilities',

@@ -338,6 +338,11 @@ tuple((1,))[0]
#? []
list().__iterable

# With a list comprehension.
for i in set(a for a in [1]):
    #? int()
    i


# -----------------
# Recursions
@@ -352,3 +357,11 @@ def recursion1(foo):

#? int()
recursion1([1,2])[0]

# -----------------
# Merged Arrays
# -----------------

for x in [1] + ['']:
    #? int() str()
    x

@@ -105,6 +105,13 @@ for i in b:
    #? float() str()
    a[0]

for i in [1,2,3]:
    #? int()
    i
else:
    i


# -----------------
# range()
# -----------------
@@ -112,115 +119,6 @@ for i in range(10):
    #? int()
    i

# -----------------
# list comprehensions
# -----------------

# basics:

a = ['' for a in [1]]
#? str()
a[0]

a = [a for a in [1]]
#? int()
a[0]

a = [a for a in 1,2]
#? int()
a[0]

a = [a for a,b in [(1,'')]]
#? int()
a[0]

arr = [1,'']
a = [a for a in arr]
#? int() str()
a[0]

a = [a if 1.0 else '' for a in [1] if [1.0]]
#? int() str()
a[0]

# name resolve should be correct
left, right = 'a', 'b'
left, right = [x for x in (left, right)]
#? str()
left

# with a dict literal
#? str()
[a for a in {1:'x'}][0]

##? str()
{a-1:b for a,b in {1:'a', 3:1.0}.items()}[0]

# list comprehensions should also work in combination with functions
def listen(arg):
    for x in arg:
        #? str()
        x

listen(['' for x in [1]])
#? str
([str for x in []])[0]


# -----------------
# nested list comprehensions
# -----------------

b = [a for arr in [[1]] for a in arr]
#? int()
b[0]

b = [a for arr in [[1]] if '' for a in arr if '']
#? int()
b[0]

b = [b for arr in [[[1.0]]] for a in arr for b in a]
#? float()
b[0]

# jedi issue #26
#? list()
a = [[int(v) for v in line.strip().split() if v] for line in ["123", "123", "123"] if line]
#? list()
a[0]
#? int()
a[0][0]

# -----------------
# generator comprehensions
# -----------------

left, right = (i for i in (1, ''))

#? int()
left

gen = (i for i in (1,))

#? int()
next(gen)
#?
gen[0]

gen = (a for arr in [[1.0]] for a in arr)
#? float()
next(gen)

#? int()
(i for i in (1,)).send()

# issues with different formats
left, right = (i for i in
               ('1', '2'))
#? str()
left


# -----------------
# ternary operator
# -----------------
@@ -327,9 +225,10 @@ except ImportError as i_a:
try:
    import math
except ImportError, i_b:
    #? ['i_b']
    # TODO check this only in Python2
    ##? ['i_b']
    i_b
    #? ImportError()
    ##? ImportError()
    i_b



@@ -388,6 +388,8 @@ class PrivateVar():
        self.__var = 1
        #? int()
        self.__var
        #? ['__var']
        self.__var
#? []
PrivateVar().__var
#?
@@ -448,9 +450,26 @@ class TestX(object):
# -----------------

class A(object):
    pass
    a = 3

#? ['mro']
A.mro
#? []
A().mro


# -----------------
# mro resolution
# -----------------

class B(A()):
    b = 3

#?
B.a
#?
B().a
#? int()
B.b
#? int()
B().b

test/completion/comprehensions.py (new file, 125 lines)
@@ -0,0 +1,125 @@
# -----------------
# list comprehensions
# -----------------

# basics:

a = ['' for a in [1]]
#? str()
a[0]
#? ['insert']
a.insert

a = [a for a in [1]]
#? int()
a[0]

y = 1.0
# Should not leak.
[y for y in [3]]
#? float()
y

a = [a for a in (1, 2)]
#? int()
a[0]

a = [a for a,b in [(1,'')]]
#? int()
a[0]

arr = [1,'']
a = [a for a in arr]
#? int() str()
a[0]

a = [a if 1.0 else '' for a in [1] if [1.0]]
#? int() str()
a[0]

# name resolve should be correct
left, right = 'a', 'b'
left, right = [x for x in (left, right)]
#? str()
left

# with a dict literal
#? str()
[a for a in {1:'x'}][0]

##? str()
{a-1:b for a,b in {1:'a', 3:1.0}.items()}[0]

# list comprehensions should also work in combination with functions
def listen(arg):
    for x in arg:
        #? str()
        x

listen(['' for x in [1]])
#? str
([str for x in []])[0]


# -----------------
# nested list comprehensions
# -----------------

b = [a for arr in [[1]] for a in arr]
#? int()
b[0]

b = [a for arr in [[1]] if '' for a in arr if '']
#? int()
b[0]

b = [b for arr in [[[1.0]]] for a in arr for b in a]
#? float()
b[0]

# jedi issue #26
#? list()
a = [[int(v) for v in line.strip().split() if v] for line in ["123", "123", "123"] if line]
#? list()
a[0]
#? int()
a[0][0]

# -----------------
# generator comprehensions
# -----------------

left, right = (i for i in (1, ''))

#? int()
left

gen = (i for i in (1,))

#? int()
next(gen)
#?
gen[0]

gen = (a for arr in [[1.0]] for a in arr)
#? float()
next(gen)

#? int()
(i for i in (1,)).send()

# issues with different formats
left, right = (i for i in
               ('1', '2'))
#? str()
left

# -----------------
# name resolution in comprehensions.
# -----------------

def x():
    """Should not try to resolve to the if hio, which was a bug."""
    #? 22
    [a for a in h if hio]
    if hio: pass
@@ -121,7 +121,7 @@ class SelfVars():
    @Decorator
    def __init__(self):
        """
        init decorators should be ignored when looking up variables in the
        __init__ decorators should be ignored when looking up variables in the
        class.
        """
        self.c = list

@@ -3,7 +3,7 @@
# -----------------
# sphinx style
# -----------------
def f(a, b, c, d, x):
def sphinxy(a, b, c, d, x):
    """ asdfasdf
    :param a: blablabla
    :type a: str
@@ -27,10 +27,10 @@ def f(a, b, c, d, x):
    x.lower

#? dict()
f()
sphinxy()

# wrong declarations
def f(a, b, x):
def sphinxy2(a, b, x):
    """
    :param a: Forgot type declaration
    :type a:
@@ -47,7 +47,7 @@ def f(a, b, x):
    x

#?
f()
sphinxy2()

# local classes -> github #370
class ProgramNode():
@@ -104,7 +104,7 @@ return_module_object().join
# -----------------
# epydoc style
# -----------------
def e(a, b):
def epydoc(a, b):
    """ asdfasdf
    @type a: str
    @param a: blablabla
@@ -121,7 +121,7 @@ def e(a, b):
    b[1]

#? list()
e()
epydoc()


# Returns with param type only

@@ -1,108 +1,6 @@
"""
This is used for dynamic object completion.
Jedi tries to guess the types with a backtracking approach.
Checking for ``list.append`` and all the other possible array modifications.
"""
def func(a):
    #? int() str()
    return a

#? int()
func(1)

func

int(1) + (int(2))+ func('')

# Again the same function, but with another call.
def func(a):
    #? float()
    return a

func(1.0)

# Again the same function, but with no call.
def func(a):
    #?
    return a

def func(a):
    #? float()
    return a
str(func(1.0))

# -----------------
# *args, **args
# -----------------
def arg(*args):
    #? tuple()
    args
    #? int()
    args[0]

arg(1,"")
# -----------------
# decorators
# -----------------
def def_func(f):
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@def_func
def func(c):
    #? str()
    return c

#? str()
func("str")

@def_func
def func(c=1):
    #? int() float()
    return c

func(1.0)

# Needs to be here, because in this case func is an import -> shouldn't lead to
# exceptions.
import sys as func
func.sys

# -----------------
# classes
# -----------------

class A():
    def __init__(self, a):
        #? str()
        a

A("s")

class A():
    def __init__(self, a):
        #? int()
        a
        self.a = a

    def test(self, a):
        #? float()
        a
        self.c = self.test2()

    def test2(self):
        #? int()
        return self.a

    def test3(self):
        #? int()
        self.test2()
        #? int()
        self.c

A(3).test(2.0)
A(3).test2()

# -----------------
# list.append
# -----------------
@@ -344,13 +242,20 @@ class C():
        a[0]
        return a

    def class_arr(self, el):
    def literal_arr(self, el):
        self.a = []
        self.a.append(el)
        #? int()
        self.a[0]
        return self.a

    def list_arr(self, el):
        self.b = list([])
        self.b.append(el)
        #? float()
        self.b[0]
        return self.b

#? int()
C().blub(1)[0]
#? float()
@@ -359,7 +264,12 @@ C().blub2(1)[0]
#? int()
C().a[0]
#? int()
C().class_arr(1)[0]
C().literal_arr(1)[0]

#? float()
C().b[0]
#? float()
C().list_arr(1.0)[0]

# -----------------
# array recursions
@@ -394,14 +304,3 @@ def third():
    return list(b)
#?
third()[0]

# -----------------
# list comprehensions
# -----------------

def from_comprehension(foo):
    #? int() float()
    return foo

[from_comprehension(1.0) for n in (1,)]
[from_comprehension(n) for n in (1,)]
test/completion/dynamic_params.py (new file, 134 lines)
@@ -0,0 +1,134 @@
"""
This is used for dynamic object completion.
Jedi tries to guess param types with a backtracking approach.
"""
def func(a, default_arg=2):
    #? int()
    default_arg
    #? int() str()
    return a

#? int()
func(1)

func

int(1) + (int(2))+ func('')

# Again the same function, but with another call.
def func(a):
    #? float()
    return a

func(1.0)

# Again the same function, but with no call.
def func(a):
    #?
    return a

def func(a):
    #? float()
    return a
str(func(1.0))

# -----------------
# *args, **args
# -----------------
def arg(*args):
    #? tuple()
    args
    #? int()
    args[0]

arg(1,"")
# -----------------
# decorators
# -----------------
def def_func(f):
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@def_func
def func(c):
    #? str()
    return c

#? str()
func("str")

@def_func
def func(c=1):
    #? int() float()
    return c

func(1.0)

def tricky_decorator(func):
    def wrapper(*args):
        return func(1, *args)

    return wrapper


@tricky_decorator
def func(a, b):
    #? int()
    a
    #? float()
    b

func(1.0)

# Needs to be here, because in this case func is an import -> shouldn't lead to
# exceptions.
import sys as func
func.sys

# -----------------
# classes
# -----------------

class A():
    def __init__(self, a):
        #? str()
        a

A("s")

class A():
    def __init__(self, a):
        #? int()
        a
        self.a = a

    def test(self, a):
        #? float()
        a
        self.c = self.test2()

    def test2(self):
        #? int()
        return self.a

    def test3(self):
        #? int()
        self.test2()
        #? int()
        self.c

A(3).test(2.0)
A(3).test2()


# -----------------
# list comprehensions
# -----------------

def from_comprehension(foo):
    #? int() float()
    return foo

[from_comprehension(1.0) for n in (1,)]
[from_comprehension(n) for n in (1,)]
@@ -266,6 +266,7 @@ class Something():
    def x(self, a, b=1):
        return a

#? int()
Something().x(1)


@@ -288,10 +289,15 @@ exe['b']
#? int() float()
exe['c']

a = 'a'
exe2 = kwargs_func(**{a:3,
                      b:4.0})
                      'b':4.0})
#? int()
exe2['a']
#? float()
exe2['b']
#? int() float()
exe2['c']

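As plain Python, both spellings in the hunk above unpack identically once `a` is bound to `'a'`; the change only matters for static analysis of the dict key. A quick standalone check:

    def kwargs_func(**kwargs):
        return kwargs

    a = 'a'
    assert kwargs_func(**{a: 3, 'b': 4.0}) == {'a': 3, 'b': 4.0}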
# -----------------
# *args / ** kwargs
@@ -352,20 +358,20 @@ def nested_kw(**kwargs1):
def nested_kw2(**kwargs2):
    return nested_kw(**kwargs2)

#? int()
# invalid command, doesn't need to return anything
#?
nested_kw(b=1, c=1.0, list)
#? int()
nested_kw(b=1)
#? int()
# invalid command, doesn't need to return anything
#?
nested_kw(d=1.0, b=1, list)
#? int()
nested_kw(b=1)
#? int()
nested_kw(a=3.0, b=1)
#? int()
nested_kw(b=1, a=r"")
#? []
nested_kw('')
nested_kw(1, '')
#? []
nested_kw(a='')

@@ -391,10 +397,12 @@ def nested_both(*args, **kwargs):
def nested_both2(*args, **kwargs):
    return nested_both(*args, **kwargs)

#? int()
# invalid commands, may return whatever.
#? list
nested_both('', b=1, c=1.0, list)
#? int()
#? list
nested_both('', c=1.0, b=1, list)

#? []
nested_both('')

@@ -432,23 +440,6 @@ nested_def2('', c=1.0, b=1)[1]
#? []
nested_def2('')[1]

# -----------------
# function annotations (should be ignored at the moment)
# -----------------
def annot(a:3, *args:3):
    return a, args[0]

#? str()
annot('', 1.0)[0]
#? float()
annot('', 1.0)[1]

def annot_ret(a:3) -> 3:
    return a

#? str()
annot_ret('')

# -----------------
# magic methods
# -----------------

@@ -24,6 +24,13 @@ next(gen_ret(1))
#? []
next(gen_ret())

# generators evaluate to true if cast by bool.
a = ''
if gen_ret():
    a = 3
#? int()
a


# -----------------
# generators should not be indexable

@@ -93,7 +93,7 @@ ClassVar().x = ''

# Recurring use of the same var name, github #315
def f(t=None):
    #! 9 ['t = None']
    #! 9 ['t=None']
    t = t or 1

# -----------------
@@ -181,13 +181,38 @@ ab1(ClassDef);ab2(ClassDef);ab3(ClassDef)
# -----------------

for i in range(1):
    #! ['for i in range(1): i']
    #! ['for i in range(1): i']
    i

for key, value in [(1,2)]:
    #! ['for key,value in [(1, 2)]: key']
    #! ['for key, value in [(1,2)]: key']
    key

for i in []:
    #! ['for i in []: i']
    #! ['for i in []: i']
    i

# -----------------
# decorator
# -----------------
def dec(dec_param=3):
    pass

#! 8 ['dec_param=3']
@dec(dec_param=5)
def y():
    pass

class ClassDec():
    def class_func(func):
        return func

#! 14 ['def class_func']
@ClassDec.class_func
def x():
    pass

#! 2 ['class ClassDec']
@ClassDec.class_func
def z():
    pass

@@ -1,2 +1,4 @@
a = 1
from import_tree.random import a as c

foobarbaz = 3.0

@@ -69,16 +69,6 @@ def scope_nested2():
    #? ['rename1']
    import_tree.rename1

def from_names():
    #? ['mod1']
    from import_tree.pkg.
    #? ['path']
    from os.

def builtin_test():
    #? ['math']
    import math

def scope_from_import_variable():
    """
    All of them shouldn't work, because "fake" imports don't work in python
@@ -97,13 +87,28 @@ def scope_from_import_variable():

def scope_from_import_variable_with_parenthesis():
    from import_tree.mod2.fake import (
        a, c
        a, foobarbaz
    )

    #?
    a
    #?
    c
    foobarbaz
    # shouldn't complete, should still list the name though.
    #? ['foobarbaz']
    foobarbaz


def as_imports():
    from import_tree.mod1 import a as xyz
    #? int()
    xyz
    import not_existant, import_tree.mod1 as foo
    #? int()
    foo.a
    import import_tree.mod1 as bar
    #? int()
    bar.a

# -----------------
# std lib modules
@@ -121,9 +126,6 @@ import os
#? ['dirname']
os.path.dirname

#? os.path.join
from os.path import join

from os.path import (
    expanduser
)
@@ -173,28 +175,6 @@ def func_with_import():
#? ['sleep']
func_with_import().sleep

# -----------------
# completions within imports
# -----------------

#? ['sqlite3']
import sqlite3

#? ['classes']
import classes

#? ['timedelta']
from datetime import timedel

# should not be possible, because names can only be looked up 1 level deep.
#? []
from datetime.timedelta import resolution
#? []
from datetime.timedelta import

#? ['Cursor']
from sqlite3 import Cursor

# -----------------
# relative imports
# -----------------
@@ -231,65 +211,10 @@ from . import datetime as mod1
#? []
mod1.

#? str()
imp_tree.a

#? ['some_variable']
from . import some_variable
#? ['arrays']
from . import arrays
#? []
from . import import_tree as ren


# -----------------
# special positions -> edge cases
# -----------------
import datetime

#? 6 datetime
from datetime.time import time

#? []
import datetime.
#? []
import datetime.date

#? 18 ['import']
from import_tree. import pkg
#? 17 ['mod1', 'mod2', 'random', 'pkg', 'rename1', 'rename2', 'recurse_class1', 'recurse_class2']
from import_tree. import pkg

#? 18 ['pkg']
from import_tree.p import pkg

#? 17 ['import_tree']
from .import_tree import
#? 10 ['run']
from ..run import
#? ['run']
from .. import run

#? []
from not_a_module import

# self import
# this can cause recursions
from imports import *

#137
import json
#? 23 json.dump
from json import load, dump
#? 17 json.load
from json import load, dump
# without the from clause:
import json, datetime
#? 7 json
import json, datetime
#? 13 datetime
import json, datetime

# -----------------
# packages
# -----------------

@@ -10,7 +10,7 @@ there should never be any errors.
##? 5
's'()

#? ['upper']
#? []
str()).upper

# -----------------
@@ -19,18 +19,25 @@ str()).upper
def asdf(a or b): # multiple param names
    return a

#? int()
#?
asdf(2)

asdf = ''

from a import (b
def blub():
    return 0
def openbrace():
def wrong_indents():
    asdf = 3
    asdf
    asdf(
    #? int()
    asdf
def openbrace():
    asdf = 3
    asdf(
    #? int()
    asdf
    return 1

#? int()
@@ -58,7 +65,7 @@ normalfunc()
# dots in param
def f(seq1...=None):
    return seq1
#? int()
#?
f(1)

@
@@ -96,14 +103,17 @@ try:
    #? str()
    ""

def break(): pass
# wrong ternary expression
a = ''
a = 1 if
#? int()
#? str()
a

# No completions for for loops without the right syntax
for for_local in :
    for_local
#? ['for_local']
#? []
for_local
#?
for_local
@@ -122,7 +132,7 @@ a3 = [for xyz in]
a3[0]

a3 = [a4 for in 'b']
#? str()
#?
a3[0]

a3 = [a4 for a in for x in y]
@@ -137,10 +147,10 @@ a[0]

a = [a for a in [1,2]
def break(): pass
#? int()
#?
a[0]

#? ['real']
#? []
int()).real

# -----------------
@@ -165,14 +175,39 @@ import datetime as

call = ''
invalid = .call
#? str()
#?
invalid

invalid = call?.call
#? str()
#?
invalid

# comma
invalid = ,call
#? str()
#?
invalid


# -----------------
# classes
# -----------------

class BrokenPartsOfClass():
    def foo(self):
        # This construct contains two places where Jedi with Python 3 can fail.
        # It should just ignore those constructs and still execute `bar`.
        pass
        if 2:
            try:
                pass
            except ValueError, e:
                raise TypeError, e
            else:
                pass

    def bar(self):
        self.x = 3
        return ''

#? str()
BrokenPartsOfClass().bar()

@@ -55,11 +55,26 @@ a = lambda: 3
a.__closure__

class C():
    def __init__(self):
    def __init__(self, foo=1.0):
        self.a = lambda: 1
        self.foo = foo

    def ret(self):
        return lambda: self.foo

    def with_param(self):
        return lambda x: x + self.a()

#? int()
C().a()

#? str()
C('foo').ret()()

index = C().with_param()(1)
#? float()
['', 1, 1.0][index]


def xy(param):
    def ret(a, b):
@@ -80,3 +95,12 @@ class Test(object):
        self.a
#? float()
pred(1.0, 2)

# -----------------
# test_nocond in grammar (happens in list comprehensions with `if`)
# -----------------
# Doesn't need to do anything yet. It should just not raise an error. These
# nocond lambdas make no sense at all.

#? int()
[a for a in [1,2] if lambda: 3][0]

test/completion/on_import.py (new file, 104 lines)
@@ -0,0 +1,104 @@
def from_names():
    #? ['mod1']
    from import_tree.pkg.
    #? ['path']
    from os.

def from_names_goto():
    from import_tree import pkg
    #? pkg
    from import_tree.pkg

def builtin_test():
    #? ['math']
    import math

# -----------------
# completions within imports
# -----------------

#? ['sqlite3']
import sqlite3

#? ['classes']
import classes

#? ['timedelta']
from datetime import timedel
#? 21 []
from datetime.timedel import timedel

# should not be possible, because names can only be looked up 1 level deep.
#? []
from datetime.timedelta import resolution
#? []
from datetime.timedelta import

#? ['Cursor']
from sqlite3 import Cursor

#? ['some_variable']
from . import some_variable
#? ['arrays']
from . import arrays
#? []
from . import import_tree as ren

import os
#? os.path.join
from os.path import join

# -----------------
# special positions -> edge cases
# -----------------
import datetime

#? 6 datetime
from datetime.time import time

#? []
import datetime.
#? []
import datetime.date

#? 21 ['import']
from import_tree.pkg import pkg
#? 22 ['mod1']
from import_tree.pkg. import mod1
#? 17 ['mod1', 'mod2', 'random', 'pkg', 'rename1', 'rename2', 'recurse_class1', 'recurse_class2']
from import_tree. import pkg

#? 18 ['pkg']
from import_tree.p import pkg

#? 17 ['import_tree']
from .import_tree import
#? 10 ['run']
from ..run import
#? ['run']
from ..run
#? 10 ['run']
from ..run.
#? []
from ..run.

#? ['run']
from .. import run

#? []
from not_a_module import


#137
import json
#? 23 json.dump
from json import load, dump
#? 17 json.load
from json import load, dump
# without the from clause:
import json, datetime
#? 7 json
import json, datetime
#? 13 datetime
import json, datetime

@@ -89,6 +89,22 @@ def f(b, a): return a
#? []
f(b=3)

# -----------------
# closure
# -----------------

def x():
    a = 0

    def x():
        return a

    a = 3.0
    return x()

#? float()
x()

# -----------------
# class
# -----------------

@@ -68,6 +68,19 @@ i += 1
#? int()
x[i]

# -----------------
# in
# -----------------

if 'X' in 'Y':
    a = 3
else:
    a = ''
# For now don't really check for truth values. So in should return both
# results.
#? str() int()
a

# -----------------
# for flow assignments
# -----------------
@@ -108,16 +121,6 @@ for x in [l(0), l(1), l(2), l(3), l(4), l(5), l(6), l(7), l(8), l(9), l(10),
    b[1]


# -----------------
# syntax errors
# -----------------

# strange slice
z = sorted([1], key = lambda x : x):
#? int()
z[0]


# -----------------
# undefined names
# -----------------

@@ -47,6 +47,17 @@ arr2.app
#? int()
arr.count(1)

x = []
#?
x.pop()
x = [3]
#? int()
x.pop()
x = []
x.append(1.0)
#? float()
x.pop()

# -----------------
# dicts
# -----------------

@@ -3,26 +3,28 @@ Renaming tests. This means search for usages.
I always leave a little bit of space to add room for additions, because the
results always contain position information.
"""
#< 4 (0,4), (3,0), (5,0)
#< 4 (0,4), (3,0), (5,0), (17,0)
def abc(): pass

#< 0 (-3,4), (0,0), (2,0)
#< 0 (-3,4), (0,0), (2,0), (14,0)
abc.d.a.bsaasd.abc.d

abc
# unicode chars shouldn't be a problem.
x['smörbröd'].abc

# With the new parser these statements are not recognized as statements, because
# they are not valid Python.
if 1:
    abc =
else:
    (abc) =

abc =

#< (-3,0), (0,0)
#< (-17,4), (-14,0), (-12,0), (0,0)
abc

abc = 5


Abc = 3

@@ -47,12 +49,13 @@ class Abc():
Abc.d.Abc


#< 4 (0,4), (4,1)
def blub():
#< 4 (0,4), (5,1)
def blubi():
    pass


#< (-4,4), (0,1)
@blub
#< (-5,4), (0,1)
@blubi
def a(): pass


@@ -96,7 +99,7 @@ from import_tree.rename1 import abc
#< (0, 32),
from import_tree.rename1 import not_existing

# Shouldn't work (would raise a NotFoundError, because there's no name.)
# Shouldn't raise an error or do anything weird.
from not_existing import *

# -----------------
@@ -136,6 +139,9 @@ class TestInstanceVar():
    def b(self):
        #< (-4,13), (0,13)
        self._instance_var
        # A call to self used to trigger an error, because it's also a trailer
        # with two children.
        self()


class NestedClass():
@@ -161,7 +167,7 @@ class Super(object):
    def base_method(self):
        #< 13 (0,13), (20,13)
        self.base_var = 1
        #< 13 (0,13), (24,13), (29,13)
        #< 13 (0,13),
        self.instance_var = 1

        #< 8 (0,8),
@@ -190,7 +196,7 @@ class TestClass(Super):

    #< 9 (0,8),
    def just_a_method(self):
        #< (-5,13), (0,13), (-29,13)
        #< (-5,13), (0,13)
        self.instance_var


@@ -276,7 +276,7 @@ def collect_file_tests(lines, lines_to_execute):

def collect_dir_tests(base_dir, test_files, check_thirdparty=False):
    for f_name in os.listdir(base_dir):
        files_to_execute = [a for a in test_files.items() if a[0] in f_name]
        files_to_execute = [a for a in test_files.items() if f_name.startswith(a[0])]
        lines_to_execute = reduce(lambda x, y: x + y[1], files_to_execute, [])
        if f_name.endswith(".py") and (not test_files or files_to_execute):
            skip = None

@@ -34,7 +34,7 @@ def two_params(x, y):
two_params(y=2, x=1)
two_params(1, y=2)

#! 10 type-error-multiple-values
#! 11 type-error-multiple-values
two_params(1, x=2)
#! 17 type-error-too-many-arguments
two_params(1, 2, y=3)

@@ -111,3 +111,9 @@ import import_tree

import_tree.a
import_tree.b

# This is something that raised an error, because it was using a complex
# mixture of Jedi fakes and compiled objects.
import _sre
#! 15 attribute-error
_sre.compile().not_existing

|
||||
@@ -1,7 +1,7 @@
|
||||
def generator():
|
||||
yield 1
|
||||
|
||||
#! 11 type-error-generator
|
||||
#! 12 type-error-generator
|
||||
generator()[0]
|
||||
|
||||
list(generator())[0]
|
||||
|
||||
@@ -27,7 +27,7 @@ def nested_twice(*args1):
|
||||
return nested(*args1)
|
||||
|
||||
nested_twice(2, 3)
|
||||
#! 12 type-error-too-few-arguments
|
||||
#! 13 type-error-too-few-arguments
|
||||
nested_twice(2)
|
||||
#! 19 type-error-too-many-arguments
|
||||
nested_twice(2, 3, 4)
|
||||
@@ -47,13 +47,13 @@ def kwargs_test(**kwargs):
|
||||
return simple2(1, **kwargs)
|
||||
|
||||
kwargs_test(c=3, b=2)
|
||||
#! 11 type-error-too-few-arguments
|
||||
#! 12 type-error-too-few-arguments
|
||||
kwargs_test(c=3)
|
||||
#! 11 type-error-too-few-arguments
|
||||
#! 12 type-error-too-few-arguments
|
||||
kwargs_test(b=2)
|
||||
#! 22 type-error-keyword-argument
|
||||
kwargs_test(b=2, c=3, d=4)
|
||||
##! 11 type-error-multiple-values
|
||||
#! 12 type-error-multiple-values
|
||||
kwargs_test(b=2, c=3, a=4)
|
||||
|
||||
|
||||
@@ -65,10 +65,11 @@ kwargs_nested(c=3)
|
||||
kwargs_nested()
|
||||
#! 19 type-error-keyword-argument
|
||||
kwargs_nested(c=2, d=4)
|
||||
##! 13 type-error-multiple-values
|
||||
#! 14 type-error-multiple-values
|
||||
kwargs_nested(c=2, a=4)
|
||||
#! 13 type-error-multiple-values
|
||||
kwargs_nested(b=3, c=2)
|
||||
# TODO reenable
|
||||
##! 14 type-error-multiple-values
|
||||
#kwargs_nested(b=3, c=2)
|
||||
|
||||
# -----------------
|
||||
# mixed *args/**kwargs
|
||||
@@ -77,7 +78,6 @@ kwargs_nested(b=3, c=2)
|
||||
def simple_mixed(a, b, c):
|
||||
return b
|
||||
|
||||
|
||||
def mixed(*args, **kwargs):
|
||||
return simple_mixed(1, *args, **kwargs)
|
||||
|
||||
@@ -91,15 +91,16 @@ def mixed2(*args, **kwargs):
|
||||
return simple_mixed(1, *args, **kwargs)
|
||||
|
||||
|
||||
#! 6 type-error-too-few-arguments
|
||||
#! 7 type-error-too-few-arguments
|
||||
mixed2(c=2)
|
||||
#! 6 type-error-too-few-arguments
|
||||
#! 7 type-error-too-few-arguments
|
||||
mixed2(3)
|
||||
#! 13 type-error-too-many-arguments
|
||||
mixed2(3, 4, 5)
|
||||
#! 13 type-error-too-many-arguments
|
||||
mixed2(3, 4, c=5)
|
||||
#! 6 type-error-multiple-values
|
||||
# TODO reenable
|
||||
##! 13 type-error-too-many-arguments
|
||||
#mixed2(3, 4, c=5)
|
||||
#! 7 type-error-multiple-values
|
||||
mixed2(3, b=5)
|
||||
|
||||
# -----------------
|
||||
@@ -108,6 +109,11 @@ mixed2(3, b=5)
|
||||
|
||||
#! 12 type-error-star-star
|
||||
simple(1, **[])
|
||||
#! 12 type-error-star-star
|
||||
simple(1, **1)
|
||||
class A(): pass
|
||||
#! 12 type-error-star-star
|
||||
simple(1, **A())
|
||||
|
||||
#! 11 type-error-star
|
||||
simple(1, *1)
|
||||
|
||||
@@ -5,6 +5,7 @@ Test all things related to the ``jedi.api`` module.
from textwrap import dedent

from jedi import api
from jedi._compatibility import is_py3
from pytest import raises


@@ -79,7 +80,10 @@ def test_completion_on_number_literals():
def test_completion_on_hex_literals():
    assert api.Script('0x1..').completions() == []
    _check_number('0x1.', 'int')  # hexadecimal
    _check_number('0b3.', 'int')  # binary
    # Completing binary literals doesn't work if they are not actually binary
    # (invalid statements).
    assert api.Script('0b2.').completions() == []
    _check_number('0b1.', 'int')  # binary
    _check_number('0o7.', 'int')  # octal

    _check_number('0x2e.', 'int')
@@ -98,12 +102,19 @@ def test_completion_on_complex_literals():
    assert api.Script('4j').completions() == []


def test_goto_assignments_on_non_statement():
    with raises(api.NotFoundError):
        api.Script('for').goto_assignments()
def test_goto_assignments_on_non_name():
    assert api.Script('for').goto_assignments() == []

    with raises(api.NotFoundError):
        api.Script('assert').goto_assignments()
    assert api.Script('assert').goto_assignments() == []
    if is_py3:
        assert api.Script('True').goto_assignments() == []
    else:
        # In Python 2.7 True is still a name.
        assert api.Script('True').goto_assignments()[0].description == 'class bool'


def test_goto_definitions_on_non_name():
    assert api.Script('import x', column=0).goto_definitions() == []


def test_goto_definition_not_multiple():

@@ -95,7 +95,7 @@ def test_function_call_signature_in_doc():
        pass
    f""").goto_definitions()
    doc = defs[0].doc
    assert "f(x, y = 1, z = 'a')" in str(doc)
    assert "f(x, y=1, z='a')" in str(doc)


def test_class_call_signature():
@@ -105,7 +105,7 @@ def test_class_call_signature():
            pass
    Foo""").goto_definitions()
    doc = defs[0].doc
    assert "Foo(self, x, y = 1, z = 'a')" in str(doc)
    assert "Foo(self, x, y=1, z='a')" in str(doc)


def test_position_none_if_builtin():
@@ -119,11 +119,21 @@ def test_completion_docstring():
    """
    Jedi should follow imports in certain conditions
    """
    def docstr(src, result):
        c = Script(src).completions()[0]
        assert c.docstring(raw=True, fast=False) == cleandoc(result)

    c = Script('import jedi\njed').completions()[0]
    assert c.docstring(fast=False) == cleandoc(jedi_doc)

    c = Script('import jedi\njedi.Scr').completions()[0]
    assert c.docstring(raw=True, fast=False) == cleandoc(Script.__doc__)
    docstr('import jedi\njedi.Scr', cleandoc(Script.__doc__))

    docstr('abcd=3;abcd', '')
    docstr('"hello"\nabcd=3\nabcd', 'hello')
    # It works with a ; as well.
    docstr('"hello"\nabcd=3;abcd', 'hello')
    # Shouldn't work with a tuple.
    docstr('"hello",0\nabcd=3\nabcd', '')


def test_completion_params():
@@ -148,6 +158,39 @@ def test_signature_params():
    check(Script(s + '\nbar=foo\nbar').goto_assignments())


class TestIsDefinition(TestCase):
    def _def(self, source, index=-1):
        return names(dedent(source), references=True, all_scopes=True)[index]

    def _bool_is_definitions(self, source):
        ns = names(dedent(source), references=True, all_scopes=True)
        # Assure that names are definitely sorted.
        ns = sorted(ns, key=lambda name: (name.line, name.column))
        return [name.is_definition() for name in ns]

    def test_name(self):
        d = self._def('name')
        assert d.name == 'name'
        assert not d.is_definition()

    def test_stmt(self):
        src = 'a = f(x)'
        d = self._def(src, 0)
        assert d.name == 'a'
        assert d.is_definition()
        d = self._def(src, 1)
        assert d.name == 'f'
        assert not d.is_definition()
        d = self._def(src)
        assert d.name == 'x'
        assert not d.is_definition()

    def test_import(self):
        assert self._bool_is_definitions('import x as a') == [False, True]
        assert self._bool_is_definitions('from x import y') == [False, True]
        assert self._bool_is_definitions('from x.z import y') == [False, False, True]


class TestParent(TestCase):
    def _parent(self, source, line=None, column=None):
        defs = Script(dedent(source), line, column).goto_assignments()
@@ -179,7 +222,7 @@ class TestParent(TestCase):
            def bar(): pass
        Foo().bar''')).completions()[0].parent()
        assert parent.name == 'Foo'
        assert parent.type == 'class'
        assert parent.type == 'instance'

        parent = Script('str.join').completions()[0].parent()
        assert parent.name == 'str'
@@ -219,3 +262,63 @@ class TestGotoAssignments(TestCase):
        param = bar.goto_assignments()[0]
        assert param.start_pos == (1, 13)
        assert param.type == 'param'

    def test_class_call(self):
        src = 'from threading import Thread; Thread(group=1)'
        n = names(src, references=True)[-1]
        assert n.name == 'group'
        param_def = n.goto_assignments()[0]
        assert param_def.name == 'group'
        assert param_def.type == 'param'

    def test_parentheses(self):
        n = names('("").upper', references=True)[-1]
        assert n.goto_assignments()[0].name == 'upper'

    def test_import(self):
        nms = names('from json import load', references=True)
        assert nms[0].name == 'json'
        assert nms[0].type == 'import'
        n = nms[0].goto_assignments()[0]
        assert n.name == 'json'
        assert n.type == 'module'

        assert nms[1].name == 'load'
        assert nms[1].type == 'import'
        n = nms[1].goto_assignments()[0]
        assert n.name == 'load'
        assert n.type == 'function'

        nms = names('import os; os.path', references=True)
        assert nms[0].name == 'os'
        assert nms[0].type == 'import'
        n = nms[0].goto_assignments()[0]
        assert n.name == 'os'
        assert n.type == 'module'

        n = nms[2].goto_assignments()[0]
        assert n.name == 'path'
        assert n.type == 'import'

        nms = names('import os.path', references=True)
        n = nms[0].goto_assignments()[0]
        assert n.name == 'os'
        assert n.type == 'module'
        n = nms[1].goto_assignments()[0]
        assert n.name == 'path'
        assert n.type == 'import'

    def test_import_alias(self):
        nms = names('import json as foo', references=True)
        assert nms[0].name == 'json'
        assert nms[0].type == 'import'
        n = nms[0].goto_assignments()[0]
        assert n.name == 'json'
        assert n.type == 'module'

        assert nms[1].name == 'foo'
        assert nms[1].type == 'import'
        ass = nms[1].goto_assignments()
        assert len(ass) == 1
        assert ass[0].name == 'json'
        assert ass[0].type == 'module'

@@ -22,6 +22,9 @@ class TestCallSignatures(TestCase):
    def _run_simple(self, source, name, index=0, column=None, line=1):
        self._run(source, name, index, line, column)

    def test_valid_call(self):
        self._run('str()', 'str', column=4)

    def test_simple(self):
        run = self._run_simple
        s7 = "str().upper().center("
@@ -171,6 +174,27 @@ class TestCallSignatures(TestCase):
    def test_unterminated_strings(self):
        self._run('str(";', 'str', 0)

    def test_whitespace_before_bracket(self):
        self._run('str (', 'str', 0)
        self._run('str (";', 'str', 0)
        # TODO this is not actually valid Python, the newline token should be
        # ignored.
        self._run('str\n(', 'str', 0)

    def test_brackets_in_string_literals(self):
        self._run('str (" (', 'str', 0)
        self._run('str (" )', 'str', 0)

    def test_function_definitions_should_break(self):
        """
        Function definitions (and other tokens that cannot exist within call
        signatures) should break and not be able to return a call signature.
        """
        assert not Script('str(\ndef x').call_signatures()

    def test_flow_call(self):
        assert not Script('if (1').call_signatures()


class TestParams(TestCase):
    def params(self, source, line=None, column=None):
@@ -277,3 +301,12 @@ def test_signature_index():
    assert get(both + 'foo(a=2').index == 1
    assert get(both + 'foo(a=2, b=2').index == 1
    assert get(both + 'foo(a, b, c').index == 0


def test_bracket_start():
    def bracket_start(src):
        signatures = Script(src).call_signatures()
        assert len(signatures) == 1
        return signatures[0].bracket_start

    assert bracket_start('str(') == (1, 3)

@@ -10,10 +10,10 @@ from ..helpers import TestCase

class TestDefinedNames(TestCase):
    def assert_definition_names(self, definitions, names):
        self.assertEqual([d.name for d in definitions], names)
        assert [d.name for d in definitions] == names

    def check_defined_names(self, source, names):
        definitions = api.defined_names(textwrap.dedent(source))
        definitions = api.names(textwrap.dedent(source))
        self.assert_definition_names(definitions, names)
        return definitions

@@ -31,7 +31,7 @@ class TestDefinedNames(TestCase):
        self.check_defined_names("""
        x = Class()
        x.y.z = None
        """, ['x'])
        """, ['x', 'z'])  # TODO is this behavior what we want?

    def test_multiple_assignment(self):
        self.check_defined_names("""

@@ -27,16 +27,14 @@ class MixinTestFullName(object):
    def check(self, source, desired):
        script = jedi.Script(textwrap.dedent(source))
        definitions = getattr(script, type(self).operation)()
        self.assertEqual(definitions[0].full_name, desired)
        for d in definitions:
            self.assertEqual(d.full_name, desired)

    def test_os_path_join(self):
        self.check('import os; os.path.join', 'os.path.join')

    def test_builtin(self):
        self.check('type', 'type')

    def test_from_import(self):
        self.check('from os import path', 'os.path')
        self.check('TypeError', 'TypeError')


class TestFullNameWithGotoDefinitions(MixinTestFullName, TestCase):
@@ -47,7 +45,10 @@ class TestFullNameWithGotoDefinitions(MixinTestFullName, TestCase):
        self.check("""
        import re
        any_re = re.compile('.*')
        any_re""", 're.RegexObject')
        any_re""", '_sre.compile.SRE_Pattern')

    def test_from_import(self):
        self.check('from os import path', 'os.path')


class TestFullNameWithCompletions(MixinTestFullName, TestCase):
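
A definition's full_name is its dotted path; the tightened check above now verifies it for every returned definition, not just the first. A small usage sketch:

import jedi

# full_name resolves to the dotted module path of a definition.
d = jedi.Script('import os; os.path.join').goto_definitions()[0]
assert d.full_name == 'os.path.join'
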
@@ -67,7 +67,9 @@ def test_star_import_cache_duration():
    old, jedi.settings.star_import_cache_validity = \
        jedi.settings.star_import_cache_validity, new

    cache._star_import_cache = {}  # first empty...
    dct = cache._time_caches['star_import_cache_validity']
    old_dct = dict(dct)
    dct.clear()  # first empty...
    # path needs to be not-None (otherwise caching effects are not visible)
    jedi.Script('', 1, 0, '').completions()
    time.sleep(2 * new)
@@ -75,7 +77,8 @@ def test_star_import_cache_duration():

    # reset values
    jedi.settings.star_import_cache_validity = old
    assert len(cache._star_import_cache) == 1
    assert len(dct) == 1
    dct = old_dct
    cache._star_import_cache = {}
@@ -0,0 +1 @@
/path/from/egg-link

@@ -4,7 +4,7 @@ Python 2.X)
"""
import jedi
from jedi._compatibility import u
from jedi.parser import Parser
from jedi.parser import Parser, load_grammar
from .. import helpers


@@ -12,7 +12,7 @@ def test_explicit_absolute_imports():
    """
    Detect modules with ``from __future__ import absolute_import``.
    """
    parser = Parser(u("from __future__ import absolute_import"), "test.py")
    parser = Parser(load_grammar(), u("from __future__ import absolute_import"), "test.py")
    assert parser.module.has_explicit_absolute_import


@@ -20,7 +20,7 @@ def test_no_explicit_absolute_imports():
    """
    Detect modules without ``from __future__ import absolute_import``.
    """
    parser = Parser(u("1"), "test.py")
    parser = Parser(load_grammar(), u("1"), "test.py")
    assert not parser.module.has_explicit_absolute_import


@@ -30,7 +30,7 @@ def test_dont_break_imports_without_namespaces():
    assume that all imports have non-``None`` namespaces.
    """
    src = u("from __future__ import absolute_import\nimport xyzzy")
    parser = Parser(src, "test.py")
    parser = Parser(load_grammar(), src, "test.py")
    assert parser.module.has_explicit_absolute_import
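
The change that recurs throughout this diff is the parser's new entry point: a grammar is loaded once via load_grammar() and passed explicitly as the first argument to Parser (and, below, to Evaluator and FastParser). A minimal sketch of the new convention, taken from the tests above:

from jedi._compatibility import u
from jedi.parser import Parser, load_grammar

# The grammar is now an explicit dependency instead of an implicit default.
parser = Parser(load_grammar(), u("from __future__ import absolute_import"), "test.py")
assert parser.module.has_explicit_absolute_import
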
test/test_evaluate/test_annotations.py (new file, 29 lines)
@@ -0,0 +1,29 @@
from textwrap import dedent

import jedi
import pytest


@pytest.mark.skipif('sys.version_info[0] < 3')
def test_simple_annotations():
    """
    Annotations only exist in Python 3.
    At the moment we ignore them. So they should be parsed and not interfere
    with anything.
    """

    source = dedent("""\
    def annot(a:3):
        return a

    annot('')""")

    assert [d.name for d in jedi.Script(source).goto_definitions()] == ['str']

    source = dedent("""\

    def annot_ret(a:3) -> 3:
        return a

    annot_ret('')""")
    assert [d.name for d in jedi.Script(source).goto_definitions()] == ['str']
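
The new test file pins down the intended behavior: Python 3 annotations are parsed but deliberately ignored by inference, so even a nonsensical annotation like a:3 does not disturb the result. A quick standalone sketch:

import jedi

# Inference follows the actual call argument (''), not the annotation,
# so the inferred definition is `str`.
source = "def annot(a:3):\n    return a\n\nannot('')"
assert [d.name for d in jedi.Script(source).goto_definitions()] == ['str']
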
@@ -6,7 +6,7 @@ from jedi.evaluate.sys_path import (_get_parent_dir_with_file,
                                    _get_buildout_scripts,
                                    _check_module)
from jedi.evaluate import Evaluator
from jedi.parser import Parser
from jedi.parser import Parser, load_grammar

from ..helpers import cwd_at

@@ -35,8 +35,9 @@ def test_append_on_non_sys_path():

        d = Dummy()
        d.path.append('foo')"""))
    p = Parser(SRC)
    paths = _check_module(Evaluator(), p.module)
    grammar = load_grammar()
    p = Parser(grammar, SRC)
    paths = _check_module(Evaluator(grammar), p.module)
    assert len(paths) > 0
    assert 'foo' not in paths

@@ -45,8 +46,9 @@ def test_path_from_invalid_sys_path_assignment():
    SRC = dedent(u("""
        import sys
        sys.path = 'invalid'"""))
    p = Parser(SRC)
    paths = _check_module(Evaluator(), p.module)
    grammar = load_grammar()
    p = Parser(grammar, SRC)
    paths = _check_module(Evaluator(grammar), p.module)
    assert len(paths) > 0
    assert 'invalid' not in paths

@@ -67,7 +69,8 @@ def test_path_from_sys_path_assignment():

        if __name__ == '__main__':
            sys.exit(important_package.main())"""))
    p = Parser(SRC)
    paths = _check_module(Evaluator(), p.module)
    grammar = load_grammar()
    p = Parser(grammar, SRC)
    paths = _check_module(Evaluator(grammar), p.module)
    assert 1 not in paths
    assert '/home/test/.buildout/eggs/important_package.egg' in paths
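
Note the parallel change here: the Evaluator receives the same grammar instance as the Parser. Stripped of the buildout scaffolding, the pattern is roughly (a sketch; _check_module is private API and the source string is illustrative):

from jedi._compatibility import u
from jedi.parser import Parser, load_grammar
from jedi.evaluate import Evaluator
from jedi.evaluate.sys_path import _check_module

# Parser and Evaluator now share one explicitly loaded grammar.
grammar = load_grammar()
p = Parser(grammar, u("import sys\nsys.path.append('/extra')"))
paths = _check_module(Evaluator(grammar), p.module)  # sys.path entries found in the module
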
@@ -1,12 +1,13 @@
from jedi._compatibility import builtins, is_py3
from jedi.parser.representation import Function
from jedi.parser import load_grammar
from jedi.parser.tree import Function
from jedi.evaluate import compiled, representation
from jedi.evaluate import Evaluator
from jedi import Script


def test_simple():
    e = Evaluator()
    e = Evaluator(load_grammar())
    bltn = compiled.CompiledObject(builtins)
    obj = compiled.CompiledObject('_str_', bltn)
    upper = e.find_types(obj, 'upper')
@@ -17,7 +18,7 @@ def test_simple():


def test_fake_loading():
    assert isinstance(compiled.create(Evaluator(), next), Function)
    assert isinstance(compiled.create(Evaluator(load_grammar()), next), Function)

    string = compiled.builtin.get_subscope_by_name('str')
    from_name = compiled._create_from_name(
@@ -29,7 +30,7 @@ def test_fake_loading():


def test_fake_docstr():
    assert compiled.create(Evaluator(), next).raw_doc == next.__doc__
    assert compiled.create(Evaluator(load_grammar()), next).raw_doc == next.__doc__


def test_parse_function_doc_illegal_docstr():
@@ -1,29 +0,0 @@
from jedi._compatibility import unicode

from jedi.evaluate import helpers
from jedi.parser import representation as pr
from jedi.parser import Parser


def test_deep_ast_copy():
    name = pr.Name(object, [('hallo', (0, 0))], (0, 0), (0, 0))

    # fast parent copy should switch parent
    new_name = helpers.deep_ast_copy(name)
    assert new_name.names[0].parent == new_name


def test_statement_elements_in_statement():
    def get_stmt_els(string):
        p = Parser(unicode(string))
        return helpers.statement_elements_in_statement(p.module.statements[0])

    # list comprehension
    stmt_els = get_stmt_els('foo = [(bar(f), f) for f in baz]')
    # stmt_els: count all names: 6; + count all arrays: 2 = 8
    assert len(stmt_els) == 8

    # lambda
    stmt_els = get_stmt_els('foo = [lambda x: y]')
    # stmt_els: count all names: 3; + count all arrays: 1 = 4
    assert len(stmt_els) == 4
@@ -28,21 +28,11 @@ def test_import_not_in_sys_path():
    assert a[0].name == 'str'


def setup_function(function):
    sys.path.append(os.path.join(
        os.path.dirname(__file__), 'flask-site-packages'))


def teardown_function(function):
    path = os.path.join(os.path.dirname(__file__), 'flask-site-packages')
    sys.path.remove(path)


@pytest.mark.parametrize("script,name", [
    ("from flask.ext import foo; foo.", "Foo"),  # flask_foo.py
    ("from flask.ext import bar; bar.", "Bar"),  # flaskext/bar.py
    ("from flask.ext import baz; baz.", "Baz"),  # flask_baz/__init__.py
    ("from flask.ext import moo; moo.", "Moo"),  # flaskext/moo/__init__.py
    ("from flask.ext.", "foo"),
    ("from flask.ext.", "bar"),
    ("from flask.ext.", "baz"),
@@ -55,5 +45,9 @@ def teardown_function(function):
def test_flask_ext(script, name):
    """flask.ext.foo is really imported from flaskext.foo or flask_foo.
    """
    assert name in [c.name for c in jedi.Script(script).completions()]

    path = os.path.join(os.path.dirname(__file__), 'flask-site-packages')
    sys.path.append(path)
    try:
        assert name in [c.name for c in jedi.Script(script).completions()]
    finally:
        sys.path.remove(path)
@@ -29,9 +29,11 @@ def test_namespace_package():
    # completion
    completions = jedi.Script('from pkg import ').completions()
    names = [str(c.name) for c in completions]  # str because of unicode
    compare = ['foo', 'ns1_file', 'ns1_folder', 'ns2_folder', 'ns2_file']
    compare = ['foo', 'ns1_file', 'ns1_folder', 'ns2_folder', 'ns2_file',
               'pkg_resources', 'pkgutil', '__name__', '__path__',
               '__package__', '__file__', '__doc__']
    # must at least contain these items, other items are not important
    assert not (set(compare) - set(names))
    assert set(compare) == set(names)

    tests = {
        'from pkg import ns2_folder as x': 'ns2_folder!',
@@ -1,71 +0,0 @@
from jedi._compatibility import u
from jedi.parser import Parser
from jedi.evaluate import precedence


def parse_tree(statement_string, is_slice=False):
    p = Parser(u(statement_string), no_docstr=True)
    stmt = p.module.statements[0]
    if is_slice:
        # get the part of the execution that is the slice
        stmt = stmt.expression_list()[0].next[0]
    iterable = stmt.expression_list()
    pr = precedence.create_precedence(iterable)
    if isinstance(pr, precedence.Precedence):
        return pr.parse_tree(strip_literals=True)
    else:
        try:
            return pr.value  # Literal
        except AttributeError:
            return pr


def test_simple():
    assert parse_tree('1+2') == (1, '+', 2)
    assert parse_tree('+2') == (None, '+', 2)
    assert parse_tree('1+2-3') == ((1, '+', 2), '-', 3)


def test_prefixed():
    assert parse_tree('--2') == (None, '-', (None, '-', 2))
    assert parse_tree('1 and not - 2') == (1, 'and', (None, 'not', (None, '-', 2)))


def test_invalid():
    """Should just return a simple operation."""
    assert parse_tree('1 +') == 1
    assert parse_tree('+') is None

    assert parse_tree('* 1') == 1
    assert parse_tree('1 * * 1') == (1, '*', 1)

    # invalid operator
    assert parse_tree('1 not - 1') == (1, '-', 1)
    assert parse_tree('1 - not ~1') == (1, '-', (None, '~', 1))

    # not not allowed
    assert parse_tree('1 is not not 1') == (1, 'is not', 1)


def test_multi_part():
    assert parse_tree('1 not in 2') == (1, 'not in', 2)
    assert parse_tree('1 is not -1') == (1, 'is not', (None, '-', 1))
    assert parse_tree('1 is 1') == (1, 'is', 1)


def test_power():
    assert parse_tree('2 ** 3 ** 4') == (2, '**', (3, '**', 4))


def test_slice():
    """
    Should be parsed as normal operators. This is not proper Python syntax,
    but the warning shouldn't be given in the precedence generation.
    """
    assert parse_tree('[0][2+1:3]', is_slice=True) == ((2, '+', 1), ':', 3)
    assert parse_tree('[0][:]', is_slice=True) == (None, ':', None)
    assert parse_tree('[0][1:]', is_slice=True) == (1, ':', None)
    assert parse_tree('[0][:2]', is_slice=True) == (None, ':', 2)

    # 3 part slice
    assert parse_tree('[0][:2:1]', is_slice=True) == ((None, ':', 2), ':', 1)
@@ -5,7 +5,7 @@ from jedi import Script

def get_definition_and_evaluator(source):
    d = Script(dedent(source)).goto_definitions()[0]
    return d._name.parent.parent, d._evaluator
    return d._name.parent, d._evaluator


def test_function_execution():
@@ -1,12 +1,15 @@
import os

from jedi._compatibility import unicode
from jedi.parser import Parser
from jedi.parser import Parser, load_grammar
from jedi.evaluate import sys_path, Evaluator


def test_paths_from_assignment():
    def paths(src):
        stmt = Parser(unicode(src)).module.statements[0]
        return list(sys_path._paths_from_assignment(Evaluator(), stmt))
        grammar = load_grammar()
        stmt = Parser(grammar, unicode(src)).module.statements[0]
        return list(sys_path._paths_from_assignment(Evaluator(grammar), stmt))

    assert paths('sys.path[0:0] = ["a"]') == ['a']
    assert paths('sys.path = ["b", 1, x + 3, y, "c"]') == ['b', 'c']
@@ -14,3 +17,15 @@ def test_paths_from_assignment():

    # Fail for complicated examples.
    assert paths('sys.path, other = ["a"], 2') == []


def test_get_sys_path(monkeypatch):
    monkeypatch.setenv('VIRTUAL_ENV', os.path.join(os.path.dirname(__file__),
                                                   'egg-link', 'venv'))

    def sitepackages_dir(venv):
        return os.path.join(venv, 'lib', 'python3.4', 'site-packages')

    monkeypatch.setattr('jedi.evaluate.sys_path._get_venv_sitepackages',
                        sitepackages_dir)

    assert '/path/from/egg-link' in sys_path.get_sys_path()
@@ -16,11 +16,22 @@ def test_goto_definition_on_import():

@cwd_at('jedi')
def test_complete_on_empty_import():
    assert Script("from datetime import").completions()[0].name == 'import'
    # should just list the files in the directory
    assert 10 < len(Script("from .", path='').completions()) < 30
    assert 10 < len(Script("from . import", 1, 5, '').completions()) < 30
    assert 10 < len(Script("from . import classes", 1, 5, '').completions()) < 30
    assert len(Script("import").completions()) == 0

    # Global import
    assert len(Script("from . import", 1, 5, '').completions()) > 30
    # relative import
    assert 10 < len(Script("from . import", 1, 6, '').completions()) < 30

    # Global import
    assert len(Script("from . import classes", 1, 5, '').completions()) > 30
    # relative import
    assert 10 < len(Script("from . import classes", 1, 6, '').completions()) < 30

    wanted = set(['ImportError', 'import', 'ImportWarning'])
    assert set([c.name for c in Script("import").completions()]) == wanted
    if not is_py26:  # python 2.6 doesn't always come with a library `import*`.
        assert len(Script("import import", path='').completions()) > 0
@@ -63,6 +74,7 @@ def test_after_from():
    completions = Script(source, column=column).completions()
    assert [c.name for c in completions] == result

    check('\nfrom os. ', ['path'])
    check('\nfrom os ', ['import'])
    check('from os ', ['import'])
    check('\nfrom os import whatever', ['import'], len('from os im'))
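
Several of the adjusted assertions hinge on the column argument: with the cursor at column 5 of "from . import" (just before the dot) Jedi completes global modules, while column 6 (just after the dot) yields only the handful of modules relative to the current file. Sketch:

from jedi import Script

source = "from . import"
global_completions = Script(source, 1, 5, '').completions()    # > 30 names
relative_completions = Script(source, 1, 6, '').completions()  # between 10 and 30 names
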
@@ -1,9 +1,8 @@
"""
Test of keywords and ``jedi.keywords``
"""
import jedi
from jedi import Script, common
import pytest
from jedi._compatibility import is_py3
from jedi import Script


def test_goto_assignments_keyword():
@@ -18,13 +17,13 @@ def test_goto_assignments_keyword():
def test_keyword():
    """ github jedi-vim issue #44 """
    defs = Script("print").goto_definitions()
    assert [d.doc for d in defs]
    if is_py3:
        assert [d.doc for d in defs]
    else:
        assert defs == []

    with pytest.raises(jedi.NotFoundError):
        Script("import").goto_assignments()
    assert Script("import").goto_assignments() == []

    completions = Script("import", 1, 1).completions()
    assert len(completions) == 0
    with common.ignored(jedi.NotFoundError):  # TODO shouldn't throw that.
        defs = Script("assert").goto_definitions()
        assert len(defs) == 1
    assert len(completions) > 10 and 'if' in [c.name for c in completions]
    assert Script("assert").goto_definitions() == []
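
The behavioral change in this file: keyword lookups no longer raise jedi.NotFoundError but simply return an empty list. The new contract, standalone:

from jedi import Script

# Keywords are not resolvable names; instead of raising NotFoundError,
# the API now reports no results.
assert Script("import").goto_assignments() == []
assert Script("assert").goto_definitions() == []
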
test/test_new_parser.py (new file, 13 lines)
@@ -0,0 +1,13 @@
from jedi._compatibility import u
from jedi.parser import Parser, load_grammar


def test_basic_parsing():
    def compare(string):
        """Generates the AST object and then regenerates the code."""
        assert Parser(load_grammar(), string).module.get_code() == string

    compare(u('\na #pass\n'))
    compare(u('wblabla* 1\t\n'))
    compare(u('def x(a, b:3): pass\n'))
    compare(u('assert foo\n'))
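
The invariant these four cases pin down is that parsing and regenerating code is lossless, even for input that is not valid Python. The same round trip outside the test harness:

from jedi._compatibility import u
from jedi.parser import Parser, load_grammar

# The tree keeps enough whitespace/prefix information to reproduce
# the source exactly.
source = u('def x(a, b:3): pass\n')
assert Parser(load_grammar(), source).module.get_code() == source
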
@@ -3,6 +3,7 @@ from textwrap import dedent
import jedi
from jedi._compatibility import u
from jedi import cache
from jedi.parser import load_grammar
from jedi.parser.fast import FastParser


@@ -12,15 +13,15 @@ def test_add_to_end():
    help of caches, this is an example that didn't work.
    """

    a = """
class Abc():
    def abc(self):
        self.x = 3
    a = dedent("""
    class Abc():
        def abc(self):
            self.x = 3

class Two(Abc):
    def h(self):
        self
"""  # ^ here is the first completion
    class Two(Abc):
        def h(self):
            self
    """)  # ^ here is the first completion

    b = "    def g(self):\n" \
        "        self."
@@ -54,30 +55,369 @@ def test_carriage_return_splitting():
        pass
    '''))
    source = source.replace('\n', '\r\n')
    p = FastParser(source)
    assert [str(n) for n in p.module.get_defined_names()] == ['Foo']
    p = FastParser(load_grammar(), source)
    assert [n.value for lst in p.module.names_dict.values() for n in lst] == ['Foo']


def test_split_parts():
    cache.parser_cache.pop(None, None)

    def splits(source):
        class Mock(FastParser):
            def __init__(self, *args):
                self.number_of_splits = 0

        return tuple(FastParser._split_parts(Mock(None, None), source))

    def test(*parts):
        assert splits(''.join(parts)) == parts

    test('a\n\n', 'def b(): pass\n', 'c\n')
    test('a\n', 'def b():\n pass\n', 'c\n')
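
Before the helper below, a note on its counters; this reading is inferred from how the assertions use them, not from documented API:

# p.number_of_splits     - top-level chunks the splitter produced
# p.number_parsers_used  - chunks that actually had to be (re)parsed this run
# p.number_of_misses     - attempts to reuse a cached chunk that failed
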
def check_fp(src, number_parsers_used, number_of_splits=None, number_of_misses=0):
    if number_of_splits is None:
        number_of_splits = number_parsers_used

    p = FastParser(load_grammar(), u(src))
    cache.save_parser(None, None, p, pickling=False)

    # TODO Don't change get_code, the whole thing should be the same.
    # -> Need to refactor the parser first, though.
    assert src == p.module.get_code()
    assert p.number_of_splits == number_of_splits
    assert p.number_parsers_used == number_parsers_used
    assert p.number_of_misses == number_of_misses
    return p.module


def test_change_and_undo():

    def fp(src):
        p = FastParser(u(src))
        cache.save_parser(None, None, p, pickling=False)

        # TODO Don't change get_code, the whole thing should be the same.
        # -> Need to refactor the parser first, though.
        assert src == p.module.get_code()[:-1]

    # Empty the parser cache for the path None.
    cache.parser_cache.pop(None, None)
    func_before = 'def func():\n pass\n'
    fp(func_before + 'a')
    fp(func_before + 'b')
    fp(func_before + 'a')
    # Parse the function and a.
    check_fp(func_before + 'a', 2)
    # Parse just b.
    check_fp(func_before + 'b', 1, 2)
    # b has changed to a again, so parse that.
    check_fp(func_before + 'a', 1, 2)
    # Same as before, no parsers should be used.
    check_fp(func_before + 'a', 0, 2)

    # Getting rid of an old parser: Still no parsers used.
    check_fp('a', 0, 1)
    # Now the file has completely changed and we need to parse.
    check_fp('b', 1, 1)
    # And again.
    check_fp('a', 1, 1)


def test_positions():
    # Empty the parser cache for the path None.
    cache.parser_cache.pop(None, None)
    fp('a')
    fp('b')
    fp('a')

    func_before = 'class A:\n pass\n'
    m = check_fp(func_before + 'a', 2)
    assert m.start_pos == (1, 0)
    assert m.end_pos == (3, 1)

    m = check_fp('a', 0, 1)
    assert m.start_pos == (1, 0)
    assert m.end_pos == (1, 1)


def test_if():
    src = dedent('''\
    def func():
        x = 3
        if x:
            def y():
                return x
        return y()

    func()
    ''')

    # Two parsers needed, one for pass and one for the function.
    check_fp(src, 2)
    assert [d.name for d in jedi.Script(src, 8, 6).goto_definitions()] == ['int']


def test_if_simple():
    src = dedent('''\
    if 1:
        a = 3
    ''')
    check_fp(src + 'a', 1)
    check_fp(src + "else:\n    a = ''\na", 1)


def test_for():
    src = dedent("""\
    for a in [1,2]:
        a

    for a1 in 1,"":
        a1
    """)
    check_fp(src, 1)


def test_class_with_class_var():
    src = dedent("""\
    class SuperClass:
        class_super = 3
        def __init__(self):
            self.foo = 4
    pass
    """)
    check_fp(src, 3)


def test_func_with_if():
    src = dedent("""\
    def recursion(a):
        if foo:
            return recursion(a)
        else:
            if bar:
                return inexistent
            else:
                return a
    """)
    check_fp(src, 1)


def test_decorator():
    src = dedent("""\
    class Decorator():
        @memoize
        def dec(self, a):
            return a
    """)
    check_fp(src, 2)


def test_nested_funcs():
    src = dedent("""\
    def memoize(func):
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper
    """)
    check_fp(src, 3)


def test_class_and_if():
    src = dedent("""\
    class V:
        def __init__(self):
            pass

    if 1:
        c = 3

    def a_func():
        return 1

    # COMMENT
    a_func()""")
    check_fp(src, 5, 5)
    assert [d.name for d in jedi.Script(src).goto_definitions()] == ['int']


def test_func_with_for_and_comment():
    # The first newline is important, leave it. It should not trigger another
    # parser split.
    src = dedent("""\

    def func():
        pass


    for a in [1]:
        # COMMENT
        a""")
    check_fp(src, 2)
    # We don't need to parse the for loop, but we need to parse the other two,
    # because the split is in a different place.
    check_fp('a\n' + src, 2, 3)


def test_multi_line_params():
    src = dedent("""\
    def x(a,
          b):
        pass

    foo = 1
    """)
    check_fp(src, 2)


def test_one_statement_func():
    src = dedent("""\
    first
    def func(): a
    """)
    check_fp(src + 'second', 3)
    # Empty the parser cache, because we're not interested in modifications
    # here.
    cache.parser_cache.pop(None, None)
    check_fp(src + 'def second():\n a', 3)


def test_class_func_if():
    src = dedent("""\
    class Class:
        def func(self):
            if 1:
                a
            else:
                b

    pass
    """)
    check_fp(src, 3)


def test_for_on_one_line():
    src = dedent("""\
    foo = 1
    for x in foo: pass

    def hi():
        pass
    """)
    check_fp(src, 2)

    src = dedent("""\
    def hi():
        for x in foo: pass
        pass

    pass
    """)
    check_fp(src, 2)

    src = dedent("""\
    def hi():
        for x in foo: pass

        def nested():
            pass
    """)
    check_fp(src, 2)


def test_multi_line_for():
    src = dedent("""\
    for x in [1,
              2]:
        pass

    pass
    """)
    check_fp(src, 1)


def test_wrong_indentation():
    src = dedent("""\
    def func():
        a
        b
    a
    """)
    check_fp(src, 1)

    src = dedent("""\
    def complex():
        def nested():
            a
            b
        a

    def other():
        pass
    """)
    check_fp(src, 3)


def test_open_parentheses():
    func = 'def func():\n a'
    p = FastParser(load_grammar(), u('isinstance(\n\n' + func))
    # As you can see, the isinstance call cannot be seen anymore after
    # get_code, because it isn't valid code.
    assert p.module.get_code() == '\n\n' + func
    assert p.number_of_splits == 2
    assert p.number_parsers_used == 2
    cache.save_parser(None, None, p, pickling=False)

    # Now with a correct parser it should work perfectly well.
    check_fp('isinstance()\n' + func, 1, 2)


def test_strange_parentheses():
    src = dedent("""
    class X():
        a = (1
             if 1 else 2)
        def x():
            pass
    """)
    check_fp(src, 2)


def test_backslash():
    src = dedent(r"""
    a = 1\
        if 1 else 2
    def x():
        pass
    """)
    check_fp(src, 2)

    src = dedent(r"""
    def x():
        a = 1\
            if 1 else 2
        def y():
            pass
    """)
    # The dangling if leads to not splitting where we theoretically could
    # split.
    check_fp(src, 2)

    src = dedent(r"""
    def first():
        if foo \
                and bar \
                or baz:
            pass
    def second():
        pass
    """)
    check_fp(src, 2)


def test_fake_parentheses():
    """
    The fast parser splitting counts parentheses, but not as correct tokens.
    Therefore parentheses in string tokens are included as well. This needs to
    be accounted for.
    """
    src = dedent(r"""
    def x():
        a = (')'
             if 1 else 2)
    def y():
        pass
    def z():
        pass
    """)
    check_fp(src, 3, 2, 1)


def test_incomplete_function():
@@ -3,7 +3,7 @@ import difflib
import pytest

from jedi._compatibility import u
from jedi.parser import Parser
from jedi.parser import Parser, load_grammar

code_basic_features = u('''
"""A mod docstring"""
@@ -44,21 +44,19 @@ def diff_code_assert(a, b, n=4):
def test_basic_parsing():
    """Validate the parsing features"""

    prs = Parser(code_basic_features)
    prs = Parser(load_grammar(), code_basic_features)
    diff_code_assert(
        code_basic_features,
        prs.module.get_code2()
        prs.module.get_code()
    )


@pytest.mark.skipif('True', reason='Not yet working.')
def test_operators():
    src = u('5 * 3')
    prs = Parser(src)
    prs = Parser(load_grammar(), src)
    diff_code_assert(src, prs.module.get_code())


@pytest.mark.skipif('True', reason='Broke get_code support for yield/return statements.')
def test_get_code():
    """Use the same code that the parser also generates, to compare"""
    s = u('''"""a docstring"""
@@ -84,4 +82,24 @@ def method_with_docstring():
    """class docstr"""
    pass
''')
    assert Parser(s).module.get_code() == s
    assert Parser(load_grammar(), s).module.get_code() == s


def test_end_newlines():
    """
    The Python grammar explicitly needs a newline at the end. Jedi, though,
    still wants to be able to return the exact same code without the
    additional newline the parser needs.
    """
    def test(source, end_pos):
        module = Parser(load_grammar(), u(source)).module
        assert module.get_code() == source
        assert module.end_pos == end_pos

    test('a', (1, 1))
    test('a\n', (2, 0))
    test('a\nb', (2, 1))
    test('a\n#comment\n', (3, 0))
    test('a\n#comment', (2, 8))
    test('a#comment', (1, 9))
    test('def a():\n pass', (2, 5))
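
end_pos, as the new test pins down, is a (line, column) tuple with 1-based lines and 0-based columns, pointing just past the last character; 'a\n' therefore ends at (2, 0) because the trailing newline opens an empty second line. The same check, standalone:

from jedi._compatibility import u
from jedi.parser import Parser, load_grammar

# Lines are 1-based, columns 0-based; end_pos points past the last character.
module = Parser(load_grammar(), u('a\nb')).module
assert module.end_pos == (2, 1)
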
Some files were not shown because too many files have changed in this diff.