Fix conflicts.

This commit is contained in:
ANtlord
2020-01-22 11:12:09 +02:00
228 changed files with 6062 additions and 2538 deletions


@@ -11,3 +11,4 @@ omit =
exclude_lines =
# Don't complain about missing debug-only code:
def __repr__
debug.warning

.github/FUNDING.yml (new file)

@@ -0,0 +1 @@
github: [davidhalter]


@@ -6,6 +6,7 @@ python:
- 3.5
- 3.6
- 3.7
- 3.8
env:
- JEDI_TEST_ENVIRONMENT=27
@@ -13,27 +14,31 @@ env:
- JEDI_TEST_ENVIRONMENT=35
- JEDI_TEST_ENVIRONMENT=36
- JEDI_TEST_ENVIRONMENT=37
- JEDI_TEST_ENVIRONMENT=38
- JEDI_TEST_ENVIRONMENT=interpreter
matrix:
include:
- python: 3.6
- python: 3.7
env:
- TOXENV=cov
- JEDI_TEST_ENVIRONMENT=36
- TOXENV=cov-py37
- JEDI_TEST_ENVIRONMENT=37
# For now ignore pypy, there are so many issues that we don't really need
# to run it.
#- python: pypy
- python: 3.8-dev
env:
- JEDI_TEST_ENVIRONMENT=38
# The 3.9 dev build does not seem to be available at the end of 2019.
#- python: 3.9-dev
# env:
# - JEDI_TEST_ENVIRONMENT=39
install:
- pip install --quiet tox-travis
- sudo apt-get -y install python3-venv
script:
- |
# Setup/install Python for $JEDI_TEST_ENVIRONMENT.
set -ex
test_env_version=${JEDI_TEST_ENVIRONMENT:0:1}.${JEDI_TEST_ENVIRONMENT:1:1}
if [ "$TRAVIS_PYTHON_VERSION" != "$test_env_version" ]; then
if [ "$TRAVIS_PYTHON_VERSION" != "$test_env_version" ] && [ "$JEDI_TEST_ENVIRONMENT" != "interpreter" ]; then
python_bin=python$test_env_version
python_path="$(which $python_bin || true)"
if [ -z "$python_path" ]; then
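The script above turns ``JEDI_TEST_ENVIRONMENT`` (e.g. ``38``) into a dotted version with Bash substring expansion; a standalone sketch of the same transformation:

```shell
# ${var:offset:length} extracts a substring; "38" becomes "3.8".
JEDI_TEST_ENVIRONMENT=38
test_env_version=${JEDI_TEST_ENVIRONMENT:0:1}.${JEDI_TEST_ENVIRONMENT:1:1}
echo "$test_env_version"   # prints 3.8
```

Note that this assumes single-digit major and minor components; a value like ``310`` would come out as ``3.1``.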
@@ -59,7 +64,7 @@ script:
- tox
after_script:
- |
if [ $TOXENV == "cov" ]; then
if [ $TOXENV == "cov-py37" ]; then
pip install --quiet codecov coveralls
coverage xml
coverage report -m


@@ -53,5 +53,7 @@ micbou (@micbou)
Dima Gerasimov (@karlicoss) <karlicoss@gmail.com>
Max Woerner Chase (@mwchase) <max.chase@gmail.com>
Johannes Maria Frank (@jmfrank63) <jmfrank63@gmail.com>
Shane Steinert-Threlkeld (@shanest) <ssshanest@gmail.com>
Tim Gates (@timgates42) <tim.gates@iress.com>
Note: (@user) means a github user name.


@@ -3,6 +3,42 @@
Changelog
---------
0.16.0 (2020--)
+++++++++++++++++++
- **Added** ``Script.get_context`` to get information where you currently are.
- Goto on a function/attribute in a class now goes to the definition in its
super class.
- Dict key completions are now working, e.g. ``d = {1000: 3}; d[10`` will
  expand to ``1000``.
- Completion for "proxies" works now. These are classes that have a
``__getattr__(self, name)`` method that does a ``return getattr(x, name)``.
- Understanding of Pytest fixtures.
- Tensorflow, Numpy and Pandas completions should now be about 4-10x faster
after loading them initially.
- Big **Script API Changes**:
- The line and column parameters of ``jedi.Script`` are now deprecated
- ``completions`` deprecated, use ``complete`` instead
- ``goto_assignments`` deprecated, use ``goto`` instead
- ``goto_definitions`` deprecated, use ``infer`` instead
- ``call_signatures`` deprecated, use ``get_signatures`` instead
- ``usages`` deprecated, use ``get_references`` instead
- ``jedi.names`` deprecated, use ``jedi.Script(...).get_names()`` instead
- ``BaseDefinition.goto_assignments`` renamed to ``BaseDefinition.goto``
- Python 2 support is deprecated; for this release it is best effort. Python 2
  has reached the end of its life, so now it is just about a smooth transition.
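The "proxies" item above refers to classes that forward attribute access through ``__getattr__``; a minimal sketch of that pattern (the ``Proxy`` name here is illustrative, not from Jedi):

```python
import json

class Proxy:
    """Forwards unknown attribute lookups to a wrapped object."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # Only called for attributes not found on Proxy itself.
        return getattr(self._wrapped, name)

p = Proxy(json)
# Jedi 0.16 follows the getattr() call back to the wrapped object, so it
# can now complete e.g. `p.loa` to `loads`.
result = p.loads('{"a": 1}')
```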
0.15.2 (2019-12-20)
+++++++++++++++++++
- Signatures are now detected a lot better
- Add fuzzy completions with ``Script(...).completions(fuzzy=True)``
- Files bigger than one MB (about 20kLOC) get cropped to avoid getting
stuck completely.
- Many small Bugfixes
- A big refactoring around contexts/values
0.15.1 (2019-08-13)
+++++++++++++++++++
@@ -23,7 +59,7 @@ New APIs:
- ``Definition.get_signatures() -> List[Signature]``. Signatures are similar to
``CallSignature``. ``Definition.params`` is therefore deprecated.
- ``Signature.to_string()`` to format call signatures.
- ``Signature.to_string()`` to format signatures.
- ``Signature.params -> List[ParamDefinition]``, ParamDefinition has the
following additional attributes ``infer_default()``, ``infer_annotation()``,
``to_string()``, and ``kind``.


@@ -32,7 +32,7 @@ Jedi has a focus on autocompletion and goto functionality. Jedi is fast and is
very well tested. It understands Python and stubs on a deep level.
Jedi has support for different goto functions. It's possible to search for
usages and list names in a Python file to get information about them.
references and list names in a Python file to get information about them.
Jedi uses a very simple API to connect with IDEs. There's a reference
implementation as a `VIM-Plugin <https://github.com/davidhalter/jedi-vim>`_,
@@ -128,9 +128,9 @@ Autocompletion / Goto / Pydoc
Please check the API for a good explanation. There are the following commands:
- ``jedi.Script.goto_assignments``
- ``jedi.Script.completions``
- ``jedi.Script.usages``
- ``jedi.Script.goto``
- ``jedi.Script.complete``
- ``jedi.Script.get_references``
The returned objects are very powerful and really all you might need.
@@ -149,8 +149,9 @@ This means that in Python you can enable tab completion in a `REPL
Static Analysis
------------------------
To do all forms of static analysis, please try to use ``jedi.names``. It will
return a list of names that you can use to infer types and so on.
To do all forms of static analysis, please try to use
``jedi.Script(...).get_names``. It will return a list of names that you can use
to infer types and so on.
Refactoring


@@ -16,22 +16,6 @@ environment:
PYTHON_PATH: C:\Python27
JEDI_TEST_ENVIRONMENT: 37
- TOXENV: py34
PYTHON_PATH: C:\Python34
JEDI_TEST_ENVIRONMENT: 27
- TOXENV: py34
PYTHON_PATH: C:\Python34
JEDI_TEST_ENVIRONMENT: 34
- TOXENV: py34
PYTHON_PATH: C:\Python34
JEDI_TEST_ENVIRONMENT: 35
- TOXENV: py34
PYTHON_PATH: C:\Python34
JEDI_TEST_ENVIRONMENT: 36
- TOXENV: py34
PYTHON_PATH: C:\Python34
JEDI_TEST_ENVIRONMENT: 37
- TOXENV: py35
PYTHON_PATH: C:\Python35
JEDI_TEST_ENVIRONMENT: 27


@@ -90,13 +90,13 @@ def clean_jedi_cache(request):
@pytest.fixture(scope='session')
def environment(request):
if request.config.option.interpreter_env:
return InterpreterEnvironment()
version = request.config.option.env
if version is None:
version = os.environ.get('JEDI_TEST_ENVIRONMENT', str(py_version))
if request.config.option.interpreter_env or version == 'interpreter':
return InterpreterEnvironment()
return get_system_environment(version[0] + '.' + version[1:])
@@ -106,8 +106,18 @@ def Script(environment):
@pytest.fixture(scope='session')
def names(environment):
return partial(jedi.names, environment=environment)
def get_names(Script):
return lambda code, **kwargs: Script(code).get_names(**kwargs)
@pytest.fixture(scope='session', params=['goto', 'infer'])
def goto_or_infer(request, Script):
return lambda code, *args, **kwargs: getattr(Script(code), request.param)(*args, **kwargs)
@pytest.fixture(scope='session', params=['goto', 'help'])
def goto_or_help(request, Script):
return lambda code, *args, **kwargs: getattr(Script(code), request.param)(*args, **kwargs)
@pytest.fixture(scope='session')
@@ -118,7 +128,7 @@ def has_typing(environment):
return True
script = jedi.Script('import typing', environment=environment)
return bool(script.goto_definitions())
return bool(script.infer())
@pytest.fixture(scope='session')
@@ -156,3 +166,11 @@ def skip_pre_python35(environment):
# This check is only needed to avoid tests ever skipping far more than
# they should across Python versions.
pytest.skip()
@pytest.fixture()
def skip_pre_python36(environment):
if environment.version_info < (3, 6):
# This check is only needed to avoid tests ever skipping far more than
# they should across Python versions.
pytest.skip()


@@ -29,9 +29,8 @@ API Documentation
The API consists of a few different parts:
- The main starting points for completions/goto: :class:`.Script` and :class:`.Interpreter`
- Helpful functions: :func:`.names`, :func:`.preload_module` and
:func:`.set_debug_function`
- The main starting points for complete/goto: :class:`.Script` and :class:`.Interpreter`
- Helpful functions: :func:`.preload_module` and :func:`.set_debug_function`
- :ref:`API Result Classes <api-classes>`
- :ref:`Python Versions/Virtualenv Support <environments>` with functions like
:func:`.find_system_environments` and :func:`.find_virtualenvs`
@@ -47,7 +46,6 @@ Static Analysis Interface
:members:
.. autoclass:: jedi.Interpreter
:members:
.. autofunction:: jedi.names
.. autofunction:: jedi.preload_module
.. autofunction:: jedi.set_debug_function
@@ -76,10 +74,10 @@ Completions:
>>> import jedi
>>> source = '''import json; json.l'''
>>> script = jedi.Script(source, 1, 19, '')
>>> script = jedi.Script(source, path='')
>>> script
<jedi.api.Script object at 0x2121b10>
>>> completions = script.completions()
>>> completions = script.complete(1, 19)
>>> completions
[<Completion: load>, <Completion: loads>]
>>> completions[1]
@@ -102,15 +100,15 @@ Definitions / Goto:
... inception = my_list[2]
...
... inception()'''
>>> script = jedi.Script(source, 8, 1, '')
>>> script = jedi.Script(source, path='')
>>>
>>> script.goto_assignments()
>>> script.goto(8, 1)
[<Definition inception=my_list[2]>]
>>>
>>> script.goto_definitions()
>>> script.infer(8, 1)
[<Definition def my_func>]
Related names:
References:
.. sourcecode:: python
@@ -120,13 +118,12 @@ Related names:
... x = 4
... else:
... del x'''
>>> script = jedi.Script(source, 5, 8, '')
>>> rns = script.related_names()
>>> script = jedi.Script(source, '')
>>> rns = script.get_references(5, 8)
>>> rns
[<RelatedName x@3,4>, <RelatedName x@1,0>]
>>> rns[0].start_pos
(3, 4)
>>> rns[0].is_keyword
False
>>> rns[0].text
'x'
[<Definition full_name='__main__.x', description='x = 3'>,
<Definition full_name='__main__.x', description='x'>]
>>> rns[1].line
5
>>> rns[0].column
8


@@ -6,7 +6,7 @@ Features and Caveats
Jedi obviously supports autocompletion. It's also possible to get it working in
(:ref:`your REPL (IPython, etc.) <repl-completion>`).
Static analysis is also possible by using the command ``jedi.names``.
Static analysis is also possible by using ``jedi.Script(...).get_names``.
Jedi would in theory support refactoring, but we have never publicized it,
because it's not production ready. If you're interested in helping out here,
@@ -52,6 +52,7 @@ Supported Python Features
- ``isinstance`` checks for if/while/assert
- namespace packages (includes ``pkgutil``, ``pkg_resources`` and PEP420 namespaces)
- Django / Flask / Buildout support
- Understands Pytest fixtures
Not Supported


@@ -4,7 +4,7 @@ Jedi has a focus on autocompletion and goto functionality. Jedi is fast and is
very well tested. It understands Python and stubs on a deep level.
Jedi has support for different goto functions. It's possible to search for
usages and list names in a Python file to get information about them.
references and list names in a Python file to get information about them.
Jedi uses a very simple API to connect with IDEs. There's a reference
implementation as a `VIM-Plugin <https://github.com/davidhalter/jedi-vim>`_,
@@ -18,10 +18,10 @@ Here's a simple example of the autocompletion feature:
>>> source = '''
... import json
... json.lo'''
>>> script = jedi.Script(source, 3, len('json.lo'), 'example.py')
>>> script = jedi.Script(source, path='example.py')
>>> script
<Script: 'example.py' ...>
>>> completions = script.completions()
>>> completions = script.complete(3, len('json.lo'))
>>> completions
[<Completion: load>, <Completion: loads>]
>>> print(completions[0].complete)
@@ -33,7 +33,7 @@ As you see Jedi is pretty simple and allows you to concentrate on writing a
good text editor, while still having very good IDE features for Python.
"""
__version__ = '0.15.2'
__version__ = '0.16.0'
from jedi.api import Script, Interpreter, set_debug_function, \
preload_module, names


@@ -665,7 +665,13 @@ if not is_py3:
info.weakref = weakref.ref(obj, self)
self._registry[self] = info
def __call__(self):
# To me it's an absolute mystery why in Python 2 we need _=None. It
# makes really no sense since it's never really called. Then again it
# might be called by Python 2.7 itself, but weakref.finalize is not
# documented in Python 2 and therefore shouldn't be randomly called.
# We never call this stuff with a parameter and therefore this
# parameter should not be needed. But it is. ~dave
def __call__(self, _=None):
"""Return func(*args, **kwargs) if alive."""
info = self._registry.pop(self, None)
if info:
@@ -684,3 +690,45 @@ if not is_py3:
atexit.register(finalize._exitfunc)
weakref.finalize = finalize
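The patch above lets the Python 2 backport's ``__call__`` tolerate a stray argument; on Python 3, the stdlib ``weakref.finalize`` that this backport imitates behaves as sketched below (calling the finalizer runs the callback at most once):

```python
import weakref

class Resource:
    """Dummy object to attach a finalizer to."""

log = []
r = Resource()
# Run log.append('cleaned up') when `r` is collected, at interpreter exit,
# or when the finalizer is called explicitly -- whichever happens first.
f = weakref.finalize(r, log.append, 'cleaned up')
assert f.alive

f()                 # explicit call runs the callback immediately...
assert log == ['cleaned up']
assert not f.alive  # ...and marks the finalizer dead
f()                 # further calls are no-ops
assert log == ['cleaned up']
```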
if is_py3 and sys.version_info[1] > 5:
from inspect import unwrap
else:
# Only Python >=3.6 does properly limit the amount of unwraps. This is very
# relevant in the case of unittest.mock.patch.
# Below is the implementation of Python 3.7.
def unwrap(func, stop=None):
"""Get the object wrapped by *func*.
Follows the chain of :attr:`__wrapped__` attributes returning the last
object in the chain.
*stop* is an optional callback accepting an object in the wrapper chain
as its sole argument that allows the unwrapping to be terminated early if
the callback returns a true value. If the callback never returns a true
value, the last object in the chain is returned as usual. For example,
:func:`signature` uses this to stop unwrapping if any object in the
chain has a ``__signature__`` attribute defined.
:exc:`ValueError` is raised if a cycle is encountered.
"""
if stop is None:
def _is_wrapper(f):
return hasattr(f, '__wrapped__')
else:
def _is_wrapper(f):
return hasattr(f, '__wrapped__') and not stop(f)
f = func # remember the original func for error reporting
# Memoise by id to tolerate non-hashable objects, but store objects to
# ensure they aren't destroyed, which would allow their IDs to be reused.
memo = {id(f): f}
recursion_limit = sys.getrecursionlimit()
while _is_wrapper(func):
func = func.__wrapped__
id_func = id(func)
if (id_func in memo) or (len(memo) >= recursion_limit):
raise ValueError('wrapper loop when unwrapping {!r}'.format(f))
memo[id_func] = func
return func
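The backport above mirrors CPython 3.7's ``inspect.unwrap``; a short demonstration of the behavior it implements (following ``__wrapped__``, honoring ``stop``, and rejecting cycles):

```python
import functools
import inspect

def original():
    return 42

@functools.wraps(original)   # sets wrapper.__wrapped__ = original
def wrapper():
    return original()

# Follows the __wrapped__ chain back to the innermost callable.
assert inspect.unwrap(wrapper) is original

# A `stop` callback returning true halts unwrapping before the first step.
assert inspect.unwrap(wrapper, stop=lambda f: hasattr(f, '__wrapped__')) is wrapper

# A __wrapped__ cycle raises ValueError instead of looping forever.
def cyclic():
    pass
cyclic.__wrapped__ = cyclic
try:
    inspect.unwrap(cyclic)
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError for a wrapper loop')
```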


@@ -25,12 +25,14 @@ from jedi.file_io import KnownContentFileIO
from jedi.api import classes
from jedi.api import interpreter
from jedi.api import helpers
from jedi.api.helpers import validate_line_column
from jedi.api.completion import Completion
from jedi.api.keywords import KeywordName
from jedi.api.environment import InterpreterEnvironment
from jedi.api.project import get_default_project, Project
from jedi.inference import InferenceState
from jedi.inference import imports
from jedi.inference import usages
from jedi.inference.references import find_references
from jedi.inference.arguments import try_iter_content
from jedi.inference.helpers import get_module_names, infer_call_of_leaf
from jedi.inference.sys_path import transform_path_to_dotted
@@ -68,9 +70,9 @@ class Script(object):
:param source: The source code of the current file, separated by newlines.
:type source: str
:param line: The line to perform actions on (starting with 1).
:param line: Deprecated, please use it directly on e.g. `.complete`
:type line: int
:param column: The column of the cursor (starting with 0).
:param column: Deprecated, please use it directly on e.g. `.complete`
:type column: int
:param path: The path of the file in the file system, or ``''`` if
it hasn't been saved yet.
@@ -126,22 +128,6 @@ class Script(object):
debug.speed('parsed')
self._code_lines = parso.split_lines(source, keepends=True)
self._code = source
line = max(len(self._code_lines), 1) if line is None else line
if not (0 < line <= len(self._code_lines)):
raise ValueError('`line` parameter is not in a valid range.')
line_string = self._code_lines[line - 1]
line_len = len(line_string)
if line_string.endswith('\r\n'):
line_len -= 1
if line_string.endswith('\n'):
line_len -= 1
column = line_len if column is None else column
if not (0 <= column <= line_len):
raise ValueError('`column` parameter (%d) is not in a valid range '
'(0-%d) for line %d (%r).' % (
column, line_len, line, line_string))
self._pos = line, column
cache.clear_time_caches()
@@ -181,7 +167,8 @@ class Script(object):
names = ('__main__',)
module = ModuleValue(
self._inference_state, self._module_node, file_io,
self._inference_state, self._module_node,
file_io=file_io,
string_names=names,
code_lines=self._code_lines,
is_package=is_package,
@@ -201,27 +188,38 @@ class Script(object):
self._inference_state.environment,
)
def completions(self):
@validate_line_column
def complete(self, line=None, column=None, **kwargs):
"""
Return :class:`classes.Completion` objects. Those objects contain
information about the completions, more than just names.
:return: Completion objects, sorted by name and __ comes last.
:param fuzzy: Default False. Will return fuzzy completions, which means
that e.g. ``ooa`` will match ``foobar``.
:return: Completion objects, sorted by name and ``__`` comes last.
:rtype: list of :class:`classes.Completion`
"""
with debug.increase_indent_cm('completions'):
return self._complete(line, column, **kwargs)
def _complete(self, line, column, fuzzy=False): # Python 2...
with debug.increase_indent_cm('complete'):
completion = Completion(
self._inference_state, self._get_module_context(), self._code_lines,
self._pos, self.call_signatures
(line, column), self.get_signatures, fuzzy=fuzzy,
)
return completion.completions()
return completion.complete()
def goto_definitions(self, **kwargs):
def completions(self, fuzzy=False):
# Deprecated, will be removed.
return self.complete(*self._pos, fuzzy=fuzzy)
@validate_line_column
def infer(self, line=None, column=None, **kwargs):
"""
Return the definitions of the path under the cursor.
This follows complicated paths and returns the end, not the first
definition. The big difference between :meth:`goto_assignments` and
:meth:`goto_definitions` is that :meth:`goto_assignments` doesn't
definition. The big difference between :meth:`goto` and
:meth:`infer` is that :meth:`goto` doesn't
follow imports and statements. Multiple objects may be returned,
because Python itself is a dynamic language, which means depending on
an option you can have two different versions of a function.
@@ -231,19 +229,24 @@ class Script(object):
inference call.
:rtype: list of :class:`classes.Definition`
"""
with debug.increase_indent_cm('goto_definitions'):
return self._goto_definitions(**kwargs)
with debug.increase_indent_cm('infer'):
return self._infer(line, column, **kwargs)
def _goto_definitions(self, only_stubs=False, prefer_stubs=False):
leaf = self._module_node.get_name_of_position(self._pos)
def goto_definitions(self, **kwargs):
# Deprecated, will be removed.
return self.infer(*self._pos, **kwargs)
def _infer(self, line, column, only_stubs=False, prefer_stubs=False):
pos = line, column
leaf = self._module_node.get_name_of_position(pos)
if leaf is None:
leaf = self._module_node.get_leaf_for_position(self._pos)
if leaf is None:
leaf = self._module_node.get_leaf_for_position(pos)
if leaf is None or leaf.type == 'string':
return []
context = self._get_module_context().create_context(leaf)
values = helpers.infer_goto_definition(self._inference_state, context, leaf)
values = helpers.infer(self._inference_state, context, leaf)
values = convert_values(
values,
only_stubs=only_stubs,
@@ -257,15 +260,20 @@ class Script(object):
return helpers.sorted_definitions(set(defs))
def goto_assignments(self, follow_imports=False, follow_builtin_imports=False, **kwargs):
# Deprecated, will be removed.
return self.goto(*self._pos,
follow_imports=follow_imports,
follow_builtin_imports=follow_builtin_imports,
**kwargs)
@validate_line_column
def goto(self, line=None, column=None, **kwargs):
"""
Return the first definition found, while optionally following imports.
Multiple objects may be returned, because Python itself is a
dynamic language, which means depending on an option you can have two
different versions of a function.
.. note:: It is deprecated to use follow_imports and follow_builtin_imports as
positional arguments. Will be a keyword argument in 0.16.0.
:param follow_imports: The goto call will follow imports.
:param follow_builtin_imports: If follow_imports is True will decide if
it follow builtin imports.
@@ -273,11 +281,11 @@ class Script(object):
:param prefer_stubs: Prefer stubs to Python objects for this goto call.
:rtype: list of :class:`classes.Definition`
"""
with debug.increase_indent_cm('goto_assignments'):
return self._goto_assignments(follow_imports, follow_builtin_imports, **kwargs)
with debug.increase_indent_cm('goto'):
return self._goto(line, column, **kwargs)
def _goto_assignments(self, follow_imports, follow_builtin_imports,
only_stubs=False, prefer_stubs=False):
def _goto(self, line, column, follow_imports=False, follow_builtin_imports=False,
only_stubs=False, prefer_stubs=False):
def filter_follow_imports(names):
for name in names:
if name.is_import():
@@ -296,13 +304,28 @@ class Script(object):
else:
yield name
tree_name = self._module_node.get_name_of_position(self._pos)
tree_name = self._module_node.get_name_of_position((line, column))
if tree_name is None:
# Without a name we really just want to jump to the result, e.g.
# executed by `foo()`, if the cursor is after `)`.
return self.goto_definitions(only_stubs=only_stubs, prefer_stubs=prefer_stubs)
return self.infer(line, column, only_stubs=only_stubs, prefer_stubs=prefer_stubs)
name = self._get_module_context().create_name(tree_name)
names = list(name.goto())
# Make it possible to goto the super class function/attribute
# definitions, when they are overwritten.
names = []
if name.tree_name.is_definition() and name.parent_context.is_class():
class_node = name.parent_context.tree_node
class_value = self._get_module_context().create_value(class_node)
mro = class_value.py__mro__()
next(mro) # Ignore the first entry, because it's the class itself.
for cls in mro:
names = cls.goto(tree_name.value)
if names:
break
if not names:
names = list(name.goto())
if follow_imports:
names = filter_follow_imports(names)
@@ -315,42 +338,66 @@ class Script(object):
defs = [classes.Definition(self._inference_state, d) for d in set(names)]
return helpers.sorted_definitions(defs)
def usages(self, additional_module_paths=(), **kwargs):
@validate_line_column
def help(self, line=None, column=None):
"""
Works like goto and returns a list of Definition objects. Returns
additional definitions for keywords and operators.
The additional definitions are of ``Definition(...).type == 'keyword'``.
These definitions do not have a lot of value apart from their docstring
attribute, which contains the output of Python's ``help()`` function.
:rtype: list of :class:`classes.Definition`
"""
definitions = self.goto(line, column, follow_imports=True)
if definitions:
return definitions
leaf = self._module_node.get_leaf_for_position((line, column))
if leaf.type in ('keyword', 'operator', 'error_leaf'):
reserved = self._grammar._pgen_grammar.reserved_syntax_strings.keys()
if leaf.value in reserved:
name = KeywordName(self._inference_state, leaf.value)
return [classes.Definition(self._inference_state, name)]
return []
def usages(self, **kwargs):
# Deprecated, will be removed.
return self.get_references(*self._pos, **kwargs)
@validate_line_column
def get_references(self, line=None, column=None, **kwargs):
"""
Return :class:`classes.Definition` objects, which contain all
names that point to the definition of the name under the cursor. This
is very useful for refactoring (renaming), or to show all usages of a
variable.
is very useful for refactoring (renaming), or to show all references of
a variable.
.. todo:: Implement additional_module_paths
:param additional_module_paths: Deprecated, never ever worked.
:param include_builtins: Default True, checks if a usage is a builtin
(e.g. ``sys``) and in that case does not return it.
:param include_builtins: Default True, checks if a reference is a
builtin (e.g. ``sys``) and in that case does not return it.
:rtype: list of :class:`classes.Definition`
"""
if additional_module_paths:
warnings.warn(
"Deprecated since version 0.12.0. This never even worked, just ignore it.",
DeprecationWarning,
stacklevel=2
)
def _usages(include_builtins=True):
tree_name = self._module_node.get_name_of_position(self._pos)
def _references(include_builtins=True):
tree_name = self._module_node.get_name_of_position((line, column))
if tree_name is None:
# Must be syntax
return []
names = usages.usages(self._get_module_context(), tree_name)
names = find_references(self._get_module_context(), tree_name)
definitions = [classes.Definition(self._inference_state, n) for n in names]
if not include_builtins:
definitions = [d for d in definitions if not d.in_builtin_module()]
return helpers.sorted_definitions(definitions)
return _usages(**kwargs)
return _references(**kwargs)
def call_signatures(self):
# Deprecated, will be removed.
return self.get_signatures(*self._pos)
@validate_line_column
def get_signatures(self, line=None, column=None):
"""
Return the function object of the call you're currently in.
@@ -364,27 +411,63 @@ class Script(object):
This would return an empty list.
:rtype: list of :class:`classes.CallSignature`
:rtype: list of :class:`classes.Signature`
"""
call_details = helpers.get_call_signature_details(self._module_node, self._pos)
pos = line, column
call_details = helpers.get_signature_details(self._module_node, pos)
if call_details is None:
return []
context = self._get_module_context().create_context(call_details.bracket_leaf)
definitions = helpers.cache_call_signatures(
definitions = helpers.cache_signatures(
self._inference_state,
context,
call_details.bracket_leaf,
self._code_lines,
self._pos
pos
)
debug.speed('func_call followed')
# TODO here we use stubs instead of the actual values. We should use
# the signatures from stubs, but the actual values, probably?!
return [classes.CallSignature(self._inference_state, signature, call_details)
return [classes.Signature(self._inference_state, signature, call_details)
for signature in definitions.get_signatures()]
@validate_line_column
def get_context(self, line=None, column=None):
pos = (line, column)
leaf = self._module_node.get_leaf_for_position(pos, include_prefixes=True)
if leaf.start_pos > pos or leaf.type == 'endmarker':
previous_leaf = leaf.get_previous_leaf()
if previous_leaf is not None:
leaf = previous_leaf
module_context = self._get_module_context()
n = tree.search_ancestor(leaf, 'funcdef', 'classdef')
if n is not None and n.start_pos < pos <= n.children[-1].start_pos:
# This is a bit of a special case. The context of a function/class
# name/param/keyword is always its parent context, not the
# function itself. Catch all the cases here where we are before the
# suite object, but still in the function.
context = module_context.create_value(n).as_context()
else:
context = module_context.create_context(leaf)
while context.name is None:
context = context.parent_context # comprehensions
definition = classes.Definition(self._inference_state, context.name)
while definition.type != 'module':
name = definition._name # TODO private access
tree_name = name.tree_name
if tree_name is not None: # Happens with lambdas.
scope = tree_name.get_definition()
if scope.start_pos[1] < column:
break
definition = definition.parent()
return definition
def _analysis(self):
self._inference_state.is_analysis = True
self._inference_state.analysis_modules = [self._module_node]
@@ -408,7 +491,7 @@ class Script(object):
unpack_tuple_to_dict(context, types, testlist)
else:
if node.type == 'name':
defs = self._inference_state.goto_definitions(context, node)
defs = self._inference_state.infer(context, node)
else:
defs = infer_call_of_leaf(context, node)
try_iter_content(defs)
@@ -419,6 +502,36 @@ class Script(object):
finally:
self._inference_state.is_analysis = False
def get_names(self, **kwargs):
"""
Returns a list of `Definition` objects, containing name parts.
This means you can call ``Definition.goto()`` and get the
reference of a name.
:param all_scopes: If True lists the names of all scopes instead of only
the module namespace.
:param definitions: If True lists the names that have been defined by a
class, function or a statement (``a = b`` returns ``a``).
:param references: If True lists all the names that are not listed by
``definitions=True``. E.g. ``a = b`` returns ``b``.
"""
return self._names(**kwargs) # Python 2...
def _names(self, all_scopes=False, definitions=True, references=False):
def def_ref_filter(_def):
is_def = _def._name.tree_name.is_definition()
return definitions and is_def or references and not is_def
# Set line/column to a random position, because they don't matter.
module_context = self._get_module_context()
defs = [
classes.Definition(
self._inference_state,
module_context.create_name(name)
) for name in get_module_names(self._module_node, all_scopes)
]
return sorted(filter(def_ref_filter, defs), key=lambda x: (x.line, x.column))
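The ``def_ref_filter`` above relies on ``and`` binding tighter than ``or``; a standalone sketch of the same flag logic (the mock name list is illustrative, not Jedi's actual data):

```python
def def_ref_filter(is_def, definitions=True, references=False):
    # Parsed as: (definitions and is_def) or (references and not is_def)
    return definitions and is_def or references and not is_def

# For code like `a = b`: `a` is a definition, `b` is a reference.
names = [('a', True), ('b', False)]

assert [n for n, d in names if def_ref_filter(d)] == ['a']    # default flags
assert [n for n, d in names
        if def_ref_filter(d, definitions=False, references=True)] == ['b']
assert [n for n, d in names
        if def_ref_filter(d, references=True)] == ['a', 'b']  # both flags
```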
class Interpreter(Script):
"""
@@ -432,7 +545,7 @@ class Interpreter(Script):
>>> from os.path import join
>>> namespace = locals()
>>> script = Interpreter('join("").up', [namespace])
>>> print(script.completions()[0].name)
>>> print(script.complete()[0].name)
upper
"""
_allow_descriptor_getattr_default = True
@@ -484,34 +597,17 @@ class Interpreter(Script):
def names(source=None, path=None, encoding='utf-8', all_scopes=False,
definitions=True, references=False, environment=None):
"""
Returns a list of `Definition` objects, containing name parts.
This means you can call ``Definition.goto_assignments()`` and get the
reference of a name.
The parameters are the same as in :py:class:`Script`, except for the
following ones:
warnings.warn(
"Deprecated since version 0.16.0. Use Script(...).get_names instead.",
DeprecationWarning,
stacklevel=2
)
:param all_scopes: If True lists the names of all scopes instead of only
the module namespace.
:param definitions: If True lists the names that have been defined by a
class, function or a statement (``a = b`` returns ``a``).
:param references: If True lists all the names that are not listed by
``definitions=True``. E.g. ``a = b`` returns ``b``.
"""
def def_ref_filter(_def):
is_def = _def._name.tree_name.is_definition()
return definitions and is_def or references and not is_def
# Set line/column to a random position, because they don't matter.
script = Script(source, line=1, column=0, path=path, encoding=encoding, environment=environment)
module_context = script._get_module_context()
defs = [
classes.Definition(
script._inference_state,
module_context.create_name(name)
) for name in get_module_names(script._module_node, all_scopes)
]
return sorted(filter(def_ref_filter, defs), key=lambda x: (x.line, x.column))
return Script(source, path=path, encoding=encoding).get_names(
all_scopes=all_scopes,
definitions=definitions,
references=references,
)
def preload_module(*modules):
@@ -523,7 +619,7 @@ def preload_module(*modules):
"""
for m in modules:
s = "import %s as x; x." % m
Script(s, 1, len(s), None).completions()
Script(s, path=None).complete(1, len(s))
def set_debug_function(func_cb=debug.print_to_stdout, warnings=True,


@@ -7,6 +7,8 @@ import re
import sys
import warnings
from parso.python.tree import search_ancestor
from jedi import settings
from jedi import debug
from jedi.inference.utils import unite
@@ -17,6 +19,7 @@ from jedi.inference.gradual.typeshed import StubModuleValue
from jedi.inference.gradual.conversion import convert_names, convert_values
from jedi.inference.base_value import ValueSet
from jedi.api.keywords import KeywordName
from jedi.api import completion_cache
def _sort_names_by_start_pos(names):
@@ -104,7 +107,7 @@ class BaseDefinition(object):
Here is an example of the value of this attribute. Let's consider
the following source. As what is in ``variable`` is unambiguous
to Jedi, :meth:`jedi.Script.goto_definitions` should return a list of
to Jedi, :meth:`jedi.Script.infer` should return a list of
definitions for ``sys``, ``f``, ``C`` and ``x``.
>>> from jedi._compatibility import no_unicode_pprint
@@ -127,7 +130,7 @@ class BaseDefinition(object):
... variable'''
>>> script = Script(source)
>>> defs = script.goto_definitions()
>>> defs = script.infer()
Before showing what is in ``defs``, let's sort it by :attr:`line`
so that it is easy to relate the result to the source code.
@@ -177,7 +180,7 @@ class BaseDefinition(object):
>>> from jedi import Script
>>> source = 'import json'
>>> script = Script(source, path='example.py')
>>> d = script.goto_definitions()[0]
>>> d = script.infer()[0]
>>> print(d.module_name) # doctest: +ELLIPSIS
json
"""
@@ -217,18 +220,18 @@ class BaseDefinition(object):
... def f(a, b=1):
... "Document for function f."
... '''
>>> script = Script(source, 1, len('def f'), 'example.py')
>>> doc = script.goto_definitions()[0].docstring()
>>> script = Script(source, path='example.py')
>>> doc = script.infer(1, len('def f'))[0].docstring()
>>> print(doc)
f(a, b=1)
<BLANKLINE>
Document for function f.
Notice that useful extra information is added to the actual
docstring. For function, it is call signature. If you need
docstring. For functions, it is the signature. If you need
actual docstring, use ``raw=True`` instead.
>>> print(script.goto_definitions()[0].docstring(raw=True))
>>> print(script.infer(1, len('def f'))[0].docstring(raw=True))
Document for function f.
:param fast: Don't follow imports that are only one level deep like
@@ -237,12 +240,76 @@ class BaseDefinition(object):
the ``foo.docstring(fast=False)`` on every object, because it
parses all libraries starting with ``a``.
"""
return _Help(self._name).docstring(fast=fast, raw=raw)
if isinstance(self._name, ImportName) and fast:
return ''
doc = self._get_docstring()
if raw:
return doc
signature_text = self._get_docstring_signature()
if signature_text and doc:
return signature_text + '\n\n' + doc
else:
return signature_text + doc
def _get_docstring(self):
return self._name.py__doc__()
def _get_docstring_signature(self):
return '\n'.join(
signature.to_string()
for signature in self._get_signatures(for_docstring=True)
)
@property
def description(self):
"""A textual description of the object."""
return self._name.get_public_name()
"""
A description of the :class:`.Definition` object, which is heavily used
in testing. e.g. for ``isinstance`` it returns ``def isinstance``.
Example:
>>> from jedi._compatibility import no_unicode_pprint
>>> from jedi import Script
>>> source = '''
... def f():
... pass
...
... class C:
... pass
...
... variable = f if random.choice([0,1]) else C'''
>>> script = Script(source) # line is maximum by default
>>> defs = script.infer(column=3)
>>> defs = sorted(defs, key=lambda d: d.line)
>>> no_unicode_pprint(defs) # doctest: +NORMALIZE_WHITESPACE
[<Definition full_name='__main__.f', description='def f'>,
<Definition full_name='__main__.C', description='class C'>]
>>> str(defs[0].description) # strip literals in python2
'def f'
>>> str(defs[1].description)
'class C'
"""
typ = self.type
tree_name = self._name.tree_name
if typ == 'param':
return typ + ' ' + self._name.to_string()
if typ in ('function', 'class', 'module', 'instance') or tree_name is None:
if typ == 'function':
# For the description we want a short and a pythonic way.
typ = 'def'
return typ + ' ' + self._name.get_public_name()
definition = tree_name.get_definition(include_setitem=True) or tree_name
# Remove the prefix, because that's not what we want for get_code
# here.
txt = definition.get_code(include_prefix=False)
# Delete comments:
txt = re.sub(r'#[^\n]+\n', ' ', txt)
# Delete multi spaces/newlines
txt = re.sub(r'\s+', ' ', txt).strip()
return txt
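The two ``re.sub`` calls in ``description`` above can be exercised in isolation. A minimal sketch mirroring that cleanup (the helper name is hypothetical):

```python
import re

def clean_description(code):
    # Mirror of the cleanup above: drop comments, then collapse runs of
    # whitespace/newlines into single spaces.
    code = re.sub(r'#[^\n]+\n', ' ', code)
    return re.sub(r'\s+', ' ', code).strip()

print(clean_description("x = [1,  # first\n     2]\n"))  # x = [1, 2]
```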
@property
def full_name(self):
@@ -259,8 +326,8 @@ class BaseDefinition(object):
>>> source = '''
... import os
... os.path.join'''
>>> script = Script(source, 3, len('os.path.join'), 'example.py')
>>> print(script.goto_definitions()[0].full_name)
>>> script = Script(source, path='example.py')
>>> print(script.infer(3, len('os.path.join'))[0].full_name)
os.path.join
Notice that it returns ``'os.path.join'`` instead of (for example)
@@ -289,11 +356,19 @@ class BaseDefinition(object):
return self._name.get_root_context().is_stub()
def goto_assignments(self, **kwargs): # Python 2...
def goto(self, **kwargs):
with debug.increase_indent_cm('goto for %s' % self._name):
return self._goto_assignments(**kwargs)
return self._goto(**kwargs)
def _goto_assignments(self, only_stubs=False, prefer_stubs=False):
def goto_assignments(self, **kwargs): # Python 2...
warnings.warn(
"Deprecated since version 0.16.0. Use .goto.",
DeprecationWarning,
stacklevel=2
)
return self.goto(**kwargs)
def _goto(self, only_stubs=False, prefer_stubs=False):
assert not (only_stubs and prefer_stubs)
if not self._name.is_value_name:
@@ -339,14 +414,18 @@ class BaseDefinition(object):
Raises an ``AttributeError`` if the definition is not callable.
Otherwise returns a list of `Definition` that represents the params.
"""
warnings.warn(
"Deprecated since version 0.16.0. Use get_signatures()[...].params",
DeprecationWarning,
stacklevel=2
)
# Only return the first one. There might be multiple, especially
# with overloading.
for value in self._name.infer():
for signature in value.get_signatures():
return [
Definition(self._inference_state, n)
for n in signature.get_param_names(resolve_stars=True)
]
for signature in self._get_signatures():
return [
Definition(self._inference_state, n)
for n in signature.get_param_names(resolve_stars=True)
]
if self.type == 'function' or self.type == 'class':
# Fallback, if no signatures were defined (which is probably by
@@ -358,12 +437,28 @@ class BaseDefinition(object):
if not self._name.is_value_name:
return None
context = self._name.parent_context
if self.type in ('function', 'class', 'param') and self._name.tree_name is not None:
# Since the parent_context doesn't really match what the user
# thinks of as the parent here, we do these cases separately.
# The reason for this is the following:
# - class: Nested classes parent_context is always the
# parent_context of the most outer one.
# - function: Functions in classes have the module as
# parent_context.
# - param: The parent_context of a param is not its function but
# e.g. the outer class or module.
cls_or_func_node = self._name.tree_name.get_definition()
parent = search_ancestor(cls_or_func_node, 'funcdef', 'classdef', 'file_input')
context = self._get_module_context().create_value(parent).as_context()
else:
context = self._name.parent_context
if context is None:
return None
while context.name is None:
# Happens for comprehension contexts
context = context.parent_context
return Definition(self._inference_state, context.name)
def __repr__(self):
@@ -384,17 +479,32 @@ class BaseDefinition(object):
:return str: Returns the line(s) of code or an empty string if it's a
builtin.
"""
if not self._name.is_value_name or self.in_builtin_module():
if not self._name.is_value_name:
return ''
lines = self._name.get_root_context().code_lines
if lines is None:
# Probably a builtin module, just ignore in that case.
return ''
index = self._name.start_pos[0] - 1
start_index = max(index - before, 0)
return ''.join(lines[start_index:index + after + 1])
def _get_signatures(self, for_docstring=False):
if for_docstring and self._name.api_type == 'statement' and not self.is_stub():
# For docstrings we don't resolve signatures if they are simple
# statements and not stubs. This is a speed optimization.
return []
names = convert_names([self._name], prefer_stubs=True)
return [sig for name in names for sig in name.infer().get_signatures()]
def get_signatures(self):
return [Signature(self._inference_state, s) for s in self._name.infer().get_signatures()]
return [
BaseSignature(self._inference_state, s)
for s in self._get_signatures()
]
def execute(self):
return _values_to_definitions(self._name.infer().execute_with_values())
@@ -402,14 +512,17 @@ class BaseDefinition(object):
class Completion(BaseDefinition):
"""
`Completion` objects are returned from :meth:`api.Script.completions`. They
`Completion` objects are returned from :meth:`api.Script.complete`. They
provide additional information about a completion.
"""
def __init__(self, inference_state, name, stack, like_name_length):
def __init__(self, inference_state, name, stack, like_name_length,
is_fuzzy, cached_name=None):
super(Completion, self).__init__(inference_state, name)
self._like_name_length = like_name_length
self._stack = stack
self._is_fuzzy = is_fuzzy
self._cached_name = cached_name
# Completion objects with the same Completion name (which means
# duplicate items in the completion)
@@ -421,12 +534,6 @@ class Completion(BaseDefinition):
and self.type == 'function':
append = '('
if self._name.api_type == 'param' and self._stack is not None:
nonterminals = [stack_node.nonterminal for stack_node in self._stack]
if 'trailer' in nonterminals and 'argument' not in nonterminals:
# TODO this doesn't work for nested calls.
append += '='
name = self._name.get_public_name()
if like_name:
name = name[self._like_name_length:]
@@ -435,6 +542,9 @@ class Completion(BaseDefinition):
@property
def complete(self):
"""
Only works with non-fuzzy completions. Returns None if fuzzy
completions are used.
Return the rest of the word, e.g. completing ``isinstance``::
isinstan# <-- Cursor is here
@@ -449,9 +559,9 @@ class Completion(BaseDefinition):
completing ``foo(par`` would give a ``Completion`` which `complete`
would be `am=`
"""
if self._is_fuzzy:
return None
return self._complete(True)
@property
@@ -474,13 +584,46 @@ class Completion(BaseDefinition):
# In this case we can just resolve the like name, because we
# wouldn't load like > 100 Python modules anymore.
fast = False
return super(Completion, self).docstring(raw=raw, fast=fast)
def _get_docstring(self):
if self._cached_name is not None:
return completion_cache.get_docstring(
self._cached_name,
self._name.get_public_name(),
lambda: self._get_cache()
)
return super(Completion, self)._get_docstring()
def _get_docstring_signature(self):
if self._cached_name is not None:
return completion_cache.get_docstring_signature(
self._cached_name,
self._name.get_public_name(),
lambda: self._get_cache()
)
return super(Completion, self)._get_docstring_signature()
def _get_cache(self):
typ = super(Completion, self).type
return (
typ,
super(Completion, self)._get_docstring_signature(),
super(Completion, self)._get_docstring(),
)
@property
def description(self):
"""Provide a description of the completion object."""
# TODO improve the class structure.
return Definition.description.__get__(self)
@property
def type(self):
# Purely a speed optimization.
if self._cached_name is not None:
return completion_cache.get_type(
self._cached_name,
self._name.get_public_name(),
lambda: self._get_cache()
)
return super(Completion, self).type
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self._name.get_public_name())
@@ -507,62 +650,12 @@ class Completion(BaseDefinition):
class Definition(BaseDefinition):
"""
*Definition* objects are returned from :meth:`api.Script.goto_assignments`
or :meth:`api.Script.goto_definitions`.
*Definition* objects are returned from :meth:`api.Script.goto`
or :meth:`api.Script.infer`.
"""
def __init__(self, inference_state, definition):
super(Definition, self).__init__(inference_state, definition)
@property
def description(self):
"""
A description of the :class:`.Definition` object, which is heavily used
in testing. e.g. for ``isinstance`` it returns ``def isinstance``.
Example:
>>> from jedi._compatibility import no_unicode_pprint
>>> from jedi import Script
>>> source = '''
... def f():
... pass
...
... class C:
... pass
...
... variable = f if random.choice([0,1]) else C'''
>>> script = Script(source, column=3) # line is maximum by default
>>> defs = script.goto_definitions()
>>> defs = sorted(defs, key=lambda d: d.line)
>>> no_unicode_pprint(defs) # doctest: +NORMALIZE_WHITESPACE
[<Definition full_name='__main__.f', description='def f'>,
<Definition full_name='__main__.C', description='class C'>]
>>> str(defs[0].description) # strip literals in python2
'def f'
>>> str(defs[1].description)
'class C'
"""
typ = self.type
tree_name = self._name.tree_name
if typ == 'param':
return typ + ' ' + self._name.to_string()
if typ in ('function', 'class', 'module', 'instance') or tree_name is None:
if typ == 'function':
# For the description we want a short and a pythonic way.
typ = 'def'
return typ + ' ' + self._name.get_public_name()
definition = tree_name.get_definition(include_setitem=True) or tree_name
# Remove the prefix, because that's not what we want for get_code
# here.
txt = definition.get_code(include_prefix=False)
# Delete comments:
txt = re.sub(r'#[^\n]+\n', ' ', txt)
# Delete multi spaces/newlines
txt = re.sub(r'\s+', ' ', txt).strip()
return txt
@property
def desc_with_module(self):
"""
@@ -613,14 +706,14 @@ class Definition(BaseDefinition):
return hash((self._name.start_pos, self.module_path, self.name, self._inference_state))
class Signature(Definition):
class BaseSignature(Definition):
"""
`Signature` objects is the return value of `Script.function_definition`.
`BaseSignature` objects are the return value of `Script.function_definition`.
It knows what functions you are currently in. e.g. `isinstance(` would
return the `isinstance` function. Without `(` it would return nothing.
"""
def __init__(self, inference_state, signature):
super(Signature, self).__init__(inference_state, signature.name)
super(BaseSignature, self).__init__(inference_state, signature.name)
self._signature = signature
@property
@@ -635,15 +728,15 @@ class Signature(Definition):
return self._signature.to_string()
class CallSignature(Signature):
class Signature(BaseSignature):
"""
`CallSignature` objects is the return value of `Script.call_signatures`.
`Signature` objects are the return value of `Script.get_signatures`.
It knows what functions you are currently in. e.g. `isinstance(` would
return the `isinstance` function with its params. Without `(` it would
return nothing.
"""
def __init__(self, inference_state, signature, call_details):
super(CallSignature, self).__init__(inference_state, signature)
super(Signature, self).__init__(inference_state, signature)
self._call_details = call_details
self._signature = signature
@@ -705,62 +798,3 @@ class ParamDefinition(Definition):
'Python 2 is end-of-life, the new feature is not available for it'
)
return self._name.get_kind()
def _format_signatures(value):
return '\n'.join(
signature.to_string()
for signature in value.get_signatures()
)
class _Help(object):
"""
Temporary implementation; will be used as `Script.help()` or something in
the future.
"""
def __init__(self, definition):
self._name = definition
@memoize_method
def _get_values(self, fast):
if isinstance(self._name, ImportName) and fast:
return {}
if self._name.api_type == 'statement':
return {}
return self._name.infer()
def docstring(self, fast=True, raw=True):
"""
The docstring ``__doc__`` for any object.
See :attr:`doc` for example.
"""
full_doc = ''
# Using the first docstring that we see.
for value in self._get_values(fast=fast):
if full_doc:
# In case we have multiple values, just return all of them
# separated by a few dashes.
full_doc += '\n' + '-' * 30 + '\n'
doc = value.py__doc__()
signature_text = ''
if self._name.is_value_name:
if not raw:
signature_text = _format_signatures(value)
if not doc and value.is_stub():
for c in convert_values(ValueSet({value}), ignore_compiled=False):
doc = c.py__doc__()
if doc:
break
if signature_text and doc:
full_doc += signature_text + '\n\n' + doc
else:
full_doc += signature_text + doc
return full_doc


@@ -1,8 +1,10 @@
import re
from textwrap import dedent
from parso.python.token import PythonTokenTypes
from parso.python import tree
from parso.tree import search_ancestor, Leaf
from parso import split_lines
from jedi._compatibility import Parameter
from jedi import debug
@@ -10,25 +12,35 @@ from jedi import settings
from jedi.api import classes
from jedi.api import helpers
from jedi.api import keywords
from jedi.api.file_name import file_name_completions
from jedi.api.strings import complete_dict
from jedi.api.file_name import complete_file_name
from jedi.inference import imports
from jedi.inference.base_value import ValueSet
from jedi.inference.helpers import infer_call_of_leaf, parse_dotted_names
from jedi.inference.context import get_global_filters
from jedi.inference.value import TreeInstance, ModuleValue
from jedi.inference.names import ParamNameWrapper
from jedi.inference.gradual.conversion import convert_values
from jedi.parser_utils import cut_value_at_position
from jedi.plugins import plugin_manager
def get_call_signature_param_names(call_signatures):
class ParamNameWithEquals(ParamNameWrapper):
def get_public_name(self):
return self.string_name + '='
def get_signature_param_names(signatures):
# add named params
for call_sig in call_signatures:
for call_sig in signatures:
for p in call_sig.params:
# Allow protected access, because it's a public API.
if p._name.get_kind() in (Parameter.POSITIONAL_OR_KEYWORD,
Parameter.KEYWORD_ONLY):
yield p._name
yield ParamNameWithEquals(p._name)
def filter_names(inference_state, completion_names, stack, like_name):
def filter_names(inference_state, completion_names, stack, like_name, fuzzy, cached_name):
comp_dct = {}
if settings.case_insensitive_completion:
like_name = like_name.lower()
@@ -36,13 +48,18 @@ def filter_names(inference_state, completion_names, stack, like_name):
string = name.string_name
if settings.case_insensitive_completion:
string = string.lower()
if string.startswith(like_name):
if fuzzy:
match = helpers.fuzzy_match(string, like_name)
else:
match = helpers.start_match(string, like_name)
if match:
new = classes.Completion(
inference_state,
name,
stack,
len(like_name)
len(like_name),
is_fuzzy=fuzzy,
cached_name=cached_name,
)
k = (new.name, new.complete) # key
if k in comp_dct and settings.no_completion_duplicates:
@@ -52,6 +69,11 @@ def filter_names(inference_state, completion_names, stack, like_name):
yield new
def _remove_duplicates(completions, other_completions):
names = {d.name for d in other_completions}
return [c for c in completions if c.name not in names]
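``filter_names`` above dispatches between ``helpers.start_match`` and ``helpers.fuzzy_match``, whose bodies are not shown in this diff. An illustrative sketch under the assumption that fuzzy matching means an in-order subsequence test (the real helpers may differ):

```python
def start_match(string, like_name):
    # Plain prefix matching, as used for non-fuzzy completion.
    return string.startswith(like_name)

def fuzzy_match(string, like_name):
    # Illustrative subsequence matcher: every character of like_name must
    # appear in string, in order, but not necessarily adjacently.
    pos = 0
    for char in like_name:
        pos = string.find(char, pos) + 1
        if pos == 0:
            return False
    return True

print(start_match("isinstance", "isin"))  # True
print(fuzzy_match("isinstance", "istc"))  # True
print(fuzzy_match("isinstance", "xyz"))   # False
```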
def get_user_context(module_context, position):
"""
Returns the scope in which the user resides. This includes flows.
@@ -68,9 +90,16 @@ def get_flow_scope_node(module_node, position):
return node
@plugin_manager.decorate()
def complete_param_names(context, function_name, decorator_nodes):
# Basically there's no way to do param completion. The plugins are
# responsible for this.
return []
class Completion:
def __init__(self, inference_state, module_context, code_lines, position,
call_signatures_callback):
signatures_callback, fuzzy=False):
self._inference_state = inference_state
self._module_context = module_context
self._module_node = module_context.tree_node
@@ -81,33 +110,56 @@ class Completion:
# The actual cursor position is not what we need to calculate
# everything. We want the start of the name we're on.
self._original_position = position
self._position = position[0], position[1] - len(self._like_name)
self._call_signatures_callback = call_signatures_callback
self._signatures_callback = signatures_callback
def completions(self):
leaf = self._module_node.get_leaf_for_position(self._position, include_prefixes=True)
string, start_leaf = _extract_string_while_in_string(leaf, self._position)
if string is not None:
completions = list(file_name_completions(
self._fuzzy = fuzzy
def complete(self):
leaf = self._module_node.get_leaf_for_position(
self._original_position,
include_prefixes=True
)
string, start_leaf, quote = _extract_string_while_in_string(leaf, self._original_position)
prefixed_completions = complete_dict(
self._module_context,
self._code_lines,
start_leaf or leaf,
self._original_position,
None if string is None else quote + string,
fuzzy=self._fuzzy,
)
if string is not None and not prefixed_completions:
prefixed_completions = list(complete_file_name(
self._inference_state, self._module_context, start_leaf, string,
self._like_name, self._call_signatures_callback,
self._code_lines, self._original_position
self._like_name, self._signatures_callback,
self._code_lines, self._original_position,
self._fuzzy
))
if completions:
return completions
if string is not None:
if not prefixed_completions and '\n' in string:
# Complete only multi line strings
prefixed_completions = self._complete_in_string(start_leaf, string)
return prefixed_completions
completion_names = self._get_value_completions(leaf)
cached_name, completion_names = self._complete_python(leaf)
completions = filter_names(self._inference_state, completion_names,
self.stack, self._like_name)
completions = list(filter_names(self._inference_state, completion_names,
self.stack, self._like_name,
self._fuzzy, cached_name=cached_name))
return sorted(completions, key=lambda x: (x.name.startswith('__'),
x.name.startswith('_'),
x.name.lower()))
return (
# Removing duplicates mostly to remove False/True/None duplicates.
_remove_duplicates(prefixed_completions, completions)
+ sorted(completions, key=lambda x: (x.name.startswith('__'),
x.name.startswith('_'),
x.name.lower()))
)
def _get_value_completions(self, leaf):
def _complete_python(self, leaf):
"""
Analyzes the value that a completion is made in and decides what to
Analyzes the current context of a completion and decides what to
return.
Technically this works by generating a parser stack and analysing the
@@ -122,6 +174,11 @@ class Completion:
grammar = self._inference_state.grammar
self.stack = stack = None
self._position = (
self._original_position[0],
self._original_position[1] - len(self._like_name)
)
cached_name = None
try:
self.stack = stack = helpers.get_stack_at_position(
@@ -132,10 +189,10 @@ class Completion:
if value == '.':
# After ErrorLeaf's that are dots, we will not do any
# completions since this probably just confuses the user.
return []
return cached_name, []
# If we don't have a value, just use global completion.
return self._global_completions()
return cached_name, self._complete_global_scope()
allowed_transitions = \
list(stack._allowed_transition_names_and_token_types())
@@ -174,8 +231,12 @@ class Completion:
completion_names = []
current_line = self._code_lines[self._position[0] - 1][:self._position[1]]
if not current_line or current_line[-1] in ' \t.;':
completion_names += self._get_keyword_completion_names(allowed_transitions)
completion_names += self._complete_keywords(
allowed_transitions,
only_values=not (not current_line or current_line[-1] in ' \t.;'
and current_line[-3:] != '...')
)
if any(t in allowed_transitions for t in (PythonTokenTypes.NAME,
PythonTokenTypes.INDENT)):
@@ -183,17 +244,11 @@ class Completion:
nonterminals = [stack_node.nonterminal for stack_node in stack]
nodes = []
for stack_node in stack:
if stack_node.dfa.from_rule == 'small_stmt':
nodes = []
else:
nodes += stack_node.nodes
nodes = _gather_nodes(stack)
if nodes and nodes[-1] in ('as', 'def', 'class'):
# No completions for ``with x as foo`` and ``import x as foo``.
# Also true for defining names as a class or function.
return list(self._get_class_value_completions(is_function=True))
return cached_name, list(self._complete_inherited(is_function=True))
elif "import_stmt" in nonterminals:
level, names = parse_dotted_names(nodes, "import_from" in nonterminals)
@@ -205,23 +260,70 @@ class Completion:
)
elif nonterminals[-1] in ('trailer', 'dotted_name') and nodes[-1] == '.':
dot = self._module_node.get_leaf_for_position(self._position)
completion_names += self._trailer_completions(dot.get_previous_leaf())
cached_name, n = self._complete_trailer(dot.get_previous_leaf())
completion_names += n
elif self._is_parameter_completion():
completion_names += self._complete_params(leaf)
else:
completion_names += self._global_completions()
completion_names += self._get_class_value_completions(is_function=False)
completion_names += self._complete_global_scope()
completion_names += self._complete_inherited(is_function=False)
if 'trailer' in nonterminals:
call_signatures = self._call_signatures_callback()
completion_names += get_call_signature_param_names(call_signatures)
# This check appears to be good enough to filter most cases,
# so that signature completions don't randomly appear.
# To understand why this works, three things are important:
# 1. trailer with a `,` in it is either a subscript or an arglist.
# 2. If there's no `,`, it's at the start and only signatures start
# with `(`. Other trailers could start with `.` or `[`.
# 3. Decorators are very primitive and have an optional `(` with
# optional arglist in them.
if nodes[-1] in ['(', ','] and nonterminals[-1] in ('trailer', 'arglist', 'decorator'):
signatures = self._signatures_callback(*self._position)
completion_names += get_signature_param_names(signatures)
return completion_names
return cached_name, completion_names
def _get_keyword_completion_names(self, allowed_transitions):
def _is_parameter_completion(self):
tos = self.stack[-1]
if tos.nonterminal == 'lambdef' and len(tos.nodes) == 1:
# We are at the position `lambda `, where basically the next node
# is a param.
return True
if tos.nonterminal == 'parameters':
# Basically we are at the position `foo(`, there's nothing there
# yet, so we have no `typedargslist`.
return True
# var args is for lambdas and typed args for normal functions
return tos.nonterminal in ('typedargslist', 'varargslist') and tos.nodes[-1] == ','
def _complete_params(self, leaf):
stack_node = self.stack[-2]
if stack_node.nonterminal == 'parameters':
stack_node = self.stack[-3]
if stack_node.nonterminal == 'funcdef':
context = get_user_context(self._module_context, self._position)
node = search_ancestor(leaf, 'error_node', 'funcdef')
if node.type == 'error_node':
n = node.children[0]
if n.type == 'decorators':
decorators = n.children
elif n.type == 'decorator':
decorators = [n]
else:
decorators = []
else:
decorators = node.get_decorators()
function_name = stack_node.nodes[1]
return complete_param_names(context, function_name.value, decorators)
return []
def _complete_keywords(self, allowed_transitions, only_values):
for k in allowed_transitions:
if isinstance(k, str) and k.isalpha():
yield keywords.KeywordName(self._inference_state, k)
if not only_values or k in ('True', 'False', 'None'):
yield keywords.KeywordName(self._inference_state, k)
def _global_completions(self):
def _complete_global_scope(self):
context = get_user_context(self._module_context, self._position)
debug.dbg('global completion scope: %s', context)
flow_scope_node = get_flow_scope_node(self._module_node, self._position)
@@ -235,29 +337,108 @@ class Completion:
completion_names += filter.values()
return completion_names
def _trailer_completions(self, previous_leaf):
user_value = get_user_context(self._module_context, self._position)
def _complete_trailer(self, previous_leaf):
inferred_context = self._module_context.create_context(previous_leaf)
values = infer_call_of_leaf(inferred_context, previous_leaf)
completion_names = []
debug.dbg('trailer completion values: %s', values, color='MAGENTA')
# The cached name simply exists to make speed optimizations for certain
# modules.
cached_name = None
if len(values) == 1:
v, = values
if v.is_module():
if len(v.string_names) == 1:
module_name = v.string_names[0]
if module_name in ('numpy', 'tensorflow', 'matplotlib', 'pandas'):
cached_name = module_name
return cached_name, self._complete_trailer_for_values(values)
def _complete_trailer_for_values(self, values):
user_context = get_user_context(self._module_context, self._position)
completion_names = []
for value in values:
for filter in value.get_filters(origin_scope=user_value.tree_node):
for filter in value.get_filters(origin_scope=user_context.tree_node):
completion_names += filter.values()
if not value.is_stub() and isinstance(value, TreeInstance):
completion_names += self._complete_getattr(value)
python_values = convert_values(values)
for c in python_values:
if c not in values:
for filter in c.get_filters(origin_scope=user_value.tree_node):
for filter in c.get_filters(origin_scope=user_context.tree_node):
completion_names += filter.values()
return completion_names
def _complete_getattr(self, instance):
"""
A heuristic to make completion for proxy objects work. This is not
intended to work in all cases. It works exactly in this case:
def __getattr__(self, name):
...
return getattr(any_object, name)
It is important that the return contains getattr directly, otherwise it
won't work anymore. It's really just a stupid heuristic. It will not
work if you write e.g. `return (getattr(o, name))`, because of the
additional parentheses. It will also not work if you move the getattr
to some other place that is not the return statement itself.
It is intentional that it doesn't work in all cases. Generally it's
really hard to do even this case (as you can see below). Most people
will write it like this anyway and the other ones, well they are just
out of luck I guess :) ~dave.
"""
names = (instance.get_function_slot_names(u'__getattr__')
or instance.get_function_slot_names(u'__getattribute__'))
functions = ValueSet.from_sets(
name.infer()
for name in names
)
for func in functions:
tree_node = func.tree_node
for return_stmt in tree_node.iter_return_stmts():
# Basically until the next comment we just try to find out if a
# return statement looks exactly like `return getattr(x, name)`.
if return_stmt.type != 'return_stmt':
continue
atom_expr = return_stmt.children[1]
if atom_expr.type != 'atom_expr':
continue
atom = atom_expr.children[0]
trailer = atom_expr.children[1]
if len(atom_expr.children) != 2 or atom.type != 'name' \
or atom.value != 'getattr':
continue
arglist = trailer.children[1]
if arglist.type != 'arglist' or len(arglist.children) < 3:
continue
context = func.as_context()
object_node = arglist.children[0]
# Make sure it's a param: foo in __getattr__(self, foo)
name_node = arglist.children[2]
name_list = context.goto(name_node, name_node.start_pos)
if not any(n.api_type == 'param' for n in name_list):
continue
# Now that we know that these are most probably completion
# objects, we just infer the object and return them as
# completions.
objects = context.infer_node(object_node)
return self._complete_trailer_for_values(objects)
return []
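The heuristic above targets exactly one code shape. A minimal example of a proxy class it is meant to complete through, with the constraint from the docstring spelled out in a comment:

```python
class Proxy(object):
    """Forwards unknown attribute access to a wrapped object."""

    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # The heuristic only recognizes a bare `return getattr(obj, name)`;
        # extra parentheses or an intermediate variable defeat the lookup.
        return getattr(self._wrapped, name)

p = Proxy("text")
print(p.upper())  # TEXT
```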
def _get_importer_names(self, names, level=0, only_modules=True):
names = [n.value for n in names]
i = imports.Importer(self._inference_state, names, self._module_context, level)
return i.completion_names(self._inference_state, only_modules=only_modules)
def _get_class_value_completions(self, is_function=True):
def _complete_inherited(self, is_function=True):
"""
Autocomplete inherited methods when overriding in child class.
"""
@@ -281,21 +462,110 @@ class Completion:
if (name.api_type == 'function') == is_function:
yield name
def _complete_in_string(self, start_leaf, string):
"""
To make it possible for people to have completions in doctests or
generally in "Python" code in docstrings, we use the following
heuristic:
- Having an indented block of code
- Having some doctest code that starts with `>>>`
- Having backticks that don't have whitespace inside them
"""
def iter_relevant_lines(lines):
include_next_line = False
for l in lines:
if include_next_line or l.startswith('>>>') or l.startswith(' '):
yield re.sub(r'^( *>>> ?| +)', '', l)
else:
yield None
include_next_line = bool(re.match(' *>>>', l))
string = dedent(string)
code_lines = split_lines(string, keepends=True)
relevant_code_lines = list(iter_relevant_lines(code_lines))
if relevant_code_lines[-1] is not None:
# Some code lines might be None, therefore get rid of that.
relevant_code_lines = [c or '\n' for c in relevant_code_lines]
return self._complete_code_lines(relevant_code_lines)
match = re.search(r'`([^`\s]+)', code_lines[-1])
if match:
return self._complete_code_lines([match.group(1)])
return []
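The doctest-prompt stripping inside ``iter_relevant_lines`` above uses the regex ``^( *>>> ?| +)``. A sketch of that step in isolation (the helper name is hypothetical):

```python
import re

def strip_doctest_prompt(line):
    # Remove a leading doctest prompt or plain indentation, as in the
    # iter_relevant_lines helper above; other lines pass through unchanged.
    return re.sub(r'^( *>>> ?| +)', '', line)

print(strip_doctest_prompt(">>> import os"))  # import os
print(strip_doctest_prompt("    x = 1"))      # x = 1
```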
def _complete_code_lines(self, code_lines):
module_node = self._inference_state.grammar.parse(''.join(code_lines))
module_value = ModuleValue(
self._inference_state,
module_node,
code_lines=code_lines,
)
module_value.parent_context = self._module_context
return Completion(
self._inference_state,
module_value.as_context(),
code_lines=code_lines,
position=module_node.end_pos,
signatures_callback=lambda *args, **kwargs: [],
fuzzy=self._fuzzy
).complete()
def _gather_nodes(stack):
nodes = []
for stack_node in stack:
if stack_node.dfa.from_rule == 'small_stmt':
nodes = []
else:
nodes += stack_node.nodes
return nodes
_string_start = re.compile(r'^\w*(\'{3}|"{3}|\'|")')
def _extract_string_while_in_string(leaf, position):
if leaf.type == 'string':
match = re.match(r'^\w*(\'{3}|"{3}|\'|")', leaf.value)
quote = match.group(1)
def return_part_of_leaf(leaf):
kwargs = {}
if leaf.line == position[0]:
kwargs['endpos'] = position[1] - leaf.column
match = _string_start.match(leaf.value, **kwargs)
start = match.group(0)
if leaf.line == position[0] and position[1] < leaf.column + match.end():
return None, None
if leaf.end_pos[0] == position[0] and position[1] > leaf.end_pos[1] - len(quote):
return None, None
return cut_value_at_position(leaf, position)[match.end():], leaf
return None, None, None
return cut_value_at_position(leaf, position)[match.end():], leaf, start
if position < leaf.start_pos:
return None, None, None
if leaf.type == 'string':
return return_part_of_leaf(leaf)
leaves = []
while leaf is not None and leaf.line == position[0]:
while leaf is not None:
if leaf.type == 'error_leaf' and ('"' in leaf.value or "'" in leaf.value):
return ''.join(l.get_code() for l in leaves), leaf
if len(leaf.value) > 1:
return return_part_of_leaf(leaf)
prefix_leaf = None
if not leaf.prefix:
prefix_leaf = leaf.get_previous_leaf()
if prefix_leaf is None or prefix_leaf.type != 'name' \
or not all(c in 'rubf' for c in prefix_leaf.value.lower()):
prefix_leaf = None
return (
''.join(cut_value_at_position(l, position) for l in leaves),
prefix_leaf or leaf,
('' if prefix_leaf is None else prefix_leaf.value)
+ cut_value_at_position(leaf, position),
)
if leaf.line != position[0]:
# Multi-line strings are always simple error leaves and contain the
# whole string; single-line error leaves are therefore the important
# ones here, and since the line is different, it's not really a
# single-line string anymore.
break
leaves.insert(0, leaf)
leaf = leaf.get_previous_leaf()
return None, None
return None, None, None

View File

@@ -0,0 +1,25 @@
_cache = {}
def save_entry(module_name, name, cache):
try:
module_cache = _cache[module_name]
except KeyError:
module_cache = _cache[module_name] = {}
module_cache[name] = cache
def _create_get_from_cache(number):
def _get_from_cache(module_name, name, get_cache_values):
try:
return _cache[module_name][name][number]
except KeyError:
v = get_cache_values()
save_entry(module_name, name, v)
return v[number]
return _get_from_cache
get_type = _create_get_from_cache(0)
get_docstring_signature = _create_get_from_cache(1)
get_docstring = _create_get_from_cache(2)
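Since this new cache module is self-contained, its behavior is easy to demonstrate: all three numbered accessors share one cached tuple per `(module, name)` pair, so the expensive callback runs at most once (the `compute` callback below is an illustrative stand-in):

```python
_cache = {}


def save_entry(module_name, name, cache):
    try:
        module_cache = _cache[module_name]
    except KeyError:
        module_cache = _cache[module_name] = {}
    module_cache[name] = cache


def _create_get_from_cache(number):
    def _get_from_cache(module_name, name, get_cache_values):
        try:
            return _cache[module_name][name][number]
        except KeyError:
            # Cache miss: compute the whole tuple once and store it.
            v = get_cache_values()
            save_entry(module_name, name, v)
            return v[number]
    return _get_from_cache


get_type = _create_get_from_cache(0)
get_docstring_signature = _create_get_from_cache(1)
get_docstring = _create_get_from_cache(2)

calls = []


def compute():
    # Stand-in for an expensive lookup; runs only on a cache miss.
    calls.append(1)
    return ('function', 'foo(x)', 'Docstring of foo.')


print(get_type('mymod', 'foo', compute))       # → function (miss: computes)
print(get_docstring('mymod', 'foo', compute))  # → Docstring of foo. (hit)
print(len(calls))                              # → 1
```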

View File

@@ -19,6 +19,7 @@ _VersionInfo = namedtuple('VersionInfo', 'major minor micro')
_SUPPORTED_PYTHONS = ['3.8', '3.7', '3.6', '3.5', '3.4', '2.7']
_SAFE_PATHS = ['/usr/bin', '/usr/local/bin']
_CONDA_VAR = 'CONDA_PREFIX'
_CURRENT_VERSION = '%s.%s' % (sys.version_info.major, sys.version_info.minor)
@@ -147,13 +148,13 @@ class InterpreterEnvironment(_SameEnvironmentMixin, _BaseEnvironment):
return sys.path
def _get_virtual_env_from_var():
def _get_virtual_env_from_var(env_var='VIRTUAL_ENV'):
"""Get virtualenv environment from VIRTUAL_ENV environment variable.
It uses `safe=False` with ``create_environment``, because the environment
variable is considered safe, being controlled solely by the user.
"""
var = os.environ.get('VIRTUAL_ENV')
var = os.environ.get(env_var)
if var:
# Under macOS in some cases - notably when using Pipenv - the
# sys.prefix of the virtualenv is /path/to/env/bin/.. instead of
@@ -178,7 +179,8 @@ def _calculate_sha256_for_file(path):
def get_default_environment():
"""
Tries to return an active Virtualenv. If there is no VIRTUAL_ENV variable
Tries to return an active Virtualenv or conda environment.
If there is no VIRTUAL_ENV variable or no CONDA_PREFIX variable set
set it will return the latest Python version installed on the system. This
makes it possible to use as many new Python features as possible when using
autocompletion and other functionality.
@@ -189,6 +191,10 @@ def get_default_environment():
if virtual_env is not None:
return virtual_env
conda_env = _get_virtual_env_from_var(_CONDA_VAR)
if conda_env is not None:
return conda_env
return _try_get_same_env()
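The resulting precedence can be sketched as a small helper (`pick_default_environment_path` is a hypothetical name; the real function returns `Environment` objects, not paths):

```python
import os


def pick_default_environment_path(environ=os.environ):
    # An activated virtualenv (VIRTUAL_ENV) wins over a conda
    # environment (CONDA_PREFIX); with neither set, the caller falls
    # back to _try_get_same_env().
    for var in ('VIRTUAL_ENV', 'CONDA_PREFIX'):
        path = environ.get(var)
        if path:
            return path
    return None


print(pick_default_environment_path({'CONDA_PREFIX': '/opt/conda/envs/py38'}))
# → /opt/conda/envs/py38
```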
@@ -233,7 +239,7 @@ def _try_get_same_env():
def get_cached_default_environment():
var = os.environ.get('VIRTUAL_ENV')
var = os.environ.get('VIRTUAL_ENV') or os.environ.get(_CONDA_VAR)
environment = _get_cached_default_environment()
# Under macOS in some cases - notably when using Pipenv - the
@@ -255,28 +261,37 @@ def find_virtualenvs(paths=None, **kwargs):
"""
:param paths: A list of paths in your file system to be scanned for
Virtualenvs. It will search in these paths and potentially execute the
Python binaries. Also the VIRTUAL_ENV variable will be checked if it
contains a valid Virtualenv.
Python binaries.
:param safe: Default True. In case this is False, it will allow this
function to execute potential `python` environments. An attacker might
be able to drop an executable in a path this function is searching by
default. If the executable has not been installed by root, it will not
be executed.
:param use_environment_vars: Default True. If True, the VIRTUAL_ENV
variable will be checked if it contains a valid VirtualEnv.
CONDA_PREFIX will be checked to see if it contains a valid conda
environment.
:yields: :class:`Environment`
"""
def py27_comp(paths=None, safe=True):
def py27_comp(paths=None, safe=True, use_environment_vars=True):
if paths is None:
paths = []
_used_paths = set()
# Using this variable should be safe, because attackers might be able
# to drop files (via git) but not environment variables.
virtual_env = _get_virtual_env_from_var()
if virtual_env is not None:
yield virtual_env
_used_paths.add(virtual_env.path)
if use_environment_vars:
# Using this variable should be safe, because attackers might be
# able to drop files (via git) but not environment variables.
virtual_env = _get_virtual_env_from_var()
if virtual_env is not None:
yield virtual_env
_used_paths.add(virtual_env.path)
conda_env = _get_virtual_env_from_var(_CONDA_VAR)
if conda_env is not None:
yield conda_env
_used_paths.add(conda_env.path)
for directory in paths:
if not os.path.isdir(directory):
@@ -446,8 +461,8 @@ def _is_unix_safe_simple(real_path):
# 2. The repository has an innocent looking folder called foobar. jedi
# searches for the folder and executes foobar/bin/python --version if
# there's also a foobar/bin/activate.
# 3. The bin/python is obviously not a python script but a bash script or
# whatever the attacker wants.
# 3. The attacker has gained code execution, since they control
#    foobar/bin/python.
return uid == 0

View File

@@ -1,16 +1,20 @@
import os
from jedi._compatibility import FileNotFoundError, force_unicode, scandir
from jedi.inference.names import AbstractArbitraryName
from jedi.api import classes
from jedi.api.strings import StringName, get_quote_ending
from jedi.api.helpers import fuzzy_match, start_match
from jedi.inference.helpers import get_str_or_none
from jedi.parser_utils import get_string_quote
def file_name_completions(inference_state, module_context, start_leaf, string,
like_name, call_signatures_callback, code_lines, position):
class PathName(StringName):
api_type = u'path'
def complete_file_name(inference_state, module_context, start_leaf, string,
like_name, signatures_callback, code_lines, position, fuzzy):
# First we want to find out what can actually be changed as a name.
like_name_length = len(os.path.basename(string) + like_name)
like_name_length = len(os.path.basename(string))
addition = _get_string_additions(module_context, start_leaf)
if addition is None:
@@ -19,10 +23,10 @@ def file_name_completions(inference_state, module_context, start_leaf, string,
# Here we use basename again, because if strings are added like
# `'foo' + 'bar`, it should complete to `foobar/`.
must_start_with = os.path.basename(string) + like_name
must_start_with = os.path.basename(string)
string = os.path.dirname(string)
sigs = call_signatures_callback()
sigs = signatures_callback(*position)
is_in_os_path_join = sigs and all(s.full_name == 'os.path.join' for s in sigs)
if is_in_os_path_join:
to_be_added = _add_os_path_join(module_context, start_leaf, sigs[0].bracket_start)
@@ -32,31 +36,28 @@ def file_name_completions(inference_state, module_context, start_leaf, string,
string = to_be_added + string
base_path = os.path.join(inference_state.project._path, string)
try:
listed = scandir(base_path)
except FileNotFoundError:
listed = sorted(scandir(base_path), key=lambda e: e.name)
# OSError: [Errno 36] File name too long: '...'
except (FileNotFoundError, OSError):
return
for entry in listed:
name = entry.name
if name.startswith(must_start_with):
if fuzzy:
match = fuzzy_match(name, must_start_with)
else:
match = start_match(name, must_start_with)
if match:
if is_in_os_path_join or not entry.is_dir():
if start_leaf.type == 'string':
quote = get_string_quote(start_leaf)
else:
assert start_leaf.type == 'error_leaf'
quote = start_leaf.value
potential_other_quote = \
code_lines[position[0] - 1][position[1]:position[1] + len(quote)]
# Add a quote if it's not already there.
if quote != potential_other_quote:
name += quote
name += get_quote_ending(start_leaf.value, code_lines, position)
else:
name += os.path.sep
yield classes.Completion(
inference_state,
FileName(inference_state, name[len(must_start_with) - like_name_length:]),
PathName(inference_state, name[len(must_start_with) - like_name_length:]),
stack=None,
like_name_length=like_name_length
like_name_length=like_name_length,
is_fuzzy=fuzzy,
)
@@ -99,11 +100,6 @@ def _add_strings(context, nodes, add_slash=False):
return string
class FileName(AbstractArbitraryName):
api_type = u'path'
is_value_name = False
def _add_os_path_join(module_context, start_leaf, bracket_start):
def check(maybe_bracket, nodes):
if maybe_bracket.start_pos != bracket_start:

View File

@@ -4,6 +4,7 @@ Helpers for the API
import re
from collections import namedtuple
from textwrap import dedent
from functools import wraps
from parso.python.parser import Parser
from parso.python import tree
@@ -13,12 +14,25 @@ from jedi.inference.base_value import NO_VALUES
from jedi.inference.syntax_tree import infer_atom
from jedi.inference.helpers import infer_call_of_leaf
from jedi.inference.compiled import get_string_value_set
from jedi.cache import call_signature_time_cache
from jedi.cache import signature_time_cache
CompletionParts = namedtuple('CompletionParts', ['path', 'has_dot', 'name'])
def start_match(string, like_name):
return string.startswith(like_name)
def fuzzy_match(string, like_name):
if len(like_name) <= 1:
return like_name in string
pos = string.find(like_name[0])
if pos >= 0:
return fuzzy_match(string[pos + 1:], like_name[1:])
return False
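`fuzzy_match` is a recursive subsequence check: every character of `like_name` must occur in `string`, in order, but not necessarily adjacent. Reproduced here as a self-contained sketch:

```python
def start_match(string, like_name):
    return string.startswith(like_name)


def fuzzy_match(string, like_name):
    # Subsequence match: consume the first character of like_name at its
    # first occurrence in string, then recurse on the remainders.
    if len(like_name) <= 1:
        return like_name in string
    pos = string.find(like_name[0])
    if pos >= 0:
        return fuzzy_match(string[pos + 1:], like_name[1:])
    return False


print(fuzzy_match('os_path_join', 'opj'))  # → True ('o', 'p', 'j' in order)
print(fuzzy_match('os_path_join', 'jpo'))  # → False (wrong order)
print(start_match('os_path_join', 'os_'))  # → True (plain prefix match)
```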
def sorted_definitions(defs):
# Note: `or ''` below is required because `module_path` could be None.
return sorted(defs, key=lambda x: (x.module_path or '', x.line or 0, x.column or 0, x.name))
@@ -136,11 +150,9 @@ def get_stack_at_position(grammar, code_lines, leaf, pos):
)
def infer_goto_definition(inference_state, context, leaf):
def infer(inference_state, context, leaf):
if leaf.type == 'name':
# In case of a name we can just use goto_definition which does all the
# magic itself.
return inference_state.goto_definitions(context, leaf)
return inference_state.infer(context, leaf)
parent = leaf.parent
definitions = NO_VALUES
@@ -314,7 +326,7 @@ def _get_index_and_key(nodes, position):
return nodes_before.count(','), key_str
def _get_call_signature_details_from_error_node(node, additional_children, position):
def _get_signature_details_from_error_node(node, additional_children, position):
for index, element in reversed(list(enumerate(node.children))):
# `index > 0` means that it's a trailer and not an atom.
if element == '(' and element.end_pos <= position and index > 0:
@@ -328,33 +340,30 @@ def _get_call_signature_details_from_error_node(node, additional_children, posit
return CallDetails(element, children + additional_children, position)
def get_call_signature_details(module, position):
def get_signature_details(module, position):
leaf = module.get_leaf_for_position(position, include_prefixes=True)
# It's easier to deal with the previous token than the next one in this
# case.
if leaf.start_pos >= position:
# Whitespace / comments after the leaf count towards the previous leaf.
leaf = leaf.get_previous_leaf()
if leaf is None:
return None
if leaf == ')':
# TODO is this ok?
if leaf.end_pos == position:
leaf = leaf.get_next_leaf()
# Now that we know where we are in the syntax tree, we start to look at
# parents for possible function definitions.
node = leaf.parent
while node is not None:
if node.type in ('funcdef', 'classdef'):
# Don't show call signatures if there's stuff before it that just
# makes it feel strange to have a call signature.
# Don't show signatures if there's stuff before it that just
# makes it feel strange to have a signature.
return None
additional_children = []
for n in reversed(node.children):
if n.start_pos < position:
if n.type == 'error_node':
result = _get_call_signature_details_from_error_node(
result = _get_signature_details_from_error_node(
n, additional_children, position
)
if result is not None:
@@ -364,19 +373,25 @@ def get_call_signature_details(module, position):
continue
additional_children.insert(0, n)
# Find a valid trailer
if node.type == 'trailer' and node.children[0] == '(':
leaf = node.get_previous_leaf()
if leaf is None:
return None
return CallDetails(node.children[0], node.children, position)
# Additionally we have to check that an ending parenthesis isn't
# interpreted wrong. There are two cases:
# 1. Cursor before paren -> The current signature is good
# 2. Cursor after paren -> We need to skip the current signature
if not (leaf is node.children[-1] and position >= leaf.end_pos):
leaf = node.get_previous_leaf()
if leaf is None:
return None
return CallDetails(node.children[0], node.children, position)
node = node.parent
return None
@call_signature_time_cache("call_signatures_validity")
def cache_call_signatures(inference_state, context, bracket_leaf, code_lines, user_pos):
@signature_time_cache("call_signatures_validity")
def cache_signatures(inference_state, context, bracket_leaf, code_lines, user_pos):
"""This function calculates the cache key."""
line_index = user_pos[0] - 1
@@ -390,8 +405,31 @@ def cache_call_signatures(inference_state, context, bracket_leaf, code_lines, us
yield None # Don't cache!
else:
yield (module_path, before_bracket, bracket_leaf.start_pos)
yield infer_goto_definition(
yield infer(
inference_state,
context,
bracket_leaf.get_previous_leaf(),
)
def validate_line_column(func):
@wraps(func)
def wrapper(self, line=None, column=None, *args, **kwargs):
line = max(len(self._code_lines), 1) if line is None else line
if not (0 < line <= len(self._code_lines)):
raise ValueError('`line` parameter is not in a valid range.')
line_string = self._code_lines[line - 1]
line_len = len(line_string)
if line_string.endswith('\r\n'):
line_len -= 1
if line_string.endswith('\n'):
line_len -= 1
column = line_len if column is None else column
if not (0 <= column <= line_len):
raise ValueError('`column` parameter (%d) is not in a valid range '
'(0-%d) for line %d (%r).' % (
column, line_len, line, line_string))
return func(self, line, column, *args, **kwargs)
return wrapper
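The default-filling and range-checking inside `validate_line_column` can be exercised in isolation; a sketch with the decorator's body lifted into a plain function (`normalize_position` is a hypothetical name):

```python
def normalize_position(code_lines, line=None, column=None):
    # Default to the last line / end of line, then range-check, exactly
    # as the validate_line_column wrapper above does.
    line = max(len(code_lines), 1) if line is None else line
    if not (0 < line <= len(code_lines)):
        raise ValueError('`line` parameter is not in a valid range.')

    line_string = code_lines[line - 1]
    line_len = len(line_string)
    # Trailing newlines are not valid cursor columns.
    if line_string.endswith('\r\n'):
        line_len -= 1
    if line_string.endswith('\n'):
        line_len -= 1

    column = line_len if column is None else column
    if not (0 <= column <= line_len):
        raise ValueError('`column` parameter (%d) is not in a valid range '
                         '(0-%d) for line %d (%r).' % (
                             column, line_len, line, line_string))
    return line, column


print(normalize_position(['foo = 1\n', 'bar\n']))  # → (2, 3): end of file
```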

View File

@@ -15,41 +15,11 @@ except ImportError:
pydoc_topics = None
def get_operator(inference_state, string, pos):
return Keyword(inference_state, string, pos)
class KeywordName(AbstractArbitraryName):
api_type = u'keyword'
def infer(self):
return [Keyword(self.inference_state, self.string_name, (0, 0))]
class Keyword(object):
api_type = u'keyword'
def __init__(self, inference_state, name, pos):
self.name = KeywordName(inference_state, name)
self.start_pos = pos
self.parent = inference_state.builtins_module
@property
def names(self):
""" For a `parsing.Name` like comparision """
return [self.name]
def py__doc__(self):
return imitate_pydoc(self.name.string_name)
def get_signatures(self):
# TODO this makes no sense, I think Keyword should somehow merge with
# Value to make it easier for the api/classes.py to deal with all
# of it.
return []
def __repr__(self):
return '<%s: %s>' % (type(self).__name__, self.name)
return imitate_pydoc(self.string_name)
def imitate_pydoc(string):
@@ -69,7 +39,9 @@ def imitate_pydoc(string):
string = h.symbols[string]
string, _, related = string.partition(' ')
get_target = lambda s: h.topics.get(s, h.keywords.get(s))
def get_target(s):
return h.topics.get(s, h.keywords.get(s))
while isinstance(string, str):
string = get_target(string)

View File

@@ -94,7 +94,8 @@ class Project(object):
return sys_path
@inference_state_as_method_param_cache()
def _get_sys_path(self, inference_state, environment=None, add_parent_paths=True):
def _get_sys_path(self, inference_state, environment=None,
add_parent_paths=True, add_init_paths=False):
"""
Keep this method private for all users of jedi. However internally this
one is used like a public method.
@@ -110,7 +111,17 @@ class Project(object):
suffixed += discover_buildout_paths(inference_state, inference_state.script_path)
if add_parent_paths:
traversed = list(traverse_parents(inference_state.script_path))
# Collect directories in upward search by:
# 1. Skipping directories with __init__.py
# 2. Stopping immediately when above self._path
traversed = []
for parent_path in traverse_parents(inference_state.script_path):
if not parent_path.startswith(self._path):
break
if not add_init_paths \
and os.path.isfile(os.path.join(parent_path, "__init__.py")):
continue
traversed.append(parent_path)
# AFAIK some libraries have imports like `foo.foo.bar`, which leads
# to the conclusion that, by default, longer paths should be preferred
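The upward search can be sketched as a standalone function (`parent_sys_paths` is a hypothetical name; like the code above, leaving the project is detected by a plain string-prefix check):

```python
import os
import tempfile


def parent_sys_paths(script_path, project_path, add_init_paths=False):
    # Walk the parents of the script, stop as soon as we leave the
    # project, and skip package directories (those containing an
    # __init__.py) unless explicitly requested.
    traversed = []
    path = os.path.dirname(script_path)
    while path.startswith(project_path):
        if add_init_paths or not os.path.isfile(os.path.join(path, '__init__.py')):
            traversed.append(path)
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            break
        path = parent
    return traversed


# Demo: project/pkg/__init__.py and project/pkg/mod.py in a temp dir.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'pkg')
os.mkdir(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
script = os.path.join(pkg, 'mod.py')

print(parent_sys_paths(script, root))                       # only the root
print(parent_sys_paths(script, root, add_init_paths=True))  # pkg/ as well
```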

View File

@@ -21,7 +21,7 @@ import jedi.utils
from jedi import __version__ as __jedi_version__
print('REPL completion using Jedi %s' % __jedi_version__)
jedi.utils.setup_readline()
jedi.utils.setup_readline(fuzzy=False)
del jedi

110
jedi/api/strings.py Normal file
View File

@@ -0,0 +1,110 @@
"""
This module is here for string completions. This means mostly stuff where
strings are returned, like `foo = dict(bar=3); foo["ba` would complete to
`"bar"]`.
However, it does the same for numbers. The difference between string
completions and other completions is mostly that this module doesn't return
names defined in a module, but pretty much arbitrary strings.
"""
import re
from jedi._compatibility import unicode
from jedi.inference.names import AbstractArbitraryName
from jedi.inference.helpers import infer_call_of_leaf
from jedi.api.classes import Completion
from jedi.parser_utils import cut_value_at_position
_sentinel = object()
class StringName(AbstractArbitraryName):
api_type = u'string'
is_value_name = False
def complete_dict(module_context, code_lines, leaf, position, string, fuzzy):
bracket_leaf = leaf
if bracket_leaf != '[':
bracket_leaf = leaf.get_previous_leaf()
cut_end_quote = ''
if string:
cut_end_quote = get_quote_ending(string, code_lines, position, invert_result=True)
if bracket_leaf == '[':
if string is None and leaf is not bracket_leaf:
string = cut_value_at_position(leaf, position)
context = module_context.create_context(bracket_leaf)
before_bracket_leaf = bracket_leaf.get_previous_leaf()
if before_bracket_leaf.type in ('atom', 'trailer', 'name'):
values = infer_call_of_leaf(context, before_bracket_leaf)
return list(_completions_for_dicts(
module_context.inference_state,
values,
'' if string is None else string,
cut_end_quote,
fuzzy=fuzzy,
))
return []
def _completions_for_dicts(inference_state, dicts, literal_string, cut_end_quote, fuzzy):
for dict_key in sorted(_get_python_keys(dicts), key=lambda x: repr(x)):
dict_key_str = _create_repr_string(literal_string, dict_key)
if dict_key_str.startswith(literal_string):
name = StringName(inference_state, dict_key_str[:-len(cut_end_quote) or None])
yield Completion(
inference_state,
name,
stack=None,
like_name_length=len(literal_string),
is_fuzzy=fuzzy
)
def _create_repr_string(literal_string, dict_key):
if not isinstance(dict_key, (unicode, bytes)) or not literal_string:
return repr(dict_key)
r = repr(dict_key)
prefix, quote = _get_string_prefix_and_quote(literal_string)
if quote is None:
return r
if quote == r[0]:
return prefix + r
return prefix + quote + r[1:-1] + quote
def _get_python_keys(dicts):
for dct in dicts:
if dct.array_type == 'dict':
for key in dct.get_key_values():
dict_key = key.get_safe_value(default=_sentinel)
if dict_key is not _sentinel:
yield dict_key
def _get_string_prefix_and_quote(string):
match = re.match(r'(\w*)("""|\'{3}|"|\')', string)
if match is None:
return None, None
return match.group(1), match.group(2)
def _get_string_quote(string):
return _get_string_prefix_and_quote(string)[1]
def _matches_quote_at_position(code_lines, quote, position):
string = code_lines[position[0] - 1][position[1]:position[1] + len(quote)]
return string == quote
def get_quote_ending(string, code_lines, position, invert_result=False):
quote = _get_string_quote(string)
# Add a quote only if it's not already there.
if _matches_quote_at_position(code_lines, quote, position) != invert_result:
return ''
return quote

View File

@@ -75,7 +75,7 @@ def clear_time_caches(delete_all=False):
del tc[key]
def call_signature_time_cache(time_add_setting):
def signature_time_cache(time_add_setting):
"""
This decorator works as follows: Call it with a setting and after that
use the function with a callable that returns the key.

View File

@@ -69,5 +69,8 @@ class BaseValueSet(object):
def __eq__(self, other):
return self._set == other._set
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self._set)

View File

@@ -13,14 +13,46 @@ class AbstractFolderIO(object):
def get_file_io(self, name):
raise NotImplementedError
def get_parent_folder(self):
raise NotImplementedError
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.path)
class FolderIO(AbstractFolderIO):
def get_base_name(self):
return os.path.basename(self.path)
def list(self):
return os.listdir(self.path)
def get_file_io(self, name):
return FileIO(os.path.join(self.path, name))
def get_parent_folder(self):
return FolderIO(os.path.dirname(self.path))
def walk(self):
for root, dirs, files in os.walk(self.path):
root_folder_io = FolderIO(root)
original_folder_ios = [FolderIO(os.path.join(root, d)) for d in dirs]
modified_folder_ios = list(original_folder_ios)
yield (
root_folder_io,
modified_folder_ios,
[FileIO(os.path.join(root, f)) for f in files],
)
modified_iterator = iter(reversed(modified_folder_ios))
current = next(modified_iterator, None)
i = len(original_folder_ios)
for folder_io in reversed(original_folder_ios):
i -= 1 # Basically enumerate but reversed
if current is folder_io:
current = next(modified_iterator, None)
else:
del dirs[i]
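The index bookkeeping at the end of `walk` mirrors caller deletions from the yielded folder list back into `os.walk`'s `dirs` list, so pruned directories are not descended into. The trick works on plain lists; a sketch, assuming the caller only removes items (preserving order), which is the `os.walk`-style contract:

```python
def sync_pruned(original, modified, dirs):
    # `modified` is a caller-pruned copy of `original`; `dirs` shares
    # indices with `original`. Scan both from the end so deleting from
    # `dirs` never invalidates an index still to be visited.
    modified_iterator = iter(reversed(modified))
    current = next(modified_iterator, None)
    i = len(original)
    for item in reversed(original):
        i -= 1  # Basically enumerate, but reversed.
        if current is item:
            current = next(modified_iterator, None)
        else:
            del dirs[i]


dirs = ['a', 'b', 'c']
original = [('folder', d) for d in dirs]  # stand-ins for FolderIO objects
modified = [original[0], original[2]]     # caller pruned 'b'
sync_pruned(original, modified, dirs)
print(dirs)  # → ['a', 'c'] ('b' will not be descended into)
```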
class FileIOFolderMixin(object):
def get_parent_folder(self):

View File

@@ -67,6 +67,7 @@ from parso import python_bytes_to_unicode
from jedi.file_io import FileIO
from jedi import debug
from jedi import settings
from jedi.inference import imports
from jedi.inference import recursion
from jedi.inference.cache import inference_state_function_cache
@@ -76,7 +77,7 @@ from jedi.inference.base_value import ContextualizedNode, \
ValueSet, iterate_values
from jedi.inference.value import ClassValue, FunctionValue
from jedi.inference.syntax_tree import infer_expr_stmt, \
check_tuple_assignments
check_tuple_assignments, tree_name_to_values
from jedi.inference.imports import follow_error_node_imports_if_possible
from jedi.plugins import plugin_manager
@@ -103,16 +104,13 @@ class InferenceState(object):
self.project = project
self.access_cache = {}
self.allow_descriptor_getattr = False
self.flow_analysis_enabled = True
self.reset_recursion_limitations()
self.allow_different_encoding = True
def import_module(self, import_names, parent_module_value=None,
sys_path=None, prefer_stubs=True):
if sys_path is None:
sys_path = self.get_sys_path()
return imports.import_module(self, import_names, parent_module_value,
sys_path, prefer_stubs=prefer_stubs)
def import_module(self, import_names, sys_path=None, prefer_stubs=True):
return imports.import_module_by_names(
self, import_names, sys_path, prefer_stubs=prefer_stubs)
@staticmethod
@plugin_manager.decorate()
@@ -146,7 +144,7 @@ class InferenceState(object):
"""Convenience function"""
return self.project._get_sys_path(self, environment=self.environment, **kwargs)
def goto_definitions(self, context, name):
def infer(self, context, name):
def_ = name.get_definition(import_name_always=True)
if def_ is not None:
type_ = def_.type
@@ -170,6 +168,10 @@ class InferenceState(object):
return check_tuple_assignments(n, for_types)
if type_ in ('import_from', 'import_name'):
return imports.infer_import(context, name)
if type_ == 'with_stmt':
return tree_name_to_values(self, context, name)
elif type_ == 'param':
return context.py__getattribute__(name.value, position=name.end_pos)
else:
result = follow_error_node_imports_if_possible(context, name)
if result is not None:
@@ -179,12 +181,15 @@ class InferenceState(object):
def parse_and_get_code(self, code=None, path=None, encoding='utf-8',
use_latest_grammar=False, file_io=None, **kwargs):
if self.allow_different_encoding:
if code is None:
if file_io is None:
file_io = FileIO(path)
code = file_io.read()
code = python_bytes_to_unicode(code, encoding=encoding, errors='replace')
if code is None:
if file_io is None:
file_io = FileIO(path)
code = file_io.read()
# We cannot just use parso, because it doesn't use errors='replace'.
code = python_bytes_to_unicode(code, encoding=encoding, errors='replace')
if len(code) > settings._cropped_file_size:
code = code[:settings._cropped_file_size]
grammar = self.latest_grammar if use_latest_grammar else self.grammar
return grammar.parse(code=code, path=path, file_io=file_io, **kwargs), code

View File

@@ -58,8 +58,8 @@ class Error(object):
return self.__unicode__()
def __eq__(self, other):
return (self.path == other.path and self.name == other.name and
self._start_pos == other._start_pos)
return (self.path == other.path and self.name == other.name
and self._start_pos == other._start_pos)
def __ne__(self, other):
return not self.__eq__(other)
@@ -114,19 +114,11 @@ def _check_for_setattr(instance):
def add_attribute_error(name_context, lookup_value, name):
message = ('AttributeError: %s has no attribute %s.' % (lookup_value, name))
from jedi.inference.value.instance import CompiledInstanceName
# Check for __getattr__/__getattribute__ existence and issue a warning
# instead of an error, if that happens.
typ = Error
if lookup_value.is_instance() and not lookup_value.is_compiled():
slot_names = lookup_value.get_function_slot_names(u'__getattr__') + \
lookup_value.get_function_slot_names(u'__getattribute__')
for n in slot_names:
# TODO do we even get here?
if isinstance(name, CompiledInstanceName) and \
n.parent_context.obj == object:
typ = Warning
break
# TODO maybe make a warning for __getattr__/__getattribute__
if _check_for_setattr(lookup_value):
typ = Warning
@@ -157,7 +149,7 @@ def _check_for_exception_catch(node_context, jedi_name, exception, payload=None)
# Only nodes in try
iterator = iter(obj.children)
for branch_type in iterator:
colon = next(iterator)
next(iterator) # The colon
suite = next(iterator)
if branch_type == 'try' \
and not (branch_type.start_pos < jedi_name.start_pos <= suite.end_pos):

View File

@@ -161,8 +161,7 @@ def unpack_arglist(arglist):
# definitions.
if not (arglist.type in ('arglist', 'testlist') or (
# in python 3.5 **arg is an argument, not arglist
(arglist.type == 'argument') and
arglist.children[0] in ('*', '**'))):
arglist.type == 'argument' and arglist.children[0] in ('*', '**'))):
yield 0, arglist
return

View File

@@ -8,7 +8,7 @@ just one.
"""
from functools import reduce
from operator import add
from parso.python.tree import ExprStmt, SyncCompFor, Name
from parso.python.tree import Name
from jedi import debug
from jedi._compatibility import zip_longest, unicode
@@ -225,6 +225,10 @@ class Value(HelperValueMixin, BaseValue):
raise ValueError("There exists no safe value for value %s" % self)
return default
def execute_operation(self, other, operator):
debug.warning("%s not possible between %s and %s", operator, self, other)
return NO_VALUES
def py__call__(self, arguments):
debug.warning("no execution possible %s", self)
return NO_VALUES
@@ -239,6 +243,10 @@ class Value(HelperValueMixin, BaseValue):
"""
return NO_VALUES
def py__get__(self, instance, class_value):
debug.warning("No __get__ defined on %s", self)
return ValueSet([self])
def get_qualified_names(self):
# Returns Optional[Tuple[str, ...]]
return None
@@ -248,7 +256,13 @@ class Value(HelperValueMixin, BaseValue):
return self.parent_context.is_stub()
def _as_context(self):
raise NotImplementedError('Not all values need to be converted to contexts')
raise NotImplementedError('Not all values need to be converted to contexts: %s', self)
def name(self):
raise NotImplementedError
def py__name__(self):
return self.name.string_name
def iterate_values(values, contextualized_node=None, is_async=False):

View File

@@ -10,7 +10,8 @@ _NO_DEFAULT = object()
_RECURSION_SENTINEL = object()
def _memoize_default(default=_NO_DEFAULT, inference_state_is_first_arg=False, second_arg_is_inference_state=False):
def _memoize_default(default=_NO_DEFAULT, inference_state_is_first_arg=False,
second_arg_is_inference_state=False):
""" This is a typical memoization decorator, BUT there is one difference:
To prevent recursion it sets defaults.

View File

@@ -1,7 +1,7 @@
from jedi._compatibility import unicode
from jedi.inference.compiled.value import CompiledObject, CompiledName, \
CompiledObjectFilter, CompiledValueName, create_from_access_path
from jedi.inference.base_value import ValueWrapper, LazyValueWrapper
from jedi.inference.base_value import LazyValueWrapper
def builtin_from_name(inference_state, string):

View File

@@ -4,6 +4,8 @@ import types
import sys
import operator as op
from collections import namedtuple
import warnings
import re
from jedi._compatibility import unicode, is_py3, builtins, \
py_version, force_unicode
@@ -102,6 +104,15 @@ SignatureParam = namedtuple(
)
def shorten_repr(func):
def wrapper(self):
r = func(self)
if len(r) > 50:
r = r[:50] + '..'
return r
return wrapper
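The new `shorten_repr` decorator simply clips long reprs so debug output stays readable; a quick self-contained check (the `Demo` class is illustrative):

```python
def shorten_repr(func):
    # Clip the wrapped method's return value to 50 characters plus a
    # '..' marker.
    def wrapper(self):
        r = func(self)
        if len(r) > 50:
            r = r[:50] + '..'
        return r
    return wrapper


class Demo(object):
    @shorten_repr
    def get_repr(self):
        return 'x' * 80


print(len(Demo().get_repr()))  # → 52 (50 chars plus the '..' marker)
```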
def compiled_objects_cache(attribute_name):
def decorator(func):
"""
@@ -282,6 +293,7 @@ class DirectObjectAccess(object):
return paths
@_force_unicode_decorator
@shorten_repr
def get_repr(self):
builtins = 'builtins', '__builtin__'
@@ -323,7 +335,7 @@ class DirectObjectAccess(object):
name = try_to_get_name(type(self._obj))
if name is None:
return ()
return tuple(name.split('.'))
return tuple(force_unicode(n) for n in name.split('.'))
def dir(self):
return list(map(force_unicode, dir(self._obj)))
@@ -335,8 +347,23 @@ class DirectObjectAccess(object):
except TypeError:
return False
def is_allowed_getattr(self, name):
def is_allowed_getattr(self, name, unsafe=False):
# TODO this API is ugly.
if unsafe:
# Unsafe is mostly used to check for __getattr__/__getattribute__.
# getattr_static works for properties, but the underscore methods
# are just ignored (because it's safer and avoids more code
# execution). See also GH #1378.
# Avoid warnings, see comment in the next function.
with warnings.catch_warnings(record=True):
warnings.simplefilter("always")
try:
return hasattr(self._obj, name), False
except Exception:
# Obviously has an attribute (probably a property) that
# gets executed, so just avoid all exceptions here.
return False, False
try:
attr, is_get_descriptor = getattr_static(self._obj, name)
except AttributeError:
@@ -350,7 +377,11 @@ class DirectObjectAccess(object):
def getattr_paths(self, name, default=_sentinel):
try:
return_obj = getattr(self._obj, name)
# Make sure no warnings are printed here, this is autocompletion,
# warnings should not be shown. See also GH #1383.
with warnings.catch_warnings(record=True):
warnings.simplefilter("always")
return_obj = getattr(self._obj, name)
except Exception as e:
if default is _sentinel:
if isinstance(e, AttributeError):
@@ -366,6 +397,22 @@ class DirectObjectAccess(object):
if inspect.ismodule(return_obj):
return [access]
try:
module = return_obj.__module__
except AttributeError:
pass
else:
if module is not None:
try:
__import__(module)
# For some modules like _sqlite3, the __module__ for classes is
# different, in this case it's sqlite3. So we have to try to
# load that "original" module, because it's not loaded yet. If
# we don't do that, we don't really have a "parent" module and
# we would fall back to builtins.
except ImportError:
pass
module = inspect.getmodule(return_obj)
if module is None:
module = inspect.getmodule(type(return_obj))
@@ -374,13 +421,30 @@ class DirectObjectAccess(object):
return [self._create_access(module), access]
def get_safe_value(self):
if type(self._obj) in (bool, bytes, float, int, str, unicode, slice):
if type(self._obj) in (bool, bytes, float, int, str, unicode, slice) or self._obj is None:
return self._obj
raise ValueError("Object is type %s and not simple" % type(self._obj))
def get_api_type(self):
return get_api_type(self._obj)
def get_array_type(self):
if isinstance(self._obj, dict):
return 'dict'
return None
def get_key_paths(self):
def iter_partial_keys():
# We could use list(keys()), but that might take a lot more memory.
for (i, k) in enumerate(self._obj.keys()):
# Limit key listing at some point. This is artificial, but this
# way we don't get stalled because of slow completions
if i > 50:
break
yield k
return [self._create_access_path(k) for k in iter_partial_keys()]
def get_access_path_tuples(self):
accesses = [create_access(self._inference_state, o) for o in self._get_objects_path()]
return [(access.py__name__(), access) for access in accesses]
@@ -421,6 +485,27 @@ class DirectObjectAccess(object):
op = _OPERATORS[operator]
return self._create_access_path(op(self._obj, other_access._obj))
def get_annotation_name_and_args(self):
"""
Returns Tuple[Optional[str], Tuple[AccessPath, ...]]
"""
if sys.version_info < (3, 5):
return None, ()
name = None
args = ()
if safe_getattr(self._obj, '__module__', default='') == 'typing':
m = re.match(r'typing.(\w+)\[', repr(self._obj))
if m is not None:
name = m.group(1)
import typing
if sys.version_info >= (3, 8):
args = typing.get_args(self._obj)
else:
args = safe_getattr(self._obj, '__args__', default=None)
return name, tuple(self._create_access_path(arg) for arg in args)
def needs_type_completions(self):
return inspect.isclass(self._obj) and self._obj != type
@@ -475,6 +560,17 @@ class DirectObjectAccess(object):
if o is None:
return None
try:
# Python 2 doesn't have typing.
import typing
except ImportError:
pass
else:
try:
o = typing.get_type_hints(self._obj).get('return')
except Exception:
pass
return self._create_access_path(o)
def negate(self):
@@ -485,7 +581,6 @@ class DirectObjectAccess(object):
Used to return a couple of infos that are needed when accessing the sub
objects of an object
"""
# TODO is_allowed_getattr might raise an AttributeError
tuples = dict(
(force_unicode(name), self.is_allowed_getattr(name))
for name in self.dir()
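As an aside, the warning-suppression pattern added in `getattr_paths` above (see GH #1383) can be sketched in isolation. The `Noisy` class below is a made-up stand-in for an attribute that warns when accessed; the point is that `catch_warnings(record=True)` keeps the warning off stderr during completion:

```python
import warnings

class Noisy:
    # Hypothetical stand-in for an attribute that warns on access.
    @property
    def attr(self):
        warnings.warn("deprecated", DeprecationWarning)
        return 42

obj = Noisy()
# Record warnings instead of letting them reach stderr; this runs during
# autocompletion, where printed warnings would only confuse the user.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = getattr(obj, "attr")
print(value)        # 42
print(len(caught))  # 1
```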

View File

@@ -46,9 +46,9 @@ def _shadowed_dict_newstyle(klass):
except KeyError:
pass
else:
if not (type(class_dict) is types.GetSetDescriptorType and
class_dict.__name__ == "__dict__" and
class_dict.__objclass__ is entry):
if not (type(class_dict) is types.GetSetDescriptorType
and class_dict.__name__ == "__dict__"
and class_dict.__objclass__ is entry):
return class_dict
return _sentinel
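The `_shadowed_dict` logic above supports `inspect.getattr_static`, whose key property is inspecting attributes without executing descriptor code. A minimal illustration (the class here is an assumed example, not jedi code):

```python
import inspect

class WithProperty:
    @property
    def x(self):
        raise RuntimeError("side effect: property executed")

obj = WithProperty()
# getattr_static finds the descriptor itself without calling it, which is
# why it is the "safe" lookup used for completions; plain getattr would
# raise RuntimeError here.
attr = inspect.getattr_static(obj, "x")
print(isinstance(attr, property))  # True
```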

View File

@@ -8,20 +8,22 @@ import sys
from jedi.parser_utils import get_cached_code_lines
from jedi._compatibility import unwrap
from jedi import settings
from jedi.inference import compiled
from jedi.cache import underscore_memoization
from jedi.file_io import FileIO
from jedi.inference.base_value import ValueSet, ValueWrapper
from jedi.inference.base_value import ValueSet, ValueWrapper, NO_VALUES
from jedi.inference.helpers import SimpleGetItemNotFound
from jedi.inference.value import ModuleValue
from jedi.inference.cache import inference_state_function_cache
from jedi.inference.compiled.getattr_static import getattr_static
from jedi.inference.cache import inference_state_function_cache, \
inference_state_method_cache
from jedi.inference.compiled.access import compiled_objects_cache, \
ALLOWED_GETITEM_TYPES, get_api_type
from jedi.inference.compiled.value import create_cached_compiled_object
from jedi.inference.gradual.conversion import to_stub
from jedi.inference.context import CompiledContext, TreeContextMixin
from jedi.inference.context import CompiledContext, CompiledModuleContext, \
TreeContextMixin
_sentinel = object()
@@ -56,8 +58,13 @@ class MixedObject(ValueWrapper):
# should be very precise, especially for stuff like `partial`.
return self.compiled_object.get_signatures()
@inference_state_method_cache(default=NO_VALUES)
def py__call__(self, arguments):
return (to_stub(self._wrapped_value) or self._wrapped_value).py__call__(arguments)
# Fallback to the wrapped value if to stub returns no values.
values = to_stub(self._wrapped_value)
if not values:
values = self._wrapped_value
return values.py__call__(arguments)
def get_safe_value(self, default=_sentinel):
if default is _sentinel:
@@ -72,6 +79,8 @@ class MixedObject(ValueWrapper):
raise SimpleGetItemNotFound
def _as_context(self):
if self.parent_context is None:
return MixedModuleContext(self)
return MixedContext(self)
def __repr__(self):
@@ -87,6 +96,10 @@ class MixedContext(CompiledContext, TreeContextMixin):
return self._value.compiled_object
class MixedModuleContext(CompiledModuleContext, MixedContext):
pass
class MixedName(compiled.CompiledName):
"""
The ``CompiledName._compiled_object`` is our MixedObject.
@@ -105,6 +118,7 @@ class MixedName(compiled.CompiledName):
if parent_value is None:
parent_context = None
else:
assert parent_value is not None
parent_context = parent_value.as_context()
if parent_context is None or isinstance(parent_context, MixedContext):
@@ -117,7 +131,7 @@ class MixedName(compiled.CompiledName):
})
# TODO use logic from compiled.CompiledObjectFilter
access_paths = self.parent_context.access_handle.getattr_paths(
access_paths = self._parent_value.access_handle.getattr_paths(
self.string_name,
default=None
)
@@ -146,22 +160,26 @@ def _load_module(inference_state, path):
).get_root_node()
# python_module = inspect.getmodule(python_object)
# TODO we should actually make something like this possible.
#inference_state.modules[python_module.__name__] = module_node
# inference_state.modules[python_module.__name__] = module_node
return module_node
def _get_object_to_check(python_object):
"""Check if inspect.getfile has a chance to find the source."""
if sys.version_info[0] > 2:
python_object = inspect.unwrap(python_object)
try:
python_object = unwrap(python_object)
except ValueError:
# Can raise a ValueError when it wraps around
pass
if (inspect.ismodule(python_object) or
inspect.isclass(python_object) or
inspect.ismethod(python_object) or
inspect.isfunction(python_object) or
inspect.istraceback(python_object) or
inspect.isframe(python_object) or
inspect.iscode(python_object)):
if (inspect.ismodule(python_object)
or inspect.isclass(python_object)
or inspect.ismethod(python_object)
or inspect.isfunction(python_object)
or inspect.istraceback(python_object)
or inspect.isframe(python_object)
or inspect.iscode(python_object)):
return python_object
try:
@@ -249,6 +267,8 @@ def _create(inference_state, access_handle, parent_context, *args):
compiled_object = create_cached_compiled_object(
inference_state,
access_handle,
# TODO It looks like we have to use the compiled object as a parent context.
# Why is that?
parent_context=None if parent_context is None
else parent_context.compiled_object.as_context() # noqa
)
@@ -277,7 +297,7 @@ def _create(inference_state, access_handle, parent_context, *args):
file_io=file_io,
string_names=string_names,
code_lines=code_lines,
is_package=compiled_object.is_package,
is_package=compiled_object.is_package(),
).as_context()
if name is not None:
inference_state.module_cache.add(string_names, ValueSet([module_context]))
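The `ValueError` guard around `unwrap` in `_get_object_to_check` above handles decorator chains that loop back on themselves; a small sketch of both the normal and the degenerate case (the decorator and functions are illustrative only):

```python
import functools
import inspect

def deco(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@deco
def hello():
    return "hi"

# Normal case: unwrap follows __wrapped__ back to the original function.
original = inspect.unwrap(hello)
print(original.__name__)  # hello

# Degenerate case: a cycle in __wrapped__ makes unwrap raise ValueError,
# which is why the caller falls back to using the object as-is.
def broken():
    pass
broken.__wrapped__ = broken
try:
    inspect.unwrap(broken)
    wrapped_around = False
except ValueError:
    wrapped_around = True
print(wrapped_around)  # True
```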

View File

@@ -383,8 +383,8 @@ class AccessHandle(object):
if name in ('id', 'access') or name.startswith('_'):
raise AttributeError("Something went wrong with unpickling")
#if not is_py3: print >> sys.stderr, name
#print('getattr', name, file=sys.stderr)
# if not is_py3: print >> sys.stderr, name
# print('getattr', name, file=sys.stderr)
return partial(self._workaround, force_unicode(name))
def _workaround(self, name, *args, **kwargs):
@@ -399,8 +399,4 @@ class AccessHandle(object):
@memoize_method
def _cached_results(self, name, *args, **kwargs):
#if type(self._subprocess) == InferenceStateSubprocess:
#print(name, args, kwargs,
#self._subprocess.get_compiled_method_return(self.id, name, *args, **kwargs)
#)
return self._subprocess.get_compiled_method_return(self.id, name, *args, **kwargs)

View File

@@ -17,7 +17,7 @@ from jedi.inference.compiled.access import _sentinel
from jedi.inference.cache import inference_state_function_cache
from jedi.inference.helpers import reraise_getitem_errors
from jedi.inference.signature import BuiltinSignature
from jedi.inference.context import CompiledContext
from jedi.inference.context import CompiledContext, CompiledModuleContext
class CheckAttribute(object):
@@ -50,7 +50,10 @@ class CompiledObject(Value):
return_annotation = self.access_handle.get_return_annotation()
if return_annotation is not None:
# TODO the return annotation may also be a string.
return create_from_access_path(self.inference_state, return_annotation).execute_annotation()
return create_from_access_path(
self.inference_state,
return_annotation
).execute_annotation()
try:
self.access_handle.getattr_paths(u'__call__')
@@ -181,10 +184,12 @@ class CompiledObject(Value):
def _ensure_one_filter(self, is_instance):
return CompiledObjectFilter(self.inference_state, self, is_instance)
@CheckAttribute(u'__getitem__')
def py__simple_getitem__(self, index):
with reraise_getitem_errors(IndexError, KeyError, TypeError):
access = self.access_handle.py__simple_getitem__(index)
try:
access = self.access_handle.py__simple_getitem__(index)
except AttributeError:
return super(CompiledObject, self).py__simple_getitem__(index)
if access is None:
return NO_VALUES
@@ -257,10 +262,35 @@ class CompiledObject(Value):
return default
def execute_operation(self, other, operator):
return create_from_access_path(
self.inference_state,
self.access_handle.execute_operation(other.access_handle, operator)
)
try:
return ValueSet([create_from_access_path(
self.inference_state,
self.access_handle.execute_operation(other.access_handle, operator)
)])
except TypeError:
return NO_VALUES
def execute_annotation(self):
if self.access_handle.get_repr() == 'None':
# None as an annotation doesn't need to be executed.
return ValueSet([self])
name, args = self.access_handle.get_annotation_name_and_args()
arguments = [
ValueSet([create_from_access_path(self.inference_state, path)])
for path in args
]
if name == 'Union':
return ValueSet.from_sets(arg.execute_annotation() for arg in arguments)
elif name:
# While with_generics only exists on very specific objects, we
# should probably be fine, because we control all the typing
# objects.
return ValueSet([
v.with_generics(arguments)
for v in self.inference_state.typing_module.py__getattribute__(name)
]).execute_annotation()
return super(CompiledObject, self).execute_annotation()
def negate(self):
return create_from_access_path(self.inference_state, self.access_handle.negate())
@@ -268,20 +298,48 @@ class CompiledObject(Value):
def get_metaclasses(self):
return NO_VALUES
file_io = None # For modules
def _as_context(self):
if self.parent_context is None:
return CompiledModuleContext(self)
return CompiledContext(self)
@property
def array_type(self):
return self.access_handle.get_array_type()
def get_key_values(self):
return [
create_from_access_path(self.inference_state, k)
for k in self.access_handle.get_key_paths()
]
class CompiledName(AbstractNameDefinition):
def __init__(self, inference_state, parent_context, name):
def __init__(self, inference_state, parent_value, name):
self._inference_state = inference_state
self.parent_context = parent_context
self.parent_context = parent_value.as_context()
self._parent_value = parent_value
self.string_name = name
def py__doc__(self):
value, = self.infer()
return value.py__doc__()
def _get_qualified_names(self):
parent_qualified_names = self.parent_context.get_qualified_names()
if parent_qualified_names is None:
return None
return parent_qualified_names + (self.string_name,)
def get_defining_qualified_value(self):
context = self.parent_context
if context.is_module() or context.is_class():
return self.parent_context.get_value() # Might be None
return None
def __repr__(self):
try:
name = self.parent_context.name # __name__ is not defined all the time
@@ -300,7 +358,7 @@ class CompiledName(AbstractNameDefinition):
@underscore_memoization
def infer(self):
return ValueSet([_create_from_name(
self._inference_state, self.parent_context, self.string_name
self._inference_state, self._parent_value, self.string_name
)])
@@ -385,28 +443,36 @@ class CompiledObjectFilter(AbstractFilter):
self.is_instance = is_instance
def get(self, name):
access_handle = self.compiled_object.access_handle
return self._get(
name,
lambda: self.compiled_object.access_handle.is_allowed_getattr(name),
lambda: self.compiled_object.access_handle.dir(),
lambda name, unsafe: access_handle.is_allowed_getattr(name, unsafe),
lambda name: name in access_handle.dir(),
check_has_attribute=True
)
def _get(self, name, allowed_getattr_callback, dir_callback, check_has_attribute=False):
def _get(self, name, allowed_getattr_callback, in_dir_callback, check_has_attribute=False):
"""
To remove quite a few access calls we introduced the callback here.
"""
has_attribute, is_descriptor = allowed_getattr_callback()
if check_has_attribute and not has_attribute:
return []
# Always use unicode objects in Python 2 from here.
name = force_unicode(name)
if (is_descriptor and not self._inference_state.allow_descriptor_getattr) or not has_attribute:
if self._inference_state.allow_descriptor_getattr:
pass
has_attribute, is_descriptor = allowed_getattr_callback(
name,
unsafe=self._inference_state.allow_descriptor_getattr
)
if check_has_attribute and not has_attribute:
return []
if (is_descriptor or not has_attribute) \
and not self._inference_state.allow_descriptor_getattr:
return [self._get_cached_name(name, is_empty=True)]
if self.is_instance and name not in dir_callback():
if self.is_instance and not in_dir_callback(name):
return []
return [self._get_cached_name(name)]
@@ -421,11 +487,16 @@ class CompiledObjectFilter(AbstractFilter):
from jedi.inference.compiled import builtin_from_name
names = []
needs_type_completions, dir_infos = self.compiled_object.access_handle.get_dir_infos()
# We could use `unsafe` here as well, especially as a parameter to
# get_dir_infos. But this would lead to a lot of property executions
# that are probably not wanted. The drawback for this is that we
# have a different name for `get` and `values`. For `get` we always
# execute.
for name in dir_infos:
names += self._get(
name,
lambda: dir_infos[name],
lambda: dir_infos.keys(),
lambda name, unsafe: dir_infos[name],
lambda name: name in dir_infos,
)
# ``dir`` doesn't include the type names.
@@ -435,7 +506,11 @@ class CompiledObjectFilter(AbstractFilter):
return names
def _create_name(self, name):
return self.name_class(self._inference_state, self.compiled_object, name)
return self.name_class(
self._inference_state,
self.compiled_object,
name
)
def __repr__(self):
return "<%s: %s>" % (self.__class__.__name__, self.compiled_object)
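The new `execute_annotation` above leans on the name/args extraction from `get_annotation_name_and_args`; the underlying stdlib behaviour can be sketched like this (the regex mirrors the one added in `access.py`):

```python
import re
import sys
import typing

obj = typing.Union[int, str]
# The repr of typing objects looks like "typing.Union[int, str]", so the
# generic's name can be pulled out with a regex.
m = re.match(r'typing.(\w+)\[', repr(obj))
print(m.group(1))  # Union

# Python 3.8+ offers typing.get_args; older versions read __args__ directly.
if sys.version_info >= (3, 8):
    args = typing.get_args(obj)
else:
    args = obj.__args__
print(args)  # (<class 'int'>, <class 'str'>)
```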

View File

@@ -133,6 +133,9 @@ class AbstractContext(object):
def py__name__(self):
raise NotImplementedError
def get_value(self):
raise NotImplementedError
@property
def name(self):
return None
@@ -200,6 +203,9 @@ class ValueContext(AbstractContext):
def py__doc__(self):
return self._value.py__doc__()
def get_value(self):
return self._value
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self._value)
@@ -226,6 +232,7 @@ class TreeContextMixin(object):
self.inference_state, parent_context.parent_context, class_value)
func = value.BoundMethod(
instance=instance,
class_context=class_value.as_context(),
function=func
)
return func
@@ -298,14 +305,6 @@ class ModuleContext(TreeContextMixin, ValueContext):
def py__file__(self):
return self._value.py__file__()
@property
def py__package__(self):
return self._value.py__package__
@property
def is_package(self):
return self._value.is_package
def get_filters(self, until_position=None, origin_scope=None):
filters = self._value.get_filters(origin_scope)
# Skip the first filter and replace it.
@@ -316,11 +315,14 @@ class ModuleContext(TreeContextMixin, ValueContext):
until_position=until_position,
origin_scope=origin_scope
),
GlobalNameFilter(self, self.tree_node),
self.get_global_filter(),
)
for f in filters: # Python 2...
yield f
def get_global_filter(self):
return GlobalNameFilter(self, self.tree_node)
@property
def string_names(self):
return self._value.string_names
@@ -342,6 +344,9 @@ class NamespaceContext(TreeContextMixin, ValueContext):
def get_filters(self, until_position=None, origin_scope=None):
return self._value.get_filters()
def get_value(self):
return self._value
def py__file__(self):
return self._value.py__file__()
@@ -367,6 +372,9 @@ class CompForContext(TreeContextMixin, AbstractContext):
def get_filters(self, until_position=None, origin_scope=None):
yield ParserTreeFilter(self)
def get_value(self):
return None
def py__name__(self):
return '<comprehension context>'
@@ -378,9 +386,17 @@ class CompiledContext(ValueContext):
def get_filters(self, until_position=None, origin_scope=None):
return self._value.get_filters()
class CompiledModuleContext(CompiledContext):
code_lines = None
def get_value(self):
return self._value
@property
def string_names(self):
return self._value.string_names
def py__file__(self):
return self._value.py__file__()
@@ -462,7 +478,7 @@ def get_global_filters(context, until_position, origin_scope):
until_position=until_position,
origin_scope=origin_scope):
yield filter
if isinstance(context, BaseFunctionExecutionContext):
if isinstance(context, (BaseFunctionExecutionContext, ModuleContext)):
# The position should be reset if the current scope is a function.
until_position = None

View File

@@ -21,13 +21,13 @@ from jedi import settings
from jedi import debug
from jedi.parser_utils import get_parent_scope
from jedi.inference.cache import inference_state_method_cache
from jedi.inference import imports
from jedi.inference.arguments import TreeArguments
from jedi.inference.param import get_executed_param_names
from jedi.inference.helpers import is_stdlib_path
from jedi.inference.utils import to_list
from jedi.inference.value import instance
from jedi.inference.base_value import ValueSet, NO_VALUES
from jedi.inference.references import get_module_contexts_containing_name
from jedi.inference import recursion
@@ -74,7 +74,7 @@ def dynamic_param_lookup(function_value, param_index):
path = function_value.get_root_context().py__file__()
if path is not None and is_stdlib_path(path):
# We don't want to search for usages in the stdlib. Usually people
# We don't want to search for references in the stdlib. Usually people
# don't work with it (except if you are a core maintainer, sorry).
# This makes everything slower. Just disable it and run the tests,
# you will see the slowdown, especially in 3.6.
@@ -116,8 +116,17 @@ def _search_function_arguments(module_context, funcdef, string_name):
found_arguments = False
i = 0
inference_state = module_context.inference_state
for for_mod_context in imports.get_module_contexts_containing_name(
inference_state, [module_context], string_name):
if settings.dynamic_params_for_other_modules:
module_contexts = get_module_contexts_containing_name(
inference_state, [module_context], string_name,
# Limit the amounts of files to be opened massively.
limit_reduction=5,
)
else:
module_contexts = [module_context]
for for_mod_context in module_contexts:
for name, trailer in _get_potential_nodes(for_mod_context, string_name):
i += 1
@@ -186,7 +195,7 @@ def _check_name_for_execution(inference_state, context, compare_node, name, trai
args = InstanceArguments(value.instance, args)
return args
for value in inference_state.goto_definitions(context, name):
for value in inference_state.infer(context, name):
value_node = value.tree_node
if compare_node == value_node:
yield create_args(value)

View File

@@ -9,7 +9,7 @@ from parso.tree import search_ancestor
from jedi._compatibility import use_metaclass
from jedi.inference import flow_analysis
from jedi.inference.base_value import ValueSet, Value, ValueWrapper, \
from jedi.inference.base_value import ValueSet, ValueWrapper, \
LazyValueWrapper
from jedi.parser_utils import get_cached_parent_scope
from jedi.inference.utils import to_list
@@ -246,24 +246,18 @@ class MergedFilter(object):
return '%s(%s)' % (self.__class__.__name__, ', '.join(str(f) for f in self._filters))
class _BuiltinMappedMethod(Value):
class _BuiltinMappedMethod(ValueWrapper):
"""``Generator.__next__``, ``dict.values`` methods and so on."""
api_type = u'function'
def __init__(self, builtin_value, method, builtin_func):
super(_BuiltinMappedMethod, self).__init__(
builtin_value.inference_state,
parent_context=builtin_value
)
def __init__(self, value, method, builtin_func):
super(_BuiltinMappedMethod, self).__init__(builtin_func)
self._value = value
self._method = method
self._builtin_func = builtin_func
def py__call__(self, arguments):
# TODO add TypeError if params are given/or not correct.
return self._method(self.parent_context)
def __getattr__(self, name):
return getattr(self._builtin_func, name)
return self._method(self._value)
class SpecialMethodFilter(DictFilter):

View File

@@ -81,38 +81,57 @@ def check_flow_information(value, flow, search_name, pos):
return result
def _check_isinstance_type(value, element, search_name):
try:
assert element.type in ('power', 'atom_expr')
# this might be removed if we analyze and, etc
assert len(element.children) == 2
first, trailer = element.children
assert first.type == 'name' and first.value == 'isinstance'
assert trailer.type == 'trailer' and trailer.children[0] == '('
assert len(trailer.children) == 3
def _get_isinstance_trailer_arglist(node):
if node.type in ('power', 'atom_expr') and len(node.children) == 2:
# This might be removed if we analyze and, etc
first, trailer = node.children
if first.type == 'name' and first.value == 'isinstance' \
and trailer.type == 'trailer' and trailer.children[0] == '(':
return trailer
return None
# arglist stuff
def _check_isinstance_type(value, node, search_name):
lazy_cls = None
trailer = _get_isinstance_trailer_arglist(node)
if trailer is not None and len(trailer.children) == 3:
arglist = trailer.children[1]
args = TreeArguments(value.inference_state, value, arglist, trailer)
param_list = list(args.unpack())
# Disallow keyword arguments
assert len(param_list) == 2
(key1, lazy_value_object), (key2, lazy_value_cls) = param_list
assert key1 is None and key2 is None
call = helpers.call_of_leaf(search_name)
is_instance_call = helpers.call_of_leaf(lazy_value_object.data)
# Do a simple get_code comparison. They should just have the same code,
# and everything will be all right.
normalize = value.inference_state.grammar._normalize
assert normalize(is_instance_call) == normalize(call)
except AssertionError:
if len(param_list) == 2 and len(arglist.children) == 3:
(key1, _), (key2, lazy_value_cls) = param_list
if key1 is None and key2 is None:
call = _get_call_string(search_name)
is_instance_call = _get_call_string(arglist.children[0])
# Do a simple comparison of the two call strings. They should
# just have the same code, and everything will be all right.
# There are ways that this is not correct, if some stuff is
# redefined in between. However here we don't care, because
# it's a heuristic that works pretty well.
if call == is_instance_call:
lazy_cls = lazy_value_cls
if lazy_cls is None:
return None
value_set = NO_VALUES
for cls_or_tup in lazy_value_cls.infer():
for cls_or_tup in lazy_cls.infer():
if isinstance(cls_or_tup, iterable.Sequence) and cls_or_tup.array_type == 'tuple':
for lazy_value in cls_or_tup.py__iter__():
value_set |= lazy_value.infer().execute_with_values()
else:
value_set |= cls_or_tup.execute_with_values()
return value_set
def _get_call_string(node):
if node.parent.type == 'atom_expr':
return _get_call_string(node.parent)
code = ''
leaf = node.get_first_leaf()
end = node.get_last_leaf().end_pos
while leaf.start_pos < end:
code += leaf.value
leaf = leaf.get_next_leaf()
return code
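The comparison performed with `_get_call_string` above is whitespace-insensitive because leaf prefixes are dropped while joining. An analogous sketch using only the stdlib tokenizer (not jedi's parso-based leaves) shows the idea:

```python
import io
import tokenize

# Layout tokens to drop so only the meaningful token text remains.
SKIP = (tokenize.NEWLINE, tokenize.NL, tokenize.ENDMARKER,
        tokenize.INDENT, tokenize.DEDENT)

def normalized(code):
    # Join token strings, dropping whitespace/layout tokens, so two
    # spellings of the same call compare equal.
    tokens = tokenize.generate_tokens(io.StringIO(code).readline)
    return ''.join(tok.string for tok in tokens if tok.type not in SKIP)

print(normalized("isinstance( foo.bar , int )") ==
      normalized("isinstance(foo.bar, int)"))  # True
```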

View File

@@ -1,5 +1,6 @@
from jedi.parser_utils import get_flow_branch_keyword, is_scope, get_parent_scope
from jedi.inference.recursion import execution_allowed
from jedi.inference.helpers import is_big_annoying_library
class Status(object):
@@ -42,6 +43,10 @@ def _get_flow_scopes(node):
def reachability_check(context, value_scope, node, origin_scope=None):
if is_big_annoying_library(context) \
or not context.inference_state.flow_analysis_enabled:
return UNSURE
first_flow_scope = get_parent_scope(node, include_flows=True)
if origin_scope is not None:
origin_flow_scopes = list(_get_flow_scopes(origin_scope))

View File

@@ -12,9 +12,10 @@ from parso import ParserSyntaxError, parse
from jedi._compatibility import force_unicode, Parameter
from jedi.inference.cache import inference_state_method_cache
from jedi.inference.base_value import ValueSet, NO_VALUES
from jedi.inference.gradual.typing import TypeVar, LazyGenericClass, \
AbstractAnnotatedClass
from jedi.inference.gradual.typing import GenericClass
from jedi.inference.gradual.base import DefineGenericBase, GenericClass
from jedi.inference.gradual.generics import TupleGenericManager
from jedi.inference.gradual.typing import TypingClassValueWithIndex
from jedi.inference.gradual.type_var import TypeVar
from jedi.inference.helpers import is_string
from jedi.inference.compiled import builtin_from_name
from jedi.inference.param import get_executed_param_names
@@ -91,7 +92,7 @@ def _split_comment_param_declaration(decl_text):
debug.warning('Comment annotation is not valid Python: %s' % decl_text)
return []
if node.type == 'name':
if node.type in ['name', 'atom_expr', 'power']:
return [node.get_code().strip()]
params = []
@@ -117,13 +118,17 @@ def infer_param(function_value, param, ignore_stars=False):
tuple_ = builtin_from_name(inference_state, 'tuple')
return ValueSet([GenericClass(
tuple_,
generics=(values,),
TupleGenericManager((values,)),
) for c in values])
elif param.star_count == 2:
dct = builtin_from_name(inference_state, 'dict')
generics = (
ValueSet([builtin_from_name(inference_state, 'str')]),
values
)
return ValueSet([GenericClass(
dct,
generics=(ValueSet([builtin_from_name(inference_state, 'str')]), values),
TupleGenericManager(generics),
) for c in values])
pass
return values
@@ -161,7 +166,6 @@ def _infer_param(function_value, param):
"Comments length != Params length %s %s",
params_comments, all_params
)
from jedi.inference.value.instance import InstanceArguments
if function_value.is_bound_method():
if index == 0:
# Assume it's self, which is already handled
@@ -229,7 +233,7 @@ def infer_return_types(function, arguments):
return ValueSet.from_sets(
ann.define_generics(type_var_dict)
if isinstance(ann, (AbstractAnnotatedClass, TypeVar)) else ValueSet({ann})
if isinstance(ann, (DefineGenericBase, TypeVar)) else ValueSet({ann})
for ann in annotation_values
).execute_annotation()
@@ -270,7 +274,39 @@ def infer_type_vars_for_execution(function, arguments, annotation_dict):
annotation_variable_results,
_infer_type_vars(ann, actual_value_set),
)
return annotation_variable_results
def infer_return_for_callable(arguments, param_values, result_values):
result = NO_VALUES
for pv in param_values:
if pv.array_type == 'list':
type_var_dict = infer_type_vars_for_callable(arguments, pv.py__iter__())
result |= ValueSet.from_sets(
v.define_generics(type_var_dict)
if isinstance(v, (DefineGenericBase, TypeVar)) else ValueSet({v})
for v in result_values
).execute_annotation()
return result
def infer_type_vars_for_callable(arguments, lazy_params):
"""
Infers type vars for the Callable class:
def x() -> Callable[[Callable[..., _T]], _T]: ...
"""
annotation_variable_results = {}
for (_, lazy_value), lazy_callable_param in zip(arguments.unpack(), lazy_params):
callable_param_values = lazy_callable_param.infer()
# Infer unknown type var
actual_value_set = lazy_value.infer()
for v in callable_param_values:
_merge_type_var_dicts(
annotation_variable_results,
_infer_type_vars(v, actual_value_set),
)
return annotation_variable_results
@@ -283,7 +319,7 @@ def _merge_type_var_dicts(base_dict, new_dict):
base_dict[type_var_name] = values
def _infer_type_vars(annotation_value, value_set):
def _infer_type_vars(annotation_value, value_set, is_class_value=False):
"""
This function tries to find information about undefined type vars and
returns a dict from type var name to value set.
@@ -298,8 +334,35 @@ def _infer_type_vars(annotation_value, value_set):
"""
type_var_dict = {}
if isinstance(annotation_value, TypeVar):
return {annotation_value.py__name__(): value_set.py__class__()}
elif isinstance(annotation_value, LazyGenericClass):
if not is_class_value:
return {annotation_value.py__name__(): value_set.py__class__()}
return {annotation_value.py__name__(): value_set}
elif isinstance(annotation_value, TypingClassValueWithIndex):
name = annotation_value.py__name__()
if name == 'Type':
given = annotation_value.get_generics()
if given:
for nested_annotation_value in given[0]:
_merge_type_var_dicts(
type_var_dict,
_infer_type_vars(
nested_annotation_value,
value_set,
is_class_value=True,
)
)
elif name == 'Callable':
given = annotation_value.get_generics()
if len(given) == 2:
for nested_annotation_value in given[1]:
_merge_type_var_dicts(
type_var_dict,
_infer_type_vars(
nested_annotation_value,
value_set.execute_annotation(),
)
)
elif isinstance(annotation_value, GenericClass):
name = annotation_value.py__name__()
if name == 'Iterable':
given = annotation_value.get_generics()
@@ -389,16 +452,21 @@ def find_unknown_type_vars(context, node):
for subscript_node in _unpack_subscriptlist(trailer.children[1]):
check_node(subscript_node)
else:
type_var_set = context.infer_node(node)
for type_var in type_var_set:
if isinstance(type_var, TypeVar) and type_var not in found:
found.append(type_var)
found[:] = _filter_type_vars(context.infer_node(node), found)
found = [] # We're not using a set, because the order matters.
check_node(node)
return found
def _filter_type_vars(value_set, found=()):
new_found = list(found)
for type_var in value_set:
if isinstance(type_var, TypeVar) and type_var not in found:
new_found.append(type_var)
return new_found
def _unpack_subscriptlist(subscriptlist):
if subscriptlist.type == 'subscriptlist':
for subscript in subscriptlist.children[::2]:

View File

@@ -0,0 +1,321 @@
from jedi.inference.cache import inference_state_method_cache
from jedi.inference.base_value import ValueSet, NO_VALUES, Value, \
iterator_to_value_set, LazyValueWrapper, ValueWrapper
from jedi.inference.compiled import builtin_from_name
from jedi.inference.value.klass import ClassFilter
from jedi.inference.value.klass import ClassMixin
from jedi.inference.utils import to_list
from jedi.inference.names import AbstractNameDefinition, ValueName
from jedi.inference.context import ClassContext
from jedi.inference.gradual.generics import TupleGenericManager
class _BoundTypeVarName(AbstractNameDefinition):
"""
This type var was bound to a certain type, e.g. int.
"""
def __init__(self, type_var, value_set):
self._type_var = type_var
self.parent_context = type_var.parent_context
self._value_set = value_set
def infer(self):
def iter_():
for value in self._value_set:
# Replace any with the constraints if they are there.
from jedi.inference.gradual.typing import Any
if isinstance(value, Any):
for constraint in self._type_var.constraints:
yield constraint
else:
yield value
return ValueSet(iter_())
def py__name__(self):
return self._type_var.py__name__()
def __repr__(self):
return '<%s %s -> %s>' % (self.__class__.__name__, self.py__name__(), self._value_set)
class _TypeVarFilter(object):
"""
A filter for all given variables in a class.
A = TypeVar('A')
B = TypeVar('B')
class Foo(Mapping[A, B]):
...
In this example we would have two type vars given: A and B
"""
def __init__(self, generics, type_vars):
self._generics = generics
self._type_vars = type_vars
def get(self, name):
for i, type_var in enumerate(self._type_vars):
if type_var.py__name__() == name:
try:
return [_BoundTypeVarName(type_var, self._generics[i])]
except IndexError:
return [type_var.name]
return []
def values(self):
# The values are not relevant. If it's not searched exactly, the type
# vars are just global and should be looked up as that.
return []
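The docstring scenario that `_TypeVarFilter` handles can be reproduced with the standard `typing` introspection helpers (a sketch; `get_args` needs Python 3.8+):

```python
from typing import Generic, TypeVar, get_args

A = TypeVar('A')
B = TypeVar('B')

class Foo(Generic[A, B]):
    pass

# Subscripting records the given generics in declaration order, which is
# the positional pairing that _TypeVarFilter.get relies on.
alias = Foo[int, str]
print(get_args(alias))  # (int, str)
```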
class _AnnotatedClassContext(ClassContext):
def get_filters(self, *args, **kwargs):
filters = super(_AnnotatedClassContext, self).get_filters(
*args, **kwargs
)
for f in filters:
yield f
# The type vars can only be looked up if it's a global search and
# not a direct lookup on the class.
yield self._value.get_type_var_filter()
class DefineGenericBase(LazyValueWrapper):
def __init__(self, generics_manager):
self._generics_manager = generics_manager
def _create_instance_with_generics(self, generics_manager):
raise NotImplementedError
@inference_state_method_cache()
def get_generics(self):
return self._generics_manager.to_tuple()
def define_generics(self, type_var_dict):
from jedi.inference.gradual.type_var import TypeVar
changed = False
new_generics = []
for generic_set in self.get_generics():
values = NO_VALUES
for generic in generic_set:
if isinstance(generic, (GenericClass, TypeVar)):
result = generic.define_generics(type_var_dict)
values |= result
if result != ValueSet({generic}):
changed = True
else:
values |= ValueSet([generic])
new_generics.append(values)
if not changed:
# There might not be any type vars that change. In that case just
# return itself, because it does not make sense to potentially lose
# cached results.
return ValueSet([self])
return ValueSet([self._create_instance_with_generics(
TupleGenericManager(tuple(new_generics))
)])
def is_same_class(self, other):
if not isinstance(other, DefineGenericBase):
return False
if self.tree_node != other.tree_node:
# TODO not sure if this is nice.
return False
given_params1 = self.get_generics()
given_params2 = other.get_generics()
if len(given_params1) != len(given_params2):
# If the amount of type vars doesn't match, the class doesn't
# match.
return False
# Now compare generics
return all(
any(
# TODO why is this ordering the correct one?
cls2.is_same_class(cls1)
for cls1 in class_set1
for cls2 in class_set2
) for class_set1, class_set2 in zip(given_params1, given_params2)
)
def __repr__(self):
return '<%s: %s%s>' % (
self.__class__.__name__,
self._wrapped_value,
list(self.get_generics()),
)
class GenericClass(ClassMixin, DefineGenericBase):
"""
A class that is defined with generics, might be something simple like:
class Foo(Generic[T]): ...
my_foo_int_cls = Foo[int]
"""
def __init__(self, class_value, generics_manager):
super(GenericClass, self).__init__(generics_manager)
self._class_value = class_value
def _get_wrapped_value(self):
return self._class_value
def get_type_var_filter(self):
return _TypeVarFilter(self.get_generics(), self.list_type_vars())
def py__call__(self, arguments):
instance, = super(GenericClass, self).py__call__(arguments)
return ValueSet([_GenericInstanceWrapper(instance)])
def _as_context(self):
return _AnnotatedClassContext(self)
@to_list
def py__bases__(self):
for base in self._wrapped_value.py__bases__():
yield _LazyGenericBaseClass(self, base)
def _create_instance_with_generics(self, generics_manager):
return GenericClass(self._class_value, generics_manager)
def is_sub_class_of(self, class_value):
if super(GenericClass, self).is_sub_class_of(class_value):
return True
return self._class_value.is_sub_class_of(class_value)
class _LazyGenericBaseClass(object):
def __init__(self, class_value, lazy_base_class):
self._class_value = class_value
self._lazy_base_class = lazy_base_class
@iterator_to_value_set
def infer(self):
for base in self._lazy_base_class.infer():
if isinstance(base, GenericClass):
# Here we have to recalculate the given types.
yield GenericClass.create_cached(
base.inference_state,
base._wrapped_value,
TupleGenericManager(tuple(self._remap_type_vars(base))),
)
else:
yield base
def _remap_type_vars(self, base):
from jedi.inference.gradual.type_var import TypeVar
filter = self._class_value.get_type_var_filter()
for type_var_set in base.get_generics():
new = NO_VALUES
for type_var in type_var_set:
if isinstance(type_var, TypeVar):
names = filter.get(type_var.py__name__())
new |= ValueSet.from_sets(
name.infer() for name in names
)
else:
# Mostly will be type vars, except if in some cases
# a concrete type will already be there. In that
# case just add it to the value set.
new |= ValueSet([type_var])
yield new
class _GenericInstanceWrapper(ValueWrapper):
def py__stop_iteration_returns(self):
for cls in self._wrapped_value.class_value.py__mro__():
if cls.py__name__() == 'Generator':
generics = cls.get_generics()
try:
return generics[2].execute_annotation()
except IndexError:
pass
elif cls.py__name__() == 'Iterator':
return ValueSet([builtin_from_name(self.inference_state, u'None')])
return self._wrapped_value.py__stop_iteration_returns()
class _PseudoTreeNameClass(Value):
"""
In typeshed, some classes are defined like this:
Tuple: _SpecialForm = ...
Now this is not a real class, therefore we have to do some workarounds like
this class. Essentially this class makes it possible to goto that `Tuple`
name, without affecting anything else negatively.
"""
def __init__(self, parent_context, tree_name):
super(_PseudoTreeNameClass, self).__init__(
parent_context.inference_state,
parent_context
)
self._tree_name = tree_name
@property
def tree_node(self):
return self._tree_name
def get_filters(self, *args, **kwargs):
# TODO this is obviously wrong. Is it though?
class EmptyFilter(ClassFilter):
def __init__(self):
pass
def get(self, name, **kwargs):
return []
def values(self, **kwargs):
return []
yield EmptyFilter()
def py__class__(self):
# TODO this is obviously not correct, but at least gives us a class if
# we have none. Some of these objects don't really have a base class in
# typeshed.
return builtin_from_name(self.inference_state, u'object')
@property
def name(self):
return ValueName(self, self._tree_name)
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self._tree_name.value)
class BaseTypingValue(LazyValueWrapper):
def __init__(self, parent_context, tree_name):
self.inference_state = parent_context.inference_state
self.parent_context = parent_context
self._tree_name = tree_name
@property
def name(self):
return ValueName(self, self._tree_name)
def _get_wrapped_value(self):
return _PseudoTreeNameClass(self.parent_context, self._tree_name)
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self._tree_name.value)
class BaseTypingValueWithGenerics(DefineGenericBase):
def __init__(self, parent_context, tree_name, generics_manager):
super(BaseTypingValueWithGenerics, self).__init__(generics_manager)
self.inference_state = parent_context.inference_state
self.parent_context = parent_context
self._tree_name = tree_name
def _get_wrapped_value(self):
return _PseudoTreeNameClass(self.parent_context, self._tree_name)
def __repr__(self):
return '%s(%s%s)' % (self.__class__.__name__, self._tree_name.value,
self._generics_manager)


@@ -59,27 +59,22 @@ def _try_stub_to_python_names(names, prefer_stub_to_compiled=False):
yield name
continue
name_list = name.get_qualified_names()
if name_list is None:
values = NO_VALUES
else:
values = _infer_from_stub(
module_context,
name_list[:-1],
ignore_compiled=prefer_stub_to_compiled,
)
if values and name_list:
new_names = values.goto(name_list[-1])
for new_name in new_names:
yield new_name
if new_names:
if name.api_type == 'module':
values = convert_values(name.infer(), ignore_compiled=prefer_stub_to_compiled)
if values:
for v in values:
yield v.name
continue
elif values:
for c in values:
yield c.name
continue
# This is the part where if we haven't found anything, just return the
# stub name.
else:
v = name.get_defining_qualified_value()
if v is not None:
converted = _stub_to_python_value_set(v, ignore_compiled=prefer_stub_to_compiled)
if converted:
converted_names = converted.goto(name.get_public_name())
if converted_names:
for n in converted_names:
yield n
continue
yield name
@@ -104,45 +99,44 @@ def _python_to_stub_names(names, fallback_to_python=False):
yield name
continue
if name.is_import():
for new_name in name.goto():
# Imports don't need to be converted, because they are already
# stubs if possible.
if fallback_to_python or new_name.is_stub():
yield new_name
continue
name_list = name.get_qualified_names()
stubs = NO_VALUES
if name_list is not None:
stub_module = _load_stub_module(module_context.get_value())
if stub_module is not None:
stubs = ValueSet({stub_module})
for name in name_list[:-1]:
stubs = stubs.py__getattribute__(name)
if stubs and name_list:
new_names = stubs.goto(name_list[-1])
for new_name in new_names:
yield new_name
if new_names:
if name.api_type == 'module':
found_name = False
for n in name.goto():
if n.api_type == 'module':
values = convert_values(n.infer(), only_stubs=True)
for v in values:
yield v.name
found_name = True
else:
for x in _python_to_stub_names([n], fallback_to_python=fallback_to_python):
yield x
found_name = True
if found_name:
continue
elif stubs:
for c in stubs:
yield c.name
continue
else:
v = name.get_defining_qualified_value()
if v is not None:
converted = to_stub(v)
if converted:
converted_names = converted.goto(name.get_public_name())
if converted_names:
for n in converted_names:
yield n
continue
if fallback_to_python:
# This is the part where if we haven't found anything, just return
# the stub name.
yield name
def convert_names(names, only_stubs=False, prefer_stubs=False):
def convert_names(names, only_stubs=False, prefer_stubs=False, prefer_stub_to_compiled=True):
assert not (only_stubs and prefer_stubs)
with debug.increase_indent_cm('convert names'):
if only_stubs or prefer_stubs:
return _python_to_stub_names(names, fallback_to_python=prefer_stubs)
else:
return _try_stub_to_python_names(names, prefer_stub_to_compiled=True)
return _try_stub_to_python_names(
names, prefer_stub_to_compiled=prefer_stub_to_compiled)
def convert_values(values, only_stubs=False, prefer_stubs=False, ignore_compiled=True):
@@ -162,7 +156,6 @@ def convert_values(values, only_stubs=False, prefer_stubs=False, ignore_compiled
)
# TODO merge with _python_to_stub_names?
def to_stub(value):
if value.is_stub():
return ValueSet([value])


@@ -0,0 +1,98 @@
"""
This module is about generics, like the `int` in `List[int]`. It's not about
the Generic class.
"""
from jedi import debug
from jedi.cache import memoize_method
from jedi.inference.utils import to_tuple
from jedi.inference.base_value import ValueSet, NO_VALUES
from jedi.inference.value.iterable import SequenceLiteralValue
from jedi.inference.helpers import is_string
def _resolve_forward_references(context, value_set):
for value in value_set:
if is_string(value):
from jedi.inference.gradual.annotation import _get_forward_reference_node
node = _get_forward_reference_node(context, value.get_safe_value())
if node is not None:
for c in context.infer_node(node):
yield c
else:
yield value
class _AbstractGenericManager(object):
def get_index_and_execute(self, index):
try:
return self[index].execute_annotation()
except IndexError:
debug.warning('No param #%s found for annotation %s', index, self)
return NO_VALUES
class LazyGenericManager(_AbstractGenericManager):
def __init__(self, context_of_index, index_value):
self._context_of_index = context_of_index
self._index_value = index_value
@memoize_method
def __getitem__(self, index):
return self._tuple()[index]()
def __len__(self):
return len(self._tuple())
@memoize_method
@to_tuple
def _tuple(self):
def lambda_scoping_in_for_loop_sucks(lazy_value):
return lambda: ValueSet(_resolve_forward_references(
self._context_of_index,
lazy_value.infer()
))
if isinstance(self._index_value, SequenceLiteralValue):
for lazy_value in self._index_value.py__iter__(contextualized_node=None):
yield lambda_scoping_in_for_loop_sucks(lazy_value)
else:
yield lambda: ValueSet(_resolve_forward_references(
self._context_of_index,
ValueSet([self._index_value])
))
@to_tuple
def to_tuple(self):
for callable_ in self._tuple():
yield callable_()
def is_homogenous_tuple(self):
if isinstance(self._index_value, SequenceLiteralValue):
entries = self._index_value.get_tree_entries()
if len(entries) == 2 and entries[1] == '...':
return True
return False
def __repr__(self):
return '<LazyG>[%s]' % (', '.join(repr(x) for x in self.to_tuple()))
class TupleGenericManager(_AbstractGenericManager):
def __init__(self, tup):
self._tuple = tup
def __getitem__(self, index):
return self._tuple[index]
def __len__(self):
return len(self._tuple)
def to_tuple(self):
return self._tuple
def is_homogenous_tuple(self):
return False
def __repr__(self):
return '<TupG>[%s]' % (', '.join(repr(x) for x in self.to_tuple()))
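The helper name `lambda_scoping_in_for_loop_sucks` in `LazyGenericManager._tuple` refers to Python's late-binding closures: lambdas created in a loop all see the loop variable's final value unless each value is captured through a factory function. A standalone demonstration:

```python
# Late binding: each lambda reads i only when called, after the loop ends.
late = [lambda: i for i in range(3)]
print([f() for f in late])  # [2, 2, 2]

# The factory pattern used in _tuple() captures each value in its own scope.
def make(value):
    return lambda: value

bound = [make(i) for i in range(3)]
print([f() for f in bound])  # [0, 1, 2]
```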


@@ -1,12 +1,14 @@
from jedi.inference.base_value import ValueWrapper
from jedi.inference.value.module import ModuleValue
from jedi.inference.filters import ParserTreeFilter, \
TreeNameDefinition
from jedi.inference.filters import ParserTreeFilter
from jedi.inference.names import StubName, StubModuleName
from jedi.inference.gradual.typing import TypingModuleFilterWrapper
from jedi.inference.context import ModuleContext
class StubModuleValue(ModuleValue):
_module_name_class = StubModuleName
def __init__(self, non_stub_value_set, *args, **kwargs):
super(StubModuleValue, self).__init__(*args, **kwargs)
self.non_stub_value_set = non_stub_value_set
@@ -31,10 +33,6 @@ class StubModuleValue(ModuleValue):
names.update(super(StubModuleValue, self).sub_modules_dict())
return names
def _get_first_non_stub_filters(self):
for value in self.non_stub_value_set:
yield next(value.get_filters())
def _get_stub_filters(self, origin_scope):
return [StubFilter(
parent_context=self.as_context(),
@@ -51,6 +49,16 @@ class StubModuleValue(ModuleValue):
for f in filters:
yield f
def _as_context(self):
return StubModuleContext(self)
class StubModuleContext(ModuleContext):
def get_filters(self, until_position=None, origin_scope=None):
# Make sure to ignore the position, because positions are not relevant
# for stubs.
return super(StubModuleContext, self).get_filters(origin_scope=origin_scope)
class TypingModuleWrapper(StubModuleValue):
def get_filters(self, *args, **kwargs):
@@ -71,17 +79,8 @@ class TypingModuleContext(ModuleContext):
yield f
# From here on down we make looking up the sys.version_info fast.
class _StubName(TreeNameDefinition):
def infer(self):
inferred = super(_StubName, self).infer()
if self.string_name == 'version_info' and self.get_root_context().py__name__() == 'sys':
return [VersionInfo(c) for c in inferred]
return inferred
class StubFilter(ParserTreeFilter):
name_class = _StubName
name_class = StubName
def _is_name_reachable(self, name):
if not super(StubFilter, self)._is_name_reachable(name):


@@ -0,0 +1,111 @@
from jedi._compatibility import unicode, force_unicode
from jedi import debug
from jedi.inference.base_value import ValueSet, NO_VALUES
from jedi.inference.gradual.base import BaseTypingValue
class TypeVarClass(BaseTypingValue):
def py__call__(self, arguments):
unpacked = arguments.unpack()
key, lazy_value = next(unpacked, (None, None))
var_name = self._find_string_name(lazy_value)
# The name must be given, otherwise it's useless.
if var_name is None or key is not None:
debug.warning('Found a variable without a name %s', arguments)
return NO_VALUES
return ValueSet([TypeVar.create_cached(
self.inference_state,
self.parent_context,
self._tree_name,
var_name,
unpacked
)])
def _find_string_name(self, lazy_value):
if lazy_value is None:
return None
value_set = lazy_value.infer()
if not value_set:
return None
if len(value_set) > 1:
debug.warning('Found multiple values for a type variable: %s', value_set)
name_value = next(iter(value_set))
try:
method = name_value.get_safe_value
except AttributeError:
return None
else:
safe_value = method(default=None)
if self.inference_state.environment.version_info.major == 2:
if isinstance(safe_value, bytes):
return force_unicode(safe_value)
if isinstance(safe_value, (str, unicode)):
return safe_value
return None
class TypeVar(BaseTypingValue):
def __init__(self, parent_context, tree_name, var_name, unpacked_args):
super(TypeVar, self).__init__(parent_context, tree_name)
self._var_name = var_name
self._constraints_lazy_values = []
self._bound_lazy_value = None
self._covariant_lazy_value = None
self._contravariant_lazy_value = None
for key, lazy_value in unpacked_args:
if key is None:
self._constraints_lazy_values.append(lazy_value)
else:
if key == 'bound':
self._bound_lazy_value = lazy_value
elif key == 'covariant':
self._covariant_lazy_value = lazy_value
elif key == 'contravariant':
self._contravariant_lazy_value = lazy_value

else:
debug.warning('Invalid TypeVar param name %s', key)
def py__name__(self):
return self._var_name
def get_filters(self, *args, **kwargs):
return iter([])
def _get_classes(self):
if self._bound_lazy_value is not None:
return self._bound_lazy_value.infer()
if self._constraints_lazy_values:
return self.constraints
debug.warning('Tried to infer the TypeVar %s without a given type', self._var_name)
return NO_VALUES
def is_same_class(self, other):
# Everything can match an undefined type var.
return True
@property
def constraints(self):
return ValueSet.from_sets(
lazy.infer() for lazy in self._constraints_lazy_values
)
def define_generics(self, type_var_dict):
try:
found = type_var_dict[self.py__name__()]
except KeyError:
pass
else:
if found:
return found
return self._get_classes() or ValueSet({self})
def execute_annotation(self):
return self._get_classes().execute_annotation()
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.py__name__())
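The keyword handling in `TypeVar.__init__` above corresponds to the runtime `typing.TypeVar` signature: a positional string name, optional positional constraints, and `bound`/`covariant`/`contravariant` keywords:

```python
from typing import TypeVar

T = TypeVar('T')                        # name only
AnyStr = TypeVar('AnyStr', str, bytes)  # positional constraints
Num = TypeVar('Num', bound=int)         # upper bound

print(T.__name__)              # T
print(AnyStr.__constraints__)  # (str, bytes)
print(Num.__bound__)           # <class 'int'>
```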


@@ -44,7 +44,7 @@ def _create_stub_map(directory):
if os.path.isfile(init):
yield entry, init
elif entry.endswith('.pyi') and os.path.isfile(path):
name = entry.rstrip('.pyi')
name = entry[:-4]
if name != '__init__':
yield name, path
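The `rstrip('.pyi')` replacement above fixes a real pitfall: `str.rstrip` strips a *set of characters* from the right, not a suffix, so any stub whose module name ends in `.`, `i`, `p`, or `y` was mangled:

```python
# str.rstrip treats its argument as a character set, not a suffix:
print("numpy.pyi".rstrip(".pyi"))  # 'num'  -- also eats the trailing 'py'

# Slicing off the fixed-length '.pyi' extension behaves correctly:
print("numpy.pyi"[:-4])            # 'numpy'
```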
@@ -91,9 +91,8 @@ def _cache_stub_file_map(version_info):
def import_module_decorator(func):
@wraps(func)
def wrapper(inference_state, import_names, parent_module_value, sys_path, prefer_stubs):
try:
python_value_set = inference_state.module_cache.get(import_names)
except KeyError:
python_value_set = inference_state.module_cache.get(import_names)
if python_value_set is None:
if parent_module_value is not None and parent_module_value.is_stub():
parent_module_values = parent_module_value.non_stub_value_set
else:
@@ -163,6 +162,7 @@ def _try_to_load_stub(inference_state, import_names, python_value_set,
if len(import_names) == 1:
# foo-stubs
for p in sys_path:
p = cast_path(p)
init = os.path.join(p, *import_names) + '-stubs' + os.path.sep + '__init__.pyi'
m = _try_to_load_stub_from_file(
inference_state,
@@ -235,7 +235,7 @@ def _load_from_typeshed(inference_state, python_value_set, parent_module_value,
map_ = _cache_stub_file_map(inference_state.grammar.version_info)
import_name = _IMPORT_MAP.get(import_name, import_name)
elif isinstance(parent_module_value, ModuleValue):
if not parent_module_value.is_package:
if not parent_module_value.is_package():
# Only if it's a package (= a folder) something can be
# imported.
return None


@@ -5,22 +5,18 @@ values.
This file deals with all the typing.py cases.
"""
from jedi._compatibility import unicode, force_unicode
from jedi import debug
from jedi.inference.cache import inference_state_method_cache
from jedi.inference.compiled import builtin_from_name
from jedi.inference.base_value import ValueSet, NO_VALUES, Value, \
iterator_to_value_set, ValueWrapper, LazyValueWrapper
LazyValueWrapper
from jedi.inference.lazy_value import LazyKnownValues
from jedi.inference.value.iterable import SequenceLiteralValue
from jedi.inference.arguments import repack_with_argument_clinic
from jedi.inference.utils import to_list
from jedi.inference.filters import FilterWrapper
from jedi.inference.names import NameWrapper, AbstractTreeName, \
AbstractNameDefinition, ValueName
from jedi.inference.helpers import is_string
from jedi.inference.value.klass import ClassMixin, ClassFilter
from jedi.inference.context import ClassContext
from jedi.inference.names import NameWrapper, ValueName
from jedi.inference.value.klass import ClassMixin
from jedi.inference.gradual.base import BaseTypingValue, BaseTypingValueWithGenerics
from jedi.inference.gradual.type_var import TypeVarClass
from jedi.inference.gradual.generics import LazyGenericManager, TupleGenericManager
_PROXY_CLASS_TYPES = 'Tuple Generic Protocol Callable Type'.split()
_TYPE_ALIAS_TYPES = {
@@ -36,52 +32,6 @@ _TYPE_ALIAS_TYPES = {
_PROXY_TYPES = 'Optional Union ClassVar'.split()
class TypingName(AbstractTreeName):
def __init__(self, value, other_name):
super(TypingName, self).__init__(value.parent_context, other_name.tree_name)
self._value = value
def infer(self):
return ValueSet([self._value])
class _BaseTypingValue(Value):
def __init__(self, inference_state, parent_context, tree_name):
super(_BaseTypingValue, self).__init__(inference_state, parent_context)
self._tree_name = tree_name
@property
def tree_node(self):
return self._tree_name
def get_filters(self, *args, **kwargs):
# TODO this is obviously wrong. Is it though?
class EmptyFilter(ClassFilter):
def __init__(self):
pass
def get(self, name, **kwargs):
return []
def values(self, **kwargs):
return []
yield EmptyFilter()
def py__class__(self):
# TODO this is obviously not correct, but at least gives us a class if
# we have none. Some of these objects don't really have a base class in
# typeshed.
return builtin_from_name(self.inference_state, u'object')
@property
def name(self):
return ValueName(self, self._tree_name)
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self._tree_name.value)
class TypingModuleName(NameWrapper):
def infer(self):
return ValueSet(self._remap())
@@ -94,33 +44,40 @@ class TypingModuleName(NameWrapper):
except KeyError:
pass
else:
yield TypeAlias.create_cached(inference_state, self.parent_context, self.tree_name, actual)
yield TypeAlias.create_cached(
inference_state, self.parent_context, self.tree_name, actual)
return
if name in _PROXY_CLASS_TYPES:
yield TypingClassValue.create_cached(inference_state, self.parent_context, self.tree_name)
yield ProxyTypingClassValue.create_cached(
inference_state, self.parent_context, self.tree_name)
elif name in _PROXY_TYPES:
yield TypingValue.create_cached(inference_state, self.parent_context, self.tree_name)
yield ProxyTypingValue.create_cached(
inference_state, self.parent_context, self.tree_name)
elif name == 'runtime':
# We don't want anything here, not sure what this function is
# supposed to do, since it just appears in the stubs and shouldn't
# have any effects there (because it's never executed).
return
elif name == 'TypeVar':
yield TypeVarClass.create_cached(inference_state, self.parent_context, self.tree_name)
yield TypeVarClass.create_cached(
inference_state, self.parent_context, self.tree_name)
elif name == 'Any':
yield Any.create_cached(inference_state, self.parent_context, self.tree_name)
yield Any.create_cached(
inference_state, self.parent_context, self.tree_name)
elif name == 'TYPE_CHECKING':
# This is needed for e.g. imports that are only available for type
# checking or are in cycles. The user can then check this variable.
yield builtin_from_name(inference_state, u'True')
elif name == 'overload':
yield OverloadFunction.create_cached(inference_state, self.parent_context, self.tree_name)
yield OverloadFunction.create_cached(
inference_state, self.parent_context, self.tree_name)
elif name == 'NewType':
yield NewTypeFunction.create_cached(inference_state, self.parent_context, self.tree_name)
yield NewTypeFunction.create_cached(
inference_state, self.parent_context, self.tree_name)
elif name == 'cast':
# TODO implement cast
yield CastFunction.create_cached(inference_state, self.parent_context, self.tree_name)
yield CastFunction.create_cached(
inference_state, self.parent_context, self.tree_name)
elif name == 'TypedDict':
# TODO doesn't even exist in typeshed/typing.py, yet. But will be
# added soon.
@@ -139,21 +96,7 @@ class TypingModuleFilterWrapper(FilterWrapper):
name_wrapper_class = TypingModuleName
class _WithIndexBase(_BaseTypingValue):
def __init__(self, inference_state, parent_context, name, index_value, value_of_index):
super(_WithIndexBase, self).__init__(inference_state, parent_context, name)
self._index_value = index_value
self._context_of_index = value_of_index
def __repr__(self):
return '<%s: %s[%s]>' % (
self.__class__.__name__,
self._tree_name.value,
self._index_value,
)
class TypingValueWithIndex(_WithIndexBase):
class TypingValueWithIndex(BaseTypingValueWithGenerics):
def execute_annotation(self):
string_name = self._tree_name.value
@@ -168,29 +111,45 @@ class TypingValueWithIndex(_WithIndexBase):
| ValueSet([builtin_from_name(self.inference_state, u'None')])
elif string_name == 'Type':
# The type is actually already given in the index_value
return ValueSet([self._index_value])
return self._generics_manager[0]
elif string_name == 'ClassVar':
# For now don't do anything here, ClassVars are always used.
return self._index_value.execute_annotation()
return self._generics_manager[0].execute_annotation()
cls = globals()[string_name]
mapped = {
'Tuple': Tuple,
'Generic': Generic,
'Protocol': Protocol,
'Callable': Callable,
}
cls = mapped[string_name]
return ValueSet([cls(
self.inference_state,
self.parent_context,
self._tree_name,
self._index_value,
self._context_of_index
generics_manager=self._generics_manager,
)])
def gather_annotation_classes(self):
return ValueSet.from_sets(
_iter_over_arguments(self._index_value, self._context_of_index)
return ValueSet.from_sets(self._generics_manager.to_tuple())
def _create_instance_with_generics(self, generics_manager):
return TypingValueWithIndex(
self.parent_context,
self._tree_name,
generics_manager
)
class TypingValue(_BaseTypingValue):
class ProxyTypingValue(BaseTypingValue):
index_class = TypingValueWithIndex
py__simple_getitem__ = None
def with_generics(self, generics_tuple):
return self.index_class.create_cached(
self.inference_state,
self.parent_context,
self._tree_name,
generics_manager=TupleGenericManager(generics_tuple)
)
def py__getitem__(self, index_value_set, contextualized_node):
return ValueSet(
@@ -198,9 +157,11 @@ class TypingValue(_BaseTypingValue):
self.inference_state,
self.parent_context,
self._tree_name,
index_value,
value_of_index=contextualized_node.context)
for index_value in index_value_set
generics_manager=LazyGenericManager(
context_of_index=contextualized_node.context,
index_value=index_value,
)
) for index_value in index_value_set
)
@@ -222,33 +183,10 @@ class TypingClassValueWithIndex(_TypingClassMixin, TypingValueWithIndex):
pass
class TypingClassValue(_TypingClassMixin, TypingValue):
class ProxyTypingClassValue(_TypingClassMixin, ProxyTypingValue):
index_class = TypingClassValueWithIndex
def _iter_over_arguments(maybe_tuple_value, defining_context):
def iterate():
if isinstance(maybe_tuple_value, SequenceLiteralValue):
for lazy_value in maybe_tuple_value.py__iter__(contextualized_node=None):
yield lazy_value.infer()
else:
yield ValueSet([maybe_tuple_value])
def resolve_forward_references(value_set):
for value in value_set:
if is_string(value):
from jedi.inference.gradual.annotation import _get_forward_reference_node
node = _get_forward_reference_node(defining_context, value.get_safe_value())
if node is not None:
for c in defining_context.infer_node(node):
yield c
else:
yield value
for value_set in iterate():
yield ValueSet(resolve_forward_references(value_set))
class TypeAlias(LazyValueWrapper):
def __init__(self, parent_context, origin_tree_name, actual):
self.inference_state = parent_context.inference_state
@@ -282,190 +220,91 @@ class TypeAlias(LazyValueWrapper):
cls = next(iter(classes))
return cls
class _ContainerBase(_WithIndexBase):
def _get_getitem_values(self, index):
args = _iter_over_arguments(self._index_value, self._context_of_index)
for i, values in enumerate(args):
if i == index:
return values
debug.warning('No param #%s found for annotation %s', index, self._index_value)
return NO_VALUES
def gather_annotation_classes(self):
return ValueSet([self._get_wrapped_value()])
class Callable(_ContainerBase):
class Callable(BaseTypingValueWithGenerics):
def py__call__(self, arguments):
"""
def x() -> Callable[[Callable[..., _T]], _T]: ...
"""
# The 0th index are the arguments.
return self._get_getitem_values(1).execute_annotation()
try:
param_values = self._generics_manager[0]
result_values = self._generics_manager[1]
except IndexError:
debug.warning('Callable[...] defined without two arguments')
return NO_VALUES
else:
from jedi.inference.gradual.annotation import infer_return_for_callable
return infer_return_for_callable(arguments, param_values, result_values)
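`typing.get_args` (Python 3.8+) exposes the same two-slot layout that `Callable.py__call__` indexes: slot 0 holds the parameter list, slot 1 the return type:

```python
from typing import Callable, get_args

sig = Callable[[int, str], bool]
# get_args flattens a Callable into (parameter_list, return_type).
print(get_args(sig))  # ([int, str], bool)
```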
class Tuple(_ContainerBase):
class Tuple(LazyValueWrapper):
def __init__(self, parent_context, name, generics_manager):
self.inference_state = parent_context.inference_state
self.parent_context = parent_context
self._generics_manager = generics_manager
def _is_homogenous(self):
# To specify a variable-length tuple of homogeneous type, Tuple[T, ...]
# is used.
if isinstance(self._index_value, SequenceLiteralValue):
entries = self._index_value.get_tree_entries()
if len(entries) == 2 and entries[1] == '...':
return True
return False
return self._generics_manager.is_homogenous_tuple()
def py__simple_getitem__(self, index):
if self._is_homogenous():
return self._get_getitem_values(0).execute_annotation()
return self._generics_manager.get_index_and_execute(0)
else:
if isinstance(index, int):
return self._get_getitem_values(index).execute_annotation()
return self._generics_manager.get_index_and_execute(index)
debug.dbg('The getitem type on Tuple was %s' % index)
return NO_VALUES
def py__iter__(self, contextualized_node=None):
if self._is_homogenous():
yield LazyKnownValues(self._get_getitem_values(0).execute_annotation())
yield LazyKnownValues(self._generics_manager.get_index_and_execute(0))
else:
if isinstance(self._index_value, SequenceLiteralValue):
for i in range(self._index_value.py__len__()):
yield LazyKnownValues(self._get_getitem_values(i).execute_annotation())
for v in self._generics_manager.to_tuple():
yield LazyKnownValues(v.execute_annotation())
def py__getitem__(self, index_value_set, contextualized_node):
if self._is_homogenous():
return self._get_getitem_values(0).execute_annotation()
return self._generics_manager.get_index_and_execute(0)
return ValueSet.from_sets(
_iter_over_arguments(self._index_value, self._context_of_index)
self._generics_manager.to_tuple()
).execute_annotation()
def _get_wrapped_value(self):
tuple_, = self.inference_state.builtins_module \
.py__getattribute__('tuple').execute_annotation()
return tuple_
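The `_is_homogenous` distinction above matches how the runtime `typing` module encodes `Tuple[T, ...]`: the trailing `...` is stored as `Ellipsis`. A minimal standalone sketch (plain `typing`, Python 3.8+, not Jedi's internal API):

```python
from typing import Tuple, get_args

def is_homogeneous_tuple(tp) -> bool:
    # Tuple[T, ...] declares a variable-length tuple of one element type;
    # at runtime the trailing "..." shows up as Ellipsis in get_args().
    args = get_args(tp)
    return len(args) == 2 and args[1] is Ellipsis

print(is_homogeneous_tuple(Tuple[int, ...]))  # True
print(is_homogeneous_tuple(Tuple[int, str]))  # False
```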
-class Generic(_ContainerBase):
+class Generic(BaseTypingValueWithGenerics):
pass
-class Protocol(_ContainerBase):
+class Protocol(BaseTypingValueWithGenerics):
pass
-class Any(_BaseTypingValue):
+class Any(BaseTypingValue):
def execute_annotation(self):
debug.warning('Used Any - returned no results')
return NO_VALUES
class TypeVarClass(_BaseTypingValue):
def py__call__(self, arguments):
unpacked = arguments.unpack()
key, lazy_value = next(unpacked, (None, None))
var_name = self._find_string_name(lazy_value)
# The name must be given, otherwise it's useless.
if var_name is None or key is not None:
debug.warning('Found a variable without a name %s', arguments)
return NO_VALUES
return ValueSet([TypeVar.create_cached(
self.inference_state,
self.parent_context,
self._tree_name,
var_name,
unpacked
)])
def _find_string_name(self, lazy_value):
if lazy_value is None:
return None
value_set = lazy_value.infer()
if not value_set:
return None
if len(value_set) > 1:
debug.warning('Found multiple values for a type variable: %s', value_set)
name_value = next(iter(value_set))
try:
method = name_value.get_safe_value
except AttributeError:
return None
else:
safe_value = method(default=None)
if self.inference_state.environment.version_info.major == 2:
if isinstance(safe_value, bytes):
return force_unicode(safe_value)
if isinstance(safe_value, (str, unicode)):
return safe_value
return None
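For reference, the string that `_find_string_name` recovers is the same name the runtime `typing.TypeVar` requires as its first argument (a small illustration, not Jedi code):

```python
from typing import TypeVar

# The first positional argument is the variable's name, which the runtime
# stores as __name__; without it a TypeVar is useless, as noted above.
T = TypeVar('T')
print(T.__name__)  # T
```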
class TypeVar(_BaseTypingValue):
def __init__(self, inference_state, parent_context, tree_name, var_name, unpacked_args):
super(TypeVar, self).__init__(inference_state, parent_context, tree_name)
self._var_name = var_name
self._constraints_lazy_values = []
self._bound_lazy_value = None
self._covariant_lazy_value = None
self._contravariant_lazy_value = None
for key, lazy_value in unpacked_args:
if key is None:
self._constraints_lazy_values.append(lazy_value)
else:
if key == 'bound':
self._bound_lazy_value = lazy_value
elif key == 'covariant':
self._covariant_lazy_value = lazy_value
elif key == 'contravariant':
self._contravariant_lazy_value = lazy_value
else:
debug.warning('Invalid TypeVar param name %s', key)
def py__name__(self):
return self._var_name
def get_filters(self, *args, **kwargs):
return iter([])
def _get_classes(self):
if self._bound_lazy_value is not None:
return self._bound_lazy_value.infer()
if self._constraints_lazy_values:
return self.constraints
debug.warning('Tried to infer the TypeVar %s without a given type', self._var_name)
return NO_VALUES
def is_same_class(self, other):
# Everything can match an undefined type var.
return True
@property
def constraints(self):
return ValueSet.from_sets(
lazy.infer() for lazy in self._constraints_lazy_values
)
def define_generics(self, type_var_dict):
try:
found = type_var_dict[self.py__name__()]
except KeyError:
pass
else:
if found:
return found
return self._get_classes() or ValueSet({self})
def execute_annotation(self):
return self._get_classes().execute_annotation()
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self.py__name__())
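The `bound`, `covariant` and `contravariant` keywords parsed in `__init__` above mirror the runtime `typing.TypeVar` API; a quick standalone check:

```python
from typing import TypeVar

Bounded = TypeVar('Bounded', bound=int)           # upper bound
Constrained = TypeVar('Constrained', str, bytes)  # value constraints
T_co = TypeVar('T_co', covariant=True)

print(Bounded.__bound__)            # <class 'int'>
print(Constrained.__constraints__)  # (<class 'str'>, <class 'bytes'>)
print(T_co.__covariant__)           # True
```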
-class OverloadFunction(_BaseTypingValue):
+class OverloadFunction(BaseTypingValue):
@repack_with_argument_clinic('func, /')
def py__call__(self, func_value_set):
# Just pass arguments through.
return func_value_set
-class NewTypeFunction(_BaseTypingValue):
+class NewTypeFunction(BaseTypingValue):
def py__call__(self, arguments):
ordered_args = arguments.unpack()
next(ordered_args, (None, None))
@@ -490,226 +329,13 @@ class NewType(Value):
def py__call__(self, arguments):
return self._type_value_set.execute_annotation()
@property
def name(self):
from jedi.inference.compiled.value import CompiledValueName
return CompiledValueName(self, 'NewType')
-class CastFunction(_BaseTypingValue):
+class CastFunction(BaseTypingValue):
@repack_with_argument_clinic('type, object, /')
def py__call__(self, type_value_set, object_value_set):
return type_value_set.execute_annotation()
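The two helpers modeled above behave very simply at runtime: a `NewType`'s callable passes its argument through, and `cast` returns its object untouched (it only matters to type checkers). For example:

```python
from typing import NewType, cast

UserId = NewType('UserId', int)

# NewType does no wrapping at runtime: the result is the plain int.
assert UserId(42) == 42
# cast performs no conversion either; it simply returns its second argument.
assert cast(str, 42) == 42
```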
class BoundTypeVarName(AbstractNameDefinition):
"""
This type var was bound to a certain type, e.g. int.
"""
def __init__(self, type_var, value_set):
self._type_var = type_var
self.parent_context = type_var.parent_context
self._value_set = value_set
def infer(self):
def iter_():
for value in self._value_set:
# Replace any with the constraints if they are there.
if isinstance(value, Any):
for constraint in self._type_var.constraints:
yield constraint
else:
yield value
return ValueSet(iter_())
def py__name__(self):
return self._type_var.py__name__()
def __repr__(self):
return '<%s %s -> %s>' % (self.__class__.__name__, self.py__name__(), self._value_set)
class TypeVarFilter(object):
"""
A filter for all given variables in a class.
A = TypeVar('A')
B = TypeVar('B')
class Foo(Mapping[A, B]):
...
In this example we would have two type vars given: A and B
"""
def __init__(self, generics, type_vars):
self._generics = generics
self._type_vars = type_vars
def get(self, name):
for i, type_var in enumerate(self._type_vars):
if type_var.py__name__() == name:
try:
return [BoundTypeVarName(type_var, self._generics[i])]
except IndexError:
return [type_var.name]
return []
def values(self):
# The values are not relevant. If it's not searched exactly, the type
# vars are just global and should be looked up as that.
return []
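A hypothetical, stripped-down analogue of the `TypeVarFilter.get` lookup above, with plain strings and lists standing in for Jedi's name and value objects (the `IndexError` fallback corresponds to returning the bare type var):

```python
def lookup(name, type_var_names, generics):
    # Match the requested name against the declared type vars (A, B, ...)
    # and return the generic bound at the same position, if any.
    for i, tv in enumerate(type_var_names):
        if tv == name:
            if i < len(generics):
                return generics[i]   # bound, e.g. by Mapping[int, str]
            return tv                # unbound: fall back to the type var itself
    return None

print(lookup('B', ['A', 'B'], [int, str]))  # <class 'str'>
```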
class AnnotatedClassContext(ClassContext):
def get_filters(self, *args, **kwargs):
filters = super(AnnotatedClassContext, self).get_filters(
*args, **kwargs
)
for f in filters:
yield f
# The type vars can only be looked up if it's a global search and
# not a direct lookup on the class.
yield self._value.get_type_var_filter()
class AbstractAnnotatedClass(ClassMixin, ValueWrapper):
def get_type_var_filter(self):
return TypeVarFilter(self.get_generics(), self.list_type_vars())
def is_same_class(self, other):
if not isinstance(other, AbstractAnnotatedClass):
return False
if self.tree_node != other.tree_node:
# TODO not sure if this is nice.
return False
given_params1 = self.get_generics()
given_params2 = other.get_generics()
if len(given_params1) != len(given_params2):
# If the amount of type vars doesn't match, the class doesn't
# match.
return False
# Now compare generics
return all(
any(
# TODO why is this ordering the correct one?
cls2.is_same_class(cls1)
for cls1 in class_set1
for cls2 in class_set2
) for class_set1, class_set2 in zip(given_params1, given_params2)
)
def py__call__(self, arguments):
instance, = super(AbstractAnnotatedClass, self).py__call__(arguments)
return ValueSet([InstanceWrapper(instance)])
def get_generics(self):
raise NotImplementedError
def define_generics(self, type_var_dict):
changed = False
new_generics = []
for generic_set in self.get_generics():
values = NO_VALUES
for generic in generic_set:
if isinstance(generic, (AbstractAnnotatedClass, TypeVar)):
result = generic.define_generics(type_var_dict)
values |= result
if result != ValueSet({generic}):
changed = True
else:
values |= ValueSet([generic])
new_generics.append(values)
if not changed:
# There might not be any type vars that change. In that case just
# return itself, because it does not make sense to potentially lose
# cached results.
return ValueSet([self])
return ValueSet([GenericClass(
self._wrapped_value,
generics=tuple(new_generics)
)])
def _as_context(self):
return AnnotatedClassContext(self)
def __repr__(self):
return '<%s: %s%s>' % (
self.__class__.__name__,
self._wrapped_value,
list(self.get_generics()),
)
@to_list
def py__bases__(self):
for base in self._wrapped_value.py__bases__():
yield LazyAnnotatedBaseClass(self, base)
class LazyGenericClass(AbstractAnnotatedClass):
def __init__(self, class_value, index_value, value_of_index):
super(LazyGenericClass, self).__init__(class_value)
self._index_value = index_value
self._context_of_index = value_of_index
@inference_state_method_cache()
def get_generics(self):
return list(_iter_over_arguments(self._index_value, self._context_of_index))
class GenericClass(AbstractAnnotatedClass):
def __init__(self, class_value, generics):
super(GenericClass, self).__init__(class_value)
self._generics = generics
def get_generics(self):
return self._generics
class LazyAnnotatedBaseClass(object):
def __init__(self, class_value, lazy_base_class):
self._class_value = class_value
self._lazy_base_class = lazy_base_class
@iterator_to_value_set
def infer(self):
for base in self._lazy_base_class.infer():
if isinstance(base, AbstractAnnotatedClass):
# Here we have to recalculate the given types.
yield GenericClass.create_cached(
base.inference_state,
base._wrapped_value,
tuple(self._remap_type_vars(base)),
)
else:
yield base
def _remap_type_vars(self, base):
filter = self._class_value.get_type_var_filter()
for type_var_set in base.get_generics():
new = NO_VALUES
for type_var in type_var_set:
if isinstance(type_var, TypeVar):
names = filter.get(type_var.py__name__())
new |= ValueSet.from_sets(
name.infer() for name in names
)
else:
# Mostly will be type vars, except if in some cases
# a concrete type will already be there. In that
# case just add it to the value set.
new |= ValueSet([type_var])
yield new
class InstanceWrapper(ValueWrapper):
def py__stop_iteration_returns(self):
for cls in self._wrapped_value.class_value.py__mro__():
if cls.py__name__() == 'Generator':
generics = cls.get_generics()
try:
return generics[2].execute_annotation()
except IndexError:
pass
elif cls.py__name__() == 'Iterator':
return ValueSet([builtin_from_name(self.inference_state, u'None')])
return self._wrapped_value.py__stop_iteration_returns()

View File

@@ -21,8 +21,6 @@ def load_proper_stub_module(inference_state, file_io, import_names, module_node)
if import_names is not None:
actual_value_set = inference_state.import_module(import_names, prefer_stubs=False)
if not actual_value_set:
return None
stub = create_stub_module(
inference_state, actual_value_set, module_node, file_io, import_names

View File

@@ -72,6 +72,10 @@ def infer_call_of_leaf(context, leaf, cut_own_trailer=False):
# different trailers: `( x )`, `[ x ]` and `.x`. In the first two examples
# we should not match anything more than x.
if trailer.type != 'trailer' or leaf not in (trailer.children[0], trailer.children[-1]):
if leaf == ':':
# Basically happens with foo[:] when the cursor is on the colon
from jedi.inference.base_value import NO_VALUES
return NO_VALUES
if trailer.type == 'atom':
return context.infer_node(trailer)
return context.infer_node(leaf)
@@ -90,7 +94,7 @@ def infer_call_of_leaf(context, leaf, cut_own_trailer=False):
base = power.children[start]
if base.type != 'trailer':
break
-trailers = power.children[start + 1: index + 1]
+trailers = power.children[start + 1:cut]
else:
base = power.children[0]
trailers = power.children[1:cut]
@@ -106,49 +110,6 @@ def infer_call_of_leaf(context, leaf, cut_own_trailer=False):
return values
def call_of_leaf(leaf):
"""
Creates a "call" node that consists of all ``trailer`` and ``power``
objects. E.g. if you call it with ``append``::
list([]).append(3) or None
You would get a node with the content ``list([]).append`` back.
This generates a copy of the original ast node.
If you're using the leaf, e.g. the bracket `)` it will return ``list([])``.
"""
# TODO this is the old version of this call. Try to remove it.
trailer = leaf.parent
# The leaf may not be the last or first child, because there exist three
# different trailers: `( x )`, `[ x ]` and `.x`. In the first two examples
# we should not match anything more than x.
if trailer.type != 'trailer' or leaf not in (trailer.children[0], trailer.children[-1]):
if trailer.type == 'atom':
return trailer
return leaf
power = trailer.parent
index = power.children.index(trailer)
new_power = copy.copy(power)
new_power.children = list(new_power.children)
new_power.children[index + 1:] = []
if power.type == 'error_node':
start = index
while True:
start -= 1
if power.children[start].type != 'trailer':
break
transformed = tree.Node('power', power.children[start:])
transformed.parent = power.parent
return transformed
return power
def get_names_of_node(node):
try:
children = node.children
@@ -257,3 +218,14 @@ def parse_dotted_names(nodes, is_import_from, until_node=None):
def values_from_qualified_names(inference_state, *names):
return inference_state.import_module(names[:-1]).py__getattribute__(names[-1])
def is_big_annoying_library(context):
string_names = context.get_root_context().string_names
if string_names is None:
return False
# Especially pandas and tensorflow are huge complicated Python libraries
# that get even slower than they already are when Jedi tries to understand
# dynamic features like decorators, ifs and other stuff.
return string_names[0] in ('pandas', 'numpy', 'tensorflow', 'matplotlib')

View File

@@ -15,13 +15,10 @@ import os
from parso.python import tree
from parso.tree import search_ancestor
from parso import python_bytes_to_unicode
-from jedi._compatibility import (FileNotFoundError, ImplicitNSInfo,
-force_unicode, unicode)
+from jedi._compatibility import ImplicitNSInfo, force_unicode
from jedi import debug
from jedi import settings
from jedi.file_io import KnownContentFileIO, FileIO
from jedi.parser_utils import get_cached_code_lines
from jedi.inference import sys_path
from jedi.inference import helpers
@@ -38,20 +35,14 @@ from jedi.plugins import plugin_manager
class ModuleCache(object):
def __init__(self):
self._path_cache = {}
self._name_cache = {}
def add(self, string_names, value_set):
#path = module.py__file__()
#self._path_cache[path] = value_set
if string_names is not None:
self._name_cache[string_names] = value_set
def get(self, string_names):
-return self._name_cache[string_names]
-def get_from_path(self, path):
-return self._path_cache[path]
+return self._name_cache.get(string_names)
# This memoization is needed, because otherwise we will infinitely loop on
@@ -123,43 +114,9 @@ def _prepare_infer_import(module_context, tree_name):
importer = Importer(module_context.inference_state, tuple(import_path),
module_context, import_node.level)
#if import_node.is_nested() and not self.nested_resolve:
# scopes = [NestedImportModule(module, import_node)]
return from_import_name, tuple(import_path), import_node.level, importer.follow()
class NestedImportModule(tree.Module):
"""
TODO while there's no use case for nested import module right now, we might
be able to use them for static analysis checks later on.
"""
def __init__(self, module, nested_import):
self._module = module
self._nested_import = nested_import
def _get_nested_import_name(self):
"""
Generates an Import statement, that can be used to fake nested imports.
"""
i = self._nested_import
# This is not an existing Import statement. Therefore, set position to
# 0 (0 is not a valid line number).
zero = (0, 0)
names = [unicode(name) for name in i.namespace_names[1:]]
name = helpers.FakeName(names, self._nested_import)
new = tree.Import(i._sub_module, zero, zero, name)
new.parent = self._module
debug.dbg('Generated a nested import: %s', new)
return helpers.FakeName(str(i.namespace_names[1]), new)
def __getattr__(self, name):
return getattr(self._module, name)
def __repr__(self):
return "<%s: %s of %s>" % (self.__class__.__name__, self._module,
self._nested_import)
def _add_error(value, name, message):
if hasattr(name, 'parent') and value is not None:
analysis.add(value, 'import-error', name, message)
@@ -216,7 +173,7 @@ class Importer(object):
self._fixed_sys_path = None
self._infer_possible = True
if level:
-base = module_context.py__package__()
+base = module_context.get_value().py__package__()
# We need to care for two cases, the first one is if it's a valid
# Python import. This import has a properly defined module name
# chain like `foo.bar.baz` and an import in baz is made for
@@ -272,47 +229,35 @@ class Importer(object):
for name in self.import_path
)
-def _sys_path_with_modifications(self):
+def _sys_path_with_modifications(self, is_completion):
if self._fixed_sys_path is not None:
return self._fixed_sys_path
-sys_path_mod = (
-self._inference_state.get_sys_path()
+return (
+# For import completions we don't want to see init paths, but for
+# inference we want to show the user as much as possible.
+# See GH #1446.
+self._inference_state.get_sys_path(add_init_paths=not is_completion)
+ sys_path.check_sys_path_modifications(self._module_context)
)
-if self._inference_state.environment.version_info.major == 2:
-file_path = self._module_context.py__file__()
-if file_path is not None:
-# Python2 uses an old strange way of importing relative imports.
-sys_path_mod.append(force_unicode(os.path.dirname(file_path)))
-return sys_path_mod
def follow(self):
if not self.import_path or not self._infer_possible:
return NO_VALUES
-import_names = tuple(
-force_unicode(i.value if isinstance(i, tree.Name) else i)
-for i in self.import_path
-)
-sys_path = self._sys_path_with_modifications()
-# Check caches first
-from_cache = self._inference_state.stub_module_cache.get(self._str_import_path)
-if from_cache is not None:
-return ValueSet({from_cache})
-from_cache = self._inference_state.module_cache.get(self._str_import_path)
-if from_cache is not None:
-return from_cache
-value_set = [None]
-for i, name in enumerate(self.import_path):
-value_set = ValueSet.from_sets([
-self._inference_state.import_module(
-import_names[:i+1],
-parent_module_value,
-sys_path
-) for parent_module_value in value_set
-])
-if not value_set:
-message = 'No module named ' + '.'.join(import_names)
-_add_error(self._module_context, name, message)
-return NO_VALUES
-return value_set
+sys_path = self._sys_path_with_modifications(is_completion=False)
+return import_module_by_names(
+self._inference_state, self.import_path, sys_path, self._module_context
+)
def _get_module_names(self, search_path=None, in_module=None):
"""
@@ -322,17 +267,19 @@ class Importer(object):
names = []
# add builtin module names
if search_path is None and in_module is None:
-names += [ImportName(self._module_context, name)
-for name in self._inference_state.compiled_subprocess.get_builtin_module_names()]
+names += [
+ImportName(self._module_context, name)
+for name in self._inference_state.compiled_subprocess.get_builtin_module_names()
+]
if search_path is None:
-search_path = self._sys_path_with_modifications()
+search_path = self._sys_path_with_modifications(is_completion=True)
for name in iter_module_names(self._inference_state, search_path):
if in_module is None:
n = ImportName(self._module_context, name)
else:
-n = SubModuleName(in_module, name)
+n = SubModuleName(in_module.as_context(), name)
names.append(n)
return names
@@ -355,7 +302,7 @@ class Importer(object):
extname = modname[len('flask_'):]
names.append(ImportName(self._module_context, extname))
# Now the old style: ``flaskext.foo``
-for dir in self._sys_path_with_modifications():
+for dir in self._sys_path_with_modifications(is_completion=True):
flaskext = os.path.join(dir, 'flaskext')
if os.path.isdir(flaskext):
names += self._get_module_names([flaskext])
@@ -365,7 +312,9 @@ class Importer(object):
# Non-modules are not completable.
if value.api_type != 'module': # not a module
continue
-names += value.sub_modules_dict().values()
+if not value.is_compiled():
+# sub_modules_dict is not implemented for compiled modules.
+names += value.sub_modules_dict().values()
if not only_modules:
from jedi.inference.gradual.conversion import convert_values
@@ -384,6 +333,36 @@ class Importer(object):
return names
def import_module_by_names(inference_state, import_names, sys_path=None,
module_context=None, prefer_stubs=True):
if sys_path is None:
sys_path = inference_state.get_sys_path()
str_import_names = tuple(
force_unicode(i.value if isinstance(i, tree.Name) else i)
for i in import_names
)
value_set = [None]
for i, name in enumerate(import_names):
value_set = ValueSet.from_sets([
import_module(
inference_state,
str_import_names[:i+1],
parent_module_value,
sys_path,
prefer_stubs=prefer_stubs,
) for parent_module_value in value_set
])
if not value_set:
message = 'No module named ' + '.'.join(str_import_names)
if module_context is not None:
_add_error(module_context, name, message)
else:
debug.warning(message)
return NO_VALUES
return value_set
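The stepwise resolution in `import_module_by_names` (import `a`, then `a.b`, then `a.b.c`, threading each parent into the next step) can be sketched with the stdlib alone; this is a simplified analogue, not Jedi's API:

```python
import importlib

def import_by_names(names):
    # Import each prefix of the dotted path in turn, like the loop above;
    # the last prefix is the module the caller asked for.
    module = None
    for i in range(len(names)):
        module = importlib.import_module('.'.join(names[:i + 1]))
    return module
```

`import_by_names(('os', 'path'))` returns the same module object as `os.path`, since every intermediate prefix is resolved before the final one.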
@plugin_manager.decorate()
@import_module_decorator
def import_module(inference_state, import_names, parent_module_value, sys_path):
@@ -434,7 +413,7 @@ def import_module(inference_state, import_names, parent_module_value, sys_path):
from jedi.inference.value.namespace import ImplicitNamespaceValue
module = ImplicitNamespaceValue(
inference_state,
-fullname=file_io_or_ns.name,
+string_names=tuple(file_io_or_ns.name.split('.')),
paths=file_io_or_ns.paths,
)
elif file_io_or_ns is None:
@@ -443,7 +422,7 @@ def import_module(inference_state, import_names, parent_module_value, sys_path):
return NO_VALUES
else:
module = _load_python_module(
-inference_state, file_io_or_ns, sys_path,
+inference_state, file_io_or_ns,
import_names=import_names,
is_package=is_pkg,
)
@@ -455,13 +434,8 @@ def import_module(inference_state, import_names, parent_module_value, sys_path):
return ValueSet([module])
-def _load_python_module(inference_state, file_io, sys_path=None,
+def _load_python_module(inference_state, file_io,
import_names=None, is_package=False):
-try:
-return inference_state.module_cache.get_from_path(file_io.path)
-except KeyError:
-pass
module_node = inference_state.parse(
file_io=file_io,
cache=True,
@@ -493,13 +467,12 @@ def _load_builtin_module(inference_state, import_names=None, sys_path=None):
return module
-def _load_module_from_path(inference_state, file_io, base_names):
+def load_module_from_path(inference_state, file_io, base_names=None):
"""
This should pretty much only be used for get_modules_containing_name. It's
here to ensure that a random path is still properly loaded into the Jedi
module structure.
"""
-e_sys_path = inference_state.get_sys_path()
path = file_io.path
if base_names:
module_name = os.path.basename(path)
@@ -510,11 +483,11 @@ def _load_module_from_path(inference_state, file_io, base_names):
else:
import_names = base_names + (module_name,)
else:
+e_sys_path = inference_state.get_sys_path()
import_names, is_package = sys_path.transform_path_to_dotted(e_sys_path, path)
module = _load_python_module(
inference_state, file_io,
-sys_path=e_sys_path,
import_names=import_names,
is_package=is_package,
)
@@ -522,63 +495,6 @@ def _load_module_from_path(inference_state, file_io, base_names):
return module
def get_module_contexts_containing_name(inference_state, module_contexts, name):
"""
Search a name in the directories of modules.
"""
def check_directory(folder_io):
for file_name in folder_io.list():
if file_name.endswith('.py'):
yield folder_io.get_file_io(file_name)
def check_fs(file_io, base_names):
try:
code = file_io.read()
except FileNotFoundError:
return None
code = python_bytes_to_unicode(code, errors='replace')
if name not in code:
return None
new_file_io = KnownContentFileIO(file_io.path, code)
m = _load_module_from_path(inference_state, new_file_io, base_names)
if isinstance(m, compiled.CompiledObject):
return None
return m.as_context()
# skip non python modules
used_mod_paths = set()
folders_with_names_to_be_checked = []
for module_context in module_contexts:
path = module_context.py__file__()
if path not in used_mod_paths:
file_io = module_context.get_value().file_io
if file_io is not None:
used_mod_paths.add(path)
folders_with_names_to_be_checked.append((
file_io.get_parent_folder(),
module_context.py__package__()
))
yield module_context
if not settings.dynamic_params_for_other_modules:
return
def get_file_ios_to_check():
for folder_io, base_names in folders_with_names_to_be_checked:
for file_io in check_directory(folder_io):
yield file_io, base_names
for p in settings.additional_dynamic_modules:
p = os.path.abspath(p)
if p not in used_mod_paths:
yield FileIO(p), None
for file_io, base_names in get_file_ios_to_check():
m = check_fs(file_io, base_names)
if m is not None:
yield m
def follow_error_node_imports_if_possible(context, name):
error_node = tree.search_ancestor(name, 'error_node')
if error_node is not None:
@@ -606,4 +522,3 @@ def follow_error_node_imports_if_possible(context, name):
return Importer(
context.inference_state, names, context.get_root_context(), level).follow()
return None

View File

@@ -3,11 +3,24 @@ from abc import abstractmethod
from parso.tree import search_ancestor
from jedi._compatibility import Parameter
from jedi.parser_utils import find_statement_documentation, clean_scope_docstring
from jedi.inference.utils import unite
from jedi.inference.base_value import ValueSet, NO_VALUES
from jedi.inference import docstrings
from jedi.cache import memoize_method
from jedi.inference.helpers import deep_ast_copy, infer_call_of_leaf
from jedi.plugins import plugin_manager
def _merge_name_docs(names):
doc = ''
for name in names:
if doc:
# In case we have multiple values, just return all of them
# separated by a few dashes.
doc += '\n' + '-' * 30 + '\n'
doc += name.py__doc__()
return doc
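The joining rule of `_merge_name_docs` is easy to pin down with plain strings in place of name objects (a hypothetical mirror for illustration):

```python
def merge_docs(docs):
    # Concatenate docstrings, separating multiple entries with a dashed
    # line, exactly as _merge_name_docs does for name.py__doc__() values.
    merged = ''
    for doc in docs:
        if merged:
            merged += '\n' + '-' * 30 + '\n'
        merged += doc
    return merged

print(merge_docs(['first doc', 'second doc']))
```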
class AbstractNameDefinition(object):
@@ -59,10 +72,20 @@ class AbstractNameDefinition(object):
def is_import(self):
return False
def py__doc__(self):
return ''
@property
def api_type(self):
return self.parent_context.api_type
def get_defining_qualified_value(self):
"""
Returns either None or the value that is public and qualified. Won't
return a function, because a name in a function is never public.
"""
return None
class AbstractArbitraryName(AbstractNameDefinition):
"""
@@ -93,7 +116,7 @@ class AbstractTreeName(AbstractNameDefinition):
# In case of level == 1, it works always, because it's like a submodule
# lookup.
if import_node is not None and not (import_node.level == 1
-and self.get_root_context().is_package):
+and self.get_root_context().get_value().is_package()):
# TODO improve the situation for when level is present.
if include_module_names and not import_node.level:
return tuple(n.value for n in import_node.get_path_for_name(self.tree_name))
@@ -108,6 +131,13 @@ class AbstractTreeName(AbstractNameDefinition):
return None
return parent_names + (self.tree_name.value,)
def get_defining_qualified_value(self):
if self.is_import():
raise NotImplementedError("Shouldn't really happen, please report")
elif self.parent_context:
return self.parent_context.get_value() # Might be None
return None
def goto(self):
context = self.parent_context
name = self.tree_name
@@ -165,7 +195,7 @@ class AbstractTreeName(AbstractNameDefinition):
new_dotted.children[index - 1:] = []
values = context.infer_node(new_dotted)
return unite(
-value.goto(name, name_context=value.as_context())
+value.goto(name, name_context=context)
for value in values
)
@@ -197,6 +227,9 @@ class ValueNameMixin(object):
def infer(self):
return ValueSet([self._value])
def py__doc__(self):
return self._value.py__doc__()
def _get_qualified_names(self):
return self._value.get_qualified_names()
@@ -205,6 +238,12 @@ class ValueNameMixin(object):
return self._value.as_context()
return super(ValueNameMixin, self).get_root_context()
def get_defining_qualified_value(self):
context = self.parent_context
if context.is_module() or context.is_class():
return self.parent_context.get_value() # Might be None
return None
@property
def api_type(self):
return self._value.api_type
@@ -285,6 +324,21 @@ class TreeNameDefinition(AbstractTreeName):
node = node.parent
return indexes
def py__doc__(self):
api_type = self.api_type
if api_type in ('function', 'class'):
# Make sure the names are not TreeNameDefinitions anymore.
return clean_scope_docstring(self.tree_name.get_definition())
if api_type == 'module':
names = self.goto()
if self not in names:
return _merge_name_docs(names)
if api_type == 'statement' and self.tree_name.is_definition():
return find_statement_documentation(self.tree_name.get_definition())
return ''
class _ParamMixin(object):
def maybe_positional_argument(self, include_star=True):
@@ -307,6 +361,9 @@ class _ParamMixin(object):
return '**'
return ''
def get_qualified_names(self, include_module_names=False):
return None
class ParamNameInterface(_ParamMixin):
api_type = u'param'
@@ -434,9 +491,11 @@ class _ActualTreeParamName(BaseTreeParamName):
class AnonymousParamName(_ActualTreeParamName):
def __init__(self, function_value, tree_name):
super(AnonymousParamName, self).__init__(function_value, tree_name)
@plugin_manager.decorate(name='goto_anonymous_param')
def goto(self):
return super(AnonymousParamName, self).goto()
@plugin_manager.decorate(name='infer_anonymous_param')
def infer(self):
values = super(AnonymousParamName, self).infer()
if values:
@@ -516,7 +575,7 @@ class ImportName(AbstractNameDefinition):
return m
# It's almost always possible to find the import or to not find it. The
# importing returns only one value, pretty much always.
-return next(iter(import_values))
+return next(iter(import_values)).as_context()
@memoize_method
def infer(self):
@@ -531,6 +590,9 @@ class ImportName(AbstractNameDefinition):
def api_type(self):
return 'module'
def py__doc__(self):
return _merge_name_docs(self.goto())
class SubModuleName(ImportName):
_level = 1
@@ -540,12 +602,53 @@ class NameWrapper(object):
def __init__(self, wrapped_name):
self._wrapped_name = wrapped_name
@abstractmethod
def infer(self):
raise NotImplementedError
def __getattr__(self, name):
return getattr(self._wrapped_name, name)
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, self._wrapped_name)
class StubNameMixin(object):
def py__doc__(self):
from jedi.inference.gradual.conversion import convert_names
# Stubs are not complicated and we can just follow simple statements
# that have an equals in them, because they typically make something
# else public. See e.g. stubs for `requests`.
names = [self]
if self.api_type == 'statement' and '=' in self.tree_name.get_definition().children:
names = [v.name for v in self.infer()]
names = convert_names(names, prefer_stub_to_compiled=False)
if self in names:
return super(StubNameMixin, self).py__doc__()
else:
# We have signatures ourselves in stubs, so don't use signatures
# from the implementation.
return _merge_name_docs(names)
# From here on down we make looking up the sys.version_info fast.
class StubName(StubNameMixin, TreeNameDefinition):
def infer(self):
inferred = super(StubName, self).infer()
if self.string_name == 'version_info' and self.get_root_context().py__name__() == 'sys':
from jedi.inference.gradual.stub_value import VersionInfo
return [VersionInfo(c) for c in inferred]
return inferred
class ModuleName(ValueNameMixin, AbstractNameDefinition):
start_pos = 1, 0
def __init__(self, value, name):
self._value = value
self._name = name
@property
def string_name(self):
return self._name
class StubModuleName(StubNameMixin, ModuleName):
pass

View File

@@ -177,8 +177,8 @@ def get_executed_param_names_and_issues(function_value, arguments):
for k in set(param_dict) - set(keys_used):
param = param_dict[k]
-if not (non_matching_keys or had_multiple_value_error or
-param.star_count or param.default):
+if not (non_matching_keys or had_multiple_value_error
+or param.star_count or param.default):
# add a warning only if there's not another one.
for contextualized_node in arguments.get_calling_nodes():
m = _error_argument_count(funcdef, len(unpacked_va))

View File

@@ -0,0 +1,273 @@
import os
import re
from parso import python_bytes_to_unicode
from jedi.file_io import KnownContentFileIO
from jedi.inference.imports import SubModuleName, load_module_from_path
from jedi.inference.compiled import CompiledObject
from jedi.inference.filters import ParserTreeFilter
from jedi.inference.gradual.conversion import convert_names
_IGNORE_FOLDERS = ('.tox', 'venv', '__pycache__')
_OPENED_FILE_LIMIT = 2000
"""
Stats from a 2016 Lenovo Notebook running Linux:
With os.walk, it takes about 10s to scan 11'000 files (without filesystem
caching). Once cached it only takes 5s. So it is expected that reading all
those files might take a few seconds, but not a lot more.
"""
_PARSED_FILE_LIMIT = 30
"""
For now we keep the amount of parsed files really low, since parsing might take
easily 100ms for bigger files.
"""
def _resolve_names(definition_names, avoid_names=()):
for name in definition_names:
if name in avoid_names:
# Avoiding recursions here, because goto on a module name lands
# on the same module.
continue
if not isinstance(name, SubModuleName):
# SubModuleNames are not names that actually exist; they are created
# when importing something like `import foo.bar.baz`.
yield name
if name.api_type == 'module':
for name in _resolve_names(name.goto(), definition_names):
yield name
def _dictionarize(names):
return dict(
(n if n.tree_name is None else n.tree_name, n)
for n in names
)
def _find_defining_names(module_context, tree_name):
found_names = _find_names(module_context, tree_name)
for name in list(found_names):
# Convert from/to stubs, because those might also be usages.
found_names |= set(convert_names(
[name],
only_stubs=not name.get_root_context().is_stub(),
prefer_stub_to_compiled=False
))
found_names |= set(_find_global_variables(found_names, tree_name.value))
for name in list(found_names):
if name.api_type == 'param' or name.tree_name is None \
or name.tree_name.parent.type == 'trailer':
continue
found_names |= set(_add_names_in_same_context(name.parent_context, name.string_name))
return set(_resolve_names(found_names))
def _find_names(module_context, tree_name):
name = module_context.create_name(tree_name)
found_names = set(name.goto())
found_names.add(name)
return set(_resolve_names(found_names))
def _add_names_in_same_context(context, string_name):
if context.tree_node is None:
return
until_position = None
while True:
filter_ = ParserTreeFilter(
parent_context=context,
until_position=until_position,
)
names = set(filter_.get(string_name))
if not names:
break
for name in names:
yield name
ordered = sorted(names, key=lambda x: x.start_pos)
until_position = ordered[0].start_pos
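The `until_position` loop above repeatedly asks the scope filter for the most recent reachable definitions and then moves the search position before the earliest one found. A toy model with plain line numbers standing in for jedi's name objects (an illustration, not jedi's real filter API) behaves like this:

```python
def iter_definitions(positions, until_position=None):
    # Each round finds the latest definition strictly before `until_position`
    # (mimicking a scope filter that only sees the most recent binding),
    # yields it, then continues searching before that position.
    while True:
        earlier = [p for p in positions
                   if until_position is None or p < until_position]
        if not earlier:
            break
        latest = max(earlier)
        yield latest
        until_position = latest

print(list(iter_definitions([3, 10, 25])))  # [25, 10, 3]
```

This walks all rebindings of a name from newest to oldest, which is why shadowed definitions in the same context are still collected.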
def _find_global_variables(names, search_name):
for name in names:
if name.tree_name is None:
continue
module_context = name.get_root_context()
try:
method = module_context.get_global_filter
except AttributeError:
continue
else:
for global_name in method().get(search_name):
yield global_name
c = module_context.create_context(global_name.tree_name)
for name in _add_names_in_same_context(c, global_name.string_name):
yield name
def find_references(module_context, tree_name):
inf = module_context.inference_state
search_name = tree_name.value
# We disable flow analysis, because if we have ifs that are only true in
# certain cases, we want both sides.
try:
inf.flow_analysis_enabled = False
found_names = _find_defining_names(module_context, tree_name)
finally:
inf.flow_analysis_enabled = True
found_names_dct = _dictionarize(found_names)
module_contexts = set(d.get_root_context() for d in found_names)
module_contexts = [module_context] + [m for m in module_contexts if m != module_context]
# For params, no search in other modules is necessary.
if any(n.api_type == 'param' for n in found_names):
potential_modules = module_contexts
else:
potential_modules = get_module_contexts_containing_name(
inf,
module_contexts,
search_name,
)
non_matching_reference_maps = {}
for module_context in potential_modules:
for name_leaf in module_context.tree_node.get_used_names().get(search_name, []):
new = _dictionarize(_find_names(module_context, name_leaf))
if any(tree_name in found_names_dct for tree_name in new):
found_names_dct.update(new)
for tree_name in new:
for dct in non_matching_reference_maps.get(tree_name, []):
# A reference that was previously searched for matches
# with a now found name. Merge.
found_names_dct.update(dct)
try:
del non_matching_reference_maps[tree_name]
except KeyError:
pass
else:
for name in new:
non_matching_reference_maps.setdefault(name, []).append(new)
return found_names_dct.values()
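The `non_matching_reference_maps` bookkeeping above merges groups of co-referring names as soon as one of them turns out to be a known reference. A toy version with plain strings for names and dicts for the name maps (an illustration of the merge logic, not jedi's actual types) looks like this:

```python
def collect_matching(seed, groups):
    # `seed` is a dict of known references; each group in `groups` is a
    # dict of co-referring names. A group joins the result as soon as one
    # of its names is already known; unmatched groups are parked and
    # merged later if a bridging name appears.
    found = dict(seed)
    pending = {}
    for new in groups:
        if any(name in found for name in new):
            found.update(new)
            for name in new:
                for dct in pending.pop(name, []):
                    found.update(dct)
        else:
            for name in new:
                pending.setdefault(name, []).append(new)
    return found

result = collect_matching({'a': 1}, [{'x': 1, 'y': 1}, {'a': 1, 'x': 1}])
print(sorted(result))  # ['a', 'x', 'y']
```

The first group has no overlap with the seed and is parked; the second group matches `'a'`, and via the shared `'x'` it pulls the parked group in as well.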
def _check_fs(inference_state, file_io, regex):
try:
code = file_io.read()
except FileNotFoundError:
return None
code = python_bytes_to_unicode(code, errors='replace')
if not regex.search(code):
return None
new_file_io = KnownContentFileIO(file_io.path, code)
m = load_module_from_path(inference_state, new_file_io)
if isinstance(m, CompiledObject):
return None
return m.as_context()
def gitignored_lines(folder_io, file_io):
ignored_paths = set()
ignored_names = set()
for l in file_io.read().splitlines():
if not l or l.startswith(b'#'):
continue
p = l.decode('utf-8', 'ignore')
if p.startswith('/'):
name = p[1:]
if name.endswith(os.path.sep):
name = name[:-1]
ignored_paths.add(os.path.join(folder_io.path, name))
else:
ignored_names.add(p)
return ignored_paths, ignored_names
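The `.gitignore` handling above is deliberately simplified: lines starting with `/` become paths anchored to the folder, everything else a bare name; glob patterns and negation (`!`) are not interpreted. A standalone sketch of the same split (the `/repo` path and sample lines are hypothetical):

```python
import os

def split_gitignore_lines(folder_path, lines):
    # Simplified .gitignore handling mirroring the helper above: anchored
    # entries become absolute paths, the rest are matched by name only.
    ignored_paths = set()
    ignored_names = set()
    for line in lines:
        if not line or line.startswith(b'#'):
            continue
        p = line.decode('utf-8', 'ignore')
        if p.startswith('/'):
            ignored_paths.add(
                os.path.join(folder_path, p[1:].rstrip(os.path.sep)))
        else:
            ignored_names.add(p)
    return ignored_paths, ignored_names

paths, names = split_gitignore_lines(
    '/repo', [b'# comment', b'/build/', b'*.pyc', b'venv'])
print(names)  # {'*.pyc', 'venv'}
```

Note that `*.pyc` ends up as a literal name here; real gitignore semantics would expand it as a glob.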
def _recurse_find_python_files(folder_io, except_paths):
for root_folder_io, folder_ios, file_ios in folder_io.walk():
# Delete folders that we don't want to iterate over.
for file_io in file_ios:
path = file_io.path
if path.endswith('.py') or path.endswith('.pyi'):
if path not in except_paths:
yield file_io
if path.endswith('.gitignore'):
ignored_paths, ignored_names = \
gitignored_lines(root_folder_io, file_io)
except_paths |= ignored_paths
folder_ios[:] = [
folder_io
for folder_io in folder_ios
if folder_io.path not in except_paths
and folder_io.get_base_name() not in _IGNORE_FOLDERS
]
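The slice assignment `folder_ios[:] = [...]` above is the standard `os.walk` pruning idiom: mutating the directory list in place is what stops the walk from descending into ignored folders. A minimal standalone sketch (with a hypothetical layout built in a temp dir):

```python
import os
import tempfile

def walk_python_files(root, ignore_folders=('.tox', 'venv', '__pycache__')):
    # Assigning to dirs[:] in place prunes the walk, so ignored folders
    # are never descended into at all.
    for dirpath, dirs, files in os.walk(root):
        dirs[:] = [d for d in dirs if d not in ignore_folders]
        for name in files:
            if name.endswith(('.py', '.pyi')):
                yield os.path.join(dirpath, name)

# Hypothetical layout: one real module and one inside a venv.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'venv'))
open(os.path.join(root, 'app.py'), 'w').close()
open(os.path.join(root, 'venv', 'ignored.py'), 'w').close()

found = sorted(os.path.basename(p) for p in walk_python_files(root))
print(found)  # ['app.py']
```

Rebinding the name instead (`dirs = [...]`) would have no effect on the walk, which is why the slice assignment matters.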
def _find_python_files_in_sys_path(inference_state, module_contexts):
sys_path = inference_state.get_sys_path()
except_paths = set()
yielded_paths = [m.py__file__() for m in module_contexts]
for module_context in module_contexts:
file_io = module_context.get_value().file_io
if file_io is None:
continue
folder_io = file_io.get_parent_folder()
while True:
path = folder_io.path
if not any(path.startswith(p) for p in sys_path) or path in except_paths:
break
for file_io in _recurse_find_python_files(folder_io, except_paths):
if file_io.path not in yielded_paths:
yield file_io
except_paths.add(path)
folder_io = folder_io.get_parent_folder()
def get_module_contexts_containing_name(inference_state, module_contexts, name,
limit_reduction=1):
"""
Search a name in the directories of modules.
:param limit_reduction: Divides the limits on opening/parsing files by this
factor.
"""
# Skip non-Python modules
for module_context in module_contexts:
if module_context.is_compiled():
continue
yield module_context
# Very short names are not searched in other modules for now to avoid lots
# of file lookups.
if len(name) <= 2:
return
parse_limit = _PARSED_FILE_LIMIT / limit_reduction
open_limit = _OPENED_FILE_LIMIT / limit_reduction
file_io_count = 0
parsed_file_count = 0
regex = re.compile(r'\b' + re.escape(name) + r'\b')
for file_io in _find_python_files_in_sys_path(inference_state, module_contexts):
file_io_count += 1
m = _check_fs(inference_state, file_io, regex)
if m is not None:
parsed_file_count += 1
yield m
if parsed_file_count >= parse_limit:
break
if file_io_count >= open_limit:
break
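The regex built above, `r'\b' + re.escape(name) + r'\b'`, is a cheap text prefilter: a file is only parsed if it mentions the name as a whole word. `re.escape` keeps any metacharacters in the name literal, and the `\b` word boundaries reject substring hits. A standalone sketch of the same check:

```python
import re

def contains_word(code, name):
    # Cheap prefilter used before parsing: match `name` only as a whole
    # word, so searching for "foo" skips files that merely mention
    # "foobar" or "foo_bar".
    regex = re.compile(r'\b' + re.escape(name) + r'\b')
    return regex.search(code) is not None

print(contains_word('foo.bar = 1', 'foo'))  # True
print(contains_word('foobar = 1', 'foo'))   # False
print(contains_word('foo_bar = 1', 'foo'))  # False
```

Underscores count as word characters in `\b`, so identifiers that merely contain the name do not pass the filter, while attribute access and call sites do.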


@@ -56,6 +56,9 @@ class AbstractSignature(_SignatureMixin):
def bind(self, value):
raise NotImplementedError
def matches_signature(self, arguments):
return True
def __repr__(self):
if self.value is self._function_value:
return '<%s: %s>' % (self.__class__.__name__, self.value)
@@ -104,7 +107,7 @@ class TreeSignature(AbstractSignature):
for executed_param_name in executed_param_names)
if debug.enable_notice:
tree_node = self._function_value.tree_node
signature = parser_utils.get_call_signature(tree_node)
signature = parser_utils.get_signature(tree_node)
if matches:
debug.dbg("Overloading match: %s@%s (%s)",
signature, tree_node.start_pos[0], arguments, color='BLUE')
@@ -128,7 +131,6 @@ class BuiltinSignature(AbstractSignature):
return self.value
def bind(self, value):
assert not self.is_bound
return BuiltinSignature(value, self._return_string, is_bound=True)


@@ -14,6 +14,7 @@ The signature here for bar should be `bar(b, c)` instead of bar(*args).
from jedi._compatibility import Parameter
from jedi.inference.utils import to_list
from jedi.inference.names import ParamNameWrapper
from jedi.inference.helpers import is_big_annoying_library
def _iter_nodes_for_param(param_name):
@@ -93,6 +94,14 @@ def _remove_given_params(arguments, param_names):
@to_list
def process_params(param_names, star_count=3): # default means both * and **
if param_names:
if is_big_annoying_library(param_names[0].parent_context):
# At first this feature can look innocent, but it does a lot of
# type inference in some cases, so we just ditch it.
for p in param_names:
yield p
return
used_names = set()
arg_callables = []
kwarg_callables = []
@@ -127,6 +136,7 @@ def process_params(param_names, star_count=3): # default means both * and **
used_names.add(p.string_name)
yield p
# First process *args
longest_param_names = ()
found_arg_signature = False
found_kwarg_signature = False
@@ -172,12 +182,7 @@ def process_params(param_names, star_count=3): # default means both * and **
elif arg_names:
yield arg_names[0]
for p in kw_only_names:
if p.string_name in used_names:
continue
yield p
used_names.add(p.string_name)
# Then process **kwargs
for func, arguments in kwarg_callables:
for signature in func.get_signatures():
found_kwarg_signature = True
@@ -186,8 +191,16 @@ def process_params(param_names, star_count=3): # default means both * and **
arguments,
signature.get_param_names(resolve_stars=False)
)), star_count=2):
if p.get_kind() != Parameter.KEYWORD_ONLY or not kwarg_names:
yield p
if p.get_kind() == Parameter.VAR_KEYWORD:
kwarg_names.append(p)
elif p.get_kind() == Parameter.KEYWORD_ONLY:
kw_only_names.append(p)
for p in kw_only_names:
if p.string_name in used_names:
continue
yield p
used_names.add(p.string_name)
if not found_kwarg_signature and original_kwarg_name is not None:
yield original_kwarg_name


@@ -20,7 +20,8 @@ from jedi.inference.value import ClassValue, FunctionValue
from jedi.inference.value import iterable
from jedi.inference.value.dynamic_arrays import ListModification, DictModification
from jedi.inference.value import TreeInstance
from jedi.inference.helpers import is_string, is_literal, is_number, get_names_of_node
from jedi.inference.helpers import is_string, is_literal, is_number, \
get_names_of_node, is_big_annoying_library
from jedi.inference.compiled.access import COMPARISON_OPERATORS
from jedi.inference.cache import inference_state_method_cache
from jedi.inference.gradual.stub_value import VersionInfo
@@ -45,7 +46,16 @@ def _limit_value_infers(func):
inference_state = context.inference_state
try:
inference_state.inferred_element_counts[n] += 1
if inference_state.inferred_element_counts[n] > 300:
maximum = 300
if context.parent_context is None \
and context.get_value() is inference_state.builtins_module:
# Builtins should have a more generous inference limit.
# It is important that builtins can be executed, otherwise some
# functions that depend on certain builtins features would be
# broken, see e.g. GH #1432
maximum *= 100
if inference_state.inferred_element_counts[n] > maximum:
debug.warning('In value %s there were too many inferences.', n)
return NO_VALUES
except KeyError:
@@ -55,18 +65,6 @@ def _limit_value_infers(func):
return wrapper
def _py__stop_iteration_returns(generators):
results = NO_VALUES
for generator in generators:
try:
method = generator.py__stop_iteration_returns
except AttributeError:
debug.warning('%s is not actually a generator', generator)
else:
results |= method()
return results
def infer_node(context, element):
if isinstance(context, CompForContext):
return _infer_node(context, element)
@@ -99,7 +97,7 @@ def infer_node(context, element):
str_element_names = [e.value for e in element_names]
if any(i.value in str_element_names for i in if_names):
for if_name in if_names:
definitions = context.inference_state.goto_definitions(context, if_name)
definitions = context.inference_state.infer(context, if_name)
# Every name that has multiple different definitions
# causes the complexity to rise. The complexity should
# never fall below 1.
@@ -203,8 +201,8 @@ def _infer_node(context, element):
return value_set
elif typ == 'test':
# `x if foo else y` case.
return (context.infer_node(element.children[0]) |
context.infer_node(element.children[-1]))
return (context.infer_node(element.children[0])
| context.infer_node(element.children[-1]))
elif typ == 'operator':
# Must be an ellipsis, other operators are not inferred.
# In Python 2 ellipsis is coded as three single dot tokens, not
@@ -319,8 +317,8 @@ def infer_atom(context, atom):
c = atom.children
# Parentheses without commas are not tuples.
if c[0] == '(' and not len(c) == 2 \
and not(c[1].type == 'testlist_comp' and
len(c[1].children) > 1):
and not(c[1].type == 'testlist_comp'
and len(c[1].children) > 1):
return context.infer_node(c[1])
try:
@@ -346,8 +344,8 @@ def infer_atom(context, atom):
array_node_c = array_node.children
except AttributeError:
array_node_c = []
if c[0] == '{' and (array_node == '}' or ':' in array_node_c or
'**' in array_node_c):
if c[0] == '{' and (array_node == '}' or ':' in array_node_c
or '**' in array_node_c):
new_value = iterable.DictLiteralValue(state, context, atom)
else:
new_value = iterable.SequenceLiteralValue(state, context, atom)
@@ -406,7 +404,7 @@ def _infer_expr_stmt(context, stmt, seek_name=None):
if is_setitem:
def to_mod(v):
c = ContextualizedNode(context, subscriptlist)
c = ContextualizedSubscriptListNode(context, subscriptlist)
if v.array_type == 'dict':
return DictModification(v, value_set, c)
elif v.array_type == 'list':
@@ -578,12 +576,12 @@ def _infer_comparison_part(inference_state, context, left, operator, right):
return ValueSet([right])
elif str_operator == '+':
if l_is_num and r_is_num or is_string(left) and is_string(right):
return ValueSet([left.execute_operation(right, str_operator)])
return left.execute_operation(right, str_operator)
elif _is_tuple(left) and _is_tuple(right) or _is_list(left) and _is_list(right):
return ValueSet([iterable.MergedArray(inference_state, (left, right))])
elif str_operator == '-':
if l_is_num and r_is_num:
return ValueSet([left.execute_operation(right, str_operator)])
return left.execute_operation(right, str_operator)
elif str_operator == '%':
# With strings and numbers the left type typically remains. Except for
# `int() % float()`.
@@ -591,11 +589,9 @@ def _infer_comparison_part(inference_state, context, left, operator, right):
elif str_operator in COMPARISON_OPERATORS:
if left.is_compiled() and right.is_compiled():
# Possible, because the return is not an option. Just compare.
try:
return ValueSet([left.execute_operation(right, str_operator)])
except TypeError:
# Could be True or False.
pass
result = left.execute_operation(right, str_operator)
if result:
return result
else:
if str_operator in ('is', '!=', '==', 'is not'):
operation = COMPARISON_OPERATORS[str_operator]
@@ -677,6 +673,11 @@ def tree_name_to_values(inference_state, context, tree_name):
node = tree_name.parent
if node.type == 'global_stmt':
c = context.create_context(tree_name)
if c.is_module():
# In case we are already part of the module, there is no point
# in looking up the global statement anymore, because it's not
# valid at that point anyway.
return NO_VALUES
# For global_stmt lookups, we only need the first possible scope,
# which means the function itself.
filter = next(c.get_filters())
@@ -748,6 +749,10 @@ def _apply_decorators(context, node):
else:
decoratee_value = FunctionValue.from_context(context, node)
initial = values = ValueSet([decoratee_value])
if is_big_annoying_library(context):
return values
for dec in reversed(node.get_decorators()):
debug.dbg('decorator: %s %s', dec, values, color="MAGENTA")
with debug.increase_indent_cm():
@@ -803,6 +808,11 @@ def check_tuple_assignments(name, value_set):
return value_set
class ContextualizedSubscriptListNode(ContextualizedNode):
def infer(self):
return _infer_subscript_list(self.context, self.node)
def _infer_subscript_list(context, index):
"""
Handles slices in subscript nodes.


@@ -156,7 +156,8 @@ def _get_paths_from_buildout_script(inference_state, buildout_script_path):
from jedi.inference.value import ModuleValue
module_context = ModuleValue(
inference_state, module_node, file_io,
inference_state, module_node,
file_io=file_io,
string_names=None,
code_lines=get_cached_code_lines(inference_state.grammar, buildout_script_path),
).as_context()


@@ -1,63 +0,0 @@
from jedi.inference import imports
def _resolve_names(definition_names, avoid_names=()):
for name in definition_names:
if name in avoid_names:
# Avoiding recursions here, because goto on a module name lands
# on the same module.
continue
if not isinstance(name, imports.SubModuleName):
# SubModuleNames are not actually existing names but created
# names when importing something like `import foo.bar.baz`.
yield name
if name.api_type == 'module':
for name in _resolve_names(name.goto(), definition_names):
yield name
def _dictionarize(names):
return dict(
(n if n.tree_name is None else n.tree_name, n)
for n in names
)
def _find_names(module_context, tree_name):
name = module_context.create_name(tree_name)
found_names = set(name.goto())
found_names.add(name)
return _dictionarize(_resolve_names(found_names))
def usages(module_context, tree_name):
search_name = tree_name.value
found_names = _find_names(module_context, tree_name)
module_contexts = set(d.get_root_context() for d in found_names.values())
module_contexts = set(m for m in module_contexts if not m.is_compiled())
non_matching_usage_maps = {}
inf = module_context.inference_state
potential_modules = imports.get_module_contexts_containing_name(
inf, module_contexts, search_name
)
for module_context in potential_modules:
for name_leaf in module_context.tree_node.get_used_names().get(search_name, []):
new = _find_names(module_context, name_leaf)
if any(tree_name in found_names for tree_name in new):
found_names.update(new)
for tree_name in new:
for dct in non_matching_usage_maps.get(tree_name, []):
# A usage that was previously searched for matches with
# a now found name. Merge.
found_names.update(dct)
try:
del non_matching_usage_maps[tree_name]
except KeyError:
pass
else:
for name in new:
non_matching_usage_maps.setdefault(name, []).append(new)
return found_names.values()


@@ -21,6 +21,12 @@ def to_list(func):
return wrapper
def to_tuple(func):
def wrapper(*args, **kwargs):
return tuple(func(*args, **kwargs))
return wrapper
def unite(iterable):
"""Turns a two dimensional array into a one dimensional."""
return set(typ for types in iterable for typ in types)


@@ -10,7 +10,7 @@ current module will be checked for appearances of ``arr.append``,
content will be added
This can be really cpu intensive, as you can imagine. Because |jedi| has to
follow **every** ``append`` and check wheter it's the right array. However this
follow **every** ``append`` and check whether it's the right array. However this
works pretty good, because in *slow* cases, the recursion detector and other
settings will stop this process.
@@ -119,7 +119,7 @@ def _internal_check_array_additions(context, sequence):
# reset settings
settings.dynamic_params_for_other_modules = temp_param_add
debug.dbg('Dynamic array result %s' % added_types, color='MAGENTA')
debug.dbg('Dynamic array result %s', added_types, color='MAGENTA')
return added_types
@@ -188,14 +188,17 @@ class _Modification(ValueWrapper):
class DictModification(_Modification):
def py__iter__(self):
for lazy_context in self._wrapped_value.py__iter__():
def py__iter__(self, contextualized_node=None):
for lazy_context in self._wrapped_value.py__iter__(contextualized_node):
yield lazy_context
yield self._contextualized_key
def get_key_values(self):
return self._wrapped_value.get_key_values() | self._contextualized_key.infer()
class ListModification(_Modification):
def py__iter__(self):
for lazy_context in self._wrapped_value.py__iter__():
def py__iter__(self, contextualized_node=None):
for lazy_context in self._wrapped_value.py__iter__(contextualized_node):
yield lazy_context
yield LazyKnownValues(self._assigned_values)


@@ -11,7 +11,7 @@ from jedi.inference.signature import TreeSignature
from jedi.inference.filters import ParserTreeFilter, FunctionExecutionFilter, \
AnonymousFunctionExecutionFilter
from jedi.inference.names import ValueName, AbstractNameDefinition, \
AnonymousParamName, ParamName
AnonymousParamName, ParamName, NameWrapper
from jedi.inference.base_value import ContextualizedNode, NO_VALUES, \
ValueSet, TreeValue, ValueWrapper
from jedi.inference.lazy_value import LazyKnownValues, LazyKnownValue, \
@@ -21,6 +21,7 @@ from jedi.inference.value import iterable
from jedi import parser_utils
from jedi.inference.parser_cache import get_yield_exprs
from jedi.inference.helpers import values_from_qualified_names
from jedi.inference.gradual.generics import TupleGenericManager
class LambdaName(AbstractNameDefinition):
@@ -67,7 +68,7 @@ class FunctionMixin(object):
if instance is None:
# Calling the Foo.bar results in the original bar function.
return ValueSet([self])
return ValueSet([BoundMethod(instance, self)])
return ValueSet([BoundMethod(instance, class_value.as_context(), self)])
def get_param_names(self):
return [AnonymousParamName(self, param.name)
@@ -96,9 +97,6 @@ class FunctionMixin(object):
class FunctionValue(use_metaclass(CachedMetaClass, FunctionMixin, FunctionAndClassBase)):
def is_function(self):
return True
@classmethod
def from_context(cls, context, tree_node):
def create(tree_node):
@@ -143,6 +141,15 @@ class FunctionValue(use_metaclass(CachedMetaClass, FunctionMixin, FunctionAndCla
return [self]
class FunctionNameInClass(NameWrapper):
def __init__(self, class_context, name):
super(FunctionNameInClass, self).__init__(name)
self._class_context = class_context
def get_defining_qualified_value(self):
return self._class_context.get_value() # Might be None.
class MethodValue(FunctionValue):
def __init__(self, inference_state, class_context, *args, **kwargs):
super(MethodValue, self).__init__(inference_state, *args, **kwargs)
@@ -159,8 +166,15 @@ class MethodValue(FunctionValue):
return None
return names + (self.py__name__(),)
@property
def name(self):
return FunctionNameInClass(self.class_context, super(MethodValue, self).name)
class BaseFunctionExecutionContext(ValueContext, TreeContextMixin):
def is_function_execution(self):
return True
def _infer_annotations(self):
raise NotImplementedError
@@ -276,17 +290,19 @@ class BaseFunctionExecutionContext(ValueContext, TreeContextMixin):
for lazy_value in self.get_yield_lazy_values()
)
def is_generator(self):
return bool(get_yield_exprs(self.inference_state, self.tree_node))
def infer(self):
"""
Created to be used by inheritance.
"""
inference_state = self.inference_state
is_coroutine = self.tree_node.parent.type in ('async_stmt', 'async_funcdef')
is_generator = bool(get_yield_exprs(inference_state, self.tree_node))
from jedi.inference.gradual.typing import GenericClass
from jedi.inference.gradual.base import GenericClass
if is_coroutine:
if is_generator:
if self.is_generator():
if inference_state.environment.version_info < (3, 6):
return NO_VALUES
async_generator_classes = inference_state.typing_module \
@@ -297,7 +313,7 @@ class BaseFunctionExecutionContext(ValueContext, TreeContextMixin):
generics = (yield_values.py__class__(), NO_VALUES)
return ValueSet(
# In Python 3.6 AsyncGenerator is still a class.
GenericClass(c, generics)
GenericClass(c, TupleGenericManager(generics))
for c in async_generator_classes
).execute_annotation()
else:
@@ -308,10 +324,10 @@ class BaseFunctionExecutionContext(ValueContext, TreeContextMixin):
# Only the first generic is relevant.
generics = (return_values.py__class__(), NO_VALUES, NO_VALUES)
return ValueSet(
GenericClass(c, generics) for c in async_classes
GenericClass(c, TupleGenericManager(generics)) for c in async_classes
).execute_annotation()
else:
if is_generator:
if self.is_generator():
return ValueSet([iterable.Generator(inference_state, self)])
else:
return self.get_return_values()


@@ -6,9 +6,10 @@ from jedi import debug
from jedi import settings
from jedi.inference import compiled
from jedi.inference.compiled.value import CompiledObjectFilter
from jedi.inference.helpers import values_from_qualified_names
from jedi.inference.helpers import values_from_qualified_names, is_big_annoying_library
from jedi.inference.filters import AbstractFilter, AnonymousFunctionExecutionFilter
from jedi.inference.names import ValueName, TreeNameDefinition, ParamName
from jedi.inference.names import ValueName, TreeNameDefinition, ParamName, \
NameWrapper
from jedi.inference.base_value import Value, NO_VALUES, ValueSet, \
iterator_to_value_set, ValueWrapper
from jedi.inference.lazy_value import LazyKnownValue, LazyKnownValues
@@ -16,9 +17,10 @@ from jedi.inference.cache import inference_state_method_cache
from jedi.inference.arguments import ValuesArguments, TreeArgumentsWrapper
from jedi.inference.value.function import \
FunctionValue, FunctionMixin, OverloadedFunctionValue, \
BaseFunctionExecutionContext, FunctionExecutionContext
from jedi.inference.value.klass import apply_py__get__, ClassFilter
BaseFunctionExecutionContext, FunctionExecutionContext, FunctionNameInClass
from jedi.inference.value.klass import ClassFilter
from jedi.inference.value.dynamic_arrays import get_dynamic_array_instance
from jedi.parser_utils import function_is_staticmethod, function_is_classmethod
class InstanceExecutedParamName(ParamName):
@@ -41,7 +43,18 @@ class AnonymousMethodExecutionFilter(AnonymousFunctionExecutionFilter):
def _convert_param(self, param, name):
if param.position_index == 0:
return InstanceExecutedParamName(self._instance, self._function_value, name)
if function_is_classmethod(self._function_value.tree_node):
return InstanceExecutedParamName(
self._instance.py__class__(),
self._function_value,
name
)
elif not function_is_staticmethod(self._function_value.tree_node):
return InstanceExecutedParamName(
self._instance,
self._function_value,
name
)
return super(AnonymousMethodExecutionFilter, self)._convert_param(param, name)
@@ -62,7 +75,7 @@ class AnonymousMethodExecutionContext(BaseFunctionExecutionContext):
# set the self name
param_names[0] = InstanceExecutedParamName(
self.instance,
self._function_value,
self._value,
param_names[0].tree_name
)
return param_names
@@ -107,11 +120,23 @@ class AbstractInstanceValue(Value):
call_funcs = self.py__getattribute__('__call__').py__get__(self, self.class_value)
return [s.bind(self) for s in call_funcs.get_signatures()]
def get_function_slot_names(self, name):
# Searches for Python functions in classes.
return []
def execute_function_slots(self, names, *inferred_args):
return ValueSet.from_sets(
name.infer().execute_with_values(*inferred_args)
for name in names
)
def __repr__(self):
return "<%s of %s>" % (self.__class__.__name__, self.class_value)
class CompiledInstance(AbstractInstanceValue):
# This is not really a compiled class, it's just an instance from a
# compiled class.
def __init__(self, inference_state, parent_context, class_value, arguments):
super(CompiledInstance, self).__init__(inference_state, parent_context,
class_value)
@@ -130,9 +155,6 @@ class CompiledInstance(AbstractInstanceValue):
def name(self):
return compiled.CompiledValueName(self, self.class_value.name.string_name)
def is_compiled(self):
return True
def is_stub(self):
return False
@@ -189,7 +211,7 @@ class _BaseTreeInstance(AbstractInstanceValue):
new = search_ancestor(new, 'funcdef', 'classdef')
if class_context.tree_node is new:
func = FunctionValue.from_context(class_context, func_node)
bound_method = BoundMethod(self, func)
bound_method = BoundMethod(self, class_context, func)
if func_node.name.value == '__init__':
context = bound_method.as_context(self._arguments)
else:
@@ -215,8 +237,10 @@ class _BaseTreeInstance(AbstractInstanceValue):
# We are inversing this, because a hand-crafted `__getattribute__`
# could still call another hand-crafted `__getattr__`, but not the
# other way around.
names = (self.get_function_slot_names(u'__getattr__') or
self.get_function_slot_names(u'__getattribute__'))
if is_big_annoying_library(self.parent_context):
return NO_VALUES
names = (self.get_function_slot_names(u'__getattr__')
or self.get_function_slot_names(u'__getattribute__'))
return self.execute_function_slots(names, name)
def py__getitem__(self, index_value_set, contextualized_node):
@@ -263,7 +287,7 @@ class _BaseTreeInstance(AbstractInstanceValue):
return ValueSet.from_sets(name.infer().execute(arguments) for name in names)
def py__get__(self, obj, class_value):
def py__get__(self, instance, class_value):
"""
obj may be None.
"""
@@ -271,9 +295,9 @@ class _BaseTreeInstance(AbstractInstanceValue):
# `method` is the new parent of the array, don't know if that's good.
names = self.get_function_slot_names(u'__get__')
if names:
if obj is None:
obj = compiled.builtin_from_name(self.inference_state, u'None')
return self.execute_function_slots(names, obj, class_value)
if instance is None:
instance = compiled.builtin_from_name(self.inference_state, u'None')
return self.execute_function_slots(names, instance, class_value)
else:
return ValueSet([self])
@@ -287,12 +311,6 @@ class _BaseTreeInstance(AbstractInstanceValue):
return names
return []
def execute_function_slots(self, names, *inferred_args):
return ValueSet.from_sets(
name.infer().execute_with_values(*inferred_args)
for name in names
)
class TreeInstance(_BaseTreeInstance):
def __init__(self, inference_state, parent_context, class_value, arguments):
@@ -320,11 +338,12 @@ class TreeInstance(_BaseTreeInstance):
for signature in self.class_value.py__getattribute__('__init__').get_signatures():
# Just take the first result, it should always be one, because we
# control the typeshed code.
if not signature.matches_signature(args):
if not signature.matches_signature(args) \
or signature.value.tree_node is None:
# First check if the signature even matches, if not we don't
# need to infer anything.
continue
bound_method = BoundMethod(self, signature.value)
bound_method = BoundMethod(self, self.class_value.as_context(), signature.value)
all_annotations = py__annotations__(signature.value.tree_node)
type_var_dict = infer_type_vars_for_execution(bound_method, args, all_annotations)
if type_var_dict:
@@ -338,6 +357,24 @@ class TreeInstance(_BaseTreeInstance):
def get_annotated_class_object(self):
return self._get_annotated_class_object() or self.class_value
def get_key_values(self):
values = NO_VALUES
if self.array_type == 'dict':
for i, (key, instance) in enumerate(self._arguments.unpack()):
if key is None and i == 0:
values |= ValueSet.from_sets(
v.get_key_values()
for v in instance.infer()
if v.array_type == 'dict'
)
if key:
values |= ValueSet([compiled.create_simple_object(
self.inference_state,
key,
)])
return values
def py__simple_getitem__(self, index):
if self.array_type == 'dict':
# Logic for dict({'foo': bar}) and dict(foo=bar)
@@ -372,9 +409,11 @@ class AnonymousInstance(_BaseTreeInstance):
class CompiledInstanceName(compiled.CompiledName):
def __init__(self, inference_state, instance, klass, name):
parent_value = klass.parent_context.get_value()
assert parent_value is not None, "How? Please reproduce and report"
super(CompiledInstanceName, self).__init__(
inference_state,
klass.parent_context,
parent_value,
name.string_name
)
self._instance = instance
@@ -409,13 +448,21 @@ class CompiledInstanceClassFilter(AbstractFilter):
class BoundMethod(FunctionMixin, ValueWrapper):
def __init__(self, instance, function):
def __init__(self, instance, class_context, function):
super(BoundMethod, self).__init__(function)
self.instance = instance
self._class_context = class_context
def is_bound_method(self):
return True
@property
def name(self):
return FunctionNameInClass(
self._class_context,
super(BoundMethod, self).name
)
def py__class__(self):
c, = values_from_qualified_names(self.inference_state, u'types', u'MethodType')
return c
@@ -440,7 +487,7 @@ class BoundMethod(FunctionMixin, ValueWrapper):
def get_signature_functions(self):
return [
BoundMethod(self.instance, f)
BoundMethod(self.instance, self._class_context, f)
for f in self._wrapped_value.get_signature_functions()
]
@@ -472,23 +519,26 @@ class SelfName(TreeNameDefinition):
def parent_context(self):
return self._instance.create_instance_context(self.class_context, self.tree_name)
def get_defining_qualified_value(self):
return self._instance
class LazyInstanceClassName(object):
class LazyInstanceClassName(NameWrapper):
def __init__(self, instance, class_member_name):
super(LazyInstanceClassName, self).__init__(class_member_name)
self._instance = instance
self._class_member_name = class_member_name
@iterator_to_value_set
def infer(self):
for result_value in self._class_member_name.infer():
for c in apply_py__get__(result_value, self._instance, self._instance.py__class__()):
for result_value in self._wrapped_name.infer():
for c in result_value.py__get__(self._instance, self._instance.py__class__()):
yield c
def __getattr__(self, name):
return getattr(self._class_member_name, name)
def get_signatures(self):
return self.infer().get_signatures()
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self._class_member_name)
def get_defining_qualified_value(self):
return self._instance
class InstanceClassFilter(AbstractFilter):
@@ -514,7 +564,7 @@ class InstanceClassFilter(AbstractFilter):
]
def __repr__(self):
return '<%s for %s>' % (self.__class__.__name__, self._class_filter.context)
return '<%s for %s>' % (self.__class__.__name__, self._class_filter)
class SelfAttributeFilter(ClassFilter):
@@ -544,17 +594,18 @@ class SelfAttributeFilter(ClassFilter):
if name.is_definition() and self._access_possible(name, from_instance=True):
# TODO filter non-self assignments instead of this bad
# filter.
if self._is_in_right_scope(name):
if self._is_in_right_scope(trailer.parent.children[0], name):
yield name
def _is_in_right_scope(self, name):
base = name
hit_funcdef = False
while True:
base = search_ancestor(base, 'funcdef', 'classdef', 'lambdef')
if base is self._parser_scope:
return hit_funcdef
hit_funcdef = True
def _is_in_right_scope(self, self_name, name):
self_context = self._node_context.create_context(self_name)
names = self_context.goto(self_name, position=self_name.start_pos)
return any(
n.api_type == 'param'
and n.tree_name.get_definition().position_index == 0
and n.parent_context.tree_node is self._parser_scope
for n in names
)
def _convert_names(self, names):
return [SelfName(self._instance, self._node_context, name) for name in names]

View File

@@ -194,17 +194,18 @@ class Sequence(LazyAttributeOverwrite, IterableMixin):
return (self.merge_types_of_iterate().py__class__(),)
def _get_wrapped_value(self):
from jedi.inference.gradual.typing import GenericClass
from jedi.inference.gradual.base import GenericClass
from jedi.inference.gradual.generics import TupleGenericManager
klass = compiled.builtin_from_name(self.inference_state, self.array_type)
c, = GenericClass(klass, self._get_generics()).execute_annotation()
c, = GenericClass(
klass,
TupleGenericManager(self._get_generics())
).execute_annotation()
return c
def py__bool__(self):
return None # We don't know the length, because of appends.
def py__class__(self):
return compiled.builtin_from_name(self.inference_state, self.array_type)
@safe_property
def parent(self):
return self.inference_state.builtins_module
@@ -245,7 +246,17 @@ class GeneratorComprehension(_BaseComprehension, GeneratorBase):
pass
class DictComprehension(ComprehensionMixin, Sequence):
class _DictKeyMixin(object):
# TODO merge with _DictMixin?
def get_mapping_item_values(self):
return self._dict_keys(), self._dict_values()
def get_key_values(self):
# TODO merge with _dict_keys?
return self._dict_keys()
class DictComprehension(ComprehensionMixin, Sequence, _DictKeyMixin):
array_type = u'dict'
def __init__(self, inference_state, defining_context, sync_comp_for_node, key_node, value_node):
@@ -295,9 +306,6 @@ class DictComprehension(ComprehensionMixin, Sequence):
return ValueSet([FakeList(self.inference_state, lazy_values)])
def get_mapping_item_values(self):
return self._dict_keys(), self._dict_values()
def exact_key_items(self):
# NOTE: Something smarter can probably be done here to achieve better
# completions, but at least like this jedi doesn't crash
@@ -408,7 +416,7 @@ class SequenceLiteralValue(Sequence):
return "<%s of %s>" % (self.__class__.__name__, self.atom)
class DictLiteralValue(_DictMixin, SequenceLiteralValue):
class DictLiteralValue(_DictMixin, SequenceLiteralValue, _DictKeyMixin):
array_type = u'dict'
def __init__(self, inference_state, defining_context, atom):
@@ -421,12 +429,8 @@ class DictLiteralValue(_DictMixin, SequenceLiteralValue):
compiled_obj_index = compiled.create_simple_object(self.inference_state, index)
for key, value in self.get_tree_entries():
for k in self._defining_context.infer_node(key):
try:
method = k.execute_operation
except AttributeError:
pass
else:
if method(compiled_obj_index, u'==').get_safe_value():
for key_v in k.execute_operation(compiled_obj_index, u'=='):
if key_v.get_safe_value():
return self._defining_context.infer_node(value)
raise SimpleGetItemNotFound('No key found in dictionary %s.' % self)
@@ -473,9 +477,6 @@ class DictLiteralValue(_DictMixin, SequenceLiteralValue):
for k, v in self.get_tree_entries()
)
def get_mapping_item_values(self):
return self._dict_keys(), self._dict_values()
class _FakeSequence(Sequence):
def __init__(self, inference_state, lazy_value_list):
@@ -511,7 +512,7 @@ class FakeList(_FakeSequence):
array_type = u'tuple'
class FakeDict(_DictMixin, Sequence):
class FakeDict(_DictMixin, Sequence, _DictKeyMixin):
array_type = u'dict'
def __init__(self, inference_state, dct):
@@ -555,9 +556,6 @@ class FakeDict(_DictMixin, Sequence):
def _dict_keys(self):
return ValueSet.from_sets(lazy_value.infer() for lazy_value in self.py__iter__())
def get_mapping_item_values(self):
return self._dict_keys(), self._dict_values()
def exact_key_items(self):
return self._dct.items()

View File

@@ -25,7 +25,7 @@ py__iter__() Returns a generator of a set of types.
py__class__() Returns the class of an instance.
py__simple_getitem__(index: int/str) Returns a set of types of the index.
Can raise an IndexError/KeyError.
py__getitem__(indexes: ValueSet) Returns a set of types of the index.
py__getitem__(indexes: ValueSet) Returns a set of types of the index.
py__file__() Only on modules. Returns None if does
not exist.
py__package__() -> List[str] Only on modules. For the import system.
@@ -50,23 +50,13 @@ from jedi.inference.base_value import ValueSet, iterator_to_value_set, \
NO_VALUES
from jedi.inference.context import ClassContext
from jedi.inference.value.function import FunctionAndClassBase
from jedi.inference.gradual.generics import LazyGenericManager, TupleGenericManager
from jedi.plugins import plugin_manager
def apply_py__get__(value, instance, class_value):
try:
method = value.py__get__
except AttributeError:
yield value
else:
for descriptor_value in method(instance, class_value):
yield descriptor_value
class ClassName(TreeNameDefinition):
def __init__(self, class_value, tree_name, name_context, apply_decorators):
super(ClassName, self).__init__(class_value.as_context(), tree_name)
self._name_context = name_context
super(ClassName, self).__init__(name_context, tree_name)
self._apply_decorators = apply_decorators
self._class_value = class_value
@@ -75,13 +65,11 @@ class ClassName(TreeNameDefinition):
# We're using a different value to infer, so we cannot call super().
from jedi.inference.syntax_tree import tree_name_to_values
inferred = tree_name_to_values(
self.parent_context.inference_state, self._name_context, self.tree_name)
self.parent_context.inference_state, self.parent_context, self.tree_name)
for result_value in inferred:
if self._apply_decorators:
for c in apply_py__get__(result_value,
instance=None,
class_value=self._class_value):
for c in result_value.py__get__(instance=None, class_value=self._class_value):
yield c
else:
yield result_value
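`py__get__` models Python's descriptor protocol: when a class attribute defines `__get__`, attribute access is routed through it. A minimal runtime example of the behavior being inferred (the `Ten`/`A` names are invented for illustration):

```python
class Ten:
    # A minimal descriptor: Jedi's py__get__ models this __get__ call.
    def __get__(self, instance, owner=None):
        return 10

class A:
    x = Ten()

# Both class access (instance=None) and instance access go through
# Ten.__get__, mirroring the two cases handled above.
print(A.x)    # 10
print(A().x)  # 10
```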
@@ -145,8 +133,6 @@ class ClassMixin(object):
def py__call__(self, arguments=None):
from jedi.inference.value import TreeInstance
if arguments is None:
arguments = ValuesArguments([])
return ValueSet([TreeInstance(self.inference_state, self.parent_context, self, arguments)])
def py__class__(self):
@@ -219,7 +205,11 @@ class ClassMixin(object):
type_ = builtin_from_name(self.inference_state, u'type')
assert isinstance(type_, ClassValue)
if type_ != self:
for instance in type_.py__call__():
# We are not using execute_with_values here, because the
# plugin function for type would get executed instead of an
# instance creation.
args = ValuesArguments([])
for instance in type_.py__call__(args):
instance_filters = instance.get_filters()
# Filter out self filters
next(instance_filters)
@@ -227,7 +217,11 @@ class ClassMixin(object):
yield next(instance_filters)
def get_signatures(self):
init_funcs = self.py__call__().py__getattribute__('__init__')
# Since calling staticmethod without a function is illegal, the Jedi
# plugin doesn't return anything. Therefore call directly and get what
# we want: An instance of staticmethod.
args = ValuesArguments([])
init_funcs = self.py__call__(args).py__getattribute__('__init__')
return [sig.bind(self) for sig in init_funcs.get_signatures()]
def _as_context(self):
@@ -278,20 +272,29 @@ class ClassValue(use_metaclass(CachedMetaClass, ClassMixin, FunctionAndClassBase
)]
def py__getitem__(self, index_value_set, contextualized_node):
from jedi.inference.gradual.typing import LazyGenericClass
from jedi.inference.gradual.base import GenericClass
if not index_value_set:
return ValueSet([self])
return ValueSet(
LazyGenericClass(
GenericClass(
self,
index_value,
value_of_index=contextualized_node.context,
LazyGenericManager(
context_of_index=contextualized_node.context,
index_value=index_value,
)
)
for index_value in index_value_set
)
def with_generics(self, generics_tuple):
from jedi.inference.gradual.base import GenericClass
return GenericClass(
self,
TupleGenericManager(generics_tuple)
)
def define_generics(self, type_var_dict):
from jedi.inference.gradual.typing import GenericClass
from jedi.inference.gradual.base import GenericClass
def remap_type_vars():
"""
@@ -309,7 +312,7 @@ class ClassValue(use_metaclass(CachedMetaClass, ClassMixin, FunctionAndClassBase
if type_var_dict:
return ValueSet([GenericClass(
self,
generics=tuple(remap_type_vars())
TupleGenericManager(tuple(remap_type_vars()))
)])
return ValueSet({self})

View File

@@ -3,7 +3,7 @@ import os
from jedi import debug
from jedi.inference.cache import inference_state_method_cache
from jedi.inference.names import ValueNameMixin, AbstractNameDefinition
from jedi.inference.names import AbstractNameDefinition, ModuleName
from jedi.inference.filters import GlobalNameFilter, ParserTreeFilter, DictFilter, MergedFilter
from jedi.inference import compiled
from jedi.inference.base_value import TreeValue
@@ -37,18 +37,6 @@ class _ModuleAttributeName(AbstractNameDefinition):
return compiled.get_string_value_set(self.parent_context.inference_state)
class ModuleName(ValueNameMixin, AbstractNameDefinition):
start_pos = 1, 0
def __init__(self, value, name):
self._value = value
self._name = name
@property
def string_name(self):
return self._name
def iter_module_names(inference_state, paths):
# Python modules/packages
for n in inference_state.compiled_subprocess.list_module_names(paths):
@@ -83,11 +71,11 @@ class SubModuleDictMixin(object):
package).
"""
names = {}
if self.is_package:
if self.is_package():
mods = iter_module_names(self.inference_state, self.py__path__())
for name in mods:
# It's obviously a relative import to the current module.
names[name] = SubModuleName(self, name)
names[name] = SubModuleName(self.as_context(), name)
# In the case of an import like `from x.` we don't need to
# add all the variables; this is only about submodules.
@@ -95,13 +83,15 @@ class SubModuleDictMixin(object):
class ModuleMixin(SubModuleDictMixin):
_module_name_class = ModuleName
def get_filters(self, origin_scope=None):
yield MergedFilter(
ParserTreeFilter(
parent_context=self.as_context(),
origin_scope=origin_scope
),
GlobalNameFilter(self, self.tree_node),
GlobalNameFilter(self.as_context(), self.tree_node),
)
yield DictFilter(self.sub_modules_dict())
yield DictFilter(self._module_attributes_dict())
@@ -121,7 +111,7 @@ class ModuleMixin(SubModuleDictMixin):
@property
@inference_state_method_cache()
def name(self):
return ModuleName(self, self._string_name)
return self._module_name_class(self, self._string_name)
@property
def _string_name(self):
@@ -186,8 +176,8 @@ class ModuleMixin(SubModuleDictMixin):
class ModuleValue(ModuleMixin, TreeValue):
api_type = u'module'
def __init__(self, inference_state, module_node, file_io, string_names,
code_lines, is_package=False):
def __init__(self, inference_state, module_node, code_lines, file_io=None,
string_names=None, is_package=False):
super(ModuleValue, self).__init__(
inference_state,
parent_context=None,
@@ -200,7 +190,7 @@ class ModuleValue(ModuleMixin, TreeValue):
self._path = file_io.path
self.string_names = string_names # Optional[Tuple[str, ...]]
self.code_lines = code_lines
self.is_package = is_package
self._is_package = is_package
def is_stub(self):
if self._path is not None and self._path.endswith('.pyi'):
@@ -224,8 +214,11 @@ class ModuleValue(ModuleMixin, TreeValue):
return os.path.abspath(self._path)
def is_package(self):
return self._is_package
def py__package__(self):
if self.is_package:
if self._is_package:
return self.string_names
return self.string_names[:-1]
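The `py__package__` logic above follows the import system's convention: a package's `__package__` is its own dotted name, while a plain module's is its parent package. A hypothetical standalone helper showing the same rule:

```python
def py_package(string_names, is_package):
    # For a package, __package__ is the module name itself;
    # for a plain module, it is the name minus the last component.
    return string_names if is_package else string_names[:-1]

print(py_package(('os', 'path'), False))  # ('os',)
print(py_package(('json',), True))        # ('json',)
```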
@@ -235,7 +228,7 @@ class ModuleValue(ModuleMixin, TreeValue):
is a list of paths (strings).
Returns None if the module is not a package.
"""
if not self.is_package:
if not self._is_package:
return None
# A namespace package is typically auto generated and ~10 lines long.

View File

@@ -26,10 +26,10 @@ class ImplicitNamespaceValue(Value, SubModuleDictMixin):
api_type = u'module'
parent_context = None
def __init__(self, inference_state, fullname, paths):
def __init__(self, inference_state, string_names, paths):
super(ImplicitNamespaceValue, self).__init__(inference_state, parent_context=None)
self.inference_state = inference_state
self._fullname = fullname
self.string_names = string_names
self._paths = paths
def get_filters(self, origin_scope=None):
@@ -47,13 +47,13 @@ class ImplicitNamespaceValue(Value, SubModuleDictMixin):
def py__package__(self):
"""Return the fullname
"""
return self._fullname.split('.')
return self.string_names
def py__path__(self):
return self._paths
def py__name__(self):
return self._fullname
return '.'.join(self.string_names)
def is_namespace(self):
return True
@@ -61,7 +61,6 @@ class ImplicitNamespaceValue(Value, SubModuleDictMixin):
def is_stub(self):
return False
@property
def is_package(self):
return True
@@ -69,4 +68,4 @@ class ImplicitNamespaceValue(Value, SubModuleDictMixin):
return NamespaceContext(self)
def __repr__(self):
return '<%s: %s>' % (self.__class__.__name__, self._fullname)
return '<%s: %s>' % (self.__class__.__name__, self.py__name__())

View File

@@ -109,6 +109,21 @@ def clean_scope_docstring(scope_node):
return ''
def find_statement_documentation(tree_node):
if tree_node.type == 'expr_stmt':
tree_node = tree_node.parent # simple_stmt
maybe_string = tree_node.get_next_sibling()
if maybe_string is not None:
if maybe_string.type == 'simple_stmt':
maybe_string = maybe_string.children[0]
if maybe_string.type == 'string':
cleaned = cleandoc(safe_literal_eval(maybe_string.value))
# Since we want the docstr output to be always unicode, just
# force it.
return force_unicode(cleaned)
return ''
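`find_statement_documentation` picks up "attribute docstrings": a bare string literal directly following an assignment statement. The same shape can be checked with the stdlib `ast` module (a sketch only; Jedi itself works on parso trees, not `ast`):

```python
import ast

source = "x = 1\n'the docstring for x'\n"
assign, follower = ast.parse(source).body

# The statement after the assignment is a bare string expression --
# the pattern find_statement_documentation extracts.
is_doc = (isinstance(follower, ast.Expr)
          and isinstance(follower.value, ast.Constant)
          and isinstance(follower.value.value, str))
print(is_doc)  # True
```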
def safe_literal_eval(value):
first_two = value[:2].lower()
if first_two[0] == 'f' or first_two in ('fr', 'rf'):
@@ -127,10 +142,10 @@ def safe_literal_eval(value):
return ''
def get_call_signature(funcdef, width=72, call_string=None,
omit_first_param=False, omit_return_annotation=False):
def get_signature(funcdef, width=72, call_string=None,
omit_first_param=False, omit_return_annotation=False):
"""
Generate call signature of this function.
Generate a string signature of a function.
:param width: Fold lines if a line is longer than this value.
:type width: int
@@ -278,5 +293,21 @@ def cut_value_at_position(leaf, position):
return ''.join(lines)
def get_string_quote(leaf):
return re.match(r'\w*("""|\'{3}|"|\')', leaf.value).group(1)
def _function_is_x_method(method_name):
def wrapper(function_node):
"""
This is a heuristic. It will not hold all the time, but it will be
correct for pretty much anyone who doesn't try to beat it.
staticmethod/classmethod are builtins and unless overwritten, this will
be correct.
"""
for decorator in function_node.get_decorators():
dotted_name = decorator.children[1]
if dotted_name.get_code() == method_name:
return True
return False
return wrapper
function_is_staticmethod = _function_is_x_method('staticmethod')
function_is_classmethod = _function_is_x_method('classmethod')
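The decorator-name heuristic above can be reproduced with the stdlib `ast` module for comparison (a sketch; the real helpers operate on parso's tree via `get_decorators()`, and `ast.unparse` needs Python 3.9+):

```python
import ast

def function_is_x_method(method_name, source):
    # Compare each decorator's source text against the method name,
    # just like the parso-based heuristic above.
    funcdef = ast.parse(source).body[0]
    return any(ast.unparse(d) == method_name for d in funcdef.decorator_list)

src = "@staticmethod\ndef f():\n    pass\n"
print(function_is_x_method('staticmethod', src))  # True
print(function_is_x_method('classmethod', src))   # False
```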

View File

@@ -14,18 +14,18 @@ class _PluginManager(object):
self._registered_plugins.extend(plugins)
self._build_functions()
def decorate(self):
def decorate(self, name=None):
def decorator(callback):
@wraps(callback)
def wrapper(*args, **kwargs):
return built_functions[name](*args, **kwargs)
return built_functions[public_name](*args, **kwargs)
name = callback.__name__
public_name = name or callback.__name__
assert name not in self._built_functions
assert public_name not in self._built_functions
built_functions = self._built_functions
built_functions[name] = callback
self._cached_base_callbacks[name] = callback
built_functions[public_name] = callback
self._cached_base_callbacks[public_name] = callback
return wrapper

164
jedi/plugins/pytest.py Normal file
View File

@@ -0,0 +1,164 @@
from parso.python.tree import search_ancestor
from jedi._compatibility import FileNotFoundError
from jedi.inference.cache import inference_state_method_cache
from jedi.inference.imports import load_module_from_path
from jedi.inference.filters import ParserTreeFilter
from jedi.inference.base_value import NO_VALUES, ValueSet
_PYTEST_FIXTURE_MODULES = [
('_pytest', 'monkeypatch'),
('_pytest', 'capture'),
('_pytest', 'logging'),
('_pytest', 'tmpdir'),
('_pytest', 'pytester'),
]
def execute(callback):
def wrapper(value, arguments):
# This might not be necessary anymore in pytest 4/5, definitely needed
# for pytest 3.
if value.py__name__() == 'fixture' \
and value.parent_context.py__name__() == '_pytest.fixtures':
return NO_VALUES
return callback(value, arguments)
return wrapper
def infer_anonymous_param(func):
def get_returns(value):
if value.tree_node.annotation is not None:
return value.execute_with_values()
# In pytest we need to differentiate between generators and normal
# returns.
# Parameters still need to be anonymous, .as_context() ensures that.
function_context = value.as_context()
if function_context.is_generator():
return function_context.merge_yield_values()
else:
return function_context.get_return_values()
def wrapper(param_name):
is_pytest_param, param_name_is_function_name = \
_is_a_pytest_param_and_inherited(param_name)
if is_pytest_param:
module = param_name.get_root_context()
fixtures = _goto_pytest_fixture(
module,
param_name.string_name,
# This skips the current module, because we are basically
# inheriting a fixture from somewhere else.
skip_own_module=param_name_is_function_name,
)
if fixtures:
return ValueSet.from_sets(
get_returns(value)
for fixture in fixtures
for value in fixture.infer()
)
return func(param_name)
return wrapper
def goto_anonymous_param(func):
def wrapper(param_name):
is_pytest_param, param_name_is_function_name = \
_is_a_pytest_param_and_inherited(param_name)
if is_pytest_param:
names = _goto_pytest_fixture(
param_name.get_root_context(),
param_name.string_name,
skip_own_module=param_name_is_function_name,
)
if names:
return names
return func(param_name)
return wrapper
def complete_param_names(func):
def wrapper(context, func_name, decorator_nodes):
module_context = context.get_root_context()
if _is_pytest_func(func_name, decorator_nodes):
names = []
for module_context in _iter_pytest_modules(module_context):
names += FixtureFilter(module_context).values()
if names:
return names
return func(context, func_name, decorator_nodes)
return wrapper
def _goto_pytest_fixture(module_context, name, skip_own_module):
for module_context in _iter_pytest_modules(module_context, skip_own_module=skip_own_module):
names = FixtureFilter(module_context).get(name)
if names:
return names
def _is_a_pytest_param_and_inherited(param_name):
"""
Pytest params either appear in a `test_*` function or in a fixture
decorated with `@pytest.fixture`.
This is a heuristic and will work in most cases.
"""
funcdef = search_ancestor(param_name.tree_name, 'funcdef')
if funcdef is None: # A lambda
return False, False
decorators = funcdef.get_decorators()
return _is_pytest_func(funcdef.name.value, decorators), \
funcdef.name.value == param_name.string_name
def _is_pytest_func(func_name, decorator_nodes):
return func_name.startswith('test') \
or any('fixture' in n.get_code() for n in decorator_nodes)
@inference_state_method_cache()
def _iter_pytest_modules(module_context, skip_own_module=False):
if not skip_own_module:
yield module_context
file_io = module_context.get_value().file_io
if file_io is not None:
folder = file_io.get_parent_folder()
sys_path = module_context.inference_state.get_sys_path()
while any(folder.path.startswith(p) for p in sys_path):
file_io = folder.get_file_io('conftest.py')
if file_io.path != module_context.py__file__():
try:
m = load_module_from_path(module_context.inference_state, file_io)
yield m.as_context()
except FileNotFoundError:
pass
folder = folder.get_parent_folder()
for names in _PYTEST_FIXTURE_MODULES:
for module_value in module_context.inference_state.import_module(names):
yield module_value.as_context()
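The `while` loop above walks up the directory tree collecting `conftest.py` candidates, stopping once it leaves every `sys.path` root. A standalone sketch of that walk (`iter_conftest_paths` is a hypothetical helper; `posixpath` is used so the example is deterministic across platforms):

```python
import posixpath

def iter_conftest_paths(start_dir, sys_path):
    # Walk upwards while still inside one of the sys.path roots,
    # yielding each folder's conftest.py candidate.
    folder = start_dir
    while any(folder.startswith(p) for p in sys_path):
        yield posixpath.join(folder, 'conftest.py')
        parent = posixpath.dirname(folder)
        if parent == folder:
            break
        folder = parent

print(list(iter_conftest_paths('/repo/pkg/tests', ['/repo'])))
# ['/repo/pkg/tests/conftest.py', '/repo/pkg/conftest.py', '/repo/conftest.py']
```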
class FixtureFilter(ParserTreeFilter):
def _filter(self, names):
for name in super(FixtureFilter, self)._filter(names):
funcdef = name.parent
if funcdef.type == 'funcdef':
# Class fixtures are not supported
decorated = funcdef.parent
if decorated.type == 'decorated' and self._is_fixture(decorated):
yield name
def _is_fixture(self, decorated):
for decorator in decorated.children:
dotted_name = decorator.children[1]
# A heuristic, this makes it faster.
if 'fixture' in dotted_name.get_code():
for value in self.parent_context.infer_node(dotted_name):
if value.name.get_qualified_names(include_module_names=True) \
== ('_pytest', 'fixtures', 'fixture'):
return True
return False

View File

@@ -6,6 +6,7 @@ from jedi.plugins import stdlib
from jedi.plugins import flask
from jedi.plugins import django
from jedi.plugins import plugin_manager
from jedi.plugins import pytest
plugin_manager.register(stdlib, flask, django)
plugin_manager.register(stdlib, flask, pytest, django)

View File

@@ -84,7 +84,7 @@ class {typename}(tuple):
# These methods were added by Jedi.
# __new__ doesn't really work with Jedi. So adding this to namedtuples seems
# like the easiest way.
def __init__(_cls, {arg_list}):
def __init__(self, {arg_list}):
'A helper function for namedtuple.'
self.__iterable = ({arg_list})
@@ -335,7 +335,7 @@ def builtins_isinstance(objects, types, arguments, inference_state):
class StaticMethodObject(ValueWrapper):
def py__get__(self, instance, klass):
def py__get__(self, instance, class_value):
return ValueSet([self._wrapped_value])
@@ -363,7 +363,7 @@ class ClassMethodGet(ValueWrapper):
self._function = function
def get_signatures(self):
return self._function.get_signatures()
return [sig.bind(self._function) for sig in self._function.get_signatures()]
def py__call__(self, arguments):
return self._function.execute(ClassMethodArguments(self._class, arguments))
@@ -467,8 +467,6 @@ def collections_namedtuple(value, arguments, callback):
generated_class = next(module.iter_classdefs())
parent_context = ModuleValue(
inference_state, module,
file_io=None,
string_names=None,
code_lines=parso.split_lines(code, keepends=True),
).as_context()
@@ -819,10 +817,10 @@ class EnumInstance(LazyValueWrapper):
def tree_name_to_values(func):
def wrapper(inference_state, value, tree_name):
if tree_name.value == 'sep' and value.is_module() and value.py__name__() == 'os.path':
def wrapper(inference_state, context, tree_name):
if tree_name.value == 'sep' and context.is_module() and context.py__name__() == 'os.path':
return ValueSet({
compiled.create_simple_object(inference_state, os.path.sep),
})
return func(inference_state, value, tree_name)
return func(inference_state, context, tree_name)
return wrapper

View File

@@ -57,7 +57,7 @@ def rename(script, new_name):
:param script: The source Script object.
:return: list of changed lines/changed files
"""
return Refactoring(_rename(script.usages(), new_name))
return Refactoring(_rename(script.get_references(), new_name))
def _rename(names, replace_str):
@@ -146,8 +146,8 @@ def extract(script, new_name):
# add parentheses in multi-line case
open_brackets = ['(', '[', '{']
close_brackets = [')', ']', '}']
if '\n' in text and not (text[0] in open_brackets and text[-1] ==
close_brackets[open_brackets.index(text[0])]):
if '\n' in text and not (text[0] in open_brackets and text[-1]
== close_brackets[open_brackets.index(text[0])]):
text = '(%s)' % text
# add new line before statement
@@ -166,11 +166,11 @@ def inline(script):
dct = {}
definitions = script.goto_assignments()
definitions = script.goto()
assert len(definitions) == 1
stmt = definitions[0]._definition
usages = script.usages()
inlines = [r for r in usages
references = script.get_references()
inlines = [r for r in references
if not stmt.start_pos <= (r.line, r.column) <= stmt.end_pos]
inlines = sorted(inlines, key=lambda x: (x.module_path, x.line, x.column),
reverse=True)
@@ -183,7 +183,7 @@ def inline(script):
replace_str = line[expression_list[0].start_pos[1]:stmt.end_pos[1] + 1]
replace_str = replace_str.strip()
# tuples need parentheses
if expression_list and isinstance(expression_list[0], pr.Array):
if expression_list and expression_list[0].type == 'TODO':
arr = expression_list[0]
if replace_str[0] not in ['(', '[', '{'] and len(arr) > 1:
replace_str = '(%s)' % replace_str

View File

@@ -112,6 +112,14 @@ something has been changed e.g. to a function. If this happens, only the
function is being reparsed.
"""
_cropped_file_size = 10e6 # 10 Megabytes
"""
Jedi gets extremely slow if a file exceeds a few thousand lines.
To avoid getting stuck completely, Jedi crops the file at this point.
One megabyte of typical Python code equals about 20,000 lines of code.
"""
# ----------------
# dynamic stuff
# ----------------

30
jedi/third_party/README_typeshed.md vendored Normal file
View File

@@ -0,0 +1,30 @@
# Typeshed in Jedi
Typeshed is used in Jedi to provide completions for all the stdlib modules.
The relevant files in jedi are in `jedi/inference/gradual`. `gradual` stands
for "gradual typing".
## Updating Typeshed
Currently Jedi uses a custom typeshed fork hosted at
https://github.com/davidhalter/typeshed.git for two reasons:
- Jedi doesn't understand Tuple.__init__ properly.
- Typeshed has a bug: https://github.com/python/typeshed/issues/2999
Therefore we need a bit of a complicated process to upgrade typeshed:
cd jedi/third_party/typeshed
git remote add upstream https://github.com/python/typeshed
git fetch upstream
git checkout jedi
git rebase upstream/master
git push -f
git push
cd ../../..
git commit jedi/third_party/typeshed -m "Upgrade typeshed"
If merge conflicts appear, just make sure that only one commit from Jedi
appears.

View File

@@ -17,7 +17,7 @@ from jedi import Interpreter
READLINE_DEBUG = False
def setup_readline(namespace_module=__main__):
def setup_readline(namespace_module=__main__, fuzzy=False):
"""
Install Jedi completer to :mod:`readline`.
@@ -83,7 +83,7 @@ def setup_readline(namespace_module=__main__):
logging.debug("Start REPL completion: " + repr(text))
interpreter = Interpreter(text, [namespace_module.__dict__])
completions = interpreter.completions()
completions = interpreter.complete(fuzzy=fuzzy)
logging.debug("REPL completions: %s", completions)
self.matches = [

View File

@@ -2,9 +2,8 @@
addopts = --doctest-modules
# Ignore broken files in blackbox test directories
norecursedirs = .* docs completion refactor absolute_import namespace_package
scripts extensions speed static_analysis not_in_sys_path
sample_venvs init_extension_module simple_import jedi/third_party
norecursedirs = .* jedi/third_party scripts docs
test/completion test/refactor test/static_analysis test/examples
# Activate `clean_jedi_cache` fixture for all tests. This should be
# fine as long as we are using `clean_jedi_cache` as a session scoped

View File

@@ -1 +1 @@
parso>=0.5.0
parso>=0.5.2

View File

@@ -45,9 +45,9 @@ def run(code, index, infer=False):
start = time.time()
script = jedi.Script(code)
if infer:
result = script.goto_definitions()
result = script.infer()
else:
result = script.completions()
result = script.complete()
print('Used %ss for the %sth run.' % (time.time() - start, index + 1))
return result

View File

@@ -45,7 +45,7 @@ def run():
print('Process Memory before: %skB' % process_memory())
# After this the module should be cached.
# Need to invent a path so that it's really cached.
jedi.Script(wx_core, path='foobar.py').completions()
jedi.Script(wx_core, path='foobar.py').complete()
gc.collect() # make sure that it's all fair and the gc did its job.
print('Process Memory after: %skB' % process_memory())

View File

@@ -8,5 +8,9 @@ ignore =
E722,
# don't know why this was ever even an option, 1+1 should be possible.
E226,
# line break before binary operator
# Sometimes `type() is` makes sense and is better than isinstance. Code
# review is there to find the times when it doesn't make sense.
E721,
# Line break before binary operator
W503,
exclude = jedi/third_party/* .tox/*

View File

@@ -38,11 +38,14 @@ setup(name='jedi',
extras_require={
'testing': [
# Pytest 5 doesn't support Python 2 and Python 3.4 anymore.
'pytest>=3.1.0,<5.0.0',
'pytest>=3.9.0,<5.0.0',
# docopt for sith doctests
'docopt',
# coloroma for colored debug output
'colorama',
'colorama==0.4.1', # Pinned so it works for Python 3.4
],
'qa': [
'flake8==3.7.9',
],
},
package_data={'jedi': ['*.pyi', 'third_party/typeshed/LICENSE',

View File

@@ -95,9 +95,7 @@ class TestCase(object):
args = json.load(f)
return cls(*args)
operations = [
'completions', 'goto_assignments', 'goto_definitions', 'usages',
'call_signatures']
operations = ['complete', 'goto', 'infer', 'get_references', 'get_signatures']
@classmethod
def generate(cls, file_path):
@@ -123,12 +121,12 @@ class TestCase(object):
def run(self, debugger, record=None, print_result=False):
try:
with open(self.path) as f:
self.script = jedi.Script(f.read(), self.line, self.column, self.path)
self.script = jedi.Script(f.read(), path=self.path)
kwargs = {}
if self.operation == 'goto_assignments':
kwargs['follow_imports'] = random.choice([False, True])
self.objects = getattr(self.script, self.operation)(**kwargs)
self.objects = getattr(self.script, self.operation)(self.line, self.column, **kwargs)
if print_result:
print("{path}: Line {line} column {column}".format(**self.__dict__))
self.show_location(self.line, self.column)

View File

@@ -1,29 +0,0 @@
def test_keyword_doc(Script):
r = list(Script("or", 1, 1).goto_definitions())
assert len(r) == 1
assert len(r[0].doc) > 100
r = list(Script("asfdasfd", 1, 1).goto_definitions())
assert len(r) == 0
k = Script("fro").completions()[0]
imp_start = '\nThe ``import'
assert k.raw_doc.startswith(imp_start)
assert k.doc.startswith(imp_start)
def test_blablabla(Script):
defs = Script("import").goto_definitions()
assert len(defs) == 1 and [1 for d in defs if d.doc]
# unrelated to #44
def test_operator_doc(Script):
r = list(Script("a == b", 1, 3).goto_definitions())
assert len(r) == 1
assert len(r[0].doc) > 100
def test_lambda(Script):
defs = Script('lambda x: x', column=0).goto_definitions()
assert [d.type for d in defs] == ['keyword']

View File

@@ -45,6 +45,9 @@ b[int():]
#? list()
b[:]
#? 3
b[:]
#? int()
b[:, 1]
#? int()
@@ -297,6 +300,17 @@ some_dct['b']
#? int() float() str()
some_dct['c']
class Foo:
pass
objects = {object(): 1, Foo: '', Foo(): 3.0}
#? int() float() str()
objects[Foo]
#? int() float() str()
objects[Foo()]
#? int() float() str()
objects['']
# -----------------
# with variable as index
# -----------------

View File

@@ -188,14 +188,31 @@ def init_global_var_predefined():
    global_var_predefined


def global_as_import():
    from import_tree import globals
    #? ['foo']
    globals.foo
    #? int()
    globals.foo


global r
r = r[r]
if r:
    r += r + 2
#? int()
r
# -----------------
# within docstrs
# -----------------
def a():
    """
    #? ['global_define']
    #? []
    global_define

    #?
    str
    """
    pass
@@ -284,6 +301,56 @@ except MyException as e:
    for x in e.my_attr:
        pass
# -----------------
# params
# -----------------
my_param = 1
#? 9 str()
def foo1(my_param):
    my_param = 3.0
foo1("")

my_type = float()
#? 20 float()
def foo2(my_param: my_type):
    pass
foo2("")

#? 20 int()
def foo3(my_param=my_param):
    pass
foo3("")
some_default = ''
#? []
def foo(my_t
#? []
def foo(my_t, my_ty
#? ['some_default']
def foo(my_t=some_defa
#? ['some_default']
def foo(my_t=some_defa, my_t2=some_defa
# python > 2.7
#? ['my_type']
def foo(my_t: lala=some_defa, my_t2: my_typ
#? ['my_type']
def foo(my_t: lala=some_defa, my_t2: my_typ
#? []
def foo(my_t: lala=some_defa, my_t
#? []
lambda my_t
#? []
lambda my_, my_t
#? ['some_default']
lambda x=some_defa
#? ['some_default']
lambda y, x=some_defa
# Just make sure we're not in some weird parsing recovery after opening brackets
def
# -----------------
# continuations
@@ -326,3 +393,19 @@ with open('') as f1, open('') as f2:
f1.closed
#? ['closed']
f2.closed
class Foo():
    def __enter__(self):
        return ''

#? 14 str()
with Foo() as f3:
    #? str()
    f3

#! 14 ['with Foo() as f3: f3']
with Foo() as f3:
    f3

#? 6 Foo
with Foo() as f3:
    f3

View File

@@ -382,6 +382,7 @@ getattr(getattr, 1)
getattr(str, [])
# python >= 3.5
class Base():
    def ret(self, b):
        return b
@@ -399,6 +400,12 @@ class Wrapper2():
#? int()
Wrapper(Base()).ret(3)
#? ['ret']
Wrapper(Base()).ret
#? int()
Wrapper(Wrapper(Base())).ret(3)
#? ['ret']
Wrapper(Wrapper(Base())).ret
#? int()
Wrapper2(Base()).ret(3)
@@ -409,6 +416,8 @@ class GetattrArray():
#? int()
GetattrArray().something[0]
#? []
GetattrArray().something
# -----------------
@@ -607,3 +616,17 @@ DefaultArg().y()
DefaultArg.x()
#? str()
DefaultArg.y()
# -----------------
# Error Recovery
# -----------------
from import_tree.pkg.base import MyBase

class C1(MyBase):
    def f3(self):
        #! 13 ['def f1']
        self.f1() . # hey'''
        #? 13 MyBase.f1
        self.f1() . # hey'''

View File

@@ -20,7 +20,7 @@ tuple
class MyClass:
    @pass_decorator
    def x(foo,
          #? 5 ["tuple"]
          #? 5 []
          tuple,
          ):
        return 1

View File

@@ -0,0 +1,29 @@
# Exists only for completion/pytest.py
import pytest


@pytest.fixture()
def my_other_conftest_fixture():
    return 1.0


@pytest.fixture()
def my_conftest_fixture(my_other_conftest_fixture):
    return my_other_conftest_fixture


def my_not_existing_fixture():
    return 3  # Just a normal function


@pytest.fixture()
def inheritance_fixture():
    return ''


@pytest.fixture
def testdir(testdir):
    #? ['chdir']
    testdir.chdir
    return testdir

View File

@@ -330,3 +330,16 @@ import abc
#? ['abstractmethod']
@abc.abstractmethod
# -----------------
# Goto
# -----------------
x = 1
#! 5 []
@x.foo()
def f(): pass
#! 1 ['x = 1']
@x.foo()
def f(): pass

View File

@@ -1,68 +0,0 @@
"""
Fallback to callee definition when definition not found.
- https://github.com/davidhalter/jedi/issues/131
- https://github.com/davidhalter/jedi/pull/149
"""
"""Parenthesis closed at next line."""
# Ignore these definitions for a little while, not sure if we really want them.
# python <= 2.5
#? isinstance
isinstance(
)
#? isinstance
isinstance(
)
#? isinstance
isinstance(None,
)
#? isinstance
isinstance(None,
)
"""Parenthesis closed at same line."""
# Note: len('isinstance(') == 11
#? 11 isinstance
isinstance()
# Note: len('isinstance(None,') == 16
##? 16 isinstance
isinstance(None,)
# Note: len('isinstance(None,') == 16
##? 16 isinstance
isinstance(None, )
# Note: len('isinstance(None, ') == 17
##? 17 isinstance
isinstance(None, )
# Note: len('isinstance( ') == 12
##? 12 isinstance
isinstance( )
"""Unclosed parenthesis."""
#? isinstance
isinstance(
def x(): pass # acts like EOF
##? isinstance
isinstance(
def x(): pass # acts like EOF
#? isinstance
isinstance(None,
def x(): pass # acts like EOF
##? isinstance
isinstance(None,

View File

@@ -252,3 +252,65 @@ def import_issues(foo):
    """
    #? datetime.datetime()
    foo


# -----------------
# Doctest completions
# -----------------

def doctest_with_gt():
    """
    x
    >>> somewhere_in_docstring = 3
    #? ['import_issues']
    >>> import_issu
    #? ['somewhere_in_docstring']
    >>> somewhere_

    blabla
    >>> haha = 3
    #? ['haha']
    >>> hah

    #? ['doctest_with_space']
    >>> doctest_with_sp
    """


def doctest_with_space():
    """
    x
    #? ['import_issues']
    import_issu
    """


def docstring_rst_identifiers():
    """
    #? 30 ['import_issues']
    hello I'm here `import_iss` blabla
    #? ['import_issues']
    hello I'm here `import_iss

    #? []
    hello I'm here import_iss
    #? []
    hello I'm here ` import_iss

    #? ['upper']
    hello I'm here `str.upp
    """


def doctest_without_ending():
    """
    #? []
    import_issu

ha
no_ending = False
#? ['import_issues']
import_issu
#? ['no_ending']
no_endin

View File

@@ -335,6 +335,11 @@ some_lst2[3]
#? int() str()
some_lst2[2]
some_lst3 = []
some_lst3[0] = 3
some_lst3[:] = '' # Is ignored for now.
#? int()
some_lst3[0]
# -----------------
# set setitem/other modifications (should not work)
# -----------------
@@ -379,5 +384,15 @@ some_dct['y'] = tuple
some_dct['x']
#? int() str() list tuple
some_dct['unknown']
k = 'a'
#? int()
some_dct['a']
some_dct[k]
# python > 3.5
some_other_dct = dict(some_dct, c=set)
#? int()
some_other_dct['a']
#? list
some_other_dct['x']
#? set
some_other_dct['c']

View File

@@ -1,4 +1,4 @@
# goto_assignments command tests are different in syntax
# goto command tests are different in syntax
definition = 3
#! 0 ['a = definition']
@@ -37,6 +37,7 @@ foo = 10;print(foo)
# classes
# -----------------
class C(object):
    x = 3

    def b(self):
        #! ['b = math']
        b
@@ -44,8 +45,14 @@ class C(object):
        self.b
        #! 14 ['def b']
        self.b()
        #! 14 ['def b']
        self.b.
        #! 11 ['param self']
        self.b
        #! ['x = 3']
        self.x
        #! 14 ['x = 3']
        self.x.
        return 1
#! ['def b']

View File

@@ -0,0 +1,5 @@
def something():
    global foo
    foo = 3

View File

@@ -0,0 +1,3 @@
class MyBase:
    def f1(self):
        pass

View File

@@ -0,0 +1,5 @@
from ..usages import usage_definition


def x():
    usage_definition()

View File

@@ -0,0 +1,76 @@
class Super(object):
    attribute = 3

    def func(self):
        return 1

    class Inner():
        pass


class Sub(Super):
    #? 13 Sub.attribute
    def attribute(self):
        pass

    #! 8 ['attribute = 3']
    def attribute(self):
        pass

    #! 4 ['def func']
    func = 3

    #! 12 ['def func']
    class func(): pass

    #! 8 ['class Inner']
    def Inner(self): pass


# -----------------
# Finding self
# -----------------

class Test1:
    class Test2:
        def __init__(self):
            self.foo_nested = 0
            #? ['foo_nested']
            self.foo_
            #?
            self.foo_here

    def __init__(self, self2):
        self.foo_here = 3
        #? ['foo_here', 'foo_in_func']
        self.foo_
        #? int()
        self.foo_here
        #?
        self.foo_nested
        #?
        self.foo_not_on_self
        #? float()
        self.foo_in_func
        self2.foo_on_second = ''

        def closure():
            self.foo_in_func = 4.

    def bar(self):
        self = 3
        self.foo_not_on_self = 3


class SubTest(Test1):
    def __init__(self):
        self.foo_sub_class = list

    def bar(self):
        #? ['foo_here', 'foo_in_func', 'foo_sub_class']
        self.foo_
        #? int()
        self.foo_here
        #?
        self.foo_nested
        #?
        self.foo_not_on_self

Some files were not shown because too many files have changed in this diff.