Evaluation -> type inference

Dave Halter
2019-08-12 10:11:41 +02:00
parent 467839a9ea
commit 4619552589
15 changed files with 34 additions and 46 deletions

View File

@@ -135,8 +135,8 @@ New APIs:
- Integrated the parser of 2to3. This will make refactoring possible. It will
also be possible to check for error messages (like compiling an AST would give)
in the future.
- With the new parser, the evaluation also completely changed. It's now simpler
and more readable.
- With the new parser, the type inference also completely changed. It's now
simpler and more readable.
- Completely rewritten REPL completion.
- Added ``jedi.names``, a command to do static analysis. Thanks to the
sourcegraph guys for sponsoring this!
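As an aside, the ``jedi.names`` call mentioned above can be exercised roughly like
this; a minimal sketch, assuming a Jedi version in which ``jedi.names`` is still the
public entry point::

    import jedi

    # A tiny module as a string; nothing in it is executed.
    source = "import os\n\ndef walk(path):\n    return os.walk(path)\n"

    # jedi.names() statically lists the definitions found in the module.
    for definition in jedi.names(source):
        print(definition.name, definition.type)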

View File

@@ -47,10 +47,10 @@ The Jedi Core
The core of Jedi consists of three parts:
- :ref:`Parser <parser>`
- :ref:`Python code evaluation <evaluate>`
- :ref:`Python type inference <evaluate>`
- :ref:`API <dev-api>`
Most people are probably interested in :ref:`code evaluation <evaluate>`,
Most people are probably interested in :ref:`type inference <evaluate>`,
because that's where all the magic happens. I need to introduce the :ref:`parser
<parser>` first, because :mod:`jedi.evaluate` uses it extensively.
@@ -68,12 +68,12 @@ The grammar that this parser uses is very similar to the official Python
.. _evaluate:
Evaluation of python code (evaluate/__init__.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Type inference of python code (evaluate/__init__.py)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: jedi.evaluate
Evaluation Contexts (evaluate/base_context.py)
Inference Contexts (evaluate/base_context.py)
++++++++++++++++++++++++++++++++++++++++++++++++++++++
.. automodule:: jedi.evaluate.base_context
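The parser referred to in this document is parso (it shows up in the imports
further down, e.g. ``from parso.python import tree``), and the tree it produces is
what ``jedi.evaluate`` walks. A minimal sketch of inspecting such a tree, assuming a
standalone parso installation::

    import parso

    # Parse a small module; jedi.evaluate operates on trees like this one.
    module = parso.parse("def f(a):\n    return a + 1\n")
    funcdef = next(module.iter_funcdefs())
    print(funcdef.name.value)                  # 'f'
    print(funcdef.get_params()[0].name.value)  # 'a'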

View File

@@ -67,7 +67,6 @@ Will probably never be implemented:
- metaclasses (how could an auto-completion ever support this)
- ``setattr()``, ``__import__()``
- writing to some dicts: ``globals()``, ``locals()``, ``object.__dict__``
- evaluating ``if`` / ``while`` / ``del``
Caveats
@@ -84,7 +83,7 @@ etc.
**Security**
Security is an important issue for |jedi|. Therefore no Python code is
executed. As long as you write pure Python, everything is evaluated
executed. As long as you write pure Python, everything is inferred
statically. But: If you use builtin modules (``c_builtin``) there is no other
option than to execute those modules. However: Execute isn't that critical (as
e.g. in pythoncomplete, which used to execute *every* import!), because it

View File

@@ -253,10 +253,10 @@ class Completion:
def _trailer_completions(self, previous_leaf):
user_context = get_user_scope(self._module_context, self._position)
evaluation_context = self._evaluator.create_context(
inferred_context = self._evaluator.create_context(
self._module_context, previous_leaf
)
contexts = evaluate_call_of_leaf(evaluation_context, previous_leaf)
contexts = evaluate_call_of_leaf(inferred_context, previous_leaf)
completion_names = []
debug.dbg('trailer completion contexts: %s', contexts, color='MAGENTA')
for context in contexts:

View File

@@ -286,7 +286,7 @@ def find_virtualenvs(paths=None, **kwargs):
for path in os.listdir(directory):
path = os.path.join(directory, path)
if path in _used_paths:
# A path shouldn't be evaluated twice.
# A path shouldn't be inferred twice.
continue
_used_paths.add(path)

View File

@@ -1,5 +1,5 @@
"""
Evaluation of Python code in |jedi| is based on three assumptions:
Type inference of Python code in |jedi| is based on three assumptions:
* The code uses as few side effects as possible. Jedi understands certain
list/tuple/set modifications, but there's no guarantee that Jedi detects
@@ -12,7 +12,7 @@ Evaluation of Python code in |jedi| is based on three assumptions:
* The programmer is not a total dick, e.g. like `this
<https://github.com/davidhalter/jedi/issues/24>`_ :-)
The actual algorithm is based on a principle called lazy evaluation. That
The actual algorithm is based on a principle I call lazy type inference. That
said, the typical entry point for static analysis is calling
``eval_expr_stmt``. There's separate logic for autocompletion in the API; the
evaluator is all about evaluating an expression.
@@ -58,8 +58,8 @@ Jedi has been tested very well, so you can just start modifying code. It's best
to write your own test first for your "new" feature. Don't be scared of
breaking stuff. As long as the tests pass, you're most likely to be fine.
I need to mention now that lazy evaluation is really good because it
only *evaluates* what needs to be *evaluated*. All the statements and modules
I need to mention now that lazy type inference is really good because it
only *infers* what needs to be *inferred*. All the statements and modules
that are not used are just being ignored.
"""
from parso.python import tree
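The module docstring above is where "lazy type inference" is introduced. As a purely
illustrative aside, and not Jedi's actual code, the idea is that a name is only
resolved when something asks for its type, and the result is cached, so unused
definitions are never looked at::

    # Toy sketch of laziness with a hypothetical infer() helper.
    definitions = {'a': 'func', 'b': 'a', 'unused': 'gigantic_expression'}
    _cache = {}

    def infer(name):
        # Resolve a name only on demand and remember the result.
        if name not in _cache:
            value = definitions[name]
            _cache[name] = infer(value) if value in definitions else value
        return _cache[name]

    print(infer('b'))  # follows b -> a -> 'func'; 'unused' is never inferred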
@@ -188,7 +188,7 @@ class Evaluator(object):
# never fall below 1.
if len(definitions) > 1:
if len(name_dicts) * len(definitions) > 16:
debug.dbg('Too many options for if branch evaluation %s.', if_stmt)
debug.dbg('Too many options for if branch inference %s.', if_stmt)
# There's only a certain number of branches
# Jedi can evaluate, otherwise it will take too
# long.
@@ -214,14 +214,14 @@ class Evaluator(object):
result |= eval_node(context, element)
return result
else:
return self._eval_element_if_evaluated(context, element)
return self._eval_element_if_inferred(context, element)
else:
if predefined_if_name_dict:
return eval_node(context, element)
else:
return self._eval_element_if_evaluated(context, element)
return self._eval_element_if_inferred(context, element)
def _eval_element_if_evaluated(self, context, element):
def _eval_element_if_inferred(self, context, element):
"""
TODO This function is temporary: Merge with eval_element.
"""

View File

@@ -103,7 +103,7 @@ def _iterate_argument_clinic(evaluator, arguments, parameters):
if not context_set and not optional:
# For the stdlib we always want values. If we don't get them,
# that's ok, maybe something is too hard to resolve, however,
# we will not proceed with the evaluation of that function.
# we will not proceed with the type inference of that function.
debug.warning('argument_clinic "%s" not resolvable.', name)
raise ParamIssue
yield context_set
@@ -200,10 +200,6 @@ def unpack_arglist(arglist):
class TreeArguments(AbstractArguments):
def __init__(self, evaluator, context, argument_node, trailer=None):
"""
The argument_node is either a parser node or a list of evaluated
objects. Those evaluated objects may be lists of evaluated objects
themselves (one list for the first argument, one for the second, etc).
:param argument_node: May be an argument_node or a list of nodes.
"""
self.argument_node = argument_node

View File

@@ -105,9 +105,6 @@ class FunctionMixin(object):
class FunctionContext(use_metaclass(CachedMetaClass, FunctionMixin, FunctionAndClassBase)):
"""
Needed because of decorators. Decorators are evaluated here.
"""
def is_function(self):
return True
@@ -178,7 +175,7 @@ class FunctionExecutionContext(TreeContext):
This is the most complicated class, because it contains the logic to
transfer parameters. It is even more complicated, because there may be
multiple calls to functions and recursion has to be avoided. But this is the
responsibility of the decorators.
responsibility of the recursion module.
"""
function_execution_filter = FunctionExecutionFilter

View File

@@ -105,9 +105,9 @@ class AbstractInstanceContext(Context):
return names
return []
def execute_function_slots(self, names, *evaluated_args):
def execute_function_slots(self, names, *inferred_args):
return ContextSet.from_sets(
name.infer().execute_with_values(*evaluated_args)
name.infer().execute_with_values(*inferred_args)
for name in names
)

View File

@@ -238,10 +238,6 @@ class ClassMixin(object):
class ClassContext(use_metaclass(CachedMetaClass, ClassMixin, FunctionAndClassBase)):
"""
This class is not only important to extend `tree.Class`, it is also
important for descriptors (if the descriptor methods are evaluated or not).
"""
api_type = u'class'
@evaluator_method_cache()

View File

@@ -137,7 +137,7 @@ def _search_function_executions(evaluator, module_context, funcdef, string_name)
# This is a simple way to stop Jedi's dynamic param recursion
# from going wild: The deeper Jedi's in the recursion, the less
# code should be evaluated.
# code should be inferred.
if i * evaluator.dynamic_params_depth > MAX_PARAM_SEARCHES:
return

View File

@@ -98,14 +98,14 @@ class NameFinder(object):
position = self._position
# For functions and classes the defaults don't belong to the
# function and get evaluated in the context before the function. So
# function and get inferred in the context before the function. So
# make sure to exclude the function/class name.
if origin_scope is not None:
ancestor = search_ancestor(origin_scope, 'funcdef', 'classdef', 'lambdef')
lambdef = None
if ancestor == 'lambdef':
# For lambdas it's even more complicated since parts will
# be evaluated later.
# be inferred later.
lambdef = ancestor
ancestor = search_ancestor(origin_scope, 'funcdef', 'classdef')
if ancestor is not None:
@@ -133,7 +133,7 @@ class NameFinder(object):
``filters``), until a name fits.
"""
names = []
# This paragraph is currently needed for proper branch evaluation
# This paragraph is currently needed for proper branch type inference
# (static analysis).
if self._context.predefined_names and isinstance(self._name, tree.Name):
node = self._name

View File

@@ -116,7 +116,7 @@ def eval_node(context, element):
return (context.eval_node(element.children[0]) |
context.eval_node(element.children[-1]))
elif typ == 'operator':
# Must be an ellipsis, other operators are not evaluated.
# Must be an ellipsis, other operators are not inferred.
# In Python 2 ellipsis is coded as three single dot tokens, not
# as one three-dot token.
if element.value not in ('.', '...'):
@@ -209,7 +209,7 @@ def eval_atom(context, atom):
if atom.value in ('False', 'True', 'None'):
return ContextSet([compiled.builtin_from_name(context.evaluator, atom.value)])
elif atom.value == 'print':
# print e.g. could be evaluated like this in Python 2.7
# print e.g. could be inferred like this in Python 2.7
return NO_CONTEXTS
elif atom.value == 'yield':
# Contrary to yield from, yield can just appear alone to return a
@@ -347,7 +347,7 @@ def eval_or_test(context, or_test):
if operator.type == 'comp_op': # not in / is not
operator = ' '.join(c.value for c in operator.children)
# handle lazy evaluation of and/or here.
# handle type inference of and/or here.
if operator in ('and', 'or'):
left_bools = set(left.py__bool__() for left in types)
if left_bools == {True}:
@@ -535,8 +535,8 @@ def _remove_statements(evaluator, context, stmt, name):
"""
This is the part where statements are being stripped.
Due to lazy evaluation, statements like a = func; b = a; b() have to be
evaluated.
Due to lazy type inference, statements like a = func; b = a; b() have to be
inferred.
"""
pep0484_contexts = \
annotation.find_type_from_comment_hint_assign(context, stmt, name)

View File

@@ -21,7 +21,7 @@ def check_follow_definition_types(Script, source):
def test_follow_import_incomplete(Script, environment):
"""
Completion on incomplete imports should always take the full completion
to do any evaluation.
to do any type inference.
"""
datetime = check_follow_definition_types(Script, "import itertool")
assert datetime == ['module']

View File

@@ -223,7 +223,7 @@ se = s * 2 if s == '\\' else s
(f2, os_path + 'join(dirname(__file__), "completion", "basi)', 33, ['on"']),
(f2, os_path + 'join(dirname(__file__), "completion", "basi")', 33, ['on"']),
# join with one argument. join will not get evaluated and the result is
# join with one argument. join will not get inferred and the result is
# that directories end in a slash. This is unfortunate, but doesn't
# really matter.
(f2, os_path + 'join("tes', 9, ['t"']),