Mirror of https://github.com/junegunn/fzf.git
Synced 2026-05-17 22:09:55 +08:00

Comparing 43 commits
| SHA1 |
|---|
| a099d76fa6 |
| a5646b46e8 |
| 2202481705 |
| 6153004070 |
| 95f186f364 |
| 58b2855513 |
| a00df93e13 |
| 76efddd718 |
| b638ff46fb |
| 259e841a77 |
| f0a2f5ef14 |
| 2ae7367e8a |
| 6f33df755e |
| 2aec7d5201 |
| fc60406684 |
| cf57950301 |
| 48c4913392 |
| 17f2aa1a1f |
| b5f7221580 |
| e6b9a08699 |
| 8dbb3b352d |
| 9f422851fe |
| 7a811f0cb8 |
| b80059e21f |
| 26de195bbb |
| b59f27ef5a |
| f3ca0b1365 |
| a8e1ef0989 |
| 2f27a3ede2 |
| 9249ea1739 |
| 92bfe68c74 |
| 92dc40ea82 |
| 12a280ba14 |
| 0c6ead6e98 |
| 280a011f02 |
| d324580840 |
| f9830c5a3d |
| 95bc5b8f0c |
| 0b08f0dea0 |
| e7300fe300 |
| 260d160973 |
| d57ed157ad |
| 9226bc605d |
@@ -0,0 +1,17 @@
+## Contribution Policy
+
+We do not accept pull requests generated primarily by AI without genuine understanding or real-world usage context.
+
+All contributions are expected to demonstrate:
+- A clear understanding of the codebase
+- Alignment with product direction
+- Thoughtful reasoning behind changes
+- Evidence of real-world usage or hands-on experience with the problem
+
+If these expectations are not met, we would prefer to implement the changes ourselves rather than spend time reviewing low-effort submissions.
+
+---
+
+## Acknowledgement
+
+- [ ] I confirm that this PR meets the above expectations and reflects my own understanding and real-world context.
@@ -12,6 +12,6 @@ jobs:
   label:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/labeler@v5
+      - uses: actions/labeler@v6
         with:
          configuration-path: .github/labeler.yml
@@ -5,7 +5,7 @@ on:
   push:
     branches: [ master, devel ]
   pull_request:
-    branches: [ master ]
+    branches: [ master, devel ]
   workflow_dispatch:
 
 permissions:
@@ -1,6 +1,45 @@
|
||||
CHANGELOG
|
||||
=========
|
||||
|
||||
0.71.0
|
||||
------
|
||||
- Added `--popup` as a new name for `--tmux` with Zellij support
|
||||
- `--popup` starts fzf in a tmux popup or a Zellij floating pane
|
||||
- `--tmux` is now an alias for `--popup`
|
||||
- Requires tmux 3.3+ or Zellij 0.44+
|
||||
- Cross-reload item identity with `--id-nth`
|
||||
- Added `--id-nth=NTH` to define item identity fields for cross-reload operations
|
||||
- When a `reload` is triggered with tracking enabled, fzf searches for the tracked item by its identity fields in the new list.
|
||||
- `--track --id-nth ..` tracks by the entire line
|
||||
- `--track --id-nth 1` tracks by the first field
|
||||
- `--track` without `--id-nth` retains the existing index-based tracking behavior
|
||||
- The UI is temporarily blocked (prompt dimmed, input disabled) until the item is found or loading completes.
|
||||
- Press `Escape` or `Ctrl-C` to cancel the blocked state without quitting
|
||||
- Info line shows `+T*` / `+t*` while searching
|
||||
- With `--multi`, selected items are preserved across `reload-sync` by matching their identity fields
|
||||
- Performance improvements
|
||||
- The search performance now scales linearly with the number of CPU cores, as we dropped static partitioning to allow better load balancing across threads.
|
||||
```
|
||||
=== query: 'linux' ===
|
||||
[all] baseline: 17.12ms current: 14.28ms (1.20x) matches: 179966 (12.79%)
|
||||
[1T] baseline: 136.49ms current: 137.25ms (0.99x) matches: 179966 (12.79%)
|
||||
[2T] baseline: 75.74ms current: 68.75ms (1.10x) matches: 179966 (12.79%)
|
||||
[4T] baseline: 41.16ms current: 34.97ms (1.18x) matches: 179966 (12.79%)
|
||||
[8T] baseline: 32.82ms current: 17.79ms (1.84x) matches: 179966 (12.79%)
|
||||
```
|
||||
- Improved the cache structure, reducing memory footprint per entry by 86x.
|
||||
- With the reduced per-entry cost, the cache now has broader coverage.
|
||||
- Shell integration improvements
|
||||
- bash: CTRL-R now supports multi-select and `shift-delete` to delete history entries (#4715)
|
||||
- fish: Improved command history (CTRL-R) (#4703) (@bitraid)
|
||||
- `GET /` HTTP endpoint now includes `positions` field in each match entry, providing the indices of matched characters for external highlighting (#4726)
|
||||
- Bug fixes
|
||||
- `--walker=follow` no longer follows symlinks whose target is an ancestor of the walker root, avoiding severe resource exhaustion when a symlink points outside the tree (e.g. Wine's `z:` → `/`) (#4710)
|
||||
- Fixed AWK tokenizer not treating a new line character as whitespace
|
||||
- Fixed `--{accept,with}-nth` removing trailing whitespaces with a non-default `--delimiter`
|
||||
- Fixed OSC8 hyperlinks being mangled when the URL contains unicode characters (#4707)
|
||||
- Fixed `--with-shell` not handling quoted arguments correctly (#4709)
|
||||
|
||||
0.70.0
|
||||
------
|
||||
- Added `change-with-nth` action for dynamically changing the `--with-nth` option.
|
||||
|
||||
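The changelog entry on dropping static partitioning describes workers pulling chunks dynamically instead of each receiving a fixed slice. A minimal sketch of that idea — hypothetical names, not fzf's actual matcher code — where each worker atomically claims the next unprocessed chunk index:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// processChunks distributes chunk indices to workers dynamically:
// each worker atomically claims the next unprocessed chunk, so fast
// workers naturally pick up more work than slow ones.
func processChunks(numChunks, numWorkers int, work func(chunk int) int) int {
	var next atomic.Int32
	results := make(chan int, numWorkers)
	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sum := 0
			for {
				idx := int(next.Add(1)) - 1 // claim the next chunk
				if idx >= numChunks {
					break
				}
				sum += work(idx)
			}
			results <- sum
		}()
	}
	wg.Wait()
	close(results)
	total := 0
	for s := range results {
		total += s
	}
	return total
}

func main() {
	// Each "chunk" contributes its index; total is 0+1+...+99.
	fmt.Println(processChunks(100, 8, func(i int) int { return i }))
}
```

Because claims happen one chunk at a time, a worker that finishes early simply grabs more chunks, so throughput tracks the busy cores rather than the slowest static slice.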
+50
-13
@@ -415,25 +415,26 @@ layout options so that the specified number of items are visible in the list
 section (default: \fB10+\fR).
 Ignored when \fB\-\-height\fR is not specified or set as an absolute value.
 .TP
-.BI "\-\-tmux" "[=[center|top|bottom|left|right][,SIZE[%]][,SIZE[%]][,border-native]]"
-Start fzf in a tmux popup (default \fBcenter,50%\fR). Requires tmux 3.3 or
-later. This option is ignored if you are not running fzf inside tmux.
+.BI "\-\-popup" "[=[center|top|bottom|left|right][,SIZE[%]][,SIZE[%]][,border-native]]"
+Start fzf in a tmux popup or in a Zellij floating pane (default
+\fBcenter,50%\fR). Requires tmux 3.3+ or Zellij 0.44+. This option is ignored if you
+are not running fzf inside tmux or Zellij. \fB\-\-tmux\fR is an alias for this option.
 
 e.g.
 \fB# Popup in the center with 70% width and height
-fzf \-\-tmux 70%
+fzf \-\-popup 70%
 
 # Popup on the left with 40% width and 100% height
-fzf \-\-tmux right,40%
+fzf \-\-popup right,40%
 
 # Popup on the bottom with 100% width and 30% height
-fzf \-\-tmux bottom,30%
+fzf \-\-popup bottom,30%
 
 # Popup on the top with 80% width and 40% height
-fzf \-\-tmux top,80%,40%
+fzf \-\-popup top,80%,40%
 
-# Popup with a native tmux border in the center with 80% width and height
-fzf \-\-tmux center,80%,border\-native\fR
+# Popup with a native tmux or Zellij border in the center with 80% width and height
+fzf \-\-popup center,80%,border\-native\fR
 
 .SS LAYOUT
 .TP
@@ -617,17 +618,53 @@ Disable multi-line display of items when using \fB\-\-read0\fR
 .B "\-\-raw"
 Enable raw mode where non-matching items are also displayed in a dimmed color.
 .TP
-.B "\-\-track"
+.BI "\-\-track"
 Make fzf track the current selection when the result list is updated.
 This can be useful when browsing logs using fzf with sorting disabled. It is
 not recommended to use this option with \fB\-\-tac\fR as the resulting behavior
-can be confusing. Also, consider using \fBtrack\fR action instead of this
-option.
+can be confusing.
+
+When \fB\-\-id\-nth\fR is also set, fzf enables field\-based tracking across
+\fBreload\fRs. See \fB\-\-id\-nth\fR for details.
+
+Without \fB\-\-id\-nth\fR, \fB\-\-track\fR uses index\-based tracking that
+does not persist across reloads.
+
 .RS
 e.g.
-\fBgit log \-\-oneline \-\-graph \-\-color=always | nl |
+\fB# Index\-based tracking (does not persist across reloads)
+git log \-\-oneline \-\-graph \-\-color=always | nl |
 fzf \-\-ansi \-\-track \-\-no\-sort \-\-layout=reverse\-list\fR
+
+\fB# Track by first field (e.g. pod name) across reloads
+kubectl get pods | fzf \-\-track \-\-id\-nth 1 \-\-header\-lines=1 \\
+\-\-bind 'ctrl\-r:reload:kubectl get pods'\fR
 .RE
 .TP
+.BI "\-\-id\-nth=" "N[,..]"
+Define item identity fields for cross\-reload operations. When set, fzf
+uses the specified fields to identify items across \fBreload\fR and
+\fBreload\-sync\fR.
+
+With \fB\-\-track\fR, fzf extracts the tracking key from the current item
+using the nth expression and searches for a matching item in the reloaded list.
+While searching, the UI is blocked (query input and cursor movement are
+disabled, and the prompt is dimmed). With \fBreload\fR, the blocked state
+clears as soon as the match is found in the stream. With \fBreload\-sync\fR,
+the blocked state persists until the entire stream is complete. Press
+\fBEscape\fR or \fBCtrl\-C\fR to cancel the blocked state without quitting fzf.
+
+The info line shows \fB+T*\fR (or \fB+t*\fR for one\-off tracking) while
+the search is in progress.
+
+With \fB\-\-multi\fR, selected items are preserved across \fBreload\-sync\fR
+by matching their identity fields in the reloaded list.
+
+.RS
+e.g.
+\fB# Track and preserve selections by pod name across reloads
+kubectl get pods | fzf \-\-multi \-\-track \-\-id\-nth 1 \-\-header\-lines=1 \\
+\-\-bind 'ctrl\-r:reload\-sync:kubectl get pods'\fR
+.RE
+.TP
 .B "\-\-tac"
+2
-2
@@ -1,9 +1,9 @@
 __fzf_defaults() {
   # $1: Prepend to FZF_DEFAULT_OPTS_FILE and FZF_DEFAULT_OPTS
   # $2: Append to FZF_DEFAULT_OPTS_FILE and FZF_DEFAULT_OPTS
-  printf '%s\n' "--height ${FZF_TMUX_HEIGHT:-40%} --min-height 20+ --bind=ctrl-z:ignore $1"
+  builtin printf '%s\n' "--height ${FZF_TMUX_HEIGHT:-40%} --min-height 20+ --bind=ctrl-z:ignore $1"
   command cat "${FZF_DEFAULT_OPTS_FILE-}" 2> /dev/null
-  printf '%s\n' "${FZF_DEFAULT_OPTS-} $2"
+  builtin printf '%s\n' "${FZF_DEFAULT_OPTS-} $2"
 }
 
 __fzf_exec_awk() {
@@ -161,6 +161,7 @@ _fzf_opts_completion() {
     --history
     --history-size
     --hscroll-off
+    --id-nth
     --info
     --info-command
     --input-border
@@ -102,9 +102,9 @@ if [[ -o interactive ]]; then
 # the changes. See code comments in "common.sh" for the implementation details.
 
 __fzf_defaults() {
-  printf '%s\n' "--height ${FZF_TMUX_HEIGHT:-40%} --min-height 20+ --bind=ctrl-z:ignore $1"
+  builtin printf '%s\n' "--height ${FZF_TMUX_HEIGHT:-40%} --min-height 20+ --bind=ctrl-z:ignore $1"
   command cat "${FZF_DEFAULT_OPTS_FILE-}" 2> /dev/null
-  printf '%s\n' "${FZF_DEFAULT_OPTS-} $2"
+  builtin printf '%s\n' "${FZF_DEFAULT_OPTS-} $2"
 }
 
 __fzf_exec_awk() {
+26
-8
@@ -25,9 +25,9 @@ if [[ $- =~ i ]]; then
 # the changes. See code comments in "common.sh" for the implementation details.
 
 __fzf_defaults() {
-  printf '%s\n' "--height ${FZF_TMUX_HEIGHT:-40%} --min-height 20+ --bind=ctrl-z:ignore $1"
+  builtin printf '%s\n' "--height ${FZF_TMUX_HEIGHT:-40%} --min-height 20+ --bind=ctrl-z:ignore $1"
   command cat "${FZF_DEFAULT_OPTS_FILE-}" 2> /dev/null
-  printf '%s\n' "${FZF_DEFAULT_OPTS-} $2"
+  builtin printf '%s\n' "${FZF_DEFAULT_OPTS-} $2"
 }
 
 __fzf_exec_awk() {
@@ -77,17 +77,31 @@ __fzf_cd__() {
   ) && printf 'builtin cd -- %q' "$(builtin unset CDPATH && builtin cd -- "$dir" && builtin pwd)"
 }
 
+__fzf_history_delete() {
+  [[ -s $1 ]] || return
+
+  local offsets
+  offsets=($(sort -rnu "$1"))
+  for offset in "${offsets[@]}"; do
+    builtin history -d "$offset"
+  done
+}
+
 if command -v perl > /dev/null; then
   __fzf_history__() {
-    local output script
+    local output script deletefile
+    deletefile=$(mktemp)
     script='BEGIN { getc; $/ = "\n\t"; $HISTCOUNT = $ENV{last_hist} + 1 } s/^[ *]//; s/\n/\n\t/gm; print $HISTCOUNT - $. . "\t$_" if !$seen{$_}++'
     output=$(
       set +o pipefail
       builtin fc -lnr -2147483648 |
        last_hist=$(HISTTIMEFORMAT='' builtin history 1) command perl -n -l0 -e "$script" |
-        FZF_DEFAULT_OPTS=$(__fzf_defaults "" "-n2..,.. --scheme=history --bind=ctrl-r:toggle-sort,alt-r:toggle-raw --wrap-sign '"$'\t'"↳ ' --highlight-line ${FZF_CTRL_R_OPTS-} +m --read0") \
+        FZF_DEFAULT_OPTS=$(__fzf_defaults "" "-n2..,.. --scheme=history --bind=ctrl-r:toggle-sort,alt-r:toggle-raw --wrap-sign '"$'\t'"↳ ' --highlight-line --bind 'shift-delete:execute-silent(cat {+f1} >> \"$deletefile\")+exclude-multi' --multi ${FZF_CTRL_R_OPTS-} --read0") \
         FZF_DEFAULT_OPTS_FILE='' $(__fzfcmd) --query "$READLINE_LINE"
-    ) || return
+    )
+    __fzf_history_delete "$deletefile"
+    command rm -f "$deletefile"
+    [[ -n $output ]] || return
     READLINE_LINE=$(command perl -pe 's/^\d*\t//' <<< "$output")
     if [[ -z $READLINE_POINT ]]; then
       echo "$READLINE_LINE"
@@ -97,7 +111,8 @@ if command -v perl > /dev/null; then
   }
 else # awk - fallback for POSIX systems
   __fzf_history__() {
-    local output script
+    local output script deletefile
+    deletefile=$(mktemp)
     [[ $(HISTTIMEFORMAT='' builtin history 1) =~ [[:digit:]]+ ]] # how many history entries
     script='function P(b) { ++n; sub(/^[ *]/, "", b); if (!seen[b]++) { printf "%d\t%s%c", '$((BASH_REMATCH + 1))' - n, b, 0 } }
     NR==1 { b = substr($0, 2); next }
@@ -108,9 +123,12 @@ else # awk - fallback for POSIX systems
       set +o pipefail
      builtin fc -lnr -2147483648 2> /dev/null | # ( $'\t '<lines>$'\n' )* ; <lines> ::= [^\n]* ( $'\n'<lines> )*
        __fzf_exec_awk "$script" | # ( <counter>$'\t'<lines>$'\000' )*
-        FZF_DEFAULT_OPTS=$(__fzf_defaults "" "-n2..,.. --scheme=history --bind=ctrl-r:toggle-sort,alt-r:toggle-raw --wrap-sign '"$'\t'"↳ ' --highlight-line ${FZF_CTRL_R_OPTS-} +m --read0") \
+        FZF_DEFAULT_OPTS=$(__fzf_defaults "" "-n2..,.. --scheme=history --bind=ctrl-r:toggle-sort,alt-r:toggle-raw --wrap-sign '"$'\t'"↳ ' --highlight-line --bind 'shift-delete:execute-silent(cat {+f1} >> \"$deletefile\")+exclude-multi' --multi ${FZF_CTRL_R_OPTS-} --read0") \
         FZF_DEFAULT_OPTS_FILE='' $(__fzfcmd) --query "$READLINE_LINE"
-    ) || return
+    )
+    __fzf_history_delete "$deletefile"
+    command rm -f "$deletefile"
+    [[ -n $output ]] || return
     READLINE_LINE=${output#*$'\t'}
     if [[ -z $READLINE_POINT ]]; then
       echo "$READLINE_LINE"
@@ -45,9 +45,9 @@ if [[ -o interactive ]]; then
 # the changes. See code comments in "common.sh" for the implementation details.
 
 __fzf_defaults() {
-  printf '%s\n' "--height ${FZF_TMUX_HEIGHT:-40%} --min-height 20+ --bind=ctrl-z:ignore $1"
+  builtin printf '%s\n' "--height ${FZF_TMUX_HEIGHT:-40%} --min-height 20+ --bind=ctrl-z:ignore $1"
   command cat "${FZF_DEFAULT_OPTS_FILE-}" 2> /dev/null
-  printf '%s\n' "${FZF_DEFAULT_OPTS-} $2"
+  builtin printf '%s\n' "${FZF_DEFAULT_OPTS-} $2"
 }
 
 __fzf_exec_awk() {
@@ -129,7 +129,7 @@ fi
 # CTRL-R - Paste the selected command from history into the command line
 fzf-history-widget() {
   local selected extracted_with_perl=0
-  setopt localoptions noglobsubst noposixbuiltins pipefail no_aliases no_glob no_ksharrays extendedglob 2> /dev/null
+  setopt localoptions noglobsubst noposixbuiltins pipefail no_aliases no_glob no_sh_glob no_ksharrays extendedglob 2> /dev/null
   # Ensure the module is loaded if not already, and the required features, such
   # as the associative 'history' array, which maps event numbers to full history
   # lines, are set. Also, make sure Perl is installed for multi-line output.
+1
-1
@@ -323,7 +323,7 @@ func trySkip(input *util.Chars, caseSensitive bool, b byte, from int) int {
 	byteArray := input.Bytes()[from:]
 	// For case-insensitive search of a letter, search for both cases in one pass
 	if !caseSensitive && b >= 'a' && b <= 'z' {
-		idx := indexByteTwo(byteArray, b, b-32)
+		idx := IndexByteTwo(byteArray, b, b-32)
 		if idx < 0 {
 			return -1
 		}
@@ -15,7 +15,7 @@ func cpuHasAVX2() bool
 // or -1 if neither is present. Uses AVX2 when available, SSE2 otherwise.
 //
 //go:noescape
-func indexByteTwo(s []byte, b1, b2 byte) int
+func IndexByteTwo(s []byte, b1, b2 byte) int
 
 // lastIndexByteTwo returns the index of the last occurrence of b1 or b2 in s,
 // or -1 if neither is present. Uses AVX2 when available, SSE2 otherwise.
@@ -41,11 +41,11 @@ cpuid_no:
 	MOVB $0, ret+0(FP)
 	RET
 
-// func indexByteTwo(s []byte, b1, b2 byte) int
+// func IndexByteTwo(s []byte, b1, b2 byte) int
 //
 // Returns the index of the first occurrence of b1 or b2 in s, or -1.
 // Uses AVX2 (32 bytes/iter) when available, SSE2 (16 bytes/iter) otherwise.
-TEXT ·indexByteTwo(SB),NOSPLIT,$0-40
+TEXT ·IndexByteTwo(SB),NOSPLIT,$0-40
 	MOVQ s_base+0(FP), SI
 	MOVQ s_len+8(FP), BX
 	MOVBLZX b1+24(FP), AX
@@ -7,7 +7,7 @@ package algo
 // to search for both bytes in a single pass.
 //
 //go:noescape
-func indexByteTwo(s []byte, b1, b2 byte) int
+func IndexByteTwo(s []byte, b1, b2 byte) int
 
 // lastIndexByteTwo returns the index of the last occurrence of b1 or b2 in s,
 // or -1 if neither is present. Implemented in assembly using ARM64 NEON,
@@ -1,11 +1,11 @@
 #include "textflag.h"
 
-// func indexByteTwo(s []byte, b1, b2 byte) int
+// func IndexByteTwo(s []byte, b1, b2 byte) int
 //
 // Returns the index of the first occurrence of b1 or b2 in s, or -1.
 // Uses ARM64 NEON to search for both bytes in a single pass over the data.
 // Adapted from Go's internal/bytealg/indexbyte_arm64.s (single-byte version).
-TEXT ·indexByteTwo(SB),NOSPLIT,$0-40
+TEXT ·IndexByteTwo(SB),NOSPLIT,$0-40
 	MOVD s_base+0(FP), R0
 	MOVD s_len+8(FP), R2
 	MOVBU b1+24(FP), R1
@@ -6,7 +6,7 @@ import "bytes"
 
 // indexByteTwo returns the index of the first occurrence of b1 or b2 in s,
 // or -1 if neither is present.
-func indexByteTwo(s []byte, b1, b2 byte) int {
+func IndexByteTwo(s []byte, b1, b2 byte) int {
 	i1 := bytes.IndexByte(s, b1)
 	if i1 == 0 {
 		return 0
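The portable fallback above composes `bytes.IndexByte` calls; since the hunk is truncated, here is a self-contained sketch of the same semantics (a hypothetical standalone version for illustration, not the package source):

```go
package main

import (
	"bytes"
	"fmt"
)

// indexByteTwo returns the index of the first occurrence of b1 or b2
// in s, or -1 if neither is present. This is the portable reference
// semantics that the SIMD versions implement in a single pass.
func indexByteTwo(s []byte, b1, b2 byte) int {
	i1 := bytes.IndexByte(s, b1)
	i2 := bytes.IndexByte(s, b2)
	switch {
	case i1 < 0:
		return i2
	case i2 < 0:
		return i1
	case i1 < i2:
		return i1
	default:
		return i2
	}
}

func main() {
	// Case-insensitive search for 'l' scans for 'l' and 'L' at once;
	// the first of the two in "HeLlo" is 'L' at index 2.
	fmt.Println(indexByteTwo([]byte("HeLlo"), 'l', 'L'))
}
```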
+11
-11
@@ -28,9 +28,9 @@ func TestIndexByteTwo(t *testing.T) {
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			got := indexByteTwo([]byte(tt.s), tt.b1, tt.b2)
+			got := IndexByteTwo([]byte(tt.s), tt.b1, tt.b2)
 			if got != tt.want {
-				t.Errorf("indexByteTwo(%q, %c, %c) = %d, want %d", tt.s[:min(len(tt.s), 40)], tt.b1, tt.b2, got, tt.want)
+				t.Errorf("IndexByteTwo(%q, %c, %c) = %d, want %d", tt.s[:min(len(tt.s), 40)], tt.b1, tt.b2, got, tt.want)
 			}
 		})
 	}
@@ -46,27 +46,27 @@ func TestIndexByteTwo(t *testing.T) {
 	for pos := 0; pos < n; pos++ {
 		for _, b := range []byte{'A', 'B'} {
 			data[pos] = b
-			got := indexByteTwo(data, 'A', 'B')
+			got := IndexByteTwo(data, 'A', 'B')
 			want := loopIndexByteTwo(data, 'A', 'B')
 			if got != want {
-				t.Fatalf("indexByteTwo(len=%d, match=%c@%d) = %d, want %d", n, b, pos, got, want)
+				t.Fatalf("IndexByteTwo(len=%d, match=%c@%d) = %d, want %d", n, b, pos, got, want)
 			}
 			data[pos] = byte('c' + (pos % 20))
 		}
 	}
 	// Test with no match
-	got := indexByteTwo(data, 'A', 'B')
+	got := IndexByteTwo(data, 'A', 'B')
 	if got != -1 {
-		t.Fatalf("indexByteTwo(len=%d, no match) = %d, want -1", n, got)
+		t.Fatalf("IndexByteTwo(len=%d, no match) = %d, want -1", n, got)
 	}
 	// Test with both bytes present
 	if n >= 2 {
 		data[n/3] = 'A'
 		data[n*2/3] = 'B'
-		got := indexByteTwo(data, 'A', 'B')
+		got := IndexByteTwo(data, 'A', 'B')
 		want := loopIndexByteTwo(data, 'A', 'B')
 		if got != want {
-			t.Fatalf("indexByteTwo(len=%d, both@%d,%d) = %d, want %d", n, n/3, n*2/3, got, want)
+			t.Fatalf("IndexByteTwo(len=%d, both@%d,%d) = %d, want %d", n, n/3, n*2/3, got, want)
 		}
 		data[n/3] = byte('c' + ((n / 3) % 20))
 		data[n*2/3] = byte('c' + ((n * 2 / 3) % 20))
@@ -147,10 +147,10 @@ func FuzzIndexByteTwo(f *testing.F) {
 	f.Add([]byte(""), byte('a'), byte('b'))
 	f.Add([]byte("aaa"), byte('a'), byte('a'))
 	f.Fuzz(func(t *testing.T, data []byte, b1, b2 byte) {
-		got := indexByteTwo(data, b1, b2)
+		got := IndexByteTwo(data, b1, b2)
 		want := loopIndexByteTwo(data, b1, b2)
 		if got != want {
-			t.Errorf("indexByteTwo(len=%d, b1=%d, b2=%d) = %d, want %d", len(data), b1, b2, got, want)
+			t.Errorf("IndexByteTwo(len=%d, b1=%d, b2=%d) = %d, want %d", len(data), b1, b2, got, want)
 		}
 	})
 }
@@ -214,7 +214,7 @@ func benchIndexByteTwo(b *testing.B, size int, pos int) {
 		fn func([]byte, byte, byte) int
 	}
 	impls := []impl{
-		{"asm", indexByteTwo},
+		{"asm", IndexByteTwo},
 		{"2xIndexByte", refIndexByteTwo},
 		{"loop", loopIndexByteTwo},
 	}
+17
-16
@@ -6,6 +6,7 @@ import (
 	"strings"
 	"unicode/utf8"
 
+	"github.com/junegunn/fzf/src/algo"
 	"github.com/junegunn/fzf/src/tui"
 )
@@ -123,31 +124,31 @@ func toAnsiString(color tui.Color, offset int) string {
 	return ret + ";"
 }
 
 func isPrint(c uint8) bool {
 	return '\x20' <= c && c <= '\x7e'
 }
 
 func matchOperatingSystemCommand(s string, start int) int {
 	// `\x1b][0-9][;:][[:print:]]+(?:\x1b\\\\|\x07)`
 	//                ^ match starting here after the first printable character
 	//
 	i := start // prefix matched in nextAnsiEscapeSequence()
-	for ; i < len(s) && isPrint(s[i]); i++ {
-	}
-	if i < len(s) {
-		if s[i] == '\x07' {
-			return i + 1
-		}
-		// `\x1b]8;PARAMS;URI\x1b\\TITLE\x1b]8;;\x1b`
-		//                    ------
-		if s[i] == '\x1b' && i < len(s)-1 && s[i+1] == '\\' {
-			return i + 2
-		}
-	}
+	// Find the terminator: BEL (\x07) or ESC (\x1b) for ST (\x1b\\)
+	idx := algo.IndexByteTwo(stringBytes(s[i:]), '\x07', '\x1b')
+	if idx < 0 {
+		return -1
+	}
+	i += idx
+
+	if s[i] == '\x07' {
+		return i + 1
+	}
+	// `\x1b]8;PARAMS;URI\x1b\\TITLE\x1b]8;;\x1b`
+	//                    ------
+	if i < len(s)-1 && s[i+1] == '\\' {
+		return i + 2
+	}
 
 	// `\x1b]8;PARAMS;URI\x1b\\TITLE\x1b]8;;\x1b`
 	// ------------
-	if i < len(s) && s[:i+1] == "\x1b]8;;\x1b" {
+	if s[:i+1] == "\x1b]8;;\x1b" {
 		return i + 1
 	}
@@ -233,7 +234,7 @@ Loop:
 	// \x1b][0-9]+[;:][[:print:]]+(?:\x1b\\\\|\x07)
 	//            ---------------
-	if j > 2 && i+j+1 < len(s) && (s[i+j] == ';' || s[i+j] == ':') && isPrint(s[i+j+1]) {
+	if j > 2 && i+j+1 < len(s) && (s[i+j] == ';' || s[i+j] == ':') && s[i+j+1] >= '\x20' {
 		if k := matchOperatingSystemCommand(s[i:], j+2); k != -1 {
 			return i, i + k
 		}
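The rewritten matchOperatingSystemCommand searches for the OSC terminator — BEL or the two-byte ST sequence ESC-backslash. A simplified linear-scan sketch of that terminator search (a hypothetical helper for illustration, without the SIMD-accelerated IndexByteTwo, and without the full OSC state handling):

```go
package main

import "fmt"

// findOSCTerminator returns the index just past the terminator of an
// OSC sequence body: BEL (\x07) or ST (\x1b\\), or -1 if absent.
func findOSCTerminator(s string) int {
	for i := 0; i < len(s); i++ {
		switch s[i] {
		case '\x07':
			return i + 1 // BEL terminates the sequence
		case '\x1b':
			if i+1 < len(s) && s[i+1] == '\\' {
				return i + 2 // ESC-backslash (ST) terminates it
			}
		}
	}
	return -1
}

func main() {
	fmt.Println(findOSCTerminator("8;;http://x\x07rest")) // BEL terminator
	fmt.Println(findOSCTerminator("8;;u\x1b\\title"))     // ST terminator
}
```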
+18
-15
@@ -2,10 +2,13 @@ package fzf
 
 import "sync"
 
-// queryCache associates strings to lists of items
-type queryCache map[string][]Result
+// ChunkBitmap is a bitmap with one bit per item in a chunk.
+type ChunkBitmap [chunkBitWords]uint64
+
+// queryCache associates query strings to bitmaps of matching items
+type queryCache map[string]ChunkBitmap
 
-// ChunkCache associates Chunk and query string to lists of items
+// ChunkCache associates Chunk and query string to bitmaps
 type ChunkCache struct {
 	mutex sync.Mutex
 	cache map[*Chunk]*queryCache
@@ -30,9 +33,9 @@ func (cc *ChunkCache) retire(chunk ...*Chunk) {
 	cc.mutex.Unlock()
 }
 
-// Add adds the list to the cache
-func (cc *ChunkCache) Add(chunk *Chunk, key string, list []Result) {
-	if len(key) == 0 || !chunk.IsFull() || len(list) > queryCacheMax {
+// Add stores the bitmap for the given chunk and key
+func (cc *ChunkCache) Add(chunk *Chunk, key string, bitmap ChunkBitmap, matchCount int) {
+	if len(key) == 0 || !chunk.IsFull() || matchCount > queryCacheMax {
 		return
 	}
@@ -44,11 +47,11 @@ func (cc *ChunkCache) Add(chunk *Chunk, key string, bitmap ChunkBitmap, matchCount int) {
 		cc.cache[chunk] = &queryCache{}
 		qc = cc.cache[chunk]
 	}
-	(*qc)[key] = list
+	(*qc)[key] = bitmap
 }
 
-// Lookup is called to lookup ChunkCache
-func (cc *ChunkCache) Lookup(chunk *Chunk, key string) []Result {
+// Lookup returns the bitmap for the exact key
+func (cc *ChunkCache) Lookup(chunk *Chunk, key string) *ChunkBitmap {
 	if len(key) == 0 || !chunk.IsFull() {
 		return nil
 	}
@@ -58,15 +61,15 @@ func (cc *ChunkCache) Lookup(chunk *Chunk, key string) *ChunkBitmap {
 
 	qc, ok := cc.cache[chunk]
 	if ok {
-		list, ok := (*qc)[key]
-		if ok {
-			return list
+		if bm, ok := (*qc)[key]; ok {
+			return &bm
 		}
 	}
 	return nil
 }
 
-func (cc *ChunkCache) Search(chunk *Chunk, key string) []Result {
+// Search finds the bitmap for the longest prefix or suffix of the key
+func (cc *ChunkCache) Search(chunk *Chunk, key string) *ChunkBitmap {
 	if len(key) == 0 || !chunk.IsFull() {
 		return nil
 	}
@@ -86,8 +89,8 @@ func (cc *ChunkCache) Search(chunk *Chunk, key string) *ChunkBitmap {
 	prefix := key[:len(key)-idx]
 	suffix := key[idx:]
 	for _, substr := range [2]string{prefix, suffix} {
-		if cached, found := (*qc)[substr]; found {
-			return cached
+		if bm, found := (*qc)[substr]; found {
+			return &bm
 		}
 	}
+11
-11
@@ -6,34 +6,34 @@ func TestChunkCache(t *testing.T) {
 	cache := NewChunkCache()
 	chunk1p := &Chunk{}
 	chunk2p := &Chunk{count: chunkSize}
-	items1 := []Result{{}}
-	items2 := []Result{{}, {}}
-	cache.Add(chunk1p, "foo", items1)
-	cache.Add(chunk2p, "foo", items1)
-	cache.Add(chunk2p, "bar", items2)
+	bm1 := ChunkBitmap{1}
+	bm2 := ChunkBitmap{1, 2}
+	cache.Add(chunk1p, "foo", bm1, 1)
+	cache.Add(chunk2p, "foo", bm1, 1)
+	cache.Add(chunk2p, "bar", bm2, 2)
 
 	{ // chunk1 is not full
 		cached := cache.Lookup(chunk1p, "foo")
 		if cached != nil {
-			t.Error("Cached disabled for non-empty chunks", cached)
+			t.Error("Cached disabled for non-full chunks", cached)
 		}
 	}
 	{
 		cached := cache.Lookup(chunk2p, "foo")
-		if cached == nil || len(cached) != 1 {
-			t.Error("Expected 1 item cached", cached)
+		if cached == nil || cached[0] != 1 {
+			t.Error("Expected bitmap cached", cached)
 		}
 	}
 	{
 		cached := cache.Lookup(chunk2p, "bar")
-		if cached == nil || len(cached) != 2 {
-			t.Error("Expected 2 items cached", cached)
+		if cached == nil || cached[1] != 2 {
+			t.Error("Expected bitmap cached", cached)
 		}
 	}
 	{
 		cached := cache.Lookup(chunk1p, "foobar")
 		if cached != nil {
-			t.Error("Expected 0 item cached", cached)
+			t.Error("Expected nil cached", cached)
 		}
 	}
 }
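The bitmap-based cache in the hunks above stores one bit per chunk item instead of a slice of per-match results, which is where the changelog's memory-footprint reduction comes from: with a chunk size of 1024, a bitmap is a fixed (1024+63)/64 = 16 words = 128 bytes. A minimal sketch of such a fixed-size bitmap (hypothetical standalone names; fzf's actual type is `ChunkBitmap`):

```go
package main

import (
	"fmt"
	"math/bits"
)

const chunkSize = 1024
const chunkBitWords = (chunkSize + 63) / 64 // 16 words = 128 bytes

// chunkBitmap marks matching items within a chunk with one bit each:
// a fixed 128-byte value rather than a variable-length result slice.
type chunkBitmap [chunkBitWords]uint64

func (b *chunkBitmap) set(i int)       { b[i/64] |= 1 << (i % 64) }
func (b *chunkBitmap) test(i int) bool { return b[i/64]&(1<<(i%64)) != 0 }

// count returns the number of set bits (i.e. matching items).
func (b *chunkBitmap) count() int {
	n := 0
	for _, w := range b {
		n += bits.OnesCount64(w)
	}
	return n
}

func main() {
	var bm chunkBitmap
	for _, i := range []int{0, 63, 64, 1023} {
		bm.set(i)
	}
	fmt.Println(bm.test(64), bm.test(65), bm.count())
}
```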
+4
-5
@@ -34,19 +34,18 @@ const (
 	maxBgProcessesPerAction = 3
 
 	// Matcher
-	numPartitionsMultiplier = 8
-	maxPartitions           = 32
-	progressMinDuration     = 200 * time.Millisecond
+	progressMinDuration = 200 * time.Millisecond
 
 	// Capacity of each chunk
-	chunkSize int = 1000
+	chunkSize     int = 1024
+	chunkBitWords     = (chunkSize + 63) / 64
 
 	// Pre-allocated memory slices to minimize GC
 	slab16Size int = 100 * 1024 // 200KB * 32 = 12.8MB
 	slab32Size int = 2048 // 8KB * 32 = 256KB
 
 	// Do not cache results of low selectivity queries
-	queryCacheMax int = chunkSize / 5
+	queryCacheMax int = chunkSize / 2
 
 	// Not to cache mergers with large lists
 	mergerCacheMax int = 100000
+12
-3
@@ -56,6 +56,9 @@ func Run(opts *Options) (int, error) {
 	if opts.useTmux() {
 		return runTmux(os.Args, opts)
 	}
+	if opts.useZellij() {
+		return runZellij(os.Args, opts)
+	}
 
 	if needWinpty(opts) {
 		return runWinpty(os.Args, opts)
@@ -195,11 +198,13 @@ func Run(opts *Options) (int, error) {
 	// Reader
 	streamingFilter := opts.Filter != nil && !sort && !opts.Tac && !opts.Sync && opts.Bench == 0
 	var reader *Reader
+	var ingestionStart time.Time
 	if !streamingFilter {
 		reader = NewReader(func(data []byte) bool {
 			return chunkList.Push(data)
 		}, eventBox, executor, opts.ReadZero, opts.Filter == nil)
 
+		ingestionStart = time.Now()
 		readyChan := make(chan bool)
 		go reader.ReadSource(opts.Input, opts.WalkerRoot, opts.WalkerOpts, opts.WalkerSkip, initialReload, initialEnv, readyChan)
 		<-readyChan
@@ -283,6 +288,7 @@ func Run(opts *Options) (int, error) {
 	} else {
 		eventBox.Unwatch(EvtReadNew)
 		eventBox.WaitFor(EvtReadFin)
+		ingestionTime := time.Since(ingestionStart)
 
 		// NOTE: Streaming filter is inherently not compatible with --tail
 		snapshot, _, _ := chunkList.Snapshot(opts.Tail)
@@ -316,13 +322,14 @@ func Run(opts *Options) (int, error) {
 		}
 		avg := total / time.Duration(len(times))
 		selectivity := float64(matchCount) / float64(totalItems) * 100
-		fmt.Printf(" %d iterations avg: %.2fms min: %.2fms max: %.2fms total: %.2fs items: %d matches: %d (%.2f%%)\n",
+		fmt.Printf(" %d iterations avg: %.2fms min: %.2fms max: %.2fms total: %.2fs items: %d matches: %d (%.2f%%) ingestion: %.2fms\n",
 			len(times),
 			float64(avg.Microseconds())/1000,
 			float64(minD.Microseconds())/1000,
 			float64(maxD.Microseconds())/1000,
 			total.Seconds(),
-			totalItems, matchCount, selectivity)
+			totalItems, matchCount, selectivity,
+			float64(ingestionTime.Microseconds())/1000)
 		return ExitOk, nil
 	}
@@ -465,7 +472,9 @@ func Run(opts *Options) (int, error) {
 			if heightUnknown && !deferred {
 				determine(!reading)
 			}
-			matcher.Reset(snapshot, input(), false, !reading, sort, snapshotRevision)
+			if !useSnapshot || evt == EvtReadFin {
+				matcher.Reset(snapshot, input(), false, !reading, sort, snapshotRevision)
+			}
 
 		case EvtSearchNew:
 			var command *commandSpec
+22 -44
@@ -4,6 +4,7 @@ import (
"fmt"
"runtime"
"sync"
"sync/atomic"
"time"

"github.com/junegunn/fzf/src/util"
@@ -57,7 +58,7 @@ const (
// NewMatcher returns a new Matcher
func NewMatcher(cache *ChunkCache, patternBuilder func([]rune) *Pattern,
sort bool, tac bool, eventBox *util.EventBox, revision revision, threads int) *Matcher {
partitions := min(numPartitionsMultiplier*runtime.NumCPU(), maxPartitions)
partitions := runtime.NumCPU()
if threads > 0 {
partitions = threads
}
@@ -148,27 +149,6 @@ func (m *Matcher) Loop() {
}
}

func (m *Matcher) sliceChunks(chunks []*Chunk) [][]*Chunk {
partitions := m.partitions
perSlice := len(chunks) / partitions

if perSlice == 0 {
partitions = len(chunks)
perSlice = 1
}

slices := make([][]*Chunk, partitions)
for i := 0; i < partitions; i++ {
start := i * perSlice
end := start + perSlice
if i == partitions-1 {
end = len(chunks)
}
slices[i] = chunks[start:end]
}
return slices
}

type partialResult struct {
index int
matches []Result
@@ -192,39 +172,37 @@ func (m *Matcher) scan(request MatchRequest) MatchResult {
maxIndex := request.chunks[numChunks-1].lastIndex(minIndex)
cancelled := util.NewAtomicBool(false)

slices := m.sliceChunks(request.chunks)
numSlices := len(slices)
resultChan := make(chan partialResult, numSlices)
numWorkers := min(m.partitions, numChunks)
var nextChunk atomic.Int32
resultChan := make(chan partialResult, numWorkers)
countChan := make(chan int, numChunks)
waitGroup := sync.WaitGroup{}

for idx, chunks := range slices {
for idx := range numWorkers {
waitGroup.Add(1)
if m.slab[idx] == nil {
m.slab[idx] = util.MakeSlab(slab16Size, slab32Size)
}
go func(idx int, slab *util.Slab, chunks []*Chunk) {
defer func() { waitGroup.Done() }()
count := 0
allMatches := make([][]Result, len(chunks))
for idx, chunk := range chunks {
matches := request.pattern.Match(chunk, slab)
allMatches[idx] = matches
count += len(matches)
go func(idx int, slab *util.Slab) {
defer waitGroup.Done()
var matches []Result
for {
ci := int(nextChunk.Add(1)) - 1
if ci >= numChunks {
break
}
chunkMatches := request.pattern.Match(request.chunks[ci], slab)
matches = append(matches, chunkMatches...)
if cancelled.Get() {
return
}
countChan <- len(matches)
}
sliceMatches := make([]Result, 0, count)
for _, matches := range allMatches {
sliceMatches = append(sliceMatches, matches...)
countChan <- len(chunkMatches)
}
if m.sort && request.pattern.sortable {
m.sortBuf[idx] = radixSortResults(sliceMatches, m.tac, m.sortBuf[idx])
m.sortBuf[idx] = radixSortResults(matches, m.tac, m.sortBuf[idx])
}
resultChan <- partialResult{idx, sliceMatches}
}(idx, m.slab[idx], chunks)
resultChan <- partialResult{idx, matches}
}(idx, m.slab[idx])
}

wait := func() bool {
@@ -252,8 +230,8 @@ func (m *Matcher) scan(request MatchRequest) MatchResult {
}
}

partialResults := make([][]Result, numSlices)
for range slices {
partialResults := make([][]Result, numWorkers)
for range numWorkers {
partialResult := <-resultChan
partialResults[partialResult.index] = partialResult.matches
}

+2 -9
@@ -136,14 +136,7 @@ func (mg *Merger) Get(idx int) Result {
if mg.tac {
idx = mg.count - idx - 1
}
for _, list := range mg.lists {
numItems := len(list)
if idx < numItems {
return list[idx]
}
idx -= numItems
}
panic(fmt.Sprintf("Index out of bounds (unsorted, %d/%d)", idx, mg.count))
return mg.mergedGet(idx)
}

func (mg *Merger) ToMap() map[int32]Result {
@@ -171,7 +164,7 @@ func (mg *Merger) mergedGet(idx int) Result {
}
if cursor >= 0 {
rank := list[cursor]
if minIdx < 0 || compareRanks(rank, minRank, mg.tac) {
if minIdx < 0 || mg.sorted && compareRanks(rank, minRank, mg.tac) || !mg.sorted && rank.item.Index() < minRank.item.Index() {
minRank = rank
minIdx = listIdx
}

+17 -2
@@ -54,10 +54,25 @@ func buildLists(partiallySorted bool) ([][]Result, []Result) {
}

func TestMergerUnsorted(t *testing.T) {
lists, items := buildLists(false)
lists, _ := buildLists(false)

// Sort each list by index to simulate real worker behavior
// (workers process chunks in ascending order via nextChunk.Add(1))
for _, list := range lists {
sort.Slice(list, func(i, j int) bool {
return list[i].item.Index() < list[j].item.Index()
})
}
items := []Result{}
for _, list := range lists {
items = append(items, list...)
}
sort.Slice(items, func(i, j int) bool {
return items[i].item.Index() < items[j].item.Index()
})
cnt := len(items)

// Not sorted: same order
// Not sorted: items in ascending index order
mg := NewMerger(nil, lists, false, false, revision{}, 0, 0)
assert(t, cnt == mg.Length(), "Invalid Length")
for i := range cnt {

+25 -8
@@ -75,9 +75,10 @@ Usage: fzf [options]
--min-height=HEIGHT[+] Minimum height when --height is given as a percentage.
Add '+' to automatically increase the value
according to the other layout options (default: 10+).
--tmux[=OPTS] Start fzf in a tmux popup (requires tmux 3.3+)
--popup[=OPTS] Start fzf in a popup window (requires tmux 3.3+ or Zellij 0.44+)
[center|top|bottom|left|right][,SIZE[%]][,SIZE[%]]
[,border-native] (default: center,50%)
--tmux[=OPTS] Alias for --popup

LAYOUT
--layout=LAYOUT Choose layout: [default|reverse|reverse-list]
@@ -101,6 +102,7 @@ Usage: fzf [options]
--no-multi-line Disable multi-line display of items when using --read0
--raw Enable raw mode (show non-matching items)
--track Track the current selection when the result is updated
--id-nth=N[,..] Define item identity fields for cross-reload operations
--tac Reverse the order of the input
--gap[=N] Render empty lines between each item
--gap-line[=STR] Draw horizontal line on each gap using the string
@@ -416,7 +418,7 @@ func parseTmuxOptions(arg string, index int) (*tmuxOptions, error) {
var err error
opts := defaultTmuxOptions(index)
tokens := splitRegexp.Split(arg, -1)
errorToReturn := errors.New("invalid tmux option: " + arg + " (expected: [center|top|bottom|left|right][,SIZE[%]][,SIZE[%][,border-native]])")
errorToReturn := errors.New("invalid popup option: " + arg + " (expected: [center|top|bottom|left|right][,SIZE[%]][,SIZE[%][,border-native]])")
if len(tokens) == 0 || len(tokens) > 4 {
return nil, errorToReturn
}
@@ -594,6 +596,7 @@ type Options struct {
Sort int
Raw bool
Track trackOption
IdNth []Range
Tac bool
Tail int
Criteria []criterion
@@ -1610,7 +1613,7 @@ func parseWalkerOpts(str string) (walkerOpts, error) {
}

var (
executeRegexp *regexp.Regexp
argActionRegexp *regexp.Regexp
splitRegexp *regexp.Regexp
actionNameRegexp *regexp.Regexp
)
@@ -1629,7 +1632,7 @@ const (
)

func init() {
executeRegexp = regexp.MustCompile(
argActionRegexp = regexp.MustCompile(
`(?si)[:+](become|execute(?:-multi|-silent)?|reload(?:-sync)?|preview|(?:change|bg-transform|transform)-(?:query|prompt|(?:border|list|preview|input|header|footer)-label|header-lines|header|footer|search|with-nth|nth|pointer|ghost)|bg-transform|transform|change-(?:preview-window|preview|multi)|(?:re|un|toggle-)bind|pos|put|print|search|trigger)`)
splitRegexp = regexp.MustCompile("[,:]+")
actionNameRegexp = regexp.MustCompile("(?i)^[a-z-]+")
@@ -1639,7 +1642,7 @@ func maskActionContents(action string) string {
masked := ""
Loop:
for len(action) > 0 {
loc := executeRegexp.FindStringIndex(action)
loc := argActionRegexp.FindStringIndex(action)
if loc == nil {
masked += action
break
@@ -1694,7 +1697,7 @@ Loop:
}

func parseSingleActionList(str string) ([]*action, error) {
// We prepend a colon to satisfy executeRegexp and remove it later
// We prepend a colon to satisfy argActionRegexp and remove it later
masked := maskActionContents(":" + str)[1:]
return parseActionList(masked, str, []*action{}, false)
}
@@ -2634,7 +2637,7 @@ func parseOptions(index *int, opts *Options, allArgs []string) error {
opts.Version = true
case "--no-winpty":
opts.NoWinpty = true
case "--tmux":
case "--tmux", "--popup":
given, str := optionalNextString()
if given {
if opts.Tmux, err = parseTmuxOptions(str, index); err != nil {
@@ -2643,7 +2646,7 @@ func parseOptions(index *int, opts *Options, allArgs []string) error {
} else {
opts.Tmux = defaultTmuxOptions(index)
}
case "--no-tmux":
case "--no-tmux", "--no-popup":
opts.Tmux = nil
case "--tty-default":
if opts.TtyDefault, err = nextString("tty device name required"); err != nil {
@@ -2811,6 +2814,16 @@ func parseOptions(index *int, opts *Options, allArgs []string) error {
opts.Track = trackEnabled
case "--no-track":
opts.Track = trackDisabled
case "--id-nth":
str, err := nextString("nth expression required")
if err != nil {
return err
}
if opts.IdNth, err = splitNth(str); err != nil {
return err
}
case "--no-id-nth":
opts.IdNth = nil
case "--tac":
opts.Tac = true
case "--no-tac":
@@ -3615,6 +3628,10 @@ func (opts *Options) useTmux() bool {
return opts.Tmux != nil && len(os.Getenv("TMUX")) > 0 && opts.Tmux.index >= opts.Height.index
}

func (opts *Options) useZellij() bool {
return opts.Tmux != nil && len(os.Getenv("ZELLIJ")) > 0 && opts.Tmux.index >= opts.Height.index
}

func (opts *Options) noSeparatorLine() bool {
if opts.Inputless {
return true

+43 -60
@@ -61,7 +61,7 @@ type Pattern struct {
delimiter Delimiter
nth []Range
revision revision
procFun map[termType]algo.Algo
procFun [6]algo.Algo
cache *ChunkCache
denylist map[int32]struct{}
startIndex int32
@@ -150,7 +150,7 @@ func BuildPattern(cache *ChunkCache, patternCache map[string]*Pattern, fuzzy boo
cache: cache,
denylist: denylist,
startIndex: startIndex,
procFun: make(map[termType]algo.Algo)}
}

ptr.cacheKey = ptr.buildCacheKey()
ptr.directAlgo, ptr.directTerm = ptr.buildDirectAlgo(fuzzyAlgo)
@@ -300,104 +300,87 @@ func (p *Pattern) CacheKey() string {

// Match returns the list of matches Items in the given Chunk
func (p *Pattern) Match(chunk *Chunk, slab *util.Slab) []Result {
// ChunkCache: Exact match
cacheKey := p.CacheKey()

// Bitmap cache: exact match or prefix/suffix
var cachedBitmap *ChunkBitmap
if p.cacheable {
if cached := p.cache.Lookup(chunk, cacheKey); cached != nil {
return cached
}
cachedBitmap = p.cache.Lookup(chunk, cacheKey)
}
if cachedBitmap == nil {
cachedBitmap = p.cache.Search(chunk, cacheKey)
}

// Prefix/suffix cache
space := p.cache.Search(chunk, cacheKey)

matches := p.matchChunk(chunk, space, slab)
matches, bitmap := p.matchChunk(chunk, cachedBitmap, slab)

if p.cacheable {
p.cache.Add(chunk, cacheKey, matches)
p.cache.Add(chunk, cacheKey, bitmap, len(matches))
}
return matches
}

func (p *Pattern) matchChunk(chunk *Chunk, space []Result, slab *util.Slab) []Result {
func (p *Pattern) matchChunk(chunk *Chunk, cachedBitmap *ChunkBitmap, slab *util.Slab) ([]Result, ChunkBitmap) {
matches := []Result{}
var bitmap ChunkBitmap

// Skip header items in chunks that contain them
startIdx := 0
if p.startIndex > 0 && chunk.count > 0 && chunk.items[0].Index() < p.startIndex {
startIdx = int(p.startIndex - chunk.items[0].Index())
if startIdx >= chunk.count {
return matches
return matches, bitmap
}
}

hasCachedBitmap := cachedBitmap != nil

// Fast path: single fuzzy term, no nth, no denylist.
// Calls the algo function directly, bypassing MatchItem/extendedMatch/iter
// and avoiding per-match []Offset heap allocation.
if p.directAlgo != nil && len(p.denylist) == 0 {
t := p.directTerm
if space == nil {
for idx := startIdx; idx < chunk.count; idx++ {
res, _ := p.directAlgo(t.caseSensitive, t.normalize, p.forward,
&chunk.items[idx].text, t.text, p.withPos, slab)
if res.Start >= 0 {
matches = append(matches, buildResultFromBounds(
&chunk.items[idx], res.Score,
int(res.Start), int(res.End), int(res.End), true))
}
for idx := startIdx; idx < chunk.count; idx++ {
if hasCachedBitmap && cachedBitmap[idx/64]&(uint64(1)<<(idx%64)) == 0 {
continue
}
} else {
for _, result := range space {
res, _ := p.directAlgo(t.caseSensitive, t.normalize, p.forward,
&result.item.text, t.text, p.withPos, slab)
if res.Start >= 0 {
matches = append(matches, buildResultFromBounds(
result.item, res.Score,
int(res.Start), int(res.End), int(res.End), true))
}
res, _ := p.directAlgo(t.caseSensitive, t.normalize, p.forward,
&chunk.items[idx].text, t.text, p.withPos, slab)
if res.Start >= 0 {
bitmap[idx/64] |= uint64(1) << (idx % 64)
matches = append(matches, buildResultFromBounds(
&chunk.items[idx], res.Score,
int(res.Start), int(res.End), int(res.End), true))
}
}
return matches
return matches, bitmap
}

if len(p.denylist) == 0 {
// Huge code duplication for minimizing unnecessary map lookups
if space == nil {
for idx := startIdx; idx < chunk.count; idx++ {
if match, _, _ := p.MatchItem(&chunk.items[idx], p.withPos, slab); match.item != nil {
matches = append(matches, match)
}
}
} else {
for _, result := range space {
if match, _, _ := p.MatchItem(result.item, p.withPos, slab); match.item != nil {
matches = append(matches, match)
}
}
}
return matches
}

if space == nil {
for idx := startIdx; idx < chunk.count; idx++ {
if _, prs := p.denylist[chunk.items[idx].Index()]; prs {
if hasCachedBitmap && cachedBitmap[idx/64]&(uint64(1)<<(idx%64)) == 0 {
continue
}
if match, _, _ := p.MatchItem(&chunk.items[idx], p.withPos, slab); match.item != nil {
bitmap[idx/64] |= uint64(1) << (idx % 64)
matches = append(matches, match)
}
}
} else {
for _, result := range space {
if _, prs := p.denylist[result.item.Index()]; prs {
continue
}
if match, _, _ := p.MatchItem(result.item, p.withPos, slab); match.item != nil {
matches = append(matches, match)
}
return matches, bitmap
}

for idx := startIdx; idx < chunk.count; idx++ {
if hasCachedBitmap && cachedBitmap[idx/64]&(uint64(1)<<(idx%64)) == 0 {
continue
}
if _, prs := p.denylist[chunk.items[idx].Index()]; prs {
continue
}
if match, _, _ := p.MatchItem(&chunk.items[idx], p.withPos, slab); match.item != nil {
bitmap[idx/64] |= uint64(1) << (idx % 64)
matches = append(matches, match)
}
}
return matches
return matches, bitmap
}

// MatchItem returns the match result if the Item is a match.

+118 -1
@@ -2,6 +2,7 @@ package fzf

import (
"reflect"
"runtime"
"testing"

"github.com/junegunn/fzf/src/algo"
@@ -137,7 +138,7 @@ func TestOrigTextAndTransformed(t *testing.T) {
origText: &origBytes,
transformed: &transformed{pattern.revision, trans}}
pattern.extended = extended
matches := pattern.matchChunk(&chunk, nil, slab) // No cache
matches, _ := pattern.matchChunk(&chunk, nil, slab) // No cache
if !(matches[0].item.text.ToString() == "junegunn" &&
string(*matches[0].item.origText) == "junegunn.choi" &&
reflect.DeepEqual((*matches[0].item.transformed).tokens, trans)) {
@@ -199,3 +200,119 @@ func TestCacheable(t *testing.T) {
test(false, "foo 'bar", "foo", false)
test(false, "foo !bar", "foo", false)
}

func buildChunks(numChunks int) []*Chunk {
chunks := make([]*Chunk, numChunks)
words := []string{
"src/main/java/com/example/service/UserService.java",
"src/test/java/com/example/service/UserServiceTest.java",
"docs/api/reference/endpoints.md",
"lib/internal/utils/string_helper.go",
"pkg/server/http/handler/auth.go",
"build/output/release/app.exe",
"config/production/database.yml",
"scripts/deploy/kubernetes/setup.sh",
"vendor/github.com/junegunn/fzf/src/core.go",
"node_modules/.cache/babel/transform.js",
}
for ci := range numChunks {
chunks[ci] = &Chunk{count: chunkSize}
for i := range chunkSize {
text := words[(ci*chunkSize+i)%len(words)]
chunks[ci].items[i] = Item{text: util.ToChars([]byte(text))}
chunks[ci].items[i].text.Index = int32(ci*chunkSize + i)
}
}
return chunks
}

func buildPatternWith(cache *ChunkCache, runes []rune) *Pattern {
return BuildPattern(cache, make(map[string]*Pattern),
true, algo.FuzzyMatchV2, true, CaseSmart, false, true,
false, true, []Range{}, Delimiter{}, revision{}, runes, nil, 0)
}

func TestBitmapCacheBenefit(t *testing.T) {
numChunks := 100
chunks := buildChunks(numChunks)
queries := []string{"s", "se", "ser", "serv", "servi"}

// 1. Run all queries with shared cache (simulates incremental typing)
cache := NewChunkCache()
for _, q := range queries {
pat := buildPatternWith(cache, []rune(q))
for _, chunk := range chunks {
pat.Match(chunk, slab)
}
}

// 2. GC and measure memory with cache populated
runtime.GC()
runtime.GC()
var memWith runtime.MemStats
runtime.ReadMemStats(&memWith)

// 3. Clear cache, GC, measure again
cache.Clear()
runtime.GC()
runtime.GC()
var memWithout runtime.MemStats
runtime.ReadMemStats(&memWithout)

cacheMem := int64(memWith.Alloc) - int64(memWithout.Alloc)
t.Logf("Chunks: %d, Queries: %d", numChunks, len(queries))
t.Logf("Cache memory: %d bytes (%.1f KB)", cacheMem, float64(cacheMem)/1024)
t.Logf("Per-chunk-per-query: %.0f bytes", float64(cacheMem)/float64(numChunks*len(queries)))

// 4. Verify correctness: cached vs uncached produce same results
cache2 := NewChunkCache()
for _, q := range queries {
pat := buildPatternWith(cache2, []rune(q))
for _, chunk := range chunks {
pat.Match(chunk, slab)
}
}
for _, q := range queries {
patCached := buildPatternWith(cache2, []rune(q))
patFresh := buildPatternWith(NewChunkCache(), []rune(q))
var countCached, countFresh int
for _, chunk := range chunks {
countCached += len(patCached.Match(chunk, slab))
countFresh += len(patFresh.Match(chunk, slab))
}
if countCached != countFresh {
t.Errorf("query=%q: cached=%d, fresh=%d", q, countCached, countFresh)
}
t.Logf("query=%q: matches=%d", q, countCached)
}
}

func BenchmarkWithCache(b *testing.B) {
numChunks := 100
chunks := buildChunks(numChunks)
queries := []string{"s", "se", "ser", "serv", "servi"}

b.Run("cached", func(b *testing.B) {
for range b.N {
cache := NewChunkCache()
for _, q := range queries {
pat := buildPatternWith(cache, []rune(q))
for _, chunk := range chunks {
pat.Match(chunk, slab)
}
}
}
})

b.Run("uncached", func(b *testing.B) {
for range b.N {
for _, q := range queries {
cache := NewChunkCache()
pat := buildPatternWith(cache, []rune(q))
for _, chunk := range chunks {
pat.Match(chunk, slab)
}
}
}
})
}

@@ -6,5 +6,5 @@ import "golang.org/x/sys/unix"

// Protect calls OS specific protections like pledge on OpenBSD
func Protect() {
unix.PledgePromises("stdio dpath wpath rpath tty proc exec inet tmppath")
unix.PledgePromises("stdio cpath dpath wpath rpath inet fattr unix tty proc exec")
}

@@ -23,6 +23,32 @@ func escapeSingleQuote(str string) string {
return "'" + strings.ReplaceAll(str, "'", "'\\''") + "'"
}

func popupArgStr(args []string, opts *Options) (string, string) {
fzf, rest := args[0], args[1:]
args = []string{"--bind=ctrl-z:ignore"}
if !opts.Tmux.border && (opts.BorderShape == tui.BorderUndefined || opts.BorderShape == tui.BorderLine) {
if tui.DefaultBorderShape == tui.BorderRounded {
rest = append(rest, "--border=rounded")
} else {
rest = append(rest, "--border=sharp")
}
}
if opts.Tmux.border && opts.Margin == defaultMargin() {
args = append(args, "--margin=0,1")
}
argStr := escapeSingleQuote(fzf)
for _, arg := range append(args, rest...) {
argStr += " " + escapeSingleQuote(arg)
}
argStr += ` --no-popup --no-height`

dir, err := os.Getwd()
if err != nil {
dir = "."
}
return argStr, dir
}

func fifo(name string) (string, error) {
ns := time.Now().UnixNano()
output := filepath.Join(os.TempDir(), fmt.Sprintf("fzf-%s-%d", name, ns))

@@ -274,6 +274,24 @@ func (r *Reader) readFiles(roots []string, opts walkerOpts, ignores []string) bo
ToSlash: fastwalk.DefaultToSlash(),
Sort: fastwalk.SortFilesFirst,
}

// When following symlinks, precompute the absolute real paths of walker
// roots so we can skip symlinks that point to an ancestor. fastwalk's
// built-in loop detection (shouldTraverse) catches loops on the second
// pass, but a single pass through a symlink like z: -> / already
// traverses the entire root filesystem, causing severe resource
// exhaustion. Skipping ancestor symlinks prevents this entirely.
var absRoots []string
if opts.follow {
for _, root := range roots {
if real, err := filepath.EvalSymlinks(root); err == nil {
if abs, err := filepath.Abs(real); err == nil {
absRoots = append(absRoots, filepath.Clean(abs))
}
}
}
}

ignoresBase := []string{}
ignoresFull := []string{}
ignoresSuffix := []string{}
@@ -307,6 +325,24 @@ func (r *Reader) readFiles(roots []string, opts walkerOpts, ignores []string) bo
if isDirSymlink && !opts.follow {
return filepath.SkipDir
}
// Skip symlinks whose target is an ancestor of (or equal to)
// any walker root. Following such symlinks would traverse a
// superset of the tree we're already walking.
if isDirSymlink && len(absRoots) > 0 {
if target, err := filepath.EvalSymlinks(path); err == nil {
if abs, err := filepath.Abs(target); err == nil {
abs = filepath.Clean(abs)
if abs == string(os.PathSeparator) {
return filepath.SkipDir
}
for _, absRoot := range absRoots {
if absRoot == abs || strings.HasPrefix(absRoot, abs+string(os.PathSeparator)) {
return filepath.SkipDir
}
}
}
}
}
isDir := de.IsDir() || isDirSymlink
if isDir {
base := filepath.Base(path)

+179 -34
@@ -216,8 +216,9 @@ const (
)

type StatusItem struct {
Index int `json:"index"`
Text string `json:"text"`
Index int `json:"index"`
Text string `json:"text"`
Positions []int `json:"positions,omitempty"`
}

type Status struct {
@@ -314,6 +315,12 @@ type Terminal struct {
sort bool
toggleSort bool
track trackOption
idNth []Range
trackKey string
trackBlocked bool
trackSync bool
trackKeyCache map[int32]bool
pendingSelections map[string]selectedItem
targetIndex int32
delimiter Delimiter
expect map[tui.Event]string
@@ -387,6 +394,7 @@ type Terminal struct {
hasLoadActions bool
hasResizeActions bool
triggerLoad bool
pendingReqList bool
filterSelection bool
reading bool
running *util.AtomicBool
@@ -1043,6 +1051,7 @@ func NewTerminal(opts *Options, eventBox *util.EventBox, executor *util.Executor
sort: opts.Sort > 0,
toggleSort: opts.ToggleSort,
track: opts.Track,
idNth: opts.IdNth,
targetIndex: minItem.Index(),
delimiter: opts.Delimiter,
expect: opts.Expect,
@@ -1850,7 +1859,14 @@ func (t *Terminal) UpdateList(result MatchResult) {
}
if t.revision != newRevision {
if !t.revision.compatible(newRevision) {
// Reloaded: clear selection
// Reloaded: capture selection keys for restoration, then clear (reload-sync only)
if t.trackSync && len(t.idNth) > 0 && t.multi > 0 && len(t.selected) > 0 {
t.pendingSelections = make(map[string]selectedItem, len(t.selected))
for _, sel := range t.selected {
key := t.trackKeyFor(sel.item, t.idNth)
t.pendingSelections[key] = sel
}
}
t.selected = make(map[int32]selectedItem)
t.clearNumLinesCache()
} else {
@@ -1891,9 +1907,36 @@ func (t *Terminal) UpdateList(result MatchResult) {
}
if t.triggerLoad {
t.triggerLoad = false
t.pendingReqList = true
t.eventChan <- tui.Load.AsEvent()
}
if prevIndex >= 0 {
// Search for the tracked item by nth key
// - reload (async): search eagerly, unblock as soon as match is found
// - reload-sync: wait until stream is complete before searching
trackWasBlocked := t.trackBlocked
if len(t.trackKey) > 0 && (!t.trackSync || !t.reading) {
found := false
for i := 0; i < t.merger.Length(); i++ {
item := t.merger.Get(i).item
idx := item.Index()
match, ok := t.trackKeyCache[idx]
if !ok {
match = t.trackKeyFor(item, t.idNth) == t.trackKey
t.trackKeyCache[idx] = match
}
if match {
t.cy = i
if t.track.Current() {
t.track.index = idx
}
found = true
break
}
}
if found || !t.reading {
t.unblockTrack()
}
} else if prevIndex >= 0 {
pos := t.cy - t.offset
count := t.merger.Length()
i := t.merger.FindIndex(prevIndex)
@@ -1909,12 +1952,25 @@ func (t *Terminal) UpdateList(result MatchResult) {
t.cy = count - min(count, t.maxItems()) + pos
}
}
// Restore selections by id-nth key after reload completes
if !t.reading && t.pendingSelections != nil {
for i := 0; i < t.merger.Length() && len(t.pendingSelections) > 0; i++ {
item := t.merger.Get(i).item
key := t.trackKeyFor(item, t.idNth)
if sel, found := t.pendingSelections[key]; found {
t.selected[item.Index()] = selectedItem{sel.at, item}
delete(t.pendingSelections, key)
}
}
t.pendingSelections = nil
}
needActivation := false
if !t.reading {
switch t.resultMerger.Length() {
case 0:
zero := tui.Zero.AsEvent()
if _, prs := t.keymap[zero]; prs {
t.pendingReqList = true
t.eventChan <- zero
}
// --sync, only 'focus' is bound, but no items to focus
@@ -1922,16 +1978,26 @@ func (t *Terminal) UpdateList(result MatchResult) {
case 1:
one := tui.One.AsEvent()
if _, prs := t.keymap[one]; prs {
t.pendingReqList = true
t.eventChan <- one
}
}
}
if t.hasResultActions {
t.pendingReqList = true
t.eventChan <- tui.Result.AsEvent()
}
updateList := !t.trackBlocked && !t.pendingReqList
updatePrompt := trackWasBlocked && !t.trackBlocked
t.mutex.Unlock()

t.reqBox.Set(reqInfo, nil)
t.reqBox.Set(reqList, nil)
if updateList {
t.reqBox.Set(reqList, nil)
}
if updatePrompt {
t.reqBox.Set(reqPrompt, nil)
}
if needActivation {
t.reqBox.Set(reqActivate, nil)
}
@@ -2177,7 +2243,7 @@ func (t *Terminal) resizeWindows(forcePreview bool, redrawBorder bool) {
width := screenWidth - marginInt[1] - marginInt[3]
height := screenHeight - marginInt[0] - marginInt[2]

t.prevLines = make([]itemLine, screenHeight)
t.prevLines = make([]itemLine, max(1, screenHeight))
if t.border != nil && redrawBorder {
t.border = nil
}
@@ -2880,6 +2946,8 @@ func (t *Terminal) printPrompt() {
color := tui.ColInput
if t.paused {
color = tui.ColDisabled
} else if t.trackBlocked {
color = color.WithAttr(tui.Dim)
}
w.CPrint(color, string(before))
w.CPrint(color, string(after))
@@ -2963,18 +3031,6 @@ func (t *Terminal) printInfoImpl() {
found := t.resultMerger.Length()
total := max(found, t.count)
output := fmt.Sprintf("%d/%d", found, total)
if t.toggleSort {
if t.sort {
output += " +S"
} else {
output += " -S"
}
}
if t.track.Global() {
output += " +T"
} else if t.track.Current() {
output += " +t"
}
if t.multi > 0 {
if t.multi == maxMulti {
output += fmt.Sprintf(" (%d)", len(t.selected))
@@ -2985,6 +3041,26 @@ func (t *Terminal) printInfoImpl() {
if t.progress > 0 && t.progress < 100 {
output += fmt.Sprintf(" (%d%%)", t.progress)
}
if t.toggleSort {
if t.sort {
output += " +S"
} else {
output += " -S"
}
}
if t.track.Global() {
if t.trackBlocked {
output += " +T*"
} else {
output += " +T"
}
} else if t.track.Current() {
if t.trackBlocked {
output += " +t*"
} else {
output += " +t"
}
}
if t.failed != nil && t.count == 0 {
output = fmt.Sprintf("[Command failed: %s]", *t.failed)
}
@@ -3905,6 +3981,7 @@ func (t *Terminal) printHighlighted(result Result, colBase tui.ColorPair, colMat
frozenRight = line[splitOffsetRight:]
}
displayWidthSum := 0
displayWidthLeft := 0
todo := [3]func(){}
for fidx, runes := range [][]rune{frozenLeft, frozenRight, middle} {
if len(runes) == 0 {
@@ -3930,7 +4007,11 @@ func (t *Terminal) printHighlighted(result Result, colBase tui.ColorPair, colMat
// For frozen parts, reserve space for the ellipsis in the middle part
adjustedMaxWidth -= ellipsisWidth
}
displayWidth = t.displayWidthWithLimit(runes, 0, adjustedMaxWidth)
var prefixWidth int
if fidx == 2 {
prefixWidth = displayWidthLeft
}
displayWidth = t.displayWidthWithLimit(runes, prefixWidth, adjustedMaxWidth)
if !t.wrap && displayWidth > adjustedMaxWidth {
maxe = util.Constrain(maxe+min(maxWidth/2-ellipsisWidth, t.hscrollOff), 0, len(runes))
transformOffsets := func(diff int32) {
@@ -3968,6 +4049,9 @@ func (t *Terminal) printHighlighted(result Result, colBase tui.ColorPair, colMat
displayWidth = t.displayWidthWithLimit(runes, 0, maxWidth)
}
displayWidthSum += displayWidth
if fidx == 0 {
displayWidthLeft = displayWidth
}

if maxWidth > 0 {
color := colBase
@@ -3975,7 +4059,7 @@ func (t *Terminal) printHighlighted(result Result, colBase tui.ColorPair, colMat
color = color.WithFg(t.theme.Nomatch)
}
todo[fidx] = func() {
t.printColoredString(t.window, runes, offs, color)
t.printColoredString(t.window, runes, offs, color, prefixWidth)
}
} else {
break
@@ -4002,10 +4086,13 @@ func (t *Terminal) printHighlighted(result Result, colBase tui.ColorPair, colMat
return finalLineNum
}

func (t *Terminal) printColoredString(window tui.Window, text []rune, offsets []colorOffset, colBase tui.ColorPair) {
func (t *Terminal) printColoredString(window tui.Window, text []rune, offsets []colorOffset, colBase tui.ColorPair, initialPrefixWidth ...int) {
var index int32
var substr string
|
||||
var prefixWidth int
|
||||
if len(initialPrefixWidth) > 0 {
|
||||
prefixWidth = initialPrefixWidth[0]
|
||||
}
|
||||
maxOffset := int32(len(text))
|
||||
var url *url
|
||||
for _, offset := range offsets {
|
||||
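The new `printColoredString` signature above uses a variadic `initialPrefixWidth ...int` to add an optional argument without changing existing call sites. A minimal standalone sketch of the same idiom (the `pad` function and its behavior are hypothetical, not part of fzf):

```go
package main

import "fmt"

// pad takes an optional width via a variadic parameter: zero arguments keep
// the old behavior, one argument overrides the default. This mirrors how
// initialPrefixWidth ...int is added without touching old callers.
func pad(text string, width ...int) string {
	w := 0 // default when no width is given
	if len(width) > 0 {
		w = width[0]
	}
	// "%*s" prints w spaces before the text
	return fmt.Sprintf("%*s%s", w, "", text)
}

func main() {
	fmt.Printf("%q\n", pad("x"))    // old call site, no extra argument
	fmt.Printf("%q\n", pad("x", 3)) // new call site, prefix width 3
}
```

The trade-off is that the compiler no longer enforces "at most one" extra argument, which is why the function only reads `width[0]`.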
@@ -4212,7 +4299,7 @@ func (t *Terminal) followOffset() int {
for i := len(body) - 1; i >= 0; i-- {
h := t.previewLineHeight(body[i], maxWidth)
if visualLines+h > height {
return headerLines + i + 1
return min(len(lines)-1, headerLines+i+1)
}
visualLines += h
}
@@ -4510,7 +4597,7 @@ Loop:
}
}

t.previewer.scrollable = t.previewer.scrollable || t.pwindow.Y() == height-1 && t.pwindow.X() == t.pwindow.Width()
t.previewer.scrollable = t.previewer.scrollable || t.pwindow.Y() == height-1 && t.pwindow.X() == t.pwindow.Width() || t.previewed.filled
if fillRet == tui.FillNextLine {
continue
} else if fillRet == tui.FillSuspend {
@@ -4533,7 +4620,7 @@ Loop:
}
lineNo++
}
t.previewer.scrollable = t.previewer.scrollable || index < len(lines)-1
t.previewer.scrollable = t.previewer.scrollable || t.previewed.filled || index < len(lines)-1
t.previewed.image = image
t.previewed.wireframe = wireframe
}
@@ -4543,7 +4630,7 @@ func (t *Terminal) renderPreviewScrollbar(yoff int, barLength int, barStart int)
w := t.pborder.Width()
xw := [2]int{t.pwindow.Left(), t.pwindow.Width()}
redraw := false
if len(t.previewer.bar) != height || t.previewer.xw != xw {
if len(t.previewer.bar) != height || t.previewer.xw != xw || t.previewed.version != t.previewer.version {
redraw = true
t.previewer.bar = make([]bool, height)
t.previewer.xw = xw
@@ -5355,6 +5442,22 @@ func (t *Terminal) currentIndex() int32 {
return minItem.Index()
}

func (t *Terminal) trackKeyFor(item *Item, nth []Range) string {
tokens := Tokenize(item.AsString(t.ansi), t.delimiter)
return StripLastDelimiter(JoinTokens(Transform(tokens, nth)), t.delimiter)
}

func (t *Terminal) unblockTrack() {
if t.trackBlocked {
t.trackBlocked = false
t.trackKey = ""
t.trackKeyCache = nil
if !t.inputless {
t.tui.ShowCursor()
}
}
}
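The `trackKeyFor` function above derives a stable key from the `--id-nth` fields of an item, so the cursor can re-locate "the same" item after a reload even when ordering or other fields change. A simplified, self-contained sketch of the idea (splitting on whitespace stands in for the real Tokenize/Transform/StripLastDelimiter pipeline; `keyOf` and `findByKey` are hypothetical names):

```go
package main

import (
	"fmt"
	"strings"
)

// keyOf extracts the first whitespace-delimited field, a simplified
// stand-in for Tokenize + Transform(nth) + StripLastDelimiter.
func keyOf(line string) string {
	return strings.Fields(line)[0]
}

// findByKey re-locates an item by its tracking key after a reload.
func findByKey(items []string, key string) int {
	for i, item := range items {
		if keyOf(item) == key {
			return i
		}
	}
	return -1 // not found: the caller keeps the cursor where it was
}

func main() {
	before := []string{"1 apple", "2 banana", "3 cherry"}
	key := keyOf(before[1]) // cursor on "2 banana", key is "2"
	after := []string{"3 cranberry", "1 apricot", "2 blueberry"}
	fmt.Println(findByKey(after, key)) // cursor follows the "2" item
}
```

The `trackKeyCache` map in the diff plays the role of memoizing `keyOf` per item index so the search does not re-tokenize the same items on every streamed batch.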
func (t *Terminal) addClickHeaderWord(env []string) []string {
/*
* echo $'HL1\nHL2' | fzf --header-lines 3 --header $'H1\nH2' --header-lines-border --bind 'click-header:preview:env | grep FZF_CLICK'
@@ -6176,6 +6279,14 @@ func (t *Terminal) Loop() error {
callback(a.a)
}
}
// When track-blocked, only allow abort/cancel and track-disabling actions
if t.trackBlocked && a.t != actToggleTrack && a.t != actToggleTrackCurrent && a.t != actUntrackCurrent {
if a.t == actAbort || a.t == actCancel {
t.unblockTrack()
req(reqPrompt, reqInfo)
}
return true
}
Action:
switch a.t {
case actIgnore, actStart, actClick:
@@ -6947,14 +7058,16 @@ func (t *Terminal) Loop() error {
case trackDisabled:
t.track = trackEnabled
}
req(reqInfo)
t.unblockTrack()
req(reqPrompt, reqInfo)
case actToggleTrackCurrent:
if t.track.Current() {
t.track = trackDisabled
} else if t.track.Disabled() {
t.track = trackCurrent(t.currentIndex())
}
req(reqInfo)
t.unblockTrack()
req(reqPrompt, reqInfo)
case actShowHeader:
t.headerVisible = true
req(reqList, reqInfo, reqPrompt, reqHeader)
@@ -7017,7 +7130,8 @@ func (t *Terminal) Loop() error {
if t.track.Current() {
t.track = trackDisabled
}
req(reqInfo)
t.unblockTrack()
req(reqPrompt, reqInfo)
case actSearch:
override := []rune(a.a)
t.inputOverride = &override
@@ -7070,10 +7184,12 @@ func (t *Terminal) Loop() error {
}
if !me.Down {
barDragging = false
pmx, pmy = -1, -1
}
if !me.Down || !t.hasPreviewWindow() {
pbarDragging = false
pborderDragging = -1
previewDraggingPos = -1
pmx, pmy = -1, -1
}

// Scrolling
@@ -7101,7 +7217,7 @@ func (t *Terminal) Loop() error {
}

// Preview dragging
if me.Down && (previewDraggingPos >= 0 || click && t.hasPreviewWindow() && t.pwindow.Enclose(my, mx)) {
if t.hasPreviewWindow() && me.Down && (previewDraggingPos >= 0 || click && t.pwindow.Enclose(my, mx)) {
if previewDraggingPos > 0 {
scrollPreviewBy(previewDraggingPos - my)
}
@@ -7111,7 +7227,7 @@ func (t *Terminal) Loop() error {

// Preview scrollbar dragging
headerLines := t.activePreviewOpts.headerLines
pbarDragging = me.Down && (pbarDragging || click && t.hasPreviewWindow() && my >= t.pwindow.Top()+headerLines && my < t.pwindow.Top()+t.pwindow.Height() && mx == t.pwindow.Left()+t.pwindow.Width())
pbarDragging = t.hasPreviewWindow() && me.Down && (pbarDragging || click && my >= t.pwindow.Top()+headerLines && my < t.pwindow.Top()+t.pwindow.Height() && mx == t.pwindow.Left()+t.pwindow.Width())
if pbarDragging {
effectiveHeight := t.pwindow.Height() - headerLines
numLines := len(t.previewer.lines) - headerLines
@@ -7128,7 +7244,7 @@ func (t *Terminal) Loop() error {
}

// Preview border dragging (resizing)
if pborderDragging < 0 && click && t.hasPreviewWindow() {
if t.hasPreviewWindow() && pborderDragging < 0 && click {
switch t.activePreviewOpts.position {
case posUp:
if t.pborder.Enclose(my, mx) && my == t.pborder.Top()+t.pborder.Height()-1 {
@@ -7157,7 +7273,7 @@ func (t *Terminal) Loop() error {
}
}

if pborderDragging >= 0 && t.hasPreviewWindow() {
if t.hasPreviewWindow() && pborderDragging >= 0 {
var newSize int
var prevSize int
switch t.activePreviewOpts.position {
@@ -7368,6 +7484,22 @@ func (t *Terminal) Loop() error {
newCommand = &commandSpec{command, tempFiles}
reloadSync = a.t == actReloadSync
t.reading = true

if len(t.idNth) > 0 {
t.trackSync = reloadSync
}
// Capture tracking key before reload
if !t.track.Disabled() && len(t.idNth) > 0 {
if item := t.currentItem(); item != nil {
t.trackKey = t.trackKeyFor(item, t.idNth)
t.trackKeyCache = make(map[int32]bool)
t.trackBlocked = true
if !t.inputless {
t.tui.HideCursor()
}
req(reqPrompt, reqInfo)
}
}
}
case actUnbind:
if keys, _, err := parseKeyChords(a.a, "PANIC"); err == nil {
@@ -7571,6 +7703,11 @@ func (t *Terminal) Loop() error {
// Dispatch queued background requests
t.dispatchAsync()

if t.pendingReqList {
t.pendingReqList = false
req(reqList)
}

t.mutex.Unlock() // Must be unlocked before touching reqBox

if reload {
@@ -7735,10 +7872,18 @@ func (t *Terminal) dumpItem(i *Item) StatusItem {
if i == nil {
return StatusItem{}
}
return StatusItem{
item := StatusItem{
Index: int(i.Index()),
Text: i.AsString(t.ansi),
}
if t.resultMerger.pattern != nil {
_, _, pos := t.resultMerger.pattern.MatchItem(i, true, t.slab)
if pos != nil {
sort.Ints(*pos)
item.Positions = *pos
}
}
return item
}

func (t *Terminal) tryLock(timeout time.Duration) bool {
+1
-29
@@ -1,39 +1,11 @@
package fzf

import (
"os"
"os/exec"

"github.com/junegunn/fzf/src/tui"
)

func runTmux(args []string, opts *Options) (int, error) {
// Prepare arguments
fzf, rest := args[0], args[1:]
args = []string{"--bind=ctrl-z:ignore"}
if !opts.Tmux.border && (opts.BorderShape == tui.BorderUndefined || opts.BorderShape == tui.BorderLine) {
// We append --border option at the end, because `--style=full:STYLE`
// may have changed the default border style.
if tui.DefaultBorderShape == tui.BorderRounded {
rest = append(rest, "--border=rounded")
} else {
rest = append(rest, "--border=sharp")
}
}
if opts.Tmux.border && opts.Margin == defaultMargin() {
args = append(args, "--margin=0,1")
}
argStr := escapeSingleQuote(fzf)
for _, arg := range append(args, rest...) {
argStr += " " + escapeSingleQuote(arg)
}
argStr += ` --no-tmux --no-height`

// Get current directory
dir, err := os.Getwd()
if err != nil {
dir = "."
}
argStr, dir := popupArgStr(args, opts)

// Set tmux options for popup placement
// C Both The centre of the terminal
+6
-4
@@ -161,7 +161,7 @@ func awkTokenizer(input string) ([]string, int) {
end := 0
for idx := 0; idx < len(input); idx++ {
r := input[idx]
white := r == 9 || r == 32
white := r == 9 || r == 32 || r == 10
switch state {
case awkNil:
if white {
@@ -218,11 +218,12 @@ func Tokenize(text string, delimiter Delimiter) []Token {
return withPrefixLengths(tokens, 0)
}

// StripLastDelimiter removes the trailing delimiter and whitespaces
// StripLastDelimiter removes the trailing delimiter
func StripLastDelimiter(str string, delimiter Delimiter) string {
if delimiter.str != nil {
str = strings.TrimSuffix(str, *delimiter.str)
} else if delimiter.regex != nil {
return strings.TrimSuffix(str, *delimiter.str)
}
if delimiter.regex != nil {
locs := delimiter.regex.FindAllStringIndex(str, -1)
if len(locs) > 0 {
lastLoc := locs[len(locs)-1]
@@ -230,6 +231,7 @@ func StripLastDelimiter(str string, delimiter Delimiter) string {
str = str[:lastLoc[0]]
}
}
return str
}
return strings.TrimRightFunc(str, unicode.IsSpace)
}

@@ -56,9 +56,9 @@ func TestParseRange(t *testing.T) {

func TestTokenize(t *testing.T) {
// AWK-style
input := " abc: def: ghi "
input := " abc: \n\t def: ghi "
tokens := Tokenize(input, Delimiter{})
if tokens[0].text.ToString() != "abc: " || tokens[0].prefixLength != 2 {
if tokens[0].text.ToString() != "abc: \n\t " || tokens[0].prefixLength != 2 {
t.Errorf("%s", tokens)
}

@@ -71,9 +71,9 @@ func TestTokenize(t *testing.T) {
// With delimiter regex
tokens = Tokenize(input, delimiterRegexp("\\s+"))
if tokens[0].text.ToString() != " " || tokens[0].prefixLength != 0 ||
tokens[1].text.ToString() != "abc: " || tokens[1].prefixLength != 2 ||
tokens[2].text.ToString() != "def: " || tokens[2].prefixLength != 8 ||
tokens[3].text.ToString() != "ghi " || tokens[3].prefixLength != 14 {
tokens[1].text.ToString() != "abc: \n\t " || tokens[1].prefixLength != 2 ||
tokens[2].text.ToString() != "def: " || tokens[2].prefixLength != 10 ||
tokens[3].text.ToString() != "ghi " || tokens[3].prefixLength != 16 {
t.Errorf("%s", tokens)
}
}
@@ -9,6 +9,7 @@ import (
"strings"
"syscall"

"github.com/junegunn/go-shellwords"
"golang.org/x/sys/unix"
)

@@ -20,8 +21,8 @@ type Executor struct {

func NewExecutor(withShell string) *Executor {
shell := os.Getenv("SHELL")
args := strings.Fields(withShell)
if len(args) > 0 {
args, err := shellwords.Parse(withShell)
if err == nil && len(args) > 0 {
shell = args[0]
args = args[1:]
} else {
@@ -0,0 +1,41 @@
package fzf

import (
"os/exec"
)

func runZellij(args []string, opts *Options) (int, error) {
argStr, dir := popupArgStr(args, opts)

zellijArgs := []string{
"run", "--floating", "--close-on-exit", "--block-until-exit",
"--cwd", dir,
}
if !opts.Tmux.border {
zellijArgs = append(zellijArgs, "--borderless", "true")
}
switch opts.Tmux.position {
case posUp:
zellijArgs = append(zellijArgs, "-y", "0")
case posDown:
zellijArgs = append(zellijArgs, "-y", "9999")
case posLeft:
zellijArgs = append(zellijArgs, "-x", "0")
case posRight:
zellijArgs = append(zellijArgs, "-x", "9999")
case posCenter:
// Zellij centers floating panes by default
}
zellijArgs = append(zellijArgs, "--width", opts.Tmux.width.String())
zellijArgs = append(zellijArgs, "--height", opts.Tmux.height.String())
zellijArgs = append(zellijArgs, "--")

return runProxy(argStr, func(temp string, needBash bool) (*exec.Cmd, error) {
sh, err := sh(needBash)
if err != nil {
return nil, err
}
zellijArgs = append(zellijArgs, sh, temp)
return exec.Command("zellij", zellijArgs...), nil
}, opts, true)
}
+246
-6
@@ -404,11 +404,11 @@ class TestCore < TestInteractive
tmux.send_keys "seq 1 111 | #{fzf("-m +s --tac #{opt} -q11")}", :Enter
tmux.until { |lines| assert_equal '> 111', lines[-3] }
tmux.send_keys :Tab
tmux.until { |lines| assert_equal ' 4/111 -S (1)', lines[-2] }
tmux.until { |lines| assert_equal ' 4/111 (1) -S', lines[-2] }
tmux.send_keys 'C-R'
tmux.until { |lines| assert_equal '> 11', lines[-3] }
tmux.send_keys :Tab
tmux.until { |lines| assert_equal ' 4/111 +S (2)', lines[-2] }
tmux.until { |lines| assert_equal ' 4/111 (2) +S', lines[-2] }
tmux.send_keys :Enter
assert_equal %w[111 11], fzf_output_lines
end
@@ -1190,6 +1190,16 @@ class TestCore < TestInteractive
tmux.until { |lines| assert lines.any_include?('9999␊10000') }
end

def test_freeze_left_tabstop
writelines(%W[1\t2\t3])
# With --freeze-left 1 and --tabstop=2:
# Frozen left: "1" (width 1)
# Middle starts with "\t" at prefix width 1, tabstop 2 → 1 space
# Then "2" at column 2, next "\t" at column 3 → 1 space, then "3"
tmux.send_keys %(cat #{tempname} | #{FZF} --tabstop=2 --freeze-left 1), :Enter
tmux.until { |lines| assert_equal '> 1 2 3', lines[-3] }
end

def test_freeze_left_keep_right
tmux.send_keys %(seq 10000 | #{FZF} --read0 --delimiter "\n" --freeze-left 3 --keep-right --ellipsis XX --no-multi-line --bind space:toggle-multi-line), :Enter
tmux.until { |lines| assert_match(/^> 1␊2␊3XX.*10000␊$/, lines[-3]) }
@@ -1649,6 +1659,236 @@ class TestCore < TestInteractive
end
end

def test_track_nth_reload_whole_line
# --track --id-nth .. should track by entire line across reloads
tmux.send_keys "seq 1000 | #{FZF} --track --id-nth .. --bind 'ctrl-r:reload:seq 1000 | sort -R'", :Enter
tmux.until { |lines| assert_equal 1000, lines.match_count }

# Move to item 555
tmux.send_keys '555'
tmux.until do |lines|
assert_equal 1, lines.match_count
assert_includes lines, '> 555'
end
tmux.send_keys :BSpace, :BSpace, :BSpace

# Reload with shuffled order — cursor should track "555"
tmux.send_keys 'C-r'
tmux.until do |lines|
assert_equal 1000, lines.match_count
assert_includes lines, '> 555'
assert_includes lines[-2], '+T'
refute_includes lines[-2], '+T*'
end
end

def test_track_nth_reload_field
# --track --id-nth 1 should track by first field across reloads
tmux.send_keys "printf '1 apple\\n2 banana\\n3 cherry\\n' | #{FZF} --track --id-nth 1 --bind 'ctrl-r:reload:printf \"1 apricot\\n2 blueberry\\n3 cranberry\\n\"'", :Enter
tmux.until do |lines|
assert_equal 3, lines.match_count
assert_includes lines, '> 1 apple'
end

# Move up to "2 banana"
tmux.send_keys :Up
tmux.until { |lines| assert_includes lines, '> 2 banana' }

# Reload — the second field changes, but first field "2" stays
tmux.send_keys 'C-r'
tmux.until do |lines|
assert_equal 3, lines.match_count
assert_includes lines, '> 2 blueberry'
end
end

def test_track_nth_reload_no_match
# When tracked item is not found after reload, cursor stays at current position
tmux.send_keys "printf 'alpha\\nbeta\\ngamma\\n' | #{FZF} --track --id-nth .. --bind 'ctrl-r:reload:printf \"delta\\nepsilon\\nzeta\\n\"'", :Enter
tmux.until { |lines| assert_equal 3, lines.match_count }
tmux.send_keys :Up
tmux.until { |lines| assert_includes lines, '> beta' }

# Reload with completely different items — no match for "beta"
# Cursor stays at the same position (second item)
tmux.send_keys 'C-r'
tmux.until do |lines|
assert_equal 3, lines.match_count
assert_includes lines, '> epsilon'
refute_includes lines[-2], '+T*'
end
end

def test_track_nth_blocked_indicator
# +T* should appear during reload and disappear when match is found
tmux.send_keys "seq 100 | #{FZF} --track --id-nth .. --bind 'ctrl-r:reload:sleep 1; seq 100 | sort -R'", :Enter
tmux.until do |lines|
assert_equal 100, lines.match_count
assert_includes lines[-2], '+T'
end

# Trigger slow reload — should show +T* while blocked
tmux.send_keys 'C-r'
tmux.until { |lines| assert_includes lines[-2], '+T*' }

# After reload completes, +T* should clear back to +T
tmux.until do |lines|
assert_equal 100, lines.match_count
assert_includes lines[-2], '+T'
refute_includes lines[-2], '+T*'
end
end

def test_track_nth_abort_unblocks
# Escape during track-blocked state should unblock, not quit
tmux.send_keys "seq 100 | #{FZF} --track --id-nth .. --bind 'ctrl-r:reload:sleep 3; seq 100'", :Enter
tmux.until do |lines|
assert_equal 100, lines.match_count
assert_includes lines[-2], '+T'
end

# Trigger slow reload
tmux.send_keys 'C-r'
tmux.until { |lines| assert_includes lines[-2], '+T*' }

# Escape should unblock, not quit fzf
tmux.send_keys :Escape
tmux.until do |lines|
assert_includes lines[-2], '+T'
refute_includes lines[-2], '+T*'
end
end

def test_track_nth_reload_async_unblocks_early
# With async reload, +T* should clear as soon as the match streams in,
# even while loading is still in progress.
# sleep 1 first so +T* is observable, then the match arrives, then more items after a delay.
tmux.send_keys "seq 5 | #{FZF} --track --id-nth .. --bind 'ctrl-r:reload:sleep 1; echo 1; sleep 2; seq 2 10'", :Enter
tmux.until do |lines|
assert_equal 5, lines.match_count
assert_includes lines, '> 1'
end

# Trigger reload — blocked during initial sleep
tmux.send_keys 'C-r'
tmux.until { |lines| assert_includes lines[-2], '+T*' }
# Match "1" arrives, unblocks before the remaining items load
tmux.until do |lines|
assert_equal 1, lines.match_count
assert_includes lines, '> 1'
assert_includes lines[-2], '+T'
refute_includes lines[-2], '+T*'
end
end

def test_track_nth_reload_sync_blocks_until_complete
# With reload-sync, +T* should stay until the entire stream is complete,
# even though the match arrives early in the stream.
tmux.send_keys "seq 5 | #{FZF} --track --id-nth .. --bind 'ctrl-r:reload-sync:sleep 1; echo 1; sleep 2; seq 2 10'", :Enter
tmux.until do |lines|
assert_equal 5, lines.match_count
assert_includes lines, '> 1'
end

# Trigger reload-sync — every observable state must be either:
# 1. +T* (still blocked), or
# 2. final state (count=10, +T without *)
# Any other combination (e.g. unblocked while count < 10) is a bug.
tmux.send_keys 'C-r'
tmux.until do |lines|
info = lines[-2]
blocked = info&.include?('+T*')
unless blocked
raise "Unblocked before stream complete (count: #{lines.match_count})" if lines.match_count != 10

assert_includes info, '+T'
assert_includes lines, '> 1'
end
!blocked
end
end

def test_track_nth_toggle_track_unblocks
# toggle-track during track-blocked state should unblock and disable tracking
tmux.send_keys "seq 100 | #{FZF} --track --id-nth .. --bind 'ctrl-r:reload:sleep 5; seq 100' --bind 'ctrl-t:toggle-track'", :Enter
tmux.until do |lines|
assert_equal 100, lines.match_count
assert_includes lines[-2], '+T'
end

# Trigger slow reload
tmux.send_keys 'C-r'
tmux.until { |lines| assert_includes lines[-2], '+T*' }

# toggle-track should unblock and disable tracking before reload completes
tmux.send_keys 'C-t'
tmux.until(timeout: 3) do |lines|
refute_includes lines[-2], '+T'
end
end

def test_track_nth_reload_async_no_match
# With async reload, when tracked item is not found, cursor stays at
# current position after stream completes
tmux.send_keys "printf 'alpha\\nbeta\\ngamma\\n' | #{FZF} --track --id-nth .. --bind 'ctrl-r:reload:sleep 1; printf \"delta\\nepsilon\\nzeta\\n\"'", :Enter
tmux.until { |lines| assert_equal 3, lines.match_count }
tmux.send_keys :Up
tmux.until { |lines| assert_includes lines, '> beta' }

# Reload with completely different items — no match for "beta"
tmux.send_keys 'C-r'
tmux.until { |lines| assert_includes lines[-2], '+T*' }
# After stream completes, unblocks with cursor at same position (second item)
tmux.until do |lines|
assert_equal 3, lines.match_count
assert_includes lines, '> epsilon'
refute_includes lines[-2], '+T*'
end
end

def test_track_action_with_id_nth
# track-current with --id-nth should track by specified field
tmux.send_keys "printf '1 apple\\n2 banana\\n3 cherry\\n' | #{FZF} --id-nth 1 --bind 'ctrl-t:track-current,ctrl-r:reload:printf \"1 apricot\\n2 blueberry\\n3 cranberry\\n\"'", :Enter
tmux.until { |lines| assert_equal 3, lines.match_count }

# Move to "2 banana" and activate tracking
tmux.send_keys :Up
tmux.until { |lines| assert_includes lines, '> 2 banana' }
tmux.send_keys 'C-t'
tmux.until { |lines| assert_includes lines[-2], '+t' }

# Reload — should track by field "2"
tmux.send_keys 'C-r'
tmux.until do |lines|
assert_equal 3, lines.match_count
assert_includes lines, '> 2 blueberry'
end
end

def test_id_nth_preserve_multi_selection
# --id-nth with --multi should preserve selections across reload-sync
File.write(tempname, "1 apricot\n2 blueberry\n3 cranberry\n")
tmux.send_keys "printf '1 apple\\n2 banana\\n3 cherry\\n' | #{fzf("--multi --id-nth 1 --bind 'ctrl-r:reload-sync:cat #{tempname}'")}", :Enter
tmux.until { |lines| assert_equal 3, lines.match_count }

# Select first item (1 apple) and third item (3 cherry)
tmux.send_keys :Tab
tmux.send_keys :Up, :Up, :Tab
tmux.until { |lines| assert_includes lines[-2], '(2)' }

# Reload — selections should be preserved by id-nth key
tmux.send_keys 'C-r'
tmux.until do |lines|
assert_equal 3, lines.match_count
assert_includes lines[-2], '(2)'
assert(lines.any? { |l| l.include?('apricot') })
end

# Accept and verify the correct items were preserved
tmux.send_keys :Enter
assert_equal ['1 apricot', '3 cranberry'], fzf_output_lines
end

def test_one_and_zero
tmux.send_keys "seq 10 | #{FZF} --bind 'zero:preview(echo no match),one:preview(echo {} is the only match)'", :Enter
tmux.send_keys '1'
@@ -2085,13 +2325,13 @@ class TestCore < TestInteractive
tmux.send_keys %(echo "foo ,bar,baz" | #{FZF} -d, --accept-nth 2,2,1,3,1 --sync --bind start:accept > #{tempname}), :Enter
wait do
assert_path_exists tempname
# Last delimiter and the whitespaces are removed
assert_equal ['bar,bar,foo ,bazfoo'], File.readlines(tempname, chomp: true)
# Last delimiter is removed
assert_equal ['bar,bar,foo ,bazfoo '], File.readlines(tempname, chomp: true)
end
end

def test_accept_nth_regex_delimiter
tmux.send_keys %(echo "foo :,:bar,baz" | #{FZF} --delimiter='[:,]+' --accept-nth 2,2,1,3,1 --sync --bind start:accept > #{tempname}), :Enter
tmux.send_keys %(echo "foo :,:bar,baz" | #{FZF} --delimiter=' *[:,]+ *' --accept-nth 2,2,1,3,1 --sync --bind start:accept > #{tempname}), :Enter
wait do
assert_path_exists tempname
# Last delimiter and the whitespaces are removed
@@ -2109,7 +2349,7 @@ class TestCore < TestInteractive
end

def test_accept_nth_template
tmux.send_keys %(echo "foo ,bar,baz" | #{FZF} -d, --accept-nth '[{n}] 1st: {1}, 3rd: {3}, 2nd: {2}' --sync --bind start:accept > #{tempname}), :Enter
tmux.send_keys %(echo "foo ,bar,baz" | #{FZF} -d " *, *" --accept-nth '[{n}] 1st: {1}, 3rd: {3}, 2nd: {2}' --sync --bind start:accept > #{tempname}), :Enter
wait do
assert_path_exists tempname
# Last delimiter and the whitespaces are removed
+32
-2
@@ -393,6 +393,20 @@ class TestPreview < TestInteractive
|
||||
end
|
||||
end
|
||||
|
||||
def test_preview_follow_wrap_long_line
|
||||
tmux.send_keys %(seq 1 | #{FZF} --preview "seq 2; yes yes | head -10000 | tr '\n' ' '" --preview-window follow,wrap --bind up:preview-up,down:preview-down), :Enter
|
||||
tmux.until do |lines|
|
||||
assert_equal 1, lines.match_count
|
||||
assert lines.any_include?('3/3 │')
|
||||
end
|
||||
tmux.send_keys :Up
|
||||
tmux.until { |lines| assert lines.any_include?('2/3 │') }
|
||||
tmux.send_keys :Up
|
||||
tmux.until { |lines| assert lines.any_include?('1/3 │') }
|
||||
tmux.send_keys :Down
|
||||
tmux.until { |lines| assert lines.any_include?('2/3 │') }
|
||||
end
|
||||
|
||||
def test_close
|
||||
tmux.send_keys "seq 100 | #{FZF} --preview 'echo foo' --bind ctrl-c:close", :Enter
|
||||
tmux.until { |lines| assert_equal 100, lines.match_count }
|
||||
@@ -593,7 +607,7 @@ class TestPreview < TestInteractive
|
||||
end
|
||||
|
||||
def test_preview_wrap_sign_between_ansi_fragments_overflow
|
||||
tmux.send_keys %(seq 1 | #{FZF} --preview 'echo -e "\\x1b[33m1234567890 \\x1b[mhello"; echo -e "\\x1b[33m1234567890 \\x1b[mhello"' --preview-window 2,wrap-word), :Enter
|
||||
tmux.send_keys %(seq 1 | #{FZF} --preview 'echo -e "\\x1b[33m123 \\x1b[mhi"; echo -e "\\x1b[33m123 \\x1b[mhi"' --preview-window 2,wrap-word,noinfo), :Enter
|
||||
tmux.until do |lines|
|
||||
assert_equal 1, lines.match_count
|
||||
assert_equal(2, lines.count { |line| line.include?('│ 12 │') })
|
||||
@@ -602,11 +616,27 @@ class TestPreview < TestInteractive
|
||||
end
|
||||
|
||||
def test_preview_wrap_sign_between_ansi_fragments_overflow2
|
||||
tmux.send_keys %(seq 1 | #{FZF} --preview 'echo -e "\\x1b[33m1234567890 \\x1b[mhello"; echo -e "\\x1b[33m1234567890 \\x1b[mhello"' --preview-window 1,wrap-word), :Enter
|
||||
tmux.send_keys %(seq 1 | #{FZF} --preview 'echo -e "\\x1b[33m123 \\x1b[mhi"; echo -e "\\x1b[33m123 \\x1b[mhi"' --preview-window 1,wrap-word,noinfo), :Enter
|
||||
tmux.until do |lines|
|
||||
assert_equal 1, lines.match_count
|
||||
assert_equal(2, lines.count { |line| line.include?('│ 1 │') })
|
||||
assert_equal(0, lines.count { |line| line.include?('│ h') })
|
||||
end
|
||||
end
|
||||
|
||||
def test_preview_toggle_should_redraw_scrollbar
|
||||
tmux.send_keys %(seq 1 | #{FZF} --no-border --scrollbar --preview 'seq $((FZF_PREVIEW_LINES + 1))' --preview-border line --bind tab:toggle-preview --header foo --header-border --footer bar --footer-border), :Enter
|
||||
tmux.until do |lines|
|
||||
assert_equal 1, lines.match_count
|
||||
assert_operator lines.count { |line| line.end_with?('│') }, :>, 2
|
||||
end
|
||||
tmux.send_keys :Tab
|
||||
tmux.until do |lines|
|
||||
assert_equal(2, lines.count { |line| line.end_with?('│') })
|
||||
end
|
||||
tmux.send_keys :Tab
|
||||
tmux.until do |lines|
|
||||
assert_operator lines.count { |line| line.end_with?('│') }, :>, 2
|
||||
end
|
||||
end
|
||||
end
|
||||
|
||||
@@ -16,6 +16,31 @@ class TestServer < TestInteractive
    assert_empty state[:query]
    assert_equal({ index: 0, text: '1' }, state[:current])

    # No positions when query is empty
    state[:matches].each do |m|
      assert_nil m[:positions]
    end
    assert_nil state[:current][:positions] if state[:current]

    # Positions with a single-character query
    Net::HTTP.post(fn.call, 'change-query(1)')
    tmux.until { |lines| assert_equal 2, lines.match_count }
    state = JSON.parse(Net::HTTP.get(fn.call), symbolize_names: true)
    assert_equal [0], state[:current][:positions]
    state[:matches].each do |m|
      assert_includes m[:text], '1'
      assert_equal [m[:text].index('1')], m[:positions]
    end

    # Positions with a multi-character query; verify sorted ascending
    Net::HTTP.post(fn.call, 'change-query(10)')
    tmux.until { |lines| assert_equal 1, lines.match_count }
    state = JSON.parse(Net::HTTP.get(fn.call), symbolize_names: true)
    assert_equal '10', state[:current][:text]
    assert_equal [0, 1], state[:current][:positions]
    assert_equal state[:current][:positions], state[:current][:positions].sort

    # No match: no current item
    Net::HTTP.post(fn.call, 'change-query(yo)+reload(seq 100)+change-prompt:hundred> ')
    tmux.until { |lines| assert_equal 100, lines.item_count }
    tmux.until { |lines| assert_equal 'hundred> yo', lines[-1] }
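The state object fetched over HTTP here is plain JSON, so the invariants the test checks (positions present, sorted, and pointing at real characters) can be sketched offline. The payload below is a made-up example shaped like the fields the test reads, not actual fzf output:

```ruby
require 'json'

# Hypothetical payload with the fields the test inspects
payload = '{"query":"10","current":{"index":9,"text":"10","positions":[0,1]},' \
          '"matches":[{"text":"10","positions":[0,1]}]}'
state = JSON.parse(payload, symbolize_names: true)

# Positions must be sorted ascending and index real characters of the text
raise unless state[:current][:positions] == state[:current][:positions].sort
state[:matches].each do |m|
  m[:positions].each { |p| raise unless m[:text][p] }
end
puts state[:current][:positions].inspect # prints [0, 1]
```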
@@ -832,6 +832,55 @@ class TestBash < TestBase
    tmux.prepare
  end

  def test_ctrl_r_delete
    tmux.prepare
    tmux.send_keys 'echo to-keep', :Enter
    tmux.prepare
    tmux.send_keys 'echo to-delete-1', :Enter
    tmux.prepare
    tmux.send_keys 'echo to-delete-2', :Enter
    tmux.prepare
    tmux.send_keys 'echo to-delete-3', :Enter
    tmux.prepare
    tmux.send_keys 'echo another-keeper', :Enter
    tmux.prepare

    # Open Ctrl-R and delete one entry
    tmux.send_keys 'C-r'
    tmux.until { |lines| assert_operator lines.match_count, :>, 0 }
    tmux.send_keys 'to-delete'
    tmux.until { |lines| assert_equal 3, lines.match_count }
    tmux.send_keys 'S-Delete'
    tmux.until { |lines| assert_equal 2, lines.match_count }

    # Multi-select remaining two and delete them at once
    tmux.send_keys :BTab, :BTab
    tmux.until { |lines| assert_includes lines[-2], '(2)' }
    tmux.send_keys 'S-Delete'
    tmux.until { |lines| assert_equal 0, lines.match_count }

    # Exit without selecting
    tmux.send_keys :Escape
    tmux.prepare

    # Verify deleted entries are gone from history
    tmux.send_keys 'C-r'
    tmux.until { |lines| assert_operator lines.match_count, :>, 0 }
    tmux.send_keys 'to-delete'
    tmux.until { |lines| assert_equal 0, lines.match_count }
    tmux.send_keys :Escape
    tmux.prepare

    # Verify kept entries are still there
    tmux.send_keys 'C-r'
    tmux.until { |lines| assert_operator lines.match_count, :>, 0 }
    tmux.send_keys 'to-keep'
    tmux.until { |lines| assert_equal 1, lines.match_count }
    tmux.send_keys :Enter
    tmux.until { |lines| assert_equal 'echo to-keep', lines[-1] }
    tmux.send_keys 'C-c'
  end

  def test_dynamic_completion_loader
    tmux.paste 'touch /tmp/foo; _fzf_completion_loader=1'
    tmux.paste '_completion_loader() { complete -o default fake; }'
@@ -920,6 +969,7 @@ class TestZsh < TestBase
  end

  test_perl_and_awk 'ctrl_r_multiline_index_collision' do
    tmux.send_keys 'setopt sh_glob', :Enter
    # Leading number in multi-line history content is not confused with index
    prepare_ctrl_r_test
    tmux.send_keys "'line 1"
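At the data level, the deletion flow exercised by `test_ctrl_r_delete` amounts to filtering the selected entries out of the history list. A toy sketch of that step (the real key-bindings script edits the shell's actual history, which this does not model):

```ruby
# Toy history list mirroring the commands entered in the test
history = [
  'echo to-keep',
  'echo to-delete-1',
  'echo to-delete-2',
  'echo to-delete-3',
  'echo another-keeper'
]

# Entries matching the query are selected, then removed in one pass
selected = history.grep(/to-delete/)
history -= selected
puts history.inspect # prints ["echo to-keep", "echo another-keeper"]
```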
+2
-1
@@ -5,6 +5,7 @@ fo = "fo"
enew = "enew"
tabe = "tabe"
Iterm = "Iterm"
ser = "ser"

[files]
extend-exclude = ["README.md"]
extend-exclude = ["README.md", "*.s"]