Investigate eagerly returning more information about suggestions #36265
Comments
FYI - this is what we are aiming for: microsoft/vscode#39441 (comment).
I've been measuring the perf impact of the simplest approach possible: calling the completion details request for every item in the list. In an empty project, this takes ~3 seconds for 1500 or so completion items. So clearly not a good solution! Just a breakdown of sample timings for
(Note that

@andrewbranch At the TS/Editor sync today, we thought you may be the person to investigate this on the TS side. Using the existing TS Server APIs, would you be able to look into:
(Cross-posting from microsoft/vscode#39441 to move discussion here) Would the ideal response include a separate completion entry for every overload of the same method/function? Currently, we only send one entry per identifier, and then when we send the details, we show the first overload and a string indicating how many additional overloads (if any) are available:
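For readers without the screenshot that followed, here is a rough, hypothetical illustration of that behaviour (the display strings are approximations, not the real rendering):

```ts
// Two overload declarations for the same function...
declare function parse(text: string): object;
declare function parse(text: string, reviver: (key: string, value: unknown) => unknown): object;

// ...produce a single completion entry named "parse". When its details are
// requested, the display text is based on the first overload plus a count of
// the remaining ones, roughly:
//
//   function parse(text: string): object (+1 overload)
```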
Very early investigation observations:
I also have a UX question/consideration about the signature part of the VS Code proposal: how are you planning to represent objects that have both properties and call signatures? Common examples can be found in jest:

```ts
test('something', () => {
  expect(0).toBe(1);
});

test.each(table)('thing %1', () => {
  expect.hasAssertions();
});
```

I’m wondering if a format like this mockup might obscure the properties of
Thanks for looking into this @andrewbranch! For overloads, I don't think we should show a unique entry in the completion list for each overload. Unlike Java overloads, which are more like distinct methods, JS overloads all still map to a single implementation. Many JS library methods also have a ton of overloads (like

However, I'm not sure if we have any VS Code guidance right now on what to show in the UI for overloads.
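A small sketch of that point: in TypeScript, overload signatures are just alternative call shapes for one implementation, so a per-overload completion entry would repeat the same underlying function.

```ts
// Overload signatures: two declared call shapes...
function first(items: string): string;
function first(items: string[]): string | undefined;
// ...but only one implementation backs both of them.
function first(items: string | string[]): string | undefined {
  return typeof items === "string" ? items.charAt(0) : items[0];
}
```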
Not so sure about the overloads. Yes, they are always the same function, but I see two benefits in having each overload be an entry:
Generally, I don't have strong feelings about this... It would be nice if TS had this information so that the extension can make a decision, but for me overloads are not P1 - at least not right now.
That's a question that we haven't asked ourselves yet. Today, we have completion item kinds for
Big thanks for looking into this 👏

Generally, we aren't looking for an "all or nothing" approach but "more is more". Still, having eventually consistent information without false negatives should be the goal here, esp. since I am afraid that everything else will confuse people. It is not the first time that the "too much data, too little compute-time" challenge has come up - other extensions are facing the same problem.

@andrewbranch There is one idea that I would like to get expert feedback on: Let's say there are completions from "scope-items" (global, local and other scope variables) and "member-items" (class, namespace members). Now let's assume that the majority of completion items are "scope-items", e.g. TS returns ~1000 completions when triggered at the top of a file vs. 25 completions for
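A minimal, purely hypothetical sketch of how a client might act on that split; `fetchScopeCompletions` and `fetchMemberCompletions` are placeholder functions, not existing APIs:

```ts
interface CompletionEntry { name: string; detail?: string; }

// Placeholders standing in for requests to the language server; not real APIs.
declare function fetchScopeCompletions(): Promise<CompletionEntry[]>;
declare function fetchMemberCompletions(): Promise<CompletionEntry[]>;

let cachedScopeItems: CompletionEntry[] | undefined;

async function getCompletions(isMemberAccess: boolean): Promise<CompletionEntry[]> {
  if (isMemberAccess) {
    // "Member-items" lists are small (tens of entries), so compute them fresh
    // with full details every time.
    return fetchMemberCompletions();
  }
  // "Scope-items" lists are large (hundreds to thousands of entries); reuse a
  // cached copy and refresh it when the program changes (invalidation omitted).
  if (!cachedScopeItems) {
    cachedScopeItems = await fetchScopeCompletions();
  }
  return cachedScopeItems;
}
```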
Global caching

We briefly discussed caching global completions (cc @amcasey) in a recent editor sync (primarily as a way of reducing costs associated with data transfer, not computation). I think there is definitely an opportunity to do something like what you’re describing, but ironically, the rest of this completions proposal makes it much harder to do. Today, the global completion list doesn’t actually contain any type information, which makes it pretty easy to cache. But, if we encode type information (such as call signatures) into that list, it becomes much more volatile, and cache invalidation can become difficult. Because of declaration merging, any change in a non-module file or in a

```ts
interface Document {
  (): void;
}
```

Adding this declaration in an otherwise empty file gives the global variable

While refreshing the cached list isn’t a huge problem, I don’t have a clear idea of how we’d detect this in an efficient way in order to recompute the list—certainly not in a way that’s generalizable for arbitrary sources of globals. Built-in libs are fairly self-sufficient, but a global declaration file that makes use of import types and globals declared elsewhere, combined with conditional or indexed access types, could change shape in response to changes in any file, in hard-to-track ways.

Further performance investigations

I recorded timings for a few scenarios on Friday: completions after the letter

I’d want to reproduce this experiment in a more controlled environment before reporting the full data here, but the summary here is basically a huge “it depends.” For the global list of 1339 items (returned with the single character

For

For

Conclusions

I have real concerns about caching type information in global lists, although I think caching the non-type-related parts of built-in lib globals, at the least, is still possible and would be worth the savings in wire cost (at least for thin-client-like scenarios). But I’m uncertain whether such a mechanism could be reliable for reducing the computation costs of generating call signature readouts.

I think the emerging insight from the numbers I’ve gathered is that getting call signatures usually isn’t catastrophically expensive, but it can add up quickly for long lists, and it depends greatly on the composition of the list. (I’m honestly puzzled by the discrepancy between the ~30% slowdown for globals and the ~300% slowdown for members of

Personally, I’d be much more comfortable with this proposal if these details were returned in a second pass after the bones of the list are returned. (One consequence of this, though, is that you wouldn’t be able to display overloads as separate items, since we don’t know how many overloads there are until resolving call signatures.) Is that something you all could consider?
Thanks for the update. Curious to learn more when you have measured more. Coincidentally, cached completions came up with Dart today as well (Dart-Code/Dart-Code#2290 (comment)). We are planning to check with the Java, C#, and C++ folks, but I do acknowledge the complexity with TS and "open types".
Yeah - we have considered that, and today's version of resolving the focused item is the mini variant of this. We could do this in batches, let's say resolve the whole viewport. But there will be flicker because typing changes what's visible in the viewport by a lot (we don't filter just by prefix). E.g. you have said 1339 global completions and we resolve the first 10 or so. Now, typing

Another important note on the 2nd-pass resolve is that this pass cannot change the insert behaviour of an item, e.g. we cannot wait/block for resolving to happen, which means a user is free to insert a suggestion before its details are resolved.

Another suggestion that has been brought up is to differentiate between "full" and "partial" completions, e.g. pressing

So, those are the current options in our order of preference:
Then, there is always the option to artificially limit the amount of completions that are returned by the extension and then signal that there is more via the
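For context, a minimal sketch of today's focused-item resolution using VS Code's public API (names and strings here are illustrative, not taken from the actual TypeScript extension):

```ts
import * as vscode from 'vscode';

const provider: vscode.CompletionItemProvider = {
  // First pass: return the list quickly with just labels and kinds.
  provideCompletionItems() {
    return [new vscode.CompletionItem('readFile', vscode.CompletionItemKind.Function)];
  },
  // Second pass: called lazily (today, for the focused item) to fill in
  // display-only information. Per the note above, this must never change how
  // the item is inserted, because the user may commit it before resolve runs.
  resolveCompletionItem(item) {
    item.detail = 'readFile(path: string): Promise<Buffer>'; // illustrative text only
    return item;
  },
};

vscode.languages.registerCompletionItemProvider('typescript', provider, '.');
```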
Currently I’m pretty convinced that this just isn’t viable. Even if the 300% slower measurement is a distant outlier, I don’t think we want to accept a 30% slowdown as the typical case, with the possibility of unpredictable pathological cases.
@amcasey and I talked about this yesterday. Honestly, I think requesting the entire batch of details in a second pass, as long as it doesn’t block the list from showing up initially, would be an ok place to start. If the performance there seems to be a problem, we could look into requesting smaller batches, but the numbers I was seeing weren’t large in absolute terms; they were just bad compared to the baseline. For the global case, for example, the worst trial I recorded for resolving call signatures on everything was 138 ms, compared to the baseline average of 36 ms. So, displaying the initial list in 36 ms and getting the details for all 1339 items 100 ms later doesn’t sound terrible to me—then, you wouldn’t need to worry about calculating and requesting new batches as the user types.
Not sure with what version you have measured this? I just tried this with 3.9.0-dev.20200413 and see the following numbers when using an "empty project" (a single ts file, no project file). Values are medians of 5 runs: get completion list 15ms, resolve all details (via

So, now comes the fun part: our TS extension and extension integration are quite expensive and add 3-4x to the ts-server time, e.g. empty project: ts-server 15ms, ts-ext 41ms; vscode project: ts-server 76ms, ts-ext 322ms. This is after results come in, the total is the sum, and it is mostly due to data conversion and validation. Assuming we can minimise that, and assuming resolving all details actually only costs 3x (above), then this could add up to zero. Very curious.

On resolving batches: yes, we can play with that already today (and we did), but the UI will flicker as we show details all the time, not just on focus. I will try to get some recordings that can be posted here.
My numbers reflect only the body of the language service functions, excluding
Do you mean you were calling the completion details request for every item in the list? That’s not going to give you anything close to realistic numbers because it gets a totally different set of information than what you’re asking for in this proposal, and by calling the function N times it repeats a lot of work that could be reused. There’s no nightly you can try out to get the numbers I reported; I simply made local changes against master to perform the semantic work I thought would be most expensive. To clarify my earlier comment:
Secondly:
I mean, if we can minimize that overhead, we should do it independently of protocol changes and realize performance gains. From our side (and thinking as a VS Code + TypeScript user myself), if you told me there’s an opportunity to reduce the user-observed completions time by 3x, I think I would be pretty excited about the 3x speedup rather than seeing an opportunity to replace all that surplus time with other computation 😄. In other words, if there’s a realistic opportunity to drastically improve the times for completions as they are today, that should be the baseline we measure against.
In my experiment I have called the completion details request once, passing all entry names as an array.
Sure thing. We are on it, independent of this.
Interesting. I didn’t realize that the protocol accepted an array of items, because the language service does not—in the server code we just map over this list and do the full details work for each entry.
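For reference, a sketch of such a batched request at the protocol level, shown as a TypeScript object literal; the file path, position, and entry names are placeholders:

```ts
// One "completionEntryDetails" request covering several entries at once, by
// passing multiple entry names instead of issuing N separate requests.
const batchedDetailsRequest = {
  seq: 2,
  type: 'request',
  command: 'completionEntryDetails',
  arguments: {
    file: '/home/user/project/src/index.ts', // placeholder path
    line: 10,
    offset: 5,
    entryNames: ['parseInt', 'setTimeout', 'console'],
  },
};
```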
@andrewbranch Is there any chance that you can make available what you have prototyped, e.g. using the really fast implementation behind the existing protocol? Or using a smarter protocol that doesn't require us to send back all those name strings. I am planning to run the following experiment:
I don’t currently have anything prototyped that would work for that experiment. Essentially all I did was measure the impact of adding

Happy to throw something together, but I want to be sure we’re on the same page about what you’re getting in the response vs. what you would like to get in the response.
@jrieken @andrewbranch I looked into using

```json
{
  "name": "xyz",
  "kind": "const",
  "kindModifiers": "export",
  "sortText": "5",
  "hasAction": true,
  "source": "/Users/matb/projects/san/x"
}
```

So unfortunately I think we are blocked on adopting the new VS Code API for auto imports. I can open a new issue for this, I suspect
@mjbvz that’s correct, and we need an internal-only property like that not just for auto-imports, but also for disambiguating |
@andrewbranch I often check this issue, and I see work has stopped on implementing this on the TypeScript side. Is this still relevant for the TS team? This feature would improve DX very, very much!
I think the status right now is that my investigation proved the initial proposal not viable for performance reasons. We’d be happy to investigate alternative proposals that don’t impact the completion list response time, like returning details for batches of completion items in a subsequent request.
@andrewbranch Understood, thanks for all your work. I hope in the future we will have this feature available.
Unassigning myself since I’m not aware of any actionable next steps on our side. Feel free to ping me if things change 👍
@andrewbranch that's a pity, but I kinda expected this. We will proceed with our API/UI changes without selfhost validation via TypeScript, though we have done this: microsoft/vscode#98228 (comment)
Problem
Suggestions in VS Code have three broad UX parts:

- the name, e.g. `foo`
- the details, e.g. `string`
- the documentation

Currently VS Code only shows the details of the currently active suggestion. This can be problematic in cases like auto imports, where multiple symbols may all have the same `name`:

This design also reflects the design of the suggestion API:
- `completionInfo` returns the entire list of suggestions, but only includes the name
- `completionEntryDetails` returns the details and documentation for an individual suggestion

Investigation
With microsoft/vscode#39441, VS Code is exploring letting languages/extensions return the details part of the suggestion eagerly. This would let VS Code show details for the entire list of suggestions instead of just for the active one. In the auto import case specifically, this would likely mean showing the file paths we would import from.
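Purely as a hypothetical illustration of what "eager" details could mean for the auto-import case (this is not an existing protocol shape), each entry in the initial list would already carry the text to render next to the label:

```ts
// Hypothetical eager completion list: same-named symbols are distinguishable
// immediately because each entry carries its import source up front.
const eagerList = [
  { name: 'useState', detail: 'react' },
  { name: 'useState', detail: './hooks/useState' }, // same name, different file
];
```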
We would like the TypeScript team's feedback on this idea and our current proposals:
/cc @jrieken for VS Code APIs
/cc @amcasey @minestarks for VS