New code editors like Cursor are bringing fresh ideas to AI-assisted software development. How do they stack up against GitHub Copilot, the established market leader?
The Market Leader: GitHub Copilot
GitHub Copilot was a pioneer in AI support for developers and remains the clear market leader. After more than two years on the market, it reportedly serves over 77,000 corporate customers and drives more than 40% of GitHub’s revenue growth [1].
However, competition is on the rise. Two relatively small start-ups have garnered attention with their AI-powered development tools: Cursor by Anysphere and Windsurf by Codeium [2]. Unlike GitHub Copilot—which is available as an extension for multiple IDEs and editors—Cursor and Windsurf are standalone code editors based on forks of Visual Studio Code. Anysphere has secured around 60 million US dollars [3] in funding for Cursor’s development, while Codeium, founded in 2021, raised more than 150 million US dollars in August 2024 [4].
Open Source Alternatives Emerge
The open-source community is also joining this trend. Glass Devtools, for instance, is working on another Visual Studio Code fork called Void, aiming to be an open-source alternative to Cursor that explicitly supports locally hosted large language models (LLMs). However, Void is still in its early stages. Interested users can sign up for a beta programme waitlist or compile the available source code themselves.
All forks of Visual Studio Code lack certain proprietary features of the original editor. For example, Windsurf currently does not integrate the Windows Subsystem for Linux (WSL), although it is planned. Visual Studio Code’s popular Live Share feature is not supported by either Cursor or Windsurf, and portions of Microsoft’s .NET support—particularly the debugger—are not freely available for forks to use.
Start-ups like Cursor and Codeium aim to differentiate themselves from the market-leading Copilot by offering deeper AI functionality in their own integrated development environments (IDEs). They hope this depth will outweigh Copilot’s advantage of broad compatibility: Copilot works as an extension for all major IDEs without requiring users to switch to a different environment. It will be interesting to see whether Microsoft and its subsidiary GitHub respond by integrating Copilot more tightly into their own IDEs, such as Visual Studio and Visual Studio Code.
Below is an overview of experiences with Cursor as a prime example of these new editors. We’ll explore how it differs from GitHub Copilot and briefly look at Windsurf.
Multimodel Support
One key feature of Cursor is its multimodel design. Users are not restricted to a single LLM and can instead choose from multiple sources. As of late 2024, Cursor offered support for Anthropic Claude, OpenAI’s GPT models, Google Gemini, and a proprietary model called cursor-small. Being able to switch between models is extremely useful because providers continue to release improved versions. It is crucial for developers to have the flexibility to select the best model for a specific task (see Figure 1).
Until recently, GitHub Copilot worked exclusively with OpenAI’s LLMs. In late October 2024, however, an experimental feature [5] was introduced to make GitHub Copilot multimodel-ready (see Figure 2). Model selection in the chat area is still in preview and not yet available across all IDEs.
How the AI Interacts with Your Code
Another difference between GitHub Copilot and Cursor becomes apparent when examining their interactions with LLMs. If you set up a proxy (such as mitmproxy) in Visual Studio Code to capture traffic, you can see precisely what Copilot sends to GitHub—including snippets of your code and the prompts it uses (see Figure 3). Much of the interactive logic for Copilot appears to happen within the IDE extension rather than solely on GitHub’s servers.
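If you want to try this yourself, a minimal setup might look like the following: start mitmproxy locally (it listens on port 8080 by default) and point Visual Studio Code at it via its user settings. Note that you will also need to trust mitmproxy’s generated CA certificate, or temporarily disable strict SSL checking for the experiment; the exact behaviour can vary between VS Code and extension versions.

```
// settings.json — route VS Code (and the Copilot extension) through a local proxy
{
  // mitmproxy listens on http://127.0.0.1:8080 by default
  "http.proxy": "http://127.0.0.1:8080",

  // for a quick local experiment only: accept mitmproxy's
  // self-signed certificates instead of installing its CA
  "http.proxyStrictSSL": false
}
```

With this in place, the requests Copilot makes, including the code context and prompts it sends, appear in mitmproxy’s interactive view.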
With Cursor, by contrast, analysing the communication between the editor and its underlying models is more challenging because it all takes place server-side. An open-source tool called CursorLens can record and analyse Cursor’s interactions with the chosen LLM, but it is neither official nor guaranteed to work with the latest version of Cursor. In testing for this article, the tool failed to connect the newest version of Cursor with GPT‑4o.
Subscription Model and API Keys
Like GitHub Copilot, Cursor operates on a subscription model [6]. There is a free tier with a limited number of AI requests. Professional developers can opt for a Pro plan at about 20 USD per month, while teams can invest in a Business plan at 40 USD per user per month.
In contrast to GitHub Copilot, Cursor allows you to provide your own API keys for OpenAI, Anthropic, Google, and even Azure OpenAI. This means you could, in theory, continue using the free version of Cursor’s software while paying a separate subscription fee to an LLM provider. However, the Cursor IDE strongly warns that using your own API keys may prove more expensive overall and can lead to certain feature limitations (see Figure 4).
Even when you supply your own API keys, all traffic still routes through Cursor’s servers. Cursor stores your keys, and both your source code and chat prompts are sent to Cursor, which then relays them to the respective LLM provider. If you hoped to keep everything local (e.g. via a private OpenAI deployment on a virtual network or a tool like Ollama), you would be disappointed—Cursor does not currently allow bypassing its servers.
Performance Comparisons
Anyone switching from GitHub Copilot to Cursor will likely notice faster performance in some areas. Although Cursor can’t magically speed up requests to external LLMs—those response times are set by the providers—it does show its strengths in operations that either do not require a large LLM or use Cursor’s proprietary models. Leveraging an approach called Speculative Edits [7], Cursor delivers remarkably quick results. Tasks that may take Copilot a few seconds are often nearly instantaneous in Cursor. This speed is particularly evident for code completions and AI-powered “Fast Apply” code modifications requested from the chat window.
Having used both Cursor and Copilot side by side for several months, I find Cursor’s responsiveness helps maintain flow while programming.
Predicting Your Next Steps
Both GitHub Copilot and Cursor collect context (generally from the currently open files) and send it to their respective servers to predict what you might type next. Copilot waits until you pause for a moment, then returns a suggestion that you can accept with a keystroke.
Cursor does something similar but adds a twist: once you accept a suggestion, Cursor may actively propose subsequent changes—even in areas of the code you have not touched yet (see Figure 5). It can guide you through all the code modifications needed for a bigger refactoring, step by step.
Two other Cursor features are especially convenient: Multi Line Edit and Smart Rewrite. Multi Line Edit detects patterns in your edits and proposes consistent changes across your codebase (see Figure 6). Smart Rewrite lets you type a rough idea without worrying about syntax or typos; Cursor attempts to interpret your intention and produce the correct code (see Figure 7). Copilot, on the other hand, tends to replicate any typos you enter, while Cursor often fixes them.
It may take a few days to get used to the shortcuts and suggestions in Cursor, but once you understand how its AI recognises, corrects, and completes code, you might find your coding style shifts significantly.
Code Completion Quality
Neither GitHub Copilot nor Cursor uses top-tier LLMs like GPT‑4o or Claude for basic code completion. Those models are too expensive and slow for frequent, short requests. Instead, Copilot relies on GPT-3.5-turbo or a smaller GPT-4o variant, while Cursor uses its own optimised model.
From my experience, the code-completion quality of both solutions is similar. However, for tasks such as creating comments, writing documentation, or drafting technical text in Markdown, I still find GitHub Copilot’s output more refined. As a result, I often keep both tools open simultaneously, particularly for complex projects.
Chat Features
GitHub Copilot, Cursor, and Windsurf all provide chat functionality that lets you ask about your code or pose general technical questions. Because they can tap into the same LLM providers, the overall quality of answers is comparable.
The biggest differences lie in how context is assembled for the chat:
- GitHub Copilot in Visual Studio Code lets you include the current text selection, individual files, or terminal output (see Figure 8).
- Cursor offers more extensive options, letting you feed code snippets, web search results, or even your own documentation sites into the LLM prompt (see Figure 9).
For instance, if you work with a niche language called TCQL and provide its documentation, Cursor’s chat can successfully answer queries about it (see Figure 10).
Codeium is currently in a transitional phase: its existing IDE extension differs from what is offered in the new Windsurf IDE. The Windsurf chat system, called Cascade, has fewer features for contextual assembly than the extension but compensates with an agent-based approach that can create files, edit them, and even propose and run terminal commands (see Figure 11).
All three tools produce solid replies to complex questions, thanks to modern LLMs. Cursor and Windsurf have a small edge in automatically detecting relevant code sections, while Cursor’s quick “Fast Apply” approach to chat-based code suggestions is particularly impressive.
Workplace Considerations
All three tools send your code to the cloud for AI processing, which requires trust in the providers. GitHub, for example, can point to numerous certifications and audits to demonstrate its data-handling practices.
Meanwhile, Codeium offers an Enterprise Edition that can operate entirely within a customer’s data centre. In testing, however, the locally hosted version lagged behind Codeium’s cloud service in terms of answer quality.
Conclusion
The landscape of AI-assisted development has evolved rapidly. GitHub Copilot still holds a dominant position, integrating seamlessly into many IDEs and delivering strong performance for documentation and text generation. Meanwhile, Cursor and Windsurf build on Visual Studio Code forks to embed AI more deeply into the development process, offering advanced features like next-step predictions, multi-line edits, and agent-based interactions.
Cursor’s noteworthy advantages include its snappy performance, innovative editing features (such as Smart Rewrite), and flexible model selection. Developers who rely on Visual Studio Code might find it hard to go back after enjoying Cursor’s lightning-quick responses and automated refactoring suggestions. Windsurf, although promising, is still maturing and lacks several basic editor capabilities you might expect from a Visual Studio Code fork.
Ultimately, healthy competition in this space ensures that Microsoft and GitHub will continue investing in new Copilot features rather than resting on their laurels—benefiting all developers looking for smarter ways to write and maintain code.