After nearly a year of daily use of GitHub Copilot in a software research and development context, here is a feedback report based on my real-world experience. I work as a full-stack developer on projects involving Python and TypeScript (Angular), as well as cross-cutting topics such as documentation, configuration, and DevOps.
Over the months, GitHub Copilot has established itself as a tool that is both powerful and imperfect. It has delivered very concrete productivity gains on well-understood scopes, while also forcing me to adapt my practices when facing its limitations—especially when dealing with large contexts or cross-cutting changes. More than a simple code completion tool, Copilot has progressively influenced the way I work.
At first, I used GitHub Copilot in a fairly basic way. It was mainly for code completion, template generation, and occasionally for algorithm proposals that I copied from the chat and manually adapted in my files. At that stage, Copilot saved me some time but clearly remained a secondary tool in my workflow.
With the arrival of edit and agent modes, my usage changed radically. Copilot no longer just suggests code line by line: it explores the project, analyzes the existing codebase, and directly proposes concrete changes across one or several files. The interface, which lets you review each change precisely and accept or reject each one individually, was a real turning point. Today, in most cases, I start by asking Copilot to do the work. I only intervene manually afterwards, either to adjust what was proposed or when the result does not meet my expectations.
In an R&D context, where exploration, prototyping, stabilization, and knowledge transfer constantly overlap, this inversion of the workflow changes a great deal. The real question is no longer whether Copilot is “good” or “bad,” but when it provides real value and when it is better to take back control. This is exactly what I aim to analyze here, through feedback grounded in daily use: sometimes very effective, sometimes more frustrating, but always instructive.
What Copilot Brings to Me on a Daily Basis
Application Code Generation
On well-defined scopes, GitHub Copilot has proven very effective at generating Python code, Angular services or components, TypeScript models, and unit tests. Over time, however, I realized that there are two very different ways of using Copilot for code generation.
The first approach is to give it a relatively high-level end goal: what you want to achieve, without necessarily detailing how to get there. Copilot then analyzes the context, chooses an approach on its own, and generates a complete solution. This can work, especially for rapid prototyping, but it has a major drawback: the internal logic and architectural decisions are implicitly made by the tool. With Angular components in particular, this can quickly lead to implementations that do not align with the patterns already used elsewhere in the application.
The second approach—by far the one I use most often—is to describe very precisely the procedure I want to see applied. I explain the business logic, the steps to follow, the expected architecture, the patterns to respect, and sometimes even the exact structure of files or functions. Copilot then mainly acts as an executor: it writes the code following my logic, rather than proposing its own.
This approach has several advantages. It drastically reduces the time spent reviewing and fixing generated code, allows me to retain strong control over the overall architecture, and ensures better consistency between modules. In practice, I rarely ask Copilot for a “turnkey” final solution. I prefer to dictate the path and use it as an implementation accelerator rather than as a technical decision-maker.
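To make the difference concrete, here is a sketch of what the second approach looks like in practice. The prompt (reproduced as a comment) spells out the steps and the expected structure, and the generated function simply follows them. The function name, data shape, and merge rules are hypothetical, chosen purely for illustration:

```python
# Hypothetical prompt given to Copilot:
# "Write a function `merge_user_records` that:
#   1. takes two lists of dicts, each dict having an 'id' key,
#   2. merges them, with the second list overriding the first on 'id' conflicts,
#   3. returns the merged records sorted by 'id'."

def merge_user_records(base, overrides):
    """Merge two lists of user dicts; `overrides` wins on 'id' conflicts."""
    # Index the base records by id, then apply the overrides on top.
    merged = {record["id"]: record for record in base}
    merged.update({record["id"]: record for record in overrides})
    # Return a deterministic, id-sorted list.
    return sorted(merged.values(), key=lambda r: r["id"])
```

Because the steps, the data structure, and even the function name were dictated in the prompt, reviewing the output amounts to checking that each step was implemented, rather than reverse-engineering design decisions the tool made on its own.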
Log-Based Assisted Debugging
One of the most impactful use cases for me is debugging based on logs and stack traces. Very often, I provide Copilot with API error logs or exception traces, along with the relevant files, sometimes without having a clear idea of the root cause myself.
In these situations, Copilot is often able to identify plausible causes and propose coherent fixes. What impressed me the most, however, is its ability to verbalize its reasoning step by step. It does not just provide a solution: it explains why a given hypothesis is considered, which elements in the logs support it, and what alternative explanations might exist.
This explicit reasoning is essential. It allows me to keep a critical mindset, confront its hypotheses with my own understanding of the system, and use Copilot as an analytical support rather than a black box. In this role, it becomes a genuine reasoning partner.
DevOps and Skill Development: Much More Than Code Generation
Copilot has regularly helped me generate docker-compose files, Nginx configurations, automation scripts, and development environment setups. But the most striking example was the transition from a Docker Compose-based deployment to a Kubernetes deployment. I had never created a Helm chart before. Starting from an existing docker-compose architecture, I was able—with Copilot’s help—to build a functional Helm chart. It guided me step by step, progressively explaining Kubernetes concepts, chart structure, the role of each file, and the reasoning behind the choices made.
Of course, this learning process does not rely solely on Copilot as an editor tool. More broadly, any LLM can help explain Kubernetes concepts or Helm charts. But using it directly in the editor, on a real project, with real constraints, and receiving explanations alongside concrete code changes, completely changes the experience. This continuous interaction between analysis, generation, and explanation is what truly enables learning.
In hindsight, this is probably one of the most valuable aspects of my Copilot usage. It allows me to make progress across many domains by following its reasoning and proposals—not just to produce code faster.
I clearly see two distinct modes of usage. On topics I already master, I mainly use Copilot as an accelerator: I dictate the logic and let it speed up implementation. The benefit is primarily time savings.
On topics I master poorly—or not at all—the approach is different. Copilot becomes a learning tool. It allows me to do things I would not be able to do alone, or not without a significant investment, and to progressively acquire a global understanding of the subject. This approach has two important limitations:
- The first is learning depth: I have noticed that this skill acquisition sometimes remains superficial. I may have a high-level understanding, but I occasionally ask the same questions again, which is a sign that the knowledge is not as deeply anchored as with more traditional learning methods.
- The second limitation is critical thinking. When you do not master a topic, it is difficult to challenge the choices proposed by Copilot. In the case of Helm charts, for example, I followed its logic while being aware that other approaches probably exist—possibly more relevant ones—that I am not yet able to fully evaluate.
Documentation and Testing: Strong Potential, Still Underused
There are two use cases I still rely on relatively little, but which clearly deserve mention given how convincing the results are when I do use them: documentation generation and test generation.
For documentation, GitHub Copilot has proven very effective, whether for documenting methods and classes or for producing more global documentation such as README files. Based on existing code, it can explain how a component works, describe its role in the application, explain how to configure it, how to modify it, and how to use it. In an R&D context, where documentation is often incomplete or outdated, this capability is particularly valuable—especially for knowledge transfer, onboarding, or simply revisiting code after several weeks.
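As an illustration of method-level documentation, here is the kind of docstring this workflow typically yields. The helper itself is a hypothetical example (a simple exponential backoff calculation), not taken from my codebase; only the documentation style is the point:

```python
def retry_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Return the backoff delay (in seconds) before the next retry.

    The delay doubles with each attempt (``base * 2 ** attempt``) and is
    capped at ``cap`` to avoid unbounded waits.

    Args:
        attempt: Zero-based retry attempt number.
        base: Initial delay in seconds for the first attempt.
        cap: Maximum delay in seconds.

    Returns:
        The delay to wait before retrying, never exceeding ``cap``.
    """
    return min(base * 2 ** attempt, cap)
```

Generated from the function body alone, this kind of docstring is usually accurate; the review effort shifts from writing prose to checking that edge cases (here, the cap) are described correctly.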
Copilot is also very strong at generating tests, mocks, and fixtures. Given an existing service or component, it can propose coherent unit tests, cover the main use cases, and set up the necessary mocking structures. The suggestions are generally clean and usable, although—as always—they require careful review.
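To give an idea of what these generated tests look like, here is a sketch in the style Copilot typically proposes for a service with an injected dependency. Both the service and the test are hypothetical examples built with the standard `unittest.mock` module:

```python
from unittest.mock import Mock

# Hypothetical service under test: fetches a user via an injected HTTP client.
class UserService:
    def __init__(self, client):
        self.client = client

    def display_name(self, user_id):
        user = self.client.get(f"/users/{user_id}")
        return user.get("name", "unknown")

def test_display_name_falls_back_when_name_missing():
    # Copilot-style test: mock the dependency, assert both the result
    # and the interaction with the mocked client.
    client = Mock()
    client.get.return_value = {"id": 7}  # no 'name' key: fallback path
    service = UserService(client)
    assert service.display_name(7) == "unknown"
    client.get.assert_called_once_with("/users/7")
```

The mocking boilerplate is exactly the part that is tedious to write by hand and that the tool produces reliably; what still needs human attention is whether the covered cases are the ones that actually matter.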
I do not yet have enough experience with these use cases to provide feedback as detailed as for application code or DevOps. I still use them sporadically. But based on what I have seen so far, Copilot is particularly relevant for these tasks, and this is clearly an area I plan to explore further.
The Context Challenge: Limits, Adjustments, and Evolution
The main limitation I have encountered with GitHub Copilot concerns the handling of large contexts. As soon as requests involve many files, cross-cutting changes, or a deep understanding of the global architecture, the quality and coherence of the proposals tend to degrade. Copilot may produce locally relevant changes, but ones that are difficult to assess as a whole, with a real risk of regressions or inconsistencies.
This issue is especially visible in an R&D context. In such environments, applications and components are often developed without a fixed specification from the outset. Features emerge progressively, driven by needs and experimentation. Due to time constraints—or because functional direction is not yet stabilized—there is not always an opportunity to rethink the architecture with each new evolution. Code grows, sometimes piles up, and certain parts gradually become harder to maintain.
I encountered this situation with a component that had evolved significantly over time. Features had accumulated, the structure had become less readable, and maintainability was clearly becoming an issue. I then tried to use Copilot’s agent mode to propose a global refactoring plan. On paper, the approach was solid: Copilot analyzed the component, identified mixed responsibilities, and suggested a cleaner, more coherent decomposition.
However, once execution began, the limits quickly became apparent. The refactoring required changes across many interdependent files. Copilot introduced numerous regressions, broke existing behaviors, and was unable to reliably fix everything. The context to manage was simply too large, and the cost of validation and correction far outweighed the expected benefit. In this specific case, I had to take back control and perform the refactoring in a much more manual and incremental way.
This experience led me to adapt my usage of Copilot. Rather than entrusting it with large-scale refactorings, I now use it on more constrained scopes: extracting subcomponents, localized cleanup, consistent renaming, or simplifying well-targeted sections of code. Deliberately limiting the context, breaking tasks down, and working in short iterations results in far more relevant proposals that are easier to validate.
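A scoped extraction of the kind mentioned above might look like this. The handler and its validation rules are hypothetical, but the shape of the change is representative: pull one responsibility out of a function so the refactor can be reviewed in isolation:

```python
# Before: input validation tangled inside a request handler (hypothetical).
def handle_signup(payload):
    if "@" not in payload.get("email", ""):
        return {"error": "invalid email"}
    if len(payload.get("password", "")) < 8:
        return {"error": "password too short"}
    return {"ok": True}

# After a scoped, Copilot-assisted extraction: validation lives in its own
# helper, and the handler only orchestrates.
def validate_signup(payload):
    """Return an error message, or None if the payload is valid."""
    if "@" not in payload.get("email", ""):
        return "invalid email"
    if len(payload.get("password", "")) < 8:
        return "password too short"
    return None

def handle_signup_refactored(payload):
    error = validate_signup(payload)
    return {"error": error} if error else {"ok": True}
```

A change this small is trivial to validate, which is precisely why chaining many such steps works better than asking for the whole restructuring at once.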
After about a year of continuous use, I have nonetheless observed positive progress on this front. Understanding of existing code has improved, context handling is more robust than it used to be, and agent mode has become more stable. The limitations related to massive contexts are still there, but they are now better identified and easier to work around. The overall trajectory is encouraging—as long as Copilot is used thoughtfully and never treated as an autopilot.
Conclusion
After a year of use in R&D, GitHub Copilot has proven to be much more than a simple code completion tool. It delivers real productivity gains on well-scoped tasks, but also provides meaningful support for analysis, reasoning, and skill development in areas that are less familiar. Used directly in the editor on a real project, it enables learning “by doing,” by following the explanations and design choices it proposes.
That said, Copilot does not replace expertise or technical judgment. In broader contexts or for architectural decisions, a high level of caution is required: its suggestions must be clearly framed, broken down, and systematically validated. Without a critical mindset, there is a risk of following choices that are not fully understood or mastered.
In terms of productivity, even though it is difficult to rely on objective metrics, the subjective assessment after one year of use is quite clear. For generating code templates, small methods, or simple and repetitive logic, the speed of development is roughly doubled. For more complex tasks that require analysis, understanding, and adaptation of the generated code, the perceived gain can reach ×3 or ×4, including review time—and sometimes even more, although this is harder to estimate precisely. Conversely, there are cases where the tool actually costs time: when the proposed solution is not appropriate and has to be dismantled and redone, productivity can occasionally drop to half of what it would be when writing the code directly.
Ultimately, Copilot mainly changes the developer’s posture. Less time is spent writing code line by line, and more time is devoted to guiding, analyzing, and validating. When used well, it is both a production accelerator and a genuine learning tool. When used poorly, it quickly becomes counterproductive.