WeMagnus intelligent assistant: digital innovation for City Hall General Secretaries

A City Hall secretary working at a computer

As a management platform dedicated to local authorities, Berger-Levrault’s WeMagnus software stands out for its solutions based on the latest advances in artificial intelligence (AI) and automated information processing. Its new Intelligent Assistant offers users precise, contextualized support.

But how was this tool designed? What technologies does it rely on? What are its specific features?

Designed as close as possible to users

To create this trusted professional companion, the BL.Research Team immersed itself in the day-to-day work of City Hall secretaries. Understanding their practices, needs and difficulties was essential to designing an interface suited to both agents and citizens.

  • Immersion in the field: on-site immersion within local authorities to better understand the daily lives of City Hall secretaries, their constraints and their expectations.
  • Focus group on interactions: a role-playing exercise simulating interactions between City Hall secretaries and the Intelligent Assistant, which brought out usage scenarios and initial ideas for concrete questions.
  • Focus group on information retrieval: in-depth exploration of the questions City Hall secretaries ask themselves on a daily basis, the difficulties associated with information retrieval, and the definition of the ideal user path to reach relevant sources.

The goal was to design an intelligent assistant capable of centralizing and analyzing business, regulatory and software data to answer the day-to-day questions of town hall general secretaries.

Development focused on research and innovation

We therefore focused on creating an answer engine based on the retrieval-augmented generation (RAG) technique. This artificial intelligence tool is designed to answer any question formulated in natural language by combining our document databases, word embedding models and the power of large language models (LLMs).

This answer engine combines two steps, sketched below:

  1. Information search: Identification of relevant sources in document databases (online help, FAQ, FlashInfo, BL.API sheet, etc.).
  2. Answer generation: Extraction of the information of interest and generation of a concise, reliable and contextualized answer using a large language model (LLM).
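
To make these two steps concrete, here is a minimal, self-contained Python sketch of the retrieval-augmented generation idea. The keyword-overlap retrieval, the string-building “generation” step and the sample documents are toy stand-ins invented for illustration; only the retrieve-then-generate structure mirrors what is described above, not the embedding models or LLM actually used in WeMagnus.

```python
# Minimal RAG sketch. The keyword-overlap retrieval and the string-building
# "generation" step are toy stand-ins for the embedding models and LLM used
# in production; only the two-step structure (retrieve, then generate over
# the retrieved context) is the point being illustrated.

def retrieve(question: str, documents: dict[str, str], top_k: int = 2) -> dict[str, str]:
    """Step 1 - information search: rank sources by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return dict(ranked[:top_k])


def generate(question: str, sources: dict[str, str]) -> str:
    """Step 2 - answer generation: in production an LLM writes a concise,
    contextualized answer from the retrieved passages; here we simply echo
    them together with their references."""
    context = "\n".join(f"[{name}] {text}" for name, text in sources.items())
    return f"Question: {question}\nAnswer based on:\n{context}"


documents = {
    "FAQ-12": "Electoral lists must be revised before each election.",
    "FlashInfo-3": "The civil status register must be closed on 31 December.",
}
question = "When are electoral lists revised?"
print(generate(question, retrieve(question, documents)))
```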

This enables us to offer exhaustive, sourced and high-quality answers, going beyond general-purpose solutions such as ChatGPT, Claude or Perplexity.

Figure 2: a) WeMagnus Intelligent Assistant home screen; b) Intelligent Assistant response screen, where the answer lists the sources used in order to strengthen reliability and user confidence.

Reliability and trust at the heart of our concerns

To ensure high reliability (90%), well above the relevance rates of off-the-shelf LLMs, the team combined several advanced techniques:

  • Data preparation, cleaning and enrichment
  • Prompt engineering: the assistant is able to answer “I don’t know”, thus avoiding the errors (or “hallucinations”) typical of LLMs (see the prompt sketch after this list).
  • Rigorous validation: the solution was validated in three stages, with the active participation of the Local and Regional Authorities’ help desk:
    • Functional tests: carried out by the BL.Research Team on the basis of test plans produced by the support team.
    • Relevance tests: carried out by help desk experts (coordinator and referents).
    • Usage tests: carried out in immersion with novice help desk users, answering customers’ questions live.
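
To illustrate the prompt-engineering point, the sketch below shows the kind of instruction that pushes an LLM to answer “I don’t know” instead of hallucinating. The wording, function name and sample source are illustrative assumptions, not the production prompt used in WeMagnus.

```python
# Illustrative prompt template; the wording is an assumption, not the actual
# WeMagnus system prompt.
SYSTEM_PROMPT = (
    "You are the WeMagnus assistant for City Hall secretaries.\n"
    "Answer ONLY from the numbered sources provided and cite them as [n].\n"
    'If the sources do not contain the answer, reply exactly: "I don\'t know."'
)


def build_prompt(question: str, sources: list[str]) -> str:
    """Assemble the full prompt sent to the LLM for one question."""
    numbered = "\n".join(f"[{i}] {text}" for i, text in enumerate(sources, start=1))
    return f"{SYSTEM_PROMPT}\n\nSources:\n{numbered}\n\nQuestion: {question}"


print(build_prompt(
    "How do I close the civil status register?",
    ["The civil status register must be closed on 31 December."],
))
```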

These initial tests, used to validate the tool’s reliability and usefulness, are supplemented by automated tools for continuous quality validation (a minimal sketch follows).
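
As one way to picture such continuous validation, here is a minimal sketch of an automated regression check. The stubbed ask_assistant call and the reference question/keyword pair are invented for illustration; they are not Berger-Levrault’s actual test data or tooling.

```python
# Toy continuous-quality check: replay reference questions against the
# assistant and verify the answers still contain the expected elements.
# ask_assistant is a stub; in practice it would call the deployed assistant.

def ask_assistant(question: str) -> str:
    return "Electoral lists are revised before each election (source: FAQ-12)."


REFERENCE_CASES = [
    ("When are electoral lists revised?", ["electoral", "revised", "faq-12"]),
]


def pass_rate() -> float:
    """Share of reference questions whose answer contains all expected keywords."""
    passed = sum(
        all(keyword in ask_assistant(question).lower() for keyword in keywords)
        for question, keywords in REFERENCE_CASES
    )
    return passed / len(REFERENCE_CASES)


print(f"Quality check pass rate: {pass_rate():.0%}")
```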

Figure 4: Validation process for the intelligent assistant, carried out with the BU Local and Regional Authorities’ support services.

An adaptable, scalable infrastructure

The Assistant’s architecture is designed to be both robust and adaptable. It can integrate documents from a wide variety of formats and sources, and update them regularly to incorporate new software knowledge. This flexibility is the result of a modular approach that breaks functionality down into microservices, making it easier to improve the tool continuously and to add new capabilities as technology advances.

Document processing is carried out incrementally, in four sub-steps (a rough sketch of the pipeline follows the list):

  • Extraction: retrieval of documents from various sources (customer area, BL.API) and in various formats (PDF, Word, HTML, etc.), followed by classification.
  • Parsing: elimination of noisy data and standardization of information.
  • Enrichment: adding metadata and contextualizing data to enhance its informative value.
  • Restructuring: structuring information to make it usable by our AI models.
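
The sketch below shows how such an incremental pipeline might be organized in Python. The Document fields, the chunk size and the step functions are illustrative assumptions that mirror the sub-steps above; they are not the WeMagnus implementation.

```python
# Illustrative document-processing pipeline; field names and steps are
# assumptions mirroring the four sub-steps described above.
from dataclasses import dataclass, field


@dataclass
class Document:
    source: str                      # e.g. "customer area" or "BL.API"
    raw: str                         # extracted content, already converted to text
    text: str = ""                   # cleaned, standardized text
    metadata: dict = field(default_factory=dict)
    chunks: list = field(default_factory=list)


def parse(doc: Document) -> Document:
    """Parsing: drop noisy whitespace and standardize the content."""
    doc.text = " ".join(doc.raw.split())
    return doc


def enrich(doc: Document) -> Document:
    """Enrichment: attach metadata that contextualizes the document."""
    doc.metadata.update(source=doc.source, length=len(doc.text))
    return doc


def restructure(doc: Document, chunk_size: int = 500) -> Document:
    """Restructuring: split the text into chunks the AI models can index."""
    doc.chunks = [doc.text[i:i + chunk_size] for i in range(0, len(doc.text), chunk_size)]
    return doc


def ingest(extracted: list) -> list:
    """Extraction (fetching and classifying) is assumed done upstream;
    each document then flows through the remaining sub-steps incrementally."""
    return [restructure(enrich(parse(doc))) for doc in extracted]


docs = ingest([Document(source="customer area", raw="  How to   revise electoral lists...  ")])
print(docs[0].metadata, docs[0].chunks)
```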

Our Intelligent Assistant interface is also modular, thanks to an API and micro-frontend architecture. This enables seamless integration with WeMagnus.
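
As a rough illustration of how an API layer can expose the assistant to a micro-frontend embedded in WeMagnus, here is a minimal FastAPI sketch; the endpoint path, payload shape and placeholder response are assumptions, not the actual WeMagnus API.

```python
# Hypothetical HTTP facade for the assistant; route and schema are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Question(BaseModel):
    text: str


class Answer(BaseModel):
    answer: str
    sources: list[str]


@app.post("/assistant/answer", response_model=Answer)
def answer(question: Question) -> Answer:
    # In production this would call the RAG pipeline sketched earlier;
    # here we return a fixed placeholder so the request/response contract is visible.
    return Answer(answer="placeholder answer", sources=[])
```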

Perspectives for research and continuous improvement

The BL.Research Team by Berger-Levrault is actively exploring new methods for enriching results with multimedia data, assisting with editing and, more broadly, developing natural language interaction within WeMagnus. This continuous improvement approach guarantees a constantly evolving service, capable of meeting the future challenges of municipal management.

Thus, WeMagnus Intelligent Assistant represents an important step towards more efficient and more intelligent public management, taking advantage of technological advances to support local government players in their day-to-day missions.

To find out more about the WeMagnus commercial solution, click here!
