Not all Hallucinations are Good to Throw Away When it Comes to Legal Abstractive Summarization

The extensive use of Large Language Models (LLMs) for summarization tasks brings new challenges. One of the most important is the LLMs' tendency to generate hallucinations, i.e., text that appears fluent and natural but is unfaithful to the source or outright nonsensical. Some hallucinations, however, are factual: a source of knowledge, such as an expert, an ontology, or a knowledge base, can certify their veracity.
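The distinction drawn above, between hallucinated content that an external knowledge source can certify and content that nothing supports, can be sketched as a simple lookup. The triples, knowledge base, and function below are invented for illustration and are not the article's method:

```python
# Toy knowledge base of externally verifiable facts, represented as
# (subject, relation, object) triples. Entries are invented examples.
KNOWLEDGE_BASE = {
    ("GDPR", "entered_into_force", "2018"),
    ("GDPR", "adopted_by", "European Union"),
}


def classify_statement(triple, source_triples, knowledge_base):
    """Label a triple extracted from a generated summary.

    - "faithful": supported by the source document itself.
    - "factual hallucination": absent from the source but certified by
      the knowledge base, so potentially worth keeping.
    - "non-factual hallucination": supported by neither.
    """
    if triple in source_triples:
        return "faithful"
    if triple in knowledge_base:
        return "factual hallucination"
    return "non-factual hallucination"


# Facts actually stated in the (hypothetical) source document.
source = {("GDPR", "adopted_by", "European Union")}

print(classify_statement(("GDPR", "adopted_by", "European Union"), source, KNOWLEDGE_BASE))
print(classify_statement(("GDPR", "entered_into_force", "2018"), source, KNOWLEDGE_BASE))
print(classify_statement(("GDPR", "repealed_in", "2020"), source, KNOWLEDGE_BASE))
```

In practice the hard part is extracting reliable triples from free-form summary text; the lookup itself is the easy step, as the sketch shows.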
