Neolink: Lifelong pathway recommendation model


Find information about the project’s challenges by following this link.

Neolink is a company founded in 2012 that develops solutions to foster an internet that brings people together. Its main objective is to combine economic activity and social utility in its solutions. The company was acquired by Berger-Levrault in 2019.

Social services are notoriously complex to model and manage. It is essential to turn to mutualized systems tailored to the actors of these services. While examples of such tools for the general public are legion today, there are still very few dedicated solutions in the social work and health sector, and even fewer offering long-term support for users. The objective of this project is to propose a multi-stakeholder, interactive recommendation system that recommends social projects in the form of an ordered set of actions, which we call a lifelong pathway.
This second article of the series is dedicated to models for lifelong pathway recommendation.

A chain composite item recommender for lifelong pathways

In the following, we contribute a first approach that lays the foundations of lifelong pathway recommendation. We model the problem, study its complexity, and propose a resolution method, optimizations and evaluation criteria. We consider that recommending such composite items calls for non-traditional recommendation approaches. More precisely, we consider lifelong pathways as particular Chain Composite Items (CCI), which have to satisfy the classical CCI properties of maximality (in terms of relevance), validity (in terms of beneficiary budget), and compatibility (in terms of action composition). The recommendation problem is formalized as a form of orienteering problem, for which we implemented two approaches for finding exact and approximate solutions. We adapt classical evaluation criteria to measure the quality of the recommended pathways. We experiment with both artificial and real datasets, showing that our approach is a promising building block of an interactive lifelong pathway recommender system.

Concepts of our problem statement for the recommendation of lifelong pathways to beneficiaries

Figure 1: Pathway recommender system

We consider a set of actions representing the various personal, professional or health steps that can be undertaken by a beneficiary to further their life goals. Each action is associated with a cost that represents the action's difficulty. Given two actions, we consider a distance between them that represents how smooth the beneficiary's progression would be if these two actions were paired in a pathway.
Our recommendation problem has the following inputs: a set of actions from which pathways are formed, a set of former beneficiaries represented by their diagnoses and the pathways they have undertaken (themselves formed of actions), and a new beneficiary with a diagnosis. The problem consists in recommending a pathway of actions to the new beneficiary, based on their diagnosis and on the former beneficiaries' diagnoses and pathways.
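To make these inputs concrete, here is a minimal sketch, in Python, of the data structures involved; the class and field names (Action, Beneficiary, Pathway, the use of days for costs) are illustrative assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    """A personal, professional or health step a beneficiary can undertake."""
    name: str
    cost: float            # difficulty of the action, e.g. estimated time in days
    features: List[float]  # action profile, computed in sub-problem P1

@dataclass
class Beneficiary:
    """A beneficiary described by their diagnosis vector."""
    diagnosis: List[float]  # descriptive features such as "need for childcare"

@dataclass
class Pathway:
    """An ordered sequence of actions undertaken by (or recommended to) a beneficiary."""
    beneficiary: Beneficiary
    actions: List[Action]

# Problem inputs: a catalogue of actions, former beneficiaries with their
# diagnoses and past pathways, and a new beneficiary for whom a pathway
# (an ordered, repetition-free sequence of actions) must be recommended.
```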
We decompose our recommendation problem into three sub-problems:

  • P1: compute action profiles in terms of action cost and distance between actions. The problem consists in computing a profile for each action.
  • P2: compute a beneficiary profile in terms of relevance of actions. The problem consists in computing a profile for the beneficiary, with a relevance score for each action.
  • P3: compute pathway recommendations. The problem consists in finding a sequence of actions without repetition.

Problem resolution

P1 and P2 are mostly treated as learning problems, where the distances between actions and the relevance of an action for the beneficiary are learned from the set of existing diagnoses and pathways, which were devised by professional social actors. The cost of an action can be extracted more directly from the available data on past pathways.

Computing action profiles (P1)

We model each action as a vector of features, computed as the average of the diagnoses of former beneficiaries who were recommended the action. The cost of an action is fixed to the median time spent on that action, as reported in the set of past pathways. The median is used for its robustness to extreme values. Note that, should this information be unavailable, a machine-learning-based approach similar to the ones described next for distance and relevance computation could be applied.
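As an illustration of P1, a minimal sketch of the profile and cost computation could look as follows, assuming NumPy and a simple dictionary-based data layout (beneficiary diagnoses as vectors, past pathways as mappings from action to duration); these names and structures are assumptions, not the project's actual code.

```python
import numpy as np

def compute_action_profiles(diagnoses, past_pathways, action_ids):
    """P1 sketch: the profile of an action is the mean diagnosis of the former
    beneficiaries who undertook it; its cost is the median observed duration.

    diagnoses     : {beneficiary_id: diagnosis vector (np.ndarray)}
    past_pathways : {beneficiary_id: {action_id: duration in days}}
    """
    profiles, costs = {}, {}
    for a in action_ids:
        # former beneficiaries whose pathway contains action a
        who = [b for b, path in past_pathways.items() if a in path]
        profiles[a] = np.mean([diagnoses[b] for b in who], axis=0)
        # median is preferred to the mean for its robustness to extreme values
        costs[a] = float(np.median([past_pathways[b][a] for b in who]))
    return profiles, costs
```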
Learning distances between actions corresponds to a traditional metric learning problem. Classical approaches addressing this problem include (1) transformation into a classification task, or (2) weakly-supervised metric learning.
Regarding (1), distances are usually linear combinations of feature-wise distances between objects, obtained by fitting a linear classifier (e.g., an SVM) over pairs of objects labeled positively if the objects should be in the same group and negatively otherwise.
As to (2), weakly-supervised metric learning approaches aim at defining a new space based on input pairwise Must-Link or Cannot-Link constraints, which specify respectively whether two points should be close or distant in the output space.
Finally, in the general case, a learned metric can express more complex scalings of and relations between features than a plain Euclidean distance.
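As a sketch of option (1), one could fit a linear classifier over pairs of action profiles and reuse its weights as a weighted feature-wise distance; the code below assumes scikit-learn, and the pair representation (element-wise absolute differences) and label convention are illustrative choices, not the method used in the project.

```python
import numpy as np
from sklearn.svm import LinearSVC

def learn_weighted_distance(X, pairs, same_group):
    """Option (1) sketch: learn per-feature weights for a distance defined as a
    linear combination of feature-wise distances.

    X          : (n_actions, n_features) matrix of action profiles
    pairs      : list of (i, j) action index pairs
    same_group : list of booleans, True if actions i and j should be close
    """
    # each pair is represented by the element-wise absolute difference of profiles
    diffs = np.array([np.abs(X[i] - X[j]) for i, j in pairs])
    labels = np.array([1 if s else 0 for s in same_group])
    clf = LinearSVC().fit(diffs, labels)
    # the magnitude of each coefficient is reused as a feature weight
    w = np.abs(clf.coef_.ravel())

    def distance(i, j):
        return float(np.dot(w, np.abs(X[i] - X[j])))

    return distance
```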

Computing beneficiary profile (P2)

As you can see in Figure 1, the set of former pathways can be seen as a graph of actions that were undertaken in the past. While this graph could have been used to extract action popularity scores which may be useful for recommending popular actions, our goal is to compute a different profile for each new beneficiary. As such, our problem is close to a cold start problem for a new user, for which we use a hybrid approach, leveraging both the beneficiary diagnosis and the former diagnoses and pathways.
More precisely, we define the collaborative relevance of an action for a beneficiary as the probability that they will undertake the action. We express this problem as a classification task aiming to predict “True” if the action is relevant or “False” if it is not, based on the diagnosis of the beneficiary. We use a naive Bayes approach to obtain the expected probability values. By generalizing the prediction for a beneficiary from observations over past beneficiaries, this score acts as a collaborative relevance score.
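A minimal sketch of this collaborative relevance computation, assuming scikit-learn's Gaussian naive Bayes and one binary label vector per action (1 if a former beneficiary undertook the action); the function and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def collaborative_relevance(past_diagnoses, undertook, new_diagnosis):
    """P2 sketch: for each action, predict the probability that a beneficiary
    with the given diagnosis undertakes it, and use it as a relevance score.

    past_diagnoses : (n_beneficiaries, n_features) array of former diagnoses
    undertook      : {action_id: binary vector of length n_beneficiaries}
    new_diagnosis  : feature vector of the new beneficiary
    """
    X = np.asarray(past_diagnoses)
    relevance = {}
    for action, y in undertook.items():
        clf = GaussianNB().fit(X, y)
        proba = clf.predict_proba([new_diagnosis])[0]
        # probability of the positive class ("the action is relevant")
        relevance[action] = float(proba[list(clf.classes_).index(1)])
    return relevance
```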

Pathway recommendation (P3)

We choose to use a Mixed Integer Programming (MIP) model, as we expect most instances to be small enough for an exact solution to be found tractably by a state-of-the-art solver like CPLEX. Such a model makes it easy to add linear constraints on a case-by-case basis, whenever needed. For efficiency purposes, the resolution of such problems classically reformulates the initial multi-objective problem into a single-objective problem (the maximization of relevance) and rewrites the two remaining objectives (minimizing the time budget and the distance) as so-called epsilon constraints, with upper bounds on the time budget and the distance. This reformulation benefits from the single-objective optimization mechanisms available in CPLEX.
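In a simplified form that leaves out the ordering and path-connectivity constraints of the full orienteering formulation, the epsilon-constraint reformulation can be sketched as follows, where x_a is a binary variable selecting action a, r_a its relevance, c_a its cost, d_ab the learned distance between consecutive actions, and the two epsilon values are the upper bounds mentioned above.

```latex
\max_{x}\ \sum_{a} r_a\, x_a
\quad \text{s.t.} \quad
\sum_{a} c_a\, x_a \le \varepsilon_{\text{time}},
\qquad
\sum_{(a,b)\ \text{consecutive in the pathway}} d_{ab} \le \varepsilon_{\text{dist}},
\qquad
x_a \in \{0,1\}.
```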
For larger instances of the problem that cannot be handled by CPLEX, we implemented a simple greedy algorithm to efficiently compute approximate solutions to P3. Its principle is as follows. It starts by picking the most relevant action, and then adds subsequent actions by computing a score (relevance divided by distance) and picking the action achieving the best score.
A windowing system is used to limit the number of comparisons (the larger the window, the more of the most relevant actions are considered). Additionally, at each iteration, costly actions not respecting the remaining budget are pruned.
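The following is a minimal sketch of this greedy principle in Python; the function name (greedy_prs), the default window size and the data layout (dictionaries of relevance and cost, a distance function) are assumptions made for illustration.

```python
def greedy_prs(relevance, cost, distance, budget, window=5):
    """Greedy sketch: start from the most relevant action, then repeatedly score
    the `window` most relevant remaining actions by relevance / distance to the
    last chosen action, pruning actions whose cost exceeds the remaining budget."""
    remaining = set(relevance)
    pathway = []
    # start with the most relevant affordable action
    first = max((a for a in remaining if cost[a] <= budget),
                key=relevance.get, default=None)
    if first is None:
        return pathway
    pathway.append(first)
    remaining.remove(first)
    budget -= cost[first]
    while True:
        # prune actions that do not respect the remaining budget
        candidates = [a for a in remaining if cost[a] <= budget]
        if not candidates:
            break
        # windowing: only the `window` most relevant candidates are compared
        candidates = sorted(candidates, key=relevance.get, reverse=True)[:window]
        last = pathway[-1]
        best = max(candidates,
                   key=lambda a: relevance[a] / max(distance(last, a), 1e-9))
        pathway.append(best)
        remaining.remove(best)
        budget -= cost[best]
    return pathway
```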

Evaluation of the approach

In testing our approach, we wanted to answer two main questions: how effective is our PRS approach at recommending lifelong pathways? And to what extent does PRS scale to larger instances of the problem, i.e., to a larger population of possible actions?

To test the effectiveness of PRS, we used a real dataset composed of 2,812 user diagnoses and 56 actions, with 14 descriptive features representing contextual information such as “need for childcare”, “searching for a job”, etc.
Our evaluation scheme uses classical evaluation metrics for recommender systems, namely precision and recall of the discovered composite items when compared to the real pathways that were undertaken by the beneficiaries. However, as the recommendation task is complex, we consider precision and recall at a certain similarity threshold. We consider that there is a match between a recommended action and an expected action from the pathway if their similarity exceeds a threshold that is a parameter of the precision and recall measures.
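For illustration, thresholded precision and recall could be computed along these lines; the greedy one-to-one matching between recommended and expected actions is an assumption, as the exact matching strategy is not detailed here.

```python
def precision_recall_at_threshold(recommended, expected, similarity, threshold):
    """Sketch: a recommended action matches an expected action if their
    similarity exceeds `threshold`; each expected action is matched at most once."""
    unmatched = list(expected)
    matches = 0
    for r in recommended:
        for e in unmatched:
            if similarity(r, e) >= threshold:
                matches += 1
                unmatched.remove(e)
                break
    precision = matches / len(recommended) if recommended else 0.0
    recall = matches / len(expected) if expected else 0.0
    return precision, recall
```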
As explained before, P3 is handled by reformulating objectives as epsilon constraints, for which an upper bound has to be provided. To conduct our tests, we experiment with several values for these bounds. Notably, we consider 3 thresholds for the cost constraint, expressed as the run time of actions: 207, 364 and 601 days, as these are the average observed periods of time for people below 25, between 25 and 55, and above 55 years old, respectively, to go back to work after a period of unemployment. Similarly, for the distance constraint we consider 3 distance thresholds (3, 4 and 5) that allow several actions to be included in one recommended pathway. The figures below provide histograms of the distributions of run time costs and distances, which support the choices of the aforementioned thresholds. In order to assess the accuracy of our metric learning, we run a comparison with a simple Euclidean distance on all the previous scenarios. Finally, a three-fold cross-validation is set up to ensure that train and test sets are well separated and to produce average values.

Figure 2: Histogram of observed run time costs for the actions in the French RSA dataset
Figure 3: Histogram of distances between actions

Figures 4, 5 and 6 summarize the main results of our experiments in terms of F-measure at a given similarity threshold. Note that Figures 5 and 6 represent different groupings of the same data, averaging over distance and time, respectively. The results presented are averaged over the 3-fold cross-validation process, and the standard deviation around each mean is represented as a light-colored area around the plots. Importantly, the similarity is expressed as a function of the distance between actions.
Figures 4 and 5 show the importance of the distance between actions in our method. Indeed, Figure 4 presents results with a traditional Euclidean distance between actions, which does not benefit from external knowledge of which actions should be grouped together to better fit the observed pathways. Also, the lower the similarity threshold on these figures, the easier it is to find a match between a recommended and an observed action, and the more likely the F-measure score is to be high. Consistently with this observation, both Figures 4 and 5 show a decreasing F-measure score as the similarity threshold increases. However, our learned distance between actions allows for a very high F-measure until the similarity threshold is close to 1, while the traditional Euclidean distance falls off at smaller similarity thresholds. Interestingly, in both cases, the F-measure score when looking for a perfect match (similarity threshold = 1) is close to 0.3, with slightly better performance for the tuned metric. Finally, the F-measure score is higher for short-term pathways, which is expected, as it is easier to predict over short periods of time and for shorter sequences, for which recall is easier to improve.
Figures 5 and 6 show that Problem P3 can be solved just as effectively whether we consider constraints on the time budget, as in Figure 5, or on the maximal distance, as in Figure 6. All in all, the formulation of the resolution of P3 with epsilon constraints does not seem to be the limiting factor in the observed F-measure results. In the next paragraph, we study the influence of the number of actions on the resolution of Problem P3.

Figure 4: F-measure scores computed for different similarity thresholds and 3 run time cost thresholds. The distance between actions is set to a Euclidean distance.
Figure 5: F-measure scores computed for different similarity thresholds and 3 run time cost thresholds. The distance between actions is learned as explained above.
Figure 6: F-measure scores computed for different similarity thresholds and 3 distance thresholds. The distance between actions is learned as explained above.

For our scalability test, in order to understand the maximum number of actions PRS can handle, we generated eight artificial datasets as follows. We first used a Gaussian kernel density estimator (KDE) to fit the distributions of relevance, cost and distance, respectively, using the samples obtained from the real dataset used in the effectiveness tests. We then generated eight datasets of increasing sizes (100, 200, 300, 400, 500, 1000, 5000, 10,000) representing the inputs of P3, by drawing relevance, cost and distance values from the distributions obtained with the estimator.
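A sketch of this generation step, assuming SciPy's Gaussian KDE; the data layout (one list of observed values per quantity, and a full pairwise distance matrix for the generated actions) is an illustrative assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

def generate_synthetic_inputs(obs_relevance, obs_cost, obs_distance, n_actions):
    """Fit a Gaussian KDE on each quantity observed in the real dataset and
    resample new values to build an artificial P3 instance with n_actions actions."""
    def resample(samples, size):
        kde = gaussian_kde(samples)         # fit the kernel density estimator
        return kde.resample(size).ravel()   # draw new samples from it
    relevance = resample(obs_relevance, n_actions)
    cost = resample(obs_cost, n_actions)
    # pairwise distances between the generated actions
    distance = resample(obs_distance, n_actions * n_actions).reshape(n_actions, n_actions)
    return relevance, cost, distance
```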
We run CPLEX on each dataset, setting a time out of one hour. The results, depicted in Figure 7, are as expected and illustrate the hardness of the problem. For sizes above 500 actions, the time out of one hour was reached.
We also run greedyPRS on the eight datasets, varying the budget (from 100 to 1000). For scalability testing, we report only the results for the dataset with 10,000 actions. As shown in Figure 8, contrary to the exact resolution by CPLEX, greedyPRS solves much larger instances of the problem in milliseconds. Of course, because of the simplicity of the greedy algorithm, solutions may not be feasible, since the distance epsilon-constraint is not checked.
To understand the quality of the solutions found by greedyPRS, we also run it on the French RSA dataset. As illustrated in Figure 9, the greedy approach maintains a reasonable F-measure in recommendation albeit not as good as PRS in our preliminary tests.

Figure 7: Scalability of P3 (exact)
Figure 8: Resolution of P3 with GreedyPRS (approximate solution): scalability
Figure 9: Resolution of P3 with GreedyPRS (approximate solution): F-measure

To conclude

This article introduces a system for lifelong pathways recommendation. We model lifelong pathways as particular chain composite items that are built from atomic actions, and whose retrieval is formalized as a form of orienteering problem. We experiment with both artificial and real datasets, showing our approach
is a promising building block of an interactive lifelong pathways recommender system.
Our short-term perspectives consist of including a multi-stakeholder context in our model, to reconcile the objectives and constraints of beneficiaries, social actors and service providers, as well as mechanisms for the exploration of pathways, so that users can interact with the recommender. In the longer term, we plan to add an explanation mechanism for the recommendations, to increase stakeholders' trust in the proposed pathways and to improve their acceptance.
