CORDIS - EU research results

Leveraging Graph Theory for Food Delivery Logistics Optimization

Periodic Reporting for period 1 - GraphEats (Leveraging Graph Theory for Food Delivery Logistics Optimization)

Reporting period: 2025-05-01 to 2027-04-30

On-demand food and grocery platforms must assign couriers to sets of orders in real time while keeping delivery times low, travel distances short, and customer satisfaction high. This "dispatch" problem is computationally hard because new orders arrive continuously, couriers move, and many feasible assignments overlap. This project tackles the challenge through graph theory: each possible plan is treated as a vertex, and an edge links two plans that cannot be selected together (a "conflict graph"). By studying the structure of this underlying conflict graph—how sparse it is, how conflicts cluster, and where it looks tree-like—we can simplify the decision space and make better, faster decisions. The overall objectives were: (1) understand the typical shapes and limits of conflicts created by common batching rules; (2) exploit those insights to build reduction techniques and smarter branching strategies that simplify and accelerate optimization; and (3) test the resulting methods on synthetic and, if available, real operational scenarios. The expected pathway to impact moves from structural insight toward faster solvers, shorter delivery times, and fewer wasted kilometers—translating into lower costs, reduced emissions, and more reliable service in dense urban areas. Although the action concluded early after four months, the foundational analysis initiated during this period was aimed at enabling practical tools that improve last-mile logistics at city scale.
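To make the conflict-graph framing concrete, here is a minimal sketch under an assumed model (not the project's actual formulation): each candidate plan assigns one courier a batch of orders, and two plans conflict when they use the same courier or share an order, so any feasible dispatch is an independent set in the resulting graph.

```python
from itertools import combinations

def build_conflict_graph(plans):
    """plans: list of (courier_id, frozenset_of_order_ids).
    Returns adjacency as a dict {plan_index: set_of_conflicting_indices}."""
    adj = {i: set() for i in range(len(plans))}
    for i, j in combinations(range(len(plans)), 2):
        (ci, oi), (cj, oj) = plans[i], plans[j]
        if ci == cj or oi & oj:  # same courier, or overlapping orders
            adj[i].add(j)
            adj[j].add(i)
    return adj

plans = [
    ("courier_A", frozenset({1, 2})),
    ("courier_A", frozenset({3})),     # conflicts with plan 0 (same courier)
    ("courier_B", frozenset({2, 4})),  # conflicts with plan 0 (shares order 2)
    ("courier_B", frozenset({5})),     # conflicts with plan 2 (same courier)
]
adj = build_conflict_graph(plans)
```

Selecting a set of plans with no edge between them then corresponds to a feasible dispatch in this toy model.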
During the four-month reporting period, work focused on laying the technical groundwork:

- Problem formalization: The dispatch setting was specified in a clear, reproducible way, defining how candidate courier plans are generated. This produced a precise "conflict model" that later algorithms can rely on.

- Early structural insight: Under common batching limits (e.g., grouping at most a small number of orders per courier), we observed that conflicts around a single courier are naturally bounded. In plain terms, batching rules cap how many mutually incompatible choices can "fan out" from one courier at once. This kind of bound is valuable because it points to targeted simplifications before solving.
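One way to see the bounded fan-out, under an assumed model where each courier may be offered any batch of up to k orders drawn from the m orders currently near it: all of one courier's candidate plans pairwise conflict (they use the same courier), so they form a clique whose size has a closed-form cap.

```python
from math import comb

def per_courier_plan_bound(m, k):
    """Maximum number of candidate plans for one courier, i.e. the size of
    the clique its plans form in the conflict graph: sum over batch sizes
    1..k of C(m, j). A sketch under assumed batching rules."""
    return sum(comb(m, j) for j in range(1, k + 1))

# With 6 nearby orders and batches of at most 2 orders:
bound = per_courier_plan_bound(6, 2)  # 6 singletons + 15 pairs = 21
```

Because the bound grows with the batch limit k rather than with the total number of orders in the system, it stays small under realistic batching rules.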

- Prototype tooling: A small synthetic data generator was drafted to create realistic plan lists and their conflicts. This will enable controlled experiments to test reduction rules and solver variants without needing sensitive operational data.
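An illustrative sketch of such a generator (names, parameters, and the distance-based cost are assumptions, not the project's actual tool): sample couriers and orders on a unit square, then enumerate nearby batches as candidate plans.

```python
import random
from itertools import combinations
from math import dist

def generate_instance(n_couriers, n_orders, batch_limit, radius, seed=0):
    """Returns a list of candidate plans: (courier_id, frozenset_of_orders, cost).
    Cost is a crude proxy (sum of courier-to-order distances)."""
    rng = random.Random(seed)
    couriers = [(rng.random(), rng.random()) for _ in range(n_couriers)]
    orders = [(rng.random(), rng.random()) for _ in range(n_orders)]
    plans = []
    for c, cpos in enumerate(couriers):
        # Only orders within pickup radius of the courier are batchable.
        near = [o for o, opos in enumerate(orders) if dist(cpos, opos) <= radius]
        for size in range(1, batch_limit + 1):
            for batch in combinations(near, size):
                cost = sum(dist(cpos, orders[o]) for o in batch)
                plans.append((c, frozenset(batch), round(cost, 3)))
    return plans

plans = generate_instance(n_couriers=3, n_orders=8, batch_limit=2, radius=0.5)
```

Fixing the seed makes instances reproducible, which is what enables the controlled experiments mentioned above.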

- Algorithm design notes: A shortlist of pre-solve reductions (for example, removing dominated or redundant plans and simplifying around low-variability couriers) and a plan for a learning-aided branching baseline were produced.
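As a minimal example of a clearly safe pre-solve reduction (a sketch in the spirit of the notes above, not the project's actual pipeline): among plans with the same courier and the same order set, only the cheapest can appear in an optimal dispatch, so the rest can be dropped without changing the best solution.

```python
def remove_redundant_plans(plans):
    """plans: list of (courier_id, frozenset_of_orders, cost).
    Keeps only the cheapest plan per (courier, order set) pair."""
    best = {}
    for courier, orders, cost in plans:
        key = (courier, orders)
        if key not in best or cost < best[key]:
            best[key] = cost
    return [(c, o, cost) for (c, o), cost in best.items()]

plans = [
    ("A", frozenset({1, 2}), 4.0),
    ("A", frozenset({1, 2}), 3.5),  # cheaper duplicate: kept
    ("B", frozenset({3}), 1.0),
]
reduced = remove_redundant_plans(plans)
```

Stronger rules, such as removing plans dominated by a strictly better superset, need more care to remain safe, which is why they belong in the design notes rather than in this toy example.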

No public communication, exploitation, or benchmarking results were produced within this short period; the emphasis remained on technical scoping and feasibility.
While the action ended before full prototypes could be built, the initial findings and design directions indicate promising advances over generic dispatch solvers:

- Structure-aware simplification: Recognizing that batching rules naturally cap local conflict "fan-outs" suggests specialized reductions that are both safe (do not change the best solution) and fast. This can shrink problems substantially before optimization begins.

- Smarter search: The design notes outline branching strategies that prioritize the most influential conflicts and consider small, low-conflict regions separately, an approach expected to reduce solve times and variability compared with off-the-shelf methods.
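A toy sketch of the "branch on the most influential conflict" idea, framed here as maximum-weight independent set on the conflict graph (an assumed framing, and an exact but worst-case exponential search, shown only for illustration): at each step, branch on the plan with the most unresolved conflicts, trying both including and excluding it.

```python
def max_weight_dispatch(weights, adj, active=None):
    """weights: {plan: weight}; adj: {plan: set_of_conflicting_plans}.
    Returns (best_total_weight, best_conflict-free_set_of_plans)."""
    if active is None:
        active = set(weights)
    if not active:
        return 0.0, set()
    # Branch on the plan with the most conflicts among still-active plans.
    v = max(active, key=lambda p: len(adj[p] & active))
    # Case 1: include v -> drop v and every plan conflicting with it.
    w_in, s_in = max_weight_dispatch(weights, adj, active - {v} - adj[v])
    w_in, s_in = w_in + weights[v], s_in | {v}
    # Case 2: exclude v.
    w_out, s_out = max_weight_dispatch(weights, adj, active - {v})
    return (w_in, s_in) if w_in >= w_out else (w_out, s_out)

weights = {0: 2.0, 1: 1.0, 2: 1.5, 3: 1.0}
adj = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}}
best_weight, best_set = max_weight_dispatch(weights, adj)
```

Branching on high-conflict plans first is what makes each "include" branch prune aggressively; a production version would add bounding and the region decomposition described above.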

- Practical evaluation pathway: The synthetic generator enables apples-to-apples comparisons on speed, solution quality, and robustness, and can later be adapted to anonymized real-world patterns.

To bring these ideas to full impact, the key needs are: (i) implementation time for the reduction pipeline and branching strategies; (ii) access to representative, privacy-preserving datasets or realistic simulators; (iii) head-to-head benchmarks against strong baselines under typical service-level targets; and (iv) packaging the methods in open, well-documented code so that industry and researchers can reproduce and extend the results.
Project Approach