Periodic Reporting for period 1 - GraphEats (Leveraging Graph Theory for Food Delivery Logistics Optimization)
Reporting period: 2025-05-01 to 2027-04-30
- Problem formalization: The dispatch setting was specified in a clear, reproducible way, defining how candidate courier plans are generated. This produced a precise "conflict model" that later algorithms can rely on.
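As an illustration of what such a conflict model can look like, the sketch below builds a conflict graph over candidate plans. The representation (a plan as a courier plus a set of order IDs, with two plans conflicting when they share a courier or an order) and all names are illustrative assumptions, not the project's actual formalization.

```python
from itertools import combinations

def build_conflict_graph(plans):
    """Build an undirected conflict graph over candidate plans.

    `plans` is a list of (courier_id, frozenset_of_order_ids) tuples.
    Assumed conflict rule: two plans conflict when they belong to the
    same courier (a courier executes one plan) or share an order (an
    order is served exactly once). Returns adjacency as a dict mapping
    a plan index to the set of conflicting plan indices.
    """
    adj = {i: set() for i in range(len(plans))}
    for i, j in combinations(range(len(plans)), 2):
        (ci, oi), (cj, oj) = plans[i], plans[j]
        if ci == cj or oi & oj:
            adj[i].add(j)
            adj[j].add(i)
    return adj

# Toy instance: two plans for courier c1, one for courier c2.
plans = [
    ("c1", frozenset({"o1"})),
    ("c1", frozenset({"o1", "o2"})),
    ("c2", frozenset({"o2"})),
]
graph = build_conflict_graph(plans)
```

On this toy instance, plans 0 and 1 conflict through the shared courier, and plans 1 and 2 through the shared order, while plans 0 and 2 are compatible.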
- Early structural insight: Under common batching limits (e.g. allowing at most a small number of orders to be grouped), we observed that conflicts around a single courier are naturally bounded. In plain terms, batching rules cap how many mutually incompatible choices can "fan out" from one courier at once. This kind of bound is valuable because it points to targeted simplifications before solving.
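A back-of-the-envelope version of this bound: if a courier sees m eligible orders and batches contain at most k of them, the courier can propose at most the number of nonempty order subsets of size at most k, which caps the local conflict fan-out. The function below computes that count; it is an illustrative calculation, not a result from the project.

```python
from math import comb

def max_plans_per_courier(m, k):
    """Upper bound on candidate plans for one courier with m eligible
    orders and batches of at most k orders: the number of nonempty
    subsets of size at most k, sum_{i=1..k} C(m, i)."""
    return sum(comb(m, i) for i in range(1, k + 1))

# e.g. 8 nearby orders, batches of at most 2 orders:
max_plans_per_courier(8, 2)  # 8 singletons + 28 pairs = 36 plans
```

Because all plans of the same courier are pairwise incompatible, this count also bounds how many mutually conflicting choices fan out from that courier.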
- Prototype tooling: A small synthetic data generator was drafted to create realistic plan lists and their conflicts. This will enable controlled experiments to test reduction rules and solver variants without needing sensitive operational data.
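A minimal sketch of such a generator is shown below, assuming the plan representation from the conflict model (courier plus order set). Every name and parameter here is hypothetical; the actual generator may model geography, time windows, and costs.

```python
import random
from itertools import combinations

def generate_instance(n_couriers, n_orders, max_batch, seed=0):
    """Generate a synthetic list of candidate plans.

    Each courier is assigned a small random set of 'nearby' orders and
    proposes every nonempty batch of at most `max_batch` of them.
    Returns (courier_id, frozenset_of_order_ids) tuples. A fixed seed
    keeps experiments reproducible.
    """
    rng = random.Random(seed)
    plans = []
    for c in range(n_couriers):
        nearby = rng.sample(range(n_orders), k=min(4, n_orders))
        for size in range(1, max_batch + 1):
            for batch in combinations(nearby, size):
                plans.append((c, frozenset(batch)))
    return plans

plans = generate_instance(n_couriers=3, n_orders=10, max_batch=2)
```

With 4 nearby orders and batches of at most 2, each courier contributes 4 + 6 = 10 plans, so this call yields 30 plans in total.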
- Algorithm design notes: A shortlist of pre-solve reductions (for example, removing dominated or redundant plans and simplifying around low-variability couriers) and a plan for a learning-aided branching baseline were produced.
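To make the dominated-plan idea concrete, here is one possible safe reduction under an assumed dominance rule: a plan is dominated if another plan for the same courier serves a superset of its orders at no higher cost (with at least one strict improvement). The rule, the cost field, and the names are illustrative assumptions, not the project's shortlist.

```python
def remove_dominated(plans):
    """Drop dominated plans. `plans` is a list of
    (courier_id, frozenset_of_order_ids, cost) tuples.

    Assumed rule: plan (c, oa, ca) is dominated by (c, ob, cb) if
    ob is a superset of oa and cb <= ca, with at least one strict.
    Removing dominated plans cannot remove the unique best solution
    under this rule, since the dominating plan is always at least as good.
    """
    keep = []
    for i, (c, oa, ca) in enumerate(plans):
        dominated = any(
            j != i and cb_courier == c and ob >= oa and cb <= ca
            and (ob > oa or cb < ca)
            for j, (cb_courier, ob, cb) in enumerate(plans)
        )
        if not dominated:
            keep.append((c, oa, ca))
    return keep
```

For example, a plan serving only order o1 at cost 5 is dropped when the same courier also has a plan serving o1 and o2 at cost 4.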
No public communication, exploitation, or benchmarking results were produced within this short period; the emphasis remained on technical scoping and feasibility.
- Structure-aware simplification: Recognizing that batching rules naturally cap local conflict "fan-outs" suggests specialized reductions that are both safe (do not change the best solution) and fast. This can shrink problems substantially before optimization begins.
- Smarter search: The design notes outline branching strategies that prioritize the most influential conflicts and consider small, low-conflict regions separately, an approach expected to reduce solve times and variability compared with off-the-shelf methods.
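The "most influential conflicts first" idea can be sketched as a tiny branch-and-bound that selects a maximum-weight set of mutually compatible plans on the conflict graph, always branching on the plan with the most remaining conflicts. This is a generic illustration of degree-guided branching under stated assumptions, not the project's learning-aided baseline; all names are hypothetical.

```python
def best_compatible(adj, weights):
    """Max-weight set of mutually compatible plans on a conflict graph.

    `adj` maps a plan index to the set of conflicting plan indices;
    `weights` gives each plan's value. Branches on the plan with the
    most conflicts among still-active plans, so the most constrained
    decisions are made first. Returns the best total weight found.
    """
    best = [0.0]

    def search(active, value):
        if not active:
            best[0] = max(best[0], value)
            return
        # Bound: even taking every remaining plan cannot beat the best.
        if value + sum(weights[i] for i in active) <= best[0]:
            return
        v = max(active, key=lambda i: len(adj[i] & active))
        # Branch 1: take v, which eliminates all of its conflicts.
        search(active - {v} - adj[v], value + weights[v])
        # Branch 2: skip v.
        search(active - {v}, value)

    search(set(adj), 0.0)
    return best[0]
```

On the three-plan toy graph where plan 1 conflicts with plans 0 and 2, the compatible pair {0, 2} wins with weight 2.0.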
- Practical evaluation pathway: The synthetic generator enables apples-to-apples comparisons on speed, solution quality, and robustness, and can later be adapted to anonymized real-world patterns.
To bring these ideas to full impact, the key needs are: (i) implementation time for the reduction pipeline and branching strategies; (ii) access to representative, privacy-preserving datasets or realistic simulators; (iii) head-to-head benchmarks against strong baselines under typical service-level targets; and (iv) packaging the methods in open, well-documented code so that industry and researchers can reproduce and extend the results.