Logistics · AI Demand Forecasting

Forecast the peak. Staff before it lands. Stop chasing the surge.

AI demand forecasting — Prophet/ARIMA + LightGBM + Temporal Fusion Transformer ensemble blended per region and day-part. Wired to weather, promo, and marketplace-event signals with staff-allocation recommendations the planner can act on tonight.

forecast-console · planner · LIVE
FORECAST · FCT-2204
Tomorrow · West region · day-part split
Last-mile · 4 hubs · postal-code rollup
VOLUME
84,200 parcels
vs same-day-last-week 81,400
FORECAST + FEATURE SIGNALS
AM band (06–11) · pending…
Midday (11–15) · pending…
PM band (15–20) · pending…
Weather feature · pending…
Promo flag · pending…
FORECAST CONFIDENCE
0.00 · breach / 0.60 · review / 0.80+ · on-target
BAND
Steady · weekday baseline
ROUTING
Planner · staffing locked
REASONING
Volume tracks last-week baseline
Weather signal applied · minor headwind
No promo or event lifts modelled
EVALUATING…
Forecast · within band

What we build

Forecast infrastructure — wired into the operation.

Each capability is a production component — not a notebook — wired to your TMS / WMS, monitored against per-hub MAPE, and version-controlled end to end.

Regional forecasting · DC / hub / postal cluster

Volume forecasts at hub, distribution centre, and postal-code-cluster level — rolled up cleanly so the regional manager and the network planner see the same numbers.

Day-part modelling · AM / midday / PM / overnight

Forecasts split into AM, midday, PM, and overnight bands so staff and vehicles are sized to when the parcels actually arrive — not to a flat daily total that hides the surge.

Weather, holiday, and marketplace features

Weather, public-holiday, school-term, and marketplace-event signals — Lazada, Shopee, TikTok Shop, Amazon — all wired into the model so 11.11 stops being a guess.

Staff + vehicle-allocation recommendations

Recommended headcount and vehicle-class mix per shift, sized to the forecast and ready for the planner tonight. Override audit captures the why when the human disagrees.

Peak detection + early-warning alerts

Alerts fire days ahead of 11.11, 12.12, Lunar New Year, and Black Friday — with surge factors, suggested staff bumps, and reroute proposals before the shift starts.

Backtest + accuracy tracking

Rolling-window backtests, per-hub MAPE and bias, peak-day error budgets, and PM-readable confidence bands — so the planner knows exactly how much to trust the number.

Where it lands

One engine, every kind of network.

Same forecasting and feature-store layer — tuned per network and operating model. Shared pipelines, per-hub adapters, and the day-part splits that match how your shifts actually run.

01

3PL last-mile

Parcel volume per postal code and day-part for last-mile carriers and 3PLs. Hub rollups, route-density features, and SLA-tier-aware forecasts — staff planned shift-by-shift.

02

Quick-commerce dark stores

Per-store, per-15-minute forecasts for dark-store and rapid-grocery networks. Weather and event sensitivity built in; rider-pool sizing reconciled against forecast nightly.

03

Cold-chain F&B distribution

Per-route, per-temperature-band forecasts for cold-chain and F&B distribution. Vehicle-class mix tilted to compartment temperatures and SLA bands per delivery window.

04

Cross-border + airfreight

Per-lane, per-week forecasts across air, sea, and overland cross-border. Macro and FX features layered in; capacity bookings and dock-slot reservations sized accordingly.

05

E-commerce platforms

Per-warehouse, per-SKU-class forecasts for e-commerce and marketplace operators. Promo lifts, marketplace-event flags, and inventory-on-hand wired to staffing decisions.

06

Field-service operations

Per-region, per-skill-set forecasts for field-service and tech-dispatch operations. Skill-mix and territory-load balanced against forecasted job volume per shift.

A walk-through

From history to staffing rec — in five clear steps.

Follow a 3PL last-mile cycle from data ingestion through staffing recommendations. Every step is visible to the planner, the regional manager, and the ML lead.

ANCHOR CLIENT · MERIDIAN PARCELS
Meridian Parcels Network · 3PL last-mile · 12 hubs · 2,400 routes · 1.4M parcels / week
STEP 01 · 05
STEP 01 · INGEST
Pulling history and exogenous signals
Operational volume from TMS, WMS, and OMS plus weather, holiday, marketplace-event, and macro feeds — normalised into one feature store with daily reconciliation.
FEEDS · NORMALISED
TMS · operational volume
1.4M parcels / week · ok
Weather · ECMWF + regional
12 hubs · 14-day fcst · ok
Marketplace · Lazada+Shopee
promo calendar · 8 events
Holiday + event APIs
SEA region · 92 events / yr
PIPELINE TELEMETRY
Feature-store rows · 9.2M · 18 months
Recon break rate · 0.07% · within SLA
Ingestion latency · 11m · T+1 04:00
Coverage · 12 hubs · 2,400 routes

Model families we deploy

No single model owns every horizon. So we ensemble.

Classical time-series, gradient-boosted regressors, deep sequence models, and a residual-based anomaly detector — composed into one production pipeline with version control at every step.

PROPHET + ARIMA + ETS
Time-series ensemble

Classical time-series models blended with weighted stacking — robust on stable lanes, fast to retrain, and the right baseline before any ML layer goes near production.
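Weighted stacking can be sketched in a few lines of Python. This is a minimal illustration, not the production blender: the per-model forecasts and validation MAPEs below are hypothetical, and the weights are simply inverse-MAPE normalised.

```python
import numpy as np

def stack_forecasts(forecasts, val_mape):
    """Blend per-model forecasts with weights inversely
    proportional to each model's validation MAPE."""
    inv = {m: 1.0 / val_mape[m] for m in forecasts}
    total = sum(inv.values())
    return sum((inv[m] / total) * forecasts[m] for m in forecasts)

# Hypothetical three-day forecasts (parcels) from each family
fcsts = {
    "prophet": np.array([81_000., 83_000., 85_000.]),
    "arima":   np.array([80_000., 82_000., 84_000.]),
    "ets":     np.array([82_000., 84_000., 86_000.]),
}
val_mape = {"prophet": 0.06, "arima": 0.08, "ets": 0.12}
blend = stack_forecasts(fcsts, val_mape)  # leans toward the lower-MAPE models
```

In production the weights would be re-fit per region and day-part on a rolling validation window rather than fixed up front.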

LIGHTGBM / XGBOOST
Gradient-boosted regressor

Tree-boosting on rich feature sets — calendar, weather, promo, lag, and rolling features — delivering strong short-horizon forecasts and explainable SHAP-level signal.
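The feature families named above — calendar, lag, and rolling features — can be sketched with pandas. The column names and the synthetic volume series here are illustrative, not the production schema.

```python
import numpy as np
import pandas as pd

# Synthetic daily hub volume — stand-in for the real feature store
idx = pd.date_range("2024-01-01", periods=60, freq="D")
rng = np.random.default_rng(0)
df = pd.DataFrame({"volume": rng.integers(70_000, 90_000, 60)}, index=idx)

# Calendar features
df["dow"] = df.index.dayofweek
df["month"] = df.index.month

# Lag features — volume 1 and 7 days ago
df["lag_1"] = df["volume"].shift(1)
df["lag_7"] = df["volume"].shift(7)

# Rolling feature — trailing 7-day mean, shifted so today's
# value never leaks into its own feature row
df["roll_7_mean"] = df["volume"].shift(1).rolling(7).mean()

features = df.dropna()  # keep only rows with a full feature set
```

A frame like this is what a gradient-boosted regressor would train on, with weather and promo columns joined in from the feature store.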

TEMPORAL FUSION TRANSFORMER
Deep sequence model

TFT with attention layers for long-horizon, multi-series forecasting — captures complex day-part interactions and exogenous-variable lags that classical models miss.

RESIDUAL + REGIME-AWARE
Anomaly + peak detector

Residual-based anomaly detector with regime-change handling and event-aware thresholds — flags surges and breaks early, with recall + false-alarm rate tracked weekly.
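A minimal version of a residual-based detector looks like this — a sketch only, using an illustrative trailing-window z-score rule rather than the regime-aware, event-aware thresholds described above.

```python
import numpy as np

def flag_surges(actual, forecast, window=14, z_thresh=3.0):
    """Flag points where the residual (actual - forecast) sits more
    than z_thresh trailing standard deviations from the trailing
    mean. Window and threshold here are illustrative, not tuned."""
    resid = np.asarray(actual, float) - np.asarray(forecast, float)
    flags = np.zeros(len(resid), dtype=bool)
    for t in range(window, len(resid)):
        hist = resid[t - window:t]
        sigma = hist.std()
        if sigma > 0 and abs(resid[t] - hist.mean()) > z_thresh * sigma:
            flags[t] = True
    return flags

# Quiet series with one injected surge on day 20
actual = [100 + (1 if i % 2 == 0 else -1) for i in range(30)]
actual[20] = 150
flags = flag_surges(actual, forecast=[100] * 30)
```

The production version adds regime-change handling and event calendars so that a planned 11.11 lift does not fire as a false alarm.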

Data sources wired into every forecast

Every feed that moves the volume — integrated.

Pulled in parallel, normalised into a feature store the model trains on, reconciled daily against operational records, and audit-logged alongside the model version that produced the numbers.

Source
What it unlocks
Providers
Operational volume
Daily, hourly, and 15-minute operational volume from TMS, WMS, OMS, and ERP cores — normalised into one feature store per hub with reconciliation breaks routed daily.
TMS · WMS · OMS · ERP · Custom core
Weather + climate
Hourly observations and 14-day forecasts from global and regional met services. Encoded as temperature, precipitation, wind, and storm-flag features per hub and route.
OpenWeather · AccuWeather · ECMWF · Regional met · NOAA
Marketing + promo
Promo calendars and marketplace-event flags from major platforms plus internal CRM, scored by historical lift to drive day-of-week and day-part forecast adjustments.
Shopify · Lazada · Shopee · TikTok Shop · Internal CRM
Holiday + event APIs
Public-holiday, festival, sports, and concert feeds plus school-term calendars and government-event lists — encoded as named-event features per region and date.
Public-holiday APIs · Festival feeds · Sports/concerts · School terms · Govt events
Macro indicators
FX, fuel, retail-sales, sentiment, and port-congestion data layered in for cross-border and longer-horizon forecasts where macro signals materially shift volume.
FX rates · Fuel prices · Retail-sales data · Sentiment indexes · Port congestion
Inventory + capacity
Stock-on-hand, fleet roster, warehouse capacity, dock-slot availability, and driver-pool data — wired in so staffing recommendations stay inside what the network can actually run.
WMS stock · Fleet roster · Warehouse capacity · Dock-slot avail. · Driver pool

Explainability, not just numbers

Every forecast carries its reasoning. For the planner. For the auditor.

Every forecast number, anomaly flag, and staffing recommendation is accompanied by feature contributions, model lineage, and a plain-language commentary the planner can act on — generated at forecast time, indexed for audit.

  • Top features (SHAP) cited per forecast cell
  • Model lineage traced to feature-store version
  • Plain-English commentary for planner + manager
  • Override audit captured on every recommendation
AUDIT RECORD · FCT-2218
forecast.explain v3.1
Hub · West · DC-04 · day-part
Model · ensemble · TFT + LGBM
Top feature · 11.11 promo lift · +2.6pp
Surge factor · ×2.45 vs baseline
Confidence · MAPE 5.8% · backtest
Sign-off · planner + ops lead
Audit SHA · a4c2…d7e9

Forecast governance

Built to survive peak — not just to ship a notebook.

Production-ready from day one. Delivery includes methodology versioning, override audit, peak-event playbooks, and the reconciliation discipline operations leadership will expect.

Every point below ships with the forecast. Not bolted on later.

Forecast-methodology versioning per region

Model variant, feature set, and training window are versioned per region and per day-part. Methodology version is attached to every forecast so a planner or auditor can reproduce it.

Backtest + holdout-validation audit log

Rolling backtests with documented holdouts and per-hub MAPE / bias / coverage are logged automatically. Promotions to production require explicit sign-off against the audit record.

Anomaly-flag review · confirmed vs false-alarm

Every anomaly flag is reviewed and labelled as confirmed or false alarm. Weekly false-alarm and recall rates flow into model retraining decisions and detector-threshold tuning.

Staffing-recommendation override audit

Every override of a system staffing recommendation is captured with reason code, reviewer, and timestamp — exported as part of the operations audit chain at month-end.

Peak-event playbook

Lunar New Year, 11.11, 12.12, and Black Friday playbooks rehearsed each cycle with surge factors, staffing bumps, and reroute logic versioned and signed off ahead of the event.

Forecast-vs-actual reconciliation

Forecast vs actual volume reconciled per hub per day with residuals logged. Material drifts escalate automatically; recurring patterns surface to the operations and ML leads.

Frameworks we align to

ISO 27001SOC 2PDPAGDPRMAS OutsourcingISO 9001

Why Axccelerate for demand forecasting

Not a forecasting tool.
A demand-planning stack.

A vendor gives you a chart. Our stack gives you ingestion, features, ensemble forecasting, anomaly detection, and staffing recommendations — the infrastructure a real planning team actually runs on.

Feature
Axccelerate
Typical vendor
In-house
Regional + day-part forecasts
Varies
Varies
Weather + promo features wired in
Varies
Marketplace event-aware modelling (11.11 / LNY)
Peak detection + early-warning alerts
Varies
Staff / vehicle allocation recommendations
Varies
Backtest + accuracy tracking · per hub
Varies
Varies
Multi-source feature store
Varies
Hub-level rollups + reconciliation
Varies
Forecast-vs-actual audit
Varies
No vendor lock-in · your TMS + data

Pricing

Priced to the network footprint, not the dashboard count.

Forecast deployments are custom — we scope against your hubs, data sources, and operating cadence before quoting.

Launch
Enquire for pricing
One hub · daily forecast

One hub or DC, daily forecast horizon, weather and promo features wired in. Operational dashboards for the planner and a weekly forecast-vs-actual report.

1 hub · daily forecast
Weather + promo features
Planner dashboard
Weekly accuracy report
Backtest framework
Enquire for pricing
Most popular
Scale
Enquire for pricing
Multi-hub · day-part

Multi-hub network with day-part forecasting, peak alerts, and staff-allocation recommendations. Marketplace-event-aware modelling with 11.11 and LNY playbooks.

Up to 12 hubs · day-part
Peak detection + alerts
Staff + vehicle recs
Marketplace-event modelling
Per-hub MAPE tracking
Enquire for pricing
Fleet
Enquire for pricing
Enterprise · full network

Enterprise deployment across the full regional network with anomaly detection, regulator-ready audit trails, and 24/7 operations support for the ops control room.

Unlimited hubs + DCs
Anomaly + regime detector
Regulator-ready audit
Override audit trail
24/7 ops + ML support
Talk to us

FAQ

Common questions.

Don't see your question here?

Ask us directly

Glossary

The vocabulary behind every forecast number.

A quick reference for the terms that show up in demand-forecasting work — the language your planner, ML lead, and operations team will all use.

MAPE
Mean Absolute Percentage Error

The standard accuracy metric for forecasts — average of |actual − forecast| / |actual|, expressed as a percentage. Tracked per hub, per day-part, and on peak vs non-peak days.

Bias
Mean forecast error

The average of (actual − forecast). A positive bias means the model is systematically under-forecasting; a negative bias means it is over-forecasting. Near-zero is the target.
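Both metrics are one-liners; a sketch with illustrative numbers, keeping the sign convention above (positive bias means under-forecasting):

```python
import numpy as np

def mape(actual, forecast):
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(a - f) / np.abs(a)))

def bias(actual, forecast):
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(a - f))  # positive => under-forecasting

# Illustrative three-day sample (parcels)
actual   = [84_200, 79_800, 91_000]
forecast = [80_000, 82_000, 90_000]
```

On this sample MAPE comes out near 2.9% with a bias of +1,000 parcels per day — the model would be running hot on accuracy but consistently light on volume.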

Holdout
Out-of-sample validation set

A slice of historical data deliberately withheld from training, used to evaluate the model on data it has never seen. Rolling holdouts simulate how the model will behave in production.

Backtest
Historical forecast replay

Running the trained model over historical periods as if forecasting them in real time. Produces MAPE, bias, and coverage metrics that match what production will look like.
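A rolling-origin backtest can be sketched with a deliberately simple seasonal-naive model standing in for the real ensemble; the window sizes and horizons below are illustrative.

```python
import numpy as np

def seasonal_naive(history, horizon, season=7):
    """Repeat the last full season — a simple stand-in model."""
    last = np.asarray(history, float)[-season:]
    return np.resize(last, horizon)

def rolling_backtest(series, horizon=7, min_train=28, step=7):
    """Replay the model over history as if forecasting in real
    time; return MAPE per forecast window."""
    y = np.asarray(series, float)
    mapes = []
    for start in range(min_train, len(y) - horizon + 1, step):
        fcst = seasonal_naive(y[:start], horizon)
        actual = y[start:start + horizon]
        mapes.append(float(np.mean(np.abs(actual - fcst) / np.abs(actual))))
    return mapes

# A perfectly weekly series backtests to zero error
weekly = np.tile([70., 80., 90., 100., 110., 120., 60.], 10)
window_mapes = rolling_backtest(weekly)
```

The per-window errors are what feed the per-hub MAPE, bias, and coverage metrics logged in the audit record.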

Residual
Actual minus forecast

The unexplained part of an observation after the forecast is subtracted. Residual patterns flag missing features; large residuals trigger anomaly detection.

Day-part
Intra-day time band

An intra-day band — typically AM / midday / PM / overnight — at which volume is forecast separately. Critical because daily totals hide surges that staff and vehicle plans must absorb.
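Rolling hourly volume up to day-part bands is a small groupby; the hourly scan series here is synthetic and the band boundaries are the ones shown on this page.

```python
import numpy as np
import pandas as pd

def day_part(hour):
    """Map an hour to a band (boundaries match the console above)."""
    if 6 <= hour < 11:
        return "AM"
    if 11 <= hour < 15:
        return "midday"
    if 15 <= hour < 20:
        return "PM"
    return "overnight"

# Synthetic hourly scan counts for one hub over two days
hours = pd.date_range("2024-03-01", periods=48, freq="h")
scans = pd.Series(np.random.default_rng(1).integers(100, 500, 48), index=hours)

# Roll hourly volume up to (date, day-part) totals
bands = scans.groupby([scans.index.date, scans.index.hour.map(day_part)]).sum()
```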

Peak factor
Surge multiplier vs baseline

The ratio of forecast peak-day volume to baseline-day volume. Used to size staff bumps, vehicle additions, and overflow plans for events like 11.11 or Lunar New Year.

Surge
Short-window volume spike

A short-window spike in volume above the forecast confidence band — usually triggered by promos, events, or weather. Surge alerts trigger reroute and staff bump proposals.

Regime
Statistical operating state

A statistically distinct period in the data — for example, pre-pandemic vs post-pandemic, or pre-peak vs peak season. Models trained without regime-aware features can systematically mis-forecast when the regime shifts.

Exogenous feature
External-driver variable

A feature outside the time series itself — weather, promo, holiday, FX, sentiment — that helps predict volume. Carefully selected exogenous features lift accuracy materially.

Ensemble
Blended-model forecast

A forecast produced by combining outputs from multiple model families — typically Prophet, ARIMA, LightGBM, and a deep sequence model — with weighted stacking or a learned blender.

Lag
Past-period feature

A feature that uses values from a previous time step (e.g., volume 1 day ago, 7 days ago). Lag features encode autocorrelation that drives most short-horizon accuracy.

Leakage
Future-information contamination

When information that wouldn't have been available at forecast time leaks into the training set, inflating accuracy in tests but breaking in production. Pipeline discipline prevents it.
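The classic pipeline mistake is a rolling feature whose window includes the very value being forecast. A minimal pandas illustration of the leaky and the safe version:

```python
import numpy as np
import pandas as pd

y = pd.Series(np.arange(10, dtype=float))  # stand-in volume series

# Leaky: today's rolling mean includes today's own value —
# information that would not exist at forecast time
leaky = y.rolling(3).mean()

# Safe: shift first, so every window ends yesterday
safe = y.shift(1).rolling(3).mean()
```

The one-step shift is why the leakage-safe feature goes NaN one row later — the price of only using information that actually existed at forecast time.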

Drift
Model-performance decay

The gradual decay of a model's accuracy as the underlying volume patterns change. Drift monitoring and scheduled retraining keep production models inside their confidence band.

Explainable · audit-ready

Your demand-planning chain, engineered.

30-minute scoping with a senior engineer and a demand-planning specialist. You'll leave with a hub map, feature plan, and realistic timeline — not a sales pitch.