What we build
Forecast infrastructure — wired into the operation.
Each capability is a production component — not a notebook — wired to your TMS / WMS, monitored against per-hub MAPE, and version-controlled end to end.
Regional forecasting · DC / hub / postal cluster
Volume forecasts at hub, distribution centre, and postal-code-cluster level — rolled up cleanly so the regional manager and the network planner see the same numbers.
Day-part modelling · AM / midday / PM / overnight
Forecasts split into AM, midday, PM, and overnight bands so staff and vehicles are sized to when the parcels actually arrive — not to a flat daily total that hides the surge.
Weather, holiday, and marketplace features
Weather, public-holiday, school-term, and marketplace-event signals — Lazada, Shopee, TikTok Shop, Amazon — all wired into the model so 11.11 stops being a guess.
Staff + vehicle-allocation recommendations
Recommended headcount and vehicle-class mix per shift, sized to the forecast and ready for the planner tonight. Override audit captures the why when the human disagrees.
Peak detection + early-warning alerts
Alerts fire days ahead of 11.11, 12.12, Lunar New Year, and Black Friday — with surge factors, suggested staff bumps, and reroute proposals before the shift starts.
Backtest + accuracy tracking
Rolling-window backtests, per-hub MAPE and bias, peak-day error budgets, and PM-readable confidence bands — so the planner knows exactly how much to trust the number.
Where it lands
One engine, every kind of network.
Same forecasting and feature-store layer — tuned per network and operating model. Shared pipelines, per-hub adapters, and the day-part splits that match how your shifts actually run.
3PL last-mile
Parcel volume per postal code and day-part for last-mile carriers and 3PLs. Hub rollups, route-density features, and SLA-tier-aware forecasts — staff planned shift-by-shift.
Quick-commerce dark stores
Per-store, per-15-minute forecasts for dark-store and rapid-grocery networks. Weather and event sensitivity built in; rider-pool sizing reconciled against forecast nightly.
Cold-chain F&B distribution
Per-route, per-temperature-band forecasts for cold-chain and F&B distribution. Vehicle-class mix tilted to compartment temperatures and SLA bands per delivery window.
Cross-border + airfreight
Per-lane, per-week forecasts across air, sea, and overland cross-border. Macro and FX features layered in; capacity bookings and dock-slot reservations sized accordingly.
E-commerce platforms
Per-warehouse, per-SKU-class forecasts for e-commerce and marketplace operators. Promo lifts, marketplace-event flags, and inventory-on-hand wired to staffing decisions.
Field-service operations
Per-region, per-skill-set forecasts for field-service and tech-dispatch operations. Skill-mix and territory-load balanced against forecasted job volume per shift.
Model families we deploy
No single model owns every horizon. So we ensemble.
Classical time-series, gradient-boosted regressors, deep sequence models, and a residual-based anomaly detector — composed into one production pipeline with version control at every step.
Classical time-series models blended with weighted stacking — robust on stable lanes, fast to retrain, and the right baseline before any ML layer goes near production.
Tree-boosting on rich feature sets — calendar, weather, promo, lag, and rolling features — delivering strong short-horizon forecasts and explainable SHAP-level signal.
Temporal Fusion Transformer (TFT) models for long-horizon, multi-series forecasting — attention layers capture complex day-part interactions and exogenous-variable lags that classical models miss.
Residual-based anomaly detector with regime-change handling and event-aware thresholds — flags surges and breaks early, with recall + false-alarm rate tracked weekly.
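The residual-threshold idea behind that detector can be sketched in a few lines. This is a simplified illustration, not the deployed detector; the function name, window, and k multiplier are assumptions:

```python
def detect_anomalies(actuals, forecasts, window=14, k=3.0):
    """Flag points whose residual sits outside a rolling mean +/- k*std band.

    Illustrative only: the production detector layers regime-change handling
    and event-aware thresholds on top of this baseline.
    """
    residuals = [a - f for a, f in zip(actuals, forecasts)]
    flags = []
    for i in range(window, len(residuals)):
        hist = residuals[i - window:i]
        mean = sum(hist) / window
        std = (sum((r - mean) ** 2 for r in hist) / window) ** 0.5 or 1e-9
        if abs(residuals[i] - mean) > k * std:
            flags.append(i)
    return flags
```

A flat lane with one 2x volume day trips the band; recall and false-alarm rate then come from scoring these flags against labelled surge days.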
Data sources wired into every forecast
Every feed that moves the volume — integrated.
Pulled in parallel, normalised into a feature store the model trains on, reconciled daily against operational records, and audit-logged alongside the model version that produced the numbers.
Explainability, not just numbers
Every forecast carries its reasoning. For the planner. For the auditor.
Every forecast number, anomaly flag, and staffing recommendation is accompanied by feature contributions, model lineage, and a plain-language commentary the planner can act on — generated at forecast time, indexed for audit.
- Top features (SHAP) cited per forecast cell
- Model lineage traced to feature-store version
- Plain-English commentary for planner + manager
- Override audit captured on every recommendation
Frameworks we align to
Why Axccelerate for demand forecasting
Not a forecasting tool.
A demand-planning stack.
A vendor gives you a chart. Our stack gives you ingestion, features, ensemble forecasting, anomaly detection, and staffing recommendations — the infrastructure a real planning team actually runs on.
Pricing
Priced to the network footprint, not the dashboard count.
Forecast deployments are custom — we scope against your hubs, data sources, and operating cadence before quoting.
Glossary
The vocabulary behind every forecast number.
A quick reference for the terms that show up in demand-forecasting work — the language your planner, ML lead, and operations team will all use.
- MAPE
- Mean Absolute Percentage Error
The standard accuracy metric for forecasts — average of |actual − forecast| / |actual|, expressed as a percentage. Tracked per hub, per day-part, and on peak vs non-peak days.
- Bias
- Mean forecast error
The average of (actual − forecast). A positive bias means the model is systematically under-forecasting; a negative bias means it is over-forecasting. Near-zero is the target.
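Both metrics reduce to a few lines; a minimal sketch with illustrative names:

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, in percent: mean of |actual - forecast| / |actual|."""
    errors = [abs(a - f) / abs(a) for a, f in zip(actuals, forecasts)]
    return 100.0 * sum(errors) / len(errors)

def bias(actuals, forecasts):
    """Mean forecast error: positive means systematic under-forecasting."""
    return sum(a - f for a, f in zip(actuals, forecasts)) / len(actuals)
```

For actuals [100, 200] against forecasts [75, 150], MAPE is 25.0 and bias is +37.5: the model is under-forecasting by 37.5 parcels per period on average.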
- Holdout
- Out-of-sample validation set
A slice of historical data deliberately withheld from training, used to evaluate the model on data it has never seen. Rolling holdouts simulate how the model will behave in production.
- Backtest
- Historical forecast replay
Running the trained model over historical periods as if forecasting them in real time. Produces MAPE, bias, and coverage metrics that match what production will look like.
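The replay loop itself is conceptually simple; a sketch using a seasonal-naive stand-in for the real model (all names illustrative):

```python
def seasonal_naive(history, horizon, season=7):
    """Baseline forecast: repeat the last full season of history."""
    last = history[-season:]
    return [last[t % season] for t in range(horizon)]

def backtest_mape(series, horizon=7, min_train=28):
    """Replay history: at each cutoff, train on the past only and score the next window."""
    errors = []
    for cutoff in range(min_train, len(series) - horizon + 1, horizon):
        preds = seasonal_naive(series[:cutoff], horizon)
        actuals = series[cutoff:cutoff + horizon]
        errors.extend(abs(a - f) / abs(a) for a, f in zip(actuals, preds))
    return 100.0 * sum(errors) / len(errors)
```

Swap the stand-in for the real pipeline and the same loop yields the production-matching MAPE, bias, and coverage numbers.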
- Residual
- Actual minus forecast
The unexplained part of an observation after the forecast is subtracted. Residual patterns flag missing features; large residuals trigger anomaly detection.
- Day-part
- Intra-day time band
An intra-day band — typically AM / midday / PM / overnight — at which volume is forecast separately. Critical because daily totals hide surges that staff and vehicle plans must absorb.
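A minimal mapping from hour of day to band; the boundaries here are illustrative and are tuned per network in practice:

```python
# Illustrative band boundaries (start hour inclusive, end hour exclusive).
DAY_PARTS = [
    (0, 6, "overnight"),
    (6, 11, "AM"),
    (11, 15, "midday"),
    (15, 22, "PM"),
    (22, 24, "overnight"),
]

def day_part(hour):
    """Map an hour of day (0-23) to its forecasting band."""
    for start, end, name in DAY_PARTS:
        if start <= hour < end:
            return name
    raise ValueError(f"hour out of range: {hour}")
```

Volume is then aggregated and forecast per (hub, date, band) key rather than per (hub, date), which is what keeps the surge visible.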
- Peak factor
- Surge multiplier vs baseline
The ratio of forecast peak-day volume to baseline-day volume. Used to size staff bumps, vehicle additions, and overflow plans for events like 11.11 or Lunar New Year.
- Surge
- Short-window volume spike
A short-window spike in volume above the forecast confidence band — usually triggered by promos, events, or weather. Surge alerts trigger reroute and staff bump proposals.
- Regime
- Statistical operating state
A statistically distinct period in the data — for example, pre-pandemic vs post-pandemic, or pre-peak vs peak season. Models trained without regime-aware features fit one regime and break when the regime shifts.
- Exogenous feature
- External-driver variable
A feature outside the time series itself — weather, promo, holiday, FX, sentiment — that helps predict volume. Carefully selected exogenous features lift accuracy materially.
- Ensemble
- Blended-model forecast
A forecast produced by combining outputs from multiple model families — typically Prophet, ARIMA, LightGBM, and a deep sequence model — with weighted stacking or a learned blender.
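The blending step itself is small; a fixed-weight sketch (model names and weights are placeholders; production uses weighted stacking or a learned blender):

```python
def blend(forecasts_by_model, weights):
    """Weighted average of per-model forecast vectors, step by step."""
    total = sum(weights.values())
    horizon = len(next(iter(forecasts_by_model.values())))
    return [
        sum(w * forecasts_by_model[name][t] for name, w in weights.items()) / total
        for t in range(horizon)
    ]
```

For example, `blend({"arima": [100, 110], "lgbm": [120, 130]}, {"arima": 1, "lgbm": 3})` returns `[115.0, 125.0]`, leaning toward the model with the better recent backtest.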
- Lag
- Past-period feature
A feature that uses values from a previous time step (e.g., volume 1 day ago, 7 days ago). Lag features encode autocorrelation that drives most short-horizon accuracy.
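Lag features are mechanical to build, provided each row draws only on past values; a sketch with illustrative lags:

```python
def lag_features(series, lags=(1, 7)):
    """One feature row per step t, each built strictly from values before t."""
    start = max(lags)
    return [{f"lag_{l}": series[t - l] for l in lags} for t in range(start, len(series))]
```

For the series [0..9], the first usable row (t = 7) is `{"lag_1": 6, "lag_7": 0}`; earlier steps are dropped because their 7-day lag does not exist yet.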
- Leakage
- Future-information contamination
When information that wouldn't have been available at forecast time leaks into the training set, inflating accuracy in tests but breaking in production. Pipeline discipline prevents it.
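The first line of that discipline is the split itself; a minimal sketch of a chronological split, which is what stops lag and rolling features from seeing the future:

```python
def time_split(series, test_size):
    """Chronological train/test split. Shuffling here would let future values
    leak into training via lag and rolling features, inflating offline accuracy."""
    cut = len(series) - test_size
    return series[:cut], series[cut:]
```

Every value in the test slice postdates every value the model trains on, matching how the model is actually used in production.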
- Drift
- Model-performance decay
The gradual decay of a model's accuracy as the underlying volume patterns change. Drift monitoring and scheduled retraining keep production models inside their confidence band.
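Drift monitoring can be as simple as comparing recent live error against the backtest baseline; a sketch with an illustrative tolerance:

```python
def check_drift(actuals, forecasts, baseline_mape, window=28, tolerance=1.5):
    """Alert when recent-window MAPE exceeds the accepted baseline by tolerance x."""
    recent = list(zip(actuals, forecasts))[-window:]
    recent_mape = 100.0 * sum(abs(a - f) / abs(a) for a, f in recent) / len(recent)
    return recent_mape > tolerance * baseline_mape
```

A fired alert feeds the retraining schedule rather than paging anyone directly; the window and tolerance here are placeholders for per-hub settings.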
Your demand-planning chain, engineered.
30-minute scoping with a senior engineer and a demand-planning specialist. You'll leave with a hub map, feature plan, and realistic timeline — not a sales pitch.