The Estimated Time of Arrival – or ETA – has always been a critical information item for all actors in the logistics chain. It is pivotal, too, in any activity involving travel and mobile resources on the ground. In logistics, the accuracy and reliability of the ETA provided by each link in the chain has operational repercussions for all subsequent links in the chain. The forecasted ETA as communicated by each actor in the chain is the baseline on which each person further down the line will build, to:
- organise and plan human resources and equipment needed for operations in progress (unloading, storage, goods assignment, reloading…);
- calculate the ETA to communicate to the following link in the chain.
In all business sectors where just-in-time operating is the norm, it is easy to see how the reliability of a data item such as the ETA conditions each actor’s operating efficiency and overall performance across the logistics chain. For onsite services to an end customer, being able to communicate a reliable ETA is a crucial driver of success, underpinning the vital statistics of the fulfilment business: the quality of service as perceived by the customer, and their satisfaction.
AI will be key to achieving a dependable ETA
Because multiple parameters have to be taken into account to predict arrival time – for a freight vehicle, van, lorry or simple home delivery – the calculation is a complex one. Professionals stopped performing this kind of exercise ‘by hand’ a long time ago: solutions for modelling routes to calculate an estimated arrival time do exist, and have been used for decades in the world of transport.
There is more and more talk about the use of artificial intelligence in this domain, but for once this has nothing to do with fashion: it responds to a real need:
- to raise the reliability of predictive models by taking more data into account: data histories that are far more detailed in terms of vehicle characteristics, road network variables, traffic conditions, accidents and other unforeseen events, and that cast the net wide to include a host of variables such as weather conditions, local events, seasonal population movements, holiday departure dates, etc. Taken together, this historical data forms the foundation from which the artificial intelligence engine learns, so that it can establish far more precise arrival-time forecasts.
- to fine-tune predictions in real time, as a function of data uploaded from vehicles in the fleet, from drivers, and from road and logistics infrastructures. The ETA, together with its error margin, needs to be constantly recalculated so it can be communicated to all parties involved further down the line – enterprise or end consumer – providing information that is accurate and operationally useful, with a quantifiable reliability indicator.
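The real-time fine-tuning described above can be sketched very simply: blend each new speed reading from the vehicle into a running estimate, then recompute the time remaining. This is a minimal illustration, not a reference to any real product; the class name, field names and the exponential-smoothing choice are all assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class EtaEstimator:
    """Illustrative sketch: refresh an ETA from live telemetry by
    exponentially smoothing the vehicle's observed speed."""
    smoothed_speed_kmh: float   # running estimate of effective speed
    alpha: float = 0.3          # weight given to the newest observation

    def update(self, observed_speed_kmh: float, remaining_km: float) -> float:
        """Blend the latest speed reading in, then return hours to arrival."""
        self.smoothed_speed_kmh = (
            self.alpha * observed_speed_kmh
            + (1 - self.alpha) * self.smoothed_speed_kmh
        )
        return remaining_km / self.smoothed_speed_kmh

# A vehicle cruising at 60 km/h hits traffic and slows to 40 km/h
# with 54 km still to go:
est = EtaEstimator(smoothed_speed_kmh=60.0)
hours_left = est.update(observed_speed_kmh=40.0, remaining_km=54.0)
# smoothed speed becomes 0.3*40 + 0.7*60 = 54 km/h, so 1.0 hour remains
```

A production system would of course fold in far richer signals (route, traffic feeds, driver behaviour), but the principle – continuously revising the estimate as data arrives – is the same.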
What AI offers in the area of forecasting arrival times – and in prediction more generally – is effectively three-fold: the capacity to:
- develop the forecasting model itself through learning from past operations;
- refresh, potentially in real time, ETAs that have already been communicated;
- provide information that is as precious as the estimated arrival times themselves: the reliability rate for these estimations – a rate which, with the help of AI and machine learning, can only go up.
The perfect environment in which to deploy AI
A more auspicious alignment of the planets could not be imagined for taking forward the deployment of AI in this domain. Firstly, never has such a wealth of data been generated at every step in the activity chain – by humans, computers, vehicles and connected objects of all types. Above all, with the proliferation of mobile apps and embedded systems, never have such large quantities of high precision data been so available as they are now (with time and date-stamping, and geographic positioning), and all of it instantly uploadable and exploitable for operational ends…
Next, the calculating power needed to process these huge volumes of data is now more than affordable, above all when you take into consideration the “business” value of the information obtained in return. Finally, thanks to open-source libraries, access to predictive analysis and machine learning algorithms has become much more widely available.
A priori, with access to AI becoming so widespread, the perfect conditions now prevail for a revolution in estimated arrival time calculation – one of the most apt applications for AI, and one eagerly awaited by those in the field. Nonetheless, these applications are still a long way from being readily available, and although the incentives are stacking up fast, the Proof of Concept stage often proves difficult to complete. There are two main reasons for this, common to all potential domains of application for artificial intelligence: the real availability of data, and the level of expertise required in data science.
Without data sharing, ETA accuracy cannot improve
We are all aware that performance depends on the quantity and quality of the learning data. While in theory data are not in short supply in the transport sector, reluctance to share and pool these data so that many actors can reap the benefits continues to hold back truly convincing results on ETA.
For obvious statistical reasons, the more an algorithm is honed on vast volumes of good-quality business data, the smaller its margin of error. Logically, then, a small transporter embarking on this adventure alone, with access only to their own data history, would be hard-pressed to adequately train an algorithm to cover every imaginable scenario for all journeys, at any time of day or night, under any meteorological conditions, for any type of vehicle. A major transporter would fare slightly better, but still much less well than if several transporters pooled their respective data sets to gain economies of scale. This sets the stage for the emergence of collaborative platforms offering ETA calculation services based on real data supplied by user members – data these users continue to own individually, but which is shared so that everyone, small and large companies alike, benefits from the improved ETA precision that artificial intelligence can deliver.
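The pooling arrangement described above – shared training data, individually retained ownership – can be sketched as a simple merge that tags every record with its contributor. All names and fields here are illustrative assumptions; a real platform would add anonymisation, access control and contractual safeguards.

```python
def pool_histories(histories):
    """Illustrative sketch: merge per-transporter journey records into
    one training set, tagging each record with its owner so data
    ownership stays traceable even once the data is shared."""
    pooled = []
    for owner, records in histories.items():
        for rec in records:
            pooled.append({**rec, "owner": owner})
    return pooled

# Two hypothetical transporters contribute journey histories:
pooled = pool_histories({
    "carrier_a": [{"distance_km": 120, "duration_min": 95}],
    "carrier_b": [{"distance_km": 80, "duration_min": 70}],
})
# pooled now holds both carriers' records, each tagged with its owner,
# ready to train a shared ETA model
```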
Pertinent skillsets are rare and precious commodities…
The fact that machine learning algorithms are available in open source does not mean that just anybody can use them to develop their own AI for ETA calculation. The algorithm will only do so much, and certainly won’t do much at all without serious investment in predictive modelling. Modelling – including the data analysis side and knowledge of existing usage scenarios for market-standard algorithms – is the core business of data scientists. This skill profile is highly sought after by all the major players in freight and transportation. Despite rising numbers of qualified data scientists each year, demand always outstrips supply, and the stakes in terms of salaries are significant, as pay packets for these profiles have sky-rocketed in recent years.
Whether it is to predict capacities, optimise routes or calculate estimated arrival times, an alternative to directly employing such profiles is, of course, to seek the help of service providers and publishers of logistics optimization solutions. A publisher like Geoconcept has been recruiting data and modelling specialists for decades. Its in-house experts are used to working with state-of-the-art tools – and artificial intelligence is an integral part of this, as the backbone of the solutions they develop for customers all over the world.