Edge Computing in Logistics: Enabling Real-Time Data Processing Closer to Operations
As logistics networks become increasingly complex, the volume of real-time data generated by devices, equipment, vehicles, and facilities is growing rapidly. Traditional cloud-centric architectures, which depend on centralized processing, may not meet the speed or reliability requirements of operations at scale.
Edge computing — processing data locally, near the source — has emerged as a method to address these challenges by reducing latency and improving resiliency. However, implementing edge computing in logistics environments introduces its own set of technical and operational complexities that require careful planning.
The Challenge: Managing Complexity in Edge-Based Logistics
Deploying edge computing in logistics introduces several practical challenges. Even with local processing, network variability, particularly in remote warehouses, at ports, and along mobile routes, can still cause small but consequential delays. Managing bandwidth efficiently across many connected devices remains an ongoing issue, especially when scaling to large numbers of distributed devices.
Device management is another critical area. In a typical logistics network, operators must provision, update, and monitor large fleets of devices — sensors, cameras, connected and autonomous vehicles, and mobile computers. This creates significant maintenance and oversight requirements, as each device represents a potential point of failure or vulnerability.
Data security at the edge also presents risks. Unlike centralized cloud infrastructure, edge devices are often physically accessible, making them more vulnerable to tampering or unauthorized access. Securing sensitive information — such as shipment details, customer data, and operational parameters — is a continuous concern.
A lack of industry-wide standards complicates the situation. Different manufacturers and vendors often use different protocols and systems, making integration resource-intensive from both a capital and a personnel perspective. This fragmentation often delays edge deployment initiatives.
Optimizing AI models for edge hardware is another area of difficulty. AI models designed for centralized cloud environments are often too large or power-hungry to run efficiently on smaller edge devices. Logistics organizations must carefully balance model size, speed, power consumption, and decision accuracy.
Cost also factors heavily into adoption decisions. Establishing a robust edge environment requires upfront capital investment, and demonstrating a clear return on investment is often challenging, especially in the early stages of deployment.
Environmental factors present additional challenges. Edge devices in logistics must often operate outdoors or in other harsh conditions, including extreme temperatures, exposure to vibration during transport, and inconsistent power quality, all of which can affect reliability.
Finally, real-time decision-making complexity must be addressed. It is not sufficient simply to collect and transmit data; edge systems must interpret and act on data autonomously, which introduces further technical requirements. Even after local processing, logistics organizations must ensure consistent and reliable synchronization of data back to cloud-based systems for reporting and analysis, without network congestion, version conflicts, or other anomalies.
Addressing the Challenges: Practical Approaches
Organizations that have successfully adopted edge computing in logistics have typically used a hybrid strategy, in which real-time operational decisions are managed at the edge while longer-term analytics and broader visibility are maintained in the cloud. This reduces dependency on external networks while retaining the benefits of centralized reporting and analysis.
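As a rough illustration of the edge side of such a hybrid setup, the sketch below (a minimal outline with hypothetical field names and a placeholder ingest endpoint, not a production pattern) batches locally processed events, de-duplicates them by event ID, and attaches a per-device sequence number so the cloud side can detect gaps or conflicts when records are synchronized.

```python
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://example.com/ingest"  # hypothetical cloud ingest endpoint


class EdgeSyncBuffer:
    """Buffers locally processed events and uploads them in ordered batches."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.sequence = 0   # per-device sequence number, lets the cloud detect gaps
        self.pending = {}   # event_id -> event, de-duplicates retried events

    def add_event(self, event_id, payload):
        if event_id in self.pending:
            return          # already queued; avoid duplicate uploads
        self.sequence += 1
        self.pending[event_id] = {
            "device_id": self.device_id,
            "sequence": self.sequence,
            "recorded_at": time.time(),
            "payload": payload,
        }

    def flush(self):
        """Send all pending events in one request; keep them if the upload fails."""
        if not self.pending:
            return
        body = json.dumps(list(self.pending.values())).encode("utf-8")
        request = urllib.request.Request(
            CLOUD_ENDPOINT, data=body,
            headers={"Content-Type": "application/json"},
        )
        try:
            with urllib.request.urlopen(request, timeout=10):
                self.pending.clear()   # only clear once the cloud acknowledges
        except OSError:
            pass                       # network unavailable; retry on the next flush
```

A flush like this would typically run on a timer or when connectivity is confirmed, so that a congested or unavailable network simply delays, rather than corrupts, the cloud-side record.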
To support the edge side of this hybrid approach, many logistics companies are adopting AI model optimization tools designed for edge deployments, such as Google’s TensorFlow Lite and the NVIDIA Jetson platform. These allow the development of lightweight models that maintain performance within the processing and power constraints of mobile or ruggedized edge hardware.
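As one hedged example of that kind of optimization, the sketch below uses TensorFlow Lite's standard converter with default post-training quantization to shrink a trained model for deployment on constrained hardware; the model directory and file names are placeholders, and the resulting accuracy trade-off would need to be validated against the original model before rollout.

```python
import tensorflow as tf

# Path to a previously trained model (placeholder; substitute your own SavedModel).
SAVED_MODEL_DIR = "models/damage_detector"

# Convert the full model to the TensorFlow Lite format used on edge devices.
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)

# Apply default post-training quantization to reduce model size and memory use,
# typically at a small accuracy cost that should be measured before rollout.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

# Write the compact model file that the edge runtime will load.
with open("damage_detector.tflite", "wb") as f:
    f.write(tflite_model)
```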
Security strategies have also evolved. Increasingly, organizations are deploying zero-trust architectures, where each device and communication session is authenticated and encrypted by default, significantly reducing the attack surface across the network.
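A minimal sketch of that per-device authentication idea, using Python's standard ssl module for mutual TLS, is shown below; the gateway address, port, and certificate file names are placeholders, and real deployments would add certificate rotation and revocation handling.

```python
import socket
import ssl

GATEWAY_HOST = "edge-gateway.example.com"   # placeholder gateway address
GATEWAY_PORT = 8883                         # placeholder port

# Verify the gateway against a private certificate authority (no implicit trust).
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.pem")

# Present this device's own certificate so the gateway can authenticate it as well.
context.load_cert_chain(certfile="device.pem", keyfile="device.key")

with socket.create_connection((GATEWAY_HOST, GATEWAY_PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=GATEWAY_HOST) as tls:
        # Both ends are now authenticated and the session is encrypted.
        tls.sendall(b'{"device_id": "dock-cam-17", "status": "online"}')
```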
Managing edge devices at scale requires unified platforms capable of remote monitoring, updating, and configuration management. Solutions such as Microsoft Azure IoT Hub provide centralized oversight while maintaining device autonomy at the operational level.
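As a hedged sketch of what that centralized oversight can look like with the azure-iot-device Python SDK, the example below has a device report its firmware version and health as reported twin properties and send a routine telemetry message; the connection string, device name, and property names are placeholders rather than a prescribed schema.

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string; in practice this is provisioned per device.
CONNECTION_STRING = "HostName=...;DeviceId=dock-cam-17;SharedAccessKey=..."

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# Report device state to the device twin so operators can query fleet health
# and target updates from the hub.
client.patch_twin_reported_properties({
    "firmware_version": "1.4.2",   # placeholder version string
    "storage_free_mb": 512,
})

# Send a routine telemetry message for centralized monitoring.
client.send_message(Message(json.dumps({"temperature_c": 41.5, "status": "ok"})))

client.disconnect()
```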
Efforts to standardize data formats and interfaces are also beginning to help. Organizations like the Open Logistics Foundation and the Industrial Internet Consortium are promoting standards that reduce integration complexity across vendor ecosystems.
On the hardware side, companies are investing in ruggedized edge devices that are specifically designed to handle harsh operational environments, minimizing downtime and maintenance needs.
Operationally, many organizations are rolling out edge computing solutions in stages — starting in controlled warehouse environments where conditions are more predictable, and then expanding to more complex, mobile scenarios such as trucking or intermodal freight.
Examples of Real-World Applications
Several logistics companies have already integrated edge computing into their operations for specific use cases.
FedEx has deployed a system called SenseAwareID, which uses lightweight Bluetooth Low Energy (BLE) sensors to enable real-time package tracking without relying solely on cellular networks. Sensors communicate with edge gateways to provide operational visibility without adding significant network load.
Maersk has introduced Remote Container Management for its shipping containers. IoT sensors embedded within containers monitor environmental conditions and location data locally, transmitting only essential updates to centralized systems.
DHL has deployed smart glasses in warehouse operations as part of its Vision Picking project. The glasses process inventory and product picking data locally, reducing the time needed to retrieve and fulfill orders compared to fully cloud-dependent systems.
Key Considerations for Logistics Organizations
Several key factors should be considered when planning edge deployments in logistics.
First, edge AI differs from cloud AI in important ways: models must be optimized not only for accuracy but also for constraints such as memory usage, processing speed, and battery consumption. Security must also be built into the system from the outset, with a combination of hardware security modules, encryption, and continuous monitoring.
While 5G networks promise improved latency and throughput, edge computing remains important even with advanced wireless connectivity, because local processing still offers superior performance in many operational scenarios.
Progress toward interoperability is ongoing, and standards being promoted by industry groups are likely to reduce integration complexity over time.
Finally, the return on investment for edge computing often comes not from direct technology savings but from operational improvements such as reduced fulfillment cycle times, lower incident rates, more accurate inventory tracking, and improved customer service levels.
Summing Up
Edge computing is becoming a valuable tool for addressing the real-time data processing requirements of modern logistics operations. Its adoption, however, requires careful attention to technical, operational, and economic factors.
Current challenges include managing latency, device fleets, security, integration standards, AI model deployment, cost, environmental conditions, decision-making reliability, and cloud synchronization. Companies that approach these issues systematically — through hybrid architectures, AI optimization, zero-trust security, ruggedized hardware, and staged deployments — are seeing real-world operational benefits.
Use cases from FedEx, Maersk, and DHL illustrate that while edge computing deployments in logistics are still evolving, they are delivering measurable improvements in speed, reliability, and visibility.
As these technologies mature and standards emerge, edge computing is expected to play an increasingly significant role in the logistics and supply chain ecosystem in both the near and long term.
You may like
Crusoe and Redwood Materials Expand Strategic Partnership
On March 24, 2026, Crusoe, an AI infrastructure company, and Redwood Materials, a leader in battery recycling and energy storage, announced a major expansion of their existing partnership.
The move scales their joint operations in Sparks, Nevada, to seven times the original AI infrastructure density, providing a blueprint for how second-life batteries can power high-performance computing.
From Pilot to Scale: 7x Growth
The expansion follows a successful pilot program launched in June 2025. Initially, the project utilized four Crusoe Spark™ modular data centers. Following seven months of high performance, the companies are increasing the deployment to 24 modular data centers.
This growth is made possible by the hardware’s “modular” nature. Unlike traditional data centers, which require years of on-site construction, modular units can be manufactured off-site and deployed in months.
Powering AI with Second-Life Batteries
A central component of this partnership is the use of “second-life” electric vehicle (EV) batteries. When EV batteries are no longer optimal for automotive use, they often retain significant capacity for stationary energy storage.
Redwood Materials integrates these repurposed batteries into a 12-megawatt (MW) / 63-megawatt-hour (MWh) microgrid. This system, combined with on-site solar power, provides the energy required to run Crusoe’s AI-optimized GPUs. The orchestration of these batteries is handled by Redwood’s “Pack Manager” technology, which ensures steady power delivery for the intense workloads required by AI model training and inference.
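As a rough back-of-the-envelope check (ignoring the solar contribution, depth-of-discharge limits, and conversion losses), the stated ratings imply a little over five hours of battery capacity at the full 12 MW draw:

```python
# Nominal storage duration implied by the stated microgrid ratings.
power_mw = 12      # maximum continuous draw (MW)
energy_mwh = 63    # total battery capacity (MWh)

hours_at_full_load = energy_mwh / power_mw
print(f"{hours_at_full_load:.2f} hours at full load")   # 5.25 hours
```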
Reliability and Performance Metrics
A primary concern with renewable-powered microgrids is “uptime”, the percentage of time the system is operational. The press release highlights several key performance indicators from the initial seven-month period:
99.2% Operational Availability: The microgrid exceeded reliability expectations while running on renewable sources and battery storage.
99.9% Total Uptime: By leveraging the traditional power grid as a backup source, Crusoe Cloud maintained a nearly constant state of operation.
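To put those percentages in context, the quick calculation below (assuming an average month of roughly 30.4 days across the seven-month period) converts each availability figure into implied downtime hours:

```python
# Implied downtime over the seven-month pilot, assuming ~30.4 days per month.
pilot_hours = 7 * 30.4 * 24

for label, availability in [("microgrid alone", 0.992), ("with grid backup", 0.999)]:
    downtime_hours = pilot_hours * (1 - availability)
    print(f"{label}: ~{downtime_hours:.0f} hours of downtime")
```

On those assumptions, the gap between 99.2% and 99.9% works out to roughly 41 hours versus 5 hours of downtime over the period.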
Supply Chain and Sustainability
The partnership addresses two of the most significant bottlenecks in the current AI boom: energy consumption and deployment speed.
Sustainability: By using recycled materials and on-site renewable energy, the “AI factory” model reduces the carbon footprint associated with massive data processing.
Predictability: The ability to scale in months rather than years allows AI providers to meet the rapidly fluctuating demand for compute power.
As the demand for intelligence grows, the convergence of innovative energy storage and modular infrastructure—as demonstrated by Crusoe and Redwood Materials—offers a potential path forward for sustainable and rapid industrial scaling.
Velotic Launches as Independent Industrial Software Company Integrating Proficy, Kepware, and ThingWorx
Velotic announced its launch as an independent industrial software company, bringing together multiple established platforms to support evolving industrial and manufacturing requirements. The formation of Velotic coincides with the closing of TPG’s previously announced acquisitions of Proficy, the former manufacturing software business of GE Vernova, and PTC’s former industrial connectivity and Internet of Things (IoT) businesses.
Backed by TPG, Velotic provides a suite of data-driven solutions designed to help improve operational efficiency, enhance productivity, and increase visibility across complex industrial environments. The combined portfolio integrates Proficy’s automation and production management capabilities, Kepware’s industrial connectivity technologies, and ThingWorx’s industrial data and analytics applications.
According to Craig Resnick, Vice President, ARC Advisory Group, “The industrial software market is entering a pivotal moment. Manufacturers are under pressure to modernize operations, extract greater value from data, and rapidly adopt AI—without sacrificing reliability, safety, or control. Against this backdrop, the formation of Velotic as a new standalone industrial software company bringing together Proficy®, Kepware® and ThingWorx® represents more than a corporate restructuring. It signals a shift in how industrial data, analytics, and operations technology (OT) can be delivered at scale, that ARC strongly advocates.”
Velotic is positioned to help address increasing demand for integrated, AI-enabled industrial software by combining established technologies into a unified offering. The company focuses on enabling manufacturers to manage data more effectively and support operational decision-making across distributed environments.
Manufacturing software executive Brian Shepherd has been appointed CEO of Velotic. He brings over 25 years of experience in manufacturing technology, including leadership roles at Rockwell Automation, Hexagon Manufacturing Intelligence, and PTC. James Heppelmann, former Chairman and CEO of PTC, has been named Executive Chairman.
Velotic operates as a hardware-agnostic platform provider with a focus on flexibility and interoperability. Proficy, Kepware, and ThingWorx will continue as distinct product lines within the broader portfolio. The company is headquartered in the Boston area and reports more than $300 million in revenue, serving customers across manufacturing, oil and gas, utilities, and infrastructure sectors.
Lytica and the Emergence of a Pricing Science Layer in Procurement
A recent briefing with Lytica highlights a shift in procurement from opaque negotiation toward statistically grounded pricing intelligence.
Procurement has long operated with an imbalance of information.
Suppliers understand pricing across customers, volumes, and market conditions. Buyers rely on internal history, limited benchmarks, and negotiation experience to determine whether a price is competitive. In categories such as electronic components, this gap is amplified by volatility and limited transparency.
The result is consistent. Different companies, and often different divisions within the same company, pay materially different prices for the same component.
Lytica is attempting to address that condition.
From Transaction Data to Market Intelligence
Lytica’s platform is built on anonymized buyer transaction data aggregated across a network of companies. This creates a continuously updated view of pricing across suppliers, regions, and time.
This is not modeled data or survey input. It reflects observed market behavior.
That distinction allows procurement teams to assess pricing against a broader market reference:
Where are we overpaying?
How do suppliers price across customers?
What does competitive pricing look like?
This represents a move from internal spend analysis to external market intelligence.
From Benchmarking to a Pricing Discipline
The more important development is how this data is modeled.
Lytica treats pricing as a measure of competitiveness rather than a fixed value. Prices exist within a distribution shaped by real transactions. Each company occupies a position within that distribution.
This enables a more structured evaluation of procurement performance:
Prices can be ranked relative to the market
Outliers can be identified and examined
Expected price ranges can be estimated using observed data
The question shifts from “Is this price good?” to “How competitive is this price relative to the market?”
This introduces a more disciplined approach to procurement performance.
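As a hedged illustration of that kind of positioning (not Lytica's actual methodology), the sketch below ranks a paid unit price against a set of observed market transactions, estimates an expected range from the interquartile spread, and flags outliers; the prices are hypothetical.

```python
import statistics


def price_position(paid_price, market_prices):
    """Rank a paid price against observed market transactions (illustrative only)."""
    # Share of market prices at or below the price we paid.
    below = sum(1 for p in market_prices if p <= paid_price)
    percentile = 100 * below / len(market_prices)

    # Expected range and outlier flag from the interquartile spread.
    q1, _, q3 = statistics.quantiles(market_prices, n=4)
    iqr = q3 - q1
    is_outlier = paid_price > q3 + 1.5 * iqr or paid_price < q1 - 1.5 * iqr

    return {
        "percentile": round(percentile, 1),
        "expected_range": (round(q1, 4), round(q3, 4)),
        "outlier": is_outlier,
    }


# Hypothetical per-unit prices for the same component across anonymized buyers.
market = [0.42, 0.45, 0.47, 0.48, 0.50, 0.52, 0.55, 0.58, 0.61, 0.66]
print(price_position(0.63, market))
```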
Quantifying Leverage in Negotiation
Once pricing is modeled this way, negotiation becomes more structured.
Procurement teams can enter discussions with:
Target pricing ranges based on transaction data
Evidence of variance across comparable buyers
Supplier-specific pricing patterns over time
This replaces qualitative positioning with data-backed arguments.
The result is more consistent outcomes and shorter negotiation cycles.
From Data to Decision Support
The next step is applying this dataset in operational workflows.
As outlined in modern supply chain architectures, AI systems become more useful when grounded in domain-specific data and applied with context.
In this case, systems can:
Identify deviations from competitive pricing levels
Estimate expected pricing ranges based on observed transactions
Generate supplier-specific negotiation guidance
Monitor pricing performance over time
These outputs are typically delivered as structured guidance for sourcing teams.
The Role of Context and Retrieval
The effectiveness of this approach depends on how data is accessed and retained.
Retrieval-based architectures allow systems to reference current transaction data when generating recommendations. Context-aware systems retain supplier history, pricing behavior, and prior outcomes across decision cycles.
This supports continuity in decision making rather than isolated analysis.
Positioning in the Stack
Lytica does not replace ERP or sourcing platforms. It operates as an intelligence layer above them.
This reflects a broader shift:
Systems of record manage transactions
Systems of execution manage workflows
Systems of intelligence guide decisions
Over time, as confidence in recommendations increases, this layer is likely to become more integrated into execution.
The Bottom Line
Lytica reflects a shift in procurement.
Pricing is moving from opaque negotiation toward structured, data-based market positioning.
This changes how procurement operates:
From internal benchmarks to external reference points
From periodic sourcing to continuous evaluation
From intuition to structured decision support
In more volatile supply environments, this type of capability becomes increasingly relevant.
Organizations that adopt it early will have a clearer understanding of their market position and a more consistent approach to improving it.