From Microgrids to Hypergrids: Data Center Power Demands + Hyperscaler Capital is Creating a New Grid Architecture
About 0.3 percent of US power was generated by microgrids in 2024, but data centers use about 4.4 percent of US power today, a figure expected to grow to about 12 percent by 2030. The urgent rush to develop new and more capable “frontier models,” which are critical to the functioning of AI applications, is viewed as an existential requirement for hyperscalers and is inherently linked to enormous energy consumption. These models are developed using power-hungry machine learning algorithms that run on graphics processing units (GPUs), tensor processing units (TPUs), and conventional central processing units (CPUs).
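Taken at face value, those two figures imply a steep compound growth rate in the data center share of US power. A minimal arithmetic sketch, assuming the 4.4 percent figure applies to 2024 and the 12 percent projection to 2030:

```python
# Implied compound annual growth rate (CAGR) of the data center share of US power,
# assuming 4.4% of US power in 2024 and 12% in 2030 (figures from the text above).
share_2024 = 0.044
share_2030 = 0.12
years = 2030 - 2024

cagr = (share_2030 / share_2024) ** (1 / years) - 1
print(f"Implied CAGR of the data center share: {cagr:.1%} per year")  # roughly 18% per year
```

Because total US generation is itself expected to grow, absolute data center demand would rise even faster than this share-based figure suggests.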
The power required to create these frontier models has become a limiting factor for hyperscalers seeking to remain relevant and competitive, driving them to increasingly act as their own utilities. Traditionally, data centers sourced power from utilities, but new hyperscale data centers are unwilling to wait through five-plus-year planning cycles to access grid power. For example, the Stargate data center currently under construction is planned for a power consumption of 1.2 GW at its flagship site in Abilene, Texas. Stargate is a portfolio of massive sites designed to reach a total commitment of 10 GW and $500 billion in investment across the US.
Crusoe Energy is building these data centers and is actively developing the power plants and underlying infrastructure required to support the initiative at the flagship Abilene campus. In this effort, Crusoe is acting as a vertically integrated AI infrastructure provider, handling both the power generation and the data center build.
The Hypergrid Regulation Problem
The regulation of microgrids has been problematic. FERC Order No. 2023 (issued July 2023) has helped reduce connection queues for new power sources by introducing the Cluster Study Process, the “First-Ready, First-Served” reform, and firm deadlines for grid operators to complete studies, including financial penalties for failure to process requests on time. FERC Order No. 2023 deals exclusively with the generator interconnection queue and applies to new gas and nuclear power plants, as well as renewables such as wind and solar and energy storage.
Historically, a data center’s primary function has been to act as a massive consumer (load) of electricity. Connecting a load, such as a factory or data center, has traditionally fallen under the authority of state public utility commissions (PUCs), not FERC. Because Order No. 2023 addresses only generator queues, it provides no relief for load interconnection queues, which are the primary source of the multiyear delays faced by data centers.
If a data center’s microgrid meets the regulatory requirements to sell power (export) to the wholesale interstate grid—for example, by qualifying as a Qualifying Facility or Exempt Wholesale Generator—the interconnection of that specific generating asset would be governed by FERC’s generator interconnection procedures, including the 2023 reforms. However, data centers can simultaneously be large loads, making them subject to state utility regulation as well as certain federal approvals.
The US Department of Energy (DOE) has formally urged FERC to initiate rulemaking to clarify federal jurisdiction and establish standardized rules for the interconnection of large electrical loads, typically defined as greater than 20 MW and including data centers. However, the jurisdictional boundary between state and federal authority remains unsettled as of the end of 2025.
The Hypergrid Interconnection Problem
In principle, utilities welcome additional business and the opportunity to sell power to data centers, but hyperscalers are not typical grid customers. In the current frenzied rush to build data centers, utilities are not prepared to meet the aggressive schedules that data center customers demand.
The Stargate project is a massive joint-venture data center complex involving OpenAI, Oracle, and SoftBank. The project relies on Crusoe to address the primary bottleneck facing new hyperscale AI data centers: the speed and availability of power. Crusoe is the developer and operator of Stargate’s flagship campus in Abilene, Texas, which is planned to scale up to 1.2 GW of power capacity. Crusoe’s core business model is to control the full stack, from power generation and energy procurement to data center design and hardware deployment, enabling sites to come online in months rather than years.
Bridge Power: For the Abilene site, Crusoe is installing GE Vernova LM2500XPRESS aeroderivative gas turbines. This on-site natural gas plant is a crucial component that allows the data center to energize quickly, bypassing slow utility interconnection queues. Together, these units are flexible, highly efficient, and capable of providing nearly 1 GW of power.
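As a rough sizing sketch (assuming a hypothetical per-unit output of about 35 MW for an aeroderivative turbine of this class, not a published specification), the unit count needed to reach roughly 1 GW works out as follows:

```python
# Rough sizing sketch: how many aeroderivative units it takes to reach ~1 GW of on-site
# bridge power. The ~35 MW per-unit rating is an assumed, order-of-magnitude figure.
import math

target_mw = 1000          # ~1 GW of bridge power at the Abilene campus
assumed_unit_mw = 35      # assumed per-unit output for an aeroderivative gas turbine

units_needed = math.ceil(target_mw / assumed_unit_mw)
print(f"~{units_needed} units of ~{assumed_unit_mw} MW each to reach {target_mw} MW")
# ~29 units under these assumptions
```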
Renewable Integration: The Abilene site is also strategically located to draw on the region’s abundant wind power, a key factor in Crusoe’s site selection, and uses large-scale behind-the-meter battery storage and solar resources.
Backup/Resilience: The gas turbines function as a highly responsive source of backup power for the data halls, replacing traditional, less efficient diesel generators and ensuring 24/7 reliability for highly sensitive AI workloads.
Future Plans: Crusoe has announced a long-term strategic partnership with Blue Energy to develop a massive, multi-gigawatt, nuclear-powered data center campus at the Port of Victoria, Texas, demonstrating its commitment to pioneering long-term, high-capacity generation solutions.
In short, Crusoe is not just building a building; it is building a Grid-Interactive Compute Plant (GICP)—a massive power generation and orchestration asset designed to serve the Stargate project’s unprecedented energy demands.
Stargate Data Center (Crusoe Energy)
The Utility Perspective on Power for Data Centers
Utilities have several key performance indicators that help them maintain reliable power, and they will assess whether a hypergrid improves or degrades these metrics. The electric grid (macrogrid) is designed to always have more power available than is being used at any given moment. This “excess generating capacity” is best measured by the Planning Reserve Margin (PRM). The reserve margin represents the amount of available generating capacity a region has above its anticipated peak demand.
The industry standard minimum target for reserve margin across most US regions has historically been around 15 percent. This reserve is intended to protect against long-duration outages. Spinning reserve, used for frequency regulation, is approximately 3 to 7 percent and can be deployed within seconds to help regulate grid frequency.
Both the planning reserve margin and spinning reserves are threatened by massive new loads. With advanced grid control systems, hypergrids can be designed to improve both.
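A minimal sketch, with illustrative rather than actual numbers, of how the planning reserve margin is computed and how a single unmitigated 1.2 GW load erodes it:

```python
# Planning Reserve Margin (PRM): available capacity above anticipated peak demand,
# expressed as a fraction of peak demand. All numbers below are illustrative only.
def planning_reserve_margin(available_capacity_mw: float, peak_demand_mw: float) -> float:
    return (available_capacity_mw - peak_demand_mw) / peak_demand_mw

peak_demand_mw = 80_000          # hypothetical regional peak demand
available_capacity_mw = 92_000   # hypothetical available generating capacity

prm = planning_reserve_margin(available_capacity_mw, peak_demand_mw)
print(f"PRM: {prm:.1%}")  # 15.0% -- right at the historical target

# Adding a 1.2 GW data center load with no new generation erodes the margin:
new_load_mw = 1_200
prm_after = planning_reserve_margin(available_capacity_mw, peak_demand_mw + new_load_mw)
print(f"PRM with new load: {prm_after:.1%}")  # ~13.3%
```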
Conclusion and Outlook
In recent years in the US, non-dispatchable wind and solar power have dominated new power additions, but this new capacity has not kept pace with rising power demand, and both reserve margins and spinning reserves have declined. This is due in part to the retirement of generation assets such as steam turbines in coal and nuclear plants, as well as older gas generators. Advanced grid-forming inverters for solar PV and battery systems, along with advanced power converters for wind turbines and Static VAR Compensators (SVCs) and STATCOMs, can provide synthetic inertia and voltage regulation capabilities. While renewable power is not dispatchable, large grid-scale batteries are, and these batteries will play an increasingly important role for data centers, far beyond the function that traditional data center UPS systems served in the past.
Given the current crisis of rapidly rising data center power loads, aging infrastructure, and retiring firm generation, the most effective path to a more reliable grid requires new hypergrids to focus on advanced automation, grid-forming inverters, expanded battery storage, more effective demand response, and a more interconnected and digital grid.
Regulations for connecting hypergrids and microgrids to local macrogrids need to be improved through consistent rules that reduce connection queues without compromising grid stability or reliability. The split authority—where FERC regulates how power generation is added to the grid while state public utility commissions regulate how new loads are added—was established before microgrids were common. Today, the massive scale of hypergrids is placing significant pressure on these outdated regulatory structures. The US should strive to be more highly interconnected across North America to improve the effective reserve margin.
Ultimately, whether it is a 1 MW microgrid or a 700 MW hypergrid, designing these systems with advanced control technologies that enhance grid stability when connected to the macrogrid, while also meeting load requirements in island mode, would significantly ease interconnection. Both microgrids and hypergrids share these requirements:
The Core Requirements
Protection and isolation (safety).
Limit harmonic distortion and voltage flicker.
Capability to absorb or inject reactive power (VARs) during both power import and export.
Advanced Requirements
The microgrid/hypergrid BESS and PV inverters should be capable of providing rapid, advanced voltage support to the utility’s distribution system, effectively acting as a high-speed STATCOM (Static Synchronous Compensator).
The microgrid/hypergrid should be able to modulate its real power output (MW) very quickly to participate in frequency regulation markets.
Microgrids/hypergrids should have black start capability.
The microgrid/hypergrid must contractually offer spare capacity and BESS to participate in the utility’s demand response or virtual power plant (VPP) programs, agreeing to inject power or curtail load when the macrogrid is stressed.
Microgrids/hypergrids need to demonstrate that their advanced inverter controls are sophisticated enough to mimic the stabilizing effect of physical inertia, preventing severe frequency drops when a large generator trips offline, as sketched below.
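To make that last requirement concrete, here is a minimal control sketch, with assumed gains and limits rather than any vendor's settings, of how a hypergrid BESS inverter might combine a frequency droop response with a synthetic-inertia term:

```python
# Minimal sketch of a grid-supporting inverter response: a proportional (droop) response to
# frequency deviation plus a synthetic-inertia term proportional to the rate of change of
# frequency (ROCOF). Gains and limits are illustrative assumptions, not vendor settings.
NOMINAL_HZ = 60.0
DROOP_MW_PER_HZ = 200.0       # assumed: MW injected per Hz of under-frequency
INERTIA_MW_PER_HZ_S = 50.0    # assumed: MW injected per Hz/s of frequency decline
BESS_MAX_MW = 300.0           # assumed BESS power rating

def inverter_power_command(freq_hz: float, rocof_hz_per_s: float) -> float:
    """Return the MW the BESS should inject (+) or absorb (-) for a given frequency state."""
    droop_term = DROOP_MW_PER_HZ * (NOMINAL_HZ - freq_hz)
    inertia_term = -INERTIA_MW_PER_HZ_S * rocof_hz_per_s  # inject while frequency is falling
    command = droop_term + inertia_term
    return max(-BESS_MAX_MW, min(BESS_MAX_MW, command))   # respect the BESS rating

# A large generator trips: frequency is at 59.90 Hz and falling at 0.5 Hz/s.
print(f"Inject {inverter_power_command(59.90, -0.5):.0f} MW")  # 20 MW droop + 25 MW inertia = 45 MW
```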
Where smaller microgrids typically relied on a mix of intermittent renewables (solar PV and wind), modest battery energy storage systems, and smaller, high-speed reciprocating diesel or gas engines for backup during island mode, hypergrids are defined by their sheer scale. These massive facilities integrate gigawatt-class gas turbines or large, modular fuel cell arrays alongside industrial-scale UPS systems and grid-scale BESS measured in tens or hundreds of megawatts (MW). The mission has shifted: traditional microgrids required a grid connection primarily to offload excess renewable generation that exceeded local load, whereas hypergrids are architected to become active partners in grid management, with significant potential to provide high-value grid services, including large-scale demand response (DR), frequency regulation, and dynamic voltage support through controlled injection and absorption of reactive power (VARs). In doing so, they transform the data center from a massive load into a dispatchable, revenue-generating asset.
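One way to picture that active-partner role is a controller that, on a utility relief request, first discharges the BESS and then curtails deferrable compute load. A minimal sketch with assumed figures:

```python
# Minimal sketch of a demand-response decision for a hypergrid: when the utility requests
# relief, first export from the BESS, then curtail deferrable compute load. All figures
# are illustrative assumptions.
def respond_to_dr_event(relief_requested_mw: float,
                        bess_available_mw: float,
                        deferrable_load_mw: float) -> dict:
    bess_export = min(relief_requested_mw, bess_available_mw)
    remaining = relief_requested_mw - bess_export
    load_curtailment = min(remaining, deferrable_load_mw)
    return {
        "bess_export_mw": bess_export,
        "load_curtailment_mw": load_curtailment,
        "unmet_mw": relief_requested_mw - bess_export - load_curtailment,
    }

# Utility asks for 250 MW of relief; 150 MW of BESS and 400 MW of deferrable training load.
print(respond_to_dr_event(250, bess_available_mw=150, deferrable_load_mw=400))
# {'bess_export_mw': 150, 'load_curtailment_mw': 100, 'unmet_mw': 0}
```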
Hyperscalers (Microsoft, Google, Amazon, Meta) continue to maintain ambitious public goals, such as achieving 100 percent renewable energy, yet many hypergrids are currently powered by natural gas. Hyperscalers are not abandoning their renewable commitments, but they are prioritizing “speed to power” over “immediacy of green power,” creating a significant and visible contradiction. They are not simply building gas plants; they are designing transitional, future-proof energy platforms in which the current reliance on natural gas is a deliberate, temporary step to address the speed-to-power constraint. This contradiction is driving a new hypergrid design philosophy centered on modularity, fuel flexibility, and long-term site viability for clean energy integration.
Hyperscalers are specifying natural gas turbines, often aeroderivative models, that are manufactured to be hydrogen-ready. Hypergrids are deploying BESS systems far larger than required for basic UPS backup. Power-first site selection has become a priority, and hyperscalers, together with their utility partners, are explicitly designing the hypergrid as a multi-phase energy complex intended to ultimately transition away from gas toward firm, zero-carbon energy sources. Site selection is based not only on available land, but also on access to underutilized high-voltage transmission lines or proximity to existing clean energy assets, such as retiring coal plants with established interconnection rights.
In summary, the hypergrid replaces the passive relationship characteristic of traditional microgrids with an active, contractual partnership with the utility, transforming a potentially disruptive massive load into a system-stabilizing asset. If designed correctly, hypergrids can reduce power costs and improve the reliability of the macrogrid on which everyone depends.
Supply Chain and Logistics News February 23rd-26th, 2026
This week’s supply chain landscape is defined by a massive push to bridge the gap between having data and actually using it. From the high-stakes legal battle over billion-dollar tariffs to a radical AI-driven workforce restructuring at WiseTech Global, the industry is moving past simple visibility toward a period of high-consequence execution. Whether it is the Supreme Court’s intervention in trade policy or the operationalization of decision intelligence showcased at the 30th Annual ARC Forum, the recurring theme is clear: the next competitive advantage belongs to those who can synchronize their technology, their inventory, and their legal strategies in real time. In this edition, we break down the four critical shifts—architectural, legal, operational, and structural—shaping the final days of February 2026.
Your News for the Week:
The Technology Gap: Why Supply Chain Execution Still Isn’t Fully Connected Yet
Richard Stewart of Infios argues that the primary technology gap in modern supply chain execution is not a lack of ambition or budget, but rather an architectural failure. Most existing systems, such as WMS and TMS, are designed to optimize within their own silos, leaving a critical disconnect during real-time disruptions where manual workarounds and spreadsheets are still required to coordinate responses. Citing the Supply Chain Execution Readiness Report, Richard highlights that 69% of leaders struggle with data quality and integration, driving a shift in buying criteria toward interoperability and real-time visibility. Ultimately, Richard suggests that the next competitive advantage will belong to organizations that move beyond simple visibility toward “connected execution,” prioritizing modular architectures that synchronize decisions across the entire operational landscape rather than just reporting on them.
FedEx sues the US Government, seeking a full refund over Trump Tariffs
FedEx has officially filed a lawsuit against the US government, seeking a full refund for duties paid under the Trump administration’s recent tariff policies. The move follows a landmark 6-3 Supreme Court ruling that found the president overstepped his authority by using emergency powers to bypass Congress’s sole power to levy taxes. While the court’s decision stopped the specific enforcement mechanism, it left the status of the estimated $175 billion already collected in limbo. As the first major carrier to seek reimbursement, FedEx’s legal challenge could set a precedent affecting the logistics industry and thousands of other importers currently navigating a volatile trade environment.
From Hidden Inventory to Returns Recovery: Exposing Operational Blind Spots
Hiu Wai Loh sheds light on the hidden inventory crisis and the costly returns black hole that plagues supply chains long after peak season ends. The research reveals that a staggering number of organizations suffer from fragmented data, leading to false stockouts and millions of dollars trapped in reverse logistics limbo. To overcome these operational blind spots, the author argues that companies must tear down silos and adopt a unified, real-time inventory model. By leveraging AI-driven smart disposition, businesses can efficiently route returns to their most profitable next destination, transforming a traditional cost center into a powerful engine for full-price recovery and year-round agility.
Avantor and Aera Technology presented at the 30th Annual ARC Forum on how they are operationalizing Decision Intelligence. They explored how modern supply chains are navigating the paradox of increasing global disruptions alongside record-breaking operational efficiency. Through a case study from Avantor, the presentation demonstrated how Decision Intelligence (DI) can move beyond theoretical AI to automate thousands of routine daily decisions, such as stock rebalancing and purchase order prioritization. The key takeaway from the ARC Advisory Group’s 30th Leadership Forum is that companies should focus on “change-ready” solutions that solve immediate, high-impact problems rather than waiting for perfect data or fully autonomous systems.
WiseTech Global Cutting 30% of Workforce in AI Restructure
WiseTech Global, the developer of the CargoWise platform, has announced a major two-year restructuring plan that will involve cutting approximately 2,000 jobs, or 29% of its global workforce. This strategic pivot aims to integrate artificial intelligence deeper into both its internal operations and its customer-facing software, which currently handles a massive 75% of global customs transaction data. The layoffs are expected to hit the company’s U.S. cloud division, E2open, particularly hard, with some reports suggesting cuts of up to 50% there. This move comes at a turbulent time for the Australian tech giant, as it seeks to regain investor confidence following a 68% drop in share price since late 2024 amid leadership controversies and shifting market dynamics.
Burger King’s AI “Patty” Moves AI Into Frontline Execution
Burger King is piloting an AI assistant called “Patty” inside employee headsets as part of its broader BK Assistant platform. This is not a marketing chatbot. It is an operational system embedded into restaurant execution.
Patty supports crew members with preparation guidance, monitors equipment status, and analyzes customer interactions for defined service language such as “please” and “thank you.” Managers can query performance metrics tied to service quality in real time.
The architecture matters more than the novelty.
AI Inside the Operational Core
Patty is integrated with a cloud-based point-of-sale system. That connection allows:
near-real-time inventory updates across channels
equipment downtime alerts
synchronized digital menu adjustments
structured service quality measurement
If a product goes out of stock or a machine fails, availability can be updated across kiosks, drive-through boards, and digital systems within minutes.
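A minimal sketch of what that propagation might look like, treating the POS integration as a simple fan-out event handler; the channel names and publish interface here are hypothetical, not Burger King's actual systems:

```python
# Minimal sketch of cross-channel availability propagation: when the POS reports an item
# out of stock (or a machine down), push the change to every ordering surface. Channel
# names and the publish interface are hypothetical, not Burger King's actual systems.
from dataclasses import dataclass

CHANNELS = ["kiosk", "drive_through_board", "mobile_app", "delivery_marketplace"]

@dataclass
class AvailabilityEvent:
    item_id: str
    available: bool
    reason: str  # e.g. "out_of_stock" or "equipment_down"

def publish(channel: str, event: AvailabilityEvent) -> None:
    # Stand-in for a real API call to each channel's menu service.
    print(f"[{channel}] {event.item_id} -> available={event.available} ({event.reason})")

def handle_pos_event(event: AvailabilityEvent) -> None:
    """Fan a single POS availability change out to every customer-facing channel."""
    for channel in CHANNELS:
        publish(channel, event)

# The shake machine goes down: mark shakes unavailable everywhere within the same cycle.
handle_pos_event(AvailabilityEvent(item_id="chocolate_shake", available=False,
                                   reason="equipment_down"))
```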
This is AI operating inside the transaction layer, not sitting above it.
Earlier fast food AI experiments focused on automated drive through ordering. Burger King is more measured there. The more consequential shift is internal execution intelligence.
Efficiency, Visibility, and Risk
Across retail and logistics sectors, AI agents are being embedded directly into workflows to standardize performance and compress response times. The value comes from integration and coordination, not conversational capability.
At the same time, customer sentiment toward fully automated service remains mixed. Privacy, workforce implications, and over-automation risk are active concerns. As AI begins monitoring tone and behavior, governance becomes part of the deployment decision.
Operational AI improves visibility. It also expands accountability.
Implications for Supply Chain and Operations Leaders
Three themes emerge:
Execution instrumentation – AI is now measuring soft metrics and converting them into structured operational data.
Closed loop response – When connected to POS and inventory systems, AI can both detect issues and trigger corrective updates.
Governance at scale – Embedding AI at the edge requires clear oversight, performance auditability, and workforce alignment.
Burger King plans to expand BK Assistant across U.S. restaurants by the end of 2026, with Patty currently piloting in several hundred locations.
This is not a fast food curiosity. It is a signal.
AI is moving from analytics to execution. From dashboards to headsets. From advisory tools to operational participants.
For supply chain leaders, the question is no longer whether AI will enter frontline operations. The question is how intentionally it will be architected and governed once it does.
AI and Enterprise Software: Is the “SaaSpocalypse” Narrative Overstated?
Capital is rotating. Growth has given way to value, and within technology the divergence is increasingly pronounced. While broad indices have stabilized, many software names have not. Since late 2025, software equities have materially underperformed other parts of the technology complex. Forward revenue growth across many mid-cap SaaS firms has slowed from prior expansion levels, net retention rates have edged down in several categories, and valuation multiples have compressed accordingly. Markets are repricing both growth durability and margin structure.
The prevailing explanation is straightforward. Generative AI lowers barriers to entry, reduces the cost of building applications, and compresses differentiation. If application logic becomes easier to produce, competitive intensity increases and pricing power weakens. The result is visible not only in equity valuations, but in moderated expansion rates and tighter forward guidance. There is substance behind that concern. But reducing enterprise software economics to code production misses where the structural leverage in these platforms actually resides.
The Core Bear Case
The bearish thesis rests on three related propositions: AI commoditizes application logic, accelerates competitive entry, and pressures margins. If enterprises can generate software dynamically, recurring subscription models face structural pressure. If workflows can be automated through agents, reliance on fixed applications may decline. If code becomes less scarce, incumbents may struggle to defend premium multiples.
The repricing in software reflects these risks. Multiples have compressed meaningfully, and growth expectations have moderated across several verticals. In certain categories, retention softness suggests substitution pressure is already emerging. These signals should not be dismissed as temporary volatility.
At the same time, equating software value solely with feature output or code generation is a simplification. Enterprise software durability rarely rests on feature sets alone.
What Enterprise Software Actually Represents
In supply chain environments, systems function as operational coordination layers rather than isolated applications. Transportation management systems, warehouse platforms, planning suites, and multi-enterprise visibility networks sit at the center of integrated transaction flows. They embed years of configuration, exception handling logic, compliance mappings, and cross-functional workflows. Over time, they accumulate operational data that informs sourcing, forecasting, transportation optimization, and execution decisions across the enterprise.
Replacing those systems is not equivalent to generating new code. It requires rebuilding institutional memory, re-establishing integration points, and re-validating compliance controls across internal and external stakeholders. The switching cost is not interface retraining; it is operational re-architecture.
In our research on AI system design in supply chains (AI in the Supply Chain), the recurring conclusion is that structural advantage stems from coordination, persistent context, and integration density. Model capability matters. Economic durability flows from how systems connect and govern activity across distributed networks. That distinction is central to evaluating enterprise software in the current environment.
Where Risk Is Real
Not all software categories have equivalent structural protection. Risk is most evident in narrowly defined vertical tools, lightweight workflow utilities, and productivity-layer applications with limited proprietary data accumulation. In these segments, generative models can replicate core functionality with relatively low switching friction. Pricing pressure can intensify quickly, and margin compression may prove structural rather than cyclical.
By contrast, enterprise workflow orchestration platforms deeply embedded in core business processes create operational dependency. Replacing them requires redesigning process architecture, not simply substituting interfaces. Systems that accumulate years of transaction data, customization layers, and ecosystem integrations generate switching costs that extend beyond feature parity. Observability and monitoring platforms that collect continuous telemetry function as operational infrastructure; as AI agents proliferate, the need for measurement, traceability, and governance increases rather than declines.
In supply chain software specifically, planning platforms and transportation orchestration systems accumulate integration density over time. That density represents economic friction against displacement and reinforces durability when market volatility increases.
AI as Architectural Pressure
AI will alter software economics. It will increase development intensity, shorten product cycles, and compress margins in commoditized segments. Vendors operating at the surface layer of functionality will face sustained pressure.
However, AI simultaneously increases coordination complexity. As autonomous agents proliferate, enterprises require more governance controls, more integration layers, and more persistent contextual memory. The economic question shifts from “Who can build features fastest?” to “Who can coordinate distributed intelligence most reliably?”
Agent-to-agent communication, contextual memory frameworks, retrieval-based reasoning, and graph-aware modeling are becoming foundational design considerations in supply chain environments, as described in ARC’s white paper AI in the Supply Chain: Architecting the Future of Logistics. Vendors capable of governing these interactions at scale may strengthen their structural position. Vendors confined to interface-layer differentiation may see pricing pressure intensify. The outcome is not uniform decline; it is structural differentiation within the sector.
Valuation vs. Structural Impairment
Markets reprice sectors quickly when uncertainty rises. The current adjustment reflects legitimate concerns: slower growth trajectories, reduced retention durability, increased competitive intensity, and rising research and development requirements. These are measurable economic factors.
The open question is whether valuations reflect permanent impairment across enterprise software broadly, or whether the market is failing to distinguish between commoditized applications and structurally embedded coordination platforms.
Some observers argue that AI may ultimately expand the addressable market for enterprise systems rather than compress it. As AI adoption increases, enterprises may require additional orchestration frameworks, governance layers, and system-level controls. In that scenario, platforms with embedded workflows and distribution reach could see increased strategic relevance. The impact will vary materially by category and architectural depth.
In supply chain markets, complexity is not declining. Cross-border regulation is tightening, network volatility remains elevated, and multi-enterprise coordination is becoming more demanding. Economic value accrues to platforms that integrate and govern transactions, not to those that merely present information.
Implications for Enterprise Buyers
For supply chain leaders, the relevant issue is not short-term equity performance but architectural positioning. Does the platform function as a system of record embedded in transaction flows, or as a reporting layer adjacent to them? How deeply is it integrated into compliance processes, procurement logic, and transportation execution? Does it accumulate proprietary operational data that reinforces switching costs over time? Is it evolving toward coordinated AI architectures, or layering assistive tools onto a static foundation?
AI will not eliminate enterprise systems. It will expose those whose economic value rests primarily on surface functionality rather than integration depth.
A Measured Conclusion
The current narrative captures real pressure within segments of the software sector, but it does not fully account for structural differentiation. Certain categories face sustained pricing compression where differentiation is shallow and switching friction is low. Others may strengthen as AI increases coordination demands, governance requirements, and integration complexity.
The decisive factor will not be branding or feature velocity. It will be integration density, data gravity, and the ability to coordinate distributed intelligence across enterprise and partner networks. In supply chain contexts, platforms that govern transactions, maintain contextual continuity, and orchestrate multi-node operations retain structural advantage. Platforms that merely automate isolated tasks face a more uncertain economic trajectory.
That distinction, rather than headline narrative, will determine long-term outcomes.
_______________________________________________________________________________
Download the Full Architecture Framework
A2A is only one component of a broader intelligent supply chain architecture. For a structured analysis of how A2A integrates with context-aware systems, retrieval frameworks, graph-based reasoning, and data harmonization requirements, download the full white paper:
The paper outlines the architectural model, governance considerations, and practical implementation path for enterprises building connected intelligence across their supply networks.
Download the white paper to explore the complete framework.