The Internet of Things, or IoT, is vast, projected to reach nearly 50 billion “things” by 2020 according to Philip Howard. The IoT is also nebulous. Defined as a network of physical objects or “things” embedded with electronics, software, sensors and connectivity, the IoT as we know it today includes devices as diverse as heart monitoring implants, biochip transponders on farm animals, automobiles with built-in sensors, refrigerators providing online status, bio-hazardous particulate sensors, centrally scheduled and monitored outdoor lighting, distributed net-energy meters, and many more. Plus who knows what kinds of things are on the drawing board.

Connecting all these things to the Internet is a certainty. Economics ensures it. Publicly traded companies making data center hardware and software, delivering connectivity plumbing like fiber, providing cloud services, and offering mobile services like smartphone connectivity are all looking for that next hundred million in revenue offered by an emerging market of 50 billion things. Plus, like those bulls in Pamplona, startups are running toward this bullring of opportunity too, hoping to create the killer app or uncover the dominant business model for all these things. How, though, will all these things get connected? Wirelessly, of course.


The edge of the Internet is hard to get to, because it is either remote or always moving or both. If it were easy, it would already be part of the Internet! Imagine an IoT application that utilizes RFID tags to track tools and equipment assigned to a truck and used by a field service worker. Depending upon the job and the day, this field service truck and worker may be in town where connectivity is easy, or way out of town at a remote, high-value asset like a pipeline pump station. Only a wireless connection works in both situations. One glimpse of a typical cellular provider’s mobile data coverage map and it’s easy to see that cellular coverage is prodigious. Ever-increasing ARPU powering a never-ending rollout of data connection speeds (… 2G, 3G, 4G/LTE …) has ensured that cellular is very nearly everywhere. This fact lies in stark contrast to the many failures of Municipal Wi-Fi, doomed by technical, economic and business model shortcomings, foreshadowing a future bathed in cellular.


Cellular, however, is not the Internet per se, because its networks were not built natively on the Internet’s TCP/IP protocol suite. Yet cellular routers and mobile applications with their cloud services have become quite adept at marshaling TCP/IP payloads across cellular networks, so cellular is perfectly capable of extending the edge of the Internet as far and wide as cellular networks reach today, and tomorrow.

Spanning Networks

At the Internet’s edge, IoT application developers are hard at work building businesses on devices connected to the Internet via a spanning network. First, a spanning network extends the reach of the Internet using cellular, from a cell tower outward, as far as a cell tower can reach – the so-called last mile of connectivity. Then from the edge of a cell tower’s reach, a spanning network extends the Internet even further. Wired protocols like PLC for gas, water and power meters or RS-485 for fieldbus devices have been used in the past, but the ease and economics of wireless mesh networks like Zigbee and its many variants are rendering wired protocols obsolete. So more commonly, the last foot of connectivity is provided by wireless mesh protocols.

Each IoT application needs a spanning network. The nuts and bolts of this network are built from standard, off-the-shelf components like cellular routers and RF radios, but the performance characteristics of a spanning network are very application-specific. How big are the payloads? How far and wide must payloads be distributed and through what routes? With what frequency must payloads be uploaded and downloaded? These application details, along with the economics involved in operating a spanning network at scale, drive the success of an IoT application, perhaps even more than its functionality.


Designing, testing and then operationalizing a spanning network at scale is nontrivial. Delicately balancing throughput requirements against cellular data plan requirements and costs is just one of the key drivers, but one that has an outsized effect on operational costs. How many wireless mesh nodes worth of payloads can a single cellular edge router manage? How many cellular edge routers are required to cover an IoT application’s service area? How big a machine-to-machine data plan does each cellular edge router need and what will the costs be in full operation?
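As a rough sketch of the sizing arithmetic behind these questions, the snippet below estimates a router's monthly backhaul from per-node payload traffic and picks the cheapest plan that covers it. The node counts, payload sizes, overhead factor and plan prices are all hypothetical placeholders, not vendor or carrier figures.

```python
# Rough data-plan sizing sketch for a cellular edge router backhauling a
# wireless mesh. All numbers are hypothetical, not vendor specs.

def monthly_megabytes(nodes, payload_bytes, uploads_per_hour, overhead=1.25):
    """Estimate MB/month through one edge router.

    `overhead` covers TCP/IP headers, retries and keep-alives (assumed 25%).
    """
    bytes_per_month = nodes * payload_bytes * uploads_per_hour * 24 * 30 * overhead
    return bytes_per_month / 1e6

def pick_plan(mb_needed, plans):
    """Return the cheapest (mb_cap, dollars) plan that covers the load."""
    viable = [p for p in plans if p[0] >= mb_needed]
    return min(viable, key=lambda p: p[1]) if viable else None

# Example: 40 mesh nodes, 200-byte payloads, 4 uploads per hour.
mb = monthly_megabytes(nodes=40, payload_bytes=200, uploads_per_hour=4)  # ~28.8 MB
plan = pick_plan(mb, plans=[(5, 10.0), (25, 25.0), (100, 60.0)])          # (100, 60.0)
```

Running this analysis per router class, before hardware is ever deployed, is exactly the kind of early investigation a well-designed edge router should simplify.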

Early investigations on these topics during the IoT application design phase can be dramatically simplified by a cellular edge router with the right functionality. The same is true during testing, operational scaling and even maintenance over a service agreement’s term. A cellular edge router designed for IoT application developers would support these features and functions:

  • Application Agnostic
  • Cloud Connected
  • Virtualization
  • Wireless Mesh NAT and DHCP
  • Bandwidth Modeling
  • Energy Management
  • Differential Monitoring

Application Agnostic

Attempting to be the Ginsu Knife of cellular edge routers for IoT developers is folly. There is no way to predict how a customer in a particular vertical market segment will wish to integrate IoT devices with a spanning network, and even if you could, satisfying such a disparate set of needs would bloat a cellular edge router and degrade performance for any specific customer. Companies have struggled for years trying to be just such a solution.

Instead, the paradigm needs to change. The interface between a specific IoT application and the cellular edge router becomes Ethernet only, with the app developer encapsulating their app logic into an Ethernet push device like Synapse Wireless’ SNAP Connect E10 and E20 or EKM Metering’s EKM Push. This push device has intimate knowledge of the payloads being exchanged between the wireless mesh and the cloud as well as message semantics, recovery strategies and the like, which frees the cellular edge router to focus solely on optimized TCP/IP payload exchange.

Cloud Connected

Public IP addresses are too valuable a resource to hand out willy-nilly, so carriers instead assign private IP addresses to cellular edge routers, which means direct access to a cellular edge router’s configuration requires a VPN connection into the carrier’s network. These private IP addresses are used to configure, troubleshoot and optimize the performance of a cellular edge router. For initial configuration a VPN client and configuration may not be necessary because the router can be connected locally to a computer via an Ethernet cable, but once the router heads into the R&D lab or out into the wild it’s a different story. VPN client licensing and configuration across multiple roles within an organization is an unwieldy proposition at best, and one that gets disproportionately worse as the number of routers grows. Edge router support, troubleshooting and optimization over time to maintain an IoT application’s spanning network must be simple and low-cost.


The solution is for a cellular edge router targeting IoT application developers to be “cloud connected”. Instead of connecting directly to an edge router’s configuration webpage through a VPN pipe, the IoT app developer securely logs into a cloud service provided by the edge router manufacturer in order to manage the edge router’s configuration initially as well as over time. Once provisioned into the carrier’s network, the edge router receives all of its configuration parameters from this manufacturer cloud service while pushing router monitoring and status information to the cloud service as well. No VPN client is required. No direct connection to the edge router’s webpage is needed either. The IoT app developer can then manage roles, authentication and authorization to specific edge routers in a way that is consistent with other managed devices. What’s more, a cloud service enables a stickier, longer-lasting relationship between the edge router manufacturer and the IoT app developer that improves monetization over time and can help fund cloud service development for the cellular edge router manufacturer.


Virtualization

Virtualization has been an economic boon for corporate IT and datacenters because of its dramatic improvement in utilization. Shared hardware was the enabling technology, lower operating costs the benefit.

A similar benefit occurs when all of the edge routers in a spanning network share connectivity as they do when cloud connected. Cellular data plan utilization improves. In fact, the IoT application developer can optimize this part of their business, which can have a huge impact on the bottom line when considered across many customer installations.


Additionally, spanning network reliability improves. Each cellular edge router’s configuration for a particular IoT application resides in the cloud, simplifying failover and reducing downtime. A cellular edge router can be re-flashed in minutes. Or the router can be swapped in the field by lower-cost field resources that know nothing about the IoT application, and then flashed and spun up remotely by the application developer. A spanning network can even be designed with overlapping meshes and overhead bandwidth so that a single cellular edge router can temporarily backhaul multiple meshes should an edge router fail in the field. This temporary re-leveling can be done remotely to preserve uptime at the expense of throughput, then reversed once the edge router is replaced, all without rolling a truck.

Wireless Mesh NAT and DHCP

Low-power, low-throughput wireless meshes are the norm for IoT applications because the operating expenses are more favorable. Unfortunately, these meshes do not use the Internet Protocol. Neither IP addresses nor protocols like DHCP are supported, but a cellular edge router designed for IoT application developers could deliver these capabilities. Using IPv6, a cellular edge router could individually address each node in an 802.15.4 mesh, and provide NAT as well as DHCP to simplify management from the cloud. These edge router features would further simplify the application design process for IoT developers, adding even more value.
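As a sketch of how such per-node addressing might work, the snippet below derives an IPv6 address from a mesh node's EUI-64 hardware identifier in the spirit of 6LoWPAN's modified EUI-64 scheme (RFC 4291/RFC 4944). The prefix shown is a documentation prefix and the EUI-64 bytes are invented, not values any carrier or radio would actually hand out.

```python
# Sketch: deriving a per-node IPv6 address from an 802.15.4 node's EUI-64,
# following the modified-EUI-64 convention (flip the universal/local bit,
# then use the 64 bits as the interface identifier under a /64 prefix).

def eui64_to_ipv6(prefix, eui64):
    """prefix: the /64 as 4 hex groups; eui64: 8 bytes from the mesh radio."""
    iid = bytearray(eui64)
    iid[0] ^= 0x02  # flip the universal/local bit per RFC 4291
    groups = [f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return prefix + ":" + ":".join(groups)

addr = eui64_to_ipv6("2001:db8:0:1",
                     bytes([0x00, 0x12, 0x4B, 0x00, 0x01, 0x02, 0x03, 0x04]))
# -> "2001:db8:0:1:212:4b00:102:304"
```

With a mapping like this, every mesh node gets a stable, cloud-visible address without the mesh itself ever speaking IP.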

Bandwidth Modeling

Many, possibly even most, IoT applications will be delivered as financed services so that the “customer” pays over time, often in the context of a performance contract. Financing adds a time dimension to the economics of a solution, and heightens the importance of operational expenses like cellular backhaul data plans to the overall value proposition of an IoT application. Initially identifying the optimal data plans becomes crucial. Maintaining the optimal data plans over time as carriers change plans also becomes essential.

A cellular edge router will not know the dollars per megabyte for its backhaul pipe, but it will know the megabytes moved through the gateway per month, which obviously drives data plan economics. Early on in the design of an IoT application, the megabytes per month for each class of IoT device must be determined and then used to model the throughput of each router in the IoT application’s spanning network. Making this analysis and optimization easy and then simplifying verification in the lab as well as out in the wild during a performance contract has huge value to the IoT application developer.

A set of bandwidth modeling steps might unfold like this:

  1. Connect an IoT device to the router using a wireless mesh and enable real time throughput monitoring, then run the device through its usage scenarios, keeping track of payload size and frequency (i.e., data usage) per scenario.
  2. Assemble a collection of IoT devices sharing a single wireless mesh along with a single router into a scaled down spanning network, then put the IoT devices through their combined usage scenarios to determine the maximum number of IoT devices a single router can effectively backhaul.
  3. Operate the scaled down spanning network beyond the router’s capacity to understand throughput failure modes and how to set alert thresholds for managing to a performance contract.
  4. Expand to a multi-router, multi-wireless mesh spanning network to fine tune the IoT application’s operational parameters and alerts, all the while using the router’s bandwidth modeling tools as the design feedback loop.

Throughout, easily accessible bandwidth information provided by the cellular edge router enables the IoT developer to economically deliver optimized spanning networks for the IoT application.
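Steps 2 and 3 of the modeling sequence reduce to simple arithmetic once step 1 yields per-device usage. This sketch shows the flavor, with assumed plan size, per-device usage, headroom and alert bands rather than measured values:

```python
# Capacity and alert-threshold arithmetic for bandwidth modeling steps 2-3.
# Plan size, per-device MB/month and the 20% headroom are assumptions.

def max_devices(router_cap_mb, per_device_mb, headroom=0.8):
    """Step 2: devices one router can backhaul, keeping 20% headroom."""
    return int((router_cap_mb * headroom) // per_device_mb)

def alert_thresholds(router_cap_mb, warn=0.7, crit=0.9):
    """Step 3: monthly-usage alert levels for managing to a contract."""
    return {"warn_mb": router_cap_mb * warn, "crit_mb": router_cap_mb * crit}

# Example: a 100 MB/month plan and 0.72 MB/month per device (from step 1).
n = max_devices(100, 0.72)        # 111 devices per router
levels = alert_thresholds(100)    # warn at 70 MB, critical at 90 MB
```

The same functions then feed step 4: dividing the application's device population by `n` gives the router count for the full spanning network.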

Energy Management

Remote IoT applications may be battery powered, so understanding the energy characteristics of a cellular edge router across a broad set of scenarios, as well as being able to affect the router’s energy profile programmatically and in real time, is crucial. However, even when the IoT application is not remote, energy matters. The cost of energy comes out of the IoT application developer’s top-line revenue when providing a service. Nowhere is this truer than in IoT applications targeting the energy sector, where every kilowatt-hour generated or saved gets monetized.

The solution involves ensuring the IoT application developer has access to rich energy usage data for the cellular edge router as well as a programmatic way to affect the router’s energy profile over time. Like bandwidth data, energy usage data provided by the router helps during the design and test phase and then rolls into differential monitoring to help the IoT application developer craft, then meet, a customer performance contract.

Differential Monitoring

A typical wide area IoT application requires a spanning network with numerous cellular edge routers for backhaul. A successful IoT application developer will have many customers, each with an instance of a wide area IoT application spanning network. Therefore, a successful IoT application developer must manage a large number of routers. Proactively assessing each router’s ongoing performance is untenable at scale, even if doing so can be accomplished using a cloud service. Instead, differential monitoring hosted at the cellular edge router’s cloud service is the key. Granular, side-by-side, near real-time graphs of many routers performing similar functions are the simplest way to identify anomalies at scale. Once identified, troubleshooting can begin.

Exceptions to normal behavior, surfaced as alerts, are another form of differential monitoring. Performance thresholds that trigger alerts can be configured to proactively manage a spanning network’s performance to a service level agreement, a competitive advantage in the IoT space. Facilitating the performance analysis of an IoT application’s spanning network in order to craft the terms and conditions of a service level agreement, and then creating the associated alerts, enables the IoT developer to over-deliver in the eyes of the customer.
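One minimal way to express the differential idea is to compare each router against the fleet median and flag outliers. The metric (daily megabytes) and the 50% deviation band below are illustrative assumptions, not a prescribed monitoring policy:

```python
# Differential-monitoring sketch: surface routers whose daily backhaul
# strays too far from the fleet median. Metric and band are assumptions.

def fleet_median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def anomalies(daily_mb_by_router, band=0.5):
    """Return router ids deviating more than `band` (50%) from the median."""
    median = fleet_median(list(daily_mb_by_router.values()))
    return sorted(r for r, mb in daily_mb_by_router.items()
                  if abs(mb - median) > band * median)

flagged = anomalies({"r1": 3.1, "r2": 2.9, "r3": 3.0, "r4": 0.4, "r5": 9.8})
# -> ["r4", "r5"]: one nearly silent mesh, one with runaway usage
```

Routers performing similar functions cluster tightly, so even this crude comparison surfaces the interesting few out of hundreds.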

Three Examples

A cellular edge router with these features could be used to design IoT applications both where the spanning network provides stateless connectivity only and where the spanning network is stateful and provides unique capabilities to the application. A few examples should help illustrate.

Remember the tool tracking and utilization service for truck fleets from above? This application is primarily a monitoring application, so no algorithms need to reside and run at the edge and the spanning network simply passes data from RFID tags on tools through the truck’s cellular edge router and up to the cloud. No app aggregator would be needed in the wireless mesh because no state or semantics reside there. However, IPv6 addressing and DHCP from the wireless mesh all the way through to the cloud would be very beneficial and easily delivered by this cellular edge router.

Similar to the tool tracking example, imagine a service for tracking a department’s fleet of police cars. This application is also primarily a monitoring application without algorithms at the edge, but there are additional store-and-forward semantics that would improve data collection and bandwidth utilization. Desired data might include location of the vehicle, of course, but also identifiers for the officers in or near the vehicle as well as gun access and stow events. An application aggregator, in this case, would be needed in the wireless mesh and would include a GPS chip for location, an RFID receiver for officer and gun identification plus a holster detector. These monitoring payloads would be packaged up by the aggregator and then passed through the Ethernet interface of the cellular edge router to the cloud. The cellular edge router would know nothing of the application beyond throughput requirements.

At the other end of the spectrum, imagine a mesh of off-grid solar lights that coordinate a shared dusk and dawn demarcation for a cloud-based lighting schedule. Each solar light has a solar collector and can detect dawn and dusk, but these detections will vary from light to light, causing a rolling turn-on across the entire mesh. To switch every light at once, the on/off transition can instead be determined by majority vote. Once a majority of the lights within the mesh detect dusk, for example, all lights turn on simultaneously. Ditto for dawn, except that the lights turn off. These semantics are handled within the mesh rather than in the cloud to eliminate the data throughput requirements and connectivity latency. Here an application aggregator would be required, with the algorithm to determine “majority” and then broadcast the turn on or turn off message to the mesh. This aggregator would include a wireless mesh radio so that it can communicate with all the solar light nodes in the mesh, as well as a processor to execute the coordinated on/off algorithm and an Ethernet chip for communications with the cellular edge router. Though more functionality resides at the edge in this example, the cellular edge router still knows nothing about these semantics because they are encapsulated in the application aggregator.
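The aggregator's coordinated on/off logic might be sketched like this, with a dictionary of node reports standing in for the actual mesh radio messages:

```python
# Majority-vote dusk/dawn coordination as it might run on the aggregator.
# Node ids and the report format are stand-ins for real mesh radio traffic.

def majority_state(reports):
    """reports: node id -> True if that light currently detects dusk.

    Returns the command to broadcast to the whole mesh.
    """
    dusk_votes = sum(1 for dark in reports.values() if dark)
    return "on" if dusk_votes > len(reports) / 2 else "off"

reports = {"light1": True, "light2": True, "light3": False,
           "light4": True, "light5": False}
command = majority_state(reports)   # 3 of 5 detect dusk -> "on"
```

Because the vote resolves inside the mesh, only the final on/off state (if anything) ever needs to cross the cellular backhaul.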


A Missed Opportunity

Nobody seems to be targeting IoT application developers and building this cellular edge router, which seems like a missed opportunity. Fifty billion reasons seem like plenty of motivation; I wonder where the takers are.

A Solar Tale

Remember “The Long Tail”? Maybe not. Unless you were up to your eyeballs in the nuances of search engines and niche marketing around the turn of the century, you wouldn’t. The phrase originated with a Wired article by Chris Anderson, but more generally Marziah Karch describes it like this: traditionally records, books, movies, and other items were geared towards creating “hits.” Stores could only afford to carry the most popular items because they needed enough people in an area to buy their goods in order to recoup their overhead expenses. The Internet changes that. It allows people to find less popular items and subjects. It turns out that there’s profit in those “misses,” too. Amazon can sell obscure books, Netflix can rent obscure movies, and iTunes can sell obscure songs. That’s all possible because the Internet, search engines and search advertising provide easy access to these niches out on the long tail of the demand curve, allowing them to compete with the head of the curve where the big hits and brick and mortar stores reside.

What does this have to do with solar energy? Plenty as it turns out. Demand for solar has traditionally been met by large, centralized solar farms that generate many megawatts of energy per system, per day, like the big-box retail stores of yore selling blockbuster records, books and movies, the hits at the head of the solar demand curve.

These centralized solar farms are composed of rows and rows of identically mounted flat crystalline solar modules tilted at the ideal angle for the latitude. With their economies of scale they deliver the lowest installed system costs, in the $2 per watt range according to Greentech Media, if you ignore the typical transmission infrastructure additions and upgrades required to deliver this energy to market. String inverters are a key ingredient in delivering such favorable economics. Large strings of solar modules, devoid of shading and other sources of performance differences between modules, can be connected to a single, rather expensive string inverter. The number of solar modules per string inverter, and therefore the number of watts by which the cost of the string inverter gets divided, is large, rendering favorable dollars per watt.

Centralized solar farms also fit neatly into the existing utility-driven paradigm and business model. Energy is generated centrally, delivered over wide area networks of transmission and distribution lines to paying customer loads and then paid for and recouped by regulated returns over long time horizons. These are the big hits.

Like the big box retail stores with search advertising, though, this centralized utility-scale model is being disrupted. Land acquisition and permitting for new solar farms combined with the challenges of adding net new or even upgrading existing transmission and distribution lines is constraining big solar. At the same time the cost of crystalline solar modules and supporting electronics has plummeted, opening up the first wave of distributed solar, known more commonly as rooftop solar. Rooftops are smaller than the acres devoted to centralized solar farms, by a lot, so the fixed costs of a rooftop solar generating system – e.g., solar modules, inverters, mounting infrastructure – are divided by fewer watts. As a result, the dollars per watt for rooftop solar initially suffered by comparison, but continues to get rosier and rosier as these costs continue their precipitous decline, sitting just under $4 per watt according to the same Greentech Media study.

Rooftop solar is more distributed than a centralized solar farm, and more varied. A single rooftop may have several different pitches and possibly even directions these pitches face. Since economics will always drive towards maximizing the number of watts installed per rooftop, these variations become more and more common. Plus, shading plays a role. Rooftops are not pristine like single-purpose solar sites. Trees, neighbor houses, nearby foothills and the like can cause seasonal shading during times of the day, emphasizing the point that a rooftop is first and foremost, a rooftop. Fortunately for the rise of distributed solar, a Module-Level Power Electronics (MLPE) market has emerged to assuage the technical ramifications. Microinverters and power optimizers are examples of MLPEs. Each optimizes a single solar module’s output, an important innovation when adjacent solar modules may perform very differently due to shading or even their orientation relative to the sun. Mating a microinverter or power optimizer with every solar module costs more in dollars per watt, but as the distributed solar market grows and gains economies of scale for MLPE manufacturers, costs are coming down rapidly as they have with solar modules, while overall system generation across varied solar modules increases.
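A back-of-the-envelope comparison makes the inverter economics concrete. The prices below are illustrative assumptions, not market quotes:

```python
# Dollars-per-watt of inverter cost: one string inverter shared across many
# modules versus one microinverter per module. Prices are assumptions.

def dollars_per_watt(inverter_cost, modules, module_watts):
    """Inverter cost divided by the watts it serves."""
    return inverter_cost / (modules * module_watts)

# Assumed: a $2000 string inverter serving 40 x 300 W modules,
# versus a $150 microinverter mated to a single 300 W module.
string_dpw = dollars_per_watt(inverter_cost=2000, modules=40, module_watts=300)
micro_dpw = dollars_per_watt(inverter_cost=150, modules=1, module_watts=300)
# string: ~$0.17/W; micro: $0.50/W
```

The gap is the premium that MLPE economies of scale, plus the extra generation recovered from shaded or oddly oriented modules, must erase.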

Many-Facet Rooftop Solar

Rooftop solar is filling out the inflection point between the head of the solar demand curve and the tail, but it cannot fuel the long tail all on its own. As of the third quarter of 2014, nearly 600,000 home and business owners already generate their own solar electricity from rooftop systems. Unfortunately, only as many as 20% of rooftops are suitable to host solar generation. Plus socially, rooftop solar contributes to the electrical divide, the increasing cost of energy low-income families will face as part of the utility death spiral – i.e., the concept where falling barriers to distributed generation coupled with rising electric bills will cause consumers to defect from the grid, leaving a smaller population to pay for the costs of maintaining the electrical infrastructure. This smaller population is filled with low-income families, families without the means or often even the rooftops to participate in the benefits of rooftop solar.

What will fuel the long tail? What is at least as distributed and local as rooftop solar, more egalitarian and offers unlimited surface area to cover and generate solar energy? Infrastructure Solar! Imagine the ability to economically cover all shapes and sizes of existing infrastructure out in the wild with solar generation, like light and utility poles of all heights and diameters, traffic intersection poles and arms and supports, bus and rail stops, wind turbine towers, water towers, floating bridge barricades, the list goes on and on. Each system is small in terms of nameplate generation – a 75 kilowatt lighting system, a 4.5 kilowatt traffic intersection – but like the Long Tail of the Internet, the sum of all installed Infrastructure Solar kilowatts will eventually dwarf the centralized and rooftop kilowatts being installed today because, well, the tail is really, really long.

Solar Cells and Modules

Standing between today and the explosion of Infrastructure Solar are a few innovations. Traditional flat crystalline solar modules can be added to existing infrastructure such as rooftops using mounting rails and attach points that depend on the type of roof material and structure. These flat solar modules work well on rooftops with large, flat, generally south-facing surfaces. When mated with MLPEs like a microinverter, each flat solar module’s generation is optimized. Localized shading only affects the generation of the shaded module, unlike string inverters where the performance of shaded solar modules can affect the performance of other solar modules sharing the same string inverter. Or when rooftops have multiple flat surfaces with different slopes and orientations, flat crystalline solar modules with microinverters per module perform optimally as well. However, these flat crystalline solar modules are big. A typical 60-cell solar module is in the 65 by 40 inch range, and getting bigger. SunPower is now producing a 128-cell, 435 watt solar module that is a whopping 82 by 41 inches and over 20 percent efficient!

While bigger and more efficient is better for solar farms and most rooftops because the dollars per watt decrease, bigger is worse when the goal is to cover existing infrastructure. Curvature is the problem. Flat crystalline solar modules are, well, flat and rigid. They do not bend, so the bigger the flat crystalline solar module the less curvature it can effectively cover. Much less existing infrastructure can be transformed into solar energy generating devices with big, efficient crystalline solar modules.

Flexible Solar Modules

Flexible amorphous-silicon and CIGS solar modules can more easily attach to and cover existing curved infrastructure like poles and arms, but their cell efficiencies are lower than those of crystalline cells, and the orientation of bypass diodes between cells may not align optimally for the infrastructure being wrapped or the position of the sun throughout the day. When not ideally oriented, module generation performance suffers. For example, wrapping an amorphous silicon solar module designed to lay flat between spars on a metal roof around a vertically oriented cylinder like an aluminum light pole yields less than optimized generation, because the cells were not wired with this geometry in mind.

The first innovation needed to unlock Infrastructure Solar combines the best of both crystalline and flexible solar cells into an articulating solar module: a solar module designed to transform existing infrastructure into optimized solar energy generating devices by attaching to and covering it with articulating facets of crystalline solar cells. This new class of solar module is composed of two or more facets that articulate relative to one another, while each facet is composed of one or more solar cells whose size and shape are determined by the geometry of the existing infrastructure being transformed and whose orientation relative to the sun is the same. The size and shape of a facet’s crystalline solar cells need not be square or rectangular; instead they should be determined by the infrastructure being transformed and its curvature. These cells may take on the shape of all kinds of polygons such as triangles, pentagons, hexagons, octagons and the like, all to facilitate covering arbitrarily curved, already-standing infrastructure.


Second, like the optimization benefits gained from mating microinverters with today’s solar modules, MLPEs must be applied more granularly than a single 60 or 70 or 128-cell solar module. Each articulating crystalline cell, or each group of crystalline cells that articulate together (i.e., facet), must be mated with an MLPE to optimize its performance regardless of orientation relative to the sun. Generalizing this notion and extending it across years of technological advancements, the logical result is the incorporation of a direct-current, solid state, Maximum Power Point Tracking (MPPT) power optimizer directly into each facet, and then sharing a single, separate, grid-tied inverter across numerous so-equipped facets to create an articulating solar module. An Infrastructure Solar system is then constructed from as many articulating solar modules as are necessary to cover the existing infrastructure being transformed.
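A per-facet power optimizer would typically run a maximum power point tracking (MPPT) loop. Below is a toy perturb-and-observe sketch against an invented facet power curve; the curve, step size and iteration count are stand-ins for illustration, not real module physics:

```python
# Minimal perturb-and-observe MPPT loop of the kind a per-facet power
# optimizer might run. facet_power is a toy curve peaking at v = 0.6.

def facet_power(v):
    """Toy power curve (watts) with a single maximum at v = 0.6."""
    return max(0.0, v * (1.2 - v)) * 100

def mppt_perturb_observe(v=0.2, step=0.01, iterations=200):
    """Climb the power curve: keep perturbing v; reverse when power falls."""
    p = facet_power(v)
    for _ in range(iterations):
        v_new = v + step
        p_new = facet_power(v_new)
        if p_new < p:        # power fell: reverse the perturbation direction
            step = -step
        v, p = v_new, p_new
    return v, p

v, p = mppt_perturb_observe()   # settles oscillating near v = 0.6, p ~ 36 W
```

Each facet running its own loop like this is what lets differently oriented facets contribute their individual maximums to the shared grid-tied inverter.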

Power Optimization

Obviously economics plays a big part in Infrastructure Solar too. The previous two technical innovations open up the market, but the dollars per watt must also be compelling. Balance of system costs should be less for most types of Infrastructure Solar because the infrastructure already exists and the cost is already sunk. However, a new type of articulating solar module employing more granular MLPEs will drive up system cost initially. Fortunately, if we have learned anything from the solar boom these past several years it’s the fact that solid state technologies and manufacturing processes consistently outperform predictions about economies of scale, solar modules and MLPEs included.

Data Center

The final innovation that will unlock the potential of Infrastructure Solar involves big data. Microsoft and Google both have truly massive geocoded data sets along with ecosystems seeded with platform development tools and services to extend these data sets. Think about the mapping app on your mobile device and all the supporting data overlays you see when following directions, like restaurants with their menus and star ratings, gas stations with their gas prices, etc. Now what if this same machinery were used to geocode existing infrastructure like street lights, traffic signals, water towers, and so on, and then overlay these locational data with ever more detail like height and diameter of street and traffic poles, easement ownership information for the land on which these poles reside, specifics about the below-ground power available to the poles like voltage and the nearest circuit panel, and so on? This level of detail would dramatically reduce the cost of standing up the first wave of Infrastructure Solar. Infrastructure will need to be cherry-picked initially because economies of scale will not yet have kicked in, so easily and cost-effectively identifying those cherries will be crucial. Yet even after this first wave helps to drive down system costs, the data will remain invaluable as a tool to reduce balance of system costs, perpetuating the economies of scale cycle.

Eleven years ago it was The Long Tail of the Internet. Eleven years from now it may very well be The Long Tail of Solar, with every size and shape of existing infrastructure transformed into solar energy generating devices. When summed, all these small, niche solar generating systems will dwarf the kilowatt-hour output of the big solar farms just like Internet search advertising did for niche products relative to big product hits. Maybe then we will finally be able to put the 1 kilowatt of direct sunlight that hits every square meter of the Earth’s surface to good use.

Roadside Resiliency

It’s an occupational hazard I suppose, this compulsion I have to examine roadside infrastructure everywhere I go. Street lights, traffic signals and control cabinets, bus and rail stops… they all smell of underutilization, and there are a lot of them. Why not repurpose? Why not leverage that real estate the way American Tower Corporation did in the 90s, when they ingeniously bought up strategically located parcels, stood up towers and waited for cellular and broadband companies to pay them for placement? Instead of delivering connectivity, however, this roadside infrastructure could be delivering energy. What’s more, because this infrastructure resides along the low-voltage, secondary distribution end of the electricity grid – the edge, as it were – energy delivered here offers unique benefits.

Resiliency is one such benefit. Grid resiliency is a fundamental tenet of the smart grid, and one whose import swells along the eastern seaboard, which is still reeling from Superstorm Sandy. In October of 2012, Sandy delivered a wallop that caused nearly $62 billion in damage and 13 days without power, punctuating the fragility of our nation’s electrical system. The situation would have been different had roadside infrastructure been upgraded with solar generation and battery storage. Imagine an outdoor lighting microgrid, or a traffic intersection microgrid.

Wait a minute, what’s a microgrid? A microgrid is a group of interconnected loads and distributed energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid and that can operate in either grid-connected or “island” mode. Translating, that means an outdoor lighting microgrid is a lighting circuit with an Automatic Transfer Switch (ATS) at the head end interconnection point to the grid, with solar generation, battery storage and lighting loads all residing behind the switch on the same circuit. When properly sized for a particular location’s solar generation capability and then islanded, the solar plus battery microgrid can power the lighting system load indefinitely even though the broader electrical grid is down.
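To make “properly sized” concrete, here is a back-of-the-envelope sizing check in Python. Every number in it is an illustrative assumption (fixture count, wattage, sun hours, losses), not data from any real deployment:

```python
# Rough sizing check for an islanded outdoor lighting microgrid.
# All figures below are illustrative assumptions.

NUM_LUMINAIRES = 40          # LED street lights on the circuit
WATTS_PER_LUMINAIRE = 100    # draw per fixture
HOURS_ON_PER_NIGHT = 12      # dusk-to-dawn operation

PV_KW = 15.0                 # installed solar capacity
SUN_HOURS_PER_DAY = 4.5      # average equivalent full-sun hours
SYSTEM_EFFICIENCY = 0.80     # inverter, wiring and battery round-trip losses

nightly_load_kwh = NUM_LUMINAIRES * WATTS_PER_LUMINAIRE * HOURS_ON_PER_NIGHT / 1000
daily_solar_kwh = PV_KW * SUN_HOURS_PER_DAY * SYSTEM_EFFICIENCY

# The microgrid can run indefinitely while islanded only if an average
# day's usable solar harvest covers an average night's lighting load.
sustainable = daily_solar_kwh >= nightly_load_kwh

print(f"Nightly load:  {nightly_load_kwh:.1f} kWh")
print(f"Daily harvest: {daily_solar_kwh:.1f} kWh")
print("Indefinitely islandable" if sustainable else "Undersized")
```

A real design would also size the battery for consecutive cloudy days and winter sun angles, but the basic feasibility question is this simple energy balance.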

Outdoor Lighting Microgrid

In a similar fashion, a traffic intersection microgrid is a low-voltage circuit on the secondary distribution system that is fronted by an ATS and includes solar generation, battery storage and loads in the form of traffic signals, red light cameras, pedestrian crossing signals, overhead lights, etc. With the right amount of battery storage for the loads and available sunshine, this intersection can also remain functional until grid power returns. Had they been in place, outdoor lighting and traffic intersection microgrids together would have allowed vehicles and pedestrians to get around safely for all 13 days of Superstorm Sandy’s grid calamity.

Traffic Signal Microgrid

How hard is it to repurpose an already standing outdoor lighting system or traffic intersection into a microgrid? Today, it’s not as hard as you might think, though it does require some unique experience, a healthy dose of ingenuity and an ecosystem of key partners. One of the technical innovations that have emerged recently to simplify such infrastructure reuse is the grid-tied AC battery storage device. Companies like STEM and CODA Energy play in this space, solving the problem of high demand charges in places like California. Being grid-tied AC devices, these battery systems can be added to an existing AC circuit, where they source or sink energy using cloud-based predictive analytics. When reducing peak demand charges, the algorithm lowers monthly energy bills by predicting energy usage patterns and then deploying stored energy at precise times to reduce peak loads.
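The peak-shaving idea can be sketched in a few lines of Python. This is a toy greedy dispatch over an hourly load forecast, not STEM’s or CODA Energy’s actual algorithm, and all the numbers are made up:

```python
# Minimal peak-shaving dispatch sketch (illustrative only).
# Given an hourly load forecast, discharge the battery whenever load would
# exceed a demand threshold, within the battery's energy and power limits.

def peak_shave(load_kw, threshold_kw, battery_kwh, max_discharge_kw):
    """Return the grid demand profile after battery discharge."""
    shaved = []
    remaining_kwh = battery_kwh
    for load in load_kw:
        excess = max(0.0, load - threshold_kw)
        # With 1-hour steps, kW of discharge equals kWh drained from storage.
        discharge = min(excess, max_discharge_kw, remaining_kwh)
        remaining_kwh -= discharge
        shaved.append(load - discharge)
    return shaved

# Example: an afternoon peak above a 50 kW demand threshold,
# served by a 40 kWh battery that can discharge at up to 25 kW.
forecast = [30, 42, 55, 68, 64, 48]           # hourly load, kW
grid_demand = peak_shave(forecast, 50, 40, 25)
print(grid_demand)  # the three peak hours are pulled down to the threshold
```

Since the monthly demand charge is billed on the single highest interval, clipping just those few peak hours is what cuts the bill.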

When put to use in a roadside microgrid the predictive analytics would be different, but the concept is the same. Predicting energy usage patterns is easy for outdoor lighting since there is very little variance in the load characteristics across a set of luminaires. A traffic intersection may be a bit more challenging to the extent energy usage depends on traffic, but this pales in comparison to predicting a commercial energy customer’s load, so it is well within scope. A roadside microgrid, however, presents a different challenge on the energy-releasing side. The goal is no longer reducing peak demand but instead ensuring there is as much energy as possible to power the microgrid’s loads should the broader electrical grid go down.

By itself that goal is easy: continuously top off the battery storage, and anything left over can be released through the interconnection point to the broader electrical grid. Unfortunately the situation is not that simple. These distributed generation and storage assets can be used to help manage the efficiency and utilization of the distribution system, which may be at cross purposes with keeping the microgrid loads on as long as possible. Happily, predictive analytics can help. Storms like Superstorm Sandy do not materialize instantaneously; they evolve. Incorporating meteorological data is already part of the predictive analytics. When mated with rooftop solar, these AC battery storage devices predict how much sunshine, and therefore how much energy, a system will generate in the hours and days to come. This same technique can be used with roadside microgrids to prioritize storage over distribution system management in the hours and days preceding a widespread weather event.
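The storage-versus-grid-services tradeoff described above might look something like this in code. The mode names, probability thresholds and forecast inputs are all hypothetical:

```python
# Sketch of storm-aware dispatch priority (illustrative logic only).
# Ordinarily the battery helps the distribution operator; when a severe
# weather event is forecast, it switches to hoarding charge for islanding.

def dispatch_mode(storm_probability, hours_to_landfall):
    """Pick an operating mode from a (hypothetical) weather forecast feed."""
    if storm_probability >= 0.5 and hours_to_landfall <= 72:
        return "charge_and_hold"       # top off storage ahead of the event
    if storm_probability >= 0.2:
        return "conservative_support"  # limited grid services, high reserve
    return "grid_support"              # normal distribution-system services

print(dispatch_mode(0.8, 48))   # storm likely within two days
print(dispatch_mode(0.1, 120))  # clear forecast
```

A production controller would weigh forecast confidence, expected outage duration and contractual grid-service obligations, but the core idea is this mode switch driven by meteorological data.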

Because of my compulsion, when I hear and read about the “smart grid” I imagine roadside resiliency – underutilized roadside infrastructure transformed into microgrids that help distribution system operators manage their systems in their spare time but then step up to deliver safe locomotion during emergencies. Now isn’t that smart?

Ocean or Archipelago

DeLoreans are rare, especially ones with the flux capacitor option. But if we happened on a pristine example, set the date to 2044 and mashed the pedal, what would we see upon our arrival? Would the energy landscape resemble an ocean where energy consumers and producers are in constant contact, fluidly exchanging energy to capitalize on small differences in price, or would it resemble an archipelago where everyone produces their own energy locally like an island? And what’s so special about 2044?

Bell System

Well, it has been 30 years since the consent decree broke up the Bell System into the Bell Operating Companies that provided local telephone service over copper land-lines in the United States. By the end of 2012, nearly four in ten American households had no land-line telephone service whatsoever, relying 100% on wireless carriers for their real-time interpersonal communications. That is a remarkable transformation by any measure, and confirms that 30 years is plenty of time for a sweeping sea change. Plus, 30 years is well within the capability of a standard DeLorean flux capacitor.

So, ocean or archipelago? The answer may be tied up in how social obligation colors this 30-year sea change. Before diving into the how and the why, though, it’s worth spending a few words on the what.



What is an energy ocean? Arriving at an energy ocean requires dramatic change to be sure, but most of the fundamental pieces of today’s energy ecosystem remain. Transmission exists. Its role involves bi-directional interstate energy delivery with service offerings delineated by capacity and quality. A consumer of transmission services buys, say, two gigawatts of peak transmission capacity per month with five “9”s of reliability at a much higher price than if they were buying 150 megawatts of off-peak transmission capacity at three “9”s of reliability. This consumer of transmission capacity may be pulling or pushing energy through this transmission pipe.

Distribution exists as well, with a similar role in the energy ecosystem as transmission but covering smaller capacities and more regional geographies; megawatt and kilowatt capacities spanning cities and neighborhoods with similar service level agreements for reliability.

Centralized generation continues to participate in the ecosystem, utilizing transmission and distribution to deliver product into markets that lack the resources for distributed generation. Yet centralized generation lives right alongside distributed generation, competing for customers on price and quality, each winning business when and where its service and economics are more favorable.

Finally, energy consumers remain in the ecosystem, but their energy bills are decoupled. Line items appear not only for the kilowatt-hours of off-site energy consumed, but also for the distribution and/or transmission infrastructure used along the way, both when consuming energy at the service address and when generating and delivering energy off-site, beyond the meter. What’s more, a single service address may have multiple meters provided by different energy companies, measuring the different businesses in which that service address is participating. Even more fundamentally, however, all these ecosystem pieces are connected. The grid remains.



On the other hand, what is an energy archipelago? An energy archipelago looks very different from an energy ocean, even though both involve water. Connectedness is gone. Each service address is an island that generates all of its energy needs locally. No transmission or distribution or centralized generation exists and in fact, the grid is no more.

In some regions where there is a dearth of locally available energy resources like sunshine, wind or geothermal heat, there will be alternative fuels. Natural gas and hydrogen delivered via pipeline and used in high-efficiency fuel cells are examples; others will emerge to fill this gap as well.

The consumer’s conception of energy changes radically as well. Instead of an ongoing energy service measured in kilowatt-hours, energy morphs into just another appliance like a refrigerator or a furnace. It’s an expensive appliance for sure, so it may be financed when purchased new or rolled into a mortgage when buying an existing home or office, but it comes with a manufacturer’s warranty and will eventually be owned outright.

The service address becomes a dispatch location for maintenance services, just like the appliance provider down the road that services dishwashers and repairs ice makers. In fact your energy storage appliance will have a magnetic sticker on it that unabashedly promotes “Jake’s Energy Appliances” as the last provider to have serviced the appliance, which you call with your smartphone because you have no land-line. Since energy is generated and consumed locally, summertime brownouts and widespread outages caused by hurricanes become stories told to grandchildren as evidence of the much harder life endured by the story teller when they were a child. The grid morphs into thousands of microgrids, then into millions of nanogrids and then the meters disappear altogether, leaving just enough on-site energy generation and storage to meet each site’s demand.

Social Obligation

Energy ocean or archipelago: which eventuality we experience depends on the path taken, of course. The road to an energy ocean is evolutionary. A meaningful number of investor-owned utilities (IOUs) and municipal utilities make the hard decisions early regarding existing business models. Through their net metering infrastructures they begin providing pricing signals that encourage rather than punish long-term connection to the electrical grid. Distributed generation is rewarded with favorable pricing while utilities carve out revenue from other services: maintaining the reliability of the grid and delivering centralized generation into underserved markets with favorable economics. These hard yet critical decisions keep utility companies and the electrical grid relevant long term.

The battle line is drawn at the meter. If utilities are not so proactive and continue to send pricing signals that encourage investment behind the meter, they will be locked out long term. Utilities have no control behind the meter beyond meter-based pricing signals, so penalizing distributed generation and storage in order to preserve existing business models necessarily drives innovation behind the meter. Technological and financial creativity flourish. Meeting residential and commercial energy needs onsite with economical generation and storage becomes commonplace, and the grid becomes irrelevant because residential and business owners are forced to take control and hedge against the future risk of skyrocketing energy prices.


Social obligation plays into this story in at least two ways: the stranded utility customers who cannot invest behind their meters are left shouldering an unfair proportion of the utility’s profitability burden, while the billions of dollars that large financial institutions have tied up in utility bonds evaporate. Can market dynamics be allowed to dispassionately select the most efficient and economical solutions regardless of the social cost? Probably not.

There will be carnage, just like there was throughout the 30 years following the breakup of the Bell System. Many, and possibly even most, of the utilities we know today will disappear. Large financial institutions will lose hundreds of millions and possibly even billions in capital tied up in stranded utility sector investments. Energy customers who cannot invest behind their meters will suffer much higher energy costs for a time. In the end the societal cost of a complete failure on either of these fronts is too high. Help will be provided. The grid will not disappear entirely. A steady state will be re-achieved that is mostly archipelago, but a fluid, ocean-like grid will persist in areas where there are fundamental limits to 100% onsite generation and storage. This grid will be funded and managed differently: not by PUCs but by public-private ventures, which will dominate the financing and ownership structures required for these large societal investments.

So the answer to ocean or archipelago is… yes.

Home Ownership & Distributed Energy: Two Peas in a Pod

At first blush, home ownership and distributed energy seem about as similar as kale and cheeseburgers. Look a little deeper, though, and a striking similarity emerges, suggesting they may actually be two peas in a pod. That similarity is funding. Where does the money come from for mortgages or distributed energy installations? In the near future, monies for both may come from an identical mechanism.

Ever heard of Fannie Mae or Freddie Mac? If you own a home or have ever worked with or known a mortgage broker, these names may be familiar. Fannie Mae, a friendly nickname for the Federal National Mortgage Association (FNMA), was created in 1938 as part of FDR’s New Deal. The goal was home ownership, and it worked! By creating a secondary market for mortgages, funds became available for home loans – lots of home loans.

But what does that mean – secondary market? In the old days, your neighborhood bank provided passbook savings accounts. They paid you interest in return for holding your money. To pay that interest they put your money to work. By pooling your money and all your neighbors’ monies together, they could do things like make loans. As long as the interest they received on loans exceeded what they were paying on savings accounts they made money. However, they could only loan the money they collected from you and your neighbors. This was very limiting. Few people owned their own homes as a result. Enter Fannie Mae and the secondary market.

Nobody Wants To Be Second

Fannie Mae was established to provide local banks with federal money to finance home mortgages. It accomplished this by creating a secondary market: Fannie Mae buys loans from lenders, pools loans with similar characteristics, and sells the pools to investors as mortgage-backed securities (MBS). An investor buys, or invests in, an MBS because it provides a fixed return over a period of time. This process is called securitization. More importantly, this process expanded the amount of money available to lend well beyond what any neighborhood could support. Today investors all over the world invest in US mortgage-backed securities in droves. Over time the primary market, your neighborhood bank, has given way to the scale and efficiency of this secondary market.
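The mechanics of pooling can be illustrated with a toy pass-through calculation. The loan balances, rates and servicing fee below are invented for the example:

```python
# Toy illustration of pooling loans into a pass-through security.
# Balances, rates and the servicing fee are made-up numbers.

loans = [
    {"balance": 200_000, "rate": 0.045},
    {"balance": 150_000, "rate": 0.050},
    {"balance": 250_000, "rate": 0.040},
]

pool_balance = sum(loan["balance"] for loan in loans)

# Weighted-average coupon: each loan's rate weighted by its balance.
wac = sum(loan["balance"] * loan["rate"] for loan in loans) / pool_balance

SERVICING_FEE = 0.0025  # servicer's cut, deducted before investors are paid
pass_through_rate = wac - SERVICING_FEE

print(f"Pool balance:      ${pool_balance:,}")
print(f"Weighted coupon:   {wac:.4%}")
print(f"Pass-through rate: {pass_through_rate:.4%}")
```

The investor’s “fixed return” is this pass-through rate on the outstanding pool balance, while the pooling itself spreads the default risk of any single loan across the whole portfolio.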

Secondary Market

In 1970, Freddie Mac was created to perform the same function as Fannie Mae and to provide competition to what had effectively become a Fannie Mae monopoly. Like Fannie Mae, Freddie Mac is a friendly nickname, in this case for the Federal Home Loan Mortgage Corporation (FHLMC). Together Fannie Mae and Freddie Mac, both government-sponsored enterprises (GSEs), have made pervasive home ownership a reality, even for people with poor credit. But don’t get me started on the sub-prime debacle of the previous decade…

Put the “Fun” in Funding

How is this related to energy? Today if you want to put solar on your rooftop or a wind generator on your property you must pay for the system with money you have saved, or possibly a home equity loan from your local bank, after you fight with your homeowner’s association of course. It’s happening, but locally, and in ones and twos. Widespread adoption of distributed renewable energy is not occurring, even though the economics make tons of sense in many regions today. Like the limitations neighborhood banks placed on home ownership, access to funding is inhibiting widespread adoption. Can securitization of energy help?

Rooftop Solar

It can. And it’s starting. SolarCity Corporation is the poster child for the securitization of energy. A SolarCity customer signs a twenty-year agreement for energy services. In return they receive an additional energy bill from SolarCity. Nothing gets paid up front. The combination of their old utility energy bill and their new SolarCity energy bill is generally about 30 percent less than they paid before, because a meaningful percentage of their energy needs are met by the sun. The rooftop equipment delivering this solar energy is essentially leased and rolled into the customer’s monthly payment. SolarCity owns the equipment and operates it on the customer’s behalf. But where does SolarCity get the funding to pay for the equipment in the first place? Well, they have a fund. An investor buys, or invests in, the SolarCity fund because it provides a fixed return over a long period. It’s not a mortgage-backed security, but it plays one on TV. Plus, because of the solar Investment Tax Credit (ITC), the fund provides unique benefits to investors with a tax equity appetite. In turn, SolarCity uses this money to pay for the upfront cost of the solar equipment needed to deliver the energy. Returns to the investor are paid from customers’ monthly energy payments. Like a mortgage-backed security, the fund aggregates large numbers of rooftop solar installations and customer payments, improving the economics and mitigating risk the way a diversified portfolio does.
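A rough sketch of the customer-side arithmetic, with made-up numbers (actual rates and solar fractions vary by market and contract):

```python
# Toy sketch of the customer economics described above (illustrative numbers).
# A share of the customer's usage shifts from the utility to a solar lease
# priced below the utility rate, shrinking the combined monthly bill.

monthly_kwh = 1000
utility_rate = 0.20     # $/kWh from the utility (assumed)
solar_rate = 0.10       # $/kWh under the solar service agreement (assumed)
solar_fraction = 0.60   # share of usage served by the rooftop system (assumed)

old_bill = monthly_kwh * utility_rate
new_bill = (monthly_kwh * (1 - solar_fraction) * utility_rate
            + monthly_kwh * solar_fraction * solar_rate)
savings = 1 - new_bill / old_bill

print(f"Old bill: ${old_bill:.2f}")
print(f"New bill: ${new_bill:.2f} ({savings:.0%} lower)")
```

The gap between the utility rate and the solar service rate, multiplied across thousands of rooftops, is what funds both the customer’s savings and the investor’s return.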

Government Sponsored or Star Trek Enterprise?

SolarCity is a private sector company, although it recently IPO’d, so it is publicly traded as SCTY. It is not, however, a government-sponsored enterprise like Fannie or Freddie, even though it shares many characteristics of a GSE. What would happen if a GSE or two were created as part of a New New Deal, with a goal of widespread distributed energy generation via energy-backed securities and a nickname of Felix? Well then, home ownership and distributed energy would indeed be pod-mates.