Losing the Neutral Conductor

This one came across my social media timeline this morning (edited a bit):

I came home on Friday, an hour before we had a birthday party planned. There was a cable company guy who came over and asked if I was the homeowner. He did not explain the problem very clearly and became very frustrated, but in short, he saved our house from burning down.

He shut down the Internet and told us to shut down the electricity. Apparently the neutralizing wire that runs under ground was not working causing brown outs and power shortages. The smell of electrical fire was heavy in the house.

We managed to have a great party despite the problems. The output caused a shortage in the hot tub and pool. We have no refrigeration or dishwasher along with a few other things that burned out. Last night, we found a power strip that had really burned out with burn marks on the floor. As he moved it, the same electric burn smell filled the room.

Through it all God spared us big. We are still without a refrigerator but at least the stove works.

What happened was that this home lost the neutral conductor between the utility and the service entrance. Without that neutral, there’s no return path except the safety ground, which is often substandard or high impedance (~25 ohms). The result: Phase-Phase voltages (such as used for an electric stove, water heater, or electric dryer) are fine, but the Phase-Neutral voltages divide the full 240 VAC across the two legs in proportion to the load on each, so either leg can see anywhere from 0 VAC to 240 VAC.

So yes, things blow up, burn, etc., and often in a bad way (high current, but not a dead short, so not enough to trip breakers). The “power strip with burn marks on the floor” is typical: internal surge suppressors / MOVs overheat, not because of short-term transients, but because of prolonged, sustained AC overvoltage.
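The failure mode is just a voltage divider: with the neutral floating, the two 120 V legs become a series circuit across the full 240 V, and each leg’s voltage depends on the load impedance connected to it. A minimal sketch (the impedance values are hypothetical, for illustration only):

```python
def floating_neutral_voltages(z_leg1, z_leg2, v_phase_phase=240.0):
    """With a lost neutral, the two legs form a series circuit across
    the full phase-phase voltage; each leg sees a share proportional
    to its load impedance (heavier load = lower impedance = lower voltage)."""
    v1 = v_phase_phase * z_leg1 / (z_leg1 + z_leg2)
    v2 = v_phase_phase * z_leg2 / (z_leg1 + z_leg2)
    return v1, v2

# Balanced loads: both legs still see the normal 120 V.
print(floating_neutral_voltages(10.0, 10.0))   # (120.0, 120.0)

# A light load (48 ohm lamp) on one leg, a heavy load (12 ohm heater)
# on the other: the lamp side soars to 192 V, the heater side sags to 48 V.
print(floating_neutral_voltages(48.0, 12.0))   # (192.0, 48.0)
```

The lightly loaded leg gets the overvoltage, which is why small electronics and power strips tend to be the casualties.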

Oftentimes this sort of situation has some warning signs: lights dimming or brightening as appliances switch on and off, light bulbs failing prematurely. One online board reports:

When i turned the oven on, the fan went back to normal, the lights normal. The 240v load apparently balanced the system.

Sadly, a lot of electricians and utility workers are not that well versed in this sort of issue. From the same message board:

So i get on the horn with the power company.  They come out, and basically look at what i’m experiencing and the first thing the guy does is pull the meter.  Then he measures the voltages on the incoming legs.  All is equal.  Then he tells me the problem must be on the inside.  Puts the meter back in and the imbalance returns.  “yep , he says, problem is on your side”.


System Down or Ride Right Through?

We recently reviewed a set of power monitor data from an MR (Magnetic Resonance) site. The facility was plagued by severe voltage sags; we ended up with a rather copious collection of classic but nonetheless ugly event waveforms. In the course of analysis, we noticed that some sags caused system shutdown, some were ridden right through, and some perhaps caused an error or lock-up that the customer attempted to reset by powering down the system.

Reviewing equipment response to severe events in this way can help to calibrate system sensitivity when manufacturer or factory data about sag susceptibility is not available.

Example #1: System Rides Through Voltage Sag

Sag RMS No Shut Off

Despite a fairly serious sag, no sign of direct impact on the imaging system. Current levels shift during the sag event itself, but remain at about the same level before and after the sag.

Sag Waveform No Shut Off

Example #2: System Shuts Down During Voltage Sag

Sag RMS System Down

At the time of a severe voltage sag, load current drops to a lower, standby or system-off level, and remains there.

Sag Waveform System Down

Sag Waveform Customer Shut Off

Example #3: System Shuts Down During Second Voltage Sag

Sag RMS System Down Second Sag

During these sag events, the system appears to ride through the initial severe voltage sag, but shuts down during a subsequent sag 15 seconds or so later.

Sag Waveform System Down Second Sag

Sag Waveform System Down First Sag

Example #4: System Current Drops Following a Voltage Sag; Customer Shuts System Down

Sag RMS Customer Shut Off

Following a severe sag event, current drops partially. We suspect that one or more subsystems shut down, and the resulting system alarms or errors led the customer to shut down the system about 30 seconds after the sag event. Note that this drop in current is not directly related to a voltage event.

Sag Waveform Customer Shut Off
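The eyeball classification in these examples (ride-through vs. shutdown vs. partial drop) can be sketched as a before/after comparison of RMS load current around each sag timestamp. The helper and thresholds below are illustrative assumptions, not part of our actual analysis tooling:

```python
def classify_sag_response(current_rms, sag_index, window=5,
                          standby_frac=0.2, ride_frac=0.8):
    """Compare mean RMS load current before and after a sag event.

    current_rms: list of RMS current samples
    sag_index:   sample index at which the sag occurred
    Returns 'ride-through', 'shutdown', or 'partial drop'.
    The fractional thresholds are illustrative, not calibrated values.
    """
    before = current_rms[max(0, sag_index - window):sag_index]
    after = current_rms[sag_index + 1:sag_index + 1 + window]
    i_before = sum(before) / len(before)
    i_after = sum(after) / len(after)
    ratio = i_after / i_before if i_before else 0.0
    if ratio >= ride_frac:
        return "ride-through"
    if ratio <= standby_frac:
        return "shutdown"
    return "partial drop"

# Load current holds steady through the sag -> ride-through (Example #1).
print(classify_sag_response([50] * 10 + [50] * 10, 10))  # ride-through
# Current falls to a standby level and stays there -> shutdown (Example #2).
print(classify_sag_response([50] * 10 + [5] * 10, 10))   # shutdown
```

A partial drop (as in Example #4) falls between the two thresholds and is the cue to look for a subsystem trip or an operator-initiated shutdown.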

Power Quality as a Whipping Boy

Every so often I get supporting info with a power audit that reads something like this:

“Inordinate number of issues occurring with equipment as compared to other sites and systems. Everyone is in agreement the power sags, but questions if this has an adverse affect on the equipment.”

True Confession: It gives me a little bit of pleasure to find not a single facility or utility voltage sag in the resulting data. Power quality becomes, at times, the convenient excuse for equipment problems that are actually rooted in operator error, inadequate design, improper installation, inadequate servicing, environmental conditions, etc. When people confidently sling around power quality as the one known reason for problems, it’s almost always a good sign there’s something else going on.

Parkview Mains

Facility / utility voltage. We see a local outage, a small drop in voltage during emergency power system testing, and voltage flicker. But no serious voltage sags.

In this case, associated monitoring on the output of a UPS system showed perhaps the real problem – severe voltage sags associated with load switch-on. The UPS is either undersized for the applied load or in need of adjustment or maintenance. Yet another case of the “solution” being part of the problem.

Parkview UPS

UPS Output. Minor voltage drop during equipment operation, and severe voltage sag related to switch-on / inrush, points to a UPS that is undersized or in need of adjustment or maintenance.

Parkview Inrush

Load inrush current, with visible collapse of UPS output.


All Better!

Recall the case back in December when we identified a power conditioner that had apparently gone bad? When Power Conditioners Go Bad (5Dec15)

We received a follow-up power audit this week from the same site. Apparently, the power conditioner or UPS has been replaced or repaired. All better! We don’t often get to see such a full life cycle (good power → bad power → good power) but when we do, it feels good to be part of the solution!

Hannibal Snapshot

Voltage waveforms are sinusoidal and well balanced, under all load conditions.

Hannibal RMS

RMS voltages are very stable and well regulated, with small load-related fluctuations.

Hannibal THD

Voltage THD is low throughout the monitored period, under all line and load conditions.

White Paper: Power Strip Safety and Regulatory Compliance

Came across this courtesy of the folks at 24×7 Magazine and Tripp Lite Corporation.

Power Strip Safety and Regulatory Compliance:
A Comprehensive Guide to Utilizing Power Strips in Healthcare Facilities

Executive Summary
As the use of power strips in hospitals has become more widespread, their misapplication has also become increasingly prevalent. Incorrect use of power strips in healthcare facilities can result in citations, fines, or even patient injuries. This White Paper discusses the common mistakes made in using power strips in healthcare applications and introduces a methodology to promote safety and compliance. Moreover, this White Paper examines the most current codes and standards governing power strips in healthcare applications and the ways those codes and standards may impact your healthcare facility.

Download Here

Lousy Power? Or Lousy Voltage Probes?

When is a voltage problem an actual case of poor power quality, and when is it instrumentation?

We received a data set this morning that seemed, at first glance, to have spectacularly bad power.

Voltage Probe RMS

Phase A-B voltage is marginally high (500 Vrms, for a 480 Vrms source), but Phases B-C and C-A show serious issues with voltage level, sags, and waveform distortion. We rolled up our sleeves.

However, it did not take long to realize we were on a wild goose chase. Specifically, event waveshape graphs show arcing that is typically associated with a loose or intermittent connection, but no apparent impact on load current.

Voltage Probe Event 1

More damning: zooming out to the RMS level shows very significant shifts in RMS voltage on Phases B-C and C-A, with absolutely no change in RMS current.

Voltage Probe Event 2

We’re sending this one back to the client to review. We might still be able to pull out a bit of analysis based on Phase A-B only, but it will be very limited.

One of the benefits of having a live human being perform your power quality analysis: we spot these things in a way that automated report writers cannot!



Happy New Year

We’re about to dig in to power analysis of a mobile MR site.

Mobile MR Voltage

A quick review shows 27 power interruption events (as the mobile system transitions to and from the on-board generator) and probably a similar number of serious voltage sags (related to source transitions and to load switch-on while operating from the generator).

Not exactly the most fulfilling of work; it’s rather tedious to fully document each of these events. But it must be done, and the billable hours clock is running. So off we go!


When Power Conditioners Go Bad

We recently came across what appeared to be the worst power quality (as measured by voltage waveform, distortion, and regulation) we’ve ever seen (Fluke 1750 power analyzer). Power quality this poor is rather inexplicable – hard to imagine anything working well with this sort of power.

However, we’ve been analyzing power quality data for this particular client since 2003 (over 4000 individual reports) so we researched our surprisingly complete and useful database, and realized that we had reviewed this same site and same piece of equipment several years back (2009 and 2010), using an RPM 1650 power analyzer. At that time, the power quality was not only much better, it was nearly perfect. The equipment, in fact, appeared to be supplied via a UPS or power conditioner (make and model not indicated).

Since the power conditioner or UPS was dedicated to the medical imaging system, no other loads within the facility were affected. Clearly, the device feeding the equipment has failed or aged out of useful service – and needs to be bypassed (to start) and then repaired or replaced. We think you’ll agree….

Snapshot New

Voltage waveform (Phase B) in 2015 shows high levels of voltage distortion. Voltages were also highly imbalanced across three phases.

Snapshot Old

Voltage waveform (Phase B) in 2010 shows nearly perfect voltage with very low levels of distortion.

Harmonics New

Voltage harmonics (2015) show 3rd, 7th, 9th, and 13th far higher than 5% of fundamental, with total harmonics exceeding 25% THD.

Harmonics Old

Voltage harmonics (2010) were very low – with a small peak near the 33rd harmonic, typical of UPS or power conditioner switching frequency.
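For reference, THD is just the RMS sum of the individual harmonic magnitudes relative to the fundamental. A quick sketch of the standard calculation (the magnitudes below are illustrative, not the measured data):

```python
import math

def thd_percent(fundamental, harmonics):
    """Total harmonic distortion: RMS of the harmonic magnitudes
    relative to the fundamental, expressed in percent."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Illustrative numbers: a 277 V phase voltage (480 V system) with
# 3rd, 7th, 9th, and 13th harmonics each well above 5% of fundamental.
v1 = 277.0
vh = [0.15 * v1, 0.13 * v1, 0.12 * v1, 0.11 * v1]
print(round(thd_percent(v1, vh), 1))  # 25.7 -- i.e. over 25% THD
```

A handful of large low-order harmonics is all it takes to push THD past 25%, which matches how badly distorted the 2015 waveforms look.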


Voltage and current RMS logs (2015) show high voltage, many voltage swells, and poor regulation. Voltage imbalance (across all phases) was also very high.


Voltage and current RMS logs (2010) show very tight regulation, with small fluctuations related to load changes, typical of a static power converter.

Frequency: Island vs. Mainland

We were looking at some power monitoring data this week for an MRI (Magnetic Resonance Imaging) site located in Puerto Rico. The site power was generally pretty good (in terms of things like voltage regulation, harmonics, sags and swells, etc.) but we were struck by the difference in frequency regulation at this site as compared to a mainland site.

We look at a lot of mainland US power data sets (~300 per year), and frequency is almost never anything but rock stable. Frequency fluctuation is often a good indication of an alternate power source (such as a free-running UPS or back-up generator).

Island Frequency

Frequency at an MRI site in Puerto Rico (one week monitoring, 480 VAC, with good voltage regulation)

Mainland Frequency

Frequency at an MRI site in New Jersey (one week monitoring, 480 VAC, with good voltage regulation)

The frequency was technically “acceptable” at both sites – most equipment is spec’d for 60 Hz +/-1% (59.4-60.6 Hz) or +/-1 Hz (59-61 Hz), and some is spec’d much wider. Switch-mode power supplies might work fine from 47-63 Hz, for instance.

However, some older equipment does use the line frequency for timing or for regulation, might control output or dose using phase-controlled regulators, or might have some protection or control circuits that trigger off the incoming AC waveform – and these might act up a bit if the frequency varies. So frequency is a good thing to keep an eye on when working with equipment installed on an island or other smaller power grid.