Vivatech Service Cart – Circa 1997

This one goes back 20 years, to the earliest days of PowerLines, prompted by an offhand recollection at a customer site this morning. I've always been willing to jump in with both feet, and to pick up work however it showed up, especially in the early days.

Vivatech (Teterboro, NJ) was a start-up company intent on developing tools and processes to prolong the life of an X-Ray tube through replacement and processing of the tube cooling oil. As background, an X-Ray tube is a fairly high-dollar expendable for a CT scanner, with a lifespan measured in "slices". The principal of Vivatech was a doctor who owned an imaging center and, as far as I can see, was simply pissed off at having to purchase replacement tubes. So he came up with the concept of periodically replacing and/or filtering the X-Ray tube cooling oil as a way to extend tube life, and obtained a patent: US 5440608 A, "Method and system for extending the service life of an x-ray tube".

If you have seen the carts that the quick lube places use to process / replace automobile coolant, you get the idea.

To do this, he fitted his CT scanner with connectors that allowed the cooling lines to be opened, built a service cart with pumps, reservoirs, filters, and valves that inserted into the oil cooling system, and developed a fairly complicated process for filtering the oil, replacing the oil, and then working the air bubbles out of the system. The claim: "The company has developed a tube maintenance program that it says can help tubes last for up to 300,000 slices, far more than the 75,000 slices that are the industry average." (Diagnostic Imaging, July 1996). That's pretty much the only online evidence that I can find that the company existed – the Vivatech name has been picked up by a variety of businesses and conferences / events in the ensuing years.

I got sucked in through the side door: I had worked with one of the principals on power quality issues at NJ-area clinics, and ended up being brought in to consult in a bunch of areas:

  • Controls for a production-quality cart (PLC vs. relays)
  • Advanced diagnostics and user interface
  • User / Service Manual
  • Marketing support (PowerPoint, etc.)

I'd have to dig a bit for some photos, but I did find the Operator's Manual for the Vivatech Service Cart (PDF) in my archives. Kind of a fun look back (for me, anyway). Put together in MS Word, with all the graphics done in RF-Flow (if I recall correctly); interesting that we were working in B/W documentation back then, with plenty of places where color would have been useful / advised (thinking of the cautions and warnings in particular).

The front panel (Page 3-2) was all me – we had transitioned from a hand-wired panel using 24 VAC relay logic and incandescent bulbs to a PC board with square LED lamps mounted on it (in one of three colors, Red / Amber / Green; this was before blue LEDs came along).

The other fun part was adding some user diagnostics / indicators – the prototype unit simply had lights that turned on and off. I added all sorts of "value added" indications, using the PLC to make the same sets of lights flash during operation (slow flash = normal / wait, fast flash = error / fault).
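For the curious, here's a minimal sketch of that flash-indication idea in Python, not the original ladder logic; the scan period, flash rates, and names are assumptions for illustration rather than details I actually remember.

```python
# Minimal sketch of the "value added" flash indication scheme, assuming a
# periodic scan (here 100 ms) similar to a PLC program scan. Not the original
# ladder logic; rates and names are illustrative assumptions.

from enum import Enum

class LampState(Enum):
    OFF = 0
    ON = 1           # steady on
    SLOW_FLASH = 2   # normal / wait
    FAST_FLASH = 3   # error / fault

SCAN_MS = 100  # assumed scan / tick period

def lamp_output(state: LampState, tick: int) -> bool:
    """Return True if the lamp should be lit on this scan (tick counts scans)."""
    if state is LampState.OFF:
        return False
    if state is LampState.ON:
        return True
    # Slow flash toggles every 5 scans (~1 Hz); fast flash toggles every scan (~5 Hz).
    half_period_ticks = 5 if state is LampState.SLOW_FLASH else 1
    return (tick // half_period_ticks) % 2 == 0

if __name__ == "__main__":
    for tick in range(10):
        print(tick,
              "slow:", lamp_output(LampState.SLOW_FLASH, tick),
              "fast:", lamp_output(LampState.FAST_FLASH, tick))
```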

I also recall an intricately machined manifold that all the various valves, motors, filters, and ports were mounted on, and a large plexiglass reservoir to hold the in-process oil.

Some of the wild and crazy engineering things I got involved with, back in the day. I’ll see if I can dig up some photos and add those.

Edit: No dice on the photos. I did find a box of photos that go back that far, and there was a divider in the box labelled "Vivatech", but no photos. I recall putting everything to do with that project (hard copies, manuals, PLC hardware, spare PC boards, photos, etc.) in a box, and years later donating all the tech to a local makerspace. I imagine I tossed the rest at that time. It was back when you developed a roll of film and carried the photos around, and if you needed something electronically, you scanned it (180 degrees away from today, when a hard-copy photo is the exception). I could be wrong; things sometimes turn up in odd boxes or shelves, but for the moment, I think I'm out of luck.

Om Street 2017: Time Lapse Video

“On July 22, 2017, over 2500 people gathered on LaSalle Road in West Hartford, CT to “get their asana in the street!” The weather was perfect and the energy was incredible, making the 7th annual OM Street a huge success. The 75 minute all-levels yoga class was led by Barbara Ruzansky of WHY (http://www.westhartfordyoga.com) and included assistants from 40 area studios and businesses. Every studio that gathered their tribe, and every individual that put a mat or a chair on LaSalle represented a community, a coming together of like-minded spirits into something like a neighborhood, like a family.”

Remarkably, I made it into the video frame this year (directly behind and to the left of the tree at the lower right corner of the shot, blue shirt), standing at the audio table, keeping an eye on the sound. And at the end (around 2:50) I take the front speakers down…

It’s kind of amazing to be part of this each year, to be the adult in charge of the sound system, and that, seven years running, we’ve never had a significant technical issue!

Om Street 2017: Better, Faster, Stronger

We survived the 2017 edition of Om Street: Yoga on LaSalle Road, my yoga studio's annual "yoga in the streets" event that I've been audio engineering since its inception in 2011.

Some history via the blogosphere:

Just finished cleaning up the audio equipment from the event this morning; three bins and one large duffel bag loaded with equipment that needed to get sorted, re-wrapped as necessary, and restowed for the next gig.

It was a pretty quick and pain-free recovery this year, owing to a few factors:

  • I ended up wrapping the bulk of the long cables myself – 4 x 100′ XLR (audio) cables, 4 x 100′ extension (AC power) cords, and all of the 1/4″ and speakon speaker cable (several hundred feet worth). There were just 2 x 100′ AC power cords that someone else wrapped and needed to be re-wrapped. In past years the cables were a bit of a hot mess.
  • I had a great assistant (yay, Steve!) who was dedicated to the band, so when I unpacked the storage bin from the band, cables were nicely wrapped, mic stands nicely folded, mics and headphones all bagged up neatly. He also did a yeoman's job of setting the band up (mics, direct boxes, and monitor headphones) so I could focus on other things.

Honestly, the only bin that was a bit of a hot mess was the main audio station bin (which I packed last), because it was kind of the catch-all for everything lying around.

It’s a huge outdoor yoga event:

Om Street 2017 Wide Shot

Om Street 2017 (Breck Macnab Photography / West Hartford Yoga)

And it’s pretty much me handling the audio all by myself. Some things that helped out this year:

  • I added a pair of low-cost wireless stick mics for emcee / stage mics this year. In previous years I used the studio's wired mics; going wireless means a couple of fairly long XLR cables I no longer need to worry about setting up or striking. I picked them up for Q&A at some larger workshops we have, but they proved useful for Om Street as well.
  • I did some more work on the band monitor setup, adding a small mixer, buying some dedicated cables / adapters, buying a gamer headset for myself (with a small headset mic on a boom), and setting it all up ahead of time to get levels. So the band had great monitors in both ears and we had a talk-back channel, with one band mic and my headset mic going only to the monitor channel, so we could communicate during the practice.
  • I took the time to kit out the audio equipment. Typically I show up with audio stuff in bins sorted mostly by who owns the equipment (yoga studio, or me) and general function (audio cables, power, speaker cables, mic stands, etc.), and I would end up running around a lot during set-up, getting things to the right place. This year I put everything for the band in one bin (power strip, mics, stands, direct boxes, cables, monitor headphones and amplifier) and everything for the main audio station in a second bin. I also got some of those big family zip-lock bags and kitted together all the cables and adapters for the satellite PA systems (200′ and 400′ down the road) so I could unload those with the speakers, stands, and amplifier, and not have to walk back and forth so much.

This year I took the time to sketch out the audio schematic:

RFFlow - Om Street 2017

I’ll add some links to “sure to be posted” video of the event, but here are a few news articles that have already made it to press:

Tale of Two Power Systems

Looking over a power monitoring dataset recently, we came across a site with a dual personality. The site in question had low total harmonic distortion (THD ~ 1.5%) from 4/1/17 up until 4/10/17 (specifically, at 2:00 pm). After that, the THD fluctuated much higher, rising as high as 5.4% (outside of manufacturer requirements for medical imaging equipment).
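For anyone wondering where a THD number like 1.5% or 5.4% comes from, here is a rough sketch of the usual calculation: harmonic magnitudes taken from an FFT of the sampled waveform, expressed relative to the fundamental. The synthetic 60 Hz waveform, sample rate, and harmonic content below are my own stand-ins, not data from the monitored site; the same lookup also gives the individual harmonic percentages discussed further down.

```python
# Rough sketch of a voltage THD calculation: FFT a sampled waveform and take
# harmonic magnitudes relative to the fundamental. The synthetic waveform
# (60 Hz fundamental plus a 5th harmonic) is an illustrative assumption.

import numpy as np

FS = 7680     # samples per second (assumed: 128 samples per 60 Hz cycle)
F0 = 60       # fundamental frequency, Hz
CYCLES = 12   # analyze a 200 ms window (integer number of cycles)

t = np.arange(int(FS * CYCLES / F0)) / FS
v = (120 * np.sqrt(2) * np.sin(2 * np.pi * F0 * t)
     + 4 * np.sqrt(2) * np.sin(2 * np.pi * 5 * F0 * t))   # ~3.3% 5th harmonic

spectrum = np.abs(np.fft.rfft(v)) / len(v) * 2
freqs = np.fft.rfftfreq(len(v), d=1 / FS)

def harmonic_mag(n):
    """Magnitude at the n-th harmonic of F0 (harmonics fall on exact FFT bins
    because the window is an integer number of cycles)."""
    return spectrum[np.argmin(np.abs(freqs - n * F0))]

fund = harmonic_mag(1)
harmonics = np.array([harmonic_mag(n) for n in range(2, 51)])
thd = np.sqrt(np.sum(harmonics ** 2)) / fund

print(f"5th harmonic: {harmonic_mag(5) / fund:.1%} of fundamental")
print(f"THD (up to the 50th harmonic): {thd:.1%}")
```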

Tale2Power THD

A closer “before / after” look at the voltage and current waveforms provides more evidence, with visible notching on the voltage waveform “after”; monitored current showed some noise but was similarly low.

Tale2Power Waveforms Before Tale2Power Waveforms After
Before 4/10/17 After 4/10/17

Individual harmonics similarly supported the findings of the THD log, with all harmonics higher and the 5th harmonic exceeding 3%.

Tale2Power Harmonics Before Tale2Power Harmonics After
Before 4/10/17 After 4/10/17

Tale2Power NG RMS
Finally, the "before / after" effect was also seen in the Neutral-Ground voltage, with noise voltages evident although the lower frequency voltages were not much higher.

Tale2Power NG Before Tale2Power NG After
Before 4/10/17 After 4/10/17

The funny thing is that the RMS voltage and current of the device under test were not significantly different before and after the 4/10/17 date, either in terms of RMS level or in terms of stability (sags, swells, or fluctuations).
Tale2Power RMS

So what's the scoop? We're not on site, but odds are good that some facility load (we're betting air conditioning, but it could be something else) got switched on at this time. Or perhaps the facility transitioned to an alternative power source. But whatever the reason, this is clearly a tale of two power systems, and we're curious about it!

Remembering Joe Briere

I found out this morning that Joe Briere of Computer Power Northeast passed away back in March – obituary.

Joe and I go back a long way. When I worked at Philips Medical in Shelton (pre-1995) he stopped in a few times to see if there was any business there; he was plying his trade as a power quality consultant in the medical imaging field, doing a lot of work with Siemens Medical, so it was natural that our paths would cross. He was one of those "I'll help you with your power quality issues and maybe sell a transformer, voltage regulator, power conditioner, Uninterruptible Power Supply, or Surge Protective Device along the way" kind of consultants, who inevitably made a lot more money than me (I chose not to sell or rep products, just provide technical services). He was always a straight shooter; I never found him to oversell or over-promise, and he was a hands-on guy who knew his way around grounding, the electrical code, isolation transformers, etc. We did not agree on everything, but I always respected his opinions and experience.

For a few years there (2001 – 2003) Joe and I would often find ourselves meeting at Siemens trouble sites across the country as the service organization tried to get a handle on power and grounding issues, so I got to know him pretty well, poking around hospital electrical systems and sharing a beer and a meal afterwards.

Have not been in contact for many years (my last contact with Joe was 2003, and with Computer Power Northeast was 2009) but I’d pop over to the website now and then to see if they were still in business. Joe was probably retirement age when I started working closely with him, and reportedly kept busy well into his 80’s. His business partner called for a little consulting project this morning and shared the news.

Was a long time ago in a galaxy far, far away. Power Quality Consulting was kind of the wild west back then and the folks who knew what they were doing, were not afraid to open an electrical panel and make some measurements, and could sort out technical issues that left others scratching their heads were a rare breed. RIP, old friend.

Ground Resistance Testing

I just sent a client a document from a seminar that I created and led in 1996. (The seminar client is long out of business).

It's nice to be (a) the old dog who was around back in the day, and (b) a bit of a digital pack rat. Also interesting that the technical issues of 2017 are not so different from the technical issues of 1996.

Here’s a snapshot of that document (pretty slick for 1996, no?) – and no guarantees that the IEC / UL references or requirements are still valid. But the concept – that measuring ground resistance with a low current ohmmeter is going to give you sketchy results – remains valid.

Ground Resistance Testing
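As a back-of-the-envelope illustration of that low-current point (mine, not from the 1996 document): any stray voltage sitting on the grounding system adds an error term of roughly V_stray / I_test to a simple V/I reading, and that term swamps the true resistance when the test current is tiny. The resistance and stray-voltage values below are assumed for illustration.

```python
# Back-of-the-envelope illustration (not from the 1996 seminar document) of why
# a low-current ohmmeter gives sketchy ground resistance readings: a stray
# voltage in the measurement path adds V_stray / I_test to the apparent value.

def apparent_resistance(true_ohms, stray_volts, test_amps):
    """Reading of a simple V/I measurement with a stray voltage in series."""
    return true_ohms + stray_volts / test_amps

TRUE_R = 2.0    # ohms, assumed true ground resistance
STRAY_V = 0.5   # volts of assumed stray / noise voltage on the grounding system

for i_test in (0.001, 0.1, 10.0):   # 1 mA ohmmeter vs. higher test currents
    reading = apparent_resistance(TRUE_R, STRAY_V, i_test)
    print(f"{i_test:>6} A test current -> reads {reading:.2f} ohms")
```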

Reading the Tea Leaves

Sometimes when reviewing power monitoring data, key information is left out of the problem statement. But the astute power quality engineer can “read the tea leaves” and pick up information about the installation, equipment, and technical issues.

A set of data from a Magnetic Resonance Imaging system was presented for analysis with the following notes:

System has had several intermittent issues that have caused system to be down and functional upon arrival. System issues have been in RF section. No issues have been reported since installation of power recorder.

RMS Voltage and Current

Chiller RMS Logs

First clues come from looking at the RMS logs of the voltage and current. The voltage is suspiciously well regulated – the source is probably a UPS or power conditioner rather than a normal utility feed (which will tend to fluctuate over a 24 hour period). A second clue is the small voltage increase or swell related to load switch-off – typical of an active source, not typical of a passive source.
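For what it's worth, the "suspiciously well regulated" judgment boils down to a simple check on the spread of the RMS voltage log over the monitoring period. The 0.5% threshold, the 480 V nominal, and the sample values in this sketch are my assumptions, not numbers from the recorder:

```python
# Sketch of the "suspiciously well regulated" check: a utility feed usually
# drifts a percent or more over a day, while a UPS / power conditioner output
# holds very tight. Threshold and sample data are illustrative assumptions.

def looks_regulated(rms_log, nominal, tight_pct=0.5):
    """Return (is_regulated, spread_percent) for an RMS voltage log."""
    spread = (max(rms_log) - min(rms_log)) / nominal * 100
    return spread < tight_pct, spread

utility_like = [478.1, 481.3, 476.2, 483.0, 479.5]   # drifts over the day
ups_like     = [480.1, 480.0, 480.2, 479.9, 480.1]   # barely moves

for name, log in [("utility-like", utility_like), ("UPS-like", ups_like)]:
    regulated, spread = looks_regulated(log, nominal=480)
    print(f"{name}: spread {spread:.2f}% -> regulated source? {regulated}")
```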

Second, this appears to be a regularly cycling load – a pump or compressor. MRI systems typically have a chiller or cryogen cooler associated with them – so odds are good this was monitored on this load, and not on the MRI system itself.

Chiller or Cryogen Cooler Load

Chiller Cycling RMS
More evidence supporting the chiller or cryogen cooler theory: a regular (practically clock-work) cycling load, with a modest operating current (~30 Amps) but a very high inrush current (~180 Amps).

Chiller Transient

Normal chiller or cryogen cooler inrush is seen here. A minor (~5%) voltage sag was captured during each inrush current, as well as minor associated transients (probably relay or contactor switch bounce).

Abnormally High Inrush Currents

Chiller Sag

In addition to the regular inrush currents associated with chiller cycling, six instances of very high inrush current were captured. These were seen both as voltage sag events and as current-triggered events. We're concerned that this high inrush current may be causing an overcurrent condition on the UPS / power conditioner, which may be throwing a fault or error, or perhaps switching to Bypass.
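A sketch of the kind of screening used to pick those events out of the RMS current log follows. The ~180 Amp normal start-up peak comes from the data above; the 1.5x margin and the sample peak values are assumptions for illustration:

```python
# Sketch of flagging "abnormally high" inrush events in a log of per-start
# peak currents: anything well above the normal start-up peak. The margin and
# the sample values are illustrative assumptions.

NORMAL_INRUSH_AMPS = 180   # normal chiller start-up peak (from the data above)

def flag_high_inrush(start_peaks, margin=1.5):
    """Return the start-up peaks that exceed the normal inrush by the margin."""
    return [p for p in start_peaks if p > NORMAL_INRUSH_AMPS * margin]

# Hypothetical log of peak current (Amps) at each chiller start
start_peaks = [178, 182, 176, 410, 181, 395, 179]
print("abnormal inrush events:", flag_high_inrush(start_peaks))
```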

Chiller Swell RMS

Looking at the RMS logs of the high current swell / voltage sag event, we see that it precedes a period of extended chiller / compressor operation. Unknown if this is normal operation for the system / device or indicates an error or fault of some sort.

Summary

Although the accompanying technical information was thin, we've "read the tea leaves" and provided the following analysis bullets:

  • System appears to be powered by a UPS or power conditioner. Service personnel may not have known this.
  • System appears to be a chiller, compressor, or similar device (not the medical imaging system itself)
  • Occasional high current swells were seen; these may be normal or may point to system issues.
  • Voltage sags and collapse during these high current swells may indicate that the UPS or power conditioner is overloaded, and experiencing faults or alarms that could be impacting system operation / uptime.

The Curious Case of the UPS Loading

We recently got to review input and output monitoring data from a UPS system (make and model not specified) feeding a medical imaging system. The monitoring was done as a precaution, but we noticed something unusual.

First, take a look at the RMS voltage and current logging of the UPS input and output. Phase A voltage, Phase B current shown for clarity, but all voltage and current phases are balanced and similar.

UPS Compare Input RMS

UPS Input – Normal facility RMS voltage (daily fluctuations, with occasional sags) and RMS current peaks at approximately 80 Amps.

UPS Compare Output RMS

UPS Output – Highly regulated RMS voltage (with small load related fluctuations) and RMS current peaks at approximately 170 Amps.

The discrepancy between the input current and output current is unusual. It would be typical for input current to be marginally higher than output current (due to device efficiencies) but not lower. Our guess – the UPS DC bus (and probably, the battery string) is being called on to support the peak output load.
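Some rough arithmetic behind that guess: if the input and output sit at roughly the same nominal voltage, an 80 Amp input draw can only support about 80 Amps' worth of output, so the rest of the 170 Amp peak has to come from stored energy. The 80 A and 170 A peaks are from the monitoring data above; the 480 V three-phase nominal and 95% efficiency are assumptions of mine.

```python
# Rough power-balance sketch behind "the DC bus / battery is carrying the peak".
# Input/output current peaks are from the monitoring data; the 480 V nominal,
# three-phase assumption and 95% efficiency are illustrative assumptions.

import math

V_NOMINAL = 480      # volts line-to-line (assumed)
EFFICIENCY = 0.95    # assumed conversion efficiency

def three_phase_kva(volts_ll, amps):
    return math.sqrt(3) * volts_ll * amps / 1000

p_from_input = three_phase_kva(V_NOMINAL, 80) * EFFICIENCY   # best case via the rectifier/inverter
p_out_peak   = three_phase_kva(V_NOMINAL, 170)

print(f"Output supportable from input draw: ~{p_from_input:.0f} kVA")
print(f"Peak output demand:                 ~{p_out_peak:.0f} kVA")
print(f"Shortfall likely from DC bus / battery: ~{p_out_peak - p_from_input:.0f} kVA")
```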

UPS Compare Output Highest Load

UPS Output – Step change in load current and nonlinear load is typical of medical imaging system. Very small fluctuation in output voltage related to load changes, and small increase in voltage distortion related to nonlinear load current.

UPS Compare Input Highest Load

UPS Input – Even at the highest levels, the current is linear; the UPS must have a unity power factor front end / rectifier. However, the lower current level is unusual, and indicates that the UPS battery is probably being called on to supply the peak medical imaging load.

There’s really no immediate problem here – the UPS is doing a great job of correcting input power issues, as well as supplying the complex loads (step change, pulsing currents, nonlinear power factor) of the medical imaging system.

However, it's pretty clear that the UPS batteries are getting discharged during the highest-current imaging system operations – not really their intended purpose, which is to ride through far less frequent utility sags and outages. So it's possible that the UPS batteries are being stressed and may degrade or fail prematurely, needing replacement. We've referred this to the UPS manufacturer / supplier for attention.

As a quick "in the field" test (we're doing this analysis remotely, not on site) we might suggest disconnecting the battery string temporarily and seeing how the UPS performs without the battery, relying only on the DC bus. We're guessing the UPS might start to collapse or struggle to supply the medical imaging load – it may be undersized for the application without the battery string connected.

We've seen situations where a UPS that has worked well for years stops working quite so well, because the batteries have started to wear out and the unit is no longer able to supply the peak loads required by the imaging system.

The Guru’s Cat

When the guru sat down to worship each evening, the ashram cat would get in the way and distract the worshipers. So he ordered that the cat be tied during evening worship.

After the guru died the cat continued to be tied during evening worship. And when the cat died, another cat was brought to the ashram so that it could be duly tied during evening worship.

Centuries later learned treatises were written by the guru’s disciples on the religious and liturgical significance of tying up a cat while worship is performed.

– Anthony De Mello, The Song of the Bird

A music festival where I’ve been a volunteer for nearly 25 years is doing their annual mid-winter pre-fest sales – selling a limited number of tickets at a reduced price. It’s a good way to carry the festival organizers over the winter, and to give regular festies a price break.

What’s NOT so good is how they do it – phone only, with a limited staff processing orders manually over a three day period. I’ve seen some posts on social media:

“After 111 attempts to get through, over a span of twelve minutes, tickets have been procured!”

Mine took longer than usual — 179 calls and 48 minutes (you re-dial faster than I do) . . .

and from one of the folks on the other end of the line:

FYI y’all S & A are working fingers off to accommodate youralls calls for tix & are very grateful for your wonderful patience

See, the thing is, this could be done online through eCommerce – put a limited number of tickets up for sale (so you do not oversell) and for a limited time. Yeah, there's a service fee (but probably not all that much higher than the credit card fees), and I suspect most customers would pony up an additional $5 or $10 per ticket to cover an eCommerce solution (and not have to dial in 100+ times). You could sell out your winter pre-fest stock without having to tie up customers, your staff, etc.

But… it's been done this way for 25+ years and will probably always be done this way. Like the Guru's Cat, sometimes we do things out of habit or tradition or inertia without stepping back and considering other options.