Backups and Archives

We’re in the process of re-organizing our office backups and archives.

For a while now, we’ve had a 1 TB D-Link mini-server, dual parallel drives, serving as a backup device. It’s been getting full – so we decided to review and revamp. Turns out a big chunk of that drive (700 GB, at the moment) is devoted to one client: an archive of site data and reports that goes back nearly 15 years. In the early days the data sets were relatively small (by today’s standards) – 10 or 20 MB maximum. But today, we regularly see data sets that exceed 1 GB.

So, new plan: we picked up a relatively low-cost 4 TB backup drive (USB connection) and are moving all of the customer data over there. There’s no real requirement for this data to be backed up permanently; it’s more of a “nice to hang on to” archive. That way, we can free up the 1 TB drive (still nicely serviceable and redundant as an automated backup device) for everything else.

What this requires, however, is patience. The process of copying 700 GB of data from the network drive to the USB drive is taking some time (days, really); it’s slowing down my main workstation a bit, but not enough for me to set up something else to handle the chore.
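For a rough sense of why this takes days, here’s a back-of-envelope estimate (a sketch only – the throughput figures are assumptions for illustration, not measurements from our network):

```python
# Back-of-envelope estimate of the 700 GB archive copy.
# The throughput figures are assumptions for illustration, not measurements.

SIZE_GB = 700

def copy_hours(throughput_mb_s: float, size_gb: float = SIZE_GB) -> float:
    """Estimated copy time (hours) at a sustained throughput in MB/s."""
    return (size_gb * 1024) / throughput_mb_s / 3600

# A USB drive alone can sustain tens of MB/s, but pulling lots of small files
# across the office network through a workstation often runs far slower.
for rate in (40, 10, 5):  # MB/s
    hours = copy_hours(rate)
    print(f"{rate:>3} MB/s -> {hours:5.1f} hours ({hours / 24:.1f} days)")
```

At the low end of those assumed rates, the math lands squarely in “days” territory.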

Once the data gets pushed to the new drive, I’ll clean up the old backup drive, and also clear out some space on my main workstation and spend some time defragging the disks.

 

RIP: MCM Electronics

From the ARRL Website (9/21/17):

MCM Electronics, in business for 40 years, will close two plants and its corporate headquarters in Ohio and lay off more than 90 workers, the Dayton Daily News reported earlier this summer. The company, which carries an electronics inventory of more than 300,000 items, including 3-D printers, tools, wire, cable, and other items, has been a Dayton Hamvention® vendor.

The layoffs will begin at month’s end and continue through the end of the year.

You can now find the MCM Electronics catalog on the Newark.com site:

MCM Electronics and Newark element14 have partnered together for over 32 years as part of the Premier Farnell family. Now, MCM will be strengthening this partnership under the Newark name. MCM’s unique product offering combined with Newark’s vast inventory, expanded services and global reach makes us the one stop shop for engineers, installers, educators and makers.

MCM Electronics and I go back a long ways. I’m pretty sure my first purchase was a set of Spin Tite nut drivers; I’d found them super handy in the labs at my first job at Superior Electric, and soon after purchasing my first home I outfitted a workbench. Still have that small set of color-coded, English-gauged drivers. Over the years I bought oddball tools and test equipment, parts and components, and most recently audio-video equipment. I’ve got a storage locker full of A/V gack: LED par lamps, speakers, speaker stands, lighting stands, lighting truss, XLR and speaker cables. Often purchased via one of the seemingly insane 50% off sales that would periodically flood my physical mailbox. MCM Electronics seemed to be one of the last bastions of physical catalogs; I still have their last catalog on my shelf.

Why yes, I did buy four of the 12″ PA speakers at $44.99 each…

It was never the highest quality stuff, but it was light duty, serviceable, good enough for my needs. And for whatever reason, their shipping department seemed to be amazingly responsive – I’d order stuff “slow boat” and it would show up 1-2 days later; big boxes of speakers or truss or whatever.

And while stuff still seems to be available, I have no doubt that the selection will narrow, the catalogs will stop coming, and the too-good-to-be-true sale prices that often enticed me to buy will no longer be offered.

End of an era. There’s a lot of that going ’round in my world these days….

 

Engineering Templates

Popped up on Facebook this week; a friend posted some guitar innards and a commenter referenced engineering templates – and it was off to the races.

Pickett 1610I

Pickett 1610I – A personal favorite, by dint of the transformer symbols and the Delta-Wye transformer windings

Which is more terrifying:

  • I’m of an age where I actually used these for their intended purpose?
  • I still have these?
  • I knew exactly where they were? They’ve sat waiting patiently in the hanging file folder I put them in when I started consulting in ’95.

Apparently one can still purchase these – although I’m not sure how many are sold. I’ve not seen a drafting table in use (except perhaps ironically) for many years.

Truth be told, my love affair with these tools goes back much further – Dad worked in IT back when the Univac brand was on top of the industry, and weekend trips to his office meant a morning of messing around with programming templates, making punch cards, and shooting big rubber bands (used to bundle punch cards or print-outs). Every year at xmas we’d get a dot-matrix, ASCII-art Peanuts calendar – I found a pretty representative sample at Hackaday.

Vivatech Service Cart – Circa 1997

This one goes back 20 years, to the earliest days of PowerLines, prompted by an offhand recollection at a customer site this morning. I’ve always been willing to jump in with both feet, and to pick up work however it showed up, especially in the early days.

Vivatech (Teterboro, NJ) was a start-up company intent on developing tools and processes designed to prolong the life of an X-Ray tube through replacement and processing of the tube cooling oil. As background, an X-Ray tube is a fairly high-dollar expendable for a CT scanner, with a lifespan measured in “slices”. The principal of Vivatech was a doctor who owned an imaging center and, as far as I can see, was simply pissed off at having to purchase replacement tubes. So he came up with the concept of periodically replacing and/or filtering the X-Ray tube cooling oil as a way to extend tube life, and obtained a patent – US 5440608 A: Method and system for extending the service life of an x-ray tube.

If you have seen the carts that the quick lube places use to process / replace automobile coolant, you get the idea.

To do this, he fitted his CT Scanner with connectors that allowed the cooling lines to be opened, built a service cart with pumps, reservoirs, filters, and valves that inserted into the oil cooling system, and worked out a fairly complicated process intended to filter the oil, replace the oil, and then work the air bubbles out of the system. The claim: “The company has developed a tube maintenance program that it says can help tubes last for up to 300,000 slices, far more than the 75,000 slices that are the industry average.” (Diagnostic Imaging, July 1996). That’s pretty much the only online evidence that I can find that the company existed – the Vivatech name has been picked up by a variety of businesses and conferences / events in the ensuing years.

I got sucked in through the side door; I had worked with one of the principals on power quality issues at NJ area clinics, and ended up being brought in to consult in a bunch of areas:

  • Develop the controls for a production quality cart (PLC vs. relays)
  • Advanced diagnostics and user interface
  • User / Service Manual
  • Marketing support (PowerPoint, etc.)

I’d have to dig a bit for some photos, but I did find the Operator’s Manual for the Vivatech Service Cart (PDF) in my archives. Kind of a fun look back (for me, anyway). Put together in MS-Word, all the graphics are RFFlow (if I recall correctly); interesting that we were working in B/W documentation back then (plenty of places where color would have been useful / advised, thinking of the cautions and warnings in particular).

The front panel (Page 3-2) was all me – we had transitioned from a hand-wired panel using 24 VAC relay logic and incandescent bulbs to a PC Board with mounted, square LED lamps (in one of three colors: Red / Amber / Green, before Blue LEDs came along).

The other fun part was adding some user diagnostics / indicators – the prototype unit simply had lights that turned on and off. I added all sorts of “value added” indications, using the PLC to make the same sets of lights flash during operation (slow flash = normal / wait, fast flash = error / fault).
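The real logic lived in the PLC program, but the idea is simple enough to sketch out. This is purely illustrative – the timing values and state names are mine, not pulled from the Vivatech code:

```python
import time

# Illustrative sketch of the indicator scheme:
# steady on = idle, slow flash = normal / wait, fast flash = error / fault.
# Timing values and state names are hypothetical, not from the original PLC program.
FLASH_HALF_PERIOD = {"steady": None, "slow": 1.0, "fast": 0.2}  # seconds

def lamp_lit(state: str, elapsed: float) -> bool:
    """Return True if the lamp should be lit 'elapsed' seconds into a state."""
    half_period = FLASH_HALF_PERIOD[state]
    if half_period is None:
        return True                              # steady on
    return int(elapsed / half_period) % 2 == 0   # toggle each half period

# Example: show a fault indication for two seconds.
start = time.time()
while time.time() - start < 2.0:
    print("LAMP ON" if lamp_lit("fast", time.time() - start) else "LAMP off")
    time.sleep(0.1)
```

The appeal of doing it in the PLC was getting three distinct meanings out of the same physical lamps, with no extra panel hardware.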

I also recall an intricately machined manifold that all the various valves, motors, filters, and ports were mounted on, and a large plexiglass reservoir to hold the in process oil.

Some of the wild and crazy engineering things I got involved with, back in the day. I’ll see if I can dig up some photos and add those.

Edit: No dice on the photos. I did find a box of photos that go back that far, and there was a divider in the box labeled “Vivatech”, but no photos. I recall putting everything to do with that project (hard copies, manuals, PLC hardware, spare PC boards, photos, etc.) in a box, and years later donating all the tech to a local makerspace. I imagine I tossed the rest at that time. It was back when you developed a roll of film and carried the photos around, and if you needed something electronically, you scanned it (180 degrees away from today, when a hard-copy photo is the exception). I could be wrong; things sometimes turn up in odd boxes or shelves, but for the moment, I think I’m out of luck.

Sneaky Low Frequency Transients

Low frequency transients, sometimes called Utility Switching Transients or Power Factor Correction Capacitor Switching Transients, can be pretty hard to identify. Traditional power monitoring equipment has never done a particularly good job at spotting these – folks of a certain age will recall that the BMI-4800 power monitor would throw a frequency error (either 61.9 Hz or 64.0 Hz) if the transient caused an extra zero-crossing – sometimes that was the only way to detect the transient, and savvy engineers would use these frequency faults as a diagnostic tool.

Looking through a lot of Fluke 1750 data sets over the years (we’re looking at Site #4472 this week), we’ve gotten pretty good at pulling these transients out of the hundreds or thousands of transient events captured. Some detection tools:

  • Some transients do indeed trigger a voltage transient event, but need to be carefully reviewed because the reported magnitude is often that of the higher frequency leading edge.
  • Many transients are accompanied by a rise in RMS voltage, so carefully adjusting the voltage swell threshold can often help to spot these.
  • In Wye systems, many transients cause a Neutral-Ground swell event, which can often be spotted.

In a recent data set, none of these indicators worked out. We were very fortunate that the first current event captured (with a current swell threshold set to 10 Amps, a typical threshold for our reports) was a transient event – so we happened to notice it.

Low Frequency Transient

Current Triggered Event #1

Then, using the duration of the current swell event (~17 msec, much shorter than normal equipment loading) and the amplitude (15-20 Arms; normal equipment current swells were much higher), we were able to sort through hundreds of current triggered events to find nine (9) low frequency transients in the data.

Low Frequency Table
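The sorting itself is mechanical once you know the signature. A minimal sketch, assuming the current-triggered events have already been exported to a table with per-event duration and RMS amplitude columns (the file and column names here are mine, not part of the Fluke 1750 export):

```python
import pandas as pd

# Hypothetical export of current-triggered events; the file and column names
# ("duration_ms", "amplitude_arms") are assumptions for illustration.
events = pd.read_csv("site_4472_current_events.csv")

# Signature of the low frequency transients at this site: a very short swell
# (~17 msec, roughly one cycle) at a modest 15-20 Arms, versus normal equipment
# loading, which runs much longer and much higher.
suspects = events[
    (events["duration_ms"] < 25) &
    (events["amplitude_arms"].between(15, 20))
]

print(f"{len(suspects)} candidate low frequency transients "
      f"out of {len(events)} current-triggered events")
```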

Normally, we would not be too concerned about these transients, which look comparatively minor when simply viewing the voltage waveforms – no significant overvoltage, no multiple voltage zero-crossings. However, the associated current swell (70-100 A peak) indicates that something in the equipment under test is sensitive to or reacting to these transients and drawing a slug of current. So they are worth looking into….

Low Frequency Transient 2

Current Triggered Event #646

Om Street 2017: Time Lapse Video

“On July 22, 2017, over 2500 people gathered on LaSalle Road in West Hartford, CT to “get their asana in the street!” The weather was perfect and the energy was incredible, making the 7th annual OM Street a huge success. The 75 minute all-levels yoga class was led by Barbara Ruzansky of WHY (http://www.westhartfordyoga.com) and included assistants from 40 area studios and businesses. Every studio that gathered their tribe, and every individual that put a mat or a chair on LaSalle represented a community, a coming together of like-minded spirits into something like a neighborhood, like a family.”

Remarkably, I made it into the video frame this year (directly behind, to the left of the tree at the far right lower corner of the shot, blue shirt) standing at the audio table, keeping an eye on the sound. And at the end (around 2:50) I take the front speakers down…

It’s kind of amazing to be part of this each year, to be the adult in charge of the sound system, and that, seven years running, we’ve never had a significant technical issue!

Om Street 2017: Better, Faster, Stronger

We survived the 2017 edition of Om Street: Yoga on LaSalle Road, my yoga studio’s annual “yoga in the streets” event that I’ve been audio engineering since the event’s inception in 2011.

Some history via the blogosphere:

Just finished cleaning up the audio equipment from the event this morning; three bins and one large duffel bag loaded with equipment that needed to get sorted, re-wrapped as necessary, and restowed for the next gig.

It was a pretty quick and pain-free recovery this year, owing to a few factors:

  • I ended up wrapping the bulk of the long cables myself – 4 x 100′ XLR (audio) cables, 4 x 100′ extension (AC power) cords, and all of the 1/4″ and Speakon speaker cable (several hundred feet worth). There were just 2 x 100′ AC power cords that someone else had wrapped and that needed to be re-wrapped. In past years the cables were a bit of a hot mess.
  • I had a great assistant (yay, Steve!) who was dedicated to the band, so when I unpacked the storage bin from the band, cables were nicely wrapped, mic stands nicely folded, mics and headphones all bagged up neatly. He also did a yeoman’s job of setting the band up (mics, direct boxes, and monitor headphones) so I could focus on other things.

Honestly, the only bin that was a bit of a hot mess was the main audio station bin (that I packed, last) because it was kind of the catch-all for everything lying around.

It’s a huge outdoor yoga event:

Om Street 2017 Wide Shot

Om Street 2017 (Breck Macnab Photography / West Hartford Yoga)

And it’s pretty much me handling the audio all by myself. Some things that helped out this year:

  • I added a pair of low-cost wireless stick mics for emcee / stage mics this year. In previous years I used the studio’s wired mics; going wireless means a couple of fairly long XLR cables I no longer need to worry about setting up or striking. I picked them up for Q&A at some larger workshops we have, but they proved useful for Om Street as well.
  • I did some more work on the band monitor setup, adding a small mixer, buying some dedicated cables / adapters, buying a gamer headset for myself (with a small headset mic on a boom), and setting it all up ahead of time to get levels. So the band had great monitors in both ears and we had a talk-back channel, with one band mic and my headset mic going only to the monitor channel, so we could communicate during the practice.
  • I took the time to kit out the audio equipment. Typically I show up with audio stuff in bins sorted mostly by who owns the equipment (yoga studio, or me) and general function (audio cables, power, speaker cables, mic stands, etc.), and I would end up running around a lot during set-up, getting things to the right place. This year I put everything for the band in one bin (power strip, mics, stands, direct boxes, cables, monitor headphones and amplifier) and everything for the main audio station in a second bin. I also got some of those big family zip-lock bags and kitted together all the cables and adapters for the satellite PA systems (200′ and 400′ down the road) so I could unload those with the speakers, stands, and amplifier, and not have to walk back and forth so much.

This year I took the time to sketch out the audio schematic:

RFFlow - Om Street 2017

I’ll add some links to “sure to be posted” video of the event, but here are a few news articles that have already made it to press:

Tale of Two Power Systems – UPS Edition

This one nearly fooled us; we recalled the “two power systems” nature of a recent site and so when a second data set came in with somewhat similar characteristics, we thought it might be more data from the same facility. But this is a completely different site, and a completely different problem!

Looking over a power monitoring data set recently, we came across a site with a dual personality. The site in question had a marginally high total harmonic distortion (THD ~ 5.5%) from 5/15/17 up until 5/24/17 (specifically, at 3:50 pm). After that, the THD trended much higher, rising as high as 12% (with a large amount of visible high frequency noise).
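As a reminder of what the THD trend is reporting – the RMS of the harmonic content relative to the fundamental – here’s a minimal sketch, assuming you already have the individual harmonic magnitudes in hand (the example numbers are made up, not from this data set):

```python
import math

def thd_percent(magnitudes):
    """Total harmonic distortion, in percent.

    magnitudes[0] is the fundamental; the rest are the 2nd, 3rd, ... harmonics.
    Units just need to be consistent (volts, or percent of fundamental).
    """
    fundamental, *harmonics = magnitudes
    return 100 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Made-up example: a 277 V fundamental with a few low-order harmonics.
print(f"THD = {thd_percent([277.0, 4.2, 9.1, 2.0, 6.3]):.1f} %")
```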

A “before / after” look at the voltage and current waveforms provides more evidence, with visible notching on the voltage waveform “before”, and very high levels of broad-spectrum high frequency noise “after”.

Before 5/24/17 After 5/24/17

Individual harmonics similarly supported the findings of the THD log, with harmonics under 3% before 5/24/17, and very high harmonics across the spectrum after 5/24/17.

Before 5/24/17 After 5/24/17


Finally, the “before / after” effect was also seen in the Neutral-Ground voltage – with severely high NG voltages after 5/24/17, consisting mainly of high frequency components.

Before 5/24/17 After 5/24/17

The clue to understanding this puzzle is that the RMS voltage of the device under test was very stable and well regulated (probably a UPS or power conditioner output) before the 5/24/17 date, and higher / less well regulated after the 5/24/17 date.

And here is the “moment of truth” when the voltage changes from moderately distorted / notched to severely distorted with high frequency noise.

Moment of Truth

So what’s the scoop? We’re not on site, but here’s our bet: there is a UPS supplying 480Y/277 VAC power to the load, but it is itself being fed 480 VAC Delta (ungrounded and/or no neutral). During Inverter operation, the UPS works fairly well (although we bet the notching and higher THD are not normal for this device). But when switched to Bypass, the load loses the neutral reference and picks up noise from the UPS rectifier and/or inverter circuitry.
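The arithmetic behind that bet: in a wye system the 277 V line-to-neutral legs come from the 480 V line-to-line voltage divided by √3, and a delta feed has no neutral to offer, so that reference has to be synthesized downstream. A quick sanity check (ours, not from the site data):

```python
import math

V_LINE_TO_LINE = 480.0

# In a wye (star) system, line-to-neutral voltage = line-to-line / sqrt(3).
v_line_to_neutral = V_LINE_TO_LINE / math.sqrt(3)
print(f"480Y/{v_line_to_neutral:.0f} VAC")   # -> 480Y/277 VAC

# A 480 VAC delta source has the same 480 V between phases but no neutral point;
# a load expecting 277 V to a grounded neutral loses its clean reference once the
# UPS inverter (which synthesizes that neutral) drops out on Bypass.
```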

The trip report notes “No power problem suspected, multiple tube failures, want to eliminate power as an issue.” They probably assume UPS installed = no power issues (and they are probably not experiencing sags / swells / etc. on other facility loads). But if they are operating a system requiring 480Y/277 VAC from 480 VAC Delta, and relying on the UPS to provide a neutral connection point, they are probably having some serious grounding, reference, and noise issues!

Tale of Two Power Systems

Looking over a power monitoring data set recently, we came across a site with a dual personality. The site in question had low total harmonic distortion (THD ~ 1.5%) from 4/1/17 up until 4/10/17 (specifically, at 2:00 pm). After that, the THD fluctuated much higher, rising as high as 5.4% (outside of manufacturer requirements for medical imaging equipment).

Tale2Power THD
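Spotting the changeover point in a long THD trend log like this is easy enough to automate; a rough sketch (the file and column names, window sizes, and the 2-point step threshold are all our assumptions, not part of the monitoring export):

```python
import pandas as pd

# Hypothetical THD trend export: one row per logging interval.
log = pd.read_csv("thd_trend.csv", parse_dates=["timestamp"])

# Compare a short rolling median against the longer-run level just before it,
# and flag the first interval where THD steps up by more than ~2 points.
baseline = log["thd_pct"].rolling(window=96, min_periods=24).median()
recent = log["thd_pct"].rolling(window=4).median()
step = recent - baseline.shift(4)

changeover = log.loc[step > 2.0, "timestamp"].min()
print(f"THD step change first seen at: {changeover}")
```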

A closer “before / after” look at the voltage and current waveforms provides more evidence, with visible notching on the voltage waveform “after”; monitored current showed some noise but was similarly low.

Tale2Power Waveforms Before Tale2Power Waveforms After
Before 4/10/17 After 4/10/17

Individual harmonics similarly supported the findings of the THD log, with all harmonics higher and the 5th harmonic exceeding 3%.

Tale2Power Harmonics Before Tale2Power Harmonics After
Before 4/10/17 After 4/10/17

Tale2Power NG RMS
Finally, the “before / after” effect was also seen in the Neutral-Ground voltage – with noise voltages evident, although the lower frequency voltages were not much higher.

Tale2Power NG Before Tale2Power NG After
Before 4/10/17 After 4/10/17

The funny thing is that the RMS voltage and the current of the device under test were not significantly different before / after the 4/10/17 date, either in terms of RMS level or in terms of stability (sags, swells, or fluctuations).
Tale2Power RMS

So what’s the scoop? We’re not on site, but odds are good that some facility load (we’re betting air conditioning, but it could be something else) got switched on at this time. Or perhaps the facility transitioned to a different power source. But whatever the reason, this is clearly a tale of two power systems, and we’re curious about it!

UPS Overload and Bypass: CT Scanner Load

A quick consulting project came over the transom this week. A 150 KVA UPS, protecting a CT scanner, was occasionally overloading and transferring to bypass.

UPS Bypass 02

Here, the transition to Bypass is evident from the step change in voltage, from a rock-solid 480 VAC (UPS Inverter) to a very high 515 VAC (Bypass).

UPS Bypass 01

Drilling in a bit more, we see the CT Scanner switch on (point “A”) with a maximum current of 245 Amps and a resultant collapse of the UPS output, a short period where the CT current drops and the UPS output stabilizes, then a transition to Bypass (point “B”). Note the increase in voltage while operating on Bypass.

At the end of the CT scan (point “C”) the voltage rises as the load – and the associated drop across the source impedance – falls away. And the UPS stays in Bypass for an extended period (point “D”), needing to be manually reset.

UPS Bypass 03

A close-up of the “start of scan” waveform shows the nature of the inrush current (higher for just one cycle) – although the UPS voltage drops more than usual, it does not really fold or collapse.

Nothing really unusual here – some finger pointing at impedance (not really an issue; the voltage drop on the unregulated bypass was just 2.7% at full load) and voltage distortion (under 3% voltage distortion on the UPS input) – neither of which is a problem. The UPS was sized based on power monitoring, which apparently did not capture the peak load condition.

I suggested that the higher voltage on Bypass (515 VAC = 7% higher than nominal) would mean lower observed current, although that did not factor into the calculations (they were monitoring further upstream on a 208 VAC source). The UPS vendor is going to see if they can tweak the protection circuitry a bit to be able to survive and supply this short overload without a bypass transition.
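For what it’s worth, the arithmetic behind that suggestion (a back-of-envelope sketch, assuming the CT behaves roughly as a constant-power load during a scan):

```python
# Back-of-envelope: a constant-power load drawn at a higher voltage pulls less current.
# The constant-power assumption is ours; the voltage and current figures come from the data.

V_NOMINAL = 480.0          # UPS Inverter output, VAC
V_BYPASS = 515.0           # unregulated Bypass voltage, VAC
I_PEAK_AT_NOMINAL = 245.0  # maximum scan current observed, Amps

ratio = V_BYPASS / V_NOMINAL
print(f"Bypass runs {100 * (ratio - 1):.1f}% above nominal voltage")
print(f"The same scan power at 515 V works out to roughly "
      f"{I_PEAK_AT_NOMINAL / ratio:.0f} A instead of {I_PEAK_AT_NOMINAL:.0f} A")
```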