2018 Fiber Optic Year's Update
2018 was another busy year for fiber. On this page we've gathered some of the more important stories of the year, covering topics that FOA believes every tech needs to know. Many of these articles are from the FOA monthly newsletter, which you can subscribe to. We also recommend the FOA "Fiber FAQs" page, with tech questions from customers originally printed in the FOA Newsletter. We had lots of interesting questions in 2018.
This page will become part of a Fiber U Tech Update Course.
They ALL Got It All Wrong - And They Confuse A Lot Of People
We recently got this email from a student with field experience taking a fiber optic class: "The instructors are telling us that we are stripping the cladding from the core when prepping to cleave MM and SM fiber. I learned from Lenny Lightwave years ago that this is not correct. I do not want to embarrass them, but I don't want my fellow techs to look foolish when we graduate from this course."
We'll share our answer to this student in a moment, but first it seems important to understand where this misinformation comes from. We did an image search on the Internet for drawings of optical fiber. Here is what we found:
Every fiber drawing we found in the Internet search, with one exception (which we will show in a moment), showed the same thing - the core of the fiber sticking out of the cladding and the cladding sticking out of the primary buffer coating. Those drawings are not all from websites where you might expect some technical inaccuracies; several were from fiber or other fiber optic component manufacturers, and one was from a company specializing in highly technical fiber research equipment.
The only drawing we found that does not show the core separate
from the cladding was -
really! - on the FOA
Guide page on optical fiber.
No wonder everyone is confused. Practically every drawing shows the core and cladding being separate elements in an optical fiber.
So how did FOA help this student explain the facts to his
instructors? We thought about talking about how fiber is
manufactured by drawing fiber from a solid glass preform with
the same index profile as the final fiber. But we figured a simpler way to show that the fiber core and cladding are one solid piece of glass was to look at a completed connector or a fusion splice.
We started with a video microscope view of the end of a connector being inspected for cleaning.
Here you can see the fiber in the ceramic ferrule. The hole in the ferrule is ~125 microns in diameter (usually a micron or two bigger so the fiber fits into the ferrule easily with some adhesive). The illuminated core shows how the cladding traps light in the core but carries little or no light itself. This does not look like the cladding was stripped, does it?
Here is the same view with a singlemode fiber at higher magnification.
And no connector ferrules have 50, 62.5 or 9 micron holes so that just the core would fit in the ferrule, do they?
What about stripping fiber for fusion splicing? Here is the view of fiber in an EasySplicer ready to splice.
What do you see in the EasySplicer screen? Isn't that the core
in the middle and the cladding around it? In fact, isn't this a
"cladding alignment" splicer?
We rest our case. If that's not sufficient to convince everyone
that you do not strip the cladding when preparing fiber for
termination or splicing, we're not sure what is.
Special Request: To everyone in the fiber optic industry
who has a website with a drawing on it that shows the core
of optical fiber separate from the cladding, can you please
change the drawing or at the very least add a few words to
tell readers that in glass optical fiber the core and
cladding are all part of one strand of glass and when you
strip fiber, you strip the primary buffer coating down to
the 125 micron OD of the cladding?
Manufacturers Beginning To Realize That It's Time To Go Singlemode
Years ago, when I (JH) was being introduced to optical fiber technology by the scientists at Bell Labs in Murray Hill, NJ, who were developing it, they gave me a glimpse of the future technologies. At the time, the installed fiber links were based on 850nm Fabry-Perot lasers (VCSEL technology was 20 years in the future) and multimode fiber (62.5/125 micron was their standard then, later replaced with higher bandwidth 50/125 fiber for the early long distance links).
But the future would be dominated by singlemode fiber, they
assured me. These scientists, many of whom had worked on the
Bell Labs projects associated with millimeter wave transmission
in the 1960s, explained that multimode fiber had the same
problem as mm RF waveguides - noise and bandwidth limits caused
by multimoding. To realize the potential of optical fiber, it
was necessary to move transmission to singlemode fiber.
The transfer from multimode to singlemode fiber was not simple.
At first, there were attempts to make 850nm singlemode fiber.
The problem was the core needed to be ~4-5 microns diameter, a
problem since the technology to make optical fiber with precise
geometry did not exist then, nor was there technology to make
connectors precisely enough. Some of that fiber was made, and is still being made, but its use is mainly for sensors like fiber gyros, and in the 1990s for measuring mode fill in multimode fiber using the CPR (coupled power ratio) technique.
Furthermore, moving the wavelength of transmission to longer
wavelengths in the infrared allowed larger core sizes (~8-10
microns) where the loss of the fiber is much lower as well as
easier manufacture of the components.
All the necessary technologies came together in the early 1980s.
Lasers were developed and mass produced at 1310nm (built on
assembly and test equipment supplied to AT&T by my company
Fotec.) Singlemode fiber was successfully engineered and
manufactured by AT&T, Corning and others. The final
breakthrough came from Japan, where NTT and Kyocera developed
the ceramic ferrule connector that had the precision to mate
singlemode fiber reliably. Other techniques like fusion splicing
worked just fine with the new fiber.
We can probably say the turnaround to SM fiber started at one
meeting, the KMI Newport Conference in the Fall of 1984. The
technical lead of MCI presented a paper that stated MCI was
abandoning digital microwave transmission, their transmission
system of choice at that point, for singlemode fiber. After
their talk, attendees, representing practically every component and system manufacturer plus many users, ran for the pay phones
(remember pay phones? this was before cell phones) in the hotel
lobby to call their offices with the news.
From that point, the industry never looked back - telecom was
all based on singlemode. Speeds increased, wavelength division
multiplexing (WDM) increased fiber utilization. Fibers got
installed under the seas and inside power lines. Now everything,
including wireless, depends on those fibers.
But the data people who were just getting started connecting PCs
on LANs, stayed with multimode. They could use cheaper LED
sources with the multimode fiber for their slower systems
(10Mb/s) over shorter links (<2km.) Longer links than about
500m used 1300nm LEDs rather than the 850nm LEDs because of the
lower attenuation of the fiber (~1dB/km @ 1300 vs ~3dB/km @
850nm.) When LANs got to 100Mb/s, fiber was re-engineered for higher bandwidth at 1300nm to get the longer link lengths needed.
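The attenuation figures above translate directly into a link power budget; here is a minimal sketch of the arithmetic, using only the approximate dB/km values quoted in the text and ignoring connector and splice losses:

```python
# Fiber-only link loss at the two LED wavelengths discussed above,
# using the approximate attenuation figures from the text:
# ~3 dB/km @ 850nm vs ~1 dB/km @ 1300nm for multimode fiber.
def fiber_loss_db(length_km, atten_db_per_km):
    """Loss contributed by the fiber itself; connectors/splices not included."""
    return length_km * atten_db_per_km

for length_km in (0.5, 2.0):
    loss_850 = fiber_loss_db(length_km, 3.0)
    loss_1300 = fiber_loss_db(length_km, 1.0)
    print(f"{length_km} km: ~{loss_850:.1f} dB @ 850nm vs ~{loss_1300:.1f} dB @ 1300nm")
```

Over a 2 km link, the fiber alone costs ~6 dB at 850nm but only ~2 dB at 1300nm, which is why longer LED links moved to 1300nm.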
That was a stable technology for the entire 1990s. But when
gigabit Ethernet was introduced, LED sources were no longer
usable; they run out of bandwidth at a few hundred Mb/s.
Fortunately inexpensive VCSEL lasers were developed that had the
ability to be modulated at gigabit speeds, and have been
upgraded for 10 and 25 Gb/s over the years. But VCSEL technology
only works at 850nm, limiting its use to multimode fiber. And
the introduction of VCSELs led to a return to the older fiber
with a 50/125 micron core that had been optimized for lasers when it was used for the original telecom systems in the early 1980s.
VCSELs and multimode fiber have had a long run - 20 years now.
Speeds have increased from 1 to 10 gigabits/second. Fiber has
been engineered to have higher bandwidth, from OM2 to OM3 and
OM4, and even to allow short wavelength division multiplexing
(SWDM) with VCSELs in the range of 850-950nm.
But that run may be ending. We may see some use of MM fiber at 25 Gb/s, but that's just a small step up from 10 Gb/s. Getting to the next big step, 100 Gb/s, requires making a difficult choice: use parallel transmission over 8 or 20 multimode fibers, use the new VCSEL WDM over a pair of multimode fibers, or use well-developed SM WDM over a pair of singlemode fibers. The masses of multimode fiber needed for parallel transmission make for a complicated solution and generally require using prefab cable assemblies with array connectors. The transceiver industry has shown limited interest in SWDM, and OM5 fiber is expensive. The SM WDM solution has been the choice of data centers who are using 100G and working their way toward 200, 400G and 1T - one terabit/s.
While high speeds have definitely migrated to singlemode, it's
also become a viable choice for LANs. The passive optical
network technology developed for fiber to the home (FTTH) has been successfully adapted to LANs and has become the choice of many large LANs over the last 5 years. Adopting a passive
optical LAN (OLAN) requires abandoning the 25+ year old
structured cabling model, a tough choice for those who grew up
in the "Cat 5" era. But a passive OLAN uses much less fiber,
simplified electronics and typically costs half as much to build
and maybe a quarter as much to operate. It's a no-brainer for
large LANs as early adopters like government facilities
discovered first and is now being implemented in hotels,
hospitals, campuses, etc. While it's good for LANs, it's also
compatible with wireless, both WiFi and cellular for indoor use.
Fiber techs who have worked with multimode in premises cabling
often claim singlemode is much harder to install. Maybe that was
true a decade ago, but the technology developed for outside
plant and FTTH, especially multi-dwelling units and data centers, has changed all that. Bend-insensitive fiber allows the cables to be made much smaller (microcables) and the same for ducts. Microducts allow fiber to be "blown in" more quickly. Splice-on connectors (SOCs) and low cost fusion splicers solve the termination problem. This will require techs to invest in some new gear and get some training, but the payback is there. And
finally the last complaint about electronics cost is going away;
transceivers are getting cheaper because of the higher volume,
especially in data centers.
We've been saying this for decades, but now we're seeing others
ask the same question. The impetus for this article was the
announcement of two webinars - one by premises cabling stalwarts
Leviton and Fluke Networks called "The
Road To Singlemode" and one by the TIA FOTC called "Implementing
next-gen PON technologies over existing fiber infrastructure."
Maybe it's time. But don't hold your breath. Like Cat 5, multimode will not "go gentle into that good night" (apologies to Dylan Thomas).
How about structured cabling? Last month we discussed a new
version of Ethernet over a new single-pair UTP cable. Recently, Belden, a cable manufacturer, posted an interesting article, "Ethernet Standards: Is Structured Cabling Dead?" It
includes the statement "With the advancement of networking
protocols and applications, and the growth and evolution of IoT,
we are seeing the end of the structured cabling world we’ve
known so well for the past 30 years."
See also the article below on
fiber types in data centers.
ED: In the article below on faster Ethernet standards, we've highlighted the fiber types to emphasize the dominance of SM fiber. Note that 6 of the 7 use singlemode fiber and 4 of the 7 use WDM over 2 fiber links.
In December, the Ethernet committee approved a new standard, IEEE Std 802.3bs-2017: 200 Gb/s and 400 Gb/s Ethernet, with seven new physical interfaces:
200GBASE-DR4: 200 Gb/s transmission over four lanes (8 fibers
total) of singlemode optical fiber cabling with reach up
to at least 500 m
200GBASE-FR4: 200 Gb/s transmission over a 4 wavelength division
multiplexed (WDM) lane (i.e. 2 fibers total) of singlemode
optical fiber cabling with reach up to at least 2 km
200GBASE-LR4: 200 Gb/s transmission over a 4 wavelength division
multiplexed (WDM) lane (i.e. 2 fibers total) of singlemode
optical fiber cabling with reach up to at least 10 km
400GBASE-SR16: 400 Gb/s transmission over sixteen lanes
(i.e. 32 fibers total) of multimode optical fiber cabling
with reach up to at least 100 m
400GBASE-DR4: 400 Gb/s transmission over four lanes (i.e. 8
fibers total) of singlemode optical fiber cabling with
reach up to at least 500 m
400GBASE-FR8: 400 Gb/s transmission over an 8 wavelength division multiplexed (WDM) lane (i.e. 2 fibers total) of singlemode optical fiber cabling with reach up to at least 2 km
400GBASE-LR8: 400 Gb/s transmission over an 8 wavelength division multiplexed (WDM) lane (i.e. 2 fibers total) of singlemode optical fiber cabling with reach up to at least 10 km
An MPO-16 plug and receptacle is required to support the
32-fiber 400GBASE-SR16 multimode application. The MPO-16
plug is designed with an offset key to prevent accidental mating
with a standard MPO/MTP receptacle. All 2-fiber applications may be supported with a 2‑fiber LC singlemode interface and all 8-fiber applications may be supported with standard MPO/MTP connectors.
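The variant list above can be summarized in a few lines of code; this sketch just tabulates the names, fiber counts and reaches from the list (reach figures per IEEE 802.3bs) and confirms the singlemode/WDM counts noted in the editor's comment:

```python
# IEEE 802.3bs PHY variants from the list above:
# (name, fiber type, total fibers, minimum reach in meters)
PHYS = [
    ("200GBASE-DR4",  "singlemode",  8,   500),
    ("200GBASE-FR4",  "singlemode",  2,  2000),
    ("200GBASE-LR4",  "singlemode",  2, 10000),
    ("400GBASE-SR16", "multimode",  32,   100),
    ("400GBASE-DR4",  "singlemode",  8,   500),
    ("400GBASE-FR8",  "singlemode",  2,  2000),
    ("400GBASE-LR8",  "singlemode",  2, 10000),
]

singlemode = [name for name, fiber, count, reach in PHYS if fiber == "singlemode"]
two_fiber_wdm = [name for name, fiber, count, reach in PHYS if count == 2]
print(f"{len(singlemode)} of {len(PHYS)} variants use singlemode fiber")
print(f"{len(two_fiber_wdm)} of {len(PHYS)} variants use WDM over 2 fibers")
```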
This graphic shows how many "outlaw" Ethernet versions there are: 100G SWDM4 is for VCSEL WDM on OM5 fiber. 100GBase-ZR is a tech marvel using special modulation and coherent technology like long haul telecom. As you can see from the list above, the 400G standards are now approved along with some 200G not listed here. Remember our quote (above) from Bob Metcalfe, co-inventor of Ethernet: "The wonderful thing about standards is we have so many to choose from."
What Does Bend-Insensitive Fiber Look Like?
While researching the answers to the question above, we talked to Phil Irwin at Panduit. He mentioned that you could see the structure of BI fiber and sent along this photo:
At the left, you can see the gray area surrounding the core, shown in the drawing at the right as the yellow depressed-index trench.
If you want to try to see it yourself, it's not easy. Phil tells
us that OFS fiber is the easiest to see, Corning a bit more
difficult. You need a good video microscope. You may need to
vary the lighting and illuminate the core with low level light.
If you try it and it works for you, send us your results.
How "Fast" Is Fiber?
One of the FOA instructors sent us this question: "I work at Washington Univ with an engineer who works for an electrical utility. He asked a question about the speed of signal transmission over fiber optics, single mode, at the top of towers. They need the signal to be sent in 18 millisecs for relays to function properly. Is there a problem over a distance of 150 miles?"
Let's do a calculation:
c = speed of light in a vacuum = 300,000 km/s = 186,000 miles/s
v = speed of light in a fiber = c / index of refraction of fiber (~1.46) = 205,000 km/s or 127,000 miles/s
150 miles / 127,000 miles/s = 0.00118 seconds or ~1.2 milliseconds
Another way to look at it is 127,000 miles/s X 0.018 seconds (18 ms) = 2,286 miles
So the fiber transit time is not an issue. The electronics
conversion times might be larger than that.
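The same calculation is easy to script; here is a minimal sketch, assuming a typical index of refraction of 1.46:

```python
# Transit time of light in optical fiber (fiber latency only, no electronics).
C_KM_PER_S = 300_000   # speed of light in vacuum, km/s
N_FIBER = 1.46         # assumed index of refraction of silica fiber

def transit_time_ms(distance_km):
    """Milliseconds for light to traverse the given length of fiber."""
    v_km_per_s = C_KM_PER_S / N_FIBER   # ~205,000 km/s in the glass
    return distance_km / v_km_per_s * 1000

MILES_TO_KM = 1.609344
t = transit_time_ms(150 * MILES_TO_KM)  # the 150-mile utility link above
print(f"150 miles of fiber: ~{t:.2f} ms (vs the 18 ms relay budget)")
```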
I used to explain to classes that light travels about this fast:
300,000 km / sec
300 km / millisecond
0.3km /microsecond or 300m / microsecond
0.3 m per nanosecond - so in a billionth of a second, light
travels about 30cm or 12 inches
Since it travels slower by the ratio of the index of refraction, 1.46, that becomes about 20cm or 8 inches per nanosecond.
That is useful to know since an OTDR pulse 10ns wide translates to about 200cm or 2m or 80 inches (6 feet and 8 inches), giving you an idea of the pulse width in distance in the fiber or an idea of the best resolution of the OTDR with that pulse width.
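That rule of thumb can also be scripted; this small sketch uses the same ~20 cm/ns figure, and gives only the pulse length in the fiber, not the full event resolution of a real OTDR:

```python
# Physical length of an OTDR test pulse inside the fiber,
# using the speed of light in glass (index of refraction ~1.46).
CM_PER_NS = 30.0 / 1.46   # ~20.5 cm of fiber per nanosecond

def pulse_length_m(pulse_width_ns):
    """Length of the OTDR pulse in the fiber, in meters."""
    return pulse_width_ns * CM_PER_NS / 100

for width_ns in (10, 100, 1000):
    print(f"{width_ns:>4} ns pulse = {pulse_length_m(width_ns):.1f} m of fiber")
```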
What Does A FTTH ONT Look Like Today?
That's all there is to the ONT that goes into the home. The arrow points to the 1310 TX/1490 RX transceiver for the SC-APC connector.
Below are several technologies that have continued growing in importance in the fiber optic marketplace - components that every tech needs to learn about and become familiar with.
Microcables, Microducts and Microtrenching
MiniXtend cable is smaller than a pencil
Microcables, microducts and microtrenching - three technologies that have more in common than the prefix "micro" - are gaining in acceptance along with blown cable, the obvious method of installation using them. Smaller is always better when it comes to crowded ducts, especially in cities, where duct congestion is a problem practically everywhere. With the demand for more fiber for smart cities services like small cells and smart traffic signals, not to mention a ton of other smart cities services, installing more cables in current ducts - without digging up streets - is a major interest. Sometimes it's possible to install microducts in current ducts alongside a cable and blow in a new microcable. Sometimes it's worth it to pull an older cable out and install a new microduct that will accommodate 6 cables, making room for future expansion. The makers of fabric ducts, MaxCell, can even show you how to remove the ducts in conduit without disturbing the current cables and pull in fabric ducts to install more cable.
Comparison of MaxCell ducts to rigid plastic duct
If you have to trench, microtrenching is probably the best choice for cities and suburbs. Rather than digging wide trenches or using directional boring (remember the story about the contractor in Nashville, TN using boring to install fiber who punctured 7 water mains in 6 months?), microtrenching is cheaper, faster and much less disruptive.
All this implies that contractors are willing to invest in new machinery and training, sometimes an optimistic assumption. Microtrenching machines and cable blowing machines are available for rent, but personnel must be trained in the design of networks using these technologies and in operating the actual machinery in the field. That's still a considerable investment.
with SC SOC in EasySplicer
The industry has been seeing greater acceptance of the SOC - splice-on connector - using fusion splicers. Its popularity started in data centers for singlemode fiber, where the number of connections is very large, so the cost of a fusion splicer is readily amortized and the speed of making connections is the real cost advantage. The performance of SOCs is much better than prepolished/splice (mechanical splice) connectors simply because of the superiority of a fusion splice, and the cost of the SOCs is much less since they do not have the complex mechanical splice in the connector.
We have used SOCs in training and the techs take to them readily. In
classes you can combine splicing and termination in one session.
The cost of fusion splicers has been dropping to near the cost
of a prepolished/splice (mechanical splice) connector kit so the
financial decision to use SOCs is easier to make.
DAS or Small Cell?
Nothing provides perspective better than looking at something as an outsider - especially an outsider who's just trying to understand something instead of an insider trying to perform successfully as an insider. That's how we feel about wireless communications.
If you say "wireless" to an IT or LAN person, they think WiFi.
But a telecom person thinks cellular. FOA's involvement is based on trying to understand the infrastructure to support wireless, OSP or premises, WiFi or cellular, tower site or small cell. We're basically outsiders on the technology looking at the
infrastructure to support them. Recently we've been trying to
understand the technologies, markets and applications for both
to better include the two technologies in our training and certification programs.
The initial question we had dealt with distinguishing DAS
(distributed antenna systems for cellular) and small cells (also
cellular). In most ways they seem to be very similar, except
perhaps DAS is indoors and small cells outdoors.
We've started to interview insiders in both technologies to try
to understand how they work and why we should have both. Right
off, we found that there appears to be a general lack of
technical understanding about the other from almost everybody we
talk to who works with one of them. And we're talking real
basics - what frequencies are used, protocols, coverage,
bandwidth, etc. etc. etc. Even the jargon is different, but
that's not unexpected. So we've tried to consolidate information
on the three different premises wireless technologies
appropriate for general usage. Over time we expect to refine
this comparison with more data and user feedback. (got any? send
it to us)
Based on the current evaluation, WiFi is essential to premises spaces and, because of the ubiquity of WiFi, it is inexpensive. However, WiFi connections for cellular mobile devices do not appear to have been refined sufficiently to provide reliable coverage for cellular voice; data is good and video, maybe. Given the cost structure of data plans, using cellular for video can be very expensive, so WiFi is preferable since it is only limited by bandwidth.
The choice between small cell and DAS in premises spaces is simple - small cells are generally single carrier connections and that is too limiting for most users. DAS is similar technology but has the advantage of offering multiple service providers. If better cellular service is desired indoors and WiFi connections for cellular calls are unreliable, a DAS is the answer.
Small cells appear to be a good solution for better cellular
service outdoors in metropolitan areas, but the capital costs for building systems are quite high - Deloitte, you might remember
from an earlier FOA Newsletter, forecast a cost of over $200
billion. It makes one wonder if the carriers can make that
investment while simultaneously investing in 5G.
Here is our comparison of the three premises wireless technologies:

Devices supported
- WiFi: PCs, tablets, phones, many other devices
- DAS: phones, tablets, some other devices
- Small cell: phones, tablets, some other devices

Frequencies
- WiFi: 2.4GHz (802.11n, 14 40MHz channels, 3 max non-overlapping); 5GHz (802.11ac or 802.11n, 24 80MHz channels, 23 max non-overlapping - more bandwidth, less range)
- DAS and small cell: 3G: 850, 1700, 1900, 2100 MHz; 4G/LTE: 600, 700, 850, 1700, 1900, 2100, 2300 MHz and others; CBRS (Citizens Broadband Radio Service, shared, unlicensed): 3600 MHz, 20MHz channels; 5G: Eur: 24-27GHz, US: 37-48GHz, 71-74GHz

Logins and handoffs
- WiFi: login to each new private system required; limited handoffs between WiFi systems or WiFi and cellular
- DAS and small cell: handoffs subject to coverage

Device ownership
- WiFi: BYOD (bring your own device)
- DAS and small cell: depends on the service provider the device connects to

Max data rate
- WiFi: 802.11ac: ~400Mb/s - 7 Gb/s (MIMO)
- DAS and small cell: 5G: ~Gb/s (proposed)

Coverage
- WiFi: cellular on WiFi not optimal, depends on coverage
- DAS and small cell: good with proper coverage; 5G: good (proposed), cost?

Cabling
- WiFi: fiber backbone to Cat 5, POE
- DAS and small cell: fiber, sometimes Cat 5

Best use
- WiFi: good for data on PCs, tablets, smartphones; good for VoIP systems; marginal on cellular devices
- DAS: best for cellular devices since it can cover all service providers; not optimal for high throughput data (today; future 5G?)
- Small cell: good for cellular devices but can cover only one service provider; not optimal for high throughput data (today; future 5G?)
What We Learned From Visiting A Wireless Conference
We recently attended the WIA's Connect(X) conference in Charlotte, NC. This was the first wireless show we'd attended in over a year, and the topics of conversation were similar to last year - 5G topped the list. We attended several tech sessions, and our takeaway from one was the answer to an attendee's question to a speaker: "When can we expect a standard for 5G?" The answer was revealing: "5G is not a standard, 5G is a goal."
If you search the web for cellular standards, you will probably end up at a Wikipedia page called "Comparison of Mobile Phone Standards." It's an interesting history of the development of cellular systems. Nothing on that page refers to 5G, but there is a page on 5G that starts off saying "This article is about the proposed next generation telecommunication standard. 5th-Generation Wireless Systems (abbreviated 5G) is a marketing term."
Normally we don't recommend using Wikipedia for technical information because it is too often edited for commercial bias (that's why we created the FOA Guide), but in this case the candor is refreshing.
As we toured the trade show exhibits, we did see something new, this "Standalone Small Cell" from Zinwave. What's notable is that, like a similar device from Ericsson that we saw last year at the IWCE wireless meeting and reported on in the June 2017 FOA newsletter, it looks similar to a WiFi wireless access point, including Gigabit Ethernet interfaces to standard Category-rated copper cabling. DAS, it seems, is migrating to operating off Cat 6/Cat 6A in a structured cabling system. Since most offices need both cellular (small cell or DAS) and WiFi, this makes sense.
We tried to find a link to this Zinwave device on the company website and could not find it, but we found something even more interesting on a page called "Cellular As A Service": "Unfortunately, carriers are no longer spending on in-building commercial cellular coverage in the way they used to. That means building owners—whether they are in commercial real estate, healthcare, hospitality, or the enterprise—are now having to find and fund the solution themselves, and it's not easy. It's difficult to budget for the kind of capital outlay needed to deploy an in-building system."
This seems to indicate a movement to make indoor cellular more accessible
using small cells replacing DAS. We've been told that DAS is a
declining market because most of the large public areas like
sports arenas and convention centers have been done.
Enterprise DAS has not been as big but if small cells on LANs,
similar to WiFi, becomes cost effective - and at least one
person told us it would be - then we are looking at a change
in enterprise networks.
At FOA: This addition of cellular wireless to WiFi and of course the usual fiber or copper Ethernet connectivity expected of a corporate network is something we've seen before, and it's the reason FOA expanded its course offerings and certifications to include a general "Fiber For Wireless" program. We now offer a free "Fiber For Wireless" program on Fiber U, a curriculum for our schools to teach, and of course a page on the FOA Guide.
Worth Reading: "Underlying Wi-Fi Problems with Ultrafast Broadband" by Adtran, a provider of equipment for networks.
Worth Watching: A YouTube video on how the Mexican city San Miguel de Allende installed a small cell/DAS system in a historic city. "The city of San Miguel de Allende is at the forefront in telecommunications, with a distributed antenna system connected by optical fiber." (In Spanish.)
Battles In The "Pole Wars"
You may remember the FOA
Newsletter of July 2016 when we first reported on the
battles raging over attaching new fiber optic cables to utility
poles. The incumbents were trying to restrict access to their
poles to try to stop or at least slow down the encroachment of
newcomers wanting to build fiber networks, especially FTTH.
We've covered other skirmishes in the Pole Wars, including the victory for the newcomers when several cities passed "one-touch make-ready" (OTMR) ordinances (FOA NL 9-16) to ease the entrance of newcomers to the market after the incumbents dragged their feet on making poles ready for the newcomer. The battles were fierce as incumbents sent in battalions of lawyers to fight the OTMR ordinances in cities.
But now, a couple of years later, the same companies that fought OTMR in cities took the battle to the national level and convinced the new, big-business-friendly FCC to pass a national OTMR regulation. (See article below from last month.) Why the change of sides on this front? Simple, the big guys wanted access to those same poles - and every pole with a street light on it - for their small cell sites.
But the big guys did not stop there, they also asked for - and
got - a nationwide maximum rate of $270 that cities can charge
for small cell installations on public poles. Furthermore, the
FCC decreed that the cities must respond to permit requests in
less than 60 to 90 days, depending on the type of installation.
And if the cities charge more than $270 or take longer than 60-90 days, they are subject to litigation.
“There has never been a federal decision to price-regulate the
way local governments provide access to their own property,”
said Blair Levin, a fellow at the Brookings Institution who
served as chief of staff to Reed Hundt, the Clinton-era chairman
of the FCC. “That’s an extreme step.”
Cities are reacting negatively to this intrusion on their
sovereignty also. Not only is the FCC meddling in what cities
see as their affairs, they note the idea of a nationwide fee of
$270 is not realistic. Companies in Seattle can pay up to $1800
per pole annually. In Manhattan it can go as high as $5100.
Comparing this to rural costs is irrelevant as the issue is where to site small cells in cities.
According to an
article in the LA Times*, the city of LA said that the
break-even point for small cell facilities is $800 per
installation. But in exchange for amenities such as free Wi-Fi
in Skid Row and at recreation centers, $400,000 of scholarship
money, and launching an innovation center in the city, L.A. is
charging Verizon just $175 per device per year for 10 years for
up to 1,000 installations, plus the cost of electricity. It is estimated that the city of LA will need 8-10,000 small cells for coverage.
The FCC says the new rule will save money for telecommunications
companies, which will redirect those funds to deploy 5G service
to less-connected rural areas. Haven't we heard promises like that before? And isn't it interesting how companies can switch sides of an issue like OTMR when it goes from being an advantage for others to being an advantage for them.
Stay tuned for the next battles in "The Pole Wars."
*If you read the article
in the LA Times about LA and 5G you will find the
current common misconception that all small cells are 5G and
current field trials are using 5G mobile devices. Not so.
Small cells are being installed for current 4G/LTE mobile
devices because 1) there is no standard for 5G - "it's a goal,
not a standard" as one speaker said at a conference earlier
this year and 2) there are no 5G mobile devices. Here's how one website described a phone introduced as the first 5G phone:
"The world’s first 5G smartphone just launched and we all
missed it mainly because there are no 5G networks to begin
with. You’d have no way of actually taking advantage of 5G
features just because your device supports the new standard.
Also, the new Android phone, as it is now, can’t do 5G for
another reason. It doesn’t technically have a 5G modem inside,
which will be a problem when 5G launches."
FCC Adopts One Touch Make Ready (OTMR) Rules For Utility Poles
The US Federal Communications Commission has adopted a new rule that allows "one-touch make-ready" (OTMR) for the attachment of new aerial cables to utility poles. From the FCC explanation of the rule, "the new attacher (sic) may opt to perform all work to prepare a pole for a new attachment. OTMR should accelerate broadband deployment and reduce costs by allowing the party with the strongest incentive to prepare the pole to efficiently perform the work itself."
You may remember that FOA has reported on the "Pole Wars" for several years. Battles over making poles available and/or ready for additional cable installation have been slowing broadband installations for years and now threaten upgrading cellular service to small cells and 5G in many areas.
A Good Idea?
the potential to speed deployment of new communications networks
if handled properly. However, one hopes the installers doing
OTMR know what they are doing. We've heard so many horror
stories about botched installations, cut fiber and power cables,
punctured water mains and gas lines done by inept contractors
that we just hope this doesn't cause even more trouble.
For example, here is a pole in the LA area where small cells are
being installed. Can just any contractor handle OTMR on a pole like this?
Data Center Connections - 40G Looks Obsolete Already
Which Fiber For Data Centers? From
The Leviton Blog:
The Market Has Spoken: OM4 (MMF), OS2 (SMF) Leave No Place for
Unproven OM5 (MMF)
Usually, industry standards and associations set the stage for the
next generation of cabling and infrastructure that supports
network communications. But there are instances when the market
decides to take a different route. This is currently the case
with the recently standardized OM5 fiber. Even though TIA
developed a standard for OM5 (TIA-492AAAE), this new fiber type
very likely won’t see wide industry adoption because there is no
current or planned application that requires it.
Due to new transceiver launches, coupled with customer
perception of their needs and network requirements, the market
is ignoring the new, unproven OM5 cable and sticking with proven
solutions like OM4 and single-mode fiber.
This trend is supported by a recent Leviton poll that found a
significant jump in OS2 single-mode compared to earlier surveys.
Some of the follow-up comments from the Leviton survey included
responses about OM5:
“I do not believe that OM5 offers a real advantage, it’s mainly
a marketing ploy by manufacturers.” — IT manager at a global company
“OM5 isn’t needed. There is no real place for it between OM4 and
OS2.” — communications consultant
Thanks to CI&M for bringing this to our attention.
Test Sources For Multimode Fiber Testing
One of our schools recently asked us for recommendations on test
sources for multimode fiber, wondering whether the source should
be an LED or a laser. Multimode test sources are always LEDs, and
these sources should always be used with a mode conditioner,
usually a mandrel wrap. See here.
This is how all standards for testing multimode fiber require
it to be done. LEDs were abandoned as transmitter sources years
ago, as systems got faster and LEDs were too slow at speeds
above a few hundred Mb/s. Fortunately 850nm VCSELs were
invented to provide the solution for faster transmitters. But
VCSELs were not good as test sources; they had variable mode
fill and modal noise, so testers continued using LEDs for test
sources, but with mode conditioners like the mandrel wrap that
filtered out higher-order modes to simulate the mode fill of
an ideal VCSEL.
The bigger issue with MM fiber is whether to test at both 850
and 1300nm. In the past, we did both because there were
systems that used 1300nm LEDs or Fabry-Perot lasers as
sources, since fiber attenuation is lower at 1300nm than at
850nm. As network speeds increased to 1Gb/s and above,
bandwidth became the limiting factor for distance, not
attenuation. VCSELs only work at 850nm, and practically all
multimode systems have switched to 850nm VCSELs.
We also used to test at both wavelengths because if a fiber
was stressed, the bending losses were higher at 1300nm, so you
could determine if a fiber had problems with stress. But since
MM fiber has all gone to bend-insensitive fiber, that no
longer works and the need or reason to test at 1300nm went
away. It has not yet been purged from all standards, however.
To complicate things, standards say that you should not use
bend-insensitive fiber for test cables (launch or receiver
reference cables) because they modify modal distribution, but
it’s a moot point - practically all MM fiber is
bend-insensitive so you have no choice but to use it. And most
links will have BI to BI connections that should be tested.
But we checked with some technical contacts and there are no
specifications for BI fiber mandrels as mode conditioners. The
solution: an 850nm LED with a mode conditioner on non-BI fiber.
A Conversation About Fiber And Testing
An extended conversation between Eric Pearson and Jim Hayes, both
FOA founders, covered the issues of testing fibers at multiple
wavelengths. We've summarized the conversation here because
there is some very interesting and useful information in it.
March 2018 update: Here is an OTDR trace from EXFO that Eric
uses in one of his books to illustrate the bend sensitivity of fiber
at different wavelengths:
And our conversation from February 2018:
EP: Testing multimode fiber at both wavelengths (850 and 1300nm)
has been recommended to evaluate the presence of stress. If EF
(encircled flux) testing fills the 30µm center of the core and
1300 nm testing fills the entire core, is it possible that this
difference in core fill would indicate stress, even though there
is no stress?
JH: There are definitions for EF testing at 1300nm for multimode,
with similar mode fill requirements included in the various
standards documents that cover EF, so there should be no
difference in stress loss due to mode fill. However, you need a
1300nm EF source. But using 1300 nm for finding stress
loss may be irrelevant since the majority of multimode fiber
today is bend-insensitive (BI) fiber. That's becoming true for
more singlemode fiber also, as BI SM fiber is used in
microcables, spider ribbons and other new cable types.
EP: Is there any reason to test multimode at both 850 and 1300
nm then? How about SM at both 1310 and 1550?
JH: The issue of dropping 1300nm testing of MM fiber has been
discussed in standards committees for years. Since the advent of
the cheap VCSEL - which of course is only feasible at 850nm -
there is practically no use of MM fiber at 1300nm and no real
reason to test at that wavelength. But just try to remove
something from a standard - it's just about impossible!
The issue of SM fiber at 1310 and 1550nm is different. Today
most SM fiber is probably used at both 1310 and around 1550nm
with DWDM systems and PONs, so testing at both wavelengths is
necessary. Much of the testing of SM fiber for stress is done by
OTDRs at 1625nm anyway, but I do not know how that is affected
by BI fiber structures.
The whole topic of testing MM fiber skirts the BI fiber issues.
Controlling mode distribution in BI fiber is problematic. Those
20-25mm mandrels you used for regular MMF don't work with BI
fiber. It takes a mandrel ~6mm to produce the same mode
filtering as a 20-25mm mandrel, but then the BI fiber structure
simply refills the higher order modes. Creating EF in BI fiber
is questionable. Standards have recommended against using BI
fibers for reference cables (because it produces higher losses
than mode-filtered regular fiber) but you are probably testing
cable plants with BI fibers. Finding non-BI fiber patchcords to
use for reference cables is difficult since most MM fiber is BI.
Some manufacturers have made nothing but BI fiber for nearly a decade.
Historical Footnote 1: As an example of how long this
issue has been discussed, look at these two clips from Corning
AEN-131 from 2009.
Historical Footnote 2: While searching for this app note (we
found it in our own files, where we had downloaded it in 2010),
we found another interesting Corning app note, AN3060 from
March 2014, on OTDR testing of SM fiber fusion splices.
What interested us was this graph, showing OTDR loss measurement
differences depending on direction and mode field diameter (MFD) differences.
We remembered another graph similar to this that we (JH) created
30 years ago. You can tell its age by the crudeness of the
computer-generated graph from Lotus 1-2-3, an early spreadsheet program.
This data was taken with some early singlemode fibers and an
early OTDR from a sample of Spectran SM fiber spliced to Corning
fibers. The work was done for Spectran to show their fibers
could be spliced to Corning fibers. We obtained the data from a
contact at Spectran specifically to analyze for directional loss
differences and show why "gainers" happened. This graph was one
of two graphs that showed the reason - gainers were caused by
backscatter differences in fibers of differing mode field diameter.
Out of curiosity we overlaid a red line showing the modern
Corning data and the similarity is obvious. The scattering is
much less because fibers today are much more consistent
(Corning's largest MFD difference was 0.3 microns and the 1980s
fiber MFD varied up to 0.8 microns, a combination of actual
fiber variations and the greater errors in measurements then)
and modern OTDR data is undoubtedly more consistent too.
In our analysis of the data, we also had data on the attenuation
coefficient of the fiber, so we looked at another relationship
that we thought would be useful - loss difference vs the
difference in fiber attenuation coefficient.
Within the limitations of the data, it's obvious that the
directional difference in splice loss is also related to the
difference in the attenuation coefficient of the fiber. We found
this very interesting because most techs running OTDR tests do
not have data on MFD but they can easily measure the attenuation
coefficient of the two fibers being spliced. With these early
fibers, a difference in fiber attenuation of 0.1dB/km would
indicate a loss difference of around 0.4dB. Looking at the
difference in fiber attenuation could provide an indication of
the potential error of the OTDR loss measurement.
OTDRs could easily calculate this - the data is embedded in the
LSA splice loss measurement. Perhaps if someone repeated the
Corning test with modern fibers, duplicated the graph above to
show the relationship of the difference in the attenuation
coefficient to the difference in splice loss, and it looked
better - like the Corning data on modern fibers - the OTDR
manufacturers might incorporate this and provide a better
single-ended splice loss test.
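The two-direction averaging that recovers the real splice loss from OTDR readings can be sketched in a few lines of Python. This is a minimal illustration, not from the Corning note; the readings are invented:

```python
def true_splice_loss(loss_a_to_b_db, loss_b_to_a_db):
    """Average the OTDR splice loss measured in each direction.

    OTDR loss readings include an offset caused by the difference in
    backscatter (mode field diameter) between the two spliced fibers,
    so one direction reads too high and the other too low - sometimes
    even negative (a "gainer"). Averaging the two directions cancels
    the offset.
    """
    return (loss_a_to_b_db + loss_b_to_a_db) / 2

# Example: a splice that reads as a 0.25 dB loss one way and a
# -0.15 dB "gainer" the other way is really about 0.05 dB.
print(true_splice_loss(0.25, -0.15))  # → 0.05
```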
Nitpicking: The Corning graph has a series of black
dots labeled "Actual Splice Loss." In the paper they refer to
them as " bi-directional
averaged (actual) splice loss values." While that is a
commonly accepted fact, technically it's inaccurate. It is
really an average splice loss for the two directions, because
if you measure the actual loss of a splice between two fibers
in each direction, you will find the difference in MFD will
cause real differences in loss in each direction. Measuring it
is non-trivial, however, and the difference is small with
small MFD differences. We go into this in more depth in the new FOA book on testing.
Things You Learn While Training
Recently we were training instructors at a new FOA-approved school. Only
FOA has a program to train and certify instructors because we
believe the instructor is key to any school offering a quality
training program. These guys were experienced teachers, just not
in fiber optics, so we were focused on fiber knowledge and
skills. One of the instructors was originally trained at MIT
where they not only provide quality education but instill in
graduates a very strong curiosity! Thus we spent the time
delving into several topics in great depth and the discussions
led to some unique ways of explaining some of these topics. We
made copious notes and took photos of the boards covered with
diagrams to capture this information.
One of our boards filled with notes
We decided it would be worthwhile to share these topics with our
readers and we'll archive them in the FOA Guide in the near future.
Let us know what you think about these explanations - we'll make
them discussions on LinkedIn so you can comment.
Key: Question means we were asked to elaborate on the
topic. Comment is a comment from the instructors that
led to more discussion. Note means it's one of our own notes.
Power Budgets, Loss Budgets, Setting A "0dB" Reference And Modal Distribution
Not only was this topic a long discussion with our new instructors, but
it's a common question asked of the FOA - we received two
inquiries on loss budgets in the last month alone. The confusion
starts with the difference between a power budget and a loss
budget, so we'll start there, and we'll include the points where
we were stopped to explain things.
What's The Difference Between Power Budget And Loss Budget?
Consider this diagram:
At the top is a fiber optic link with a transmitter connected
to a cable plant with a patchcord. The cable plant has 1
intermediate connection and 1 splice plus, of course,
"connectors" on each end which become "connections" when the
transmitter and receiver patchcords are connected. At the
receiver end, a patchcord connects the cable plant to the receiver.
Question: A connector is the hardware attached to the end
of a fiber which allows it to be connected to another fiber or a
transmitter or receiver. When two connectors are mated to join
two fibers, usually requiring a mating adapter, it is called a connection.
Below the drawing of the fiber optic link is a graph of the
power in the link over the length of the link. The
vertical scale (Y) is optical power at the distance from the
transmitter shown in the horizontal (X) scale. As optical signal
from the transmitter travels down the fiber, the fiber
attenuation and losses in connections and splices reduce the
power, as shown in the green graph of the power.
Comment: That looks like an OTDR trace. Of course it
does. The OTDR sends a test pulse down the fiber and backscatter
allows the OTDR to convert that into a snapshot of what happens
to a pulse going down the fiber. The power in the test pulse is
diminished by the attenuation of the fiber and the loss in
connectors and splices. In our drawing, we don't see reflectance
peaks but that additional loss is included in the loss of the connections.
On the left side of the graph, we show the power coupled from
the transmitter into its patchcord, measured at point #1 and the
attenuated signal at the end of the patchcord connected to the
receiver shown at point #2. We also show the receiver
sensitivity, the minimum power required for the transmitter and
receiver to send error-free data.
The difference between the transmitter output and the receiver
sensitivity is the Power Budget. Expressed in dB, the
power budget is the amount of loss the link can tolerate and
still work properly - to
send error-free data.
The difference between the transmitter
output (point #1) and the receiver power at its input (point
#2) is the actual loss of the cable plant experienced by the
fiber optic data link.
Comment: That sounds like what was called "insertion
loss" with a test source and power meter. Exactly! Replace
"transmitter" with test source, "receiver" with power meter
and "patchcords" with reference test cables and you have the
diagram for insertion loss testing which we do on every
loss of the cable plant is what we estimate when we
calculate a "Link Loss Budget" for the cable plant, adding
up losses due to fiber attenuation, splice losses and
connector losses. And sometimes we add splitters or other
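The link loss budget arithmetic just described can be sketched in a few lines of Python. The component loss values below are typical illustrative numbers, not from any particular standard:

```python
def link_loss_budget(length_km, atten_db_per_km, n_connections,
                     conn_loss_db, n_splices, splice_loss_db):
    """Estimate cable plant loss: fiber attenuation plus connection
    and splice losses (add splitters etc. separately if present)."""
    fiber = length_km * atten_db_per_km
    connections = n_connections * conn_loss_db
    splices = n_splices * splice_loss_db
    return fiber + connections + splices

# Hypothetical 2 km singlemode link at 1310 nm: 0.4 dB/km fiber,
# 3 connections at 0.3 dB each (one intermediate plus the connectors
# on each end of the cable plant), and 1 splice at 0.1 dB.
budget = link_loss_budget(2.0, 0.4, 3, 0.3, 1, 0.1)
print(round(budget, 2))  # → 1.8
```

Note the count of 3 connections matches the diagram above: the intermediate connection plus the connections at each end of the cable plant.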
Power Budget For A Link
Question: How is the power budget determined? Well, you
test the link under operating conditions and insert loss while
watching the data transmission quality. The test setup is like this:
Connect the transmitter and receiver with patchcords to a
variable attenuator. Increase attenuation until you see the link
has a high bit-error rate (BER for digital links) or poor
signal-to-noise ratio (SNR for analog links). By measuring the
output of the transmitter patchcord (point #1) and the output of
the receiver patchcord (point #2), you can determine the maximum
loss of the link and the maximum power the receiver can tolerate.
From this test you can generate a graph that looks like this:
A receiver must have enough power to have a low BER (or high
SNR, the inverse of BER) but not so much it overloads and signal
distortion affects transmission. We show it as a function of
receiver power here but knowing transmitter output, this curve
can be translated to loss - you need low enough loss in the
cable plant to have good transmission but with low loss the
receiver may overload, so you add an attenuator at the receiver
to get the loss up to an acceptable level.
You must realize that not all transmitters have the same power
output nor do receivers have the same sensitivity, so you test
several (often many) to get an idea of the variability of the
devices. Depending on the point of view of the manufacturer, you
generally err on the conservative side so that the likelihood
of providing a customer with a pair of devices that do not work
is low. It's easier that way.
Furthermore, if your link uses multimode fiber at high bit
rates, there will be dispersion. Dispersion spreads out the
pulses, causing a power penalty. That's why high-speed Ethernet
at 10G has a loss budget of 2dB while the power budget
calculated from transmitter and receiver specifications is considerably larger.
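The power budget arithmetic, including a dispersion power penalty, can be sketched like this. All the device numbers in the example are invented for illustration:

```python
def link_margin(tx_power_dbm, rx_sensitivity_dbm, link_loss_db,
                dispersion_penalty_db=0.0):
    """Power budget minus link loss and any dispersion power penalty.

    A positive margin means the receiver gets more than its minimum
    required power.
    """
    power_budget = tx_power_dbm - rx_sensitivity_dbm
    return power_budget - link_loss_db - dispersion_penalty_db

# Illustrative numbers only: a -3 dBm transmitter, a -12 dBm receiver
# sensitivity, 2 dB of cable plant loss and a 1 dB dispersion penalty
# leave 6 dB of margin.
print(link_margin(-3.0, -12.0, 2.0, 1.0))  # → 6.0
```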
Note: We've talked about measuring power. Fiber optic
power meters have inputs for attaching fiber optic connectors
and detectors designed to capture all the light coming out of
the fiber. This connection is considered a "no loss" connection.
In reality, we do not capture all the light from the fiber
because there is a glass window on the detector and that window
and the detector are slightly reflective. However, the coupling
is very consistent, and when we calibrate the meter, we
calibrate it with a fiber optic cable under the same conditions.
Thus, what we measure of the light by presenting a connector to
the power meter is both consistent and calibrated (as long as
you choose the proper calibration wavelength, of course.)
But what about connections from the transmitter to the patchcord
and the connection of the patchcord to the receiver? We can't
measure those connections because we do not have access to the
actual devices the fiber is coupling to, so we cannot know what the
connection loss is. Therefore our measurement convention is to
measure them coupled to a patchcord. We simply have to ensure we
have good patchcords. A patchcord that is low loss connected to
another patchcord should be low loss connected to a transmitter
or receiver port.
The connection to the receiver is also unknowable. All we can do
is measure the output of the cable that we connect to the
receiver when testing the power budget of the link. Whatever the
connection loss is becomes irrelevant, but it is included in
testing of the receiver and the link.
Comment: Stan Hendryx, the MIT graduate in our class,
became so interested in the notion of measuring in dB that he
researched the topic and sent us a treatise on dB that includes this:
"In 1924, engineers at Bell Telephone Laboratories adopted the
logarithm to define a unit for signal loss in telephone lines,
the transmission unit (TU). The TU replaced the earlier standard
unit, miles of standard cable (MSC), which had been in place
since the introduction of telephone cable in 1896. 1 MSC
corresponded to the loss of signal power over 1 mile of standard
cable. Standard cable was defined as having a resistance of 88
ohms and capacitance of 0.054 microfarads per mile. 1 MSC equals
1.056 TU. The loss factor in TU was ten times the base-10
logarithm of the ratio of the output power to the input power.
In 1928, Bell Telephone Laboratories renamed the transmission
unit (TU) the decibel (dB)."
You can read Stan's paper here.
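The TU/dB definition in Stan's excerpt translates directly into code. This is a small sketch; the example powers are arbitrary:

```python
import math

def loss_db(power_out, power_in):
    """dB (originally the TU): ten times the base-10 logarithm of the
    ratio of output power to input power. For a lossy line
    power_out < power_in, so this comes out negative; quoting the loss
    as a positive number just flips the sign."""
    return 10 * math.log10(power_out / power_in)

def msc_to_db(msc):
    """Convert the old 'miles of standard cable' unit, using the
    1 MSC = 1.056 TU (dB) equivalence from the text."""
    return msc * 1.056

print(loss_db(50, 100))         # half the power → about -3.01 dB
print(round(msc_to_db(10), 2))  # 10 MSC → 10.56 dB
```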
Question: If we measure the transmitter output and
receiver input that way, what does that mean for calculating the
loss budget or measuring insertion loss?
Once we understand the way we measure (and calibrate) power as
the output of a fiber optic cable connected to a power meter,
these two topics make more sense.
Refer to the first diagram of the fiber link and the power in
the link. Note we measure transmitter power at point #1 on the
graph, the power we use as the output of the transmitter, the
reference power for insertion loss measurements or the power for
the calculation of the power budget. We measure that power before
the connection to the cable plant, so the transmitter power is
attenuated by that first connector on the cable plant.
Therefore that first connector must be included in the
calculation of the loss budget of the cable plant.
At the receiver end, the receiver patchcord connects to the
installed cable plant and suffers connection loss before it is
connected to the receiver. Thus that connection should be
included in the link loss budget also.
So when calculating a link loss budget, include the connectors
on both ends of the cable plant.
Note: When you do an insertion loss test, you use a meter
and source and two reference cables - launch and receive. They
substitute for the link's patchcords and you make
measurements just like you would in testing the link power
budget. Your insertion loss test will also include both end connections.
One, Two, Three Cable 0dB Reference
Question: This looks like the 1-cable reference
method for insertion loss testing. What happens when you use the
2-cable or 3-cable reference method? And why would you use those
other methods anyway?
One common misunderstanding is why you use the two or three
cable reference methods (see below) for insertion loss testing.
Some people think it's related to how you want to perform the
test, but the reason is much more a matter of practicality. It
all depends on the connectors on the cable plant you are testing
and the connector interfaces on your test equipment - and some history.
The 1-cable method has always been the method of choice because
it does not require compensating for the connections in the
reference cables when setting the "0dB" reference. It's like we
discussed measuring transmitter power above. You measure the
output of the launch reference cable, connect it to the cable
plant under test, launch power through that first connection and
measure the loss of all losses in the cable plant. The meter
connects to the cable plant at the far end with a receive
reference cable, and when the meter makes its measurement it
includes the connection of the receive reference cable. Thus
both connections on each end of the cable plant are measured,
just like the actual link will work in operation. No corrections are needed.
But suppose you have an LC cable plant and your instruments have
SC connector interfaces? Or suppose, 35 years ago, your test set
had SMA connectors and you needed to test a cable plant with
Biconics. (Don't know those connectors? Look
them up here.) You can use hybrid reference cables with
SMA connections on one end for your instruments and Biconics on
the other end to mate with the cable plant. Use a biconic mating
adapter to set your reference - including that connection - and
make measurements remembering - or ignoring - that your
reference value included one unknown connection.
Or suppose you are trying to test connectors that do not match
the connections on your instruments nor do they mate with each
other because they are gendered - male/female or plug/jack?
Hybrid reference cables won't help here, so you go to a "cable
substitution" test. Set up your instruments with hybrid cables
and set your reference with a third cable that is a short
version of the cable plant you want to test. Since most cable
plants using plug/jack connectors (like an MPO prefab cable plant
or multipin military connectors) have the permanently installed
cable plant end in connectors of the same gender (MPO jacks are
connectors with pins), you will have your instruments with
similar cables and the reference cable will have the opposite
styles of connectors to mate with them.
Note: All three methods are approved in most standards
and at least the 1-cable or 3-cable methods are approved in all
standards we're aware of.
Note: Just remember that you will make measurements that
yield different loss values depending on the reference method used.
Question: If you use a 2-cable method don't you just
reference out one of the connectors on the end of the cable
plant you are testing, and if you
use a 3-cable method don't you just reference out both of the
connectors on the end of the cable plant you are testing? No!
Each connection is different. If you include one or two
connections in your reference setting, you will reduce the
measured loss by one or two unknown connection losses - it has nothing
to do with the final insertion loss measurement which includes
all connection losses from the ends of the cable plant. Here
is an example by Fluke that shows the variation based on
standard connection loss values.
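The effect of the reference method can be shown with a small numeric sketch. The link and connection losses below are invented for illustration, not taken from the Fluke example:

```python
def insertion_loss(ref_dbm, measured_dbm):
    """Insertion loss is the reference power minus the power measured
    through the cable plant under test."""
    return ref_dbm - measured_dbm

# Hypothetical link: the true end-to-end loss seen by the
# transmission equipment includes both end connections.
true_loss = 1.5        # dB, includes a 0.3 dB connection at each end
source_output = -20.0  # dBm out of the launch reference cable

ref_1 = source_output        # 1-cable: no connections in the reference
ref_2 = source_output - 0.3  # 2-cable: one mated connection included
ref_3 = source_output - 0.6  # 3-cable: two mated connections included

measured = source_output - true_loss  # power arriving at the meter

for name, ref in [("1-cable", ref_1), ("2-cable", ref_2), ("3-cable", ref_3)]:
    print(name, round(insertion_loss(ref, measured), 2))
# The 2- and 3-cable methods report ~0.3 and ~0.6 dB less than the
# 1-cable method for the same cable plant.
```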
One thing that confuses people is how multimode fiber works.
We discussed total internal reflection in fiber and how graded
index multimode fiber was made in layers, so it works like
this (from the FOA Guide on Fiber):
To help visualize the layers in the fiber, we like to show a
Fresnel lens, a "flat" lens made from annular rings of glass
that approximate a regular lens. These lenses are used in
lighthouse lights like this one:
A Fresnel lens like this one used in a lighthouse is a flat
lens made of segments of a regular lens.
Multimode Loss With A Mandrel Wrap, Testing The Effect In Class
When we got to the slide in the lecture about multimode mode
conditioning for testing, we got into a discussion about how to
do mode conditioning. One of the instructors had read about
using a "mandrel wrap" on the launch cable so we spent some time
discussing it. First we covered the reason why mode power
distribution makes a difference.
Here is a slide showing testing with a fully filled fiber and
one where the higher order modes have been stripped off to
simulate the fiber with a typical VCSEL source.
The industry has always known about the effects of modal
distribution and has created metrics to measure and standardize
it for testing multimode fiber. The methods
included MPD (mode power distribution), CPR (coupled power
ratio) and the latest, EF
In class, the instructors had each made at least one good
connector in our termination lab (we were using the most basic
technique, heat-cured epoxy and polishing) so we decided to test
their connectors with and without a mandrel wrap mode
conditioner to see if it made a difference.
Before adding the mandrel wrap to the launch cable, we tested the LED
test source using a HOML (higher order mode loss) test as
described in the page on EF.
With the mandrel wrap, the power was reduced by
~0.6dB, so we left the mandrel on for our testing.
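The HOML check itself is simple arithmetic on the two power meter readings. The dBm values below are hypothetical, chosen only to match the ~0.6dB reduction we saw:

```python
def homl_db(p_no_mandrel_dbm, p_with_mandrel_dbm):
    """Higher-order mode loss: the drop in power (dB) measured at the
    end of the launch cable when the mandrel wrap is added."""
    return p_no_mandrel_dbm - p_with_mandrel_dbm

# Hypothetical meter readings before and after adding the mandrel:
print(homl_db(-20.0, -20.6))  # a reduction of about 0.6 dB
```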
Adding the mandrel wrap certainly did make a difference.
Connectors tested single-ended without the mandrel wrap at
~0.6dB loss were measured at ~0.2dB with the mandrel wrap. That's
how much difference modal conditioning can make on a single connector.
Think about that the next time you are testing multimode fiber!
That's all our notes from this instructor training
session. Hope you found them interesting!
We recently had a message with this thought:
"It is time for spring cleaning, and we don't mean just at home.
Contaminated fiber end faces remain the number one cause of
fiber related problems and test failures. With more stringent
loss budgets, higher data speeds and new multifiber connectors,
proactively inspecting and cleaning will help you ensure network
uptime, performance, and reliability. Despite "everyone" knowing
this, fiber contamination and cleaning generates a lot of failed tests."
Well, experience tells us that "proactively inspecting and
cleaning" can generate a lot of damage to operating fiber optic networks.
Inspection and cleaning should be done whenever a fiber optic
connection is opened or made, of course. But the act of opening
the connection exposes it to airborne dirt and the possibility
of damage if the tech is not experienced in proper cleaning.
Fiber optic connections are well sealed and if they are clean
when connected, they will not get dirty sitting there. Fiber
optic connections do not accumulate unseen dirt like under your
bed or sofa, requiring periodic cleaning, as implied in this message.
Clean 'em, inspect 'em to ensure proper cleaning, connect 'em
and LEAVE THEM ALONE!!!
And, duh, remember to put dust caps on connectors AND
receptacles on patch panels when no connections are made.
Is this perhaps another early April Fools' joke...like this one
we ran several years ago about the wrong way to clean connectors?
Clean Connectors Before You Make Connections
Teague of Microcare/Sticklers
sent us this series of photos showing what happens when you make
connections with dirty connectors. It speaks for itself!
Cleaning With a Mechanical Connector Cleaner
Teague of Microcare/Sticklers,
the cleaning experts, offers this trick.
Here's a simple and easy-to-use 3-step process that will
significantly improve the cleaning performance of your
mechanical fibre optic connector click cleaners. The cleaning
fluid will break down the end face contamination and create a
dissipative medium for eliminating the static charges that pull
dust onto the connector's end face.
Why You Need To Protect Cables From Water
These are photos of some indoor/outdoor cable that suffered
water ingress from a cable cut. In a few months the cable grew
these calcium deposits on the end of the cable. We discussed
this with several cabling companies and they have seen it
before. Basically the cable grows "stalactites" on the cut end.
How To Backfill A Trench For Underground Construction
Here's the answer to a question we've gotten. Where did we find the answer?
In the new FOA
Guide section on OSP Construction developed using Joe
Botha's OSP Construction Guide which is published by the FOA.
Joe's book covers underground and aerial installation from a
construction point of view, covering material after the FOA's
design material and before you get into the FOA's information on
splicing, termination and testing.
The 2019 update of the FOA
Reference Guide To Outside Plant Fiber Optics contains
this and lots of other new material on OSP construction.
Safety On The Job
Safety is the most important part of any job. Installers need to
understand the safety issues to be safe. An excellent guide to
analyzing job hazards is from OSHA, the US Occupational Safety
and Health Administration. Here
is a link to their guide for job hazard analysis.