Recently in Tech Tuesday Category

I came across this rather cool NASA publication dealing with aerospace accidents and incidents: a 244-page report (in PDF format) on all kinds of aerospace mishaps and their causes, covering everything from crashes of X-planes to rough landings of the Space Shuttle to near loss-of-consciousness incidents in the F-22 Raptor, among a number of other aircraft and spacecraft covered in the report.

The report shows that quite often it is not a single factor that causes these incidents, but a chain of errors leading up to the problem.

The free download from NASA can be found here, with three formats available: EPUB, MOBI, or PDF.

It's quite fascinating reading for those of you out there who are aviation and spaceflight fans.
It's another in an irregular series - it's another Tech Tuesday!

We've all seen how computer hardware has progressed at an astronomical rate over the past 40 years or so, going from something that would fill a room and use magnetic core memory, reel-to-reel magnetic tape drives, and fanfold paper pouring out of printers to something that fits in the palm of your hand with millions of times more computing power than those old mainframes.

But despite all of the progress, the basic technology behind it hasn't changed all that much; it has only gotten smaller. The CPUs and peripheral ICs that are the heart of any computer are all based on semiconductor technology that's been around since the late 1940s. The supporting electronic circuitry uses the same resistors, capacitors, and inductors that have been around since the electronics age started, though again in smaller form. Nothing really new has been added to the repertoire in some time...until now.

Enter the memristor, the fourth basic electronic component to add to the aforementioned resistors, capacitors, and inductors.

While memristor theory itself has been around for 40 years, it wasn't until 2008 that it became a reality. Now the same folks who brought it to life, Hewlett-Packard, plan to use it to build better solid-state memory devices that could replace flash memory (used in a host of devices like iPods, smart phones, USB thumb drives, and the newer solid-state hard drives), as well as other types of computer memory such as DRAM (including most of the various types of SDRAM used in computers today and over the past few decades) and SRAM.

One of the disadvantages of flash memory is that each memory cell that records a bit has a limited number of write cycles before it wears out. That weakness means the controller must use an algorithm that spreads writes around so no cell is written to much more often than any other, a technique known as wear leveling. Memristors do not suffer from this problem; for practical purposes the number of write cycles is unlimited.
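To make the idea of wear leveling concrete, here is a minimal sketch in Python. It is purely illustrative and not any real flash controller's algorithm: the controller tracks how many times each block has been erased and steers the next write toward the least-worn block so no single cell hits its endurance limit early.

    # Toy wear-leveling allocator (illustrative only). Real flash controllers
    # also maintain logical-to-physical mapping tables, garbage collection,
    # and bad-block management.
    class ToyWearLeveler:
        def __init__(self, num_blocks):
            self.erase_counts = [0] * num_blocks  # erases per physical block

        def pick_block_for_write(self):
            # Send the next write to the least-worn block.
            block = min(range(len(self.erase_counts)),
                        key=lambda b: self.erase_counts[b])
            self.erase_counts[block] += 1
            return block

    leveler = ToyWearLeveler(num_blocks=8)
    for _ in range(20):
        leveler.pick_block_for_write()
    print(leveler.erase_counts)  # counts stay within one write of each other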

Another advantage of memristors: they can be put on the processor chip itself, making for very fast systems-on-a-chip that do not require external memory to operate.

The days of spinning magnetic storage media (meaning the traditional hard-disk drives we all know and love) are numbered, and memristor technology may be the last shove that sends them into a long overdue retirement as it replaces memory technology at all levels. The days of waiting for your computer to boot up may also be coming to an end as memristors replace the RAM in your computer, because memristors don't lose the data stored in them when the power is shut off. You'll be able to turn your computer on and it will come up exactly where it was when you shut it off. Of course, that might not be a good thing if you're running Windows, if you know what I mean.
Welcome to another edition of Tech Tuesday, though in this instance we're not dealing with high tech so much as retro tech.

How many of you out there remember the Easy-Bake Oven? It's been around for decades - since 1963 (I remember when one of my sisters got one. It was the coolest thing because you could make cakes and brownies with it!) It was a simple piece of technology - a plastic outer case that looked like a miniature version of a kitchen stove, an inner case made out of some kind of sheet metal, a 100 watt incandescent light bulb, and an electrical cord. That's all there was to it.

It came with small packages of Betty Crocker cake and brownie mix, no different from the larger boxes our mothers bought to make their cakes and brownies. It helped that you could use mix from the larger boxes to make the small baked goods. (No, I am not being sexist. You have to remember that many of us grew up in the 50's and 60's before gender roles were redefined. But as a side note, the WP Mom was never a traditional homemaker. My folks were well ahead of their time as the 'traditional' roles didn't really apply in our family.)

So what makes me bring up this bit of retro tech?

It's being redesigned.

Because the gubmint, in its self-delusional wisdom, decided to ban incandescent light bulbs starting with the 100 watt bulbs, the Easy-Bake Oven could have gone the way of the eight track tape.

Hasbro has redesigned it to do away with the need for the soon-to-be-defunct 100 watt bulb, replacing it with a more traditional heating element. The Easy-Bake Oven also gets a new, 'swoopier' look in its eleventh revision of the classic toy. Unfortunately the redesign also includes a much higher price: $49.99 versus the last model's $29.99, a 66% increase.

I'll bet the geniuses in Washington never realized the negative effect their light bulb ban was going to have on the next generation of bakers. Call it yet another example of the Law of Unintended Consequences.
It's another Tech Tuesday!

As the price of copper has been rising, researchers have been looking for a replacement that is less expensive but would have the conductivity of the copper it would replace.

For some applications, the use of fiber optics has replaced the copper wiring and coaxial cabling used by telephone and cable companies. Both use optical fiber rather than copper for new builds due to its higher bandwidth and lower cost.

But for things like carrying electricity something else is needed. Enter carbon nanotubes.

Researchers from Rice University have managed to produce a cable using carbon nanotubes for carrying electricity.

Enrique Barrera, a professor of mechanical engineering and materials science at Rice, said that highly conductive nanotube-based cables could be just as efficient as traditional metals at one-sixth of the weight. He added that such cables may initially find use in applications where weight is a critical consideration, such as in airplanes and automobiles. In the future, he said, it could replace traditional wiring in homes.

The university's release continued, "The cables developed in the study are spun from pristine nanotubes and can be tied together without losing their conductivity. To increase conductivity of the cables, the team doped them with iodine and the cables remained stable. The conductivity-to-weight ratio beats metals, including copper and silver, and is second only to the metal with the highest specific conductivity, sodium."

The mention of doping the carbon nanotubes with iodine reminded me of an article I read years ago about conducting polymers, specifically polyacetylene. When doped with iodine the polymer's conductivity increased dramatically. Such a polymer would have all kinds of uses, particularly where weight was a factor (like in aircraft, as mentioned above). If I recall correctly from the article Plastics That Conduct Electricity (Scientific American, February 1988, no link available), the polymer would be used for carrying control and other signals throughout the aircraft, but not power as its conductivity wasn't quite good enough for that purpose. But nothing came of it, at least in the aircraft industry, as optical fiber has supplanted it due to its light weight and virtually unlimited bandwidth.

But carbon nanotubes can carry the required power and do it with less weight. The raw material used to create the nanotubes is virtually limitless, as carbon is one of the most abundant elements in the universe. Depending upon the structure of the nanotube cable, it could carry more current for a given size, meaning smaller gauge wiring could carry the same amount of power.

Is there anything carbon can't do?

Seamless Computing

It's another Tech Tuesday.

This time around we're talking about truly seamless computing. This video gives a demonstration of what may become routine in the near future as most of the technology to perform seamless computing is already in use.



There was a WGBH/BBC series that ran on PBS in 1992 called The Machine That Changed The World which was all about how computers came to be and became such a big part of our lives. One point that was brought up by someone from Xerox PARC was that eventually computers would become ubiquitous. If we haven't already reached that point (and I think we have) we will soon enough. Seamless computing might be that last little bit to ensure ubiquitous computing becomes a reality.
It's Tuesday, and I have not one, but two tech items to cover today, therefore it must be Tech Tuesday!

First, let's delve into photography.

Have you ever taken a picture that you thought was going to be a good one, only to find out it wasn't focused on what you thought it was? Then maybe it's time to use a camera that can focus the image after it's been taken.

...a startup company named Lytro (Mountain View, CA) is launching a new digital camera technology later this year that could change this reality forever: a camera that lets you adjust the focus after you've taken the picture.

Lytro founder and CEO Ren Ng is also the inventor of what he calls the Light-Field Camera. Ng's 2006 PhD thesis dissertation from Stanford University--which won the Association for Computing Machinery (ACM) 2007 Doctoral Dissertation Award--explains how a microlens transforms an ordinary 2D imaging sensor into a 3D world of "living" digital data.1 The user can view a light-field image on a computer screen, click on an object of interest, and watch the image change as that object moves into sharp focus--all without compromising image quality by reducing the aperture size to increase depth of field.

Here's an example of what one might see with this technology. Both images are from the same picture. The only difference is the computer used to view the images changed the focus from the cat in front to the cat in back with a click of a mouse (no pun intended).

[Image: two versions of the same photo of two cats, one focused on the cat in front, the other on the cat in back]
If this technology plays out, fuzzy out-of-focus pictures will be a thing of the past.

Second, we must always remember there can be a dark side to technology as well. In this case, the same technology that gives an owner all kinds of neat functions for controlling their car can also let hackers take control of the car's electronic systems...by texting.

Computer hackers can force some cars to unlock their doors and start their engines without a key by sending specially crafted messages to a car's anti-theft system. They can also snoop at where you've been by tapping the car's GPS system.

That is possible because car alarms, GPS systems and other devices are increasingly connected to cellular telephone networks and thus can receive commands through text messaging. That capability allows owners to change settings on devices remotely, but it also gives hackers a way in.

Researchers from iSEC Partners recently demonstrated such an attack on a Subaru Outback equipped with a vulnerable alarm system, which wasn't identified. With a laptop perched on the hood, they sent the Subaru's alarm system commands to unlock the doors and start the engine.

Not good. I would think OnStar-equipped vehicles would be vulnerable to such hacking as well. One must think through the downsides of some of this technology before employing it; otherwise you may find your neato wizbang new toy has gone missing because someone with an iPad or smart phone hijacked your car's electronic systems and made off with your ride.
As if we need even more proof that the technology we first saw in the original Star Trek series is now becoming reality, there's this: a hand-held medical scanner.

Can anyone say "medical tricorder"? Sure you can.

It looks like a cross between a flip-top phone and the medical scanner used by Dr McCoy in the TV series Star Trek.

The Vscan is not science fiction but a hand-held ultrasound machine with a scanning wand attached, which has been approved for use in Europe and North America.

It's getting closer all the time.
Received via e-mail.

What happens when the IT department runs out of money in its budget:

[Image: a 'laptop' built into a pizza box]
For as long as man has used fire, he has also had to deal with fire when it gets out of control. Usually that means dousing it with water, smothering it with dirt, or some other means of snuffing it out.

Over the past two hundred years or so fighting fire has meant using some kind of apparatus to move large amounts of water, allowing those fighting the fire to put out the flames. While quite effective, it has a number of downsides, including soaking everything anywhere near the fire. This usually damages objects and possessions within a structure almost as badly as if they had been burned. A limited source of water can also severely restrict the effectiveness of this method. While fire departments and fire engineers have been working to develop new ways of putting out fires more effectively, progress has been slow...until now.

Instead of using water, researchers at Harvard University have found a way of extinguishing flames using electricity.

No, that isn't a misprint. They're talking about using electrical fields to put out fires.

Firefighters currently use water, foam, powder and other substances to extinguish flames. The new technology could allow them to put out fires remotely -- without delivering material to the flame -- and suppress fires from a distance. The technology could also save water and avoid the use of fire-fighting materials that could potentially harm the environment, the scientists suggest.

In the new study, they connected a powerful electrical amplifier to a wand-like probe and used the device to shoot beams of electricity at an open flame more than a foot high. Almost instantly, the flame was snuffed out. Much to their fascination, it worked time and again.

Ironically, the effect of electric fields on fire was observed over 200 years ago, but little research has been done on the phenomenon until recently.

Using such an apparatus would certainly solve a number of problems, including eliminating the need for large amounts of water to fight fires or risking the lives of firefighters to enter burning structures in order to attack the fire more aggressively.

As Glenn Reynolds would say, "Faster please."
We hear of innovative inventions all the time. Some are slightly different versions of existing technologies. Others are radically different and new. The one I'm covering here is somewhere between the two.

A new type of engine, called a Wave Disk Generator, has no pistons, crankshafts, or valves. It is reminiscent of the old Chrysler turbine engine, manufactured in 1963 and installed in 55 cars for testing purposes.

While the details of the Wave Disk Generator are different from the Chrysler turbine, there are similarities. One of the biggest is that both are capable of using just about any fuel, be it gasoline, kerosene, diesel, Jet-A, alcohol, or vegetable oil.

The new engine will be connected to an electrical generator, which in turn could be used to charge batteries or drive electric traction motors in a vehicle. According to the developers, the new engine will be able to use up to 60% of the energy from its fuel, compared to about 15% for the standard internal combustion engines in use today.
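A quick back-of-the-envelope check, using only the efficiency figures quoted above, shows why that matters: an engine that converts 60% of its fuel energy into useful work needs only a quarter of the fuel that a 15% efficient engine needs for the same output.

    # Fuel energy required for the same useful output, using the quoted efficiencies.
    useful_work = 100.0                       # arbitrary units of work delivered
    fuel_at_15_percent = useful_work / 0.15   # ~667 units of fuel energy
    fuel_at_60_percent = useful_work / 0.60   # ~167 units of fuel energy
    print(fuel_at_60_percent / fuel_at_15_percent)  # 0.25, i.e. one quarter the fuel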

Might this engine be the basis for advanced hybrids some time in the near future? Maybe.
I was in the midst of putting together a post about how to annoy progressives when I got a call from the WP Parents. It seems they had a problem a little earlier in the day and now needed assistance. When they told me what had occurred all thoughts about my original post were put aside.

What had happened that was so earth-shattering that they required help from me? What was it that took me away from my home (and keyboard)? Something truly awful.

Their two-year-old HDTV died in an almost spectacular fashion.

As the WP Mom explained it, it started with something that sounded like an electrical arc, then sparks, and finally puffs of smoke pouring out of the vents at the rear of the screen. Then the picture died.

The audio survived the spitzen-sparks show, but the picture was gone.

With March Madness upon us (and UConn playing), it was of the utmost importance to remove the dead HDTV, procure a new one, and install it. They took care of the first two and I took care of the last.

A quick trip from The Manse expedited the matter, and an hour later the old 37" HDTV was in the garage to await disposal and the new 46" set was up and running.

From the description of the failure I have to guess the high voltage inverter that provides the 1500 to 3000 volts for the LCD backlight failed, hence the arcing sound, sparks, and smoke. While it probably could have been replaced, the cost of doing so, specifically the labor, was more than that of going out and buying a new set.

The fact that it failed after two years leads me to suspect it was a direct effect of the banning of lead in electronic solders under the EU's RoHS (Restriction of Hazardous Substances) directive. (Damn the EU!!) One of the side effects of that ban is a decrease in the long-term reliability of certain electronic and electrical equipment. I have a feeling that either a solder joint failed or a tin whisker grew from the lead-free solder and shorted out something that burned out the inverter.

Many of the newer LCD sets have replaced the fluorescent backlights (and the high voltage needed to drive them) with LEDs, which operate from a much lower voltage and last longer, too. (I wish the WP Parents' new HDTV did, but it also uses a fluorescent backlight. We'll see how long that lasts.)
Like any bit of software, there are times when updates or upgrades can cause far more problems than they were meant to fix. Sometimes the new problems are minor in scope and very few users will notice them. Other times the new problems will bring a computer system to its knees.

Such was the case with a recent update pushed out by the folks at McAfee, the anti-virus software company.

Many companies and people on Thursday [April 21] were fixing thousands of Windows PCs that went haywire as a result of a seriously flawed software update sent by antivirus vendor McAfee.

The update distributed at 3 a.m. Eastern time Wednesday misclassified a critical Windows XP system file, called svchost.exe, as a malicious program. As a result, McAfee's AV software was instructed to detect and remove the threat, sending affected PCs into fits of rebooting that made the machines useless.

Um...Oops?

Fortunately my recently resurrected main computer was still off-line when the accidentally malicious software update was released, so it did not suffer the fate of so many other XP machines. (A note: my main computer also runs Linux, thank goodness). None of the other machines here at The Manse use McAfee (Deb's computer uses AVG and the other computers, which run Linux, use ClamAV).
Over the past few years the functionality of cell phones has grown to the point that they rival many home computers in the types and number of software applications they can run. They can act as organizers, send and receive e-mail, surf the web, send text messages, take pictures, record video and audio, play music, play games, give turn-by-turn directions, and perform a host of other tasks. But one thing they don't always do so well is make phone calls, something customers want them to be able to do.

Over 1,300 survey respondents were asked the open ended question, "What features are desired on your next phone?" The top three responses were better connectivity, better audio and simplicity.

In many cases vendors have been so focused on making complex camera phones, music phones or mobile Internet devices, they have lost sight of the fact that phone functionality is mediocre at best. How often have we seen someone with a finger in one ear and a cellphone pressed to the other ear, desperately trying to hear a conversation? Our survey responses suggest that there is an opportunity for vendors to develop phones with great audio quality, robust connectivity and antenna features that are simply easy to use.

I know there are times when I am not pleased with the quality of the connection and audio on my cell phone. It isn't dropouts that I find most vexing, but the poor quality of the transmit and receive audio. It would be nice to have what is called toll-quality audio when I'm using my cell phone rather than the variable, often poor quality I deal with now.

Tech Support Tales

After talking to our Repair Guy today, I knew I had to relate these two tales about customers and tech support.

It is tales like these that make me wonder just who the heck is actually running the technical side of our telecommunications companies.

The first tale of woe:

Field Supervisor from one of the major telephone companies, which shall remain nameless: "I need to send the laser source you sold us back for repair. It doesn't work."

Repair Guy: "What seems to be the problem?"

Field Supervisor: "I turned it on and there's no laser output. The indicator lights come on, telling me the source is on, but there's no output!"

Repair Guy: "Did you measure it with a power meter?"

Field Supervisor: "I didn't need to. I could see it wasn't working!"

Repair Guy: "What do you mean you could see it wasn't working?"

Field Supervisor: "I looked down the barrel of the output connector and I couldn't see any light coming out. It wasn't working."

Repair Guy: "What model is the unit?"

Field Supervisor: "It's a DWLS-2." (Model number changed to protect the innocent...and the guilty.)

Repair Guy: "Umm...Sir, that model uses infrared lasers."

Field Supervisor: "Yeah, so?"

Repair Guy: "Infrared isn't visible to the human eye."

Field Supervisor: "Oh. Uh...so it's working?"

Repair Guy: "Yes, sir. It is."

Field Supervisor: "Oh, OK. Thanks."

One would think that someone in charge of maintaining part of our telecommunications infrastructure would have a basic understanding of the technology he's supporting. Thinking otherwise is too scary.

Our second tale of woe:

A laser source is received from a customer for repair, wrapped in a note that says "Toggle switch on top panel of unit is broken off. Can't turn on unit power."

The problem was immediately apparent to our faithful Repair Guy.

The toggle switch was indeed gone...because the unit in question never had one.

The power switch on the unit in question is on the front, a red circle with a vertical line in the center. The user's manual even shows a diagram of where the switch is located and what it looks like.

It's no wonder why our Repair Guy either laughs all day or is tearing his hair out.
As much as nuclear power seems to be on the verge of a revival here in the US, there are still issues to deal with, particularly the One-Big-Plant designs favored by most of the world. But maybe that's not the best means of bringing more nuclear power plants on line. Instead, smaller may be better, and in the end, less expensive.

A smaller scale, economically efficient nuclear reactor that could be mass-assembled in factories and supply power for a medium-size city or military base has been designed by Sandia National Laboratories.

"This small reactor would produce somewhere in the range of 100 to 300 megawatts of thermal power and could supply energy to remote areas and developing countries at lower costs and with a manufacturing turnaround period of two years as opposed to seven for its larger relatives," [Tom] Sanders said. "It could also be a more practical means to implement nuclear base load capacity comparable to natural gas-fired generating stations and with more manageable financial demands than a conventional power plant."

The reactor system is built around a small uranium core, submerged in a tank of liquid sodium. The liquid sodium is piped through the core to carry the heat away to a heat exchanger also submerged in the tank of sodium. In the Sandia system, the reactor heat is transferred to a very efficient supercritical CO2 turbine to produce electricity.

Because the right-sized reactors are breeder reactors -- meaning they generate their own fuel as they operate -- they are designed to have an extended operational life and only need to be refueled once every couple of decades, which helps alleviate proliferation concerns. The reactor core is replaced as a unit and "in effect is a cartridge core for which any intrusion attempt is easily monitored and detected," Sanders said. The reactor system has no need for fuel handling. Conventional nuclear power plants in the U.S. have their reactors refueled once every 18 months.

There are certainly advantages to having a number of smaller, more distributed power plants compared to the 1000 MW+ big plants, particularly if the cost per megawatt-hour is comparable to or less than that of the big plants. The cost of building the smaller plants should also be significantly lower because many of the major systems will be built in factories rather than on-site as they are today.

Since the smaller plants do not require the maintenance of the larger plants nor the more frequent refueling, the cost of operating them might be less on a per megawatt-hour basis as well. Because they use a different fuel cycle they will also have less of a problem with long-lived nuclear waste than existing U-235 fueled reactors. (Some of the problems with spent fuel rods of this type are not technical in nature, but political. However we won't delve into that here.)

Unlike the alternative energy sources being touted as the Answer-To-It-All, these small reactors are designed to handle what is called base load. Alternative energy sources tend to be supply-driven sources, meaning they aren't available on demand, particularly if there's no wind or if it's dark out. Base load plants are demand-driven sources, meaning they can be turned up or down as the electrical demand requires.

You can read the rest of the article by clicking here. (A direct link to the article does not yet exist.)
Batteries.

It's not often we think of batteries until we need them (usually when they've just gone dead at exactly the wrong time).

Over a couple of hundred years batteries have evolved from carboys or jars filled with acid and metal plates to compact cells with high energy capacity installed in packs for all kinds of equipment, from laptops to cell phones to iPods to hybrid autos, just to name a few.

The battery chemistry of choice these days is lithium ion, also called Li-Ion. Li-Ion batteries have very high capacity while having little weight in comparison to other chemistries like lead-acid, alkaline, zinc-air, carbon-zinc, or nickel metal hydride (or NiMH).

Early Li-Ion cells were temperamental and not forgiving of abuse, often catching fire if they were overcharged or physically damaged. But better electrode materials and electrolyte chemicals have greatly reduced that danger, making them far safer than they used to be. (That's not to say they're perfectly safe.) The big attraction of lithium-ion batteries, as anyone using batteries on a regular basis can tell you, is the high capacity, which allows longer time between recharges for the electronic and electrical equipment that uses them. That's why Li-Ion batteries have supplanted most other rechargeable battery types in portable equipment. But as good as Li-Ion batteries have become, they are slated to become even better.

Researchers at the Institute for Chemistry and Technology of Materials [at Graz University of Technology in Austria] have developed a new method that utilises silicon for lithium-ion batteries. Its storage capacity is ten times higher than the graphite substrate which has been used up to now, and promises considerable improvements for users.

In the newly developed process, researchers utilise a silicon-containing gel and apply it to the graphite substrate material. "In this way the graphite works as a buffer, cushioning the big changes in volume of the silicon during the uptake and transfer of lithium ions," explains Koller.

Silicon has a lithium-ion storage capacity some ten times higher than the up-to-now commercially used graphite. The new material can thus store more than double the quantity of lithium ions without changes to the battery lifetime.

Researchers at Stanford University have developed something similar, using silicon nanowires to coat the graphite, potentially boosting the battery capacity by a factor of ten. If either of these methods becomes commercially viable, we could see cell phone batteries that give a user 24 hours of talk time and laptop batteries with 20 hours or more of run time. It could also make hybrid or all-electric automobiles far more cost effective, using smaller battery packs while extending the range between charges.
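As a rough sanity check on that factor of ten, here are the commonly cited theoretical anode capacities. These are textbook values, not figures from either research group:

    # Theoretical lithium storage capacity of the two anode materials.
    graphite_mAh_per_g = 372    # LiC6, the conventional anode
    silicon_mAh_per_g = 4200    # roughly Li4.4Si (often quoted as ~3580 for Li15Si4)
    print(silicon_mAh_per_g / graphite_mAh_per_g)  # ~11x, consistent with the claim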

Who knew that battery technology could actually be exciting?

Another Tech Tuesday Oops

I had hoped to have the third part of my fiber optics tutorial all set to go for Tuesday night. I had all the relevant resources ready to quote or link or whatever.

Earlier this evening BeezleBub hopped on my machine to check a website he thought about at the last minute, his machine being in the process of shutting down.

When he was done, he closed the browser, telling Firefox to quit without saving all the tabs I had open. And I, being the kind of guy I am, hadn't bothered bookmarking those particular tabs because I am usually the only one using this machine. So, with the click of a single button, over a week's worth of research disappeared into the bit bucket. Also gone were newspaper articles, opinion pieces, and blogs I was perusing with the idea of posting my own opinions or adding to the blogroll.

ALL. OF. IT. WAS. GONE.

Shame on me for not bookmarking them (as I do from time to time, but not regularly).

Shame on BeezleBub for using someone else's machine without asking first, and then closing programs that "someone else" was using.

He has since been banned from using my computer for anything. He has his own computer as well as a communal laptop to use. There was no need to use my computer at all. None.

Needless to say, my planned Tech Tuesday post has been totally hosed and will be delayed until next week.
As mentioned at the end of Fiber Optics Technology 101 - Part I, I will be covering how fiber optics is used to connect various local telephone switching systems together, how it is used in undersea cables and in CATV, and I will delve a little bit into Fiber To The Home (or FTTH). Part III will cover Fiber To The Home in more depth.

- Public Switched Telephone Network or PSTN -

While almost everyone's home or business telephone is connected to the central switching office (or CO) by a twisted pair of copper wires, the switching offices themselves connect to each other and to the various long distance carriers via fiber optic cables. Fiber has replaced all of the copper lines and most of the microwave systems that were used in the past to connect CO's together or to carry connections to and from long distance carriers.

As the sophistication of the electronic switching system (or ESS) has increased, the number of them required for phone call routing has decreased. Where there was once a switching system in every town (or a large number of them in a single city), each new ESS can handle many times the number of phones that the older mechanical switching systems or early electronic switches could. It's not uncommon to find a single ESS serving a large number of towns. A fiber optic cable connects the ESS to what is called a concentrator, which is basically a combination optical-to-electrical and electrical-to-optical converter. The concentrator is usually located in the old central office where the mechanical switching systems had been housed. All of the twisted pairs that connect the customers' telephones are still there, but now they connect to the concentrator, and all of the call switching is done back at the ESS. This reduces the cost of operating the phone system while adding more features for the customers (Call Waiting, Caller ID, etc.).

While the concentrators are usually connected to the ESS at a CO in a hub-and-spoke configuration, the CO's themselves are often (though not always) connected together in a loose mesh or ring configuration, providing redundant paths between CO's and allowing quick restoration of service if any fiber connection between them fails. Depending upon the amount of phone traffic being carried between CO's and to and from the long distance carriers, the fiber links may run on multiple fibers or use Coarse or Dense WDM to increase capacity.

- Undersea Cables -

Undersea cables to carry messages have been around since the mid-1800's. The first intercontinental telegraph cable between the US and England was run along a ridge in the North Atlantic. It carried a single telegraph channel and failed not long after it was completed. The undersea cables used today are high capacity fiber optic cables which have replaced the massive copper cables used by telephone companies for decades. The physical size of a fiber optic cable is a fraction of that of the old copper cables, making it easier to manufacture and transport as well as less expensive to make. Like the old copper cables, the fiber cables require amplifiers along the cable in order to maintain signal strength and fidelity over long distances. The amplifiers, called EDFAs (Erbium Doped Fiber Amplifiers), are spliced in approximately every 50 kilometers along the cable. Normally amplification is not required after such a short run. Long-haul terrestrial fibers can go hundreds of kilometers between amplifiers, but due to the harsh conditions and the expense of pulling up undersea cables in order to make repairs, the undersea cables have quite a bit of redundancy built in. With a 50 kilometer spacing, three or four adjacent amplifiers can fail and the system will still remain in operation with no interruption of the data flow.
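For a feel of why amplifiers are spliced in so often, here is a rough span-loss calculation. The 0.2 dB/km attenuation figure is a typical textbook value for modern single-mode fiber at 1550 nm, not a number from this post.

    # Loss across one 50 km undersea span, assuming ~0.2 dB/km fiber attenuation.
    attenuation_db_per_km = 0.2
    span_km = 50
    span_loss_db = attenuation_db_per_km * span_km    # 10 dB per span
    fraction_remaining = 10 ** (-span_loss_db / 10)   # ~10% of launch power left
    print(span_loss_db, fraction_remaining)

    # A 6,000 km transatlantic-class route at that spacing needs on the order of
    print(6000 // 50)                                 # 120 in-line amplifiers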

Undersea fiber cables will have up to a couple of dozen fiber pairs. Each fiber pair is capable of carrying multiple wavelengths using DWDM, giving these cables massive bandwidth capabilities.

Most undersea cables are laid out in a rough 'ring' configuration, allowing for redundant paths in case one cable fails or is removed from service for maintenance, upgrade, or repair.

Not all undersea cables are used for long-haul circuits. Many are run along coastlines to interconnect widely separated coastal cities. This has been done along the coast of South and Central America, Australia, and Africa. It is less expensive to run undersea cable in relatively shallow waters than trying to install terrestrial fiber cables through jungles, rain forests, over mountain ranges, or through deserts. With these shorter span cables little, if any, amplification is required, greatly reducing the cost of the cable. These short span cables can also have a larger number of fiber pairs due to the fact that they usually don't require expensive amplifiers and won't be sitting on the ocean floor hundreds of meters below the surface of the ocean. The short span cables make landfall at a number of places along the coastline where the traffic they carry can be routed to terrestrial fiber cables, telephone, and/or data systems. Or the traffic can continue along the next stretch of undersea cable after being amplified at the landfall facility.

- Cable Television Systems -

Cable TV, or CATV, has been around for decades. Over that time the capabilities of CATV systems have increased dramatically. Where originally CATV was used to bring network television broadcasts to homes that were normally incapable of picking them up due to distance or terrain, CATV systems now carry a couple of hundred channels as well as Internet and telephone services to their customers. Most CATV operators are now called Multiple Services Operators, or MSOs. Comcast, Harron, and Cox are all examples of MSOs.

What makes it possible for MSOs to offer all of these services is fiber optics.

In the past, CATV operators would have antenna arrays or satellite dishes that would receive the various broadcasts. These signals were then fed down a coaxial cable from the CATV head end (where all of the receivers, amplifiers, and control equipment are located). The coaxial cable would then feed other coaxial cables that ran through a community and to each customer's home. Along the way the signals coming down the coaxial cable would be amplified to make up for cable losses and loss each time the signals were divided and fed to other runs of coaxial cable. This type of system required a considerable amount of coaxial cable and a large number of amplifiers to keep the signals strong enough for use by the customers. The downsides to this system were the expense of the copper coaxial cables and amplifiers, the electricity to power the amplifiers, as well as the fact that the systems were for the most part one-way systems. That means that signals came down from the head end to the customer, but not the other way. This limited the types of services that CATV operators could offer in the past.

Today, MSOs use a combination of fiber and coaxial cable to bring services to their customers. These are called Hybrid Fiber/Coaxial systems, or HFC. The addition of fiber to the system allows for two things - low loss connections from the head end to the neighborhood and a return path from the customer to the head end. Fiber between the head end and what is called a node removes the need for expensive coaxial cable and amplifiers while increasing the bandwidth available to carry more TV programming, data for Internet connections and telephone services.

Here is an example of what traditional coaxial and hybrid fiber-coaxial systems look like:

[Diagram: traditional coaxial architecture compared with a hybrid fiber-coaxial architecture]

A node is the 'black box' that converts the optical signals on the fiber from the head end to an RF signal that is fed into the coaxial cables that run through a neighborhood and to the customers, as well as converting RF signals from the customers back to optical signals that return to the head end on a second fiber. Each node services up to 1000 customers. Depending upon the demand for Internet and phone services, each node can be divided to service a smaller number of customers. All that's required is another pair of fibers between the head end and the node.

For a somewhat more in depth explanation of HFC, click here. It also includes the use of DWDM in HFC systems in order to reduce the number of fibers required to provide services to customers.

- Fiber To The Home -

This is where things get really interesting. While the concept of Fiber To The Home (or FTTH) has been around for a while, it had not been widely used due to the high costs of the supporting equipment. But that has all changed.

One of the big advantages of FTTH is that it provides a large amount of bandwidth to the average consumer as well as providing video services along the lines of a standard CATV system. It helps matters that the three Regional Bell Operating Companies (or RBOCs) - AT&T, Qwest, and Verizon - have agreed to standards for deployment of FTTH, meaning that equipment manufacturers will be able to provide the necessary equipment at a much lower cost due to high volumes.

So what exactly is Fiber To The Home?

As the name implies, FTTH brings a fiber optic network connection directly to the home. This connection will provide a high speed data link - 1250 Mbit/sec download and 622 Mbit/sec upload - for the consumer (different systems use different data speeds, so your mileage may vary). The link will provide both data (Internet) and phone services. There will also be a video downlink that will provide a couple of hundred TV channels in both Standard and High Definition formats.

I won't go into depth on FTTH at this point as it is a rather broad subject. I will cover it in Part III next week.

Tech Tuesday - Oops!

Boy, did I screw up.

First, for some reason I kept thinking yesterday was Monday. It must have been the Columbus Day holiday that threw me.

Second, the second part of the post I was updating for reposting wasn't done because I was making some major changes. Things have changed drastically in the six years since I made the original post about Fiber Optics. Some things that were true back in 2003 are no longer true, primarily because the technology has blown right past some of the assumptions I made about where things were going. Fiber is deployed in places very few had expected for purposes that didn't exist back then.

Mea culpa. Mea maxima culpa.

I'll have the updated post next week. I promise. Really!
Added Note: It wasn't until after I posted this that I found out Charles Kao, a pioneer of optical fiber, had been awarded the Nobel Prize in Physics. This makes the post even more timely.

********************

As I've mentioned here a number of times, I work for a small fiber optics firm in New Hampshire. When I first joined the company I knew very little about fiber optics. My background was in radar and microwave. I knew about electrons and waveguides and frequency sources and transmitters and receivers and magnetrons and traveling wave tubes and phased array antennas and phase discriminators and YIG filters and microwave striplines and DROs. I also had some exposure to infrared target tracking systems as well as hybrid IC construction and test. But I knew squat about fiber optics other than it used glass fibers thinner than a human hair.

The past twelve years have been an education.

I got into fiber optics just as the Internet really started its boom and the need for more and more bandwidth climbed. I've learned so much, and yet I still know not nearly enough. But I know just enough to clue the curious amongst you in to how the whole thing works. I won't delve deeply into theory, but I will try to include as many links as I can for those of you wanting to know more than I will cover here.

-Optical Fiber-

There are many kinds of optical fiber, though we are more interested in fiber made from glass.

There is optical fiber made from plastic, much like the ones used to hook up DVD players or satellite tuners to Dolby Digital Surround receivers, for those of you fortunate enough to have them. Plastic optical fiber is also used for illumination.

There are also specialty fibers used for such things as temperature sensors, strain gauges, fluid level sensors, and high power laser waveguides. Many of these are made from sapphire or are metal coated in order to withstand high temperatures or other harsh environmental conditions.

With glass fiber, there are a number of different applications. Most are used for telecommunications, be it voice, data, or video. Others are used for imaging. One such imaging application is in the medical field. Sometimes diagnosis or surgical procedures require taking a look inside the body without cutting it open. Fiber optic imaging scopes make this possible, though sometimes a small incision is necessary to allow the fiber scope access to the area of interest. Other imaging applications include inspection of machinery in tight spaces where one would not normally be able to look without dismantling the machine. Some glass fiber is also used for illumination, much like plastic fiber, but it gives better overall spot illumination than plastic.

So what is it about glass optical fiber that makes it so useful to carry communications? There are two answers to that question.

First, bandwidth.

Optical fiber can carry incredible amounts of data, more than any other technology being used today.

Bandwidth is the big attraction of fiber optics. The technology to allow this large amount of bandwidth has already been deployed. A single fiber is capable of carrying up to 16 terabits of data per second. That's 16 trillion bits of data every second, enough to transmit the entire contents of the Library of Congress in less than a tenth of a second, or in excess of 250 million phone conversations simultaneously. That's a lot of phone calls.
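The phone-call figure is easy to check: a standard uncompressed digital voice channel (the classic DS0) runs at 64 kbit/s.

    # How many 64 kbit/s voice channels fit in 16 terabits per second?
    fiber_capacity_bps = 16e12     # 16 Tbit/s
    voice_channel_bps = 64e3       # 64 kbit/s per call
    print(fiber_capacity_bps / voice_channel_bps)  # 250,000,000 simultaneous calls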

I will delve into how this bit of technological legerdemain is done later in this post.

Second, it has minimal signal loss over distance as compared to copper wire, coaxial cable, or wireless technology (i.e., radio).

The big attraction to using fiber optics for communication is that an optical signal can travel a long way before it needs to be amplified or regenerated. It's not uncommon to have fiber optic links hundreds of miles long that don't require the use of amplifiers or regenerators. Copper wire or coaxial cable, on the other hand, require amplifiers and repeaters every couple of miles or so in order to maintain signal integrity. They use a considerable amount of equipment and power to run signals between two points. That's one reason why the phone companies and other telecommunications companies all use fiber to connect their central offices together. Copper wire is primarily used to connect what is called the 'last mile' to their switching systems. It's what connects you to your phone company's central office, using the same technology as one hundred years ago.

So how is it that optical fiber has such low loss? The simple answer is chemistry. Though the fiber is made from glass, it isn't the same as the window glass in your home or in your car. Optical fiber glass is a very pure formulation that exhibits incredible clarity, particularly at wavelengths of light used in fiber optic communications. In comparison, dry air is a murky, hazy curtain. And just to complicate things a little, optical fiber actually uses two different glass formulations - one for the inner 'core' and a different one for the outer coating, or cladding. The reason for this is something called refraction.

Refraction is defined as the bending of light as it passes between materials of different densities. (For a good demonstration of refraction, click here or here.)

Do you remember looking into a pool of water, poking a stick into that pool and seeing that the stick 'bent' where it entered the water? The apparent bend was due to refraction, the bending of light at the interface between the air and water. Different materials will bend light to different degrees. How much a material will bend light is defined by the index of refraction. The index of refraction is the ratio between the speed of light in a vacuum and the speed of light through the material in question. The index of refraction is almost always a number equal to or greater than 1. (Unless Einstein and Hawking are wrong, light cannot travel through a material faster than it can through a vacuum. Therefore the index of refraction can never be a number less than 1 unless the material is something called a metamaterial, a man-made substance that exhibits negative refraction, but we won't go into that now.)

Refraction is the property fiber optics exploits in order to keep the light traveling down the fiber contained within it. The inner core has one index of refraction and the outer cladding has a slightly different one. This difference is what guides the light down the fiber and keeps it from escaping even if the fiber is bent somewhat. However, there are limits to how much one can bend an optical fiber. Bend one far enough and it will break. For the kind of fiber used in telecommunications, a bend tighter than a certain radius will allow light to leak out of the fiber core and cladding but generally won't harm the fiber itself.
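To put some numbers on that, here is a small worked example using typical index values for a telecom fiber (roughly 1.48 for the core and 1.46 for the cladding; the exact figures vary by fiber type and are an assumption here, not values from this post).

    import math

    n_core = 1.48   # illustrative core index
    n_clad = 1.46   # illustrative cladding index

    # Light hitting the core/cladding boundary at an angle steeper than the
    # critical angle is totally internally reflected and stays in the core.
    critical_angle_deg = math.degrees(math.asin(n_clad / n_core))  # ~80.6 degrees

    # Numerical aperture: how wide a cone of light the fiber will accept.
    numerical_aperture = math.sqrt(n_core**2 - n_clad**2)          # ~0.24
    print(critical_angle_deg, numerical_aperture)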

-Optical Transmitters and Receivers-

Okay, we have a medium that can be used to carry optical signals a long distance without the need for amplifiers or regenerators. But what is it that generates or detects the optical signal in a fiber?

Transmitters and receivers, of course.

In fiber optics, transmitters primarily consist of a laser and a modulator. The laser generates the light and the modulator changes some characteristic of the laser light in order to couple the data to be transmitted. In effect, the modulator piggybacks the data on to the laser light.

Many of us are familiar with lasers because we see them in use almost every day. The checkout at your local supermarket uses a laser scanner to tally your purchases (that's the 'doot doot' sound you hear at the checkout line). Those red lines you see flowing over the package of Pop-Tarts you wanted as it passes over the scanner come from a red laser diode. You're probably also familiar with the pen-like red or green laser pointers lecturers and teachers use.

The lasers used in fiber optic communications are similar in many ways to those used at supermarkets and lecture halls. Both types are made in a similar fashion, using semiconductor materials much like the silicon used to make most electronic IC components. However, the one big difference between them is that communications lasers do not generate visible light, but rather infrared light. The 'colors' of the infrared spectrum lie just below those of visible light, below the color red. Conveniently, optical fibers of the type mentioned above pass infrared light far better than visible light.

The light from the laser diode is coupled into the optical fiber by means of a miniature lens that focuses the light into the core of the fiber. From there it travels along the fiber to the modulator (assuming the modulator isn't built in to the package). The modulator is usually a crystal with properties that make it absorb laser light or shift its phase when an electric current is applied to it. The digital data stream, consisting of all the phone calls and Internet traffic and so on, is what actually controls the electric current feeding the modulator crystal.

Data can be transmitted by a laser by directly modulating the laser by turning it on and off, but there are limitations to how fast a laser can be switched like that. Therefore, most high speed systems use an external modulator. Most external modulators run at a data rate of up to 10 gigabits per second (10 billion bits per second). There are some modulators that can run as high as 40 or even 100 gigabits per second, but they are not as numerous or as widely deployed as 10 gigabit systems.
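As a cartoon of what modulation does, imagine the laser running continuously while the modulator switches the light between 'pass' and 'block' for each bit. This is simple on-off keying; real systems use more sophisticated formats, and the numbers below are purely illustrative.

    # Toy on-off keying: the modulator gates a continuous laser, one bit per symbol.
    bits = [1, 0, 1, 1, 0, 1]
    laser_power_mw = 1.0
    optical_signal_mw = [laser_power_mw * b for b in bits]
    print(optical_signal_mw)  # [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]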

Now that you have all of that data modulating the laser, you need to have some way of detecting it and turning it back into an electronic data stream. That's where receivers come in to play.

At its most basic, a fiber optic receiver is made up of two parts: a photodiode and an amplifier.

A photodiode is a semiconductor that does one of two things, depending on how it is used. It either generates an electric current when illuminated by light (i.e. photovoltaic effect), or it allows a current to flow under the same conditions (i.e. photoconductive effect). Both configurations are used in fiber optic receivers.

The amplifier is connected to the output of the photodiode and amplifies the signal detected by it. In most cases, the amount of light being detected by the photodiode in a fiber optic receiver can be as little as 10 microwatts (that's 10 millionths of a watt). The amplifier raises the level of the detected signal so that other circuitry can shape and clean up the signal and turn it back into data that can be read and routed by the switching systems, whether they switch data or phone calls. Sometimes the data is turned back into an optical signal and sent down another optical fiber to yet another switching system.
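To get a sense of how small the signal at the receiver is, here is a rough calculation assuming a typical InGaAs photodiode responsivity of about 0.9 A/W at 1550 nm (a common datasheet figure, not one from this post).

    # Photocurrent produced by 10 microwatts of received optical power.
    optical_power_w = 10e-6          # 10 microwatts at the photodiode
    responsivity_a_per_w = 0.9       # assumed typical InGaAs responsivity
    photocurrent_a = optical_power_w * responsivity_a_per_w
    print(photocurrent_a)            # 9e-06 A, i.e. 9 microamps, hence the amplifier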

-Optical Amplifiers and Regenerators-

For fiber optic communications over relatively short distances (less than 80 km), there is rarely a need to amplify or regenerate an optical signal. But for longer distances or instances when there is higher than normal loss, optical signals need to be restored to their original strength and the pulse waveforms corrected. This is done in one of two ways - optical amplification and regeneration.

The first is optical amplification. An optical signal comes into one end of the amplifier and comes out the other end many times stronger than the input. How is this accomplished?

Simply, it's magic.

Well, not really magic, but it might seem that way to the uninitiated. Optical amplifiers use something called laser pumping in order to amplify optical signals in a fiber. There are three different methods used to achieve amplification via laser pumping: Erbium Doped Fiber Amplifiers, or EDFAs; Semiconductor Optical Amplifiers, or SOAs; and Raman Amplification.

EDFAs and SOAs use similar methods to achieve amplification. Only the medium used differs.

The Erbium Doped Fiber Amplifier uses an optical fiber doped with erbium atoms as the amplification medium. Lasers with wavelengths below those being amplified are coupled to the doped fiber, causing excitation of the erbium atoms in the fiber (the erbium atoms absorb energy from the pump lasers), charging them to a higher energy state. The optical signals enter the doped fiber at one end. The excited erbium atoms donate their energy to the optical signals as they pass through the fiber, and the energy level of those signals increases. Depending upon the length of the doped fiber and the power of the pump lasers, the optical signals leaving the doped fiber can be anywhere from 10 to 50 times stronger than they were when they entered the EDFA. All of this amplification takes place without the need to convert the optical signal to an electronic signal and back again. All of the amplification is done in the optical domain. Neat, huh? EDFAs are used primarily on long haul fiber optic links like undersea cables and long run terrestrial links.
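For reference, that 10x to 50x power gain corresponds to roughly 10 dB to 17 dB, the units amplifier gain is normally quoted in.

    import math

    # Convert the linear gain figures above to decibels.
    for gain_linear in (10, 50):
        gain_db = 10 * math.log10(gain_linear)
        print(gain_linear, round(gain_db, 1))   # 10x -> 10.0 dB, 50x -> 17.0 dB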

The SOA works on a similar principle as the EDFA, but rather than using a doped fiber as the amplification medium it uses an optical waveguide. The SOA doesn't provide nearly as much amplification as an EDFA, but then it isn't designed for that. SOAs are used in shorter fiber links, usually in what are called MANs, or Metro Area Networks.

The third amplification method uses something called Raman Amplification. Like the EDFA and SOA, it uses pump lasers to generate the 'donor' energy for amplification. But in this case no special amplification medium is used, just the fiber in the communications link. A high powered laser operating on the same wavelengths one would use in an EDFA is coupled into the communications fiber. The energy from this laser couples to the photons of the communications laser pulses also in the fiber, and the power of the comm laser pulses is amplified. One of the big advantages of Raman Amplification is that it generates less noise than an EDFA. The downside is that it doesn't provide as much amplification as an EDFA. In some cases, EDFAs are used in conjunction with Raman Amplification in long haul fiber links, combining the best of both: high amplification with lower noise.

The second method used to extend range in a fiber optic system is regeneration. A regenerator is basically a fiber optic receiver coupled to a fiber optic transmitter with control, filtering, and sometimes switching electronics in between. The filtering and 'shaping' circuitry can correct for signal defects created by the characteristics of the fiber being used. The regenerator receives the optical signal and converts it to an electrical signal. The electrical signal is then filtered to clean up any noise or pulse distortions in the signal. It is then amplified and the amplified signal is used to drive the optical transmitter.

Each method has its advantages and disadvantages. One big advantage of optical amplification over regeneration is that it can be used to amplify multiple optical signals simultaneously. Remember, a single optical fiber can carry multiple wavelengths, or colors, of infrared light at the same time. Optical amplifiers amplify all of those wavelengths at the same time. Regenerators can do this too, but it requires separating each wavelength and sending each one to its own regenerator. That's a lot of receivers, electronics, and lasers, as well as power to run them all.

-Wavelength Division Multiplexing-

As mentioned earlier, one way to increase the amount of data a single fiber can carry is to use more than one wavelength of light as a carrier. This is called Wavelength Division Multiplexing, or WDM. The more wavelengths in a fiber, the more data that can be transmitted. There are a number of different versions of WDM being used today - simple, coarse, and dense. All of them use multiplexers and demultiplexers. These devices allow multiple wavelengths to be coupled to and from a single optical fiber.

Simple WDM is the oldest form of WDM. It uses only two wavelengths, 1310nm and 1550nm, to transmit data through a fiber. This form of WDM uses inexpensive uncooled lasers and inexpensive multiplexers and demultiplexers. In some cases specially tuned optical couplers are all that is necessary for this function. Simple WDM has been in use for well over a decade.

Coarse WDM uses up to 18 wavelengths between 1270nm and 1610nm. CWDM is inexpensive compared to DWDM. It can use uncooled lasers and inexpensive multiplexers and demultiplexers.
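The 18-wavelength figure follows directly from the CWDM grid, which spaces channels 20 nm apart across that band (the nominal center wavelengths below match the range quoted above).

    # Nominal CWDM channel centers at 20 nm spacing from 1270 nm to 1610 nm.
    channels_nm = list(range(1270, 1610 + 1, 20))
    print(len(channels_nm), channels_nm[0], channels_nm[-1])  # 18 channels, 1270 to 1610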

Dense WDM packs up to 160 wavelengths into a fiber, allowing up to 16 terabits per second of data to be transmitted on a single fiber. DWDM systems presently deployed use nowhere near this many wavelengths, but the capability is there. DWDM is rather expensive to implement because of the tight wavelength tolerances required by the lasers. The lasers are temperature controlled, which minimizes wavelength shifts due to temperature changes. The multiplexers and demultiplexers are also quite expensive and have to exhibit the same tight tolerances as that of the lasers. DWDM is used primarily for long haul connections, particularly in undersea cables.
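One illustrative way to reach the 16 terabit figure mentioned above (the per-channel rate here is an assumption for the sake of the arithmetic, not a statement about any particular deployed system):

    # Aggregate capacity of a fully loaded DWDM fiber.
    wavelengths = 160
    per_channel_gbps = 100
    print(wavelengths * per_channel_gbps)  # 16,000 Gbit/s = 16 Tbit/s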

A good tutorial for DWDM and all of the subjects covered so far can be found here or downloaded in PDF form here.

********************

And so ends Part I. Part II will cover how fiber optics is used to connect the various telephone switching systems, cable TV systems, as well as undersea fiber cables and the technology behind Fiber To The Home.
