Think of all the modern electronic conveniences you use throughout the course of your day. The list is practically endless: your dishwasher, oven, washer and dryer, heater, air conditioner, television, computer, and let’s not forget your cell phone.
All these devices are powered by an invisible mix of both electric and magnetic energy. In the past few decades, these devices, along with wireless Internet and Wi-Fi, have transformed life as we know it, providing incredible conveniences.
But at what cost?
The enormous time-saving benefits of these amenities make it easy to ignore the harm they may cause. For decades, many well-respected researchers have had serious concerns about the health effects of EMFs. To help you understand the negative impact of wireless EMFs, you need a basic grasp of what EMFs are, how they work, and how they affect things they encounter. That’s what you’ll find in this chapter.
Let’s keep it simple. There are many different types of EMFs. Each has its own frequency—the number of waves that pass a fixed point per second. Frequency is measured in units called hertz, named after the 19th-century German physicist Heinrich Hertz and abbreviated Hz. One thousand Hz is a kilohertz (kHz), one million Hz is a megahertz (MHz), and one billion Hz is a gigahertz (GHz).
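To make these units concrete, here is a quick worked example (my own illustration): a 2.4 GHz Wi-Fi signal oscillates at

$$2.4\ \text{GHz} = 2.4 \times 10^{9}\ \text{Hz},$$

and because electromagnetic waves travel at the speed of light, its wavelength is

$$\lambda = \frac{c}{f} = \frac{3 \times 10^{8}\ \text{m/s}}{2.4 \times 10^{9}\ \text{Hz}} \approx 0.125\ \text{m},$$

or roughly 12.5 centimeters—about the length of a smartphone.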
As I mentioned in the introduction, EMFs come from both natural sources, such as lightning and sunlight, and man-made sources, such as cell phones, Wi-Fi routers, electrical wiring, and microwaves. They exist in a spectrum, from extremely low frequency (3 Hz to 300 Hz) all the way up to gamma rays, which have a frequency greater than 10²² Hz.
You can see the spectrum in the chart below.
Figure 1.1: The spectrum of EMFs.
As you can also see from this chart, EMFs are typically classified into two major groups: ionizing and nonionizing radiation.
Ionizing means that a particular EMF has enough energy to disrupt the structure of an atom by knocking off one or more of its tightly bound electrons, transforming a previously neutral atom into a positively charged ion.
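How much energy is “enough”? The standard photon-energy relation from physics (not specific to this book) makes the threshold concrete:

$$E = hf,$$

where h ≈ 4.14 × 10⁻¹⁵ eV·s is Planck’s constant and f is the frequency. A 2.4 GHz Wi-Fi photon thus carries only about 10⁻⁵ eV, while knocking an electron off a typical atom takes on the order of 10 eV—which is why the ionizing portion of the spectrum begins in the ultraviolet range and above.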
Ions are a problem because they can produce free radicals—atoms or molecules left with an unpaired electron, which makes them aggressively reactive until they find something to latch on to. They behave like loose cannons in the ordered and civilized world of your cell’s biochemistry.
Free radicals by themselves are not dangerous as your body requires a certain level to stay healthy, but when they are produced in excess quantities they become problematic. They can attack the complex and precisely formed molecules of your cell membranes, proteins, stem cells, and mitochondria and convert them to damaged, and in many cases useless, forms.
Ionizing radiation can also cause DNA damage. This is an undisputed fact, and it explains why any time you have gotten an X-ray (a form of ionizing radiation), you have likely been given a protective lead apron to cover your torso and shield your organs from exposure.
The major types of ionizing radiation are neutrons from radioactive elements like uranium, alpha particles, beta particles, X-rays, and gamma rays. Since alpha and beta particles can be stopped by physical barriers, such as a sheet of paper or an aluminum plate, they are not typically of much concern. But neutrons from radioactive elements and X- and gamma rays are far more penetrating, and exposure to them can cause serious biological damage.1,2
| Ionizing radiation exposure | Dose in millirems |
| --- | --- |
| Background | 0.006 |
| Chest X-ray | 10 |
| Flying at 35,000 feet | 0.6/hour |
| CT scan | 200–1,000 |
Data above compiled from the U.S. Nuclear Regulatory Commission.3
Nonionizing radiation does not have enough energy to create ions, and thus it has been generally regarded as safe and biologically “harmless” for decades. But we are now learning that there are other mechanisms by which nonionizing radiation can cause damage to living cells.
As you can see in Figure 1.1, nonionizing radiation is produced by electronics such as cell phones and other wireless devices including baby monitors, cordless phones, and smart appliances.
The classification of nonionizing radiation as universally “safe” at approved exposure levels has been proven false, though many still cling to it. (I will explore the science behind this claim further in Chapter 4.)
Not all forms of nonionizing radiation are damaging. The graphic also shows that visible and infrared light are forms of nonionizing radiation; both are important for human health. It is well established that exposure to these forms of light is necessary for optimal health.
And yet, when you review the research and become aware of the efforts made to distort or suppress its findings, you will see compelling proof that nonionizing EMFs have the ability to cause great harm to your health.
Everyday wireless devices—cell phones, Wi-Fi routers, cordless phones, baby monitors, and smart appliances—emit the vast majority of the EMFs you are exposed to in your home. I will cover how to replace these devices, or reduce the level of EMFs they emit, in Chapter 7; for now, put as much distance as you can between yourself and these devices, as exposure climbs steeply with proximity.
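To see why distance matters so much, consider a simplified free-space model (an illustration of the general principle only; it ignores near-field effects very close to an antenna and is no substitute for actual measurements). The power density S from a small transmitter radiating power P falls with the square of the distance r:

$$S = \frac{P}{4\pi r^{2}}.$$

Moving from 10 centimeters to 1 meter away—ten times the distance—cuts the power density reaching your body a hundredfold.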
How can nonionizing radiation sometimes be good and sometimes be bad?
To help you understand this seeming contradiction, allow me to drill down a little more deeply on why both ionizing and nonionizing radiation can be so dangerous.
First, I’ll explain how ionizing radiation damages your body. As I mentioned earlier, ionizing radiation easily passes through every tissue in your body. It can knock electrons out of the orbit of atoms and turn them into destructive ions that can create damaging free radicals.
This process is most concerning when ionizing radiation passes through the nucleus of your cells, where most of your DNA is stored. The radiation has enough energy to directly break some of the covalent bonds in your DNA. This is how ionizing radiation causes genetic damage, which can then lead to cell death or cancer.
There is also an indirect way that ionizing radiation damages DNA, and that is by converting the water in your nucleus into one of the most dangerous free radicals in your body, the hydroxyl free radical. This highly unstable hydroxyl free radical can then go on to cause its own DNA destruction.
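For readers who want the chemistry, the water-splitting step is the classic radiolysis reaction of radiation biology (standard chemistry, included here for clarity):

$$\mathrm{H_2O} \xrightarrow{\text{ionizing radiation}} \mathrm{H_2O^{+}} + e^{-}, \qquad \mathrm{H_2O^{+}} + \mathrm{H_2O} \rightarrow \mathrm{H_3O^{+}} + {}^{\bullet}\mathrm{OH},$$

where •OH is the hydroxyl free radical.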
This direct and indirect DNA damage by ionizing radiation is illustrated in the graphic below.
Figure 1.2: How X-rays damage your DNA.
For many years the wireless industry and the federal regulatory agencies have insisted that nonionizing radiation cannot cause DNA damage because it does not have enough energy to directly break DNA bonds.
The concept that nonionizing radiation—the type emitted by your cell phone and Wi-Fi—can cause genetic damage similar to that of ionizing radiation is highly controversial. The issue is confusing largely because nonionizing radiation from your wireless devices causes biologic damage through an entirely different mechanism than ionizing radiation does.
It’s true that nonionizing radiation, by definition, doesn’t have enough energy to directly break the covalent bonds in your DNA or produce hydroxyl radicals that do the same. However, wireless radiation results in DNA and biologic damage that is nearly identical to the harm caused by ionizing radiation. It just does it in a different way that very few people are aware of.
Nonionizing radiation from your wireless devices actually creates carbonyl free radicals—instead of the hydroxyl radicals that ionizing radiation gives rise to—that cause virtually identical damage to your nuclear DNA, cell membranes, proteins, mitochondria, and stem cells.
Of course, the full process is more involved than this simple explanation, which is why I delve deep into the science of how nonionizing EMFs cause damage in Chapter 4. There you will learn why the nonionizing radiation you are exposed to every day from your wireless devices and Wi-Fi is collectively far more dangerous to you than ionizing radiation.
As a result of the coordinated and costly efforts of the wireless industry, you and your family are left woefully unprotected by the current federal safety guidelines because they are fundamentally flawed.
The Federal Communications Commission (FCC) establishes safety guidelines for the radiation emitted by cell phones by using what’s known as a specific anthropomorphic mannequin (SAM)—a plastic facsimile of a human head filled with liquid designed to mimic the absorption rate of brain tissue—to determine what’s known as specific absorption rate (SAR).
The only value of the SAR reading is to measure the short-term thermal effect of the radiation on your body. As I discuss at length in Chapter 4, though, the primary way EMFs damage your body is not through heat but through changes at the cellular level, which the SAR reading does not measure.
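For reference, the standard dosimetric definition of SAR (a physics convention, not unique to the FCC) is

$$\mathrm{SAR} = \frac{\sigma |E|^{2}}{\rho}\ \left[\mathrm{W/kg}\right],$$

where σ is the tissue’s electrical conductivity, E is the electric field induced in the tissue, and ρ is the tissue density. The FCC’s limit for phones sold in the U.S. is 1.6 W/kg averaged over 1 gram of tissue—again, a purely thermal criterion.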
There are many additional problems with the SAR.
You might be lulled into purchasing a low-SAR phone to ease your mind. But this would give you a false sense of security, because the SAR rating has nothing to do with the true biological damage done by the EMFs emitted by cell phones. It is merely a gauge of the intensity of the heating effect, useful only for comparing the SAR of one phone to another.
Even if a low SAR rating did reflect a phone’s potential for harm, you would probably still be at risk. All mobile phone manufacturers recommend that you hold your phone at least 5 to 15 millimeters away from your body. Yet very few users are aware of this directive. Sadly, manufacturers bury it deep within the cell phone manual, which virtually no one ever reads.
Even with all their inaccuracies as an estimate of biological damage, SAR ratings can provide some benefit, as higher ratings are correlated with higher RF radiation and should correspond to greater cellular damage.
Finally, the FCC and other regulatory bodies around the world derive their standards from work done by a private group called the International Commission on Non-Ionizing Radiation Protection (ICNIRP). The ICNIRP even stated in 1998:
These guidelines are based on short-term, immediate health effects such as stimulation of peripheral nerves and muscles, shocks and burns caused by touching conducting objects, and elevated tissue temperatures resulting from absorption of energy during exposure to EMF.4
In other words, they are only intended to “protect” from short-term exposure, and as you’ll read more about in Chapter 2, the diseases of EMFs—especially brain cancer—can take decades to develop.
To top it all off, the ICNIRP has also recently been criticized by a group of investigative journalists called Investigate Europe as being part of an industry-controlled cartel of industry-favorable regulatory agencies.5
You need to understand that you simply cannot determine the safety of your phone from the SAR standards currently set by the FCC.
In addition to the distinction between ionizing and nonionizing, there is another classification of EMFs that you should be familiar with so you can understand the science I will review in the chapters to come—the difference between alternating current (AC), which is pulsed, and direct current (DC), which is non-pulsed.
An AC current flows alternately in two directions, switching between them in regular cycles, somewhat like your heartbeat. In the United States, the electric grid delivers AC that cycles 60 times per second—60 hertz (Hz)—while most countries outside the U.S. use 50 Hz.
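In mathematical terms, U.S. household AC voltage can be sketched as a simple sine wave (a textbook idealization; real-world waveforms carry added high-frequency noise, as the dirty electricity discussion later in this chapter explains):

$$V(t) = V_{\text{peak}} \sin(2\pi f t), \qquad f = 60\ \text{Hz}, \quad V_{\text{peak}} \approx 170\ \text{V},$$

where 170 volts is the peak value corresponding to the familiar 120-volt (RMS) household supply.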
Direct current (DC) electricity, on the other hand, flows in only one direction. DC is what you encounter in nature: the Earth creates a DC magnetic and electric field. DC electricity is based on the idea of a battery sending electrons in one direction—all batteries are DC.
Your body’s nervous system likewise uses DC for its synapses and signals. The sodium-potassium pump in your cells is essentially a battery that produces DC current. As such, your body is designed to work with DC.
As I discuss a little later in this chapter, Thomas Edison popularized DC current, and it is what people first used when electricity was distributed to the public. The reason we use AC instead is that Nikola Tesla demonstrated AC could be stepped up with transformers and sent far greater distances than DC without significant loss of voltage—which is, roughly speaking, the pressure of electricity.
This is most unfortunate, because using DC to power the electric grid would have been a far better biological solution—since living organisms have been regularly exposed throughout their biological evolution to the Earth’s static electric and magnetic fields, our bodies tolerate DC far better than AC.
In fact, when the Earth’s natural electromagnetic fields vary by more than 20 percent—during magnetic storms or the geomagnetic pulsations that accompany the roughly 11-year solar activity cycle—there are increased rates of animal and human health incidents, including nervous and psychiatric diseases, hypertensive crises, heart attacks, cerebral accidents, and mortality.6,7
Since living organisms do not have defenses against variations of greater than 20 percent of natural EMFs, it is realistic to expect that they do not have defenses against man-made EMFs, which vary unpredictably and at 100 percent or more from average intensity.
To make matters even worse, wireless signals use several different frequencies simultaneously, making the variability even higher. This is likely why living organisms perceive the pulsation of man-made EMFs as an environmental stressor.8
For example, it was found that a 2.8 GHz EMF pulsed at 500 Hz was significantly more effective at increasing heart rate in rats than the corresponding continuous-wave (unpulsed) 2.8 GHz EMF with the same average intensity and exposure duration.9
Researchers also found that exposure to 900 MHz radio-frequency (RF) pulses caused changes in human EEGs (diagnostic tests of brain activity), while the corresponding carrier-wave signal (same frequency but continuous instead of pulsed) with the same exposure duration did not.10
Most of the EMFs that I cover in this book—primarily those used by cell phones and wireless devices—are classified as very low frequencies and higher. But there is a category of EMFs beneath this group: extremely low frequencies (ELFs). ELFs have a frequency between 3 and 300 Hz and are emitted by power lines, electrical wiring, and electrical appliances, such as hairdryers.
But there are also ELFs associated with regular wireless signals in the form of pulsing and modulation. There is some evidence indicating that the effects of these wireless EMFs on living organisms are due to the included ELFs.11,12 Moreover, ELFs alone have independently been found to be bioactive.13,14 As you’ll read in Chapter 5, there have been many studies on the link between exposure to power lines and breast cancer, impaired sleep, and childhood leukemia.
The potential for ELF exposure to negatively impact health seems to be highest when the ELFs are pulsed. For example, researchers found that a 1.8 GHz RF signal amplitude-modulated by pulsing ELFs caused DNA damage in cultured human cells, while the same signal with an unmodulated continuous wave with the same exposure duration was ineffective.15
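In signal terms, the exposure used in that study can be sketched as a standard amplitude-modulated wave (a generic AM form for illustration; the study’s exact modulation parameters may differ):

$$s(t) = A\left[1 + m \cdot p(t)\right]\cos(2\pi f_{c} t), \qquad f_{c} = 1.8\ \text{GHz},$$

where p(t) is the low-frequency (ELF) pulse train and m is the modulation depth. Setting m = 0 recovers the unmodulated continuous wave that served as the ineffective control.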
Electromagnetic fields have two components—an electric field and a magnetic field. The Earth has a geomagnetic field, as our planet is essentially one large magnet—its magnetic field is what allows compasses to work and enables migratory animals to know which way to travel. Your body has a magnetic field too. Both of these natural magnetic fields are DC and are measured in units of either tesla (T) or gauss (G).
An electric current naturally generates a magnetic field around it. If you’ve ever played with two magnets, you’ve already experienced the fact that a magnetic field quickly gets weaker with distance.
However, there is some evidence that magnetic fields have a danger all their own.
Much of the research into the health effects of magnetic fields has been related to increases in childhood leukemia and brain cancers. A study that pooled data from 1997 through 2013, examining 11,699 cases and 13,194 controls, concluded that “magnetic field level exposure may be associated with childhood leukemia.”16
These studies are some of the research that the World Health Organization refers to when admitting that some types of EMFs are indeed related to cancers, are biologically harmful, and should be limited.
Furthermore, in 1979 Nancy Wertheimer and physicist Ed Leeper found that childhood leukemia rates doubled versus controls for children subjected to only 3 milligauss of magnetic field exposure when in the vicinity of neighborhood distribution power lines in Denver.17 This finding was also repeated in a 1988 study conducted by the New York State Department of Health.18
There is also research linking higher levels of exposure to magnetic fields during pregnancy and an increased risk of miscarriage.19,20
There is another specific type of electric and magnetic field that goes by a few different names: the most common is dirty electricity, and the most accurate is high-frequency voltage transients. Electromagnetic interference (EMI) is another term frequently used to describe dirty electricity.
Many EMF experts now use the additional term microsurge electrical pollution, or MEP, to describe dirty electricity, and define dirty electricity as all electric and magnetic fields at any frequency above 50/60 Hz (the fundamental frequencies of electricity from electric utilities around the world).
These transients typically occur whenever the alternating current (AC) electricity that runs along power lines (with a frequency standardized to 60 Hz in North America and 50 Hz in the rest of the world) is converted into other types of electricity (such as direct current, or DC), transformed to another voltage using what’s called a switched-mode power supply, or has its flow interrupted.
Dirty electricity most often ranges from 2,000 Hz (2 kHz) to 100,000 Hz (100 kHz). This range is particularly significant because it is where electric and magnetic fields couple most easily to your body, causing biological damage through a mechanism I will describe later in the book.
The primary way dirty electricity is produced throughout the world is by running devices that contain electric motors or AC switching power supplies, such as your air conditioner, refrigerator, kitchen blender, TV, or computer. The good news about these sources of dirty electricity is that they are locally produced and easily remediated with filters; I will cover exactly how to do that in Chapter 7.
In North America, however, there is another common source of dirty electricity: electric utility substations that deliver power to the community but fail to keep the returning current on dedicated neutral wires all the way from each user back to the substation.
Instead, utilities take the cheaper route and allow the actual ground to carry much of the returning current, since the Earth itself conducts electricity. And because dirty electricity rides along with 60 Hz electricity wherever it goes, this practice contaminates the soil with dirty electricity.
Another common source of dirty electricity is compact fluorescent light bulbs. They create dirty electricity because they have a switched-mode power supply in their base that converts the 60 Hz AC current first into DC and then chops it into much higher-frequency AC, typically around 50,000 Hz (50 kHz).
Not only do fluorescent bulbs create dirty electricity, but they also produce light with an unhealthy spectrum that is predominantly blue, which disrupts your melatonin levels if you view it after sunset. So an excellent strategy to improve your health is to limit your exposure to fluorescent lights at home and at the office.
Newer electronic dimmer switches, which modulate the level of light emitted by bulbs by turning the power source on and off—very quickly for brighter light and more slowly for dimmer light—are also significant sources of dirty electricity. (Older rheostat-based dimmers from decades ago do not cause dirty electricity.)
Computers, monitors, and TVs create dirty electricity because their various components run on DC electricity. They also use switched mode power supplies to convert AC to the various DC voltages, and it is those components that emit the dirty electricity.
Cell phone towers themselves are a substantial source of dirty electricity. When I interviewed Sam Milham, M.D., M.P.H., an epidemiologist and the author of Dirty Electricity,21 on my website, mercola.com, he pointed out:
Every cell tower in the world makes dirty electricity by the ton. Lots of schools have cell towers on campus. What they’re doing is they’re bathing the kids [with EMI, or electromagnetic interference—dirty electricity]. It gets back into the wires; the ground wires and power wires that service it. The grid becomes an antenna for all this dirty electricity, which then extends miles downstream.
Solar panels and wind turbines are also major contributors to dirty electricity levels, or rather, their inverters are. Solar panels generate low-voltage DC electricity, which isn’t usable by either the wiring in your home or the power grid. So the panels are usually connected to an inverter, which converts the DC into AC and raises the voltage to 120 volts.
Many people who have installed solar panels (photovoltaic panels) on their homes are completely unaware of the fact that their inverters are a source of dirty electricity. Large, commercial solar arrays have a similar problem, as they also use inverters—sometimes thousands of them if they’re really big arrays—and they all generate EMI or dirty electricity.
When I had my solar panels installed at my home many years ago, I was unaware of this problem. Once I learned of the issue, I was able to remediate this powerful source of dirty electricity, and I will share how you can do this too later in the book. This is important because it is clear that the country is moving rapidly toward renewable energy, which uses these inverters that produce dirty electricity. So eventually it will be a problem for most of us.
In my book Fat for Fuel, I chronicled how processed vegetable oils, such as cottonseed, soybean, and canola, debuted at the end of the 19th century and then proliferated through the food system at an ever-expanding rate—as did the incidence of heart disease.
The relationship between the rise of electrification and chronic diseases follows an eerily similar trajectory and, I believe, presents a compelling case that this electrification—and the expansion of EMF-emitting devices that came along with it—is one of the primary reasons for the epidemic of chronic diseases we are now experiencing.
It seems like we’ve always had instant and widespread access to electrical power, but the reality is that it didn’t exist until about 150 years ago. And it took nearly another 75 years before it became widely available in the U.S. outside of urban areas.
The introduction of electrical service began in the late 1870s, when Thomas Edison was working in his New Jersey lab to develop an incandescent light bulb that used DC power to heat a filament until it glowed. It took him 14 months of testing, but on October 21, 1879, Edison got an incandescent bulb to glow for 13½ hours. He patented his light bulb in 1880.
The first people to enjoy on-demand incandescent light in their homes were well-to-do families in New York City, with small generators used to power each individual home. The question then became, how to get electricity to multiple homes in multiple locations?
Rural areas remained largely without power, however, and for more than 50 years there were essentially two populations in the U.S.: those who lived in urban areas and had access to electricity, and those who lived in rural areas and did not. It wasn’t until the 1950s that the electric grid reached most outlying areas, thanks to the Rural Electrification Administration.
Of course, there are still vast swaths of the world without electricity—primarily in sub-Saharan Africa and central Asia. In fact, as of 2016 an estimated 13 percent of the world’s population didn’t have access to electricity.22
The number of people worldwide who don’t have electricity is still significant, although it does get smaller every year; 2017 was the first year that number fell below 1 billion,23 and 100 million people throughout the world gain access to electricity every year.24
That means we haven’t yet achieved peak EMF saturation on Earth. As more regions of the world become electrified, and as more technology evolves and spreads that produces EMF during its use, our exposure will only continue to grow.
X-rays are among the best examples of society’s blind trust in the ability of technology to improve lives, well before that technology’s physical effects are understood or even examined. At the turn of the 20th century, Americans embraced X-rays just as their grandchildren would later welcome wireless technologies—with a near-total lack of health concerns.
X-rays were first discovered in 1895 by Wilhelm Conrad Röntgen, a physics professor at the University of Würzburg in Germany. Röntgen was experimenting with a cathode ray tube when he noticed that a wooden board coated with a phosphorescent material, resting on a nearby table, glowed whenever the cathode ray tube was in operation.
Legend has it that he then covered the cathode ray tube in thick black paper, yet still the phosphor-coated board emitted a subtle luminescence. Röntgen knew then that he had discovered some type of invisible ray that followed an unexpected path. Because he didn’t quite understand where the ray came from, or how it worked, Röntgen named this unknown ray an “X-ray,” with the X representing its unknown origin.
X-rays quickly caught the attention and imagination of medical and scientific experts at the time. Thomas Edison was one of the early and enthusiastic experimenters with X-ray technology. In 1896, he even invited reporters to his lab to witness a series of experiments with X-rays.
Quickly believed to cure acne and heal other skin conditions, shrink tumors, and cure cancer, X-rays offered the promise of medical miracles without surgery. The media furthered this promise by running articles heralding X-rays’ healing abilities, such as the 1896 Chicago Daily Tribune article that ran with the headline, “Is the X Ray a Curative Agent?”25
X-rays’ “magical” ability to reveal the vast unknown sparked a fascination that encouraged their widespread use. Salons used them to remove hair, photographers used them to craft far more intimate portraits, and hobbyists made or purchased their own X-ray machines for personal experimentation.
By 1920, these magic rays were being used in airports (to inspect luggage), in the art world (to authenticate paintings), and in the military (to evaluate the structural integrity of ships, planes, and cannons). X-ray machines even pervaded rural areas well before the electric grid had spread to more remote regions, with generators, sometimes gasoline-powered, adding to the sheer sensory spectacle the early machines provided.
A well-known radiation martyr was Pierre Curie, who, along with his wife, Marie, discovered the radioactive element radium and coined the term radioactivity.
Although Pierre didn’t die as a direct result of his radiation-triggered ailments, which included pervasive dermatitis and radiation sickness, he surely would have had he not first been run over by a horse-drawn carriage in 1906. His wife, Marie, as well as their daughter, Irène, and her husband, Frédéric Joliot-Curie, all died of radiation-induced illnesses.
Yet, the fact that people were dying because of exposure to X-rays did little to stifle their use. A 1926 New York Times article described the fate of Frederick Baetjer of Johns Hopkins University, who lost eight fingers and an eye, and endured 72 surgeries as a result of his work with X-rays.26 Despite these obvious examples of X-rays’ potential for danger, they soon expanded into use in, of all places, shoe stores.
One particular use of X-rays implemented shortly after their discovery was to image the bones and soft tissues of the feet inside shoes.
The device—the shoe-fitting fluoroscope—was a wooden cabinet with a space at the bottom for customers to insert a foot inside the shoe they were considering buying. Peering into the viewer, one could see the bones and soft tissues of the foot inside the shoe and determine whether the shoe fit properly.
The X-ray tube was located at the bottom of the cabinet, separated from the compartment for the customer’s foot by a thin aluminum or lead lining. It pointed straight up, which meant that not only did the feet get irradiated, but so did the legs, pelvises, and abdomens of the people crowded around the contraption.
In fact, the entire body of the child being measured—along with the parent and the salesman—was bathed in radiation; others in the shop were also being irradiated through the walls of the machine.
The machine also irradiated the hands of the shoe salesman, who would often reach into the compartment to squeeze the customer’s foot during the X-ray procedure. There were many reported cases of shoe salesmen contracting dermatitis of the hands, and at least one shoe model had to have her leg amputated due to a severe radiation burn.27
Shoe stores rapidly adopted foot fluoroscopes from the 1920s to the late 1940s. By the early 1950s, it is estimated that there were 10,000 of these machines in use throughout the United States, with an additional 3,000 machines in the United Kingdom, and approximately 1,000 in Canada.28
Fig. 1.3. Pedoscope Company Advertisement, The Shoe & Leather Journal, 12 June 1938, page 73.
The manufacturers of shoe-fitting fluoroscopes also deluded parents into believing that the machines could guarantee a better fit and, therefore, a lower chance of impaired foot development caused by shoes that were too restrictive. These whiffs of scientific truth gave confidence to the moms who were largely responsible for making the purchasing decisions.
In this way, the shoe-fitting fluoroscope was a perfect example of science providing cover for naked capitalistic ambitions. Americans were lured into sacrificing their health by a concealed effort to increase sales for shoe retailers.
Similarly, today we’re told we need ever-increasing exposure to wireless radiation in the name of faster download speeds and better connectivity, when what’s primarily driving the industry’s growth is a hunger to sell more products and services, no matter what the health costs may be.
What is important to note here is that the foot fluoroscopy craze happened well after American doctors and scientists knew that exposure to X-rays was dangerous. There had already been many well-publicized incidences of agonizing deaths from radiation exposure by so-called martyrs to science. There were some calls to abandon the foot X-ray machines, but it took decades for the message to be fully heard, and the machines to fall out of use.
It wasn’t until after World War II and the dropping of the first atomic bomb that concerns over radiation exposure grew to such a point that governments and the public began in earnest to pursue a path toward banning foot fluoroscopy use. In March 1948, New York City became one of the first places to regulate the machines.29
A 1950 New York Times article noted that shoe store personnel and customers (both adults and children) who were repeatedly exposed to the fluoroscope throughout the year had an increased risk of suffering from stunted growth, dermatitis, cataracts, malignancy, and sterility.30
In 1953, the esteemed journal Pediatrics published an editorial that called for ending the practice of using shoe-fitting fluoroscopes on children.31,32 By this time, the ball was really rolling. In 1954 the International Commission on Radiological Protection called for the curtailment of X-ray use for anything other than “medical procedures.”33
It still took a few more years for legislative action to protect consumers. In 1957, Pennsylvania became the first state to outright ban the use of shoe-fitting fluoroscopes.34 In 1958, New York City withdrew all the fluoroscope permits it had issued. By 1960, 34 states had passed some form of regulatory legislation.35 By 1970, there were as few as two machines still in operation in the world.36
In the end, these radiation-spewing machines were unleashed on the public for more than three decades, despite the dangers being well known from the very beginning of their proliferation.
Overall, the 30-year use of deadly fluoroscopes to sell shoes is an undeniable example of how profit so often trumps common sense. We are living through another decades-long lag between the introduction of an exciting new technology and the regulation of said technology by the government.
I hope that my sharing the story of foot fluoroscopes with you here (and the eerily similar story of the rise and fall of the tobacco industry that you’ll read in Chapter 3) will help convince you that we can’t trust technology companies to protect their customers’ health, we can’t trust the government to protect consumer health, and we can’t even trust ourselves to consider the potential for harm when we are introduced to exciting new technologies.
We have to take measures into our own hands to protect ourselves from exposure, to educate ourselves as consumers, and to advocate for our health and the health of our planet to our lawmakers.
Another innovation that extended the influence of EMFs in daily life was the development of microwave technology. Microwaves were first predicted by mathematical physicist James Clerk Maxwell in 1864. The first practical application of microwaves was radar, which was first produced in 1935 by the British physicist Sir Robert Watson-Watt and came into more widespread use by the military during World War II.
The term radar is an acronym for radio detecting and ranging. Radar frequencies sit in the microwave range of the electromagnetic spectrum: some radar equipment operates in the same frequency range as the cellular telephone, 800–900 MHz, while other radar systems operate at higher frequencies, around 2,000 MHz (2 GHz).
In 1945, radar began to be used in an entirely new way when an engineer named Percy Spencer discovered that a peanut-cluster candy bar in his pocket had melted while he stood near a radar component known as a magnetron. Quite by accident, he had discovered that microwaves were capable of heating food. The microwave oven has since evolved into one of the most popular household appliances in the world.
After Spencer demonstrated that higher frequency radar, around 2.45 GHz (the same frequencies now used by many cordless phones, cell phones, and Wi-Fi), could cook popcorn and eggs, his employer, Raytheon, agreed that they had a new mode of cooking on their hands. Raytheon and Spencer went on to patent the Radarange oven and brought it to market in 1947.
The first Radarange was as big as a refrigerator. It weighed 750 pounds and cost $5,000 (the equivalent of more than $57,000 in today’s economy). Due to a combination of its steep cost, large size, and unfamiliar technology, the Radarange was a commercial flop. But the concept stuck around long enough to see the microwave oven enjoy a meteoric rise in popularity.
By 2015, the U.S. Census Bureau37 estimated that 96.8 percent of American households owned a microwave oven. While microwaves undoubtedly shorten cooking times and can get dinner on the table much more quickly, this convenience comes at a high price in terms of EMF exposure and secondary health consequences, as your microwave, when it is on, is likely the biggest source of radiation exposure in your home. (Cumulatively, however, your Wi-Fi router creates a larger EMF risk.)
Another novel use of microwave radiation was discovered in the 1950s, when researchers first developed the cordless phone. Although not widely available to consumers until the 1980s, cordless phones were quickly embraced. According to a 1983 New York Times article,38 50,000 cordless phones were sold in 1980. By 1982, that number had jumped to just over a million.
Cordless phones worked by using radio waves to communicate between the base of the phone and the handset. They started out using lower frequencies, such as 27 MHz, and quickly grew to 900 MHz, then 2.4 GHz, and even as high as 5.8 GHz.
The rush to switch from traditional, corded household telephones to cordless versions meant the biggest introduction of EMFs to homes since the widespread adoption of the microwave. But there was more to come.
As cordless phones were swelling in popularity, cell phones were just getting started. On April 3, 1973, Martin Cooper, the Motorola engineer who developed the world’s first working cell phone, placed the first wireless phone call. While Cooper was undoubtedly aware that his invention would change the way people communicated with each other, it’s doubtful he could have ever imagined just how much the cell phone would change life as we know it.
It took another 10 years for Motorola to develop a cell phone that was available to the public. In 1983, the company debuted the DynaTAC—a model that weighed 1.75 pounds and cost $3,995,39 or the equivalent of nearly $10,000 in 2019. It took several more years for the price and the size of cell phones to come down enough to become widely accepted.
Throughout the 1980s and early 1990s, mobile phones slowly gained acceptance—they were quite the status symbol in the early days. It wasn’t until the late 1990s and 2000s that cell phones truly gained mass appeal. In 1998, 36 percent of American households owned a cell phone. By 2001, that figure was 71 percent.40
By 2005, 33.9 percent of the global population had a mobile subscription, according to a 2015 Information and Communications Technology (ICT) report.41 Ten years later, that number was up to 96.8 percent.
By the second decade of the new millennium, cell phone use around the world had proliferated to the extent that mobile devices were more available than the Internet, landlines, and even running water.
According to the 2016 Household Survey on India’s Citizen Environment & Consumer Economy, 77 percent of the poorest Indians had cell phones, while only 18 percent had access to tap water.
And their usage rates are still going up: according to a report from the research firm IHS Markit,42 the number of global smartphones is expected to reach six billion by 2020, up from four billion in 2016.
Cell phone usage is dependent on towers that receive and transmit radio waves—your voice is converted into a digital stream of information that is sent to the nearest cell tower where it is received and then sent back out to the person on the other end of your call.
The incredible popularity of cell phones and constant desire for cell phone coverage means that more and more cell phone towers are needed to broadcast and receive radio waves (which are EMFs) over greater and greater areas.
According to the World Bank, 99.9 percent of Americans have mobile network coverage.43 This is important because if you have a cell phone signal—even if you aren’t using your phone at that moment, or don’t even have a cell phone—you are being exposed to radiation. When you begin using the phone and hold it close to your body, you are being exposed to even more.
As demand for more functionality from mobile devices—such as watching videos—rises, cell towers must be expanded and strengthened, with new frequencies added to handle the load.
In addition to receiving and transmitting radio waves, cell phone towers are also sources of dirty electricity, as they must convert AC current from the grid into DC, which the transmitters use for power and which charges the backup batteries.
Of course, cell phones emit even more EMFs when you are using them to make a call or access the Internet (whether by Wi-Fi or the cellular network), and this exposure increases the closer you hold it to your body.
Even cell phone manufacturers admit this, because they state in their user manuals that cell phone users should always keep their phone at least 5 to 15 millimeters away from their body. Sadly, this information typically appears only deep inside the manual, which very few people ever read.
Cell phone antennas point in all directions. This is why getting measurements from a qualified expert is important—especially measurements that treat your body itself as an antenna for radio frequency (RF). Directional meters register only the signals arriving from the direction the meter is pointed.
Your body is exposed from all angles, so, like an antenna, it collects micro-voltages across multiple frequencies from all directions. Some antennas could be aimed right at your home, while others could be aimed away or blocked by obstructions that reflect the energy elsewhere.
To see how much cell phone radiation you are exposed to at your home, office, or school, I encourage you to visit AntennaSearch.com. This site is a useful tool to see the various types of frequencies and saturation that you are exposed to in your living situation.
The best way to search is to process and view the “antenna results” instead of focusing on the “tower results.” The antenna results show you the frequencies you are exposed to as well as their locations relative to your home. Once the antenna results are loaded, a list of companies appears under “multiple” and “single.” The “multiple” listings represent multiple antennas, or frequencies, installed on a single tower.
There can be as few as two transmitters or as many as several hundred installed on one tower! Some people get a false sense of security using this site when they see only a few antennas yet fail to see how many transmitters are on each antenna. There could potentially be only five antennas near your home but several hundred transmitters when you add them all together.
To view the frequency and the number of transmitters, you have to click on each company’s name. The website will then open a new window with information about the frequency, the power output, and the power radiated.
You must do this for each company that comes up in the search results in order to add up all the various frequencies and understand the true saturation of your home’s location. The addresses of the towers are also listed, so you can drive by and see the antennas for yourself and try to determine if they are pointed toward your home or not.
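If you keep notes as you click through, a simple tally shows how quickly transmitters add up. The sketch below is purely illustrative—the company names and counts are hypothetical, and AntennaSearch.com itself offers no such export:

```python
# Hypothetical notes from an antenna search near one address.
# Each entry: (company operating the antenna, transmitters on it).
antenna_notes = [
    ("Carrier A", 12),
    ("Carrier A", 9),
    ("Carrier B", 24),
    ("Carrier C", 6),
]

# Sum transmitters per company, then overall.
totals = {}
for company, transmitters in antenna_notes:
    totals[company] = totals.get(company, 0) + transmitters

print("Transmitters by company:", totals)
print("Total transmitters nearby:", sum(totals.values()))
# Just 4 antennas here, yet 51 transmitters in total.
```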
It surprised me to find out that my daily beach walks were taking me past a grove of cell phone towers. When I investigated further, I discovered the EMF readings (which I will teach you how to take in Chapter 7) were 1,000 times higher on the beach than inside my house! Now I take a different route and head south on the beach instead of north because there are fewer cell towers there and the radiation levels test lower.
The seeds of Wi-Fi were sown in 1985, when the FCC opened up several bands of the EMF spectrum for communication purposes without requiring a government license.44 The sections of the spectrum in question were 900 MHz, 2.4 GHz, and 5.8 GHz—referred to as “garbage bands” because they were already being used by devices such as microwave ovens.
It took the next 14 years for engineers and corporations to develop a regulated system that would enable devices made by different vendors to access a wireless broadband signal. To minimize interference between Wi-Fi signals and household appliances, Wi-Fi was developed to transmit by bouncing between multiple frequencies.
Wi-Fi burst onto the market and into the public consciousness in July 1999, when Apple released its first laptops with Wi-Fi capability via an adapter made by Lucent Technologies called an AirPort.
These early adapters freed laptop users from needing to be plugged into an Internet connection while working at home, and the technology spread quickly. We have now come to rely on and expect wireless access to the Internet in our offices, homes, hotels, and coffee shops. Entire cities have established virtually ubiquitous and continuous wireless access to the Internet.
New classes of devices, such as tablets like the iPad, were developed primarily for their ability to connect wirelessly to the Internet and allow users to read books, play games, watch videos, and check e-mail without needing access to a full computer.
Unlike computers, these devices are often held just inches from a user’s face, where the radiation exposure is dramatically higher than at arm’s length (as with a desktop).
According to a report by the Pew Charitable Trusts, in 2010 only 3 percent of Americans owned a tablet; by 2016 that number was up to 51 percent.45 And it’s expected to rise to 62 percent, or 185 million people, in the U.S. by 2020.46 What all this connectivity also delivers is constant exposure to radiation.
It’s not just that more people have wireless access to the Internet; we’re also spending ever more time using these wireless connections—nearly three times as much as at the start of the 21st century.
The 2017 Digital Future Report by USC Annenberg’s Center for the Digital Future found that Americans spend 23.6 hours per week online—up from 9.4 hours in 2000.47 That’s more than just a lot of screen time—it’s a lot of time being bombarded by unhealthy EMFs.
Riding on the popularity of Wi-Fi is the development of appliances that use a wireless Internet connection to provide access to information, monitoring, and reporting.
These include thermostats that you can adjust by using an app on your smartphone; baby monitors, refrigerators, and “smart” utility meters that report your consumption to the utility company without needing to send a representative to read it; and virtual home assistants such as Google Home and Amazon’s Alexa.
Collectively known as the Internet of Things, these so-called smart devices raise concerns about privacy and security as they are vulnerable to hacking.
But the other risk they pose is that they become yet another source of EMF radiation and dirty electricity in your home. There were 15.4 billion connected devices worldwide in 2015, a number that is predicted to go up to 75.4 billion by 2025.48
And to top it all off, in order to make the Internet of Things possible we will be forced to adopt 5G, which poses a huge risk to public health that I’ll cover in Chapter 2.
Every scientific and technical development I’ve shared in this chapter brings with it a mixed blessing. On the plus side, the gadgets and technology offer greater convenience, enhanced capabilities, and a leap forward in our ability to expand our learning. On the negative side, they provide ever-larger exposures to EMFs in amounts that humans have never before experienced. It is only natural to think that there would be some health consequences of this.
One of the guiding principles I’ve used throughout my four decades of practicing natural medicine is to compare new research to our ancestral heritage to see how it reconciles.
Let’s apply this thinking to EMFs and compare the types and amounts of EMFs your ancient ancestors were exposed to with the types and levels you are subjected to today.
Your ancestors did encounter electromagnetic radiation, from their own cells, the Earth’s magnetic field, the atmosphere’s electric field, lightning, and, of course, the sun.
Comparing that to today—when, in addition to this natural radiation, we are continually exposed to more and more man-made electromagnetic radiation—really isn’t fair, since, as you just learned, man-made EMFs didn’t exist until about 170 years ago. So let’s compare the EMF exposure of the early 1900s to today’s instead.
To make an accurate comparison, we need to restrict our answer to a specific frequency. So let’s choose a pervasive one that nearly all of us are exposed to: 2.4 GHz, which is very close to the frequencies your Wi-Fi and cell phones use.
So, how much of an increase in your exposure to EMF have you had in the last 100 years?
I have posed this question to thousands of individuals in many of the lectures I have given, and no one has ever answered it correctly. In fact, no one has ever come close—because the answer is truly mind-boggling. Typical answers range between 10 and 1,000 times more exposure now than 100 years ago. The rare, courageous soul will guess a million times more. But even this seemingly outrageous guess is off by many orders of magnitude.
The answer is well beyond a billion. It is larger even than one trillion. The truth is, we are exposed to a billion billion times more EMFs now than we were just 100 years ago. (In case you were wondering, a billion billion is a 1 followed by 18 zeros.)49
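The arithmetic behind that figure is simple (added here for clarity):

$$10^{9} \times 10^{9} = 10^{9+9} = 10^{18},$$

a 1 followed by 18 zeros.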
(For my scientifically minded readers: Even if small amounts of wideband frequencies existed as background radiation from the big bang that many theorize created the universe, the man-made frequencies we encounter today have a different shape and polarity—they are square and pulsed—than any naturally occurring frequency. As such, you could argue that we are exposed to infinitely more EMFs.)
Your body was never designed to be exposed to these levels of EMFs. It takes thousands and thousands of years for evolution to do its work and for humans to adapt to changing environments. One hundred years in evolutionary terms is not even a tiny fraction of the time required to adapt to this type of exponential change. Thus, it is perfectly reasonable to suspect that there will be some health consequences from persistent exposure to this level of radiation.
Figure 1.4: Typical daily human exposures over time of natural and manmade radio-frequency electromagnetic power densities, plus ICNIRP safety guidelines.
Essentially, our hunger for electronic devices and connectivity turns us into research subjects in a global health study—one that we never consented to be part of, and one that is becoming increasingly difficult, if not impossible, to opt out of. And one of the biggest reasons we won’t be able to opt out is the widespread adoption of 5G—a topic we’ll unpack in the next chapter.