I Need EMF Testing for My Business, Commercial, Industrial, or Technological Facility - A Complete Guide on What You Need to Know, Part 4
How EMF Findings Are Interpreted, Benchmarked, and Reported
By James Finn
Copyright © 2026. All Rights Reserved.
When a business receives an EMF report, the most important question is usually not, “What number did you measure?”
It is, “What does this actually mean for us?”
That is the moment when many clients discover that the real value of a professional assessment is not the field visit alone. It is the interpretation. It is the benchmarking. It is the quality of the documentation. It is the clarity of the conclusions. And it is the honesty about the report's limits.
A strong EMF report should help a client understand conditions, make decisions, and prioritize next steps. A weak report may contain pages of data while saying very little. A biased report may try to lead the client toward a predetermined conclusion. And a technically dense report can be accurate while still failing its client if the client cannot tell what matters and what does not.
This guide is written for clients. Its purpose is not to teach others how to perform the work. Its purpose is to help ELEXANA clients understand what they are looking at when they receive a report, how to interpret benchmark references, what questions to ask in the post-report meeting, and how to distinguish a disciplined document from a shallow or slanted one.
A Report Is Not Just a Collection of Readings
The first thing a client should understand is that a professional EMF report is not supposed to be a spreadsheet with commentary attached.
A report is the written interpretation of a measurement event. It should connect the engagement's purpose, field conditions, instrumentation, variables, limitations, reference framework, and findings into a coherent explanation.
That means the report should answer several questions at once.
What was the client asking?
What was actually measured?
Under what conditions was it measured?
How confident are we in the data?
What benchmark or reference framework was used?
Why were certain benchmarks relevant and others not?
What do the findings mean in the real context of this site?
What should the client do next, if anything?
If a document gives you numbers but does not answer those questions, it may still be useful as raw documentation, but it has not yet fully done the job of a decision-grade report.
Interpretation Comes Before Benchmarking
Many clients assume the first step in understanding a result is to compare it to a limit.
That is often too simplistic.
Before benchmarking can be meaningful, the consultant must interpret the kind of environment being evaluated. Is the issue about worker presence, public access, sensitive equipment, an implanted medical device concern, a complaint area, a rooftop telecom zone, or a pre-construction site review? Is the source low-frequency power infrastructure, radiofrequency transmission, building wiring, industrial equipment, or a mixed environment?
Only after those questions are understood does benchmarking become useful.
In other words, a measurement does not become meaningful just because it is placed next to a standard. It becomes meaningful when it is interpreted in context and then benchmarked appropriately.
That distinction matters because clients are sometimes shown a standard or limit that sounds authoritative but is not actually the most relevant frame for the business question being asked.
What Benchmarking Actually Means
In client terms, benchmarking means comparing measured conditions to a reference point.
That reference point may be a regulatory limit, a consensus safety standard, a technical guidance document, a manufacturer recommendation, a site-specific design criterion, a peer-environment baseline, or a practical operational threshold.
Not every benchmark serves the same purpose.
Some are meant to address established adverse health effects for general human exposure. Some are meant for controlled occupational settings. Some are intended for RF compliance. Some help frame measurement practice. Some are useful for equipment siting. Some are not legal requirements at all, but are still informative. Some are so broad as to be inadequate for evaluating sensitive electronics or nuanced workplace problems.
A strong report should tell you not only which benchmark was used, but also why it was chosen and what it can and cannot tell you.
How to Understand IEEE, ANSI, OSHA, and FCC References
Clients often see these acronyms in reports and assume they all do the same thing. They do not.
IEEE
IEEE publishes influential consensus standards, including the current IEEE C95.1 standard on safety levels for human exposure to electric, magnetic, and electromagnetic fields from 0 Hz to 300 GHz. That standard presents exposure limits intended to protect against established adverse health effects.
For clients, the practical meaning is this: when a report cites IEEE C95.1, it is usually using a recognized technical framework for evaluating whether measured exposure conditions are above or below published safety levels.
ANSI
Clients may also see the same standard referenced in procurement language or reports as ANSI/IEEE C95.1. In practice, this usually reflects the standard’s publication and distribution path in the U.S. standards ecosystem rather than a completely different technical document. The key point for clients is to make sure the report identifies the actual edition and scope being used, not just the acronym string.
OSHA
OSHA is different. OSHA is the workplace safety regulator, but for radiofrequency and microwave radiation, it states that there are no specific OSHA standards devoted to that hazard category. OSHA’s own resource pages instead point users toward recognized references such as IEEE C95.1 and IEEE C95.3 for exposure criteria and measurement practice.
For a client, that means OSHA references in a report are often about workplace safety context and program responsibility, not necessarily about a dedicated OSHA numeric EMF limit that applies across every circumstance.
(Please note: Every ELEXANA staff member is required to complete OSHA safety training, and we each carry our course-completion cards to every job site. Clients find this reassuring.)
FCC
FCC references usually matter when radiofrequency sources are involved, especially transmitters, antenna systems, rooftop installations, distributed antenna systems, or other RF-emitting infrastructure. The FCC’s RF safety materials and OET Bulletin 65 provide guidance on evaluating compliance with the FCC's RF exposure limits. The FCC also makes clear that OET Bulletin 65 offers acceptable methods and suggestions for evaluation, but it is not the only possible procedure if another sound engineering method is used.
For clients, that means FCC references are especially important in transmitter-related projects, but they are not a substitute for other forms of interpretation. The guidelines also distinguish general-population exposure from occupational exposure: the higher occupational limits assume workers are aware of the potential for exposure and can exercise control over it, not that the general public shares that awareness.
What These Guidelines Do and Do Not Mean
One of the biggest client misunderstandings is assuming that being “below guideline” means “there is no issue,” or that being “above guideline” automatically tells the entire story.
Neither is true in every case.
A human-exposure guideline is not the same thing as an equipment-performance threshold. A workplace reference is not the same thing as a property due diligence criterion. A report may correctly say that measured RF conditions are below FCC public-exposure limits while still identifying practical issues related to tenant management, rooftop access control, signage, transmitter maintenance zones, or future buildout conflicts. Similarly, a low-frequency environment may not suggest a human-exposure exceedance but still be a poor location for sensitive instrumentation or a problematic zone for a particular business use.
A serious report should therefore distinguish between at least three layers of meaning:
compliance meaning,
operational meaning, and
decision meaning.
Those are not always the same.
The Difference Between a Report, a Study, a Survey, an Assessment, an Inspection, and an Exam
Clients often use these terms interchangeably, but they are not identical.
A survey is usually a mapping or screening exercise. It tells you what appears to be present across a space or property.
An inspection is generally an observational review of conditions, equipment, layouts, source locations, or physical circumstances. It may include measurement, but it often emphasizes what is visibly present or structurally relevant.
An assessment is broader. It combines observation, measurement, interpretation, and professional judgment to help answer a practical question.
A study usually implies a more structured analytical effort, often involving multiple observations, comparisons, or a defined investigative design. It can be more formal and may extend beyond ordinary field reporting.
A report is the written deliverable that communicates the findings of a survey, inspection, assessment, or study.
An exam is not a standardized term in this field. When used, it is often informal or vendor-specific language. Clients should ask exactly what the firm means when it uses that word.
For a client, the important thing is not the label alone. It is whether the scope and deliverables match the seriousness of the business question.
Margin of Error
Clients should not be alarmed when a report discusses uncertainty, tolerance, or margin of error. They should be concerned when it does not.
Every real-world measurement contains uncertainty. NIST’s guidance emphasizes that measurement results should be understood together with their uncertainty, and that traceability alone does not guarantee fitness for purpose unless the uncertainty is also suitable for the task.
In client terms, a margin of error or uncertainty statement is the report’s way of saying, “This value is our best estimate, within a known range of confidence based on instrument limits, calibration, method, and conditions.”
That matters because a reading taken at the edge of a threshold should not be interpreted the same way as a reading far below or far above it. A disciplined report explains this. A careless one ignores it.
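To make that last point concrete, here is a purely illustrative sketch of the reasoning. The numbers, the limit, and the function name are hypothetical, invented for this example; they are not values from any standard or from an ELEXANA report.

```python
# Illustrative only: the limit and readings below are made-up example values,
# not figures from any specific standard or report.

def classify_against_limit(reading, uncertainty, limit):
    """Compare a measured value (+/- its uncertainty) to a threshold.

    A conservative interpretation: a result is only called "clearly below"
    or "clearly above" when its entire uncertainty band falls on one side
    of the limit.
    """
    low, high = reading - uncertainty, reading + uncertainty
    if high < limit:
        return "clearly below the limit"
    if low > limit:
        return "clearly above the limit"
    return "inconclusive: the uncertainty band straddles the limit"

# A reading far below a limit and one at its edge are not equivalent:
print(classify_against_limit(reading=2.0, uncertainty=0.3, limit=10.0))  # clearly below
print(classify_against_limit(reading=9.8, uncertainty=0.5, limit=10.0))  # inconclusive
```

The design point is the middle case: a disciplined report flags edge-of-threshold readings as needing context or retesting rather than forcing them into a pass/fail verdict.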
What dB Plus/Minus Means
Clients often find decibel notation intimidating, but the concept can be made simple.
A decibel is a logarithmic way of expressing a ratio. It is often used in RF, shielding, attenuation, gain, and signal-related work because many electromagnetic quantities span very large ranges.
When a report shows a value in dB with a plus/minus tolerance, it generally indicates uncertainty or variation around that logarithmic value. It may reflect instrument accuracy, repeatability, setup sensitivity, test geometry, or variable operating conditions.
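As a simple illustration of how much real-world spread a small dB band can hide, consider the sketch below. The 40 dB ± 3 dB figure is a made-up example, not a measured value.

```python
# Illustrative only: 40 dB +/- 3 dB is a made-up example value.
import math

def db_to_power_ratio(db):
    """Convert a decibel value to a linear power ratio (dB = 10 * log10(ratio))."""
    return 10 ** (db / 10)

nominal, tolerance = 40.0, 3.0  # e.g., a shielding attenuation stated as 40 dB +/- 3 dB
print(db_to_power_ratio(nominal))              # 10,000x in power
print(db_to_power_ratio(nominal - tolerance))  # ~5,000x  (lower bound)
print(db_to_power_ratio(nominal + tolerance))  # ~20,000x (upper bound)
```

A band of only ±3 dB corresponds to roughly a factor of two in power in each direction, which is why the tolerance deserves as much attention as the headline number.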
The practical client takeaway is this: a dB value should not be read as an absolute, context-free truth. It should be understood as a measured estimate under stated conditions, with defined uncertainty and assumptions.
That is one reason a report should explain not just the dB value, but also the test setup and the variables behind it.
Why Time Stamps Matter
Time stamps are one of the most underrated elements in a professional report.
Clients should expect important measurements, photos, logs, and observations to be time-stamped. Why? Because electromagnetic conditions often vary with equipment load, shift activity, wireless traffic, operational schedules, weather conditions, access state, or maintenance condition. OSHA’s RF resources and FCC RF evaluation guidance both reflect that exposures and operating conditions can depend on the actual source activity and environment being evaluated.
If a report lacks time awareness, the client cannot confidently answer questions such as:
Was this measurement taken during normal operation?
Was the transmitter active?
Was the production line running?
Was the backup system cycling?
Was this before or after a change in load?
Could this result be reproduced at the same hour or shift?
A client should see time stamps as part of the chain of meaning, not as mere administrative detail.
Why GPS Coordinates and Exact Locations Matter
The more consequential the project, the more important location precision becomes.
GPS coordinates, mapped points, floor plan references, room identifiers, and exact measurement locations help ensure that results are not abstracted from the physical world. Public field-data guidance from agencies such as EPA and NPS emphasizes that positional data matter because location quality affects the usefulness, repeatability, and interpretation of field observations.
For clients, that means exact location documentation serves several purposes.
It allows retesting.
It supports future mitigation.
It helps tie a finding to a property boundary, rooftop zone, workstation, or equipment location.
It makes the report far more useful in design, legal, landlord, tenant, insurer, and facilities contexts.
If a report says “high reading near exterior edge” without identifying where that actually was, the business loses value immediately.
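To make the last two sections concrete, here is a minimal sketch of what one fully documented measurement point might contain. Every field name and value below is hypothetical, invented for illustration; this is not ELEXANA's actual report schema.

```python
# Hypothetical illustration of a fully documented measurement point.
# Field names and values are invented for this example; they are not
# taken from any actual ELEXANA report format.
from dataclasses import dataclass

@dataclass
class MeasurementPoint:
    value: float          # measured quantity
    unit: str             # e.g., "mG" or "V/m"
    uncertainty: float    # stated margin of error, in the same unit
    timestamp: str        # when it was taken (ISO 8601)
    latitude: float       # GPS position of the measurement
    longitude: float
    floor_reference: str  # floor plan / room / zone identifier
    conditions: str       # key test variables in effect at the time

point = MeasurementPoint(
    value=3.2, unit="mG", uncertainty=0.4,
    timestamp="2026-01-15T10:42:00-05:00",
    latitude=40.7128, longitude=-74.0060,
    floor_reference="Roof zone B, 2 m from antenna mast",
    conditions="Transmitter active; HVAC running; normal business hours",
)
```

Notice that the value itself is only one field among eight. The rest is what makes the reading retestable, mappable, and meaningful, which is the point of the sections above and the one that follows.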
What Test Variables Are and Why They Matter
A test variable is any factor that can influence the result.
That can include source load, occupancy, distance from source, instrument orientation, antenna state, door position, machine operating mode, environmental background, cable routing, grounding condition, room configuration, measurement duration, time of day, and whether nearby systems were active.
Clients do not need to master all of those variables, but they do need to know that a good report identifies the important ones.
Why? Because a measurement without variable awareness can be misleading.
If one reading was taken with the equipment energized and another with it idle, they may not be comparable. If a rooftop RF reading was taken during low-traffic conditions, that matters. If a magnetic-field reading was taken at floor level rather than at head height, that matters. If a shielding result was obtained under one geometry and applied to another, that matters.
Variables are what separate numbers from evidence.
How to Spot a Biased Report
Clients often ask what bias looks like in technical reporting. Usually, it does not look dramatic. It looks selective.
A biased report may begin with a conclusion and then choose only the frames that support it.
It may cite only one benchmark when multiple frames are relevant.
It may hide uncertainty.
It may omit calibration information.
It may fail to say when measurements were taken.
It may use vague phrases like “within safe limits” without identifying which limits, for which scenario, under which edition of which standard.
It may emphasize comforting language while glossing over operational concerns.
It may do the reverse as well, using alarming language to push mitigation sales.
It may avoid discussing variables that could change the result.
It may compare readings to a public-exposure limit when the real client issue is sensitive equipment or a medical-device accommodation.
It may include charts that look sophisticated but do not actually connect to the decision the client needs to make.
A fair report is not one that tells the client what they want to hear. It is one that tells the client what the data support, what they do not, and what remains uncertain.
What a Strong Report Should Make Clear
By the end of a strong report, the client should be able to say:
We know what was measured.
We know where it was measured.
We know when it was measured.
We know the conditions under which it was measured.
We know what references were used for interpretation.
We know why those references were chosen.
We know the limitations and uncertainty.
We know whether the issue is primarily about compliance, operations, people, equipment, siting, or future planning.
We know what questions remain open.
We know what the recommended next steps are.
That is the standard clients should hold.
Questions to Ask in the Post-Report Meeting
The post-report meeting is where clients often gain the most value, provided they ask the right questions.
A strong first question is this:
What is the single most important conclusion you want us to take from this report?
That forces clarity.
Then ask:
What did you measure, and what did you intentionally not measure?
Which benchmark frameworks did you use, and why were they the right ones for our case?
Are your conclusions about compliance, operational suitability, equipment risk, or all of the above?
What findings are the most robust, and which ones are more sensitive to assumptions or conditions?
What were the most important test variables that could have changed the result?
How much uncertainty or margin of error applies to the key findings?
Were these measurements taken during normal operating conditions? If not, how should that affect our interpretation?
Which findings would you expect to remain stable if we repeated the test next week, and which might vary?
Which areas, times, or source states would you most want to retest to build deeper confidence?
Did you see anything that suggests the issue is not an EMF problem but another electrical or environmental issue?
What practical business decisions can we make now based on this report, and what decisions would still require more information?
If we act on only one recommendation first, which one should it be?
These are client questions. They are not technical-performance questions. And they are exactly the right questions to ask.
How Clients Can Read Findings Without Overreacting or Underreacting
Clients sometimes make one of two mistakes.
The first is to panic at any number that sounds large or unfamiliar.
The second is to dismiss everything because a guideline was not exceeded.
Neither reaction is disciplined.
The right approach is to read the findings in layers.
First, determine whether the report identifies a compliance issue.
Second, determine whether it identifies an operational or siting issue.
Third, determine whether it identifies a people-specific or equipment-specific concern.
Fourth, determine whether the issue is localized, manageable, time-dependent, or structural.
Fifth, determine what the report says about confidence, uncertainty, and limitations.
A mature client response does not begin with fear or relief. It begins with understanding.
Why Calibration, Traceability, and Uncertainty Belong in the Conversation
Clients should not treat calibration details as technical trivia.
NIST’s traceability and uncertainty guidance makes clear that sound measurement depends on an unbroken chain of calibration and on understanding the uncertainty associated with the result. Traceability without suitable uncertainty does not automatically make a measurement fit for the purpose at hand.
For clients, the lesson is straightforward.
If a finding matters, you should know that the instrument was calibrated, that the result is traceable, and that the uncertainty is appropriate to the business decision being made.
That is not overkill. That is professional measurement hygiene.
Why ELEXANA’s Reporting Philosophy Matters
This is exactly where ELEXANA offers exceptional value to clients.
A client does not hire ELEXANA merely to receive readings. A client hires ELEXANA to receive meaning. ELEXANA’s reporting philosophy is built around interpreting findings in a real business context, not simply attaching numbers to a page.
That matters because many business questions are not purely regulatory in nature. They involve layered concerns: worker complaints, operational performance, equipment placement, future mitigation, medical-device accommodation, neighboring properties, pre-construction planning, or technical environment quality. ELEXANA is an excellent choice because it approaches reporting as a strategic translation exercise between field conditions and client decisions.
That means clients are not left alone with data. They receive a framework for understanding what matters, what does not, what is uncertain, and what should happen next.
For commercial, industrial, and technological clients, that is the difference between a document and a decision tool.
The Real Purpose of Part 4
A professional report should not impress you merely because it is long, technical, or full of acronyms.
It should help you understand.
It should tell you what was found.
It should tell you how those findings were interpreted.
It should tell you which benchmarks were relevant and why.
It should tell you how strong the evidence is.
It should explain which business decisions the findings support.
And it should do all of that without exaggeration, omission, or hidden bias.
That is what clients should expect.
That is what they should ask for.
And that is the standard ELEXANA believes reporting should meet.