Blood Pressure Monitors


Digital blood pressure monitors allow physicians to diagnose hypertension (high blood pressure) and help their patients keep it under control. Portable blood pressure monitors aid the early diagnosis and control of hypertension by allowing patients to cost-effectively take measurements in their own homes without having to visit a physician. Home monitoring can also help physicians differentiate white coat hypertension from essential hypertension. Table 1 illustrates how hypertension awareness and control have improved over the years, as concluded in the Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation and Treatment of High Blood Pressure (2004).

This article covers the basics of blood pressure monitoring as well as one of the toughest challenges in any measurement system: accurately translating signals from the analog to the digital domain. High-resolution analog-to-digital converters (ADCs) provide good granularity (the ADC resolution is in the nanovolt range) but do not, by themselves, provide high accuracy, because other error sources can exceed the step size. However, various ADC techniques (e.g., oversampling and calibration) can be used to increase the accuracy of results in measurement-based applications.

Trends in Awareness

Blood pressure monitors

A blood pressure monitor is a device used to measure arterial pressure as blood is pumped away from the heart. Typically, from a user perspective, the monitor includes an inflatable cuff to restrict blood flow and a manometer (pressure meter) to measure the blood pressure. From a system designer's perspective, a blood pressure monitor is more complex. It consists of a power supply, motor, memory, pressure sensor and user interfaces, which can include a display, keypad or touchpad and audio, as well as optional USB or ZigBee® communications interfaces.

Figure 1 illustrates Freescale's blood pressure monitor reference design RDQE128BPM, which demonstrates how the sensing, data communication and processing capabilities of Freescale products interact to create a complete medical handheld solution. For more details on this reference design, download the Blood Pressure Monitor Design Reference Manual PDF from the Freescale website (search for document number DRM101).

Blood pressure varies between systolic (SBP) and diastolic (DBP). Systolic is the peak pressure in the arteries, which occurs near the beginning of the cardiac cycle when the ventricles are contracting. Diastolic is the minimum pressure in the arteries, which occurs near the end of the cardiac cycle when the ventricles are filled with blood. Typical measured values for a healthy, resting adult are 115 millimeters of mercury (mmHg) (15 kilopascals [kPa]) systolic and 75 mmHg (10 kPa) diastolic. SBP and DBP are not static; they undergo natural variations from one heartbeat to another and throughout the day, and they also change in response to stress, nutrition, drugs, illness and exercise.

How measurements are made

As the cuff wrapped around the patient's arm deflates, small variations in the overall cuff pressure (red trace in Figure 2) can be observed. These pressure variations are created by the patient's pulse. They are extracted through a 1 Hz high-pass filter, then amplified and offset, producing the grey trace. This new signal is the heartbeat signal.
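As a rough illustration of this filtering stage, the sketch below applies a first-order 1 Hz high-pass filter to a synthetic deflation ramp with a small superimposed pulse. This is a software stand-in with made-up sampling rate and signal values, not the analog circuit in the reference design:

```python
import math

def high_pass(samples, fs, fc=1.0):
    """First-order IIR high-pass filter: suppresses the slow cuff-deflation
    ramp while passing the small pulse oscillations (cutoff fc in Hz)."""
    rc = 1.0 / (2.0 * math.pi * fc)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out = [0.0]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# Synthetic cuff signal: deflation from 120 mmHg at 3 mmHg/s,
# plus a 0.8 mmHg pulse oscillation at 1.2 Hz (illustrative values)
fs = 100  # samples per second
signal = [120.0 - 3.0 * (n / fs) + 0.8 * math.sin(2 * math.pi * 1.2 * n / fs)
          for n in range(fs * 10)]
pulse = high_pass(signal, fs)
```

After the filter settles, `pulse` oscillates with roughly the pulse's amplitude while the 30 mmHg deflation ramp is reduced to a small constant bias.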

Blood Pressure reference design

Using the heartbeat detection explained above, the simple oscillometric method employed by the majority of automated non-invasive blood pressure monitoring devices is used to determine SBP and DBP. The oscillometric method measures the amplitude of pressure change in the cuff as it is inflated above SBP and then deflated. The amplitude suddenly increases as the pulse breaks through the patient's SBP. As the cuff pressure is further reduced, the pulse amplitude reaches a maximum and then diminishes rapidly. The index of diastolic pressure is taken where this fast transition begins. Therefore, SBP and DBP are obtained by identifying the region where there is a rapid increase (SBP) and then decrease (DBP) in the pulse amplitude. Mean arterial pressure (MAP) is found at the point of maximum pulse amplitude.
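The envelope analysis described above can be sketched in a few lines. Note that the fixed amplitude ratios used here (0.55 for systolic, 0.85 for diastolic) are common empirical values from the oscillometric literature, not figures taken from this article:

```python
def oscillometric_estimate(pressures, amplitudes, sys_ratio=0.55, dia_ratio=0.85):
    """Estimate (SBP, MAP, DBP) from the pulse-amplitude envelope recorded
    while the cuff deflates; pressures are given from high to low."""
    peak = max(amplitudes)
    i_map = amplitudes.index(peak)
    map_ = pressures[i_map]           # MAP: cuff pressure at maximum amplitude
    # SBP: highest cuff pressure (before the peak) where the pulse
    # amplitude first reaches sys_ratio of the maximum
    sbp = next(pressures[i] for i in range(i_map + 1)
               if amplitudes[i] >= sys_ratio * peak)
    # DBP: cuff pressure (after the peak) where the amplitude has
    # fallen back to dia_ratio of the maximum
    dbp = next(pressures[i] for i in range(i_map, len(pressures))
               if amplitudes[i] <= dia_ratio * peak)
    return sbp, map_, dbp
```

Feeding it a bell-shaped synthetic envelope recorded at decreasing cuff pressures returns the pressures where the amplitude crosses the two thresholds.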

Measuring SBP and DBP can help diagnose hypertension in general, but clinical monitoring alone cannot differentiate between the two common types of hypertension.

Essential hypertension

Essential (or primary) hypertension is high blood pressure with no identifiable or correctable cause. Essential hypertension is diagnosed when the SBP is consistently over 140 mmHg or the DBP is consistently over 90 mmHg.

White coat hypertension

People suffering from white coat hypertension exhibit high blood pressure only in higher-stress environments away from the normal home environment, such as a clinic or physician's office (hence the "white coat" reference). People with white coat hypertension exhibit high readings (SBP over 140 mmHg or DBP over 90 mmHg) when measured in a clinical environment but have normal blood pressure readings outside the clinic. White coat hypertension can be misdiagnosed as essential hypertension, which can lead to unnecessary treatment and increased insurance premiums. For this reason, medical professionals frequently recommend home readings taken over a few weeks to verify a diagnosis. As a result, portable, easy-to-use blood pressure monitors are becoming common in the domestic environment.

People with white coat hypertension have a higher risk of developing essential hypertension in the future than those who currently do not suffer from any hypertension. This, along with other risk factors, such as smoking and high cholesterol, has helped drive increased demand for home monitoring kits.


There are a number of drugs available for hypertension treatment, which physicians may choose to use in combination, including:

  • ACE inhibitors and angiotensin II receptor antagonists that keep the blood vessels from narrowing
  • Alpha blockers and beta blockers, which relax the blood vessels and heart respectively
  • Calcium-channel blockers that help expand the blood vessels to ease blood flow
  • Diuretics that help rid the body of excess salt and fluids

In addition to following a medical treatment plan, patients must make a number of changes in lifestyle and diet and may employ therapeutic relaxation techniques to help reduce hypertension. Regardless of which medical alternative is used or what lifestyle changes are implemented, continual blood pressure monitoring is the common denominator for effective treatment. It is essential that the physician has accurate, up-to-date information so any changes in treatment can be initiated. This means blood pressure monitoring equipment must be available to the patient outside the clinical environment if he or she hopes to lead a relatively normal life.

SAR ADC Block Diagram

Analog–to–digital converter accuracy

As illustrated in Figure 1, the microcontrollers (MCUs) and pressure sensor are the core technologies in the blood pressure monitor. The RDQE128BPM reference design block diagram also shows that the most important MCU module in this application is the ADC. Freescale's embedded controller ADC modules are successive approximation ADCs (see Figure 3). These have sample-and-hold circuitry to acquire the input voltage (VIN), a comparator, a successive approximation register subcircuit and an internal reference capacitive digital-to-analog converter (DAC). The DAC supplies the comparator with an analog voltage equivalent of the digital code output from the successive approximation register (SAR) for comparison with VIN.

Applications like blood pressure monitors have to measure very small signals. Therefore, the ADC resolution is often a key parameter (i.e., 10-bit, 12-bit or 16-bit resolution) and an important factor to consider when choosing an MCU for the application design. Just as important, if not more so, is the ADC accuracy. Bear in mind that all ADCs have built-in inaccuracies because they digitize a signal in discrete steps, a process known as quantization. Consequently, the output cannot perfectly represent the analog input signal. For instance, a 12-bit converter with a maximum VIN of 5 V has a least-significant bit (LSB) step of 1.22 mV. Therefore, the ADC can only digitize values in 1.22 mV steps: 1.22 mV, 2.44 mV, 3.66 mV, etc. In this case, it means a perfect measurement can never be more accurate than ±0.5 LSB (±610 µV).
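The arithmetic behind those numbers is easy to check; this short sketch simply reproduces the figures quoted above:

```python
VREF = 5.0                  # full-scale input voltage (V)
BITS = 12                   # converter resolution

lsb = VREF / 2**BITS        # one code step: ~1.22 mV
code = round(0.003 / lsb)   # digitize a 3 mV input to the nearest code
error = 0.003 - code * lsb  # residual quantization error, bounded by +/-0.5 LSB
```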

Unfortunately, several other embedded ADC characteristics introduce errors and reduce accuracy, including offset, gain, temperature drift and non-linear performance. Some ADCs, such as the 16-bit ADC on some of the newest Freescale Flexis™ products, have the ability to reduce offset and gain errors through calibration. Many ADCs can also measure the temperature of the die via an on-chip temperature sensor internally connected to the ADC channels, allowing temperature compensation to be incorporated.

An ADC's effective number of bits (ENOB) is the true indication of resolution and accuracy. This value shows how many of the bits in a given system provide accurate information. It can be calculated by the following formula:

ENOB = (SNR – 1.76 dB)/6.02 dB

Here, the signal-to-noise ratio (SNR) is the ratio between the meaningful information (signal) and the background noise (noise or error). The SNR value is affected not only by the ADC design and chip integration but also by the layout and design of the printed circuit board (PCB) and by the selection of additional discrete components. A large SNR value means that more of the signal is data and the error is minimal, which, when measuring signals that change by microvolts, improves the accuracy of the resultant data. Small SNR values mean that the data is distorted by the noise in the system and accuracy suffers. "Noise Reduction Techniques for Microcontroller-Based Systems" (document number AN1705) is one of many resources that can be downloaded from the Freescale website to help blood pressure monitor system designers mitigate any potential SNR degradation.
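The ENOB formula translates directly into code; `ideal_snr` is the standard inverse relationship for an ideal N-bit converter limited only by quantization noise:

```python
def enob(snr_db):
    """Effective number of bits from a measured SNR in dB."""
    return (snr_db - 1.76) / 6.02

def ideal_snr(bits):
    """SNR of an ideal N-bit converter (quantization noise only)."""
    return 6.02 * bits + 1.76
```

An ideal 12-bit converter has an SNR of about 74 dB; a real system delivering, say, 68 dB of SNR is effectively an 11-bit converter regardless of its nominal resolution.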

Techniques to improve accuracy

Adding a small amount of controlled noise (0.5 LSB of Gaussian white noise) to an ADC's input, often referred to as "dithering", can force a signal above or below the closest resolution step, so that the converter does not always round the same way. The state of the conversion's LSB randomly oscillates between 0 and 1 rather than staying at a fixed value. Instead of a low-level signal being cut off altogether (quantized to a resolution of only 1 bit), the process extends the effective range of signals that the ADC can convert, at the expense of a slight increase in noise. Effectively, the quantization error is spread across a series of noise values. Dithering alone increases the resolution of the sampler and improves linearity, but not necessarily accuracy. However, adding 1-2 LSB of noise to a signal combined with oversampling can increase accuracy.

When adding artificial noise to a signal it is important to remember that the noise must have a mean value of zero. However, many systems have white noise present from other sources, including thermal noise, the CPU core, switching ports and variations in the power supply. Blood pressure monitors are especially prone to white noise as the pump generates electromagnetic interference, vibrations, etc., which are absorbed by the PCB and thus, the microcontroller.

Oversampling is the process of sampling a signal at a sampling frequency significantly higher than the Nyquist frequency of the signal being sampled. In practice, oversampling is used to achieve cheaper, higher-resolution ADC conversions. For each additional bit of resolution, the signal must be oversampled four times. For instance, to implement a 16-bit converter, it is sufficient to use a 12-bit converter that can run at 256 times the target sampling rate: averaging a group of 256 consecutive 12-bit samples adds 4 bits to the resolution of the averaged results, producing a single result with 16-bit resolution. Because a real-world ADC cannot make an instantaneous conversion, the input value should be constant during the time the converter performs a conversion. The sample-and-hold circuitry performs this task by using a capacitor to store the analog voltage at the input and an electronic gate to disconnect the capacitor from the input. Using the ADC setting with the sample-and-hold time most suited to the input signal will help to improve the result's accuracy.
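In code, gaining 4 extra bits by decimation reduces to an accumulate-and-shift (a sketch; the sample values are made up):

```python
def decimate(samples, extra_bits=4):
    """Accumulate 4**extra_bits conversions and shift right by extra_bits,
    producing one result with extra_bits additional bits of resolution
    (a 16-bit result from a 12-bit converter when extra_bits is 4)."""
    assert len(samples) == 4 ** extra_bits   # 256 samples for 4 extra bits
    return sum(samples) >> extra_bits
```

A constant 12-bit mid-scale reading (2048) maps to 16-bit mid-scale (32768); if half the readings land one code higher, that half-LSB of information survives in the decimated result instead of being rounded away.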

Oversampling and decimation signal

The above two methods, noise injection and oversampling, can be combined to improve accuracy further, as illustrated in Figure 4. This technique is often referred to as oversampling and decimation. The top plot shows the ADC conversion result over time and identifies what the result would be using oversampling alone, without the addition of noise. By adding 1-2 LSB of noise, consecutive samples do not all end up with the same result, as shown in the bottom plot in red. This method increases the SNR and enhances the ENOB. By adding the 1-2 LSB of noise to the input signal and oversampling, the results can be averaged to provide a more accurate result. Averaging data from ADC measurements also has the advantage of minimizing signal fluctuation and noise, as it flattens out spikes in the input signal.
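A quick simulation shows the effect. Without noise, a DC input sitting between two codes always quantizes to the same value; with about 1 LSB of zero-mean Gaussian noise, the average of many samples converges on the true sub-LSB level. All values here are illustrative:

```python
import random

random.seed(42)

def convert(vin_lsb, noise_lsb):
    """One simulated conversion of a DC input (expressed in LSB units),
    with zero-mean Gaussian noise injected before quantization."""
    return round(vin_lsb + random.gauss(0.0, noise_lsb))

VIN = 2048.3                                    # true input level, in LSB
plain = convert(VIN, 0.0)                       # no dither: always rounds to 2048
avg = sum(convert(VIN, 1.0) for _ in range(4096)) / 4096
```

`plain` never reveals the 0.3 LSB fraction, while `avg` recovers it to within a few hundredths of an LSB.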

There are four other manageable sources of inaccuracy: offset, gain, leakage and, to a lesser extent, temperature. Some embedded ADC modules, such as the 16-bit ADC on some of the newest Freescale Flexis products, have a hardware calibration feature that enables repeated calibration during code execution. Embedded ADC modules without hardware calibration can still be calibrated, but this must either be done in the factory or by a solution designed into the product.

Calibration is a three-step process:

  1. Configure the ADC
  2. Initiate a calibration conversion and wait for the conversion to complete
  3. Generate offset and gain calibration values

The offset calibration value is subtracted from the result, and the gain calibration value is multiplied with it. This can be done in software or automatically in hardware on some ADC implementations, such as the ADC16 on Freescale's latest Flexis products for monitoring applications.

The offset of the input is the easiest of these error sources to compensate for. For a single-ended input conversion, the input can be referenced against the same voltage internally. This should produce a zero result. If the result is not zero, the residual is the offset, which must be subtracted from the ADC result. If a differential conversion mode is available, the offset can be found by converting the same signal on both input pins.

Once the offset is known, the ADC's gain can be found from the full-scale error. This is the difference between the ideal highest output code, such as 0xFFF in a 12-bit ADC, and the actual output code when the offset error is zero.
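Assuming the application can route ground and VREF to the ADC input, deriving and applying the two calibration values looks like this (a software sketch, not the Flexis hardware mechanism):

```python
FULL_SCALE = 0xFFF        # ideal top code for a 12-bit ADC

def calibrate(zero_code, full_code):
    """Derive offset and gain correction from two reference conversions:
    ground (ideally reads 0) and VREF (ideally reads FULL_SCALE)."""
    offset = zero_code
    gain = FULL_SCALE / (full_code - zero_code)
    return offset, gain

def correct(raw, offset, gain):
    """Subtract the offset from a raw result, then scale by the gain."""
    return round((raw - offset) * gain)
```

With readings of 8 at ground and 4083 at full scale, the corrected results span the full 0 to 0xFFF range again.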

Figure 5 shows the exaggerated effect of offset and gain on an uncalibrated ramp (black) vs. an ideal ramp (red), from ground to full scale. In applications sensitive to ADC accuracy, such as blood pressure monitors, which must identify tiny changes in readings (µV), calibration should be done frequently, at least after every reset sequence. If a hardware function does not exist, calibration can be achieved by designing ground and VDD inputs into the application, subtracting the offset and multiplying by the calculated gain after every set of conversions.


There is another source of input error that is often overlooked but can be significant. Leakage on the input pin causes a voltage drop across the resistive portion of the input source. If the analog DC source resistance is high, this error can be on the order of tens of LSB in circuits such as battery voltage and temperature detection, which use high-value resistive voltage dividers to create the analog reference. The best way to eliminate this error is to reduce the analog DC source resistance and any form of leakage that is within the designer's control. An op-amp that buffers the input voltage can reduce the analog DC source resistance.
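A back-of-the-envelope budget shows why this matters; the leakage current and source resistance below are illustrative, not taken from any data sheet:

```python
i_leak = 1e-6          # 1 uA of input leakage current (illustrative)
r_source = 10e3        # 10 kOhm analog source resistance, e.g. a divider
lsb = 5.0 / 2**12      # step size of a 12-bit ADC with a 5 V reference

v_drop = i_leak * r_source    # voltage lost across the source resistance
error_lsb = v_drop / lsb      # roughly 8 LSB of conversion error
```

Buffering the divider with an op-amp drops `r_source` to near zero and with it the error.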

The temperature of the MCU die can also affect the ADC result, because the characteristics of the ADC change over temperature, as do the MCU-induced noise, power consumption and frequency. Temperature is, however, a slowly changing factor. Regular recalibration, designed into the application code so the user does not have to be concerned about ideal conditions, will help to minimize the effect. Full in-factory calibration, with results stored in a look-up table in memory, can nearly eliminate temperature effects. Many ADCs have on-chip temperature sensors that can be used to monitor the temperature so adjustments can be made. Device data sheets will normally specify the temperature sensor slope in mV/°C to indicate the typical characteristic.
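Converting the sensor voltage to a die temperature follows the usual linear form. The 25 °C reference voltage and slope below are placeholders; real values must come from the device data sheet:

```python
def die_temperature(v_temp, v_temp25=0.7012, slope_mv_per_c=1.646):
    """Linear conversion of the on-chip sensor voltage (V) to die
    temperature (deg C). v_temp25 and slope_mv_per_c are placeholder
    figures standing in for data-sheet values."""
    return 25.0 - (v_temp - v_temp25) / (slope_mv_per_c / 1000.0)
```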

Nonlinearity is an error source about which little can be done, since it is normally inherent in the design of the module. The voltage difference between each code transition should be equal to 1 LSB; nonlinearity is the irregular spacing of the code steps, which causes some distortion. The Freescale application note "ADC Definitions and Specifications" (document number AN2438, available as a PDF download from the Freescale website) explains in more detail the difference between integral and differential nonlinearity errors.


Digital blood pressure monitors help physicians diagnose and help patients control hypertension. Accurate blood pressure monitoring both in the healthcare facility and the home is critical, particularly when diagnosing white coat hypertension vs. essential hypertension.

The toughest challenge in any measurement system is accurately translating real-world analog signals to the embedded controller's digital domain. High-resolution ADCs offer good granularity of results (the LSB indicates nV changes) but do not necessarily deliver high accuracy. Various ADC techniques, such as oversampling and decimation, calibration, leakage control and temperature compensation, can be used to increase the accuracy and the ENOB in a measurement-based application.

Freescale's embedded controller ADCs have high levels of functionality integrated into each device to allow designers to customize them to suit the characteristics of their applications, making high accuracy more achievable. The latest 16-bit ADC in the Flexis product series enables developers to improve accuracy by adjusting the ADC's offset and gain without adding to the system's hardware and software requirements.

Freescale's blood pressure monitor reference design demonstrates how the sensing, data communication and processing capabilities of Freescale's Flexis QE128 and JM controllers, sensors and analog products interact to create a complete medical handheld solution. More details on this reference design are available in the Blood Pressure Monitor Design Reference Manual (document number DRM101), which can be downloaded from the Freescale website.

Read More

  • DRM101: Blood Pressure Monitor Design Reference Manual
  • AN3500: Blood Pressure Monitor Using Flexis QE128
