
What is the best methodology to design different op-amps in nanometer technologies? by Khaled Alashmouny

Answer by Khaled Alashmouny:

I would break this question into multiple pieces and try to answer each one.

1) Is the design methodology for op-amps (or, say, analog circuits) that people used in deep-submicron technologies any different from what people should use in the latest nanometer technologies?

A:

Not really. The physics did not change, and the back-of-envelope calculations won’t change either. You will still consider second-order effects the same way you used to.

In recent technologies you have more complex effects due to layout and technology restrictions, but it is very hard to capture them with hand calculations. So you should still understand their root cause, but you will rely on the foundry to provide accurate models for these effects.

2) What are the challenges for analog (or opamp) design in these deep technologies?

A:

There are some challenges and limitations due to the technology itself: the layout-dependent effects mentioned before, the discrete dimensions designers must use, the poorer intrinsic gain as you move from one technology node to the next, the limitations on the supply range you can use if your design requires high voltage, and so on. One key here is to make sure that you know exactly the region covered by your models, and to be extra careful if your innovation is based on biasing devices in regions not covered by the model. While the simulation results may look fine, you will never get the expected performance in silicon if the model does not support your use case.

Another type of challenge stems from the fact that your op-amp is in a nanometer technology for a reason: it won’t scale in area as much as its fellow digital blocks. Since the chip is dominated by digital circuits, you want to make sure the integration does not cause your analog design to fail. There can be further restrictions on the supply, extra guard rings needed for isolation from digital noise, and many other issues that are discovered only when you work on full-chip solutions.

3) What methodology should you use?

A:

Again, it is not really different from before. You should understand the limitations of the models and of the device choices, use the simplest and most intuitive method for hand calculations, and characterize each device by itself in simulation to get parameters such as gm/Id vs. Id, Ids vs. Vgs and Vds, and other simple relations across different device geometries. Understand when saturation occurs for a given device width. Check the definition of threshold voltage according to the foundry, etc. Extract these parameters and use them in your hand calculations to get the most accuracy out of them. Once you feel comfortable with your knowledge of the technology, go ahead and do a simple design, say a differential amplifier. Try cascoding and see how much gain you get out of it. Check whether this matches your earlier device characterization for output impedance and intrinsic gain.
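The gm/Id characterization step can be sketched as follows. This is a minimal illustration, not tied to any real PDK: a textbook square-law equation stands in for the simulated Ids(Vgs) sweep, and the threshold voltage and gain factor are made-up numbers.

```python
import numpy as np

# Hypothetical square-law device used only to generate sample Ids(Vgs)
# data; in practice these points come from a foundry-model simulation sweep.
VTH, K = 0.4, 1e-3           # assumed threshold voltage [V] and gain factor [A/V^2]
vgs = np.linspace(0.45, 1.0, 56)
ids = K * (vgs - VTH) ** 2   # stand-in for the simulated drain current [A]

# Extract gm = dIds/dVgs numerically, then the gm/Id efficiency metric.
gm = np.gradient(ids, vgs)
gm_over_id = gm / ids

# For a square-law device gm/Id = 2/(Vgs - VTH): the efficiency falls as
# the overdrive grows, which is exactly the trade-off this sweep exposes
# and the number you later plug into hand calculations.
```

The same script structure works on exported simulator data: replace the synthetic `ids` array with the simulated sweep and keep the numerical differentiation.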

Finally, do your design in small steps to understand how each piece works, then assemble the parts together and run simulations. The results will match your intuition and your understanding of the physics, and you will know in which direction you need to tweak current, device size, loading, etc.

You are not done yet; you still have to check how PVT (process, voltage, temperature) variations affect your design. Later, to make your design robust, you need to consider reliability and aging effects.
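As a toy illustration of why these checks matter, the sketch below spreads the gm and ro of a single gain stage across assumed ranges (purely illustrative numbers, not from any foundry) and looks at the worst-case gain A = gm·ro:

```python
import random

random.seed(0)

# Illustrative single-stage gain A = gm * ro. The nominal values and the
# spreads below are assumptions for the sketch, not numbers from a real PDK.
GM_NOM, RO_NOM = 1e-3, 50e3               # 1 mS, 50 kOhm -> nominal gain 50
gains = []
for _ in range(1000):
    gm = GM_NOM * random.uniform(0.8, 1.2)   # assumed process/temperature spread
    ro = RO_NOM * random.uniform(0.7, 1.3)   # assumed output-impedance spread
    gains.append(gm * ro)

worst, best = min(gains), max(gains)
# The design must meet its gain spec at 'worst', not at the nominal point.
```

Real PVT verification is done with corner models and Monte Carlo runs in the simulator; this snippet only shows the reasoning, that specs are judged at the worst-case corner rather than at nominal.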

This answer is not complete, and it is not meant to be. Analog design skill comes from an understanding of the basics and from the experience you build across different designs, always thinking about why something may or may not work. So don’t be overwhelmed. Just get started and build strong fundamentals and intuition. You will get there.


An important note: as part of my undergrad studies in Systems and Biomedical Engineering, we were required to work on a final graduation project. I was lucky to work with lifetime friends: Ahmed Ehab (now an associate professor at Cairo University), Samy Ali, and Mohamed Al-Olfy. Our mission at the time was to build a prototype, or actually a POC (proof of concept), for an ICU (intensive care unit) monitor. What follows is the part of the final report that shows the work we did back then. I am adding the ECG part since, even though this work was done in 2002, it is going to be useful for those who are starting out and need a guide to building an analog front-end to record ECG.

Most of the photos/figures that explain the ECG are from an online book titled ‘Bioelectromagnetism’. I highly recommend this book, and you can check it via this link.

I have to say that our English was not that good at the time, and I am including the material in its original format. I am happy to receive any questions on this topic as well.


 

CHAPTER 2: ELECTROCARDIOGRAPH 

The heart shoulders the responsibility for pumping blood through the entire human circulatory system. The circulatory system delivers much-needed oxygen and nutrients to the organs and tissues of the body, and then returns depleted blood to the heart and the lungs for regeneration. This perpetual cycle represents the scientific essence of human life. On an average day, the heart will “beat”, i.e. expand and contract, nearly 100,000 times, while pumping about 2,000 gallons of blood. In a 70-year lifetime, a normal heart will beat more than 2.5 billion times.
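The lifetime figure quoted above follows directly from the daily rate:

```python
# Rough check of the figures in the text: ~100,000 beats per day,
# sustained over a 70-year lifetime.
beats_per_day = 100_000
lifetime_beats = beats_per_day * 365 * 70
# ~2.56 billion, consistent with the "more than 2.5 billion" quoted above.
```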

Given the arduous physical demands placed on the human heart, it should come as no surprise that heart disease represents one of society’s gravest health risks. Essentially, heart disease is present when the pumping and circulatory functions described above encounter interference. Although heart disease comes in myriad forms, its variations can be grouped into two basic categories. “Congenital” heart disease involves organ defects that are inborn or existent at birth. These defects may impede the flow of blood in the heart or in the vessels near it. Furthermore, the defects may cause blood to flow through the heart in abnormal patterns. “Congestive” heart failure, on the other hand, doesn’t necessarily involve inborn organ defects. Rather, this condition is present when the heart’s pumping function is restricted by an underlying medical condition that has developed over time, such as clogged arteries or high blood pressure.

Congenital and congestive forms of heart disease take an enormous toll on society. As noted previously, the heart’s pumping action supplies the body with the oxygen and nutrient-rich blood it needs in order to function properly. Persons plagued by early and middle stage heart disease suffer from a shortage of these life-sustaining elements. Thus, such persons often tend to feel weak, fatigued, and short of breath. As the American Heart Association notes, basic daily activities such as walking, climbing stairs, and carrying groceries can begin to feel like insurmountable tasks for patients suffering within this category.

While the productivity and lifestyle-related losses that stem from early and middle stage heart disease are quite substantial, the terrifying impact of this health condition is most clearly illustrated by the experiences of those suffering at the end-stage of the disease. Each year, nearly 1,000,000 people die from complications of cardiovascular disease. Indeed, according to some experts, heart disease kills as many persons as nearly all other causes of death combined. Because of the substantial strain that heart disease places on society, physicians, scientists and policy makers have, for decades, devoted significant amounts of time and resources to combating its effects. Furthermore, numerous health organizations have undertaken efforts to better educate the public about demonstrable linkages between heart disease and personal choices regarding diet and lifestyle. Despite these efforts, however, a large segment of the population lives with hearts that have been severely damaged by heart disease, and thus face imminent death.

The first step in examining almost any heart-related problem is to look at the patient’s electrocardiograph, a non-invasive inspection tool that reveals some heart problems such as blocks, fibrillation, etc., as will be shown in the text. Before getting deep into the equipment itself, it is preferable to have a brief overview of the heart’s anatomy and physiology.

In this text we concentrate mainly on the application of the electrocardiograph in the ICU monitor, so extensive detail is not required in the middle of the subject. For readers who want more detail, an appendix is added at the end of this report (see Appendix A).

2.1. Medical Overview (ANATOMY AND PHYSIOLOGY OF THE HEART)

 

2.1.1. Location of the Heart

The heart is located in the chest between the lungs behind the sternum and above the diaphragm. It is surrounded by the pericardium. Its size is about that of a fist, and its weight is about 250-300 g. Its center is located about 1.5 cm to the left of the midsagittal plane. Located above the heart are the great vessels: the superior and inferior vena cava, the pulmonary artery and vein, as well as the aorta. The aortic arch lies behind the heart. The esophagus and the spine lie further behind the heart. An overall view is given in Figure 2.1 (Williams and Warwick, 1989).

fig01.jpg

Figure 2.1. Location of the heart in the thorax. It is bounded by the diaphragm, lungs, esophagus, descending aorta, and sternum.

2.1.2. Anatomy of the Heart

The walls of the heart are composed of cardiac muscle, called myocardium. It also has striations similar to skeletal muscle. It consists of four compartments: the right and left atria and ventricles. The heart is oriented so that the anterior aspect is the right ventricle while the posterior aspect shows the left atrium (see Figure 2.2). The atria form one unit and the ventricles another. This has special importance to the electric function of the heart, which will be discussed later. The left ventricular free wall and the septum are much thicker than the right ventricular wall. This is logical since the left ventricle pumps blood to the systemic circulation, where the pressure is considerably higher than for the pulmonary circulation, which arises from right ventricular outflow.

The cardiac muscle fibers are oriented spirally (see Figure 2.3) and are divided into four groups: Two groups of fibers wind around the outside of both ventricles. Beneath these fibers a third group winds around both ventricles. Beneath these fibers a fourth group winds only around the left ventricle. The fact that cardiac muscle cells are oriented more tangentially than radially, and that the resistivity of the muscle is lower in the direction of the fiber has importance in electrocardiography.

The heart has four valves. Between the right atrium and ventricle lies the tricuspid valve, and between the left atrium and ventricle is the mitral valve. The pulmonary valve lies between the right ventricle and the pulmonary artery, while the aortic valve lies in the outflow tract of the left ventricle (controlling flow to the aorta).

The blood returns from the systemic circulation to the right atrium and from there goes through the tricuspid valve to the right ventricle. It is ejected from the right ventricle through the pulmonary valve to the lungs. Oxygenated blood returns from the lungs to the left atrium, and from there through the mitral valve to the left ventricle. Finally blood is pumped through the aortic valve to the aorta and the systemic circulation.

fig02.jpg

Figure 2.2. The anatomy of the heart and associated vessels.

fig03.jpg

Figure 2.3. Orientation of cardiac muscle fibers.

2.2. Electric Activation of The Heart

2.2.1. Cardiac Muscle Cell

In the heart muscle cell, or myocyte, electric activation takes place by means of the same mechanism as in the nerve cell – that is, from the inflow of sodium ions across the cell membrane. The amplitude of the action potential is also similar, being about 100 mV for both nerve and muscle. The duration of the cardiac muscle impulse is, however, two orders of magnitude longer than that in either nerve cell or skeletal muscle. A plateau phase follows cardiac depolarization, and thereafter repolarization takes place. As in the nerve cell, repolarization is a consequence of the outflow of potassium ions. The duration of the action impulse is about 300 ms, as shown in Figure 2.4 (Netter, 1971).

Associated with the electric activation of cardiac muscle cell is its mechanical contraction, which occurs a little later. For the sake of comparison, Figure 2.5 illustrates the electric activity and mechanical contraction of frog sartorius muscle, frog cardiac muscle, and smooth muscle from the rat uterus (Ruch and Patton, 1982).

An important distinction between cardiac muscle tissue and skeletal muscle is that in cardiac muscle, activation can propagate from one cell to another in any direction. As a result, the activation wavefronts are of rather complex shape. The only exception is the boundary between the atria and ventricles, which the activation wave normally cannot cross except along a special conduction system, since a nonconducting barrier of fibrous tissue is present.

fig04.jpg

Figure 2.4. Electrophysiology of the cardiac muscle cell.

2.2.2. The Conduction System of the Heart

Located in the right atrium at the superior vena cava is the sinus node (sinoatrial or SA node) which consists of specialized muscle cells. The sinoatrial node in humans is in the shape of a crescent and is about 15 mm long and 5 mm wide (see Figure 2.5). The SA nodal cells are self-excitatory, pacemaker cells. They generate an action potential at the rate of about 70 per minute. From the sinus node, activation propagates throughout the atria, but cannot propagate directly across the boundary between atria and ventricles, as noted above.

The atrioventricular node (AV node) is located at the boundary between the atria and ventricles; it has an intrinsic frequency of about 50 pulses/min. However, if the AV node is triggered with a higher pulse frequency, it follows this higher frequency. In a normal heart, the AV node provides the only conducting path from the atria to the ventricles. Thus, under normal conditions, the latter can be excited only by pulses that propagate through it.

Propagation from the AV node to the ventricles is provided by a specialized conduction system. Proximally, this system is composed of a common bundle, called the bundle of His (named after German physician Wilhelm His, Jr., 1863-1934). More distally, it separates into two bundle branches propagating along each side of the septum, constituting the right and left bundle branches. (The left bundle subsequently divides into an anterior and posterior branch.) Even more distally the bundles ramify into Purkinje fibers (named after Jan Evangelista Purkinje (Czech; 1787-1869)) that diverge to the inner sides of the ventricular walls. Propagation along the conduction system takes place at a relatively high speed once it is within the ventricular region, but prior to this (through the AV node) the velocity is extremely slow.

From the inner side of the ventricular wall, the many activation sites cause the formation of a wavefront which propagates through the ventricular mass toward the outer wall. This process results from cell-to-cell activation. After each ventricular muscle region has depolarized, repolarization occurs. Repolarization is not a propagating phenomenon, and because the duration of the action impulse is much shorter at the epicardium (the outer side of the cardiac muscle) than at the endocardium (the inner side of the cardiac muscle), the termination of activity appears as if it were propagating from epicardium toward the endocardium.

fig05.jpg

Figure 2.5. The conduction system of the heart.

Because the intrinsic rate of the sinus node is the greatest, it sets the activation frequency of the whole heart. If the connection from the atria to the AV node fails, the AV node adopts its intrinsic frequency. If the conduction system fails at the bundle of His, the ventricles will beat at the rate determined by their own region that has the highest intrinsic frequency. The electric events in the heart are summarized in Table 2.1. The waveforms of action impulse observed in different specialized cardiac tissue are shown in Figure 2.6.
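The fallback hierarchy described above can be sketched in a few lines. The SA and AV rates come from the text (about 70/min and 50/min); 30/min is a stand-in value inside the 20-40/min range given for ventricular pacemaker sites:

```python
# The fastest still-connected pacemaker site sets the heart rate.
# SA ~70/min and AV ~50/min are from the text; 30/min is an assumed
# representative value within the 20-40/min ventricular range.
INTRINSIC_RATE = {"SA node": 70, "AV node": 50, "ventricles": 30}

def heart_rate(working_sites):
    """Return the rate imposed by the fastest functioning pacemaker."""
    return max(INTRINSIC_RATE[s] for s in working_sites)

# Normal heart: the SA node dominates at ~70/min.
# If the atria-to-AV connection fails, the AV node takes over at ~50/min.
# If conduction fails at the bundle of His, a ventricular site takes over.
```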

Table 2.1. Electric events in the heart

 

| Location in the heart | Event | Time [ms] | ECG terminology | Conduction velocity [m/s] | Intrinsic frequency [1/min] |
|---|---|---|---|---|---|
| SA node | impulse generated | 0 | | 0.05 | 70-80 |
| atrium, right | depolarization *) | 5 | P | 0.8-1.0 | |
| atrium, left | depolarization | 85 | P | 0.8-1.0 | |
| AV node | arrival of impulse | 50 | P-Q interval | 0.02-0.05 | |
| AV node | departure of impulse | 125 | | | |
| bundle of His | activated | 130 | | 1.0-1.5 | |
| bundle branches | activated | 145 | | 1.0-1.5 | |
| Purkinje fibers | activated | 150 | | 3.0-3.5 | 20-40 |
| endocardium, septum | depolarization | 175 | QRS | | |
| endocardium, left ventricle | depolarization | 190 | QRS | 0.3 (axial), 0.8 (transverse) | |
| epicardium, left ventricle | depolarization | 225 | | | |
| epicardium, right ventricle | depolarization | 250 | | | |
| epicardium, left ventricle | repolarization | 400 | T | 0.5 | |
| epicardium, right ventricle | repolarization | | T | | |
| endocardium, left ventricle | repolarization | 600 | T | | |

*) Atrial repolarization occurs during the ventricular depolarization; therefore, it is not normally seen in the electrocardiogram.

 

fig06.jpg

Figure 2.6. Electrophysiology of the heart. The different waveforms for each of the specialized cells found in the heart are shown. The latency shown approximates that normally found in the healthy heart.

2.3. Theory of Operation of ECG

The electric potentials generated by the heart appear throughout the body and on its surface. We determine potential differences by placing electrodes on the surface of the body and measuring the voltage between them, being careful to draw little current (ideally there should be no current at all, because current distorts the electric field that produces the potential differences). If the two electrodes are located on different equipotential lines of the electric field of the heart, a nonzero potential difference or voltage is measured. Different pairs of electrodes at different locations generally yield different voltages because of the spatial dependence of the electric field of the heart. Thus it is important to have certain standard electrode positions for clinical evaluation of the ECG. The limbs make fine guideposts for locating the ECG electrodes. This is covered in more detail in Appendix A.

In the simplified dipole model of the heart, it would be convenient if we could predict the voltage, or at least its waveform, in a particular set of electrodes at a particular instant of time when the cardiac vector is known. We can do this if we define a lead vector for the pair of electrodes. This vector is a unit vector that defines the direction a constant-magnitude cardiac vector must have to generate maximal voltage in the particular pair of electrodes. A pair of electrodes, or combination of several electrodes through a resistive network that gives an equivalent pair, is referred to as a lead.

In clinical electrocardiography, more than one lead must be recorded to describe the heart’s electric activity fully. In practice, several leads are taken in the frontal plane (the plane of the body that is parallel to the ground when one is lying on his back) and the transverse plane (the plane of the body that is parallel to the ground when one is standing erect).

Our implementation considered the limb leads, scientifically called bipolar leads, so they are described in detail here, while the other lead types are shown in the Appendix.

2.3.1. Limb Leads

Augustus Désiré Waller measured the human electrocardiogram in 1887 using Lippmann’s capillary electrometer (Waller, 1887). He selected five electrode locations: the four extremities and the mouth (Waller, 1889). In this way, it became possible to achieve a sufficiently low contact impedance and thus to maximize the ECG signal. Furthermore, the electrode location is unmistakably defined and the attachment of electrodes facilitated at the limb positions. The five measurement points produce altogether 10 different leads (see Fig. 2.7A). From these 10 possibilities he selected five – designated cardinal leads. Two of these are identical to the Einthoven leads I and III described below.

Willem Einthoven also used the capillary electrometer in his first ECG recordings. His essential contribution to ECG-recording technology was the development and application of the string galvanometer. Its sensitivity greatly exceeded the previously used capillary electrometer. The string galvanometer itself was invented by Clément Ader (Ader, 1897). In 1908 Willem Einthoven published a description of the first clinically important ECG measuring system (Einthoven, 1908). The above-mentioned practical considerations rather than bioelectric ones determined the Einthoven lead system, which is an application of the 10 leads of Waller. The Einthoven lead system is illustrated in Figure 2.7B.

fig07.jpg

Figure 2.7. (A) The 10 ECG leads of Waller. (B) Einthoven limb leads and Einthoven triangle. The Einthoven triangle is an approximate description of the lead vectors associated with the limb leads. Lead I is shown as I in the above figure, etc.

The Einthoven limb leads (standard leads) are defined in the following way:

Lead I:     VI   = ΦL – ΦR
Lead II:    VII  = ΦF – ΦR (2.1)
Lead III:   VIII = ΦF – ΦL
where VI = the voltage of Lead I
VII = the voltage of Lead II
VIII = the voltage of Lead III
ΦL = potential at the left arm
ΦR = potential at the right arm
ΦF = potential at the left foot

(The left arm, right arm, and left leg (foot) are also represented with symbols LA, RA, and LL, respectively.)

According to Kirchhoff’s law these lead voltages have the following relationship:

VI + VIII = VII (2.2)

hence only two of these three leads are independent.

The lead vectors associated with Einthoven’s lead system are conventionally found based on the assumption that the heart is located in an infinite, homogeneous volume conductor (or at the center of a homogeneous sphere representing the torso). One can show that if the position of the right arm, left arm, and left leg are at the vertices of an equilateral triangle, having the heart located at its center, then the lead vectors also form an equilateral triangle.

A simple model results from assuming that the cardiac sources are represented by a dipole located at the center of a sphere representing the torso, hence at the center of the equilateral triangle. With these assumptions, the voltages measured by the three limb leads are proportional to the projections of the electric heart vector on the sides of the lead vector triangle, as described in Figure 2.7B.
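Under these assumptions the projection model is easy to sketch. The code below places the three lead vectors at the conventional frontal-plane angles of the Einthoven triangle (lead I at 0°, II at 60°, III at 120°, positive angles pointing downward on the body) and verifies that Einthoven's law (Eq. 2.2) follows automatically:

```python
import math

# Lead unit vectors of the Einthoven triangle in the frontal plane,
# at the conventional angles: I at 0 deg, II at 60 deg, III at 120 deg.
LEAD_ANGLES_DEG = {"I": 0.0, "II": 60.0, "III": 120.0}

def limb_lead_voltages(heart_vector):
    """Project a frontal-plane cardiac vector (hx, hy) onto each lead."""
    hx, hy = heart_vector
    out = {}
    for name, ang in LEAD_ANGLES_DEG.items():
        a = math.radians(ang)
        out[name] = hx * math.cos(a) + hy * math.sin(a)
    return out

v = limb_lead_voltages((1.0, 0.5))   # arbitrary example cardiac vector
# Einthoven's law (Eq. 2.2), V_I + V_III = V_II, holds for any cardiac
# vector because the three lead vectors themselves sum that way.
assert abs(v["I"] + v["III"] - v["II"]) < 1e-12
```

Changing the example cardiac vector changes the individual lead voltages, but the law always holds, which is exactly why only two of the three limb leads are independent.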

2.3.2. Formation of the ECG Signal

The cells that constitute the ventricular myocardium are coupled together by gap junctions which, for the normal healthy heart, have a very low resistance. As a consequence, activity in one cell is readily propagated to neighboring cells. It is said that the heart behaves as a syncytium; a propagating wave once initiated continues to propagate uniformly into the region that is still at rest.

 

It should be possible to examine the actual generation of the ECG by taking into account a realistic progression of activation double layers. Such a description is contained in Figure 2.8. After the electric activation of the heart has begun at the sinus node, it spreads along the atrial walls. The resultant vector of the atrial electric activity is illustrated with a thick arrow. The projections of this resultant vector on each of the three Einthoven limb leads is positive, and therefore, the measured signals are also positive.

 

After the depolarization has propagated over the atrial walls, it reaches the AV node. The propagation through the AV junction is very slow and involves negligible amount of tissue; it results in a delay in the progress of activation. (This is a desirable pause which allows completion of ventricular filling.)

Once activation has reached the ventricles, propagation proceeds along the Purkinje fibers to the inner walls of the ventricles. The ventricular depolarization starts first from the left side of the interventricular septum, and therefore, the resultant dipole from this septal activation points to the right. Figure 2.8 shows that this causes a negative signal in leads I and II.

In the next phase, depolarization waves occur on both sides of the septum, and their electric forces cancel. However, early apical activation is also occurring, so the resultant vector points to the apex.

fig08.jpg

fig08_1.jpg

Figure 2.8. The generation of the ECG signal in the Einthoven limb leads.

After a while the depolarization front has propagated through the wall of the right ventricle; when it first arrives at the epicardial surface of the right-ventricular free wall, the event is called breakthrough. Because the left ventricular wall is thicker, activation of the left ventricular free wall continues even after depolarization of a large part of the right ventricle. Because there are no compensating electric forces on the right, the resultant vector reaches its maximum in this phase, and it points leftward. The depolarization front continues propagation along the left ventricular wall toward the back. Because its surface area now continuously decreases, the magnitude of the resultant vector also decreases until the whole ventricular muscle is depolarized. The last to depolarize are basal regions of both left and right ventricles. Because there is no longer a propagating activation front, there is no signal either.

Ventricular repolarization begins from the outer side of the ventricles and the repolarization front “propagates” inward. This seems paradoxical, but even though the epicardium is the last to depolarize, its action potential durations are relatively short, and it is the first to recover. Although recovery of one cell does not propagate to neighboring cells, one notices that recovery generally does move from the epicardium toward the endocardium. The inward spread of the repolarization front generates a signal with the same sign as the outward depolarization front. Because of the diffuse form of the repolarization, the amplitude of the signal is much smaller than that of the depolarization wave and it lasts longer.

The normal electrocardiogram is illustrated in Figure 2.9. The figure also includes definitions for various segments and intervals in the ECG. The deflections in this signal are denoted in alphabetic order starting with the letter P, which represents atrial depolarization. The ventricular depolarization causes the QRS complex, and repolarization is responsible for the T-wave. Atrial repolarization occurs during the QRS complex and produces such a low signal amplitude that it cannot be seen apart from the normal ECG.

fig09.jpg

Figure 2.9. The normal electrocardiogram.

2.3.3. The Information Content of The 12-Lead System

The most commonly used clinical ECG system, the 12-lead ECG system, consists of the following leads:

I, II, III
aVR, aVL, aVF
V1, V2, V3, V4, V5, V6

Of these 12 leads, the first six are derived from the same three measurement points. Therefore, any two of these six leads contain exactly the same information as the other four.

Over 90% of the heart’s electric activity can be explained with a dipole source model. To evaluate this dipole, it is sufficient to measure its three independent components. In principle, two of the limb leads (I, II, III) could reflect the frontal plane components, whereas one precordial lead could be chosen for the anterior-posterior component. The combination should be sufficient to describe completely the electric heart vector. To the extent that the cardiac source can be described as a dipole, the 12-lead ECG system could be thought to have three independent leads and nine redundant leads.
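This redundancy argument can be checked numerically. Writing each of the six frontal leads as a linear combination of the three limb potentials and computing the rank of the coefficient matrix confirms that only two of them are independent:

```python
import numpy as np

# Each frontal lead as a linear combination of the limb potentials
# (phi_L, phi_R, phi_F), following the standard lead definitions.
LEADS = np.array([
    [ 1.0, -1.0,  0.0],   # I   = L - R
    [ 0.0, -1.0,  1.0],   # II  = F - R
    [-1.0,  0.0,  1.0],   # III = F - L
    [-0.5,  1.0, -0.5],   # aVR = R - (L + F)/2
    [ 1.0, -0.5, -0.5],   # aVL = L - (R + F)/2
    [-0.5, -0.5,  1.0],   # aVF = F - (L + R)/2
])

rank = np.linalg.matrix_rank(LEADS)
# rank == 2: only two of the six frontal leads are independent, exactly
# as the text states. (Every row sums to zero, since only potential
# differences are measurable.)
```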

 

However, the precordial leads in fact also detect nondipolar components, which have diagnostic significance because those leads are located close to the frontal part of the heart. Therefore, the 12-lead ECG system has eight truly independent and four redundant leads. The lead vectors for each lead, based on an idealized (spherical) volume conductor, are shown in Figure 2.10. These lead vectors are assumed to apply in clinical electrocardiography.

 

The main reason for recording all 12 leads is that it enhances pattern recognition. This combination of leads gives the clinician an opportunity to compare the projections of the resultant vectors in two orthogonal planes and at different angles. In summary, for the approximation of cardiac electric activity by a single fixed-location dipole, nine leads are redundant in the 12-lead system, as noted above.

fig10.jpg

Figure 2.10. The projections of the lead vectors of the 12-lead ECG system in three orthogonal planes

2.4. The Basis of ECG Diagnosis

2.4.1. The Application Areas of ECG Diagnosis

The main applications of the ECG to cardiological diagnosis include the following (see also Figure 2.11):

 

1. The electric axis of the heart
2. Heart rate monitoring
3. Arrhythmias
   a. Supraventricular arrhythmias
   b. Ventricular arrhythmias
4. Disorders in the activation sequence
   a. Atrioventricular conduction defects (blocks)
   b. Bundle-branch block
   c. Wolff-Parkinson-White syndrome
5. Increase in wall thickness or size of the atria and ventricles
   a. Atrial enlargement (hypertrophy)
   b. Ventricular enlargement (hypertrophy)
6. Myocardial ischemia and infarction
   a. Ischemia
   b. Infarction
7. Drug effect
   a. Digitalis
   b. Quinidine
8. Electrolyte imbalance
   a. Potassium
   b. Calcium
9. Carditis
   a. Pericarditis
   b. Myocarditis
10. Pacemaker monitoring

Cardiac rhythm diagnosis and other heart disorders are covered in full detail in Appendix A.

fig11.jpg

Figure 2.11 Application areas of ECG diagnosis.

2.4.2. Determination of The Electric Axis of The Heart

The concept of the electric axis of the heart usually denotes the average direction of the electric activity throughout ventricular (or sometimes atrial) activation. The term mean vector is frequently used instead of “electric axis.” The direction of the electric axis may also denote the instantaneous direction of the electric heart vector. The normal range of the electric axis lies between +30° and -110° in the frontal plane and between +30° and -30° in the transverse plane. (Note that the angles are given in the consistent coordinate system of the Appendix.)

The direction of the electric axis may be approximated from the 12-lead ECG by finding the lead in the frontal plane where the QRS-complex has the largest positive deflection. The direction of the electric axis is then the direction of this lead vector. The result can be checked by observing that the QRS-complex is symmetrically biphasic in the lead that is normal to the electric axis. The directions of the leads were summarized in Figure 2.10. Deviation of the electric axis to the right is an indication of increased electric activity in the right ventricle due to increased right ventricular mass. This is usually a consequence of chronic obstructive lung disease, pulmonary emboli, certain types of congenital heart disease, or other disorders causing severe pulmonary hypertension and cor pulmonale.

Deviation of the electric axis to the left is an indication of increased electric activity in the left ventricle due to increased left ventricular mass. This is usually a consequence of hypertension, aortic stenosis, ischemic heart disease, or some intraventricular conduction defect.
The clinical meaning of the deviation of the heart’s electric axis is discussed in greater detail in connection with ventricular hypertrophy.
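
As a rough numerical illustration of this procedure, the mean electric axis can be estimated by projecting the net QRS deflections of two orthogonal frontal-plane leads (lead I along 0° and aVF along +90°). This is only a common first-order clinical approximation, and the lead amplitudes below are hypothetical:

```python
import math

# Hypothetical net QRS deflections (mV) in two frontal-plane leads.
# Lead I lies along 0 degrees and aVF along +90 degrees, so together
# they give the two orthogonal components of the mean QRS vector.
net_qrs_I = 0.8    # mV, assumed value for illustration
net_qrs_aVF = 0.5  # mV, assumed value for illustration

# Angle of the mean QRS vector in the frontal plane.
axis_deg = math.degrees(math.atan2(net_qrs_aVF, net_qrs_I))
print(f"estimated electric axis: {axis_deg:.1f} degrees")
```

A dominant positive deflection in lead I with a smaller positive deflection in aVF, as assumed here, yields an axis in the normal leftward-inferior region.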

2.5. Electrocardiogram Electrodes

In order to measure and record potentials and, hence, currents in the body, it is necessary to provide some interface between the body and the electronic measuring apparatus. This interface function is carried out by biopotential electrodes. In any practical measurement of potentials, current flows in the measuring circuit for at least a fraction of the period of time over which the measurement is made. Ideally this current should be very small. However, in practical situations, it is never zero. Biopotential electrodes must therefore have the capability of conducting a current across the interface between the body and the electronic measuring circuit.

The electrode actually carries out a transducing function, because current is carried in the body by ions, whereas it is carried in the electrode and its lead wire by electrons. Thus the electrode must serve as a transducer that changes an ionic current into an electronic current. This greatly complicates electrodes and places constraints on their operation. Appendix A gives details of the basic mechanisms involved in the transduction process and how they affect electrode characteristics. Here we concentrate on the principal electrical characteristics of biopotential electrodes (especially ECG electrodes) and discuss electrical equivalent circuits based on these characteristics. We shall then cover some of the different forms that ECG electrodes take in various types of ECG monitoring instrumentation systems.

2.5.1. Electrode Behavior and Circuit Models

The electrical characteristics of electrodes have been the subject of much study. Often the current-voltage characteristics of the electrode-electrolyte interface are found to be nonlinear, and, in turn, nonlinear elements are required for modeling electrode behavior. Specifically, the characteristics of an electrode are sensitive to the current passing through the electrode, and the electrode characteristics at relatively high current densities can be considerably different from those at low current densities. The characteristics of electrodes are also waveform-dependent. When sinusoidal currents are used to measure the electrode’s circuit behavior, the characteristics are also frequency dependent.

For sinusoidal inputs, the terminal characteristics of an electrode have both a resistive and a reactive component. Over all but the lowest frequencies, this situation can be modeled as a series resistance and capacitance. It is not surprising to see a capacitance entering into this model, because the half-cell potential (see Appendix A) results from the distribution of ionic charge at the electrode-electrolyte interface, which can be considered a double layer of charge. This, of course, should behave as a capacitor; hence the capacitive reactance seen for real electrodes.

The series resistance-capacitance equivalent circuit breaks down at the lower frequencies, where this model would suggest an impedance going to infinity as the frequency approaches dc. To avoid this problem, the model can be converted to a parallel RC circuit that has a purely resistive impedance at very low frequencies. If we combine this circuit with a voltage source representing the half-cell potential and a series resistance representing the interface effects and resistance of the electrolyte, we arrive at the biopotential electrode equivalent circuit model shown in Figure 2.12.

fig12.jpg

Figure 2.12. Equivalent circuit for a biopotential electrode in contact with an electrolyte

In this circuit, Rd and Cd represent the resistive and reactive components just discussed. These components are still frequency- and current-density-dependent. In this configuration it is also possible to assign physical meaning to the components. Cd represents the capacitance across the double layer of charge at the electrode-electrolyte interface. The parallel resistance Rd represents the leakage resistance across this double layer. All the components of this equivalent circuit have values determined by the electrode material and, to a lesser extent, by the material of the electrolyte and its concentration.

The equivalent circuit of Figure 2.12 demonstrates that the electrode impedance is frequency-dependent. At high frequencies, where 1/ωCd << Rd, the impedance is constant at Rs. At low frequencies, where 1/ωCd >> Rd, the impedance is again constant but its value is larger, being Rs + Rd. At frequencies between these extremes, the electrode impedance is frequency-dependent.
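
This frequency dependence can be sketched directly from the Figure 2.12 model, Rs in series with the parallel combination of Rd and Cd. The component values below are illustrative, not those of any particular electrode:

```python
import math

def electrode_impedance(f, rs, rd, cd):
    """Impedance of the Figure 2.12 model: Rs in series with Rd || Cd."""
    w = 2 * math.pi * f
    z_parallel = rd / (1 + 1j * w * rd * cd)  # Rd in parallel with Cd
    return rs + z_parallel

# Illustrative component values (assumed, not from a specific electrode):
rs, rd, cd = 100.0, 10e3, 10e-9  # ohms, ohms, farads

z_low = electrode_impedance(0.01, rs, rd, cd)   # well below the corner
z_high = electrode_impedance(10e6, rs, rd, cd)  # well above the corner
print(abs(z_low))   # approaches Rs + Rd
print(abs(z_high))  # approaches Rs
```

The two printed magnitudes confirm the limiting values Rs + Rd at low frequency and Rs at high frequency stated above.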

2.5.2. The Electrode-Skin Interface and Motion Artifact

When biopotentials are recorded from the surface of the skin, we must consider an additional interface—the interface between the electrode-electrolyte and the skin—in order to understand the behavior of the electrodes. In coupling an electrode to the skin, we generally use a transparent electrolyte gel containing Cl as the principal anion to maintain good contact. Alternatively, we may use an electrode cream, which contains Cl and has the consistency of hand lotion. The interface between this gel and the electrode is an electrode-electrolyte interface. However, the interface between the electrolyte and the skin is different and requires some explanation. To represent the electric connection between an electrode and the skin through the agency of electrolyte gel, the equivalent circuit of Figure 2.12 must be expanded, as shown in Figure 2.13.

The electrode-electrolyte interface equivalent circuit is shown adjacent to the electrode-gel interface. The series resistance Rs is now the effective resistance associated with interface effects of the gel between the electrode and the skin. We can consider the epidermis of the skin as a membrane that is semipermeable to ions, so if there is a difference in ionic concentration across this membrane, there is a potential difference Ese, which is given by the Nernst equation. The epidermal layer is also found to have an electric impedance that behaves as a parallel RC circuit, as shown. For 1 cm², skin impedance falls from approximately 200 kΩ at 1 Hz to 200 Ω at 1 MHz. The dermis and the subcutaneous layer under it behave in general as pure resistances and generate negligible dc potentials. Thus an electrode that contacts these layers directly, with the epidermal effects minimized, is more stable.

fig13.jpg

Figure 2.13. A body-surface electrode is placed against skin, showing the total electrical equivalent circuit.
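
As a rough check of the quoted 1 cm² figures, the epidermal layer can be sketched as a parallel RC in series with the purely resistive deeper tissue. The component values below are assumed, chosen only so that the model roughly reproduces the 1 Hz and 1 MHz endpoints:

```python
import math

def skin_impedance(f, r_series, r_epidermis, c_epidermis):
    """Deeper-tissue resistance in series with the epidermal parallel RC."""
    w = 2 * math.pi * f
    z_epi = r_epidermis / (1 + 1j * w * r_epidermis * c_epidermis)
    return r_series + z_epi

# Assumed values for illustration only:
r_series = 200.0      # ohms, dermis/subcutaneous layers (purely resistive)
r_epidermis = 200e3   # ohms, epidermal leakage resistance
c_epidermis = 10e-9   # farads, epidermal capacitance

z_1hz = skin_impedance(1.0, r_series, r_epidermis, c_epidermis)
z_1mhz = skin_impedance(1e6, r_series, r_epidermis, c_epidermis)
print(abs(z_1hz))    # on the order of 200 kOhm
print(abs(z_1mhz))   # on the order of 200 Ohm
```

At 1 Hz the epidermal capacitance is effectively open and the full epidermal resistance dominates; at 1 MHz it is effectively a short, leaving only the deeper-tissue resistance.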

There is a potential difference between the lumen of the sweat duct and the dermis and subcutaneous layers. There also is a parallel RpCp combination in series with this potential that represents the wall of the sweat gland and duct, as shown by the broken lines in Figure 2.13. When a polarizable electrode is in contact with an electrolyte, a double layer of charge forms at the interface. If the electrode is moved with respect to the electrolyte, this movement mechanically disturbs the distribution of charge at the interface and results in a momentary change of the half-cell potential until equilibrium can be reestablished. If a pair of electrodes is in an electrolyte and one moves while the other remains stationary, a potential difference appears between the two electrodes during this movement. This potential is known as motion artifact and can be a serious cause of interference in the measurement of biopotentials.

Because motion artifact results primarily from mechanical disturbances of the distribution of charge at the electrode-electrolyte interface, it is reasonable to expect that motion artifact is minimal for nonpolarizable electrodes (see the Appendix). Observation of the motion-artifact signals reveals that a major component of this noise is at low frequencies. The low-frequency artifact does affect signals such as ECG, EEG, and EOG. Consequently, it is important in these applications to use a nonpolarizable electrode to minimize motion artifact stemming from the electrode-electrolyte interface.

This interface, however, is not the only source of motion artifact encountered when biopotential electrodes are applied to the skin. The equivalent circuit in Figure 2.13 shows that, in addition to the half-cell potential Ehc, the electrolyte gel-skin potential Ese can also cause motion artifact if it varies with movement of the electrode. Variations of this potential indeed represent a major source of motion artifact in Ag/AgCl skin electrodes used in ECG. Tam and Webster (1977) have shown that this artifact can be significantly reduced when the outermost skin layer, the stratum corneum, is removed by mechanical abrasion with a fine abrasive paper. This method also helps to reduce the epidermal component of the skin impedance. They also point out, however, that removal of the body's outer protective barrier makes that region of skin more susceptible to irritation from the electrolyte gel. Therefore, the choice of a gel material is important. Remembering the dynamic nature of the epidermis, note also that the stratum corneum can regenerate itself in as short a time as 24 hours, thereby renewing the source of motion artifact. This is a factor to be taken into account if the electrodes are to be used for chronic recording, as in monitoring. A potential between the inside and outside of the skin can be measured. Stretching the skin changes this skin potential by 5-10 mV, and this change appears as motion artifact. Ten 0.5-mm skin punctures through the barrier layer short-circuit the skin potential and reduce the stretch artifact to less than 0.2 mV. De Talhouet and Webster (1996) provide a model for the origin of this skin potential and show how it can be reduced by stripping layers of the skin using Scotch tape.

2.5.3. Body-Surface Recording Electrodes for ECG

Over the years many different types of electrodes for recording various potentials on the body surface have been developed. This section describes only the types of these electrodes that are used in long durations so as to be appropriate with the monitoring application. The reader interested in more extensive examples should see Appendix A or consult Geddes (1972).

2.5.3.1. Metal-Plate Electrodes

The metal-plate electrode consists of a metallic conductor in contact with the skin. An electrolyte-soaked pad or gel is used to establish and maintain the contact. Figure 2.14 shows several forms of this electrode. The one most commonly used for limb electrodes with the electrocardiograph is shown in Figure 2.14(a). It consists of a flat metal plate that has been bent into a cylindrical segment. A terminal is placed on its outside surface near one end; this terminal is used to attach the lead wire to the electrocardiograph. A post, placed on this same side near the center, is used to connect a rubber strap to the electrode and hold it in place on an arm or leg. Before it is attached to the body, its concave surface is covered with electrolyte gel. Similarly arranged flat metal disks are also used for this type of electrode; these traditional electrodes remain popular and are frequently used.

fig14.jpg

Figure 2.14. Body-surface biopotential electrodes

A second common variety of metal-plate electrode is the metal disk illustrated in Figure 2.14(b). This electrode, which has a lead wire soldered or welded to the back surface, can be made of several different materials. Sometimes the connection between lead wire and electrode is protected by a layer of insulating material, such as epoxy or polyvinyl chloride. This structure can be used as a chest electrode for recording the ECG or in cardiac monitoring for long-term recordings such as in ICUs. In these applications the electrode is often fabricated from a disk of Ag that may or may not have an electrolytically deposited layer of AgCl on its contacting surface. It is coated with electrolyte gel and then pressed against the patient’s chest wall. It is maintained in place by a strip of surgical tape or a plastic foam disk with a layer of tack on one surface.

Disk-shaped electrodes such as these have also been fabricated from metal foils (primarily silver foil) and are applied as single-use disposable electrodes. The thinness of the foil allows it to conform to the shape of the body surface. Also, because it is so thin, the cost can be kept relatively low.

Economics necessarily plays an important role in determining what materials and apparatus are used in hospital administration and patient care. In choosing suitable cardiac electrodes for patient-monitoring applications, physicians are more and more turning to pregelled, disposable electrodes with the adhesive already in place. These devices are ready to be applied to the patient and are not cleaned after use. This minimizes the amount of time that personnel must devote to the use of these electrodes.

A popular type of electrode of this variety is illustrated in Figure 2.14(c). It consists of a relatively large disk of plastic foam material with a silver-plated disk on one side attached to a silver-plated snap similar to that used on clothing in the center of the other side. A lead wire with the female portion of the snap is then snapped onto the electrode and used to connect the assembly to the monitoring apparatus. The silver-plated disk serves as the electrode and may be coated with an AgCl layer. A layer of electrolyte gel covers the disk. The electrode side of the foam is covered with an adhesive material that is compatible with the skin. A protective cover or strip of release paper is placed over this side of the electrode and foam, and the complete electrode is packaged in a foil envelope so that the water component of the gel will not evaporate away. To apply the electrode to the patient, the technician has only to clean the area of skin on which the electrode is to be placed, open the electrode packet, remove the release paper from the tack, and press the electrode against the patient. This procedure is quickly accomplished and no special technique need be learned, such as using the correct amount of gel or cutting strips of adhesive tape to hold the electrode in place. This type of electrode is the one used in our implementation; we have already fabricated one from simple materials, as will be described later in the text.

2.5.3.2. Suction Electrodes

A modification of the metal-plate electrode that requires no straps or adhesives to hold it in place is the suction electrode illustrated in Figure 2.15. Such electrodes are frequently used in electrocardiography as the precordial (chest) leads, because they can be placed at particular locations and used to take a recording. They consist of a hollow metallic cylindrical electrode that makes contact with the skin at its base. An appropriate terminal for the lead wire is attached to the metal cylinder, and a rubber suction bulb fits over its other base. Electrolyte gel is placed over the contacting surface of the electrode, the bulb is squeezed, and the electrode is then placed on the chest wall. The bulb is released and applies suction against the skin, holding the electrode assembly in place. This electrode can be used only for short periods of time; the suction and the pressure of the contact surface against the skin can cause irritation. Although the electrode itself is quite large, Figure 2.15 shows that the actual contacting area is relatively small.

fig15.jpg

Figure 2.15. A metallic suction electrode

This electrode thus tends to have a higher source impedance than the relatively large-surface-area metal-plate electrodes used for ECG limb electrodes, as shown in Figure 2.14(a). Electrodes must generally meet certain standards; some of these standards, along with practical hints for using electrodes, are given in the Appendix.

2.6. Functional Blocks of The Electrocardiograph

Figure 2.16 shows a block diagram of a typical clinical electrocardiograph.

fig16.jpg

Figure 2.16. Block diagram of an electrocardiograph

To understand the overall operation of the system, let us consider each block separately.

1. Protection circuit This circuit includes protection devices so that the high voltages that may appear across the input to the electrocardiograph under certain conditions do not damage it.

2. Lead selector Each electrode connected to the patient is attached to the lead selector of the electrocardiograph. The function of this block is to determine which electrodes are necessary for a particular lead and to connect them to the remainder of the circuit. It is this part of the electrocardiograph in which the connections for the central terminal are made. This block can be controlled by the operator or by the microcomputer of the electrocardiograph when it is operated in automatic mode. It selects one or more leads to be recorded.

3. Calibration Signal A 1-mV calibration signal is momentarily introduced into the electrocardiograph for each channel that is recorded.

4. Preamplifier The input preamplifier stage carries out the initial amplification of the ECG. This stage should have very high input impedance and a high common-mode-rejection ratio (CMRR). A typical preamplifier stage is the differential amplifier that consists of three operational amplifiers, shown in Figure 2.17. A gain-control switch is often included as a part of this stage.

fig17.jpg

Figure 2.17. An Instrumentation Amplifier
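
The differential gain of the three-op-amp instrumentation amplifier of Figure 2.17 can be sketched as follows. The resistor names and values are illustrative, following the usual textbook analysis rather than a specific design:

```python
# Differential gain of the classic three-op-amp instrumentation amplifier.
# r_gain sits between the two first-stage inverting inputs; r_fb are the
# matched first-stage feedback resistors; r3/r4 set the gain of the final
# difference-amplifier stage. All values are assumed for illustration.
def inamp_gain(r_gain, r_fb, r3, r4):
    first_stage = 1 + 2 * r_fb / r_gain  # non-inverting input pair
    second_stage = r4 / r3               # difference-amplifier stage
    return first_stage * second_stage

gain = inamp_gain(r_gain=1e3, r_fb=50e3, r3=10e3, r4=10e3)
print(gain)  # 101.0
```

Changing only r_gain adjusts the overall gain without disturbing the resistor matching that the CMRR depends on, which is why a gain-control switch is conveniently placed in this stage.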

5. Isolation circuit The circuitry of this block contains a barrier to the passage of current from the power line (50 or 60 Hz). For example, if the patient came in contact with a 120-V line, this barrier would prevent dangerous currents from flowing from the patient through the amplifier to the ground of the recorder or microcomputer.

6. Driven right leg circuit This circuit provides a reference point on the patient that normally is at ground potential. This connection is made to an electrode on the patient’s right leg. Details on this circuit are given later on.

7. Driver amplifier Circuitry in this block amplifies the ECG to a level at which it can be appropriately recorded on the recorder. Its input should be ac-coupled so that offset voltages amplified by the preamplifier are not seen at its input. These dc voltages, when amplified by this stage, might cause it to saturate. This stage also carries out the bandpass filtering of the electrocardiograph to give the frequency characteristics described in Table 2.2 below. It also often has a zero-offset control that is used to position the signal on the chart paper. This control adjusts the dc level of the output signal.
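
The ac coupling described above can be sketched as a discrete single-pole high-pass filter. The 0.05 Hz corner and 250 Hz sampling rate are assumed values typical of diagnostic ECG, not taken from the text:

```python
import math

fs = 250.0                     # Hz, assumed sampling rate
fc = 0.05                      # Hz, assumed low-frequency corner
tau = 1.0 / (2 * math.pi * fc)
alpha = tau / (tau + 1.0 / fs)

def ac_couple(samples):
    """Discrete single-pole high-pass: blocks dc offset, passes the ECG."""
    out, y, x_prev = [], 0.0, 0.0
    for x in samples:
        y = alpha * (y + x - x_prev)  # standard RC high-pass recurrence
        x_prev = x
        out.append(y)
    return out

# A constant 300 mV electrode offset decays toward zero at the output,
# so it is never multiplied by the driver-amplifier gain.
offset_only = [0.3] * 5000  # 20 s of pure dc offset
residual = abs(ac_couple(offset_only)[-1])
print(residual < 0.01)  # True: the dc offset is rejected
```

With the signal path ac-coupled like this, only the time-varying ECG reaches the driver stage, so electrode offsets cannot push it into saturation.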

8. Memory system Many modern electrocardiographs store electrocardiograms in memory as well as printing them out on a paper chart. The signal is first digitized by an ADC, and then samples from each lead are stored in memory. Patient information entered via the keyboard is also stored. The microcomputer controls this storage activity.

9. Microcomputer  The microcomputer controls the overall operation of the electrocardiograph. A keyboard and a display enable the operator to communicate with the microcomputer.

10. Recorder-printer  This block provides a hardcopy of the recorded ECG signal. It also prints out patient identification, clinical information entered by the operator, and the results of the automatic analysis of the electrocardiogram.

Specific Requirements of The Electrocardiograph

Because the electrocardiograph is widely used as a diagnostic tool and there are several manufacturers of this instrument, standardization is necessary. Standard requirements for electrocardiographs have been developed over the years (Bailey et al. 1990; Anonymous, 1991).

Table 2.2 gives a summary of performance requirements from the most recent of these (Anonymous, 1991). These recommendations are a part of a voluntary standard. The Food and Drug Administration is planning to develop mandatory standards for frequently employed instruments such as the electrocardiograph.

Table 2.2  Summary of Performance Requirements for Electrocardiographs (Anonymous, 1991)

Requirement Description Min/max Units Min/max value
Operating Conditions:
line voltage range V rms 104 to 127
Frequency range Hz 50 or 60 ± 1
Temperature range °C 25 ± 10
relative humidity range % 50 ± 20
atmospheric pressure range Pa 7 × 10^4 to 10.6 × 10^4

Input Dynamic Range:
range of linear operations of input signal min mV ±5
slew rate change max mV/s 320
dc offset voltage range min mV ±300
Allowed variation of amplitude with dc offset max % ±5
Gain Control, Accuracy, and Stability:
gain selections min mm/mV 20, 10, 5
gain error max % 5
gain change rate/minute max % /min ±0.33
total gain change/hour max % ±3
Time Base Selection and Accuracy:
Time base selections min mm/s  25, 50
time base error max % ±5
Output Display:
width of display min mm 40
trace visibility (writing rates) max mm/s 1600
trace width (permanent record only) max mm 1
departure from time max mm 0.5
axis alignment max ms 10
preruled paper division min div/cm 10
error of rulings max % ±2
time marker error max % ±2
Accuracy of Input Signal Reproduction:
overall error for signals max % ±5
up to ± 5 mV and 125 mV/s max µV ±40
upper cut-off frequency (3 dB) min Hz 150
response to 20 ms, 1.5 mV triangular input min mm 13.5
response after 3 mV, 100 ms impulse max mV 0.1
max mV/s 0.30
error in lead weighting factors max % 5
Standardizing Voltage:
nominal value NA mV 1.0
rise time max ms 1
decay time min s 100
amplitude error max % ±5
Input Impedance at 10 Hz (each lead) min MΩ 2.5
DC Current (any input lead) max µA 0.1
DC Current (any patient electrode) max µA 1.0
Common-Mode Rejection:
allowable noise with 20 V, 60 Hz and ± 300 mV dc and 51 kΩ max mm 10
System Noise:
RTI, p-p max µV 30
multichannel crosstalk max % 2
Baseline Control and Stability:
return time after reset max s 3
return time after lead switch max s 1
Baseline Stability:
baseline drift rate RTI max µV/s 10
total baseline drift RTI (2-min period) max µV 500
Overload Protection:
no damage from differential voltage, 60-Hz, 1-V p-p, 10-s application min V 1
no damage from simulated defibrillator discharges:
overvoltage N/A V 5000
energy N/A J 360
recovery time max s 8
energy reduction by defibrillator shunting max % 10
transfer of charge through defibrillator chassis max µC 100
ECG display in presence of pacemaker pulses:
amplitude range mV 2 to 250
pulse duration range ms 0.1 to 2.0
rise time max µs 100
frequency max pulses/min 100
Risk Current (Isolated Patient Connection) max µA 10
Auxiliary Output (if provided):
no damage from short circuit risk current (isolated patient connection) max µA 10

2.7. Distortion Factors in The ECG

It was pointed out that uncorrected lead systems evince a considerable amount of distortion affecting the quality of the ECG signal. In the corrected lead systems many of these factors are compensated for by various design methods. Distortion factors arise, generally, because the preconditions are not satisfied.

None of these assumptions is met clinically, and therefore the ECG signal deviates from the ideal. In addition, there are errors due to incorrect placement of the electrodes, poor electrode-skin contact, other sources of noise, and finally instrumentation error. The character and magnitude of these inaccuracies are discussed in great detail in Appendix A. Only some of them, those related to the apparatus itself, are summarized here.

2.7.1. Frequency Distortion

The electrocardiograph does not always meet the frequency-response standards we have described. When this happens, frequency distortion is seen in the ECG.

High-frequency distortion rounds off the sharp corners of the waveforms and diminishes the amplitude of the QRS complex.

2.7.2. Saturation Or Cutoff Distortion

High offset voltages at the electrodes or improperly adjusted amplifiers in the electrocardiograph can produce saturation or cutoff distortion that can greatly modify the appearance of the ECG. The combination of input-signal amplitude and offset voltage drives the amplifier into saturation during a portion of the QRS complex. The peaks of the QRS complex are cut off because the output of the amplifier cannot exceed the saturation voltage.

In a similar occurrence, the lower portion of the ECG is cut off. This can result from negative saturation of the amplifier. In this case only a portion of the S wave may be cut off. In extreme cases of this type of distortion, even the P and T waves may be below the cutoff level, such that only the R wave appears.

2.7.3. Ground Loops

Patients who are having their ECGs taken on either a clinical electrocardiograph or continuously on a cardiac monitor are often connected to other pieces of electric apparatus. Each electric device has its own ground connection either through the power line or, in some cases, through a heavy ground wire attached to some ground point in the room.

A ground loop can exist when two machines are connected to the patient. Both the electrocardiograph and a second machine have a ground electrode attached to the patient. The electrocardiograph is grounded through the power line at a particular socket. The second machine is also grounded through the power line, but it is plugged into an entirely different outlet across the room, which has a different ground. If one ground is at a slightly higher potential than the other ground, a current from one ground flows through the patient to the ground electrode of the electrocardiograph and along its lead wire to the other ground. In addition to this current’s presenting a safety problem, it can elevate the patient’s body potential to some voltage above the lowest ground to which the instrumentation is attached. This produces common-mode voltages on the electrocardiograph that, if it has a poor common-mode-rejection ratio, can increase the amount of interference seen.

2.7.4. Open Lead Wires

Frequently one of the wires connecting a biopotential electrode to the electrocardiograph becomes disconnected from its electrode or breaks as a result of excessively rough handling, in which case the electrode is no longer connected to the electrocardiograph. Relatively high potentials can often be induced in the open wire as a result of electric fields emanating from the power lines or other sources in the vicinity of the machine. This causes a wide, constant-amplitude deflection of the pen on the recorder at the power-line frequency, as well as, of course, signal loss. Such a situation also arises when an electrode is not making good contact with the patient. To catch such errors, a circuit for detecting poor electrode contact is almost always implemented.

 

2.7.5. Artifact from Large Electric Transient

In some situations in which a patient is having an ECG taken, cardiac defibrillation may be required. In such a case, a high-voltage high-current electric pulse is applied to the chest of the patient so that transient potentials can be observed across the electrodes. These potentials can be several orders of magnitude higher than the normal potentials encountered in the ECG. Other electric sources can cause similar transients. When this situation occurs, it can cause an abrupt deflection in the ECG, as shown in Figure 2.18. This is due to the saturation of the amplifiers in the electrocardiograph caused by the relatively high-amplitude pulse or step at its input. This pulse is sufficiently large to cause the buildup of charge on coupling capacitances in the amplifier, resulting in its remaining saturated for a finite period of time following the pulse and then slowly drifting back to the original baseline with a time constant determined by the low corner frequency of the amplifier. The slowly recovering waveform is shown in Figure 2.18.

fig18.jpg

Figure 2.18. Effect of a voltage transient on an ECG.
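
The duration of this slow recovery can be estimated from the amplifier's low corner frequency, which sets the time constant of the exponential return to baseline. The 0.05 Hz corner used below is an assumed diagnostic-ECG value for illustration:

```python
import math

f_low = 0.05                       # Hz, assumed low corner frequency
tau = 1.0 / (2 * math.pi * f_low)  # time constant of the coupling network

# Time for the stored charge (and hence the baseline offset) to decay
# to 1% of its initial value after the transient:
t_recover = tau * math.log(100.0)
print(f"recovery to 1%: {t_recover:.1f} s")
```

With a sub-hertz corner frequency the time constant is several seconds, so the trace can remain off-scale for ten seconds or more after a defibrillation pulse, which is exactly why the protection circuitry discussed below is valuable.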

Transients of the type just described can be generated by means other than defibrillation. Serious artifact caused by motion of the electrodes can produce variations in potential greater than ECG potentials. Another source of artifact is the patient’s encountering a built-up static electric charge that can be partially discharged through the body.

This problem is greatly alleviated by reducing the source of the artifact. Because we do not have time to disconnect an electrocardiograph when a patient is being defibrillated, we can include electronic protection circuitry in the machine itself. In this way, we can limit the maximal input voltage across the ECG amplifier so as to minimize the saturation and charge buildup effects due to the high-voltage input signals. This results in a more rapid return to normal operation following the transient. Such circuitry is also important in protecting the electrocardiograph from any damage that might be caused by these pulses.

Artifact caused by static electric charge on personnel can be lessened noticeably by reducing the buildup of static charge through the use of conductive clothing, shoes, and flooring, as well as by having personnel touch the bed before touching the patient.

2.7.6. Interference from Electric Devices

A major source of interference when one is recording or monitoring the ECG is the electric-power system. Besides providing power to the electrocardiograph itself, power lines are connected to other pieces of equipment and appliances in the typical hospital room or physician’s office. There are also power lines in the walls, floor, and ceiling running past the room to other points in the building. These power lines can affect the recording of the ECG and introduce interference at the line frequency in the recorded trace, as illustrated in Figure 2.19(a). Such interference appears on the recordings as a result of two mechanisms, each operating singly or, in some cases, both operating together.

fig19.jpg

Figure 2.19. (a) 50-Hz power-line interference. (b) EMG interference on the ECG.

Electric-field coupling between the power lines and the electrocardiograph and/or the patient is a result of the electric fields surrounding main power lines and the power cords connecting different pieces of apparatus to electric outlets. These fields can be present even when the apparatus is not turned on, because current is not necessary to establish the electric field. These fields couple into the patient, the lead wires, and the electrocardiograph itself.

 

The other source of interference from power lines is magnetic induction. Current in power lines establishes a magnetic field in the vicinity of the line. Magnetic fields can also sometimes originate from transformers and ballasts in fluorescent lights.

2.7.7. Interference Reduction Circuit

Bioelectric recordings are often disturbed by an excessive level of interference. Although its origin, the mains power supply, is clear in nearly all cases, the cause of the disturbance is not at all obvious, because in many cases very sophisticated equipment is used. Apparently, the use of equipment with very good specifications does not guarantee interference-free recordings. If a significant reduction of the level of interference is pursued, the whole measurement situation has to be analyzed.

In most bioelectric measurements an interference level of 1-10 µV p-p (less than 1% of the peak-to-peak value of an ECG) is acceptable. As the noise of a typical electrode is also several µV p-p (Geddes and Baker, 1966a; Spekhorst et al., 1988), in most circumstances 10 µV p-p can be accepted as the upper level of interference. The most common mechanisms of electrical mains interference are described in the Appendix. In this section we concentrate on the driven-right-leg circuit mentioned in the functional block diagram of Section 2.6.

Driven-Right-Leg System

In many modern electrocardiographic systems, the patient is not grounded at all. Instead, the right-leg electrode is connected (as shown in Figure 2.20) to the output of an auxiliary op amp. The common-mode voltage on the body is sensed by the two averaging resistors Ra, inverted, amplified, and fed back to the right leg. This negative feedback drives the common-mode voltage to a low value. The body's displacement current flows not to ground but rather to the op-amp output circuit. This reduces the pickup as far as the ECG amplifier is concerned and effectively grounds the patient.

fig20.jpg

Figure 2.20. Driven-right-leg circuit for minimizing common-mode interference

The circuit can also provide some electric safety. If an abnormally high voltage should appear between the patient and ground as a result of electric leakage or other cause, the auxiliary op amp in Figure 2.20 saturates. This effectively ungrounds the patient, because the amplifier can no longer drive the right leg. Now the parallel resistances Rf and Ro are between the patient and ground. They can be several megohms in value—large enough to limit the current. These resistances do not protect the patient, however, because 120 V on the patient would break down the op-amp transistors of the ECG amplifier, and large currents would flow to ground.

2.7.8. Other Sources of Electric Interference

Electric interference from sources other than the power lines can also affect the electrocardiograph. Electromagnetic interference from nearby high-power radio, television, or radar facilities can be picked up and rectified by the p-n junctions of the transistors in the electrocardiograph and sometimes even by the electrode-electrolyte interface on the patient. The lead wires and the patient serve as an antenna. Once the signal is detected, the demodulated signal appears as interference on the electrocardiogram.

Electromagnetic interference can also be generated by high-frequency generators in the hospital itself. Electrosurgical and diathermy equipment is a frequent offender. Grobstein and Gatzke (1977) show both the proper use of electrosurgical equipment and the design of an ECG amplifier required to minimize interference. Electromagnetic radiation can be generated from x-ray machines or switches and relays on heavy-duty electric equipment in the hospital as well. Even arcing in a fluorescent light that is flickering and in need of replacement can produce serious interference.

There is also a source of electric interference located within the body itself that can have an effect on ECGs. There is always muscle located between the electrodes making up a lead of the electrocardiograph. Any time this muscle is contracting, it generates its own electromyographic signal that can be picked up by the lead along with the ECG and can result in interference on the ECG, as shown in Figure 2.19(b). When we look only at the ECG and not at the patient, it is sometimes difficult to determine whether interference of this type is muscle interference or the result of electromagnetic radiation. However, while the ECG is being taken, we can easily separate the two sources, because the EMG interference is associated with the patient’s muscle contractions.

2.8. Analog System Design and Criteria (Materials and Methods)

In this section the implementation of the project is described in detail. The first subsection presents the general block diagram of the system, with a brief discussion of each block; the circuit diagram corresponding to each block is also described briefly. Details of the component characteristics may be found in Appendix C.

2.8.1. General Block Diagram

Figure 2.21 shows a simplified block diagram of the electrocardiograph implementation of the ICU monitor project. It consists mainly of the sensing electrodes and an instrumentation amplifier with high CMRR to pick up the weak signal and reject noise. The next step is filtering the signal to the range from 0.5 to 45 Hz, as stated in the standard for patient monitors. An ADC digitizes the signal for processing by a high-speed microcontroller unit, which also selects the lead vector to be displayed and controls the gain of the ECG signal. Control inputs and additional information are entered on a keypad, and the signal is displayed on an LCD.

The details of the analog blocks and circuits are described in the next section. The digital part and signal acquisition are explained in full detail in Chapter 5, in which the whole digital system of the monitor is fully described.

fig21.jpg

Figure 2.21. Block Diagram of the Electrocardiograph

2.8.2. Details of Each Block:

1. Sensing Electrodes:

They are used to measure the electric potential between leads using the approach of bipolar lead vector described in previous sections. The front end of an ECG sensor must be able to deal with the extremely weak nature of the signal it is measuring. Even the strongest ECG signal has a magnitude of less than 10mV, and furthermore the ECG signals have very low drive (very high output impedance).

Electrodes are used for sensing bioelectric potentials as caused by muscle and nerve cells. ECG electrodes are generally of the direct-contact type. They work as transducers converting ionic flow from the body through an electrolyte into electron current and consequently an electric potential able to be measured by the front end of the ECG system. The figure below shows the two types of the electrodes used in the monitor project.

Lead wire cable:

We made a lead-wire cable using shielded coaxial cable and a snap similar to those used in clothing; the shielding around the central conductor is connected to ground and the central conductor is connected to the snap.

2. Leads selector:

The function of this stage is to determine which electrodes are needed for a particular lead and to select them. The leads of choice are leads I, II and III. The dual analog multiplexer (CD4052) performs this task; it is shown below with its control signals.

fig22.png

Figure 2.22. Lead selector

The dual analog multiplexer (CD4052BC) is a digitally controlled analog switch with low on-resistance and very low off-state leakage currents. Control of analog signals up to 15 V p-p can be achieved with digital signal amplitudes of 3–15 V. When a logic 1 is present at the inhibit input, all channels are off.

This type (CD4052BC) is a differential 4-channel multiplexer having two binary control inputs, A and B. The two binary input signals select 1 of 4 pairs of channels to be turned on and connect those differential analog inputs to the differential outputs.
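As a toy illustration of how the two control bits pick a lead, a small sketch follows; the channel-to-electrode wiring here is hypothetical (the actual assignment is in Figure 2.22), with RA, LA, LL being the right-arm, left-arm and left-leg electrodes.

```python
# Toy model of bipolar-lead selection with a dual 4-channel mux: the (B, A)
# control bits pick one electrode pair. Wiring below is an assumption for
# illustration, not taken from the schematic.
LEAD_PAIRS = {
    (0, 0): ("LA", "RA"),  # Lead I   = LA - RA
    (0, 1): ("LL", "RA"),  # Lead II  = LL - RA
    (1, 0): ("LL", "LA"),  # Lead III = LL - LA
}

def select_lead(b: int, a: int):
    """Return the (positive, negative) electrode pair routed to the IA inputs."""
    return LEAD_PAIRS[(b, a)]

print(select_lead(0, 1))  # ('LL', 'RA') — Lead II
```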

3. Instrumentation Amplifier (IA) or Preamplifier:

The purpose of this stage is to reject noise and amplify the signal from the sensor electrodes, which typically falls in the 1 mV range, by a factor of 10. The basic architecture of this step follows a standard ECG monitoring circuit, found in most medical instrumentation textbooks. It functions by measuring the voltage difference between the two connected leads.

The instrumentation amplifier acts as the front end of the signal-acquisition system. The AD620 is chosen because it is a high-precision amplifier commonly used in bioelectronics, featuring a measured CMRR of at least 100 dB, low cost, a maximum supply current of 1.3 mA and a wide power-supply range (±2.3 V to ±18 V). It is also easier to use than a discrete three-op-amp IA design while offering higher performance. Low power consumption and signal accuracy are also important factors when choosing such an amplifier; these follow from its very low input bias current (10 nA) and offset voltage (50 µV), respectively. The figure below shows the AD620 as connected in our circuit.

fig23.png

Figure 2.23. Instrumentation amplifier

Gain is set by a single external resistor to be in the range of 1 to 1000 and is given by the equation:

G = 1 + 49.4 kΩ / RG

RG = 5.5 kΩ is selected so as to obtain a gain of 10.
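The gain equation can be rearranged to pick RG; a quick sketch of the arithmetic (function names are ours):

```python
# AD620 gain equation from the text: G = 1 + 49.4 kΩ / RG.
def ad620_gain(rg_ohms: float) -> float:
    """Gain for a given external resistor RG (ohms)."""
    return 1.0 + 49_400.0 / rg_ohms

def ad620_rg(gain: float) -> float:
    """External resistor RG (ohms) needed for a target gain."""
    return 49_400.0 / (gain - 1.0)

# A gain of 10 calls for RG = 49.4 kΩ / 9 ≈ 5.49 kΩ, matching the 5.5 kΩ chosen.
print(round(ad620_rg(10)))         # 5489
print(round(ad620_gain(5500), 2))  # 9.98
```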

4. Driven Right leg:

This stage provides a reference point on the patient that normally is at ground potential. It is implemented in ECG measurement systems to counter common-mode noise in the body. The circuit is shown in Figure 2.24. The two signals entering the differential amplifier are summed, inverted and amplified in the right-leg driver before being fed back to an electrode attached to the right leg. The other electrodes also pick up this signal, and hence the noise is cancelled.

Without the driven-right-leg circuit a direct ground path is provided, which introduces the risk of a ground-fault hazard. With the driven-right-leg circuit, the body's displacement current flows not to ground but to the op-amp output circuit, as shown below.

fig24.png

Figure 2.24.  Driven Right leg circuit

We use the LF411 for a variety of reasons, including low cost, high speed, very low input offset voltage (giving high signal accuracy), and low supply current while maintaining a large gain-bandwidth product.

5. Signal Filtering.

This block removes undesirable noise. The two approaches for this purpose are analogue circuitry and digital signal processing. The weak nature of the ECG signal and the heavy noise affecting it require multiple filter stages. The topologies and properties of the filters used are described in detail here. We use three types of filters: low-pass, high-pass and notch.

a- Low Pass Filter:

As standardized for ECG monitors, the frequency range of the displayed signal is 0.5 to 45 Hz. The low-pass filter removes a large amount of ambient noise and is responsible for ensuring that it does not affect the ECG obtained.

fig25.png

Figure 2.25.  Active low pass filter

The low-pass filter implemented is shown in Figure 2.25. It is a first-order active filter with a corner frequency of 45 Hz; from the equation f = 1/(2πRC), substituting C = 0.1 µF gives R ≈ 35 kΩ, and the standard value of 33 kΩ is used. The operational amplifier TL084 is used in the filter implementation for several reasons: low cost, low power consumption, low input bias and offset currents, and high input impedance. The response of the low-pass filter is shown in the figure below:

b- High pass Filter:

The high-pass filter implemented is shown in the figure below. We use a passive high-pass filter. The corner frequency is 0.5 Hz; from f = 1/(2πRC), substituting C = 22 µF gives R ≈ 14.5 kΩ.

fig26.png

Figure 2.26. Passive high-pass filter at 0.5 Hz

Passive filters make no use of amplifying circuitry such as transistors or op-amps, and as such are not constrained by the bandwidth limitations of those devices. Their other major advantages are that they require no power supply and that they generate less noise.


We also use an active high-pass filter with a corner frequency of 10 Hz for heart-rate calculation. From f = 1/(2πRC), substituting C = 1 µF gives R ≈ 15.9 kΩ; a standard 15 kΩ is used. The TL084 operational amplifier is used again. The active high-pass filter implemented is shown in the figure below:

fig27.png

Figure 2.27. Active high pass filter at 10 Hz
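The three corner-frequency calculations above all follow the same first-order RC formula; a small sketch reproduces them (the resistor values quoted in the text are the nearby standard parts):

```python
import math

def corner_resistor(f_hz: float, c_farads: float) -> float:
    """R = 1 / (2*pi*f*C) for a first-order RC corner frequency."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farads)

# 45 Hz low-pass with C = 0.1 µF
print(round(corner_resistor(45, 0.1e-6)))   # 35368 Ω  (33 kΩ standard part used)
# 0.5 Hz passive high-pass with C = 22 µF
print(round(corner_resistor(0.5, 22e-6)))   # 14469 Ω
# 10 Hz active high-pass with C = 1 µF
print(round(corner_resistor(10, 1e-6)))     # 15915 Ω  (15 kΩ standard part used)
```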

6. Amplification Stage and Variable Gain Amplifier.

The purpose of this stage is to amplify the ECG signal coming from the filtering stage. We use a TL084 operational amplifier to produce a gain of 50. The amplifier implemented is shown in the figure below:

fig28.png

Figure 2.28. Gain Amplifier

We then implement a variable-gain amplifier to produce gains of 2, 4, 8 and 10, using a TL084 op amp together with an analog multiplexer (CD4051). The variable-gain amplifier implemented is shown in the figure below:

fig29.png

Figure 2.29. Variable Gain Amplifier
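Putting the stages together, the selectable end-to-end gain can be tabulated. This sketch assumes the stages simply cascade (IA ×10, fixed stage ×50, selectable stage) and ignores in-band filter attenuation:

```python
# Overall channel gain of the analog chain described above (assumed cascade).
IA_GAIN = 10            # AD620 front end
FIXED_GAIN = 50         # fixed TL084 stage
SELECTABLE_GAINS = [2, 4, 8, 10]

system_gains = [IA_GAIN * FIXED_GAIN * g for g in SELECTABLE_GAINS]
print(system_gains)  # [1000, 2000, 4000, 5000]

# A 1 mV ECG peak therefore appears as roughly 1-5 V at the ADC input.
for g in system_gains:
    print(f"1 mV in -> {g * 1e-3:.1f} V out")
```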

7. DC Shift.

After filtering and amplification, the signal is ready to be digitized by the ADC0808, which requires its input to lie entirely in the positive voltage range. A summing amplifier achieves this; its topology is shown in Figure 2.30.

fig30.png

Figure 2.30. DC shift circuit
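A minimal sketch of what the summing stage does, assuming a hypothetical mid-rail offset of 2.5 V; the actual offset depends on the resistor values in the circuit above:

```python
# DC shift: add a fixed offset so a bipolar ECG swing lands inside the
# ADC0808's 0-5 V input range. The 2.5 V value is an assumption for
# illustration, not taken from the schematic.
V_OFFSET = 2.5  # volts (hypothetical mid-rail shift)

def dc_shift(v_in: float) -> float:
    """Summing-amplifier transfer: output = input + fixed offset."""
    return v_in + V_OFFSET

print(dc_shift(-2.0))  # 0.5 — negative excursion is now positive
print(dc_shift(2.0))   # 4.5 — still within the 5 V full scale
```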

8. Full Wave Rectifier Circuit.

We use this circuit after the 10 Hz high-pass filter to rectify the signal. Its output feeds a comparator that passes only the peaks of the ECG signal, from which the number of heartbeats is obtained by further calculation in the microcontroller unit. The full-wave rectifier circuit implemented is shown in the figure below:

fig31.png

Figure 2.31. Full wave rectifier circuit

9. The Analog-To-Digital Converter

This stage is the most important in our system because it converts the signal from analog to digital form so that further calculation and processing can be performed in the microcontroller. Accurate selection of the sampling rate is very important.

In our design we use the ADC0808 because its conversion time is sufficient for the application, and for its accuracy and low cost. It has the following specs:

    • Resolution: 8 bits
    • Total unadjusted error: ±½ LSB and ±1 LSB
    • Single supply: 5 V DC
    • Low power: 15 mW
    • Conversion time: 100 µs

In our application we take the maximum frequency in the signal to be the corner frequency of the low-pass filter. Since we sample one channel, the sampling rate must exceed 2·fm (twice the maximum frequency component), according to the Nyquist theorem. To be more practical, we sample the signal at 8·fm, controlled by the microcontroller. The input clock of the ADC0808 is 640 kHz, as its datasheet suggests; this clock is supplied by the trigger circuit. The output digital data is then fed to the microcontroller through an octal buffer, which protects our circuit from overloading when connected to the microcontroller and makes the design safer and more practical.

fig32.png

Figure 2.32.  Analog To Digital Converter
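As a quick check of the sampling choices described above (a sketch; the constant names are ours):

```python
FM = 45          # Hz: maximum signal frequency, taken as the low-pass corner
FS = 8 * FM      # Hz: oversampled rate used in the design (8 * fm)
T_CONV = 100e-6  # s: ADC0808 conversion time from the spec list

assert FS > 2 * FM        # Nyquist criterion comfortably satisfied
assert 1.0 / FS > T_CONV  # sample period (~2.8 ms) dwarfs the 100 µs conversion

print(FS)  # 360
```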

2.9. Results and Conclusion (Problems while implementing the hardware)

Perhaps the lack of experience was the first problem we faced. One should first become well versed in the topic through a solid background in it. Some disciplines should be learned first, such as physiology, microprocessors, and digital and analog circuits and systems.

The first step, therefore, the base of the project's pyramid, was building a very wide knowledge base. To make this more useful, a chapter about design criteria was added to this text to show how engineers, especially biomedical engineers, should think about a design and offer a good implementation of the problem.

The second problem was how to make our design as simple as possible and then refine it to be more professional. The next problem was the serious shortage of materials, especially electronic components, in the market.

In the ECG analog implementation we designed the circuit on a breadboard without any patient-safety considerations, constructing first the instrumentation amplifier without the driven-right-leg circuit and even without any filtering.

Once we had moderate results, we started implementing signal filtering and amplification. The analog filtering was very complicated owing to the very high noise and lack of sharpness we noticed in the signal. To solve these problems, we cascaded three low-pass filters with nearly the same cut-off frequency, giving a third-order low-pass response in total. In the high-pass stage we tried both the active and passive types, but the passive one gave the better signal. This was followed by the amplification circuit, then a variable-gain amplifier providing several gain selections. Afterwards the lead selector was constructed to select any required lead from the three bipolar leads I, II and III.

As a pre-final step in the analog part, the driven-right-leg circuit was implemented, and we made a lead-wire cable because of the great noise we were encountering.

As a verification step before implementing the digital part, we took the analog signal and displayed it on a digital oscilloscope, then captured it with the analog-to-digital converter (ADC0808) and displayed it on a PC via the parallel port using a small program.

The figure below shows the ECG signal taken from a simulator and displayed on a digital oscilloscope before work started on the digital part of the monitor.

The enclosed CD-ROM includes a gallery of the signal step by step on the digital oscilloscope. The final result of the analog part can be seen in figure 2.33.

fig33.jpg

Figure 2.33. The Analog Circuit (The signal displayed on the digital oscilloscope)

You can download my thesis from this link. This work was done under supervision of Prof. Euisik Yoon at the University of Michigan. In other posts in this website, I will try to summarize some of the topics covered in this work. Below you can read the abstract.



Abstract

Understanding dynamics of the brain has tremendously improved due to the progress in neural recording techniques over the past five decades. The number of simultaneously recorded channels has actually doubled every 7 years, which implies that a recording system with a few thousand channels should be available in the next two decades. Nonetheless, a leap in the number of simultaneous channels has remained an unmet need due to many limitations, especially in the front-end recording integrated circuits (IC).

This research has focused on increasing the number of simultaneously recorded channels and providing modular design approaches to improve the integration and expansion of 3-D recording microsystems. Three analog front-ends (AFE) have been developed using extremely low-power and small-area circuit techniques on both the circuit and system levels. The three prototypes have investigated some critical circuit challenges in power, area, interface, and modularity.

The first AFE (16 channels) has optimized energy efficiency using techniques such as moderate inversion, a minimized asynchronous interface for data acquisition, power-scalable sampling operation, and a wide configuration range of gain and bandwidth. Circuits in this part were designed in a 0.25 μm CMOS process using a 0.9-V single supply and feature a power consumption of 4 μW/channel and an energy-area efficiency of 7.51×10¹⁵ J⁻¹Vrms⁻¹mm⁻².

The second AFE (128 channels) provides the next level of scaling using dc-coupled analog compression techniques to reject the electrode offset and reduce the implementation area further. Signal processing techniques were also explored to transfer some computational power outside the brain. Circuits in this part were designed in a 180 nm CMOS process using a 0.5-V single supply and feature a power consumption of 2.5 μW/channel and an energy-area efficiency of 30.2×10¹⁵ J⁻¹Vrms⁻¹mm⁻².

The last AFE (128 channels) shows another leap in neural recording using monolithic integration of recording circuits on the shanks of neural probes. Monolithic integration may be the most effective approach to allow simultaneous recording of more than 1,024 channels. The probe and circuits in this part were designed in a 150 nm SOI CMOS process using a 0.5-V single supply and feature a power consumption of only 1.4 μW/channel and an energy-area efficiency of 36.4×10¹⁵ J⁻¹Vrms⁻¹mm⁻², which is the highest reported efficiency to date.

This was a report I wrote in 2001 and I thought it would be good to share as is. Please excuse my poor language at that time and the missing references. Still, I hope you will find good information about the topic if you are interested.



Abstract

One reason I got motivated to write about CCDs is their presence in a variety of medical and non-medical equipment, especially digital imaging systems including x-ray, endoscope cameras, and extra- and intra-oral cameras. One important point is that using such a sensor can avoid many of the hazards of such systems; for example, it can reduce the dose required for an x-ray image to about 80% of the dose needed with x-ray film.

In this report I will cover:

1. What is a CCD?

2. Some basics of CCDs.

3. Theory and operation of CCDs.

4. How CCDs record color or distinguish among photons of different energies.

5. Some aspects of CCD behavior (characteristics).

6. Different types of CCDs.

7. Noise sources in CCDs.

8. Some methods to test the performance of a CCD camera.

9. Why the CCD is so great (advantages and early limitations).

10. Some biomedical applications of CCDs.

11. The future of CCDs.

1. What is a Charge-Coupled Device?

1.1. Overview of Photographic Detectors

Photographic plates were ubiquitous. The advantages that they offered were basically threefold:

  • Unlike the eye, they were an integrating detector: fainter objects could be detected by making longer exposures to accumulate more light;
  • The images were objective and reproducible (unlike a sketch);
  • The photographic image constituted a quantitative measure of the light distribution across the luminous object (at least in principle).

Nonetheless there were problems with photographic plates: they had only a limited dynamic range and their response to the brightness of the illuminating light was non-linear, leading to persistent calibration problems. In the middle years of the twentieth century photoelectric photometers were developed: electronic devices, which were more sensitive, accurate, linear and had a wider dynamic range than the photographic plate. However, they were not imaging devices: they merely produced a single output corresponding to the brightness of one point on the sky.

In many ways CCDs (Charge-Coupled Devices) combine the advantages of both photographic plates and photoelectric photometers, though their principles of operation are very different from either. They have high sensitivity, a linear response, and a large dynamic range. Imaging devices of this kind are sometimes called, perhaps somewhat grandiloquently, panoramic detectors.

1.2. Introduction to the CCD

CCD sensors are a fast way to acquire information from optical phenomena such as laser light. The Charge-Coupled Device, or CCD, was co-invented in 1970 by Boyle and Smith at Bell Labs.

A Charge Coupled Device (CCD) is a highly sensitive photon detector. It is an electrical device that is used to create images of objects, store information (analogous to the way a computer stores information), or transfer electrical charge (as part of larger device). It receives as input light from an object or an electrical charge. The CCD takes this optical or electronic input and converts it into an electronic signal – the output. The electronic signal is then processed by some other equipment and/or software to either produce an image or to give the user valuable information.

The CCD is divided up into a large number of light-sensitive small areas (known as pixels), which can be used to build up an image of the scene of interest.

A photon of light, which falls within the area defined by one of the pixels, will be converted into one (or more) electrons and the number of electrons collected will be directly proportional to the intensity of the scene at each pixel. When the CCD is clocked out, the number of electrons in each pixel is measured and the scene can be reconstructed.
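The proportionality between incident photons and collected electrons can be sketched as follows; the quantum-efficiency value is purely illustrative, not from the text:

```python
# Linearity sketch: collected electrons ≈ quantum efficiency × incident photons.
QE = 0.8  # illustrative quantum efficiency (fraction of photons converted)

def electrons_collected(photons: int, qe: float = QE) -> int:
    """Electrons accumulated in a pixel for a given photon count."""
    return int(photons * qe)

# Doubling the light doubles the signal — the linear response described above.
print(electrons_collected(1000))  # 800
print(electrons_collected(2000))  # 1600
```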

The picture on the first page shows a "typical" CCD. The CCD itself is primarily made of silicon, and its structure has been altered so that some of the silicon atoms have been replaced with impurity atoms.

2. Basics and Physics of CCD.

2.1. Introduction

A CCD is best described as a semiconductor chip, one face of which is sensitive to light. The light-sensitive face is rectangular in shape and subdivided into a grid of discrete rectangular areas (picture elements, or pixels), each about 10–30 microns across. The arrival of a photon at a pixel generates a small electrical charge, which is stored for later read-out. The charge grows cumulatively as more photons strike the surface: the brighter the illumination, the greater the charge. This description is the merest outline of a complicated and involved subject. CCD pixel grids are usually square, and the number of pixels on each side often reflects the computer industry's predilection for powers of two. Early CCDs used in the 1970s often had 64×64 elements; 256×256 or 512×512 element chips were typical in the 1980s, and 1024×1024 or 2048×2048 element chips are common now.

CCD chips are composed of an array of photosensors fabricated on a light-sensitive crystalline silicon chip. These photosensitive elements transform incoming light (photons) into voltages that can be digitized into discrete values. The electric charges produced are stored within metal-oxide-semiconductor (MOS) capacitors that function as potential wells. These charges are shifted from one potential well to the next, using changes in voltage, until they reach an external terminal, where the final readout process happens.

2.2. A quick review of the "Band Theory of Solids"

Shown below are the potentials and allowed energy levels of (1) two isolated atoms, (2) two atoms in a diatomic molecule, and (3) four atoms in a 1-D crystal.

  1. Two widely spaced atoms have twofold exchange degeneracy. That is, the energy states are the same for symmetric or antisymmetric space eigenfunctions.
  2. When the atoms are brought together, the exchange degeneracy is removed. Atoms in a symmetric space state will have lower energy, because the electrons spend more time in the region between the two nuclei where the potential is lower. Each of the upper energy levels splits into two states.
  3. For a 4-atom system the upper energy levels each split into 4 states. Analogously, an N-atom system will split each energy level into N states.

When the number of atoms is on the order of 10²³ (a mole), the splittings of a given energy level are so closely spaced that they form a continuous "band".

The spacing of the allowed energy bands determines whether a material will be an insulator or a conductor.  A semi-conductor has a small gap (~1 eV) between the valence and conduction bands. At low temperatures it is an insulator. At room temperature some electrons are thermally excited into the conduction band.   Another way to excite an electron into the conduction band is via the photoelectric effect. This property is what makes semi-conductors useful as light detectors.
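The temperature dependence mentioned above can be estimated with a Boltzmann factor. A sketch follows; the 1.12 eV silicon band gap is a standard textbook value, not taken from this report:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K
E_GAP = 1.12    # eV: silicon band gap (standard textbook value, assumed)

def boltzmann_factor(t_kelvin: float) -> float:
    """Relative thermal population of the conduction band, ∝ exp(-Eg / 2kT)."""
    return math.exp(-E_GAP / (2.0 * K_B * t_kelvin))

# Halving the temperature suppresses thermal excitation by ~9 orders of
# magnitude — one reason scientific CCDs are often operated cold.
print(boltzmann_factor(300) / boltzmann_factor(150))
```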

2.3. CCD Components.

The figure below shows a very simplified cross section through a CCD. It can be seen that the silicon itself is not arranged to form individual pixels. In fact, the pixels are defined by the position of electrodes above the CCD itself. If a positive voltage is applied to an electrode, this positive potential will attract all of the negatively charged electrons close to the area under the electrode, while any positively charged holes are repelled from the area around the electrode. Consequently a "potential well" forms, in which all the electrons produced by incoming photons will be stored.

fig01.png

As more and more light falls onto the CCD, the potential well surrounding the electrode attracts more and more electrons until the well is full (the number of electrons that can be stored under a pixel is known as the full-well capacity). To prevent this happening, light must be kept from falling onto the CCD, for example by using a shutter as in a camera. Thus an image is made of an object by opening the shutter, "integrating" for a length of time to collect electrons in the potential wells, and then closing the shutter to ensure that the full-well capacity is not exceeded.

An actual CCD consists of a large number of pixels (i.e., potential wells), arranged horizontally in rows and vertically in columns. The number of rows and columns defines the CCD size; a typical size is 1024 pixels high by 1024 pixels wide. The resolution of the CCD is defined by the size of the pixels and by their separation (the pixel pitch). In most CCDs the pixels touch each other, so the resolution is defined by the pixel size, typically 10–20 µm. Thus a 1024×1024 CCD has a physical image area of about 10 mm × 10 mm.
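The arithmetic behind the quoted chip size, as a quick sketch (function name is ours):

```python
def sensor_size_mm(pixels: int, pitch_um: float) -> float:
    """Physical side length of a square CCD: pixel count × pixel pitch."""
    return pixels * pitch_um / 1000.0

# 1024 pixels at a 10 µm pitch give the "about 10 mm" side quoted above.
print(sensor_size_mm(1024, 10))  # 10.24 (mm)
```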

fig02.png

A CCD chip is a metal-oxide-semiconductor (MOS) device. This means that its base, constructed of a material that is a good conductor under certain conditions, is topped with a layer of a metal oxide. In the case of the CCD, silicon is usually used as the base material and silicon dioxide as the coating. The final, top layer is also made of silicon, in polycrystalline form (polysilicon).

The basic unit of a CCD is the Metal Oxide Semi-conductor (MOS) capacitor

2.4. The basic unit of a CCD. (The MOS Capacitor)

It consists of:

Gate 

A thin layer of metal or heavily doped polycrystalline silicon attached to an electrode forms the gate. A bias voltage may be applied to the gate in order to change the shape of the underlying potential.

Oxide layer 

A 0.1-micron thick oxide layer (usually SiO2) beneath the gate functions as the dielectric of the capacitor. The oxide is thickened to ~0.5 – 1.5 microns above the channel stops to insulate them from changes in the gate voltage.

Channel stop 

The function of the channel stop regions is to confine charge. They are made of heavily doped p-type materials with an extra thickness of oxide over top. This makes them relatively insensitive to voltages applied to the gate and thus an effective potential barrier.

N-type buried channel 

Most modern CCDs have buried channels. A buried channel is created by the addition of an n-type layer (~1 micron thick) at the top of the substrate, just beneath the oxide. An n-type (negative) semi-conductor is one that has been doped with donor impurities, yielding an excess of free electrons in the conduction band. The effect of the n-type layer is to move the potential minimum away from the Si-SiO2 interface, eliminating the "fast surface states" which cause problems with charge transfer. The region where the signal charge collects, termed a channel, is within the n-type region.

P-type substrate 

A p-type (positive) semi-conductor is one that is doped with acceptor impurities, resulting in "holes" in the valence states. The substrate is usually at least 15 microns thick.

Depletion region 

In the depletion region, electrons from the n-type region have combined with holes from the p-type region. The result is the establishment of a potential difference because the n-type region becomes positively charged, while the p-type region becomes negatively charged. When photons are absorbed in the depletion region they form electron-hole pairs. The electrons are attracted to the n-type region. The holes diffuse away into the p-type region.

The silicon that forms the base and the top layer, however, is special in nature. It is a silicon material that is doped with, or made to contain, a small amount of some other material. Doping endows materials with special properties that can be exploited through different electrical means.

2.5. The physical point of view

To understand why doped silicon would have special properties and how those properties can be exploited, consider how silicon normally forms chemical bonds. Like carbon, a silicon atom can form up to four bonds with adjacent atoms. This is because silicon has four valence electrons that it can share to form bonds. In a crystal of pure silicon, all atoms (not on the surface of the crystal) would be perfectly bonded to four neighboring atoms; in this case, there are no extra electrons, but also no places where electrons are missing. You can see this by drawing the Lewis structures.

fig03.png

If, however, we introduce into the perfect crystal an element with only three electrons available for bonding, this atom will form three normal bonds and one bond with a "hole", meaning that it is missing an electron. What is interesting here is that this "hole" can actually move around the entire crystal. An electron nearby can move to fill in the original hole, but in eliminating the original hole it creates a new one. Effectively, this hole is able to move around just as freely as a mobile electron. Such a material, one that contains extra holes, is called a p-type material.

fig04.png

A material with extra electrons is called an n-type material. In an n-type material, the “contaminating” element has five available electrons, so it makes the four usual bonds, but then has an extra electron left over. It is important to note that these materials are all neutral, and that extra electrons or extra holes in this case do not make the materials charged but merely come from what is left over or needed for a neutral atom to form four bonds.

Upon application of the right stimulus, the movement of the hole can be directed. This is one of the fundamental keys to the operation of the CCD. An electron is repelled by negative charge and attracted by positive charge. A hole, however, is repelled by positive charge and attracted by negative charge. In this way, we can think of a hole as a sort of “positive” electron, even though it is not one. Just as we can control the motion of electrons by applying different electric fields or charges in the vicinity, so can we control the motion of holes.

fig05.png

If a p-type and an n-type material are brought into contact, a p-n junction is formed and a very interesting result occurs. Extra electrons from the n-type material will diffuse into the p-type material and fill in some of its extra holes. This diffusion and recombination of electron-hole pairs across the boundary results in the n-type material becoming positively charged and the p-type material becoming negatively charged. Recall that before the two materials were brought into contact, and before diffusion occurred, they were both neutral. As diffusion occurs and the n-type and p-type materials become increasingly charged, an electric field is generated around the contact boundary. This electric field eventually slows and stops the diffusion of charge across the boundary. When diffusion stops, there are no more extra electrons or holes around the boundary; they have all recombined. This region surrounding the boundary, in which electrons and holes have recombined, is called the depletion region. Outside of the depletion region, extra electrons still remain in the n-type material and extra holes remain in the p-type material. The depletion region is the key area that can be used to create electrical devices. By applying a voltage across the depletion region, we can either increase or decrease its electric field. If the electric field is increased by an applied voltage (reverse bias), the depletion region widens and less of any applied current can flow through the two materials. If the electric field is decreased by an applied voltage (forward bias), the depletion region narrows and more applied current is allowed to flow through the two materials.

fig06.png

The importance of applying voltages to the depletion region (called biasing the p-n junction) is that it allows us to precisely control the applied current through any p-n material. When the p-n junction is reverse-biased, only an infinitesimal amount of applied current can flow, which for all practical purposes is zero. This corresponds to the “off” state. When the p-n junction is forward-biased, current flows easily through the junction (because the smaller electric field does not impede the flow of charges as much). In fact, by plotting a graph of applied voltage versus current flow – an I-V curve – we can see that the dependence of current flow on applied voltage across the junction is exponential. Forward bias corresponds to the “on” state. Thus, biasing the junction through the application of voltages can be used to precisely control the motion of electrical charge.
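The exponential dependence described above can be sketched numerically. The following fragment uses the ideal-diode (Shockley) equation; the saturation current and thermal voltage are illustrative values assumed here, not figures from the text.

```python
import math

def diode_current(v_applied, i_sat=1e-12, v_thermal=0.02585):
    """Ideal-diode equation: I = Is * (exp(V/Vt) - 1).

    i_sat (saturation current, A) and v_thermal (kT/q at ~300 K, V)
    are assumed illustrative values.
    """
    return i_sat * (math.exp(v_applied / v_thermal) - 1.0)

# Forward bias ("on"): current grows exponentially with voltage.
i_fwd = diode_current(0.6)
# Reverse bias ("off"): current saturates near -Is, practically zero.
i_rev = diode_current(-0.6)
```

Plotting `diode_current` over a range of voltages reproduces the exponential I-V curve the text refers to.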

3. Theory and Operation of CCD.

3.1.Introduction

A CCD in isolation is just a semiconductor chip. In order to turn it into a usable instrument it needs to be connected to some electronics to power it, control it and read it out. By using a few clocking circuits, an amplifier and a fast analogue-to-digital converter (ADC), usually of 16-bit accuracy, it is possible to estimate the amount of light that has fallen onto each pixel by examining the amount of charge it has stored up. Thus, the charge which has accumulated in each pixel is converted into a number. This number is in arbitrary ‘analogue data units’ (ADUs); that is, it is not yet calibrated into physical units. The ADC factor is the constant of proportionality that converts ADUs into the amount of charge (expressed as a number of electrons) stored in each pixel. This factor is needed during the data reduction and is usually included in the documentation for the instrument. The chip will usually be placed in an insulating flask and cooled (often with liquid nitrogen) to reduce the noise level. The whole instrument is often referred to as a CCD camera.

The electronics controlling the CCD chip are interfaced to a computer, which in turn controls them. Thus, the images observed by the CCD are transferred directly to computer memory, with no intermediate analogue stage, whence they can be plotted on an image display device or written to magnetic disk or tape.

3.2.A CCD must perform 4 tasks to generate an image:

  1) Generate Charge → Photoelectric Effect

Silicon exhibits an energy gap of 1.14 eV. Incoming photons with energy greater than this can excite valence electrons into the conduction band, thus creating electron-hole pairs. These pairs diffuse through the silicon lattice structure. The average lifetime for these carriers is about 100 µs, after which the e-h pair recombines.

fig07.png

Photons with energy of 1.1 to 5 eV generate single e-h pairs.  Photons with energy > 5 eV produce multiple pairs.
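The 1.14 eV gap fixes the longest wavelength silicon can detect, since a photon's energy in eV is hc divided by its wavelength. A minimal sketch (the constant hc ≈ 1239.84 eV·nm is a standard physical value, not from the text):

```python
H_C_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def cutoff_wavelength_nm(band_gap_ev):
    """Longest photon wavelength that can still create an e-h pair."""
    return H_C_EV_NM / band_gap_ev

lam = cutoff_wavelength_nm(1.14)  # silicon band gap from the text
# ~1088 nm: photons redder than this pass through silicon undetected.
```

This is consistent with the ~1050 nm long-wavelength limit quoted later in the wavelength-range section.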

2) Collect Charge → pixels: an array of electrodes (called gates)

3) Transfer Charge → apply a differential voltage across gates. Signal electrons move down vertical registers (columns) to the horizontal register. Each line is serially read out by an on-chip amplifier.

4) Detect Charge → individual charge packets are converted to an output voltage and then digitally encoded.

3.3.How it works.

In a CCD, an array or matrix of electrodes controls the electrical field at different parts of the surface; these electrodes are called the gates. (CCD arrays can be either one-dimensional or two-dimensional, but here we will consider the one-dimensional array in detail, and then apply that information to understand the two-dimensional array.) This array of electrodes biases each small part of the surface differently, which allows any flow of charge on the CCD to be controlled.

The surface of the CCD is further broken down into smaller regions called pixels, or picture elements. This name is appropriate because they represent a single “grain” of the imaged object (just as your TV images appear to be made up of tiny “grains”). The array of electrodes applies a positive potential (+Vg, a positive electric field) to two-thirds of each pixel, thus forward-biasing that portion of the pixel. Let’s represent the first third of the pixel by Ø1, the second third by Ø2, and the last third by Ø3. So, Ø1 and Ø3 are at a positive potential of +Vg, and Ø2 is at a lower potential, Vs.

When light or photons of high enough energy strike the surface, electrons are usually liberated from the surface. For every electron liberated, a hole is created simply by the act of the electron leaving. Thus, incident photons create electron-hole pairs. The hole, being effectively positive, is repelled by the applied positive potential into the base of the chip. The electron, however, is captured in the nearest potential well. The more light incident on a pixel, the more electrons captured in the potential wells. Thus, differences in the intensity of incoming light are “recorded” by the number of electrons collected in each potential well.

So now the challenge is to extract information from these “electron-collecting bins” (which may also be thought of as tiny capacitors). To do this, the charge packets (the collection of electrons in each well) must be transferred to another device for data processing. This is accomplished by sequentially changing the applied voltage at the three parts of each pixel, a process called clocking out the CCD (the 2nd step).

A two-dimensional CCD is composed of channels, or rows along which charge is transferred. Each channel is essentially a one-dimensional CCD array. Charge is prevented from moving sideways by channel stops, which are the narrow barriers between the closely spaced channels of the CCD.

3.4.How it is clocked out.

fig08

The figure below shows a cross section through a row of a CCD. Each pixel actually consists of three electrodes IØ1, IØ2, and IØ3. Only one of these electrodes is required to create the potential well, but the other electrodes are required to transfer the charge out of the CCD. The upper section of the figure (section 1) shows charge being collected under one of the electrodes. To transfer the charge out of the CCD, a new potential well can be created by holding IØ3 high; the charge is then shared between IØ2 and IØ3 (section 2). If IØ2 is now taken low, the charge will be fully transferred under electrode IØ3 (section 3). To continue clocking out the CCD, taking IØ1 high and then taking IØ3 low will ensure that the charge cloud drifts across under the IØ1 electrodes. As this process is continued, the charge cloud will progress either down the column or across the row, depending upon the orientation of the electrodes.

The figure below (called a clocking diagram) shows the progression under which each electrode is held high and low to ensure that charge is transferred through the CCD.

fig09

Initially, IØ2 is high – usually at around 12 V – and the charge is held under that electrode as in (1) previously. When IØ3 is held high and IØ2 is taken low (usually 0 V), the charge migrates under the IØ3 electrode (as in (2)). Finally, taking IØ1 high and IØ3 low transfers the charge under IØ1 (as in (3)). When this process is repeated in transfer 2 and transfer 3, the charge has been moved three pixels along. This process is known as charge coupling (hence CCD).
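The net effect of each full three-phase cycle is that every charge packet advances one pixel toward the output. A toy model (the function name and the representation of a row as a plain list are assumptions for illustration; real devices move charge through three gate phases per pixel):

```python
def clock_ccd(row, cycles=1):
    """Simulate charge coupling along a row of wells.

    Each full three-phase clock cycle moves every charge packet one
    pixel toward the readout end; the packet that reaches the end is
    collected in the returned 'read_out' sequence.
    """
    row = list(row)
    read_out = []
    for _ in range(cycles):
        read_out.append(row.pop())   # last pixel reaches the output node
        row.insert(0, 0)             # an empty well enters at the far end
    return row, read_out

row, out = clock_ccd([10, 20, 30], cycles=3)
# out == [30, 20, 10]; the row is left as empty wells: [0, 0, 0]
```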

fig10.png

For most of the CCD, the electrodes in each pixel are arranged so that the charge is transferred downwards along the columns. Hence, during the CCD clocking operation, rows are transferred downwards to the final row (the readout register), which is used to transfer the charge in each pixel out of the CCD so it can be measured.

In the read out register, the electrodes are arranged so that the charge is transferred in the horizontal direction, along the readout register.

3.5.How the charge is measured and read out.

The final process on the CCD is the reading of each pixel so that the size of the associated charge cloud can be measured. At the end of the readout register is an amplifier which measures the value of each charge cloud and converts it into a voltage, a typical conversion factor being around 5–10 µV per electron, with “typical” full-well values being about 100,000 electrons or so.

A CCD camera will consist of the CCD chip and associated electronics, which are used at this point to amplify the small voltage on the CCD, remove noise components, digitize the pixel values and output the value of each pixel, for example to a PC, where the image can be processed in software and displayed. The CCD is an analogue device, and the analogue voltage values are converted into a digital form by the camera electronics.

Calculating the length of time required to read out any particular image is straightforward. Before doing so, however, it is necessary to understand how the CCD is read out. As shown in the figure, the CCD is read out by clocking each row down the CCD until it reaches the serial (readout) register. Once in the register, pixels are shifted across, one at a time, into the readout amplifier.

We will have two CCDs, one detecting the long wavelengths and one detecting the short wavelengths. To improve readout time and to provide some redundancy, each CCD has two read out ports, one at the bottom right of the CCD and one at the bottom left.

Charge is transferred along the parallel register (parallel to the channel stops); that is, by “clocking” the gates, each row is moved in unison to the serial register, where the pixels are moved one by one to the amplifier.
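The readout-time calculation described above can be sketched as follows. The per-row shift time, per-pixel digitization time, and the even split of each row between the two readout ports are assumed values for illustration, not figures from the text.

```python
def readout_time_s(n_rows, n_cols, row_shift_us=10.0, pixel_read_us=1.0,
                   n_ports=1):
    """Estimate CCD readout time in seconds.

    Each row is shifted into the serial register (row_shift_us per row),
    then every pixel in that row is digitized (pixel_read_us per pixel).
    With n_ports readout amplifiers, each port digitizes an equal share
    of the row. All timings are assumed for illustration.
    """
    cols_per_port = n_cols / n_ports
    per_row = row_shift_us + cols_per_port * pixel_read_us
    return n_rows * per_row * 1e-6

t_one = readout_time_s(1024, 1024, n_ports=1)
t_two = readout_time_s(1024, 1024, n_ports=2)
# Two readout ports roughly halve the serial-read portion of the time.
```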

4. How CCDs record color or distinguish among photons of different energies (Color Differentiation)

In imagers or CCD cameras, CCDs are only part of the whole device. A lens is required to properly focus the incident radiation from the object onto the array. In addition, since the pixels themselves are monochrome, there must be a way to select for the wavelengths of light incident on the array. Colored filters are thus used to record colors in the case of visible light. In most digital cameras, a color filter array registers the intensity of a single color at each pixel. By interpolation, algorithms use the color intensities at nearby pixels to estimate the intensity of a color at each pixel. This is how a full-color image is generated. A single picture made by a CCD imager that is only 500 pixels by 500 pixels holds the same amount of raw information as a 100,000 word book! In addition, the number of electrons collected is proportional to the energy of the incident photons. So mathematically, the energies of the photons liberated can be calculated.

5. Some aspects of CCD behavior (Characteristics)

5.1. Charge Transfer Efficiency (CTE).

Charge in a typical pixel is transferred many hundreds or thousands of times before being read out. Thus the Charge Transfer Efficiency (CTE) is very important.

CTE is the percent of charge transferred from 1 pixel to the next. This number needs to be very close to 100 %. Here’s why:

  • Consider the middle pixel of a 1024 x 1024 CCD. Charge stored in this pixel will be moved 1024 times (512 transfers along the parallel register, 512 along the serial register).
  • Let’s say the CTE is 98 %.
  • By the time it is read out it has (0.98^1024) x 100 % = 0.0000001 % of its original charge. This is not good.
  • A good CTE today is 99.999 %. With this CTE the center pixel will have 99 % of its original charge by the time it is read out.
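The arithmetic behind these bullet points is a single exponentiation:

```python
def remaining_fraction(cte, n_transfers):
    """Fraction of a pixel's original charge left after n transfers."""
    return cte ** n_transfers

# Centre pixel of a 1024 x 1024 CCD: 512 parallel + 512 serial transfers.
poor = remaining_fraction(0.98, 1024)     # ~1e-9: essentially nothing left
good = remaining_fraction(0.99999, 1024)  # ~0.99, as quoted above
```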

5.2. Quantum Efficiency (QE) or Linearity.

fig11

On the whole, the eye is not a linear detector (except over very small variations in intensity) and has a logarithmic response. An important consideration in a detector is its ability to respond linearly to any image it views. By this we mean that if it detects 100 photons it will convert these to 100 electrons (if we had 100% QE) and if it detects 10000 photons, it will convert these to 10000 electrons. In such a situation, we say that the detector has a linear response. Such a response is obviously very useful as there is no need for any additional processing on the image to determine the ‘true’ intensity of different objects in an image.

Not every photon falling onto a detector will actually be detected and converted into an electrical impulse. The percentage of photons that are actually detected is known as the Quantum Efficiency (QE). For example, the human eye only has a QE of about 20%, photographic film has a QE of around 10%, and the best CCDs can achieve a QE of over 80%. Quantum efficiency will vary with wavelength.

The quantum efficiency (Q.E.) of a sensor describes its response to different wavelengths of light (see chart). Standard front-illuminated sensors, for example, are more sensitive to green, red, and infrared wavelengths (in the 500 to 800 nm range) than they are to blue wavelengths (400 – 500 nm). This is most noticeable in tri-color imaging with color filters, where exposures taken with the blue filter need to be much longer than the exposures taken with the green and red filters, in order to achieve proper color balance.

Back-illuminated CCDs have exceptional quantum efficiency compared to front-illuminated CCDs. This is due to the way they are constructed. How do you make a back-illuminated CCD? Simply take a front-illuminated CCD, thin it to only 15 µm thick and mount it upside down on a rigid substrate. The incoming light now has a clear shot at the pixel wells without those pesky gate structures blocking the view.

Typical Q.E. curves for front- and back-illuminated CCDs are shown in the figure above.

Note that CCDs with anti-blooming have about 1/2 the Q.E. of those without anti-blooming. CCDs with backside illumination can boost quantum efficiency to over 85%.

5.2.1.Frontside vs. Backside Illumination. 

  • Frontside Illumination: 

fig12

Frontside illumination is limited by absorption of blue photons in the relatively thick (5000 Å) polysilicon gates. The absorption depth for a 4000 Å photon is only 2000 Å. Also, surface reflectivity increases with decreasing wavelength. Hence thick frontside-illuminated devices have good QE only in the red.

One way of improving QE is by “back-side illuminating”.

fig13

Traditionally, CCDs were illuminated on the front side, meaning the side with the gates. Many of the blue photons were absorbed by the relatively thick (0.5 micron) gates. At longer wavelengths interference effects reduce the QE. Today it is possible to make backside-illuminated devices. In these devices the silicon substrate is thinned to ~15 microns and the gate side is mounted against a rigid surface. An enhancement layer is also added which creates an electric field that forces electrons toward the potential wells. Backside illuminated devices have higher QE especially in the blue and UV portion of the spectrum. QE may also be enhanced by the addition of an anti-reflective coating.

Backside Illumination: 

    • For high and stable QE the backside of the CCD must be negatively charged to drive signal electrons towards the front surface. Three backside treatments:
      a) UV flooding → creates lots of free electrons. Some of these electrons have enough energy to escape to the back surface, creating a small surface voltage that attracts and accumulates a thin layer of holes; the hole gradient sets up an intense electric field in the silicon (100,000 V/cm). However, the effect does not last unless the detector stays at -95 C (or lower).
      b) Flash gates → deposit a mono-atomic layer of gold or platinum; this causes silicon electrons to tunnel through the native oxide layer and generate a surface potential equal to the work-function difference between the metal and the silicon. Unfortunately, in a vacuum, positive charge is observed to build up in the native oxide layer. Hence the biased flash gate, which provides both a shutter and control of the QE: with a positive bias, charge is swept towards the backside (no QE); with a negative bias, charge is swept to the frontside (lots of QE).
      c) Boron doping → ion implants provide a permanent hole layer (i.e. a thin layer of hole accumulation). The manufacturing process requires heating the device to anneal the damage done during implantation, which is best accomplished by scanning a high-energy pulsed laser directly over the backside surface.
    • Phosphor coatings:
      o Convert incoming UV photons to longer-wavelength photons.
      o Can evaporate under conditions of high vacuum.
      o Scattering of light from the target pixel to adjacent pixels in the UV is mostly a problem for front-illuminated devices; on the backside the phosphor is in direct contact with the silicon, separated only by the thin native oxide layer.
      o Multiple e-h pair generation is not possible; the phosphor emits only one visible photon per incoming photon.

5.2.2.Blooming vs. Anti-Blooming.

fig14.png

Some sensors offer an optional anti-blooming gate designed to bleed off overflow from a saturated pixel. Without this feature, a bright star which has saturated the pixels (much greater than 85,000 electrons) will cause a vertical streak. This can be irritating at best, and if the streak bleeds onto your target object, there is no way to recover the lost data.

fig15.png

Anti-blooming gates built into the CCD occupy about 30% of the pixel area. The result is a 70% fill factor and reduced sensitivity and well depth. The reduced sensitivity means that you have to expose almost twice as long to get the same signal level as a CCD without the anti-blooming feature. Also, the area of the CCD occupied by the anti-blooming gate leaves a significant gap between pixels, reducing the effective resolution of the sensor.

Because of these drawbacks, users of CCDs without anti-blooming gates have chosen an alternate method to avoid blooming. Rather than taking a single long exposure in which blooming is almost certain to occur, take several short exposures, in which the brightest objects haven’t begun to bloom, and stack the exposures together with image processing software. The signal-to-noise ratio remains the same as in the longer exposure, but the result is free of blooming.
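The claim that stacking preserves the signal-to-noise ratio holds exactly in the shot-noise-limited case, which can be checked numerically. The `read_noise_e` parameter is an assumption added here to show the small penalty real cameras pay per extra readout; it is not a figure from the text.

```python
import math

def snr_single(signal_e):
    """Shot-noise-limited SNR of one exposure collecting signal_e electrons."""
    return signal_e / math.sqrt(signal_e)

def snr_stacked(signal_e, n_exposures, read_noise_e=0.0):
    """SNR after summing n short exposures totalling the same signal.

    Shot-noise variances add, so with zero read noise the stack matches
    the single long exposure; a nonzero read_noise_e (electrons per read,
    assumed) is paid once per readout.
    """
    noise = math.sqrt(signal_e + n_exposures * read_noise_e ** 2)
    return signal_e / noise

s_long = snr_single(90_000)
s_stack = snr_stacked(90_000, n_exposures=9)
# With no read noise the two SNRs are identical, as the text states.
```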

Since opinions vary on which is the ideal method to achieve anti-blooming, we offer the following guidelines for selection:

1) If non-blooming of bright objects is critical for your application, and if guiding twice as long to overcome the loss in sensitivity is not bothersome, then the anti-blooming option may be for you.

2) For tri-color imaging, front-illuminated CCDs already have low response in the blue. Therefore, the lessened response with the anti-blooming gates will require extremely long exposures with the blue filter to obtain good color balance. For this and other applications that require good response to blue light, you may wish to use the stacking method to avoid blooming.

5.3.Wavelength range.

CCDs can have a wide wavelength range, from about 400 nm (blue) to about 1050 nm (infrared), with peak sensitivity at around 700 nm. However, using a process known as back thinning, it is possible to extend the wavelength range of a CCD down to shorter wavelengths such as the extreme ultraviolet and X-ray.

5.4.Dynamic Range.

The ability to view bright and faint sources correctly in the same image is a very useful property of a detector. The difference between the brightest possible source and the faintest possible source that the detector can accurately see in the same image is known as the dynamic range. When light falls onto a CCD the photons are converted into electrons. Consequently, the dynamic range of a CCD is usually discussed in terms of the minimum and maximum number of electrons that can be imaged. As more light falls onto the CCD, more and more electrons are collected in a potential well; eventually no more electrons can be accommodated within the potential well and the pixel is said to be saturated. For a typical scientific CCD this may occur at around 150,000 electrons or so. The minimum signal that can be detected is not necessarily one electron (corresponding to one photon at visible wavelengths). In fact, there is a minimum amount of electronic noise associated with the physical structure of the CCD, usually around 2–4 electrons for each pixel. Thus, the minimum signal that can be detected is determined by this readout noise.

In the example above, the CCD would have a dynamic range of 150,000:4 (taking the upper noise level). But the achievable dynamic range also depends on the ability of the electronics to fully digitize it.
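The requirement on the electronics can be quantified: the ADC must have enough bits to cover the ratio of full well to read noise. Using the figures above:

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Return the dynamic range as a ratio and the ADC bit depth
    needed to digitize it fully."""
    ratio = full_well_e / read_noise_e
    bits = math.ceil(math.log2(ratio))
    return ratio, bits

ratio, bits = dynamic_range(150_000, 4)   # values from the text
# ratio = 37500; log2(37500) ~ 15.2, so a 16-bit ADC covers it.
```

This is consistent with the 16-bit ADC accuracy mentioned in the introduction to Section 3.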

5.5.CCD Camera Characteristics.

There are many important characteristics that should be taken into account when buying a camera. Although a camera should ideally excel in all categories, buyers sometimes have to compromise, because some of these features are more expensive to include than others. The following describes some of the most important features of a CCD camera.

5.5.1.Sensitivity.

Sensitivity describes the ability to distinguish between different levels of brightness. A camera with higher sensitivity can detect smaller differences between levels of brightness.

5.5.2.Transfer Function.

The transfer function is the ratio of output modulation to input modulation. It is a mathematical expression that tells you how accurately the camera output tracks the light intensity.

5.5.3.Resolution.

The resolution is determined by the number of sensor elements on the CCD chip. A higher number of elements will increase the detail observed from a particular image. On a CCD camera the resolution is usually defined in the number of pixels for the x and y dimension of the camera. A high resolution will be extremely important when trying to observe the fine details of an image. In order to have high resolution a CCD camera must have:

    • At least an array of 1 megapixel, such as a 1000 by 1000 pixel camera, with few bad pixels (pixels that are dead or do not respond appropriately).
    • A cooling system to reduce thermal noise.
    • A high readout clock frequency.

Black-and-white cameras have an extra edge over color cameras, since the resolution of a color camera is reduced to about one third that of a B&W camera with the same pixel count. This is because color cameras usually use three types of sensor elements to detect color images: one for red, one for blue and another for green.

5.5.4.Low light levels Capture.

A camera that has to work at low light levels must have a low noise level, i.e. a high signal-to-noise ratio. The lowest level of light that can be detected must be higher than the noise level in the system; otherwise, low-level signals are lost against the background noise. There are many sources of this noise, such as:

    • Fixed Pattern Noise (FPN), which is caused by defects in the sensor array. This noise pattern is usually constant across all exposures. It is usually smaller in higher-quality sensor arrays, although higher quality implies a more costly array.
    • Thermal noise, which can be reduced by cooling the sensor below 30; any uncooled electronics will emit charges that are collected and added to the noise level.
    • Electronic noise, which is produced by the rapid movement of charges during the readout process. It can be lowered by choosing moderate clock frequencies for the charge transfer to the readout section of the system.
    • Reset noise, which is produced when not every charge is drained out of the CCD elements. This leftover charge will influence the value of the next readout of that CCD element.

5.5.5.Capture speed.

Capture speed is important in any field of optical research, since you often want a fast snapshot of the system under observation. Because many physical events change rapidly with time, a fast camera reduces the blurring caused by the observed system drifting or changing during the exposure.

The speed of a digital camera depends on several factors:

    • The sensor architecture: a full-frame digital camera is slower than a frame-transfer or an interline-transfer camera due to the higher density of the sensor on the chip.
    • Number of pixels: at the same clock frequency, a camera with fewer pixels reads out faster.
    • Clock frequency: a higher clock frequency makes faster charge transfer possible. However, increasing the clock frequency beyond about 25 MHz degrades the signal-to-noise ratio.

5.5.6.Spectral response.

This informs us of how efficiently the camera picks up photons of different wavelengths. In laser research this characteristic of the CCD sensor is highly important, since lasers are usually tuned to a very specific frequency. Usually the detection range of the sensor array spans the visible range and extends through the near infrared up to about 1000 nm; outside this range, detection is difficult.
Spectral sensitivity is also referred to as Quantum Efficiency (Q.E.), as described before. A perfect CCD sensor has a Q.E. of 1: for every photon falling upon the sensor, an electron (charge) is produced. In reality, the Q.E. of most cameras peaks at about 0.7; that is, for every 10 photons, 7 electrons are produced by the CCD.
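The relation between incident photons and detected electrons is a simple proportionality; a minimal sketch:

```python
def detected_electrons(n_photons, qe):
    """Expected number of electrons produced for n_photons incident
    on a sensor with quantum efficiency qe (0 to 1)."""
    return n_photons * qe

# A typical camera peaking at Q.E. = 0.7, as in the text:
e_out = detected_electrons(10, 0.7)   # ~7 electrons from 10 photons
```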

5.6.Pixel size and Field of view.

In images observed close to the optical axis, an angular displacement is simply proportional to a linear displacement in position in the focal plane. The constant of proportionality is usually called the plate scale (a name which betrays its origin in photographic techniques) and is traditionally quoted in units of seconds of arc / mm. That is:

p = Δ″ / Δmm                 (1)

where p is the plate scale in seconds of arc / mm, Δ″ is an angular displacement in seconds of arc and Δmm is the corresponding displacement in the focal plane in mm. If you know the plate scale and the size of either a single pixel in the grid or the linear size of the CCD, then it is trivial to use Equation (1) to work out either the angle subtended by a single pixel or the field of view of the CCD, respectively. The manual for the instrument that you are using will usually quote a value for the plate scale. However, if necessary it can be calculated from other parameters. By simple geometry the plate scale is the reciprocal of the effective focal length of the system:

p′ = 1 / f                      (2)

where f is the effective focal length of the system and p′ is the plate scale in units of radians per whatever unit f is in. Thus, for f in metres, and applying the factor for converting radians to seconds of arc:

p = 206.26 / f          (3)

which gives p in seconds of arc / mm. f is itself related to the diameter of the primary mirror, D, and the focal ratio, F:

f = F · D                      (4)
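Equations (1)-(4) chain together naturally in code. The telescope parameters below (an f/10 system with a 0.5 m primary and 15 µm pixels) are hypothetical values chosen for illustration:

```python
ARCSEC_PER_RADIAN = 206_265.0

def plate_scale_arcsec_per_mm(focal_length_m):
    """Equation (3): p = 206.26 / f, with f in metres, p in arcsec/mm."""
    return ARCSEC_PER_RADIAN / (focal_length_m * 1000.0)

def pixel_fov_arcsec(focal_length_m, pixel_size_mm):
    """Equation (1) rearranged: angle subtended by one pixel."""
    return plate_scale_arcsec_per_mm(focal_length_m) * pixel_size_mm

# Hypothetical f/10 telescope with a 0.5 m primary: f = F * D = 5 m.
f_m = 10 * 0.5
p = plate_scale_arcsec_per_mm(f_m)       # ~41.25 arcsec/mm
fov = pixel_fov_arcsec(f_m, 0.015)       # 15 um pixels -> ~0.62 arcsec
```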

At larger distances from the optical axis there is no longer a simple linear relation between angular displacement on the sky and displacement in position in the focal surface. That is, p varies as a function of position in the focal surface. This effect is usually not important in instruments containing a single chip because of the small size of individual CCDs. However it may be important if a grid of chips is used.

6. Different types of CCDs.

There are three different types of architecture employed in the manufacture of CCD cameras. The main difference among these designs is how they receive and process the information. These designs are:

    • Full-Frame CCD 
    • Interline Transfer CCD 
    • Frame-Transfer CCD

6.1.The Full-Frame CCD

fig16.png

After the exposure, the Full-Frame CCD must be shielded from light during the readout process. The charge from the bottom row of potential wells is ‘shifted’ to one side in order to be read out one pixel at a time. When the whole row has been read, the next row drops down and the whole process begins again. Once this has been done, the device is ready to receive the input from another picture.

6.2.The Inter-line-Transfer CCD

fig17.png

Every second column within an Inter-line-Transfer CCD is covered by an opaque mask. These covered areas contain the wells that are used in the readout process. After the exposure, the charge packets in each exposed cell are shifted into the adjacent masked wells. From here, the charge is ‘shifted’ out as in the Full-Frame CCD. The advantage is that while the charge is being ‘shifted’, the exposed wells can accumulate charge for the next image. The disadvantage is that only 50% of the surface is exposed. This method is fairly rapid.

6.3.Frame Transfer CCD

fig18.png

The frame transfer CCD imager has a parallel register divided into two distinct areas. The upper area is the image array, where images are focused and integrated. The other area, the storage array, is identical in size and is covered with an opaque mask to provide temporary storage for collected charge. After the image array is exposed to light, the entire image is rapidly shifted to the storage array. While the masked storage array is read, the image array integrates charge for the next image. A frame transfer CCD imager can therefore operate continuously, without a shutter, at a high frame rate. Front-illuminated frame transfer CCDs suffer the same fate as full-frame CCDs, that is, a reduced QE in the visible with a particularly low QE in the blue. The combination of back illumination (CCD EEV 57), shutterless operation, relatively high frame rates and very high QE is very desirable to have in a camera system.

7. Noise sources in CCD.

7.1.Introduction 

When a CCD image is taken, noise will appear as well as the main CCD image. Noise can be thought of as unwanted signal, which doesn’t improve the quality of the image. In fact, it will degrade it. The main problem with noise is that most noise is essentially random, and so cannot be completely removed from the image. For example, if we know that a noise source contributes 10 units on each image we can subtract those 10 units from the image. If we only know that the noise is ‘around’ 10 units, then we can’t completely remove all of this noise (as we don’t know its exact value).

Noise manifests itself during two main processes: the collection of electrons and the transfer of charge packets. During the collection of electrons, noise stems from thermal processes, light pollution, and the generation of electron-hole pairs in the depletion regions. During charge-transfer, noise stems from transfer inefficiency.

7.2.The main contributions to CCD noise 

7.2.1.Noise on the image itself (“shot noise”) 

The detection of photons by the CCD is a statistical process. If images are taken over several (equal) time periods, then the intensity (the number of photons recorded) will not be the same for each image but will vary slightly. If enough images are taken, it will be seen that the deviation in intensity found for each image follows the well-known Poisson distribution. In effect, we cannot be sure that the intensity we have measured in a particular image represents the “true” intensity, as we know that this value will deviate from the average. It is this deviation that is considered to be the noise associated with the image. As the deviation is known to follow a Poisson distribution, we know that the likely deviation will be plus or minus the square root of the signal intensity measured. Thus, if we measure a signal intensity of one hundred photons, then the noise on this signal will be ten photons. If we measure a signal intensity of one thousand photons in the image, then the noise on this signal will be about thirty-one photons.
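The square-root rule above is easy to sanity-check; a quick illustrative sketch in Python:

```python
import math

def shot_noise(signal_photons: float) -> float:
    """Poisson shot noise: the standard deviation equals the square root of the signal."""
    return math.sqrt(signal_photons)

# 100 detected photons -> noise of 10 photons
# 1000 detected photons -> noise of ~31.6 photons, as in the text
print(shot_noise(100), round(shot_noise(1000), 1))  # prints: 10.0 31.6
```

Note that the signal-to-noise ratio improves as the square root of the exposure: collecting ten times more photons only triples the relative precision.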

7.2.2.Thermally generated noise

Additional electrons will be generated within the CCD not by the absorption of photons (i.e. the signal) but by physical processes within the CCD itself. The number of electrons generated will be dependent on the operating temperature of the CCD and hence this noise is known as thermal noise (sometimes also known as dark noise).  As with the detection of the signal, the same number of electrons will not be generated in equivalent periods of time, as the thermal noise will also have a Poisson distribution.

7.3.Some of the Physics behind the generation of various types of noise within a CCD 

7.3.1.Dark current 

Even in the absence of light, thermally generated electrons will be collected in the CCD and will contribute to the overall signal measured. There are three main contributions to dark current:

    • Thermal generation at surface states.
    • Thermal generation within the bulk silicon.
    • Thermal generation in the depletion region.

fig19

The vast majority of the thermally generated electrons are generated at the surface states. Interface states can exist in the forbidden energy gap between the valence and conduction bands (such states do not exist in a perfect lattice and are caused by the change in energy within the lattice due to the introduction of an impurity atom or lattice defect). An electron can be thermally excited with insufficient energy to reach the conduction band directly but with sufficient energy to reach a mid-band interface state; a second excitation can then promote it into the conduction band. This two-step process makes generation via interface states far more likely than direct band-to-band generation.

However, if the interface states are filled with free carriers, the dark current will be drastically reduced. This can be achieved by operating the CCD in inversion, a technique used by all modern CCDs. When the CCD is operated in inverted mode, holes from the channel stops migrate to fill the interface states. Two of the three electrodes defining a pixel are driven into inversion to drastically reduce the dark current (it is not possible to invert all three electrodes, as a potential well is still required to collect charge). If a CCD is not inverted, the dark-current generation rate may be as high as several hundred thousand electrons per pixel per second (at room temperature), whereas an inverted CCD will have a much lower generation rate of about ten thousand electrons per pixel per second.

If such a CCD is read out a number of times per second (for example in a video camera) then the dark current at room temperature is low enough not to significantly interfere with image quality. However, if the CCD is only read out once a second (or less frequently) then the number of thermally generated electrons will be too high for adequate image quality. Hence, additional measures need to be taken.

The simplest way to reduce the dark current is to cool the CCD, as dark-current generation is temperature dependent.
The figure above shows the variation in dark current with temperature for a CCD with a room-temperature dark current of 10,000 electrons/pixel/s. There are a number of ways in which a CCD can be cooled: the easiest is to use liquid nitrogen, but thermoelectric (Peltier) coolers can also be used, and in space the CCD can be cooled via a direct connection to a passive radiator.
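A common rule of thumb (an assumption here, not stated in the text) is that silicon dark current roughly doubles for every 6–7 °C rise in temperature. A small sketch of that scaling:

```python
def dark_current(rate_at_room: float, delta_t: float, doubling_step: float = 6.5) -> float:
    """Scale a room-temperature dark-current rate (e-/pixel/s) to a temperature
    delta_t degrees C away, assuming the rate doubles every `doubling_step`
    degrees (a rule of thumb, not an exact physical model)."""
    return rate_at_room * 2.0 ** (delta_t / doubling_step)

# Cooling a 10,000 e-/pixel/s CCD by 65 C cuts dark current by 2^10 (~1000x)
print(round(dark_current(10_000, -65.0), 1))  # prints: 9.8
```

This is why even modest thermoelectric cooling (a few tens of degrees) buys several orders of magnitude in dark-current reduction.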

A second way of reducing the dark noise in a CCD is to use a Multi-Pinned Phase (MPP) device. In an MPP device, it is possible to operate the CCD with all three electrode phases driven into inversion. This is accomplished by adding a suitable dopant under one of the phases during CCD fabrication. The additional dopant alters the potential under that phase so that a potential well is still present during integration when all the electrode phases are at the clock low level (usually zero volts). Dark current is now only generated in the bulk silicon, reducing the dark current to about 300 electrons/pixel/s.

7.3.2.Readout noise 

The ultimate noise limit of the CCD is determined by the readout noise. The readout noise is the noise of the on-chip amplifier which converts the charge (i.e. the electrons) into a change in analogue voltage using:
Q = CV, where Q is the charge on the output node, C is the output node capacitance, and V is the voltage sensed by the on-chip amplifier operating as a source follower.
The figure below shows a schematic of a typical CCD output section. The charge in a pixel is transferred onto the output node, where the change in voltage caused by this charge is sensed by the on-chip amplifier.
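The Q = CV relation gives a quick estimate of the output sensitivity; the ~50 fF node capacitance used below is an illustrative value, not taken from the text:

```python
E_CHARGE = 1.602e-19  # electron charge in coulombs

def node_voltage(n_electrons: float, node_capacitance_farads: float) -> float:
    """V = Q / C: voltage change on the output node for a given charge packet."""
    return n_electrons * E_CHARGE / node_capacitance_farads

# One electron on an assumed 50 fF output node gives ~3.2 microvolts
print(round(node_voltage(1, 50e-15) * 1e6, 2))  # prints: 3.2
```

The smaller the node capacitance, the larger the voltage swing per electron, which is why output-node design is central to low-noise readout.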

fig20.png
The on-chip amplifier will have an associated noise performance, which is typically 1/f at low sampling frequencies with a white noise floor at higher sampling frequencies. The sampling frequency corresponds to the rate at which each pixel is read by the CCD.

The figure below shows the readout noise versus sampling frequency for a typical CCD. It can be seen that as the sampling frequency increases, the root-mean-square value of the readout noise increases.

fig21

A large amount of effort has been dedicated to reducing the CCD readout noise, as this noise value ultimately determines the dynamic range and should be as low as possible, particularly when detecting very faint sources, for example photons at X-ray energies as in the XMM-Newton mission. Noise values of 2-3 electrons rms (root mean square) are now typical for many CCDs, and some companies have recently claimed a noise resolution of less than 1 electron rms.
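The independent noise sources described in this section are conventionally combined in quadrature (a standard result, not derived in the text): shot noise and dark-current shot noise both follow Poisson statistics, while readout noise adds as a fixed rms floor.

```python
import math

def total_noise(signal_e: float, dark_e: float, read_noise_e: float) -> float:
    """Quadrature sum of independent noise sources, all in electrons:
    shot noise (sqrt of signal), dark-current shot noise (sqrt of collected
    dark charge), and readout noise (rms)."""
    return math.sqrt(signal_e + dark_e + read_noise_e ** 2)

# A faint 25 e- source with 9 e- of dark charge and 3 e- rms read noise:
print(round(total_noise(25, 9, 3), 2))  # prints: 6.56
```

For faint sources the fixed readout term dominates, which is why sub-electron readout noise is so valuable.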

7.3.3. Power

CCDs themselves consume very little power. During integration, only a very small current flows and the CCD consumes only 50 mW or so. While the CCD is being clocked out, more power can be consumed, but this is typically only a few watts. Of course, the electronics required to operate the CCD and process images can consume much more power.

7.3.4.Non-linearity

 

As mentioned before, CCD chips have a wide dynamic range within which their response is essentially linear. However, if the illuminating light is sufficiently bright the response will become non-linear and will ultimately saturate (that is, an increase in the intensity of the illumination produces no change in the recorded signal). In principle the response in the non-linear region can be calibrated. However, in practice, the onset of saturation is sufficiently rapid that it is more sensible to limit exposures to the linear region. In order to prevent saturation it is usual to take a series of short exposures rather than a single long exposure of equivalent duration. The individual short exposures can then simply be added during the data reduction.
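The co-adding strategy described above can be sketched as follows (frame values are illustrative):

```python
def coadd(exposures):
    """Sum a series of short exposures pixel-by-pixel; each individual frame
    stays within the CCD's linear range even though the summed signal would
    have saturated a single long exposure."""
    total = [0.0] * len(exposures[0])
    for frame in exposures:
        for i, value in enumerate(frame):
            total[i] += value
    return total

# Four 25,000-count frames add to the 100,000-count image a single
# (saturating) long exposure would have tried to record.
print(coadd([[25_000, 10_000]] * 4))  # prints: [100000.0, 40000.0]
```

The price paid is extra readout noise: each short frame contributes its own readout-noise term to the summed image.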

7.3.5.Pixel sensitivity; flat fielding 

Due to imperfections in the manufacturing process the sensitivity of the pixels will vary slightly (usually by a few percent) across the grid. This effect is essentially random, and is not a function of, for example, position on the grid. The relative sensitivities of the pixels can be calibrated by imaging an evenly illuminated source, such as the twilight sky, and examining the variation in values recorded. Once this calibration is known, images can be corrected to the values they would have had if all the pixels had been uniformly sensitive. This correction is known as flat fielding and images of evenly illuminated sources are known as flat fields. The pixel-to-pixel sensitivity variations change with wavelength, so the flat fields should always be acquired using the same filter as the observations of the target objects.
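A minimal sketch of the flat-fielding correction, assuming the flat field is normalized to its mean before division (pixel sensitivities below are illustrative):

```python
def flat_field_correct(image, flat):
    """Divide an image by a mean-normalized flat field, correcting
    pixel-to-pixel sensitivity variations."""
    mean_flat = sum(flat) / len(flat)
    return [pix / (f / mean_flat) for pix, f in zip(image, flat)]

# A uniform 1000-count scene seen through pixels of 98%, 100%, 102% sensitivity:
corrected = flat_field_correct([980.0, 1000.0, 1020.0], [0.98, 1.00, 1.02])
print([round(v, 1) for v in corrected])  # prints: [1000.0, 1000.0, 1000.0]
```

As the text notes, the flat must be taken through the same filter as the science frames, since the sensitivity pattern is wavelength dependent.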

8. Some methods to test the performance of a CCD camera.

There are many ways to test the performance of a CCD camera; which is best depends on the criteria used to judge the camera. In laser research, the most important characteristic of the camera is the linearity of the recorded charge versus the incident light. The simplest test is to take a picture with the camera covers on. In theory this should produce a totally black image, devoid of any features or objects. However, most CCD cameras will produce a dark background with small white spots on it; these are by-products of thermal noise. Also, the CCD does not record a continuous range of intensities: the recording is made in discrete elements, with each small surface element reporting the average light intensity over its area. This leads to another test of the accuracy of the CCD sensor. A target composed of a series of dark and bright fringes can be imaged; the camera can only record a finite amount of detail in the lines. If the number of lines exceeds the number of pixels along an axis, a process called aliasing occurs, in which the frequency of the lines appears lower than it really is because several fringes are sampled as the same value. The maximum frequency of intensity changes that a camera can record is called the Nyquist frequency. There are also mathematical analyses that help quantify the quality of the data gathered by a CCD array. One of these methods is the Modulation Transfer Function (MTF), which measures the ratio of output modulation to input modulation; another is the Point Spread Function (PSF), which measures how blurred the recorded image is compared with the real object.
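The Nyquist limit mentioned above follows directly from the pixel pitch: one line pair needs at least two pixels. A quick illustrative calculation (the 10-micron pitch is an assumed example):

```python
def nyquist_frequency(pixel_pitch_mm: float) -> float:
    """Highest spatial frequency (line pairs per mm) a sensor can record
    without aliasing: one line pair requires at least two pixels."""
    return 1.0 / (2.0 * pixel_pitch_mm)

# 0.01 mm (10 micron) pixels resolve at most 50 line pairs per mm
print(round(nyquist_frequency(0.01)))  # prints: 50
```

Any fringe pattern finer than this limit folds back to a lower apparent frequency, which is exactly the aliasing effect described in the text.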

9. Why CCD is so great. (Advantages and Early limitations)   

The answer to the above question lies in its resolution. CCDs provide extraordinary detail for objects either very far away or very small in size – resolution that was hitherto impossible to attain. This resolution is a result of the large number of pixels in the CCD array – the more pixels, the finer the detail that can be achieved. Typically, modern CCDs comprise anywhere from about one thousand to about half a million pixels.

The principal advantages of CCDs are their sensitivity, dynamic range and linearity. CCDs are sensitive to a broad range of wavelengths and are much more sensitive to red light than either photographic plates or the photomultiplier tubes used in photoelectric photometry. However, they have a poor response to blue and ultraviolet light. A typical dynamic range is about 10^5, corresponding to a range of about 12.5 magnitudes.

Thus, CCDs are nearly ideal detectors because of the following properties: 

    • High quantum efficiency (QE) compared to photographic film.
    • Large dynamic range.
    • High linearity.
    • Fairly uniform response.
    • Relatively low noise.
    • It is digital!

Early Limitations: 

    • Low coverage area.
    • Poor blue response.
    • Readout-noise dominated for spectroscopy.
    • Deferred-charge transfer problems at low light levels.

10. Some biomedical applications of CCD.

This small electrical device is familiar to astronomers, physicists and engineers, but now even some biologists and chemists are beginning to use CCDs in their research. You have likely encountered one before, as CCDs are used in facsimile machines, photocopiers, bar-code readers, closed-circuit television cameras, video cameras, regular photographic cameras, and other sorts of sensitive light detectors. In fact, CCDs have a wide range of applications, everything from reconnaissance and aerial mapping to medicine, microtechnology and astronomy.

CCDs are used in a variety of different imaging devices. Linear imaging devices (LIDs) employing CCDs are typically used in facsimile machines, photocopiers, mail sorters, and bar code readers, in addition to being used for aerial mapping and reconnaissance. Area imaging devices (AIDs) employing CCDs are used for closed-circuit television cameras, video cameras, and vision systems for robotics and as film for astronomical observations. CCDs are also used in drug discovery in combination with DNA arrays in a process called Multi-Array Plate Screening (MAPS).

CCD cameras have revolutionized the way of taking and recording images. They have extended the range of faint objects doctors can study. One of the other huge advantages to using CCD cameras is the ability to convert the gathered analogue information into digital information, which can be analyzed using computer software.

10.1.Multi-Array Plate Screening (MAPS)

In MAPS, potential drugs are applied to their targets in microtiter plates. Their degree of binding to the target, and thus their potential as effective drugs, is assessed by the amount of light emitted from the well through either laser-induced fluorescence, radioactive scintillation, or chemiluminescence. The CCD must be cooled to cryogenic temperatures of about -100°C to minimize noise from thermal electrons. CCD cameras are able to detect both RNA and DNA in amounts as low as 30,000 molecules, which is about 5.0 x 10^-20 moles. The benefit of using CCD cameras in MAPS is that no amplification of the nucleotides is required, which reduces cost and error and saves time.
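The molecule-to-mole figure quoted above can be verified directly with Avogadro's number:

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules_to_moles(n_molecules: float) -> float:
    """Convert a molecule count to an amount of substance in moles."""
    return n_molecules / AVOGADRO

# 30,000 molecules is about 5.0e-20 moles, matching the figure in the text
print(f"{molecules_to_moles(30_000):.1e}")  # prints: 5.0e-20
```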


10.2.X-ray Detection using a CCD

A CCD can be used as an energy-dispersive detector that gives both the energy of an X-ray and the location where it hit. Each pixel on the CCD can be used as an individual energy-dispersive detector because, in the soft X-ray range, charge generation is dominated by direct electron-hole pair creation: in silicon, the mean energy required to create one electron-hole pair is about 3.65 eV. The number of electron-hole pairs created when an X-ray is absorbed is therefore proportional to the energy of the X-ray. In order to use each pixel as an energy-dispersive detector, the pixels on the CCD must work in single-event counting mode.

In order to acquire the energy information from an X-ray, only one X-ray must be captured per pixel. If more than one X-ray is deposited in a pixel, the pixel will contain the energy information from both events and there is no way to separate the energies. As long as one X-ray event is captured in one pixel, the CCD can be used as an energy-dispersive detector.
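Given the ~3.65 eV mean pair-creation energy, the expected charge packet for an absorbed X-ray follows directly. The 5.9 keV Fe-55 line used below is a common calibration source, chosen here purely as an illustration:

```python
PAIR_ENERGY_EV = 3.65  # mean energy to create one electron-hole pair in silicon

def electron_hole_pairs(xray_energy_ev: float) -> float:
    """Expected number of electron-hole pairs for an absorbed X-ray:
    the charge packet size is proportional to the photon energy."""
    return xray_energy_ev / PAIR_ENERGY_EV

# A 5.9 keV X-ray yields roughly 1600 electron-hole pairs
print(round(electron_hole_pairs(5900)))  # prints: 1616
```

Measuring the charge in each single-event pixel thus gives the photon energy directly.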

10.3.What Are the Non-Imaging Applications of the CCD?

Non-imaging applications of CCDs include signal processing, memory applications, video integration, transversal filtering, and MTI filtering for radar. Again, non-imaging applications fall under the categories of either signal processing or delay line applications.

11. Future of CCD.  

The future of imaging devices, however, is not likely to be the CCD. The CMOS (Complementary Metal Oxide Semiconductor) image sensor appears to be the future of imaging, because it is fabricated using essentially the same CMOS process as the large majority of modern integrated circuits, including microprocessors and dynamic random-access memories (DRAMs). CCDs, on the other hand, are fabricated using a variant of the practically obsolete NMOS fabrication technology, which is now used almost exclusively for CCDs. What has kept CMOS image sensors from replacing CCDs is a trade-off in image quality: there is more noise in CMOS devices, and unwanted signals from various sources degrade the input signal. For many applications requiring that every photon count, CMOS sensors are simply not sensitive enough. For the time being, CCDs will be used in imaging devices where resolution is very important, and CMOS imaging devices will be used where their lower cost outweighs the benefit of higher resolution.

In winter 2010, I took a class at the University of Michigan (ENGR 520: Entrepreneurship for Engineers and Scientists). During this class, three colleagues (Jinyong Kim, Corey Kosch, and Joanna Widjaja) and I formed the Health Care Diagnostics Group and wrote the following report as part of the class project.

I. Introduction

The goal of this work is to analyze the value chain and explore opportunities to launch a venture-capital-backable firm within the health care industry. Within the health care industry we focus on diagnostics, and within diagnostics we focus on the cardiac monitoring space. This report should be useful for guiding entrepreneurs interested in launching new ventures in the cardiac monitoring space.

This report will start by defining the opportunity space. The value chain of the cardiac monitoring space will be described in detail in section II, and it will be shown that the cardiac monitor manufacturing segment captures the most value. This section will also describe what the manufacturing segment does and who its customers are. In section III, Porter's five forces for this industry (manufacturing) will be discussed. Sections IV and VI will further apply Mullin's framework by analyzing the micro- and macro-market and the micro- and macro-industry, respectively. In section V, the proposed solution (iNurse) will be described in further detail. Financial and venture analysis for our proposed firm, as well as the conclusion, will be addressed in section VII.

The opportunity space – Cardiac Monitoring Systems:

Cardiovascular diseases (CVD) are among the leading causes of death [1]. Billions of dollars are spent on the diagnosis, treatment and rehabilitation of patients with cardiovascular diseases. There are two main reasons for the rise in incidence of these life-threatening diseases: (1) faulty lifestyles and (2) the ageing baby-boomer population. Under these circumstances, cardiac monitoring and diagnostic systems enable effective diagnosis and therapy, and thus find substantial application in hospitals and cardiac centers. Hence, enhanced emphasis on the monitoring of cardiovascular diseases to facilitate prevention and early treatment is expected to propel growth in the diagnostic cardiac monitoring systems market [2].

Cardiac monitoring generally refers to continuous electrocardiography with assessment of the patient's condition relative to their cardiac rhythm [3]. Cardiac monitoring and diagnostic services help physicians reach accurate and timely decisions in the diagnosis of various forms of CVD. The major types of cardiac monitors are those used in critical care units (bedside monitors), for clinical use (electrocardiogram units), and for long-term recording (Holters) [4].

II. Cardiac Monitoring Value Chain

Screen Shot 2016-03-22 at 11.06.51 PM

Figure 1. Value Chain for Cardiac Monitors

This space includes segments such as design, manufacturing, and distribution of systems or subsystems that ultimately result in a device that can monitor the cardiac activity of patients through healthcare providers. As shown in figure 1, the major segments of this space are raw-material suppliers, subsystem manufacturers, cardiac monitor original equipment manufacturers (OEMs), design and IP suppliers, distributors, and insurance companies.

The value chain starts with raw-material suppliers, which usually serve multiple segments in different value chains. One example is WHX Corporation, which supplies precious metals, tubing, and engineered materials to many industries, including healthcare [5]. Precious metals are required for the fabrication of electrocardiogram (ECG) electrodes. This segment supplies basic materials and components to cardiac monitor subsystem manufacturers, which include manufacturers of sensors (e.g., ECG lead wires), electronic modules and printed circuit boards (PCBs), software, and other electrical and mechanical subsystems that ultimately go into cardiac monitor OEM products. An example of this segment is Angeion Corporation, which sells cardiorespiratory diagnostic subsystems for a wide range of healthcare applications to medical device manufacturers, clinical research organizations and others [6]. There is also a segment that supplies design and intellectual property (IP) to cardiac monitor OEMs. A company such as IntriCon Corporation falls into this segment, being engaged in the design, development, engineering and manufacturing of electronic products and body-worn devices [3]. As shown in figure 1, the cardiac monitor OEM segment captures relatively the highest value in the chain. A proxy company for this segment is CAS Medical Systems Inc. (CASMED), which designs, manufactures and markets critical care cardiac monitors, vital signs monitors, and cardiac output monitors [7]. The companies in this segment usually distribute their products to healthcare providers either directly or through distributors such as Henry Schein Inc. However, manufacturers of specialty devices, such as cardiac monitors, typically have not used large distributors for their products [8].
The main reasons for this are: (1) a major mission of manufacturers is to have direct contact with clinicians (unimpeded by intermediaries); (2) the manufacturer possesses critical knowledge about product features and uses, which helps satisfy customer needs, while most distributors lack in-depth product knowledge; and (3) there may be insufficient customer demand for many specialty items. Another important segment in this value chain is the insurance companies (payers). OEMs and distributors also sell their products to insurance companies such as Aetna Inc. The figure above also shows the advertisement, service and payment flows in addition to the product flow within the cardiac monitor value chain.

The EBITDA margin of the cardiac monitor manufacturing segment shows that it is an attractive industry in which to launch a new business.

The Manufacturing Segment of Cardiac Monitoring Systems:

This segment manufactures resting ECGs, stress testing systems, ECG data management systems, Holter monitor systems, and cardiac event monitoring systems [9]. This segment is also concerned with:

  • Product design and manufacturing: There is a huge range of products offered by this segment, varying from massive machinery to portable patient-care devices and even some implantable versions. In general, the machinery and devices manufactured are geared for in-hospital care, outpatient care or both, for patients ranging from adults to pediatric and neonatal. The devices help monitor patients' vital signs, detect irregularities and record all these data [10][11].
  • Technical support and training of personnel: The manufacturing segment provides technical support to users and training to medical personnel to ensure proper use of their products. This can be done via instruction manuals, web instructions, phone hotlines or direct training sessions [12][13].
  • Servicing and maintenance support: Massive machinery requires consistent maintenance, and this service is provided by the manufacturing segment too. If a product is faulty, the user can also approach the manufacturer to either repair or return it [14].
  • Marketing, selling and maintaining customer awareness of new technology and best practices [15].

This segment competes mainly on the basis of five dimensions: (1) product innovation, (2) product performance, (3) pricing and contracting, (4) costs of goods sold, and (5) customer support services.

III. Porter’s Five Forces Analysis of Cardiac Monitoring Manufacturing

Industry Competitiveness 

The rivalry within this segment is high to moderate. Larger firms within this segment desire to increase their market share; they are doing this by merging with and acquiring other firms, and by building their brands [16]. Mergers and acquisitions create larger firms that are able to exploit economies of scale and drive down their prices [17]. Larger firms can also afford the increasing R&D costs associated with delivering new and improved products [18]. This is especially important in cardiac monitor manufacturing: new technologies make diagnostic and monitoring work more efficient, reducing the number of staff work hours required [19]. This is extremely attractive to hospitals as it reduces overall costs. Brand building has forced firms to distinguish themselves based on qualities other than price; however, healthcare products are mostly commoditized and need to meet standards, leaving very little to distinguish them by, so most of the competition between firms is still focused on price. For example, Lantheus had to provide more competitive pricing so that its Cardiolite Kit remained in competition with Covidien's ANDA Kit [20]. However, as industry concentration increases, rivalry tends to decrease via various agreements [21].

Bargaining Power of Suppliers 

The supplier power in this segment is moderate. Many suppliers are large, and some are the sole provider of certain components [22]. However, most component pieces are not very useful on their own and need to be incorporated into a larger design; this promotes a partnership between supplier and manufacturer [23]. Also, some manufacturers vertically integrate their production process by acquiring smaller supply companies [24].

Bargaining Power of Buyers 

Buyers display a moderate degree of power. The small number of buyers in this segment gives them more leverage, forcing the manufacturers to compete for their purchases [25]. The buyers in this segment also form purchasing agreements with other buyers, strengthening their power position [26]. The possibility of nationalized health care in the US could reduce the number of buyers even further and give the government a great deal of power over pricing [27]. Also, due to e-procurement systems, buyers are becoming increasingly knowledgeable about the products they are purchasing, driving price competition between manufacturers [28]. All of these arguments suggest strong buyer power; however, the market has traditionally shown that sellers have typically dictated prices, with high profits commonplace [29]. Also, product quality and reliability are highly important in this market, which reduces buyer power to an extent [30].

Threat of New Entrants 

The threat of new entrants is moderate. Although brand identity is becoming more important, it still does not carry a lot of weight, as most customers are concerned with price and quality [31]. Also, decent market growth and substantial investments from venture capitalists make this market attractive to enter [32][33]. However, intellectual property rights, regulatory approval, technology licensing, and expensive clinical trials are a large barrier for new companies to overcome [34]. Also, established manufacturers form partnerships to help each other innovate and provide complementary assets to each other, making it difficult for new players to enter [35].

Threat of Substitutes

The threat of substitutes is weak in this segment because most innovation is done iteratively by established companies [36]. However, recent trends show established companies creating different solutions for the same problems. For example, imaging companies have been competing furiously to produce the most reliable diagnostic-quality CT angiograms: Toshiba is pursuing whole-heart imaging in a single scan, Siemens is addressing temporal-resolution limitations with dual-source CT, GE has its new snapshot pulse technology, and Philips is working on new detectors and dual-energy imaging.

In conclusion, we believe that the cardiac monitor manufacturing segment is an attractive segment in which to launch a startup, especially if one already has non-dependent intellectual property (IP). The interest of venture capitalists in cardiac monitoring startups and the importance placed on quality over brand will make it easier for a new entrant. The current market trend of acquiring smaller companies is actually beneficial to startups entering this segment, because it shows that an exit through purchase by a larger company is likely.

IV. Micro- and Macro- Market (Mullin’s part 1)

Customers of The Cardiac Monitoring Manufacturing Segment:

First of all, we want to understand how the purchasing process works. The purchasers from the OEM segment are group purchasing organizations (GPOs) and wholesalers/distributors. GPOs purchase the products on behalf of hospitals, and distributors take title to them and deliver them [37]. Typically, the process is initiated by the customer (physician), who submits product requests to the system's materials and procurement manager, who then selects the items from a product catalog (increasingly electronic). The order is then transmitted to the GPO and distributor for fulfillment. These orders are then bundled and transmitted to the manufacturers for shipment.

Given the complexity of the flow described, there are multiple customers. Hospitals, health care systems, and nursing homes are the ultimate institutionally based customers, since they pay the bill for the products that are manufactured and distributed. Within these institutions, however, there are multiple customers who are responsible, directly or indirectly, for ordering the products. According to [38], when asked who their customer was, many manufacturing executives joked, “Whoever is on the phone”, or “The person who has a purchase order”. In a more serious vein, they stated that they have to “cover all the bases” and consider each of the aforementioned individuals their customer.

Physicians are the end customers for many types of products, especially devices (in our case, cardiac monitors). Their preferences must be taken into account for several reasons: (1) at minimum, they may complain to senior executives if their preferences are not acknowledged; (2) at maximum, they may decide to take their patients and elective procedures elsewhere if they are dissatisfied. As physicians receive more information around “best practices” from manufacturers, they may be more willing to make decisions that alter the product type they use. In this manner, manufacturers hope to shape customer demand for clinical preference items. Physicians are also key customers in both hospital-based and freestanding alternate delivery sites, such as outpatient centers.

Distributors and GPOs are not generally considered customers in the supply chain. They are viewed more as partners for both upstream and downstream players, as well as influencers of what the customer actually purchases.

In conclusion, we narrow down the customers of the manufacturing segment mainly to physicians and nurses. In section IV of this report we will further classify personas. In the following section, the Porter analysis is further discussed to explain why manufacturing (OEM) of cardiac monitors captures relatively high value.

What are some of the challenges faced by the customers?

  • Price Transparency: There is a need to address pricing policy and disclosure, as sellers in this segment operate in oligopolistic markets where not all buyers pay the same price for a given or similar product. Some sellers of devices have even designed contracts accompanying sales agreements that include language forbidding buyers from disclosing the negotiated price to other buyers, or even to patients or insurers, reflective of their market power.
  • Diagnostic is not Therapeutic: In this segment, it is challenging to manufacture diagnostic devices, as diagnostics are not priced as high as therapies. Also, reimbursement agencies are unwilling to pay high up-front costs to implant a diagnostic device that provides uncertain benefits in an uncertain amount of time.39 Buyers rate products by measuring how much they help patients, and unfortunately, monitoring devices are unlike a pacemaker, which both monitors the heart and delivers therapy when necessary; pacemakers provide a direct benefit to the user.40 However, due to the recent goal set by Medicare to reduce heart failure hospital admissions by 20% by 2012, many physicians are looking to implantable diagnostic devices for the information necessary to help them keep their patients healthy.
  • Regulation Levels: This industry is highly regulated by the FDA. Products require formal clinical studies to demonstrate safety and effectiveness, and the trend of regulation is increasing. Manufacturers have to abide by these rules, which creates challenges in their design and manufacturing work.
  • Level of Technology Change: Innovation and product development are the most important sources of growth within the industry, allowing first movers to earn above-average profits for a period of time.41 Continuous product innovation increases the applications for which products can be used, increasing the demand for these products. Some of the features manufacturers compete on are speed of performance, portability, power use, and mobility. Hence, manufacturers have to keep abreast of the newest technology and R&D done by competitors to maintain a leading edge in their products.
  • Disparity between Buyers and Users: The purchasing process in hospitals is unlike other industries. Products are often ordered by workers on the front line of health care delivery, such as physicians and nurses. Products are ordered in a way that maximizes their availability when needed, rather than minimizing the costs of holding inventory. Moreover, the end user ordering products is not typically the buyer (that is, the one paying for the product). Product demand is thus based heavily on the clinical preference of physicians rooted in their medical training, not on any formal cost-benefit analysis or budgetary constraint; this attitude may still be part of the culture and mentality of older generations of practitioners. Business practices have crept into the system incrementally over time and have encountered strong resistance from professional norms of patient care and provider autonomy, as well as public goals regarding patient access and quality of care. Thus, professional training in procurement and logistics has never been a hallmark among providers, given the prominent role of clinicians and their preferences for branded items. The contradiction between physician preference and purchasing departments’ efforts at standardization makes this an administrative challenge for manufacturers.42

In general, this segment (manufacturing) competes mainly on the basis of five dimensions: (1) product innovation, (2) product performance, (3) pricing and contracting, (4) cost of goods sold, and (5) customer support services.43

The target market we serve is aggregated into the following personas:

Alicia Stones Alicia is a nurse in the Emergency Room (ER) department. She is very time-conscious, wants accurate data fast, and is easily stressed if diagnostic devices do not work efficiently, as there is no room for mistakes in the ER. At times she wishes for a device that could monitor multiple vital signs simultaneously while remaining portable and mobile. Since humans tend to err, she also sometimes wishes for a device that could automatically detect irregularities and provide treatment if needed.
Dr. Brock Jones Dr. Jones is a cardiology specialist at the Cleveland Clinic with patients who need to move freely while in care. He is enthusiastic about state-of-the-art technology and loves exploring cutting-edge devices. He also finds data tracking and management useful in his assessment work, and prefers convenient, easy-to-use products. He is also interested in the enhanced accuracy of data collection offered by implantable devices that monitor severely ill patients wherever they are.
Jennifer Grant Jennifer is a senior nurse in the Intensive Care Unit (ICU). Her daily duties include monitoring patients’ conditions and providing defibrillation when needed. Thus, she appreciates devices that can monitor multiple vital signs and deliver automatic defibrillation via programmability. She also wants devices that are user-friendly, offer easy data manageability, and require less “cabling” of patients.
Icabod Shahid Icabod is a paramedic in the ambulatory department. When handling a patient in an ambulance, he prefers less “cabling” of the patient and needs to keep in touch with physicians and nurses. He does this by continuously sending data to a central station, where a physician can instruct him on what to do next. Because of the nature of his work, he also requires mobility. He also wants automatic defibrillation when patients suffer ventricular fibrillation, to improve patients’ survival rates.
Dr. Harie Yacabo Dr. Yacabo is an operating surgeon in the Mayo Clinic Operating Room (OR). He performs open-heart surgeries and requires supporting devices that he can program to accommodate his sophisticated procedures and record certain heart episodes. He prefers devices that shorten the time between diagnosis and the required surgical procedures, with data as accurate as possible. After a surgery, he finds reviewing patients’ data and surgical reports useful for his personal improvement. Patient mobility is not important to him.
Martina Shingles Martina is a senior citizen who suffers from cardiac arrhythmia (home monitoring and outpatient). Her physicians recommend that she use a portable heart monitor that records her heart activity at all times. This device should also send her data wirelessly to a central station so her physicians can monitor her regularly. She wants the device to be as user-friendly and small as possible and to be cosmetically acceptable. She hesitates to have anything implanted due to her age.

What are some of the needs of the customers?

  • Decoupling cables: In many situations, medical staff such as Jennifer Grant and Icabod Shahid are bothered by the number of cables tied to patients to collect data. They would prefer minimal or no cabling, both to provide comfort to the patient and to make it easier for staff to handle devices and move patients around.
  • Complete solution and programmability: There is a need for devices that come complete (hardware and software) so as to save cost and improve compatibility. In addition, there is a growing demand for system flexibility that allows a device to be usable across a wide range of clinical environments; that is, the device should be programmable depending on the conditions of use.
  • Modularity and multifunctionality: Many medical staff, such as Alicia Stones and Jennifer Grant, prefer modular monitors that can record multiple vital signs simultaneously from a patient. This reduces staff hours and supports more accurate diagnosis. Hospital procurement departments also prefer products that can be upgraded rather than scrapped, cutting costs and space requirements and increasing the usability of the products.
  • Remote observation to reduce time between diagnosis and treatment: Current patient monitoring programs suffer from one significant disadvantage: patients have to be home-bound. Most current systems also lack cellular capabilities. There is likewise a demand to reduce or avoid inpatient hospitalization and facilitate patient care in less expensive settings by improving patient mobility; this is still a partially unmet need for Dr. Brock Jones and Icabod Shahid. In addition, the American College of Cardiology (ACC) has launched its door-to-balloon (D2B) time initiative, complemented by the AHA’s “Mission: Lifeline”. This suggests that if a device were able to send data to medical staff in real time while maintaining patient mobility, it would reduce the time between diagnosis and percutaneous coronary interventions (PCI) performed in catheterization labs. As more hospitals strive to meet national guidelines set by these initiatives, demand for wireless ECG solutions will be boosted further.44
  • Portable devices for home use: The aging American population, like Martina Shingles, is not satisfied with current solutions because they are not cosmetically acceptable and do not allow patients to move around. Current devices also consume so much power that they must be recharged often, so she hopes for a less troublesome device. These patients need devices that help with their outpatient care while still allowing them to lead a normal daily life.
  • Automated irregularity detection: There are times when speed of detection is vital in saving a person’s life, as in Alicia Stones’ and Dr. Harie Yacabo’s cases. Devices with increased performance speed and reliability are in high demand in settings where time is of the essence. This need is partially unmet, as current manufacturers do not provide very flexible programming for users, and their devices have lower sensitivity and specificity.
  • Clinician-friendly products: Many clinicians, such as Dr. Brock Jones, Dr. Harie Yacabo, and Icabod Shahid, see the need for products that are easy to use. This would greatly shorten personnel training hours and reduce time and complexity in procedures. To meet this partially met need, manufacturers have to work closely with physicians and nurses to understand their concerns and workflows, and design products that are intuitive for them to use.
  • Data recording and viewing: Though diagnostic devices are traditionally valued for their accuracy and reliability, there is a growing need on the physician’s side for much more accurate assessment work via review of past records. Hence, devices must not only meet quality requirements but also address the need for large-scale data recording, tracking, archiving, and viewing.
  • Customer support services: Many situations can occur in a hospital setting, and staff have expressed a desire for the assurance of having someone to contact regarding device malfunctions or other problems. As such, many appreciate 24/7 customer support services for collaborative planning, training, and maintenance.

V. The iNurse: Outpatient Monitoring of Cardiac Symptoms


Figure 2. Evolution of iNurse as a solution for cardiac monitoring

Addressing the major needs of Martina Shingles in the home-monitoring/ambulatory segment, we suggest the iNurse. Figure 2 shows the evolution of this solution. The iNurse is a modular, pocket-size device with wireless sensors and minimal cabling. The device can be carried in a patient’s pocket, and the wireless sensors are worn on the chest. This design is more cosmetically acceptable to patients and would not stand out so much in public. We plan to patent the following features:

  • It is Bluetooth-enabled and compatible with various mobile devices, such as the iPhone and iPad, so that patients can view their own data in real time.
  • A cellular network feature allows the device to transmit data to healthcare providers on a regular schedule and in case of emergencies.
  • The iNurse will have modular capabilities in case the user wants to add more vital signs, such as body temperature and blood pressure.
  • Low power consumption is achieved by using body area networking between the device and the wireless sensors.

Although this device was designed primarily with the home monitoring segment in mind, future development may extend its benefits to the cardiology specialist, ICU, and ambulatory segments.

VI. Micro- and Macro- Industry (Mullin’s part 2)

Teece Analysis

Our proposed solution involves a wireless cardiac monitoring device that connects via Bluetooth to a smartphone and then uses the phone’s wireless networking capabilities to transmit data to a physician. We believe that our intellectual asset position is strong. The intellectual asset we possess is the design for a small wireless cardiac monitoring device that uses the patient’s skin as a transmission medium between the sensors and the device. The major complementary assets we require are: the Bluetooth technology to connect our device with a smartphone, the software that makes our device compatible with a smartphone, the physical capital required to manufacture the devices (manufacturing plants), distribution channels, and customer relationships. We believe that the relative cost of acquiring these complementary assets will be low. The Bluetooth technology can be acquired by licensing, and the software that makes our device compatible with a smartphone can be created in house. The manufacturing plants required to make the devices will demand substantial financial capital to build; however, the relative cost of this complementary asset will be medium to low. Distribution channels and customer relationships will be forged through networking relationships provided by our angel investors, venture capitalists, and a development team with experience in this industry. Based on our strong IP position and the relatively low costs of acquiring complementary assets, we believe our venture has strong new-business potential. This finding is in accordance with other medical device startups.45

The entry of the iNurse into the market will shift Porter’s Five Forces as follows:

Buyer Power 

We want to provide a solution that shifts buying power from GPOs and hospitals to the very end user, the patient. We are creating a product with the end user in mind instead of the prescribing physician; we therefore expect demand to be pulled by the patient through the hospital. We want to leverage commodity-type products, like cellular phones, to save costs and provide a small, cosmetically acceptable solution. This will be beneficial to health care providers and to the payers: healthcare insurance companies and the government. This product will be easily promotable through them because it offers three competitive advantages: (1) it reduces inpatient hospitalization, which saves a lot of money and increases profitability, especially for insurance companies; (2) it facilitates patient care in a less expensive setting; and (3) it extends life expectancy.

New Entrant 

We will enter this segment by leveraging the IP we will hold on the device’s features and design patents. As stated earlier, the current high level of interest in wireless cardiac monitoring startups makes entry easier, because raising the necessary capital will be easier.

Industry Competitiveness 

Currently, companies in the manufacturing segment of cardiac monitoring are pursuing consolidation, through mergers and acquisitions, to strengthen their power. This is beneficial, as acquisition of this company by a larger firm is a likely option, allowing an easier exit strategy for the founders. If industry competitiveness is too strong and prevents this manufacturing startup from growing to a size attractive to larger manufacturing firms, the startup can transition to a company that focuses on design and IP development.

Solution Positioning Statement: For outpatients (at home or ambulatory) with heart diseases who may be at risk of developing cardiac symptoms, the proposed iNurse device will provide the basic needs of portability and cosmetic acceptability, because it is portable, light, small, and wearable, consumes little power, and uses minimal cabling.

VII. Financials and Venture Analysis

Table 1. Projected income statement for the venture

As can be seen in Table 1, an income statement for the venture was constructed. In order to build this table, the projected revenue at liquidation and the EBITDA of a proxy company needed to be estimated.

First, we assumed that the revenue of our company would be $165 million when the company goes to an initial public offering (IPO) in about 5 years. CTIA, the international association for the wireless industry, has recently stated that the wireless home healthcare market is expected to grow to $4.4 billion in 2013, with estimated annual growth rates of 96 percent in 2010, 126 percent in 2011, 95 percent in 2012, and 68 percent in 2013.46

Based on this estimate, we project that the market size could reach $9 billion in 2015, and presume that our venture will be able to penetrate 15–20% of the market. This assumption is within the range of CardioNet’s and Volcano’s revenues, $120.45 million and $171.5 million respectively, in 2008, less than 2 years after they went public. We believe that our venture is comparable to both CardioNet and Volcano, as we derive revenue from medical devices, just as they do.
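The projection above can be reproduced with simple compound-growth arithmetic. In the sketch below, the 2014 and 2015 growth rates (50% and 40%) are our own assumptions, chosen to continue the deceleration in CTIA’s reported rates (96%, 126%, 95%, 68% for 2010–2013); they are illustrative, not CTIA figures:

```python
# Extrapolate the wireless home-healthcare market from CTIA's 2013 figure.
# The 2014/2015 growth rates are assumed, continuing the reported slowdown.
market = 4.4  # $ billions at end of 2013 (CTIA)
assumed_growth = [(2014, 0.50), (2015, 0.40)]  # hypothetical rates

for year, rate in assumed_growth:
    market *= 1 + rate
    print(f"{year}: ${market:.1f}B")
```

Decelerating growth of roughly 50% and then 40% lands near the $9 billion market size used in the projection.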

Second, EBITDA was computed by taking a proxy company to approximately represent our venture and constructing an income statement with the proxy’s data. CardioNet, Inc. (CardioNet) was chosen as the proxy company because the two companies share similarities: both produce wireless cardiac monitoring devices. Most importantly, CardioNet is considered the only pure player in the wireless healthcare diagnostics industry to have gone public. CardioNet is also one of the few companies to have obtained reimbursement from the Centers for Medicare & Medicaid Services, which is very unlikely unless sufficient cost benefits are demonstrated.47 The rates for gross profit, selling/general/administrative expenses, and research and development were also adopted from CardioNet; values of 66%, 51%, and 3% of revenue were used for estimation, respectively. The proxy company shows an EV/EBITDA ratio of 13.23 based on 2008 data (Appendix 1). This is notably higher than the average for its sector (medical equipment), 8.56 (median = 9.29).48 Therefore, the company is expected to pay back the investment sooner than others. Since any prediction based on the proxy company’s EBITDA multiples could contain errors, key uncertainties include differences in product types and customers’ responses to them, because these are the major sources of a manufacturing company’s revenue. The company’s financial structure (e.g., costing, accounting) is also likely to affect the EBITDA multiples, because the numbers in the financial tables depend on which methods the company uses to estimate its resources and earnings.
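The proxy-based estimate can be sketched numerically. Treating EBITDA as gross profit less SG&A and R&D is our simplification; the ratios and the 13.23 multiple are the CardioNet figures cited above, applied to the projected $165 million revenue:

```python
# EBITDA and enterprise-value sketch using the CardioNet proxy ratios.
revenue = 165.0      # $ millions, projected revenue at IPO
gross_margin = 0.66  # gross profit / revenue (CardioNet)
sga = 0.51           # selling/general/administrative expenses / revenue
rnd = 0.03           # research and development / revenue

# Simplification: EBITDA ~ gross profit - SG&A - R&D (a ~12% margin)
ebitda = revenue * (gross_margin - sga - rnd)
print(f"EBITDA: ${ebitda:.1f}M")            # -> EBITDA: $19.8M

ev_multiple = 13.23  # CardioNet's 2008 EV/EBITDA ratio
enterprise_value = ebitda * ev_multiple
print(f"Enterprise value: ${enterprise_value:.0f}M")  # -> ~$262M
```

Within rounding, this reproduces the $260 million valuation at liquidation used in the financing plan below.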

A company valuation at liquidation of $260 million was finally estimated and applied to the venture financing. Volcano, Corp. (Volcano) was used as our startup example, because Volcano is a player in an industry similar to our venture’s and is mature enough for a valuation to be performed on it. In our references, the IRRs for investors were found as ranges rather than specific values. However, all the other data used for constructing the cap tables, including the amount of investment in each round and the length of each investment, reflect the actual venture investments made in our proxy companies.

The venture investment we planned appears to have a positive return (Appendix 2). The company underwent four investment rounds, and the amount raised in each series is reasonable. All investors meet their IRR goals per round, with the founders receiving $51.21 million (21.2% ownership) at exit. The market capitalization obtained at exit was strong enough to cover the returns on all investments. The amount of investment at each round decreased from round 2 to round 4, at the expense of lower founder ownership at exit. Note also that no down rounds were observed, and the investment multiples for the investors look plausible. It is still possible to assign more ownership to investors, since the smallest IRRs were not used in the cap table.
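The per-round checks above rest on a standard IRR calculation for lump-sum investments. The sketch below uses hypothetical figures for illustration; it is not drawn from the actual cap table in Appendix 2:

```python
# Annualized IRR for a single lump-sum investment round.
def round_irr(invested, proceeds, years):
    """Internal rate of return: (proceeds/invested)^(1/years) - 1."""
    return (proceeds / invested) ** (1.0 / years) - 1.0

# Hypothetical example: $10M invested, $60M returned after 5 years.
irr = round_irr(invested=10.0, proceeds=60.0, years=5)
print(f"IRR: {irr:.1%}")  # -> IRR: 43.1%
```

Comparing each round’s computed IRR against the investors’ target range is the check each series in a cap table must pass.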

There is room to make this plan more VC-attractive. For example, if we increased the IRRs for rounds 2–4 by 5 percentage points, the plan would be even more favorable to investors while still leaving as much as 13.6% for the founders at exit. Alternatively, if we could come up with a more efficient procedure to cut SG&A costs and maximize operating margin, we could end up with a larger EBITDA for a given revenue. We also plan to ensure that the company manufactures a Class I or Class II product (faster FDA approval) that is compatible with current smartphones or wireless devices and their wireless carriers. This will also improve our chances of attracting investors, as shown by recent trends favoring direct-to-consumer, quickly adoptable products.49 This would steepen our revenue growth curve and hence our EBITDA and company valuation.

We acknowledge that, in any case, whether or not this venture is VC-backable is highly sensitive to the valuation of the company at IPO/liquidation. If the company were valued at $250 million at IPO/liquidation, then not all investors would meet their investment goals, and the venture would not be VC-backable. This difference in valuation depends on the projected revenue of the company, a difference of only a couple of million dollars in projected revenue. To determine whether this company is VC-backable, more extensive research is needed to produce a more accurate revenue projection. A more accurate projection could be obtained by talking to or surveying potential customers and by investigating possible price points.

References
1 Frost & Sullivan, Cardiac Monitoring Systems Market, Mar. 2008

2 Frost & Sullivan, Cardiac Monitoring Systems Market, Mar. 2008

3 Wikipedia – Cardiac Monitoring: http://en.wikipedia.org/wiki/Cardiac_monitoring

4 Encyclopedia of Nursing and Allied Health: http://www.enotes.com/nursing-encyclopedia/cardiac-monitor

5 OneSource Global Business Browser

6 Angeion Corporation: http://www.angeion.com/

7 CASMED, Customer Care, http://www.casmed.com/serviceproduct.html#

8 The health care value chain : producers, purchasers, and providers / Lawton R. Burns, and Wharton School colleagues.

9 North American ECG and Cardiac Monitoring Products Markets, Frost & Sullivan, April 20, 2007

10 CASMED, Customer Care, http://www.casmed.com/serviceproduct.html#

11 Deltex Medical, Clinical Evidence, http://www.deltexmedical.com/clinicaleducation.html

12 CASMED, Customer Care, http://www.casmed.com/serviceproduct.html#

13 Deltex Medical, Clinical Evidence, http://www.deltexmedical.com/clinicaleducation

14 CASMED, Customer Care, http://www.casmed.com/serviceproduct.html#

15 The health care value chain : producers, purchasers, and providers / Lawton R. Burns, and Wharton School colleagues.

16 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

17 http://0-www.medicaletrack.com.lib.bus.umich.edu/TOC.aspx

18 Freedonia Focus on Medical Equipment, August 2009

19 http://www.dicardiology.net/node/29425

20 http://www.dicardiology.net/node/27983

21 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

22 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

23 http://www.dicardiology.net/node/27983

24 Medical eTrack

25 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

26 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

27 http://fisher.osu.edu/fin/courses/sim/aboutsim/presentations/economics_cap_markets_etc/sp09/health-care-sector.pptx

28 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

29 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

30 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

31 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

32 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

33 Medical eTrack

34 “Health Care Equipment and Supplies in the US,” DATAMONITOR, May 2009

35 Medical eTrack

36 Galeon, A. K. 2006, Wall Street’s perspective on medical device evaluation. In: Becker, K. M., Whyte, J. J., Clinical evaluation of medical devices: principles and case studies., Humana Press, 2006

37 The health care value chain : producers, purchasers, and providers / Lawton R. Burns, and Wharton School colleagues.

38 The health care value chain : producers, purchasers, and providers / Lawton R. Burns, and Wharton School colleagues.

39 Stuart, Mary. Is there a Market for Wireless Cardiac Monitoring? Start Up, January 2010.

40 Stuart, Mary. Is there a Market for Wireless Cardiac Monitoring? Start Up, January 2010.

41 IBISWorld Industry Report: Medical Instrument & Supply Manufacturing in the US, March 2010.

42 The health care value chain : producers, purchasers, and providers / Lawton R. Burns, and Wharton School colleagues.

43 The health care value chain : producers, purchasers, and providers / Lawton R. Burns, and Wharton School colleagues.

44 http://www.dicardiology.net/node/34208/

45 Lecture Slides, Industry Examples as related to Teece, A. Ziedonis 1/2007

46 Wireless Health: State of the Industry 2009 Year End Report, MobiHealthNews

47 Industry Metrics: Wireless Health By the Numbers, pg 20, 2009

48 http://www.infinancials.com/en/market%20valuation,Cardionet%20Inc,42795NU.html

49 Industry Metrics: Wireless Health By the Numbers, pg 18, 2009