Which waves are used in mobile phone communication?

Type of electromagnetic wave used in radio broadcasting

Any kind of wireless communication, whether mobile communication, satellite communication, FM radio, or TV broadcast, is possible due to the propagation of electromagnetic waves through air or vacuum.

Radio broadcasting electromagnetic waves

An FM radio station broadcasts electromagnetic waves at a frequency of 125 MHz (M = mega = 10^6).

These radio waves have a wavelength of 2.4 m.


What is the speed of the radio wave? Well, the formula you need is speed = wavelength × frequency. You do the math.
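Doing the math as a minimal sketch (the variable names are mine; the 125 MHz and 2.4 m figures come from the example above):

```python
# Speed of a wave = wavelength * frequency.
frequency_hz = 125e6   # 125 MHz, the FM station's broadcast frequency
wavelength_m = 2.4     # metres

speed = wavelength_m * frequency_hz
print(speed)  # 300000000.0 -- i.e. 3.0e8 m/s, the speed of light
```

As expected for any electromagnetic wave in vacuum, the product recovers c ≈ 3 × 10^8 m/s.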


What type of electromagnetic wave is used in diagnosis of bone structure

Medical imaging techniques became widely available after computer-based image analysis started to appear in the 1960s.

It has been around nearly 60 years now, and the technology has advanced to the point that identifying abnormalities is possible through medical imaging.

There are many reasons that medical image analysis is needed:
• Clinical study
• Diagnosis support
• Treatment planning
• Computer-assisted surgery

Magnetic resonance imaging (MRI) first appeared in 1971, when Paul Lauterbur applied magnetic field gradients in all three dimensions and a back-projection technique in order to create nuclear magnetic resonance (NMR) images.

An MRI scan uses a strong magnetic field and radio waves to produce a detailed image of the interior of the body.

The scanner is a large tube that contains powerful magnets and can be used to examine almost any part of the body, including:
• Brain and spinal cord
• Bones and joints
• Breasts
• Heart and blood vessels
• Internal organs, such as the liver, womb or prostate gland

Positron-emission tomography (PET) is used to observe metabolic processes in the body as an aid to the diagnosis of disease, and is a nuclear medicine functional imaging technique.

The concept of emission and transmission tomography was introduced by David E. Kuhl, Luke Chapman and Roy Edwards in the late 1950s.

PET is thus one of the older medical imaging techniques, and it remains widely used.

It is an imaging test that uses a low-dose radioactive tracer, allowing the doctor to see whether the organs and tissues in the body are functioning appropriately.

This enables the doctor to diagnose various forms of cancer, heart ailments, and brain disorders.

By detecting disease at a cellular level, PET scans allow for early diagnosis and treatment of conditions that progress over time.

An X-ray is a form of electromagnetic radiation (EM radiation): waves of the electromagnetic field radiating through space and carrying electromagnetic radiant energy.

X-rays were first noticed by scientists in 1869 while investigating cathode rays (energetic electron beams) produced by discharge tubes.

They were still just an unidentified type of radiation during the 1895 experiments with the tubes.

William Morgan had produced them unknowingly around 1785; more than two centuries later, X-ray imaging is recognized as a valuable medical imaging technique that provides lifesaving information.

Exposure to high radiation levels can have a range of effects, such as vomiting, bleeding, fainting, hair loss, and skin damage.

However, X-rays provide such a low dose of radiation that they are not believed to cause any immediate health problems.

X-ray imaging exams are recognized as a valuable medical tool for a wide variety of examinations and procedures.

They are used to:
• noninvasively and painlessly help diagnose disease and monitor therapy;
• support medical and surgical treatment planning; and
• guide medical personnel as they insert catheters, stents, or other devices inside the body, treat tumors, or remove blood clots or other blockages.

A computed tomography scan (CT scan) uses computer-processed combinations of many X-ray measurements taken from different angles to produce cross-sectional images of a specific area of a scanned object.

This allows the user to see inside the object without cutting.

Medical imaging is the most common application of X-ray CT.

The history of X-ray CT goes back to at least 1917, with the mathematical theory of the Radon transform: cross-sectional (tomographic) images are recovered from line-integral projections of an unknown density represented as a function f, which is what produces the virtual slices of a CT scan.
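As a toy illustration of that idea (my own minimal sketch, not a real reconstruction algorithm): the scanner records line-integral projections of the unknown density f, here just sums along the rows and columns of a tiny grid; CT reconstruction inverts many such projections taken at many angles.

```python
# Toy density f on a 4x4 grid (values are arbitrary "attenuations").
f = [
    [0, 0, 0, 0],
    [0, 5, 5, 0],
    [0, 5, 9, 0],
    [0, 0, 0, 0],
]

# Projection at 0 degrees: line integrals (sums) along each row.
proj_0 = [sum(row) for row in f]
# Projection at 90 degrees: line integrals along each column.
proj_90 = [sum(col) for col in zip(*f)]

print(proj_0)   # [0, 10, 14, 0]
print(proj_90)  # [0, 10, 14, 0]
```

Each projection alone is ambiguous; combining projections from many angles is what lets the Radon transform be inverted into a cross-sectional image.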

It is an important tool in medical imaging, and it has more recently been used for preventive medicine or screening for disease, for example CT colonography for people at high risk of colon cancer, or full-motion heart scans for people at high risk of heart disease.

CT is a valuable medical tool that can help a physician:
• Diagnose disease, trauma or abnormality
• Plan and guide interventional or therapeutic procedures
• Monitor the effectiveness of therapy (e.g., cancer treatment)

Further, it is used to obtain CT scans of the head, lungs, heart and blood vessels (angiography), abdomen and pelvis, and the axial skeleton along with the extremities (limbs).

Medical ultrasound (also known as diagnostic sonography or ultrasonography) is a diagnostic imaging (or medical imaging) technique based on the application of ultrasound (sound waves with frequencies higher than the upper audible limit of human hearing).

After the discovery of piezoelectricity in 1880, physicist Floyd Firestone devised the first ultrasonic echo imaging device in 1940, the Supersonic Reflectoscope, to detect internal flaws in metal castings.

However, John Wild first used ultrasound to assess the thickness of bowel tissue in the late 1940s and has been described as the "father of medical ultrasound".

Because ultrasound images are captured in real time, they can also show movement of the body's internal organs as well as blood flowing through the blood vessels.

Unlike X-ray imaging, there is no ionizing radiation exposure associated with ultrasound imaging.

In an ultrasound exam, a transducer (probe) is placed directly on the skin or inside a body opening.

A thin layer of gel is applied to the skin so that the ultrasound waves are transmitted from the transducer through the gel into the body.

The ultrasound image is produced based on the reflection of the waves off of the body structures.

The strength (amplitude) of the sound signal and the time it takes for the wave to travel through the body provide the information necessary to produce an image.
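That time-of-flight relationship can be sketched as follows (assuming the conventional average soft-tissue sound speed of about 1540 m/s; the function name is mine):

```python
# Depth of a reflector: the echo's round-trip time covers the distance twice,
# so depth = speed * time / 2.
def echo_depth_m(round_trip_time_s: float, c: float = 1540.0) -> float:
    """Estimate reflector depth from echo round-trip time (sound speed c in m/s)."""
    return c * round_trip_time_s / 2.0

# An echo that returns after 65 microseconds came from roughly 5 cm deep:
print(echo_depth_m(65e-6))  # ~0.05 m
```

The amplitude of the returning echo, by contrast, sets the brightness of the corresponding point in the image.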

,Ultrasound imaging is a medical tool that can help a physician evaluate, diagnose and treat medical conditions.

Common ultrasound imaging procedures include:
• Abdominal ultrasound (to visualize abdominal tissues and organs)
• Bone sonometry (to assess bone fragility)
• Breast ultrasound (to visualize breast tissue)
• Doppler fetal heart rate monitors (to listen to the fetal heartbeat)
• Doppler ultrasound (to visualize blood flow through a blood vessel, organs, or other structures)
• Echocardiogram (to view the heart)
• Fetal ultrasound (to view the fetus in pregnancy)
• Ultrasound-guided biopsies (to collect a sample of tissue)
• Ophthalmic ultrasound (to visualize ocular structures)
• Ultrasound-guided needle placement (in blood vessels or other tissues of interest)

Which of the electromagnetic waves is used in cancer therapy

It is possible - but it would have nothing to do with the satellite dish.

First, I believe the satellite dish only receives - unless they have them actually transmitting to the satellite for an internet connection or something like that - but last time I checked, they only received.

Second, there's no evidence that non-ionizing radiation can cause brain tumors - see Gary Larson's answer to "Can WiFi cause cancer?" WiFi, cell phones and most satellite communication use radio waves in the microwave portion of the electromagnetic spectrum, which cannot produce ionizations in atoms - a necessary event in cancer induction by radiation.

There are links to other pertinent answers contained in the referenced answer above.

We all have the same chance of developing a brain tumor (unless you actually received ionizing radiation to your brain in the past - like radiation therapy - or were exposed to nuclear fallout; only a small part of the population has to worry about that).

Radiotherapy electromagnetic waves

High-frequency radiation such as X-rays is commonly used.

The idea is to focus the high-frequency beam onto cancer cells and kill them.

It is an alternative to surgically cutting out the affected area.

Identify the type of electromagnetic wave used in each application

What you need to know to build Google Glass.

So you want to learn about one of my obsessions: pico projectors.

And I will tell you more than you wanted to know.

But before I do, let's build some relevance; if you are starting out in the devices business, let me tell you why you absolutely need to read this note.

In consumer tech design, a display is the first and last thing that matters.

Apple's X-factor is considered to be its 'device experience', and display technology has always been the basis used to construct all Apple experiences.

It is the silk with which designers weave.

No amount of insightful UI or powerful processors or great baseband/memory or security chipsets will help push products if the display, the primary interface between abstract computing and the consumer, is crap.

Heck, UI is a thing because displays support it.

We are not addicted to smartphones; we are addicted to the screens that shape the content.

RAM/processors/GPUs don't shape consumer experience; displays do.

After the somatosensory system, the visual system is the largest input processor in the brain and an entire cortical lobe (occipital) processes visual input.

Great displays make for great marketing.

The history of consumer computing machinery adoption is that of display technology adoption - not silicon [0].

Current HMD/AR/VR hype is a display tech hype.

Everything else is a secondary feature, discussed only if the display is good enough [1].

And the only hardware research company that Apple has invested in, despite a documented aversion to hardware research, is a displays company [2].

So from a commercial perspective, display technology is a pretty big deal.

Okay, so let's discuss pico projectors in that context.

[0] Woz's integration of input with output with RGB for the Apple II.

[1] An IMO more definitive argument for why this is so. [ https://interaxn.com/Displays-drive-adoption-of-computing-machinery ]
[2] LuxVue/Q2 2014; of 9 h/w acqui-hires since 1999, all were shipping products except LuxVue, which was in the research phase; all have been incorporated into products except LuxVue.

Organization of this note

Before getting to it: if you are unfamiliar with the terms used below, you will find some background information in the following links (I am still working on finishing these):

1. Projection displays: Intro and background [ https://interaxn.com/Projection-displays-Intro-and-background ]
2. Projection displays: Core technology [ https://interaxn.com/Projection-displays-Core-technology ]

Figure 01: Image shows different commercial pico projection/optical modules.

Details in the image and descriptions follow below.

Image to scale.

We look at the construction of five systems in this note. Of the five, four are types of pico projection technologies, and the fifth is the Google Glass near-eye display system.

1. Digital Light Projection (DLP), Samsung's Galaxy Beam 1 & 2 (ODM/Sekonix)
2. Laser Beam Steering (LBS), Celluon module (uses the Microvision PicoP)
3. Field-Sequential Color Liquid Crystal on Silicon (FSC LCoS), OEM pico projector (ODM/Himax)
4. Color filter LCoS microdisplay (ODM: Himax/3M early prototype)
5. Google Glass FSC LCoS (likely Himax)

I also describe a relation between Magic Leap and pico projection that pop media doesn't seem to have dwelt on yet. I finally conclude with a minor discussion on Cicret/Ritot and 'repurposing' existing DLP h/w.

[Unless noted, all images are the author's work.]

Device teardowns and engineering

Device 1: Digital Light Projection
(aka Digital Micromirror Display/DMD, Deformable Mirror Display)

Figure T1: Pico projector module from Samsung Galaxy Beam 1 & 2.

The top row shows top/bottom views of the device.

The bottom images show labeled components.

Figure T2: Light sources/collimation lenses used in pico projection.

(A) shows the wrap-around flex and collimation lenses on which the R+B and G LEDs reside.

(B), (C) show closeups of the LEDs and (D) shows the collimation lenses with mounts.

All LEDs are mounted on ceramic dies and heat sinks.

Figure T3: Fly-eye lens array used for homogenizing collimated light input from the dichroic mirrors.

(A) shows an FEL mounted on a common 532 nm laser, (B) shows the effect of the FEL on the laser's spot - the FEL spreads the light uniformly over a rectangle, (C) is a closeup of the FEL and (D) is a closeup of the individual lenses.

Figure T4: TI DMD micromirror unit, images and schematics.

The DLP system consists of multiple dies, 4 or 5 bonded layers at least.

The die stack is generally proprietary, but it consists of a coverglass layer, a MEMS mirror layer, a CMOS memory layer and a TSV/THV routing+component +high voltage IC layer.

(Src: Larry Hornbeck's DLP note [ http://focus. pdf ]; more description here [ https://www. com/What-key-engineering-skills-were-required-for-the-development-of-MEMS-products-such-as-DLP-and-micro-Gyroscopes ])

Figure T5: TI DMD micromirror array.

L to R: penny, DLP2010/0.2 inch TRPixel, closeup of array, closeup of individual pixel, apparently the new Tilt-Roll-Pixel.

(Src: TI DLPA051 whitepaper/Sep 2014)

Figure T6: A sequential zoom into the DMD mirror array using an optical microscope.

The greyed out pixels are stuck.

These are easy to damage but I am also clumsy.

Not a great combination.

Figure T7: Projection lenses used in pico projectors.

The left system is used in an FSC LCoS system, the right one is used in the Samsung Galaxy projector.

Note that the LCoS lens is positioned using a manual thumb wheel, while the Samsung system uses a bipolar stepper to move the projection lens along the guide-rails.

Operation

First thing to note is that we are only looking at the light engine component of the system.

Without the driver and power ICs, the light engine doesn't do anything at all.

Companies like to make fudged claims that they have the smallest light engines, but that's BS without including the driver/power ICs.

The least total self-contained volume for this system will be around 30 × 30 × 20 mm³, without video conversion DSPs and external power.

That's actually huge.

DLP uses unpolarized light and simply flips light around with micromirrors.

So unlike liquid crystal based devices, DLP has no need for light polarizing components.

LEDs are used as illumination sources in DLP, typically with two primary wavelengths on a single ceramic die and the third one on a second die.

Color is obtained through time averaging strategies (field sequential color/FSC) as opposed to spatial averaging [FSC is discussed in the Google Glass section below].

This allows a finer micromirror pixel pitch.

The illumination from the LEDs passes through collimating lenses and gets directed to dichroic mirrors.

These mirrors act as bandpass filters and only allow certain wavelengths to pass through to a homogenizer optic, typically a fly-eye micro-lens array.

This light now passes through a condenser lens, which focuses it on a 45-degree first-surface mirror that reflects the light onto the DMD array.

TI DLP technology is based on arrays of electrostatically switched micromirrors that act as bistable light switches.

When we turn them on, they reflect light into the projection optics.

When we turn them off, they reflect the light into a light sink (this light/energy is lost as heat).

The light sink in this design is located at a truncated corner of the first surface mirror (not shown as it cannot be imaged easily).

You can read a little more about how DLP works here [ https://www. com/What-key-engineering-skills-were-required-for-the-development-of-MEMS-products-such-as-DLP-and-micro-Gyroscopes ] or on TI's website.

These micromirrors can also be pulse-width modulated to create a grayscale response.

The subframe-to-pixel state conversion is handled in a driver IC, which passes the information to a power IC that steps voltages up to the roughly 12 V required to actuate individual mirrors.
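The grayscale idea can be sketched as binary PWM over bit-planes (an illustrative model of my own, not TI's actual bit-splitting sequence; the function name is mine):

```python
# Each bit-plane b of an n-bit gray value is displayed for a time slice
# proportional to 2**b; the eye integrates the mirror's total on-time
# within the frame into a perceived gray level.
def on_time_fraction(gray: int, bits: int = 8) -> float:
    """Fraction of the frame a mirror spends 'on' for a given gray value."""
    total = 2**bits - 1
    return sum(2**b for b in range(bits) if (gray >> b) & 1) / total

print(on_time_fraction(0))    # 0.0  -- mirror parked off for the whole frame
print(on_time_fraction(255))  # 1.0  -- mirror on for the whole frame
print(on_time_fraction(128))  # ~0.502 -- on only during the longest time slice
```

Real DLP controllers split and reorder these slices to reduce visible flicker, but the time-integration principle is the same.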

So the on-state light reflected by the micromirrors passes through the projection lens onto a screen.

This system has a bipolar stepper motor (which also has a limit switch) that can be used to focus the projection lens.

Note that unlike full-size projectors, we do not use a single high-intensity white light source - that removes the color-wheel filter and rotational actuator requirements.

Also, larger pico projectors use TIR prism sets to avoid chromatic aberration issues and adjust relative illumination/imager sizes.

The biggest advantage DLP has over LCoS is that the switching and settling time on the mirrors is on the order of 10s of microseconds, compared to 100s of microseconds for liquid crystals, which implies high refresh rates and support for dynamic content.

[Finally, note that all pico projection systems use folded light optics/pathways - people like to describe the Google Glass as using folded light pathways as if the term carried deep meaning or significance - it doesn't.

It's a generic term.

Most cameras/SLRs use folded light pathways.]

Device 2: Field-Sequential Color Liquid Crystal on Silicon

Figure S1: Shows the internal structure of an FSC LCoS pico projector light engine.

Again note that this is merely the light engine, and does not contain any driver or power ICs; these are located on the main system board.

Figure S2: Shows the illumination pathway.

R+B LEDs can be noted to be on the same die.

Can you tell anything by the intensity pattern seen in the polarizing beam splitter?

Figure S3: Closeup of the polarized beam splitter array and the louvred wave plate that make up the polarization compensation system.

Note that this and the polarized beam splitter cube are different components.

(B) shows the top view, while (A) shows the edge and parallel structure of the individual PBS elements.

Figure S4: (A) shows a polarized beam splitter (PBS) cube.

(B) shows the PBS splitting the input laser beam into two.

Video S1: And here's OK Go performing their 'I Won't Let You Go' in LCoS reflection.

Note that we are looking directly at the LCoS screen under a microscope using unpolarized illumination; there are no PBSs here.

That's why the images appear to be edge-filtered, i.e. they only show edges; these pixels are turned off all the way.

The grayscale (PWMed) LCs don't show.

Operation

Regular displays, like the IPS LCDs used on Apple devices or the SCTN AMLCDs used in computer monitors, have the liquid crystal layer sitting on a glass back-plane (among other things).

This glass backplane allows polarized light from a backlight panel that sits behind the glass back-panel to get through.

This is described as a transmissive display.

Liquid Crystal on Silicon, however, has the liquid crystal layer on top of a silicon substrate.

So there is no backlight.

When the liquid crystals are electrically stressed, they either close or open, preventing or allowing some part of the light to hit the underlying silicon substrate.

If this silicon backpanel is reflective, then we see light reflected out from the pixel cell.

Note that display or output light and the input light share the same paths.

So LCoS is similar to DMD that way: pixels in both reflect light.

But DMDs use geometry to push light out in a different direction from the incident light, whereas in LCoS the input/output light share the optical path.

You might have noticed that the DLP unit did not have a louvered/half wave plate, a PBS array or a PBS cube.

That's because DLPs use unpolarized light.

Liquid crystals, however, need polarized light to work and can be made to shutter polarized light.

So the initial part of illumination production in LCoS is exactly like that in DLP, up to the fly-eye homogenization.

After that, the LCoS needs to convert most of the unpolarized light to light with a single polarization.

That is why a PBS array is used to split the incident unpolarized light into its two components (S & P), and a louvered wave plate selectively converts one of the two into the other.

These elements in combination represent a Polarization Conversion System (PCS).

A PCS is never 100% efficient, and there are other ways to polarize/recycle light instead of just using a PBS array and a half wave plate (polarizing gratings, reflective polarizers etc.) - we will revisit this point when discussing the Google Glass.
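To see why a PCS matters, here is a back-of-envelope comparison (the component efficiencies below are my illustrative assumptions, not measured values):

```python
# A lone absorptive polarizer throws away half of an unpolarized input.
def single_polarizer_throughput(light: float = 1.0) -> float:
    return 0.5 * light  # the rejected component is lost as heat

# A PCS (PBS array + louvered half-wave plate) re-uses the rejected
# component by converting it to the wanted polarization.
def pcs_throughput(light: float = 1.0,
                   pbs_eff: float = 0.95,
                   waveplate_eff: float = 0.97) -> float:
    p = 0.5 * light * pbs_eff                   # wanted component, passed
    s = 0.5 * light * pbs_eff * waveplate_eff   # other component, converted
    return p + s

print(single_polarizer_throughput())  # 0.5
print(round(pcs_throughput(), 3))     # 0.936 -- much better, but never 1.0
```

Even with optimistic component numbers, the conversion path always loses a little, which is why PCS efficiency stays below 100%.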

Anyway, the fly-eye lens, PBS array and half wave plate output P polarized light, which is then reflected/condensed through to a PBS cube.

The dielectric plane in the PBS cube reflects P polarized light and transmits S polarized light.

So the incident P component from the condenser gets reflected into the LCoS, where the activated liquid crystal cells toggle incident P to reflected S again.

This reflected S light is now able to pass through the PBS into the projection lens assembly and out into the world.

So LCoS projectors, just like LCD monitors, produce polarized light as well.

Color is generated using a process similar to DLP: time-averaged FSC.

Device 3: Color Filter Liquid Crystal on Silicon

See Figure 01 at the very top of this note for what CF LCoS displays look like.

Color filter LCoS is more similar to traditional LC displays than to FSC LCoS.

Traditional LCDs are transmissive, but CF LCoS is reflective.

FSC LCoS uses subframe synchronization to generate time-averaged color, but CF LCoS uses a traditional sub-pixel array to create spatially averaged color.

Consequently, CF LCoS pixels are larger than either DLP or FSC LCoS pixels.

CF LCoS requires front-illuminated white light sources, and I have seen devices that don't use any polarizers at all - they just operate at 50% efficiency and use the largest heat sinks you can imagine.

The most common sources for CF LCoS pico projectors are LCD makers in China.

Making LCs on silicon backplanes is easier than on glass, so these units usually come directly from LCD operations.

These are poor quality, but cheap.

When done right, CF LCoS can create decent projections.

These devices look similar to the FSC LCoS devices and can be designed with similar quality optics.

These will typically be larger than FSC LCoS or DMD panels.

Figure C1: Shows the actual pixel structure from two different CF LCoS panels.

The glow in the pixels is from a reflection of the P component of the microscope illumination; there's no backlight in these panels!

Device 4: Laser Beam Steering/Celluon/Microvision

Figure B1: Shows the full system for a Celluon laser pico projector.

It's larger than three iPhone 6 Pluses in size.

Figure B2: (A)/TopLeft shows the LBS projector unit with a credit card and a Samsung Beam pico projector for comparing sizes.

The two sheet metal components are covers/heat sinks.

(B)/BottomLeft shows the top view of the exposed optical module with the cover on the laser diode array intact, and (C)/Right shows the same without the cover from a different perspective.

Figure B3: Shows the laser diodes all lit up.

Note the micromirror/electromagnet assembly is missing.

This image also indicates the optical path of the system.

Figure B4: Multiple perspectives of the electromagnetic micromirror actuation system.

This thing is huge for something that claims to be a MEMS device.

Figure B5: The MEMS micromirror die.

The top right corner broke during disassembly.

Figure B6: A metal ring is present on the reverse side of the micromirror.

The entire die is also bonded to a ferromagnetic metal film to create a flux path for magnetic fields.

Figure B7: There are full/4-bridge piezoresistive strain gauges placed at the regions where they expect maximum torsion to occur.

I could not probe the exact configuration.

Let's start with a bit of history on Microvision - Microvision is the company that designed the light engine we just looked at.

It was founded in 1993 or thereabouts by a guy called Steve Willey.

Steve was a management guy tasked by Tom Furness, director of the Human Interface Technology lab (HIT lab) at UofWA/Industrial Engineering dept, to commercialize the virtual retinal display [ http://en. org/wiki/Virtual_retinal_display ] based on the retinal scanning tech developed in the lab.

Here's the cool thing about the HIT lab: Eric Seibel developed the scanning fiber endoscope there for his PhD around '96.

Who's he, you ask? Well, apparently he's the guy whose technology appears to form the basis for MagicLeap [ http://gizmodo.com/how-magic-leap-is-secretly-creating-a-new-alternate-rea-1660441103 ].

Microvision started out trying to scan images directly into the retina (exactly like what MagicLeap's trying to do now).


Small world.

Old concepts.

New spin.


The idea was originally derived by Mainster et al. in 1982 [ http://www. gov/pubmed/7122056 ] from observing an after-effect of scanning laser ophthalmoscopy, which uses lasers to map the retinal surface.

Patients reported perceiving images during the procedure.

When Microvision started this work in the early 90s, it used very simplistic galvanometric scanning mirrors to steer/scan the laser into eyes.

By the late 90s, they had pivoted to MEMS/micromirrors in silicon, during what I like to refer to as the first MEMS commercialization hype of the mid 90s.

They won quite a few federal contracts and awards between 1999 and 2005 and had a few interesting concepts to show for the money.

Figure B8: Taken from USAARL/AHPD Report No. 2003-03 [ http://www. PDF ].

Microvision and AARL did a bunch of failsafe/exposure studies that claimed everything was fine and cute, but DoD killed the project anyway.

Something about lasers and eyes doesn't quite work well together.

(Brothersoft was in the same RSD/RID space as well, and they didn't pan out either. Marketing a display experience that only a single person can see is a bitch!)

Operation

Spatial coherence in lasers allows a rapidly fluttering (oscillating) mirror to reflect the beam to precise locations in space.

That is the basic idea behind Laser Beam Steering (LBS) which is also described as a Flying Spot Mirror technology.

Note that DLP and LCoS can be used with either LEDs or laser emission sources (laser diodes, LDs), but beam steering applications can only use coherent sources.

LBS has more similarities with CRT than with any other digital display technology.

CRT used a tight beam of electrons and magnetic lenses to move the beam around in space; similarly, mirrors move a laser beam around in LBS.

CRT was a quasi-digital technology (the shadow mask layer over phosphors created the pixel structure) and so is LBS.

The source is time multiplexed between on/off states to create a pixel through persistence of vision (there are other, similar strategies as well).

The smallest pixel is the smallest achievable spot size; the number of resolvable spots scales with mirror diameter and oscillation amplitude.
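A rough scanline budget shows the other resolution constraint, the mirror's oscillation rate (the numbers and function name below are my illustrative assumptions):

```python
# A fast-axis mirror resonating at f_fast sweeps across the image twice per
# period; if both sweep directions draw pixels, each period yields two lines.
def lines_per_frame(f_fast_hz: float, frame_rate_hz: float,
                    bidirectional: bool = True) -> float:
    lines_per_second = f_fast_hz * (2 if bidirectional else 1)
    return lines_per_second / frame_rate_hz

# e.g. an 18 kHz fast axis refreshing at 60 frames per second:
print(lines_per_frame(18_000, 60))  # 600.0 scanlines per frame
```

So vertical resolution is bounded by how fast the resonant axis can be driven, independent of spot size.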

The particular Celluon/Microvision device under consideration uses 5 LDs (two R/639 nm/90 mW, two G/522 nm/85 mW and one B/445 nm/50 mW TO-cans; not sure about thermal/frequency stabilization; Class 3R device) as its illumination sources.

The multiple LDs per color seem to be a wavelength-diversity based speckle reduction technique.

The beams get collimated using lenses in front of the TO-cans and combined into a single optical path through several prism arrays.

These get condensed before finally being directed to the micromirror.

This is very straightforward as far as optics go.

[Note I have no idea of the dielectrics/coatings over any of the optic elements (throughout this article).

So it's possible I am missing some key points.]

The next part is the actuator/micromirror assembly.

Their actuation uses a simple galvanometric 2D scanning approach.

The basic idea is they have a variable current carrying coil placed in a (permanent) magnetic field and this coil experiences an electromagnetic torque/Lorentz force which causes it to rotate - like a DC motor.

However, unlike a DC motor or galvanometer, the mirror needs to be able to rotate in 2 axes (let's call them fast and slow; the fast axis addresses rows, the slow addresses columns) for a laser beam to scan across a 2D surface.

If we wanted the mirror to point in any arbitrary direction, we would use multiple coils to generate orthogonal fields.

But since arbitrary pointing is not required in this case, they couple the two axes using a single coil and orthogonal fields that are not aligned with the axes of rotation.

The micromirror is located on two torsion flexures in a gimbal arrangement.

I am not sure what the angle of rotation for the device is (about 17 degrees?).

Video B1: Fraunhofer IPMS/Resonant MEMS scanner.

Similar technology, but a lot better execution using electrostatic comb actuation strategies.

The high-stiffness flexures define the fast/row axis scan (vertical axis in Fig. B5) and the compliant/horizontal flexures define the slower column axis.

Each torsion flexure appears to have a single piezoresistive full bridge sensor, probably used for simple time keeping (not a feedback signal).

I believe this device operates in a resonant mode (too big/heavy to operate in bang-bang/discrete mode).

The metal ring at the bottom of the mirror is used to concentrate the flux through the thickness of the MEMS die and the metal plate at the backside constitutes the permanent magnetic circuit of the device.

That concludes the description of how LBS operates.

Some points to note with LBS:

There's a lot of talk about how laser projectors are focus free.

They are, but within a restricted range.

The image is not crisp at large throws (with large projection image sizes).

And we get a very, very distorted/blurred image at short/ultra-short throws with optics meant for medium-throw projection.

Note that achieving ultra-short throws is virtually impossible with commercial micromirror technologies (we can't get high mirror torsion amplitudes that still produce a linear response).

We also see extensive nonlinear flying-spot scanlines, and speckle issues.

In the early days of this tech, people used a two-torsion mirror setup instead of a gimbal arrangement (for example, Symbol/Motorola).

They had to digitally correct pincushion effects and mirror-angular velocity dependent artifacts in displays.

The mirrors also deform during oscillation and require correction.

This particular device is an example of a really old technology stack dating to the early 2000s, and I was frankly surprised to see an unpackaged die with the HUGE permanent magnets and an inefficient magnetic circuit.

Fraunhofer IPMS has done a lot of good work in this area using electrostatic comb actuators, and Mirrorcle/Intel is commercializing this.

I really expected to see something like that.

As expected with LBS, there's very nice contrast in the image, with saturated colors.

My eyes hurt for some reason (placebo effect from knowing that this is a Class 3R device?) and I couldn't stare at it for longer than 20 minutes.

Device 5: Google Glass (GG)

[Please see the catwig page [ http://www. com/google-glass-teardown/ ] for better system overview images, while noting that their teardown of the optical module is incomplete]

Lucky us, I was a Glass Explorer and received my GG in 2013.

I .
I .