Fall 2010 Colloquia
Physics and Astronomy Colloquium - Fall - 2010 (4-5pm, Science Bldg. 127)
Dec. 9 NOTE: This Thursday Colloquium + Seminar
(1) Colloquium: 4 pm
REU@Notre Dame: "Experience" over a Quarter Century
(2) Nuclear Astrophysics Seminar: Nuclear Incompressibility, the Asymmetry Term, and the MEM Effect
12:20pm-1:20pm, Dec. 9, Science 143, lunch served at 12:00noon
Prof. Umesh Garg
Physics Department, University of Notre Dame
Prof. Umesh Garg received his Ph.D. degree from the State University of New York at Stony Brook in 1978. He was a Research Associate at Texas A&M University in College Station from 1978 to 1982 before joining the physics faculty at the University of Notre Dame, where he became a full professor in 1994. Prof. Garg is a Fellow of the American Physical Society and has been a guest professor at many nuclear physics laboratories around the world. His current research interests include experimental investigation of compressional-mode giant resonances and exotic quantal rotation in nuclei. More information about his research can be found at http://www.nd.edu/~sciwww/FacultyCVs/garg_cv.pdf. Prof. Garg has been serving as the Director of the Physics REU Program at Notre Dame since 2000. He was recently elected to a 3-year term on the Executive Committee of the National Physics REU Leadership Group (NPRLG).
(1) Colloquium: REU@ND: "Experience" over a Quarter Century
The Research Experience for Undergraduates (REU) program has been in operation in the Physics Department, University of Notre Dame, for 25 years now. From humble beginnings, it has expanded to become a robust and very popular program, attracting more than 200 applicants for the fewer than 20 places available each summer. In this talk, I will describe our program and discuss its salient features, as well as what 25 years have taught us in terms of providing an enriching (and fun!) research experience to undergraduate students.
(2) Nuclear Astrophysics Seminar: Nuclear Incompressibility, the Asymmetry Term, and the MEM Effect
The nuclear incompressibility parameter is one of three important components characterizing the nuclear equation of state. It has crucial bearing on diverse nuclear and astrophysical phenomena, including the radii of neutron stars, the strength of supernova collapse, the emission of neutrinos in supernova explosions, and collective flow in medium- and high-energy nuclear collisions. In this talk I will review the current status of research on the direct experimental determination of nuclear incompressibility via the compressional-mode giant resonances. In particular, recent measurements on a series of Sn and Cd isotopes have provided an "experimental" value for the asymmetry term of nuclear incompressibility.
We also find that the GMR centroid energies in both the Sn and Cd isotopes are significantly lower than the theoretical predictions, pointing to the role of superfluidity and the MEM (Mutual Enhancement of Magicity) effect.
Dec. 2 Surface Science Research
Prof. Anil Chourasia
Department of Physics and Astronomy, Texas A&M University-Commerce
Dr. A. R. Chourasia joined the then East Texas State University as a post-doctoral fellow after receiving his Ph.D. in 1986. His Ph.D. work was on Extended X-ray Absorption Fine Structure (EXAFS). In connection with his EXAFS work, he was a visiting scientist at the Metals Research Institute, Japan. He joined the TAMU-Commerce faculty in 1996 after a national search. He has experience in thin film deposition and characterization, and has designed and developed a thin film deposition unit. The facility allows in situ characterization of interfaces by x-ray photoelectron spectroscopy, appearance potential spectroscopy, and reflection high-energy electron diffraction. He has advised several graduate students in their master's theses. He has several publications, including review articles, in refereed journals. He is also developing research in the field of theoretical materials science. His CV can be found here.
In this presentation, the research conducted in surface science at the Surface Science Research Laboratory will be presented. The facility for thin film deposition and characterization will be introduced. The controlled deposition (from < 1 monolayer/min to several monolayers/min) of metallic thin films on different substrates will be discussed. The surface sensitive techniques available in the SSRL for in situ characterization of the different interfaces will be presented. In particular, the application of the technique of XPS in the characterization of surfaces will be demonstrated. New results on titanium/copper oxide interface will be presented.
Nov. 18 Picometer Resolution Electron Microscopy as a New Tool to Tailor Materials at the Atomic Scale
Prof. Miguel Jose-Yacaman
Department of Physics & Astronomy
University of Texas at San Antonio
Professor Miguel Jose-Yacaman is the Chairman of the Department of Physics and Astronomy at UT-San Antonio. He received his Ph.D. in materials science from the National University of Mexico in 1973. He then did his postdoctoral research at Oxford University, England, and NASA's Ames Research Center in Mountain View, California. Dr. Jose-Yacaman was subsequently on the faculty at the National University of Mexico, West Virginia University, and the University of Texas at Austin as the Reese Professor of Engineering before moving to UT-San Antonio in 2008. He also served as the Executive Secretary of the National System of Research, Deputy Director for scientific research of the National Council for Science and Technology, and the General Director of the National Institute of Nuclear Research of Mexico. Dr. Jose-Yacaman is a Fellow of the American Physical Society and the AAAS. He won the National Prize of Sciences of Mexico, the Gold Medal of The Mexican Society of Physics, and the Robert Franklin Mehl Award from the Minerals, Metals & Materials Society. His primary research interest has been the structure and properties of nanoparticles, including metals, semiconductors, and magnetic materials. He developed a number of TEM techniques to study nanoparticles, which opened a new field of correlating structure with physical properties. He also did pioneering work on the antibacterial properties of metal nanoparticles. He demonstrated that nanoparticles deactivated viruses such as HIV-1. Additionally, he demonstrated for the first time the fivefold structure of gold nanoparticles. Dr. Yacaman was also a pioneer of electron microscopy in Latin America. He organized the first SEM lab in Latin America. Furthermore, during his career he has organized 10 electron microscopy centers in Mexico and the United States.
More information about Professor Miguel Jose-Yacaman and his research can be found at http://physics.utsa.edu/Faculty%20Staff/profiles/yacaman/yacaman.html
During the last decade, electron microscopy has seen dramatic advances, due mainly to spherical aberration correction of the lenses and to improvements in the hardware for analyzing and recording signals. For the first time it is possible to study nanostructures at the atomic level in a reliable way, and many interesting outstanding problems in materials science can be attacked with the new tools; in particular, the structure of matter at the nanoscale has been a long-standing problem. Nanoparticles have many significant technological applications, in catalysis, in medicine, and elsewhere, all of which depend on the properties of nanosized matter. Advances in characterization are opening a new era in which it is possible, for the first time, to correlate structure with properties for nanoscale materials. In this presentation, we will discuss some recent advances in the study of the structure of nanoparticles using aberration-corrected STEM, as well as recent advances in the study of bimetallic nanoparticles. We will show that the distribution of metals is more complicated than the simple alloy or core-shell model. We have used picometer-resolution images matched with energy loss spectroscopy. We find that the structure is in many cases a three-layer one: the first metal at the core, a second metal in an intermediate layer, and the external shell being the first metal again. This has very interesting implications for the optimization of metallic catalysts. Other examples will be presented in this talk.
Nov. 11 CERN, the LHC, and ATLAS: Physics at the Frontier
Dr. Kendall Reeves
University of Texas at Dallas
Dr. Kendall Reeves received his B.S. in Physics in 1993 from the California State University, Pomona, and his Ph.D. from the University of Texas at Austin in 2001. His research is in the field of experimental high energy physics, and he has previously collaborated on the SLD experiment at the Stanford Linear Accelerator Center, and the HERA-B experiment at the Deutsches Elektronen-Synchrotron (DESY) laboratory in Hamburg, Germany. In 2003 Reeves joined the University of Wuppertal as a Research Associate, where he became involved with the ATLAS experiment at CERN, and this involvement continues with his present position at the University of Texas at Dallas. He specializes in the construction and commissioning of large detectors. During this period of early LHC data, Reeves' physics interests are charmonium and charmonium-like states that contain a charm and an anticharm quark.
In March of this year, the Large Hadron Collider at CERN (the European Organization for Nuclear Research) began colliding protons at a center-of-mass energy of 7 TeV. The LHC has now replaced the Tevatron at Fermilab, near Chicago, as the accelerator operating at the highest collision energy. This provides us with a new window on the properties of matter and the forces which bind it together, and promises an excellent opportunity for discovery. ATLAS is the largest detector at the LHC. I will talk about my experiences helping to build ATLAS, discuss the latest challenges such as observing collisions from proton bunch trains, and look forward to the ATLAS and LHC program for the next year.
Nov. 4 A Timely Question in Computational Sciences: CPU or GPU? The Top 10 Factors to Consider
Dr. Antone Kusmanoff
L-3 Communications, Greenville, Texas
In 1967 I graduated with a BA in math from Southern Illinois University and went into the Air Force as a Communications Officer with the rank of Second Lieutenant. Twenty years later I retired as an Information Systems Officer at the rank of Lieutenant Colonel. Before I retired, I had also attained a BSEE and an MSEE from the University of Missouri and another MSEE from Georgia Tech. After I retired, I went to Oklahoma State University, attained a Ph.D. in electrical engineering, and immediately took a position at Southwest Research Institute in San Antonio upon graduation. I have had a few other jobs since then, all of them related to communication systems or computer engineering activities, including owning my own business for a year or so. For the last 13 years I have been a Senior Principal Systems Engineer for Raytheon/L-3 Communications here in Greenville, working on various defense industry projects. I am currently assigned to the R&D Department, where I am associated with research related to high-performance computing systems. I have been married for 42 years, and we have three children, two grandchildren, and one more due in February. I have been involved in multiple sports throughout my life, but have settled down to golf as my main athletic focus. I also have an interest in older vehicles, although I presently do not have anything except some leftover parts from a 1963 T-Bird.
Computational scientists spend their hours constructing mathematical models or completing quantitative analyses to analyze and try to solve scientific problems. The problem could be a set of unsolved coupled partial differential equations (PDEs) or another form of computational problem found in the scientific disciplines. Techniques of numerical analysis are the primary methods used in computational science activities. But principally, they are working through a study of mathematical models realized on CPU-architecture-based COTS computer systems. The extraordinary nature of the computational scientist's problems in the computing world is why the solutions are usually only effectively executed on supercomputers and clusters of various sizes, driven mainly by long execution times and large memory demands. But a new technology solution is available which calls for the application of a graphics processing unit (GPU) against the computational science problem set, with promises of multiple orders of magnitude of speedup. In the last two years the GPU approach has produced a sudden waterfall of university papers (most often supported by and working with the GPU manufacturers) making claims of nearly mythical performance improvement levels. It truly appears that "this time" the sun will be setting on the complex, but oh so slow, CPU. With the supercomputer performance claims from the small-frame GPU architecture, and with the latest general-purpose GPU (GPGPU) architecture advances shown by the manufacturers, it appears the good old COTS CPU is truly the dead technology for the next decade. Or is it? Will CPUs go the way of the horse and buggy, no longer found in our pathways of life, just too slow for us modern folks? After all, the rumor is that the CPU can't even keep up with the next increment of Moore's law. The CPU makers' response has been to stuff more and more computing cores into each little socket in order to reach the increased performance.
But the user community has found out that while the parallel architecture of the multi-core CPU does work great with independent processes (e.g., reading email and playing solitaire at the same time), they also need new parallel processing algorithms and new parallel structured solutions to begin to approach the flop capacity they have read about on the CPU benchmark data sheet.
Then again, perhaps the CPU's predicted demise is likewise based on false data points, and it turns out that the GPU's performance actually is a mythical level derived from special tuning situations on special benchmarks. The answer to this question is not simple. To assist with this issue, this presentation highlights ten logical computing architecture and/or problem factors to be called to the user's attention before they replace their CPU-based system with a GPU-based system. These factors are for the computational scientist to consider, analyze, and answer for themselves. The question that won't be answered in this presentation is when, if ever, the GPU may actually be able to push the good old commercially available CPU off that important shelf where it proudly sits as the current king.
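One standard way to see why data-sheet flop capacity and delivered speedup diverge on parallel hardware (this is textbook background, not part of the talk abstract) is Amdahl's law: if only a fraction p of a program parallelizes, n processors can never speed it up by more than 1/(1-p). A minimal sketch:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: overall speedup when a fraction p of the work
    parallelizes perfectly across n processors and the rest stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a "95% parallel" code tops out well below the hardware's core count:
for n in (16, 256, 4096):
    print(n, round(amdahl_speedup(0.95, n), 1))   # prints 9.1, 18.6, 19.9
```

The serial fraction, not the core count, quickly becomes the ceiling, which is one reason both multi-core CPU and GPU benchmarks can overstate what a real application will see.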
Oct. 28 What motivates young adults to have interest in science? What is Science Education & why is it important?
Prof. Gil Naizer
Dept. of Curriculum & Instruction, Texas A&M University-Commerce
Prof. Naizer received his Ph.D. in Science Education from Texas A&M College Station in 1993. He has been at A&M-Commerce for 13 years and is PI or co-PI on several funded projects. Areas of interest include science teacher development, motivating students to pursue STEM fields, and project-based learning. More information about Prof. Naizer and his research can be found at http://faculty.tamu-commerce.edu/gnaizer/
The session will include an overview of current and recently funded projects intended to motivate public school students to pursue STEM careers. The current Maximizing Motivation, Targeting Technology (M2T2) project is an NSF-funded project providing engaging experiences for middle school students. Project STEEM was recently completed, and a new grant, STEM SKILLS, will start in December. We will also explore the following: What motivates young adults to have interest in science? What is science education, and why is it important?
Oct. 19 Special Seminar 12:30-1:30pm, Science 103 (Lunch will be provided)
The Trojan Horse Method as a tool for nuclear astrophysics
Dr. Gianluca Pizzone
Laboratori Nazionali del Sud INFN, Catania, Italy
Dr. Gianluca Pizzone received his Ph.D. in 2001 from the University of Catania in Italy. His major research interest is in experimental nuclear astrophysics with both stable and radioactive beams.
Direct measurements in the last decades have highlighted a new problem related to the lowering of the Coulomb barrier between the interacting nuclei due to the presence of "electron screening" in laboratory measurements. It was systematically observed that the presence of the electronic cloud around the interacting ions in measurements of nuclear reaction cross sections at astrophysical energies gives rise to an enhancement of the astrophysical S(E)-factor as lower and lower energies are explored. Moreover, at present such an effect is not well understood, as the value of the screening potential extracted from these measurements is higher than the upper limit of theoretical predictions (the adiabatic limit). On the other hand, the electron screening potential in laboratory measurements is different from that occurring in stellar plasmas; thus the quantity of interest in astrophysics is the so-called "bare nucleus cross section". This quantity can only be extrapolated in direct measurements. These are the reasons that led to a considerable growth of interest in indirect measurement techniques, and in particular the Trojan Horse Method (THM). Results concerning bare nucleus cross section measurements will be shown in several cases of astrophysical interest. In those cases the screening potential evaluated by means of the THM will be compared with the adiabatic limit and with results arising from extrapolation in direct measurements.
Oct. 14 Neutron-rich matter, neutron stars, and their crusts
Prof. Charles J. Horowitz
Dr. Horowitz is a Professor of Physics at Indiana University. Prof. Horowitz received his Ph.D. from Stanford University in 1981. He was a postdoctoral Fellow at the Niels Bohr Institute in Copenhagen and MIT before joining the faculty first at MIT and then at Indiana University. He is an internationally recognized pioneer in several areas of astrophysics, computational sciences and nuclear physics.
He is also a Fellow of the American Physical Society. More information about Prof. Horowitz can be found at
Compress matter to great densities and electrons react with protons to make neutron-rich matter. This material is at the heart of many fundamental questions in nuclear physics and astrophysics. What are the high-density phases of Quantum Chromodynamics (our theory of quarks and gluons)? Where did the chemical elements come from? What is the structure of many compact and energetic objects in the heavens, and what determines their electromagnetic, neutrino, and gravitational-wave radiations? Moreover, neutron-rich matter is being studied with an extraordinary variety of new tools such as the Facility for Rare Isotope Beams (FRIB), an accelerator being built at Michigan State University, and the Laser Interferometer Gravitational Wave Observatory (LIGO). We describe the Lead Radius Experiment (PREX), which is using parity violation to measure the neutron radius in 208Pb. This has important implications for neutron-rich matter, neutron stars, and their crusts. We model neutron-rich matter using large-scale molecular dynamics simulations. We find neutron star crust to be the strongest material known, some 10 billion times stronger than steel. It can support large mountains. These concentrated masses, on rapidly rotating stars, can generate detectable oscillations of space and time known as gravitational waves.
Oct. 7 STELLAR TRACERS: PROBES OF PHYSICS AND GALACTIC EVOLUTION
Prof. Peter M. Frinchaboy III
Dept. of Physics & Astronomy, Texas Christian University
Sept. 30 Shedding Light on Dark Matter with Gravitational Lensing
Prof. Chuck Keeton
Department of Physics & Astronomy, Rutgers, the State University of New Jersey
Dr. Chuck Keeton is an associate professor of physics and astronomy at Rutgers University. Like other astronomers of his generation, Dr. Keeton attributes his interest in space to the success of the Voyager missions and the Space Shuttle program in the 1970s and 1980s. After earning a B.A. from Cornell University and Ph.D. from Harvard University, Dr. Keeton did research at the University of Arizona and the University of Chicago before joining the faculty of Rutgers University in 2004. Dr. Keeton has observed with the Hubble Space Telescope as well as observatories in Arizona, Hawaii, and Chile. His research has been featured by National Public Radio, MSNBC.com, and New Scientist magazine. Earlier this year Dr. Keeton received the Presidential Early Career Award for Scientists and Engineers in a ceremony at the White House. More information about Dr. Keeton and his research can be found at
In 1936 Einstein used his theory of relativity to predict that the bending of light by a star's gravity could create multiple images of a more distant star. Today there are many observed cases where the gravity of a distant galaxy bends light from an even more distant quasar. This "gravitational lensing" provides a unique opportunity to study the invisible dark matter thought to surround all galaxies. In particular, we can use gravitational lensing to discover dwarf galaxies that are made entirely of dark matter and thus invisible. The abundance of "dark dwarfs" is sensitive to the nature of the dark matter particle. Existing data reveal the average amount of dark matter substructure in galaxies, and future large samples hold great promise for revealing even more about the exotic substance that permeates the universe.
Sept 23 Maxwell, Einstein, and Their Impossibilities
Prof. Mark G. Raizen
Department of Physics and Center for Nonlinear Dynamics, The University of Texas at Austin
Dr. Mark Raizen holds a Sid W. Richardson Foundation Regents Chair and is a Professor of Physics at the University of Texas at Austin. He was born in New York City, where generations of his family had resided since the 1840s. While he comes from a long line of medical doctors, dating back to the Civil War, Dr. Raizen's life took a different path. In his childhood, Dr. Raizen was influenced by his uncle, Dr. Robert F. Goldberger, former provost of Columbia University and deputy director for science at the NIH, to pursue a scientific career. Like his mother, aunt, and uncle, Raizen attended The Walden School on the Upper West Side, until his family moved to Israel. He graduated from De Shalit High School and received his undergraduate degree in mathematics from Tel Aviv University in 1980. He continued his graduate education at the University of Texas at Austin, under the guidance of Steven Weinberg (Nobel Prize in Physics, 1979) and Jeff Kimble (California Institute of Technology). Dr. Raizen completed his Ph.D. in 1989. From 1989 to 1991, he was a National Research Council (NRC) post-doc at the Time and Frequency Division of the National Institute of Standards and Technology, working with David Wineland and James Bergquist. In 1991, Dr. Raizen returned to Austin and The University of Texas, where he became an assistant professor of physics. He was promoted to associate professor in 1996 and full professor in 2000. Dr. Raizen holds the Sid W. Richardson Foundation Regents Chair, one of only four such chairs in the physics department. Dr. Raizen started his scientific career in theoretical particle physics in 1984 with Steven Weinberg, his mentor. In 1985, he moved into experimental physics, where he began a close association with Jeff Kimble. In his graduate work, Dr. Raizen was instrumental in one of the first experiments that measured squeezed states of light and also observed, for the first time, the Vacuum Rabi splitting in the optical domain. While at NIST, Dr.
Raizen developed the first linear ion trap, which has become the basis for quantum information with trapped ions. At the University of Texas at Austin, the research program in the Raizen group uses laser cooling and trapping of neutral atoms to study many fundamental problems. One of the most important results was the first direct observation of the quantum suppression of chaos. In other experiments, Dr. Raizen and his group investigated quantum transport of atoms in an accelerating optical lattice. They studied the loss mechanism during the acceleration and determined that it is due to quantum tunneling. Surprisingly, for short times they found a deviation from the exponential decay law in the survival probability. This is a manifestation of a basic quantum effect predicted over forty years ago by Leonid Khalfin but not observed until now. This short-time deviation from exponential decay was then used to suppress or enhance the decay rate, effects known as the Quantum Zeno effect or Anti-Zeno effect. In recent years the focus of the experimental research has shifted towards many-body physics. Towards this goal, Dr. Raizen and his group have built up two experiments with Bose-Einstein condensates of rubidium and sodium. They have developed a unique system for the study and control of quantum statistics of atoms and quantum entanglement. The system includes a condensate in an optical box trap together with single-atom detection. Dr. Raizen pioneered a totally new approach to producing ultra-cold atoms by coherent slowing of supersonic beams. Using an atomic paddle, a slow monochromatic beam of ground-state helium was produced. In a different approach, pulsed magnetic fields were used to stop paramagnetic atoms and molecules. To further cool these particles, Dr. Raizen and his collaborators introduced the concept of a one-way barrier, or one-way wall, which is used to accumulate atoms or molecules in an optical tweezer.
This method was realized experimentally by the Raizen Group in December 2007. This cooling method is an exact physical realization of informational cooling, originally proposed by Leo Szilard in 1929. That proposal used the concept of information entropy to resolve the paradox of Maxwell's Demon. Together, these methods enable the trapping and cooling of ultra-cold atoms that span most of the periodic table, as well as many molecules. This method will be applied to the trapping of spin-polarized hydrogen, deuterium, and tritium for purposes of atomic spectroscopy and precision measurement of beta decay. The latter closes a circle that started with Dr. Raizen's work in particle physics, combining new approaches in atomic physics to address fundamental questions. Over the past two years, Dr. Raizen and his group built a new experiment to study Brownian motion of a bead of glass held in optical tweezers in air. In 1907, Albert Einstein published a paper in which he considered the instantaneous velocity of Brownian motion, and showed that it could be used to test the Equipartition Theorem, one of the basic tenets of statistical mechanics. In this paper, Einstein concluded that the instantaneous velocity would be impossible to measure in practice due to the very rapid randomization of the motion. In the spring of 2010, the Raizen Group measured the instantaneous velocity of a Brownian particle, over 100 years after the original prediction by Einstein. The velocity data was used to verify the Maxwell-Boltzmann velocity distribution and the equipartition theorem for a Brownian particle.
Awards and Honors:
2008: Willis Lamb Medal in Laser Science and Quantum Optics
2002: Max Planck Award from the Max Planck Society and the Alexander von Humboldt Foundation
1999: I.I. Rabi Prize in Atomic, Molecular, and Optical Physics, American Physical Society
1993-1998: National Science Foundation Young Investigator Award
1992-1995: Office of Naval Research Young Investigator Award
1992-1994: Alfred P. Sloan Foundation Research Fellow
1991-1993: The Sid W. Richardson Foundation Regents Chair Fellow
1989-1991: National Research Council Postdoctoral Fellowship
1988-: IBM Graduate Fellowship
Dr. Raizen is also a Fellow of the American Physical Society and the Optical Society of America.
In 1871, James Clerk Maxwell proposed a thought experiment, and in 1907 Albert Einstein made a prediction. Both men said that the experiments are impossible to perform in practice. In this talk I will show how the impossible is now possible.
Sept 16 Connections between Stellar Evolution and Nuclear Physics
Prof. R.E. Tribble
Department of Physics and Astronomy and the Cyclotron Institute, Texas A&M University
Distinguished Professor of Physics – Texas A&M University, College Station, TX, USA
Name: Robert E. Tribble
Born in Mexico, Missouri, January 7, 1947
1969: Graduated from the University of Missouri – Columbia
1973: Ph.D. from Princeton University
· 1973~present: Assistant Professor to Distinguished Professor, Texas A&M University
· 1977~1978: Visiting Scientist, Max Planck Institute, Heidelberg, Germany
· 1979~1987: Head, Department of Physics, Texas A&M University
· 1987~1988: Visiting Scientist, Los Alamos and Lawrence Livermore National Lab
· 2003~present: Director, Cyclotron Institute, Texas A&M University
U.S. Nuclear Science Advisory Committee (member, 1991-1994, chair 2006-2009)
International Nuclear Physics Conference organizing committee (2001, 2004, 2007)
Organization for Economic Cooperation and Development Global Science Forum for nuclear physics (2006-2008)
IUPAP working group on nuclear physics
Principal author of the 2007 U.S. Long Range Plan for Nuclear Science
Facility for Rare Isotope Beams Science Advisory Committee
Chair Elect, Division of Nuclear Physics of the American Physical Society
Editorial board for Reports on Progress in Physics
Member or chair of Program Advisory Committees at the RIBF at RIKEN, Japan; the NSCL, Michigan State University, USA; the Fundamental Nuclear Physics Beam Line at the Spallation Neutron Source, Oak Ridge National Laboratory, USA; and the TRIUMF Laboratory, Canada.
Member of scientific advisory committees for the Physics Division at Argonne National Lab; the Facility for Rare Isotope Beams, Michigan State University; the Thomas Jefferson National Accelerator Lab; and the KoRIA project in South Korea.
Over 280 publications in refereed journals
Experiments in fundamental symmetries
Experiments in nuclear reactions and scattering
Measurements of nuclear reaction rates for nuclear astrophysics
Determination of the gluon spin content of the proton
Recent research experience in electroweak interactions and symmetries (co-spokesperson of the TWIST collaboration at TRIUMF); quark/parton distributions and gluon spin content in nuclei (member of the STAR collaboration at RHIC and member of NUSEA (Fermilab E866)); nuclei far from stability; production and use of radioactive ion beams (RIBs) for nuclear astrophysics – designed and built a recoil spectrometer for RIBs, developed a new technique for obtaining direct capture reaction rate information for nuclear astrophysics, and used the system for nuclear reaction and scattering studies and for production of nuclei for β-γ decay studies.
Many years ago Hans Bethe (and others) realized that nuclear fusion reactions were the only source that could provide the energy to fuel our sun. We now know that nuclear reactions and interactions, along with mass, are key factors in dictating the ultimate fate of a star. The life of many stars ends in an explosion of energy and the production of a wide range of nuclear isotopes. In order to develop a full understanding of this explosive nuclear synthesis, we need information about reactions, masses, and lifetimes of radioactive isotopes. This complicated problem has led nuclear physicists to develop new tools to probe nuclear properties and reaction rates for isotopes that often are far from stability. In my presentation, I will provide an overview of our present knowledge of nuclear processes that are important in various stages of stellar evolution. And I will briefly discuss how we are extending our reach to the very edges of stability.
Sept 9 Introduction to Kalman Filters
Dr. Mike Grabbe
L-3 Communications, Greenville, Texas
Dr. Mike Grabbe is an Engineering Fellow with L-3 Communications in Greenville, TX. He works primarily on the design of target geo-location and tracking algorithms and the analysis of geo-location system performance. Prior to joining L-3, Mike worked at Texas Instruments and Raytheon in the areas of target tracking and missile guidance. Mike has a B.S. in General Engineering from the U.S. Naval Academy, an M.S. in Electrical Engineering from SMU, an M.S. in Applied Mathematics from the University of Arkansas, and a Ph.D. in Mathematical Sciences / Robotics from Clemson University. He holds two geo-location and tracking algorithm patents and is a Senior Member of the IEEE.
A Kalman filter is an optimal recursive linear estimator for the output of a dynamic system driven by noise. Kalman filters are widely used in defense applications, especially in the areas of target geo-location, target tracking, missile guidance, and aided inertial navigation. They have also found application in other disciplines involving the prediction of statistical time series, such as mathematical finance. In the simplest applications, the dynamics consist of a system of linear time-invariant differential equations, and the input is stationary white Gaussian noise. As a result, the system output to be estimated is a stationary correlated Gaussian random process. In target tracking applications, such a random process is used to model target motion in order to encompass the uncertainties involved, such as whether the target will maintain a constant velocity vector or will maneuver. The power spectral density of the input noise is then used as a tuning parameter to adjust the filter bandwidth for good tracking performance. The topics to be covered in this presentation include historical information, applications, system dynamics, algorithm overview, a target tracking example, and references. The material to be presented is used for an internal training course at L-3 Communications.
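The recursive predict/update cycle described above can be made concrete with a short sketch (a generic textbook example, not the L-3 course material; the constant-velocity model, noise values, and function name are illustrative assumptions):

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=0.01, r=1.0):
    """Minimal linear Kalman filter for a constant-velocity target.

    State x = [position, velocity]^T; each measurement z is the position
    plus noise.  q scales the process noise, r is the measurement variance.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # measurement matrix
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise covariance
    R = np.array([[r]])                            # measurement noise covariance
    x = np.array([[zs[0]], [0.0]])                 # initial state estimate
    P = 10.0 * np.eye(2)                           # initial error covariance
    estimates = []
    for z in zs:
        # Predict: propagate state and covariance through the dynamics.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend the prediction with the new measurement.
        y = np.array([[z]]) - H @ x                # innovation
        S = H @ P @ H.T + R                        # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return estimates
```

Fed noisy positions of a target moving at a constant 2 units per step, the filter's velocity estimate settles near 2 even though velocity is never measured directly, which is the essence of how such filters are used in tracking.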
Sept 2 NANOPARTICLE-INDUCED CHANGES IN THE ELECTRO-OPTICAL PROPERTIES OF CERTAIN LIQUID-CRYSTAL-BASED MICROSTRUCTURES
Prof. Suresh C Sharma
Department of Physics, University of Texas at Arlington