Tipup's Content - Page 3 - InviteHawk - Your Only Source for Free Torrent Invites

Tipup

Everything posted by Tipup

  1. Economists from the National University of Singapore (NUS) have completed an extensive study revealing that exposure to air pollution over several weeks is not just unhealthy, it can also reduce employee productivity. Associate Professor Alberto Salvo from the Department of Economics at the NUS Faculty of Arts and Social Sciences and an author of the study, explained, "Most of us are familiar with the negative impact air pollution can have on health, but as economists, we wanted to look for other socioeconomic outcomes. Our aim with this research was to broaden the understanding of air pollution in ways that have not been explored. We typically think that firms benefit from lax pollution regulations, by saving on emission control equipment and the like; here we document an adverse effect on the productivity of their work force." The results of this study were published in the American Economic Journal: Applied Economics on 3 January 2019. The link between air pollution and productivity The NUS team, including Associate Professor Haoming Liu and Dr. Jiaxiu He, spent over a year gathering information from factories in China. This involved interviewing managers at one dozen firms in four separate provinces, before obtaining access to data for two factories, one in Henan and the other in Jiangsu. The factories were textile mills, and workers were paid according to each piece of fabric they made. This meant that daily records of productivity for specific workers on particular shifts could be examined. Hence, the researchers compared how many pieces each worker produced each day to measures of the concentration of particulate matter that the worker was exposed to over time. A standard way of determining the severity of pollution is to measure how many fine particles less than 2.5 micrometres in diameter (PM2.5) are in the air. The majority of people living in developing countries are exposed to particle concentrations that health authorities deem harmful. 
At the two factory locations, pollution levels varied significantly from day to day, and overall they were consistently high. At one location, PM2.5 levels averaged 85 micrograms per cubic metre, about seven times the safe limit set by the US Environmental Protection Agency. Interestingly, unlike previous literature, the team found that daily fluctuations in pollution did not immediately affect the productivity of workers. However, when they measured more prolonged exposures of up to 30 days, a definite drop in output could be seen. The study was careful to control for confounding factors such as regional economic activity. "We found that an increase in PM2.5, by 10 micrograms per cubic metre sustained over 25 days, reduces daily output by 1 per cent, harming firms and workers," says Associate Professor Liu. "The effects are subtle but highly significant." The researchers remain agnostic about why productivity goes down when pollution goes up. "High levels of particles are visible and might affect an individual's well-being in a multitude of ways," explained Associate Professor Liu. "Besides entering via the lungs and into the bloodstream, there could also be a psychological element. Working in a highly polluted setting for long periods of time could affect your mood or disposition to work." First-of-its-kind study examining prolonged exposure to air pollution Research on how living and working in such a polluted atmosphere affects productivity is very limited, partly because worker output is difficult to quantify. One previous study that focused on workers packing fruit in California found a large and immediate effect from exposure to ambient PM2.5, namely that when levels rise by 10 micrograms per cubic metre, workers become 6 per cent less productive on the same day. That study's estimate appears large for a developing country.
"Labourers in China can be working under far worse daily conditions while maintaining levels of productivity that look comparable to clean air days. If the effect were this pronounced and this immediate, we think that factory and office managers would take more notice of pollution than transpired in our field interviews. Therefore, our finding that pollution has a subtle influence on productivity seems realistic," Associate Professor Liu added. All the data collected in the NUS study are being made open access to serve as a resource for other researchers to accelerate progress in this topic. "This was a key criterion for inclusion in our study," Associate Professor Salvo added. "We wanted to share all the information we gathered so that other researchers may use it as well, hopefully adding to this literature's long-run credibility. We saw no reason why data on anonymous workers at a fragmented industry could not be shared."
  2. A giant toadstool that swallows up vitamins and nutrients in the intestines and kidneys: this is how one receptor that absorbs vitamin B12 in the small intestine looks. For the first time, researchers from Aarhus University, Denmark, have an insight into an as-yet unknown biology which has persisted through hundreds of millions of years of evolution. "What we're looking at is evolution at a structural level. A receptor with a toadstool structure that stems from way back to the common ancestors of insects and humans," says Associate Professor Christian Brix Folsted Andersen from the Department of Biomedicine at Aarhus University in Denmark. Vitamin B12 is the vitamin that humans most often lack, even with a healthy diet, and a deficiency can lead to serious anaemic diseases and symptoms from the central nervous system. With his research group, Andersen has now described the body's largest cell receptor: an ancient, previously unknown construction that was created by the merger of two proteins, and which, for reasons scientists do not yet understand, is preserved as a colossal structure in molecular terms. In the 1960s, scientist Dorothy Hodgkin received the Nobel Prize for her scientific breakthrough in determining the structure of vitamin B12. Now, Andersen and colleagues report the structure of this receptor, more than 1,000 times larger, which enables B12 to be absorbed in the body. The research results have been published in the scientific journal Nature Communications, and shed light on the issue of faulty vitamin B12 absorption and the loss of nutrients in the kidneys. "With the help of X-ray crystallography, we've succeeded in determining how the receptor is able to organise itself in a previously unknown way in human biology. With this new knowledge, we're finally able to explain why thousands of people around the world with specific genetic changes are unable to absorb the vitamin," explains Andersen over the phone from the University of Washington in the U.S.
"But in my mind, the most interesting aspect is that with the help of advanced electron microscopy, which I'm learning about in detail here in Seattle, we have been able to see how the receptor as a whole looks, and thus also see how the receptor absorbs B12 vitamin in the intestines and various other substances in the kidneys. It's fantastic to have the opportunity to see this as the first person ever," he says. Andersen points out that in an evolutionary context, there is something very mysterious about the receptor as it does not resemble anything seen previously. "At the same time, by comparing genes, we can see that the receptor has the same structure as we find in insects and that it must have been evolved very early in evolution—many millions of years ago, and thus long before the origin of mammals," he says. Andersen's research is a continuation of his longstanding work with Søren K. Moestrup into B12 transport. In 2010, this research led to new and pivotal knowledge about how the receptor specifically recognises B12 in the small intestine. "The research we're carrying out today is a continuation of decades of research into vitamin B12. Indeed, 25 years ago, we had no idea about what was going on the shadowy recesses of the intestines. Now, the lights have been turned on, and we can see how it all works in a way that none of us could have imagined," says Moestrup. "Apart from obviously being very satisfying from a scientific viewpoint, it also opens completely new perspectives for medical treatment. For example, we now have in-depth knowledge about a receptor that could evidently be used to transport drugs into the kidneys and intestines," he says.
  3. Precise simulations of the movement and behavior of crowds can be vital to the production of digital sequences or the creation of large structures for crowd management. However, the ability to quantitatively predict the collective dynamics of a group responding to external stimulation remains a largely open issue; existing approaches rely primarily on models in which each individual's actions are simulated according to empirical behavioral rules. Until now, there was no experimentally tested physical model that describes the hydrodynamics of a crowd without assuming behavioral rules. Researchers from a laboratory affiliated with the CNRS, l'ENS de Lyon, and l'Université Claude Bernard Lyon 1 have provided a first equation of this type, deduced from a measurement campaign conducted on crowds numbering tens of thousands of individuals. The physicists focused on cohorts of runners at the beginning of a marathon, as they are guided to the starting line by a row of organizers in successive sequences of walking and halting. This protocol creates a periodic and controlled disturbance similar to the stimulations that are typically used to probe the mechanical response of fluids. Remarkably, group behavior varies very little from one assembly of runners to another, one race to another, and one country to another, with speed information propagating consistently at a little over one meter per second. The researchers established a generic description that can precisely predict crowd flows: flows observed in a road race in Chicago in 2016 helped predict those of thousands of runners at the start of the Paris marathon in 2017. By using techniques from fluid mechanics to analyze images from the starting corrals of five races, the researchers successfully measured crowd speed at each instant, subsequently describing it as a liquid flow.
Their results show that information regarding the speed to adopt spreads to the back of the group in the form of waves measuring hundreds of meters, with no loss of intensity. In contrast, any change to the crowd's movement trajectory dissipates quickly, spreading just a few meters through the crowd. In short, speed information spreads easily through this fluid, while orientational information does not. The physicists now want to study the response of groups to extreme disturbances in order to test the limits of their hydrodynamic description of crowds.
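The key quantitative finding, that speed information travels backward through the crowd at a constant pace without attenuating, implies a simple delay relationship. The sketch below is an illustration of that implication, not the researchers' hydrodynamic model; the 1.2 m/s value is an assumed stand-in for "a little over one meter per second".

```python
# Illustrative consequence of constant-speed, lossless wave propagation
# through a starting corral (assumed speed; not the paper's equation).

WAVE_SPEED_M_S = 1.2  # "a little over one meter per second" (assumption)

def signal_delay(distance_back_m: float) -> float:
    """Seconds for a speed change at the front of the corral to reach
    a runner `distance_back_m` metres behind the start line."""
    return distance_back_m / WAVE_SPEED_M_S

# Runners deep in a large corral feel the "go" wave noticeably later:
for d in (50, 200, 500):
    print(f"{d:4d} m back: {signal_delay(d):6.1f} s")
```

Because the waves travel hundreds of meters "with no loss of intensity", this delay is the only distortion in the speed signal, which is what makes the crowd's response so predictable from one race to another.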
  4. After developing a method to control exciton flows at room temperature, EPFL scientists have discovered new properties of these quasiparticles that can lead to more energy-efficient electronic devices. They were the first to control exciton flows at room temperature. And now, the team of scientists from EPFL's Laboratory of Nanoscale Electronics and Structures (LANES) has taken their technology one step further. They have found a way to control some of the properties of excitons and change the polarization of the light they generate. This can lead to a new generation of electronic devices with transistors that undergo less energy loss and heat dissipation. The scientists' discovery forms part of a new field of research called valleytronics and has just been published in Nature Photonics. Excitons are created when an electron absorbs light and moves into a higher energy level, or "energy band" as it is called in solid-state physics. This excited electron leaves behind an "electron hole" in its previous energy band. And because the electron has a negative charge and the hole a positive charge, the two are bound together by an electrostatic force called a Coulomb force. It's this bound electron-hole pair that is referred to as an exciton. Unprecedented quantum properties Excitons exist only in semiconducting and insulating materials. Their extraordinary properties can be easily accessed in 2-D materials, which are materials whose basic structure is just a few atoms thick. The most common examples of such materials are carbon and molybdenite. When such 2-D materials are combined, they often exhibit quantum properties that neither material possesses on its own. The EPFL scientists thus combined tungsten diselenide (WSe2) with molybdenum diselenide (MoSe2) to reveal new properties with an array of possible high-tech applications.
By using a laser to generate light beams with circular polarization, and slightly shifting the positions of the two 2-D materials so as to create a moiré pattern, they were able to use excitons to change and regulate the polarization, wavelength and intensity of light. From one valley to the next The scientists achieved this by manipulating one of the excitons' properties: their "valley," which is related to the energy extrema of the electron and the hole. These valleys – which are where the name valleytronics comes from – can be leveraged to code and process information at a nanoscopic level. "Linking several devices that incorporate this technology would give us a new way to process data," says Andras Kis, who heads LANES. "By changing the polarization of light in a given device, we can then select a specific valley in a second device that's connected to it. That's similar to switching from 0 to 1 or 1 to 0, which is the fundamental binary logic used in computing."
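Kis's analogy to binary logic can be made concrete with a toy mapping. This is a conceptual illustration only, not EPFL's implementation: the article says circular polarization selects a valley and the valley carries the binary state, but which polarization maps to which valley (labelled K and K' here, the conventional names) is an assumption in this sketch.

```python
# Conceptual valleytronics encoding (illustration only, not EPFL's device):
# circular polarization of light selects an exciton valley, and the valley
# occupied encodes one bit. The sigma+/- -> K/K' assignment is assumed.

POLARIZATION_TO_VALLEY = {"sigma+": "K", "sigma-": "K'"}
VALLEY_TO_BIT = {"K": 0, "K'": 1}

def read_bit(polarization: str) -> int:
    """Map the circular polarization of emitted light to a binary value
    via the valley it addresses."""
    return VALLEY_TO_BIT[POLARIZATION_TO_VALLEY[polarization]]

print(read_bit("sigma+"), read_bit("sigma-"))  # the two logic states
```

Flipping the polarization flips the valley and hence the bit, which is the "switching from 0 to 1 or 1 to 0" in the quote.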
  5. A mechanically compromised skull can result from enlarged fontanelles and smaller frontal bones due to defective migration and differentiation of osteoblasts in the skull primordia (developing skull). The Wnt/planar cell polarity (Wnt/PCP) signaling pathway usually regulates cell migration and movement in tissues during embryonic development. In a recent study, conducted by Yong Wan and colleagues at the Center for Craniofacial Regeneration, the central research emphasis was on the Prickle1 gene, a core component of the Wnt/PCP pathway, in the skull. For the studies, Wan et al. used a missense allele of Prickle1, named Prickle1Beetlejuice (Prickle1Bj). The homozygous Prickle1Bj/Bj 'Beetlejuice' mutants were microcephalic and developed enlarged fontanelles between insufficient frontal bones, although the parietal bones were normal. The homozygous mutants had several other craniofacial defects including a midline cleft lip, incompletely penetrant cleft palate and decreased proximal-distal growth of the head. The scientists observed decreased Wnt/β-catenin and hedgehog signaling in the frontal bone condensations of the homozygous mutants. The results are now published in Scientific Reports. In the homozygous mutants, the frontal bone osteoblast precursors underwent delayed differentiation, alongside decreased expression of migratory markers, resulting in underdeveloped frontal bones. The study showed that Prickle1 protein function contributed to both migration and differentiation of bone-forming cells (osteoblast precursors), and that its disruption in the mutant animal model resulted in the defects. The homozygous mutants (Prickle1Bj/Bj) developed cardiac outflow tract misalignment and cleft palate, contributing to perinatal death of the mutant mice. Therefore, the observed phenotypic features spanned early to late embryonic stages.
[Figure: Homozygous mutants (Prickle1Bj/Bj) are microcephalic and have defects in the neural-crest derived skull; macroscopic views of wild type (Prickle1+/+) and homozygous mutant littermates.]
By nature, the craniofacial complex contains three distinct regions: the skull vault, the cranial base and the face. The cranial base forms the floor of the braincase and the skull vault forms the roof. Bones of the cranial base form via endochondral ossification, while osteogenesis in the skull vault occurs via intramembranous ossification. Both the skull vault and cranial base are of embryonic origin (neural-crest derived or mesodermally derived). In the study model, the Beetlejuice (Bj) mutants contained a point mutation in the Prickle1 gene (C161F) that was deleterious to the function of the cytoplasmic protein Prickle1. Mutations of the protein in humans are usually associated with familial epilepsy. The mutant phenotype was consistent with another independent point mutation of Prickle1, known as C251X, which included stunted limbs and a cleft palate. While the protein product of the gene is widely expressed in the cytoplasm, little was known about its role in craniofacial osteogenesis. In the present study, Wan et al. analyzed the bones and cartilage of the head using alcian blue and alizarin red histology dyes. The homozygous mutant skulls were smaller, and the proximal-distal length of the head was reduced with an increased medial-lateral width of the skull. The results showed a statistically significant decrease in the contribution of the nasal bone to the total length of the skull vault relative to wild type mice (Prickle1+/+).
[Figure: No change in the rate of proliferation or apoptosis in the Prickle1Bj/Bj frontal bones; embryonic stage 12.5 (E12.5) Prickle1+/+ and Prickle1Bj/Bj littermates assayed by haematoxylin and eosin staining.]
In contrast, in the homozygous mutant (Prickle1Bj/Bj), the contribution of the frontal bone to the total length increased, while the proportion of the parietal bone remained unchanged. Taken together, the results indicated that Prickle1 protein function was required during all stages of frontal bone development. Wan et al. focused on the function of Prickle1 in the developing skull vault by examining the tissue distribution of the protein in wild type vs. mutant embryos. They found that the Prickle1 mutation disrupted two processes during frontal bone development: osteoblast differentiation, which was delayed, and migration in the frontal bone, which was reduced. Such frontal bone defects are also observed in the phenotypic spectrum of cleidocranial dysplasia (CCD). The observed frontal bone insufficiency could potentially result from defects in proliferation and cell death. To test this hypothesis, the scientists examined frontal bone condensation in wild type vs. mutant animals at embryonic stage 12.5 (E12.5), when frontal bone condensation typically occurs, using haematoxylin and eosin (H&E) staining. Thereafter, they conducted TUNEL apoptosis assays, which indicated very few apoptotic cells (depicted via TUNEL-positive uptake) in either genotype.
[Figure: Delayed ossification in the frontal bone primordium; digoxigenin (DIG)-labeled section in situ hybridization of E12.5 Prickle1+/+ and Prickle1Bj/Bj littermates showing expression of Runx2 and alkaline phosphatase.]
BrdU-labelled cell counts in mutant vs. wild type mice likewise showed no difference in the ratio of proliferating cells.
The number of actively dividing cells was then assessed using phospho-histone H3 immunohistochemistry, which showed no difference in the number of dividing cells between littermates. Since there was no change in cell death or proliferation, the scientists next tested whether osteogenic differentiation was occurring correctly. For this, Wan et al. conducted RNA in situ hybridization experiments to assess the expression of alkaline phosphatase (ALP) and Osterix (OSX, also known as Sp7) in the pre-osteoblasts and osteoblasts of the frontal bones. They determined the expression of RUNX2, an early marker of osteoblast commitment in the skull, and of ALP, a marker of more mature osteoblasts. By embryonic stage 15.5 (E15.5), the expression of Runx2, ALP and OSX had decreased in the ectocranial layer of the mutant frontal bones compared with wild type littermates. The scientists concluded that intramembranous ossification (conversion of mesenchymal tissue into bone) was delayed in the frontal bone, resulting in the hypoplastic Beetlejuice mutants.
[Figure: Osteoblast migration is decreased in the frontal bone primordium; DIG-labeled section in situ hybridization for Twist1, Msx1, Msx2 and Engrailed1 (En1) in E12.5 Prickle1+/+ and Prickle1Bj/Bj coronal sections.]
Wan et al. further determined whether a defective signaling system led to the observed delayed frontal bone osteogenesis by studying the level of canonical Wnt and Hedgehog (HH) signaling in the mutants. The results suggested that the levels of HH signaling (required for cranial bone development) were, indeed, defective in the mutant animals. Finally, they conducted in situ hybridization for markers of osteoblast migration (Engrailed1 (En1), Twist1, Msx1 and Msx2) in the wild type and mutant littermates. The expression level of the markers was reduced in the frontal bone primordia of the mutants.
The results suggested that Prickle1 protein function was necessary to mediate cell migration of osteoblast precursors during all stages of skull vault development. In this way, Wan et al. established the Beetlejuice mutant mouse as a new model to understand the etiology of microcephaly. The number of animal models currently in use to determine the growth patterns of the face and skull in microcephaly is limited. The scientists combined genetic, molecular and physical analyses of the Prickle1 mutants to show how the mutation contributes to decreased growth of the craniofacial region in the new mouse model. Wan et al. will continue the work to understand how cell migration and the alteration of each compartment (brain, skull vault and cranial base) contribute to the development of microcephaly, calvarial patterning and growth.
  6. China became the third country to land a probe on the Moon on Jan. 2. But, more importantly, it became the first to do so on the far side of the moon, often called the dark side. The ability to land on the far side of the moon is a technical achievement in its own right, one that neither Russia nor the United States has pursued. The probe, Chang'e 4, is symbolic of the growth of the Chinese space program and the capabilities it has amassed, significant for China and for relations among the great powers across the world. The consequences extend to the United States as the Trump administration considers global competition in space as well as the future of space exploration. One of the major drivers of U.S. space policy historically has been competition with Russia, particularly in the context of the Cold War. If China's successes continue to accumulate, could the United States find itself engaged in a new space race? China's achievements in space Like the U.S. and Russia, the People's Republic of China first engaged in space activities during the development of ballistic missiles in the 1950s. While it did benefit from some assistance from the Soviet Union, China developed its space program largely on its own. The road was far from smooth: Mao Zedong's Great Leap Forward and the Cultural Revolution disrupted these early programs. The Chinese launched their first satellite in 1970. Following this, an early human spaceflight program was put on hold to focus on commercial satellite applications. In 1978, Deng Xiaoping articulated China's space policy, noting that, as a developing country, China would not take part in a space race. Instead, China's space efforts have focused on both launch vehicles and satellites—including communications, remote sensing and meteorology. This does not mean the Chinese were not concerned about the global power space efforts can generate.
In 1992, they concluded that having a space station would be a major sign and source of prestige in the 21st century. As such, a human spaceflight program was re-established, leading to the development of the Shenzhou spacecraft. The first Chinese astronaut, or taikonaut, Yang Liwei, was launched in 2003. In total, six Shenzhou missions have carried 12 taikonauts into low Earth orbit, including two to China's first space station, Tiangong-1.
[Photo: the first image of the moon's far side, taken by China's Chang'e 4 probe; provided Jan. 3, 2019, by the China National Space Administration via Xinhua News Agency.]
In addition to human spaceflight, the Chinese have also undertaken scientific missions like Chang'e 4. Its first lunar mission, Chang'e 1, orbited the moon in October 2007, and a rover landed on the moon in 2013. China's future plans include a new space station, a lunar base and possible sample return missions from Mars. A new space race? The most notable feature of the Chinese space program, especially compared to the early American and Russian programs, is its slow and steady pace. Because of the secrecy that surrounds many aspects of the Chinese space program, its exact capabilities are unknown. However, the program is likely on par with its counterparts. In terms of military applications, China has also demonstrated significant skills. In 2007, it undertook an anti-satellite test, launching a ground-based missile to destroy a failed weather satellite. While successful, the test created a cloud of orbital debris that continues to threaten other satellites. The movie "Gravity" illustrated the dangers space debris poses to both satellites and humans. In its 2018 report on the Chinese military, the Department of Defense reported that China's military space program "continues to mature rapidly."
Despite its capabilities, the U.S., unlike other countries, has not engaged in any substantial cooperation with China because of national security concerns. In fact, a 2011 law bans official contact with Chinese space officials. Does this signal a new space race between the U.S. and China? As a space policy researcher, I can say the answer is yes and no. Some U.S. officials, including Scott Pace, the executive secretary for the National Space Council, are cautiously optimistic about the potential for cooperation and do not see the beginning of a new space race. NASA Administrator Jim Bridenstine recently met with the head of the Chinese space program at the International Astronautical Congress in Germany and discussed areas where China and the U.S. can work together. However, increased military presence in space might spark increased competition. The Trump administration has used the threat posed by China and Russia to support its argument for a new independent military branch, a Space Force. Regardless, China's abilities in space are growing, to the extent that they are now reflected in popular culture. In Andy Weir's 2011 novel "The Martian" and its later film version, NASA turns to China to help rescue its stranded astronaut. While competition can lead to advances in technology, as the first space race demonstrated, a greater global capacity for space exploration can also be beneficial, not only for saving stranded astronauts but for increasing knowledge about the universe where we all live. Even if China's rise heralds a new space race, not all consequences will be negative.
  7. An adolescent experiences the death of his mother after a lengthy illness. When I ask what services he would like to receive from the school, he initially says he doesn't expect special treatment, would be embarrassed by counseling from the school mental health staff and wouldn't feel comfortable if many of his teachers asked to talk to him about his grief. At the same time, the student feels as though the school should somehow take his situation into account. "I don't know what the school should do," the student told me. "But I just lost the person I love most in my life and they act as if nothing happened." In my many years as a developmental-behavioral pediatrician who specializes in school crisis and child bereavement, I have come to believe this dilemma – that is, the need to do enough but not to overwhelm the grieving student or the adults who are trying to help – represents a major challenge for America's schools. What students seek after a major loss – but too often don't receive – is recognition of their loss by trusted adults, a genuine expression of sympathy and an offer of assistance. A common experience Loss is very common in childhood – 9 of every 10 children experience the death of a close family member or friend, and 1 of every 20 children experiences the death of a parent. In contrast, teacher preparation to support grieving students is uncommon. In a recent survey conducted by the American Federation of Teachers and the New York Life Foundation, 93 percent of teachers reported that they had never received any training on how to support grieving students. They identified this lack of training as the primary barrier that prevented them from reaching out to grieving students in their class and offering the support they knew the students needed. Worried that they would do or say the wrong thing and only make matters worse, some educators chose instead to say and do nothing.
In recognition of this problem, I offer a series of insights and recommendations that teachers can adopt to make the school experience less stressful for students who have recently lost a loved one. Although the advice is aimed at educators, surviving parents or caretakers – or anyone who cares about how to help bereaved students – can use this advice to advocate on their behalf. The consequences of inaction Saying nothing says a lot to grieving children. It communicates that adults are either unaware, uninterested or unwilling to help. It leaves children confused about what has happened and how to react. It leaves children unsupported and forces them to grieve alone. Adults should reach out to grieving children and let them know that they are aware and concerned and are available to provide support and assistance. What not to say Anything that starts with "at least" should probably be reconsidered – "at least she's not in pain anymore" or "at least you still have your father" are generally not helpful comments. They suggest that the adult is uncomfortable with the child's expression of grief and is trying to "cheer up" the grieving child in order to limit the adult's own discomfort. Don't encourage children to hide their feelings or reactions, and don't feel that you have to hide your own emotions. Be genuine and authentic. Tell grieving children that you are sorry about their loss and ask them what they are feeling and how they are doing. There isn't anything you can say that is going to make everything right again for a grieving child. So, listen more than you talk. Other guidelines of what not to say – and what to say instead – to grieving children can be found in "The Grieving Student: A Teacher's Guide." Engage peers Peers want to – and can – be an important source of support to grieving children, but often are unsure what to say or do. Provide them with advice on what to say and practical suggestions on how to be helpful.
This will help grieving children obtain critical peer support and decrease their sense of isolation. It will also reduce the likelihood that peers will instead ask repetitive and intrusive questions or tease grieving children.

Offer academic accommodations

Grieving children often experience a temporary decrease in learning ability. They may be tired from not being able to sleep, have difficulty concentrating and learning new material, or be experiencing significant disruptions in their home environment that make it difficult to study or complete homework. Grieving children should be able to view school as a place of comfort and support, especially at a time of loss. If they are worried about failing, school instead becomes a source of additional distress. Teachers should offer educational support before children demonstrate academic failure. Check in more frequently to make sure that they are learning new material and are able to keep up with the workload. Talk to other teachers, instructors and coaches and try to help grieving students balance all of their responsibilities. If a student needs to prepare for an important concert, then maybe academic teachers can lessen some of their assignments. Grieving students may need to have their workload decreased or modified temporarily. If a major report seems overwhelming, substitute shorter, more manageable assignments. If it's hard for them to stay on task to complete an individual project, consider a group project that might promote peer support.

Be more sensitive

Teachers can also introduce activities with more sensitivity. For example, if you are going to do a project for Mother's Day, introduce the activity by telling students that you realize some children may not have a mother who is alive or living with them. They can still complete the activity remembering their mother, or can choose to focus on another important female family member.
This will also help students whose mothers may be deployed in the military, incarcerated or away for other reasons.

Help children manage grief triggers

Many things may remind grieving children of the person who died and cause them to temporarily feel a resurgence of their grief. It may be a comment made by a teacher or a peer, such as "I went shopping with my mother this weekend," or a portion of a classroom lesson, such as a health education lesson that references a similar cause of death. Holidays such as Thanksgiving or the winter holidays tend to involve spending time with loved ones and may accentuate the sense of loss. Let students know that these triggers may occur and set up a safety plan. Students may be given permission to step out of the classroom briefly if they are feeling upset and worried that they will not be able to contain their emotions. Work out a signal for communicating when this occurs that doesn't draw attention to the student. Make a plan for where the student will go and who they can talk with. If students know that they will be able to leave, they often feel less overwhelmed and are more likely to remain in class and stay engaged in the lesson.

For more information

The Coalition to Support Grieving Students offers free learning modules on a wide range of issues related to grieving students, including videos and written summaries. Schools can also learn more about how to help grieving students through the Grief-Sensitive Schools Initiative.
  8. When Amelia Earhart took off in 1937 to fly around the world, people had been flying airplanes for only about 35 years. When she tried to fly across the Pacific, she – and the world – knew it was risky. She didn't make it, and was declared dead in January 1939. In the 80 years since then, many other planes have been lost around the world and never found – including Malaysia Airlines Flight 370, which disappeared over the Indian Ocean in 2014. As flight instructors and aviation industry professionals, we know that increasingly advanced technologies are getting better at tracking planes, even across great expanses of water far from land. These systems allow aircraft to navigate much more easily, and many allow real-time flight tracking across much of the globe.

Getting from place to place

From the early years of aviation up until about 2000, the main way pilots navigated was by playing connect-the-dots across a map. They would use radio direction-finding equipment to follow a route from an airport to a radio-transmitting beacon at a fixed location, and then from beacon to beacon until reaching the destination airport. Various technologies made that process easier, but the concept was still the same. That system is still in use, though decreasingly so as new technologies replace it. In the first few years of the 21st century, pilots for major airlines began to use the United States' Global Positioning System and other similar systems that use signals from orbiting satellites to calculate the plane's position. GPS is more accurate, letting pilots land easily in bad weather conditions without the need for expensive ground-based radio transmitters. Satellite navigation also lets pilots fly more directly between destinations, because they need not follow the routes from one radio beacon to the next.

Amelia Earhart, missing and declared dead Jan. 5, 1939.
Credit: Underwood & Underwood/Wikimedia Commons

There are six satellite-based navigation systems in operation. GPS, run by the United States; Galileo, run by the European Union and the European Space Agency; and Russia's GLONASS cover the whole planet, and China's BeiDou system is expected to span the globe by 2020. India's NAVIC covers the Indian Ocean and nearby areas, and Japan has begun operating the QZSS system to improve navigation in the Pacific. The systems operate independently of each other, but some satellite navigation receivers can merge data from more than one of them simultaneously, providing pilots with extremely accurate information about where they are. That can help them get where they're going, rather than going missing.

Tracking aircraft

When planes do get lost, the company or country responsible for them often starts searching; some efforts, like the search for MH 370, include many nations and businesses.

Ground-based radio beacons are found at airports and along major flight routes. Credit: Sabung.hamster/Wikimedia Commons, CC BY-SA

When all is going well, most planes are tracked by radar, which can also help air traffic controllers prevent midair collisions and give pilots directions around severe weather. When planes fly beyond the range of land-based radar, though, such as on long-haul trips over oceans, they're tracked using a method devised more than 70 years ago: Pilots periodically radio air traffic control with reports on where they are, what altitude they're flying at and what their next navigation landmark is. Over the past few years, a new method has been rolling out around the world. Called "Automatic Dependent Surveillance – Broadcast," the system sends automatic position reports from airplanes to air traffic controllers and nearby aircraft, so everyone knows who's where and avoids collisions. By 2020, the FAA will require most aircraft in the U.S.
to have an ADS-B system, which is already mandatory in several other countries. At the moment, though, ADS-B flight tracking doesn't cover remote areas of the world, because it depends on ground-based receivers to collect the information from planes. A space-based receiver system is being tested, which could eventually cover the entire planet. In addition, many airplane manufacturers sell equipment that includes monitoring and tracking software – for instance, to analyze engine performance and spot problems before they become severe. Some of this equipment can transmit real-time data on the location of the aircraft while it's in flight. Data from those systems were used in the search for MH 370, and also gave investigators early insight into the 2015 Germanwings Flight 9525 crash in the French Alps before the plane's "black box" flight data recorder was found. GPS, ADS-B and other navigation and tracking systems might have helped save, or at least find, Amelia Earhart and her navigator, Fred Noonan – either by preventing them from getting lost in the first place or by directing rescuers to their location after the plane went down. Eight decades later, planes still go missing – but it's getting harder to fly off the map.
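The direct, beacon-free routing that satellite navigation enables follows great-circle paths. As a rough illustration (not from the article; coordinates are approximate), the standard haversine formula gives the great-circle distance between two points, here the Lae-to-Howland-Island leg on which Earhart was lost:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat/lon points, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Lae, Papua New Guinea to Howland Island: the leg on which Earhart disappeared.
distance = haversine_km(-6.73, 146.99, 0.81, -176.62)
print(f"{distance:.0f} km")  # about 4,100 km
```

The same computation underlies both GPS receivers' distance estimates and flight planning along direct routes.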
  9. Messenger RNA, which can induce cells to produce therapeutic proteins, holds great promise for treating a variety of diseases. The biggest obstacle to this approach so far has been finding safe and efficient ways to deliver mRNA molecules to the target cells. In an advance that could lead to new treatments for lung disease, MIT researchers have now designed an inhalable form of mRNA. This aerosol could be administered directly to the lungs to help treat diseases such as cystic fibrosis, the researchers say. "We think the ability to deliver mRNA via inhalation could allow us to treat a range of different diseases of the lung," says Daniel Anderson, an associate professor in MIT's Department of Chemical Engineering, a member of MIT's Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), and the senior author of the study. The researchers showed that they could induce lung cells in mice to produce a target protein—in this case, a bioluminescent protein. If the same success rate can be achieved with therapeutic proteins, it could be high enough to treat many lung diseases, the researchers say. Asha Patel, a former MIT postdoc who is now an assistant professor at Imperial College London, is the lead author of the paper, which appears in the Jan. 4 issue of the journal Advanced Materials. Other authors of the paper include James Kaczmarek and Kevin Kauffman, both recent MIT Ph.D. recipients; Suman Bose, a research scientist at the Koch Institute; Faryal Mir, a former MIT technical assistant; Michael Heartlein, the chief technical officer at Translate Bio; Frank DeRosa, senior vice president of research and development at Translate Bio; and Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute.

Treatment by inhalation

Messenger RNA encodes genetic instructions that stimulate cells to produce specific proteins.
Many researchers have been working on developing mRNA to treat genetic disorders or cancer, by essentially turning patients' own cells into drug factories. Because mRNA is easily broken down in the body, it needs to be transported within some kind of protective carrier. Anderson's lab has previously designed materials that can deliver mRNA and another type of RNA therapy called RNA interference (RNAi) to the liver and other organs, and some of these are being further developed for possible testing in patients. In this study, the researchers wanted to create an inhalable form of mRNA, which would allow the molecules to be delivered directly to the lungs. Many existing drugs for asthma and other lung diseases are specially formulated so they can be inhaled via either an inhaler, which sprays powdered particles of medication, or a nebulizer, which releases an aerosol containing the medication. The MIT team set out to develop a material that could stabilize mRNA during the process of aerosol delivery. Some previous studies had explored a material called polyethylenimine (PEI) for delivering inhalable DNA to the lungs. However, PEI doesn't break down easily, so with the repeated dosing that would likely be required for mRNA therapies, the polymer could accumulate and cause side effects. To avoid those potential side effects, the researchers turned to a type of positively charged polymer called hyperbranched poly(beta amino esters), which, unlike PEI, is biodegradable. The particles the team created consist of spheres, approximately 150 nanometers in diameter, containing a tangled mixture of the polymer and mRNA molecules that encode luciferase, a bioluminescent protein. The researchers suspended these particles in droplets and delivered them to mice as an inhalable mist, using a nebulizer. "Breathing is used as a simple but effective delivery route to the lungs.
Once the aerosol droplets are inhaled, the nanoparticles contained within each droplet enter the cells and instruct them to make a particular protein from mRNA," Patel says. The researchers found that 24 hours after the mice inhaled the mRNA, lung cells were producing the bioluminescent protein. The amount of protein gradually fell over time as the mRNA was cleared. The researchers were able to maintain steady levels of the protein by giving the mice repeated doses, which may be necessary if the approach is adapted to treat chronic lung disease.

Broad distribution

Further analysis of the lungs revealed that the mRNA was evenly distributed throughout the five lobes of the lungs and was taken up mainly by epithelial lung cells, which line the lung surfaces. These cells are implicated in cystic fibrosis, as well as other lung diseases such as respiratory distress syndrome, which is caused by a deficiency in surfactant protein. In her new lab at Imperial College London, Patel plans to further investigate mRNA-based therapeutics. In this study, the researchers also demonstrated that the nanoparticles could be freeze-dried into a powder, suggesting that it may be possible to deliver them via an inhaler instead of a nebulizer, which could make the medication more convenient for patients.
  10. The idea of using hydrogen as the basis of a clean, sustainable energy source, often termed a hydrogen economy, has been a topic of conversation for decades. Hydrogen fuel, for example, doesn't emit any carbon dioxide and is considered more sustainable than traditional fossil fuels. The lightest element on the periodic table, hydrogen is an energy carrier that can be used to power fuel cells in transportation vehicles, buildings or other infrastructure. Hydrogen also can help upcycle things like straw, grasses and other biomass into high-value chemicals used in everything from plastics to paint to personal care items. But the technology driving these innovations has faced serious challenges, chiefly because hydrogen is currently freed for these uses mainly through processes that require fossil fuels and come with an environmental cost—carbon dioxide. Now, University of Delaware engineer Feng Jiao has patented a process that may hold the key to producing greener hydrogen from water using electricity and a copper-titanium catalyst.

A focus on renewables

Jiao, an associate professor of chemical and biomolecular engineering and associate director of the Center for Catalytic Science and Technology at UD, wasn't always interested in water electrolysis, which uses electricity to split water into hydrogen gas and oxygen molecules. When he first joined the UD faculty in 2010, his research program focused on the energy storage capability of batteries. "But we realized that batteries are an expensive technology for large-scale energy storage, so my lab began focusing on beneficial ways to use electricity instead," Jiao said. "Chemical conversion is one way to do this." Initially, Jiao and his research team focused on developing processes to turn carbon dioxide into useful chemicals, such as ethanol, which can be used in synthetic fuels, or ethylene, which can be used to produce polymers and plastics.
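For reference, the water electrolysis described here can be written as two half-reactions (standard electrochemistry under alkaline conditions, not specific to this study); the hydrogen-evolution step at the cathode is where a catalyst such as Jiao's copper-titanium alloy acts:

```latex
\begin{aligned}
\text{Cathode (hydrogen evolution):}\quad & 4\,\mathrm{H_2O} + 4e^- \rightarrow 2\,\mathrm{H_2} + 4\,\mathrm{OH^-} \\
\text{Anode (oxygen evolution):}\quad & 4\,\mathrm{OH^-} \rightarrow \mathrm{O_2} + 2\,\mathrm{H_2O} + 4e^- \\
\text{Overall:}\quad & 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{aligned}
```

Adding the two half-reactions cancels the electrons and hydroxide ions, leaving the overall splitting of water into hydrogen and oxygen.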
A project, funded by the National Science Foundation and later by the National Aeronautics and Space Administration (NASA), explored ways to convert carbon dioxide to oxygen, something that would be very useful for deep space exploration. Jiao and his students developed an efficient system, but found they needed a better catalyst to drive the reaction. As they tested different metals for the job, the researchers unexpectedly discovered that a copper-titanium alloy is among only a few non-precious-metal catalysts that can split water into hydrogen gas and oxygen, a process referred to as hydrogen evolution. Both copper and titanium are considered inexpensive and relatively abundant compared with the precious metals, such as silver or platinum, typically suited to the job. Hydrogen is currently produced using what's known as steam-methane reforming, in which natural gas and high heat are employed to free hydrogen molecules from methane. Jiao calls it a "dirty process" because when the hydrogen gas is removed, all that is left is carbon, usually in the form of carbon dioxide. "So, you can produce hydrogen cheaply, but at an environmental cost—carbon dioxide emissions," says Jiao. This got Jiao thinking about cleaner ways to produce hydrogen without the environmental cost.

Cleaner, greener processes

Copper is known to be good at conducting both heat and electricity. This is why it is the material of choice for electrical wiring in our homes, cookware, electronics, motor vehicle parts, even air conditioning and home heating parts. However, copper alone is not effective at producing hydrogen.
But add some interesting chemistry—and a teeny bit of titanium—and a world of possibilities suddenly opens to create catalysts that pull their weight and serve the environment. "With a little bit of titanium in it, the copper catalyst behaves about 100 times better than copper alone," said Jiao. This is because, when paired together, the two metals create uniquely active sites that help hydrogen atoms interact strongly with the catalyst surface, in a way that is comparable to the performance of much more expensive platinum-based catalysts. While traditional chemical processes start with fossil fuels, such as coal or gas, and add oxygen to produce various chemicals, Jiao explained, with hydrogen the reverse chemical reaction is possible. "We can start with the most oxidized form of carbon—carbon dioxide—and add hydrogen to produce the same chemicals, which has a lot of potential for reducing carbon emissions," said Jiao, who spoke at a U.S. Senate Committee hearing on carbon capture and neutralization in 2018. The Jiao team performs a life cycle analysis on each process they invent to evaluate how the technology's economics stack up against currently accepted methods. They ask themselves questions such as: Is the invention cost-effective? Is it better or worse than existing technology, and how much can be gained by using the process? Early results show that the copper-titanium catalyst can produce hydrogen from water at a rate more than two times higher than the current state-of-the-art platinum catalyst. Jiao's electrochemical process also operates at moderate temperatures (70 to 176 degrees Fahrenheit, or roughly 21 to 80 degrees Celsius), which increases the catalyst's energy efficiency and can greatly lower the overall capital cost of the system.
Jiao has already filed a patent application on the process with the help of UD's Office of Economic Innovation and Partnerships (OEIP), but he said more work is needed to scale the process for commercial applications. If they can make it work, the savings would be big—an alternative catalyst that is three orders of magnitude cheaper than the current state-of-the-art platinum-based catalyst. Future development efforts will focus on ways to increase the size of the water electrolyzer from lab scale to commercial scale. Additional testing of the catalyst's stability is also planned. The researchers are exploring different combinations of metals, too, to find the sweet spot between performance and cost. "Once you have the technology, you can create jobs around material supply and manufacturing, and once you can build a product, you can commercialize and export it," said Jiao. Jiao and colleagues from Columbia University and Xi'an Jiaotong University recently reported their latest findings in an article in ACS Catalysis, a journal of the American Chemical Society. His colleague at Columbia University is Jingguang Chen, a former professor in UD's Department of Chemical and Biomolecular Engineering.
  11. Understanding earthquakes is a challenging problem—not only because they are potentially dangerous but also because they are complicated phenomena that are difficult to study. Interpreting the massive, often convoluted data sets that are recorded by earthquake monitoring networks is a herculean task for seismologists, but the effort involved in producing accurate analyses could significantly improve the development of reliable earthquake early-warning systems. A promising new collaboration between Caltech seismologists and computer scientists using artificial intelligence (AI)—computer systems capable of learning and performing tasks that previously required humans—aims to improve the automated processes that identify earthquake waves and assess the strength, speed, and direction of shaking in real time. The collaboration includes researchers from the divisions of Geological and Planetary Sciences and Engineering and Applied Science, and is part of Caltech's AI4Science Initiative to apply AI to the big-data problems faced by scientists throughout the Institute. Powered by advanced hardware and machine-learning algorithms, modern AI has the potential to revolutionize seismological data tools and make all of us a little safer from earthquakes. Recently, Caltech's Yisong Yue, an assistant professor of computing and mathematical sciences, sat down with his collaborators—Research Professor of Geophysics Egill Hauksson, Postdoctoral Scholar in Geophysics Zachary Ross, and Associate Staff Seismologist Men-Andrin Meier—to discuss the new project and the future of AI and earthquake science.

What seismological problem inspired you to include AI in your research?

Meier: One of the things that I work on is earthquake early warning. Early warning requires us to try to detect earthquakes very rapidly and predict the shaking that they will produce later, so that you can get a few seconds to maybe tens of seconds of warning before the shaking starts.
Hauksson: It has to be done very quickly—that's the game. The earthquake waves will hit the closest monitoring station first, and if we can recognize them immediately, then we can send out an alert before the waves travel farther.

Meier: You only have a few seconds of seismogram to decide whether it is an earthquake, which would mean sending out an alert, or whether it is instead a nuisance signal—a truck driving by one of our seismometers or something like that. We have too many false classifications, too many false alerts, and people don't like that. This is a classic machine-learning problem: you have some data and you need to make a realistic and accurate classification. So, we reached out to Caltech's computing and mathematical sciences (CMS) department and started working on it with them.

Why is AI a good tool for improving earthquake monitoring systems?

Yue: The reasons why AI can be a good tool have to do with scale and complexity, coupled with an abundance of data. Earthquake monitoring systems generate massive data sets that need to be processed in order to provide useful information to scientists. AI can do that faster and more accurately than humans can, and can even find patterns that would otherwise escape the human eye. Furthermore, the patterns we hope to extract are hard for rule-based systems to capture adequately, so the advanced pattern-matching abilities of modern deep learning can offer performance superior to that of existing automated earthquake monitoring algorithms.

Ross: In a big aftershock sequence, for example, you could have events that are spaced every 10 seconds, rapid fire, all day long. We use maybe 400 stations in Southern California to monitor earthquakes, and the waves caused by each different earthquake will hit them all at different times.

Yue: When you have multiple earthquakes, and the sensors are all firing at different locations, you want to be able to unscramble which data belong to which earthquake.
Cleaning up and analyzing the data takes time. But once you train a machine-learning algorithm—a computer program that learns by studying examples, as opposed to through explicit programming—to do this, it can make an assessment really quickly. That's the value.

How else will AI help seismologists?

Yue: We are not just interested in the occasional very big earthquake that happens every few years or so. We are interested in the earthquakes of all sizes that happen every day. AI has the potential to identify small earthquakes that are currently indistinguishable from background noise.

Ross: On average we see about 50 or so earthquakes each day in Southern California, and we have a mandate from the U.S. Geological Survey to monitor each one. There are many more, but they're just too small for us to detect with existing technology. And the smaller they are, the more often they occur. What we are trying to do is monitor, locate, detect, and characterize each and every one of those events to build "earthquake catalogs." All of this analysis is starting to reveal the very intricate details of the physical processes that drive earthquakes. Those details were not really visible before.

Why hasn't anyone applied AI to seismology before?

Ross: Only in the last year or two has seismology started to seriously consider AI technology. Part of it has to do with the dramatic increase in computer processing power that we have seen just within the past decade.

What is the long-term goal of this collaboration?

Meier: Ultimately, we want to build an algorithm that mimics what human experts do. A human seismologist can feel an earthquake or see a seismogram and immediately tell a lot of things about that earthquake just from experience. It has been really difficult to teach that to a computer. With artificial intelligence, we can get much closer to how a human expert would treat the problem. We are getting much closer to creating a "virtual seismologist."
Why do we need a "virtual seismologist"?

Yue: Fundamentally, both in seismology and beyond, the reason that you want to do this kind of thing is scale and complexity. If you can train an AI that learns, then you can take a specialized skill set and make it available to anyone. The other issue is complexity. You could have a human look at detailed seismic data for a long time and uncover small earthquakes. Or you could just have an algorithm learn to pick out the patterns that matter, much faster.

Meier: The detailed information that we're gathering helps us figure out the physics of earthquakes—why they fizzle out along certain faults and trigger big quakes along others, and how often they occur.

Will creating a "virtual seismologist" mean the end of human seismologists?

Ross: Having talked to a range of students, I can say with fairly high confidence that most of them don't want to do cataloguing work. [Laughs.] They would rather be doing more exciting work.

Yue: Imagine that you're a musician, but before you can become a musician, first you have to build your own piano. So you spend five years building your piano, and then you become a musician. Now we have an automated way of building pianos—are we going to destroy musicians' jobs? No, we are actually empowering a new generation of musicians. We have other problems that they could be working on.
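The rapid earthquake-versus-noise decision Meier describes has classically been handled by simple amplitude-ratio triggers such as STA/LTA (short-term average over long-term average), which the machine-learning classifiers discussed here aim to improve on. A minimal illustrative sketch in plain Python (not Caltech's actual pipeline; the window lengths and threshold are arbitrary assumptions):

```python
def sta_lta(samples, sta_len=5, lta_len=50):
    """Return the STA/LTA ratio at the end of a waveform window.

    STA: mean absolute amplitude over the most recent sta_len samples.
    LTA: mean absolute amplitude over the most recent lta_len samples.
    A ratio well above 1 suggests a sudden energy increase (a possible quake).
    """
    if len(samples) < lta_len:
        raise ValueError("need at least lta_len samples")
    mag = [abs(s) for s in samples]
    sta = sum(mag[-sta_len:]) / sta_len
    lta = sum(mag[-lta_len:]) / lta_len
    return sta / lta if lta > 0 else 0.0

def is_trigger(samples, threshold=4.0):
    """Flag a candidate event when the STA/LTA ratio crosses the threshold."""
    return sta_lta(samples) >= threshold

# Flat background noise: the ratio stays near 1, so there is no trigger.
quiet = [0.1, -0.1] * 25
# The same noise followed by a sudden large-amplitude arrival triggers.
arrival = quiet[:-5] + [2.0, -2.1, 1.9, -2.2, 2.0]
print(is_trigger(quiet), is_trigger(arrival))  # prints: False True
```

A rule like this is exactly the kind of hand-tuned classifier that produces the false alerts mentioned above, which is why learned classifiers trained on labeled seismograms are attractive.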
  12. Hybrid perovskites are spectacularly efficient materials for photovoltaics. Just a few years after the first perovskite solar cells were fabricated, they have already achieved solar conversion efficiencies greater than 22 percent. Interestingly, the fundamental mechanisms responsible for this high efficiency are still being vigorously debated. A thorough understanding of these mechanisms is essential to enable further improvements, and computational studies conducted using the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory have produced critical new insights. Chris Van de Walle's group at the University of California, Santa Barbara (UCSB) has reported these breakthroughs in two recent papers: X. Zhang, J.-X. Shen, W. Wang, and C. G. Van de Walle, ACS Energy Lett. 3, 2329 (2018), and J.-X. Shen, X. Zhang, S. Das, E. Kioupakis, and C. G. Van de Walle, Adv. Energy Mater. 8, 1801027 (2018). Hybrid perovskites are a group of materials that combine organic molecules with an inorganic framework in a perovskite lattice structure. A number of research groups previously attributed the high efficiency of the hybrid perovskites to an indirect band gap originating from strong spin-orbit coupling. It was argued that the indirect nature of the gap suppresses radiative recombination between electrons and holes and thus minimizes undesirable carrier recombination. UCSB postdoc Xie Zhang and Ph.D. student Jimmy-Xuan Shen (who has since graduated) demonstrated that this was incorrect by developing a cutting-edge first-principles approach to accurately determine the spin texture of the band edges and quantitatively compute the radiative recombination rates. For methylammonium lead iodide (the prototype hybrid perovskite, commonly referred to as MAPI), they found that the radiative recombination is actually as strong as in conventional direct-gap semiconductors.
"This result should put an end to misguided attempts to analyze and design device characteristics based on erroneous assumptions about the recombination rate," said Zhang. Strong radiative recombination means that these materials are also useful for light-emitting diode (LED) applications. However, current densities in LEDs are much higher than in solar cells, and at high carrier concentrations nonradiative recombination processes can become detrimental. Such nonradiative losses have been observed, but experimentally it is not possible to identify the microscopic origins. Shen and Zhang built on expertise in the Van de Walle group to accurately compute the recombination rate from first principles. They also managed to precisely link the rate to features in the electronic structure. "Auger recombination is a process in which two carriers recombine across the band gap and the excess energy is transferred to a third carrier," explained Shen. "We found that the Auger coefficient in MAPI is unexpectedly large: two orders of magnitude larger than in other semiconductors with comparable band gaps." The researchers identified two distinct features of the material that are responsible: a resonance between the band gap and the spin-orbit-induced splitting of the conduction bands, and the presence of structural distortions that promote the Auger process. "These calculations are extremely demanding, and the compute power provided by NERSC has been instrumental in obtaining these results," commented Van de Walle. "We have been able to demonstrate that Auger losses can be suppressed if lattice distortions are reduced, and we propose specific approaches for achieving this in real materials."
  13. A German court said Friday it had opened the way for shareholders to join a collective legal action against Mercedes-Benz parent Daimler for diesel cheating that mirrors one already brought against VW. Multiple shareholders in the luxury carmaker argue that their investment was harmed by the "dieselgate" scandal and that they deserve compensation as a result. Now a Stuttgart tribunal has called for a so-called "model case" that would test questions common to the claims, in the German legal system's closest analogue to a class-action lawsuit. In a statement, plaintiffs' lawyer Andreas Tilp said that Daimler should have "informed financial markets about the risks arising from the use of illegal software in its diesel cars" as early as 2012. A Daimler spokesman told AFP: "We believe this case is baseless and we will contest it with all the legal means at our disposal". The Stuttgart-based manufacturer has consistently disputed claims that it manipulated its motors to appear less polluting in the lab than in real driving conditions. Volkswagen admitted in 2015 to such practices affecting 11 million cars worldwide, with the subsequent "dieselgate" scandal costing it tens of billions in fines, compensation and buybacks. The German transport ministry in June ordered Daimler to recall 774,000 Mercedes vehicles found to contain software capable of deceiving emissions tests. Most were Vito vans, GLC-class SUVs and C-class sedans. Since 2015, several German prosecutors' offices have opened cases against VW and its subsidiaries Audi and Porsche, along with Daimler, Opel and parts maker Bosch on suspicion of fraud, stock market manipulation or false advertising.
  14. A new study conducted by a team of astronomers from Poland and South Africa provides more insights into the nature of Hen 3-160, a symbiotic binary system in the southern Milky Way. The research, presented in a paper published December 22 on arXiv.org, proposes that this object is a symbiotic binary containing a Mira variable star. Symbiotic binaries are assumed to showcase dramatic, episodic changes in the spectra of their light because one star of the pair is a very hot, small star while the other is a cool giant. In general, such systems are essential for researchers studying various aspects of stellar evolution. Astronomers divide symbiotic stars (SySt) into two main classes: S-type and D-type. Most known SySts are of S-type, which have near-infrared spectra generally dominated by the cool star's photosphere, and are indistinguishable from ordinary late-type giants. D-type symbiotic stars exhibit additional emission attributed to thick circumstellar dust shells. SySts of this class experience large-amplitude variations due to the presence of Mira variables (red giants with pulsation periods longer than 100 days, and amplitudes greater than one magnitude in the infrared and 2.5 magnitudes at visual wavelengths) and other long-period variable stars. Although Hen 3-160 (other designations: SS73 9, WRAY 15-208, Schwartz 1 and 2MASS 08245314-512832) was first spotted in the 1960s, no detailed studies of this binary have been conducted, and very little is known about the parameters of its components. Thus, a group of astronomers led by Cezary Gałan of the Nicolaus Copernicus Astronomical Center of the Polish Academy of Sciences in Warsaw, Poland, decided to analyze data from spectroscopic and photometric observations of Hen 3-160 collected over a timespan of more than two decades. 
Gałan's team used optical spectra obtained with the SpUpNIC spectrograph on the 1.9-m Radcliffe telescope in Sutherland, South Africa, and photometric optical data acquired with a 35-cm Meade RCX400 telescope at the Klein Karoo Observatory, near Sutherland. Analysis of these data sheds new light on the nature of Hen 3-160. "In this work, we present new observations collected over two decades which enabled us to reveal its very interesting nature," the astronomers wrote in the paper. The main conclusion from this study is that the giant in the Hen 3-160 system is a Mira variable pulsating with a 242.5-day period. Moreover, it is the first known symbiotic Mira that is simultaneously an S-process-enhanced star of MS spectral type. In particular, the researchers found that the large-amplitude periodic variations observed in the optical V and IC-band light curves, with a pulsation period well over 100 days and correlated with changes in other bands as well as in the spectra, indicate that the cool component is a Mira star. Furthermore, the presence of comparably strong ZrO and TiO bands is indicative of the MS spectral type for this object, and places it among the S stars, proving that it is enhanced in S-process elements. The astronomers also estimated the distance of the Hen 3-160 system. They found that the binary is located some 30,600 light years from Earth, about 4,200 light years above the disk of the Milky Way galaxy. They added that the galactic coordinates of Hen 3-160, together with its relatively high proper motions, make it a galactic extended thick-disc object.
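Extracting a pulsation period like the 242.5 days reported here is typically done with a period-search method such as phase dispersion minimization; the paper does not specify the authors' exact procedure, so the sketch below applies that generic technique to a synthetic, noiseless Mira-like light curve (all numbers are invented for illustration):

```python
import math
import statistics

def phase_dispersion(times, mags, period, nbins=10):
    """Fold the light curve at a trial period and sum the within-bin
    magnitude variance; a coherent fold (correct period) gives a small value."""
    bins = [[] for _ in range(nbins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * nbins), nbins - 1)].append(m)
    return sum(statistics.pvariance(b) for b in bins if len(b) > 1)

def best_period(times, mags, candidates):
    """Trial period with the lowest phase dispersion."""
    return min(candidates, key=lambda p: phase_dispersion(times, mags, p))

# Synthetic Mira-like light curve: 242.5-day period, 2.5-mag V amplitude
true_period = 242.5
times = [i * 5.0 for i in range(400)]           # one point every 5 days, ~5.5 yr
mags = [10.0 + 2.5 * math.sin(2 * math.pi * t / true_period) for t in times]

trials = [200.0 + 0.5 * k for k in range(201)]  # 200-300 d in 0.5-d steps
print(best_period(times, mags, trials))         # expected near 242.5
```

Real survey photometry would be noisy and unevenly sampled; a Lomb-Scargle periodogram is the usual production-grade tool, but the binning idea above is the same.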
  15. In her book Silent Spring, Rachel Carson writes: "The sense of smell, almost more than any other, has the power to recall memories…." You might wonder how this relates to microorganisms. In fact, they produce most of the odours that we perceive. If you've ever walked in a forest following the first rainfall after a dry spell, you would recall a sweet, fresh and powerfully evocative smell. This earthy-smelling substance is geosmin, a chemical released into the air by soil-dwelling bacteria called actinomycetes. You may also recall the tangy scent of the sea, evoking memories of crashing waves, sandy beaches and the cry of seagulls. This smell comes from dimethyl sulfide, a rather stinky sulfurous compound produced by bloom-forming algae. But microbial scents can also protect plants. Agricultural crops can wither and die under drought conditions. Microbes—thanks to the scents they release—can help plants better tolerate these stressful conditions, an important service in a warming climate. As a microbial ecologist, my work focuses on understanding how microbes and plants work together, and which microbial scents help crops. A language of odours Odours, both good and bad, are caused by chemicals called volatile organic compounds, or volatiles. [Image: Our experimental field, with wheat as a model plant, used to investigate the effect of drought on the plant microbiome. Credit: Ruth Schmidt] Scientists have known about this form of language since 1990. Plants use volatiles to attract pollinators, to "cry for help" when under attack by insects and to warn neighbouring plants to prepare their chemical defenses. Yet only in the past decade have researchers realized that microbes also communicate with the help of volatiles. Some microbes use volatiles to send each other signals or coordinate their behaviour, such as their ability to move or grow. 
Volatiles have low boiling points and other unique properties that allow them to evaporate easily and travel through the air over long distances—from a microbial perspective, at least. These useful attributes help microbes communicate in soil environments. One could think of volatiles as the "words" that build the "language" of microorganisms. My research on microbial communication has looked at how soil microbes living near plant roots use volatiles to exchange information in the "rhizosphere," or root space. Bacteria and fungi do in fact respond to each other with terpenes, another type of volatile. Terpenes have already been shown to be important in plant and insect communication, but this is the first time that we have observed terpene-based conversation between microbes. [Image: A zoomed-in view of the root surroundings of plants, where microbes thrive and emit volatiles.] The scent of climate change Agriculture and climate change are deeply intertwined. A recent study estimates that high temperatures and drought will lead to drastic losses for all major food crops, including maize and wheat. This will have a dramatic impact on the global food supply. We are in dire need of strategies to reduce the negative effects of climate change on crops. One such strategy stems from microbes. Microbes live inside us and on our skin, and help keep us healthy. Like humans, plants host communities of microbes, collectively called the plant microbiome, that maintain their health, support their growth and protect them from disease by fighting off pathogens. Plants can even recruit microbes to their roots to help withstand drought. The plant microbiome plays a large role in plant survival and vice versa—plants supply nutrients to their associated microbes that, in return, protect their host through cooperative and competitive interactions. This intimate, co-dependent relationship between the plant and its microbiome is called the holobiont. 
Some even consider the collaboration a "superorganism"—an organized society that functions as a whole. The microbiome drives the evolution and adaptation of its plant host, making it a resilient entity that can adapt to changing environmental conditions. Our team investigates the various ways the plant holobiont adapts to stress, such as contamination and drought. We find ways to cultivate microbial communities with plants to boost their resilience to these stresses. Part of our research involves finding ecologically friendly alternatives for agriculture. For example, microbial terpenes—the biggest class of volatiles produced by fungi, protists and bacteria—can help plants survive in times of drought. The microbes release these volatiles to signal the plants and stimulate the plants' defensive mechanisms. We still don't know how the communication occurs, or which genes and pathways are involved in the release of these volatiles, but we're working on it. We're tracing microbial volatiles in the plant holobiont and literally digging out the genes carrying the genetic information to produce those compounds. We can then select the microbes that carry the genes for the smells that help plants withstand drought—and feed them to our crops like vitamins so that they can continue to provide us with food in a warmer future.
  16. In the 1980s, the discovery of high-temperature superconductors known as cuprates upended a widely held theory that superconductor materials carry electrical current without resistance only at very low temperatures of around 30 Kelvin (or minus 406 degrees Fahrenheit). For decades since, researchers have been mystified by the ability of some cuprates to superconduct at temperatures of more than 100 Kelvin (minus 280 degrees Fahrenheit). Now, researchers at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have unveiled a clue to the cuprates' unusual properties—and the answer lies in an unexpected place: the electron spin. Their paper describing the research behind this discovery was published on Dec. 13 in the journal Science. Adding electron spin to the equation Every electron is like a tiny magnet that points in a certain direction. And electrons within most superconductor materials seem to follow their own inner compass. Rather than pointing in the same direction, their spins haphazardly point every which way—some up, some down, others left or right. When scientists are developing new kinds of materials, they usually look at the materials' electron spin, or the direction in which the electrons are pointing. But when it comes to making superconductors, condensed matter physicists haven't traditionally focused on spin, because the conventionally held view was that all of the properties that make these materials unique were shaped only by the way in which two electrons interact with each other through what's known as "electron correlation." 
But when a research team led by Alessandra Lanzara, a faculty scientist in Berkeley Lab's Materials Sciences Division and a Charles Kittel Professor of Physics at UC Berkeley, used a unique detector to measure samples of an exotic cuprate superconductor, Bi-2212 (bismuth strontium calcium copper oxide), with a powerful technique called SARPES (spin- and angle-resolved photoemission spectroscopy), they uncovered something that defied everything they had ever known about superconductors: a distinct pattern of electron spins within the material. "In other words, we discovered that there was a well-defined direction in which each electron was pointing given its momentum, a property also known as spin-momentum locking," said Lanzara. "Finding it in high-temperature superconductors was a big surprise." [Image: A research team led by Berkeley Lab's Alessandra Lanzara (second from left) used a SARPES (spin- and angle-resolved photoemission spectroscopy) detector to uncover a distinct pattern of electron spins within a high-temperature cuprate superconductor.] A new map for high-temperature superconductors In the world of superconductors, "high temperature" means that the material can conduct electricity without resistance at temperatures higher than expected, though still far below zero degrees Fahrenheit. That's because superconductors need to be extraordinarily cold to carry electricity without any resistance. At those low temperatures, electrons are able to move in sync with each other without being knocked about by jiggling atoms, which would cause electrical resistance. 
And within this special class of high-temperature superconductor materials, cuprates are some of the best performers, leading some researchers to believe that they have potential use as a new material for building super-efficient electrical wires that can carry power without any loss of electron momentum, said co-lead author Kenneth Gotlieb, who was a Ph.D. student in Lanzara's lab at the time of the discovery. Understanding what makes some exotic cuprate superconductors such as Bi-2212 work at temperatures as high as 133 Kelvin (about -220 degrees Fahrenheit) could make it easier to realize a practical device. Among the very exotic materials that condensed matter physicists study, there are two kinds of electron interactions that give rise to novel properties for new materials, including superconductors, said Gotlieb. Scientists who have been studying cuprate superconductors have focused on just one of those interactions: electron correlation. The other kind of electron interaction found in exotic materials is "spin-orbit coupling"—the way in which the electron's magnetic moment interacts with atoms in the material. Spin-orbit coupling was often neglected in studies of cuprate superconductors, because many assumed that this kind of electron interaction would be weak when compared to electron correlation, said co-lead author Chiu-Yun Lin, a researcher in the Lab's Materials Sciences Division and a Ph.D. student in the Department of Physics at UC Berkeley. So when they found the unusual spin pattern, Lin said that although they were pleasantly surprised by this initial finding, they still weren't sure whether it was a "true" intrinsic property of the Bi-2212 material, or an external effect caused by the way the laser light interacted with the material in the experiment. Shining a light on electron spin with SARPES Over the course of nearly three years, Gotlieb and Lin used the SARPES detector to thoroughly map out the spin pattern at Lanzara's lab. 
When they needed higher photon energies to excite a wider range of electrons within a sample, the researchers moved the detector next door to Berkeley Lab's synchrotron, the Advanced Light Source (ALS), a U.S. DOE Office of Science User Facility that specializes in lower-energy, "soft" X-ray light for studying the properties of materials. The SARPES detector was developed by Lanzara, along with co-authors Zahid Hussain, the former ALS Division Deputy, and Chris Jozwiak, an ALS staff scientist. The detector allowed the scientists to probe key electronic properties of the electrons, such as the valence band structure. After dozens of experiments at the ALS—where the team connected the SARPES detector to Beamline 10.0.1, giving them access to electrons moving with much higher momentum through the superconductor than those they could access in the lab—they found that Bi-2212's distinct spin pattern, called "nonzero spin," was a true result, inspiring them to ask even more questions. "There remain many unsolved questions in the field of high-temperature superconductivity," said Lin. "Our work provides new knowledge to better understand the cuprate superconductors, which can be a building block to resolve these questions." Lanzara added that their discovery couldn't have happened without the collaborative "team science" of Berkeley Lab, a DOE national lab with historic ties to nearby UC Berkeley. "This work is a typical example of where science can go when people with expertise across the scientific disciplines come together, and how new instrumentation can push the boundaries of science," she said.
  17. Tropical forests store about a third of Earth's carbon and about two-thirds of its above-ground biomass. Most climate change models predict that as the world warms, all of that biomass will decompose more quickly, which would send a lot more carbon dioxide into the atmosphere. But new research presented at the American Geophysical Union's 2018 Fall Meeting contradicts that theory. Stephanie Roe, an ecology Ph.D. student at the University of Virginia, measured the rate of decomposition in artificially warmed plots of forest in Puerto Rico. She found biomass in the warmed plots broke down more slowly than samples from a control site that wasn't warmed. Her results indicate that as the climate warms, forest litter could pile up on the ground, instead of breaking down into the soil. Less decomposition means less carbon dioxide released back into the atmosphere. But it also means less carbon taken up by the soil, where it's needed to fuel microbial processes that help plants grow. "These results could have significant implications on the carbon cycle in a warmer future," Roe said. Roe said there are few empirical studies of how tropical forests will respond to climate change. She set out to address this gap in June of 2017, when she and her research team travelled to El Yunque National Forest in Puerto Rico. They landed at a site called TRACE—the Tropical Responses to Altered Climate Experiment. TRACE is the first-ever long-term warming experiment conducted in a tropical forest. It was established by the US Forest Service in 2016 for research like Roe's. The site consists of three hexagonal plots of land enclosed by a ring of infrared heaters raised four meters above the ground, and three more plots enclosed by fake heaters that are used as the "control" forest. Roe collected leaves from the plots, dried them out in the lab, and then returned them to the plots randomly. 
In addition to the native plants, she also included black and green tea, and popsicle sticks to represent woody biomass, to see how different materials would respond to the warming. The heaters were programmed to continuously heat the plots to four degrees higher than the ambient temperature of the forest. The experiment was supposed to run for a full year, but at the beginning of October, Hurricane Maria swept across the island, destroying the TRACE sites. Roe was back in Virginia when the storm struck. She had collected samples from the first few months of the experiment, and they were already showing signs of significant decomposition, so she decided to go ahead with the analysis based on what she had. And the results were not what she thought they would be. "We would expect that microbes tend to work faster, like their metabolisms increase, with warmer temperatures," Roe said. "So we would expect to see an increase of activity of microbes and other decomposers to decompose the litter." But instead of seeing faster rates of decomposition, Roe observed that the warming produced a drying effect in the plots, which slowed decomposition. "What we found is actually it went the other way because moisture was impacted so much," Roe said. Moisture in the litter from the treatment sites was reduced by an average of 38 percent. Roe pointed out that the increase in frequency and severity of storms in the region could amplify this effect. Hurricane Maria destroyed significant portions of the tree canopy in El Yunque, allowing much more sunlight to reach the forest floor, which can dry out the litter. The results Roe shared are preliminary and not yet published. Her next project is to do further analysis of the nutrients in the litter and of the microbial communities to see if there are other factors that could explain the unexpected slowdown in decomposition.
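Litterbag studies like Roe's are conventionally summarized with a single-exponential decay model, M(t) = M0·exp(-k·t), where a smaller decay constant k means slower decomposition. The sketch below fits k to invented mass-loss numbers (not Roe's data) to show how a warming-induced drying effect would show up as a reduced k:

```python
import math

# Synthetic litterbag data: (days elapsed, fraction of initial dry mass remaining).
# These numbers are invented for illustration; they are not Roe's measurements.
control = [(30, 0.88), (60, 0.77), (90, 0.68)]   # ambient plots
warmed  = [(30, 0.92), (60, 0.85), (90, 0.78)]   # heated (drier) plots

def decay_rate(data):
    """Fit the single-exponential litter model M(t) = M0 * exp(-k*t)
    by least squares on ln(mass) through the origin: k = -sum(t*ln m) / sum(t^2)."""
    num = sum(t * math.log(m) for t, m in data)
    den = sum(t * t for t, _ in data)
    return -num / den

k_control = decay_rate(control)
k_warmed = decay_rate(warmed)
print(f"k (control) = {k_control:.4f}/day, k (warmed) = {k_warmed:.4f}/day")
# A smaller k in the warmed plots mirrors the direction of Roe's finding:
# litter in the drier, heated plots breaks down more slowly.
```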
  18. America's schools are in a state of crisis. By the end of the day, 7,000 students will have dropped out of high school, or just over 1 million every year. Meanwhile, 40 percent of teachers will leave the profession within five years, resulting in an increasingly dire shortage of qualified instructors. While state and federal policymakers often focus on testing and accountability, University of Delaware associate professor Deborah Bieler suggests that keeping students and teachers alike from dropping out of school may be as simple as engaging in meaningful conversation. In her new book The Power of Teacher Talk: Promoting Equity and Retention Through Student Interactions, published Nov. 9 by Teachers College Press, Bieler asserts that brief daily interactions shouldn't be thought of as meaningless small talk. Just a few words of encouragement, a genuine compliment or even a follow-up from a previous conversation show students they are valued, which in turn allows teachers to experience greater job satisfaction. "Teachers typically engage in five student interactions per minute—that's one interaction every 12 seconds. Each one shapes and reflects participants' attitudes about staying in school," said Bieler, who is also a former high school English teacher. "The great potential of these interactions to change lives for the better is often why students love school, why people become teachers, and why both students and teachers stay in school." Meaningful interactions, however brief, are especially important for teachers and students of historically marginalized groups. "Equity-oriented teachers look for and take opportunities to actively promote and increase equity wherever they can," said Bieler. As a result, "When students and teachers remain in school, there is a greater chance that they can use their more deeply developed skills and knowledge to create a more equitable world for themselves and for others." 
Strategies to keep students engaged

So what can teachers do and say to keep marginalized students from "falling through the cracks"? In her book, Bieler outlines four strategies.

1. Classroom decoration

Classroom décor makes a visual statement of a teacher's stance on equity, social justice and commitment to staying in school, so Bieler suggests including "pivotal items" such as posters or bulletin boards that acknowledge and support social justice, instead of school-focused messages, to show students they are welcomed and valued. "One teacher displayed an inspirational poster entitled 'Determination/Little Rock Nine,' which included images of the Little Rock Nine students who integrated Central High School in 1957," said Bieler.

2. Impromptu interactions during class

Every day, teachers and students engage in hundreds of spontaneous interactions that have a profound effect on each group's decision to stay in school. So, Bieler suggests, respond with patience rather than discipline. "Teachers are uniquely positioned to mark youth as worthy, and they do this important work through their daily interactions with students," said Bieler.

3. Conversations before and after class

Engage in unstructured small talk before and after class, unrelated to school topics. This positive energy can build a more humane connection and make students feel more valued. "The moments before and after class provide places for teachers and students to assert their agency and to create or perform their identities in ways that are not possible during class time," said Bieler.

4. Staying to talk

When a student is in danger of "falling through the cracks," as Bieler puts it, invite the student to stay and talk after class. 
"Compared with all of the other interactions discussed in this book, these intentional staying-to-talk interactions with students were among the most powerful ways that I saw equity-oriented teachers try to connect with students about whose success they were concerned," said Bieler. "The meetings communicate that the teacher is paying attention to, or investing in, students and signal to students their value and sense of promise." Reviewers have described the book as "an indispensable resource for new and practicing teachers alike" and "a must-read for anyone interested in understanding and improving life in schools."
  19. Fewer Marriott guest records than previously feared were compromised in a massive data breach, but the largest hotel chain in the world confirmed Friday that approximately 5.25 million unencrypted passport numbers were accessed. The compromise of those passport numbers has raised alarms among security experts because of their value to state intelligence agencies. The FBI is leading the investigation of the data theft and investigators suspect the hackers were working on behalf of the Chinese Ministry of State Security, the rough equivalent of the CIA. The hackers accessed about 20.3 million encrypted passport numbers. There is no evidence that they were able to use the master encryption key required to gain access to that data. Unencrypted passport numbers are valuable to state intelligence agencies because they can be used to compile detailed dossiers on people and their international movements. In the case of China, it would allow that country's security ministry to add to databases of aggregated information on valued individuals. Those data points include information on people's health, finances and travel. "You can identify things in their past that maybe they don't want known, points of weakness, blackmail, that type of thing," said Priscilla Moriuchi, an analyst with Recorded Future who specialized in East Asia at the U.S. National Security Agency where she spent 12 years. She left the agency in 2017. When the Bethesda, Maryland, hotel chain initially disclosed the breach in November, the company said that hackers compiled stolen data undetected for four years, including credit card and passport numbers, birthdates, phone numbers and hotel arrival and departure dates. The affected hotel brands were operated by Starwood before it was acquired by Marriott in 2016. They include W Hotels, St. Regis, Sheraton, Westin, Element, Aloft, The Luxury Collection, Le MĂ©ridien and Four Points. Starwood-branded timeshare properties were also affected. 
None of the Marriott-branded chains were threatened. Marriott said Friday that it now believes the overall number of guests potentially involved is around 383 million, less than the initial estimate of 500 million, but still one of the largest security breaches on record. The 2017 Equifax hack affected more than 145 million people. A Target breach in 2013 affected more than 41 million payment card accounts and exposed contact information for more than 60 million customers.
  20. A team of researchers at the Gran Sasso Science Institute (GSSI) and the Istituto Italiano di Tecnologia (IIT) has devised a mathematical approach for understanding intra-plant communication. In their paper, pre-published on bioRxiv, they propose a fully coupled system of non-linear, non-autonomous, discontinuous ordinary differential equations that can accurately describe the adapting behavior and growth of a single plant, by analyzing the main stimuli affecting plant behavior. Recent studies have found that rather than being passive organisms, plants can actually exhibit complex behaviors in response to environmental stimuli, for instance, adapting their resource allocation, foraging strategies, and growth rates according to their surrounding environment. How plants process and manage this network of stimuli, however, is a complex biological question that remains unanswered. Researchers have proposed several mathematical models to achieve a better understanding of plant behavior. Nonetheless, none of these models can effectively and clearly portray the complexity of the stimulus-signal-behavior chain in the context of a plant's internal communication network. The team of researchers at GSSI and IIT who carried out the recent study had previously investigated the mechanisms behind intra-plant communication, with the aim of identifying and exploiting basic biological principles for the analysis of plant root behavior. Their previous work analyzed robotic roots in a simulated environment, translating a set of biological rules into algorithmic solutions. Even though each root acted independently from the others, the researchers observed the emergence of some self-organizing behavior, aimed at optimizing the internal equilibrium of nutrients at the whole-plant level. 
While this past study yielded interesting results, it merely considered a small part of the complexity of intra-plant communication, completely disregarding the analysis of above-ground organs, as well as photosynthesis-related processes. "In this paper, we do not aspire to gain a complete description of the plant complexity, yet we want to identify the main cues influencing the growth of a plant with the aim of investigating the processes playing a role in the intra-communication for plant growth decisions," the researchers wrote in their recent paper. "We propose and explain here a system of ordinary differential equations (ODEs) that, differently from state of the art models, take into account the entire sequence of processes from nutrients uptake, photosynthesis and energy consumption and redistribution." In the new study, therefore, the researchers set out to develop a mathematical model that describes the dynamics of intra-plant communication and analyses the possible cues that activate adaptive growth responses in a single plant. This model is based on formulations about biological evidence collected in laboratory experiments using state-of-the-art techniques. Compared to existing models, their model covers a wider range of elements, including photosynthesis, starch degradation, multiple nutrients uptake and management, biomass allocation, and maintenance. These elements are analyzed in depth, considering their interactions and their effects on a plant's growth. To validate their model and test its robustness, the researchers compared experimental observations of plant behavior with results obtained when applying their model in simulations, where they reproduced conditions of growth similar to those naturally occurring in plants. Their model attained high accuracy and minor errors, suggesting that it can effectively summarize the complex dynamics of intra-plant communication. 
"The model is ultimately able to highlight the stimulus signal of the intra-communication in plants, and it can be expanded and adopted as a useful tool at the crossroads of disciplines such as mathematics, robotics and biology, for instance, for validation of biological hypotheses, translation of biological principles into control strategies or resolution of combinatorial problems," the researchers said in their paper.
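The published system of ODEs is far richer than anything that fits here, but a toy model in the same spirit, coupling photosynthesis, nutrient uptake, growth and maintenance, conveys the structure. All compartments, rate constants and the forward-Euler integration below are illustrative assumptions, not the GSSI/IIT formulation:

```python
def step(state, dt, light=1.0, soil_n=1.0):
    """One forward-Euler step of a toy plant model with a carbon store C,
    a nitrogen store N, and biomass B. Every rate constant here is invented
    for illustration; the published model is far more detailed."""
    C, N, B = state
    photo  = 0.5 * light * B        # photosynthesis scales with biomass (leaf proxy)
    uptake = 0.3 * soil_n * B       # root nutrient uptake scales with biomass
    growth = 0.2 * min(C, N)        # growth limited by the scarcer resource
    maint  = 0.05 * B               # maintenance respiration drains carbon
    C += dt * (photo - growth - maint)
    N += dt * (uptake - growth)
    B += dt * growth
    return max(C, 0.0), max(N, 0.0), B

state = (1.0, 1.0, 1.0)             # initial carbon, nitrogen, biomass
for _ in range(200):                # 20 "days" at dt = 0.1
    state = step(state, dt=0.1)
print(state)                        # biomass grows while stores stay non-negative
```

The `min(C, N)` growth term is one crude way to encode resource co-limitation; the actual paper couples many more uptake, allocation and signalling terms, which is precisely why the authors validated it against laboratory growth data.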
  21. On New Year's Day, 2019, Navy engineer David A. Tonn received his twenty-eighth U.S. patent, according to the U.S. Patent and Trademark Office. Titled "Dual Mode Slotted Monopole Antenna," the novel antenna design could soon be connecting your cell phone to the internet. (U.S. Patent 10,170,841) Slotted antennas are commonly used in telecommunication towers and television broadcast antennas. Unlike a standard antenna, a slotted antenna can be pointed in a particular direction. The Navy uses these types of antennas for radar applications and for communicating with towed sonar buoys. Until now, slotted cylinder antennas have been limited by what's called a cutoff frequency, beyond which the antenna effectively shorts out. Tonn's design gives cylinder slot antennas the ability to also act as a monopole antenna beyond the cutoff frequency by "floating" the antenna above the ground plane with a capacitor. Tonn, an expert in maritime antennas at the Naval Undersea Warfare Center, tested his design using a 12-inch copper prototype, but the patent notes that it "can be scaled to other portions of the RF spectrum, making it useful in the realm of commercial communications, e.g., digital television, cellular telephone service, etc." Businesses that want to bring the antenna to market can now acquire it by applying for a patent license from the Navy. Under the business-friendly umbrella of technology transfer, patent license agreements allow federal laboratories to assign their intellectual property rights to a business or entrepreneur and facilitate access to inventors and technical data. TechLink, a nonprofit organization that specializes in federal technology transfer, helps companies prepare a commercialization plan and patent license application at no charge. Since 1999, TechLink has helped hundreds of companies, large and small, realize commercial success with federal inventions through the development of new and improved products and services.
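The article gives the prototype's size (12 inches) but not its operating band. As a rough, assumed illustration of the monopole mode only, an ideal quarter-wave monopole of length L over a ground plane resonates near f = c/(4L):

```python
C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_resonance(length_m):
    """Approximate resonant frequency of an ideal quarter-wave monopole
    over a ground plane: f = c / (4 * L). Real antennas resonate a few
    percent lower because of end effects."""
    return C_LIGHT / (4.0 * length_m)

inches = 12.0
length_m = inches * 0.0254           # the 12-inch prototype from the article
f = quarter_wave_resonance(length_m)
print(f"{f / 1e6:.0f} MHz")          # ~246 MHz, i.e. the VHF range
```

This back-of-the-envelope estimate says nothing about the slotted-cylinder mode or the capacitive "floating" that is the patent's actual contribution; it only shows why a roughly foot-long element is plausible for broadcast- and communications-band use.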
  22. Urban planners should plant hedges, or a combination of trees with hedges—rather than just relying on roadside trees—if they are to most effectively reduce pollution exposure from cars in near-road environments, finds a new study from the University of Surrey. In a paper published in Atmospheric Environment, researchers from the Global Centre for Clean Air Research (GCARE) looked at how three types of road-side green infrastructure—trees, hedges, and a combination of trees with hedges and shrubs—affected the concentration levels of air pollution. The study used six roadside locations in Guildford, UK, as test sites where the green infrastructure was between one and two metres away from the road. The researchers found that roadsides with only hedges were the most effective at reducing pollution exposure, cutting black carbon by up to 63 percent. Ultrafine and sub-micron particles followed this reduction trend, with fine particles (less than 2.5 micrometres in diameter) showing the least reduction among all the measured pollutants. The maximum reduction in concentrations was observed when the winds were parallel to the road, due to a sweeping effect, followed by winds across the road. The elemental composition of particles indicated an appreciable reduction in harmful heavy metals originating from traffic behind the vegetation. Hedges alone—and a combination of hedges and trees—emerged as the most effective green infrastructure in improving air quality behind them under different wind directions. Roadsides with only trees showed no positive influence on pollution reduction at breathing height (usually between 1.5 and 1.7m), as the tree canopy was too high to provide a barrier/filtering effect for road-level tailpipe emissions.
According to the United Nations, more than half of the global population live in urban areas—this number increases to almost two thirds in the European Union where, according to the European Environmental Agency, air pollution levels in many cities are above permissible levels, making air pollution a primary environmental health risk. Professor Prashant Kumar, the senior author of the study and the founding Director of the GCARE at the University of Surrey, said: "Many millions of people across the world live in urban areas where the pollution levels are also the highest. The best way to tackle pollution is to control it at the source. However, reducing exposure to traffic emissions in near-road environments has a big part to play in improving health and well-being for city-dwellers. "The iSCAPE project provided us with an opportunity to assess the effectiveness of passive control measures such as green infrastructure that is placed between the source and receptors." "This study, which extends our previous work, provides new evidence to show the important role strategically placed roadside hedges can play in reducing pollution exposure for pedestrians, cyclists and people who live close to roads. Urban planners should consider planting denser hedges, and a combination of trees with hedges, in open-road environments. Many local authorities have, with the best of intentions, put a great emphasis on urban greening in recent years. However, the dominant focus has been on roadside trees, while there are many miles of fences in urban areas that could be readily complemented with hedges, with an appreciable air pollution exposure dividend. Urban vegetation is important given the broad role it can play in urban ecosystems—and this could be about much more than just trees on wide urban roads", adds Professor Kumar.
  23. The world's largest tech conference has apparently learned a big lesson about gender equity. CES, the huge annual consumer-electronics show in Las Vegas, caught major flak from activists in late 2017 when it unveiled an all-male lineup of keynote speakers for the second year in a row. Although it later added two female keynoters, the gathering's "boys' club" reputation remained intact. It didn't help that one of the unsanctioned events latching on to CES last year was a nightclub featuring female "robot strippers." This year, four of the nine current keynoters are women. GenderAvenger, the activist group that raised a ruckus last year, recently sent CES organizers a congratulatory letter and awarded the show a "Gold Stamp of Approval" for a roster of keynote and "featured" speakers that it says is 45 percent women—60 percent of them women of color. It's a significant change for CES, which like most tech conferences remains disproportionately male, just like the industry it serves. Even absent the robot dogs, sci-fi-worthy gadgets and "booth babes" CES has been known for, you could readily peg it as a technology show from the bathroom lines alone—where men shift uncomfortably as they wait their turn while women waltz right in. Keynoters this year include IBM CEO Ginni Rometty; Lisa Su, CEO of chipmaker Advanced Micro Devices; and U.S. Transportation Secretary Elaine Chao. The entire featured speaker list is currently half female, although the exact percentage won't be known until after the event. "There is no question we keep trying to do better," said Gary Shapiro, CEO of the Consumer Technology Association, which organizes CES.
"Diversity is about having people who see things differently—frankly, disagree with you and tell you that you are stupid," said Tania Yuki, CEO of social media analytics company Shareablee and an attendee of CES for the past several years. The big question, she says, is whether CES has really listened to its critics. CES is the place to be for tech companies and startups to show off their latest gadgets and features. More than 180,000 people are expected to attend this year, and some 4,500 companies will be on the convention floor. Among them are newcomers like Tide maker Procter & Gamble, defense contractor Raytheon and tractor seller John Deere—all eager to burnish their technology bona fides. But really leveling the playing field often means more than inviting female CEOs to speak. For starters, women and people of color are underrepresented in the tech industry, especially in leadership and technical roles. So, conference organizers might need to look harder, or be more flexible in whom they invite to speak. There are also optics. While recent attendees say "booth babes"—scantily clad women hawking gadgets—no longer seem to be a presence, some companies still hire "fitness models," largely young women wearing tight-fitting outfits, to demo products. This can make it difficult for the few women at the show who are there as executives, engineers and other technologists, as men mistake them for models, too. "When you are talking about scantily clad models you are setting a tone," said Bobbie Carlton, the founder of Innovation Women, a speaker bureau for women. "It is a slippery slope and you end up with this type of mentality that runs through the industry, where women are objectified and are only useful if they look good."
More optics: Until recently, a porn convention taking place immediately after CES appeared more diverse than CES itself. Not a good look for the tech confab. There are also logistical challenges, Carlton said. For example, women often work for smaller companies, which can find it more challenging to "send someone cross-country to stay at a fancy hotel for three days," she said. Rajia Abdelaziz is CEO of invisaWear, a startup that makes smart "safety jewelry." While she's attending CES this year, she said it wasn't worth the $10,000 it would cost her company to have its own convention-floor booth. In addition to the cost concerns, Abdelaziz notes that her products are primarily aimed at women—and there just aren't that many of them at CES. Women are also still more likely to be responsible for the home and for child care, so they might turn down speaking opportunities if the timing doesn't work for them, Carlton said. CES has tried to make some concessions. For example, it offers private pods for women to pump breast milk at the event. But it doesn't offer child care support, unlike the smaller Grace Hopper Celebration for Women in Computing, a fall conference aimed at women in computer science. Organizers note that children are not permitted at CES. Although kids are also banned from Grace Hopper, that conference still manages to offer free child care for attendees. Still, Yuki is hopeful that CES is on the right track.
"It's a big conference," she said. "You can only turn a very big ship very slowly."
  24. There are more than 3,900 confirmed planets beyond our solar system. Most of them have been detected because of their "transits"—instances when a planet crosses its star, momentarily blocking its light. These dips in starlight can tell astronomers a bit about a planet's size and its distance from its star. But knowing more about the planet, including whether it harbors oxygen, water, and other signs of life, requires far more powerful tools. Ideally, these would be much bigger telescopes in space, with light-gathering mirrors as wide as those of the largest ground observatories. NASA engineers are now developing designs for such next-generation space telescopes, including "segmented" telescopes with multiple small mirrors that could be assembled or unfurled to form one very large telescope once launched into space. NASA's upcoming James Webb Space Telescope is an example of a segmented primary mirror, with a diameter of 6.5 meters and 18 hexagonal segments. Next-generation space telescopes are expected to be as large as 15 meters, with over 100 mirror segments. One challenge for segmented space telescopes is how to keep the mirror segments stable and pointing collectively toward an exoplanetary system. Such telescopes would be equipped with coronagraphs—instruments that are sensitive enough to discern between the light given off by a star and the considerably weaker light emitted by an orbiting planet. But the slightest shift in any of the telescope's parts could throw off a coronagraph's measurements and disrupt measurements of oxygen, water, or other planetary features. Now MIT engineers propose that a second, shoebox-sized spacecraft equipped with a simple laser could fly at a distance from the large space telescope and act as a "guide star," providing a steady, bright light near the target system that the telescope could use as a reference point in space to keep itself stable. 
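To see why a coronagraph must be so sensitive, an order-of-magnitude estimate of the planet-to-star flux ratio can be sketched for an Earth-like planet seen in reflected light. The values below are standard textbook figures, not numbers from the article:

```python
# Rough reflected-light contrast for an Earth-like planet around a
# Sun-like star, illustrating the vastly unequal signals a coronagraph
# must separate.
R_EARTH = 6.371e6   # planet radius, m
AU = 1.496e11       # orbital distance, m
ALBEDO = 0.3        # geometric albedo (assumed round number)

# Order-of-magnitude estimate: the planet intercepts and reflects a
# fraction ~ albedo * (R / 2a)^2 of the star's light.
contrast = ALBEDO * (R_EARTH / (2 * AU)) ** 2
print(f"planet/star flux ratio ~ {contrast:.1e}")
```

The result is around 10^-10, which is why even picometre-scale instabilities in the telescope can swamp the planet's signal.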
In a paper published today in the Astronomical Journal, the researchers show that the design of such a laser guide star would be feasible with today's existing technology. The researchers say that using the laser light from the second spacecraft to stabilize the system relaxes the demand for precision in a large segmented telescope, saving time and money, and allowing for more flexible telescope designs. "This paper suggests that in the future, we might be able to build a telescope that's a little floppier, a little less intrinsically stable, but could use a bright source as a reference to maintain its stability," says Ewan Douglas, a postdoc in MIT's Department of Aeronautics and Astronautics and a lead author on the paper. The paper also includes Kerri Cahoy, associate professor of aeronautics and astronautics at MIT, along with graduate students James Clark and Weston Marlow at MIT, and Jared Males, Olivier Guyon, and Jennifer Lumbres from the University of Arizona. In the crosshairs For over a century, astronomers have been using actual stars as "guides" to stabilize ground-based telescopes. "If imperfections in the telescope motor or gears were causing your telescope to track slightly faster or slower, you could watch your guide star on a crosshairs by eye, and slowly keep it centered while you took a long exposure," Douglas says. In the 1990s, scientists started using lasers on the ground as artificial guide stars by exciting sodium in the upper atmosphere, pointing the lasers into the sky to create a point of light some 40 miles from the ground. Astronomers could then stabilize a telescope using this light source, which could be generated anywhere the astronomer wanted to point the telescope. "Now we're extending that idea, but rather than pointing a laser from the ground into space, we're shining it from space, onto a telescope in space," Douglas says. 
Ground telescopes need guide stars to counter atmospheric effects, but space telescopes for exoplanet imaging have to counter minute changes in the system temperature and any disturbances due to motion. The space-based laser guide star idea arose out of a project that was funded by NASA. The agency has been considering designs for large, segmented telescopes in space and tasked the researchers with finding ways of bringing down the cost of the massive observatories. "The reason this is pertinent now is that NASA has to decide in the next couple years whether these large space telescopes will be our priority in the next few decades," Douglas says. "That decision-making is happening now, just like the decision-making for the Hubble Space Telescope happened in the 1960s, but it didn't launch until the 1990s." Star fleet Cahoy's lab has been developing laser communications for use in CubeSats, which are shoebox-sized satellites that can be built and launched into space at a fraction of the cost of conventional spacecraft. For this new study, the researchers looked at whether a laser, integrated into a CubeSat or slightly larger SmallSat, could be used to maintain the stability of a large, segmented space telescope modeled after NASA's LUVOIR (for Large UV Optical Infrared Surveyor), a conceptual design that includes multiple mirrors that would be assembled in space. Researchers have estimated that such a telescope would have to remain perfectly still, within 10 picometers—about a quarter the diameter of a hydrogen atom—in order for an onboard coronagraph to take accurate measurements of a planet's light, apart from its star. "Any disturbance on the spacecraft, like a slight change in the angle of the sun, or a piece of electronics turning on and off and changing the amount of heat dissipated across the spacecraft, will cause slight expansion or contraction of the structure," Douglas says.
"If you get disturbances bigger than around 10 picometers, you start seeing a change in the pattern of starlight inside the telescope, and the changes mean that you can't perfectly subtract the starlight to see the planet's reflected light." The team came up with a general design for a laser guide star that would be far enough away from a telescope to be seen as a fixed star—about tens of thousands of miles away—and that would point back and send its light toward the telescope's mirrors, each of which would reflect the laser light toward an onboard camera. That camera would measure the phase of this reflected light over time. Any change of 10 picometers or more would signal a compromise to the telescope's stability that, onboard actuators could then quickly correct. To see if such a laser guide star design would be feasible with today's laser technology, Douglas and Cahoy worked with colleagues at the University of Arizona to come up with different brightness sources, to figure out, for instance, how bright a laser would have to be to provide a certain amount of information about a telescope's position, or to provide stability using models of segment stability from large space telescopes. They then drew up a set of existing laser transmitters and calculated how stable, strong, and far away each laser would have to be from the telescope to act as a reliable guide star. In general, they found laser guide star designs are feasible with existing technologies, and that the system could fit entirely within a SmallSat about the size of a cubic foot. Douglas says that a single guide star could conceivably follow a telescope's "gaze," traveling from one star to the next as the telescope switches its observation targets. However, this would require the smaller spacecraft to journey hundreds of thousands of miles paired with the telescope at a distance, as the telescope repositions itself to look at different stars. 
Instead, Douglas says a small fleet of guide stars could be deployed, affordably, and spaced across the sky, to help stabilize a telescope as it surveys multiple exoplanetary systems. Cahoy points out that the recent success of NASA's MarCO CubeSats, which supported the Mars InSight lander as a communications relay, demonstrates that CubeSats with propulsion systems can work in interplanetary space, for longer durations and at large distances. "Now we're analyzing existing propulsion systems and figuring out the optimal way to do this, and how many spacecraft we'd want leapfrogging each other in space," Douglas says. "Ultimately, we think this is a way to bring down the cost of these large, segmented space telescopes."
  25. Scientists tackling the illegal trade in elephant ivory got more than they bargained for when they found woolly mammoth DNA in trinkets on sale in Cambodia, they revealed Friday. "It was a surprise for us to find trinkets made from woolly mammoth ivory in circulation, especially so early into our testing and in a tropical country like Cambodia," said Alex Ball, manager at the WildGenes laboratory, a wildlife conservation charity based at Edinburgh Zoo. "It is very hard to say what the implications of this finding are for existing elephant populations, however we plan to continue our research and will use genetics to work out where it has come from." The giant mammals have been extinct for around 10,000 years and are not covered by international agreements on endangered species. WildGenes has been using genetic data to tackle wildlife crime by determining the origin of ivory finding its way to the marketplace. "It is estimated that globally over 30,000 elephants are killed every year for their ivory and it appears there are increasing amounts of ivory for sale within Cambodia," said Ball. "Understanding where the ivory is coming from is vital for enforcement agencies looking to block illegal trade routes." Britain last year banned sales of all ivory except for the rarest and most important antiques.