Tipup's Content - Page 12 - InviteHawk

Everything posted by Tipup

  1. Newton's third law dictates that forces between interacting particles are equal and opposite in closed systems. In a non-equilibrium environment, the third law can be defied, giving rise to "nonreciprocal" forces. Theoretically, this was shown for dissimilar, optically trapped particles whose interaction is mediated by an external field. In a recent study, Yuval Yifat and colleagues measured the net nonreciprocal forces in electrodynamically interacting, asymmetric nanoparticle dimers and nanoparticle aggregates. In the experiments, the nanoparticle structures were confined to pseudo one-dimensional geometries and illuminated by plane waves. The observed motion was due to the conservation of total momentum for particles and fields with broken mirror symmetry (manifested as a changed direction of motion). The results are now published in Light: Science & Applications. The ability to convert light energy into self-directed motion with light-driven nanomotors or micromachines has already attracted great interest. A variety of methods in optics can produce rotational motion or give rise to translational motion with photoreactive materials. The promise of engineering light-driven nanomotors arose from recent theoretical work, which predicted that dissimilar particles illuminated by an electromagnetic plane wave will experience a nonreciprocal net force. Simulations showed the predicted nonreciprocal forces to vary very little with interparticle separation. However, straightforward experimental evidence of the phenomenon had not been presented until now. Exploring these reactive optical effects can open new possibilities for self-assembling, light-driven micromachines and herald a new field in optics and photonics. To fill the experimental gap, in the present study Yifat et al. demonstrated self-motility using optically bound dimers of dissimilar metallic nanoparticles (NPs). The experimental findings were also supported by quantitative electrodynamic simulations. Aside from dimers, the scientists similarly generated and measured the motion of asymmetric nanoparticle clusters or assemblies. To perform the experiments, Yifat et al. used a standard optical trapping setup with a Ti:Sapphire laser operating at a wavelength of 790 nm. A tightly focused, circularly polarized, spatially phase-modulated beam of light formed an optical ring trap. [Figure: Reactive optical matter: light-induced motion. A schematic diagram of the experiment.
Credit: Light: Science & Applications, doi: 10.1038/s41377-018-0105-y] In the study, the motion of a trapped mixture of silver (Ag) nanoparticles of 150 nm to 200 nm diameter was measured using dark-field microscopy at a high frame rate of 290 fps. The particles were tracked, and their precise positions were used to calculate the angular position (θi) on the ring. The scientists conducted particle imaging and tracking using the Mosaic particle tracking toolbox available via the ImageJ software. Yifat et al. observed that in a "heterodimer" of dissimilar particles, the directed motion of the electrodynamically interacting pair was toward the larger particle. Conversely, when two particles of the same size, termed a "homodimer," came into close proximity, directed motion was not observed. The results were in agreement with the forces calculated using generalized Mie theory (GMT). The scientists did not observe full or free rotation in the experiment; the manifested torque and its effect will be investigated further in future work. [Figure: "Nonreciprocal" force-induced dynamics. a) Example trajectories for a homodimer (black) and a heterodimer (color) moving in counterclockwise (green) and clockwise (blue) directions. Distributions of instantaneous angular velocities (gray dots) and the mean angular velocities of the homodimers (b, black) and heterodimers (c, orange) as a function of interparticle separation; the bin size is 300 nm. The mean angular velocity was calculated by fitting a Gaussian function to the instantaneous velocity distribution, and the error bars are the 3σ confidence intervals for the fitted means. Positive velocity is defined as motion of the heterodimer toward the larger NP. d) Calculated mean square displacement (MSD) values for the homodimer data shown in (b) (black), the heterodimer data shown in (c) (orange), and the subset of the heterodimer data where the interparticle separation was ≤1.2 μm (red). Credit: Light: Science & Applications, doi: 10.1038/s41377-018-0105-y] Thereafter, Yifat et al. imaged representative time trajectories of θc (the central angle of the pair) for the heterodimers and homodimers. In the heterodimers, motion of the pair was directed toward the larger particle and could therefore proceed clockwise or counterclockwise around the ring, depending on the pair's orientation. The scientists repeated the experiments and combined the results. In the combined data with different heterodimer orientations, positive velocity was defined along the vector from the smaller particle toward the larger particle. For instance, the heterodimers exhibited a positive mean angular velocity at an optical binding separation of 600 ± 150 nm and a negative mean angular velocity at the larger separation of 900 ± 150 nm. In contrast, the mean angular velocity for a homodimer was zero at all separations. The change in mean velocity and the motion of the heterodimer pair toward the larger, thermally hotter particle was due to the electromagnetic field and not to heat-induced self-thermophoresis (i.e., a local temperature gradient generated by laser absorption in the metal particles). [Video: a silver (Ag) heterodimer in the ring trap, moving in a counter-clockwise direction.
Credit: Light: Science & Applications, doi: 10.1038/s41377-018-0105-y] The findings agreed with previous publications on the asymmetry of light scattered by optically trapped objects. The simulated motion was similarly directed from the smaller particle to the larger particle. The scientists observed a separation-dependent imbalance of angular scattering (where more light was scattered in one direction than another). The asymmetry in far-field scattering created a force on the dimer, setting it in motion as observed. Similar asymmetric scattering was previously observed for plasmonic nanoantennas. Yifat et al. used the same experimental approach to study gold (Au) nanostar dimers and large asymmetric aggregates of gold nanoparticles. [Video: gold (Au) nanoparticle clusters in the ring trap. Credit: Light: Science & Applications, doi: 10.1038/s41377-018-0105-y] In this way, the scientists experimentally demonstrated light-driven motion of heterodimers and asymmetric scatterers in optical ring traps to quantify net nonreciprocal forces in one-dimensional plane-wave fields. Although the experiments were confined to a ring trap in this study, the strategy is transferable to any optically trapped matter structure that exhibits electromagnetic asymmetry. The optical trapping used in the study offered solutions to the experimental challenge of generating directed motion at the nanoscale. The nonreciprocal forces in the study created self-motile particles without the use of chemical environments, chemical fuels or complex structures. The electrodynamic theory and simulations conducted alongside the experiments also showed that interparticle interactions caused asymmetric scattering in the heterodimers. The work thus fundamentally followed Noether's theorem (each symmetry of the action of a physical system corresponds to a conservation law). Accordingly, Yifat et al. rationalize that the observed self-motility and the quantified nonreciprocal forces followed from the conservation of total momentum of particles and fields in systems with broken symmetry. The scientists envision the use of such light-driven asymmetric nanoparticle assemblies as active colloids in artificial chemotactic systems, and as fully operational "nanoswimmers" for research in soft condensed matter and biophysics.
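The mechanics behind the directed motion can be made concrete with a back-of-the-envelope estimate. The following Python sketch is illustrative only: the scattered power and asymmetry values are assumptions, not numbers from the paper. It simply applies momentum conservation, so that light scattered preferentially in one direction imparts an opposite recoil force on the dimer.

```python
# Order-of-magnitude sketch of the recoil force from asymmetric scattering.
# All numbers below are assumed for illustration, not taken from the paper.
c = 3.0e8           # speed of light, m/s
P_scattered = 1e-6  # hypothetical optical power scattered by the dimer, W
g = 0.1             # hypothetical net front/back asymmetry of the scattering

# Light of power P carries momentum flux P/c; a fraction g scattered
# preferentially one way gives the dimer an opposite recoil force.
F_recoil = g * P_scattered / c
print(f"recoil force ~ {F_recoil:.1e} N")  # ~3e-16 N, femtonewton scale
```

Forces on this femtonewton scale are exactly the regime probed by optical trapping experiments, which is why a ring trap with precise particle tracking is a natural tool for quantifying them.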
  2. Spin-based quantum computers have the potential to tackle difficult mathematical problems that cannot be solved using ordinary computers, but many problems remain in making these machines scalable. Now, an international group of researchers led by the RIKEN Center for Emergent Matter Science has crafted a new architecture for quantum computing. By constructing a hybrid device made from two different types of qubit—the fundamental computing element of quantum computers—they have created a device that can be quickly initialized and read out, and that simultaneously maintains high control fidelity. In an era where conventional computers appear to be reaching a limit, quantum computers—which do calculations using quantum phenomena—have been touted as potential replacements, and they can tackle problems in a very different and potentially much more rapid way. However, it has proven difficult to scale them up to the size required for performing real-world calculations. In 1998, Daniel Loss, one of the authors of the current study, came up with a proposal, along with David DiVincenzo of IBM, to build a quantum computer by using the spins of electrons embedded in a quantum dot—a small particle that behaves like an atom, but that can be manipulated, so that these are sometimes called "artificial atoms." Since then, Loss and his team have endeavored to build practical devices. There are a number of barriers to developing practical devices in terms of speed. First, the device must be able to be initialized quickly. Initialization is the process of putting a qubit into a certain state, and if that cannot be done rapidly, it slows down the device. Second, the device must maintain coherence for a time long enough to make a measurement. Coherence refers to the entanglement between two quantum states, and ultimately this is used to make the measurement, so if qubits become decoherent due to environmental noise, for example, the device becomes worthless. And finally, the ultimate state of the qubit must be able to be quickly read out. While a number of methods have been proposed for building a quantum computer, the one proposed by Loss and DiVincenzo remains one of the most practically feasible, as it is based on semiconductors, for which a large industry already exists. For the current study, published in Nature Communications, the team combined two types of qubits on a single device. The first, a type of single-spin qubit called a Loss-DiVincenzo qubit, has very high control fidelity—meaning that it is in a clear state, making it ideal for calculations—and a long decoherence time, so that it will stay in a given state for a relatively long time before losing its signal to the environment. Unfortunately, the downside to these qubits is that they cannot be quickly initialized into a state or read out. The second type, called a singlet-triplet qubit, is quickly initialized and read out, but it quickly becomes decoherent. For the study, the scientists combined the two types with a type of quantum gate known as a controlled-phase gate, which allowed spin states to be entangled between the qubits quickly enough to maintain coherence, allowing the state of the single-spin qubit to be read out by the fast singlet-triplet qubit measurement. According to Akito Noiri of CEMS, the lead author of the study, "With this study we have demonstrated that different types of quantum dots can be combined on a single device to overcome their respective limitations.
This offers important insights that can contribute to the scalability of quantum computers."
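To illustrate the role the controlled-phase gate plays in the scheme above, here is a minimal numerical sketch in Python. It works at the level of abstract two-qubit state vectors, which is an assumption made for illustration; it says nothing about the spin-qubit device physics in the paper.

```python
import numpy as np

# |0> basis state and the Hadamard gate, which creates superpositions
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Controlled-phase (CZ) gate: flips the sign of the |11> amplitude only
CZ = np.diag([1, 1, 1, -1]).astype(complex)

# Put both qubits into superposition, then apply CZ to entangle them
plus = H @ ket0
state = CZ @ np.kron(plus, plus)

# For a product (unentangled) state, the 2x2 amplitude matrix has rank 1;
# rank 2 signals entanglement between the two qubits.
print(np.linalg.matrix_rank(state.reshape(2, 2)))  # -> 2: entangled
```

The sign flip on a single amplitude is what ties the two qubits' states together, which is exactly the property that lets one qubit's state be read out through a measurement on its partner.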
  3. Cassiopeia A, the youngest known supernova remnant in the Milky Way, is the remains of a star that exploded almost 400 years ago. The star was approximately 15 to 20 times the mass of our sun and sat in the Cassiopeia constellation, almost 11,000 light-years from Earth. Though stunningly distant, it's now possible to step inside a virtual-reality (VR) depiction of what followed that explosion. A team led by Kimberly Kowal Arcand from the Harvard-Smithsonian Center for Astrophysics (CfA) and the Center for Computation and Visualization at Brown University has made it possible for astronomers, astrophysicists, space enthusiasts, and the simply curious to experience what it's like inside a dead star. Their efforts are described in a recent paper in Communicating Astronomy with the Public. The VR project—believed to be the first of its kind, using X-ray data from NASA's Chandra X-ray Observatory mission (which is headquartered at CfA), infrared data from the Spitzer Space Telescope, and optical data from other telescopes—adds new layers of understanding to one of the most famous and widely studied objects in the sky. "Our universe is dynamic and 3-D, but we don't get that when we are constantly looking at things" in two dimensions, said Arcand, the visualization lead at CfA. The project builds on previous research done on Cas A, as it's commonly known, that first rendered the dead star into a 3-D model using the X-ray and optical data from multiple telescopes. Arcand and her team used that data to convert the model into a VR experience using MinVR and VTK, two data visualization platforms. The coding work was primarily handled by Brown computer science senior Elaine Jiang, a co-author on the paper. [Video credit: NASA/CXC/SAO] The VR experience lets users walk inside a colorful digital rendering of the stellar explosion and engage with parts of it while reading short captions identifying the materials they see. "Astronomers have long studied supernova remnants to better understand exactly how stars produce and disseminate many of the elements observed on Earth and in the cosmos at large," Arcand said. When stars explode, they expel all of their elements into the universe. In essence, they help create the elements of life, from the iron in our blood to the calcium in our bones. All of that, researchers believe, comes from previous generations of exploded stars. In the 3-D model of Cas A, and now in the VR model, elements such as iron, silicon, and sulfur are represented by different colors. Seeing it in 3-D throws Cas A into fresh perspective, even for longtime researchers and astronomers who build models of supernova explosions. "The first time I ever walked inside the same data set that I have been staring at for 20 years, I just immediately was fascinated by things I had never noticed, like how various bits of the iron were in different locations," Arcand said. "The ability to look at something in three dimensions and being immersed in it just kind of opened up my eyes to think about it in different ways." The VR platform also opens understanding of the supernova remnant, which is the strongest radio source beyond our solar system, to new audiences. VR versions of Cas A are available by request for a VR cave (a specially made room in which the floors and walls are projection screens), as well as on Oculus Rift, a VR computer platform.
As part of this project, the team also created a version that works with Google Cardboard or similar smartphone platforms. In a separate but related project, Arcand and a team from CfA worked with the Smithsonian Learning Lab to create a browser-based, interactive, 3-D application and 360-degree video of Cas A that works with Google Cardboard and similar platforms. [Image: Wearing VR goggles, Kim Arcand views a 3-D representation of the Cassiopeia A supernova remnant at the YURT VR Cave at Brown. Credit: NASA/CXC/SAO; NASA/CXC/E.Jiang] "My whole career has been looking at data and how we take data and make it accessible or visualize it in a way that adds meaning to it that's still scientific," Arcand said. VR is an almost perfect avenue for this approach, since it has been surging in popularity as both entertainment and an educational tool. It has been used to help medical staff prepare for surgeries, for example, and video game companies have used it to add excitement and immersion to popular games. Arcand hopes to make Cas A accessible to even more people, such as the visually impaired, by adding sound elements to the colors in the model. Reaction to the VR experience has been overwhelmingly positive, Arcand said. Experts and non-experts alike are struck by what Arcand calls "awe moments" of being inside and learning about something so massive and far away. "Who doesn't want to walk inside a dead star?" Arcand said.
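Since the article names VTK as one of the two visualization platforms used, a minimal VTK pipeline in Python can show the general shape of such work. This sketch is purely illustrative and is not the team's code: it renders a random point cloud standing in for the volumetric remnant data.

```python
import numpy as np
import vtk

# Hypothetical stand-in data: a random 3-D point cloud playing the role
# of the remnant model (the real project used Chandra/Spitzer-derived data).
points = vtk.vtkPoints()
for x, y, z in np.random.normal(size=(2000, 3)):
    points.InsertNextPoint(x, y, z)

polydata = vtk.vtkPolyData()
polydata.SetPoints(points)

# Turn bare points into renderable vertices
glyphs = vtk.vtkVertexGlyphFilter()
glyphs.SetInputData(polydata)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(glyphs.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

# Standard VTK window/interactor scaffolding
renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
interactor.Initialize()
window.Render()
interactor.Start()  # mouse-driven orbit/zoom, a desktop stand-in for VR
```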
  4. Indonesia's tsunami has raised fears that another deadly wave could wipe out the few dozen Javan rhinos still living in the wild, conservation authorities said Friday. There are believed to be fewer than 70 of the critically endangered species in a national park not far from a rumbling volcano that triggered Saturday's killer wave. None of the animals are believed to have been killed in the disaster—which left more than 400 people dead—but officials are warning that another deadly wave could slam into the stricken region. That is putting pressure on conservationists at Ujung Kulon National Park, on the western tip of Indonesia's main island of Java, to ramp up a longstanding plan to find a suitable secondary habitat for the rhinos. "It's become our duty to work harder to find a second habitat because the danger is real," national park chief Mamat Rahmat told AFP. "We're lucky that the tsunami did not affect the Javan rhinos this time. But the threat is there and we need to act accordingly." Widodo Ramono, head of the Rhino Conservation Foundation of Indonesia, added: "If you've only got one habitat and there's another tsunami, the rhinos could be wiped out completely." Plans to find a second home for the species have been in the works for about eight years, with conservationists surveying areas all over Java and neighbouring Sumatra, but so far without success, he said. [Image: There are believed to be fewer than 70 Javan rhinos in a national park not far from a rumbling volcano that triggered Saturday's killer wave] The size of the habitat, climate, food and water sources and safety from poachers are among the key criteria, Rahmat said. "There are still a lot of issues to be worked out," he added. The rhinos' current sanctuary in the park comprises some 5,100 hectares (12,600 acres) of lush rainforest and freshwater streams. Several years ago, three calves were filmed in the national park, raising hopes for the future of the world's rarest rhino after years of population decline. The shy creature, whose folds of loose skin give it the appearance of wearing armour plating, once numbered in the thousands and roamed across Southeast Asia. But, like other rhino species across the world, poaching and human encroachment on its habitat have led to a dramatic population decline. Poaching in particular represents a severe threat, with rhino horns used in traditional Asian medicine fetching ever higher prices on the black market despite a lack of scientific evidence showing the horn has any medicinal value.
  5. Tesla named two independent board members Friday as part of a settlement with U.S. regulators who demanded more oversight of CEO Elon Musk. Oracle co-founder Larry Ellison and Kathleen Wilson-Thompson, an executive vice president at Walgreens Boots Alliance, join the board as independent directors, effective immediately. Musk got into trouble with the Securities and Exchange Commission in early August when he said in a tweet that he had "funding secured" to take the electric car company private at $420 per share. The SEC accused Musk of committing securities fraud, saying that the funding had not been secured and that he had duped investors who drove shares of Tesla up by 11 percent on the day of the tweet. Several weeks later, Musk said the go-private deal was off. Regulators initially wanted to force Musk out of his job as CEO, but agreed to accept $20 million in penalties from both Musk and Tesla. Musk did agree to step down as chairman for at least three years, but acknowledged no wrongdoing. Despite the agreement, Musk has continued to clash with regulators. Just days after settling the case, Musk taunted the government via Twitter, referring to the SEC as the "Shortseller Enrichment Commission." Musk has had a long-running feud with short sellers, a category of investors who bet that the price of Tesla stock will fall. So far, Musk is winning that fight: shares of Tesla Inc. are up more than 20 percent since his clash with the SEC. Tesla named Australian telecommunications executive Robyn Denholm as board chairwoman last month as part of its agreement with the SEC. Although Denholm brings much-needed financial and auto industry expertise to Tesla—which has struggled to produce cars and make money—there hasn't been a marked change in Musk's unorthodox behavior, at least when compared with other chief executives at major publicly traded corporations. Tesla shares slumped 6 percent in early September after Musk was seen appearing to smoke marijuana during an interview that made the rounds on YouTube. Earlier this month, Musk also dismissed the idea that Denholm could exert control over his behavior, saying in an interview with "60 Minutes" that "It's not realistic in the sense that I am the largest shareholder in the company." And Ellison, one of the most recognizable names in Silicon Valley, revealed in October not only that Tesla was his second-largest investment, but also that he and Musk are close. "I'm very close friends with Elon Musk, and I'm a big investor in Tesla," Ellison said. One other thing the SEC required of Tesla as part of the settlement is that somebody vet Musk's tweets and other comments about the company before they are released to the public. Musk also shrugged off that provision, saying none of his tweets have been censored so far and that the company does not review his posts to determine beforehand whether they could potentially affect the company's stock price. Still, in an SEC filing Friday, Tesla said that it "intends to certify to the Commission that it and Elon have timely completed each of their respective actions required pursuant to the Settlement." Tesla shares rose more than 2 percent in midday trading.
  6. Using the Atacama Large Millimeter/submillimeter Array (ALMA), researchers have conducted interferometric observations of the elliptical galaxy NGC 3557 to investigate molecular gas emission from this source. Results of these observations, available in a paper published December 13 on arXiv.org, could be helpful for understanding the process of star formation in this galaxy. Although many surveys of molecular gas emission from galaxies have been performed to date, only a few studies have been carried out on the physical conditions of molecular gas, especially carbon monoxide (CO), in elliptical galaxies. This could be due to the overall weakness of the molecular emission in such galaxies and the small apparent size of these structures. The currently small number of such studies and the high demand for this kind of observation motivated a group of astronomers led by Baltasar Vila-Vilaró of the ALMA Observatory in Chile to investigate the galaxy NGC 3557. Located some 130 million light years away from the Earth, NGC 3557 is a southern-sky elliptical galaxy and a member of a small group of galaxies. It was classified as a flat-spectrum radio galaxy with a jet that bends at distances of a few arcminutes from the center. Vila-Vilaró's team chose NGC 3557 due to its relative proximity and CO brightness, which facilitate the study of its molecular structures in detail. Moreover, NGC 3557 is assumed to be in a stage where little current star formation may be happening, hence astronomers perceive it as an important representative of the molecular gas structures to be expected in post-star-formation scenarios. "As part of the ALMA program 2015.1.00591.S (P.I.: Baltasar Vila-Vilaró), we observed the southern elliptical galaxy NGC 3557 in the 12CO(1–0) line (ALMA Band 3)," the researchers wrote in the paper. According to the study, the astronomers detected the CO(1-0) emission line and a relatively strong continuum at 3 mm exhibiting a flat-spectrum central unresolved source and two jets associated with the larger-scale emission observed at lower frequencies. They added that the molecular gas appears to be concentrated within the inner 815 light years of the galaxy, near the location of the unresolved flat-spectrum source. The authors of the paper found that the total integrated CO(1-0) flux is 4.5 Jy km/s, which indicates a molecular hydrogen mass of about 90 million solar masses, or a total molecular gas mass, including helium, of approximately 122 million solar masses. They noted that these values differ quite significantly from those reported in a similar previous study of NGC 3557 conducted in 2010. Furthermore, the average value of the integrated intensity line ratio CO(2-1)/CO(1-0) was found to be 0.7. This value in NGC 3557 is relatively high when compared with values reported in other elliptical galaxies observed with single-dish telescopes and could be related to the interaction of the molecular gas with the radio jet plasma. The researchers also investigated the kinematics of the molecular gas in NGC 3557 and found that it has some organized rotation with the same orientation as the nuclear dust absorption detected with the Hubble Space Telescope (HST).
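The step from an integrated CO(1-0) flux to a molecular hydrogen mass can be reproduced with the standard CO line-luminosity relation (e.g., Solomon & Vanden Bout 2005). The Python sketch below is a plausibility check, not the paper's own calculation: the distance, observed frequency, and CO-to-H2 conversion factor are standard assumed values.

```python
# Rough re-derivation of M(H2) from the quoted CO(1-0) flux, using the
# standard line-luminosity relation; the conversion factor is an assumption.
S_co_dv = 4.5     # integrated CO(1-0) flux from the article, Jy km/s
D_L = 40.0        # luminosity distance, Mpc (~130 million light years)
nu_obs = 115.27   # observed CO(1-0) frequency, GHz (z ~ 0 for NGC 3557)
alpha_co = 4.8    # assumed Galactic-like conversion, Msun per (K km/s pc^2)

# L'_CO = 3.25e7 * S * nu_obs^-2 * D_L^2 * (1+z)^-3  [K km/s pc^2]
L_co = 3.25e7 * S_co_dv * D_L**2 / nu_obs**2
M_h2 = alpha_co * L_co

print(f"L'_CO ~ {L_co:.2e} K km/s pc^2")
print(f"M(H2) ~ {M_h2:.1e} Msun")  # ~9e7 Msun, matching the ~90 million quoted
```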
  7. India will send a three-member team into orbit for up to a week when it launches its first manned space mission, expected in 2022, the government announced Friday. Indian ministers approved $1.4 billion to provide technology and infrastructure for the programme, according to a government statement. The sum would make India's programme one of the cheapest manned space programmes, stepping up its space rivalry with China. But the statement said India also hopes to take part in "global" space projects. India will become the fourth nation, after Russia, the United States and China, to send a manned mission into space. Ministers approved financing to launch an Indian-developed craft into a "low earth orbit" for a duration ranging from one orbital period to a maximum of seven days, the statement said. Prime Minister Narendra Modi announced in August that India would launch a manned space flight by 2022 with at least one astronaut; until now, however, the cabinet had not approved the project. There will be two unmanned flights and one manned flight to launch the Gaganyaan (Sky-Vehicle) Programme, the statement said. Without giving a date for the blast-off, the government said the manned flight would be "within 40 months" of Friday's meeting. Modi has hailed the national space programme as a prestige project. The government has stated that space flights will boost the economy, generate jobs and enhance capabilities in areas such as medicine, agriculture and fighting pollution. A successful manned mission would allow India to become a "collaborating partner in future global space exploration initiatives with long term national benefits," the statement said. The country has invested heavily in its space programme in the past decade. The Indian Space Research Organisation announced in July that it planned to send an unmanned mission to the moon in 2019. India launched an orbiter to Mars in 2013, which is still operational, and last year launched a record 104 satellites in one blast-off. New Delhi is competing with other international players for a greater share of the satellite market, and hopes its low-cost space programme will give it an edge. China put its first humans into space in 2003, but its Shenzhou programme cost more than $2.3 billion. Experts say the United States spent the equivalent of about $110 billion at current values on preparatory flights and the mission that put the first man on the moon in 1969.
  8. Many plants need to avoid flowering in the autumn – even if conditions are favourable – otherwise they would perish in winter. To flower in the spring they need to sense and then remember winter, a process known as vernalisation. But how do plants sense vital information such as temperature to align flowering with the seasons? Until now, many researchers thought that fluctuations in monthly, daily and hourly temperatures were detected by a small number of dedicated sensors. But new research by the John Innes Centre reveals that plants combine the temperature sensitivity of multiple processes to distinguish between the seasons. "At first glance this might seem like a surprising finding; however, in hindsight, it is very reasonable and it is also more likely as a mechanism to evolve," comments Dr. Rea Antoniou-Kourounioti, first author of the study, which appears in the journal Cell Systems. "Biochemical reactions are naturally temperature sensitive, so the alternative, a few specialised sensors, would suggest that the temperature sensitivity of everything else must be ignored or compensated for. On the other hand, taking inputs from multiple pathways that were already responding to temperature, and evolving to use this combined information, is less complicated and can lead to a more robust system," she explains. [Image credit: John Innes Centre] The team from the labs of Professors Martin Howard and Caroline Dean developed a predictive mathematical model of temperature sensing for the key flowering regulator FLC in Arabidopsis. This vernalisation model can be used in combination with climate models to predict how plants will change their flowering in future climates. In this study, the team collaborated with groups from Sweden to test the model on patterns of data from plants grown in field sites in Sweden and Norwich – and the model matched these well. Arabidopsis is a relative of many crop species, such as broccoli and oilseed rape, so the work could be extended to help breeders develop climate-resilient varieties. Future work will involve adapting the model to crop species and integrating it into current crop prediction models for farmers and breeders. The team will also work with climate modellers to more accurately predict the temperatures that plants will actually experience in future.
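The claim that pooling many temperature-sensitive pathways is more robust than relying on one dedicated sensor can be illustrated with a toy statistical sketch in Python. This is not the team's FLC model: the Arrhenius form, activation energies, and noise level are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 8.314          # gas constant, J/(mol K)
T_true = 278.0     # kelvin (~5 C): a cold spell the plant should register

# Hypothetical pathways: each output follows an Arrhenius factor with its
# own activation energy, read out with independent 5% measurement noise.
Eas = np.array([4e4, 6e4, 8e4])   # J/mol, assumed values
outputs = np.exp(-Eas / (R * T_true)) * (1 + 0.05 * rng.normal(size=(10_000, 3)))

# Invert each pathway's reading back into a temperature estimate
T_est = -Eas / (R * np.log(outputs))

best_single = T_est[:, 2].std()      # least noisy single pathway
combined = T_est.mean(axis=1).std()  # pooled estimate across all pathways
print(f"temperature spread, best single pathway: {best_single:.2f} K")
print(f"temperature spread, combined readout:    {combined:.2f} K")
```

Pooling helps because the pathways' errors are independent, so averaging their implied temperatures cancels part of the noise while each pathway still contributes its own sensitivity.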
  9. On Saturn, changing seasons can mean changes in the haziness—and color—of the skies. In the 13 years the Cassini spacecraft orbited Saturn, from 2004 to 2017, scientists noticed the atmosphere in the planet's northern hemisphere turned from blue-tinted to gold or even salmon. The stark color shift came from changes in the amount of sunlight-triggered haze in Saturn's atmosphere, according to new research. "I think everyone was kind of surprised by why the atmosphere was blue," said planetary scientist Scott Edgington, the deputy project scientist of the Cassini mission. Edgington presented the findings in a poster session last week at the 2018 American Geophysical Union Fall Meeting in Washington, D.C. Scientists are striving to pin down all of the light sources that shine on Saturn and understand how the light interacts chemically with Saturn's atmosphere. Answering these questions may help researchers better understand differences between the atmospheres of the solar system's gas giants, Jupiter and Saturn, and its ice giants, Uranus and Neptune. Jupiter and Saturn have hazes that give them a golden color, while Uranus and Neptune have clearer atmospheres, like Earth's blue skies on a clear day. But as the researchers saw in Cassini images, Saturn wasn't always covered in golden haze. "Of course, people were scratching their heads," Edgington said. "Why isn't it hazy everywhere, just like Jupiter?" In Saturn's case, particularly limited sunlight in the winter seems to let the planet's atmosphere recover from bouts of haziness. The reason for the extra sun protection? The planet's massive rings. The main driver of Saturn's seasons is the planet's tilt, just like on Earth. Earth is tilted so that the Northern Hemisphere faces the sun most directly in June and the Southern Hemisphere sees the sun face-on in December. In December, the Northern Hemisphere experiences long winter nights while the Southern Hemisphere enjoys its long summer days. [Image: Rings make Saturn shadier, bluer and less hazy in winter. Saturn's north pole changed from blue-tinted in 2012 to gold in 2016 as the northern hemisphere's season turned from winter toward summer. Credit: NASA/JPL-Caltech/Space Science Institute/Hampton University] The same effect happens on Saturn, which is tilted about as much as Earth is. But Saturn also has an expansive ring system that blocks sunlight for the hemisphere tilted away from the sun, making winters even less sunny on the gas giant. The planet's changing sun exposure is responsible for the seasonal swings in its atmosphere's haziness, Edgington said. Sunlight breaks apart methane gas molecules, which make up a small but significant fraction of Saturn's atmosphere. Methane's breakup creates other molecules, like ethane and acetylene, which trigger a complex web of chemical reactions that eventually make haze. When one hemisphere of Saturn is enjoying a shaded winter, the haze-making process slows down. Existing haze particles clump into heavier grains and sink further into the planet's atmosphere and out of sight, with no new batches of haze to replace them. Thanks to that, Saturnian summers tend to have hazy, golden skies, while winters have clearer, bluer skies. "It seems like there's a direct connection between what we see and what the chemistry tells us should happen," Edgington said. The researchers will continue to study Cassini's data on Saturn's atmosphere. They still need to incorporate the last few years of Cassini's data into this project, Edgington said.
One aspect of the project Edgington seemed especially excited about was figuring out how light reflecting off Saturn's rings contributes to the planet's sun exposure. Because Saturn's rings extend far beyond the planet's main body, sunlight can bounce off parts of the far side of the rings and onto the planet's dark side. "Even the dark side of the planet really isn't that dark," Edgington said.
  10. A Vietnam court on Friday ordered ride-hailing app Grab to pay a cab company more than $200,000 for losses incurred due to competition—a judgement blasted by the firm as "a giant step backwards" for the country's tech community. The Singapore-based app, which launched in Vietnam in 2013, has been embroiled in a lawsuit with Vinasun, a major taxi provider in the south of the country, since May 2017. Vinasun blamed profit losses amounting to $1.8 million on its rival's entry into the market. A court in Ho Chi Minh City ruled Friday that Grab must compensate Vinasun $206,000 in damages for "having seriously violated the law on transport business", a court clerk told AFP. A local news outlet, which serves as a mouthpiece for the city's department of justice, said Grab's "activities caused losses to Vinasun". But since there was a lack of concrete evidence to prove that Grab was the sole reason for the Vietnamese company's losses, the judge said there were no grounds to award the full $1.8 million in compensation, according to the news outlet. Grab is Southeast Asia's most dominant ride-share company, operating across eight countries in a fast-growing sector with increasing competition. The judgement sets a "bad precedent", said the company's Vietnam head Jerry Lim, allowing traditional companies to sue their competitors "instead of constantly innovating through technology to remain relevant" in the country's vibrant tech industry. "...This is a defeat and giant step backwards for Vietnam's hardworking entrepreneurs and tech talents," he said in a statement. "It is unfortunate that Vinasun's anti-competitive tactics as a reaction to their declining business profits have somehow prevailed." He added that the company was "intrigued" by the verdict, given the lack of a "direct causal relationship" between Vinasun's losses and Grab's business activities. Grab will appeal to seek a reversal of the court's decision, and is also preparing to launch a defamation lawsuit against Vinasun "if there is no retraction of the baseless allegations made". Vinasun could not be reached for comment on Friday. Grab's ambitious ascent has not been without issues. Earlier this year the Competition and Consumer Commission in Singapore fined Grab and fellow ride-hailing app Uber a total of $9.5 million for merging—a move it said substantially reduced competition in the island nation. The region's ride-hailing market is expected to be worth $20 billion by 2025, according to research by Google and Singapore investment vehicle Temasek.
  11. It's relatively easy for galaxies to make stars. Start out with a bunch of random blobs of gas and dust. Typically those blobs will be pretty warm. To turn them into stars, you have to cool them off. By dumping all their heat in the form of radiation, they can compress. Dump more heat, compress more. Repeat for a million years or so. Eventually pieces of the gas cloud shrink and shrink, compressing themselves into tight little knots. If the densities inside those knots get high enough, they trigger nuclear fusion and voila: stars are born. When we observe massive galaxies, we see enormous amounts of X-ray radiation blasting away from their cores. This radiation carries away heat, naturally cooling the galaxies, especially in their cores. So the gas in the core should be compressing and shrinking in volume. The surrounding material should take notice and fall in behind it, funneling itself into the core. And not just a little bit: as much as a thousand solar masses per year ought to be collapsing into the cores of the most massive galaxies as they cool, cool, cool. This enormous cooling and compressing should, by all rights, trigger massive amounts of star formation. After all, you have exactly the right conditions: lots of stuff cooled down into tiny little pockets. So in these galaxies with loads of X-ray output, we ought to be seeing tons of new stars popping out. We don't. That's a problem. Warm and Cozy Galaxies Something has to keep these galaxies warm despite the major loss of heat from their X-ray emission. Something has to stop the gas from compressing all the way down to manufacture stars. Something has to keep the starlights turned down low. As with most mysteries in astronomy, there are various ideas, all with their own strengths and weaknesses, and none of them entirely satisfactory. The mechanisms proposed to explain this conundrum include supernova feedback, powerful shock waves blown out by massive stars, magnetic fields going haywire, and even altering the very shape of the galaxy to prevent further cooling. Perhaps the easiest things to blame are the supermassive black holes that sit in the centers of the galaxies. As the gas cools and flows inwards, it draws itself toward the black hole. The massive sucking vortex of gravity hungrily feeds off the gas, driving it further down. But with all that gas compressing into such a small volume, it heats up tremendously. Sometimes, if the mix of strong magnetic forces is just right, streams of gas can wheel around the black hole, barely avoiding oblivion beneath the event horizon, wind and swirl around, and eventually blast out of the region in the form of a long, thin jet. This jet carries a lot of energy. Enough energy to heat up the entire core of the galaxy, preventing further cooling. If that's not good enough, the extreme radiation emitted by the intensely hot gas as it gets shoved down the gullet of the black hole can blast away at its surroundings, providing more than enough heat to halt – and even reverse – the flows of cool gas. Maybe. [Image: Artist's impression of ULAS J1120+0641, a very distant quasar powered by a black hole with a mass two billion times that of the Sun. Credit: ESO/M. Kornmesser] A Rotten Heartbeat This scenario is definitely appealing, because it's a) really common and b) really powerful. At first glance it's a perfect clincher, but nature, as usual, has a habit of turning nasty.
The trouble is that feeding black holes are fantastically complicated systems, with all sorts of physical processes mixing together, which makes them hard to study. And, wouldn't you know it, when we try to simulate these scenarios on a computer, following the physics as best we can and as best we understand it, we have a lot of trouble getting the right amounts of energy into the right places. Sometimes the galaxies just keep on cooling. Sometimes they blow up. Sometimes they fluctuate back and forth between heating and cooling too rapidly. While we don't have a full and final picture yet, researchers are making steady, if slow, progress in understanding the relationship between giant black holes and their host galaxies. In a recent paper, scientists used advanced computer simulations to try to examine that full picture, including as much of the detailed physics as possible. They found that when it comes to these fantastic processes featuring nature's awesome raw power at its rawest, subtleties matter. Sure, the intense radiation given off by the infalling gas and the jets escaping from near the deathly surface of the black holes play a role in regulating the temperatures of galaxies. But they often fail, delivering their energy to the wrong places or at the wrong times. Physics to the Rescue But radiation and jets aren't the only things driven by the central supermassive black holes. Cosmic rays, tiny charged particles traveling close to the speed of light, flood the vicinity of the maelstrom. They help transport heat at a nice, even, steady pace, keeping the heartbeat of the galaxy going at a regular rhythm. Plus there's good old-fashioned turbulence, with rolling shock waves and general bad temperament driven by the flare-ups in the center. This turbulence does a fine job of preventing the surrounding gas from cooling completely and bursting into star formation. So is this it, the complete story? Of course not. Galaxies are living, breathing creatures, with massive engines of gravity driving their hearts, and intertwined flows of gas shaped by powerful – and sometimes exotic – forces. It's a tough problem to study, but a fascinating one, since by pinning down the relationship between galaxies and their black holes, as communicated through the flows and disruptions of cool gas, we can try to unlock the story of galaxy evolution itself.
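The "thousand solar masses per year" figure quoted above can be sanity-checked with the classical cooling-flow estimate (as in Fabian's reviews of cooling flows), in which gas radiating luminosity L at temperature T deposits mass at a rate of roughly Mdot = 2 mu m_p L / (5 k T). The luminosity and temperature in this Python sketch are assumed, illustrative values for a massive galaxy core, not numbers from the article.

```python
# Classical steady-state cooling-flow estimate, in cgs units.
# The input luminosity and temperature are assumed for illustration.
k_B = 1.38e-16    # Boltzmann constant, erg/K
m_p = 1.67e-24    # proton mass, g
mu = 0.6          # mean molecular weight of fully ionized gas

L = 1e44          # assumed X-ray luminosity of the hot core, erg/s
T = 1e7           # assumed gas temperature, K

mdot = 2 * mu * m_p * L / (5 * k_B * T)   # mass deposition rate, g/s

SECONDS_PER_YEAR = 3.15e7
SOLAR_MASS_G = 2.0e33
print(mdot * SECONDS_PER_YEAR / SOLAR_MASS_G, "solar masses per year")
# -> a few hundred; brighter cores push this toward a thousand
```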
  12. A research team, led by Professor Tetsuo Endoh at Tohoku University, has successfully developed 128Mb-density spin-transfer torque magnetoresistive random access memory (STT-MRAM) with a write speed of 14 ns, for use in embedded memory applications such as cache in IoT and AI devices. This is currently the world's fastest write speed for an embedded memory application with a density over 100Mb, and it will pave the way for the mass-production of large-capacity STT-MRAM. STT-MRAM is capable of high-speed operation and consumes very little power, as it retains data even when the power is off. Because of these features, STT-MRAM is gaining traction as the next-generation technology for applications such as embedded memory, main memory and logic. Three large semiconductor fabrication plants have announced that risk mass-production will begin in 2018. As memory is a vital component of computer systems, handheld devices and storage, its performance and reliability are of great importance for green energy solutions. The capacity of current STT-MRAM ranges between 8Mb and 40Mb, but to make STT-MRAM more practical, it is necessary to increase the memory density. The team at the Center for Innovative Integrated Electronic Systems (CIES) has increased the memory density of STT-MRAM by intensively developing STT-MRAMs in which magnetic tunnel junctions (MTJs) are integrated with CMOS. This will significantly reduce the power consumption of embedded memory such as cache and eFlash memory. The MTJs were miniaturized through a series of process developments. To reduce the memory cell size needed for higher-density STT-MRAM, the MTJs were formed directly on via holes—small openings that allow a conductive connection between the different layers of a semiconductor device. Using the reduced-size memory cell, the research group designed 128Mb-density STT-MRAM and fabricated a chip. In the fabricated chip, the researchers measured the write speed of a subarray, demonstrating high-speed operation at 14 ns with a low power supply voltage of 1.2 V. To date, this is the fastest write operation in an STT-MRAM chip with a density over 100Mb in the world.
  13. Depending on who you ask, blockchain technology is poised to revolutionize the world—from creating a universal currency to building a free and truly private internet. Or, the new technology, built with a combination of encryption and transparency, is a solution in search of a problem. The reality likely falls somewhere in between. While a growing number of startups and researchers are devoting themselves to exploring blockchain's full potential, experts caution that a healthy dose of skepticism is needed to fully evaluate the technology and its eventual place in society. For many individuals, though—including some looking to invest—blockchain technologies and their limitations remain poorly understood, leaving people vulnerable to being exploited by bad actors. Researchers at Princeton University's School of Engineering and Applied Science are striving to change that through education, outreach and research. "Early on we realized this was a technology that was not well understood but that a lot of people were interested in," said Ed Felten, the Robert E. Kahn Professor of Computer Science and Public Affairs at Princeton. "There wasn't a coherent, high-quality way of teaching about this technology or explaining it, so we've tried to systematize the knowledge and unsolved problems underlying it." Simply put, a blockchain is a ledger. But unlike an old-time hotel register gathering dust on a counter, a blockchain ledger is held electronically in multiple locations across the internet. It is visible to any member of the community participating in that particular blockchain. Each copy of the ledger is held on a computer called a node; when someone makes a transaction using the blockchain—say, using virtual currency to order a pizza—the operators of the nodes run through calculations to create a new entry, or block, in the ledger. Each new block is encrypted using a private, numeric key from the person who bought the pizza; the new blocks are also linked to the previous blocks using additional encryption. The combination of encryption and visibility makes entries extremely difficult to fake: because the calculations are carried out on multiple nodes and the results are visible to participants, varying results would be an immediate red flag. The distributed nature of the system means it is hard for a single entity to control. It also makes transactions extremely difficult to trace back to a user. The initial use of blockchain technology was in new forms of currency such as Bitcoin. More recently, the ability to track decentralized transactions reliably has attracted other sectors. Businesses are exploring its use for contracts, app development and international finance. "I think this will be a story of gradual integration, rather than a story of a revolution," said Arvind Narayanan, an associate professor of computer science at Princeton. "It's an interesting new technology, and a number of us here are working to make that technical footing even stronger." In 2014, Narayanan began teaching one of the first university courses on blockchain, which he and Felten soon expanded into an online Coursera series and a popular textbook. At the same time, with colleagues and former and current students, they began innovating ways to maximize the benefits of blockchain and minimize the risks. "There's a lot at stake, and a lot not understood about this technology," Felten said. "As independent academics, the role we try to play is to be explainers, interpreters and B.S. detectors."
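The hash-linked ledger structure described above can be captured in a few lines of Python. This is a bare-bones illustrative sketch, not how Bitcoin is implemented: real blockchains add digital signatures, a consensus protocol, and proof-of-work on top of this chaining idea.

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Create a ledger entry whose hash covers its contents and predecessor."""
    body = {"timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

# A two-block chain: each block commits to the hash of the one before it.
genesis = make_block(["genesis entry"], prev_hash="0" * 64)
block1 = make_block(["alice pays the pizzeria 1 coin"], prev_hash=genesis["hash"])

# Tampering with an earlier block changes its hash and breaks every later link.
tampered = dict(genesis, transactions=["forged entry"])
body = {k: tampered[k] for k in ("timestamp", "transactions", "prev_hash")}
new_digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
print(new_digest == block1["prev_hash"])  # False: the forgery is detectable
```

Because every honest node can recompute these hashes independently, a forged entry on one node disagrees with everyone else's copy, which is the "varying results would be an immediate red flag" property described above.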
That said, Felten and Narayanan believe that blockchain does have a significant role to play—although, most likely, we have yet to imagine what it will be. "In some sense, we're still in search of its major application," Felten said. Numerous Princeton alumni are attempting to fill that unknown by becoming early innovators in the field, including a co-founder of the cryptocurrency Ethereum and founders of several high-profile companies, such as Blockstack. Where they will take these and other ventures depends not only on technical finesse, but imagination. The decentralized network Blockchain's most prominent use so far has been in creating cryptocurrencies, such as Bitcoin and Ethereum, that are not controlled by a central bank. These currencies are not blockchains themselves—they are abstract tokens—but trades of their coins are recorded on blockchains. Because ownership and any transfer of ownership are recorded on the public ledger, participants in the Bitcoin system do not need to trust any one entity. Instead, they place their trust in the distributed ledger technology, which is maintained by a large number of participants around the world. Each cryptocurrency offers a limited number of coins, although new ones are regularly created and doled out as payment to users, called miners, who are the first to solve the difficult computational problems—the hash puzzles—that add blocks to the chain. Miners' computers run algorithms that perform the difficult task of building blockchain records and solving these mathematical problems. In exchange, they receive coins. While this sounds abstract, Felten points out that the system actually has much in common with conventional currencies. "Most money we have exists in numbers on some computer somewhere," he said. "If you go into a sandwich shop, they give you a sandwich in exchange for you telling a bank to move numbers from one account to another." Like paper money, he continues, cryptocurrencies have value because their supply is limited, and because users can be confident that they can exchange them for goods and services. Cryptocurrencies now trade against the dollar, and their combined market cap is over $100 billion. Among their biggest attractions, cryptocurrencies offer a way to transfer money over distances and borders without involving intermediaries that may charge high fees. In other cases, certain cryptocurrencies possess advanced features, including the ability to create smart contracts, or self-enforcing rules that govern escrow arrangements and other interactions. Blockchain is still in its infancy, though, so the true scope of its usefulness is likely yet to be revealed. "It's kind of an analogy to the early days of the internet, where some people were super excited and made a lot of claims about how it would change human existence forever, and some said it was just a fad," Felten said. "While it didn't solve all of humanity's problems, it did turn out to be pretty important." But for all the interesting current and future uses for blockchain, he added, there is "an extraordinary amount of snake oil and exaggeration in the public rhetoric." Because some cryptocurrency transactions are anonymous, for example, they are particularly attractive for criminal groups, including those looking to exchange illegal goods. In other cases, less savvy users are exploited through "pump and dump" schemes, in which unscrupulous investors artificially boost the price of a hot commodity and then quickly sell, causing a crash.
"There are a huge number of scams going on," Narayanan warned. Blockchain is also extremely energy intensive, mostly due to mining, which requires specialized equipment with a high demand for electricity. Bitcoin mining alone accounts for about 0.1 percent of total world energy use—more energy than certain countries, including Denmark and Ireland, consume. As Narayanan testified before the Senate Committee on Energy and Natural Resources in August, this represents a serious problem for energy use and the environment. Coding the future From the early days, Princeton researchers have been striving to mitigate some of these issues and to better understand the technology and its potential. "Bitcoin is portrayed in the media as jumping into existence from the mind of one mysterious person, but I co-authored a paper on the component technologies of cryptocurrencies that cited literature from the early 1980s," Narayanan said. "Continuing to improve on cryptocurrency and blockchain will take a lot more computer science research." BlockSci, for example, is a database that Narayanan and his colleagues built to analyze hundreds of millions of Bitcoin transactions. BlockSci allows them to investigate trends and to answer questions such as how much money is actually being transferred and how much privacy users truly have. "There are lots of interesting scientific and commercial questions we can ask with these data," Narayanan said. A recent investigation revealed, for example, that bitcoins are changing hands less often than what was previously assumed—about 1.4 times per month—suggesting that individuals are using coins less as currency and more as investments. Princeton students and graduates are also pushing the field forward, by creating apps and writing software to improve cryptocurrencies; founding companies based on blockchain; and funding such ventures. Joseph Lubin, one of the founders of Ethereum, graduated from Princeton in 1987 with a degree in electrical engineering and computer science. One recent venture, Basis, founded by Princeton computer science alumni Nader Al-Naji, Lawrence Diao and Josh Chen, recently raised $133 million for effort to build a cryptocurrency that maintains a more stable price than conventional blockchain-based "coins." The Basis system creates the virtual equivalent of a central bank, which automatically adjust the supply of currency, based on demand. One recent Princeton alumni venture, Blockstack, aims to build a completely decentralized internet based on blockchain. According to co-founders Ryan Shea (2012 BSE in mechanical and aerospace engineering) and Muneeb Ali (2017 Ph.D. in computer science), Blockstack, which is registered as a public benefit corporation, was inspired by major issues they perceived in the way the internet works, including concerns about personal data and autonomy. "We saw that a lack of competition and lack of control for the end user was really hampering freedom, security and privacy around the world," Shea said. "We wanted to build a new system that empowers the individual and allows each of us to own our data." Rather than Facebook storing and controlling all of a person's data on its servers, for example, a Blockstack user could easily migrate his or her digital identity from app to app, if desired. Blockstack software for managing profiles and securing accounts is already available, as are decentralized messenger and document editor apps. 
Next year, the company plans to release its own blockchain in tandem with a Blockstack token, and discussions are underway for creating a decentralized Twitter. "We're working with lots of teams to help them build whatever apps they desire on the platform," Shea says. "The most exciting things are less around the exact details of the underlying infrastructure we provide and more around how we enable developers to create new experiences." Blockstack is already coming full circle by inspiring and enabling other Princeton scholars to create new technologies. At the Keller Center for Innovation in Engineering Education's eLab Summer Accelerator Program in August, a team of new Princeton graduates launched Afari, a Blockstack-based social media platform meant to return data ownership and privacy to users and to give everyone an equal chance for their voice to be heard and rewarded. "Social media is so broken in our opinion that you need to redesign it from the ground up," said Avthar Sewrathan, co-founder of Afari and a 2018 graduate in computer science. Blockchain, the team said, makes that possible. "When you make a post on Afari," said co-founder Felix Madutsa, "your data is not stored with us but rather is stored on a decentralized system that you, the user, control and own."
  14. Researchers from Argonne's Environmental Science division participated in one of the largest collaborative atmospheric measurement campaigns in Antarctica in recent decades. On May 13, 1887, the journal Science published a brief history of Antarctic exploration in which it outlined scientific achievements thus far and expressed a hope that new exploration would soon be undertaken. The article makes it apparent that, by the late 19th century, scientists already understood the influence of the region's geography on meteorology and the regulation of ocean currents. "… the meteorological phenomena of the southern hemisphere depend on those of the Antarctic region, and our knowledge of the meteorology of the earth will be incomplete until such phenomena of the south polar region are thoroughly studied." While the hope for further exploration of Antarctica has come to fruition, such exploration has come in fits and starts, due in part to the huge investment in time and money required to transport, install and maintain delicate instrumentation and a small host of scientists. The primary reason, perhaps, is what atmospheric research engineer Maria Cadeddu delicately refers to as the region's "prohibitive conditions." It's a tough place. In 2015, Cadeddu and colleagues from the U.S. Department of Energy's (DOE) Argonne National Laboratory participated in a collaborative atmospheric measurement campaign to understand the impact of regional and large-scale events on Antarctic warming. The team comprised a number of academic institutions and national laboratories, including Argonne, Los Alamos and Brookhaven. The research focused on the micro- and macro-physical properties of Antarctic clouds, like the average size of droplets or the total amount of liquid or ice contained in a cloud. The goal was to determine how much radiation the clouds would transmit based on such parameters. Based at McMurdo Station and on the West Antarctic Ice Sheet (WAIS), the campaign was part of the DOE Atmospheric Radiation Measurement (ARM) West Antarctic Radiation Experiment (AWARE), led by Principal Investigator Dan Lubin from the Scripps Institution of Oceanography. The one-year study deployed the largest assemblage of instrumentation for ground-based Antarctic atmospheric measurements since 1957, and details from that study are emerging in a number of scientific journals, including Nature Communications and the Journal of Geophysical Research: Atmospheres. "The whole idea was to try to figure out how atmospheric dynamics, like air masses that come from the sea, for example, can affect cloud properties and how changes in cloud properties affect the energy balance of the region," said Cadeddu, who works in Argonne's Environmental Science division. "And understanding how clouds affect a system can help with future climate projections." Antarctica is an important region for climate models, she noted, but models rely on data, the more accurate the better. To date, Antarctic climate models have been less than accurate because science lacks quantitative observations of the region; the observations that are available come from satellites, which have issues at very high and very low latitudes. But given the time and the instrumentation provided by AWARE, researchers have begun to fill in many missing pieces in Antarctica's overall climate puzzle.
At home, where temperatures are less frigid, Cadeddu is part of the Argonne cloud and radiation research team that includes Virendra Ghate, a radar meteorologist, and Donna Holdridge, the ARM mentor for the radio-sounding systems. The cloud and radiation research team contributed their expertise in remote-sensing equipment, including LiDAR (light detection and ranging) and radar devices, short-wave spectrometers and microwave radiometers for measuring radiation, and radiosondes (balloon-elevated apparatuses that measure upper atmospheric conditions). Because remote sensors return raw signals rather than direct measurements, researchers must process and interpret the information to obtain physical quantities. For example, pulses sent from LiDAR and radar devices return as scattered signals that encode information about the clouds they pass through. "These sensors use knowledge of how radiation propagates through a medium, as well as how cloud and rain drops interact with radiation. When we examine these signals, we can estimate specific cloud properties, such as particle sizes or the amount of vapor, liquid water or ice they contain," explained Cadeddu. Cloud phase is relevant to radiative properties, or how much radiation the clouds transmit, absorb or scatter. Argonne researchers used this information, in part, to understand differences between cloud conditions in the Arctic and Antarctic and their effect on regional climate. Among the major differences, Antarctica exhibits much less anthropogenic pollution than the Arctic. While this offers more pristine conditions for studying clouds, the lower pollution levels also affect the amount of liquid water present in clouds at very low temperatures. Models convert all the liquid to ice when clouds reach temperatures near -20 degrees C, but the team found that the liquid layer persists at temperatures as low as -35 degrees C in the clouds above McMurdo. Even small amounts of liquid can have a warming effect on the surface of the Arctic, so the team is trying to determine what climate-related effects these liquid-bearing clouds might have in the south. The AWARE campaign made headlines in 2016, when scientists conducting measurements along the West Antarctic Ice Sheet Divide captured one of the largest surface melt events on record. Traditionally, surface melt events at the ice sheet are attributed to warm ocean water beneath coastal ice shelves, but extensive observations showed external factors at work as well. Scientists attribute some of the melting to a strong El Niño event combined with regional conditions, some of which trace back to liquid-bearing clouds. "Clouds exert an important influence on the balance of incoming and outgoing energy at the surface, and these low-level optically thin clouds can have a determinant role in either causing or prolonging melting conditions over ice sheets," said Cadeddu. Cloud characteristics may not have been part of the larger consideration of "meteorological phenomena of the southern hemisphere" when the Science article appeared in 1887. Whatever the factors, the author made clear that 19th century science was looking at a larger, more forward-thinking picture that left room for the potential role of clouds when they wrote the following: " … The important bearing of these problems on practical questions cannot be overrated.
The seaman cannot dispense with the knowledge of the currents, winds, and magnetic elements, and there is hardly a class of people who will not be benefited by the progress of meteorology." Research papers used for this article include "Antarctic cloud macrophysical, thermodynamic phase, and atmospheric inversion coupling properties at McMurdo Station. Part I: Principal data processing and climatology" and "Cloud optical properties over West Antarctica from shortwave spectroradiometer measurements during AWARE," in the Journal of Geophysical Research: Atmospheres, May 22, 2018, and September 3, 2018, respectively; and "January 2016 extensive summer melt in West Antarctica favoured by strong El Niño," in Nature Communications, June 15, 2017.
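As a toy illustration of the retrieval idea Cadeddu describes (inferring a cloud property from how strongly it attenuates radiation), here is a minimal Python sketch that inverts the Beer-Lambert law. The numbers are invented for illustration; real retrievals rely on full radiative-transfer models and multi-channel measurements:

```python
import math

def optical_depth(i_measured: float, i_incident: float) -> float:
    """Invert the Beer-Lambert law, I = I0 * exp(-tau), for optical depth tau.

    A ground-based radiometer measuring how much radiation a cloud
    transmits can estimate how optically thick the cloud is: the less
    radiation gets through, the larger tau.
    """
    return -math.log(i_measured / i_incident)

# Invented example: the cloud transmits 60% of the incident radiation.
tau = optical_depth(i_measured=0.6, i_incident=1.0)
print(f"estimated optical depth: {tau:.2f}")  # ~0.51
```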
  15. Uppsala University researchers have devised a new model for the universe – one that may solve the enigma of dark energy. Their new article, published in Physical Review Letters, proposes a new structural concept, including dark energy, for a universe that rides on an expanding bubble in an additional dimension. We have known for the past 20 years that the universe is expanding at an ever-accelerating rate. The explanation is the "dark energy" that permeates the universe, pushing it to expand. Understanding the nature of this dark energy is one of the paramount enigmas of fundamental physics. It has long been hoped that string theory will provide the answer. According to string theory, all matter consists of tiny, vibrating "stringlike" entities. The theory also requires there to be more spatial dimensions than the three that are already part of everyday knowledge. For 15 years, there have been models in string theory that have been thought to give rise to dark energy. However, these have come in for increasingly harsh criticism, and several researchers are now asserting that none of the models proposed to date are workable. In their article, the scientists propose a new model with dark energy and our universe riding on an expanding bubble in an extra dimension. The whole universe is accommodated on the edge of this expanding bubble. All existing matter in the universe corresponds to the ends of strings that extend out into the extra dimension. The researchers also show that expanding bubbles of this kind can come into existence within the framework of string theory. It is conceivable that there are more bubbles than ours, corresponding to other universes. The Uppsala scientists' model provides a new, different picture of the creation and future fate of the universe, while it may also pave the way for methods of testing string theory.
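For context on what "accelerating expansion" means quantitatively (this is standard cosmology, not specific to the Uppsala model), the Friedmann acceleration equation reads

\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),
\]

where \(a(t)\) is the cosmic scale factor, \(\rho\) the energy density and \(p\) the pressure. Ordinary matter and radiation have \(p \ge 0\) and can only decelerate the expansion; acceleration (\(\ddot{a} > 0\)) requires a component with \(p < -\rho c^2/3\). Dark energy, with \(p \approx -\rho c^2\), is exactly such a component, which is why the observed acceleration demands something beyond ordinary matter.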
  16. Like other professionals, scientists like to be the best at what they do, but they also like to have fun in their job. And in 2018, my colleagues managed just that in claiming a record for decoding the world's longest DNA sequence. For the English scientists involved, perhaps the most important fact is that their DNA read was about twice as long as the previous record, held by their Australian rivals. The glory of gaining the record is the result of an Ashes-style competition to produce ever longer DNA sequences. The record has changed hands several times over the past year, but with this new sequence the trophy seems to be safe in the UK – at least for the moment. But as exciting as it is to win, the most inspiring thing about this record is the science and the future applications that could become available thanks to our ability to decode ever longer sequences.

Jigsaw jumble

The technology that enables scientists to read runs of DNA sequences has come a long way since the millennium-era race to decode the first human genome. There are lots of ways you can now read DNA, but the problem is that animal and plant genomes often run to billions of base pairs (pairs of the DNA building blocks known as A, T, G and C), and so making sense of them is tricky. People have used different methods in the past, but essentially what they do is chop the DNA up into small parts, read each piece and then try to assemble the results back together, a bit like solving a jigsaw puzzle. Putting the DNA pieces together in the correct order is therefore a major obstacle when it comes to DNA sequencing. This is obviously harder the more pieces you have, especially if they are short and very similar to each other. Being able to continuously read ever longer pieces – eventually an entire chromosome in one go – would therefore have a huge impact on science and innovation. In my own research, I am interested in finding the genes that determine the left and right sides of animal bodies. And while I can fairly straightforwardly read the genome of snails like "Jeremy" – which has a shell that coils left instead of right – it is very difficult to make sense of it, because the order is almost completely jumbled. My colleague Matt Loose, also at the University of Nottingham, led the team behind the new world record, which read 2.3m bases of human DNA in one go. Putting that in context, in the most common form of DNA sequencing only a few hundred bases are read at once, creating millions of pieces to put together. If a few hundred bases are equivalent to once around a grand prix track, then a 2.3m base pair read is twice around the circumference of the Earth. In comparison, the main rival Australian team at the Kinghorn Centre for Clinical Genomics is still some way behind. They have yet to get once around the world.

Long reads and small holes

The key technology that is pushing these advances is a very small hole, called a nanopore. DNA bases, or letters, are ratcheted through the nanopore, and their order can be read by monitoring disruptions to an electrical current put through it. If the nanopore were scaled up to the size of a thumb and forefinger pinch, then the scientists would have threaded a rope of over seven kilometres in length through the hole, without it becoming tangled or breaking. In comparison, a more typical DNA sequence would be about half a metre in length. In theory, sequencing a whole chromosome in one go should be possible using this method.
This would then avoid the problem of trying to assemble a massive jigsaw. But natural breaks in each chromosome mean that this may not be possible. Whatever the actual limit of read length, the new methods are already being used to more quickly and cost-effectively identify pathogens in disease outbreaks. The same methods are also being used to rapidly and accurately characterise the genome rearrangements that take place as cells progress to become cancerous. A recent proposal to sequence the genomes of 1.5m known animal, plant and fungal species will also benefit from these new long-read technologies. In future, the methods will help enable truly personalised medicine – having our individual genomes sequenced. In the UK, about 85,000 people have already had their entire genetic code read, with an ambition to sequence a million genomes in the next five years. For the moment, most of this is being done using older, short-read technology, which is still cheaper but misses an important layer of structural information. In my own laboratory, I plan to use the same methods to find the genes that sometimes enable snails to exist in two mirror-image versions of themselves. The same methods may also be used to further unravel the genetics of human diseases, especially those that are due to structural rearrangements and changes in gene copy number. The scientists behind the record believe that it might last for a year or so. And the competition is expanding to include other competitors – just in the last month, a new entrant from the Netherlands came within a whisker of beating the UK record. But given what's at stake, fierce competition can only be a good thing.
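To make the jigsaw analogy concrete, here is a minimal greedy overlap-assembly sketch in Python (toy reads and a toy algorithm, not the production assemblers used in genome projects). It repeatedly merges the two fragments with the largest suffix-prefix overlap, the very step that becomes ambiguous when reads are short and repetitive:

```python
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of `a` that is also a prefix of `b`."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads: list[str]) -> str:
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = reads[:]
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j and overlap(a, b) > best[0]:
                    best = (overlap(a, b), i, j)
        n, i, j = best
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads[0]

# Three short, overlapping fragments of a toy "genome"
reads = ["GATTACAT", "ACATTAGG", "TAGGCCGT"]
print(greedy_assemble(reads))  # GATTACATTAGGCCGT
```

The longer each read, the more of the genome's repeats it spans in one piece, so far fewer merge decisions are needed and far fewer of them are ambiguous. That is the practical payoff of the record-length reads described above.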
  17. Tropical cyclones, and the torrential rains and strong winds these storms bring along with them, threaten coastal communities around the world and are expected to increase in intensity due to climate change. But not every tropical cyclone becomes a natural disaster, and not every natural disaster results in human fatalities. Whether or not a natural hazard, such as a tropical cyclone, becomes a natural disaster depends on whether the hazard overwhelms existing human infrastructure in a particular country or region. But when does a natural disaster result in fatalities? New research presented at the 2018 American Geophysical Union Fall Meeting in Washington, D.C. suggests that at the country level, how effective the national government is, along with how much exposure a particular region or community had to the natural hazard, are both important factors in answering this question. Researchers often use a country's Gross Domestic Product (GDP) and poverty rates as indicators of tropical cyclone vulnerability and mortality. While these factors are often good proxies for determining vulnerability, they make it difficult to parse out what is actually causing vulnerability, and they often don't account for how much exposure a particular community or region experienced, according to Elizabeth Tennant, a Ph.D. candidate in public policy at the University of Maryland, College Park, and a research associate at Clark University, who presented the new findings. "It's not just whether or not a country is rich or poor," Tennant said. "It's the institutions that are in place." In the new study, Tennant created a global dataset of over a thousand storm events from 1978-2005, bridging socioeconomic data, like national government effectiveness, economic development and human capital, with meteorological data, like the wind speed and rainfall associated with a particular storm event. Tennant began by matching each disaster record to a specific tropical storm, allowing her to model the wind and rainfall conditions of the storm, pinpoint how much exposure certain areas had to the natural hazard, and then relate these factors to population and infant mortality data on a subnational scale. How much exposure an area has to a natural disaster affects the vulnerability of the region to disaster mortality, so accounting for exposure can help to improve precision and limit bias, according to Tennant. Tennant then compared how government effectiveness, measured by the World Governance Index – an annual measure of governance around the world put out by the World Bank – affects tropical cyclone fatalities. She found countries with more effective governments experience lower mortality rates than those with ineffective governments. "[The] effect is large, statistically significant," Tennant said about the finding. But it's not just government effectiveness that matters when considering mortality rates. Tennant also found storm fatalities are higher when areas that are already vulnerable face higher exposure to a natural hazard. This finding may seem like common sense, but most research into mortality occurs at the national scale, obscuring how some areas of a country may be more vulnerable than others, Tennant said. Understanding which areas are exposed to natural hazards due to tropical cyclones could be helpful for formulating new policies to minimize the vulnerability that some areas face, and Tennant hopes that her work could help to further these efforts.
"The same approach can also be applied to looking at other natural hazards, like earthquakes," Tennant said.
  18. The experience of single parenthood is more common than typically reported – and children's well-being is not negatively affected by living in single-parent households – according to a study led by the University of Sheffield. A report by Sumi Rabindrakumar, written in her role as policy officer at Gingerbread, the leading national charity working with single parent families, together with University of Sheffield researchers, found that public policy and research need a more nuanced understanding of single parent family life – one reflecting how households change over time. The study, carried out as part of the University's Crook Public Service Fellowship scheme, explored the experiences of more than 27,800 households with children over a six-year period. It found that, while surveys typically suggest that one in four families with children is headed by a single parent at any one point in time, the data suggest that one in three families with children will have been a single parent family at some point over a six-year period. Rosie Ferguson, Chief Executive at Gingerbread, said: "We have been supporting single parent families for a hundred years and we know first-hand how strong and diverse single parents and their families are. "Our report with the University of Sheffield debunks myths about single-parent households and significantly, it shows that children are not negatively impacted if raised by a lone parent. What is most important to a child's well-being is the presence of positive relationships. "We urge policymakers and researchers alike to do more to challenge popular stereotypes and reflect the dynamism of family life." The report found transitions out of single parent family status are also common. Over six years, one in seven single parents reported getting married or cohabiting – and of these parents, nearly three quarters re-partnered with a biological parent of their child. The study also found there is no evidence of a negative impact of living in a single-parent household on children's well-being in terms of their self-reported life satisfaction, quality of peer relationships, or positivity about family life. Children who are living or have lived in single parent families score as highly – or higher – on each measure of well-being as those who have always lived in two parent families. Sumi Rabindrakumar, report author, added: "By taking a more dynamic view of family life, these findings challenge common political and public narratives around single parents and their families. "Not only is the experience of single parenthood more common than typically reported, but family and caring relationships are more complex and often extend beyond the household unit. "Crucially, there are clear signs that children's well-being is not negatively affected by living within a single-parent household. This fresh look at family life must now be reflected in policy making and research alike. To ignore these trends risks remaining out of touch with the reality of everyday lives and the UK's family landscape." The study concluded that policymakers should recognise the fluidity of families and separation – single parenthood is common, and separation in itself does not mean the breakdown of relationships with a child's biological parent, particularly given the prevalence of re-partnering for biological parents. Policymakers should also think beyond the household, understanding and valuing the support networks and relationships between and within households in policy decisions.
Policymakers should also resist popular narratives regarding the perceived 'problems' of single parenthood for children and ensure targeted policy making by taking proper account of the evidence on what affects family outcomes. Professor Nathan Hughes, from the University of Sheffield's Department of Sociological Studies, said: "These findings have clear implications for how single parent families should be understood, valued and supported. Stereotyping single parenthood as a problem is inaccurate and immoral. "The evidence on what affects child and family outcomes is readily available to politicians, but often does not seem to penetrate pre-determined negative political narratives about single parents. "We need to recognise that family extends beyond the household unit. In particular, it is clear that grandparents play a key role in providing both financial and practical support, and therefore in ensuring a child's well-being." The research was carried out by a multidisciplinary team as part of the Crook Public Service Fellowship scheme in the University of Sheffield's Faculty of Social Sciences. The Crook Public Service Fellowship scheme, named in honour of the donor, Emeritus Professor Tony Crook, CBE FAcSS FRTPI, from the University of Sheffield's Department of Urban Studies and Planning and former Pro-Vice-Chancellor, aims to encourage original thinking and influence public policy. The initiative allows future leaders in the public and not-for-profit sector to work closely with academics on pressing policy issues to influence their sector and wider society. Professor Crook said: "I am delighted to see how the Crook Fellowships have achieved what I wanted to see when we set these up. They are helping to build strong collaborations between academic colleagues and Crook Fellows working in the policy and practice communities. "The fundamental aim is to help make the world a better place through rigorous research on difficult policy challenges. These reports show what we can do through building strong links between academics and policy makers."
  19. A "hidden cradle of plant evolution" has been uncovered in Jordan. In Permian sedimentary rocks exposed along the east coast of the Dead Sea, a team led by palaeobotanists from the University of Münster discovered well-preserved fossils of plant groups bearing characteristics typical of younger periods of Earth history. The Permian began some 300 million years ago and ended around 250 million years ago. The researchers present their findings in this week's issue of Science. The newly recovered fossils represent the earliest records of three major plant groups and reveal them to be much older than previously thought. Perhaps the most important finds are fossil twigs of the Podocarpaceae—today the second-largest family of conifers—making them the oldest fossil record of any living conifer family. Researchers also found leaves and reproductive organs of Corystospermaceae, a group of seed plants that went extinct some 150 million years ago, as well as remains of Bennettitales, a peculiar lineage of extinct seed plants with flower-like reproductive organs. Earliest records of three plant groups uncovered in the Permian of Jordan A mummified seed-fern frond flaking off a piece of mudrock after being exposed to the light of day for the first time in some 255 million years; well-preserved fossils like this from Permian sedimentary rocks exposed along the shore of the …more Evidence for the unexpectedly early occurrence of Corystospermaceae in the Permian of Jordan was first published about ten years ago by a research team led by Prof Dr. Hans Kerp. Since then, researchers have uncovered not only the well-preserved leaves but also the characteristic reproductive organs of this group of plants. Like Bennettitales and Podocarpaceae, these plants were believed to have evolved millions of years later during the Early Mesozoic. The fossils are unusually well preserved. "Analysis of characteristic epidermal cell patterns enabled us to resolve the systematic relationships of the plant fossils more precisely," says Bomfleur. "The study area is really exceptional, like a melting pot of floral provinces." The plant fossils there occur in unusual mixed assemblages that consist of plant taxa typical for different floral regions. Earliest records of three plant groups uncovered in the Permian of Jordan The well-preserved plant cuticles are freed from the sedimentary rock using strong acid; after cleaning and bleaching, this isolated frond fragment of an extinct seed-fern can yield important biological and ecological information. Credit: Palaeobotany Research Group Münster The fossil occurrences were discovered in sedimentary deposits from seasonally drier environments of an equatorial coastal lowland—a type of environment that rarely preserves plant fossils. "The occurrence of no less than three major 'modern' plant groups in deposits of just this single rock formation may indicate that such stressed and disturbance-prone tropical environments may have acted as evolutionary cradles also for other plant groups," says Bomfleur. Back in the lab, the team prepared the fossils using a variety of methods, including treatments with strong acids to prepare the plants' cuticles for detailed microscopic analysis.
  20. NIMS has succeeded in fabricating topological LC circuits arranged in a honeycomb pattern in which electromagnetic (EM) waves can propagate without backscattering, even when pathways turn sharply. These circuits may be suitable for use as high-frequency electromagnetic waveguides, which would allow miniaturization and high integration in electronic devices such as mobile phones. Researchers are seeking topological properties, whose functions are unaffected even if the sample's shape is changed. Topological properties were first discovered in electron systems, and more recently the notion has been extended to light and microwaves for building optical and electromagnetic waveguides immune to backscattering. However, realizing topological properties in light and microwaves normally requires gyromagnetic materials under an external magnetic field, or other complex structures. To match existing electronics and photonics technologies, it is important to achieve topological properties with conventional materials and simple structures. In 2015, this research team demonstrated topological properties of light and microwaves in a honeycomb lattice of dielectric cylinders made of materials such as silicon. This time, the team reported that in a microstrip, electromagnetic waves attain topological properties when the metallic strips form a honeycomb pattern and the intra-hexagon and inter-hexagon strip widths differ. The team also fabricated microstrips and measured the electric fields on their surfaces, successfully observing the detailed structure of topological electromagnetic modes, in which vortices of electromagnetic energy polarized in a specific direction are generated during wave propagation. This research demonstrates that topological propagation of electromagnetic waves can be induced in a simple structure made of conventional materials. Because topological electromagnetic wave propagation is immune to backscattering even when pathways turn sharply, compact electromagnetic circuit designs become possible, leading to miniaturization and high integration of electronic devices. In addition, the vortex direction and vorticity associated with topological electromagnetic modes may be used as data carriers in high-density information communications. All these features may contribute to the development of the advanced information society represented by IoT and autonomous vehicles.
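The honeycomb microstrip is a two-dimensional design, but the core mechanism, alternating strong and weak couplings opening a protected gap, already appears in the one-dimensional Su-Schrieffer-Heeger (SSH) model. The Python sketch below is an analogy rather than the NIMS device itself: it diagonalizes a finite chain with alternating couplings and shows that near-zero boundary modes appear only in the "topological" dimerization, much as the differing intra- and inter-hexagon strip widths select the topological phase of the circuit:

```python
import numpy as np

def ssh_spectrum(t1: float, t2: float, n_cells: int = 20) -> np.ndarray:
    """Eigenvalues of a finite SSH chain with alternating couplings t1, t2.

    A 1D caricature of 'different intra- and inter-hexagon widths':
    dimerized couplings open a gap, and when the weaker coupling sits at
    the chain ends (t1 < t2), protected mid-gap boundary modes appear.
    """
    n = 2 * n_cells
    h = np.zeros((n, n))
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = t1 if i % 2 == 0 else t2
    return np.linalg.eigvalsh(h)

trivial = ssh_spectrum(t1=1.5, t2=0.5)      # strong bond first: plain gap
topological = ssh_spectrum(t1=0.5, t2=1.5)  # weak bond first: edge modes
print("smallest |E|, trivial:    ", np.abs(trivial).min())      # ~1.0 (gapped)
print("smallest |E|, topological:", np.abs(topological).min())  # ~0 (edge modes)
```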
  21. For Chinese guests at Marriott International hotels, the check-in process will soon get easier. The hotel giant announced last summer that it's developing facial recognition systems that will allow guests to check in at a kiosk in less than a minute via a quick scan of their facial features. Half a world away, fearful of what such technological advances will mean to their future job security, thousands of Marriott workers across the United States voted this fall to authorize their union to strike. In addition to calls for higher wages and better workplace safety, they pushed for procedures to protect them from the looming impact of technological advancement. "You are not going to stop technology. The question is whether workers will be partners in its deployment or bystanders that get run over by it," the union's president told The New York Times. Indeed, what many are calling "the Fourth Industrial Revolution" is already here, disrupting jobs and labor markets, largely because of the rise and advance of artificial intelligence and robotics. Tinglong Dai, a Carey Business School associate professor in the research track with expertise in how AI interacts with operations management, is among those experts who are optimistic about the long-term impact on workers. "In industries where demand for a product or service will grow in response to increased productivity, the rise of AI/robotics can turn out to be a boon for the job market, stimulating consumer demand and expanding market size," he says, pointing to the success of Uber and Lyft as one obvious example. "They've created a new and larger market for taxi-like services." While Dai acknowledges that advancing technology has killed (and will continue to kill) some types of jobs, he notes that new industries and job functions will be created and will make up for the loss of existing professions. "Part of the promise of AI/robotics has always been to liberate human beings from the '3Ds'—dirty, difficult, and dangerous jobs—so that they can focus on creative, personal, and original activities," says Dai. "I don't necessarily see AI/robotics as substituting for human skills; I see more opportunities for them to complement our strengths." Dismissing doomsayers, Dai says he believes that issues such as widening skill gaps are short-term problems that will be righted. "I am extremely optimistic about the future," he says. "I believe in the unstoppable human desire to create a better world."

The issue, by the numbers

For its 2018 "Future of Jobs Report," the World Economic Forum surveyed 313 chief human resources officers of large employers operating in multiple locations, representing more than 15 million employees around the globe. Over the next several years, they found, companies expect a significant shift on the frontier between humans and machines when it comes to existing work tasks. Currently, companies estimate that 71 percent of total task hours are performed by humans and 29 percent are performed by machines. By 2022, companies predicted that humans would complete only 58 percent of total task hours and machines would complete 42 percent. This shift will be accompanied, business leaders predict, by a need to "re-skill" the workforce. A predicted 54 percent of today's employees will require significant re- and upskilling by 2022.
The study also found:

- 35 percent of the workforce is expected to require additional training lasting up to six months
- 9 percent is expected to require additional training of six months to a year
- 10 percent will require training of more than a year
- Nearly 25 percent of companies are undecided about or unlikely to pursue the retraining of existing employees
- Nearly 66 percent of companies expect workers to adapt and pick up skills as they pursue new positions
- More than half of companies are likely to turn to external contractors, temporary staff, and freelancers to address their skills gaps

A global scorecard

In a 2017 analysis that covers 46 countries comprising almost 90 percent of global GDP, the McKinsey Global Institute found that China faces the largest number of workers needing to switch occupations—up to 100 million if automation is adopted rapidly, or 12 percent of the 2030 workforce. For advanced economies, the share of the workforce that may need to learn new skills and work in new occupations is much higher: up to 33 percent of the 2030 workforce in the U.S. and Germany, and nearly 50 percent in Japan. About 50 percent of the tasks that workers perform, and are paid almost $15 trillion to do in the global economy, could be automated by adapting current technology, the analysis found. But degrees matter. More than half of the occupations requiring less than a high school diploma are susceptible to being replaced through technical automation, whereas only 22 percent of jobs that require a college degree are susceptible. The career fields most likely to increase in demand in the period up to 2022 are those based on and enhanced by technology, including:

- Data analysts and scientists
- Software and applications developers
- E-commerce and social media specialists

Also expected to increase are jobs related to understanding and leveraging emerging technologies, such as:

- AI and machine learning specialists
- Big data specialists
- Process automation experts
- Information security analysts
- User experience and human-machine interaction designers
- Robotics engineers
- Blockchain specialists

Dai says the changing landscape of the global economy will produce opportunities for those who know where to look. "Online streaming has not killed movie theaters, because at a fundamental level people want to go out instead of staying at home," he says. "For the same reason, the hope and desire for newness and excitement will lead people to a new world in which they have fulfilling and interesting work and lives."
  22. Australia has one of the highest rates of pet ownership in the world, with 38% of Australian households owning dogs. Dogs improve the quality of our lives, and studies show that exposure to dogs can even improve our immune system. However, one medium-sized dog produces about 180 kilograms of poo a year. With about 9 million dogs in Australia – roughly 1.6 million tonnes of poo a year between them – it can really start to pile up. Rather than wrap it in plastic and throw it away – where it eventually ends up in landfill – you can use dog poo as a sustainable source of fertiliser.

Poo problems

The waste products of humans and their associated animals have not always been a problem. In the past, even within the memory of people I met living on small Pacific islands, human poo was produced in relatively small amounts because the population was low, and it could decompose naturally and safely within the soil. Healthy soil contains a vast number of microbes and organisms that thrive on organic material. But burgeoning populations have changed this. Waste produced by humans is now an immense problem. Not only is there a waste issue, but human activities have caused soil pollution and degradation that kill soil microbes or impair their capacity to process organic matter. Dog poo is considered an environmental hazard. This is a consequence of its composition: it is three-quarters water, plus undigested food including carbohydrates, fibre, proteins, and fats from the dog's digestive system. Also present are a wide range of resident bacteria that are needed for digestion. If dogs are infected with worms, or other disease-causing microbes, these can be present in their poo. Left on the street, dog poo is washed into waterways, creating a potential health hazard. Once pathogenic microbes from the poo get into waterways, they can find their way into other living things – including humans. People also don't like dog poo because of its smell. This is due to the volatile products produced by microbes in the gut that are involved in the digestion process. More than 100 different chemicals that could contribute to the bad smell have been identified. Because poo smells bad, we avoid dealing with it. Local councils offer plastic bags at parks and other public places to encourage dog owners to collect the poo. Bins, sometimes specifically for dog waste, are often placed nearby so the smelly package can be discarded as soon as possible. But this is not the best solution, because ultimately the dog poo ends up going to landfill, contributing to our ongoing problem of waste accumulation.

[Image: The author's pup, in a garden he helped fertilise. Author provided]

Why dog poo can become a nutrient

Rather than becoming a pollutant, dog poo can become a nutrient for your garden, by being composted in your backyard. If you have a garden, you can make your own compost bin by adding the dog poo to grass clippings, plant or other organic waste, and even sawdust as a source of food for the microbes. The microbes then break down the organic material into humus. During this process the temperature in the compost mixture rises to 50-60℃. Over time, the heat will kill most canine bacteria, as they are adapted to live at the lower temperatures of the dog's gut. Compost contains billions of microbes per gram of material, and competition from these (as well as the environmental conditions of the compost, which are very different from those of the dog digestive system) helps destroy pathogenic canine microbes, if present.
The compost needs to be turned over weekly to ensure uniform composting and oxygenation. Over days or weeks the temperature in the compost drops, indicating when the decomposition process is complete. Then it's time to use your compost to improve your garden! A couple of dog-do don'ts:

- Don't include waste from unknown dogs or from dogs that show signs of disease
- Avoid using it on vegetables for human consumption

If you live in an apartment and don't have a garden or access to green waste, you can still compost dog poo and use the product. There are small compost bins commercially available for this purpose. Composted material from these can be used on your outdoor or indoor plants. And if you don't have any indoor plants, then you should think about getting some, as they can cut down on ozone in the air and even reduce indoor pollution.
  23. An international team of scientists, including an employee of I.M. Sechenov First Moscow State Medical University (MSMU), has developed a device for mixing chemical and biological reaction feeds. The team managed to increase the mixing efficiency up to 90 percent. The new device will be used in biological and chemical experiments. The article was published in the journal RSC Advances. Small mixing devices are used in the course of biological and chemical experiments. For example, in biology, such mixers are used for the analysis, purification, and synthesis of DNA, as well as for the delivery of medicinal drugs. In chemistry, they are necessary for carrying out reactions. There are two types of mixing devices—active and passive. In the first group, the principle of operation is based on external forces, such as magnetic fields, heat, or acoustic fluctuations, which accelerate the mixing process. However, due to their complicated construction, active mixers have reduced reliability and service life. The second type, which uses only the energy of the flow itself, is therefore considered more promising. Passive mixers create a turbulent flow—a flow regime in which the liquid exchanges material between its layers. The efficiency of this method depends directly on the mixer's geometry. As a rule, a passive mixer consists of one or several metal plates placed one after another in a small tube. Each plate has a considerable number of holes of different shapes and sizes, which direct the flows of liquid and ensure efficient mixing. A team of scientists from Iran, Australia and Russia used existing mathematical models of liquid mixing to develop five variants of passive mixers, each with its own unique geometry and mixing characteristics. The scientists then combined them into one hybrid micro-mixer. The new mixing device consists of six consecutive metal plates with holes of different shapes. By changing the combinations of elements, the researchers created a micro-mixer that achieves almost 100 percent mixing of two liquids with low viscosity and 90 percent with high viscosity. "Micro-mixers can be used in medical, chemical, and biological research. We've created a hybrid micro-mixer and compared it with other mixing devices by mixing efficiency. Our device turned out to be applicable to a wide range of biological studies. The suggested micro-mixer has different construction elements that helped us combine the mixing methods. It ensures high process efficiency and is designed to replace the existing models of passive micro-mixers," said Majid Warkiani, a co-author of the work and a research associate of the laboratory of nanotheranostics at MSMU and of the Institute of Biomedical Materials and Devices, University of Technology Sydney.
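Figures like "90 percent mixing efficiency" are typically computed from how uniform the concentration field is at the mixer outlet. Here is a minimal Python sketch of one standard mixing-index definition (synthetic sample values; the paper's exact metric and sampling may differ):

```python
import numpy as np

def mixing_index(c: np.ndarray) -> float:
    """Mixing index in [0, 1] from samples of a normalized concentration field.

    0 means completely segregated (values are only 0 or 1); 1 means
    perfectly mixed (uniform 0.5 everywhere). Defined as one minus the
    standard deviation relative to the fully segregated case.
    """
    sigma_max = 0.5  # std of a 50/50 segregated field of 0s and 1s
    return 1.0 - c.std() / sigma_max

unmixed = np.array([0.0, 0.0, 1.0, 1.0])         # two still-separate streams
well_mixed = np.array([0.45, 0.52, 0.49, 0.55])  # near-uniform outlet
print(mixing_index(unmixed))     # 0.0
print(mixing_index(well_mixed))  # ~0.93
```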
  24. Instead of blood pressure, temperature, and heart rate, the vital signs for a forest are captured in key traits such as the amount of nitrogen in a tree's leaves, the leaf area, or the density of the wood. These "functional traits" can impact how trees grow—and therefore how forests respond to climate change. While researchers have begun trying to tease out these patterns in recent decades, incomplete data has made it difficult to understand what's happening to particular traits in any meaningful way—especially when you get down to the level of individual trees in a forest. To help fill this important knowledge gap, Daniel J. Wieczynski and Santa Fe Institute external professor Van M. Savage, both ecologists at the University of California-Los Angeles, and their collaborators decided to analyze existing data from trait studies on forest communities to see what could be revealed about these shifts on a global scale. "One of the challenges is that you need a lot of data to accurately measure functional diversity," says Wieczynski. "So our idea was to take what functional data we have available from databases and pair this with locally collected field data, as well as data about species abundance, to say something about climate-biodiversity relationships that we couldn't say before." The team, which also included Santa Fe Institute external professor Brian Enquist of the University of Arizona, amassed data from 421 tree communities around the world, including information from 55,983 individual trees from 2,701 species, and examined a range of functional traits that influence individual growth, such as plant height, wood density, leaf area, and the amount of carbon, nitrogen and phosphorus in a leaf. To determine the climatic conditions these tree communities are living in, they also analyzed the temperature, precipitation, wind speed and vapor pressure at each site. The study—one of the first to examine how climate is influencing functional traits in forest communities on a global scale—found evidence of major changes in these traits, which could affect forest productivity and composition and even how forests are distributed around the globe. The researchers found that climate affects nine different traits in various ways: for example, leaf area is most influenced by vapor pressure and temperature, while height is primarily affected by temperature variability. To the authors' surprise, two climatic factors in particular had an outsized effect on trait diversity overall: temperature variability—not just mean temperature—and vapor pressure. They also found evidence that forests are currently shifting their traits in response to global warming. Wieczynski and Savage hope the work could help improve the accuracy of computer models that try to predict how forests will respond to climate change in the future. "By calculating a more accurate relationship between functional diversity and climate, using the methods we used, we'll be able to more accurately predict those changes in the future using these models," Wieczynski says. "And hopefully this will show it's important to measure more trait data in communities, or more individual-level information in communities than just species-level information." "I think these results will be useful to determine climate change's effects on ecological systems," Savage says. This is just the start in gaining a better grasp of how climate change is affecting functional traits in forest communities, Wieczynski adds.
"The next step is to go out to do new field studies where you actually measure trait values for more individuals."
  25. British music retailer HMV, whose first store was opened by English composer Edward Elgar in 1921 and which helped propel the Beatles to fame, collapsed into administration on Friday as consumers switch to digital streaming in droves. His Master's Voice (HMV), known worldwide for its iconic logo of a dog listening to a record player, is the last major music retailer in Britain and has been suffering for years from the decline in physical sales of CDs and DVDs. Hilco Capital, a restructuring company that rescued HMV when it previously entered administration in 2013, said that the board of HMV had decided to appoint administrators, but that its 125 stores in Britain would continue trading for the moment as negotiations continue with suppliers. The company employs around 2,200 people. "It is disappointing to see the market, particularly for DVD, deteriorate so rapidly in the last 12 months as consumers switch at an ever increasing pace to digital service," said Paul McGowan, executive chairman of HMV and Hilco. "During the key Christmas trading period the market for DVD fell by over 30 percent compared to the previous year," he said, adding that this decline was "unsustainable".

Signed The Beatles

Digital downloads overtook physical music sales in Britain for the first time in 2012, and since then platforms for music and film such as Spotify, iTunes, Netflix and Amazon Prime have grown further, undermining retailers. "The switch to digital has accelerated dramatically this year, creating a void that we are no longer able to bridge," McGowan said, adding that the physical music market in Britain is forecast to fall by around 17 percent next year. "As a result, the directors have concluded that it will not be possible to continue to trade the business," he added. HMV opened its first store on Oxford Street in 1921 selling gramophones, radios and popular music recordings. It made history in 1962 when record label EMI, to which HMV belonged until 1996, signed The Beatles. Since its last collapse, HMV had hosted a series of live events in store with musicians like pop star Kylie Minogue and grime artist Stormzy in a bid to increase footfall. But McGowan also noted there had been a "tsunami of challenges facing UK retailers", including a rise in business rates—a tax on non-residential properties.

'First victim' of Christmas slump

Brexit uncertainty and the growth of online shopping have hit British retailers particularly hard this year. Consumer confidence in Britain fell to its lowest level in five years in December, according to the GfK institute. "Poor Christmas trading has claimed its first victim," Richard Lim, chief executive of Retail Economics, told the BBC. The electronics retailer Maplin, the British branch of Toys'R'Us and the discount chain Poundworld all collapsed in 2018. The department store House of Fraser was also forced into administration, while Marks & Spencer and Debenhams had to close stores. Some 150,000 jobs have been lost in Britain's retail sector this year, according to an estimate by Britain's Press Association news agency.