Tipup's Content - Page 17 - InviteHawk
Everything posted by Tipup

  1. Even if you explicitly tell Facebook to not track your location, it says it will still use your IP address to track your location. Kashmir Hill, reporting for Gizmodo: Aleksandra Korolova has turned off Facebook's access to her location in every way that she can. She has turned off location history in the Facebook app and told her iPhone that she "Never" wants the app to get her location. She doesn't "check-in" to places and doesn't list her current city on her profile. Despite all this, she constantly sees location-based ads on Facebook. She sees ads targeted at "people who live near Santa Monica" (where she lives) and at "people who live or were recently near Los Angeles" (where she works as an assistant professor at the University of Southern California). When she traveled to Glacier National Park, she saw an ad for activities in Montana, and when she went on a work trip to Cambridge, Massachusetts, she saw an ad for a ceramics school there. Facebook was continuing to track Korolova's location for ads despite her signaling in all the ways that she could that she didn't want Facebook doing that. [...] "There is no way for people to opt out of using location for ads entirely," said a Facebook spokesperson by email. "We use city and zip level location which we collect from IP addresses and other information such as check-ins and current city from your profile to ensure we are providing people with a good service -- from ensuring they see Facebook in the right language, to making sure that they are shown nearby events and ads for businesses that are local to them."
  2. Google has listened to user feedback and is currently testing a feature that will let G Suite users invite non-Google account holders to view, comment on, suggest edits to, and even directly edit Google Docs, Sheets, and Slides files. From a report: This wasn't possible until now; G Suite users could only share documents and request feedback from users who owned a Google account. The new feature works via PINs (Personal Identification Numbers). Google said that G Suite users would be able to invite a non-Google user to view or edit a document via email. That email would contain a link to the shared document. Non-Google users will be able to open the link and request a PIN, which would be delivered via a second email. Once they enter the PIN, users can view or edit the shared file, based on the assigned permissions.
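Google hasn't published implementation details, so purely as an illustration of the flow described above, here is a minimal sketch of a PIN-gated sharing service. Every name in it (issue_pin, verify_pin, the URL scheme, the 10-minute expiry) is invented for this sketch and is not Google's API.

```python
# Hypothetical sketch of a PIN-gated sharing flow (not Google's actual API).
import secrets
import time

PIN_TTL_SECONDS = 600        # invented policy: each PIN is valid for 10 minutes
_pending = {}                # (doc_id, email) -> (pin, expiry, permission)

def invite(doc_id: str, email: str) -> str:
    """Step 1: the owner invites a non-Google address; this link is emailed."""
    return f"https://docs.example.com/shared/{doc_id}?invitee={email}"

def issue_pin(doc_id: str, email: str, permission: str) -> str:
    """Step 2: the invitee opens the link and requests a PIN, sent in a second email."""
    pin = f"{secrets.randbelow(10**6):06d}"    # 6-digit one-time code
    _pending[(doc_id, email)] = (pin, time.time() + PIN_TTL_SECONDS, permission)
    return pin                                  # would be emailed, never shown in-page

def verify_pin(doc_id: str, email: str, submitted: str):
    """Step 3: a correct, unexpired PIN grants only the assigned permission."""
    pin, expiry, permission = _pending.get((doc_id, email), (None, 0.0, None))
    if pin and time.time() < expiry and secrets.compare_digest(pin, submitted):
        del _pending[(doc_id, email)]           # one-time use
        return permission                       # e.g. "view" or "edit"
    return None
```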
  3. London's Metropolitan Police is testing its facial recognition technology in the capital this week. From a report: It's the seventh time the Metropolitan Police, the UK capital's police force, has trialled facial recognition in public. The technology has previously been used at large events, including Notting Hill Carnival in 2016 and 2017, and Remembrance Day services last year. This year, the technology is being used Monday and Tuesday of this week in Soho, Piccadilly Circus, and Leicester Square -- all major shopping areas in the heart of the city. Cameras are fixed to lampposts or deployed on vans, and use software developed by Japanese firm NEC to measure the structure of passing faces. This scan is then compared to a database of police mugshots. The Met says a match via the software will prompt officers to examine the individual and decide whether or not to stop them. Posters will inform the public they're liable to be scanned while walking in certain areas, and the Met says anyone declining to be scanned "will not be viewed as suspicious."
  4. Amazon may have turned off its Oracle data warehouse in favor of Amazon Web Services database technology, but no one else in their right mind would, Oracle's outspoken co-founder and CTO Larry Ellison says. From a report: "We have a huge technology leadership in database over Amazon," Ellison said on a conference call following the release of Oracle's second quarter financial results. "In terms of technology, there is no way that... any normal person would move from an Oracle database to an Amazon database." During last month's AWS re:Invent conference, AWS CTO Werner Vogels gave an in-the-weeds talk explaining why Amazon turned off its Oracle data warehouse. In a clear jab at Oracle, Vogels wrote off the "90's technology" behind most relational databases. Cloud native databases, he said, are the basis of innovation. The remarks may have gotten under Ellison's skin. Moving from Oracle databases to AWS "is just incredibly expensive and complicated," he said Monday. "And you've got to be willing to give up tons of reliability, tons of security, tons of performance... Nobody, save maybe Jeff Bezos, gave the command, 'I want to get off the Oracle database.'" Ellison said that Oracle will not only hold onto its 50 percent relational database market share but will expand it, thanks to the combination of Oracle's new Generation 2 Cloud infrastructure and its autonomous database technology. "You will see rapid migration of Oracle from on-premise to the Oracle public cloud," he said. "Nobody else is going to go through that forced march to go on to the Amazon database."
  5. AT&T said Tuesday its 5G network is now live in parts of 12 cities across the United States, with the first mobile 5G device arriving on Friday, December 21. From a report: According to an AT&T spokesperson, the company's 5G network is already up and running in parts of the previously promised dozen cities: Atlanta, Charlotte, Dallas, Houston, Indianapolis, Jacksonville, Louisville, Oklahoma City, New Orleans, Raleigh, San Antonio, and Waco. However, the first consumer device that will be able to access that network, Netgear's Nighthawk 5G Mobile Hotspot, will become available just ahead of the Christmas holiday. The company also revealed that it will be using the name "5G+" for the part of its network that will use millimeter wave spectrum and technologies, and it said the Nighthawk 5G Mobile Hotspot will run on that 5G+ network. [...] AT&T's 5G pricing is also interesting. Like Verizon, AT&T is offering an initial promotion that makes the hardware and 5G service cheap up front, with new pricing set to follow later. Early adopters from the consumer, small business, and business markets will be able to "get the mobile 5G device and wireless data at no cost for at least 90 days," AT&T says, with new pricing beginning in spring 2019. At that point, the Nighthawk 5G Mobile Hotspot will cost $499 outright, with 15GB of 5G service priced at $70 per month, which AT&T calls "comparable" to its current $50 monthly charge for 10GB of 4G data.
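Spelled out, the per-gigabyte arithmetic behind that "comparable" claim:

```python
# Per-gigabyte cost of the two AT&T plans quoted above.
plans = {"5G+ hotspot (15GB at $70)": 70 / 15, "4G (10GB at $50)": 50 / 10}
for name, per_gb in plans.items():
    print(f"{name}: ${per_gb:.2f}/GB")
# 5G+ hotspot (15GB at $70): $4.67/GB
# 4G (10GB at $50): $5.00/GB
```

On a per-gigabyte basis the 5G+ plan actually works out slightly cheaper; the monthly outlay is simply $20 higher for 5GB more data.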
  6. Sphero's hinted that it's getting out of the licensed product game, but this week CEO Paul Berberian confirmed that the company is clearing out its remaining licensed inventory and won't be restocking the supply. From a report: That means the company won't be producing any more BB-8s, R2-D2s, Lightning McQueen cars, or talking Spider-Mans. The listings for all the toys mark them as "legacy products" that are no longer in production. App support will continue for "at least two years, if not longer," Berberian says. The Disney partnership lasted three years, but ultimately, the licensed toy business required more resources than it was worth, Berberian tells The Verge. These toys sold well when released with a movie, but interest waned over time as the movie became more distant, he says. Still, the company sold "millions" of BB-8s, although company data shows that the toys weren't used much after initial play time and eventually sat on shelves.
  7. They say revenge is a dish best served cold. But for Mark Rober, it's much sweeter served smart, smelly and covered in glitter. From a report: The former NASA engineer-turned-YouTube star has received plaudits online after designing a booby trap to avenge all those who've fallen victim to a new wave of neighborhood crime: doorstep delivery theft. Rober spent six months combining GPS tracking, cameras, fart spray and glitter in an elaborate and amusing mechanism after discovering thieves had stolen an Amazon delivery from his doorstep. In a video posted on his channel, the 38-year-old, who helped design the U.S. space agency's Curiosity Rover, said his engineering experience left him well-placed to "take a stand" after dismissive police left him feeling "powerless." "If anyone was going to make a revenge ... package and over-engineer the crap out of it, it was going to be me," said Rober, who spent nine years with NASA.
  8. Consumers in Taiwan will only be able to use 4G services from 2019 as the government will shut down 3G services by the end of the year, according to a Sina news report on Monday, citing local Taiwan media reports. From a report: Although the vast majority of the population in Taiwan have shifted to 4G networks, there are still around 200,000 consumers using 3G. This has prompted local carriers to roll out incentives and promotions to get 3G users to shift onto the latest 4G plans. Taiwan's latest move to shut down 3G networks follows its earlier decision to remove all 2G networks on July 1, 2017, as local regulators and telecom operators continue to actively push for the development of 4G network coverage. As of March this year, the number of 4G users has already exceeded the population in Taiwan, said the report. The number of 3G users has declined to some 228,000 people in mid-November from 5.5 million in 2017.
  9. An anonymous reader shares a report: According to the yearly report published by Stockholm-based phone number-identification service Truecaller, spam calls grew by 300 percent year-over-year in 2018. The report also found that telecom operators themselves are much to blame. Between January and October of this year, Truecaller said, users worldwide received about 17.7 billion spam calls. That's up from some 5.5 billion spam calls they received last year. One of the most interesting takeaways from the report is a sharp surge in spam calls users received in Brazil this year, making it the most spammed country in the world. According to Truecaller, an average user in Brazil received over 37 spam calls in a month, up from some 20 spam calls during the same period last year. According to the report, telecom operators (at 32 percent) remained the biggest spammers in Brazil. The report also acknowledged the general election as an event that drove up spam calls in the country. As in Brazil, Indians were bombarded by telecom operators (a whopping 91 percent of all spam calls came from them) and service providers trying to sell them expensive plans and other offerings. Spam calls received by users in the U.S. were down from 20.7 calls in a month to 16.9, while users in the U.K. saw a drop in their monthly dose of spam calls from 9.2 to 8.9. [...] Truecaller also reported that scam calls subjecting victims to fraud attempts and money swindling are still a prevalent issue. One in every 10 American adults lost money from a phone scam, according to a yearly report the firm published in April this year.
  10. One of the great hopes of the UK tech sector, Blippar, has collapsed into administration over a funding row. BBC News reports: The augmented reality firm was co-founded by Ambarish Mitra, and its technology was used in a partnership with the BBC's Planet Earth II series. Blippar was one of the UK's tech "Unicorns" -- start-up businesses that are worth $1bn or more. Mr Mitra became a brand ambassador for the UK to promote British innovation around the world. He claimed to have founded his business from a Delhi slum, leading him to be dubbed a "real-life Slumdog Millionaire". However, the Financial Times ran a profile disputing many of Mr Mitra's claims about his birth and his business development. It seemed to be one of the brightest stars in London's tech firmament, raising big sums from American and Malaysian backers who bought into the message that augmented reality was the next big thing. So why has the Blippar bubble burst? A few years ago it did appear to have something groundbreaking -- you could point its phone app at everyday objects and they would animate into action, give you useful information or serve up an advert. But the business appeared to depend on a very fickle set of customers -- advertising agencies wanting to use its augmented reality tools in their campaigns. Not only are much bigger firms offering similar technology but big brands seem to have concluded that it's a gimmick whose time may already have passed. What's more, Blippar suffered from a lack of focus, trying out a range of ideas -- making an app for Google Glass, opening a Silicon Valley office, launching a facial recognition service.
  11. Welcome to InviteHawk!
  12. Welcome to InviteHawk!
  13. France won’t wait on the rest of the European Union to start taxing big tech. French finance minister Bruno Le Maire says the country will move ahead with a new tax on Google, Apple, Facebook, and Amazon starting Jan. 1, 2019. The tax is expected to raise €500 million ($570 million) in 2019. France and Germany had originally pushed for an EU-wide 3% tax on big tech firms’ online revenues, in part to prevent companies like Apple from sheltering their profits in countries with the lowest tax rates. The deal, which required the support of all 28 EU states, appeared to crumble earlier this month, with opposition from countries including Ireland, home to the European headquarters of Google and Apple. France and Germany attempted to salvage the deal by scaling it back to a 3% tax on ad sales from tech giants. That would effectively limit the tax to Google and Facebook, excluding companies like Airbnb and Spotify that might have been harder hit under the initial proposal. In the meantime, France is moving ahead with its own tax on Google, Apple, Facebook, and Amazon, which are collectively known in the region as GAFA. “The tax will be introduced whatever happens on 1 January and it will be for the whole of 2019 for an amount that we estimate at €500m,” Le Maire said at a press conference in Paris, the Guardian reported today (Dec. 17). UK treasury minister Mel Stride has also suggested the UK could act alone to tax tech giants, if a broader European push failed. “We have a strong preference for moving multilaterally in that space but we have said that in the event that that doesn’t move fast enough for us then that this is something we could consider doing unilaterally, or perhaps with a smaller group of other tax authorities,” Stride said in July. While the US has bristled at talk of taxing companies based in Silicon Valley, American economist Jeffrey Sachs in October endorsed a tech tax, arguing it would help avert a dystopian future in which global wealth became even more concentrated among a small number of people.
  14. Economics is searching for its third act. Over the past few decades, the field had become more self-assured, harnessed more mathematics, and moved further away from social sciences such as psychology in the direction of hard sciences such as physics. Macroeconomists, who are concerned with understanding the economy in its broadest sense, were in a self-congratulatory mood. That is, until 2008. The financial crisis ruptured the profession and exposed deep flaws in its models for understanding the world. The 10 years since the global financial crisis have been plagued with increasing anxiety about inequality and economic security. The brutal and far-reaching economic collapse, deep recession, and slow recovery have puzzled economists. Macroeconomists have been fending off criticism for not foreseeing a financial crisis of such epic scale. This is part of Remaking Economics, a series exploring foundational changes to a field that shapes how we understand the world. During the 20th century, the West suffered from two major economic crises. Each of these brought about a major revolution in economic thinking. After the 2008 financial crisis, no such shift has taken place. Economists are still using many of the same tools built to address the same questions as before. When is the revolution? The crises of the last century provoked two notable shifts in thought when the nature of those economic emergencies revealed defects in the prevailing economic theory. The first was in the 1930s after the Great Depression. High levels of unemployment—a quarter of the US workforce and even higher in some other countries—and sustained low output couldn't be explained by Marshallian economics, the status quo at the time. The consequential figure then was the towering—in both height (6'7") and influence—British economist John Maynard Keynes, who for all intents and purposes invented modern macroeconomics. Over the next few decades, Keynesian economics developed, involving a new general equilibrium IS-LM (investment-savings, liquidity-money) model, which showed how different types of markets—labor, goods, money—interacted with each other. The inner workings of this model don't matter for the purposes of this story, but what's important is that for several decades Keynesian economics was used to guide economic policymaking, which hinged on fiscal policy and the role of government intervention. The economy performed well, for a while. In the 1970s, there was an unexpected inversion of the usual relationship between economic growth and inflation. Following a recession, the US and some other industrialized nations were hit by a period of "stagflation," in which inflation climbed while economic growth stagnated and unemployment rose. Keynesian economic models broke down, especially relating to the behavior of prices. Thus began economics' second act, which took place at the University of Chicago, an institution that has employed more economics Nobel prize winners than any other. By the middle of the 1970s, American economist Milton Friedman rose to prominence by espousing belief in free markets, a focus on money supply, and advocacy of less government intervention. This paved the way for the policy of inflation-targeting at central banks, downgrading the importance of fiscal policy in favor of monetary policy. In 1976, the same year Friedman won the Nobel prize, another economist of the "Chicago School" was also leveling criticism at the prevailing macroeconomic orthodoxy, particularly Keynesian economics.
Robert Lucas, who would win a Nobel prize 19 years later, argued that it was naive to predict the effects of a change in policy entirely on the basis of past observations. What has since become known as the Lucas Critique is that an economic model isn't good enough if it ignores the fact that agents' behavior changes when policy changes. This criticism led to the development of so-called microfoundations in macroeconomics. This means that economic models should be built as an aggregation of microeconomic models that account for changes in individuals' behavior. These individuals—consumers and firms—are forward-looking, optimizing agents who have rational expectations given the information available to them. Microfoundations help form the basis of dynamic stochastic general equilibrium (DSGE) models, which had become popular in central banks and were in wide use in macroeconomics and policymaking at the time of the global financial crisis. In 2003, when Lucas was the president of the American Economic Association, he struck a triumphant note about the state of the field. "Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades," he said. This proclamation proved premature. Within a few years, the global economy abruptly sank into the worst recession since the Great Depression. What went wrong for economics was that a key sector of the economy wasn't given the scrutiny it deserved: finance. The models used at the time by central bankers and other policymakers not only didn't foresee the crisis, they couldn't even conceive of such a shock emanating from the banking sector. The models didn't properly consider financial institutions as agents in the economy, with their own unique incentives and risks. "The shock of how important finance could be to the real economy hit all macroeconomists," says Elizabeth Bogan, a senior lecturer in economics at Princeton. "Many of them now claim they saw it coming, but don't believe that. We didn't." Since the crisis, vast amounts of research have been dedicated to better incorporating the financial industry into economic models. This work is ongoing, but there has been progress. Nonetheless, the spectacular failure to foresee the crisis focused minds on other underlying problems in economics, namely that it treats people as infallible rational actors, puts too much trust in the self-correcting nature of markets, and maintains a monoculture that suffocates outside ideas and theories. Economic growth, particularly when measured by gross domestic product, hasn't produced materially higher levels of happiness. Inequality still rises even as global poverty is reduced. Meanwhile, we're hurtling towards a climate-change crisis that capitalism appears incapable of avoiding (if anything, it's making the problem worse). These are also failures of economics. Where are the solutions? Ten years after the financial crisis, economists rightfully wonder how they can address these challenges with models that originated in the 1970s and have already failed them once before. David Vines, a professor of economics at the University of Oxford, is at the forefront of the search for his field's third act. "As a result of the global financial crisis we are no longer clear what macroeconomic theory should look like, or what to teach the next generation of students," Vines wrote, with fellow economist Samuel Wills.
“We are still looking for the kind of constructive response to this crisis that Keynes produced in the 1930s.” To guide their search, Vines and Wills asked a group of esteemed economists, including Nobel laureates Joseph Stiglitz and Paul Krugman, and former IMF chief economist Olivier Blanchard, to write papers that answer a series of questions about the state of macroeconomics. They wanted to get to the bottom of what is wrong with current models and identify a path forward for macroeconomic theory. The responses were pulled together into a single issue of the Oxford Review of Economic Policy (OxREP) published in January, which featured 14 papers under the title “Rebuilding macroeconomic theory.” The first question was, directly, “Is the benchmark DSGE model fit for purpose?” Eighteen economists authored papers in the journal, and disagreements abound. But one point unites them: DSGE models need an overhaul. There are four key changes needed to the core model: a stronger appreciation for the costs and risks inherent in the financial system, less reliance on the assumption of rational expectations, the inclusion of more heterogeneous (diverse) agents, and better ways to incorporate microfoundations. The shortcomings of DSGE models aren't abstract problems that concern only economic wonks. These models are used increasingly by central banks, including the US Federal Reserve, whose monetary policy influences the entire global economy. If the insights economists derive from popular models are deficient, it ultimately affects us all. The area where most economists believe the most progress has been made—the incorporation of the financial sector into models—still has a long way to go. When critics question the advances of economics, its defenders often say, “well, what about physics?” There is a feeling that if physics isn't judged against its inability to solve the mysteries of the universe, then why should economics be held to such an unfairly high standard? Physicists still haven't resolved the contradictions between general relativity and quantum mechanics, so why should economists be expected to seamlessly integrate finance with macroeconomic theory? “There isn't a ready ‘finance' to sprinkle in your ‘macro' soup,” says John Cochrane, a senior fellow of the Hoover Institution at Stanford University. In fact, finance and macroeconomics are still debating their own basic questions, he explains. Finance is at odds over what constitutes a crisis, the nature of “frictions” in the system, and why people don't take more steps to get around them. Macroeconomists, meanwhile, are puzzling over the source of inflation and the effectiveness of the emergency asset-buying programs in response to the 2008 crisis. “We have spent 10 years making models of the crisis, and that effort seems to be running into declining marginal product, like every other investment,” Cochrane says. “The event is passing into economic history, and people are getting a bit bored with rehashing it over and over again.” The other field economics is regularly compared to is medicine. Doctors are expected to diagnose their patients but could be forgiven for not predicting every detail of their symptoms. As Hélène Rey, an economist at the London Business School, puts it: “If you ask the doctor if you have measles to predict the number of spots you're going to have, the doctor is not going to tell you.
At the same time, they will more or less know what's good for you to treat your measles.” She says that the failure of economic forecasting had been conflated with a failure in economics as a whole. Many economists would be relieved to abandon the expectation that their tools can predict financial crises. This isn't an attempt to evade responsibility. If people knew a recession was starting tomorrow, they would stop spending and investing and the recession would happen today. Instead of “magic models,” the focus should be on creating a system that is more resilient and less prone to crises. Banks should be funding themselves using equity and not short-term debt. “That would cure crises in a heartbeat,” says Cochrane. And so, as much as there is a consensus on a need to advance macroeconomics into a new era, there is still much hand-wringing over what to do next. “What we don't yet have is any new idea of how to do things,” says Vines. Some argue that a completely new way of thinking already exists: agent-based modeling. The easiest way to understand agent-based modeling is to imagine a big computer simulation. Something like the video game The Sims, but the characters make their own decisions. The model is programmed to act out how all the players behave and interact with each other. The players, or agents, perceive and react to their environment, and the decisions they make can change the environment, which makes other agents respond to the changes. While not used too often for economics, these models are increasingly popular in other scientific fields, such as mapping the spread of disease, weather systems, and traffic patterns. Richard Bookstaber believes that agent-based models are just the thing for economics, especially when it comes to predicting and managing crises. In theory, they would help economics achieve a better understanding of the granular structure of the economy and the countless interactions between every agent in it. Bookstaber spent most of his career working in financial risk management for banks and hedge funds, such as Morgan Stanley and Bridgewater Associates. In recent years, he's put his faith in agent-based modelling into practice at the $120 billion pension and endowment fund of the University of California, where he is chief risk officer. In the years after the crisis, Bookstaber worked for the US Securities and Exchange Commission on designing and implementing the Volcker Rule, a key piece of financial regulation implemented to stop banks from taking excessive risks. (Now, the Federal Reserve is looking for ways to loosen it.) He then went to the US Treasury department to develop ways to identify systemic risks in the US financial system and see if an agent-based model could be built to this end. “After 2008 everybody knew there was something missing with standard risk management, and this is filling that need,” he says. Approaching economics from the world of finance, he sees fundamental flaws in the profession's approach on some key issues, which he ominously dubs “the four horsemen of the econocalypse.” The first is computational irreducibility, a fancy way of saying that the system—in this case, the economy—can't be understood through simplification. This is a big problem for economists, since simplification is the raison d'etre of economic models. The second is radical uncertainty, which reflects that we, as humans, can't totally understand the world because there are always new and unpredictable things taking place.
The third is emergence, which is the theory that the whole is greater than the sum of its parts—the system as a whole has properties that individual agents don't, which emerge from the interactions between those agents. It's why a bunch of individually reasonable actions can lead to an unexpected or undesirable result, like a traffic jam. The final flaw, or horseman, as Bookstaber puts it, is ergodicity. This is a combination of the previous three that suggests we do not live in an ergodic world: one in which characteristics and behavior don't change over time. “The whole notion of using econometrics and history to look at what's going to happen is going to fail and the time you see that so obviously is with the crisis,” Bookstaber says. “One of the problems with risk management is that you look at the risks in the past, which is great 95% of the time but doesn't work when it really matters.” All of these issues can be solved using agent-based models, Bookstaber argues throughout his recent book, boldly titled The End of Theory. It's a book that makes grim reading for economists. Bookstaber's argument is that the economy is too complex for the economic models popular today. Instead, modeling a simulation of the world is better. “It's just a foundational attack on economic methods,” he says. “It doesn't have the attraction that economists like, doesn't have equations, you don't get a solution at the end.” For all of their promise, agent-based models aren't ready to take over economics in the way that they have proved useful to natural scientists. Agent-level behavior can't be known to the same level of accuracy, because humans do not act according to fundamental laws of nature that govern their movement like, for example, particles. While economics is criticized for a reliance on rational expectations—essentially, assuming humans are smarter than they really are—discarding it entirely is just as problematic. People may not be perfectly rational but it's just as unlikely that they are systematically irrational. They learn from experience. Doyne Farmer, a professor of mathematics at the University of Oxford, is a long-time advocate of agent-based models but acknowledges that these models are far less developed than others. His back-of-the-envelope calculation is that DSGE and other economic models have had 30,000 person years of work go into developing them, compared with a mere 500 person years for agent-based models. Even so, agent-based models are met with enthusiasm because their adoption by economics is a sign that the field is opening up to lessons from other sciences. And not before time, given that there are now new challenges that make a broader view more appealing. Economists need to understand the impact of artificial intelligence on jobs, the way financial systems adapt in response to regulation, and the consequences of climate change, to name just a few. Enter the Rebuilding Macroeconomics Network, a four-year research initiative to fund innovation in economics paid for by the UK's Economic and Social Research Council. Farmer sits on the group's five-person management board. Rebuild Macro, which launched about a year ago, questions everything we think of when we think about the economy. It casts its net wide for contributors with new ideas on topics that include globalization, sustainability, the financial system, inequality, and disruptive technology. The network's aim is “to transform macroeconomics back into a useful and policy-relevant social science,” it states.
This declaration is bolder than it seems on the surface. It's a direct attack on the direction the field is moving in, which is increasingly mathematical with ambitions of becoming a harder science. Rebuild Macro rejects the monoculture that marks much of macroeconomics. Research by Richard Van Noorden for the science journal Nature shows how insular economics research has been. For six decades, until 2010, economics papers cited other disciplines far less frequently than other subjects. “Historically, economics has indeed been the least outward-looking social science,” according to another paper. “Our discipline is likely to benefit from explorations further afield.” This is starting to happen, slowly, thanks to an increase in empirical research. Keeping up this momentum is a key concern of Rebuild Macro, which has installed biologists, anthropologists, and physicists alongside economists and policymakers at the heart of the network. The director of the Rebuild Macro network is Angus Armstrong, a former head of macroeconomic analysis in the UK Treasury Department, including during the financial crisis. In one area, Armstrong already sees the value in working with people from other professions. “Fundamental to all the macroeconomic models we have this notion of equilibrium,” he says. “It's been fascinating to speak to physicists that look at us like we are mad, saying ‘what makes you think it's stationary?'” He uses the example of a pane of glass. In another form it is sand. It may take millions of years to transition between the two states, but ultimately they have the same molecular structure. “These are examples of how you can have notion of equilibrium but it doesn't have to be stationary,” he says. Psychology is also of use. “How do you deal with decision making when you're faced with genuine ignorance? When you don't know what the possibilities are until you are there?” Armstrong asks. “Psychologists have very interesting ideas about how human beings respond in that way and it does not involve maximization of the best possible options in the way economists would consider it.” Armstrong's main criterion for researchers applying for funding from his group is to think big. “We don't want to fund things that are just a marginal increase on stuff that's been done already,” he says. The group is looking to support “things that really address real world, big macroeconomic questions.” At the end of four years, Rebuild Macro will report to its paymasters which areas it thinks have the potential for a genuine breakthrough in our understanding of the economy. Whether special efforts like Rebuild Macro's or breakthroughs by the profession's heavy hitters at the most prestigious institutions get there first, it's clear that a genuine revolution in economics is afoot—and overdue. But, as Armstrong says, “It's an enormous task.”
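To make the agent-based approach concrete, here is a deliberately toy sketch, in Python, and not any model Bookstaber or Farmer actually uses: two kinds of agents follow simple individual rules, and the boom-bust price swings emerge from their interaction rather than from any single rule. The population mix and coefficients are arbitrary choices for illustration.

```python
# Toy agent-based market, illustration only (not a research model):
# "fundamentalist" agents trade toward a fixed fair value, "chartist"
# agents chase the recent trend, and the price moves with aggregate demand.
import random

random.seed(0)
FAIR_VALUE = 100.0
N_FUND, N_CHART = 60, 40                # arbitrary population mix
prices = [100.0, 100.5]

for t in range(200):
    trend = prices[-1] - prices[-2]
    mispricing = FAIR_VALUE - prices[-1]
    demand = (N_FUND * 0.01 * mispricing    # value traders buy when cheap
              + N_CHART * 0.02 * trend      # trend traders buy when rising
              + random.gauss(0.0, 0.5))     # idiosyncratic noise
    prices.append(prices[-1] + 0.5 * demand)  # price impact of net demand

print(f"min {min(prices):.1f}, max {max(prices):.1f}, final {prices[-1]:.1f}")
```

Even this crude mix produces the noise-driven overshoot-and-correction cycles that the "emergence" horseman describes; research-grade versions add balance sheets, leverage, learning, and thousands of heterogeneous agents.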
  15. Bartenders and cocktail enthusiasts know it: proportions matter. A bit too much or slightly too little of an ingredient, and the drinker won't come back. The same principle applies to food processing plants, for which getting ingredient levels right can be complicated but nonetheless crucial to keep the business afloat. So far, these producers have mostly operated on a combination of gut feeling and technology capable of identifying the different components in liquid food and beverages. They have known, for instance, how much sugar there was in their products. The composition of this sugar, however, has been unknown. Those days are over. By successfully applying FT-NIR technology – a technology using near-infrared light and algorithms to quantify gas components – to liquids under the FAME (Development and demonstration of an innovative FT-NIR-based system for food content analysis) project, Opsis is now capable of distinguishing ingredients at molecular level. How is FT-NIR relevant to the food industry? Dr Olle Lundstrom: Compared to technologies currently in use in the food processing industry, which mostly rely on NIR only, FT-NIR provides better resolution. Using it, you can continuously identify small details that had never been captured before on production lines. FT-NIR has been around for 30 years but has so far been used mainly in laboratories and for industrial applications. With this project, Opsis successfully brought its own FT-NIR Gas technology – used for pollution monitoring – into the food processing industry. Is it really useful to scrutinise food at the molecular level? For some market players it won't be interesting, unless they are looking for something very detailed and specific. For example, there are already solutions available to quantify sugar on production lines. You don't need FT-NIR for that. Until now, however, no technology could measure which types of sugar are in a product. Thanks to FAME, we can now differentiate between fructose sugar, maltose sugar and glucose sugar. What are the main challenges you faced in bringing this technology to the food industry? The first challenge, and perhaps the most important one, was to develop this technology for gas. It took 30 years to get there. FAME has been building on this extensive research and development process to take this existing technology and make it applicable to liquids as well, be it milk, wine, spirits, sugar or water. The second challenge consisted in making online measurements – that is, taking samples from production lines from which many different products with different behaviours, temperatures, flows and pressure come out – and bringing these to a stable laboratory environment. This was essential to make a very detailed analysis possible. Our last challenge was related to prediction models and calibrations. It's pretty much like taking a prism and trying to split out the spectrum into understandable data. To do that, you need a mathematical estimation model able to convert light into a value. This was a great challenge and this model required much fine-tuning to become applicable to the many different possible environments. What would you say were the main achievements brought about thanks to phase 2 funding? Phase 2 funding helped us take the technology we had for gas and bring it to liquids. But it also helped us identify customers interested in using this technology. We now have customers that have been running this technology for a while and are very interested in it over the long term.
We have not been able to release this information publicly yet, but we are currently discussing a future press announcement related to two major multinational corporations we have been working with. Can you describe the use cases for these two customers? Customer number one is a sugar refinery producing liquid sugar and syrup. Such products, in order to become and remain liquid, require a certain composition of different sugar types. If producers were to use only saccharose, the sugar would freeze or remain solid. Thanks to our technology, the customer can measure and control the exact levels of glucose, fructose and saccharose needed on its production line. This not only helps to improve the quality of the final product, but also causes less waste and decreases production cost. The second customer deals with fermentation to produce alcohol. That process also requires specific combinations of sugars, and our equipment allows us to measure and even monitor the fermentation process. No one except Opsis can do this today. If you had to convince a new potential customer, what would be your main arguments? Imagine you have a food processing plant, using some kind of liquid. Today, you have no choice but to keep adjusting all sorts of valves based on gut feeling to get the product you want. With our technology, you can actually adjust these valves precisely to have an optimised process, depending on whether you want to save time or cost, or have a maximum yield. What are your objectives for the next five years? Within six months, we intend to go public with the announcement of our two main customers. Once it's done, we will expand across Europe to get closer to the plants. Then, we'll start looking into worldwide expansion.
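Opsis's calibration models are proprietary, but the "mathematical estimation model able to convert light into a value" described above is, in generic chemometrics practice, a multivariate regression from absorbance spectra to concentrations, often partial least squares. A minimal sketch with synthetic stand-in data (the band positions, noise levels, and concentration ranges are invented for illustration):

```python
# Generic NIR calibration sketch with synthetic spectra (Opsis's actual
# models and measured sugar spectra are proprietary). PLS regression maps
# a spectrum to glucose/fructose/saccharose concentrations.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(1000, 2500, 200)      # nm, a typical NIR range

def fake_band(center, width):                   # Gaussian stand-in for a real band
    return np.exp(-((wavelengths - center) / width) ** 2)

# Pretend each sugar has one characteristic absorption band.
bands = np.stack([fake_band(1450, 60), fake_band(1930, 80), fake_band(2100, 70)])

conc = rng.uniform(0, 10, size=(150, 3))        # glucose, fructose, saccharose (g/L)
spectra = conc @ bands + rng.normal(0, 0.01, (150, wavelengths.size))

model = PLSRegression(n_components=5).fit(spectra[:120], conc[:120])
pred = model.predict(spectra[120:])
print("mean absolute error (g/L):", float(np.abs(pred - conc[120:]).mean()))
```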
  16. Using the Visible and Infrared Survey Telescope for Astronomy (VISTA), astronomers have detected a new bright quasar at a redshift of about 6.8. The newly identified quasar, designated VHS J0411-0907, is the brightest object in the near-infrared J-band among the known quasars at redshift higher than 6.7. The finding is reported in a paper published December 6 on arXiv.org. Powered by the most massive black holes, bright quasars (or quasi-stellar objects, QSOs) with high redshift (over 6.0) are important for astronomers as they are perceived as the brightest beacons highlighting the chemical evolution of the universe most effectively. However, such objects are extremely rare and difficult to find. To date, around 100 high-redshift quasars have been found mainly thanks to large-area optical and infrared surveys. This number is still insufficient for significantly advancing our knowledge about the early stages of evolution of the universe. Now, a team of astronomers led by Estelle Pons of the University of Cambridge, U.K., reports the finding of another important addition to the list of bright, high-redshift quasars. By employing the near-infrared VISTA Hemisphere Survey (VHS), the researchers have identified a new quasar at a redshift of 6.82, which received designation VHS J0411-0907. The new QSO has been selected by spectral energy distribution (SED) classification using near-infrared data from VISTA, optical data from the Panoramic Survey Telescope And Rapid Response System (Pan-STARRS), and mid-infrared data from NASA's Wide-field Infrared Survey Explorer (WISE). "By combining colour-selection and SED fitting χ2 selection, we found a new high-z quasar VHS J0411-0907 at a redshift of 6.82," the astronomers wrote in the paper. According to the study, VHS J0411-0907 has a bolometric luminosity of about 189 quattuordecillion erg/s, black hole mass of around 613 million solar masses, and an Eddington ratio of approximately 2.37. The researchers noted that these parameters make VHS J0411-0907 the quasar with the highest Eddington ratio and one of the lowest black hole masses among the known QSOs with redshifts of over 6.5. "The high Eddington ratio of this quasar is consistent with a scenario of low-mass BH [black hole] seeds growing at super-Eddington rates," the paper reads. Notably, VHS J0411-0907 has the brightest near-infrared J-band continuum magnitude of the nine known quasars with redshifts higher than 6.7 and is currently the highest redshift QSO detected in the Pan-STARRS survey. VHS J0411-0907 is the seventh quasar with redshift over 6.5 discovered using VHS. Pons' team hopes that further studies using this survey will yield the detection of dozens of new high-redshift quasars. "Based on the Jiang et al. (2016) luminosity function, we expect to detect in 10,000 deg2 of VHS about 20, 34 and 15 quasars with 6.5 < z < 7.0 for the J-band limiting depth of VHS-ATLAS, VHS-DES and VHS-GPS respectively. In addition to about 6, 14 and 5 quasars with 7 < z < 7.5," the astronomers noted.
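Those figures can be sanity-checked against the standard Eddington luminosity relation, L_Edd ≈ 1.26 × 10^38 erg/s per solar mass; the small gap to the quoted 2.37 comes from rounding in the reported mass and luminosity:

```python
# Consistency check of the reported VHS J0411-0907 parameters.
L_BOL = 189e45            # bolometric luminosity in erg/s ("189 quattuordecillion")
M_BH = 6.13e8             # black hole mass in solar masses
L_EDD_PER_MSUN = 1.26e38  # Eddington luminosity per solar mass, erg/s

l_edd = L_EDD_PER_MSUN * M_BH
print(f"L_Edd = {l_edd:.2e} erg/s")              # ~7.7e46
print(f"Eddington ratio = {L_BOL / l_edd:.2f}")  # ~2.45, near the quoted 2.37
```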
  17. Discord, the chat service for PC gaming, has quickly put down two clear markers in the sand. Back in August, the platform announced it would be selling PC games, putting it in direct competition with the likes of Steam, GOG and EA Origin, but now it's gone a step further, offering developers a far more generous slice of the revenue: 90 per cent, rather than the industry-standard 70. "Turns out, it does not cost 30 per cent to distribute games in 2018," the company wrote in a blog post announcing the initiative. "After doing some research, we discovered that we can build amazing developer tools, run them, and give developers the majority of the revenue share. "So, starting in 2019, we are going to extend access to the Discord store and our extremely efficient game patcher by releasing a self-serve game publishing platform. No matter what size, from AAA to single person teams, developers will be able to self publish on the Discord store with 90 per cent revenue share going to the developer." The other 10 per cent will cover operating costs, apparently, but Discord will "explore lowering it by optimising our tech and making things more efficient." This is an announcement sure to give Valve the willies, although the whole space is becoming a lot more competitive generally. The Epic Store, for example, charges 12 per cent, but will also absorb the five per cent licensing cost for the Unreal Engine, if used. Still, while Steam remains the main place where the players are, it'll be hard for developers to vote with their feet and embrace this more fragmented marketplace. If gamers embrace the opposition, however, we may end up with a far more developer-friendly playing field in 2019. µ
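For a concrete sense of the gap, here is the developer's take-home on a hypothetical $60 title under each store's headline cut (Steam's volume-discount tiers and Epic's waived Unreal royalty are ignored for simplicity):

```python
# Developer take-home on a $60 sale under each store's headline revenue cut.
PRICE = 60.00
cuts = {"Discord (10%)": 0.10, "Epic (12%)": 0.12, "Steam (30%)": 0.30}
for store, cut in cuts.items():
    print(f"{store}: developer keeps ${PRICE * (1 - cut):.2f}")
# Discord (10%): developer keeps $54.00
# Epic (12%): developer keeps $52.80
# Steam (30%): developer keeps $42.00
```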
  18. As the world still tries to find a really good alternative to passwords, there's bad news for those who thought that facial recognition was the key, after a journalist from Forbes was able to fool most phones with a 3D printed head. Thomas Brewster got his own noggin 3D printed and tested it on the facial recognition on four Android phones, plus the iPhone X. Of the five, only the iPhone X wasn't fooled by the bust - the four Android devices opened without a fuss. Apple has bet big on facial recognition, whilst fingerprints are the more popular option in Android, and with good reason, it would seem, as only Apple has shown itself to be ready for one of the most obvious ways that it could be fooled. Look at the Pixel 3 - there's no default for facial recognition there, but it has been added by third parties - Huawei, Samsung and OnePlus all have the option on their current flagships. Yet none is as secure as Apple's offering. It's no secret that facial recognition is still at its fledgling stage, as demonstrated by the epic fails of police attempts to implement it. The message here seems to be that, if you've got an Android phone, stick to fingerprints, or better still passcodes, which are more secure than either biometric solution - compounded by the fact that officials can't generally force you to give up your phone PIN, but they can make you look at it. Last week it emerged that Taylor Swift fans were subjected to clandestine facial recognition checks at one of her gigs, as attempts were made to scupper her stalkers. The whole thing makes you wonder if the entire premise of Lionel Richie's 'Hello' video was a blind woman who set out to steal all of Lionel's Bitcoins and read his emails. μ
  19. Chipmaker Qualcomm claims Apple remains in violation of a Chinese court's orders to stop selling iPhones despite a software update that the firm pushed out on Monday. Last week, China granted Qualcomm an injunction that banned "the import and sale of nearly all iPhone models in China." Apple, however, claimed that "all iPhones would remain available for sale to customers in China," and said it would release an iOS update to address the three patents involved in the never-ending Qualcomm case. As promised, Apple unleashed iOS 12.1.2 this week, which brings with it eSIM support, bug fixes and - most importantly - changes to tackle the patents involved with Qualcomm's ongoing legal battle. It hasn't done much to satisfy the blood-hungry chipmaker, though, which says that Apple continues to violate the injunction that would see iPhone sales banned in China, despite its iOS 12.1.2 update. Qualcomm believes that Apple is still in violation of the court order as it continues to sell iPhones and has not been given explicit permission to do so. "Despite Apple's efforts to downplay the significance of the order and its claims of various ways it will address the infringement, Apple apparently continues to flout the legal system by violating the injunctions," Don Rosenberg, Qualcomm's general counsel, told Reuters in a statement on Monday. "Apple's statements following the issuance of the preliminary injunction have been deliberate attempts to obfuscate and misdirect." When contacted by Reuters for comment, Apple reiterated its earlier statement: "Qualcomm's effort to ban our products is another desperate move by a company whose illegal practices are under investigation by regulators around the world. "All iPhone models remain available for our customers in China. Qualcomm is asserting three patents they had never raised before, including one which has already been invalidated. We will pursue all our legal options through the courts." After last week being granted a preliminary injunction that bans the import and sale of the iPhone 6S, iPhone 6S Plus, iPhone 7, iPhone 7 Plus, iPhone 8, iPhone 8 Plus and iPhone X, Qualcomm is now reportedly seeking a ban on Apple's latest iPhone XS, XS Max and XR smartphones. µ
  20. Chinese phone maker Huawei's next-gen P30 and P30 Pro smartphones will buck the trend of in-screen cameras with a OnePlus 6T-esque 'waterdrop' notch. At least that's according to casemaker Olixar, via MobileFun, which has already started flogging accessories for Huawei's as-yet-unannounced 2019 flagships. Images of the cases (above) show that both the P30 and P30 Pro will adopt a teeny-tiny waterdrop notch, rather than a 'punch hole' cutout like that seen on Huawei's own Nova 4 handset. This, according to MobileFun, is because OLED displays with the 'punch-hole' design are currently only available to Samsung; the newly-announced Nova 4 sports an LCD screen. Olixar's cases, if accurate, also show that the higher-spec Huawei P30 Pro will feature a quad-camera setup on its backside. It won't be the first, though; while last year's Huawei P20 Pro was the first to sport a triple-camera setup, Samsung's Galaxy A9 (2018) has already arrived with four cameras in tow. The P30 Pro's vertical quad camera array will be coupled with a separate cutout for a dual-LED flash, the images show, alongside two additional, as-yet-unidentified sensors. The lesser-specced Huawei P30 will allegedly feature a triple-camera setup, but Olixar hasn't shared any images of that setup. While Huawei hasn't commented on the leak, Walter Ji, the company's European head, suggested earlier this year that the firm would be shoving four cameras onto its smartphones in 2019. "Next year we will definitely see more innovation in the camera, and now we have three, imagine four for next year," he said. We don't know much else about the smartphone duo, but we'd put money on the P30 and P30 Pro packing Huawei's 7nm Kirin 980 processor, Android 9 Pie and the same 48MP main camera sensor as seen on the Nova 4. All will be revealed early next year. Probably. µ
  21. User reviews were once extremely helpful in knowing what to buy. None of the perceived biases of professional reviewers: just bite-sized notes from people like you, telling you whether a product is good or not. The trouble is that bite-sized reviews are incredibly easy to fake convincingly, and there's a cottage industry of fake-review writers eagerly gaming algorithms everywhere from Amazon to TripAdvisor. Now Google has had enough and has warned those tempted to astroturf its Android Play Store that a crackdown is imminent. Quite how effective it can be is another matter entirely. Google boasts that it's "deployed a system that combines human intelligence with machine learning to detect and enforce policy violations in ratings and reviews", but accepts that it's a "big job," highlighting that it detects millions of dodgy reviews and thousands of iffy apps every week. "Our team can do a lot, but we need your help to keep Google Play a safe and trusted place for apps and games," the blog post reads, before supplying advice for both developers and regular Android users. Developers are encouraged not to buy fake or incentivised reviews, which feels like a sensible place to start if you're trying to reduce the number of fake or incentivised reviews. More usefully, developers are told not to run campaigns where five-star reviews are rewarded by in-game items or currency. But it isn't just developers who get to enjoy the fun of outsourced quality control. Users are also invited to join the party by not accepting money or goods in exchange for ratings and taking valuable minutes away from their precious time on Earth to read the comment-posting policy. "It's pretty concise and talks about all the things you should consider when posting a review to the public," the post pleads, making it sound only marginally more appealing. Users are also encouraged to keep feedback constructive and to avoid profanity and hateful or sexual content. This raises the question of how many people have been leaving sexual reviews, but perhaps that's best left unanswered. Finally, everyone can help by reporting iffy-looking reviews or flagging them as spam.
  22. PC maker Lenovo has beaten the impending CES rush with the unveiling of two new ThinkPad devices: the ThinkPad L390 and ThinkPad L390 Yoga. While, aesthetically, the biz-focused laptops offer few differences over their ThinkPad L380 and ThinkPad L380 Yoga predecessors, they're the first ThinkPads to ship with Intel's 8th-generation Whiskey Lake chips; expect a performance boost over Kaby Lake-R and added support for Gigabit WiFi. Both the traditional ThinkPad L390 and convertible ThinkPad L390 Yoga pack 13.3in full HD displays, and support up to 32GB of DDR4 memory, up to 512GB of PCIe solid state storage, and offer two USB Type-C ports, two USB 3.1 Type-A ports, HDMI 1.4 and mini Ethernet jacks, and a microSD card reader. You'll also find baked-in fingerprint readers, with Lenovo encrypting data on its dTPM 2.0 chip. The two laptops also pack 45 Wh batteries, although Lenovo estimates you'll get two hours of extra battery life out of the ThinkPad L390 compared to the L390 Yoga, the latter of which has a hinge that lets you twist the screen a full 360 degrees to use the device as a touchscreen tablet. "The ThinkPad portfolio has always offered a broad choice to give customers the specific device they need. The updated ThinkPad L390 and L390 Yoga are no different. Positioned with price-sensitive business customers and professional consumers in mind, the L series maintains the ThinkPad reputation for ruggedness and durability," Lenovo swoons. "Sitting below our ThinkPad X1 premium laptops and the mainstream corporate workhorse T and X series, the updated L390 and L390 Yoga are designed to provide end-users with mainstream performance and value." The Lenovo ThinkPad L390 and L390 Yoga will be available in the US later this month, priced from $659 and $889, respectively. There's no word yet on UK availability. µ
  23. If there's one thing you'd hope would be supported by the world's most sophisticated cybersecurity, it'd be ballistic missiles. Losing your CV and some photos due to malware is one thing. Losing an entire city is something else entirely. Unfortunately, the USA's ballistic missile system is irresponsibly insecure, according to a damning new report from the Department of Defense Inspector General. The paper found a total lack of encryption, no antivirus, no multifactor authentication and unpatched vulnerabilities dating back to 1990. We'll bet there's at least one instance of 'password123' in the system, too. The report was compiled in April this year, so hopefully the information - in parts heavily redacted - is now out of date. Even if it is, it's worrying that security went unchecked for quite so long. The department spot-checked five random locations where the Missile Defence Agency (MDA) had placed ballistic missiles designed to intercept nukes heading for the mainland, protecting the United States. Theoretically, multi-factor authentication is required for new MDA employees, but the investigators found that this rule was being ignored at three of the five sites. The report highlights one user who had been there for seven years accessing Ballistic Missile Defence Systems without the required common access card. While the systems are password protected, a clever spear phishing campaign could be all that's required to access the missiles. Yikes. That's worrying, but not quite as worrying as the unpatched vulnerabilities the report highlights. IT administrators at three of the five locations - possibly the same ones - had failed to keep computers up to date, with vulnerabilities highlighted dating back to 1990. We've all been guilty of pressing 'update later', but generally not for 28 years running. These basic security flaws are bad enough, but get even worse when considering the weaknesses at the sites themselves. Not only were security cameras found to leave huge swathes of bases uncovered, but server racks at two of the locations were unlocked and easily accessible. In one of these, the rack was unlocked right next to a sign saying that the server door must be locked at all times. The auditors reported that they weren't challenged when entering the facilities without proper ID, and also noted that sensors often listed doors as closed when they clearly weren't. Combine all that with a lack of encryption at three locations, and no intrusion detection system, and it's a wonder the missiles haven't already been fired. Sleep well, reader. µ
  24. If you're partaking in some Christmas shopping in London this week, you may want to pack a fake moustache and glasses. Police in London are trialling some pretty invasive facial recognition software in unmarked vans in Soho, Piccadilly Circus and Leicester Square. Researchers at Cardiff University provided some terrifying numbers on the potential of facial recognition, stating that it has the capacity to identify as many as 18,000 faces every minute. That's an estimate that should be put to the test on one of the busiest shopping days of the year in some of the most congested parts of the country. Shoppers will be guinea pigs in a police trial whether or not they're suspected of committing a crime. That's the bad news. The good news is that the technology in question is so woefully inefficient that even the most dedicated of career criminals has very little to worry about. Back in May, a Freedom of Information request from Big Brother Watch revealed that 98 per cent of the software's matches were inaccurate. And last month, a follow-up FoI request showed that low bar had sunk further, and was now hitting absolute zero. The Cardiff University report highlighted above also noted that the technology is weaker in crowds and low-light conditions, which sounds just perfect for winter in central London. But its general uselessness isn't really the point, as campaigners from Big Brother Watch have been highlighting. "Live facial recognition is a form of mass surveillance that, if allowed to continue, will turn members of the public into walking ID cards," said Silkie Carlo, Director of the organisation. "As with all mass surveillance tools, it is the general public who suffer more than criminals. The fact that it has been utterly useless so far shows what a terrible waste of police time and public money it is. It is well overdue that police drop this dangerous and lawless technology." The Metropolitan Police Force defended the trial of the technology and added that members of the public would be informed of what's going on: "As with all previous deployments, the technology will be used overtly with a clear uniformed presence and information leaflets will be disseminated to the public. Posters with information about the technology will also be displayed in the area." Campaign group Liberty thinks the reality was nowhere near that promise: "Met Police are using #FacialRecognition surveillance tech on the streets of #London today. These privacy-invading cameras snatch our deeply personal biometric data without our consent - and the van isn't even marked up. #ResistFacialRec" — Liberty (@libertyhq) December 17, 2018. It looks like actually reading those signs would involve getting close enough to be scanned. The perfect honey trap for those of a curious nature. µ
  25. Microsoft's big hairy audacious plan to move its "meh" web browser Edge onto Google's open source Chromium engine was, in part, Google's fault. That's the claim of a former Microsoft intern who says that EdgeHTML maintenance was becoming impossible because Google kept breaking compatibility in Chrome, leaving Edge devs scrambling to patch their own software. 'JoshuaJB' posted on Hacker News: "I very recently worked on the Edge team, and one of the reasons we decided to end EdgeHTML was because Google kept making changes to its sites that broke other browsers, and we couldn't keep up. For example, they recently added a hidden empty div over YouTube videos that causes our hardware acceleration fast-path to bail (should now be fixed in Win10 Oct update)." He goes on to explain that without that indirect interference, he believed that Edge's 'fairly state-of-the-art' hardware acceleration was producing better battery life results than Chrome, and it was only after YouTube's change broke that fast-path that Edge stopped performing as well. He adds: "What makes it so sad, is that their claimed dominance was not due to ingenious optimization work by Chrome, but due to a failure of YouTube. On the whole, they only made the web slower." 'JoshuaJB' says that he doesn't think that what was done to YouTube was a deliberate attempt to stymie the progress of Edge, but some within Microsoft think that is exactly what it was, especially as Google refused to reverse the changes that were impairing Edge performance. Microsoft confirmed last month that it was moving future versions of Edge to the open source engine, with EdgeHTML being retained only for legacy Universal Windows Apps (UWP) and other compatibility use cases. Reaction to the news has been mixed, with some hailing the further move towards a common standard, while others questioned the wisdom of having just three major web engines left - Apple's WebKit (used by Safari), Mozilla's Gecko (Firefox) and Chromium's Blink, which powers Chrome, Opera, Vivaldi and, of course, Edge. On the plus side, it does seem to have calmed down the browser wars with their annoying self-serving pop-ups, at least for now. μ