Pornhub 2018 in review

Follow along to see the most interesting data points amassed by our team of statisticians, all presented with colorful charts and insightful commentary. Enjoy!

The Year in Numbers
Top Searches & Pornstars
Traffic & Time on Site
Gender Demographics
Age Demographics
Devices & Technology
Celebrity Searches
Movie & Game Searches
Events, Holidays & Sports
Top 20 Countries in Depth

Source: https://www.pornhub.com/insights/2018-year-in-review

Team that invented way to enlarge objects now invents method to shrink objects to the nanoscale, decreasing their volume 1,000x

MIT researchers have invented a way to fabricate nanoscale 3-D objects of nearly any shape. They can also pattern the objects with a variety of useful materials, including metals, quantum dots, and DNA.

“It’s a way of putting nearly any kind of material into a 3-D pattern with nanoscale precision,” says Edward Boyden, an associate professor of biological engineering and of brain and cognitive sciences at MIT.

Using the new technique, the researchers can create any shape and structure they want by patterning a scaffold with a laser. After attaching other useful materials to the scaffold, they shrink it, generating structures one-thousandth the volume of the original.

These tiny structures could have applications in many fields, from optics to medicine to robotics, the researchers say. The technique uses equipment that many biology and materials science labs already have, making it widely accessible for researchers who want to try it.

Boyden, who is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, is one of the senior authors of the paper, which appears in the Dec. 13 issue of Science. The other senior author is Adam Marblestone, a Media Lab research affiliate, and the paper’s lead authors are graduate students Daniel Oran and Samuel Rodriques.

Implosion fabrication

Existing techniques for creating nanostructures are limited in what they can accomplish. Etching patterns onto a surface with light can produce 2-D nanostructures but doesn’t work for 3-D structures. It is possible to make 3-D nanostructures by gradually adding layers on top of each other, but this process is slow and challenging. And, while methods exist that can directly 3-D print nanoscale objects, they are restricted to specialized materials like polymers and plastics, which lack the functional properties necessary for many applications. Furthermore, they can only generate self-supporting structures. (The technique can yield a solid pyramid, for example, but not a linked chain or a hollow sphere.)

To overcome these limitations, Boyden and his students decided to adapt a technique that his lab developed a few years ago for high-resolution imaging of brain tissue. This technique, known as expansion microscopy, involves embedding tissue into a hydrogel and then expanding it, allowing for high resolution imaging with a regular microscope. Hundreds of research groups in biology and medicine are now using expansion microscopy, since it enables 3-D visualization of cells and tissues with ordinary hardware.

By reversing this process, the researchers found that they could create large-scale objects embedded in expanded hydrogels and then shrink them to the nanoscale, an approach that they call “implosion fabrication.”

As they did for expansion microscopy, the researchers used a very absorbent material made of polyacrylate, commonly found in diapers, as the scaffold for their nanofabrication process. The scaffold is bathed in a solution that contains molecules of fluorescein, which attach to the scaffold when they are activated by laser light.

Using two-photon microscopy, which allows for precise targeting of points deep within a structure, the researchers attach fluorescein molecules to specific locations within the gel. The fluorescein molecules act as anchors that can bind to other types of molecules that the researchers add.

“You attach the anchors where you want with light, and later you can attach whatever you want to the anchors,” Boyden says. “It could be a quantum dot, it could be a piece of DNA, it could be a gold nanoparticle.”

“It’s a bit like film photography—a latent image is formed by exposing a sensitive material in a gel to light. Then, you can develop that latent image into a real image by attaching another material, silver, afterwards. In this way implosion fabrication can create all sorts of structures, including gradients, unconnected structures, and multimaterial patterns,” Oran says.

Once the desired molecules are attached in the right locations, the researchers shrink the entire structure by adding an acid. The acid blocks the negative charges in the polyacrylate gel so that they no longer repel each other, causing the gel to contract. Using this technique, the researchers can shrink the objects 10-fold in each dimension (for an overall 1,000-fold reduction in volume). This ability to shrink not only allows for increased resolution, but also makes it possible to assemble materials in a low-density scaffold. This enables easy access for modification, and later the material becomes a dense solid when it is shrunk.
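The arithmetic in the paragraph above is easy to verify: a 10-fold shrink along each axis compounds across all three dimensions. A minimal sketch in Python:

```python
# An isotropic shrink by a factor s along each of the three axes
# reduces volume by s cubed.
s = 10
volume_reduction = s ** 3
print(volume_reduction)  # 1000
```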

“People have been trying to invent better equipment to make smaller nanomaterials for years, but we realized that if you just use existing systems and embed your materials in this gel, you can shrink them down to the nanoscale, without distorting the patterns,” Rodriques says.

Currently, the researchers can create objects that are around 1 cubic millimeter, patterned with a resolution of 50 nanometers. There is a tradeoff between size and resolution: If the researchers want to make larger objects, about 1 cubic centimeter, they can achieve a resolution of about 500 nanometers. However, that resolution could be improved with further refinement of the process, the researchers say.
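The tradeoff described above is consistent with feature size scaling linearly with object size (50 nanometers at 1 cubic millimeter, 500 nanometers at 1 cubic centimeter). A rough sketch of that relationship, assuming, purely for illustration, that linear scaling holds between the two reported data points:

```python
def approx_feature_size_nm(side_mm):
    # Assumption: resolution scales linearly with object size,
    # inferred from the two points reported by the researchers
    # (1 mm side -> 50 nm, 10 mm side -> 500 nm).
    return 50.0 * side_mm

print(approx_feature_size_nm(1))   # 50.0  (nm, for a ~1 cubic mm object)
print(approx_feature_size_nm(10))  # 500.0 (nm, for a ~1 cubic cm object)
```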

Read more at: https://phys.org/news/2018-12-team-method-nanoscale.html#jCp

Source: Team invents method to shrink objects to the nanoscale

How to Stop Windows 10 From Collecting Activity Data on You – after disabling activity tracking option

Another day, another tech company being disingenuous about its privacy practices. This time it’s Microsoft, after it was discovered that Windows 10 continues to track users’ activity even after they’ve disabled the activity-tracking option in their Windows 10 settings.

You can try it yourself. Pull up Windows 10’s Settings, go to the Privacy section, and disable everything in your Activity History. Give it a few days. Visit the Windows Privacy Dashboard online, and you’ll find that some applications, media, and even browsing history still show up.

[Screenshot: Application data found on the Windows Privacy Dashboard website. Brendan Hesse]

Sure, this data can be manually deleted, but the fact that it’s being tracked at all is not a good look for Microsoft, and plenty of users have expressed their frustration online since the oversight was discovered. Luckily, Reddit user a_potato_is_missing found a workaround that blocks Windows and the Windows Store from tracking your PC activity, which comes from a tutorial originally posted by Tenforums user Shawn Brink.

We gave Brink’s strategy a shot and found it to be an effective workaround worth sharing for those who want to limit Microsoft’s activity-tracking for good. It’s a simple process that only requires you to download and open some files, but we’ll guide you through the steps since there are a few caveats you’ll want to know.

How to disable the activity tracker in Windows 10

Brink’s method works by editing values in your Windows Registry to block the Activity Tracker (via a .REG file). For transparency, here are the changes the file makes:

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\System

PublishUserActivities DWORD

0 = Disable
1 = Enable

These changes only apply to Activity Tracking and shouldn’t affect your operating system in any other way. Still, if something does go wrong, you can reverse this process, which is explained in step 7. To get started with Brink’s alterations:

  1. Download the “Disable_Activity_history.reg” file from Brink’s tutorial to any folder you want.
  2. Double-click on the .REG file to open it, and then click “Run” to begin applying the changes to your registry.
  3. You will get the usual Windows UAC notification asking to allow the file to make changes to your computer. Click “Yes.”
  4. A warning box will pop up alerting you that making changes to your registry can result in applications and features not working, or cause system errors—all of which is true, but we haven’t run into any issues from applying this fix. If you’re cool with that, click “Yes” to apply the changes. The process should happen immediately, after which you’ll get one final dialog box informing you of the information added to the registry. Click “OK” to close the file and wrap up the registry change.
  5. After the registry edit is complete, you’ll need to sign out of Windows (press Windows Key+X, then choose Shut down or sign out > Sign out) and sign back in to apply the registry changes.
  6. When you sign back in, your activity will no longer be tracked by Windows, even the stuff that was slipping through before.
  7. To reverse the registry changes and re-enable the Activity Tracker, download the “Enable_Activity_history.reg” file also found on the Tenforums tutorial, then follow the same steps above.

Update 12/13/2018 at 12:30pm PT: Microsoft has released a statement to Neowin about the aforementioned “Activity History.” Here’s the statement from Windows & devices group privacy officer Marisa Rogers:

“Microsoft is committed to customer privacy, being transparent about the data we collect and use for your benefit, and we give you controls to manage your data. In this case, the same term ‘Activity History’ is used in both Windows 10 and the Microsoft Privacy Dashboard. Windows 10 Activity History data is only a subset of the data displayed in the Microsoft Privacy Dashboard. We are working to address this naming issue in a future update.”

As Neowin notes, Microsoft says there are two settings you should look into if you want to keep your PC from uploading your activity data:

“One is to go to Settings -> Privacy -> Activity history, and make sure that ‘Let Windows sync my activities from this PC to the cloud’ is unchecked. Also, you can go to Settings -> Privacy -> Diagnostics & feedback, and make sure that it’s set to basic.”

Source: How to Stop Windows 10 From Collecting Activity Data on You

Virgin Galactic flight sends first astronauts to edge of space – successfully. Are you looking, Elon?

Virgin Galactic completed its longest rocket-powered flight ever on Thursday, taking a step forward in the nascent business of space tourism.

The two pilots on board Virgin Galactic’s spacecraft Unity became the company’s first astronauts. Virgin Group founder Richard Branson was on hand to watch the historic moment.

“Many of you will know how important the dream of space travel is to me personally. Ever since I watched the moon landings as a child I have looked up to the skies with wonder,” Branson said after the flight. “This is a momentous day and I could not be more proud of our teams who together have opened a new chapter of space exploration.”

Virgin Galactic said the test flight reached an altitude of 51.4 miles, or nearly 83 kilometers. The U.S. military and NASA consider pilots who have flown above 80 kilometers to be astronauts. The Federal Aviation Administration announced on Thursday that pilots Mark Stucky and C.J. Sturckow would receive commercial astronaut wings at a ceremony in Washington, D.C., early next year.

Lifted by the jet-powered mothership Eve, the spacecraft Unity took off from the Mojave Air and Space Port in the California desert. Upon reaching an altitude above 40,000 feet, the carrier aircraft released Unity. The two-member crew then piloted the spacecraft in a roaring burn which lasted 60 seconds. The flight pushed Unity to a speed of Mach 2.9, nearly three times the speed of sound, as it screamed into a climb toward the edge of space.

After performing a slow backflip in microgravity, Unity turned and glided back to land at Mojave. This was the company’s fourth rocket-powered flight of its test program.

Unity is the name of the spacecraft built by The Spaceship Company, which Branson also owns. This rocket design is officially known as SpaceShipTwo (SS2).

Unity also carried four NASA-funded payloads on this mission. The agency said the four technology experiments “will collect valuable data needed to mature the technologies for use on future missions.”

“Inexpensive access to suborbital space greatly benefits the technology research and broader spaceflight communities,” said Ryan Dibley, NASA’s flight opportunities campaign manager, in a statement.

The spacecraft underwent extensive engine testing and seven glide tests before Virgin Galactic said it was ready for a powered test flight — a crucial milestone before the company begins sending tourists to the edge of the atmosphere. Each of the previous three test flights was successful in pushing the spacecraft’s limits farther.

Source: Virgin Galactic flight sends first astronauts to edge of space

Yes, it can be done without rockets exploding all over the place or going the wrong direction. Well done, this is how commercial space flight should look.

Taylor Swift Show Used to Stalk Visitors with Hidden Face Recognition in Kiosk Displays

At a Taylor Swift concert earlier this year, fans were reportedly treated to something they might not expect: a kiosk displaying clips of the pop star that served as a covert surveillance system. It’s a tale of creeping 21st-century surveillance as unnerving as it is predictable. But the whole ordeal has left us wondering what the hell is going on.

As Rolling Stone first reported, the kiosk was allegedly taking photos of concertgoers and running them through a facial recognition database in an effort to identify any of Swift’s stalkers. But the dragnet effort reportedly involved snapping photos of anyone who stared into the kiosk’s watchful abyss.

“Everybody who went by would stop and stare at it, and the software would start working,” Mike Downing, chief security officer at live entertainment company Oak View Group and its subsidiary Prevent Advisors, told Rolling Stone. Downing was at Swift’s concert, which took place at the Rose Bowl in Los Angeles in May, to check out a demo of the system. According to Downing, the photos taken by the camera inside of the kiosk were sent to a “command post” in Nashville. There, the images were scanned against images of hundreds of Swift’s known stalkers, Rolling Stone reports.

The Rolling Stone report has taken off in the past day, with Quartz, Vanity Fair, the Hill, the Verge, Business Insider, and others picking up the story. But the only real information we have is from Downing. And so far no one has answered some key questions—including the Oak View Group and Prevent Advisors, which have not responded to multiple requests for comment.

For starters, who is running this face recognition system? Was Taylor Swift or her people informed this reported measure would be in place? Were concertgoers informed that their photos were being taken and sent to a facial recognition database in another state? Were the photos stored, and if so, where and for how long? There were reportedly more than 60,000 people at the Rose Bowl concert—how many of those people had their mug snapped by the alleged spybooth? Did the system identify any Swift stalkers—and, if they did, what happened to those people?

It also remains to be seen whether there was any indication on the kiosk that it was snapping fans’ faces. But as Quartz pointed out, “concert venues are typically private locations, meaning even after security checkpoints, its owners can subject concert-goers to any kind of surveillance they want, including facial recognition.”

Source: Taylor Swift Show Used to Demo Face Recognition: Report

Very very creepy

Scientists identify vast underground ecosystem containing billions of micro-organisms

The Earth is far more alive than previously thought, according to “deep life” studies that reveal a rich ecosystem beneath our feet that is almost twice the size of all the world’s oceans.

Despite extreme heat, no light, minuscule nutrition and intense pressure, scientists estimate this subterranean biosphere is teeming with between 15bn and 23bn tonnes of micro-organisms, hundreds of times the combined weight of every human on the planet.

Researchers at the Deep Carbon Observatory say the diversity of underworld species bears comparison to the Amazon or the Galápagos Islands, but unlike those places the environment is still largely pristine because people have yet to probe most of the subsurface.

“It’s like finding a whole new reservoir of life on Earth,” said Karen Lloyd, an associate professor at the University of Tennessee in Knoxville. “We are discovering new types of life all the time. So much of life is within the Earth rather than on top of it.”

The team combines 1,200 scientists from 52 countries in disciplines ranging from geology and microbiology to chemistry and physics. A year before the conclusion of their 10-year study, they will present an amalgamation of findings to date before the American Geophysical Union’s annual meeting opens this week.

Samples were taken from boreholes more than 5km deep and undersea drilling sites to construct models of the ecosystem and estimate how much living carbon it might contain.

The results suggest 70% of Earth’s bacteria and archaea exist in the subsurface, including barbed Altiarchaeales that live in sulphuric springs and Geogemma barossii, a single-celled organism found at 121°C hydrothermal vents at the bottom of the sea.

One organism found 2.5km below the surface has been buried for millions of years and may not rely at all on energy from the sun. Instead, the methanogen has found a way to create methane in this low energy environment, which it may not use to reproduce or divide, but to replace or repair broken parts.

Lloyd said: “The strangest thing for me is that some organisms can exist for millennia. They are metabolically active but in stasis, with less energy than we thought possible of supporting life.”

Rick Colwell, a microbial ecologist at Oregon State University, said the timescales of subterranean life were completely different. Some microorganisms have been alive for thousands of years, barely moving except with shifts in the tectonic plates, earthquakes or eruptions.

Source: Scientists identify vast underground ecosystem containing billions of micro-organisms | Science | The Guardian

It’s Time to Check Which Apps Are Tracking Your Location

Guess what? A bunch of apps like knowing where you are, so that their developers can then take that data, package it up for various advertising companies, and make a quick buck off of your precise whereabouts—including where you go and how long you spend there.

Source: It’s Time to Check Which Apps Are Tracking Your Location

Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret

The millions of dots on the map trace highways, side streets and bike trails — each one following the path of an anonymous cellphone user.

One path tracks someone from a home outside Newark to a nearby Planned Parenthood, remaining there for more than an hour. Another represents a person who travels with the mayor of New York during the day and returns to Long Island at night.

Yet another leaves a house in upstate New York at 7 a.m. and travels to a middle school 14 miles away, staying until late afternoon each school day. Only one person makes that trip: Lisa Magrin, a 46-year-old math teacher. Her smartphone goes with her.

An app on the device gathered her location information, which was then sold without her knowledge. It recorded her whereabouts as often as every two seconds, according to a database of more than a million phones in the New York area that was reviewed by The New York Times. While Ms. Magrin’s identity was not disclosed in those records, The Times was able to easily connect her to that dot.

The app tracked her as she went to a Weight Watchers meeting and to her dermatologist’s office for a minor procedure. It followed her hiking with her dog and staying at her ex-boyfriend’s home, information she found disturbing.

“It’s the thought of people finding out those intimate details that you don’t want people to know,” said Ms. Magrin, who allowed The Times to review her location data.

Like many consumers, Ms. Magrin knew that apps could track people’s movements. But as smartphones have become ubiquitous and technology more accurate, an industry of snooping on people’s daily habits has spread and grown more intrusive.

[Interactive map: Lisa Magrin is the only person who travels regularly from her home to the school where she works; her location was recorded more than 800 times there, often in her classroom. A visit to a doctor’s office is also included. The data is so specific that The Times could determine how long she was there. Her location data shows other often-visited locations, including the gym and Weight Watchers. In about four months of data reviewed by The Times, her location was recorded over 8,600 times — on average, once every 21 minutes. By Michael H. Keller and Richard Harris | Satellite imagery by Mapbox and DigitalGlobe]
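The sampling rate quoted above roughly checks out, assuming “about four months” means around 120 days:

```python
# "Recorded over 8,600 times in about four months" vs.
# "on average, once every 21 minutes".
minutes_in_four_months = 4 * 30 * 24 * 60   # ~120 days, an assumption
fixes = 8600
interval = minutes_in_four_months / fixes
print(round(interval))  # 20 -- close to the reported ~21 minutes
```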

At least 75 companies receive anonymous, precise location data from apps whose users enable location services to get local news and weather or other information, The Times found. Several of those businesses claim to track up to 200 million mobile devices in the United States — about half those in use last year. The database reviewed by The Times — a sample of information gathered in 2017 and held by one company — reveals people’s travels in startling detail, accurate to within a few yards and in some cases updated more than 14,000 times a day.

[Learn how to stop apps from tracking your location.]

These companies sell, use or analyze the data to cater to advertisers, retail outlets and even hedge funds seeking insights into consumer behavior. It’s a hot market, with sales of location-targeted advertising reaching an estimated $21 billion this year. IBM has gotten into the industry, with its purchase of the Weather Channel’s apps. The social network Foursquare remade itself as a location marketing company. Prominent investors in location start-ups include Goldman Sachs and Peter Thiel, the PayPal co-founder.

Businesses say their interest is in the patterns, not the identities, that the data reveals about consumers. They note that the information apps collect is tied not to someone’s name or phone number but to a unique ID. But those with access to the raw data — including employees or clients — could still identify a person without consent. They could follow someone they knew, by pinpointing a phone that regularly spent time at that person’s home address. Or, working in reverse, they could attach a name to an anonymous dot, by seeing where the device spent nights and using public records to figure out who lived there.

Many location companies say that when phone users enable location services, their data is fair game. But, The Times found, the explanations people see when prompted to give permission are often incomplete or misleading. An app may tell users that granting access to their location will help them get traffic information, but not mention that the data will be shared and sold. That disclosure is often buried in a vague privacy policy.

“Location information can reveal some of the most intimate details of a person’s life — whether you’ve visited a psychiatrist, whether you went to an A.A. meeting, who you might date,” said Senator Ron Wyden, Democrat of Oregon, who has proposed bills to limit the collection and sale of such data, which are largely unregulated in the United States.

“It’s not right to have consumers kept in the dark about how their data is sold and shared and then leave them unable to do anything about it,” he added.

Mobile Surveillance Devices

After Elise Lee, a nurse in Manhattan, saw that her device had been tracked to the main operating room at the hospital where she works, she expressed concern about her privacy and that of her patients.

“It’s very scary,” said Ms. Lee, who allowed The Times to examine her location history in the data set it reviewed. “It feels like someone is following me, personally.”

The mobile location industry began as a way to customize apps and target ads for nearby businesses, but it has morphed into a data collection and analysis machine.

Retailers look to tracking companies to tell them about their own customers and their competitors’. For a web seminar last year, Elina Greenstein, an executive at the location company GroundTruth, mapped out the path of a hypothetical consumer from home to work to show potential clients how tracking could reveal a person’s preferences. For example, someone may search online for healthy recipes, but GroundTruth can see that the person often eats at fast-food restaurants.

“We look to understand who a person is, based on where they’ve been and where they’re going, in order to influence what they’re going to do next,” Ms. Greenstein said.

Financial firms can use the information to make investment decisions before a company reports earnings — seeing, for example, if more people are working on a factory floor, or going to a retailer’s stores.

[Interactive map: Planned Parenthood. A device arrives at approximately 12:45 p.m., entering the clinic from the western entrance. It stays for two hours, then returns to a home. By Michael H. Keller | Imagery by Google Earth]

Health care facilities are among the more enticing but troubling areas for tracking, as Ms. Lee’s reaction demonstrated. Tell All Digital, a Long Island advertising firm that is a client of a location company, says it runs ad campaigns for personal injury lawyers targeting people anonymously in emergency rooms.

“The book ‘1984,’ we’re kind of living it in a lot of ways,” said Bill Kakis, a managing partner at Tell All.

Jails, schools, a military base and a nuclear power plant — even crime scenes — appeared in the data set The Times reviewed. One person, perhaps a detective, arrived at the site of a late-night homicide in Manhattan, then spent time at a nearby hospital, returning repeatedly to the local police station.

Two location firms, Fysical and SafeGraph, mapped people attending the 2017 presidential inauguration. On Fysical’s map, a bright red box near the Capitol steps indicated the general location of President Trump and those around him, cellphones pinging away. Fysical’s chief executive said in an email that the data it used was anonymous. SafeGraph did not respond to requests for comment.

[Interactive map: Data reviewed by The Times includes dozens of schools. Here a device, most likely a child’s, is tracked from a home to school. The device spends time at the playground before entering the school just before 8 a.m., where it remains until 3 p.m. More than 40 other devices appear in the school during the day; many are traceable to nearby homes. By Michael H. Keller | Imagery by Google Earth]

More than 1,000 popular apps contain location-sharing code from such companies, according to 2018 data from MightySignal, a mobile analysis firm. Google’s Android system was found to have about 1,200 apps with such code, compared with about 200 on Apple’s iOS.

The most prolific company was Reveal Mobile, based in North Carolina, which had location-gathering code in more than 500 apps, including many that provide local news. A Reveal spokesman said that the popularity of its code showed that it helped app developers make ad money and consumers get free services.

To evaluate location-sharing practices, The Times tested 20 apps, most of which had been flagged by researchers and industry insiders as potentially sharing the data. Together, 17 of the apps sent exact latitude and longitude to about 70 businesses. Precise location data from one app, WeatherBug on iOS, was received by 40 companies. When contacted by The Times, some of the companies that received that data described it as “unsolicited” or “inappropriate.”

WeatherBug, owned by GroundTruth, asks users’ permission to collect their location and tells them the information will be used to personalize ads. GroundTruth said that it typically sent the data to ad companies it worked with, but that if they didn’t want the information they could ask to stop receiving it.

[Interactive map: Gracie Mansion. Records show a device entering Gracie Mansion, the mayor’s residence, before traveling to a Y.M.C.A. in Brooklyn that the mayor frequents. It travels to an event on Staten Island that the mayor attended. Later, it returns to a home on Long Island. By Michael H. Keller | Satellite imagery by Mapbox and DigitalGlobe]

The Times also identified more than 25 other companies that have said in marketing materials or interviews that they sell location data or services, including targeted advertising.

[Read more about how The Times analyzed location tracking companies.]

The spread of this information raises questions about how securely it is handled and whether it is vulnerable to hacking, said Serge Egelman, a computer security and privacy researcher affiliated with the University of California, Berkeley.

“There are really no consequences” for companies that don’t protect the data, he said, “other than bad press that gets forgotten about.”

A Question of Awareness

Companies that use location data say that people agree to share their information in exchange for customized services, rewards and discounts. Ms. Magrin, the teacher, noted that she liked that tracking technology let her record her jogging routes.

Brian Wong, chief executive of Kiip, a mobile ad firm that has also sold anonymous data from some of the apps it works with, says users give apps permission to use and share their data. “You are receiving these services for free because advertisers are helping monetize and pay for it,” he said, adding, “You would have to be pretty oblivious if you are not aware that this is going on.”

But Ms. Lee, the nurse, had a different view. “I guess that’s what they have to tell themselves,” she said of the companies. “But come on.”

Ms. Lee had given apps on her iPhone access to her location only for certain purposes — helping her find parking spaces, sending her weather alerts — and only if they did not indicate that the information would be used for anything else, she said. Ms. Magrin had allowed about a dozen apps on her Android phone access to her whereabouts for services like traffic notifications.

[Photo: An app on Lisa Magrin’s cellphone collected her location information, which was then shared with other companies. The data revealed her daily habits, including hikes with her dog, Lulu. Nathaniel Brooks for The New York Times]

But it is easy to share information without realizing it. Of the 17 apps that The Times saw sending precise location data, just three on iOS and one on Android told users in a prompt during the permission process that the information could be used for advertising. Only one app, GasBuddy, which identifies nearby gas stations, indicated that data could also be shared to “analyze industry trends.”

More typical was theScore, a sports app: When prompting users to grant access to their location, it said the data would help “recommend local teams and players that are relevant to you.” The app passed precise coordinates to 16 advertising and location companies.

A spokesman for theScore said that the language in the prompt was intended only as a “quick introduction to certain key product features” and that the full uses of the data were described in the app’s privacy policy.

The Weather Channel app, owned by an IBM subsidiary, told users that sharing their locations would let them get personalized local weather reports. IBM said the subsidiary, the Weather Company, discussed other uses in its privacy policy and in a separate “privacy settings” section of the app. Information on advertising was included there, but a part of the app called “location settings” made no mention of it.

A notice that Android users saw when theScore, a sports app, asked for access to their location data.

The Weather Channel app showed iPhone users this message when it first asked for their location data.

The app did not explicitly disclose that the company had also analyzed the data for hedge funds — a pilot program that was promoted on the company’s website. An IBM spokesman said the pilot had ended. (IBM updated the app’s privacy policy on Dec. 5, after queries from The Times, to say that it might share aggregated location data for commercial purposes such as analyzing foot traffic.)

Even industry insiders acknowledge that many people either don’t read those policies or may not fully understand their opaque language. Policies for apps that funnel location information to help investment firms, for instance, have said the data is used for market analysis, or simply shared for business purposes.

“Most people don’t know what’s going on,” said Emmett Kilduff, the chief executive of Eagle Alpha, which sells data to financial firms and hedge funds. Mr. Kilduff said responsibility for complying with data-gathering regulations fell to the companies that collected it from people.

Many location companies say they voluntarily take steps to protect users’ privacy, but policies vary widely.

For example, Sense360, which focuses on the restaurant industry, says it scrambles data within a 1,000-foot square around the device’s approximate home location. Another company, Factual, says that it collects data from consumers at home, but that its database doesn’t contain their addresses.
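Sense360's exact method isn't public, but grid-based scrambling of the kind it describes is easy to sketch. The snippet below is a hypothetical illustration (the function name and constants are mine, not the company's): it snaps any coordinate to the corner of a roughly 1,000-foot square cell, so every device inside the cell reports the same point.

```python
import math

def snap_to_grid(lat, lon, cell_ft=1000.0):
    """Snap a coordinate to the corner of a square grid cell, so every
    point inside the cell reports the same location."""
    cell_m = cell_ft * 0.3048                    # feet -> meters
    lat_step = cell_m / 111_320.0                # meters per degree of latitude
    snapped_lat = math.floor(lat / lat_step) * lat_step
    # use the cell's latitude so nearby points share one longitude step
    lon_step = cell_m / (111_320.0 * math.cos(math.radians(snapped_lat)))
    snapped_lon = math.floor(lon / lon_step) * lon_step
    return (round(snapped_lat, 6), round(snapped_lon, 6))

# A point in midtown Manhattan gets nudged to its cell corner:
print(snap_to_grid(40.7580, -73.9855))
```

The snapped point is never more than one cell step away from the true location, which is what makes the scrambling useful for neighborhood-level analytics while hiding the exact address.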

Nuclear plant

In the data set reviewed by The Times, phone locations are recorded in sensitive areas including the Indian Point nuclear plant near New York City. By Michael H. Keller | Satellite imagery by Mapbox and DigitalGlobe
Megachurch

The information from one Sunday included more than 800 data points from over 60 unique devices inside and around a church in New Jersey. By Michael H. Keller | Satellite imagery by Mapbox and DigitalGlobe

Some companies say they delete the location data after using it to serve ads, some use it for ads and pass it along to data aggregation companies, and others keep the information for years.

Several people in the location business said that it would be relatively simple to figure out individual identities in this kind of data, but that they didn’t do it. Others suggested it would require so much effort that hackers wouldn’t bother.

It “would take an enormous amount of resources,” said Bill Daddi, a spokesman for Cuebiq, which analyzes anonymous location data to help retailers and others, and raised more than $27 million this year from investors including Goldman Sachs and Nasdaq Ventures. Nevertheless, Cuebiq encrypts its information, logs employee queries and sells aggregated analysis, he said.

There is no federal law limiting the collection or use of such data. Still, apps that ask for access to users’ locations, prompting them for permission while leaving out important details about how the data will be used, may run afoul of federal rules on deceptive business practices, said Maneesha Mithal, a privacy official at the Federal Trade Commission.

“You can’t cure a misleading just-in-time disclosure with information in a privacy policy,” Ms. Mithal said.

Following the Money

Apps form the backbone of this new location data economy.

The app developers can make money by directly selling their data, or by sharing it for location-based ads, which command a premium. Location data companies pay half a cent to two cents per user per month, according to offer letters to app makers reviewed by The Times.
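At the rates quoted in those offer letters, the money per app adds up quickly. A quick back-of-the-envelope check, using a made-up audience size:

```python
# What the quoted rates imply for an app's monthly data revenue.
users = 1_000_000            # hypothetical app audience
low, high = 0.005, 0.02      # dollars per user per month, from the offer letters
print(f"${users * low:,.0f} to ${users * high:,.0f} per month")
# -> $5,000 to $20,000 per month
```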

Targeted advertising is by far the most common use of the information.

Google and Facebook, which dominate the mobile ad market, also lead in location-based advertising. Both companies collect the data from their own apps. They say they don’t sell it but keep it for themselves to personalize their services, sell targeted ads across the internet and track whether the ads lead to sales at brick-and-mortar stores. Google, which also receives precise location information from apps that use its ad services, said it modified that data to make it less exact.

Smaller companies compete for the rest of the market, including by selling data and analysis to financial institutions. This segment of the industry is small but growing, expected to reach about $250 million a year by 2020, according to the market research firm Opimas.

Apple and Google have a financial interest in keeping developers happy, but both have taken steps to limit location data collection. In the most recent version of Android, apps that are not in use can collect locations “a few times an hour,” instead of continuously.

Apple has been stricter, for example requiring apps to justify collecting location details in pop-up messages. But Apple’s instructions for writing these pop-ups do not mention advertising or data sale, only features like getting “estimated travel times.”

A spokesman said the company mandates that developers use the data only to provide a service directly relevant to the app, or to serve advertising that meets Apple’s guidelines.

Apple recently shelved plans that industry insiders say would have significantly curtailed location collection. Last year, the company said an upcoming version of iOS would show a blue bar onscreen whenever an app not in use was gaining access to location data.

The discussion served as a “warning shot” to people in the location industry, David Shim, chief executive of the location company Placed, said at an industry event last year.

After examining maps showing the locations extracted by their apps, Ms. Lee, the nurse, and Ms. Magrin, the teacher, immediately limited what data those apps could get. Ms. Lee said she told the other operating-room nurses to do the same.

“I went through all their phones and just told them: ‘You have to turn this off. You have to delete this,’” Ms. Lee said. “Nobody knew.”

Source: Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret – The New York Times

Outdoor Ad Impact Forecaster (Dutch)

The impact of an outdoor advertising campaign is determined 40% by the creative, 30% by the brand, and 30% by the media pressure. The Outdoor Ad Impact Forecaster analyzes a campaign in advance on these three characteristics and delivers, within 24 to 48 hours, a report predicting the impact of an Out-of-Home campaign.

This prediction is based on more than 300 effectiveness studies conducted by MeMo², whose data has been processed into a prediction tool using machine learning. The campaign’s impact is presented as a star rating, after which the Forecaster gives concrete advice on changes that would make the campaign more impactful. This is supplemented with professional advice from both MeMo²’s researchers and Exterion Media’s specialists. Together this forms a complete report for an even more effective outdoor advertising campaign.

Source: Outdoor Ad Impact Forecaster | Voorspel de impact van uw Out-of-Home campagne! – Exterion Media

Lenovo tells Asia-Pacific staff: Work lappy with your unencrypted data on it has been nicked

A corporate-issued laptop lifted from a Lenovo employee in Singapore contained a cornucopia of unencrypted payroll data on staff based in the Asia Pacific region, The Register can exclusively reveal.

Details of the massive screw-up reached us from Lenovo staffers, who are simply bewildered at the monumental mistake. Lenovo has sent letters of shame to its employees confessing the security snafu.

“We are writing to notify you that Lenovo has learned that one of our Singapore employees recently had the work laptop stolen on 10 September 2018,” the letter from Lenovo HR and IT Security, dated 21 November, stated.

“Unfortunately, this laptop contained payroll information, including employee name, monthly salary amounts and bank account numbers for Asia Pacific employees and was not encrypted.”

Lenovo employs more than 54,000 staff worldwide (PDF), the bulk of whom are in China.

The letter stated there is currently “no indication” that the sensitive employee data has been “used or compromised”, and Lenovo said it is working with local police to “recover the stolen device”.

In a nod to concerns that will have arisen from this lapse in security, Lenovo is “reviewing the work practices and control in this location to ensure similar incidents do not occur”.

On hand with more wonderfully practical advice, after the stable doors were left swinging open, Lenovo told staff: “As a precaution, we recommend that all employees monitor bank accounts for any unusual activities. Be especially vigilant for possible phishing attacks and be sure to notify your financial institution right away if you notice any unusual transactions.”

The letter concluded on a high note. “Lenovo takes the security of employee information very seriously. And while there is no indication any data has been compromised, please let us know if you have any questions.”

The staff likely do. One told us the incident was “extremely concerning” but “somehow not surprising in any way. How on Earth did they let this data exist on a laptop that was not encrypted?”

Source: Lenovo tells Asia-Pacific staff: Work lappy with your unencrypted data on it has been nicked • The Register

Equifax how-it-was-mega-hacked damning dossier lands, in all of its infuriating glory

A US Congressional report outlining the breakdowns that led to the 2017 theft of 148 million personal records from Equifax has revealed a stunning catalog of failure.

The 96-page report (PDF) from the Committee of Oversight and Government Reform found that the 2017 network breach could have easily been prevented had the company taken basic security precautions.

“Equifax, however, failed to implement an adequate security program to protect this sensitive data,” the report reads.

“As a result, Equifax allowed one of the largest data breaches in US history. Such a breach was entirely preventable.”

The report noted some of the previously-disclosed details of the hack, including the expired SSL certificate that had disabled its intrusion detection system for 19 months and the Apache Struts patch that went uninstalled for two months because of that bad cert.

The report states that Equifax’s IT team did scan for unpatched Apache Struts code on its network, but it only checked the root directory, not the subdirectory that was home to the unpatched software.
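That failure mode is easy to reproduce. The sketch below is purely illustrative (the directory layout, file names, and version string are invented): a scan that only lists the top-level directory never sees a vulnerable Struts jar nested several levels deep, while a recursive walk finds it.

```python
import os
import pathlib
import tempfile

def scan_top_level(root):
    """Non-recursive scan: only sees entries directly under `root`."""
    return [name for name in os.listdir(root) if "struts" in name.lower()]

def scan_recursive(root):
    """Recursive scan: walks every subdirectory under `root`."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        hits += [os.path.join(dirpath, n) for n in files if "struts" in n.lower()]
    return hits

# Demo: a jar buried several levels down is invisible to the shallow scan.
root = tempfile.mkdtemp()
lib = pathlib.Path(root, "webapps", "acis", "WEB-INF", "lib")
lib.mkdir(parents=True)
(lib / "struts2-core-2.3.5.jar").touch()

print(scan_top_level(root))   # finds nothing
print(scan_recursive(root))   # finds the buried jar
```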

Both issues were blamed for allowing an attacker to compromise the Equifax Automated Consumer Interview System and then spend weeks moving throughout the network to harvest personal records from other databases. It was only when the certificate was renewed that Equifax saw the massive amounts of data being copied from its servers and realized something was very wrong.

While those two specific issues were pinpointed as the source of the attack, the report finds that the intrusion was allowed to happen because the IT operation at Equifax had grown far too large far too fast, without a clear management structure or coherent policies across various departments.

Lousy IT security by design

“In 2005, former Equifax CEO Richard Smith embarked on an aggressive growth strategy, leading to the acquisition of multiple companies, IT systems, and data. While the acquisition strategy was successful for Equifax’s bottom line and stock price, this growth brought increasing complexity to Equifax’s IT systems, and expanded data security risks,” the committee found.

“In August 2017, three weeks before Equifax publicly announced the breach, Smith boasted Equifax was managing ‘almost 1,200 times’ the amount of data held in the Library of Congress every day.”

What’s more, the report notes that Equifax had been aware of these shortcomings for years: internal audits found problems in its software patching process as far back as 2015, and in both 2016 and 2017 MSCI Inc. rated Equifax’s network security a “zero out of ten.”

A 2015 audit found that ACIS, a Solaris environment that dated back to the 1970s, was not properly walled off from other databases, a fault that allowed the attackers to access dozens of systems they would not have otherwise been able to get to.

“Although the ACIS application required access to only three databases within the Equifax environment to perform its business function, the ACIS application was not segmented off from other, unrelated databases,” the report noted.

“As a result, the attackers used the application credentials to gain access to 48 unrelated databases outside of the ACIS environment.”
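Segmentation at the application layer can be as simple as an allowlist mapping each credential to the databases its business function actually needs. This is a minimal sketch of the idea, with hypothetical credential and database names:

```python
# Least-privilege allowlist: each application credential may only reach
# the databases its function requires (all names here are invented).
ALLOWED = {
    "acis-app": {"acis_main", "acis_archive", "acis_audit"},
}

def may_connect(credential, database):
    """Return True only if this credential is allowed to reach this database."""
    return database in ALLOWED.get(credential, set())

print(may_connect("acis-app", "acis_main"))    # True: required by ACIS
print(may_connect("acis-app", "hr_payroll"))   # False: unrelated database
```

With a check like this enforced at the network or connection-broker level, a stolen application credential stops at the three databases it needs instead of reaching 48 unrelated ones.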

After the pwning of its servers was revealed Equifax blamed its woes on an IT staffer who hadn’t installed the Apache patch, and fired the person. The report makes it clear that there were many more people involved in Equifax’s failings than this one scapegoat.

To help prevent similar attacks from occurring, the report recommends a number of additional requirements for credit reporting agencies to tell people what information is being gathered, how it is stored, and who it is shared with. The report also suggests moving away from social security numbers as personal identifiers and recommends that companies in the finance and credit sectors be pushed to modernize their IT structure. ®

Source: Equifax how-it-was-mega-hacked damning dossier lands, in all of its infuriating glory • The Register

US Border Agents Keep Personal Data of 29000 Travelers on USBs, fail to delete them.

Last year, U.S. Customs and Border Protection (CBP) searched through the electronic devices of more than 29,000 travelers coming into the country. CBP officers sometimes upload personal data from those devices to Homeland Security servers by first transferring that data onto USB drives—drives that are supposed to be deleted after every use. But a new government report found that the majority of officers fail to delete the personal data.

The Department of Homeland Security’s internal watchdog, known as the Office of the Inspector General (OIG), released a new report yesterday detailing CBP’s many failures at the border. The new report, which is redacted in some places, explains that Customs officials don’t even follow their own extremely liberal rules.

Customs officials can conduct two kinds of electronic device searches at the border for anyone entering the country. The first is called a “basic” or “manual” search and involves the officer visually going through your phone, your computer or your tablet without transferring any data. The second is called an “advanced search” and allows the officer to transfer data from your device to DHS servers for inspection by running that data through its own software. Both searches are legal and don’t require a warrant or even probable cause—at least they don’t according to DHS.

It’s that second kind of search, the “advanced” kind, where CBP has really been messing up and regularly leaving the personal data of travelers on USB drives.

According to the new report [PDF]:

[The Office of the Inspector General] physically inspected thumb drives at five ports of entry. At three of the five ports, we found thumb drives that contained information copied from past advanced searches, meaning the information had not been deleted after the searches were completed. Based on our physical inspection, as well as the lack of a written policy, it appears [Office of Field Operations] has not universally implemented the requirement to delete copied information, increasing the risk of unauthorized disclosure of travelers’ data should thumb drives be lost or stolen.

It’s bad enough that the government is copying your data as you enter the country. But it’s another thing entirely to know that your data could just be floating around on USB drives that, as the Inspector General’s office admits, could be easily lost or stolen.

The new report found plenty of other practices that are concerning. The report notes that Customs officers regularly failed to disconnect devices from the internet, potentially tainting any findings stored locally on the device. The report doesn’t call out the invasion of privacy that comes with officials looking through your internet-connected apps, but that’s a given.

The watchdog also discovered that Customs officials had “inadequate supervision” to make sure that they were following the rules, and noted that these “deficiencies in supervision, guidance, and equipment management” were making everyone less safe.

But one thing that makes it sometimes hard to read the report is the abundance of redactions. As you can see, the little black boxes have redacted everything from what happens during an advanced search after someone crosses the border to the reason officials are allowed to conduct an advanced search at all:

Screenshot: Department of Homeland Security/Office of the Inspector General

The report notes that an April 2015 memo spells out when an advanced search may be conducted. But, again, that’s been redacted in the report.

Screenshot: Department of Homeland Security/Office of the Inspector General

But the Department of Homeland Security’s own incompetence might be a saving grace for those concerned about digital privacy. The funniest detail in the new report? U.S. Customs and Border Protection forgot to renew its license for whatever top secret software it uses to conduct these advanced searches.

Screenshot: Department of Homeland Security/Office of the Inspector General

Curiously, the report claims that CBP “could not conduct advanced searches of laptop hard drives, USB drives, and multimedia cards at the ports of entry” from February 1, 2017 through September 12, 2017 because it failed to renew the software license. But if the capability really was down for more than seven months, one wonders what other “advanced search” methods were being used in the meantime.

Source: Watchdog: Border Agents Keep Personal Data of Travelers on USBs

Russian Mapping Service Accidentally Locates Secret Military Bases

A Russian online mapping company was trying to obscure foreign military bases. But in doing so, it accidentally confirmed their locations—many of which were secret.

Yandex Maps, Russia’s leading online map service, blurred out Turkish and Israeli military bases, and in doing so pinpointed their locations. The bases host sensitive surface-to-air missile sites and facilities housing nuclear weapons.

The Federation of American Scientists reports that Yandex Maps blurred out “over 300 distinct buildings, airfields, ports, bunkers, storage sites, bases, barracks, nuclear facilities, and random buildings” in the two countries. Some of these facilities were well known, but some of them were not. Not only has Yandex confirmed their locations, the scope of blurring reveals their exact size and shape.

Source: Mapping Service Accidentally Locates Secret Military Bases

Everyone’s revealing secret military bases!

UK Intelligence Agencies Are Planning a Major Increase in ‘Large-Scale Data Hacking’

Intelligence agencies in the UK are preparing to “significantly increase their use of large-scale data hacking,” the Guardian reported on Saturday, in a move that is already alarming privacy advocates.

According to the Guardian, UK intelligence officials plan to increase their use of the “bulk equipment interference (EI) regime”—the process by which the Government Communications Headquarters, the UK’s top signals intelligence and cybersecurity agency, collects bulk data off foreign communications networks—because they say targeted collection is no longer enough. The paper wrote:

A letter from the security minister, Ben Wallace, to the head of the intelligence and security committee, Dominic Grieve, quietly filed in the House of Commons library last week, states: “Following a review of current operational and technical realities, GCHQ have … determined that it will be necessary to conduct a higher proportion of ongoing overseas focused operational activity using the bulk EI regime than was originally envisaged.”

The paper noted that during the passage of the 2016 Investigatory Powers Act, which expanded hacking powers available to police and intelligence services including bulk data collection for the latter, independent terrorism legislation reviewer Lord David Anderson asserted that bulk powers are “likely to be only sparingly used.” As the Guardian noted, just two years later, UK intelligence officials are claiming this is no longer the case due to growing use of encryption:

… The intelligence services claim that the widespread use of encryption means that targeted hacking exercises are no longer effective and so more large-scale hacks are becoming necessary. Anderson’s review noted that the top 40 online activities relevant to MI5’s intelligence operations are now encrypted.

“The bulk equipment interference power permits the UK intelligence services to hack at scale by allowing a single warrant to cover entire classes of property, persons or conduct,” Scarlet Kim, a legal officer at UK civil liberties group Liberty International, told the paper. “It also gives nearly unfettered powers to the intelligence services to decide who and when to hack.”

Liberty also took issue with the intelligence agencies’ 180 on how often the bulk powers would be used, as well as with policies that only allow the investigatory powers commissioner to gauge the impact of a warrant after the hacking is over and done with.

“The fact that you have the review only after the privacy has been infringed upon demonstrates how worrying this situation is.”

Source: UK Intelligence Agencies Are Planning a Major Increase in ‘Large-Scale Data Hacking’

Millions of smartphones were taken offline by an expired certificate

Ericsson has confirmed that a fault with its software was the source of yesterday’s massive network outage, which took millions of smartphones offline across the UK and Japan and created issues in almost a dozen countries. In a statement, Ericsson said that the root cause was an expired certificate, and that “the faulty software that has caused these issues is being decommissioned.” The statement notes that network services were restored to most customers on Thursday, while UK operator O2 said that its 4G network was back up as of early Friday morning.
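Outages like this are why operators monitor certificate expiry. Python’s `ssl` module, for instance, reports a certificate’s expiry as a `notAfter` string in `getpeercert()`; the helper below (a generic monitoring sketch, not Ericsson’s tooling) turns that string into days remaining so an alert can fire well before the deadline.

```python
from datetime import datetime, timezone

# Format the ssl module uses for the "notAfter" field of getpeercert()
NOT_AFTER_FORMAT = "%b %d %H:%M:%S %Y %Z"

def days_remaining(not_after, now=None):
    """Days until a certificate's notAfter timestamp; negative if expired."""
    expires = datetime.strptime(not_after, NOT_AFTER_FORMAT)
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# An already-expired certificate comes back negative:
ref = datetime(2018, 12, 7, tzinfo=timezone.utc)
print(days_remaining("Dec  5 12:00:00 2018 GMT", now=ref))
```

In practice a monitoring job would fetch the live certificate with `ssl.create_default_context()` and page someone when the number drops below a threshold like 30 days.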

Although much of the focus was on the outages at O2 in the UK and Softbank in Japan, Ericsson later confirmed to Softbank that issues had simultaneously affected carriers that had installed Ericsson-made equipment in a total of 11 countries. Softbank said that the outage affected its own network for just over four hours.

Source: Millions of smartphones were taken offline by an expired certificate – The Verge

Windows 10 security question: How do miscreants use these for post-hack persistence?

Crafty infosec researchers have figured out how to remotely set answers to Windows 10’s password reset questions “without even executing code on the targeted machine”.

Thanks to some alarmingly straightforward registry tweaks allied with a simple Python script, Illusive Networks’ Magal Baz and Tom Sela were not only able to remotely define their own choice of password reset answers, they were also able to revert local users’ password changes.

Part of the problem is that Windows 10’s password reset questions are in effect hard-coded; you cannot define your own questions, limiting users to picking one of Microsoft’s six. Thus questions such as “what was your first pet’s name” are now defending your box against intruders.

The catch is that to do this, one first needs suitable account privileges. This isn’t an attack vector per se but it is something that an attacker who has already gained access to your network could use to give themselves near-invisible persistence on local machines, defying attempts to shut them out.

[…]

“In order to prevent people from reusing their passwords, Windows stores hashes of the old passwords. They’re stored under AES in the registry. If you have access to the registry, it’s not that hard to read them. You can use an undocumented API and reinstate the hash that was active just before you changed it. Effectively I’m doing a password change and nobody is going to notice that,” he continued, explaining that he’d used existing features in the post-exploitation tool Mimikatz to achieve that.

As for protecting against this post-attack persistence problem? “Add additional auditing and GPO settings,” said Sela. The two also suggested that Microsoft allow custom security questions, as well as the ability to disable the feature altogether, in Windows 10 Enterprise. The presentation slides are available here (PDF).

Source: Windows 10 security question: How do miscreants use these for post-hack persistence? • The Register

I Tried Predictim AI That Scans for ‘Risky’ Babysitters. Turns out founders don’t have kids

The founders of Predictim want to be clear with me: Their product—an algorithm that scans the online footprint of a prospective babysitter to determine their “risk” levels for parents—is not racist. It is not biased.

“We take ethics and bias extremely seriously,” Sal Parsa, Predictim’s CEO, tells me warily over the phone. “In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.”

At issue is the fact that I’ve used Predictim to scan a handful of people I very much trust with my own son. Our actual babysitter, Kianah Stover, returned a ranking of “Moderate Risk” (3 out of 5) for “Disrespectfulness” for what appear to me to be innocuous Twitter jokes. She returned a worse ranking than a friend I also tested who routinely spews vulgarities, in fact. She’s black, and he’s white.

“I just want to clarify and say that Kianah was not flagged because she was African American,” says Joel Simonoff, Predictim’s CTO. “I can guarantee you 100 percent there was no bias that went into those posts being flagged. We don’t look at skin color, we don’t look at ethnicity, those aren’t even algorithmic inputs. There’s no way for us to enter that into the algorithm itself.”

Source: I Tried Predictim AI That Scans for ‘Risky’ Babysitters

So, the writer of this article tries to push for a racist angle, however unlikely this is. Oh well, it’s still a good article talking about how this system works.

[…]

When I entered the first person I aimed to scan into the system, Predictim returned a wealth of personal data—home addresses, names of relatives, phone numbers, alternate email addresses, the works. When I sent a screenshot to my son’s godfather of his scan, he replied, “Whoa.”

The goal was to allow parents to make sure they had found the right person before proceeding with the scan, but that’s an awful lot of data.

[…]

After you confirm the personal details and initiate the scan, the process can take up to 48 hours. You’ll get an email with a link to your personalized dashboard, which contains all the people you’ve scanned and their risk rankings, when it’s complete. That dashboard looks a bit like the backend to a content management system, or website analytics service Chartbeat, for those who have the misfortune of being familiar with that infernal service.

[…]

Potential babysitters are graded on a scale of 1-5 (5 being the riskiest) in four categories: “Bullying/Harassment,” “Disrespectful Attitude,” “Explicit Content,” and “Drug use.”

[…]

Neither Parsa nor Simonoff [Predictim’s founders – ed] have children, though Parsa is married, and both insist they are passionate about protecting families from bad babysitters. Joel, for example, once had a babysitter who would drive him and his brother around smoking cigarettes in the car. And Parsa points to Joel’s grandfather’s care provider. “Joel’s grandfather, he has an individual coming in and taking care of him—it’s kind of the elderly care—and all we know about that individual is that yes, he hasn’t done a—or he hasn’t been caught doing a crime.”

[…]

To be fair, I scanned another friend of mine who is black—someone whose posts are perhaps the most overwhelmingly positive and noncontroversial of anyone on my feed—and he was rated at the lowest risk level. (If he wasn’t, it’d be crystal clear that the thing was racist.) [Wait – what?!]

And Parsa, who is Afghan, says that he has experienced a lifetime of racism himself, and even changed his name from a more overtly Muslim name because he couldn’t get prospective employers to return his calls despite having top notch grades and a college degree. He is sensitive to racism, in other words, and says he made an effort to ensure Predictim is not. Parsa and Simonoff insist that their system, while not perfect, can detect nuances and avoid bias.

The predictors they use also seem overly simplistic and unnuanced. But I bet it’s something Americans will like – another way to easily devolve responsibility for childcare.

 

Uber’s Arbitration Policy Comes Back to Bite It in the Ass

Over 12,000 Uber drivers found a way to weaponize the ridesharing platform’s restrictive contract in what’s possibly the funniest labor strategy of the year.

But first: a bit of background. One of the more onerous aspects of the gig economy is its propensity to include arbitration agreements in the terms of service—you know, the very long document no one really reads—governing the rights of its workers. These agreements prohibit workers from suing gig platforms in open court, generally giving the company greater leverage and saving it from public embarrassment. Sometimes arbitration is binding; in Uber’s case, drivers can opt out—but only within 30 days of signing, and very few seem to realize they have the option.

Until an unfavorable U.S. Supreme Court ruling earlier this year, independent contractors often joined class-action lawsuits anyway, arguing (sometimes successfully) that they ought to have been classified as employees from the get-go. With that avenue of remuneration cut off, a group of 12,501 Uber drivers found a new option that hinges on the company’s own terms of service. While arbitrating parties are responsible for paying for their own attorneys, the terms state that “in all cases where required by law, the Company [Uber] will pay the Arbitrator’s and arbitration fees.”

If today’s petition in California’s Northern District Court is accurate, those arbitration fees add up rather quickly.

A group of 12,501 drivers opted to take Uber at its word, individually bringing their cases up for arbitration and overwhelming the infrastructure that’s meant to divide and conquer. “As of November 13, 2018, 12,501 demands have been filed with JAMS,” the notice states. (JAMS is the arbitration service Uber uses for this purpose.) Continuing on: “Of those 12,501 demands, in only 296 has Uber paid the initiating filing fees necessary for an arbitration to commence […] only 47 have appointed arbitrators, and […] in only six instances has Uber paid the retainer fee of the arbitrator to allow the arbitration to move forward.” (Emphasis ours.)

While a JAMS representative was not immediately available for comment, the cause of the holdup is Uber itself, according to the notice:

Uber knows that its failure to pay the filing fees has prevented the arbitrations from commencing. Throughout this process, JAMS has repeatedly advised Uber that JAMS is “missing the NON-REFUNDABLE filing fee of $1,500 for each demand, made payable to JAMS.” JAMS has also informed Uber that “[u]ntil the Filing Fee is received we will be unable to proceed with the administration of these matters.”

We have no reason to assume this fee would be different based on the nature of each case, so some back-of-the-envelope math indicates the filings alone would cost Uber—a company that already loses sickening amounts of money—over $18.7 million. We’ve reached out to Uber for comment and to learn if they have an estimate of what that number would be after attorney fees and other expenses.
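The back-of-the-envelope math above is easy to check. A minimal sketch, using only the figures quoted in the notice (12,501 demands, 296 fees paid, $1,500 per filing):

```python
# Back-of-the-envelope estimate of Uber's JAMS filing-fee exposure,
# based on the numbers quoted in the drivers' notice. Illustrative only;
# excludes attorney fees, arbitrator retainers, and other costs.

DEMANDS_FILED = 12_501   # arbitration demands filed with JAMS
FEES_PAID = 296          # demands for which Uber has paid the filing fee
FILING_FEE = 1_500       # non-refundable JAMS filing fee per demand, in USD

total_exposure = DEMANDS_FILED * FILING_FEE          # if every demand proceeds
outstanding = DEMANDS_FILED - FEES_PAID              # demands still unpaid
outstanding_cost = outstanding * FILING_FEE          # fees still owed

print(f"Total filing fees if all demands proceed: ${total_exposure:,}")
print(f"Filing fees owed on unpaid demands:       ${outstanding_cost:,}")
```

Running this confirms the article’s figure: 12,501 × $1,500 comes to $18,751,500, i.e. “over $18.7 million” in filing fees alone.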

Source: Uber’s Arbitration Policy Comes Back to Bite It in the Ass

Australia now has encryption-busting laws as Labor capitulates

Labor has backed down completely on its opposition to the Assistance and Access Bill, and in the process has been totally outfoxed by a government that can barely control the floor of Parliament.

After proposing a number of amendments to the Bill, which Labor party members widely called out as inappropriate in the House of Representatives on Thursday morning, the ALP dropped its proposals to allow the Bill to pass through Parliament before the summer break.

“Let’s just make Australians safer over Christmas,” Bill Shorten said on Thursday evening.

“It’s all about putting people first.”

Shorten said Labor is letting the Bill through provided the government agrees to amendments in the new year.

Under the new laws, Australian government agencies would be able to issue three kinds of notices:

  • Technical Assistance Notices (TAN), which are compulsory notices for a communication provider to use an interception capability they already have;
  • Technical Capability Notices (TCN), which are compulsory notices for a communication provider to build a new interception capability, so that it can meet subsequent Technical Assistance Notices; and
  • Technical Assistance Requests (TAR), which have been described by experts as the most dangerous of all.

Source: Australia now has encryption-busting laws as Labor capitulates | ZDNet

Australia now is a surveillance state.

Reddit, YouTube, Others Push Against EU Copyright Directive – even the big guys think this is a bad idea. Hint: aside from it being copyright, it’s a REALLY bad idea

With Tumblr’s decision this week to ban porn on its platform, everyone’s getting a firsthand look at how bad automated content filters are at the moment. Lawmakers in the European Union want a similar system to filter copyrighted works and, despite expert consensus that this will just fuck up the internet, the legislation moves forward. Now some of the biggest platforms on the web insist we must stop it.

YouTube, Reddit, and Twitch have recently come out publicly against the EU’s new Copyright Directive, arguing that the impending legislation could be devastating to their businesses, their users, and the internet at large.

The Copyright Directive is the first update to the bloc’s copyright law since 2001, and it’s a major overhaul intended to claw back some of the money that copyright holders believe they’ve lost since internet use exploded around the globe. Fundamentally, its provisions are supposed to punish big platforms like Google for profiting off of copyright infringement and siphon some income back into the hands of those to whom it rightfully belongs.

Unfortunately, the way it’s designed will likely make life harder for smaller platforms, harm the free exchange of information, kill memes, and make fair use more difficult to navigate, all while tech giants have the resources to survive the wreckage. You don’t have to take my word for it: listen to Tim Berners-Lee, the father of the world wide web, and the 70 other top technologists who signed a letter arguing against the legislation back in June.

So far, this issue hasn’t received the kind of attention that, say, net neutrality did, at least in part because it’s very complicated to explain and it takes a while for these kinds of things to sink in. We’ve outlined the details in the past on multiple occasions. The main thing to understand is that critics take issue with two pieces of the legislation.

Article 11, better known as the “link tax,” would require online platforms to purchase a license to link to other sites or quote from articles. That’s the part that threatens the free spread of information.

Article 13 dictates that online platforms install some sort of monitoring system that lets copyright holders upload their work for automatic detection. If something sneaks by the system’s filters, the platform could face full penalties for copyright infringement. For example, a SpongeBob meme could be flagged and blocked because its source image belongs to Nickelodeon; or a dumb vlog could be flagged and blocked because there’s a sponge in the background and the dumb filter thought it was SpongeBob.
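The over-blocking problem critics describe isn’t hypothetical: any filter that matches uploads against a registry of protected works, without understanding context, will catch fair use too. A toy sketch (the registry and the matching rule are invented for illustration, not how any real Article 13 filter works):

```python
# Toy "upload filter": blocks any upload containing a registered work.
# Deliberately naive, to show why critics worry about Article 13 filters:
# substring matching cannot distinguish infringement from quotation.

REGISTERED_WORKS = [
    "to be or not to be, that is the question",  # hypothetical registered text
]

def upload_allowed(text: str) -> bool:
    """Return False if the upload contains any registered work."""
    lowered = text.lower()
    return not any(work in lowered for work in REGISTERED_WORKS)

# An original essay passes.
essay = "My thoughts on the new copyright directive."
print(upload_allowed(essay))

# A review quoting one line under fair use gets the whole upload blocked.
review = ('The film butchers Hamlet: "To be or not to be, that is the '
          'question" is delivered as a throwaway joke.')
print(upload_allowed(review))
```

The second upload is a legitimate review, but the filter can’t tell, which is exactly the failure mode the SpongeBob example describes.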

Source: Reddit, YouTube, Others Push Against EU Copyright Directive

Facebook Well Aware That Tracking Contacts Is Creepy: Emails

Back in 2015, Facebook had a pickle of a problem. It was time to update the Android version of the Facebook app, and two different groups within Facebook were at odds over what the data grab should be.

The business team wanted to get Bluetooth permissions so it could push ads to people’s phones when they walked into a store. Meanwhile, the growth team, which is responsible for getting more and more people to join Facebook, wanted to get “Read Call Log” permission so that Facebook could track everyone an Android user called or texted, in order to make better friend recommendations. (Yes, that’s how Facebook may have historically figured out with whom you went on one bad Tinder date and then plopped them into “People You May Know.”) According to internal emails recently seized by the UK Parliament, Facebook’s business team recognized that what the growth team wanted to do was incredibly creepy and worried it would cause a PR disaster.

In a February 4, 2015, email that encapsulates the issue, Facebook Bluetooth Beacon product manager Mike LeBeau is quoted as saying that the request for “read call log” permission was a “pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it.”

LeBeau was worried because a “screenshot of the scary Android permissions screen becomes a meme (as it has in the past), propagates around the web, it gets press attention, and enterprising journalists dig into what exactly the new update is requesting.” He suggested a possible headline for those journalists: “Facebook uses new Android update to pry into your private life in ever more terrifying ways – reading your call logs, tracking you in businesses with beacons, etc.” That’s a great and accurate headline. This guy might have a future as a blogger.

At least he called the journalists “enterprising” instead of “meddling kids.”

Then a man named Yul Kwon came to the rescue saying that the growth team had come up with a solution! Thanks to poor Android permission design at the time, there was a way to update the Facebook app to get “Read Call Log” permission without actually asking for it. “Based on their initial testing, it seems that this would allow us to upgrade users without subjecting them to an Android permissions dialog at all,” Kwon is quoted as saying. “It would still be a breaking change, so users would have to click to upgrade, but no permissions dialog screen. They’re trying to finish testing by tomorrow to see if the behavior holds true across different versions of Android.”

Oh yay! Facebook could suck more data from users without scaring them by telling them it was doing it! This is a little surprising coming from Yul Kwon because he is Facebook’s chief ‘privacy sherpa,’ who is supposed to make sure that new products coming out of Facebook are privacy-compliant. I know because I profiled him, in a piece that happened to come out the same day as this email was sent. A member of his team told me their job was to make sure that the things they’re working on “not show up on the front page of the New York Times” because of a privacy blow-up. And I guess that was technically true, though it would be more reassuring if they tried to make sure Facebook didn’t do the creepy things that led to privacy blow-ups rather than keeping users from knowing about the creepy things.

I reached out to Facebook about the comments attributed to Kwon and will update when I hear back.

Thanks to this evasion of permission requests, Facebook users did not realize for years that the company was collecting information about who they called and texted, which would have helped explain to them why their “People You May Know” recommendations were so eerily accurate. It only came to light earlier this year, three years after it started, when a few Facebook users noticed their call and text history in their Facebook files when they downloaded them.

When that was discovered in March 2018, Facebook played it off like it wasn’t a big deal. “We introduced this feature for Android users a couple of years ago,” it wrote in a blog post, describing it as an “opt-in feature for people using Messenger or Facebook Lite on Android.”

Facebook continued: “People have to expressly agree to use this feature. If, at any time, they no longer wish to use this feature they can turn it off in settings, or here for Facebook Lite users, and all previously shared call and text history shared via that app is deleted.”

Facebook included a photo of the opt-in screen in its post. In small grey font, it informed people they would be sharing their call and text history.

This particular email was seized by the UK Parliament from the founder of a start-up called Six4Three. It was one of many internal Facebook documents that Six4Three obtained as part of discovery in a lawsuit it’s pursuing against Facebook for banning its Pikinis app, which allowed Facebook users to collect photos of their friends in bikinis. Yuck.

Facebook has a lengthy response to many of the disclosures in the documents including to the discussion in this particular email:

Call and SMS History on Android

This specific feature allows people to opt in to giving Facebook access to their call and text messaging logs in Facebook Lite and Messenger on Android devices. We use this information to do things like make better suggestions for people to call in Messenger and rank contact lists in Messenger and Facebook Lite. After a thorough review in 2018, it became clear that the information is not as useful after about a year. For example, as we use this information to list contacts that are most useful to you, old call history is less useful. You are unlikely to need to call someone who you last called over a year ago compared to a contact you called just last week.

Facebook still doesn’t like to mention that this feature is key to making creepily accurate suggestions as to people you may know.

Source: Facebook Well Aware That Tracking Contacts Is Creepy: Emails

Marriott’s breach response is so bad, security experts are filling in the gaps

Last Friday, Marriott sent out millions of emails warning of a massive data breach — some 500 million guest reservations had been stolen from its Starwood database.

One problem: the email sender’s domain didn’t look like it came from Marriott at all.

Marriott sent its notification email from “email-marriott.com,” which is registered to a third party firm, CSC, on behalf of the hotel chain giant. But there was little else to suggest the email was at all legitimate — the domain doesn’t load or have an identifying HTTPS certificate. In fact, there’s no easy way to check that the domain is real, except a buried note on Marriott’s data breach notification site that confirms the domain as legitimate.

But what makes matters worse is that the email is easily spoofable.

[…]

Take “email-marriot.com.” To the untrained eye, it looks like the legitimate domain — but many wouldn’t notice the misspelling. Actually, it belongs to Jake Williams, founder of Rendition Infosec, to warn users not to trust the domain.

“I registered the domains to make sure that scammers didn’t register the domains themselves,” Williams told TechCrunch. “After the Equifax breach, it was obvious this would be an issue, so registering the domains was just a responsible move to keep them out of the hands of criminals.”

[…]

Williams isn’t the only one who’s resorted to defending Marriott customers from cybercriminals. Nick Carr, who works at security giant FireEye, registered the similarly named “email-mariott.com” on the day of the Marriott breach.

“Please watch where you click,” he wrote on the site. “Hopefully this is one less site used to confuse victims.” Had Marriott just sent the email from its own domain, it wouldn’t be an issue.
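Part of why these lookalikes work is that a one-letter deletion leaves the domain almost indistinguishable from the real one. A quick sketch using Python’s standard-library `difflib` (the domains are the real ones from the article; the similarity threshold is an arbitrary illustration, not a production phishing check):

```python
from difflib import SequenceMatcher

# The domain Marriott actually sent breach notices from, plus the two
# lookalikes registered by security researchers to keep them from scammers.
OFFICIAL = "email-marriott.com"
LOOKALIKES = ["email-marriot.com", "email-mariott.com"]

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

for domain in LOOKALIKES:
    score = similarity(OFFICIAL, domain)
    print(f"{domain!r} vs {OFFICIAL!r}: {score:.3f}")
```

Both lookalikes score above 0.95 similarity to the official domain, which is exactly why an untrained eye can’t tell them apart, and why sending breach notices from an unfamiliar third-party domain in the first place was such a bad idea.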

Source: Marriott’s breach response is so bad, security experts are filling in the gaps — at their own expense | TechCrunch

FYI: NASA has sent a snatch-and-grab spacecraft to an asteroid to seize some rock and send it back to Earth

NASA’s mission to send a probe to an asteroid, dig up a chunk, and send the material back to Earth is now half-way complete. The agency says its OSIRIS-REx spacecraft has reached its hunk-of-rock target after a trip lasting two years and two billion miles.

The spacecraft, technically the Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is orbiting the asteroid Bennu, a diamond-shaped chunk of space rock with a varying orbit that keeps it around 100 million miles (160 million kilometers) from Earth.

“Initial data from the approach phase show this object to have exceptional scientific value,” said Dante Lauretta, the mission’s principal investigator. “We can’t wait to get to work studying and characterizing Bennu’s rough and rugged surface to find out where the right spot is to collect the sample and bring it back to Earth.”

“Today has been very exciting, but the true nail-biting moment will be the sample collection. The best times are ahead of us, so stay tuned. The exploration of Bennu has just begun, and we have a lifetime of adventure ahead of us.”

Bennu is thought to be a lump of rock from the earliest days of the Solar System. After a couple of flybys, OSIRIS-REx will settle into a steady orbit a few miles above the surface. It will spend the next 505 days circling the asteroid and scanning it with cameras, LIDAR and spectrographs to try and find out as much information as possible about its composition.

The asteroid is of particular interest to NASA because it may contain water and clays from the protoplanetary disc that formed the Sun and the planets in our Solar System. So, once it has picked the likeliest spot and safest place to find some of these materials, the spacecraft will extend the Touch-And-Go Sample Acquisition Mechanism (TAGSAM) – a 3.35-meter (11 ft) robotic arm – and grab a handful of matter from the surface.

Once that’s done, and assuming OSIRIS-REx doesn’t hit the surface, the spacecraft will begin the long voyage back to Earth. It’s expected to arrive in September 2023, when the sealed sample container will reenter the atmosphere using a heat shield and float back to scientists via parachute into the Utah desert.

Source: FYI: NASA has sent a snatch-and-grab spacecraft to an asteroid to seize some rock and send it back to Earth • The Register

Nvidia Uses AI to Render Virtual Worlds in Real Time

Nvidia announced that AI models can now draw new worlds without using traditional modeling techniques or graphics rendering engines. This new technology uses an AI deep neural network to analyze existing videos and then apply the visual elements to new 3D environments.

Nvidia claims this new technology could provide a revolutionary step forward in creating 3D worlds because the AI models are trained from video to automatically render buildings, trees, vehicles, and objects into new 3D worlds, instead of requiring the normal painstaking process of modeling the scene elements.

But the project is still a work in progress. As we can see from the image on the right, which was generated in real time on an Nvidia Titan V graphics card using its Tensor cores, the rendered scene isn’t as crisp as we would expect in real life, and it isn’t as clear as we would expect with a normally modeled scene in a 3D environment. However, the result is much more impressive when we see the real-time output in the YouTube video below. The key here is speed: the AI generates these scenes in real time.

Nvidia AI Rendering

Nvidia’s researchers have also used this technique to model other motions, such as dance moves, and then apply those same moves to other characters in real-time video. That does raise moral questions, especially given the proliferation of altered videos like deep fakes, but Nvidia feels that it is an enabler of technology and the issue should be treated as a security problem that requires a technological solution to prevent people from rendering things that aren’t real.

The big question is when this will come to the gaming realm, but Nvidia cautions that this isn’t a shipping product yet. The company did theorize that it would be useful for enhancing older games by analyzing the scenes and then applying trained models to improve the graphics, among many other potential uses. It could also be used to create new levels and content in older games. In time, the company expects the technology to spread and become another tool in the game developers’ toolbox. The company has open sourced the project, so anyone can download and begin using it today, though it is currently geared towards AI researchers.

Nvidia says this type of AI analysis and scene generation can occur with any type of processor, provided it can deliver enough AI throughput to manage the real-time feed. The company expects that performance and image quality will improve over time.

Nvidia sees this technique eventually taking hold in gaming, automotive, robotics, and virtual reality, but it isn’t committing to a timeline for an actual product. The work remains in the lab for now, but the company expects game developers to begin working with the technology in the future. Nvidia is also conducting a real-time demo of AI-generated worlds at the AI research-focused NeurIPS conference this week.

Source: Nvidia Uses AI to Render Virtual Worlds in Real Time

When Discounts Hurt Sales: Too much discounting and too many positive reviews can hurt sales

By tracking the sales of 19,978 deals on Groupon.com and conducting a battery of identification and falsification tests, we find that deep discounts reduce sales. A 1% increase in a deal’s discount decreases sales by 0.035%–0.256%. If a merchant offers an additional 10% discount from the sample mean of 55.6%, sales could decrease by 0.63%–4.60%, or 0.80–5.24 units and $42–$275 in revenue. This negative effect of discount is more prominent among credence goods and deals with low sales, and when the deals are offered in cities with higher income and better education. Our findings suggest that consumers are concerned about product quality, and excessive discounts may reduce sales immediately. A follow-up lab experiment provides further support to this quality-concern explanation. Furthermore, it suggests the existence of a “threshold” effect: the negative effect on sales is present only when the discount is sufficiently high. Additional empirical analysis shows that deals displaying favorable third-party support, such as Facebook fans and online reviews, are more susceptible to this adverse discount effect.
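The abstract’s headline numbers follow directly from its elasticity estimates. A minimal sketch of the arithmetic, using only figures reported in the abstract (the elasticity is stated per 1% relative increase in discount):

```python
# Reproduce the abstract's back-of-the-envelope sales-impact figures.
MEAN_DISCOUNT = 55.6        # sample mean discount, in percent
EXTRA_DISCOUNT = 10.0       # additional discount, in percentage points
ELASTICITY_LOW = 0.035      # % sales decrease per 1% increase in discount
ELASTICITY_HIGH = 0.256

# Going from 55.6% to 65.6% is a relative increase of 10/55.6 ~= 17.99%.
relative_increase = EXTRA_DISCOUNT / MEAN_DISCOUNT * 100

drop_low = relative_increase * ELASTICITY_LOW    # ~0.63% fewer sales
drop_high = relative_increase * ELASTICITY_HIGH  # ~4.60% fewer sales

print(f"Relative discount increase: {relative_increase:.2f}%")
print(f"Estimated sales decrease:   {drop_low:.2f}%-{drop_high:.2f}%")
```

This recovers the abstract’s 0.63%–4.60% range, and makes clear the subtlety: the 10-point discount is an ~18% *relative* increase from the 55.6% mean, which is the figure the elasticities apply to.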

Source: When Discounts Hurt Sales: The Case of Daily-Deal Markets | Information Systems Research

 