Researchers are keeping pig brains alive outside the body

In a step that could change the definition of death, researchers have restored circulation to the brains of decapitated pigs and kept the reanimated organs alive for as long as 36 hours.

The feat offers scientists a new way to study intact brains in the lab in stunning detail. But it also inaugurates a bizarre new possibility in life extension, should human brains ever be kept on life support outside the body.

The work was described on March 28 at a meeting held at the National Institutes of Health to investigate ethical issues arising as US neuroscience centers explore the limits of brain science.

During the event, Yale University neuroscientist Nenad Sestan disclosed that a team he leads had experimented on between 100 and 200 pig brains obtained from a slaughterhouse, restoring their circulation using a system of pumps, heaters, and bags of artificial blood warmed to body temperature.

Source: Researchers are keeping pig brains alive outside the body – MIT Technology Review

The World’s First Working Projector Smartwatch Turns Your Arm Into a Big Touchscreen


Some smartwatches come with powerful processors, lots of storage, and robust software, but have limited capabilities compared to smartphones thanks to their tiny touchscreens. Researchers at Carnegie Mellon University, however, have now created a smartwatch prototype with a built-in projector that turns the wearer’s arm into a smartphone-sized touchscreen.

Despite what you may have seen on crowdfunding sites, the LumiWatch is the first smartwatch to integrate a fully-functional laser projector and sensor array, allowing a screen projected on a user’s skin to be poked, tapped, and swiped just like a traditional touchscreen. It seems like a gadget straight out of science fiction, but don’t reach for your credit card just yet, because it’s going to be a very long time before the technology created for this research project ends up in a consumer-ready device.

Source: The World’s First Working Projector Smartwatch Turns Your Arm Into a Big Touchscreen

The Golden State Killer Suspect’s DNA Was in a Publicly Available Database, and Yours Might Be Too

Plenty of people have voluntarily uploaded their DNA to GEDmatch and other databases, often with real names and contact information. It’s what you do if you’re an adopted kid looking for a long-lost parent, or a genealogy buff curious about whether you have any cousins still living in the old country. GEDmatch requires that you make your DNA data public if you want to use their comparison tools, although you don’t have to attach your real name. And they’re not the only database that has helped law enforcement track people down without their knowledge.

How DNA Databases Help Track People Down

We don’t know exactly what samples or databases were used in the Golden State Killer’s case; the Sacramento County District Attorney’s office gave very little information and hasn’t confirmed any further details. But here are some things that are possible.

Y chromosome data can lead to a good guess at an unknown person’s last name.

Cis men typically have an X and a Y chromosome, and cis women two X’s. That means the Y chromosome is passed down from genetic males to their offspring—for example, from father to son. Since last names are also often handed down the same way, in many families you’ll share a surname with anybody who shares your Y chromosome.

A 2013 Science paper described how a small amount of Y chromosome data should be enough to identify surnames for an estimated 12 percent of white males in the US. (That method would find the wrong surname for 5 percent, and the rest would come back as unknown.) As more people upload their information to public databases, the authors warned, the success rate will only increase.
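To make the surname-inference idea concrete, here is a toy sketch in Python. The marker names are real Y-STR loci, but the database, repeat counts, and matching threshold are all invented for illustration and bear no resemblance to a real genealogy service's matching algorithm.

```python
# A haplotype is a dict of STR marker -> repeat count.
# Toy database of (haplotype, surname) pairs; values are made up.
DATABASE = [
    ({"DYS393": 13, "DYS390": 24, "DYS19": 14, "DYS391": 11}, "Miller"),
    ({"DYS393": 12, "DYS390": 23, "DYS19": 15, "DYS391": 10}, "Garcia"),
    ({"DYS393": 13, "DYS390": 24, "DYS19": 14, "DYS391": 10}, "Miller"),
]

def guess_surname(query, min_match=0.75):
    """Return the surname whose haplotype best matches the query,
    or None if nothing matches closely enough."""
    best_name, best_score = None, 0.0
    for haplotype, surname in DATABASE:
        shared = [m for m in query if m in haplotype]
        if not shared:
            continue
        score = sum(query[m] == haplotype[m] for m in shared) / len(shared)
        if score > best_score:
            best_name, best_score = surname, score
    return best_name if best_score >= min_match else None

print(guess_surname({"DYS393": 13, "DYS390": 24, "DYS19": 14, "DYS391": 11}))  # prints "Miller"
```

This is also why the paper's 5 percent wrong-surname figure exists: a close but coincidental marker match can clear the threshold for the wrong family line.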

This is exactly the technique that genealogical consultant Colleen Fitzpatrick used to narrow down a pool of suspects in an Arizona cold case. She seems to have used short tandem repeat (STR) data from the suspect’s Y chromosome to search the Family Tree DNA database, and she saw the name Miller in the results.

The police already had a long list of suspects in the Arizona case, but based on that tip they zeroed in on one with the last name Miller. As with the Golden State Killer case, police confirmed the DNA match by obtaining a fresh DNA sample directly from their subject—the Sacramento office said they got it from something he discarded. (Yes, this is legal, and it can be an item as ordinary as a used drinking straw.)

The authors of the Science paper point out that surname, location, and year of birth are often enough to find an individual in census data.

SNP files can find family trees.

When you download your “raw data” after mailing in a 23andme or Ancestry test, what you get is a list of locations on your genome (called SNPs, for single nucleotide polymorphisms) and two letters indicating your status for each. For example, at a certain SNP you may have inherited an A from one parent and a G from the other.
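Such a raw data file is plain text, and parsing it takes only a few lines. The sketch below assumes the tab-separated layout 23andme uses (rsid, chromosome, position, genotype) with `#` comment lines; the three SNP rows are illustrative sample values.

```python
import io

# Stand-in for a downloaded raw data file.
sample = io.StringIO("""\
# rsid\tchromosome\tposition\tgenotype
rs4477212\t1\t82154\tAA
rs3094315\t1\t752566\tAG
rs3131972\t1\t752721\tGG
""")

def parse_raw_data(fh):
    """Map each rsid to its location and the two inherited letters."""
    snps = {}
    for line in fh:
        if line.startswith("#") or not line.strip():
            continue
        rsid, chrom, pos, genotype = line.split()
        snps[rsid] = {"chrom": chrom, "pos": int(pos), "genotype": genotype}
    return snps

snps = parse_raw_data(sample)
print(snps["rs3094315"]["genotype"])  # AG: an A from one parent, a G from the other
```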

Genetic testing sites will have tools to compare your DNA with others in their database, but you can also download your raw data and submit it to other sites, including GEDmatch or Family Tree DNA. (23andme and Ancestry allow you to download your data, but they don’t accept uploads.)

But you don’t have to send a spit sample to one of those companies to get a raw data file. The DNA Doe project describes how they sequenced the whole genome of an unidentified girl from a cold case and used that data to construct a SNP file to upload to GEDmatch. They found someone with enough of the same SNPs that they were probably a close cousin. That cousin also had an account at Ancestry, where they had filled out a family tree with details of their family members. The tree included an entry for a cousin of the same age as the unidentified girl, and whose death date was listed as “missing—presumed dead.” It was her.

Your DNA Is Not Just Yours

When you send in a spit sample, or upload a raw data file, you may only be thinking about your own privacy. I have nothing to hide, you might tell yourself. Who cares if somebody finds out that I have blue eyes or a predisposition to heart disease?

But half of your DNA belongs to your biological mother, and half to your biological father. Another half—cut a different way—belongs to each of your children. On average, you share half your DNA with a sibling, and a quarter with a half-sibling, grandparent, aunt, uncle, niece or nephew. You share about an eighth with a first cousin, and so on. The more of your extended family who are into genealogy, the more likely you are to have your DNA in a public database, already contributed by a relative.
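The fractions in that paragraph follow a simple halving rule, which a couple of lines of Python make explicit. These are expected averages only; actual sharing varies because recombination is random.

```python
# Expected autosomal DNA sharing falls off by half with each
# degree of relationship.
def expected_shared_fraction(degree):
    """degree 1 = parent/child/sibling, 2 = grandparent/aunt/half-sibling,
    3 = first cousin, and so on."""
    return 0.5 ** degree

for label, degree in [("sibling", 1), ("grandparent", 2), ("first cousin", 3)]:
    print(f"{label}: ~{expected_shared_fraction(degree):.1%}")
```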

In the cases we mention here, the breakthrough came when DNA was matched, through a public database, to a person’s real name. But your DNA is, in a sense, your most identifying information.

For some cases, it may not matter whether your name is attached. Facebook reportedly spoke with a hospital about exchanging anonymized data. They didn’t need names because they had enough information, and good enough algorithms, that they thought they could identify individuals based on everything else. (Facebook doesn’t currently collect DNA information, thank god. There is a public DNA project that signs people up using a Facebook app, but they say they don’t pass the data to Facebook itself.)

And remember that 2013 study about tracking down people’s surnames? They grabbed whole-genome data from a few high-profile people who had made theirs public, and showed that the DNA files were sometimes enough information to track down an individual’s full name. It may be impossible for DNA to be totally anonymous.

Can You Protect Your Privacy While Using DNA Databases?

If you’re very concerned about privacy, you’re best off not using any of these databases. But you can’t control whether your relatives use them, and you may be looking for a long-lost family member and thus want to be in a database while minimizing the risks.

Source: The Golden State Killer Suspect’s DNA Was in a Publicly Available Database, and Yours Might Be Too

‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale

The workers wear caps that monitor their brainwaves, data that management then uses to adjust the pace of production and redesign workflows, according to the company.

The company said it could increase the overall efficiency of the workers by manipulating the frequency and length of break times to reduce mental stress.

Hangzhou Zhongheng Electric is just one example of the large-scale application of brain surveillance devices to monitor people’s emotions and other mental activities in the workplace, according to scientists and companies involved in the government-backed projects.

Concealed in regular safety helmets or uniform hats, these lightweight, wireless sensors constantly monitor the wearer’s brainwaves and stream the data to computers that use artificial intelligence algorithms to detect emotional spikes such as depression, anxiety or rage.

The technology is in widespread use around the world but China has applied it on an unprecedented scale in factories, public transport, state-owned companies and the military to increase the competitiveness of its manufacturing industry and to maintain social stability.

It has also raised concerns about the need for regulation to prevent abuses in the workplace.

The technology is also in use in Hangzhou at State Grid Zhejiang Electric Power, where it has boosted company profits by about 2 billion yuan (US$315 million) since it was rolled out in 2014, according to Cheng Jingzhou, an official overseeing the company’s emotional surveillance programme.

“There is no doubt about its effect,” Cheng said.

Source: ‘Forget the Facebook leak’: China is mining data directly from workers’ brains on an industrial scale | South China Morning Post

Chinese government admits collection of deleted WeChat messages

Chinese authorities revealed over the weekend that they have the capability of retrieving deleted messages from the almost universally used WeChat app. The admission doesn’t come as a surprise to many, but it’s rare for this type of questionable data collection tactic to be acknowledged publicly.

As noted by the South China Morning Post, an anti-corruption commission in Hefei posted Saturday to social media that it has “retrieved a series of deleted WeChat conversations from a subject” as part of an investigation.

The post was deleted Sunday, but not before many had seen it and understood the ramifications. Tencent, which operates the WeChat service used by nearly a billion people (including myself), explained in a statement that “WeChat does not store any chat histories — they are only stored on users’ phones and computers.”

The technical details of this storage were not disclosed, but it seems clear from the commission’s post that the messages are accessible in some way to interested authorities, as many have suspected for years. The app does, of course, comply with other government requirements, such as censoring certain topics.

There are still plenty of questions, the answers to which would help explain user vulnerability: Are messages effectively encrypted at rest? Does retrieval require the user’s password and login, or can it be forced with a “master key” or backdoor? Can users permanently and totally delete messages on the WeChat platform at all?

Source: Chinese government admits collection of deleted WeChat messages | TechCrunch

AI boffins rebel against closed-access academic journal Nature

Thousands of machine-learning wizards have signed an open statement boycotting a new AI-focused academic journal, disapproving of the journal’s closed-access policy.

Nature Machine Intelligence is a specialized journal concentrating on intelligent systems and robotics research. It’s expected to launch in January next year, and is part of Nature Publishing Group, one of the world’s top academic publishers.

The joint statement, written by Thomas Dietterich, a professor of computer science at Oregon State University in the US, and signed by more than 2,000 academics and researchers in industry, states that “they will not submit to, review, or edit for this new journal.”

He said that free and open-access journals speed up scientific progress, since they allow anyone to read the latest research and contribute their own findings. Open access also helps universities that can’t afford subscription fees or the charges to make their own papers open access.

“It is important to note that in the modern scientific journal, virtually all of the work is done by academic researchers. We write the papers, we edit the papers, we typeset the papers, and we review the papers,” he told The Register.

Source: AI boffins rebel against closed-access academic journal that wants to have its cake and eat it • The Register

Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

The gambling industry is increasingly using artificial intelligence to predict consumer habits and personalise promotions to keep gamblers hooked, industry insiders have revealed.

Current and former gambling industry employees have described how people’s betting habits are scrutinised and modelled to manipulate their future behaviour.

“The industry is using AI to profile customers and predict their behaviour in frightening new ways,” said Asif, a digital marketer who previously worked for a gambling company. “Every click is scrutinised in order to optimise profit, not to enhance a user’s experience.”

“I’ve often heard people wonder about how they are targeted so accurately, and it’s no wonder, because it’s all hidden in the small print.”

Publicly, gambling executives boast of increasingly sophisticated advertising keeping people betting, while privately conceding that some people are more susceptible to gambling addiction when bombarded with these types of bespoke ads and incentives.

Gamblers’ every click, page view and transaction is scientifically examined so that ads statistically more likely to work can be pushed through Google, Facebook and other platforms.


Last August, the Guardian revealed the gambling industry uses third-party companies to harvest people’s data, helping bookmakers and online casinos target people on low incomes and those who have stopped gambling.

Despite condemnation from MPs, experts and campaigners, such practices remain an industry norm.

“You can buy email lists with more than 100,000 people’s emails and phone numbers from data warehouses who regularly sell data to help market gambling promotions,” said Brian. “They say it’s all opted in but people haven’t opted in at all.”

In this way, among others, gambling companies and advertisers create detailed customer profiles including masses of information about their interests, earnings, personal details and credit history.


Elsewhere, there are plans to geolocate customers in order to identify when they arrive at stadiums, so they can be prompted via text to bet on the game they are about to watch.

The gambling industry earned £14bn in 2016, £4.5bn of which came from online betting, and it is pumping some of that money into making its products more sophisticated and, in effect, more addictive.

Source: Revealed: how bookies use AI to keep gamblers hooked | Technology | The Guardian

USB drive that crashes Windows

PoC for an NTFS crash that I discovered in various Windows versions

Type of issue: denial of service. One can generate a blue screen of death using a handcrafted NTFS image. This denial-of-service attack can be driven from user mode, by a limited user account or an Administrator. It can even crash the system if it is in a locked state.

Reported to Microsoft in July 2017; they did not want to assign a CVE for it, nor even to tell me when they fixed it.

Affected systems

  1. Windows 7 Enterprise 6.1.7601 SP1, Build 7601 x64
  2. Windows 10 Pro 10.0.15063, Build 15063 x64
  3. Windows 10 Enterprise Evaluation Insider Preview 10.0.16215, Build 16215 x64

Note: these are the only systems I have tested.

Does not seem to reproduce on my current build, 10.0.16299 Build 16299 x64 (I haven’t had time to confirm whether it’s really fixed).

last email response 🙂

Hey Marius, Your report requires either physical access or social engineering, and as such, does not meet the bar for servicing down-level (issuing a security patch). […]

Your attempt to responsibly disclose a potential security issue is appreciated and we hope you continue to do so.


Life-saving gravity-powered light

The second generation of a deciwatt gravity-powered lamp designed by the British industrial designers behind the Psion computer keyboard was launched today.

Few innovations we cover can claim to save lives, but this just might be one of them. The $5 Gravity Light, designed by London’s Therefore Inc, offers the world’s poorest a clean alternative to burning kerosene or biomass for lighting or radios.

The clever bit is a winch that unwinds incredibly slowly, but steadily enough to provide a low but usable voltage. The lamp was first featured here in 2012.

The second generation adds solar power and a rechargeable battery. The latter may be surprising – co-designer Jim Reeves said short-lived and costly rechargeable batteries were far from ideal. But things change, and the ability to store the energy is useful.

Source: Grab your lamp, you’ve pulled: Brits punt life-saving gravity-powered light

Europe divided over robot ‘personhood’

While autonomous robots with humanlike, all-encompassing capabilities are still decades away, European lawmakers, legal experts and manufacturers are already locked in a high-stakes debate about their legal status: whether it’s these machines or human beings who should bear ultimate responsibility for their actions.

The battle goes back to a paragraph of text, buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted “electronic personalities.” Such a status could allow robots to be insured individually and be held liable for damages if they go rogue and start hurting people or damaging property.

Those pushing for such a legal change, including some manufacturers and their affiliates, say the proposal is common sense. Legal personhood would not make robots virtual people who can get married and benefit from human rights, they say; it would merely put them on par with corporations, which already have status as “legal persons,” and are treated as such by courts around the world.

Source: Europe divided over robot ‘personhood’ – POLITICO

Tried checking under the sofa? Indian BTC exchange Coinsecure finds itself $3.5m lighter

Indian Bitcoin exchange Coinsecure has mislaid 438.318 BTC belonging to its customers.

In a statement by parent firm Secure Bitcoin Traders Pvt, posted late on Thursday, the biz said its chief security officer had extracted a bunch of Bitcoin to distribute to punters – and discovered the funds were “lost in the process.”

The vanished Bitcoin stash was worth £2,493,590 ($3,547,745) at the time of publication, and apparently departed Coinsecure’s secure coin servers on April 9.

Earlier this week, folks began to smell a rat as the site went down for an unexpected nap that day:

Things proceeded to become more alarming for worried customers as Coinsecure stopped accepting deposits due to “backend updates.”

We’re told chief security officer Dr Amitabh Saxena and chief exec Mohit Kalra should have been the only ones with access to the wallet’s private keys. Here’s a crime report the biz filled out and submitted to Indian authorities:

Coinsecure FIR

With Bitcoin values tumbling after historic highs, it seems the quickest way to lose your cryptocurrency is to, er, deposit it somewhere.

Source: Tried checking under the sofa? Indian BTC exchange Coinsecure finds itself $3.5m lighter • The Register

Google uses AI to separate out audio from a single person in a noisy video

People are remarkably good at focusing their attention on a particular person in a noisy environment, mentally “muting” all other voices and sounds. Known as the cocktail party effect, this capability comes naturally to us humans. However, automatic speech separation — separating an audio signal into its individual speech sources — while a well-studied problem, remains a significant challenge for computers. In “Looking to Listen at the Cocktail Party”, we present a deep learning audio-visual model for isolating a single speech signal from a mixture of sounds such as other voices and background noise. In this work, we are able to computationally produce videos in which speech of specific people is enhanced while all other sounds are suppressed. Our method works on ordinary videos with a single audio track, and all that is required from the user is to select the face of the person in the video they want to hear, or to have such a person be selected algorithmically based on context. We believe this capability can have a wide range of applications, from speech enhancement and recognition in videos, through video conferencing, to improved hearing aids, especially in situations where there are multiple people speaking.

A unique aspect of our technique is in combining both the auditory and visual signals of an input video to separate the speech. Intuitively, movements of a person’s mouth, for example, should correlate with the sounds produced as that person is speaking, which in turn can help identify which parts of the audio correspond to that person. The visual signal not only improves the speech separation quality significantly in cases of mixed speech (compared to speech separation using audio alone, as we demonstrate in our paper), but, importantly, it also associates the separated, clean speech tracks with the visible speakers in the video.

The input to our method is a video with one or more people speaking, where the speech of interest is interfered by other speakers and/or background noise. The output is a decomposition of the input audio track into clean speech tracks, one for each person detected in the video.

An Audio-Visual Speech Separation Model

To generate training examples, we started by gathering a large collection of 100,000 high-quality videos of lectures and talks from YouTube. From these videos, we extracted segments with clean speech (e.g. no mixed music, audience sounds or other speakers) and with a single speaker visible in the video frames. This resulted in roughly 2000 hours of video clips, each of a single person visible to the camera and talking with no background interference. We then used this clean data to generate “synthetic cocktail parties” — mixtures of face videos and their corresponding speech from separate video sources, along with non-speech background noise we obtained from AudioSet. Using this data, we were able to train a multi-stream convolutional neural network-based model to split the synthetic cocktail mixture into separate audio streams for each speaker in the video. The inputs to the network are visual features extracted from the face thumbnails of detected speakers in each frame, and a spectrogram representation of the video’s soundtrack. During training, the network learns (separate) encodings for the visual and auditory signals, then it fuses them together to form a joint audio-visual representation. With that joint representation, the network learns to output a time-frequency mask for each speaker. The output masks are multiplied by the noisy input spectrogram and converted back to a time-domain waveform to obtain an isolated, clean speech signal for each speaker. For full details, see our paper.
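The masking step at the end of that description is simple to illustrate. The toy Python below applies made-up time-frequency masks to a random "spectrogram"; the shapes and values are invented, and the real model of course predicts the masks from audio-visual features rather than drawing them at random.

```python
import numpy as np

rng = np.random.default_rng(0)
freq_bins, time_frames = 257, 100

mixture = rng.random((freq_bins, time_frames))   # |STFT| of the noisy mix
mask_a = rng.random((freq_bins, time_frames))    # predicted mask, speaker A
mask_b = 1.0 - mask_a                            # complementary mask, speaker B

speech_a = mask_a * mixture   # isolated magnitude spectrogram for speaker A
speech_b = mask_b * mixture

# When the masks partition the mixture, the separated spectrograms
# sum back to the original.
assert np.allclose(speech_a + speech_b, mixture)
```

In the full pipeline each masked spectrogram is then inverted back to a time-domain waveform (e.g. via an inverse STFT) to produce the playable, isolated speech track.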

Our multi-stream, neural network-based model architecture.

Here are some more speech separation and enhancement results by our method, playing first the input video with mixed or noisy speech, then our results. Sound by others than the selected speakers can be entirely suppressed or suppressed to the desired level.

Application to Speech Recognition

Our method can also potentially be used as a pre-process for speech recognition and automatic video captioning. Handling overlapping speakers is a known challenge for automatic captioning systems, and separating the audio to the different sources could help in presenting more accurate and easy-to-read captions.

You can similarly see and compare the captions before and after speech separation in all the other videos in this post and on our website, by turning on closed captions in the YouTube player when playing the videos (“cc” button at the lower right corner of the player). On our project web page you can find more results, as well as comparisons with state-of-the-art audio-only speech separation and with other recent audio-visual speech separation work. Indeed, with recent advances in deep learning, there is a clear growing interest in the academic community in audio-visual analysis. For example, independently and concurrently to our work, this work from UC Berkeley explored a self-supervised approach for separating speech of on/off-screen speakers, and this work from MIT addressed the problem of separating the sound of multiple on-screen objects (e.g., musical instruments), while locating the image regions from which the sound originates. We envision a wide range of applications for this technology. We are currently exploring opportunities for incorporating it into various Google products. Stay tuned!

Source: Research Blog: Looking to Listen: Audio-Visual Speech Separation

Watch artificial intelligence create a 3D model of a person—from just a few seconds of video

Transporting yourself into a video game, body and all, just got easier. Artificial intelligence has been used to create 3D models of people’s bodies for virtual reality avatars, surveillance, visualizing fashion, or movies. But it typically requires special camera equipment to detect depth or to view someone from multiple angles. A new algorithm creates 3D models using standard video footage from one angle.

The system has three stages. First, it analyzes a video a few seconds long of someone moving—preferably turning 360° to show all sides—and for each frame creates a silhouette separating the person from the background. Based on machine learning techniques—in which computers learn a task from many examples—it roughly estimates the 3D body shape and location of joints. In the second stage, it “unposes” the virtual human created from each frame, making them all stand with arms out in a T shape, and combines information about the T-posed people into one, more accurate model. Finally, in the third stage, it applies color and texture to the model based on recorded hair, clothing, and skin.
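The payoff of the "unpose then fuse" stage is essentially noise averaging across frames. The synthetic Python sketch below (invented numbers, not the authors' code or data) shows why fusing many per-frame estimates beats any single frame:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend body shape as a vector of measurements (e.g. height,
# shoulder width, waist, in metres); each frame yields a noisy
# estimate of it once "unposed" into the canonical T-pose.
true_shape = np.array([1.70, 0.45, 0.30])

frames = [true_shape + rng.normal(0, 0.02, size=3) for _ in range(60)]
fused = np.mean(frames, axis=0)   # fuse the unposed per-frame estimates

per_frame_err = np.mean([np.abs(f - true_shape).mean() for f in frames])
fused_err = np.abs(fused - true_shape).mean()
print(fused_err < per_frame_err)  # averaging across frames reduces error
```

With 60 frames, the fused estimate's error shrinks roughly with the square root of the frame count, which is in the spirit of the millimetre-level accuracy the researchers report, though their real system fuses full meshes, not three numbers.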

The researchers tested the method with a variety of body shapes, clothing, and backgrounds and found that it had an average accuracy within 5 millimeters, they will report in June at the Computer Vision and Pattern Recognition conference in Salt Lake City. The system can also reproduce the folding and wrinkles of fabric, but it struggles with skirts and long hair. With a model of you, the researchers can change your weight, clothing, and pose—and even make you perform a perfect pirouette. No practice necessary.

Source: Watch artificial intelligence create a 3D model of a person—from just a few seconds of video | Science | AAAS

Whois is dead as Europe hands DNS overlord ICANN its arse :(

The Whois public database of domain name registration details is dead.

In a letter [PDF] sent this week to DNS overseer ICANN, Europe’s data protection authorities have effectively killed off the current service, noting that it breaks the law and so will be illegal come 25 May, when GDPR comes into force.

The letter also has harsh words for ICANN’s proposed interim solution, criticizing its vagueness and noting it needs to include explicit wording about what can be done with registrant data, as well as introduce auditing and compliance functions to make sure the data isn’t being abused.

ICANN now has a little over a month to come up with a replacement for the decades-old service, which covers millions of domain names and lists the personal contact details of domain registrants, including their name, email and telephone number.

ICANN has already acknowledged it has no chance of doing so: a blog post by the company in response to the letter warns that without being granted a special temporary exemption from the law, the system will fracture.

“Unless there is a moratorium, we may no longer be able to give instructions to the contracted parties through our agreements to maintain Whois,” it warns. “Without resolution of these issues, the Whois system will become fragmented.”

We spoke with the president of ICANN’s Global Domains Division, Akram Atallah, and he told us that while there was “general agreement that having every thing public is not the right way to go”, he was hopeful that the letter would not result in the Whois service being turned off completely while a replacement was developed.

Source: Whois is dead as Europe hands DNS overlord ICANN its arse • The Register

It’s an important and useful tool – hopefully they will resolve this one way or another.

Orkut Hello: The Man Behind Orkut Says His ‘Hello’ Platform Doesn’t Sell User Data

In 2004, one of the world’s most popular social networks, Orkut, was founded by a former Google employee named Orkut Büyükkökten. Later that year, a Harvard University student named Mark Zuckerberg launched ‘the Facebook’, which over the course of a year became ubiquitous in Ivy League universities and was eventually renamed simply ‘Facebook’.

Orkut was shut down by Google in 2014, but in its heyday the network had hit 300 million users around the world. Facebook took five years to achieve that feat. At a time when the #DeleteFacebook movement is gaining traction worldwide in light of the Cambridge Analytica scandal, Orkut has made a comeback.

“Hello is a spiritual successor of Orkut,” Büyükkökten told BloombergQuint. “The most important thing about Orkut was communities, because they brought people together around topics and things that interested them and provided a safe place for people to exchange ideas and share genuine passions and feelings. We have built the entire ‘Hello’ experience around communities and passions and see it as Orkut 2.0.”

Orkut has decided to make a comeback at a time when Mark Zuckerberg, founder and CEO of Facebook, is being questioned by U.S. congressmen and senators about the company’s policies and its data collection and usage practices. That came after the Cambridge Analytica data leak, which impacted nearly 87 million users, including Zuckerberg himself.

“People have lost trust in social networks and the main reason is social media services today don’t put the users first. They put advertisers, brands, third parties, shareholders before the users,” Büyükkökten said. “They are also not transparent about practices. The privacy policy and terms of services are more like black boxes. How many users actually read them?”

Büyükkökten said users need to be educated about these things and user consent is imperative in such situations when data is shared by such platforms. “On Hello, we do not share data with third parties. We have our own registration and login and so the data doesn’t follow you anywhere,” he said. “You don’t need to sell user data in order to be profitable or make money.”

Source: Orkut Hello: The Man Behind Orkut Says His ‘Hello’ Platform Doesn’t Sell User Data – Bloomberg Quint

I am very curious what his business model is then

Do you have a browser based bitcoin wallet? Check you’re not hacked if it’s JavaScript based

A significant number of past and current cryptocurrency products contain a JavaScript class named SecureRandom(), containing both entropy collection and a PRNG. The entropy collection and the RNG itself are both deficient to the degree that key material can be recovered by a third party with medium complexity. There are a substantial number of variations of this SecureRandom() class in various pieces of software, some with bugs fixed, some with additional bugs added. Products that aren't today vulnerable due to moving to other libraries may be using old keys that have been previously compromised by usage of SecureRandom().

Source: [bitcoin-dev] KETAMINE: Multiple vulnerabilities in SecureRandom(), numerous cryptocurrency products affected.

Cops Around the Country Can Now Unlock iPhones, Records Show

Police forces and federal agencies around the country have bought relatively cheap tools to unlock up-to-date iPhones and bypass their encryption, according to a Motherboard investigation based on several caches of internal agency documents, online records, and conversations with law enforcement officials. Many of the documents were obtained by Motherboard using public records requests.


The news highlights the “going dark” debate, in which law enforcement officials say encryption means they cannot access evidence against criminals. But such easy access to iPhone hacking tools also hamstrings the FBI’s argument for introducing backdoors into consumer devices so authorities can more readily access their contents.

“It demonstrates that even state and local police do have access to this data in many situations,” Matthew Green, an assistant professor and cryptographer at the Johns Hopkins Information Security Institute, told Motherboard in a Twitter message. “This seems to contradict what the FBI is saying about their inability to access these phones.”

The GrayKey itself is a small, 4×4-inch box with two Lightning cables for connecting iPhones, according to photographs published by cybersecurity firm Malwarebytes. The device comes in two versions: a $15,000 model that requires online connectivity and allows 300 unlocks (or $50 per phone), and an offline, $30,000 version that can crack as many iPhones as the customer wants. Marketing material seen by Forbes says GrayKey can unlock devices running versions of Apple’s latest mobile operating system, iOS 11, including the iPhone X, Apple’s most recent phone.

The issue GrayKey overcomes is that iPhones encrypt user data by default. Someone in physical possession of an iPhone normally cannot access the phone’s data, such as the contact list, saved messages, or photos, without first unlocking the phone with a passcode or fingerprint. Malwarebytes’ post says GrayKey can unlock an iPhone in around two hours, or in three days or longer for six-digit passcodes.

Source: Cops Around the Country Can Now Unlock iPhones, Records Show – Motherboard
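Those figures are easy to sanity-check with back-of-the-envelope arithmetic. The guess rate below is my assumption, inferred from the quoted times rather than stated anywhere in the article:

```python
def worst_case_hours(digits: int, guesses_per_second: float) -> float:
    """Hours needed to exhaust every numeric passcode of the given length."""
    keyspace = 10 ** digits          # e.g. 10,000 four-digit codes
    return keyspace / guesses_per_second / 3600

# "Around two hours" for a 4-digit code implies roughly 1.4 guesses/second
# once Apple's retry throttling is bypassed (assumed rate, not from the article).
rate = 1.4
print(f"4 digits: {worst_case_hours(4, rate):.1f} hours")      # ~2 hours
print(f"6 digits: {worst_case_hours(6, rate) / 24:.1f} days")  # several days
```

The hundredfold jump from a 10,000-code to a 1,000,000-code keyspace is why the same hardware needs days rather than hours for six-digit passcodes.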

India completes its GPS alternative, for the second time

India has successfully conducted the satellite launch needed to re-construct its Indian Regional Navigation Satellite System (IRNSS).

The Indian Space Research Organisation’s Polar Satellite Launch Vehicle PSLV-C41 ascended on Thursday, April 12th. Atop the craft was a satellite designated IRNSS-1L, the last of seven satellites in India’s constellation of navigational craft.

India understands that satellite navigation services have become an assumed resource for all manner of applications, but that relying on another nation’s network is fraught with danger in the event of war or other disputes. Like Russia, China and the European Union, India has therefore decided it needs a satnav system of its own.


India’s already completed the network once before: in April 2016 we covered the launch of IRNSS-1G, which at the time was the seventh satellite in the constellation. But just three months later, the first satellite in the fleet broke: IRNSS-1A’s atomic clocks clocked off, leaving India with insufficient satellites to deliver its hoped-for 10-metre accuracy over land.

A replacement satellite, IRNSS-1H, failed to reach its desired orbit in August 2017.

Much rejoicing has therefore followed IRNSS-1L’s success, including the following prime-ministerial Tweet.

India’s said IRNSS has only regional ambitions: its seven satellites cover India and about 1,500km beyond the nation’s borders. But that’s enough distance to help India launch missiles, like its 5,000-km-range Agni-5, deep into Pakistan, China or Russia. Don’t forget: India is a nuclear power! The nation’s suggested it might add some more sats to the service, which would likely extend its range and enhance its accuracy.

Component-makers have already started making receivers capable of linking to IRNSS satellites and other similar services, so there’s a decent chance your smartphone will be able to talk to India’s satellites should you visit the region.

Source: India completes its GPS alternative, for the second time • The Register

This AI Can Automatically Animate New Flintstones Cartoons

Researchers have successfully trained artificial intelligence to generate new clips of the prehistoric animated series based on nothing but random text descriptions of what’s happening in a scene.

A team of researchers from the Allen Institute for Artificial Intelligence and the University of Illinois Urbana-Champaign trained an AI by feeding it over 25,000 three-second clips of the cartoon, which hasn’t seen any new episodes in over 50 years. Many recent AI experiments have involved generating freaky images based on what was learned, but this time the researchers included detailed descriptions and annotations of what appeared, and what was happening, in every clip the AI ingested.

As a result, the new Flintstones animations generated by the Allen Institute’s AI aren’t just random collages of chopped up cartoons. Instead, the researchers are able to feed the AI a very specific description of a scene, and it outputs a short clip featuring the characters, props, and locations specified—most of the time.

The quality of the animations that are generated is awful at best; no one’s going to be fooled into thinking these are the Hanna-Barbera originals. But seeing an AI generate a cartoon, featuring iconic characters, all by itself, is a fascinating sneak peek at how some films and TV shows might be made one day.

Source: This AI Can Automatically Animate New Flintstones Cartoons

Properly random random number generator generated

From dice to modern electronic circuits, there have been many attempts to build better devices to generate random numbers. Randomness is fundamental to security and cryptographic systems and to safeguarding privacy. A key challenge with random-number generators is that it is hard to ensure that their outputs are unpredictable. For a random-number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model that describes the underlying physics is necessary to assert unpredictability. Imperfections in the model compromise the integrity of the device. However, it is possible to exploit the phenomenon of quantum non-locality with a loophole-free Bell test to build a random-number generator that can produce output that is unpredictable to any adversary that is limited only by general physical principles, such as special relativity. With recent technological developments, it is now possible to carry out such a loophole-free Bell test. Here we present certified randomness obtained from a photonic Bell experiment and extract 1,024 random bits that are uniformly distributed to within 10⁻¹². These random bits could not have been predicted according to any physical theory that prohibits faster-than-light (superluminal) signalling and that allows independent measurement choices. To certify and quantify the randomness, we describe a protocol that is optimized for devices that are characterized by a low per-trial violation of Bell inequalities. Future random-number generators based on loophole-free Bell tests may have a role in increasing the security and trust of our cryptographic systems and infrastructure.

Source: Experimentally generated randomness certified by the impossibility of superluminal signals | Nature
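The paper's certification protocol is far beyond a blog sketch, but the underlying idea of distilling imperfect randomness into uniform bits has simple classical analogues. Von Neumann's debiasing trick, shown here purely as an illustration (it is not the protocol from the paper), converts a biased but independent bit stream into unbiased output:

```python
import random

def von_neumann_extract(bits):
    """Pairs of input bits: 01 -> 0, 10 -> 1, 00/11 -> discarded."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A heavily biased source (70% ones) still yields unbiased output,
# because P(01) == P(10) for independent draws.
random.seed(1)
biased = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
extracted = von_neumann_extract(biased)
ones = sum(extracted) / len(extracted)
print(f"output bias: {ones:.3f}")
```

The Bell-test approach goes much further: it certifies unpredictability against any adversary bound by no-superluminal-signalling, rather than relying on an assumed model of the source.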

Data exfiltrators send info over PCs’ power supply cables

If you want your computer to be really secure, disconnect its power cable.

So says Mordechai Guri and his team of side-channel sleuths at the Ben-Gurion University of the Negev.

The crew have penned a paper titled PowerHammer: Exfiltrating Data from Air-Gapped Computers through Power Lines that explains how attackers could install malware that regulates CPU utilisation and creates fluctuations in the current flow that could modulate and encode data. The variations would be “propagated through the power lines” to the outside world.

PowerHammer attack

Put the receiver near the user for highest speed, behind the panel for greatest secrecy

Depending on the attacker’s approach, data could be exfiltrated at between 10 and 1,000 bits-per-second. The higher speed would work if attackers can get at the cable connected to the computer’s power supply. The slower speed works if attackers can only access a building’s electrical services panel.

The PowerHammer malware spikes the CPU utilisation by choosing cores that aren’t currently in use by user operations (to make it less noticeable).

Guri and his pals use frequency shift keying to encode data onto the line.

After that, it’s pretty simple, because all the attacker needs to do is decide where to put the receiver’s current clamp: near the target machine if you can get away with it, behind the switchboard if you have to.

Source: Data exfiltrators send info over PCs’ power supply cables • The Register
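The frequency-shift-keying step can be sketched in a few lines. The frequencies, sample rate, and zero-crossing detector below are toy parameters of my own choosing, standing in for the paper's CPU-load modulation and current-clamp receiver:

```python
import math

F0, F1 = 5, 10        # Hz for bit 0 / bit 1 (hypothetical)
RATE = 1000           # samples per second
BIT_TIME = 1.0        # seconds per bit -> 1 bit/s in this toy

def modulate(bits):
    """Each bit becomes a burst of simulated 'CPU load' at one of two frequencies."""
    samples = []
    for bit in bits:
        f = F1 if bit else F0
        for n in range(int(RATE * BIT_TIME)):
            samples.append(math.sin(2 * math.pi * f * n / RATE))
    return samples

def demodulate(samples):
    """Classify each burst by counting zero crossings (~2 per cycle)."""
    n_per_bit = int(RATE * BIT_TIME)
    bits = []
    for i in range(0, len(samples), n_per_bit):
        chunk = samples[i:i + n_per_bit]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a * b < 0)
        bits.append(1 if crossings > (F0 + F1) * BIT_TIME else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
assert demodulate(modulate(message)) == message
```

In the real attack the "samples" are current fluctuations induced by spiking idle CPU cores, and the receiver does proper spectral analysis on the clamp's output rather than counting crossings.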

FDA approves AI-powered software to detect diabetic retinopathy

According to a 2015 CDC study, 30.3 million Americans have diabetes. An additional 84.1 million have prediabetes, which often leads to the full disease within five years. It’s important to detect diabetes early to avoid health complications like heart disease, stroke, amputation of extremities and vision loss. Technology increasingly plays an important role in early detection, too. In that vein, the US Food and Drug Administration (FDA) has just approved an AI-powered device that can be used by non-specialists to detect diabetic retinopathy in adults with diabetes.

Diabetic retinopathy occurs when high levels of sugar in the bloodstream damage the retina’s blood vessels. It’s the most common cause of vision loss among people with diabetes, according to the FDA. The approval covers a device called IDx-DR, a software program that uses an AI algorithm to analyze images of the eye that can be taken in a regular doctor’s office with a special camera, the Topcon NW400.

The photos are then uploaded to a server that runs IDx-DR, which can tell the doctor whether there is a more than mild level of diabetic retinopathy present. If not, it will advise a re-screen in 12 months. The device and software can be used by health care providers who don’t normally provide eye care services. The FDA warns that the device shouldn’t be used to screen people who have had laser treatment, eye surgery or injections, or those with certain other conditions, like persistent vision loss, blurred vision, floaters and previously diagnosed macular edema.

Source: FDA approves AI-powered software to detect diabetic retinopathy
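The screening decision described above reduces to a threshold on the detected severity grade. This is a hypothetical sketch (the real IDx-DR pipeline and its thresholds are proprietary; the grade names follow the common clinical scale):

```python
from enum import IntEnum

class DRGrade(IntEnum):
    """Common clinical diabetic-retinopathy severity scale."""
    NONE = 0
    MILD = 1
    MODERATE = 2
    SEVERE = 3
    PROLIFERATIVE = 4

def screening_result(grade: DRGrade) -> str:
    """'More than mild' retinopathy -> refer to an eye-care professional;
    otherwise advise re-screening in 12 months."""
    if grade > DRGrade.MILD:
        return "refer to eye care professional"
    return "rescreen in 12 months"

assert screening_result(DRGrade.NONE) == "rescreen in 12 months"
assert screening_result(DRGrade.MODERATE) == "refer to eye care professional"
```

The point of the binary output is that a non-specialist never has to interpret the retinal images themselves, which is what makes the device usable outside eye clinics.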

After Millions of Trials, These Simulated Humans Learned to Do Perfect Backflips and Cartwheels

Using well-established machine learning techniques, researchers from the University of California, Berkeley have taught simulated humanoids to perform over 25 natural motions, from somersaults and cartwheels through to high leg kicks and breakdancing. The technique could lead to more realistic video gameplay and more agile robots.


UC Berkeley graduate student Xue Bin “Jason” Peng and his colleagues have combined two techniques—motion-capture technology and deep reinforcement learning—to create something completely new: a system that teaches simulated humanoids how to perform complex physical tasks in a highly realistic manner. Learning from scratch, and with limited human intervention, the digital characters learned how to kick, jump, and flip their way to success. What’s more, they even learned how to interact with objects in their environment, such as barriers placed in their way or objects hurled directly at them.


The new system, dubbed DeepMimic, works a bit differently. Instead of pushing the simulated character towards a specific end goal, such as walking, DeepMimic uses motion-capture clips to “show” the AI what the end goal is supposed to look like. In experiments, Peng’s team took motion-capture data from more than 25 different physical skills, from running and throwing to jumping and backflips, to “define the desired style and appearance” of the skill, as Peng explained on the Berkeley Artificial Intelligence Research (BAIR) blog.

Results didn’t happen overnight. The virtual characters tripped, stumbled, and fell flat on their faces repeatedly until they finally got the movements right. It took about a month of simulated “practice” for each skill to develop, as the humanoids went through literally millions of trials trying to nail the perfect backflip or flying leg kick. But with each failure came an adjustment that took it closer to the desired goal.

Bots trained across a wide variety of skills.
GIF: Berkeley Artificial Intelligence Research

Using this technique, the researchers were able to produce agents who behaved in a highly realistic, natural manner. Impressively, the bots were also able to manage never-before-seen conditions, such as challenging terrain or obstacles. This was an added bonus of the reinforcement learning, and not something the researchers had to work on specifically.

“We present a conceptually simple [reinforcement learning] framework that enables simulated characters to learn highly dynamic and acrobatic skills from reference motion clips, which can be provided in the form of mocap data [i.e. motion capture] recorded from human subjects,” writes Peng. “Given a single demonstration of a skill, such as a spin-kick or a backflip, our character is able to learn a robust policy to imitate the skill in simulation. Our policies produce motions that are nearly indistinguishable from mocap.” He adds: “We’re moving toward a virtual stuntman.”

Simulated dragon.
GIF: Berkeley Artificial Intelligence Research

Not to be outdone, the researchers used DeepMimic to create realistic movements from simulated lions, dinosaurs, and mythical beasts. They even created a virtual version of ATLAS, the humanoid robot voted most likely to destroy humanity. This platform could conceivably be used to produce more realistic computer animation, but also for virtual testing of robots.

Source: After Millions of Trials, These Simulated Humans Learned to Do Perfect Backflips and Cartwheels
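At the heart of this approach is an imitation reward that scores how closely the simulated character tracks the mocap reference frame at each timestep. The single exponentiated pose-error term below is a simplified sketch of my own; the published DeepMimic reward combines several weighted terms (pose, velocity, end-effector, centre of mass):

```python
import math

def pose_imitation_reward(joint_angles, reference_angles, scale=2.0):
    """Reward approaches 1 as the character's pose matches the mocap frame,
    and decays exponentially with the squared joint-angle error."""
    err = sum((a - r) ** 2 for a, r in zip(joint_angles, reference_angles))
    return math.exp(-scale * err)

perfect = pose_imitation_reward([0.1, 0.5, -0.2], [0.1, 0.5, -0.2])
sloppy = pose_imitation_reward([0.6, 0.0, 0.4], [0.1, 0.5, -0.2])
assert perfect == 1.0 and sloppy < perfect
```

Because the reward is dense (every frame gives a graded score rather than pass/fail), the millions of failed trials each nudge the policy a little closer to the reference motion.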

Facebook admits: Apps were given users’ permission to go into their inboxes

Facebook has admitted that some apps had access to users’ private messages, thanks to a policy that allowed devs to request mailbox permissions.

The revelation came as current Facebook users found out whether they or their friends had used the “This Is Your Digital Life” app that allowed academic Aleksandr Kogan to collect data on users and their friends.

Users whose friends had been suckered in by the quiz were told that as a result, their public profile, Page likes, birthday and current city were “likely shared” with the app.

So far, so expected. But, the notification went on:

A small number of people who logged into “This Is Your Digital Life” also shared their own News Feed, timeline, posts and messages which may have included posts and messages from you. They may also have shared your hometown.

That’s because, back in 2014 when the app was in use, developers using Facebook’s Graph API to get data off the platform could ask for read_mailbox permission, allowing them access to a person’s inbox.

That was just one of a series of extended permissions granted to devs under v1.0 of the Graph API, which was first introduced in 2010.

Following pressure from privacy activists – but much to the disappointment of developers – Facebook shut that tap off for most permissions in April 2015, although the changelog shows that read_mailbox wasn’t deprecated until 6 October 2015.

Facebook confirmed to The Register that this access had been requested by the app and that a small number of people had granted it permission.

“In 2014, Facebook’s platform policy allowed developers to request mailbox permissions but only if the person explicitly gave consent for this to happen,” a spokesborg told us.

“According to our records only a very small number of people explicitly opted into sharing this information. The feature was turned off in 2015.”

Source: Facebook admits: Apps were given users’ permission to go into their inboxes • The Register
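For context, extended permissions under Graph API v1.0 were requested as OAuth scopes in the login dialog URL, with read_mailbox being just one more scope string. A sketch of what such a request looked like (long deprecated; this URL no longer grants anything):

```python
from urllib.parse import urlencode

def login_dialog_url(app_id, redirect_uri, scopes):
    """Build a Facebook OAuth login dialog URL requesting the given scopes."""
    params = {
        "client_id": app_id,
        "redirect_uri": redirect_uri,
        "scope": ",".join(scopes),
    }
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

url = login_dialog_url("123456", "https://example.com/cb",
                       ["public_profile", "read_mailbox"])
assert "read_mailbox" in url
```

The consent step Facebook refers to was the dialog this URL produced: the user saw the requested permissions listed and had to approve them before the app received a token with mailbox access.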

How to Check if Cambridge Analytica Had Your Facebook Data

Facebook launched a tool yesterday that you can use to find out whether you or your friends shared information with Cambridge Analytica, the Trump-affiliated company that harvested data from a Facebook app to support the then-candidate’s efforts in the 2016 presidential election.

If you were affected directly—and you have plenty of company, if so—you should have already received a little notification from Facebook. If you missed that in your News Feed (or you’ve already sworn off Facebook, but want to check and see if your information was compromised), Facebook also has a handy little Cambridge Analytica tool you can use.

The problem? While the tool can tell you if you or your friends shared your information via the spammy “This is Your Digital Life” app, it won’t tell you who among your friends was foolish enough to give up your information to a third party. You have lost your ability to publicly shame them, yell at them, or go over to where they live (or fire up a remote desktop session) to teach them how to … not do that ever again.

So, what can you do now?

Even though your past Facebook data might already be out there in the digital ether somewhere, you can now start locking down your information a bit more. Once you’re done checking the Cambridge Analytica tool, head to Facebook’s Settings page and click on Apps and Websites. Up until recently, Facebook had a setting (under “Apps Others Use”) that you could use to restrict the information your friends could share about you with apps they were using. Now, you’ll see this message instead:

“These outdated settings have been removed because they applied to an older version of our platform that no longer exists.

To see or change the info you currently share with apps and websites, review the ones listed above, under ‘Logged in with Facebook.’”

Sounds ominous, right? Well, according to Facebook, these settings haven’t really done much of anything for years, anyway. As a Facebook spokesperson recently told Wired:

“These controls were built before we made significant changes to how developers build apps on Facebook. At the time, the Apps Others Use functionality allowed people to control what information could be shared to developers. We changed our systems years ago so that people could not share friends’ information with developers unless each friend also had explicitly granted permission to the developer.”

Instead, take a little time to review (again) the apps you’ve allowed to access your Facebook information. If you’re not using the app anymore, or if it sounds a little fishy, remove it—heck, remove as many apps as you can in one go.

Source: How to Check if Cambridge Analytica Had Your Facebook Data
