
git log --author=... confused me


Today I was looking for recent commits by my coworker Fred Flooney, email address fflooney@example.com, so I did

    git log --author=ffloo

but nothing came up. I couldn't remember if --author would do a substring search, so I tried

    git log --author=fflooney
    git log --author=fflooney@example.com

and still nothing came up. “Okay,” I said, “probably I have Fred's address wrong.” Then I did

    git log --format=%ae | grep ffloo

The --format=%ae means to just print out commit author email addresses, instead of the usual information. This command did produce many commits with the author address fflooney@example.com.

I changed this to

    git log --format='%H %ae' | grep ffloo

which also prints out the full hash of the matching commits. The first one was 542ab72c92c2692d223bfca4470cf2c0f2339441.

Then I had a perplexity. When I did

    git log -1 --format='%H %ae' 542ab72c92c2692d223bfca4470cf2c0f2339441

it told me the author email address was fflooney@example.com. But when I did

    git show 542ab72c92c2692d223bfca4470cf2c0f2339441

the address displayed was fredf@example.com.

The answer is, the repository might have a file in its root named .mailmap that says “If you see this name and address, pretend you saw this other name and address instead.” Some of the commits really had been created with the address I was looking for, fflooney. But the .mailmap said that the canonical version of that address was fredf@. Nearly all Git operations use the canonical address. The git-log --author option searches the canonical address, and git-show and git-log, by default, display the canonical address.

But my --format=%ae overrides the default behavior; %ae explicitly requests the actual address. To display the canonical address, I should have used --format=%aE instead.
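
To see the difference concretely, here is a sketch that reproduces the behavior in a throwaway repository. (The names and addresses are the hypothetical ones from this story; the scratch directory is arbitrary.)

```shell
# Reproduce the .mailmap behavior in a scratch repository.
cd "$(mktemp -d)" && git init -q .

# Commit under the "actual" address.
git -c user.name='Fred Flooney' -c user.email='fflooney@example.com' \
    commit -q --allow-empty -m 'test commit'

# Declare fredf@ the canonical form of fflooney@.
printf 'Fred Flooney <fredf@example.com> <fflooney@example.com>\n' > .mailmap

git log -1 --format='%ae'   # actual address:    fflooney@example.com
git log -1 --format='%aE'   # canonical address: fredf@example.com
```

With the .mailmap in place, git log --author=fflooney likewise comes up empty here, because --author matches against the canonical identity.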

Also, I learned that --author= performs not a substring search but a regex search. I asked it for --author=d* and was puzzled when it produced commits written by people with no d in their names. This is a beginner mistake: d* matches zero or more instances of d, and every name contains zero or more instances of d. (I had thought that the * would behave like a shell glob.)

Also, I learned that --author=d+ matches only authors that contain the literal characters d+. If you want the + to mean “one or more” you need --author=d\+.
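
A quick way to convince yourself of this, again in a throwaway repository with made-up authors:

```shell
# --author= takes a (basic) regular expression, not a glob.
cd "$(mktemp -d)" && git init -q .
git -c user.name='Dave' -c user.email='dave@example.com' \
    commit -q --allow-empty -m one
git -c user.name='Anna' -c user.email='anna@example.com' \
    commit -q --allow-empty -m two

git log --format=%an --author='d*'    # both commits: every author line has zero or more d's
git log --format=%an --author='d\+'   # only Dave: his author line contains a d
git log --format=%an --author='d+'    # nothing: no author line contains the literal text "d+"
```

(The pattern is matched against the whole author header, name and email address together.)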

Thanks to Cees Hek, Gerald Burns, and Val Kalesnik for helping me get to the bottom of this.

The .mailmap thing is documented in git-check-mailmap.

[ Addendum: I could also have used git-log --no-use-mailmap ..., had I known about this beforehand. ]

kbrint (1 day ago): Thought I knew a thing or two about git. Learned something new today.

TSA Admits Liquid Ban Is Security Theater


The TSA is allowing people to bring larger bottles of hand sanitizer with them on airplanes:

Passengers will now be allowed to travel with containers of liquid hand sanitizer up to 12 ounces. However, the agency cautioned that the shift could mean slightly longer waits at checkpoint because the containers may have to be screened separately when going through security.

Won't airplanes blow up as a result? Of course not.

Would they have blown up last week were the restrictions lifted back then? Of course not.

It's always been security theater.

Interesting context:

The TSA can declare this rule change because the limit was always arbitrary, just one of the countless rituals of security theater to which air passengers are subjected every day. Flights are no more dangerous today, with the hand sanitizer, than yesterday, and if the TSA allowed you to bring 12 ounces of shampoo on a flight tomorrow, flights would be no more dangerous then. The limit was bullshit. The ease with which the TSA can toss it aside makes that clear.

All over America, the coronavirus is revealing, or at least reminding us, just how much of contemporary American life is bullshit, with power structures built on punishment and fear as opposed to our best interest. Whenever the government or a corporation benevolently withdraws some punitive threat because of the coronavirus, it's a signal that there was never any good reason for that threat to exist in the first place.

kbrint (4 days ago): yup

3 public comments:

DGA51 (11 days ago, Central Pennsyltucky): Ain't it the truth.

cherjr (12 days ago, 48.840867,2.324885): !

bogorad (12 days ago, Barcelona, Catalonia, Spain): hahaha

Pooling to multiply SARS-CoV-2 testing throughput


Here is an email from Kevin Patrick Mahaffey, and I would like to hear your views on whether this makes sense:

One question I don’t hear being asked: Can we use pooling to repeatedly test the entire labor force at low cost with limited SARS-CoV-2 testing supplies?

Pooling is a technique used elsewhere in pathogen detection where multiple samples (e.g. nasal swabs) are combined (perhaps after the RNA extraction step of RT-qPCR) and run as one assay. A negative result confirms no infection of the entire pool, but a positive result indicates “one or more of the pool is infected.” If this is the case, then each individual in the pool can receive their own test (or, if we’re getting fancy [read: probably too hard to implement in the real world], perform an efficient search of the space using sub-pools).

To me, at least, the key questions seem to be:

– Are current assays sensitive enough to work? Technion researchers report yes in a pool as large as 60.

– Can we align limiting factors in testing cost/velocity with pooled steps? For example, if nasal swabs are the limiting reagent, then pooling doesn't help; however if PCR primers and probes are limiting it's great.

– Can we get a regulatory allowance for this? Perhaps the hardest step.

Example (readers, please check my back-of-the-envelope math): If we assume a base infection rate of 1% in the population, then a pool of 11 samples has a ~10% chance of coming out positive. If you run all positive pools through individual assays, the expected number of tests per person is 0.196, a 5.1x multiple on testing throughput (and a 5.1x reduction in cost). This is a big deal.

If we look at this from the view of whole-population biosurveillance after the outbreak period is over and we have a 0.1% base infection rate, pools of 32 samples have an expected number of tests per person at 0.0628 or a 15.9x multiple on throughput/cost reduction.

Putting prices on this, an initial whole-US screen at 1% rate would require about 64M tests. Afterward, performing periodic biosurveillance to find hot spots requires about 21M tests per whole-population screen. At $10/assay (what some folks working on in-field RT-qPCR tests believe marginal cost could be), this is orders of magnitude less expensive than mitigations that deal with a closed economy for any extended period of time.

I’m neither a policy nor medical expert, so perhaps I’m missing something big here. Is there really $20 on the ground or [something something] efficient market?
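
The back-of-the-envelope numbers above check out. A minimal sketch of the arithmetic, assuming independent infections and a perfect assay (every pool of n gets one test; each member of a positive pool gets an individual follow-up test):

```shell
# Expected tests per person under two-stage pooling.
check_pool() {
  awk -v p="$1" -v n="$2" 'BEGIN {
    pos = 1 - (1 - p) ^ n    # chance a pool of n contains an infection
    tpp = 1 / n + pos        # expected tests per person
    printf "p=%.1f%% n=%d: %.3f tests/person, %.1fx throughput\n",
           p * 100, n, tpp, 1 / tpp
  }'
}
check_pool 0.01  11   # the 1% outbreak scenario
check_pool 0.001 32   # the 0.1% biosurveillance scenario
```

The first call reproduces the 0.196 tests per person and 5.1x multiple; the second gives 0.0628 tests per person and the 15.9x multiple.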

By the way, Iceland is testing many people and trying to build up representative samples.

The post Pooling to multiply SARS-CoV-2 testing throughput appeared first on Marginal REVOLUTION.

kbrint (4 days ago): Bloom filters for your nose.

1 public comment:

freeAgent (5 days ago, Los Angeles, CA): This is another interesting idea. It would have been much more useful earlier on, but you still have to have a significant number of tests to make this strategy feasible. Given that tests were restricted so much early on and backlogs were so long, I don't think it would have worked. Maybe it can work now, but of course now we need even more tests.

SoftBank Owned Patent Troll, Using Monkey Selfie Law Firm, Sues To Block Covid-19 Testing, Using Theranos Patents

jwz
From the and-that's-not-even-all-the-insane-parts dept:

So, let's summarize: The firm that basically created the mess that is WeWork by dumping billions of dollars into the company, also owns a patent troll that bought up the patents from the sham medical testing firm Theranos, and is now using those patents to sue one of the few diagnostics companies that is actually making a Covid-19 test... in the middle of a pandemic. And, demanding the use of those tests be blocked.

Previously, previously, previously, previously, previously.


Capers


I asked the coffee shop lady how she explains capers when a customer asks what they are. She said “tiny pickles”.

kbrint (23 days ago): Got the same question once. I described it as a cross between a peppercorn and an olive.

1 public comment:

brennen (24 days ago, Boulder, CO): Reasonable.

How the NSA hacked international mobile carriers

For many years, the GSMA didn't know it was being spied on.

The London-based GSM Association, with more than 750 mobile network operators from 220 countries among its members, is one of the most important organizations in the world, as industry insiders well know. Its job: to make sure that phones from around the world can talk to one another — and that they can do so securely.

Encryption is key to that work, and because ever-faster code-cracking computers keep increasing eavesdroppers' abilities to decrypt intercepted communications, experts at GSMA and other standards-setting organizations must constantly devise new ways to stay ahead. To do so, they routinely author and share with one another obscure technical documents that contain proprietary information about mobile networks, including details on technical architecture, cell phone roaming, compatibility between networks, and more. They also contain insight into the encryption implementations the companies will deploy in the future.

In short, these files contain the sort of granular information seemingly of interest only to technical experts who make the telecommunications networks function. But another group cares about these documents, too: the spies interested in tapping the world's phone and data networks.

The technical files provide a roadmap to the digital terrain upon which the NSA and others will, in the near future, carry out cyber operations. By understanding how the landscape is changing and what is coming next, especially in terms of encryption and security upgrades, the agency can better prepare to develop and deploy decryption capabilities — ideally ones that are what the NSA calls "NOBUS," for "Nobody but us," in nature. Originally, the idea behind the term was that there were mathematical ways to ensure that only the United States could use certain espionage capabilities. In current usage, though, it often refers to a more general policy goal: When there is tension between the offensive and defensive missions — perhaps because both targets and citizens use the same kinds of encryption — the NSA tries to secure communications against all forms of eavesdropper decryption except those decryption capabilities that are so complex, difficult or inaccessible that only the NSA can use them. To aid the development of secret decryption capabilities and circumvent future security measures, the NSA calls the documents "technology warning mechanism[s]," and spies on groups like the GSMA to get them.

As first reported by The Intercept and revealed via documents leaked by contractor Edward Snowden, the NSA uses a secretive unit, the Target Technology Trends Center, to do this. The unit's logo, a giant telescope superimposed on a globe, and its motto — "Predict, Plan, Prevent" — give a sense of its mission, which is to make sure the agency is not rendered blind by the network operators' security upgrades and advances. The mobile communications experts and analysts in the unit spy on phone companies all over the world to ensure that future collection remains unimpeded.

The Target Technology Trends Center builds and maintains a database of mobile phone operators. As of 2012, the database included around 700 companies, about 70% of the world's total. The group focuses on gathering information that the agency can use to defeat security mechanisms and gain access to cellular calls, messages and data. The NSA maintains a list of around 1,200 email addresses associated with employees at mobile phone operators around the world. Using its signals intelligence methods — almost certainly including passive collection — the NSA makes its own surreptitious copy of some of the information sent to and from these addresses. It uses these intercepted technical documents to anticipate what sorts of encryption its targets will use in the future, and to find vulnerabilities in those systems so that it can eavesdrop as needed.

As a result of these and other efforts, the NSA and its partners can crack a great deal of the encryption that protects cellular communications. In an effort that one NSA file described as "a very high priority," the agency devised mechanisms to break the security on 4G cell phone systems several years before those systems were actually in widespread use by mobile phone customers. Prior to that, the Five Eyes used specialized computers and invested millions of dollars to successfully break the encryption used in 3G cell phone networks. They also broke the most widely used encryption system in the world, called A5/1, which protects billions of phones in the developing world, as first reported by The Washington Post.

But sometimes it's not practical — too time consuming, too arduous — for the Five Eyes to break the cryptography by finding mathematical weaknesses. In these cases, there is another option: impersonating the legitimate recipient by stealing the key that enables decryption. In the case of cell networks, this is the secret information baked into cell phones' SIM cards. A large Dutch firm called Gemalto produces these small cards by the billions, each with its own unique encryption key. The company contracts with more than 450 mobile phone operators all over the world, including all the major American companies, providing them with SIM cards and, at least in theory, improved security.

The mechanism through which Gemalto's system works is called symmetric key encryption. With symmetric key encryption, the two sides agree on a key in advance. In this respect, symmetric key encryption is somewhat akin to a pitcher and catcher arranging signs before a baseball game. Gemalto determines this pre-shared key, puts one copy of the secret key in the SIM card, and sends another copy to the mobile operator. With one reproduction of the key on each end of the communication, it thus becomes possible to encrypt and decrypt communications while leaving the messages secure against passive collection in transit. The eavesdroppers in the middle who lack the key cannot figure out what is being said.

In baseball, a batter who has figured out the key — which pitch corresponds to which sign — can intercept and decrypt the catcher's codes. He or she will know which pitch is coming next and will stand a much better chance of hitting it. The same holds true for symmetric key encryption. If a signals intelligence agency can get a copy of the secret key in symmetric key encryption, it can decrypt communications.
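
The mechanics are easy to sketch on the command line with OpenSSL. This illustrates pre-shared-key encryption in general, not the actual SIM ciphers; the key and message here are made up, and the -pbkdf2 flag assumes OpenSSL 1.1.1 or later.

```shell
# Both sides hold the same pre-shared secret; anyone who steals it can decrypt.
cd "$(mktemp -d)"
key=$(openssl rand -hex 32)

# The sender encrypts with the shared key...
echo 'meet at noon' |
  openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$key" -base64 > msg.enc

# ...and the receiver (or a spy holding a stolen copy of the key) decrypts.
openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$key" -base64 < msg.enc
```

An eavesdropper who sees only msg.enc learns nothing; an eavesdropper who also holds $key can read every message, which is exactly why stealing Gemalto's keys was so valuable.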

Through a sophisticated, multistage hacking effort, other Snowden documents and reporting from The Intercept show that GCHQ gained access to millions of keys that Gemalto produced and shared with some of its wide range of mobile phone company clients, particularly those in developing countries. This gave the agency the code-breaking edge it desired. Many targets might have used the cell networks assuming that encryption secured their communications, but the GCHQ program enabled analysts to sidestep these protections.

One analysis suggests that the SIM card hack might have been most useful in tactical military or counterterrorism environments, perhaps enabling the agency to acquire insight into adversaries' activities and quickly share the information with those who could act on it. In an internal status report, the agency described the result of its key theft triumph in language that was bureaucratic yet hinted at the scale and success of the mission: analysts were "very happy with the data so far and working through the vast quantity of product."

···

From 2011 to 2014, John Napier Tye was the section chief for internet freedom in the State Department's Bureau of Democracy, Human Rights, and Labor. In March 2014, he wrote a speech for his boss to deliver at an upcoming human rights conference. Tye, a Rhodes Scholar and Yale Law graduate, wanted to emphasize the importance of checks and balances in the American system when it came to surveillance. "If U.S. citizens disagree with congressional and executive branch determinations about the proper scope of signals intelligence activities," he wrote, "they have the opportunity to change the policy through our democratic process." It seemed like a boilerplate, feel-good statement, one designed to respond to criticisms of NSA overreach.

But the White House legal office disagreed. The lawyers called Tye and instructed him to change the line. Instead of stating specifically that the practices of the intelligence community were subject to democratic process, they wanted him to make only a broad reference to how American citizens could change laws. He was not to mention Americans' power to change intelligence activities. At the White House's insistence, Tye changed the phrasing.

To most people, this distinction might seem meaningless. But Tye drew an alarming conclusion from the White House's modifications: Some American intelligence activities were beyond the reach of citizens' democratic process. As Tye knew, intelligence agencies' overseas activities against foreign targets are most closely governed by a presidential executive order signed by Ronald Reagan, known as EO 12333, and updated several times since. EO 12333 gives the NSA and other intelligence agencies a much freer hand in their overseas programs than they have on American soil.

Historically, EO 12333's clear foreign versus domestic distinction might have made some sense. In the Reagan years, comparatively fewer pieces of data on Americans ended up overseas. In the internet age, however, the world's digital networks are bound ever more closely together, and the lines between foreign and domestic activities become blurrier. Just as foreign-to-foreign traffic travels through the United States, enabling the NSA to passively collect other countries' data from American hubs and cables, so, too, does the data of Americans travel through other countries.

Major technology companies routinely back up and mirror information in data centers all across the world, both for redundancy and to ensure the fastest possible retrieval of the information when needed. When an American's data resides in a foreign data center or travels along a foreign cable, the privacy protections that restrict the U.S. government from collecting the data diminish, so long as the government is not directly targeting Americans. Intelligence agencies have partially classified guidelines that stipulate how they interpret this condition.

Long before Snowden's leaks and Tye's amended draft, technology companies took some steps to try to protect this data overseas. Google and Yahoo encrypted users' connections to their sites, blocking eavesdroppers who could not break the encryption. This might have posed a problem for the NSA, especially if the agency's encryption-busting tools could not work against the technology companies' systems.

If the NSA wanted the data and could not decrypt it, it had several options. The agency could, under its interpretation of the authority of the Foreign Intelligence Surveillance Act, use the PRISM program to compel American companies to turn over the desired information about their users. But the PRISM program operated on American soil under at least some oversight and constraint. Under the law, the agency's target had to be foreign and fit into one of the broad categories of permissible intelligence collection.

So the Five Eyes instead found a different way to get the information they wanted. As first reported by The Washington Post, they targeted the series of fiber-optic links that Google and Yahoo had built to connect their data centers outside American borders. These private cables gave the companies the capacity to move information quickly and securely — they thought — between different parts of their expansive digital infrastructure. These were the cables and centers that made the cloud possible.

Google protected its data centers very carefully, with round-the-clock guards, heat-sensitive cameras, and biometric authentication of employees. But because the connections were privately owned and only between data centers, Google and other companies had not prioritized encrypting the communications that flowed through the cables. Google had planned to do so, even before the Snowden revelations, but the project had moved slowly. This delay left large amounts of user data unencrypted; getting access to these cables would be almost as good as getting into the heavily protected data centers themselves.

Like a quarterback drawing up a play in the dirt, someone at the NSA hand-diagrammed the plan for targeting the technology companies and copied it onto a slide. The image showed the unencrypted links between data centers that the Five Eyes would target. Near the text highlighting the unencrypted areas, the artist had included a mocking smiley face. Through their telecommunications partners, the NSA and GCHQ gained access to the cables. The agencies then reverse-engineered the targeted companies' internal data formats, a technically complex task. They then built their own custom tools, just as if they worked for the firms, so that they could make sense of their massive new trove of data.

And what a trove it was. As with other passive collection programs, this one yielded too much data for the Five Eyes to handle. Even after filtering the fire-hose spray of data for the most useful information, the flow was still a torrent. In just a 30-day period, the NSA collected and sent back to headquarters almost 200 million records of internet activities, including website visits, records of emails and messages sent and received, and copies of text, audio and video content. It was an immense haul made possible by the interconnected global architecture of technology firms and the agency's secret access to their cables.

Much of this data was from foreigners. But other data was from Americans whose information had ended up overseas thanks to the complexity of the technology companies' clouds. In the parlance of the NSA, this data on Americans was "incidentally collected." Even though the data was American in origin, because the NSA had incidentally collected it overseas under EO 12333, the agency could hang onto it for five years with certain restrictions, and sometimes longer.

When the Snowden leaks revealed the NSA's tapping of private cables, Google and Yahoo were apoplectic. To Silicon Valley, it appeared that the NSA had used a legal loophole to do an end run around oversight. It seemed as if the NSA had hacked American companies either to gather data on Americans or to gather data on foreign targets that the agency could have collected, with more oversight and accountability, through the PRISM program. Google Chairman Eric Schmidt said the company was "annoyed to no end" about what the Five Eyes had done.

Google engineers were more direct. When The Washington Post showed two Google-affiliated engineers the NSA's smiley face diagram, they "exploded in profanity." Another wrote an online post condemning the Five Eyes operation and expressing his sentiments plainly: "Fuck these guys." Google and other firms had built their security systems to try to keep out Chinese and Russian hackers, and many considered the U.S. government a partner, even if that partnership was legally compelled. Finding out that the Five Eyes targeted them and bypassed their encryption felt like a betrayal to their engineers.

Even government employees were concerned by the activities conducted under EO 12333. Tye came to believe that the U.S. intelligence community was using the workaround provided by EO 12333 to authorize vast collection programs that existed almost entirely outside of congressional and judicial oversight. After eventually leaving government, he strongly suggested in a Washington Post op-ed that Americans should be concerned about the impact on their privacy.

The post-Snowden signals intelligence review commission was similarly alarmed and recommended reform. As part of new legislation authorizing intelligence activities in 2015, some overseas collection procedures were amended and codified, though the NSA retained a great deal of flexibility. The law authorizes the NSA to retain data incidentally collected from Americans for five years, and also includes a broad exception allowing data to be held even longer if the agency determines it is "necessary to understand or assess foreign intelligence or counterintelligence."

The NSA's EO 12333 operations made a direct contribution to the agency's success in gaining unencumbered access to data from all over the world. Though encryption would always pose a threat, the Five Eyes' efforts were actively defeating it and preserving the power of global espionage. An internal Five Eyes document boasted that, thanks to the multipronged efforts, "vast amounts of encrypted Internet data, which have up till now been discarded, are now exploitable."

Defeating encryption like this required complete secrecy. There was no value for the alliance in signaling or posturing about its decryption capabilities. To do so would cause adversaries to change their tactics, perhaps by avoiding the encryption mechanisms the Western spies could crack or circumvent. Even within the Five Eyes, where almost every significant employee had a security clearance, guidelines instructed those who were not working on the cryptologic capabilities not to speculate or ask about how those capabilities worked.

As long as the decryption capabilities were still hidden, times were good. The Five Eyes' intelligence targets all over the world knew enough to use computers, but did not know how to secure them. Whether operating against countries like Russia or China or terrorist groups like al-Qaeda, the NSA had the capacity to uncover meaningful secrets and bring them to U.S. government policymakers in time to act. While details of these operations are almost entirely hidden from view, their success was so significant that senior officials in the NSA refer to this period, up until 2013 or so, as "the golden age of signals intelligence."

But all golden ages must end. Snowden's revelations exposed the decryption capabilities and put forward a public debate the NSA did not want to have. Others, like Tye and the post-Snowden review commission, lent additional credibility to concerns about the agency's activities. Even worse, as the NSA grappled with the exposure of its tactics, foreign adversaries lurked, waiting for an opportunity. For as much as the Five Eyes aspired to live in a world of Nobody But Us, they were not alone.

Excerpt adapted from Ben Buchanan's forthcoming book, The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics, out Feb. 25 from Harvard University Press.


