
The world in which IPv6 was a good design


Last November I went to an IETF meeting for the first time. The IETF is an interesting place; it seems to be about 1/3 maintenance grunt work, 1/3 extending existing stuff, and 1/3 blue sky insanity. I attended mostly because I wanted to see how people would react to TCP BBR, which was being presented there for the first time. (Answer: mostly positively, but with suspicion. It kinda seemed too good to be true.)

Anyway, the IETF meetings contain lots and lots of presentations about IPv6, the thing that was supposed to replace IPv4, which is what the Internet runs on. (Some would say IPv4 is already being replaced; some would say it has already happened.) Along with those presentations about IPv6, there were lots of people who think it's great, the greatest thing ever, and they're pretty sure it will finally catch on Any Day Now, and IPv4 is just a giant pile of hacks that really needs to die so that the Internet can be elegant again.

I thought this would be a great chance to really try to figure out what was going on. Why is IPv6 such a complicated mess compared to IPv4? Wouldn't it be better if it had just been IPv4 with more address bits? But it's not, oh boy, is it ever not. So I started asking around. Here's what I found.

Buses ruined everything

Once upon a time, there was the telephone network, which used physical circuit switching. Essentially, that meant moving connectors around so that your phone connection was literally just a very long wire ("layer 1"). A "leased line" was literally a very long wire that you leased from the phone company. You would put bits in one end of the wire, and they'd come out the other end, a fixed amount of time later. You didn't need addresses because there was exactly one machine at each end.

Eventually the phone company optimized that a bit. Time-division multiplexing (TDM) and "virtual circuit switching" was born. The phone company could transparently take the bits at a slower bit rate from multiple lines, group them together with multiplexers and demultiplexers, and let them pass through the middle of the phone system using fewer wires than before. Making that work was a little complicated, but as far as we modem users were concerned, you still put bits in one end and they came out the other end. No addresses needed.

The Internet (not called the Internet at the time) was built on top of this circuit switching concept. You had a bunch of wires that you could put bits into and have them come out the other side. If one computer had two or three interfaces, then it could, if given the right instructions, forward bits from one line to another, and you could do something a lot more efficient than a separate line between each pair of computers. And so IP addresses ("layer 3"), subnets, and routing were born. Even then, with these point-to-point links, you didn't need MAC addresses, because once a packet went into the wire, there was only one place it could come out. You used IP addresses to decide where it should go after that.

Meanwhile, LANs got invented as an alternative. If you wanted to connect computers (or terminals and a mainframe) together at your local site, it was pretty inconvenient to need multiple interfaces, one for each wire to each satellite computer, arranged in a star configuration. To save on electronics, people wanted to have a "bus" network (also known as a "broadcast domain," a name that will be important later) where multiple stations could just be plugged into a single wire, and talk to any other station plugged into the same wire. These were not the same people as the ones building the Internet, so they didn't use IP addresses for this. They all invented their own scheme ("layer 2").

One of the early local bus networks was arcnet, which is dear to my heart (I wrote the first Linux arcnet driver and arcnet poetry way back in the 1990s, long after arcnet was obsolete). Arcnet layer 2 addresses were very simplistic: just 8 bits, set by jumpers or DIP switches on the back of the network card. As the network owner, it was your job to configure the addresses and make sure you didn't have any duplicates, or all heck would break loose. This was kind of a pain, but arcnet networks were usually pretty small, so it was only kind of a pain.

A few years later, ethernet came along and solved that problem once and for all, by using many more bits (48, in fact) in the layer 2 address. That's enough bits that you can assign a different (semi-sequential) address to every device that has ever been manufactured, and not have any overlaps. And that's exactly what they did! Thus the ethernet MAC address was born.

Various LAN technologies came and went, including one of my favourites, IPX (Internetwork Packet Exchange, though it had nothing to do with the "real" Internet) and Netware, which worked great as long as all the clients and servers were on a single bus network. You never had to configure any addresses, ever. It was beautiful, and reliable, and worked. The golden age of networking, basically.

Of course, someone had to ruin it: big company/university networks. They wanted to have so many computers that sharing 10 Mbps of a single bus network between them all became a huge bottleneck, so they needed a way to have multiple buses, and then interconnect - "internetwork," if you will - those buses together. You're probably thinking, aha, use the Internet Protocol for that, right? Ha ha, no. The Internet protocol, still not called that, wasn't mature or popular back then, and nobody took it seriously. Netware-over-IPX (and the many other LAN protocols at the time) were serious business, so as serious businesses do, they invented their own thing(s) to extend the already-popular thing, ethernet. Devices on ethernet already had addresses, MAC addresses, which were about the only thing the various LAN protocol people could agree on, so they decided to use ethernet addresses as the keys for their routing mechanisms. (Actually they called it bridging and switching instead of routing.)

The problem with ethernet addresses is they're assigned sequentially at the factory, so they can't be hierarchical. That means the "bridging table" is not as nice as a modern IP routing table, which can talk about the route for a whole subnet at a time. In order to do efficient bridging, you had to remember which network bus each MAC address could be found on. And humans didn't want to configure each of those by hand, so it needed to figure itself out automatically. If you had a complex internetwork of bridges, this could get a little complicated. As I understand it, that's what led to the spanning tree poem, and I think I'll just leave it at that.
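
(To make that concrete, here's a toy sketch, in Python, of the learn-and-flood behaviour a bridge table has to implement - my illustration, not anybody's real implementation. Real bridges do this in hardware, and spanning tree exists to keep the flooding step from looping forever.)

    class LearningBridge:
        """Toy model of a transparent bridge: a flat MAC table, no hierarchy."""

        def __init__(self, ports):
            self.ports = ports
            self.table = {}  # MAC address -> the port it was last seen on

        def frame_in(self, in_port, src_mac, dst_mac):
            self.table[src_mac] = in_port  # learn where the sender lives
            out = self.table.get(dst_mac)
            if out is None:
                # Unknown destination: flood to every other port.
                return [p for p in self.ports if p != in_port]
            # Known destination: forward it (or drop it if it's already local).
            return [] if out == in_port else [out]

    br = LearningBridge(["bus1", "bus2", "bus3"])
    print(br.frame_in("bus1", "aa:aa", "bb:bb"))  # unknown: flood ['bus2', 'bus3']
    print(br.frame_in("bus2", "bb:bb", "aa:aa"))  # learned: ['bus1']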

Anyway, it mostly worked, but it was a bit of a mess, and you got broadcast floods every now and then, and the routes weren't always optimal, and it was pretty much impossible to debug. (You definitely couldn't write something like traceroute for bridging, because none of the tools you need to make it work - such as the ability for an intermediate bridge to even have an address - exist in plain ethernet.)

On the other hand, all these bridges were hardware-optimized. The whole system was invented by hardware people, basically, as a way of fooling the software, which had no idea about multiple buses and bridging between them, into working better on large networks. Hardware bridging means the bridging could go really really fast - as fast as the ethernet could go. Nowadays that doesn't sound very special, but at the time, it was a big deal. Ethernet was 10 Mbps, because you could maybe saturate it by putting a bunch of computers on the network all at once, not because any one computer could saturate 10 Mbps. That was crazy talk.

Anyway, the point is, bridging was a mess, and impossible to debug, but it was fast.

Internet over buses

While all that was happening, those Internet people were getting busy, and were of course not blind to the invention of cool cheap LAN technologies. I think it might have been around this time that the ARPANET got actually renamed to the Internet, but I'm not sure. Let's say it was, because the story is better that way.

At some point, things progressed from connecting individual Internet computers over point-to-point long distance links, to the desire to connect whole LANs together, over point-to-point links. Basically, you wanted a long-distance bridge.

You might be thinking, hey, no big deal, why not just build a long distance bridge and be done with it? Sounds good, doesn't work. I won't go into the details right now, but basically the problem is congestion control. The deep dark secret of ethernet bridging is that it assumes all your links are about the same speed, and/or completely uncongested, because they have no way to slow down. You just blast data as fast as you can, and expect it to arrive. But when your ethernet is 10 Mbps and your point-to-point link is 0.128 Mbps, that's completely hopeless. Separately, the idea of figuring out your routes by flooding all the links to see which one worked - this is the actual way bridging typically works - is hugely wasteful for slow links. And sub-optimal routing, an annoyance on local networks with low latency and high throughput, is nasty on slow, expensive long-distance links. It just doesn't scale.

Luckily, those Internet people (if it was called the Internet yet) had been working on that exact set of problems. If we could just use Internet stuff to connect ethernet buses together, we'd be in great shape.

And so they designed a "frame format" for Internet packets over ethernet (and arcnet, for that matter, and every other kind of LAN).

And that's when everything started to go wrong.

The first problem that needed solving was that now, when you put an Internet packet onto a wire, it was no longer clear which machine was supposed to "hear" it and maybe forward it along. If multiple Internet routers were on the same ethernet segment, you couldn't have them all picking it up and trying to forward it; that way lies packet storms and routing loops. No, you had to choose which router on the ethernet bus is supposed to pick it up. We can't just use the IP destination field for that, because we're already using that for the final destination, not the router destination. Instead, we identify the desired router using its MAC address in the ethernet frame.

So basically, to set up your local IP routing table, you want to be able to say something like, "send packets to IP address 10.1.1.1 via the router at MAC address 11:22:33:44:55:66." That's the actual thing you want to express. This is important! Your destination is an IP address, but your router is a MAC address. But if you've ever configured a routing table, you might have noticed that nobody writes it like that. Instead, because the writers of your operating system's TCP/IP stack are stubborn, you write something like "send packets to IP address 10.1.1.1 via the router at IP address 192.168.1.1."

In truth, that really is just complicating things. Now your operating system has to first look up the ethernet address of 192.168.1.1, find out it's 11:22:33:44:55:66, and finally generate a packet with destination ethernet address 11:22:33:44:55:66 and destination IP address 10.1.1.1. 192.168.1.1 is just a pointless intermediate step.
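
(Here's that two-step lookup as a Python sketch. The table contents are invented for illustration, and a real stack does longest-prefix matching rather than a linear scan.)

    import ipaddress

    ROUTES = {  # destination prefix -> next-hop ("via") IP address
        ipaddress.ip_network("10.1.1.0/24"): "192.168.1.1",
    }
    ARP_CACHE = {  # next-hop IP -> MAC address, filled in by ARP broadcasts
        "192.168.1.1": "11:22:33:44:55:66",
    }

    def next_hop_mac(dst_ip):
        """Resolve a destination IP to the MAC the frame is actually sent to."""
        for prefix, via in ROUTES.items():
            if ipaddress.ip_address(dst_ip) in prefix:
                # The "via" IP is only ever used as a key into the ARP cache;
                # the frame on the wire carries the MAC, not 192.168.1.1.
                return ARP_CACHE[via]
        raise KeyError("no route to host")

    print(next_hop_mac("10.1.1.1"))  # -> 11:22:33:44:55:66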

To do that pointless intermediate step, you need to add ARP (address resolution protocol), a simple non-IP protocol whose job it is to convert IP addresses to ethernet addresses. It does this by broadcasting to everyone on the local ethernet bus, asking them all to answer if they own that particular IP address. If you have bridges, they all have to forward all the ARP packets to all their interfaces, because they're ethernet broadcast packets, and that's what broadcasting means. On a big, busy ethernet with lots of interconnected LANs, excessive ARP starts becoming one of your biggest nightmares. It's especially bad on wifi. As time went on, people started making bridges/switches with special hacks to avoid forwarding ARP as far as it's technically supposed to go, to try to cut down on this problem. But doing so is a hack.

Death by legacy

Time passed. Eventually (and this actually took quite a while), people pretty much stopped using non-IP protocols on ethernet at all. So basically all networks became a physical wire (layer 1), with multiple stations on a bus (layer 2), with multiple buses connected over bridges (gotcha! still layer 2!), and those bridged networks connected over IP routers (layer 3).

After a while, people got tired of manually configuring IP addresses, arcnet style, and wanted them to auto-configure, ethernet style, except it was too late to literally do it ethernet style, because a) the devices had already been manufactured with ethernet addresses, not IP addresses, and b) IP addresses were only 32 bits, which is not enough to just manufacture them forever with no overlaps, and c) just assigning IP addresses sequentially instead of using subnets would bring us back to square one: it would just be ethernet over again, and we already have ethernet. So that's where bootp and dhcp came from.

Those protocols, by the way, are special kinda like ARP is special (except they pretend not to be special, by technically being IP packets). They have to be special, because an IP node has to be able to transmit them before it has an IP address, which is of course impossible, so it just fills the IP headers with essentially nonsense (albeit nonsense specified by an RFC), so the headers might as well have been left out. But nobody would feel nice if they were inventing a whole new protocol that wasn't IP, so they pretended it was IP, and then they felt nice. Well, as nice as one can feel when one is inventing dhcp.

Anyway, I digress. The salient detail here is that unlike real IP services, bootp and dhcp need to know about ethernet addresses, because after all, it's their job to hear your ethernet address and assign you an IP address to go with it. They're basically the reverse of ARP, except we can't say that, because there's a protocol called RARP that is literally the reverse of ARP. Actually, RARP worked quite fine and did the same thing as bootp and dhcp while being much simpler, but we don't talk about that.
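
(To see just how nonsensical those headers are, here's roughly the addressing on a DHCPDISCOVER, sketched as a Python dict with field names loosely following RFC 2131. Every IP-level field is either zero or "everyone," and the client's real identity rides inside the payload.)

    dhcp_discover = {
        "eth_src": "11:22:33:44:55:66",  # the client's MAC: the only name it has
        "eth_dst": "ff:ff:ff:ff:ff:ff",  # ethernet broadcast
        "ip_src":  "0.0.0.0",            # "I have no IP address"
        "ip_dst":  "255.255.255.255",    # "whoever is out there"
        "udp_src": 68,                   # fixed bootp/dhcp client port
        "udp_dst": 67,                   # fixed bootp/dhcp server port
        "chaddr":  "11:22:33:44:55:66",  # the MAC again, inside the payload
    }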

The point of all this is that ethernet and IP were getting further and further intertwined. They're nowadays almost inseparable. It's hard to imagine a network interface without a MAC address, and it's hard to imagine that network interface working without an IP address. You write your IP routing table using IP addresses, but of course you know you're lying when you name the router by IP address; you're just indirectly saying that you want to route via a MAC address. And you have ARP, which gets bridged but not really, and dhcp, which is an IP packet but is really an ethernet protocol, and so on.

Moreover, we still have both bridging and routing, and they both get more and more complicated as the LANs and the Internet get more and more complicated, respectively. Bridging is still, mostly, hardware based and defined by IEEE, the people who control the ethernet standards. Routing is still, mostly, software based and defined by the IETF, the people who control the Internet standards. Both groups still try to pretend the other group doesn't exist. Network operators basically choose bridging vs routing based on how fast they want it to go and how much they hate configuring dhcp servers, which they really hate very much, which means they use bridging as much as possible and routing when they have to.

In fact, bridging has gotten so completely out of control that people decided to extract the layer 2 bridging decisions out completely to a higher level (with configuration exchanged between bridges using a protocol layered over IP, of course!) so it can be centrally managed. That's called software-defined networking (SDN). It helps a lot, compared to letting your switches and bridges just do whatever they want, but it's also fundamentally silly, because you know what else is software-defined networking? IP. It is literally and has always been the software-defined network you use for interconnecting networks that have gotten too big. But the problem is, it was always too hard to hardware accelerate, and anyway, it didn't get hardware accelerated, and configuring dhcp really is a huge pain, so network operators just learned how to bridge bigger and bigger things. And nowadays big data centers are basically just SDNed, and you might as well not be using IP in the data center at all, because nobody's routing the packets. It's all just one big virtual bus network.

It is, in short, a mess.

Now forget I said all that...

Great story, right? Right. Now pretend none of that happened, and we're back in the early 1990s, when most of that had in fact already happened, but people at the IETF were anyway pretending that it hadn't happened and that the disaster could all be avoided. This is the good part!

There's one thing I forgot to mention in that big long story above: somewhere in that whole chain of events, we completely stopped using bus networks. Ethernet is not actually a bus anymore. It just pretends to be a bus. Basically, we couldn't get ethernet's famous CSMA/CD to keep working as speeds increased, so we went back to the good old star topology. We run bundles of cables from the switch, so that we can run one cable from each station all the way back to the center point. Walls and ceilings and floors are filled with big, thick, expensive bundles of ethernet, because we couldn't figure out how to make buses work well... at layer 1. It's kinda funny actually when you think about it.

In fact, in a bonus fit of insanity, even with wifi - the ultimate bus network, right, where literally everybody is sharing the same open-air "bus!" - we almost universally use a mode, called "infrastructure mode," which literally simulates a giant star topology. If you have two wifi stations connected to the same access point, they can't talk to each other directly. They send a packet to the access point, but addressed to the MAC address of the other node. The access point then bounces it back out to the destination node.

HOLD THE HORSES LET ME JUST REVIEW THAT FOR YOU. There's a little catch there. When node X wants to send to Internet node Z, via IP router Y, via wifi access point A, what does the packet look like? Just to draw a picture, here's what we want to happen:

X -> [wifi] -> A -> [wifi] -> Y -> [internet] -> Z

Z is the IP destination, so obviously the IP destination field has to be Z. Y is the router, which we learned above that we specify by using its ethernet MAC address in the ethernet destination field. But in wifi, X can't just send out a packet to Y, for various reasons (including that they don't know each other's encryption keys). We have to send to A. Where do we put A's address, you might ask?

No problem! 802.11 has a thing called 3-address mode. They add a third ethernet MAC address to every frame, so they can talk about the real ethernet destination, and the intermediate ethernet destination. On top of that, there are bit fields called "to-AP" and "from-AP," which tell you if the packet is going from a station to an AP, or from an AP to a station, respectively. But actually they can both be true at the same time, because that's how you make wifi repeaters (APs send packets to APs).

Speaking of wifi repeaters! If A is a repeater, it has to send back to the base station, B, along the way, which looks like this:

X -> [wifi] -> A -> [wifi-repeater] -> B -> [wifi] -> Y -> [internet] -> Z

X->A uses three-address mode, but A->B has a problem: the ethernet source address is X, and the ethernet destination address is Y, but the packet on the air is actually being sent from A to B; X and Y aren't involved at all. Suffice it to say that there's a thing called 4-address mode, and it works pretty much like you think.
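
(If you're curious, here's how the address slots get filled in, per my reading of the 802.11 address fields; the helper names are made up for illustration. Slot order is [addr1, addr2, addr3(, addr4)]: addr1 and addr2 are the receiver and transmitter on the air, and the remaining slots carry the "real" ethernet-style endpoints.)

    def three_addr_to_ap(src, dst, ap):
        # Station -> AP ("to-AP" bit set): X sending to Y via A.
        return [ap, src, dst]

    def three_addr_from_ap(src, dst, ap):
        # AP -> station ("from-AP" bit set): A delivering X's packet to Y.
        return [dst, ap, src]

    def four_addr(src, dst, transmitter, receiver):
        # AP -> AP (both bits set), e.g. repeater A forwarding to base B.
        return [receiver, transmitter, dst, src]

    print(three_addr_to_ap("X", "Y", "A"))  # ['A', 'X', 'Y']
    print(four_addr("X", "Y", "A", "B"))    # ['B', 'A', 'Y', 'X']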

(In 802.11s mesh networks, there's a 6-address mode, and that's about where I gave up trying to understand.)

Avery, I was promised IPv6, and you haven't even mentioned IPv6

Oh, oops. This post went a bit off the rails, didn't it?

Here's the point of the whole thing. The IETF people, when they were thinking about IPv6, saw this mess getting made - and maybe predicted some of the additional mess that would happen, though I doubt they could have predicted SDN and wifi repeater modes - and they said, hey wait a minute, stop right there. We don't need any of this crap! What if instead the world worked like this?

  • No more physical bus networks (already done!)
  • No more layer 2 internetworks (that's what layer 3 is for)
  • No more broadcasts (layer 2 is always point-to-point, so where would you send the broadcast to? replace it with multicast instead)
  • No more MAC addresses (on a point-to-point network, it's obvious who the sender and receiver are)
  • No more ARP and DHCP (no MAC addresses, so no mapping of IP addresses to MAC addresses)
  • No more complexity in IP headers (so you can hardware accelerate IP routing)
  • No more IP address shortages (so that we can go back to routing big subnets again)
  • No more manual IP address configuration except at the core (and there are so many IP addresses that we can recursively hand out subnets down the tree from there)

Imagine that we lived in such a world: wifi repeaters would just be IPv6 routers. So would wifi access points. So would ethernet switches. So would SDN. ARP storms would be gone. "IGMP snooping bridges" would be gone. Bridging loops would be gone. Every routing problem would be traceroute-able. And best of all, we could drop 12 bytes (source/dest ethernet addresses) from every ethernet packet, and 18 bytes (source/dest/AP addresses) from every wifi packet. Sure, IPv6 adds an extra 24 bytes of address (vs IPv4), but you're dropping 12 bytes of ethernet, so the added overhead is only 12 bytes - pretty comparable to using two 64-bit IP addresses but having to keep the ethernet header. The idea that we could someday drop ethernet addresses helped to justify the oversized IPv6 addresses.
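
(The byte math, spelled out as a trivial sanity check:)

    ipv4_pair = 2 * 4   # src + dst IPv4 addresses, in bytes
    ipv6_pair = 2 * 16  # src + dst IPv6 addresses, in bytes
    eth_pair  = 2 * 6   # src + dst ethernet MAC addresses, in bytes

    print(ipv6_pair - ipv4_pair)             # 24: extra address bytes in IPv6
    print(ipv6_pair - ipv4_pair - eth_pair)  # 12: net cost if ethernet addresses go away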

It would have been beautiful. Except for one problem: it never happened.

Requiem for a dream

One person at work put it best: "layers are only ever added, never removed."

All this wonderfulness depended on the ability to start over and throw away the legacy cruft we had built up. And that is, unfortunately, pretty much impossible. Even if IPv6 hits 99% penetration, that doesn't mean we'll be rid of IPv4. And if we're not rid of IPv4, we won't be rid of ethernet addresses, or wifi addresses. And if we have to keep the IEEE 802.3 and 802.11 framing standards, we're never going to save those bytes. So we will always need the stupid "IPv6 neighbour discovery" protocol, which is just a more complicated ARP. Even though we no longer have bus networks, we'll always need some kind of simulator for broadcasts, because that's how ARP works. We'll need to keep running a local DHCP server at home so that our obsolete IPv4 light bulbs keep working. We'll keep needing NAT so that our obsolete IPv4 light bulbs can keep reaching the Internet.

And that's not the worst of it. The worst of it is we still need the infinite abomination that is layer 2 bridging, because of one more mistake the IPv6 team forgot to fix. Unfortunately, while they were blue-skying IPv6 back in the 1990s, they neglected to solve the "mobile IP" problem. As I understand it, the idea was to get IPv6 deployed first - it should only take a few years - and then work on it after IPv4 and MAC addresses had been eliminated, at which time it should be much easier to solve, and meanwhile, nobody really has a "mobile IP" device yet anyway. I mean, what would that even mean, like carrying your laptop around and plugging into a series of one ethernet port after another while you ftp a file? Sounds dumb.

The killer app: mobile IP

Of course, with a couple more decades of history behind us, now we know a few use cases for carrying around a computer - your phone - and letting it plug into one wireless access point after another. We do it all the time. And with LTE, it even mostly works! With wifi, it works sometimes. Good, right?

Not really, because of the Internet's deep dark secret: all that stuff only works because of layer 2 bridging. Internet routing can't handle mobility - at all. If you move around on an IP network, your IP address changes, and that breaks any connections you have open.

Corporate wifi networks fake it for you, bridging their whole LAN together at layer 2, so that the giant central DHCP server always hands you the same IP address no matter which corporate wifi access point you join, and then gets your packets to you, with at most a few seconds of confusion while the bridge reconfigures. Those newfangled home wifi systems with multiple routers do the same trick. But if you switch from one wifi network to another as you walk down the street - like if there's a "Public Wifi" service in a series of stores - well, too bad. Each of those gives you a new IP address, and each time it happens, you kill all your connections.

LTE tries even harder. You keep your IP address (usually an IPv6 address in the case of mobile networks), even if you travel miles and miles and hop between numerous cell towers. How? Well... they typically just route all your traffic back to a central location, where it all gets bridged together (albeit with lots of firewalling) into one super-gigantic virtual layer 2 LAN. And your connections keep going. At the expense of a ton of complexity, and a truly embarrassing amount of extra latency.

Making mobile IP actually work

So okay, this has been a long story, but I managed to extract it from those IETF people eventually. When we got to this point - the problem of mobile IP - I couldn't help but ask. What went wrong? Why can't we make it work?

The answer, it turns out, is surprisingly simple. The great design flaw was in how the famous "4-tuple" (source ip, source port, destination ip, destination port) was defined. We use the 4-tuple to identify a given TCP or UDP session; if a packet has those four fields the same, then it belongs to a given session, and we can route it to whatever socket is handling that session. But the 4-tuple crosses two layers: internetwork (layer 3) and transport (layer 4). If, instead, we had identified sessions using only layer 4 data, then mobile IP would have worked perfectly.

Let's do a quick example. X port 1111 is talking to Y port 80, so it sends a packet with 4-tuple (X,1111,Y,80). The response comes back with (Y,80,X,1111), and the kernel routes it to the socket that generated the original packet. When X sends more packets tagged (X,1111,Y,80), then Y routes them all to the same server socket, and so on.

Now, if X hops IP addresses, it gets a new name, say Q. Now it'll start sending packets with (Q,1111,Y,80). Y has no idea what that means, and throws it away. Meanwhile, if Y sends packets tagged (Y,80,X,1111), they get lost, because there is no longer an X.
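
(In toy Python, today's kernel demultiplexer is doing something like this - names invented, obviously:)

    # The 4-tuple is the session key, so the addresses are baked into it.
    sessions = {("X", 1111, "Y", 80): "socket-1"}

    def deliver(src_ip, src_port, dst_ip, dst_port):
        return sessions.get((src_ip, src_port, dst_ip, dst_port))

    print(deliver("X", 1111, "Y", 80))  # socket-1: the session continues
    print(deliver("Q", 1111, "Y", 80))  # None: X renamed itself, connection dead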

Imagine now that we tagged sessions without reference to their IP address. For that to work, we'd need much bigger port numbers (which are currently 16 bits). Let's make them, say, 128 or 256 bits, some kind of unique hash.

Now X sends out packets to Y with tag (uuid,80). Note, the packets themselves still contain the (X,Y) addressing information, down at layer 3 - that's how they get delivered to the right machine in the first place. But the kernel doesn't use the layer 3 information to decide which socket to deliver to; it just uses the uuid. For the return direction, Y's kernel caches that packets for (uuid) go to IP address X, which is the address it most recently received (uuid) packets from.

Now imagine that X changes addresses to Q. It still sends out packets tagged with (uuid,80), to IP address Y, but now from address Q. On machine Y, it receives the packet and matches it to the socket associated with (uuid), notes that the packets for that socket are now coming from address Q, and updates its cache. Its return packets can now be sent, tagged as (uuid), back to Q instead of X. Everything works! (Modulo some care to prevent connection hijacking by impostors.)
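
(Continuing the toy sketch from above, here's the same demultiplexer keyed on the uuid instead, with the return-address cache described in the text - and, like the text, ignoring the impostor problem:)

    sessions = {"uuid-1234": {"socket": "socket-1", "peer": "X"}}

    def deliver(uuid, src_ip):
        s = sessions.get(uuid)
        if s is not None:
            s["peer"] = src_ip  # replies now go wherever this uuid last came from
        return s and s["socket"]

    print(deliver("uuid-1234", "X"))  # socket-1, replies cached toward X
    print(deliver("uuid-1234", "Q"))  # still socket-1; replies now go to Q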

There's only one catch: that's not how UDP and TCP work, and it's too late to update them. Updating UDP and TCP would be like updating IPv4 to IPv6; a project that sounded simple, back in the 1990s, but decades later, is less than half accomplished (and the first half was the easy part; the long tail is much harder).

The positive news is we may be able to hack around it with yet another layering violation. If we throw away TCP - it's getting rather old anyway - and instead use QUIC over UDP, then we can just stop using the UDP 4-tuple as a connection identifier at all. Instead, if the UDP port number is the "special mobility layer" port, we unwrap the content, which can be another packet with a proper uuid tag, match it to the right session, and deliver those packets to the right socket.

There's even more good news: the experimental QUIC protocol already, at least in theory, has the right packet structure to work like this. It turns out you need unique session identifiers anyhow if you want to use stateless packet encryption and authentication, which QUIC does. So, perhaps with not much work, QUIC could support transparent roaming. What a world that would be!

At that point, all we'd have to do is eliminate all remaining UDP and TCP from the Internet, and then we would definitely not need layer 2 bridging anymore, for real this time, and then we could get rid of broadcasts and MAC addresses and SDN and DHCP and all that stuff.

And then the Internet would be elegant again.

kbrint (3 days ago): IPV8, ladies and gentlemen!

MotherHydra (6 days ago, Space City, USA): Theoretical merits aside, Lol @ this guy arguing that we should basically go back to the drawing board with the OSI model, "screw layer 2!" Uh, OK fella your academic ideas are strictly that, none of this gives me faith in the author's other writings. I'd love to see what happens between the copper and the presentation layer but this is all just bluster about breaking the rules.

acdha (5 days ago): This piece is definitely like the Hunter S. Thompson school of tech writing but I thought he had enough credibility to at least read it. It does remind me of Vernor Vinge's world of far-future software archaeology - so easy to imagine a couple of reverse engineers going WTF 500 years from now.

Daily Hacker News for 2017-08-07

The 10 highest-rated articles on Hacker News on August 07, 2017 which have not appeared on any previous Hacker News Daily are:

kbrint (4 days ago): Just when you don't expect an elliptic curve!

Next-level tagging

jwz

kbrint (12 days ago): Cool

How to Be Passive Aggressive When Collaborating in Google Docs


Recently I’ve been wondering how I can be more passive aggressive when collaborating in Google Docs. So I asked a team of experts (my former co-workers) and they came up with these 14 brutal moves.

1. Leave the document open all the time

Even when you’re not reading it, leave the document open so your collaborators will think you’re watching every single thing they’re doing.

2. Highlight a piece of text then do nothing

Your collaborator will see the highlight and wonder what the hell you’re thinking, even after hours and hours have passed.

3. Type over their sentence while they’re typing it

Change words, correct spelling and grammar, or completely rewrite your collaborator’s sentence as they’re typing. This will drive them crazy, and make them think twice about continuing to write.

4. Take away edit access, then take away comment access

If your collaborator makes an edit or comment you don’t like, reduce their access to “view only” and ask them to just send you feedback through email.

5. Type large amounts of text above where they’re typing

What they’re typing will keep jumping down the page and, after losing track of their cursor multiple times, they’ll simply give up.

6. Comment “+1” to every negative comment

Don’t add any new comments or edits, just reinforce every negative thing that someone else already said.

7. Resolve a comment without ever addressing it

When your collaborator asks what happened to their comment, say you don’t remember seeing it.

8. Rename the document

This will make it impossible for your collaborators to find it again in the Google Docs list.

9. Go into revision history and keep setting the document back to 15 minutes ago

Your collaborator will furiously wonder what keeps happening to all their edits. Follow up with his manager about why it’s taking him so long to finish this.

10. Make several comments without submitting them, then submit them all at once

This will make your collaborators wonder how you were able to read the document so fast.

11. Make a few minor edits, then add yourself as an author

You contributed plenty.

12. Comment asking a question, but not to the author

Ask a question about part of the document but add in another coworker to discuss it with you, making it clear that you don’t see the owner of the document as an expert in the subject.

13. Write some Apps Script so that whenever the other person is done typing, an alert pops up saying “Seriously?”

I have no idea how to do that, but sounds very awesome.

14. Let everyone else do the work, then be the one to share it with the team

And finally, once it’s ready to go, be the one to share it with the team and your manager, along with a note about how much time and effort was spent on this. You’ll look like a true leader.

The post How to Be Passive Aggressive When Collaborating in Google Docs appeared first on The Cooper Review.

kbrint (13 days ago): Fact.

Saturday Morning Breakfast Cereal - Family Vote



Click here to go see the bonus panel!

Hovertext:
Let's be clear, kids. It's not *just*. It's potentially fair.

lkeeney (15 days ago, Apex, North Carolina): LOL

lmoffeit (15 days ago): so funny!

rclatterbuck (15 days ago): I'll have to remember this.

What’s inside an 18650 cell? And why it’s important


There has been a boom in ebike builders making their own battery packs out of the popular 18650-format cells, and I want to share what I’ve found out about the guts of an 18650, so you will understand more about proper DIY pack-building methods.

__________________________________

Why would somebody make their own pack?

The existing battery pack vendors will only make (and stock) the packs that they think they will sell enough of, to make it worth risking their available cash. Which means, they will only have packs in certain sizes and shapes, with specific cost-effective cells.

 

Nobody makes a pack this exact shape and size, and frame shapes vary widely from one model to the next. If this builder settled on buying a turn-key pack, it would have fewer volts, fewer amps, or both. Pic courtesy of firefighter Barry Morfill, in Shrewsbury, UK.

 

But…what if you want a pack with a different cell? Or maybe, you desire a very custom shape? There’s nothing “wrong” with a turn-key battery pack, but…we are under the impression that anyone who is going to go to the trouble and expense of building their own pack is likely to be the type of person who wants high performance. After all, why go to the expense and trouble of building a low-to-medium performance pack in a conventional shape? They already exist, and they are getting very affordable. (For building a custom pack, the most-often cited DIY cell spot-welder is about $250!)

 

This is a graphic from a custom battery building website to determine the maximum number of cells that will fit, pic courtesy of Allex in Sweden. I think he could have squeezed in a couple more cells if he had flipped the shock over, so the fat end was at the rear.

 

A third reason (after “custom features”, and using a specific cell), is that if you want to ship a large ebike battery pack internationally, it is very hard right now (and the rules and regulations are likely to get worse over time). If you live in a country where the major ebike battery pack retailers will not ship to you? Then…you can’t get ANY kind of pre-built battery pack. You can still buy all of the components and build your own, but…buying a completed battery pack is just not an option.

__________________________________

Negative cans and shoulder-shorts

I’m putting this fact about 18650 cell construction first, because…I still run into people today who are surprised to find this out, and a “short” across the shoulder of an 18650 can will cost you trouble and money. It might ruin an entire expensive pack, and also…IT CAN START A FIRE!

The positive and negative electrodes of an 18650 cell. The only electrical separation between these two is the black plastic seal shown here, on the left. YES, the entire sides and bottom of these cells are a single conductive metal shell, which forms the negative electrode. It is normally covered with a Poly Vinyl Chloride / PVC “heat shrink” sleeve, with the retail info printed on it.

 

I have seen some caps and shells advertised as stainless steel, but…here is a quote from an 18650 parts supplier:

“The case and cap are both made from nickel-plated A3 steel, and the insulating seal is made from nylon”

Even though it is “possible” to draw current from the negative shoulder of an 18650 cell (so that the positive and negative are pulled from the same end of the cell), doing that would also mean you are using the sides of the can as a conductor. The shell is steel, and its conductivity is about 10% of what copper would be, so…it is not unreasonable to call the sides of the 18650 case a “resistor”.
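
To put a rough number on that “resistor”, here’s a back-of-the-envelope estimate in Python. The wall thickness and the steel resistivity are my guesses, not datasheet values, so treat the result as an order-of-magnitude figure only.

    import math

    rho_steel = 1.6e-7   # ohm*m, rough value for plain carbon steel (copper: ~1.7e-8)
    length    = 0.065    # m, full height of an 18650 can
    wall      = 0.00025  # m, ASSUMED can wall thickness
    diameter  = 0.018    # m, can diameter

    area  = math.pi * diameter * wall  # conducting cross-section of the cylinder wall
    r_can = rho_steel * length / area  # end-to-end resistance of the can sides
    print(f"~{r_can * 1000:.1f} milliohms")  # ~0.7 mOhm with these guesses

That looks small, but at high discharge currents it is pure waste heat deposited in exactly the part of the cell you want to keep cool, which is the point of the paragraph above.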

 

The two ways I have seen home DIY pack-builders use to prevent a “shoulder short” are…to add a self-adhesive fiber insulator-washer to the positive end, and also…to add some type of plastic end-cap. Either one is a huge benefit, and I would do both.

 

If you are purposefully running current through the sides of the can, it means you are wasting battery watts to heat the shell. Wasted watts and heating-up the cell on purpose is a bad design. Always pull the negative current from the bottom of the 18650, using something that has better conductivity than steel (aluminum or copper, either raw or nickel-plated).

__________________________________

The PTC

Just under the positive-electrode cap is the “Positive Temperature Coefficient” device, and it is a thin and compact way to limit the current coming out of an individual cell, when there is so much current that the cell-tip is getting hot.

The PTC is a conductive washer, but…when it starts to get hot? Its resistance dramatically increases, so that hopefully…less current can pass through it. In this way, it is almost like a self-resetting breaker. That brings up the question, how hot does it have to get before it activates?

One seemingly-reliable source states that the current-throttling point begins significant activation at 134C (273F). If that is true for most common cells, I’m not sure what situations exist where the PTC will help…maybe it needs to allow the faulted cell to get “that hot” in order to allow the electrolyte to begin producing gasses, and it is actually the gas pressure build-up which then activates the CID, which provides the safeguard. This same reference states that the PTC returns to full-rated current capabilities when the cell cools back down.

 

A chart showing how much the resistance of the PTC increases as it gets hotter.

 

There are several references that indicate a temperature of 60C (140F) is the hottest that any 18650 cell should ever be allowed to get up to (if you want it to last a long time). If you know someone who has gotten their pack hotter than that, and they are proud of the fact that the pack still works, they may not realize that they have thrown away much of their expensive battery pack’s potential life-cycle.

Tesla has an 8-year warranty on their packs, and…they not only designed the pack to never get hot in normal charging and discharging conditions, they also incorporated a pack-cooling system, as did the Chevrolet Volt. The Nissan Leaf was introduced without a liquid-coolant heat-management system for the battery, and they depended on ambient air-cooling, which caused problems that they encountered during the summertime in regions with very hot weather. The Tesla system has a cooling target of 55C (131F), which fits right in with the widely accepted safe max temp of 60C (140F). If they are getting more than eight years out of their battery packs, they are doing something very right.

Also, here is a paper from NASA on PTC devices in cylindrical cells.

__________________________________

The CID

This stands for “Current Interrupt Device“, and it is a simple and compact device that “pops” when enough pressure has built up inside a cell, and it’s located just below the PTC. There are several variations in the designs. They operate on the same principle, but do it in slightly different ways. The only reason any pressure would develop inside a cell is because some of the electrolyte has converted from a thick gel (almost dry) into a gas, from experiencing too much heat.

If that has happened?… I would not “reset” the CID and try to use the cell again (cells are cheap, and you don’t need to add a risky cell to an expensive pack). The CID will not reset by itself, but sometimes?…it can be done manually. I might use such a cell in a flashlight, but not an ebike pack. Wait a minute…actually…I would throw that bitch away. I don’t need my flashlights catching on fire. I have access to plenty of new 18650’s, and I don’t need to spend even one minute of my life wrestling with an insurance adjuster over a house-fire.

The CID in an 18650 cell. In this pic, the PTC is the dark green washer between the white CID and the red cap (positive electrode). Everything inside the gold insulative gasket is positive, and all the grey stuff outside of it is the negative.

 

The CID is a thin disc of sheetmetal that is in-between the positive cathode cap, and the rest of the interior of the cell. It has a bowl-shaped depression in the center of it that presses down against another flat metal disc to make contact, and by doing so, that will complete the current-path in normal operations.

__________________________________

The Scored Burst Disc

Generic cells do NOT have a “burst disc”. If they get hot enough for the electrolyte to begin turning into gasses, and then expand from the building pressure, the canister will split…somewhere. If there is a burst disc, it will pop open at that specific location. You should only use name-brand cells from the “big five”, which always have this feature (Panasonic, Samsung, LG, Sony, Sanyo). If they are ever abused in a way where they will burst from internal pressure, the hot electrolyte vapors will always blow towards the burst disc, instead of splitting the sides (sometimes the disc is located on the bottom, sometimes on top…just like my…uh…nevermind).

If the sides split, the heat would be directed towards the cell next to it, and it would make the possibility of a runaway thermal meltdown (and fire) more likely. Here is a paper from NASA on “rupture discs”, if you are interested in this (isn’t everyone?).

 

Here is a quote from a vendor who sells parts to make 18650’s: “Safety valve will open at 2.8MPa (the valve will open to release interior high pressure if over 2.8MPa, to ensure no explosion from the can)”

MPa = Mega-Pascal, 2.8 MPa = 406-PSI. However, the actual bursting point depends on the pressure difference between the inside of the cell and the outside. By that I mean, if you are in the mountains where the air pressure is lower, the cell will burst at a lower internal pressure. Air-pressure at sea level is roughly 14.5-PSI / 100 kiloPascals / 100-kPa.
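
Checking the vendor’s number is plain unit arithmetic (sketched in Python):

    PA_PER_PSI = 6894.757           # pascals per PSI
    burst_psi = 2.8e6 / PA_PER_PSI  # 2.8 MPa expressed in PSI
    print(round(burst_psi))         # 406, matching the figure quoted above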

 

Either this cell didn’t have a burst-disc, or…the vents were soldered-over so that any pressure that was building couldn’t blow in the direction the designers intended. Try explaining this to your wife after your house burned down…

 

Here is a technical paper on thermal runaway events. The most interesting part for me was to see that the copper foil in the “jelly roll” had melted and formed globules, so the interior temperature had to have reached over 1085C (1985F).

__________________________________

Individual cell-fusing

This section may seem out of place, since this section doesn’t specify anything INSIDE an 18650 cell, but…a big reason I’m even writing this article is to preface the acceptable methods for a home battery pack builder to use, when using 18650 cells [in an upcoming article. Insert link here, when article is published].

This pic below is a close-up from a Tesla electric car pack. The electrodes on the 18650 cells have a fuse-wire connecting each one to a thick nickel-plated copper buss. These wires are connected to the cell tip by using an ultrasonic bonding machine (high-speed vibrations), which causes no heat to penetrate even the upper layer of the cell, much less the electrolyte.

 

 

Fuse wire connecting each Panasonic cell to the Tesla’s buss-plate.

 

Tesla vehicles are designed to draw “low amps” from each cell (to ensure long life, and provide long range), so…a surprisingly thin wire works fine as the connection to carry the current (in order to get high amps at the motor, they use thousands of them). The tiny diameter of the fuse-wire means that if the car is involved in a wreck and then one (or more) of the cells are shorted, any cell that is flowing high amps will get the fuse-wire hot enough that…the fuse melts very quickly.

An internal short of a cell is extremely unlikely, but…whatever the reason for high amps in the cell, heat from high amps will melt the fuse-wire, which will separate that particular cell from the pack. One of the Tesla models has 74 cells in each paralleled group, so…if one cell pops its fuse? that P-group will be just fine running on the remaining 73 cells.

There are quite a few youtube videos about taking some cheap salvaged low-mileage 18650 cells and building them into an off-grid home-electricity-storage system. Fusing is less needed in a stationary system (no crashes into other homes), but…it can be an easy and cheap feature to add to your design, whether it is for a home or a vehicle.

If you read farther below, you will see that I do NOT recommend soldering onto the negative anode of the cell (the flat bottom), but…I actually believe that…with the right tools and techniques? Soldering a fuse-wire onto the positive cathode is very easy and quite safe, with no risk of damaging the cell from overheating it. After a wreck, a damaged cell can be easily un-soldered and replaced.

Prep the surfaces, apply some solder-paste, set the fuse wire where you want it, and press down for a second with a fat-tip 100W soldering iron (thin tips cool off too fast). Or, use a resistance soldering rig, which I will write about soon. If you have a spot-welder? Fuse-wire can be spot-welded onto any 18650 positive terminal (hit it in two places).

If you want to use fuse-wire on your design, maybe consider flattening the tip of the fuse-wire to improve the contact area onto the positive 18650 electrode nipple. In order to get a consistent thickness on the fuse-wire tip, maybe put some steel sheet-metal on either side of the fuse-wire tip, and then when you whack it with a hammer, the wire tip-thickness will be very consistent. The sheet-metal that determines the thickness of the whacked-wire tip should be roughly about “half-to-one-third” the diameter of the round cross-section of solid wire.

 

Here’s an example of using fuse wire with solder, on a DIY home battery storage pack.

 

Individual cell-fusing doesn’t have anything to do with the internals of the 18650 cells, but…it is a safety feature that is easy and cheap for anyone to add to an 18650-cell pack. If you look back over the info above on how the positive end of a cell is internally constructed, you can see that the positive electrode end can take some heat without even coming close to being damaged.

__________________________________

A “Protected” cell circuit

Ebike battery packs are made from UN-protected cells, because the BMS and controller decide how many amps you will be drawing from them. I’m adding this section to this article because…the pictures in some web-catalogs do not show why protected cells are different. If you are buying cells to build an ebike pack, make sure you do not order the ones with these protection circuits.

 

A protection circuit on an 18650 cell, intended for a product that does not have any current-limiting.

 

Protected cells are also a little longer than an 18650 that has no protection circuit. This current-limiter is attached to the negative electrode, and then the current passes through a very thin conductive ribbon up the side of the can to the positive electrode (ribbon shown in the pic above). That ribbon is typically then attached to the underside of a false cap that “snaps on” over the factory positive electrode cap.

__________________________________

Flat top vs button top

Some flashlights (or other commercial devices) have a hollow cylindrical protrusion inside them where the cell’s positive cathode tip presses against the housing contact that the 18650 is inserted into. This prevents the cell’s negative electrode from making any contact, if the cell is inserted backwards by a drunk customer (who you lookin’ at? I’m not talkin’ about me…*breaks down and sobs uncontrollably “Why did you leave me?…WHY?” ).

That protective socket-shape means that a raw 18650 cell cathode will also not make any contact, even if it is inserted in the proper orientation. For those devices, you must order a “button top” cell. It has an additional “snap-on” cap that is narrower and protrudes farther out. This also makes the button-top 18650 a  little longer, and you should not order these when building an ebike pack.

 

18650 flat top vs button top. Ebikes use the flat-top cells.

__________________________________

The Jelly Roll

If you want to cut open an 18650 cell, then first… you must fully discharge it for safety. One way is to hook it up to any incandescent filament bulb, such as a 12V automobile tail-light bulb. A 12V LED will not work, because they require a fairly exact voltage to work, and you will be draining the volts down to zero. Also, a very large bulb (like an old headlight) might allow so much current that the cell overheats, so…a small 12V incandescent filament bulb is safer.

Of course, that also means that when using a small incandescent bulb, the cell will take longer to discharge. When you no longer see any dim light coming from the filament, I have even read about putting the cell in a bucket of water with a spoonful of table salt (overnight), to ensure that it is absolutely 100% drained.
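
For a very rough feel of how long the bulb method takes, here’s the arithmetic. Both numbers below are my assumptions, not measurements, so check your own cell and bulb:

    capacity_ah = 3.0  # ASSUMED cell capacity; check the datasheet
    bulb_amps   = 0.4  # ASSUMED draw of a small 12V filament bulb at ~3-4V
    print(capacity_ah / bulb_amps, "hours")  # ~7.5 hours, order of magnitude only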

To cut open the metal can of an 18650, I recommend a Dremel with a thin abrasive disc, instead of a tubing cutter. Those tubing-cutters cause an indentation on the edge of the cut, making a removal of the jelly-roll difficult. Once you get the jelly-roll out and begin to unroll it, you will notice there is a copper foil as a base for the anode chemicals, and aluminum foil as a base for the cathode chemicals.

There is a thin carbon rod that runs up the center to connect the edge of the roll to the positive cathode (#13 in the patent drawing below).

 

Discharge the cell completely, use gloves and eye protection, and do this in a well-ventilated area. Plus…wait until your wife is not home, and don’t tell your mommy.

__________________________________

Inside the bottom of the can, the Anode

Once you cut the can open, you can see that the only thing between the jelly-roll and the bottom of the metal can is a thin plastic insulation washer.

One of the most important things I want to get across in this article is that when you are assembling a bunch of cells into a pack, it is the negative anode (the sides and bottom) that are the most sensitive to heat.

__________________________________

How is the whole enchilada stacked up?

Here’s a couple of images that I thought would help to get my points across. They are from the official patent of a Samsung 18650 cell. The top of the cell has all those pieces stacked up above the electrolyte in the jelly-roll, but the bottom? It only has that one thin plastic washer…good old #11.

Samsung patent “US 20090117451 A1”

 

If you’ve read this far, here is another paper on 18650 cell construction and safety from Dr Wesselmark, who holds a PhD in Applied Electrochemistry from the Royal Institute of Technology, Stockholm…co-written by Tom O’Hara, who has over twenty years of R&D experience at Energizer Battery.

If Dr Wesselmark or Tom reads this? I owe you a beer, just like Niels Bohr.

__________________________________

If you liked this article, you might also like these:

The New 21700 format Lithium Cells in 2017

How to make a lithium battery last, or…kill it if you like

Amazing new 18650 cells for ebike batteries in 2015

A Home-Built Ebike battery pack from 18650 cells

Thanks for reading, and send any additional info, suggestions, or death threats to: Prisoner #41, Kansas state correctional facility for the mentally unstable.

__________________________________

Written by Ron/spinningmagnets, July 2017
