I forget who told me this story, but at some point the British tried a crazy known-plaintext attack by planting handwritten notes in dead German soldiers’ pockets that contained an “important message” to be sent, and then in the following days they would attempt to decrypt enigma communications against the known plaintext.
A few years ago I read 'Between Silk and Cyanide: Britain's Wartime Spies and Saboteurs', the autobiography of Leo Marks, who worked in the British Special Operations Executive. He designed cryptography for agents behind enemy lines (thus the title: you could print one-time pads on silk, and silk was harder to discover during pat-downs than paper).

Lots of interesting stories in there, including when he suspected that the Germans had captured all of their Dutch spies and were transmitting fake messages: real agents made mistakes when encoding due to stress; the Germans' fake encodings were all perfect.

You can listen to the podcast, if you want.

It's called Operation Mincemeat.

https://pca.st/podcast/0d412ec0-af39-0139-c19f-0acc26574db2

That's... not what gp was talking about. Why are so many people jumping in with this mistake?

Operation Mincemeat wasn't a German officer, it wasn't anything about using a known plaintext to compare to coded messages, it wasn't pretending to be German documents, and it wasn't to help with cryptanalysis. About the only similarity is a dead body.

Now also a quite good West End musical
Couldn't enjoy it at all. One of the first scenes shows MI6 officers, during WWII, making plans on a post-1991 world map, with reunified Germany and independent Baltic countries, etc. Kills immersion for me immediately, along with the gender politics every few minutes in a history show. Maybe I'm old fashioned.
That's cool, I hadn't heard of that. I did hear that they made the mistake of repeating certain phrases, including signing everything with a "heil hitler", but also something about the weather forecast starting the same way every time.
The story of the man[1] whose body was used to fool German intelligence during Operation Mincemeat is quite tragic:

> Michael was born in Aberbargoed in Monmouthshire in South Wales. Before leaving the town, he held part-time jobs as a gardener and labourer. His father Thomas, a coal miner, killed himself when Michael was 15, and his mother died when he was 31. Homeless, friendless, depressed, and with no money, Michael drifted to London where he lived on the streets.

> Michael was found in an abandoned warehouse close to King's Cross, seriously ill from ingesting rat poison that contained phosphorus. Two days later, he died at age 36 in St Pancras Hospital. His death may have been suicide, although he might have simply been hungry, as the poison he ingested was a paste smeared on bread crusts to attract rats.

> After being ingested, phosphide reacts with hydrochloric acid in the stomach, generating phosphine, a highly toxic gas. One of the symptoms of phosphine poisoning is pulmonary oedema, an accumulation of large amounts of liquid in the lungs, which would satisfy the need for a body that appeared to have died by drowning. Purchase explained, "This dose was not sufficient to kill him outright, and its only effect was to so impair the functioning of the liver that he died a little time afterwards". When Purchase obtained Michael's body, it was identified as being in suitable condition for a man who would appear to have floated ashore several days after having died at sea by hypothermia and drowning.

[1] https://en.wikipedia.org/wiki/William_Martin_(Royal_Marines_...

> Part of the wider Operation Barclay, Mincemeat was based on the 1939 Trout memo, written by Rear Admiral John Godfrey, the director of the Naval Intelligence Division, and his personal assistant, Lieutenant Commander Ian Fleming

Wonder if we'll ever see it in a Bond movie.

Not a Bond one, but there's already a movie about it: https://www.imdb.com/title/tt1879016/
The book by Ben Macintyre, "Operation Mincemeat: How a Dead Man and a Bizarre Plan Fooled the Nazis and Assured an Allied Victory", is very good.

https://www.goodreads.com/book/show/7632329-operation-mincem...

That's not what gp was talking about.
Why did you say this to me, and not also to the other person who came after me? The fact that the other person is also upvoted kind of puts your argument to rest. Being contrary isn’t a great argumentative style on HN. You have no way of knowing what GP meant.

https://news.ycombinator.com/item?id=45088082

Because the other person hadn't said their piece by the time I said this, and because I stand by the fact that it's simply wrong to conflate leaving a body from England to deceive an enemy about the intended invasion location of an operation (regular deception, no cryptographic purpose) with trying to provoke the enemy into using a known plaintext to help break their code, which I find a very interesting concept. For what it's worth, I also downvoted the other comment yesterday, and the third comment today. I'm frankly astonished so many people are conflating what are, imo, clearly different ideas.

I appreciate your edit that completely replaced the topic of your post; it is now much more interesting. But unfortunately, I could not edit my comment by the time I saw you had changed it

> For what its worth, I also downvoted the other comment yesterday

Seems like you just don’t like me. Sounds like motivated reasoning to me. But I thought you meant (my) other comment, not theirs. I think it’s possibly an issue with tone being hard to read in text. In any case, I try to add a correction instead of simply calling out mistakes, but you were right to say whatever you thought. I don’t mean to silence you, but your words had a chilling effect on my speech, so maybe give some reasoning and a correct answer next time instead of just calling someone wrong. Anyone can do that, and they too often do.

At least now I know it’s due to that argument being kind of a weak one. I thought they were concerned with the notes especially, which is why I included that reference because it specifically referred to notes. I think there may be other WW2 examples, but I couldn’t lay hand to them at the time.

> I appreciate your edit; it is now much more interesting.

I appreciate you saying that. I don’t mean to assume you don’t like me, but it seemed that way at the time you said it. Apologies for assuming, and for any offense caused.

Edit: For what it’s worth I didn’t downvote you either time, and in fact I upvoted the comment this one is in reply to.

I can’t edit this anymore, but you are correct.

> (regular deception, no cryptographic purpose)

That is a very good distinction with a difference, and you were right to elucidate this; I only wish you had done it in your original reply to me. In any case, my stream of consciousness post above was in haste, and I think we were both editing at the time. I will try to post better. I wonder if folks are copy posting me? I honestly can’t say.

https://en.wikipedia.org/wiki/Copypasta

https://knowyourmeme.com/memes/copypasta

It was more of an allusion than a reference, but expectations in communication ought to be acknowledged and accommodated, so I apologize if you misunderstood my point, as it wasn't clear from context. Please see my edit.

(My prior comment referenced Operation Mincemeat at the time of its reply, for those reading after the fact.)

ETA: Note that I appear to have been mistaken about the connection to ENIAC.

Note that it is equally dangerous to send paraphrased messages using the same key (which is called sending messages "in depth"). This was used to crack the Lorenz ("Tunny") cipher. Interestingly, Bletchley Park hadn't gotten their hands on a Lorenz machine; they cracked it based on speculation. And it led to the development of the first tube computer, Colossus (which influenced the ENIAC). Nowadays we use nonces to avoid sending messages in depth, but nonce reuse can be similarly disastrous for systems like AES-GCM. For example, there have been Bitcoin hardware wallets that reused nonces, allowing the private key to be extracted and the Bitcoin stolen. (To be clear, cryptocurrencies and AES-GCM are completely different systems that have this one property in common.)

https://en.wikipedia.org/wiki/Cryptanalysis_of_the_Lorenz_ci...

https://www.youtube.com/watch?v=Ou_9ntYRzzw [Computerphile, 16m]
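As a rough illustration of why nonce reuse is so damaging (a minimal sketch assuming Python's third-party `cryptography` package; the messages are made up): with AES-GCM, reusing a nonce means reusing the CTR keystream, so XORing two ciphertexts cancels the keystream and leaves the XOR of the two plaintexts.

    # Minimal sketch: nonce reuse with AES-GCM leaks plaintext1 XOR plaintext2,
    # because the underlying CTR keystream is identical for both messages.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)               # reused below -- the mistake being illustrated

    pt1 = b"ATTACK AT DAWN.."            # 16 bytes, hypothetical messages
    pt2 = b"WEATHER  CLOUDY."            # same length, for a clean comparison

    aes = AESGCM(key)
    ct1 = aes.encrypt(nonce, pt1, None)  # output is ciphertext || 16-byte tag
    ct2 = aes.encrypt(nonce, pt2, None)

    # XOR of the ciphertext bodies equals XOR of the plaintexts: the keystream cancels.
    xor_cts = bytes(a ^ b for a, b in zip(ct1[:-16], ct2[:-16]))
    assert xor_cts == bytes(a ^ b for a, b in zip(pt1, pt2))

    # An attacker who knows (or guesses) pt1 now reads pt2 directly.
    print(bytes(x ^ p for x, p in zip(xor_cts, pt1)))   # b'WEATHER  CLOUDY.'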

As an aside, does anyone know why it's called "in depth"? I'm guessing that it's related to Bletchley Park's penchant for naming things after fish? But possibly also their techniques that involved arranging messages together and sliding a stencil over them to visually spot patterns (so they're sort of overlaid)? I tried some casual searching, but it's a very generic phrase and so difficult to search. It's defined in the 1944 Bletchley Park Cryptographic Dictionary, but that doesn't give an etymology.

https://www.codesandciphers.org.uk/documents/cryptdict/crypt... [Page 28]

As I point out now and then, Colossus was not a computer. It was a key-tester, like a Bitcoin miner. Here's the block diagram of Colossus.[1]

Before there were general-purpose stored program digital computers, there were many special-purpose computing devices. They checked some, but not all, of those boxes.

- IBM had electronic arithmetic in test before WWII, but that went on hold during the war. Mechanical arithmetic worked fine, although slowly, and by 1939, Columbia University and IBM had something that looked vaguely like a programmable computer, built from IBM tabulator parts.

- The G.P.O. (the UK's post office and telephony provider) had been fooling around with electronic switching since 1934. That's where Tommy Flowers, who designed the electronics of Colossus, came from.[2] He had a tough life. After the war, he wanted to get into computers, but couldn't get funding because he couldn't talk about what he'd done for security reasons.

- Memory was the big problem. Colossus just had some registers, built from tubes. And plugboards, the ROM technology of the 1930s and 1940s. Useful memory devices were all post-war. You needed storage to get to stored-program computers.

[1] https://www.researchgate.net/figure/Logical-architecture-of-...

[2] https://en.wikipedia.org/wiki/Tommy_Flowers

"Colossus was not a computer. It was a key-tester,"

The original definition of computer was basically a person wot computes (analyzes data and performs arithmetic and so on). That would have mostly involved pencil and paper, fag packets and napkins. IT co-opted the term for their devices, many years later.

What is your issue with Colossus performing automated computations/analysis given some inputs of some sort and hence being described as a computer?

One of the earliest modern day IT related truisms is "garbage in/garbage out" - that dates back to at least getting the clipper out on the cards. Can that notion be applied to Colossus or rather is Colossus the sort of device that gi/go might refer to?

What exactly is a computer?

I think the gp was confused with other devices. Colossus was indeed a computer by most definitions. I think the poster mixed it up with the Bombe or other systems - not surprising, because Colossus wasn't really known for many years. (It was secret into the 1970s iirc.)

Other devices would calculate but not store instructions. The common ones you see are the fire directors on naval ships, which were analog “computers”, but single purpose.

By "computer" I mean what we call a computer today - a stored program general purpose electronic digital computer.

There were many early machines which checked some, but not all, of those boxes. IBM's electronic multiplier. The Harvard Mark I. The SSEC. Colossus. Reservisor. Western Electric Plan 55-A. General Railway Signal's NX. The Bell Labs Complex Calculator. The Automatic Odds option for racetrack totalizators. The Mathatron. All of those machines did something that resembled computation.

The late 1940s, 1950s, and 1960s were full of strange special-purpose electronic digital hardware that didn't quite make it to a computer, because the parts count to get to a general purpose machine was too high.

Then came microprocessors, and it became cheaper to use general purpose microprocessors in dedicated applications. Now all those weird machines are forgotten.

Here's a brochure from Teleregister, which built custom special purpose systems for railroads, the military, airlines, stock exchanges, and such, from before WWII into the 1960s. There's no computer in those things, but a lot of electronics.

The Harvard Mark I and its successors, and especially the IBM SSEC were "stored program general purpose computers".

Mark I was an electro-mechanical computer, while the SSEC was a hybrid, including both electro-mechanical parts and parts with vacuum tubes. For a few years, IBM's SSEC was the world's most powerful "supercomputer", and it solved a great number of diverse problems. The SSEC had some advanced features that were introduced in fully electronic computers only about a decade later, e.g. pipelined instruction execution (to compensate for its slow circuits).

What you mean is that none of your examples was a von Neumann computer, i.e. where there is a common memory for program storage and data storage, enabling the computer to create or modify programs by itself.

Obviously the common memory was an essential element for the evolution of electronic computers, enabling many features that were impossible when the programs were stored separately, on a ROM such as punched tape.

However, saying just "stored program" also covers the case when the program is stored in a separate ROM, as it may still be the case for a microcontroller, though nowadays most of them store the program in an alterable flash memory.

I visited the Bletchley Park museum this summer when in London. Can recommend, and it's also really easy to get there; just a 50-minute train ride from London Euston station and a 5-minute walk to the museum. The entire family enjoyed the museum (have two teenage kids). There is also the "National Museum of Computing" located next to it, which contains the Bombe, Colossus, and related equipment. As I understand it, most (or all?) of the original hardware was destroyed after the war to avoid leaking any information about British code-breaking skills. Thus, the machines on display are replicas, but should be fully working.

The computer museum also exhibits post-war computers all the way to modern machines. I'd say that museum is more for the geeks while the Bletchley Park museum is definitely worth a visit even if you're not into computers.

A personal Bletchley Park anecdote: my grandfather, an electrical engineer, staffed a radio listening station during the war, and every evening a motorcycle dispatch rider would take the day’s intercepts away to a secret location. It was more than 20 years before my grandfather figured out they went to Bletchley.

In the 1980s the Bletchley museum project put out a call for wartime electrical components so they could build their Colossus replica. My grandfather in the 1950s had made a chain of Christmas tree lights from tiny government-issue lightbulbs he pinched from work. He painstakingly removed the nail polish he had painted them with 30 years earlier, and sent them to Bletchley. They used his family Christmas lightbulbs in the replica that is still there today.

I had the privilege of touring the museum with him in the 1990s. Also on that day I heard my grandmother’s stories of her time in the British Army during the war. That day was incredibly interesting and moving, and is an important memory for me.

What an incredible story, thank you for sharing.
At the end of the '90s, some parts sent to the Russian Mir space station had been found and bought at a flea market - these parts had been pinched from work, and their production had ceased during those years of collapse in the USSR/Russia.
Those parts really belong in a museum somewhere because they are an important part of history, irrespective of politics.

What happened to them?

Oh that’s delightful! I love how contingent these things can be.
I recall from my own visit that the electrical transformers are supposedly original. So, the National Museum of Computing justifies calling its Colossus a rebuild rather than a replica, since it is made with some original parts.
An interesting quirk in Ethereum is that a contract address is determined by deployer address + nonce. So, you can send ETH to a contract that does not exist, then later deploy a contract there and recover it.
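For the curious, a small sketch of that derivation (assuming the third-party `rlp` and `pycryptodome` packages; the deployer address below is made up): a plain CREATE address is the last 20 bytes of keccak256(rlp([deployer, nonce])), which is why the same deployer and nonce give the same contract address on any fork.

    # Sketch: Ethereum CREATE contract addresses depend only on deployer + nonce.
    import rlp                                   # pip install rlp
    from Crypto.Hash import keccak               # pip install pycryptodome

    def create_address(deployer_hex: str, nonce: int) -> str:
        deployer = bytes.fromhex(deployer_hex.removeprefix("0x"))
        digest = keccak.new(digest_bits=256, data=rlp.encode([deployer, nonce])).digest()
        return "0x" + digest[-20:].hex()         # last 20 bytes of the hash

    # Hypothetical deployer address, for illustration only.
    print(create_address("0x" + "11" * 20, nonce=0))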
The address is also the same on many forks of Ethereum, which has led to some strange circumstances: Optimism once sent tens of millions of dollars to a smart contract address on the wrong blockchain, and a hacker was able to deploy a contract they controlled at that same address on the chain the funds were accidentally sent to, and steal them.
Do you have a link to read more about this?
https://gov.optimism.io/t/message-to-optimism-community-from...

I've never seen a corporate announcement whipsaw from technical incident report to tentative job offer to a threat paraphrasing the IRA before but I guess that's because I don't spend time in the cryptoasset community.

Bug or feature. Could it have been a transfer of funds organized to look like a hack?
If you model the distribution of messages as a tree from sender to recipients, the key's reuse across messages could be measured as "depth" in a structural sense.
My assumption about “in depth” is that it comes from the idea of giving the adversary a greater depth of material to work with. I don’t have anything to back this up.
This is the first I’ve heard of Colossus influencing the ENIAC. I was under the impression that Colossus was so secret that ENIAC was designed independently and (falsely) touted as the first tube computer prior to Colossus’ existence being declassified. I’m not sure if I’m misremembering that though.
The ENIAC seems to be the first general purpose electronic digital computer. It wasn't stored program, though - no good memory devices. Plugboards and lots of rotary switches. Took hours to load a new program. Unrelated to Colossus.

The first machine to have it all was the Manchester Baby.[1] Now this really was sort of a descendant of Colossus, with some of the same people involved. It was mostly a test rig for the Williams Tube memory device.

Once there was something that could do the job of RAM, things took off quickly. Within two years there were quite a number of stored program electronic digital computer projects. Electronic arithmetic worked fine, but everybody had been stuck on the memory problem.

[1] https://en.wikipedia.org/wiki/Manchester_Baby

Several earlier electro-mechanical computers were closer to a modern computer than ENIAC from the point of view of program storage, as they had a clearly defined instruction set and the programs were stored on punched tape (taken from teletype machines).

With ENIAC, reconfiguring the computer for solving a new problem was done essentially in the same way as for an analog computer (or nowadays for an FPGA), by rewiring the connections between the arithmetic units, the storage registers and the control sequencers, so that ENIAC will solve the new problem when powered on.

The resemblance of ENIAC to an analog computer is not an accident: its architecture was conceived as an electronic substitute for the electro-mechanical analog computers known as "differential analyzers", which had been in widespread use both before and during WWII for computing solutions of systems of differential equations found in various engineering problems, including many of military importance.

On the other hand, the Harvard Mark I had been inspired by Babbage's proposal for a digital computer with a stored program, hence its architecture was much closer to that of modern digital computers.

While ENIAC had an architecture inspired by the mechanical differential analyzers, for the schematics of its electronic arithmetic and register circuits it used some information from the designers of the earlier Atanasoff-Berry Computer, which was a special-purpose electronic computer for solving systems of linear algebraic equations, and which even included the first DRAM memory (the second DRAM memory would be the British Williams CRT).

Any good books to recommend on computer history?
Colossus did not influence ENIAC.

However, there is a connection between British electronics and ENIAC - the same kind of connection, happening in parallel, as the one between earlier British electronics and Colossus.

During the decade before WWII, several fundamental circuits of digital electronics had been invented in the UK, e.g. several kinds of electronic counters and the Schmitt trigger.

Those circuits have been invented mainly for use in experiments of nuclear physics and elementary particle physics, e.g. for counting events from radiation detectors, for which the existing mechanical counters and accumulators were too slow. The first digital electronic circuit, the Eccles-Jordan trigger, had also been invented by British physicists, but another decade earlier, at the end of WWI.

The British digital electronic circuits were a source of inspiration for the circuits used in the first (special-purpose) digital electronic computer, the Atanasoff-Berry Computer, which was built at Iowa State University immediately before WWII (the published British research papers were explicitly quoted in the ABC design documents).

In turn, the digital electronic circuits used in the Atanasoff-Berry Computer were a source of inspiration for those used in ENIAC, because a member of the Mauchly-Eckert team had visited the designers of the ABC, inquiring about its components, even though they later did not credit any source of inspiration for the ENIAC design. (The Mauchly-Eckert team founded a startup for making electronic computers, so they were wary of providing any information that would make their work appear less original and not patentable. They were also extremely annoyed by the publication of the von Neumann report, which explained for everyone how to make an electronic computer, and so very quickly created a great number of competitors for the company of Mauchly and Eckert.)

I think you're right, my mistake. I didn't find anything definitive but given they were developed around the same time by (on cursory inspection) different people and that Colossus was as secret as you say (it wasn't declassified until the 70s), it does seem unlikely. I thought that had been mentioned in a Computerphile/Numberphile video on the topic but I must be mistaken.
Interesting. I liked the explanations in the accepted answer. This rule especially: "Never repeat in the clear the identical text of a message once sent in cryptographic form, or repeat in cryptographic form the text of a message once sent in the clear."

As a child I learned about codes from a library book. Fascinated with one-time pads, I convinced a friend to try a correspondence. We exchanged a few messages, and then got bored, because the juice wasn’t worth the squeeze.

Which makes me wonder about people who work in secrets. Encrypted communications seem opposite of scientific communications. Secrets peeps seem prolly aligned to politics.

Do you remember the book? I remember loving Alvin's Secret Code, which was on the bookshelf in my fourth-grade classroom where I sat in the back to be near the bookshelf...
Sorry, no. But it would have been a 70s or 80s publication. I recall there were several Cold War code stories, so it might have been on this subject. Like popular history stories, one after the other—you thought that was crazy? Check out this hollow nickel! But all very serious like.
>> the juice wasn’t worth the squeeze

I recall that Ovaltine goes better with decoded messages.

i recall squeezing lemons to write invisible messages...
A crummy commercial!?
Son of a bitch!
"... two minutes into that Ovaltine thing and I just couldn't take it anymore."
> Never repeat in the clear the identical text of a message once sent in cryptographic form, or repeat in cryptographic form the text of a message once sent in the clear

And (more or less) that’s how the Enigma was cracked. Turns out starting weather report with ‘weather’ every single time is not a good idea.

Or ending it with the same salute involving the name of the leader, for that matter.
Standard US cryptographic protocol during the same time period was to begin and end every message with a few random words specifically to thwart such attacks.
Which accidentally caused a bit of a rift in the US Pacific Admiralty: https://en.wikipedia.org/wiki/The_world_wonders
Seems like an interesting conundrum. If you encrypt all transmissions, you end up having a lot of boring repetition, like weather and sign offs to just fill space. But if you don't encrypt the boring stuff, then the transmission itself is a nice signal of something interesting about to happen. But if you try to just pad with completely random noise, the other end might worry they've decoded something wrong and ask for a new cipher pad increasing the chance of interception. So maybe they should have tried to find something almost random but with known structure instead of sending the weather? Seems similar to how we now know that choosing a random password from the dictionary adds encoding redundancy without reducing security. Or similar to the goal of getting ordinary people to use Tor for ordinary things?
In modern crypto it's solved by using a random nonce to start with and by including an (encrypted) hash of the data at the end. The random nonce gives you different ciphertext for the same inputs; the hash tells you whether you actually decrypted what was intended.
Isn't that why we have PFS now?
No, PFS is to ensure communications aren't compromised even if the server's private keys are compromised afterwards. It has nothing to do with mitigating known plaintext attacks. That's already mitigated with techniques like randomized IVs.
So-called perfect forward secrecy uses temporary keys so that eavesdropped logs can't be decrypted after those keys are discarded. To prevent known-plaintext attacks and/or statistical analysis, data entropy must be equalized so that patterns won't be apparent even before encryption.
No - our actual encryption primitives work better, and don't suffer from this problem. (Other comments give an explanation of what PFS is actually for).
For people interested in these kinds of things, there is a very interesting military manual on the Internet Archive which goes through all the various pre-computer pen-and-paper ciphers and how to crack them.

1. https://archive.org/details/Fm3440.2BasicCryptAnalysis/mode/...

Good find; a great companion to the GCHQ Puzzle Book indeed!
The term to google for more information about this would be Known plaintext attack.
Oh, that makes sense. I wrongly assumed it was going to be about prisoners sending secret messages in their letters home, and the guards wanting to scramble those out.
I clicked thinking it was about avoiding watermarks when exfiltrating data. I enjoyed the cryptography lesson I got instead.
And the term for _that_ is steganography
This is a familiar concept from reading about WW2 spy stuff (Between Silk and Cyanide, for example, which I highly recommend). But what REALLY intrigues me is the typeface of the letter with its upper-case 'E' used in place of 'e'. What's up with that?
That is peculiar. A brief internet search turned up a Reddit post where someone had a sample of typed text with the same odd typography: https://www.reddit.com/r/typewriters/s/f2CIY0TCm3

The suggestion that it may have been a striker from a bilingual Cyrillic typewriter that was mixed in is an interesting possibility; someone transcribing diplomatic telegrams in WWII may indeed have had need of access to Cyrillic typewriters…

Interesting idea, but both the Cyrillic and Greek capital E would be a similar size to the Latin capital E. And in both alphabets the lower case e doesn't look like a smaller capital E. It's е/ε.
Might be unrelated in this example, but when a message is written in a lazy ROT13-like cypher, the letter e becomes a notorious rat that allows anyone to break the entire thing in very little time.

Randomizing/obfuscating the letter case might buy you a little time, though I think it's something else entirely here.

Zvtug oR haeRyngRq va guvf RknzcyR, ohg juRa n zRffntR vf jevggRa va n ynml EBG13-yvxR plcuRe, guR yRggRe R oRpbzRf n abgbevbhf eng gung nyybjf nalbaR gb oeRnx guR RagveR guvat va iRel yvggyR gvzR.

Enaqbzvmvat/boshfpngvat guR yRggRe pnfR zvtug ohl lbh n yvggyR gvzR, gubhtu V guvax vg'f fbzRguvat RyfR RagveRyl uReR.

V guvax gur vqRn jnf gb fcyvg guR uvtu seRdhrapl "r" gb gjb qvssReRag flzobyf r naq R ng yRffRe serdhrapvRf. Fvzcyl ercynpvat nyy r'f jvgu R qbrfa'g qb gung.
ChatGPT was able to decrypt this in about 12 seconds with no context, which I found interesting.
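For anyone who wants to see the frequency analysis in action, here's a tiny plain-Python sketch (the sample sentence is made up): in a ROT13 ciphertext the most common letter is almost always the image of 'e', which hands you the shift.

    # Sketch: frequency analysis against a ROT13-style cipher.
    from collections import Counter
    import codecs

    plaintext = "secret messages seem even better when the letter e keeps repeating everywhere"
    ciphertext = codecs.encode(plaintext, "rot_13")

    counts = Counter(c for c in ciphertext if c.isalpha())
    most_common = counts.most_common(1)[0][0]    # very likely the image of 'e'
    shift = (ord(most_common) - ord("e")) % 26
    print(most_common, shift)                    # 'r', 13 -> guess ROT13 and decode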
I had the same question about the upper case E.

Some of the E's look a little curly like epsilons but I'm guessing that may be an optical illusion.

But check out the 3 in "chancE3"

Legibility would be my guess. Can't confuse ᴇ for c.
If we're guessing I have ideas:

1) it's just the typeface,

2) the teletype machine has a unique letter so the machine it was received on is known (and hence which staff received it), reducing the ability to forge messages. Different machines could have had special letters, or all machines handling secrets had that particular "e"??

3) the machine broke and the repair shop only had a small-caps "E" handy.

I assume this is a typed up decrypt - not raw teletype output. Teletype would be all caps; this has been typed, capitalized, and laid out by a typist.
The document on the picture was for sure typed on a typewriter. Teletype machines would either be all caps or all lower case. Also they wouldn't be able to print a multi column header like on top of the document.
The repeating of the message is how the Allies initially broke the Geheimschreiber, a much more secure encryption machine than Enigma, which used XOR and rotors:

https://en.wikipedia.org/wiki/Siemens_and_Halske_T52

This is interesting in itself, because had the Germans gotten hold of this widely distributed memo, it might have tipped them off as to how their Enigma system might be attacked.
I was trained, with regard to real-time control systems, to put salt in the messages to reduce repetition. Many systems just repeat a status or number, from which you could more easily get the keys. Never knew if it was a real concern or not. Interesting to see from the post and comments how old a concept this is. With today's encryption, is this still a concern?
> In this process, deletion rather than expansion of the wording of the message is preferable, because if an ordinary message is paraphrased simply by expanding it along its original lines, an expert can easily reduce the paraphrased message to its lowest terms, and the resultant wording will be practically the original message.

This bit has me perplexed. If you had a single message that you wanted to send multiple times in different forms, wouldn't compressing the message exponentially limit possible variation whereas expanding it would exponentially increase it? If you had to send the same message more than a couple of times I'd expect to see accidental duplicates pretty quickly if everyone had been instructed to reduce the message size.

I guess the idea is that if the message has been reduced in two different ways then you have to have removed some information about the original, whereas that's not a guarantee with two different expansions. But what I don't understand is that even if you have a pair of messages, decrypt one, and manage to reconstruct the original message, isn't the other still encrypted expansion still different to the original message? How does that help you decrypt the second one if you don't know which parts of the encrypted message represent the differences?

It's mostly talking about the case where someone receives an encrypted message which is intended to later be published openly. If it was padded by adding stuff, an attacker can try to reconstruct the original plaintext by removing the flowery adjectives, whereas if things were deleted the attacker doesn't know what to add.
In particular, the length of a message is not encrypted when encrypting the text. So if the encrypted message is shorter, you know exactly how much to remove to get back the original, and then just need to guess what to delete. If the message is longer, it is much harder to guess whether to add flowery adjectives, a new sentence, change a pronoun for a name, or some other change.
I was able to find the 2 earlier manuals mentioned:

RadioNerds: TM 11-485 (PDF, 33.22 MB)

Internet Archive: US Army Cryptography Manuals Collection (see "TM_11-485.pdf")

https://radionerds.com/index.php/File:TM_11-485.pdf

https://archive.org/details/US-Army-Cryptography-Manuals

Haven't known invariants been used to break modern encryption in TLS, etc.? Like an SSH packet will always contain some known info, etc.
In some systems, sort of. The ESP32 encryption has a bizarre implementation where adjacent blocks in counter mode reuse the same nonce, so knowing the structure of the plaintext can directly reveal the content of some blocks.
I'm not sure why drum55's answer is buried but they're correct that the Nonce concept in modern crypto addresses this issue.
It's not only the nonce. The nonce helps ensure that a re-encrypted message doesn't have the same ciphertext, but the known plaintext can still be used to forge messages. What stops message forgery is the message tag that TLS has (using AEADs like AES-GCM or ChaCha20Poly1305).

That said, the nonce is still very important to avoid most key recovery attacks
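A minimal sketch of that point (assuming Python's third-party `cryptography` package; the message and header are made up): the AEAD tag is what makes tampering or forgery detectable, independent of whether the attacker knows the plaintext.

    # Sketch: an AEAD's authentication tag rejects any modified ciphertext.
    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()
    nonce = os.urandom(12)
    aead = ChaCha20Poly1305(key)

    ct = aead.encrypt(nonce, b"retreat at once", b"header")
    tampered = bytes([ct[0] ^ 0x01]) + ct[1:]    # flip one bit of the ciphertext

    try:
        aead.decrypt(nonce, tampered, b"header")
    except InvalidTag:
        print("forgery rejected")                # tag check fails, message is dropped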

Yeah the real answer here is that this is what AEADs are for.
Probably because that's the user's only comment. I've vouched for it.
The first thought that came to my mind when I read this article's header was about Chat Control and the Telegram IM. Then I saw the history.stackexchange...

And the realization is: it's really nice that nowadays we have telegrams that are safer than they were during WW2, for example, even with the military infrastructure available back then...

Or maybe we did have?

What an interesting find!

Not that this specific quirk is covered in the novel, but a reading of Neal Stephenson's Cryptonomicon would certainly help make one understand the kind of necessary paranoia that would lead to this kind of (important!) protective measure.

Does this also apply if someone were to do the following: receive an encrypted transmission -> decrypt it -> need to pass it on, so re-encrypt it and pass it on?

I would imagine that the paraphrasing wouldn't be necessary in this case, because it isn't quite as useful to compare two encrypted versions of the text versus an encrypted version and an unencrypted version (also, I feel like there is some risk of a game of 'telephone', in that the meaning would change bit by bit to the point of having a different meaning over time, even if not intentionally)

No. As explained in the SO answer, the worry is that the enemy will have been able to decrypt one or the other of your messages, at which point the identical underlying plaintext will help them crack the second cypher.
‘Crack the cipher’ in this case most likely means: figure out the daily code-word key you are using for that cipher.

If they have already gained the ability to decrypt today's messages from station A in cipher A, and can therefore recover the plaintext of those messages, and if they then find a message of the same length sent from station B in cipher B, they can guess that it might be the same message, reverse-engineer the key, and maybe then decrypt all the messages being sent from station B in cipher B today.

Bletchley Park employed linguists alongside cryptographers, and the linguists would help permute the messages (substituting German words for common abbreviations, for example) to mount these sorts of attacks.
So it would make sense for the first message in a chain to be very verbose and repetitive, to make it easier to modify down the chain. Bureaucrats must've had fun writing those.
Repetitive and verbose but make sure you don’t use up all the synonyms for a concept, right? Everything you use is taken from your paraphraser.
Ironically, stating this at the beginning of a telegram would precisely cause what it seeks to prevent (vulnerability to known-plaintext attacks).

Which makes me wonder: how many permutations of this rule could be conceived (and needed) that on the one hand would keep the point clear to the receiver, but on the other hand prevent such attacks?

In any case the best option is to not have (to repeat) this rule inside messages.

It could be sent in the clear, although since the point was to apply it to every encrypted message, that would likely already have been redundant with having originally been encrypted. Just consider it part of the decryption algorithm itself instead: step 1, attach warning text, step 2, initialize decryption state and decrypt.
As close to the original as possible not using the same phrasing? Obviously?
Lowercase E is unusual in the text. Is it a special teletype font?
This reminds me of similar discussions we've been having about this topic. The key challenge I see is implementation at scale.
Tangentially related — sending everyone in a company a slightly different document can help catch the person leaking confidential documents to the press.
Tyrion did that thing in GoT (fictional btw) - https://www.reddit.com/r/gameofthrones/comments/45256e/s2e3_...
Fictional, but based on real approach.
That’s what I thought it was going to be.
Knowing the original plaintext is a big leg up in cracking encryption.
Known-plaintext attacks aside, if you're going to compress text, it must be done before encryption.

I don't know if compression offers much protection against plaintext attacks.

This also makes me wonder how helpful AI is in such situations. AI is essentially an extremely effective, lossy compression algorithm.

Compression + encryption can be dangerous if the compression rate is exposed somehow (between messages or within packets of a message).

> we show that it is possible to identify the phrases spoken within encrypted VoIP calls when the audio is encoded using variable bit rate codecs

https://crypto.stackexchange.com/a/2188

See also https://breachattack.com/ when the plaintext is partially attacker-controlled.
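A toy illustration of that compression side channel (plain Python with zlib; the secret and guesses are made up): when attacker-influenced text is compressed together with a secret before encryption, the compressed length - and therefore the ciphertext length - shrinks when a guess matches the secret.

    # Sketch of the CRIME/BREACH-style length side channel.
    import zlib

    SECRET = "session_token=hunter2"     # hypothetical secret sent in every message

    def compressed_len(attacker_guess: str) -> int:
        body = f"{attacker_guess}&{SECRET}".encode()
        return len(zlib.compress(body, 9))

    for guess in ("session_token=aaaaaaa", "session_token=hunter2"):
        print(guess, compressed_len(guess))
    # The matching guess compresses better; encryption hides content, not length.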

If nothing else it would make a great twist in a fiction setting.

These paraphrasing instructions could be followed. But the paraphrasing could be done using some LLM. A sufficiently advanced adversary manages to invert the model somehow, and as a result can get the original plain text out of the paraphrased message, which lets them do a known-plaintext attack, get the key, and use it on other messages.

Sort of technobabble (is the idea of inverting an LLM nonsense?) but fun.

Well, that's one way to make it CPA-secure
I guess CBC and IVs (or similar) weren't invented yet?
How is this solved in modern cryptography?
Generally speaking, in two different ways: (a) cipher modes will usually use a combination of initialization vectors, block chaining, or an incrementing counter to perturb the encipherment of each block so that repeated data does not result in repeated ciphertext and (b) encrypted protocols will include a section to be filled with random "nonce" data so that repeatedly enciphering the same message will also result in different ciphertext, and might also add random padding so that the length of ciphertext can't be used to deduce the length of a particular message.
Modern cryptography solves this by using randomness (IVs, nonces, padding, salts) so that even identical plaintexts encrypt to different ciphertexts, eliminating predictable patterns.
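As a small sketch of the randomized-IV idea (assuming Python's third-party `cryptography` package; AES-CBC is just one example mode): the same plaintext encrypted twice under the same key yields unrelated ciphertexts because each message gets a fresh IV.

    # Sketch: fresh random IVs make repeated plaintexts produce different ciphertexts.
    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    msg = b"WEATHER REPORT FOLLOWS"

    def encrypt(plaintext: bytes) -> bytes:
        iv = os.urandom(16)                              # fresh per message, sent in the clear
        padder = padding.PKCS7(128).padder()
        padded = padder.update(plaintext) + padder.finalize()
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return iv + enc.update(padded) + enc.finalize()

    a, b = encrypt(msg), encrypt(msg)
    print(a != b)    # True: identical plaintexts, unrelated ciphertexts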
“Close” meant secret in the 1940s. A “close secret” was next to “top secret” classification.

See also the use of the word "close" in literature, e.g. The Lord of the Rings: "Gandalf is closer than ever".

To keep it close or to hold it close meant to keep it secret.

How come this isn't a problem with modern cryptography? What did we invent?
In short: cipher modes, IVs, nonces, and random padding.
You add a random number to the encryption key, and also send that random number (seed) as part of the message.

Boiled down to its very essence, modern cryptography is: using a secret seed plus a public seed, generate a long random number (of the same length as the message), then XOR that number with the message.

The hard part is generating that random number in such a way that you can not reverse the process and reclaim the secret seed.

Lookup "initialization vector" for more.

I didn’t know that. I read somewhere that they kept reusing expressions, and even the weather updates kicked off with the same words each time
LLMs would be amazing for this
Enigma 2.0 getting cracked due to the prevalence of the em dash.
I wouldn't put an LLM in the loop for anything that has security implications.