Cracking Random Number Generators using Machine Learning – Part 2: Mersenne Twister

Outline: 1. Introduction 2. How does MT19937 PRNG work? 3. Using Neural Networks to model the MT19937 PRNG 3.1 Using NN for State Twisting 3.1.1 Data Preparation 3.1.2 Neural Network Model Design 3.1.3 Optimizing the NN Inputs 3.1.4 Model Results 3.1.5 Model Deep Dive 3.1.5.1 Model First Layer Connections 3.1.5.2 The Logic Closed-Form from the State Twisting Model Output 3.2 Using NN for State Tempering 3.2.1 Data Preparation 3.2.2 …
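For reference while reading that outline: MT19937 produces outputs in two stages that the post models separately, twisting the 624-word internal state and then tempering each extracted word. A minimal Python sketch of those two standard steps, using the textbook MT19937 constants (not necessarily the exact code used in the post):

```python
N, M = 624, 397
MATRIX_A, UPPER, LOWER = 0x9908B0DF, 0x80000000, 0x7FFFFFFF

def twist_word(mt, i):
    """Compute the new value of state word i from the current 624-word state mt."""
    y = (mt[i] & UPPER) | (mt[(i + 1) % N] & LOWER)
    new = mt[(i + M) % N] ^ (y >> 1)
    if y & 1:
        new ^= MATRIX_A
    return new

def temper(y):
    """Map a twisted state word to the 32-bit output handed to the user."""
    y ^= y >> 11
    y ^= (y << 7) & 0x9D2C5680
    y ^= (y << 15) & 0xEFC60000
    y ^= y >> 18
    return y & 0xFFFFFFFF
```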

Cracking Random Number Generators using Machine Learning – Part 1: xorshift128

Outline: 1. Introduction 2. How does xorshift128 PRNG work? 3. Neural Networks and XOR gates 4. Using Neural Networks to model the xorshift128 PRNG 4.1 Neural Network Model Design 4.2 Model Results 4.3 Model Deep Dive 5. Creating a machine-learning-resistant version of xorshift128 6. Conclusion

1. Introduction: This blog post proposes an approach to crack Pseudo-Random Number Generators (PRNGs) using machine learning. By cracking …
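As background for that excerpt, a common formulation of the xorshift128 generator fits in a few lines of Python; each output is an XOR of shifted copies of the four 32-bit state words. This is a sketch of the standard Marsaglia variant and may differ in minor details from the exact code the post analyzes:

```python
MASK = 0xFFFFFFFF  # keep arithmetic within 32 bits

def xorshift128(state):
    """One step of xorshift128: state is four 32-bit words; returns (output, new_state)."""
    x, y, z, w = state
    t = x ^ ((x << 11) & MASK)
    t ^= t >> 8
    w_new = (w ^ (w >> 19)) ^ t
    return w_new, [y, z, w, w_new]

# Usage: generate a few outputs from an arbitrary nonzero seed state.
state = [0x12345678, 0x9ABCDEF0, 0xDEADBEEF, 0xCAFEBABE]
for _ in range(3):
    out, state = xorshift128(state)
    print(hex(out))
```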

On Linux’s Random Number Generation

I have been asked about the usefulness of security monitoring of entropy levels in the Linux kernel. This calls for some explanation of how random generation works in Linux systems. So, randomness and the Linux kernel. This is an area where there is longstanding confusion, notably among some Linux kernel developers, including Linus Torvalds himself. …
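For readers who want to look at the value that post discusses: the kernel exposes its entropy estimate through /proc/sys/kernel/random/entropy_avail (on recent kernels this typically reads a fixed 256 once the CRNG is initialized). A minimal sketch of reading it, assuming a Linux host:

```python
def entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's current entropy estimate as an integer."""
    with open(path) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    print(f"entropy_avail: {entropy_avail()}")
```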