
Wednesday, 14 January 2026

🔓 New Preprint: Deterministic Preimage Localization for MD5 and SHA-256

๐๐ž๐ฐ ๐๐ซ๐ž๐ฉ๐ซ๐ข๐ง๐ญ ๐ƒ๐ž๐ญ๐ž๐ซ๐ฆ๐ข๐ง๐ข๐ฌ๐ญ๐ข๐œ ๐๐ซ๐ž๐ข๐ฆ๐š๐ ๐ž ๐‹๐จ๐œ๐š๐ฅ๐ข๐ณ๐š๐ญ๐ข๐จ๐ง ๐Ÿ๐จ๐ซ ๐Œ๐ƒ5 ๐š๐ง๐ ๐’๐‡๐€-256. Trauth Research


Hash functions such as MD5 and SHA-256 are widely assumed to be one-way: given a hash, the corresponding preimage is considered computationally irretrievable. This assumption underpins modern cryptographic security.
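For context, the forward direction is trivial with standard tooling; the one-wayness assumption concerns only the reverse. A minimal Python illustration of that usage model (the example password is illustrative, not one of the test cases):

```python
# Forward hashing is cheap and fully specified; the standard library exposes
# no inverse operation. Recovering the input is the "preimage" problem.
import hashlib

password = b"example-password"  # illustrative input, not from the preprint
md5_digest = hashlib.md5(password).hexdigest()
sha256_digest = hashlib.sha256(password).hexdigest()

print("MD5:    ", md5_digest)
print("SHA-256:", sha256_digest)
```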


In a new preprint, I report deterministic, reproducible preimage localization for both MD5 and SHA-256.

Crucially, this is not achieved by using a neural network to invert hashes.
Instead, a maximally unconventional architecture is constructed in which the available information is forced to self-organize into a geometric structure.
The neural network acts solely as a substrate that allows this information geometry to form.

Across dozens of controlled test cases using fictional passwords (up to 23 characters), the resulting geometry enables up to 100% byte-level accuracy.
The behavior is deterministic and repeatable across independent runs, ruling out chance effects.
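For clarity, "byte-level accuracy" can be read as the fraction of byte positions recovered correctly. The sketch below makes that reading concrete; it is an assumed metric for illustration, not the scoring code used in the preprint.

```python
# Hypothetical byte-level accuracy metric, shown only to make the reported
# figure concrete; the preprint's actual scoring procedure may differ.
def byte_accuracy(reconstructed: bytes, true_preimage: bytes) -> float:
    """Fraction of byte positions where the reconstruction matches the true preimage."""
    if not true_preimage:
        return 0.0
    matches = sum(
        1 for i, expected in enumerate(true_preimage)
        if i < len(reconstructed) and reconstructed[i] == expected
    )
    return matches / len(true_preimage)

# Fictional examples (1.0 corresponds to the reported 100% case):
print(byte_accuracy(b"fictional-pass", b"fictional-pass"))  # 1.0
print(byte_accuracy(b"fictional-paXX", b"fictional-pass"))  # ~0.857
```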

๐Š๐ž๐ฒ ๐ž๐ฆ๐ฉ๐ข๐ซ๐ข๐œ๐š๐ฅ ๐จ๐›๐ฌ๐ž๐ซ๐ฏ๐š๐ญ๐ข๐จ๐ง๐ฌ:
• ๐˜™๐˜ฆ๐˜ฑ๐˜ณ๐˜ฐ๐˜ฅ๐˜ถ๐˜ค๐˜ช๐˜ฃ๐˜ญ๐˜ฆ ๐˜ฑ๐˜ณ๐˜ฆ๐˜ช๐˜ฎ๐˜ข๐˜จ๐˜ฆ ๐˜ญ๐˜ฐ๐˜ค๐˜ข๐˜ญ๐˜ช๐˜ป๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜ง๐˜ฐ๐˜ณ ๐˜”๐˜‹5 ๐˜ข๐˜ฏ๐˜ฅ ๐˜š๐˜๐˜ˆ-256
• ๐˜œ๐˜ฑ ๐˜ต๐˜ฐ 100% ๐˜ฃ๐˜บ๐˜ต๐˜ฆ-๐˜ญ๐˜ฆ๐˜ท๐˜ฆ๐˜ญ ๐˜ณ๐˜ฆ๐˜ค๐˜ฐ๐˜ฏ๐˜ด๐˜ต๐˜ณ๐˜ถ๐˜ค๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜ข๐˜ค๐˜ค๐˜ถ๐˜ณ๐˜ข๐˜ค๐˜บ
• 41.8% ๐˜ช๐˜ฏ๐˜ง๐˜ฐ๐˜ณ๐˜ฎ๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ ๐˜ฑ๐˜ฆ๐˜ณ๐˜ด๐˜ช๐˜ด๐˜ต๐˜ฆ๐˜ฏ๐˜ค๐˜ฆ ๐˜ข๐˜ค๐˜ณ๐˜ฐ๐˜ด๐˜ด 11 ๐˜ง๐˜ถ๐˜ญ๐˜ญ๐˜บ ๐˜ช๐˜ฏ๐˜ฅ๐˜ฆ๐˜ฑ๐˜ฆ๐˜ฏ๐˜ฅ๐˜ฆ๐˜ฏ๐˜ต ๐˜ณ๐˜ถ๐˜ฏ๐˜ด (๐˜ธ๐˜ฉ๐˜ฆ๐˜ณ๐˜ฆ ๐˜ค๐˜ญ๐˜ข๐˜ด๐˜ด๐˜ช๐˜ค๐˜ข๐˜ญ ๐˜ฆ๐˜น๐˜ฑ๐˜ฆ๐˜ค๐˜ต๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ๐˜ด ๐˜ฑ๐˜ณ๐˜ฆ๐˜ฅ๐˜ช๐˜ค๐˜ต ๐˜ป๐˜ฆ๐˜ณ๐˜ฐ)
• 66 ๐˜ญ๐˜ข๐˜บ๐˜ฆ๐˜ณ ๐˜ฑ๐˜ข๐˜ช๐˜ณ๐˜ด ๐˜ธ๐˜ช๐˜ต๐˜ฉ ๐˜ฑ < 0.001 ๐˜ด๐˜ช๐˜จ๐˜ฏ๐˜ช๐˜ง๐˜ช๐˜ค๐˜ข๐˜ฏ๐˜ค๐˜ฆ (≈70× ๐˜ฐ๐˜ท๐˜ฆ๐˜ณ ๐˜ฆ๐˜น๐˜ฑ๐˜ฆ๐˜ค๐˜ต๐˜ข๐˜ต๐˜ช๐˜ฐ๐˜ฏ)
• ๐˜๐˜ฏ๐˜ท๐˜ฆ๐˜ณ๐˜ด๐˜ฆ ๐˜ด๐˜ค๐˜ข๐˜ญ๐˜ช๐˜ฏ๐˜จ: ๐˜ญ๐˜ฐ๐˜ฏ๐˜จ๐˜ฆ๐˜ณ ๐˜ฑ๐˜ข๐˜ด๐˜ด๐˜ธ๐˜ฐ๐˜ณ๐˜ฅ๐˜ด ๐˜ข๐˜ณ๐˜ฆ ๐˜ฆ๐˜ข๐˜ด๐˜ช๐˜ฆ๐˜ณ ๐˜ต๐˜ฐ ๐˜ญ๐˜ฐ๐˜ค๐˜ข๐˜ญ๐˜ช๐˜ป๐˜ฆ, ๐˜ฅ๐˜ช๐˜ณ๐˜ฆ๐˜ค๐˜ต๐˜ญ๐˜บ ๐˜ค๐˜ฐ๐˜ฏ๐˜ต๐˜ณ๐˜ข๐˜ฅ๐˜ช๐˜ค๐˜ต๐˜ช๐˜ฏ๐˜จ ๐˜ฃ๐˜ณ๐˜ถ๐˜ต๐˜ฆ-๐˜ง๐˜ฐ๐˜ณ๐˜ค๐˜ฆ ๐˜ข๐˜ด๐˜ด๐˜ถ๐˜ฎ๐˜ฑ๐˜ต๐˜ช๐˜ฐ๐˜ฏ๐˜ด

These results indicate that hash irreversibility is not guaranteed.
The findings suggest that what is commonly treated as “one-wayness” reflects geometric obscuration, not destruction of information.

This work does not present an exploit toolkit or attack pipeline.
It reports a structural failure of assumed irreversibility under an empirically demonstrated computational regime.

Preprint: https://doi.org/10.5281/zenodo.18226838
Technical scrutiny and competent critique are welcome.
Image: ChatGPT 5.2

#Cryptography #MD5 #SHA256 #InformationTheory #AIArchitecture #Complexity #SecurityResearch #TrauthResearch #NeuralNetwork Stefan Trauth


Planned validation:
A live demonstration is currently being prepared.
It will show the full external pipeline: hash generation, system initialization, execution, and independent validation of the reconstructed preimage.
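The validation step, at least, requires no knowledge of the architecture: re-hash the reconstructed candidate and compare it with the original target digest. The Python sketch below illustrates that external check; it is not the demonstration's actual tooling.

```python
# External validation of a reconstructed preimage: re-hash the candidate and
# compare against the target digest. Illustrative only; not the demo pipeline.
import hashlib
import hmac

def validate_preimage(candidate: bytes, target_hex: str, algorithm: str = "sha256") -> bool:
    """Return True if hashing the candidate reproduces the target digest."""
    digest = hashlib.new(algorithm, candidate).hexdigest()
    return hmac.compare_digest(digest, target_hex)

# Usage with a fictional password:
target = hashlib.sha256(b"fictional-password").hexdigest()
print(validate_preimage(b"fictional-password", target))  # True
print(validate_preimage(b"wrong-guess", target))         # False
```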

No architectural details, input preparation methods, parameterizations, or internal representations will be disclosed during this demonstration.

Access is limited to organizations with demonstrated frontier-scale research infrastructure. Evaluation is conducted individually under strict dual-use governance.
