Compare commits

..

148 commits

Author SHA1 Message Date
An0nymous
bec16d090e
Merge pull request #34 from 0xalivecow/dev
Merge refactoring changes
2024-12-23 10:31:54 +01:00
Alivecow
0a573d84da refactor: Remove unused function 2024-12-23 10:30:21 +01:00
Alivecow
848ad15bb8 refactor: Remove unneeded imports 2024-12-23 10:29:12 +01:00
Alivecow
c9c26b3971 refactor: remove commented code 2024-12-23 10:27:23 +01:00
Alivecow
b24c703429 refactor: Apply general cargo recommendations 2024-12-22 18:13:23 +01:00
Alivecow
2f0e265ed6 refactor: Change vector init in padding oracle 2024-12-05 18:12:26 +01:00
Alivecow
0da047110f feat: enable tcp no delay option 2024-12-05 17:48:31 +01:00
Alivecow
6d1b735a0b refactor: remove unneeded prints and enable mt 2024-12-05 16:37:54 +01:00
Alivecow
7a0d1219f9 fix: Fix GCM crack output 2024-12-05 15:57:18 +01:00
Alivecow
90d61a655e fix: Fix length field implementation in gcm_crack
Length field was calculated after padding
2024-12-03 23:15:42 +01:00
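For context on the fix above: the GHASH length block must encode the bit lengths of the associated data and of the ciphertext as they were before zero-padding, so it has to be computed before the blocks are padded to 16 bytes. A minimal sketch of that ordering (illustrative only; the helper names are assumptions, not the repository's API):

// Capture the true bit lengths first ...
fn length_block(ad: &[u8], ciphertext: &[u8]) -> [u8; 16] {
    let mut l = [0u8; 16];
    l[..8].copy_from_slice(&((ad.len() as u64) * 8).to_be_bytes());
    l[8..].copy_from_slice(&((ciphertext.len() as u64) * 8).to_be_bytes());
    l
}

// ... and only then zero-pad the data to a full block.
fn pad_to_block(mut data: Vec<u8>) -> Vec<u8> {
    if data.len() % 16 != 0 {
        let missing = 16 - data.len() % 16;
        data.extend(std::iter::repeat(0u8).take(missing));
    }
    data
}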
An0nymous
555ee45aad
Merge pull request #33 from 0xalivecow/dev
fix: Add ciphertext padding to gcm_crack
2024-12-03 23:00:03 +01:00
Alivecow
9e31b6cc5b fix: Add ciphertext padding to gcm_crack 2024-12-03 22:58:10 +01:00
Alivecow
6a2f631e46 fix: Fix the output of mask if mask is empty 2024-12-03 17:51:59 +01:00
An0nymous
274b65c6fd
Merge pull request #32 from 0xalivecow/dev
Merge gcm_crack
2024-12-03 17:18:15 +01:00
Alivecow
b348c19c6c refactor: clean up gcm_crack code 2024-12-03 17:09:43 +01:00
Alivecow
b632e0c803 refactor: clean edf code 2024-12-03 17:00:23 +01:00
Alivecow
4c9adb9fdc feat: add the gcm crack
Example testcase is working
2024-12-03 16:59:30 +01:00
Alivecow
16b65b0de4 refactor: re-enable multi threading 2024-11-30 23:47:41 +01:00
Alivecow
e8f4a58732 fix: Fix error in random polynomial generation in edf
Upper bound was incorrect
2024-11-30 23:46:59 +01:00
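For context: in equal-degree factorization the random test polynomial is drawn with degree strictly below deg(f), i.e. deg(f) random coefficients; the commit above corrects the bound used for that sampling. A hedged sketch (u128 values stand in for GF(2^128) field elements; this is not the repository's implementation):

use rand::Rng;

// deg_f coefficients => a random polynomial of degree < deg_f.
fn random_poly_below(deg_f: usize) -> Vec<u128> {
    let mut rng = rand::thread_rng();
    (0..deg_f).map(|_| rng.gen::<u128>()).collect()
}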
Alivecow
e2ef29bfd5 refactor: test with MT disabled 2024-11-30 21:16:47 +01:00
An0nymous
7fa3586e49
Merge pull request #31 from 0xalivecow/dev
MT and Fixes
2024-11-30 16:09:55 +01:00
Alivecow
8064dcb9e8 fix: Fix incorrect div output. Remainder could be zero. 2024-11-30 16:07:39 +01:00
Alivecow
3687733b7f feat: Enable MT 2024-11-30 16:07:11 +01:00
An0nymous
8f0356e2ba
Merge pull request #30 from 0xalivecow/dev
Initial MT and div improvements
2024-11-29 21:14:43 +01:00
Alivecow
7dbcf20891 feat/fix: add initial MT and remove extend from div for performance 2024-11-29 21:12:42 +01:00
Alivecow
60d73968fb Merge branch 'main' into dev 2024-11-29 19:31:22 +01:00
Alivecow
2623bd9a8d refactor: Change initialisations of field elements to be cleaner 2024-11-29 19:31:10 +01:00
Alivecow
bf4c3ee4ca refactor: Remove unneeded prints and change gfmul 2024-11-29 19:02:37 +01:00
Alivecow
007bbddfcd fix: Fix incorrect output in gfdiv task 2024-11-29 17:22:47 +01:00
Alivecow
12254744d4 refactor: Change implementation to only switch semantic once 2024-11-29 16:50:19 +01:00
An0nymous
679c0223af
Merge pull request #29 from 0xalivecow/dev
Merge performance improvements
2024-11-29 14:33:54 +01:00
An0nymous
c24d47e4b6
Merge branch 'main' into dev 2024-11-29 14:33:47 +01:00
Alivecow
e8c9cb1ade refactor: Improve gfmul to remove unneeded vec manipulation and improve performance 2024-11-29 14:31:52 +01:00
Alivecow
5b27a4ad9c feat/refactor: Change unneeded vec initialisations and start on possible new gfmul 2024-11-29 13:49:57 +01:00
Alivecow
e934d4317f Merge branch 'dev' 2024-11-28 17:45:16 +01:00
Alivecow
270abdb7b7 refactor: Change gcd implementation to attempt faster calc 2024-11-28 17:45:10 +01:00
Alivecow
dd19c90ae1 refactor: Apply cargo recommended refactoring 2024-11-28 15:10:02 +01:00
An0nymous
c0685e9b7b
Merge pull request #28 from 0xalivecow/dev
feat: Add testing runner for edf
2024-11-28 14:03:10 +01:00
Alivecow
f7f3c44acb feat: Add testing runner for edf 2024-11-28 14:00:47 +01:00
Alivecow
905e905c35 fix: Add needed dependency 2024-11-28 13:36:56 +01:00
Alivecow
444000a101 fix: Adding fix after merge error 2024-11-28 13:30:44 +01:00
An0nymous
9acddc2867
Merge pull request #27 from 0xalivecow/dev
feat: Add edf calculation
2024-11-28 13:22:17 +01:00
An0nymous
2cbda23e9c
Merge branch 'main' into dev 2024-11-28 13:22:11 +01:00
Alivecow
39c4d9b80d feat: Add edf calculation 2024-11-28 13:17:51 +01:00
Alivecow
b898c32ded fix: Re-Add else-if case 2024-11-27 14:13:30 +01:00
Alivecow
6532c576c6 fix: Fix incorrect degree calculation 2024-11-27 14:06:40 +01:00
An0nymous
4a2b0ab014
Merge pull request #26 from 0xalivecow/dev
fix: Attempting further ddf fixes
2024-11-27 13:48:54 +01:00
Alivecow
fa7d33aaf6 fix: Attempting further ddf fixes 2024-11-27 13:47:13 +01:00
Alivecow
d599292d3a fix: Fix dff algorithm attempt 2024-11-27 10:17:29 +01:00
Alivecow
b54753fe7e fix: Remove mod from X subtrahend in dff
The modular operation is likely incorrect in this case.
Removing it.
2024-11-26 14:55:40 +01:00
An0nymous
361c6ab813
Merge pull request #25 from 0xalivecow/dev
feat: Add ddf algorithm
2024-11-26 13:21:49 +01:00
Alivecow
341b22e184 feat: Add ddf algorithm 2024-11-26 13:19:07 +01:00
An0nymous
be4f8c9f14
Merge pull request #24 from 0xalivecow/dev
Merge sff runner adaption
2024-11-25 14:24:23 +01:00
Alivecow
6856420ff9 feat: Add task runner for the sff task 2024-11-25 14:19:41 +01:00
Alivecow
1c9948ac62 fix: Change sff to use the exponent as a tuple again 2024-11-24 14:07:37 +01:00
Alivecow
2d4f7a1110 feat: sff working in testcase 2024-11-23 19:20:25 +01:00
Alivecow
17bade8a62 WIP: feat: Initial implementation of sff. Sort missing 2024-11-23 19:07:30 +01:00
An0nymous
aa756b5144
Merge pull request #23 from 0xalivecow/dev
fix: Make all polynomials monic in task fn
2024-11-23 13:46:10 +01:00
Alivecow
69a2026c84 fix: Make all polynomials monic in task fn 2024-11-23 13:33:51 +01:00
An0nymous
454790d24f
Merge pull request #22 from 0xalivecow/dev
feat: Adding gcd implementation
2024-11-23 12:34:29 +01:00
An0nymous
2e73125e14
Merge branch 'main' into dev 2024-11-23 12:34:23 +01:00
Alivecow
0b18ba1bff feat: Adding gcd implementation 2024-11-23 12:31:27 +01:00
Alivecow
1a2910b28f fix: Add removal of leading zeros in poly diff 2024-11-23 11:42:05 +01:00
An0nymous
ab755444c6
Merge pull request #21 from 0xalivecow/dev
Merge poly diff functionality
2024-11-23 10:29:03 +01:00
Alivecow
8be8dc7a54 feat: Add edge case handling for poly diff
Add handling for cases in which poly is of degree 0 or 1
2024-11-23 10:26:32 +01:00
Alivecow
4b1bca8ee0 feat: add function for polynomial differentiation 2024-11-23 10:17:08 +01:00
Alivecow
b595276143 fix: Fix incorrect naming of response json object for monic 2024-11-23 09:44:47 +01:00
Alivecow
1290adcd9b fix: Fix error in calling of monic function 2024-11-22 21:34:01 +01:00
An0nymous
1b45c192b3
Merge pull request #20 from 0xalivecow/dev
Merging test runner implementation for monic and sqrt
2024-11-22 21:19:12 +01:00
Alivecow
5bb9bcebff feat: ready test runner for monic and sqrt tasks 2024-11-22 21:16:53 +01:00
Alivecow
f75e7de733 feat: Add polynomial square root algo 2024-11-22 20:48:06 +01:00
An0nymous
e90491a03c
Merge pull request #19 from 0xalivecow/dev
Merge Monic functionality
2024-11-22 15:49:59 +01:00
Alivecow
6391912bc4 feat: Add and improve poly monic function with testcases
Make a polynomial monic by dividing all field elements with the leading
element
2024-11-22 15:47:59 +01:00
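The commit body above already states the whole operation: divide every coefficient by the leading one. A generic sketch of that shape, with the field division passed in because this is not tied to the repository's FieldElement API:

// Divide each coefficient by the leading coefficient (assumed to sit at the end
// of the slice); `div` is whatever field division the caller supplies.
fn make_monic<T: Copy>(coeffs: &[T], div: impl Fn(T, T) -> T) -> Vec<T> {
    let lead = *coeffs.last().expect("non-empty polynomial");
    coeffs.iter().map(|&c| div(c, lead)).collect()
}

Over ordinary numbers this reduces to, e.g., make_monic(&[2.0, 4.0], |a, b| a / b) yielding [0.5, 1.0].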
Alivecow
5e50ef6091 refactor: apply cargo recommended cleanups 2024-11-22 15:28:36 +01:00
Alivecow
a5a3ea61fa refactor: Split Polynomial class into poly.rs file 2024-11-22 15:28:00 +01:00
Alivecow
ad8326b51e fix: Modify the sorting behavior and remove unneeded testcases 2024-11-22 14:36:20 +01:00
An0nymous
922fdd04cc
Merge pull request #18 from 0xalivecow/dev
Merge fixes and initial monic function
2024-11-22 11:40:23 +01:00
Alivecow
1db9b65dda Merge branch 'feat_poly_algs' into dev 2024-11-22 11:37:43 +01:00
Alivecow
a520a811b4 fix/feat: Attempt more fixes for the sorting function and add initial monic function 2024-11-22 11:37:35 +01:00
An0nymous
e92c8ddba8
Merge pull request #17 from 0xalivecow/dev
Add fixes for powmod and sorting of polynomials
2024-11-21 17:22:14 +01:00
Alivecow
81fe06941d fix: add fix for powmod special case k=0 2024-11-21 17:20:29 +01:00
Alivecow
b63dc86c7e WIP: feat: Change sorting behaviour and add new testcase 2024-11-21 16:56:28 +01:00
An0nymous
279571dc00
Merge pull request #16 from 0xalivecow/dev
Add polynomial sorting
2024-11-20 19:52:16 +01:00
Alivecow
bad946e9ac feat: Add proper handling in testcase runner and add testing json file 2024-11-20 19:50:26 +01:00
Alivecow
c3ea652c87 feat: Sorting of polynomial array with rust standard sort implemented 2024-11-20 19:37:46 +01:00
Alivecow
bb5e762a1d chore: Try pushing again because no response was received from pipeline 2024-11-16 23:21:10 +01:00
Alivecow
ca2067c04e fix: Add better handling of special cases to powmod 2024-11-16 20:31:16 +01:00
An0nymous
c5d3db27f4
Merge pull request #15 from 0xalivecow/dev
fix: Add further handling to leading zero blocks in add and powmod
2024-11-16 15:01:18 +01:00
Alivecow
295ed98c1e fix: Add further handling to leading zero blocks in add and powmod 2024-11-16 14:59:31 +01:00
alivecow
7dc6fa1ac9 fix: Fix handling of special cases in powmod 2024-11-15 20:14:54 +01:00
alivecow
67bbf67f18 fix: Handle response on adding arbit. len equal polynomials 2024-11-15 20:02:16 +01:00
alivecow
6a04e00fb2 fix: Fix remainder output of div function 2024-11-15 15:27:13 +01:00
An0nymous
c1bcb768ba
Merge pull request #14 from 0xalivecow/dev
Merge fixes for pfmath functions
2024-11-15 12:50:31 +01:00
alivecow
0784c26456 fix: Add handling for larger divisor 2024-11-15 11:29:25 +01:00
alivecow
2a9db307d9 fix: Add handling of pow with 0 2024-11-15 10:26:38 +01:00
alivecow
5dc299372a fix: Add handling of zero multiplication for polynomials 2024-11-15 10:13:05 +01:00
alivecow
9785b8d8aa refactor: apply rust suggested code cleanups 2024-11-14 23:42:38 +01:00
An0nymous
a0ff95548e
Merge pull request #13 from 0xalivecow/dev
Add basic pfmath functionality
2024-11-14 23:12:02 +01:00
alivecow
68d9f13a3d feat: finalise test runner and add testing json 2024-11-14 23:08:20 +01:00
alivecow
deb4261121 feat: add division and powmod (WIP) and start adapting task runner 2024-11-14 22:30:55 +01:00
alivecow
a05f2f02b6 feat/refactor: Change gfmul to take references and add field div 2024-11-13 20:27:20 +01:00
alivecow
11916e29f0 feat: initial pow support working 2024-11-12 18:58:20 +01:00
0xalivecow
6431a6636e
feat: start working on add for polynomial 2024-11-11 10:31:59 +01:00
0xalivecow
6e33e2e44c
feat: Initial multiplication working 2024-11-10 18:30:41 +01:00
0xalivecow
811e2b21f6
feat: Implement field object and addition
Starting work on proper field object
Polynomial addition working
2024-11-08 13:09:12 +01:00
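Addition in GF(2^128) is carry-less, so adding two 16-byte blocks is a byte-wise XOR; that is the core of the polynomial addition mentioned above. Minimal sketch, independent of the crate's own field type:

// XOR is both addition and subtraction in characteristic 2.
fn gf_add(a: &[u8; 16], b: &[u8; 16]) -> [u8; 16] {
    let mut out = [0u8; 16];
    for i in 0..16 {
        out[i] = a[i] ^ b[i];
    }
    out
}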
An0nymous
b5be86401d
Merge pull request #12 from 0xalivecow/dev
fix: performance improvements
2024-11-07 22:49:24 +01:00
0xalivecow
84d99f2414
fix: performance improvements 2024-11-07 22:47:24 +01:00
An0nymous
7d0ca81a10
Merge pull request #11 from 0xalivecow/dev
Add fixes for pad oracle performance and range
2024-11-07 20:57:23 +01:00
0xalivecow
95de66aca0
fix: Fix performance and algorithm issues
Consolidate sends to the server to save time
Add full range to q block sending
2024-11-07 20:55:57 +01:00
0xalivecow
10fd837be9
refactor: improve performance 2024-11-07 17:45:05 +01:00
An0nymous
5953b98897
Merge pull request #10 from 0xalivecow/dev
Add padding oracle functionality
2024-11-07 10:32:32 +01:00
0xalivecow
0f8d202a06
feat: Add edge case treatment 2024-11-07 10:28:09 +01:00
0xalivecow
757afbdc95
refactor: Hopefully increase speed by reducing send code 2024-11-07 09:32:18 +01:00
0xalivecow
9ae53e12fd
feat: Initial padding oracle working. Pending check for special case.
The initial padding oracle attack is working. More tests need to be
added, and there needs to be a check for the special case of the
02 01 / 02 02 padding.
2024-11-06 23:38:54 +01:00
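The special case mentioned here is the usual last-byte ambiguity of a CBC padding oracle: a probe can be accepted either because the forged block decrypts to ...0x01 or because it happens to end in ...0x02 0x02. One way to tell the two apart, sketched with a hypothetical oracle closure (the repository's later check flips byte 14 and re-queries in a similar spirit):

// Returns true if the accepted probe really corresponds to a single 0x01 padding byte.
fn is_single_byte_padding(mut oracle: impl FnMut(&[u8; 16]) -> bool, candidate: [u8; 16]) -> bool {
    let mut probe = candidate;
    probe[14] ^= 0xFF; // disturb the byte an 0x02 0x02 padding would rely on
    oracle(&probe)     // still valid => only the last byte mattered, i.e. 0x01
}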
0xalivecow
b81bbab16c
doc: add documentation and test 2024-11-04 15:46:09 +01:00
An0nymous
766a801071
Merge pull request #9 from 0xalivecow/dev
fix: add padding to empty ad case
2024-11-03 20:17:29 +01:00
0xalivecow
1dfed264e9
fix: add padding to empty ad case 2024-11-03 20:15:55 +01:00
An0nymous
aa1468c635
Merge pull request #8 from 0xalivecow/dev
fix: add handling for larger ad values
2024-11-03 17:55:43 +01:00
0xalivecow
0d8f110902
fix: add handling for larger ad values 2024-11-03 17:52:40 +01:00
An0nymous
f0fc2ea0e8
Merge pull request #7 from 0xalivecow/dev
feat: add aes/sea encrypt/decrypt in gcm and add test cases
2024-11-03 14:29:28 +01:00
0xalivecow
6b2775cde1
feat: add aes/sea encrypt/decrypt in gcm and add test cases 2024-11-03 14:12:48 +01:00
An0nymous
aa57e74b98
Merge pull request #6 from 0xalivecow/dev
Add gcm aes and modified gfmul
2024-11-03 11:25:20 +01:00
0xalivecow
6bef350301
feat: adapt test runner for gcm aes and add test cases 2024-11-03 11:20:09 +01:00
0xalivecow
e33a26adab
feat: gfmul and aes gcm working 2024-11-03 10:58:52 +01:00
An0nymous
7a7483fade
Merge pull request #5 from 0xalivecow/dev
feat: add gcm semantic to b2p and p2b
2024-11-01 21:22:56 +01:00
0xalivecow
8db0bbaa63
feat: add gcm semantic to b2p and p2b 2024-11-01 21:20:46 +01:00
An0nymous
3f861d7a1e
Merge pull request #4 from 0xalivecow/dev
Dev
2024-10-30 18:02:16 +01:00
0xalivecow
28a8753d55
feat: add test case for XEX empty 2024-10-30 18:00:09 +01:00
0xalivecow
2e22bd5789
refactor: fix broken gfmul algo 2024-10-30 17:57:24 +01:00
0xalivecow
6d808aef54
chore: debug official ci 2024-10-29 20:22:02 +01:00
0xalivecow
ccf0b03ec0
feat: add more shifting capabilities for gfmul 2024-10-29 14:50:55 +01:00
0xalivecow
f4c49a9137
refactor: externalise gfmul to make it more accessible and semantic support 2024-10-29 13:53:10 +01:00
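The semantic support presumably refers to the two block interpretations used throughout these tasks: "xex" and "gcm" differ in the bit order inside each byte, so converting a block between them reverses the bits of every byte (compare the reverse_bits_in_bytevec helper that shows up later for GHASH). A one-line sketch of that conversion:

fn reverse_bits_in_bytes(block: &[u8]) -> Vec<u8> {
    block.iter().map(|b| b.reverse_bits()).collect()
}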
An0nymous
3b0757132e
Merge pull request #3 from 0xalivecow/dev
Dev merge gfmul and XEX tasks
2024-10-28 18:45:47 +01:00
0xalivecow
31050ea696
feat: finalise XEX runner and testing 2024-10-28 18:41:15 +01:00
0xalivecow
c34557ea29
feat: both XEX enc/dec are working in atomic tests 2024-10-28 00:35:39 +01:00
0xalivecow
5c1c0f6c5e
refactor: Refactor gfmul function to enable use in XEX 2024-10-27 22:32:14 +01:00
0xalivecow
f6fe75b987
Merge branch 'dev' into feat_xex 2024-10-27 17:25:27 +01:00
0xalivecow
f3410c705e
chore: add debug statements to the pipeline run 2024-10-26 23:13:38 +02:00
0xalivecow
01c7f522b5
chore: move testing files 2024-10-26 22:07:16 +02:00
0xalivecow
32bc8725e2
chore: clean up debug prints 2024-10-26 22:02:18 +02:00
0xalivecow
76cbe0d4c0
chore: remove lockfile to enable pipeline 2024-10-26 21:51:21 +02:00
0xalivecow
e1a6ae20a4
feat: set up testing for gfmul task 2024-10-26 21:43:35 +02:00
0xalivecow
6ef05f6018
chore: fix dependency versions 2024-10-26 19:06:24 +02:00
0xalivecow
96f65bf42d
chore: add vendored packages to cargo configs 2024-10-26 19:03:44 +02:00
0xalivecow
16ace40116
chore: remove num-bigint dependency 2024-10-26 18:53:14 +02:00
0xalivecow
8e9388c353
refactor: Fix github build pipeline 2024-10-26 18:51:07 +02:00
An0nymous
c818d5cde4
Merge pull request #2 from 0xalivecow/dev
Poly2Block; Block2Poly; SEA128 tasks working
2024-10-23 17:21:06 +02:00
0xalivecow
becb953926
WIP: feat: starting work on xex 2024-10-23 10:46:11 +02:00
46 changed files with 17189 additions and 527 deletions


@ -0,0 +1,5 @@
[source.crates-io]
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "vendor"


@ -66,4 +66,5 @@ jobs:
docker tag ghcr.io/johndoe31415/labwork-docker:master labwork
- name: Run labwork container
run: |
-docker run -v $PWD:/dut/ labwork /bin/bash -c '/dut/build && /dut/kauma ./example_challenges/block2poly.json'
+docker run -v $PWD:/dut/ labwork /bin/bash -c '/dut/build && /dut/kauma ./test_json/kauma_tests.json'

.gitignore (vendored): 3 changed lines

@ -2,10 +2,11 @@
# will have compiled files and executables
# debug/
target/
+vendor/
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
-# Cargo.lock
+Cargo.lock
# These are backup files generated by rustfmt
**/*.rs.bk

Cargo.lock (generated): 250 changed lines

@ -1,250 +0,0 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
[[package]]
name = "anyhow"
version = "1.0.90"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37bf3594c4c988a53154954629820791dde498571819ae4ca50ca811e060cc95"
[[package]]
name = "autocfg"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
[[package]]
name = "base64"
version = "0.22.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6"
[[package]]
name = "bitflags"
version = "2.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b048fb63fd8b5923fc5aa7b340d8e156aec7ec02f0c78fa8a6ddc2613f6f71de"
[[package]]
name = "cc"
version = "1.1.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c2e7962b54006dcfcc61cb72735f4d89bb97061dd6a7ed882ec6b8ee53714c6f"
dependencies = [
"shlex",
]
[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"
[[package]]
name = "foreign-types"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f6f339eb8adc052cd2ca78910fda869aefa38d22d5cb648e6485e4d3fc06f3b1"
dependencies = [
"foreign-types-shared",
]
[[package]]
name = "foreign-types-shared"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "00b0228411908ca8685dba7fc2cdd70ec9990a6e753e89b6ac91a84c40fbaf4b"
[[package]]
name = "itoa"
version = "1.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49f1f14873335454500d59611f1cf4a4b0f786f9ac11f4312a78e4cf2566695b"
[[package]]
name = "kauma"
version = "0.1.0"
dependencies = [
"anyhow",
"base64",
"num-bigint",
"openssl",
"serde",
"serde_json",
]
[[package]]
name = "libc"
version = "0.2.161"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e9489c2807c139ffd9c1794f4af0ebe86a828db53ecdc7fea2111d0fed085d1"
[[package]]
name = "memchr"
version = "2.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
[[package]]
name = "num-bigint"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a5e44f723f1133c9deac646763579fdb3ac745e418f2a7af9cd0c431da1f20b9"
dependencies = [
"num-integer",
"num-traits",
]
[[package]]
name = "num-integer"
version = "0.1.46"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7969661fd2958a5cb096e56c8e1ad0444ac2bbcd0061bd28660485a44879858f"
dependencies = [
"num-traits",
]
[[package]]
name = "num-traits"
version = "0.2.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841"
dependencies = [
"autocfg",
]
[[package]]
name = "once_cell"
version = "1.20.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1261fe7e33c73b354eab43b1273a57c8f967d0391e80353e51f764ac02cf6775"
[[package]]
name = "openssl"
version = "0.10.68"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6174bc48f102d208783c2c84bf931bb75927a617866870de8a4ea85597f871f5"
dependencies = [
"bitflags",
"cfg-if",
"foreign-types",
"libc",
"once_cell",
"openssl-macros",
"openssl-sys",
]
[[package]]
name = "openssl-macros"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a948666b637a0f465e8564c73e89d4dde00d72d4d473cc972f390fc3dcee7d9c"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "openssl-sys"
version = "0.9.104"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "45abf306cbf99debc8195b66b7346498d7b10c210de50418b5ccd7ceba08c741"
dependencies = [
"cc",
"libc",
"pkg-config",
"vcpkg",
]
[[package]]
name = "pkg-config"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "953ec861398dccce10c670dfeaf3ec4911ca479e9c02154b3a215178c5f566f2"
[[package]]
name = "proc-macro2"
version = "1.0.88"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c3a7fc5db1e57d5a779a352c8cdb57b29aa4c40cc69c3a68a7fedc815fbf2f9"
dependencies = [
"unicode-ident",
]
[[package]]
name = "quote"
version = "1.0.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5b9d34b8991d19d98081b46eacdd8eb58c6f2b201139f7c5f643cc155a633af"
dependencies = [
"proc-macro2",
]
[[package]]
name = "ryu"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f"
[[package]]
name = "serde"
version = "1.0.210"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c8e3592472072e6e22e0a54d5904d9febf8508f65fb8552499a1abc7d1078c3a"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.210"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "243902eda00fad750862fc144cea25caca5e20d615af0a81bee94ca738f1df1f"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.129"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6dbcf9b78a125ee667ae19388837dd12294b858d101fdd393cb9d5501ef09eb2"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
]
[[package]]
name = "shlex"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "syn"
version = "2.0.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89132cd0bf050864e1d38dc3bbc07a0eb8e7530af26344d3d2bbbef83499f590"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "unicode-ident"
version = "1.0.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e91b56cd4cadaeb79bbf1a5645f6b4f8dc5bde8834ad5894a8db35fda9efa1fe"
[[package]]
name = "vcpkg"
version = "0.2.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426"


@ -5,9 +5,22 @@ edition = "2021"
rust = "1.75" rust = "1.75"
[dependencies] [dependencies]
anyhow = "1.0.90" anyhow = "1.0.91"
base64 = "0.22.1" base64 = "0.22"
num-bigint = "0.4.6" openssl = "0.10"
openssl = "0.10.68" serde = { version = "1.0", features = ["derive"] }
serde = { version = "1.0.210", features = ["derive"] }
serde_json = "1.0" serde_json = "1.0"
num = "0.4"
rand = "0.8"
threadpool = "1.8"
num_cpus = "1.16.0"
[source.crates-io]
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "vendor"
[profile.profiling]
inherits = "release"
debug = true

build: 7 changed lines

@ -1,7 +1,8 @@
#!/bin/bash
+export OPENSSL_LIB_DIR=/usr/lib/x86_64-linux-gnu
+export OPENSSL_INCLUDE_DIR=/usr/include/openssl
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
cd $SCRIPT_DIR
-cargo build --release --locked --offline
+ln -s /rust/vendor $SCRIPT_DIR/vendor
+cargo build --release --offline

kauma: 3 changed lines

@ -1,7 +1,4 @@
#!/bin/bash
-export OPENSSL_LIB_DIR=/usr/lib/x86_64-linux-gnu
-export OPENSSL_INCLUDE_DIR=/usr/include/openssl
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
cd $SCRIPT_DIR
cargo run --release --locked --offline -- $@


@ -1,8 +1,10 @@
use std::{
-env::{self, args},
+env::{self},
fs,
};
+// TESTING 2
use anyhow::Result;
fn main() -> Result<()> {
@ -12,7 +14,7 @@ fn main() -> Result<()> {
let json = fs::read_to_string(path_to_workload).unwrap();
let workload = kauma::utils::parse::parse_json(json)?;
-let response = kauma::tasks::task_distrubute(&workload)?;
+let response = kauma::tasks::task_distribute(&workload)?;
println!("{}", serde_json::to_string(&response)?);
Ok(())


@ -1,12 +1,28 @@
+use base64::prelude::*;
use std::collections::HashMap;
use crate::utils::parse::{Responses, Testcase, Testcases};
-use tasks01::{block2poly::block2poly, poly2block::poly2block, sea128::sea128};
+use tasks01::{
+block2poly::block2poly,
+gcm::{gcm_decrypt, gcm_encrypt},
+gcm_crack::gcm_crack,
+gfmul::gfmul_task,
+pad_oracle::padding_oracle,
+pfmath::{
+gfdiv, gfpoly_add, gfpoly_diff, gfpoly_divmod, gfpoly_factor_ddf, gfpoly_factor_edf,
+gfpoly_factor_sff, gfpoly_gcd, gfpoly_make_monic, gfpoly_mul, gfpoly_pow, gfpoly_powmod,
+gfpoly_sort, gfpoly_sqrt,
+},
+poly2block::poly2block,
+sea128::sea128,
+xex::fde_xex,
+};
use anyhow::{anyhow, Result};
use serde_json::{json, Value};
-mod tasks01;
+pub mod tasks01;
pub fn task_deploy(testcase: &Testcase) -> Result<Value> {
/*
@ -19,7 +35,7 @@ pub fn task_deploy(testcase: &Testcase) -> Result<Value> {
match testcase.action.as_str() {
"poly2block" => {
-let result = poly2block(args);
+let result = BASE64_STANDARD.encode(poly2block(args)?);
let json = json!({"block" : result});
Ok(json)
}
@ -34,6 +50,140 @@ pub fn task_deploy(testcase: &Testcase) -> Result<Value> {
let json = json!({"output" : result});
Ok(json)
}
"gfmul" => {
let result = BASE64_STANDARD.encode(gfmul_task(args)?);
let json = json!({"product" : result});
Ok(json)
}
"xex" => {
let result = BASE64_STANDARD.encode(fde_xex(args)?);
let json = json!({"output" : result});
Ok(json)
}
"gcm_encrypt" => {
let (ciphertext, auth_tag, l_field, auth_key_h) = gcm_encrypt(args)?;
let out_ciph = BASE64_STANDARD.encode(&ciphertext);
let out_tag = BASE64_STANDARD.encode(&auth_tag);
let out_l = BASE64_STANDARD.encode(&l_field);
let out_h = BASE64_STANDARD.encode(&auth_key_h);
let json = json!({"ciphertext" : out_ciph, "tag" : out_tag, "L" : out_l, "H" : out_h});
Ok(json)
}
"gcm_decrypt" => {
let (plaintext, valid) = gcm_decrypt(args)?;
let out_plain = BASE64_STANDARD.encode(&plaintext);
let json = json!({ "authentic" : valid, "plaintext" : out_plain});
Ok(json)
}
"padding_oracle" => {
let plaintext = padding_oracle(args)?;
let out_plain = BASE64_STANDARD.encode(&plaintext);
let json = json!({"plaintext" : out_plain});
Ok(json)
}
"gfpoly_add" => {
let result = gfpoly_add(args)?;
let json = json!({"S" : result.to_c_array()});
Ok(json)
}
"gfpoly_mul" => {
let result = gfpoly_mul(args)?;
let json = json!({"P" : result.to_c_array()});
Ok(json)
}
"gfpoly_pow" => {
let result = gfpoly_pow(args)?;
let json = json!({"Z" : result.to_c_array()});
Ok(json)
}
"gfdiv" => {
let result = gfdiv(args)?;
let out = result.to_b64();
let json = json!({"q" : out});
Ok(json)
}
"gfpoly_divmod" => {
let result = gfpoly_divmod(args)?;
let json = json!({"Q" : result.0.to_c_array(), "R" : result.1.to_c_array()});
Ok(json)
}
"gfpoly_powmod" => {
let result = gfpoly_powmod(args)?;
let json = json!({"Z" : result.to_c_array()});
Ok(json)
}
"gfpoly_sort" => {
let sorted_array = gfpoly_sort(args)?;
let mut result: Vec<Vec<String>> = vec![];
for poly in sorted_array {
result.push(poly.to_c_array());
}
let json = json!({"sorted_polys" : json!(result)});
Ok(json)
}
"gfpoly_make_monic" => {
let result = gfpoly_make_monic(args)?;
let json = json!({"A*" : result.to_c_array()});
Ok(json)
}
"gfpoly_sqrt" => {
let result = gfpoly_sqrt(args)?;
let json = json!({"S" : result.to_c_array()});
Ok(json)
}
"gfpoly_diff" => {
let result = gfpoly_diff(args)?;
let json = json!({"F'" : result.to_c_array()});
Ok(json)
}
"gfpoly_gcd" => {
let result = gfpoly_gcd(args)?;
let json = json!({"G" : result.to_c_array()});
Ok(json)
}
"gfpoly_factor_sff" => {
let result = gfpoly_factor_sff(args)?;
let json = json!({"factors" : result});
Ok(json)
}
"gfpoly_factor_ddf" => {
let result = gfpoly_factor_ddf(args)?;
let json = json!({"factors" : result});
Ok(json)
}
"gfpoly_factor_edf" => {
let result = gfpoly_factor_edf(args)?;
let json = json!({"factors" : result});
Ok(json)
}
"gcm_crack" => {
let result = gcm_crack(args)?;
let json = json!(result);
Ok(json)
}
_ => Err(anyhow!(
"Fatal. No compatible action found. Json data was {:?}. Arguments were; {:?}",
testcase,
@ -42,16 +192,60 @@ pub fn task_deploy(testcase: &Testcase) -> Result<Value> {
}
}
-pub fn task_distrubute(testcases: &Testcases) -> Result<Responses> {
+fn task_distribute_mt(testcases: &Testcases) -> Result<Responses> {
eprintln!("USING MULTITHREADED");
let mut responses: HashMap<String, Value> = HashMap::new();
let pool = threadpool::ThreadPool::default();
let (tx, rx) = std::sync::mpsc::channel();
for (key, testcase) in testcases.testcases.clone() {
let tx = tx.clone();
let testcase = testcase.clone();
pool.execute(move || {
tx.send((key, task_deploy(&testcase)))
.expect("could not send return value of thread to main thread")
});
}
for _ in 0..testcases.testcases.len() {
let result = match rx.recv_timeout(std::time::Duration::from_secs(60 * 5)) {
Ok(r) => r,
Err(e) => {
eprintln!("! Job timed out: {e}");
return Err(e.into());
}
};
match result.1 {
Ok(v) => {
let _ = responses.insert(result.0, v);
}
Err(e) => {
eprintln!("! failed to solve a challenge: {e:#}");
continue;
}
}
}
Ok(Responses { responses })
}
pub fn task_distribute_st(testcases: &Testcases) -> Result<Responses> {
//eprintln!("USING SINGLETHREADED");
let mut responses: HashMap<String, Value> = HashMap::new();
for (id, testcase) in &testcases.testcases {
responses.insert(id.to_owned(), task_deploy(testcase).unwrap());
}
-Ok(Responses {
-responses: responses,
-})
+Ok(Responses { responses })
+}
pub fn task_distribute(testcases: &Testcases) -> Result<Responses> {
let cpus = num_cpus::get();
if cpus > 1 {
task_distribute_mt(testcases)
} else {
task_distribute_st(testcases)
}
}
#[cfg(test)]
@ -62,7 +256,7 @@ mod tests {
#[test]
fn test_task_deploy() {
-let json = fs::read_to_string("src/test_json/poly2block_example.json").unwrap();
+let json = fs::read_to_string("test_json/poly2block_example.json").unwrap();
let parsed = parse_json(json).unwrap();
let testcase = parsed
.testcases
@ -78,13 +272,13 @@ mod tests {
#[test]
fn test_task_distribution() -> Result<()> {
-let json = fs::read_to_string("src/test_json/poly2block_example.json").unwrap();
+let json = fs::read_to_string("test_json/poly2block_example.json").unwrap();
let parsed = parse_json(json).unwrap();
let expected = json!({ "responses": { "b856d760-023d-4b00-bad2-15d2b6da22fe": {"block": "ARIAAAAAAAAAAAAAAAAAgA=="}}});
assert_eq!(
-serde_json::to_value(task_distrubute(&parsed)?).unwrap(),
+serde_json::to_value(task_distribute(&parsed)?).unwrap(),
serde_json::to_value(expected).unwrap()
);
@ -93,7 +287,7 @@ mod tests {
#[test]
fn test_task_sea128_task_full() -> Result<()> {
-let json = fs::read_to_string("src/test_json/sea128.json").unwrap();
+let json = fs::read_to_string("test_json/sea128.json").unwrap();
let parsed = parse_json(json).unwrap();
let expected = json!({
@ -108,7 +302,134 @@ mod tests {
});
assert_eq!(
-serde_json::to_value(task_distrubute(&parsed)?).unwrap(),
+serde_json::to_value(task_distribute(&parsed)?).unwrap(),
serde_json::to_value(expected).unwrap()
);
Ok(())
}
#[test]
fn test_task_gfmul_full() -> Result<()> {
let json = fs::read_to_string("test_json/gfmul_test.json").unwrap();
let parsed = parse_json(json).unwrap();
let expected = json!({ "responses": { "b856d760-023d-4b00-bad2-15d2b6da22fe": {"product": "hSQAAAAAAAAAAAAAAAAAAA=="}}});
assert_eq!(
serde_json::to_value(task_distribute(&parsed)?).unwrap(),
serde_json::to_value(expected).unwrap()
);
Ok(())
}
#[test]
fn test_task_xex_full() -> Result<()> {
let json = fs::read_to_string("test_json/xex_tests.json").unwrap();
let parsed = parse_json(json).unwrap();
let expected = json!({ "responses": {
"0192d428-3913-762b-a702-d14828eae1f8": {"output": "mHAVhRCKPAPx0BcufG5BZ4+/CbneMV/gRvqK5rtLe0OJgpDU5iT7z2P0R7gEeRDO"},
"0192d428-3913-7168-a3bb-69c258c74dc1": {"output": "SGV5IHdpZSBrcmFzcyBkYXMgZnVua3Rpb25pZXJ0IGphIG9mZmVuYmFyIGVjaHQu"}
}});
assert_eq!(
serde_json::to_value(task_distribute(&parsed)?).unwrap(),
serde_json::to_value(expected).unwrap()
);
Ok(())
}
#[test]
fn test_task_gcm_encrypt_aes_case() -> Result<()> {
let json = fs::read_to_string("test_json/gcm_encrypt.json").unwrap();
let parsed = parse_json(json).unwrap();
let expected = json!({ "responses" : { "b856d760-023d-4b00-bad2-15d2b6da22fe" : {
"ciphertext": "ET3RmvH/Hbuxba63EuPRrw==",
"tag": "Mp0APJb/ZIURRwQlMgNN/w==",
"L": "AAAAAAAAAEAAAAAAAAAAgA==",
"H": "Bu6ywbsUKlpmZXMQyuGAng=="
}}});
assert_eq!(
serde_json::to_value(task_distribute(&parsed)?).unwrap(),
serde_json::to_value(expected).unwrap()
);
Ok(())
}
#[test]
fn test_task_gcm_encrypt_sea_case() -> Result<()> {
let json = fs::read_to_string("test_json/gcm_encrypt_sea.json").unwrap();
let parsed = parse_json(json).unwrap();
let expected = json!({ "responses" : { "b856d760-023d-4b00-bad2-15d2b6da22fe" : {
"ciphertext": "0cI/Wg4R3URfrVFZ0hw/vg==",
"tag": "ysDdzOSnqLH0MQ+Mkb23gw==",
"L": "AAAAAAAAAEAAAAAAAAAAgA==",
"H": "xhFcAUT66qWIpYz+Ch5ujw=="
}}});
assert_eq!(
serde_json::to_value(task_distribute(&parsed)?).unwrap(),
serde_json::to_value(expected).unwrap()
);
Ok(())
}
#[test]
fn test_task_gcm_decrypt_aes_case() -> Result<()> {
let json = fs::read_to_string("test_json/gcm_decrypt_aes.json").unwrap();
let parsed = parse_json(json).unwrap();
let expected = json!({ "responses" : { "b856d760-023d-4b00-bad2-15d2b6da22fe" : {
"plaintext": "RGFzIGlzdCBlaW4gVGVzdA==",
"authentic": true,
}}});
assert_eq!(
serde_json::to_value(task_distribute(&parsed)?).unwrap(),
serde_json::to_value(expected).unwrap()
);
Ok(())
}
#[test]
fn test_task_gcm_decrypt_sea_case() -> Result<()> {
let json = fs::read_to_string("test_json/gcm_decrypt_sea.json").unwrap();
let parsed = parse_json(json).unwrap();
let expected = json!({ "responses" : { "b856d760-023d-4b00-bad2-15d2b6da22fe" : {
"plaintext": "RGFzIGlzdCBlaW4gVGVzdA==",
"authentic": true,
}}});
assert_eq!(
serde_json::to_value(task_distribute(&parsed)?).unwrap(),
serde_json::to_value(expected).unwrap()
);
Ok(())
}
#[test]
fn test_task_gcm_gfpoly_add() -> Result<()> {
let json = fs::read_to_string("test_json/gcm_decrypt_sea.json").unwrap();
let parsed = parse_json(json).unwrap();
let expected = json!({ "responses" : { "b856d760-023d-4b00-bad2-15d2b6da22fe" : {
"plaintext": "RGFzIGlzdCBlaW4gVGVzdA==",
"authentic": true,
}}});
assert_eq!(
serde_json::to_value(task_distribute(&parsed)?).unwrap(),
serde_json::to_value(expected).unwrap()
);


@ -1,15 +1,17 @@
-use crate::utils::poly::{b64_2_num, get_coefficients};
+use crate::utils::poly::block_2_polynomial;
use anyhow::Result;
+use base64::prelude::*;
use serde_json::Value;
pub fn block2poly(val: &Value) -> Result<Vec<u8>> {
// Convert JSON data in to a u128
// TODO: Transfer decoding into own function?
let string: String = serde_json::from_value(val["block"].clone())?;
-let number = b64_2_num(&string)?;
-let coefficients: Vec<u8> = get_coefficients(number);
+let block = BASE64_STANDARD.decode(string)?;
+let semantic: String = serde_json::from_value(val["semantic"].clone())?;
+let coefficients: Vec<u8> = block_2_polynomial(block, &semantic)?; //get_coefficients(number);
Ok(coefficients)
}
@ -17,14 +19,13 @@ pub fn block2poly(val: &Value) -> Result<Vec<u8>> {
#[cfg(test)]
mod tests {
use serde_json::json;
-use std::str::FromStr;
// Note this useful idiom: importing names from outer (for mod tests) scope.
use super::*;
#[test]
fn block2poly_task01() -> Result<()> {
-let block: Value = json!({"block" : "ARIAAAAAAAAAAAAAAAAAgA=="});
+let block: Value = json!({"block" : "ARIAAAAAAAAAAAAAAAAAgA==", "semantic" : "xex"});
let coefficients: Vec<u8> = vec![0, 9, 12, 127];
assert_eq!(
block2poly(&block)?,
@ -35,4 +36,18 @@ mod tests {
Ok(())
}
#[test]
fn block2poly_task02() -> Result<()> {
let block: Value = json!({"block" : "ARIAAAAAAAAAAAAAAAAAgA==", "semantic" : "gcm"});
let coefficients: Vec<u8> = vec![7, 11, 14, 120];
assert_eq!(
block2poly(&block)?,
coefficients,
"Coefficients were: {:?}",
block2poly(&block)?
);
Ok(())
}
}

src/tasks/tasks01/gcm.rs (new file): 52 lines

@ -0,0 +1,52 @@
use anyhow::{anyhow, Result};
use base64::prelude::*;
use serde_json::Value;
use crate::utils::ciphers::{gcm_decrypt_aes, gcm_decrypt_sea, gcm_encrypt_aes, gcm_encrypt_sea};
pub fn gcm_encrypt(args: &Value) -> Result<(Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>)> {
let nonce_text: String = serde_json::from_value(args["nonce"].clone())?;
let nonce = BASE64_STANDARD.decode(nonce_text)?;
let key_text: String = serde_json::from_value(args["key"].clone())?;
let key = BASE64_STANDARD.decode(key_text)?;
let plaintext_text: String = serde_json::from_value(args["plaintext"].clone())?;
let plaintext = BASE64_STANDARD.decode(plaintext_text)?;
let ad_text: String = serde_json::from_value(args["ad"].clone())?;
let ad = BASE64_STANDARD.decode(ad_text)?;
let alg_text: String = serde_json::from_value(args["algorithm"].clone())?;
match alg_text.as_str() {
"aes128" => Ok(gcm_encrypt_aes(nonce, key, plaintext, ad)?),
"sea128" => Ok(gcm_encrypt_sea(nonce, key, plaintext, ad)?),
_ => Err(anyhow!("No compatible algorithm found")),
}
}
pub fn gcm_decrypt(args: &Value) -> Result<(Vec<u8>, bool)> {
let nonce_text: String = serde_json::from_value(args["nonce"].clone())?;
let nonce = BASE64_STANDARD.decode(nonce_text)?;
let key_text: String = serde_json::from_value(args["key"].clone())?;
let key = BASE64_STANDARD.decode(key_text)?;
let plaintext_text: String = serde_json::from_value(args["ciphertext"].clone())?;
let plaintext = BASE64_STANDARD.decode(plaintext_text)?;
let ad_text: String = serde_json::from_value(args["ad"].clone())?;
let ad = BASE64_STANDARD.decode(ad_text)?;
let tag_text: String = serde_json::from_value(args["tag"].clone())?;
let tag = BASE64_STANDARD.decode(tag_text)?;
let alg_text: String = serde_json::from_value(args["algorithm"].clone())?;
match alg_text.as_str() {
"aes128" => Ok(gcm_decrypt_aes(nonce, key, plaintext, ad, tag)?),
"sea128" => Ok(gcm_decrypt_sea(nonce, key, plaintext, ad, tag)?),
_ => Err(anyhow!("No compatible algorithm found")),
}
}


@ -0,0 +1,174 @@
use anyhow::{Ok, Result};
use base64::{prelude::BASE64_STANDARD, Engine};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use crate::utils::{
ciphers::ghash,
dff::ddf,
edf::edf,
field::FieldElement,
math::{reverse_bits_in_bytevec, xor_bytes},
poly::Polynomial,
sff::sff,
};
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct CrackAnswer {
tag: String,
H: String,
mask: String,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
struct Message {
ciphertext: Vec<u8>,
ad: Vec<u8>,
tag: Vec<u8>,
l_field: Vec<u8>,
}
fn parse_message(val: &Value) -> Result<(Message, Polynomial)> {
let ciphertext_text: String = serde_json::from_value(val["ciphertext"].clone())?;
let mut ciphertext_bytes: Vec<u8> = BASE64_STANDARD.decode(ciphertext_text)?;
let mut c_len: Vec<u8> = ((ciphertext_bytes.len() * 8) as u64).to_be_bytes().to_vec();
if ciphertext_bytes.len() % 16 != 0 {
ciphertext_bytes.append(vec![0u8; 16 - (ciphertext_bytes.len() % 16)].as_mut());
}
let ciphertext_chunks: Vec<FieldElement> = ciphertext_bytes
.chunks(16)
.into_iter()
.map(|chunk| FieldElement::new(chunk.to_vec()))
.collect();
let ad_text: String = serde_json::from_value(val["associated_data"].clone())?;
let mut ad_bytes: Vec<u8> = BASE64_STANDARD.decode(ad_text)?;
let mut l_field: Vec<u8> = ((ad_bytes.len() * 8) as u64).to_be_bytes().to_vec();
if ad_bytes.len() % 16 != 0 || ad_bytes.is_empty() {
ad_bytes.append(vec![0u8; 16 - (ad_bytes.len() % 16)].as_mut());
}
let ad_chunks: Vec<FieldElement> = ad_bytes
.chunks(16)
.into_iter()
.map(|chunk| FieldElement::new(chunk.to_vec()))
.collect();
let tag_text: String = serde_json::from_value(val["tag"].clone()).unwrap_or("".to_string());
let tag_bytes: Vec<u8> = BASE64_STANDARD.decode(tag_text)?;
let tag_field: FieldElement = FieldElement::new(tag_bytes.clone());
l_field.append(c_len.as_mut());
// Combine all data
let mut combined: Vec<FieldElement> =
Vec::with_capacity(ad_chunks.len() + ciphertext_chunks.len() + 1);
combined.extend(ad_chunks);
combined.extend(ciphertext_chunks.clone());
combined.push(FieldElement::new(l_field.clone()));
combined.push(tag_field);
combined.reverse();
let h_poly: Polynomial = Polynomial::new(combined);
Ok((
Message {
ciphertext: ciphertext_bytes,
ad: ad_bytes,
tag: tag_bytes,
l_field,
},
h_poly,
))
}
pub fn gcm_crack(args: &Value) -> Result<CrackAnswer> {
// Prepare first equation
let (m1_data, m1_h_poly) = parse_message(&args["m1"])?;
let (_, m2_h_poly) = parse_message(&args["m2"])?;
let (m3_data, _) = parse_message(&args["m3"])?;
let combine_poly = m1_h_poly + m2_h_poly;
let combine_sff = sff(combine_poly.monic());
let mut combine_ddf: Vec<(Polynomial, u128)> = vec![];
for (factor, _) in combine_sff {
combine_ddf.extend(ddf(factor));
}
let mut combine_edf: Vec<Polynomial> = vec![];
for (factor, degree) in combine_ddf {
if degree == 1 {
combine_edf.extend(edf(factor, degree as u32));
}
}
let mut m3_auth_tag: Vec<u8> = vec![];
let mut h_candidate: FieldElement = FieldElement::zero();
let mut eky0: Vec<u8> = vec![];
for candidate in combine_edf {
if candidate.degree() == 1 {
h_candidate = candidate.extract_component(0);
let m1_ghash = ghash(
reverse_bits_in_bytevec(h_candidate.to_vec()),
m1_data.ad.clone(),
m1_data.ciphertext.clone(),
m1_data.l_field.clone(),
)
.unwrap();
eky0 = xor_bytes(&m1_data.tag, m1_ghash).unwrap();
eprintln!("eky0: {:?}", BASE64_STANDARD.encode(eky0.clone()));
let m3_ghash = ghash(
reverse_bits_in_bytevec(h_candidate.to_vec()),
m3_data.ad.clone(),
m3_data.ciphertext.clone(),
m3_data.l_field.clone(),
)
.unwrap();
m3_auth_tag = xor_bytes(&eky0, m3_ghash).unwrap();
eprintln!(
"M3 auth tag: {:02X?}",
BASE64_STANDARD.encode(m3_auth_tag.clone())
);
if m3_auth_tag == m3_data.tag {
break;
} else {
eprintln!("H candidate not valid");
}
}
}
let (forgery_data, _) = parse_message(&args["forgery"])?;
let forgery_ghash = ghash(
reverse_bits_in_bytevec(h_candidate.to_vec()),
forgery_data.ad.clone(),
forgery_data.ciphertext.clone(),
forgery_data.l_field.clone(),
)
.unwrap();
let forgery_auth_tag = xor_bytes(&eky0, forgery_ghash).unwrap();
if eky0.is_empty() {
eky0 = vec![0; 16];
}
Ok(CrackAnswer {
tag: BASE64_STANDARD.encode(forgery_auth_tag),
H: h_candidate.to_b64(),
mask: BASE64_STANDARD.encode(eky0),
})
}


@ -1,99 +1,26 @@
+use crate::utils::poly::gfmul;
use anyhow::Result;
use base64::prelude::*;
-//use num_bigint::{BigUint, ToBigUint};
use serde_json::Value;
-use crate::utils::{
-math::ByteArray,
-poly::{b64_2_num, coefficient_to_binary},
-};
-pub const RED_POLY: u128 = 0x87000000_00000000_00000000_00000000;
-pub fn gfmul(args: &Value) -> Result<String> {
-eprintln!("{args}");
-// Generate reduction polynomial
-let mut red_poly_bytes: ByteArray = ByteArray(RED_POLY.to_be_bytes().to_vec());
-eprintln!("Before push {:01X?}", red_poly_bytes);
-red_poly_bytes.0.push(0x01);
-//red_poly_bytes.0.reverse();
-eprintln!("After push {:01X?}", red_poly_bytes);
-//let red_poly_num = ; //coefficient_to_binary(reduction_polynomial_coeffs);
-//eprintln!("{:?}", serde_json::from_value(args["a"].clone())?);
+pub fn gfmul_task(args: &Value) -> Result<Vec<u8>> {
let poly1_text: String = serde_json::from_value(args["a"].clone())?;
-let mut poly1: ByteArray = ByteArray(BASE64_STANDARD.decode(poly1_text)?);
-poly1.0.push(0x00);
-//poly1.0.reverse();
+let poly_a = BASE64_STANDARD.decode(poly1_text)?;
let poly2_text: String = serde_json::from_value(args["b"].clone())?;
-let mut poly2: ByteArray = ByteArray(BASE64_STANDARD.decode(poly2_text)?);
-poly2.0.push(0x00);
-//poly2.0.reverse();
-eprintln!("poly1 is: {:01X?}", poly1);
-eprintln!("poly2 is: {:01X?}", poly2);
-/* Begin of magic algorithm
-* poly1 = a = X = V ???
-* poly2 = b
-* result = Z
-*/
-let mut result: ByteArray = ByteArray(vec![0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]);
-if poly2.LSB_is_one() {
-result.xor_byte_arrays(&poly1);
-poly2.right_shift();
-} else {
-poly2.right_shift();
-}
-while !poly2.is_empty() {
-if poly2.LSB_is_one() {
-poly1.left_shift();
-poly1.xor_byte_arrays(&red_poly_bytes);
-eprintln!("Poly1 after reduction: {:01X?}", poly1);
-result.xor_byte_arrays(&poly1);
-eprintln!(
-"LSB was one; \n
-poly1 is {:01X?}; \n
-poly2 is {:01X?}; \n
-result is: {:01X?}",
-poly1.0, poly2.0, result.0
-)
-} else {
-poly1.left_shift();
-poly1.xor_byte_arrays(&red_poly_bytes);
-eprintln!(
-"LSB was 0; \n
-poly1 is {:01X?}; \n
-poly2 is {:01X?}; \n
-result is: {:01X?}",
-poly1.0, poly2.0, result.0
-)
-}
-poly2.right_shift();
-}
-//result.xor_byte_arrays(&red_poly_bytes);
-//result.xor_byte_arrays(&red_poly_bytes);
-eprintln!("Result after last red {:01X?}", &result.0);
-eprintln!(
-"Should be: {:01X?}",
-ByteArray(BASE64_STANDARD.decode("hSQAAAAAAAAAAAAAAAAAAA==")?)
-);
-result.0.remove(16);
-let mut bytes: [u8; 16] = [0u8; 16];
-bytes.copy_from_slice(&result.0);
-Ok(BASE64_STANDARD.encode(bytes))
+let poly_b = BASE64_STANDARD.decode(poly2_text)?;
+let semantic: String = serde_json::from_value(args["semantic"].clone())?;
+let result = gfmul(&poly_a, &poly_b, &semantic)?;
+Ok(result)
}
#[cfg(test)]
mod tests {
use serde_json::json;
-use std::str::FromStr;
// Note this useful idiom: importing names from outer (for mod tests) scope.
use super::*;
@ -101,7 +28,75 @@ mod tests {
#[test]
fn gfmul_task01() -> Result<()> {
let args: Value = json!({"a": "ARIAAAAAAAAAAAAAAAAAgA==", "b": "AgAAAAAAAAAAAAAAAAAAAA=="});
-let result = gfmul(&args)?;
let poly1_text: String = serde_json::from_value(args["a"].clone())?;
let poly_a = BASE64_STANDARD.decode(poly1_text)?;
let poly2_text: String = serde_json::from_value(args["b"].clone())?;
let poly_b = BASE64_STANDARD.decode(poly2_text)?;
let result = BASE64_STANDARD.encode(gfmul(&poly_a, &poly_b, "xex")?);
assert_eq!(
result, "hSQAAAAAAAAAAAAAAAAAAA==",
"Failure. Calulated result was: {}",
result
);
Ok(())
}
#[test]
fn gfmul_task02() -> Result<()> {
let args: Value = json!({"a": "AwEAAAAAAAAAAAAAAAAAgA==", "b": "gBAAAAAAAAAAAAAAAAAAAA=="});
let poly1_text: String = serde_json::from_value(args["a"].clone())?;
let poly_a = BASE64_STANDARD.decode(poly1_text)?;
let poly2_text: String = serde_json::from_value(args["b"].clone())?;
let poly_b = BASE64_STANDARD.decode(poly2_text)?;
let result = BASE64_STANDARD.encode(gfmul(&poly_a, &poly_b, "xex")?);
assert_eq!(
result, "QKgUAAAAAAAAAAAAAAAAAA==",
"Failure. Calulated result was: {}",
result
);
Ok(())
}
#[test]
fn gfmul_task03() -> Result<()> {
let args: Value = json!({"a": "AwEAAAAAAAAAAAAAAAAAgA==", "b": "oBAAAAAAAAAAAAAAAAAAAA=="});
let poly1_text: String = serde_json::from_value(args["a"].clone())?;
let poly_a = BASE64_STANDARD.decode(poly1_text)?;
let poly2_text: String = serde_json::from_value(args["b"].clone())?;
let poly_b = BASE64_STANDARD.decode(poly2_text)?;
let result = BASE64_STANDARD.encode(gfmul(&poly_a, &poly_b, "xex")?);
assert_eq!(
result, "UIAUAAAAAAAAAAAAAAAAAA==",
"Failure. Calulated result was: {}",
result
);
Ok(())
}
#[test]
fn gfmul_task04() -> Result<()> {
let args: Value = json!({"a": "ARIAAAAAAAAAAAAAAAAAgA==", "b": "AgAAAAAAAAAAAAAAAAAAAA=="});
let poly1_text: String = serde_json::from_value(args["a"].clone())?;
let poly_a = BASE64_STANDARD.decode(poly1_text)?;
let poly2_text: String = serde_json::from_value(args["b"].clone())?;
let poly_b = BASE64_STANDARD.decode(poly2_text)?;
let result = BASE64_STANDARD.encode(gfmul(&poly_a, &poly_b, "xex")?);
assert_eq!(
result, "hSQAAAAAAAAAAAAAAAAAAA==",
"Failure. Calulated result was: {}",


@ -1,4 +1,9 @@
pub mod block2poly;
+pub mod gcm;
+pub mod gcm_crack;
pub mod gfmul;
+pub mod pad_oracle;
+pub mod pfmath;
pub mod poly2block;
pub mod sea128;
+pub mod xex;


@ -0,0 +1,148 @@
use anyhow::Result;
use base64::prelude::*;
use serde_json::Value;
use std::io::prelude::*;
use std::net::TcpStream;
use std::usize;
pub fn padding_oracle(args: &Value) -> Result<Vec<u8>> {
let hostname: String = serde_json::from_value(args["hostname"].clone())?;
let port_val: Value = serde_json::from_value(args["port"].clone())?;
let port: u64 = port_val.as_u64().expect("Failure in parsing port number");
let iv_string: String = serde_json::from_value(args["iv"].clone())?;
let iv: Vec<u8> = BASE64_STANDARD.decode(iv_string)?;
let cipher_text: String = serde_json::from_value(args["ciphertext"].clone())?;
let ciphertext: Vec<u8> = BASE64_STANDARD.decode(cipher_text)?;
// Initialise tracker to adapt correct byte
let byte_counter = 15;
eprintln!("byte_counter is: {}", byte_counter);
let mut plaintext: Vec<u8> = vec![];
eprintln!("Ciphertext: {:002X?}", ciphertext);
let cipher_chunks: Vec<&[u8]> = ciphertext.chunks(16).rev().collect();
let mut chunk_counter = 0;
for chunk in &cipher_chunks {
let mut stream = TcpStream::connect(format!("{}:{}", hostname, port))?;
stream.set_nodelay(true).expect("Error on no delay");
stream.set_nonblocking(false)?;
// Track value sent to server
let mut attack_counter: Vec<u8> = vec![0; 16];
// Amount of q blocks to send to server.
// TODO:: May be increased via function
let q_block_count: u16 = 256;
//Send the first ciphertext chunk
stream.flush()?;
stream.write_all(&chunk)?;
stream.flush()?;
for i in (0..=15).rev() {
// Craft length message
// FIXME: Assignment is redundant for now
// TODO: Goal is to maybe add speed increase in the future
let l_msg: [u8; 2] = q_block_count.to_le_bytes();
// Generate attack blocks
// TODO: Collect all and send in one
let mut payload: Vec<u8> = Vec::with_capacity(2 + 16 * 265);
payload.extend(l_msg.to_vec());
for _j in 0..q_block_count {
// Next byte
payload.extend(&attack_counter);
attack_counter[i as usize] += 1;
}
stream.write_all(&payload)?;
stream.flush()?;
// Read server response
let mut server_q_resp = [0u8; 256];
stream.read_exact(&mut server_q_resp)?;
// extract valid position
let valid_val = server_q_resp
.iter()
.position(|&r| r == 0x01)
.unwrap_or(0x00) as u8;
if valid_val == 0x00 {
eprintln!("No valid found in main loop");
}
// Craft next attack vector padding; 0x01, 0x02, ...
attack_counter[i as usize] = valid_val;
// Check for edgecase
if i == 15 {
let mut l_msg_check: Vec<u8> = vec![0x01, 0x00];
let mut check_q_block: Vec<u8> = vec![0; 16];
check_q_block[15] = attack_counter[15];
check_q_block[14] = !check_q_block[15];
l_msg_check.extend(check_q_block.as_slice());
stream.write_all(&l_msg_check)?;
let mut buf = [0u8; 0x01];
stream.read(&mut buf)?;
if buf == [0x01] {
} else {
// Search for second hit
let valid_val = 255
- server_q_resp
.iter()
.rev()
.position(|&r| r == 0x01)
.unwrap_or(0x00) as u8;
if valid_val == 0x00 {
eprintln!("No valid found");
}
// Craft next attack vector padding; 0x01, 0x02, ...
attack_counter[i as usize] = valid_val;
}
}
if chunk_counter + 1 < cipher_chunks.len() {
plaintext.push(
cipher_chunks[chunk_counter + 1][i]
^ (attack_counter[i as usize] ^ (15 - i as u8 + 1)),
);
} else {
plaintext.push(iv[i] ^ (attack_counter[i as usize] ^ (15 - i as u8 + 1)));
}
let range = i;
for pos in range..=15 {
let intermediate = attack_counter[pos as usize] ^ (15 - i as u8 + 1);
attack_counter[pos as usize] = intermediate ^ ((15 - i as u8 + 1) + 1);
}
stream.flush()?;
// Write plaintext
}
chunk_counter += 1;
stream.flush()?;
drop(stream);
}
plaintext.reverse();
eprintln!("{:02X?}", BASE64_STANDARD.encode(&plaintext));
Ok(plaintext)
} // the stream is closed here
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_connection() -> Result<()> {
Ok(())
}
}

src/tasks/tasks01/pfmath.rs (new file): 282 lines

@ -0,0 +1,282 @@
use anyhow::Result;
use base64::{prelude::BASE64_STANDARD, Engine};
use serde_json::Value;
use crate::utils::{
self,
dff::ddf,
edf::edf,
field::FieldElement,
poly::{gcd, Polynomial},
sff::{sff, Factors},
};
pub fn gfpoly_add(args: &Value) -> Result<Polynomial> {
let poly_a = Polynomial::from_c_array(&args["A"].clone());
let poly_b = Polynomial::from_c_array(&args["B"].clone());
let result = poly_a + poly_b;
Ok(result)
}
pub fn gfpoly_mul(args: &Value) -> Result<Polynomial> {
let poly_a = Polynomial::from_c_array(&args["A"].clone());
let poly_b = Polynomial::from_c_array(&args["B"].clone());
let result = poly_a * poly_b;
Ok(result)
}
pub fn gfpoly_pow(args: &Value) -> Result<Polynomial> {
let poly_a = Polynomial::from_c_array(&args["A"].clone());
let k: u128 = serde_json::from_value(args["k"].clone())?;
let result = poly_a.pow(k);
Ok(result)
}
pub fn gfdiv(args: &Value) -> Result<FieldElement> {
let f1_text: String = serde_json::from_value(args["a"].clone())?;
let f_a = FieldElement::new(BASE64_STANDARD.decode(f1_text)?);
let f2_text: String = serde_json::from_value(args["b"].clone())?;
let f_b = FieldElement::new(BASE64_STANDARD.decode(f2_text)?);
let result = f_a / f_b;
Ok(result)
}
pub fn gfpoly_divmod(args: &Value) -> Result<(Polynomial, Polynomial)> {
let poly_a = Polynomial::from_c_array(&args["A"].clone());
let poly_b = Polynomial::from_c_array(&args["B"].clone());
let result = poly_a.div(&poly_b);
Ok(result)
}
pub fn gfpoly_powmod(args: &Value) -> Result<Polynomial> {
let poly_a = Polynomial::from_c_array(&args["A"].clone());
let poly_m = Polynomial::from_c_array(&args["M"].clone());
let k: u128 = serde_json::from_value(args["k"].clone())?;
let result = poly_a.pow_mod(k, poly_m);
Ok(result)
}
pub fn gfpoly_sort(args: &Value) -> Result<Vec<Polynomial>> {
let poly_arrays: Vec<Value> = serde_json::from_value(args["polys"].clone())?;
let mut polys: Vec<Polynomial> = vec![];
for array in poly_arrays {
polys.push(Polynomial::from_c_array(&array));
}
polys.sort();
Ok(polys)
}
pub fn gfpoly_make_monic(args: &Value) -> Result<Polynomial> {
let poly_a = Polynomial::from_c_array(&args["A"].clone());
let result = poly_a.monic();
Ok(result)
}
pub fn gfpoly_sqrt(args: &Value) -> Result<Polynomial> {
let poly_a = Polynomial::from_c_array(&args["Q"].clone());
let result = poly_a.sqrt();
Ok(result)
}
pub fn gfpoly_diff(args: &Value) -> Result<Polynomial> {
let poly_f = Polynomial::from_c_array(&args["F"].clone());
let result = poly_f.diff();
Ok(result)
}
pub fn gfpoly_gcd(args: &Value) -> Result<Polynomial> {
let poly_a = Polynomial::from_c_array(&args["A"].clone());
let poly_b = Polynomial::from_c_array(&args["B"].clone());
let result = gcd(&poly_a.monic(), &poly_b.monic());
Ok(result)
}
pub fn gfpoly_factor_sff(args: &Value) -> Result<Vec<Factors>> {
let poly_f = Polynomial::from_c_array(&args["F"].clone());
let mut factors = sff(poly_f);
factors.sort();
let mut result: Vec<Factors> = vec![];
for (factor, exponent) in factors {
result.push(Factors {
factor: factor.to_c_array(),
exponent,
});
}
Ok(result)
}
pub fn gfpoly_factor_ddf(args: &Value) -> Result<Vec<utils::dff::Factors>> {
let poly_f = Polynomial::from_c_array(&args["F"].clone());
let mut factors = ddf(poly_f);
factors.sort();
let mut result: Vec<utils::dff::Factors> = vec![];
for (factor, degree) in factors {
result.push(utils::dff::Factors {
factor: factor.to_c_array(),
degree: degree as u32,
});
}
Ok(result)
}
pub fn gfpoly_factor_edf(args: &Value) -> Result<Vec<Vec<String>>> {
let poly_f = Polynomial::from_c_array(&args["F"].clone());
let d: u32 = serde_json::from_value(args["d"].clone())?;
let mut factors = edf(poly_f, d);
factors.sort();
let mut result: Vec<Vec<String>> = vec![];
for factor in factors {
result.push(factor.to_c_array())
}
Ok(result)
}
#[cfg(test)]
mod tests {
use super::*;
use serde_json::json;
#[test]
fn test_poly_sorting() {
let json1 = json!(
{"polys": [
[
"NeverGonnaGiveYouUpAAA==",
"NeverGonnaLetYouDownAA==",
"NeverGonnaRunAroundAAA==",
"AndDesertYouAAAAAAAAAA=="
],
[
"WereNoStrangersToLoveA==",
"YouKnowTheRulesAAAAAAA==",
"AndSoDoIAAAAAAAAAAAAAA=="
],
[
"NeverGonnaMakeYouCryAA==",
"NeverGonnaSayGoodbyeAA==",
"NeverGonnaTellALieAAAA==",
"AndHurtYouAAAAAAAAAAAA=="
]
]});
let expected = json!([
[
"WereNoStrangersToLoveA==",
"YouKnowTheRulesAAAAAAA==",
"AndSoDoIAAAAAAAAAAAAAA=="
],
[
"NeverGonnaMakeYouCryAA==",
"NeverGonnaSayGoodbyeAA==",
"NeverGonnaTellALieAAAA==",
"AndHurtYouAAAAAAAAAAAA=="
],
[
"NeverGonnaGiveYouUpAAA==",
"NeverGonnaLetYouDownAA==",
"NeverGonnaRunAroundAAA==",
"AndDesertYouAAAAAAAAAA=="
]
]);
let sorted_array = gfpoly_sort(&json1).unwrap();
let mut result: Vec<Vec<String>> = vec![];
for poly in sorted_array {
result.push(poly.to_c_array());
}
assert_eq!(json!(result), expected);
}
#[test]
fn test_poly_sorting_02() {
let json1 = json!(
{"polys": [
[
"AQAAAAAAAAAAAAAAAAAAAA==", // 0x01
"AgAAAAAAAAAAAAAAAAAAAA==", // 0x02
"AwAAAAAAAAAAAAAAAAAAAA==" // 0x03
],
[
"AQAAAAAAAAAAAAAAAAAAAA==", // 0x01
"AgAAAAAAAAAAAAAAAAAAAA==", // 0x02
"BAAAAAAAAAAAAAAAAAAAAA==" // 0x04
],
[
"AQAAAAAAAAAAAAAAAAAAAA==", // 0x01
"AgAAAAAAAAAAAAAAAAAAAA==" // 0x02
],
[
"AQAAAAAAAAAAAAAAAAAAAA==", // 0x01
"AwAAAAAAAAAAAAAAAAAAAA==" // 0x03
]
],});
let expected = json!([
["AQAAAAAAAAAAAAAAAAAAAA==", "AgAAAAAAAAAAAAAAAAAAAA=="],
["AQAAAAAAAAAAAAAAAAAAAA==", "AwAAAAAAAAAAAAAAAAAAAA=="],
[
"AQAAAAAAAAAAAAAAAAAAAA==",
"AgAAAAAAAAAAAAAAAAAAAA==",
"BAAAAAAAAAAAAAAAAAAAAA=="
],
[
"AQAAAAAAAAAAAAAAAAAAAA==",
"AgAAAAAAAAAAAAAAAAAAAA==",
"AwAAAAAAAAAAAAAAAAAAAA=="
]
]);
let sorted_array = gfpoly_sort(&json1).unwrap();
let mut result: Vec<Vec<String>> = vec![];
for poly in sorted_array {
result.push(poly.to_c_array());
}
assert_eq!(json!(result), expected);
}
}
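The wrappers above only adapt the JSON testcase arguments to the utils types, so a smoke test in the style of the sorting tests is enough to exercise them; a sketch for gfpoly_add, assuming the same use super::*; and use serde_json::json; imports as the tests module above (no value is asserted, since the expected sum is not part of this change):

#[test]
fn gfpoly_add_smoke() {
    // Argument shape mirrors the gfpoly_add testcase JSON added further down
    let args = json!({
        "A": [
            "NeverGonnaGiveYouUpAAA==",
            "NeverGonnaLetYouDownAA=="
        ],
        "B": [
            "KryptoanalyseAAAAAAAAA=="
        ]
    });
    let sum = gfpoly_add(&args).unwrap();
    // Coefficients come back out as base64-encoded 16-byte blocks
    eprintln!("{:?}", sum.to_c_array());
}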

View file

@ -1,8 +1,8 @@
-use crate::utils::poly::{self};
use crate::utils::poly::polynomial_2_block;
-use base64::prelude::*;
use anyhow::{Ok, Result};
use serde_json::Value;
-pub fn poly2block(args: &Value) -> String {
pub fn poly2block(args: &Value) -> Result<Vec<u8>> {
let coefficients: Vec<u8> = args["coefficients"]
.as_array()
.unwrap()
@ -10,5 +10,9 @@ pub fn poly2block(args: &Value) -> String {
.map(|x| x.as_u64().unwrap() as u8)
.collect();
-BASE64_STANDARD.encode(poly::coefficient_to_binary(coefficients).to_ne_bytes())
let semantic: String = serde_json::from_value(args["semantic"].clone())?;
let result = polynomial_2_block(coefficients, &semantic).unwrap();
Ok(result)
}

View file

@ -6,19 +6,13 @@ use crate::utils::ciphers::{sea_128_decrypt, sea_128_encrypt};
pub fn sea128(args: &Value) -> Result<String> {
let key_string: String = serde_json::from_value(args["key"].clone())?;
-//let key: &[u8] = b64_2_num(key_string)?.to_ne_bytes();
let key = BASE64_STANDARD.decode(key_string)?;
-//eprintln!("{:?}", key);
let input_string: String = serde_json::from_value(args["input"].clone())?;
-//let plaintexts: &[u8] = &b64_2_num(plaintexts_string)?.to_ne_bytes();
let input = BASE64_STANDARD.decode(input_string)?;
-let xor_val: u128 = 0xc0ffeec0ffeec0ffeec0ffeec0ffee11;
let mode: String = serde_json::from_value(args["mode"].clone())?;
match mode.as_str() {
"encrypt" => {
-//eprintln!("{:?}", plaintexts);
let output = BASE64_STANDARD.encode(sea_128_encrypt(&key, &input)?);
Ok(output)
@ -34,7 +28,6 @@ pub fn sea128(args: &Value) -> Result<String> {
#[cfg(test)]
mod tests {
-use std::fs;
use anyhow::Result;
use serde_json::json;

27
src/tasks/tasks01/xex.rs Normal file
View file

@ -0,0 +1,27 @@
use anyhow::{anyhow, Result};
use base64::prelude::*;
use serde_json::Value;
use crate::utils::ciphers::{xex_decrypt, xex_encrypt};
pub fn fde_xex(args: &Value) -> Result<Vec<u8>> {
let key_string: String = serde_json::from_value(args["key"].clone())?;
let key: Vec<u8> = BASE64_STANDARD.decode(key_string)?;
let tweak_string: String = serde_json::from_value(args["tweak"].clone())?;
let tweak: Vec<u8> = BASE64_STANDARD.decode(tweak_string)?;
let input_string: String = serde_json::from_value(args["input"].clone())?;
let input: Vec<u8> = BASE64_STANDARD.decode(input_string)?;
let mode_string: String = serde_json::from_value(args["mode"].clone())?;
match mode_string.as_str() {
"encrypt" => Ok(xex_encrypt(key, &tweak, &input)?),
"decrypt" => Ok(xex_decrypt(key, &tweak, &input)?),
_ => Err(anyhow!(
"Failure: No compatible mode found. Data was: {:?}",
args
)),
}
}
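xex.rs lands without tests of its own; a possible smoke test for the JSON plumbing, a sketch that reuses the key, tweak, input and expected output of the test_xex_encrypt test in utils::ciphers further down:

#[cfg(test)]
mod tests {
    use super::*;
    use anyhow::Result;
    use base64::prelude::*;
    use serde_json::json;
    #[test]
    fn fde_xex_encrypt_json() -> Result<()> {
        // Same vectors as test_xex_encrypt in utils::ciphers
        let args = json!({
            "mode": "encrypt",
            "key": "B1ygNO/CyRYIUYhTSgoUysX5Y/wWLi4UiWaVeloUWs0=",
            "tweak": "6VXORr+YYHrd2nVe0OlA+Q==",
            "input": "/aOg4jMocLkBLkDLgkHYtFKc2L9jjyd2WXSSyxXQikpMY9ZRnsJE76e9dW9olZIW"
        });
        let ciphertext = fde_xex(&args)?;
        assert_eq!(
            BASE64_STANDARD.encode(ciphertext),
            "mHAVhRCKPAPx0BcufG5BZ4+/CbneMV/gRvqK5rtLe0OJgpDU5iT7z2P0R7gEeRDO"
        );
        Ok(())
    }
}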

View file

@ -1,8 +1,12 @@
use crate::utils::{field::ByteArray, poly::gfmul};
use anyhow::Result;
use openssl::symm::{Cipher, Crypter, Mode};
use super::math::xor_bytes;
/// AES ENCRYPT
/// Function to perform encryption with AES ECB mode
/// Function does not use padding for blocks
pub fn aes_128_encrypt(key: &Vec<u8>, input: &Vec<u8>) -> Result<Vec<u8>> {
let mut encrypter = Crypter::new(Cipher::aes_128_ecb(), Mode::Encrypt, &key, None)?;
encrypter.pad(false);
@ -18,6 +22,9 @@ pub fn aes_128_encrypt(key: &Vec<u8>, input: &Vec<u8>) -> Result<Vec<u8>> {
Ok(ciphertext)
}
/// AES DECRYPT
/// Function to perform decryption with AES ECB mode
/// Function does not use padding for blocks
pub fn aes_128_decrypt(key: &Vec<u8>, input: &Vec<u8>) -> Result<Vec<u8>> {
let mut decrypter = Crypter::new(Cipher::aes_128_ecb(), Mode::Decrypt, key, None)?;
decrypter.pad(false);
@ -30,13 +37,18 @@ pub fn aes_128_decrypt(key: &Vec<u8>, input: &Vec<u8>) -> Result<Vec<u8>> {
let mut bytes: [u8; 16] = [0u8; 16];
bytes.copy_from_slice(&plaintext);
-let number: u128 = <u128>::from_be_bytes(bytes);
Ok(plaintext)
}
/// SEA ENCRYPT
/// Function to perform SEA encryption.
/// At its core, the function uses AES ENCRYPT, but then xors with a constant value of:
/// 0xc0ffeec0ffeec0ffeec0ffeec0ffee11
pub fn sea_128_encrypt(key: &Vec<u8>, input: &Vec<u8>) -> Result<Vec<u8>> {
// Constant value used for XOR
let xor_val: u128 = 0xc0ffeec0ffeec0ffeec0ffeec0ffee11;
let sea128_out = xor_bytes(
&aes_128_encrypt(key, input)?,
xor_val.to_be_bytes().to_vec(),
@ -44,16 +56,507 @@ pub fn sea_128_encrypt(key: &Vec<u8>, input: &Vec<u8>) -> Result<Vec<u8>> {
Ok(sea128_out)
}
/// SEA DECRYPT
/// Function to perform SEA decryption.
/// At its core, the function uses AES DECRYPT, but then xors with a constant value of:
/// 0xc0ffeec0ffeec0ffeec0ffeec0ffee11
pub fn sea_128_decrypt(key: &Vec<u8>, input: &Vec<u8>) -> Result<Vec<u8>> {
// Constant value used for XOR
let xor_val: u128 = 0xc0ffeec0ffeec0ffeec0ffeec0ffee11;
let intermediate = xor_bytes(input, xor_val.to_be_bytes().to_vec())?;
Ok(aes_128_decrypt(&key, &intermediate)?)
}
-/*
-* let mut bytes: [u8; 16] = [0u8; 16];
-bytes.copy_from_slice(&ciphertext);
-let number: u128 = <u128>::from_be_bytes(bytes);
-* */
/// Function to perform xex encryption.
/// The function performs the encryption for XEX on the basis of the SEA ENCRYPT.
pub fn xex_encrypt(mut key: Vec<u8>, tweak: &Vec<u8>, input: &Vec<u8>) -> Result<Vec<u8>> {
let key2: Vec<u8> = key.split_off(16);
let input_chunks: Vec<Vec<u8>> = input.chunks(16).map(|x| x.to_vec()).collect();
let mut output: Vec<u8> = vec![];
let mut tweak_block: ByteArray = ByteArray(sea_128_encrypt(&key2, tweak)?);
for chunk in input_chunks {
let plaintext_intermediate = xor_bytes(&tweak_block.0, chunk)?;
let cypher_block_intermediate = sea_128_encrypt(&key, &plaintext_intermediate)?;
let mut cypher_block = xor_bytes(&tweak_block.0, cypher_block_intermediate)?;
output.append(cypher_block.as_mut());
tweak_block.left_shift_reduce("xex");
}
Ok(output)
}
pub fn xex_decrypt(mut key: Vec<u8>, tweak: &Vec<u8>, input: &Vec<u8>) -> Result<Vec<u8>> {
let key2: Vec<u8> = key.split_off(16);
let input_chunks: Vec<Vec<u8>> = input.chunks(16).map(|x| x.to_vec()).collect();
let mut output: Vec<u8> = vec![];
let mut tweak_block: ByteArray = ByteArray(sea_128_encrypt(&key2, tweak)?);
for chunk in input_chunks {
let cyphertext_intermediate = xor_bytes(&tweak_block.0, chunk)?;
let plaintext_block_intermediate = sea_128_decrypt(&key, &cyphertext_intermediate)?;
let mut cypher_block = xor_bytes(&tweak_block.0, plaintext_block_intermediate)?;
output.append(cypher_block.as_mut());
tweak_block.left_shift_reduce("xex");
}
Ok(output)
}
pub fn gcm_encrypt_aes(
mut nonce: Vec<u8>,
key: Vec<u8>,
plaintext: Vec<u8>,
ad: Vec<u8>,
) -> Result<(Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>)> {
let mut ciphertext: Vec<u8> = vec![];
let mut counter: u32 = 1;
nonce.append(counter.to_be_bytes().to_vec().as_mut());
//nonce.append(0u8.to_le_bytes().to_vec().as_mut());
let auth_tag_xor = aes_128_encrypt(&key, &nonce)?;
let auth_key_h = aes_128_encrypt(&key, &0u128.to_be_bytes().to_vec())?;
let plaintext_chunks: Vec<Vec<u8>> = plaintext.chunks(16).map(|x| x.to_vec()).collect();
counter = 2;
for chunk in plaintext_chunks {
nonce.drain(12..);
nonce.append(counter.to_be_bytes().to_vec().as_mut());
let inter1 = aes_128_encrypt(&key, &nonce)?;
let mut inter2 = xor_bytes(&inter1, chunk.clone())?;
ciphertext.append(inter2.as_mut());
counter += 1;
}
let mut l_field: Vec<u8> = ((ad.len() * 8) as u64).to_be_bytes().to_vec();
let mut c_len: Vec<u8> = ((ciphertext.len() * 8) as u64).to_be_bytes().to_vec();
l_field.append(c_len.as_mut());
let auth_tag = xor_bytes(
&ghash(auth_key_h.clone(), ad, ciphertext.clone(), l_field.clone())?,
auth_tag_xor,
)?;
Ok((ciphertext, auth_tag, l_field, auth_key_h))
}
pub fn gcm_decrypt_aes(
mut nonce: Vec<u8>,
key: Vec<u8>,
ciphertext: Vec<u8>,
ad: Vec<u8>,
tag: Vec<u8>,
) -> Result<(Vec<u8>, bool)> {
let mut plaintext: Vec<u8> = vec![];
let mut counter: u32 = 1;
nonce.append(counter.to_be_bytes().to_vec().as_mut());
let auth_tag_xor = aes_128_encrypt(&key, &nonce)?;
let auth_key_h = aes_128_encrypt(&key, &0u128.to_be_bytes().to_vec())?;
let ciphertext_chunks: Vec<Vec<u8>> = ciphertext.chunks(16).map(|x| x.to_vec()).collect();
counter = 2;
for chunk in ciphertext_chunks {
nonce.drain(12..);
nonce.append(counter.to_be_bytes().to_vec().as_mut());
let inter1 = aes_128_encrypt(&key, &nonce)?;
let mut inter2 = xor_bytes(&inter1, chunk.clone())?;
plaintext.append(inter2.as_mut());
counter += 1;
}
let mut l_field: Vec<u8> = ((ad.len() * 8) as u64).to_be_bytes().to_vec();
let mut c_len: Vec<u8> = ((ciphertext.len() * 8) as u64).to_be_bytes().to_vec();
l_field.append(c_len.as_mut());
let auth_tag = xor_bytes(
&ghash(auth_key_h.clone(), ad, ciphertext.clone(), l_field.clone())?,
auth_tag_xor,
)?;
let valid = auth_tag == tag;
Ok((plaintext, valid))
}
pub fn gcm_encrypt_sea(
mut nonce: Vec<u8>,
key: Vec<u8>,
plaintext: Vec<u8>,
ad: Vec<u8>,
) -> Result<(Vec<u8>, Vec<u8>, Vec<u8>, Vec<u8>)> {
let mut ciphertext: Vec<u8> = vec![];
let mut counter: u32 = 1;
nonce.append(counter.to_be_bytes().to_vec().as_mut());
//nonce.append(0u8.to_le_bytes().to_vec().as_mut());
let auth_tag_xor = sea_128_encrypt(&key, &nonce)?;
let auth_key_h = sea_128_encrypt(&key, &0u128.to_be_bytes().to_vec())?;
let plaintext_chunks: Vec<Vec<u8>> = plaintext.chunks(16).map(|x| x.to_vec()).collect();
counter = 2;
for chunk in plaintext_chunks {
nonce.drain(12..);
nonce.append(counter.to_be_bytes().to_vec().as_mut());
let inter1 = sea_128_encrypt(&key, &nonce)?;
let mut inter2 = xor_bytes(&inter1, chunk.clone())?;
ciphertext.append(inter2.as_mut());
counter += 1;
}
let mut l_field: Vec<u8> = ((ad.len() * 8) as u64).to_be_bytes().to_vec();
let mut c_len: Vec<u8> = ((ciphertext.len() * 8) as u64).to_be_bytes().to_vec();
l_field.append(c_len.as_mut());
let auth_tag = xor_bytes(
&ghash(auth_key_h.clone(), ad, ciphertext.clone(), l_field.clone())?,
auth_tag_xor,
)?;
Ok((ciphertext, auth_tag, l_field, auth_key_h))
}
pub fn gcm_decrypt_sea(
mut nonce: Vec<u8>,
key: Vec<u8>,
ciphertext: Vec<u8>,
ad: Vec<u8>,
tag: Vec<u8>,
) -> Result<(Vec<u8>, bool)> {
let mut plaintext: Vec<u8> = vec![];
let mut counter: u32 = 1;
nonce.append(counter.to_be_bytes().to_vec().as_mut());
let auth_tag_xor = sea_128_encrypt(&key, &nonce)?;
let auth_key_h = sea_128_encrypt(&key, &0u128.to_be_bytes().to_vec())?;
let plaintext_chunks: Vec<Vec<u8>> = ciphertext.chunks(16).map(|x| x.to_vec()).collect();
counter = 2;
for chunk in plaintext_chunks {
nonce.drain(12..);
nonce.append(counter.to_be_bytes().to_vec().as_mut());
let inter1 = sea_128_encrypt(&key, &nonce)?;
let mut inter2 = xor_bytes(&inter1, chunk.clone())?;
plaintext.append(inter2.as_mut());
counter += 1;
}
let mut l_field: Vec<u8> = ((ad.len() * 8) as u64).to_be_bytes().to_vec();
let mut c_len: Vec<u8> = ((plaintext.len() * 8) as u64).to_be_bytes().to_vec();
l_field.append(c_len.as_mut());
let auth_tag = xor_bytes(
&ghash(auth_key_h.clone(), ad, ciphertext.clone(), l_field.clone())?,
auth_tag_xor,
)?;
let valid = auth_tag == tag;
Ok((plaintext, valid))
}
pub fn ghash(
auth_key_h: Vec<u8>,
mut ad: Vec<u8>,
mut ciphertext: Vec<u8>,
l_field: Vec<u8>,
) -> Result<Vec<u8>> {
let output: Vec<u8> = vec![0; 16];
if ad.len() % 16 != 0 || ad.is_empty() {
ad.append(vec![0u8; 16 - (ad.len() % 16)].as_mut());
}
if ciphertext.len() % 16 != 0 {
ciphertext.append(vec![0u8; 16 - (ciphertext.len() % 16)].as_mut());
}
let mut ad_chunks = ad.chunks(16);
let inter1 = xor_bytes(&output, ad_chunks.next().unwrap().to_vec())?;
let mut inter_loop = gfmul(&inter1, &auth_key_h, "gcm")?;
for chunk in ad_chunks {
let inter2 = xor_bytes(&inter_loop, chunk.to_vec())?;
inter_loop = gfmul(&inter2, &auth_key_h, "gcm")?;
}
let cipher_chunks = ciphertext.chunks(16);
for chunk in cipher_chunks {
let inter3 = xor_bytes(&inter_loop, chunk.to_vec())?;
inter_loop = gfmul(&inter3, &auth_key_h, "gcm")?;
}
let inter4 = xor_bytes(&inter_loop, l_field)?;
inter_loop = gfmul(&inter4, &auth_key_h, "gcm")?;
Ok(inter_loop)
}
#[cfg(test)]
mod tests {
use super::*;
use base64::prelude::*;
#[test]
fn test_xex_encrypt() -> Result<()> {
let key = BASE64_STANDARD.decode("B1ygNO/CyRYIUYhTSgoUysX5Y/wWLi4UiWaVeloUWs0=")?;
let tweak = BASE64_STANDARD.decode("6VXORr+YYHrd2nVe0OlA+Q==")?;
let input = BASE64_STANDARD
.decode("/aOg4jMocLkBLkDLgkHYtFKc2L9jjyd2WXSSyxXQikpMY9ZRnsJE76e9dW9olZIW")?;
let output = BASE64_STANDARD.encode(xex_encrypt(key, &tweak, &input)?);
assert_eq!(
output,
"mHAVhRCKPAPx0BcufG5BZ4+/CbneMV/gRvqK5rtLe0OJgpDU5iT7z2P0R7gEeRDO"
);
Ok(())
}
#[test]
fn test_xex_decrypt() -> Result<()> {
let key = BASE64_STANDARD.decode("B1ygNO/CyRYIUYhTSgoUysX5Y/wWLi4UiWaVeloUWs0=")?;
let tweak = BASE64_STANDARD.decode("6VXORr+YYHrd2nVe0OlA+Q==")?;
let input = BASE64_STANDARD
.decode("lr/ItaYGFXCtHhdPndE65yg7u/GIdM9wscABiiFOUH2Sbyc2UFMlIRSMnZrYCW1a")?;
let output = BASE64_STANDARD.encode(xex_decrypt(key, &tweak, &input)?);
assert_eq!(
output,
"SGV5IHdpZSBrcmFzcyBkYXMgZnVua3Rpb25pZXJ0IGphIG9mZmVuYmFyIGVjaHQu"
);
Ok(())
}
#[test]
fn test_xex_encrypt_empty_case() -> Result<()> {
let key = BASE64_STANDARD.decode("B1ygNO/CyRYIUYhTSgoUysX5Y/wWLi4UiWaVeloUWs0=")?;
let tweak = BASE64_STANDARD.decode("6VXORr+YYHrd2nVe0OlA+Q==")?;
let input = BASE64_STANDARD.decode("")?;
let output = BASE64_STANDARD.encode(xex_encrypt(key, &tweak, &input)?);
assert_eq!(output, "");
Ok(())
}
#[test]
fn test_gcm_encrypt_aes() -> Result<()> {
let nonce = BASE64_STANDARD.decode("4gF+BtR3ku/PUQci")?;
let key = BASE64_STANDARD.decode("Xjq/GkpTSWoe3ZH0F+tjrQ==")?;
let plaintext = BASE64_STANDARD.decode("RGFzIGlzdCBlaW4gVGVzdA==")?;
let ad = BASE64_STANDARD.decode("QUQtRGF0ZW4=")?;
let (ciphertext, auth_tag, l_field, auth_key_h) =
gcm_encrypt_aes(nonce, key, plaintext, ad)?;
eprintln!(
"Cipher: {:001X?} \n Tag: {:001X?} \n L_Field: {:001X?} \n H: {:001X?}",
BASE64_STANDARD.encode(&ciphertext),
BASE64_STANDARD.encode(&auth_tag),
BASE64_STANDARD.encode(&l_field),
BASE64_STANDARD.encode(&auth_key_h)
);
assert_eq!(
BASE64_STANDARD.encode(ciphertext),
"ET3RmvH/Hbuxba63EuPRrw=="
);
assert_eq!(BASE64_STANDARD.encode(auth_tag), "Mp0APJb/ZIURRwQlMgNN/w==");
assert_eq!(BASE64_STANDARD.encode(l_field), "AAAAAAAAAEAAAAAAAAAAgA==");
assert_eq!(
BASE64_STANDARD.encode(auth_key_h),
"Bu6ywbsUKlpmZXMQyuGAng=="
);
Ok(())
}
#[test]
fn test_gcm_encrypt_aes_long_ad() -> Result<()> {
let nonce = BASE64_STANDARD.decode("yv66vvrO263eyviI")?;
let key = BASE64_STANDARD.decode("/v/pkoZlcxxtao+UZzCDCA==")?;
let plaintext = BASE64_STANDARD.decode(
"2TEyJfiEBuWlWQnFr/UmmoanqVMVNPfaLkwwPYoxinIcPAyVlWgJUy/PDiRJprUlsWrt9aoN5le6Y3s5",
)?;
let ad = BASE64_STANDARD.decode("/u36zt6tvu/+7frO3q2+76ut2tI=")?;
let (ciphertext, auth_tag, l_field, auth_key_h) =
gcm_encrypt_aes(nonce, key, plaintext, ad)?;
eprintln!(
"Cipher: {:001X?} \n Tag: {:001X?} \n L_Field: {:001X?} \n H: {:001X?}",
BASE64_STANDARD.encode(&ciphertext),
BASE64_STANDARD.encode(&auth_tag),
BASE64_STANDARD.encode(&l_field),
BASE64_STANDARD.encode(&auth_key_h)
);
assert_eq!(
BASE64_STANDARD.encode(ciphertext),
"QoMewiF3dCRLciG3hNDUnOOqIS8sAqTgNcF+IymsoS4h1RSyVGaTHH2PalqshKoFG6MLOWoKrJc9WOCR"
);
assert_eq!(BASE64_STANDARD.encode(auth_tag), "W8lPvDIhpduU+ula5xIaRw==");
assert_eq!(BASE64_STANDARD.encode(l_field), "AAAAAAAAAKAAAAAAAAAB4A==");
assert_eq!(
BASE64_STANDARD.encode(auth_key_h),
"uDtTNwi/U10KpuUpgNU7eA=="
);
Ok(())
}
/*
* TODO:Not sure if this case can really happen in our data
#[test]
fn test_gcm_encrypt_aes_long_0000() -> Result<()> {
let nonce = BASE64_STANDARD.decode(
"kxMiXfiEBuVVkJxa/1Jpqmp6lThTT32h5MMD0qMYpyjDwMlRVoCVOfzw4kKaa1JUFq7b9aDealemN7Ob",
)?;
let key = BASE64_STANDARD.decode("/v/pkoZlcxxtao+UZzCDCP7/6ZKGZXMcbWqPlGcwgwg=")?;
let plaintext = BASE64_STANDARD.decode(
"2TEyJfiEBuWlWQnFr/UmmoanqVMVNPfaLkwwPYoxinIcPAyVlWgJUy/PDiRJprUlsWrt9aoN5le6Y3s5",
)?;
let ad = BASE64_STANDARD.decode("/u36zt6tvu/+7frO3q2+76ut2tI=")?;
let (ciphertext, auth_tag, l_field, auth_key_h) =
gcm_encrypt_aes(nonce, key, plaintext, ad)?;
eprintln!(
"Cipher: {:001X?} \n Tag: {:001X?} \n L_Field: {:001X?} \n H: {:001X?}",
BASE64_STANDARD.encode(&ciphertext),
BASE64_STANDARD.encode(&auth_tag),
BASE64_STANDARD.encode(&l_field),
BASE64_STANDARD.encode(&auth_key_h)
);
assert_eq!(
BASE64_STANDARD.encode(ciphertext),
"Wo3vLwyeU/H3XXhTZZ4qIO6ysiqv3mQZoFirT290a/QPwMO3gPJERS2j6/HF2CzeokGJlyAO+C5Ern4/"
);
assert_eq!(BASE64_STANDARD.encode(auth_tag), "pEqCZu4cjrDItdTPWunxmg==");
assert_eq!(BASE64_STANDARD.encode(l_field), "AAAAAAAAAKAAAAAAAAAB4A==");
assert_eq!(
BASE64_STANDARD.encode(auth_key_h),
"rL7yBXm0uOvOiJushzLa1w=="
);
Ok(())
}
*/
#[test]
fn test_gcm_encrypt_sea() -> Result<()> {
let nonce = BASE64_STANDARD.decode("4gF+BtR3ku/PUQci")?;
let key = BASE64_STANDARD.decode("Xjq/GkpTSWoe3ZH0F+tjrQ==")?;
let plaintext = BASE64_STANDARD.decode("RGFzIGlzdCBlaW4gVGVzdA==")?;
let ad = BASE64_STANDARD.decode("QUQtRGF0ZW4=")?;
let (ciphertext, auth_tag, l_field, auth_key_h) =
gcm_encrypt_sea(nonce, key, plaintext, ad)?;
eprintln!(
"Cipher: {:001X?} \n Tag: {:001X?} \n L_Field: {:001X?} \n H: {:001X?}",
BASE64_STANDARD.encode(&ciphertext),
BASE64_STANDARD.encode(&auth_tag),
BASE64_STANDARD.encode(&l_field),
BASE64_STANDARD.encode(&auth_key_h)
);
assert_eq!(
BASE64_STANDARD.encode(ciphertext),
"0cI/Wg4R3URfrVFZ0hw/vg=="
);
assert_eq!(BASE64_STANDARD.encode(auth_tag), "ysDdzOSnqLH0MQ+Mkb23gw==");
assert_eq!(BASE64_STANDARD.encode(l_field), "AAAAAAAAAEAAAAAAAAAAgA==");
assert_eq!(
BASE64_STANDARD.encode(auth_key_h),
"xhFcAUT66qWIpYz+Ch5ujw=="
);
Ok(())
}
#[test]
fn test_gcm_decrypt_aes() -> Result<()> {
let nonce = BASE64_STANDARD.decode("4gF+BtR3ku/PUQci")?;
let key = BASE64_STANDARD.decode("Xjq/GkpTSWoe3ZH0F+tjrQ==")?;
let ciphertext = BASE64_STANDARD.decode("ET3RmvH/Hbuxba63EuPRrw==")?;
let ad = BASE64_STANDARD.decode("QUQtRGF0ZW4=")?;
let tag = BASE64_STANDARD.decode("Mp0APJb/ZIURRwQlMgNN/w==")?;
let (plaintext, valid) = gcm_decrypt_aes(nonce, key, ciphertext, ad, tag)?;
eprintln!(
"Cipher: {:001X?} \n Valids: {:001X?}",
BASE64_STANDARD.encode(&plaintext),
&valid,
);
assert_eq!(
BASE64_STANDARD.encode(plaintext),
"RGFzIGlzdCBlaW4gVGVzdA=="
);
assert_eq!(valid, true);
Ok(())
}
#[test]
fn test_gcm_decrypt_sea() -> Result<()> {
let nonce = BASE64_STANDARD.decode("4gF+BtR3ku/PUQci")?;
let key = BASE64_STANDARD.decode("Xjq/GkpTSWoe3ZH0F+tjrQ==")?;
let ciphertext = BASE64_STANDARD.decode("0cI/Wg4R3URfrVFZ0hw/vg==")?;
let ad = BASE64_STANDARD.decode("QUQtRGF0ZW4=")?;
let tag = BASE64_STANDARD.decode("ysDdzOSnqLH0MQ+Mkb23gw==")?;
let (plaintext, valid) = gcm_decrypt_sea(nonce, key, ciphertext, ad, tag)?;
eprintln!(
"Plaintext: {:001X?} \n Valid: {:001X?}",
BASE64_STANDARD.encode(&plaintext),
&valid,
);
assert_eq!(
BASE64_STANDARD.encode(plaintext),
"RGFzIGlzdCBlaW4gVGVzdA=="
);
assert_eq!(valid, true);
Ok(())
}
}
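For reference, the tag assembled by gcm_encrypt_aes / gcm_encrypt_sea via ghash above is the usual GCM construction; a summary of what the code computes:

\[ H = E_K(0^{128}), \qquad Y_0 = 0, \qquad Y_i = (Y_{i-1} \oplus X_i) \cdot H, \qquad T = Y_{\mathrm{last}} \oplus E_K(\mathrm{nonce} \,\|\, 1) \]

The blocks $X_i$ run over the zero-padded associated data, the zero-padded ciphertext and finally the length block (the bit lengths of AD and ciphertext as two big-endian 64-bit values, i.e. l_field); the multiplication is gfmul in the gcm semantic, and the keystream blocks are $E_K(\mathrm{nonce} \,\|\, c)$ for counters $c = 2, 3, \dots$ appended to the 12-byte nonce.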

81
src/utils/dff.rs Normal file
View file

@ -0,0 +1,81 @@
use std::usize;
use num::{pow::Pow, BigUint, FromPrimitive};
use serde::{Deserialize, Serialize};
use super::poly::{gcd, Polynomial};
#[derive(Debug, Serialize, Deserialize)]
pub struct Factors {
pub factor: Vec<String>,
pub degree: u32,
}
pub fn ddf(f: Polynomial) -> Vec<(Polynomial, u128)> {
let q = BigUint::pow(&BigUint::from_u8(2).unwrap(), 128);
let mut z: Vec<(Polynomial, u128)> = vec![];
let mut d: u128 = 1;
let mut f_star = f.clone();
let one_cmp = Polynomial::one();
while f_star.degree() as u128 >= (2 * d) {
let h = Polynomial::x().bpow_mod(q.clone().pow(d), &f_star.clone()) + Polynomial::x();
let g = gcd(&h, &f_star);
if g != one_cmp {
z.push((g.clone(), d));
f_star = f_star.div(&g).0;
}
d += 1;
}
if f_star != one_cmp {
z.push((f_star.clone(), f_star.degree() as u128));
} else if z.len() == 0 {
z.push((f.clone(), 1));
}
z
}
#[cfg(test)]
mod tests {
use serde_json::json;
// Note this useful idiom: importing names from outer (for mod tests) scope.
use super::*;
#[test]
fn test_dff_sheet() {
let json_f = json!([
"tpkgAAAAAAAAAAAAAAAAAA==",
"m6MQAAAAAAAAAAAAAAAAAA==",
"8roAAAAAAAAAAAAAAAAAAA==",
"3dUAAAAAAAAAAAAAAAAAAA==",
"FwAAAAAAAAAAAAAAAAAAAA==",
"/kAAAAAAAAAAAAAAAAAAAA==",
"a4AAAAAAAAAAAAAAAAAAAA==",
"gAAAAAAAAAAAAAAAAAAAAA=="
]);
let poly_f = Polynomial::from_c_array(&json_f);
let mut factors = ddf(poly_f);
factors.sort();
let mut result: Vec<Factors> = vec![];
for (factor, degree) in factors {
result.push(Factors {
factor: factor.to_c_array(),
degree: degree as u32,
});
}
println!("Result: {:?}", result);
let _bit_indices: Vec<u8> = vec![0];
assert!(false)
}
}
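The ddf loop above is the standard distinct-degree factorisation step over $\mathbb{F}_{2^{128}}$; what each pass computes, in the usual notation:

\[ h \equiv x^{q^d} + x \pmod{f^*}, \qquad g = \gcd(h, f^*), \qquad q = 2^{128} \]

Here $g$ is the product of all irreducible factors of $f^*$ of degree exactly $d$, so it is recorded together with $d$ and divided off before $d$ is incremented; whatever is left after the loop is itself irreducible and is emitted with its own degree.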

86
src/utils/edf.rs Normal file
View file

@ -0,0 +1,86 @@
use num::{BigUint, FromPrimitive, One};
use rand::Rng;
use super::poly::{gcd, Polynomial};
pub fn edf(f: Polynomial, d: u32) -> Vec<Polynomial> {
let q = BigUint::pow(&BigUint::from_u8(2).unwrap(), 128);
let n: u32 = (f.degree() as u32) / (d);
let mut z: Vec<Polynomial> = vec![f.clone()];
let one_cmp = Polynomial::one();
while (z.len() as u32) < n {
let h = Polynomial::rand(&rand::thread_rng().gen_range(1..=f.degree()));
let exponent = (q.pow(d) - BigUint::one()) / BigUint::from_u8(3).unwrap();
let g = h.bpow_mod(exponent, &f) + Polynomial::one();
for i in (0..z.len()).rev() {
if z[i].degree() as u32 > d {
let j = gcd(&z[i], &g);
if j != one_cmp && j != z[i] {
let intermediate = z[i].div(&j).0;
z.remove(i);
z.push(j.clone());
z.push(intermediate);
}
}
}
}
z
}
#[cfg(test)]
mod tests {
use serde_json::json;
// Note this useful idiom: importing names from outer (for mod tests) scope.
use super::*;
#[test]
fn test_edf_sheet() {
let json_f = json!([
"mmAAAAAAAAAAAAAAAAAAAA==",
"AbAAAAAAAAAAAAAAAAAAAA==",
"zgAAAAAAAAAAAAAAAAAAAA==",
"FwAAAAAAAAAAAAAAAAAAAA==",
"AAAAAAAAAAAAAAAAAAAAAA==",
"wAAAAAAAAAAAAAAAAAAAAA==",
"gAAAAAAAAAAAAAAAAAAAAA=="
]);
let d = 3;
let poly_f = Polynomial::from_c_array(&json_f);
let mut factors = edf(poly_f, d);
factors.sort();
let mut result: Vec<Vec<String>> = vec![];
for factor in factors {
result.push(factor.to_c_array())
}
println!("Result: {:?}", result);
assert_eq!(
result,
vec![
[
"iwAAAAAAAAAAAAAAAAAAAA==",
"CAAAAAAAAAAAAAAAAAAAAA==",
"AAAAAAAAAAAAAAAAAAAAAA==",
"gAAAAAAAAAAAAAAAAAAAAA=="
],
[
"kAAAAAAAAAAAAAAAAAAAAA==",
"CAAAAAAAAAAAAAAAAAAAAA==",
"wAAAAAAAAAAAAAAAAAAAAA==",
"gAAAAAAAAAAAAAAAAAAAAA=="
]
]
)
}
}
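edf performs the equal-degree split: under the precondition that every irreducible factor of f has degree d, it draws a random polynomial h and forms

\[ g = h^{(q^d - 1)/3} + 1, \qquad q = 2^{128} \]

Then $\gcd(g, u)$ splits any entry $u$ of z whose degree is still above $d$ into two nontrivial factors, until z holds all $n = \deg(f)/d$ irreducible factors. The exponent is an integer because $q \equiv 1 \pmod 3$, so $3$ divides $q^d - 1$.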

468
src/utils/field.rs Normal file
View file

@ -0,0 +1,468 @@
use base64::prelude::*;
use std::{u128, u8, usize};
use std::{
cmp::Ordering,
ops::{Add, BitXor, Div, Mul},
};
use anyhow::{anyhow, Ok, Result};
use super::{
math::{reverse_bits_in_bytevec, xor_bytes},
poly::gfmul,
};
#[derive(Debug, serde::Serialize, serde::Deserialize)]
pub struct FieldElement {
field_element: Vec<u8>,
}
impl FieldElement {
pub const IRREDUCIBLE_POLYNOMIAL: [u8; 17] = [
0x87, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 0x01,
];
pub fn rand() -> Self {
let rand_field: [u8; 16] = rand::random();
FieldElement::new_no_convert(rand_field.to_vec())
}
pub fn zero() -> Self {
FieldElement::new_no_convert(vec![0; 16])
}
pub fn one() -> Self {
FieldElement::new_no_convert(vec![0x01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
}
pub fn to_vec(&self) -> Vec<u8> {
self.field_element.clone()
}
pub fn new(field_element: Vec<u8>) -> Self {
Self {
field_element: reverse_bits_in_bytevec(field_element),
}
}
pub fn new_no_convert(field_element: Vec<u8>) -> Self {
Self { field_element }
}
pub fn mul(&self, poly_a: Vec<u8>, poly_b: Vec<u8>) -> Result<Vec<u8>> {
gfmul(&poly_a, &poly_b, "gcm")
}
pub fn to_b64(&self) -> String {
BASE64_STANDARD.encode(reverse_bits_in_bytevec(self.field_element.to_owned()))
}
pub fn pow(mut self, mut exponent: u128) -> FieldElement {
let mut result: FieldElement = FieldElement::one();
if exponent == 1 {
return self;
}
if exponent == 0 {
let result = FieldElement::one();
return result;
}
while exponent > 0 {
if exponent & 1 == 1 {
let temp = &self * &result;
result = temp
}
let temp_square = &self * &self;
self = temp_square;
exponent >>= 1;
}
result
}
pub fn inv(mut self) -> Self {
const INVERSER_START: u128 = 0xfffffffffffffffffffffffffffffffe;
let mut inverser = INVERSER_START;
let mut inverse: Vec<u8> = vec![0x01, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
while inverser > 0 {
if inverser & 1 == 1 {
inverse = gfmul(&self.field_element, &inverse, "xex").unwrap();
}
inverser >>= 1;
self.field_element = gfmul(&self.field_element, &self.field_element, "xex")
.expect("Error in sqrmul sqr");
}
FieldElement::new_no_convert(inverse)
}
pub fn is_zero(&self) -> bool {
self.field_element.iter().all(|&x| x == 0x00)
}
pub fn reverse_bits(&self) -> Self {
FieldElement::new_no_convert(reverse_bits_in_bytevec(self.field_element.clone()))
}
}
impl Mul for FieldElement {
type Output = Self;
fn mul(self, rhs: Self) -> Self::Output {
FieldElement::new_no_convert(
gfmul(&self.field_element, &rhs.field_element, "xex")
.expect("Error during multiplication"),
)
}
}
impl Mul for &FieldElement {
type Output = FieldElement;
fn mul(self, rhs: &FieldElement) -> FieldElement {
FieldElement::new_no_convert(
gfmul(&self.field_element, &rhs.field_element, "xex")
.expect("Error during multiplication"),
)
}
}
impl Add for FieldElement {
type Output = Self;
fn add(self, rhs: Self) -> Self::Output {
FieldElement::new_no_convert(
xor_bytes(&self.field_element, rhs.field_element).expect("Error in poly add"),
)
}
}
impl Add for &FieldElement {
type Output = FieldElement;
fn add(self, rhs: Self) -> Self::Output {
FieldElement::new_no_convert(
xor_bytes(&self.field_element, rhs.field_element.clone()).expect("Error in poly add"),
)
}
}
impl AsRef<[u8]> for FieldElement {
fn as_ref(&self) -> &[u8] {
&self.field_element.as_ref()
}
}
impl Clone for FieldElement {
fn clone(&self) -> Self {
FieldElement {
field_element: self.field_element.clone(),
}
}
}
impl BitXor for FieldElement {
type Output = Self;
fn bitxor(self, rhs: Self) -> Self::Output {
let result: Vec<u8> = self
.field_element
.iter()
.zip(rhs.field_element.iter())
.map(|(&x1, &x2)| x1 ^ x2)
.collect();
FieldElement::new_no_convert(result)
}
}
impl Div for FieldElement {
type Output = Self;
fn div(self, rhs: Self) -> Self::Output {
let inverse = rhs.inv();
self * inverse
}
}
impl Div for &FieldElement {
type Output = FieldElement;
fn div(self, rhs: Self) -> Self::Output {
self.clone() * rhs.clone().inv()
}
}
impl PartialOrd for FieldElement {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
for (byte_a, byte_b) in self.as_ref().iter().rev().zip(other.as_ref().iter().rev()) {
if byte_a > byte_b {
return Some(Ordering::Greater);
} else if byte_a < byte_b {
return Some(Ordering::Less);
} else {
continue;
}
}
Some(Ordering::Equal)
}
}
impl PartialEq for FieldElement {
fn eq(&self, other: &Self) -> bool {
self.field_element == other.field_element
}
}
impl Eq for FieldElement {}
impl Ord for FieldElement {
fn cmp(&self, other: &Self) -> Ordering {
for (byte_a, byte_b) in self.as_ref().iter().rev().zip(other.as_ref().iter().rev()) {
if byte_a > byte_b {
return Ordering::Greater;
} else if byte_a < byte_b {
return Ordering::Less;
} else {
continue;
}
}
Ordering::Equal
}
}
#[derive(Debug)]
pub struct ByteArray(pub Vec<u8>);
impl ByteArray {
pub fn left_shift(&mut self, semantic: &str) -> Result<u8> {
match semantic {
"xex" => {
let mut carry = 0u8;
for byte in self.0.iter_mut() {
let new_carry = *byte >> 7;
*byte = (*byte << 1) | carry;
carry = new_carry;
}
Ok(carry)
}
"gcm" => {
let mut carry = 0u8;
for byte in self.0.iter_mut() {
let new_carry = *byte & 1;
*byte = (*byte >> 1) | (carry << 7);
carry = new_carry;
}
Ok(carry)
}
_ => Err(anyhow!("Failure in lsh. No compatible action found")),
}
}
pub fn left_shift_reduce(&mut self, semantic: &str) {
match semantic {
"xex" => {
let alpha_poly: Vec<u8> = base64::prelude::BASE64_STANDARD
.decode("AgAAAAAAAAAAAAAAAAAAAA==")
.expect("Decode failed");
self.0 = gfmul(&self.0, &alpha_poly, "xex").unwrap();
}
"gcm" => {
let alpha_poly: Vec<u8> = base64::prelude::BASE64_STANDARD
.decode("AgAAAAAAAAAAAAAAAAAAAA==")
.expect("Decode failed");
self.0 = gfmul(&self.0, &alpha_poly, "gcm").unwrap();
}
_ => {}
}
}
pub fn right_shift(&mut self, semantic: &str) -> Result<u8> {
match semantic {
"xex" => {
let mut carry = 0u8;
for byte in self.0.iter_mut().rev() {
let new_carry = *byte & 1;
*byte = (*byte >> 1) | (carry << 7);
carry = new_carry;
}
Ok(carry)
}
"gcm" => {
let mut carry = 0u8;
for byte in self.0.iter_mut().rev() {
let new_carry = *byte & 1;
*byte = (*byte << 1) | carry;
carry = new_carry;
}
Ok(carry)
}
_ => Err(anyhow!("Failure in rsh. No valid semantic found")),
}
}
pub fn xor_byte_arrays(&mut self, vec2: &ByteArray) {
self.0
.iter_mut()
.zip(vec2.0.iter())
.for_each(|(x1, x2)| *x1 ^= *x2);
}
pub fn LSB_is_one(&self) -> bool {
(self.0.first().unwrap() & 1) == 1
}
pub fn msb_is_one(&self) -> bool {
(self.0.last().unwrap() & 1) == 1
}
pub fn is_empty(&self) -> bool {
for i in self.0.iter() {
if *i != 0 {
return false;
}
}
true
}
pub fn reverse_bits_in_bytevec(&mut self) {
self.0 = self.0.iter_mut().map(|byte| byte.reverse_bits()).collect();
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_byte_array_shift1() {
let mut byte_array: ByteArray = ByteArray(vec![0x00, 0x01]);
let shifted_array: ByteArray = ByteArray(vec![0x00, 0x02]);
byte_array.left_shift("xex").unwrap();
assert_eq!(byte_array.0, shifted_array.0);
}
#[test]
fn test_byte_array_shift2() {
let mut byte_array: ByteArray = ByteArray(vec![0xFF, 0x00]);
let shifted_array: ByteArray = ByteArray(vec![0xFE, 0x01]);
byte_array.left_shift("xex").unwrap();
assert_eq!(
byte_array.0, shifted_array.0,
"Failure: Shifted array was: {:?}",
byte_array.0
);
}
#[test]
fn test_byte_array_shift1_gcm() {
let mut byte_array: ByteArray = ByteArray(vec![0xFF, 0x00]);
let shifted_array: ByteArray = ByteArray(vec![0x7F, 0x80]);
byte_array.left_shift("gcm").unwrap();
assert_eq!(
byte_array.0, shifted_array.0,
"Failure: Shifted array was: {:02X?}",
byte_array.0
);
}
#[test]
fn test_byte_array_shift1_right_gcm() {
let mut byte_array: ByteArray = ByteArray(vec![0xFF, 0x00]);
let shifted_array: ByteArray = ByteArray(vec![0xFE, 0x00]);
byte_array.right_shift("gcm").unwrap();
assert_eq!(
byte_array.0, shifted_array.0,
"Failure: Shifted array was: {:02X?}",
byte_array.0
);
}
#[test]
fn test_byte_array_shift_right() {
let mut byte_array: ByteArray = ByteArray(vec![0x02]);
let shifted_array: ByteArray = ByteArray(vec![0x01]);
byte_array.right_shift("xex").unwrap();
assert_eq!(
byte_array.0, shifted_array.0,
"Failure: Shifted array was: {:?}",
byte_array.0
);
}
#[test]
fn test_lsb_one() {
let byte_array: ByteArray = ByteArray(vec![0x00, 0xFF]);
assert!(!byte_array.LSB_is_one());
let byte_array2: ByteArray = ByteArray(vec![0x02, 0xFF]);
assert!(!byte_array2.LSB_is_one());
let byte_array3: ByteArray = ByteArray(vec![0xFF, 0x00]);
assert!(byte_array3.LSB_is_one());
}
#[test]
fn test_byte_xor() {
let mut byte_array: ByteArray = ByteArray(vec![0x25, 0x25]);
let byte_array2: ByteArray = ByteArray(vec![0x55, 0x55]);
byte_array.xor_byte_arrays(&byte_array2);
assert_eq!(byte_array.0, vec![0x70, 0x70]);
}
#[test]
fn test_byte_xor2() {
let mut byte_array: ByteArray = ByteArray(vec![0x00, 0x00]);
let byte_array2: ByteArray = ByteArray(vec![0x55, 0x55]);
byte_array.xor_byte_arrays(&byte_array2);
assert_eq!(byte_array.0, vec![0x55, 0x55]);
}
#[test]
fn test_field_add_01() {
let element1: FieldElement =
FieldElement::new(BASE64_STANDARD.decode("NeverGonnaGiveYouUpAAA==").unwrap());
let element2: FieldElement =
FieldElement::new(BASE64_STANDARD.decode("KryptoanalyseAAAAAAAAA==").unwrap());
let sum = element2 + element1;
assert_eq!(sum.to_b64(), "H1d3GuyA9/0OxeYouUpAAA==");
}
#[test]
fn test_field_add_02() {
let element1: FieldElement =
FieldElement::new(BASE64_STANDARD.decode("NeverGonnaLetYouDownAA==").unwrap());
let element2: FieldElement =
FieldElement::new(BASE64_STANDARD.decode("DHBWMannheimAAAAAAAAAA==").unwrap());
let sum = element2 + element1;
assert_eq!(sum.to_b64(), "OZuIncPAGEp4tYouDownAA==");
}
#[test]
fn test_field_div_01() {
let element1 =
FieldElement::new(BASE64_STANDARD.decode("JAAAAAAAAAAAAAAAAAAAAA==").unwrap());
let element2 =
FieldElement::new(BASE64_STANDARD.decode("wAAAAAAAAAAAAAAAAAAAAA==").unwrap());
let result = element1 / element2;
assert_eq!(result.to_b64(), "OAAAAAAAAAAAAAAAAAAAAA==");
}
}
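The inv/Div pair above works through the multiplicative group of $\mathbb{F}_{2^{128}}$: the loop is a square-and-multiply over the bits of 0xfffffffffffffffffffffffffffffffe, that is

\[ a^{-1} = a^{2^{128} - 2} \]

so gfdiv and the Div impls reduce a division to one inversion followed by a single gfmul.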

View file

@ -1,5 +1,6 @@
use anyhow::{Ok, Result};
pub fn xor_bytes(vec1: &Vec<u8>, mut vec2: Vec<u8>) -> Result<Vec<u8>> {
for (byte1, byte2) in vec1.iter().zip(vec2.iter_mut()) {
*byte2 ^= byte1;
@ -8,120 +9,8 @@ pub fn xor_bytes(vec1: &Vec<u8>, mut vec2: Vec<u8>) -> Result<Vec<u8>> {
Ok(vec2)
}
pub fn reverse_bits_in_bytevec(mut vec: Vec<u8>) -> Vec<u8> {
vec = vec.iter_mut().map(|byte| byte.reverse_bits()).collect();
vec
}
-#[derive(Debug)]
-pub struct ByteArray(pub Vec<u8>);
-impl ByteArray {
pub fn left_shift(&mut self) -> u8 {
let mut carry = 0u8;
for byte in self.0.iter_mut() {
let new_carry = *byte >> 7;
*byte = (*byte << 1) | carry;
carry = new_carry;
}
carry
}
pub fn right_shift(&mut self) -> u8 {
let mut carry = 0u8;
for byte in self.0.iter_mut().rev() {
let new_carry = *byte & 1;
*byte = (*byte >> 1) | (carry << 7);
carry = new_carry;
}
carry
}
pub fn xor_byte_arrays(&mut self, vec2: &ByteArray) {
self.0
.iter_mut()
.zip(vec2.0.iter())
.for_each(|(x1, x2)| *x1 ^= *x2);
}
pub fn LSB_is_one(&self) -> bool {
(self.0.first().unwrap() & 1) == 1
}
pub fn is_empty(&self) -> bool {
for i in self.0.iter() {
if *i != 0 {
return false;
}
}
true
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
#[test]
fn test_byte_array_shift1() {
let mut byte_array: ByteArray = ByteArray(vec![0x00, 0x01]);
let shifted_array: ByteArray = ByteArray(vec![0x00, 0x02]);
byte_array.left_shift();
assert_eq!(byte_array.0, shifted_array.0);
}
#[test]
fn test_byte_array_shift2() {
let mut byte_array: ByteArray = ByteArray(vec![0xFF, 0x00]);
let shifted_array: ByteArray = ByteArray(vec![0xFE, 0x01]);
byte_array.left_shift();
assert_eq!(
byte_array.0, shifted_array.0,
"Failure: Shifted array was: {:?}",
byte_array.0
);
}
#[test]
fn test_byte_array_shift_right() {
let mut byte_array: ByteArray = ByteArray(vec![0x02]);
let shifted_array: ByteArray = ByteArray(vec![0x01]);
byte_array.right_shift();
assert_eq!(
byte_array.0, shifted_array.0,
"Failure: Shifted array was: {:?}",
byte_array.0
);
}
#[test]
fn test_lsb_one() {
let mut byte_array: ByteArray = ByteArray(vec![0x00, 0xFF]);
assert!(!byte_array.LSB_is_one());
let mut byte_array2: ByteArray = ByteArray(vec![0x02, 0xFF]);
assert!(!byte_array2.LSB_is_one());
let mut byte_array3: ByteArray = ByteArray(vec![0xFF, 0x00]);
assert!(byte_array3.LSB_is_one());
}
#[test]
fn test_byte_xor() {
let mut byte_array: ByteArray = ByteArray(vec![0x25, 0x25]);
let byte_array2: ByteArray = ByteArray(vec![0x55, 0x55]);
byte_array.xor_byte_arrays(&byte_array2);
assert_eq!(byte_array.0, vec![0x70, 0x70]);
}
#[test]
fn test_byte_xor2() {
let mut byte_array: ByteArray = ByteArray(vec![0x00, 0x00]);
let byte_array2: ByteArray = ByteArray(vec![0x55, 0x55]);
byte_array.xor_byte_arrays(&byte_array2);
assert_eq!(byte_array.0, vec![0x55, 0x55]);
}
}

View file

@ -1,4 +1,9 @@
pub mod ciphers;
pub mod dff;
pub mod edf;
pub mod field;
pub mod math;
pub mod net;
pub mod parse;
pub mod poly;
pub mod sff;

1
src/utils/net.rs Normal file
View file

@ -0,0 +1 @@

View file

@ -8,13 +8,13 @@ pub struct Testcases {
pub testcases: HashMap<String, Testcase>,
}
-#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Testcase {
pub action: String,
pub arguments: Value,
}
-#[derive(Debug, Serialize, Deserialize)]
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct Responses {
pub responses: HashMap<String, Value>,
}
@ -28,14 +28,12 @@ pub fn parse_json(json: String) -> Result<Testcases> {
mod tests {
use std::fs;
-use serde_json::json;
// Note this useful idiom: importing names from outer (for mod tests) scope.
use super::*;
#[test]
fn test_json_parsing() {
-let json = fs::read_to_string("src/test_json/parse_example.json").unwrap();
let json = fs::read_to_string("test_json/parse_example.json").unwrap();
let parsed = parse_json(json).unwrap();
/*

File diff suppressed because it is too large Load diff

92
src/utils/sff.rs Normal file
View file

@ -0,0 +1,92 @@
use serde::{Deserialize, Serialize};
use crate::utils::{
field::FieldElement,
poly::{gcd, polynomial_2_block},
};
use super::poly::Polynomial;
#[derive(Debug, Serialize, Deserialize)]
pub struct Factors {
pub factor: Vec<String>,
pub exponent: u128,
}
pub fn sff(mut f: Polynomial) -> Vec<(Polynomial, u128)> {
let mut c = gcd(&f, &f.clone().diff());
f = f.div(&c).0;
let mut z: Vec<(Polynomial, u128)> = vec![];
let mut e: u128 = 1;
let one_element = Polynomial::new(vec![FieldElement::new(
polynomial_2_block(vec![0], "gcm").unwrap(),
)]);
while f != one_element {
let y = gcd(&f, &c);
if f != y {
z.push(((f.div(&y).0), e));
}
f = y.clone();
c = c.div(&y).0;
e += 1;
}
if c != one_element {
let r = sff(c.sqrt());
for (f_star, e_star) in r {
z.push((f_star, 2 * e_star));
}
}
z
}
#[cfg(test)]
mod tests {
use serde_json::json;
// Note this useful idiom: importing names from outer (for mod tests) scope.
use super::*;
#[test]
fn byte_indices_0x01() {
let json_f = json!([
"vL77UwAAAAAAAAAAAAAAAA==",
"mEHchYAAAAAAAAAAAAAAAA==",
"9WJa0MAAAAAAAAAAAAAAAA==",
"akHfwWAAAAAAAAAAAAAAAA==",
"E12o/QAAAAAAAAAAAAAAAA==",
"vKJ/FgAAAAAAAAAAAAAAAA==",
"yctWwAAAAAAAAAAAAAAAAA==",
"c1BXYAAAAAAAAAAAAAAAAA==",
"o0AtAAAAAAAAAAAAAAAAAA==",
"AbP2AAAAAAAAAAAAAAAAAA==",
"k2YAAAAAAAAAAAAAAAAAAA==",
"vBYAAAAAAAAAAAAAAAAAAA==",
"dSAAAAAAAAAAAAAAAAAAAA==",
"69gAAAAAAAAAAAAAAAAAAA==",
"VkAAAAAAAAAAAAAAAAAAAA==",
"a4AAAAAAAAAAAAAAAAAAAA==",
"gAAAAAAAAAAAAAAAAAAAAA=="
]);
let poly_f = Polynomial::from_c_array(&json_f);
let mut factors = sff(poly_f);
factors.sort();
let mut result: Vec<Factors> = vec![];
for (factor, exponent) in factors {
result.push(Factors {
factor: factor.to_c_array(),
exponent,
});
}
println!("{:?}", result);
let _bit_indices: Vec<u8> = vec![0];
assert!(false)
}
}
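sff above is square-free factorisation adjusted to characteristic 2: it starts from

\[ c = \gcd(f, f'), \qquad f \leftarrow f / c \]

then repeats $y = \gcd(f, c)$, emits $(f / y,\, e)$ when nontrivial, and updates $f \leftarrow y$, $c \leftarrow c / y$, $e \leftarrow e + 1$ until $f = 1$. Any remaining $c \neq 1$ holds only factors of even multiplicity; since squaring is the Frobenius map in characteristic 2, $c$ is a perfect square, so the function recurses on c.sqrt() and doubles the returned exponents (the 2 * e_star).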

View file

@ -0,0 +1,14 @@
{
"testcases": {
"b856d760-023d-4b00-bad2-15d2b6da22fe": {
"action": "gcm_decrypt",
"arguments": {
"algorithm": "aes128",
"nonce": "4gF+BtR3ku/PUQci",
"key": "Xjq/GkpTSWoe3ZH0F+tjrQ==",
"ciphertext": "ET3RmvH/Hbuxba63EuPRrw==",
"ad": "QUQtRGF0ZW4=",
"tag": "Mp0APJb/ZIURRwQlMgNN/w=="
}
}
}
}

View file

@ -0,0 +1,14 @@
{
"testcases": {
"b856d760-023d-4b00-bad2-15d2b6da22fe": {
"action": "gcm_decrypt",
"arguments": {
"algorithm": "sea128",
"nonce": "4gF+BtR3ku/PUQci",
"key": "Xjq/GkpTSWoe3ZH0F+tjrQ==",
"ciphertext": "0cI/Wg4R3URfrVFZ0hw/vg==",
"ad": "QUQtRGF0ZW4=",
"tag": "ysDdzOSnqLH0MQ+Mkb23gw=="
}
}
}
}

View file

@ -0,0 +1,14 @@
{
"testcases": {
"b856d760-023d-4b00-bad2-15d2b6da22fe": {
"action": "gcm_encrypt",
"arguments": {
"algorithm": "aes128",
"nonce": "4gF+BtR3ku/PUQci",
"key": "Xjq/GkpTSWoe3ZH0F+tjrQ==",
"plaintext": "RGFzIGlzdCBlaW4gVGVzdA==",
"ad": "QUQtRGF0ZW4="
}
}
}
}

View file

@ -0,0 +1,14 @@
{
"testcases": {
"b856d760-023d-4b00-bad2-15d2b6da22fe": {
"action": "gcm_encrypt",
"arguments": {
"algorithm": "sea128",
"nonce": "4gF+BtR3ku/PUQci",
"key": "Xjq/GkpTSWoe3ZH0F+tjrQ==",
"plaintext": "RGFzIGlzdCBlaW4gVGVzdA==",
"ad": "QUQtRGF0ZW4="
}
}
}
}

12
test_json/gfmul_test.json Normal file
View file

@ -0,0 +1,12 @@
{
"testcases": {
"b856d760-023d-4b00-bad2-15d2b6da22fe": {
"action": "gfmul",
"arguments": {
"semantic": "xex",
"a": "ARIAAAAAAAAAAAAAAAAAgA==",
"b": "AgAAAAAAAAAAAAAAAAAAAA=="
}
}
}
}

View file

@ -0,0 +1,73 @@
{
"testcases": {
"254eaee7-05fd-4e0d-8292-9b658a852245": {
"action": "gfmul",
"arguments": {
"semantic": "xex",
"a": "ARIAAAAAAAAAAAAAAAAAgA==",
"b": "AgAAAAAAAAAAAAAAAAAAAA=="
}
},
"b8f6d760-023d-4b00-bad2-15d2b6da22fe": {
"action": "sea128",
"arguments": {
"mode": "encrypt",
"key": "istDASeincoolerKEYrofg==",
"input": "yv66vvrO263eyviIiDNEVQ=="
}
},
"254eaee7-05fd-4e0d-8292-9b658b852245": {
"action": "sea128",
"arguments": {
"mode": "decrypt",
"key": "istDASeincoolerKEYrofg==",
"input": "D5FDo3iVBoBN9gVi9/MSKQ=="
}
},
"b856d760-023d-4b00-bad2-15d2b6da22fe": {
"action": "block2poly",
"arguments": {
"semantic": "xex",
"block": "ARIAAAAAAAAAAAAAAAAAgA=="
}
},
"254eafe7-05fd-4e0d-8292-9b658a852245": {
"action": "poly2block",
"arguments": {
"semantic": "xex",
"coefficients": [
12,
127,
9,
0
]
}
},
"0192d428-3913-762b-a702-d14828eae1f8": {
"action": "xex",
"arguments": {
"mode": "encrypt",
"key": "B1ygNO/CyRYIUYhTSgoUysX5Y/wWLi4UiWaVeloUWs0=",
"tweak": "6VXORr+YYHrd2nVe0OlA+Q==",
"input": "/aOg4jMocLkBLkDLgkHYtFKc2L9jjyd2WXSSyxXQikpMY9ZRnsJE76e9dW9olZIW"
}
},
"0192d428-3913-7168-a3bb-69c258c74dc1": {
"action": "xex",
"arguments": {
"mode": "decrypt",
"key": "B1ygNO/CyRYIUYhTSgoUysX5Y/wWLi4UiWaVeloUWs0=",
"tweak": "6VXORr+YYHrd2nVe0OlA+Q==",
"input": "lr/ItaYGFXCtHhdPndE65yg7u/GIdM9wscABiiFOUH2Sbyc2UFMlIRSMnZrYCW1a"
}
},
"0192d428-3913-78b5-9b35-3171c1c85484": {
"action": "gfmul",
"arguments": {
"semantic": "xex",
"a": "ARIAAAAAAAAAAAAAAAAAgA==",
"b": "AgAAAAAAAAAAAAAAAAAAAA=="
}
}
}
}

1203
test_json/padding_long.json Normal file

File diff suppressed because it is too large Load diff

12015
test_json/padding_oracle.json Normal file

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,98 @@
{
"testcases": {
"gfpoly_add": {
"action": "gfpoly_add",
"arguments": {
"A": [
"NeverGonnaGiveYouUpAAA==",
"NeverGonnaLetYouDownAA==",
"NeverGonnaRunAroundAAA==",
"AndDesertYouAAAAAAAAAA=="
],
"B": [
"KryptoanalyseAAAAAAAAA==",
"DHBWMannheimAAAAAAAAAA=="
]
}
},
"gfpoly_mul": {
"action": "gfpoly_mul",
"arguments": {
"A": [
"JAAAAAAAAAAAAAAAAAAAAA==",
"wAAAAAAAAAAAAAAAAAAAAA==",
"ACAAAAAAAAAAAAAAAAAAAA=="
],
"B": [
"0AAAAAAAAAAAAAAAAAAAAA==",
"IQAAAAAAAAAAAAAAAAAAAA=="
]
}
},
"gfpoly_mul_10": {
"action": "gfpoly_mul",
"arguments": {
"A": [
"JAAAAAAAAAAAAAAAAAAAAA==",
"wAAAAAAAAAAAAAAAAAAAAA==",
"ACAAAAAAAAAAAAAAAAAAAA=="
],
"B": [
"AAAAAAAAAAAAAAAAAAAAAA=="
]
}
},
"gfpoly_mul_01": {
"action": "gfpoly_mul",
"arguments": {
"A": [
"AAAAAAAAAAAAAAAAAAAAAA=="
],
"B": [
"0AAAAAAAAAAAAAAAAAAAAA==",
"IQAAAAAAAAAAAAAAAAAAAA=="
]
}
},
"gfpoly_pow": {
"action": "gfpoly_pow",
"arguments": {
"A": [
"JAAAAAAAAAAAAAAAAAAAAA==",
"wAAAAAAAAAAAAAAAAAAAAA==",
"ACAAAAAAAAAAAAAAAAAAAA=="
],
"k": 3
}
},
"gfpoly_pow_k0": {
"action": "gfpoly_pow",
"arguments": {
"A": [
"JAAAAAAAAAAAAAAAAAAAAA==",
"wAAAAAAAAAAAAAAAAAAAAA==",
"ACAAAAAAAAAAAAAAAAAAAA=="
],
"k": 0
}
},
"gfpoly_pow_k1": {
"action": "gfpoly_pow",
"arguments": {
"A": [
"JAAAAAAAAAAAAAAAAAAAAA==",
"wAAAAAAAAAAAAAAAAAAAAA==",
"ACAAAAAAAAAAAAAAAAAAAA=="
],
"k": 1
}
},
"gfdiv": {
"action": "gfdiv",
"arguments": {
"a": "JAAAAAAAAAAAAAAAAAAAAA==",
"b": "wAAAAAAAAAAAAAAAAAAAAA=="
}
}
}
}

29
test_json/poly_algs.json Normal file
View file

@ -0,0 +1,29 @@
{
"testcases": {
"b856d760-023d-4b00-bad2-15d2b6da22fe": {
"action": "gfpoly_sort",
"arguments": {
"polys": [
[
"NeverGonnaGiveYouUpAAA==",
"NeverGonnaLetYouDownAA==",
"NeverGonnaRunAroundAAA==",
"AndDesertYouAAAAAAAAAA=="
],
[
"WereNoStrangersToLoveA==",
"YouKnowTheRulesAAAAAAA==",
"AndSoDoIAAAAAAAAAAAAAA=="
],
[
"NeverGonnaMakeYouCryAA==",
"NeverGonnaSayGoodbyeAA==",
"NeverGonnaTellALieAAAA==",
"AndHurtYouAAAAAAAAAAAA=="
]
]
}
}
}
}

29
test_json/sandbox.json Normal file
View file

@ -0,0 +1,29 @@
{
"testcases": {
"gcm_crack1": {
"action": "gcm_crack",
"arguments": {
"nonce": "4gF+BtR3ku/PUQci",
"m1": {
"ciphertext": "CGOkZDnJEt24aVV8mqQq+P4pouVDWhAYj0SN5MDAgg==",
"associated_data": "TmFjaHJpY2h0IDE=",
"tag": "GC9neV3aZLnmznTIWqCC4A=="
},
"m2": {
"ciphertext": "FnWyLSTfRrO8Y1MuhLIs6A==",
"associated_data": "",
"tag": "gb2ph1vzwU85/FsUg51t3Q=="
},
"m3": {
"ciphertext": "CGOkZDnJEt25aV58iaMt6O8+8chKVh0Eg1XFxA==",
"associated_data": "TmFjaHJpY2h0IDM=",
"tag": "+/aDjsAzTseDLuM4jt5Q6Q=="
},
"forgery": {
"ciphertext": "AXe/ZQ==",
"associated_data": ""
}
}
}
}
}

22
test_json/xex_tests.json Normal file
View file

@ -0,0 +1,22 @@
{
"testcases": {
"0192d428-3913-762b-a702-d14828eae1f8": {
"action": "xex",
"arguments": {
"mode": "encrypt",
"key": "B1ygNO/CyRYIUYhTSgoUysX5Y/wWLi4UiWaVeloUWs0=",
"tweak": "6VXORr+YYHrd2nVe0OlA+Q==",
"input": "/aOg4jMocLkBLkDLgkHYtFKc2L9jjyd2WXSSyxXQikpMY9ZRnsJE76e9dW9olZIW"
}
},
"0192d428-3913-7168-a3bb-69c258c74dc1": {
"action": "xex",
"arguments": {
"mode": "decrypt",
"key": "B1ygNO/CyRYIUYhTSgoUysX5Y/wWLi4UiWaVeloUWs0=",
"tweak": "6VXORr+YYHrd2nVe0OlA+Q==",
"input": "lr/ItaYGFXCtHhdPndE65yg7u/GIdM9wscABiiFOUH2Sbyc2UFMlIRSMnZrYCW1a"
}
}
}
}