gakonst pfp
gakonst
@gakonst
crossposting this EVM optimization idea, tl;dr: 1. execute N opcodes at once, define specialized handlers 2. train a transformer to predict stack/mem view given a view of the current mem/stack and a sequence of previous/next opcodes https://twitter.com/gakonst/status/1714411279765688586
2 replies
3 recasts
14 reactions
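
[A rough sketch of idea (1), fused "n-gram" opcode handlers, on a toy stack machine. The Op/Interp names here are made up for illustration, not revm's actual types: a peephole pass rewrites a known sequence (here PUSH1, PUSH1, ADD) into one superinstruction, so the dispatch loop takes a single step and a single specialized handler where it previously took three.]

```rust
// Toy stack machine with fused "superinstructions" (hypothetical names, not revm's API).

#[derive(Clone, Copy, Debug, PartialEq)]
enum Op {
    Push1(u8),
    Add,
    Mul,
    // Fused superinstruction: PUSH1 a, PUSH1 b, ADD collapsed into one handler.
    Push1Push1Add(u8, u8),
}

struct Interp {
    stack: Vec<u64>,
}

impl Interp {
    /// Peephole pass: rewrite known opcode sequences into fused ops.
    fn fuse(code: &[Op]) -> Vec<Op> {
        let mut out = Vec::with_capacity(code.len());
        let mut i = 0;
        while i < code.len() {
            match code[i..] {
                [Op::Push1(a), Op::Push1(b), Op::Add, ..] => {
                    out.push(Op::Push1Push1Add(a, b));
                    i += 3;
                }
                _ => {
                    out.push(code[i]);
                    i += 1;
                }
            }
        }
        out
    }

    /// One dispatch step per (possibly fused) opcode.
    fn run(&mut self, code: &[Op]) {
        for op in code {
            match *op {
                Op::Push1(x) => self.stack.push(x as u64),
                Op::Add => {
                    let b = self.stack.pop().unwrap();
                    let a = self.stack.pop().unwrap();
                    self.stack.push(a.wrapping_add(b));
                }
                Op::Mul => {
                    let b = self.stack.pop().unwrap();
                    let a = self.stack.pop().unwrap();
                    self.stack.push(a.wrapping_mul(b));
                }
                // Specialized handler: no intermediate pushes/pops,
                // one dispatch instead of three.
                Op::Push1Push1Add(a, b) => {
                    self.stack.push((a as u64).wrapping_add(b as u64));
                }
            }
        }
    }
}

fn main() {
    // PUSH1 2, PUSH1 3, ADD, PUSH1 4, MUL  =>  (2 + 3) * 4
    let code = [Op::Push1(2), Op::Push1(3), Op::Add, Op::Push1(4), Op::Mul];
    let fused = Interp::fuse(&code);
    assert_eq!(fused[0], Op::Push1Push1Add(2, 3));

    let mut vm = Interp { stack: Vec::new() };
    vm.run(&fused);
    assert_eq!(vm.stack, vec![20]);
}
```

[The same peephole approach extends to longer sequences found by profiling real bytecode; the tradeoff raised in the replies below is whether the saved dispatch and stack traffic outweighs the effect of a larger opcode set on branch prediction.]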

brock pfp
brock
@brock
i’m not totally convinced this actually will result in much of a speed up, personally. this comes down to stack machines being extremely trivial to convert into machine code in the first place (see this talk: https://youtu.be/umSuLpjFUf8?feature=shared). i think it may end up resulting in more branch pred misses
1 reply
0 recast
0 reaction

gakonst pfp
gakonst
@gakonst
I went thru the talk quickly and didn't find a reason why this technique is not useful? are you saying jit/aot is better? jitting evm code has analysis costs for contracts you cannot convert to machine code ahead of time, so even then ngrams seem useful. also how do you gas meter in machine code?
1 reply
0 recast
0 reaction
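
[On the gas-metering question: one common answer for block-level or compiled execution is to pre-sum the static gas cost of each straight-line sequence at analysis time and charge it once at the block boundary, leaving dynamic costs (memory expansion, storage) to the handlers. A hedged sketch with made-up names (GasMeter, FusedBlock), not any real client's API:]

```rust
// Block-level gas metering sketch, assuming a fused-sequence interpreter.
// Static costs of the constituent opcodes are summed at analysis time and
// charged once on entry; dynamic costs would still be computed inside the
// handler. All names are illustrative.

#[derive(Debug)]
enum EvmError {
    OutOfGas,
}

struct GasMeter {
    remaining: u64,
}

impl GasMeter {
    /// Single check-and-deduct for a whole block instead of one per opcode.
    fn charge(&mut self, cost: u64) -> Result<(), EvmError> {
        if self.remaining < cost {
            return Err(EvmError::OutOfGas);
        }
        self.remaining -= cost;
        Ok(())
    }
}

/// A fused sequence (or straight-line basic block) carries its pre-summed
/// static gas cost, computed once when the code is analyzed/fused.
struct FusedBlock {
    static_gas: u64,
    // specialized handler elided
}

fn execute_block(meter: &mut GasMeter, block: &FusedBlock) -> Result<(), EvmError> {
    meter.charge(block.static_gas)?;
    // ... run the specialized handler / compiled code for the block here ...
    Ok(())
}

fn main() {
    // PUSH1 (3 gas) + PUSH1 (3 gas) + ADD (3 gas) fused: 9 gas, charged once.
    let block = FusedBlock { static_gas: 9 };
    let mut meter = GasMeter { remaining: 10 };

    assert!(execute_block(&mut meter, &block).is_ok());
    assert_eq!(meter.remaining, 1);
    assert!(matches!(execute_block(&mut meter, &block), Err(EvmError::OutOfGas)));
}
```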

brock pfp
brock
@brock
basically i’m making the argument that simple machines like the EVM are probably optimized pretty well by their implementation language’s compiler, such that JIT & multi-op ops won’t show much benefit. reasoning being that LLVM likely produces better asm + reduces branch prediction misses. i could def be wrong
1 reply
0 recast
1 reaction

gakonst pfp
gakonst
@gakonst
ah. this could be right - obviously devil in the details and hard to tell without running the numbers™, but I can see that happening, or the perf gain not being big enough for the dev time
1 reply
0 recast
0 reaction