by dpJudas » Sat Jan 05, 2019 7:52 am
Hehe, I'm by no means a compiler expert myself. :)
My knowledge about SSA vars and basic blocks comes mostly from the LLVM architecture. I also once wrote an incomplete C# compiler for LLVM that helps me compare your compiler frontend to that one. The two most striking differences are that you solved the register allocation problem and constant folding in the front end, while I forwarded those problems to the LLVM back end.
While the VM opcodes don't map *that* badly to AsmJIT (thanks to its virtual register allocator feature), the biggest problem left is how to implement the missing optimization passes: dead code elimination, constant folding and function inlining. What all of those have in common is that they affect whether variables end up as constants and, consequently, how many registers are needed. The current VM opcode set doesn't prevent writing such optimization passes, but it does make them somewhat harder.
About fixing the VM before adding the JIT, my answer to that is roughly the same as it was for the calling convention. It would have made the work required much bigger, which ultimately would have stopped it in its tracks. I personally prefer iterative solutions if possible, even if it sometimes means doing more work in the long run. A non-optimizing JIT is better than no JIT at all, just like your direct-to-VM compiler code was better than no ZScript. Our current code may not win a design award, but I just read the other day that Ruby has only reached the stage of outputting C code that it then feeds into a C compiler - we are already ahead of THAT. :)