- 05 Jul, 2015 1 commit
-
-
Kevin Modzelewski authored
assembler: Use 32-bit moves if the immediate fits in 32 bits
-
- 02 Jul, 2015 2 commits
-
-
Marius Wachtler authored
Makes the encoding 4-5 bytes shorter, and works because the upper 32 bits get automatically cleared.
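A minimal sketch of the encoding difference, assuming a hypothetical byte-emitting helper rather than Pyston's actual Assembler API: writing a 32-bit register zero-extends into the full 64-bit register, so when the immediate fits in 32 bits the 5-6 byte `mov r32, imm32` form can replace the 10-byte `mov r64, imm64` form.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical emitter (names are illustrative, not Pyston's assembler API).
void emitMovImm(std::vector<uint8_t>& code, int reg, uint64_t val) {
    if (val <= UINT32_MAX) {
        // mov r32, imm32: B8+rd imm32 (5 bytes, 6 with a REX.B prefix).
        // Writing the low 32 bits zero-extends, so the upper 32 bits are cleared.
        if (reg >= 8)
            code.push_back(0x41);                 // REX.B for r8d-r15d
        code.push_back(0xB8 + (reg & 7));
        for (int i = 0; i < 4; i++)
            code.push_back((uint8_t)(val >> (8 * i)));
    } else {
        // mov r64, imm64: REX.W B8+rd imm64 (10 bytes).
        code.push_back(reg >= 8 ? 0x49 : 0x48);   // REX.W (+ REX.B)
        code.push_back(0xB8 + (reg & 7));
        for (int i = 0; i < 8; i++)
            code.push_back((uint8_t)(val >> (8 * i)));
    }
}
```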
-
Kevin Modzelewski authored
Use less large constants
-
- 01 Jul, 2015 8 commits
-
-
Kevin Modzelewski authored
Preparations for the new JIT tier
-
Kevin Modzelewski authored
Conflicts: src/core/types.h
-
Marius Wachtler authored
IC sizes are guessed...
-
Marius Wachtler authored
It deleted the passed ICSlotRewrite*, and there was no way for a caller to know this without looking at the source. Make the ownership explicit by using a std::unique_ptr.
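A minimal sketch of the ownership change, with hypothetical names (not the actual Pyston signatures):

```cpp
#include <memory>
#include <utility>

struct ICSlotRewrite { /* stand-in for the real class */ };

// Before (hypothetical): the callee quietly did `delete rewrite`, which the
// signature does not reveal:
//   void commitRewrite(ICSlotRewrite* rewrite);

// After: taking a std::unique_ptr makes the ownership transfer explicit; the
// caller has to std::move the pointer in and can't keep using it afterwards.
void commitRewrite(std::unique_ptr<ICSlotRewrite> rewrite) {
    // ... use *rewrite ...
}  // rewrite is destroyed here automatically

int main() {
    auto rw = std::make_unique<ICSlotRewrite>();
    commitRewrite(std::move(rw));  // ownership visibly handed off
}
```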
-
Marius Wachtler authored
and make the code ready for the new JIT tier.
-
Marius Wachtler authored
-
Kevin Modzelewski authored
Conflicts: src/runtime/iterobject.cpp
-
Kevin Modzelewski authored
add -a flag which outputs assembly of ICs
-
- 30 Jun, 2015 13 commits
-
-
Chris Toshok authored
-
Chris Toshok authored
Build sorted vectors for large/huge objects at GC time, and use binarySearch to power ::allocationFrom
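Roughly the idea (struct and function names here are hypothetical, not the actual GC code): keep large/huge allocations in a vector sorted by start address, rebuilt at collection time, and answer allocationFrom-style queries with a binary search instead of a linear scan.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct LargeObj {
    uintptr_t start;  // start address of the allocation
    size_t size;      // size in bytes
};

// `objs` must be sorted by `start` (e.g. rebuilt during GC).
// Returns the allocation containing `ptr`, or nullptr if it falls outside all of them.
LargeObj* allocationFrom(std::vector<LargeObj>& objs, uintptr_t ptr) {
    // Find the first object starting strictly after ptr; the candidate is the one before it.
    auto it = std::upper_bound(objs.begin(), objs.end(), ptr,
                               [](uintptr_t p, const LargeObj& o) { return p < o.start; });
    if (it == objs.begin())
        return nullptr;
    --it;
    return (ptr < it->start + it->size) ? &*it : nullptr;
}
```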
-
Chris Toshok authored
-
Kevin Modzelewski authored
fastpath investigations
-
Kevin Modzelewski authored
I think this only gets hit through explicit calls to tuple(), but at one point it was crashing on this codepath, which is how I noticed it.
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
Our tuple format is now the same as CPython's, so we can just enable the fast macros again. Little bit of trickiness since they declare their storage array to have a size of 1; I'm not sure how important that is, but let's play it safe and match it.
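For reference, a simplified sketch of the layout in question (stand-in names, not the actual definitions): CPython's tuple declares its element array with length 1 and over-allocates the object, so macros like PyTuple_GET_ITEM can index straight into it.

```cpp
#include <cstdlib>

// Simplified stand-in for the tuple layout; the real object header fields are omitted.
struct TupleLike {
    long size;
    void* items[1];  // declared with size 1; the real storage extends past the struct
};

// Over-allocate so items[0..n-1] are all usable (for n >= 1).
TupleLike* makeTupleLike(long n) {
    auto* t = (TupleLike*)std::malloc(sizeof(TupleLike) + (n - 1) * sizeof(void*));
    t->size = n;
    return t;
}
```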
-
Kevin Modzelewski authored
We already have this for the rest of our code, but it wasn't enabled for C files.
-
Kevin Modzelewski authored
- Conservative references into the large/huge heaps are especially expensive
- It's nice to know what kind of object we are scanning
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
So we can compare against their latest instead of whatever was in apt.
-
Kevin Modzelewski authored
Notably, to the rewriter. Turns out that a decent amount of the time that used to get allocated to slowpaths is actually time spent in the rewriter, doing successful rewrites.
-
- 29 Jun, 2015 2 commits
-
-
Kevin Modzelewski authored
Take another pass over the failing cpython tests
-
Kevin Modzelewski authored
-
- 27 Jun, 2015 5 commits
-
-
Kevin Modzelewski authored
Switch BoxedMethodDescriptor to tpp_call
-
Kevin Modzelewski authored
While it's bad that we do so poorly with large heaps, that's not the point of this benchmark.
-
Kevin Modzelewski authored
- "Override" the stattimer stack while we are throwing an exception. - Mark some more things as being back in "builtins" code.
-
Kevin Modzelewski authored
Minor, but should cut down on some overhead
-
Kevin Modzelewski authored
Reduce API conversions
-
- 26 Jun, 2015 9 commits
-
-
Kevin Modzelewski authored
We got away with sloppy exception clearing in some places, since raiseExcHelper would end up calling typeCall, which would re-throw the C API exception that hadn't been cleared. Now that typeCall doesn't do that (which is fine), those bugs got exposed.
-
Kevin Modzelewski authored
We can't rewrite into the object-creation behavior, but we can rewrite the decision to just use CPython's c slots.
-
Kevin Modzelewski authored
i.e. try to avoid API conversion.
-
Kevin Modzelewski authored
-
Kevin Modzelewski authored
Something like 40x slower. Also, it has some sort of super-linear overhead: 10x-ing the size of the benchmark takes 33x longer for us, but only 10x longer for CPython. (Looks like it's coming from the GC.)
-
Kevin Modzelewski authored
We should hopefully be able to avoid getting here in the first place though.
-
Marius Wachtler authored
in order to save one register. This makes it possible to pass up to 3 args to a runtime IC callattr. (Runtime ICs currently don't support passing args on the stack.)
-
Kevin Modzelewski authored
add tpp_call for faster calling of non-functions
-
Kevin Modzelewski authored
This register reuse didn't go through our locations-tracking infrastructure; it just assumed the reuse would be ok. I think it's only called from places where the reused register won't be used after this function. This broke, though, when I added the ability to allocate orig-arg registers: the argument is still dead in the sense that the function won't use it, but it could be "used" if a guard failed. Since this wasn't being tracked in the locations map, we ended up trashing the orig arg despite the lengths we went to to avoid exactly that.
-