Nuitka this week #9

Communication vs. Coding

My new communication strategy is a full success; engagement with Nuitka is at an all-time high.

But the recent weeks highlighted more than ever why I have to force myself to do it. I do not like to talk about unfinished stuff, and right now there is really a lot of it, almost only it. Also, I was ill and otherwise busy, so this post is a week late.

But I am keeping it up and will give an update, despite the feeling that it would be better to just finish a few of those things first and then talk about them. That would take forever, though, and leave you in the dark, and that is not how it is supposed to be.

Bear in mind that this is supposed to be quick, not too polished, and straight from the top of my head, even if there is really a lot of content. But I feel that especially the optimization parts are worth reading.

Hotfixes

So the 0.6.0 release was a huge success, but it definitely wasn't perfect, and hotfixes were necessary. The latest one, 0.6.0.5, was done just yesterday and contains a fix for an important mis-optimization, so you ought to update to it from any prior 0.6.0 release.

It also fixes a few remaining compatibility issues for 3.7, and generally using the latest hotfix is always a good idea.

Kind of what one has to expect from a .0 release, and this one also got more exposure than usual, it seems.

Google Summer of Code for Nuitka

I need more people to work on Nuitka. One way of doing this could be to participate in Google Summer of Code under the Python umbrella. To make that possible, I need you to volunteer as a mentor. So please, please, do.

I know you will feel you are not qualified. But I just need a backup who will help a student around obstacles in case I go missing. Contact me and I will be very happy.

Website Overhaul

I updated the website to a recent Nikola and dropped the tag cloud that I was using. It should have a cleaner and better look now. I also integrated privacy-aware sharing links, where two clicks are necessary to share a page or article like this one on Twitter, Facebook, etc.

The download page also saw some structural updates and polishing. It should be easier to get an overview now.

Performance Work

Adding specialized object operations

The feedback on performance and the work on 0.6.1 are fully ongoing, and there are many major points in flight. I want to briefly cover each one of them now, but many of them will only show their full effect once everything is in place, and each one is critical for that.

So, with type tracing, objects have known types. Short of using a C type, knowing e.g. that one object is an int, and the other one too, doing + for them can take a lot of advantage by avoiding unrelated checks and code paths, even while still using PyObject * at the end of the day.

And even if we only know that one value is not an int, say it is a tuple and the other is unknown, that allows removing checks for int shortcuts, as they can no longer apply. These are tiny optimizations then, but still worthwhile.

To further this, first the in-place operations for a couple of more or less randomly selected types, list, tuple, int, long, str, unicode, bytes, and float, have been looked at, and they have gotten their own specialized object-based helpers when one or both types are known to be of that kind.
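To illustrate the kind of code this targets, here is a minimal Python sketch; the helper selection happens inside Nuitka based on the traced types, nothing special is needed in the source:

    x = [1, 2]
    y = [3, 4]
    x += y        # both operands known to be list: a specialized in-place
                  # list helper can extend x directly, skipping generic dispatch

    s = u"abc"
    s += u"def"   # unicode += unicode, another case with its own helper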

Finding missing specialized object code generation

A report has been added that will tell when such an operation could have been used but was not available. This uncovers where typical code goes unoptimized, a nice way to see what is actually happening.

So adding a list and a str would now give a warning, although of course the optimization phase ought to catch that this is a static raise and never let it get there, so this report also points at missing optimization in an earlier phase.
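For illustration, this is the sort of mixed operation meant here; it can only ever raise:

    x = [1, 2] + "abc"   # TypeError at run time; no helper exists for this mix,
                         # and ideally the optimizer turns it into a static raise
                         # before code generation ever asks for one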

Optimizing plain object operations too

With the in-place operations covered, this was then extended to mere + operations too, the ones that are not in-place. Sometimes, especially for immutable types, there was already code for that, e.g. int doesn't really gain much; in other cases, e.g. list + list, code for a quicker concat was added.

And again a report for where it's missing was added, along with basic coverage for most of the types. However, in some instances the optimization doesn't use the full knowledge yet. But where it does, it will shave off quite a few cycles.
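Again a minimal illustration of the plain, non-in-place case:

    a = [1, 2]
    b = [3, 4]
    c = a + b    # plain list + list: a specialized concat helper can build the
                 # result list directly instead of going through generic slot lookup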

Lack of type knowledge

To apply these things effectively, optimization and value tracing need to know types in the first place. I have found two obstacles for that. One is branch merges. If one branch, or both, assigns a value of the same type as before, the type really is unchanged after the merge. Previously it still became "unknown", which is treated as object for code generation and allows nothing really. That is better on develop now, and was actually a trivial missing thing.
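A minimal example of such a branch merge, with made-up names:

    some_condition = True    # stand-in for any runtime condition
    x = 0                    # traced as int
    if some_condition:
        x = 1                # this branch also assigns an int
    # after the merge, x should still be traced as int; previously the merge
    # degraded it to "unknown", i.e. plain object, for code generation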

The other area is loops. Loops set values to unknown when entering the loop body, and again when leaving it, essentially making type tracing ineffective where it is needed the most to achieve actual performance. This also meant that the knowledge that a variable has only one type throughout a function was never established at all for variables assigned inside a loop.

It took me a while, but I figured out how to build type tracing for loops that works. It is currently still unfinished in my private repo, but passes all tests; I would just like to make it use dedicated interfaces and clean it up.

I will most likely have that for 0.6.1 too and that should expand the cases where types are known in code generation by a fair amount.

The effect of that will be that C code generation will actually see types more often. Currently, e.g. a boolean variable that is assigned in a loop cannot use the C target type in code generation. Once the loop code is merged, it will take advantage there too. And only then, I think, does adding "C int" as a C type make sense at all.
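As a made-up example of such a loop variable:

    items = ["a", "b", "c"]
    needle = "b"
    found = False            # bool shape here
    for item in items:
        if item == needle:
            found = True     # still bool; with loop type tracing this variable
                             # can get the C bool target type instead of PyObject *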

Performance regressions vs. CPython

Then another area is performance regressions. One thing I did early in the 0.6.1 cycle was to use the "module variable C target type" to get in-place operations working for those too. Doing string concatenation on module variables could be slower by an order of magnitude, as could other operations.
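For illustration, this is the pattern in question, an in-place string add on a module variable (names made up):

    # module level
    log_text = ""

    def add_line(line):
        global log_text
        log_text += line + "\n"   # in-place add on a module variable; without the
                                  # module variable C target type this went through
                                  # a slow generic path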

I still need to do it for closure variables too. Then Nuitka will handle at least as many of them perfectly as CPython does. It will also be better at it, because e.g. it doesn't have to delete the value from the module dictionary first, due to never taking a reference, and the same applies to the cell. It should be faster for that too.

But in-place string additions on these, if not optimized, look very ugly in terms of performance, so 0.6.0 was still pretty bad for some users. This will hopefully be addressed in 0.6.1 then.

In-place unicode still being bad

Another field was in-place string add for the already optimized case: it was still slower than CPython, and I finally found out what causes this. It is the use of libpython, where PyUnicode_Append is far worse than in the python binary that you normally use; I have seen that at least for CPython 3.5 and higher. Analysis showed that e.g. MiniConda had the issue to a much smaller extent, and was much faster anyway, probably just because of better libpython compilation flags.

So what to do? Ultimately this was solved by including a clone of that function, dubbed UNICODE_APPEND, that behaves the same and can even shave off a couple of cycles by indicating the Python error status without extra checks, and by specializing it for the pure unicode += unicode case that we see most often; the same was done with UNICODE_CONCAT for mere +.
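For reference, the pure cases these helpers specialize, in plain Python:

    a = u"Hello, "
    b = u"world"
    a += b        # pure unicode += unicode: the case UNICODE_APPEND targets
    c = a + b     # mere +: the case UNICODE_CONCAT covers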

Right now the benchmarks to show this do not exist yet. Again, something that typically makes me want to delay things. But as you can imagine, tracking down these hard issues and writing that much code to replace the unicode resizing is hard enough by itself.

But I hope to convince myself that this will allow showing that for compiled code, these things are now finally faster.

Benchmarks Missing

In fact, the speedcenter as a whole is currently broken, mostly due to Nikola changes that I am trying to work around, but apparently it will take more time and isn't finished as I write this.

Type shapes in optimization

Another optimization end is the type shape of the + operation itself. Right now, the result shape is derived from the shape of the left argument, with the right shape being considered by it. These also have reports now, for cases where they are missing. So saying, e.g., that int + float results in float, these kinds of things are being encoded there right now.

This is a necessary step to, e.g., know that int + int gives int_or_long, which is needed for effective loop variable optimization.
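A few concrete instances of what these result shapes encode:

    a = 2 + 3      # int + int:   result shape int_or_long (overflow on Python2
                   #              turns the result into a long)
    b = 2 + 3.5    # int + float: result shape float
    c = 2.5 + 3    # float + int: result shape float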

Without these, and again, that is a lot of code to write, there is no hope for widespread type knowledge in code generation.

Control flow escape

Something still missing there is to also make it known that +, unlike now, should not in all cases lead to "control flow escape", with the consequence of discarding all knowledge and expecting a possible exception. Instead, the int type should also make it known that + with an int on it not only gives an int_or_long result shape, but that while doing so it will never raise an exception (barring MemoryError), and therefore allow more optimization to happen, and less and therefore faster code to be generated.

Until this is done, what actually happens is that while the + result shape is known, Nuitka will still assume a control flow escape.

And speaking of that, I think this puts too many variables into a too unknown state. You have to distrust all values, but not the types in this case, so that could be better, but right now it is not. Something else to look into.
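A hedged sketch of what keeping the "cannot raise" knowledge buys, with made-up names:

    items = ["a", "b"]
    count = 0
    collected = []
    for item in items:
        count = count + 1      # int + int cannot raise (barring MemoryError), so
                               # the traces for collected and item need not be
                               # invalidated by an assumed control flow escape
        collected.append(item)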

Overall

So 0.6.1 is in full swing in terms of optimization. All these ends need completion, and then I can expect to take advantage of things in a loop, and ultimately to generate C-performance code for at least one example loop, especially if we add a C int target type, which currently isn't started yet, because I think it would barely be used so far.

But we are getting there, and I wouldn't even say we are making small steps; this is all just work to be completed, nothing fundamental about it. But it may well take more than one release.

Mind you, there is not only +; there are also -, *, %, and many more operators, and all of them will require work. Granted, loop variables tend to use + more often, but any unoptimized operation will immediately lose a lot of type knowledge.

Improved Annotations

There are two kinds of annotations: those for classes and modules, which are actually stored in an __annotations__ variable, and everything else, which is mostly just ignored.

Nuitka got the criterion wrong and did one thing for functions and the other for everything else. So annotations in generators, coroutines, and asyncgens ended up with wrong, crashing, and slower code, due to updating the module __annotations__, so that fix is important too if you use those.
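A minimal illustration of the distinction:

    x: int = 1          # module level: recorded in the module __annotations__

    def gen():
        y: int = 1      # inside a generator: the annotation is not stored anywhere
        yield y         # and must not touch the module __annotations__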

Release or not

To release or not? There is at least one bug about star imports that affects numpy, which is solved in develop and wasn't backported; I was thinking it only applied to develop, but in fact it applies to stable too. It makes me want to release even before all these optimization things happen and are polished, and I might well decide to go with that.

Maybe I will only add the closure in-place stuff and polish the loop SSA stuff, and then call it a release. That would already solve a lot of performance issues that exist right now, while preparing the ground for more.

Standalone Improvements

Standalone work is also improving. The use of .pyi files got more apt, and a few things were added, all of which make sense for people to use.

But I also have a backlog of issues there. I will schedule a sprint where I focus on those, I guess. I have been neglecting them somewhat recently.

Caching Examined

For the static code, I noticed that it is currently compiled anew for each target name, due to the build directory being part of the object file for debug purposes. For gcc 8 there is an option that allows pointing at the original static C file location, and then ccache is more effective, because the object files will be the same.

That's actually pretty bad, as most of my machines are on gcc-6, and it makes me think that a libnuitka.a is really more of a requirement than ever. I might take some time to get this sorted out.

Python3 deprecation warnings

So Nuitka supports the no_warnings Python flag, and for a long time I have been annoyed at how it was not working for Python3 in some cases. The code was manually setting filters, but these would get overridden by CPython test suites testing warnings. And the code claimed that there is no CPython C-API to control it, which is just plain wrong.

So I changed that, and it became possible to remove lots of ignore_stderr annotations in the CPython test suites. More importantly, I can stop adding them when running a suite with an older or newer CPython version.

Twitter

I continue to be very active there.

Follow @kayhayen

And let's not forget, having followers makes me happy. So do re-tweets.

Adding Twitter more prominently to the web site is something that is also going to happen.

Help Wanted

If you are interested, I am tagging issues as help wanted, and there is a bunch of them, very likely including at least one you can help with.

Nuitka definitely needs more people to work on it.

Donations

If you want to help but cannot spend the time, please consider donating to Nuitka; go here:

Donate to Nuitka

Nuitka this week #8

Public / Private CI / Workflow

Note

I wrote this as part of a discussion recently, and I think it makes sense to share it here. This is a lot of text though, feel free to skip forward.

Indeed I have a private repo, where I push and only private CI picks it up. Based on Buildbot, I run many more compilations, basically around the clock on all of my computers, to find regressions from new optimization or code generation changes, and, well, UI changes too.

Public CI offerings like Travis are not aimed at allowing this many compilations. It will be a while before public cloud infrastructure gets donated to Nuitka, although I see it happening some time in the future. This leaves developers with the burden of running tests on their own hardware, and never enough of them. Casual contributors will never be able to do it themselves.

My scope is running the CPython test suites on Windows and Linux. These are the adapted 26, 27, 32, 33, 34, 35, 36, 37 suites, and to get even more errors covered, they are also run with mismatching Python versions, so a lot of exceptions are raised. Often running the 36 tests with 37 and vice versa will extend the coverage, because of the exceptions being raised.

On Windows I compile with and without debug mode, for x86 and x64, and it's kind of getting too much. For Linux I have 2 laptops in use, and an ARM CuBox bought from your donations; there it's working better, especially due to ccache being used everywhere, although recent investigations show room for improvement there as well.

For memory usage I still compile Mercurial and observe the memory used, in addition to comparing the Mercurial tests to the expected outputs its test suite gives. It's a sad day when the Mercurial tests find changes in behavior, and luckily that has been rare. Running the Mercurial test suite gives some confidence that the compiled result does not corrupt the data it works with without knowing.

Caching the CPython outputs of tests to compare against is something I am going to make operational these days, trying to make things ever faster. There is no point in re-running tests with CPython just to get at its output, which will typically not change at all.

For the time being, ccache.exe and clcache.exe seem to have done wonders for Windows too, but I will want to investigate some more to avoid unnecessary cache misses.

Workflow

As for my workflow with Nuitka, I often tend to let some commits settle in my private repo only, until they become trusted. Other times I will make bigger changes and put them out to factory immediately, because it would be hard to split up the changes later, so putting them out makes it easier.

I am more conservative with factory right after telling people to try something there. But I also break it on purpose, just trying something out. I really consider it a private branch for interacting with me or the public CI. I do not recommend using it; it's like a permanent pull request of mine that is never going to be finished.

Then on occasion I sort all the commits on factory and split them into things that become hotfixes, things that become the current pre-release, and things that will remain in that proving ground. That is why I typically make a hotfix and a pre-release at the same time. The git flow suggests doing that, and it's easy, so why not. As a bonus, develop is then practically stable at nearly all times too, with hardly any regressions.

I do, however, normally not take things as hotfixes that are on develop already; I hate the duplication of commits. Hotfixes must be small, risk free, and easy to put out; when there is any risk, it definitely goes to develop. Nuitka stable typically covers nearly all ground already. There is no need to panic about adding missing stuff and breaking other things.

Hunting bugs with bisect

For me, git bisect is very important. My private commit history is basically a total mess and worthless, but on factory I make nicely organized commits that I frequently amend, even for the random PyLint cleanup. This allows me, when e.g. one test suddenly says "segfault" on Windows, to easily find the change that triggers it, look at the C code difference, spot the bug introduced, then amend the commit and be done with it.

It's amazing how much time this can save. My goal is to always have a workable state which is supposed to pass all tests. Obviously I cannot prove that for every commit, but when I know it not to be the case, I tend to rebase. At times I have been tempted, and followed up on, amending develop and even stable backwards.

I am doing that to be sure to keep that bisect ability, but fortunately that kind of bug occurs rarely, and I try not to do it.

Experimental Changes

As with other recent changes, I sometimes make changes behind the isExperimental() marker, activating breaking changes only gradually. The C bool type code generation had been there for months in a barely useful form, until it became more polished, always guarded by a switch, until one day, finally, for 0.6.0 I flipped it, having retroactively made the necessary fixes before that switch, so it worked while still on factory.
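A minimal sketch of that guard pattern, with made-up names; Nuitka's real experimental flag handling differs:

    EXPERIMENTAL = set()          # would be filled from a command line switch

    def is_experimental(name):
        return name in EXPERIMENTAL

    def compute(value):
        if is_experimental("new-path"):
            return value * 2      # the new, potentially breaking code path
        return value + value      # the proven code path stays the default

    assert compute(3) == 6        # default path still works
    EXPERIMENTAL.add("new-path")
    assert compute(3) == 6        # experimental path gives the same result,
                                  # which is what the switch lets you verify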

Then I will remove the experimental code. I feel it's very important, and even ideal, to be able to always compare outputs to a fully working solution. As a price, I am willing to postpone some cleanups until a later date, but when something in my mind tells me again "this cannot possibly have ever worked", the answer to compare against is just a command line flag away. Plus, it includes the extra changes that happened in the meantime, so they don't add noise to diffs of the generated C code, for example.

Then looking at that diff, I can tell where the unwanted effect is, and fix all the things, and that way find bugs much faster.

Even better, if I decide to do a cleanup as part of making a change more viable to execute, then I get to execute it on stable ground, covered by the full test suite. I can complete that cleanup; e.g. using variable identifier objects instead of mere strings was needed to make "heap generators" more workable. But I was able to activate that one before "heap generators" was ever fully workable, complete it, and actually reap some of its benefits already.

Hardware

Obviously it takes a lot of hardware and CPU power to compile this much Python code on a regular basis. And I really wish I could add one of the new AMD Threadripper 2 machines to the mix. Anybody donating one to me? Yes, I know, I am only dreaming. But it would really help the cause.

Milestone Release

So 0.6.0 is out, and there is already a hotfix that mostly addresses use cases of people for which it didn't work. More people seem to have tried out 0.6.0, and as a result 0.6.0.1 is going to cover a few corner cases. So far I have not encountered a single regression in 0.6.0; instead it contained fixes for 0.5.33, which did have one regression that was not easy to fix.

So that went really smooth.

UI rework

The UI still needs more work. Specifically, that packages do not automatically include everything below them and have to be specified by file path instead of by name is really annoying to me.

But I had already delayed 0.6 for some UI work, and some quirks are going to remain. I will work on these things eventually.

Benchmarks

So I updated the website to state that PyStone is now 312% faster, replacing a number that was very old. Since then I ran it with a version updated for Python3, and the gain is much smaller there. That is pretty sad.

I will be looking into that for the 0.6.1 release, or I will have to update the wording to provide two numbers there, because for Python3 performance with Nuitka the current one might be misleading.

Something with unicode strings and in-place operations is driving me crazy. Nuitka is apparently slower for that, and I can't pinpoint where exactly that is happening. It seems unicode objects may internally be put into a different state by some operations, which then makes in-place extending via realloc fail more often, but I cannot tell yet.

Inplace Operations

So more work has been put into those, adding more specialization and especially applying them to module variables as well. CPython can do that, and actually gives itself a hard time about it, and Nuitka should be doing this much more cleverly with its more static knowledge.

But I cannot tell you how much head scratching was wasted debugging that. I was totally stupid about how I approached it; looking at it from the final solution, it was always easy. Just not for me, apparently.

New use cases

I talked about those above. So a top-level logging module of your own was working fine in accelerated mode, but in standalone mode it failed and the one from the standard library was used instead. That kind of shadowing happened because Nuitka was going from module objects to their names and back to objects, which is bad in case of duplicates. That is fixed on develop, and it is one of those risky cases where it cannot be a hotfix because it touches too much.

Then, pure Python3 packages need not have an __init__.py, and so far that worked best for sub-packages, but after the 0.6.0.1 hotfix it will also work when the main module you compile is such an empty one.

Tcl/Tk Standalone

So instructions have been provided for how to properly make that work for standalone Python on Windows. I have yet to live up to my promise and make Nuitka automatically include the necessary files. I hope to do that for 0.6.1 though.

Caching Examined

So I am looking at ccache on Linux right now, and found e.g. that it was reporting that gcc --version was called a lot at startup of Scons, and then g++ --version once. The latter is particularly pointless, because we are not going to use g++ normally, except if gcc is really old and does not support C11. So in case a good gcc was found, let's disable that version query and not do it at all.

And for the gcc version output, monkey patching Scons to use a variant that caches the result removes those unnecessary forks.
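A minimal sketch of that kind of caching, with made-up names; the actual Scons internals being patched are different:

    import subprocess
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def get_compiler_version(compiler):
        # Run "<compiler> --version" only once per compiler and cache the output,
        # so repeated queries during Scons startup do not fork a process each time.
        return subprocess.check_output([compiler, "--version"]).decode()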

So ccache is being called less frequently, and these --version calls actually appear to take measurable time. It's not dramatic, but ccache was apparently taking locks, and that's worth avoiding by itself.

That said, the goal is to make both ccache and clcache report their cache usage effectiveness at the end of a test suite run. That way I am hoping to notice, and be able to know, whether caching is used to its full effect.

Twitter

I continue to be very active there. I put out a poll about the comment system, and as a result Disqus comments are disabled; I will focus on Twitter for website comments too now.

Follow @kayhayen

And let's not forget, having followers makes me happy. So do re-tweets.

Adding Twitter more prominently to the web site is something that is also going to happen.

Help Wanted

If you are interested, I am tagging issues as help wanted, and there is a bunch of them, very likely including at least one you can help with.

Nuitka definitely needs more people to work on it.

Plans

Working on the 0.6.1 release, attacking more in-place add operations as a first goal, and now turning to binary operations, I am trying to shape what using different helper functions for different object types looks like, and to gain performance without C types. But ultimately the same issue will arise there: what to do with mixed input types.

My desire is for in-place operations to fully catch up with CPython, as these can easily lose a lot of performance. Closure variables and their cells are another target to pick on, and I feel they ought to be next now that module variables are working, because their solution ought to be very similar. Then showing that it is faster in all cases, regardless of target storage, local, closure, or module, would be a goal for the 0.6.1 release.

This feels not too far away, but we will see. I am considering next weekend for release.

Donations

If you want to help but cannot spend the time, please consider donating to Nuitka; go here:

Donate to Nuitka

Nuitka Release 0.6.0

This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler. Please see the page "What is Nuitka?" for an overview.

This release adds massive improvements for optimization and a couple of bug fixes.

It also indicates reaching the milestone of doing actual type inference, even if only very limited.

And with the new version numbers, lots of UI changes go along. The options to control recursion into modules have all been renamed, some now have different defaults, and finally the output filenames have changed.

Bug Fixes

  • Python3.5: Fix, the awaiting flag was not removed for exceptions thrown into a coroutine, so next time it appeared to be awaiting instead of finished.
  • Python3: Classes in generators that were using built-in functions crashed the compilation with C errors.
  • Some regressions for XML outputs from previous changes were fixed.
  • Fix, hasattr was not raising an exception if used with non-string attributes.
  • For really large compilations, the MSVC linker could choke on the input file due to line length limits, which is now fixed for the inline copy of Scons.
  • Standalone: Follow the changed hidden dependency of PyQt5 on PyQt5.sip for newer versions.
  • Standalone: Include the certificate file used by the requests module in some cases as a data file.

New Optimization

  • Enabled the C target type nuitka_bool for variables that are stored with boolean shape only, and generate C code for those.
  • Using the C target type nuitka_bool, many more expressions are now handled better in conditions.
  • Enhanced is and is not to be aware of the C source type, so they can be much faster for those.
  • Use the C target type for the bool built-in, giving more efficient code for some source values.
  • Annotate the not result to have boolean type shape, allowing for more compile time optimization with it.
  • Restored previously lost optimization of loop break handling StopIteration which makes loops much faster again.
  • Restore lost optimization of subscripts with constant integer values making them faster again.
  • Optimize in-place operations for cases where left, right, or both sides have known type shapes for some values. Initially only a few variants were added, but there is more to come.
  • When adjacent parts of an f-string become known string constants, join them at compile time.
  • When there is only one remaining part in an f-string, use that directly as the result.
  • Optimize empty f-strings directly into empty strings constant during the tree building phase.
  • Added specialized attribute check for use in re-formulations that doesn't expose exceptions.
  • Remove locals sync operation in scopes without local variables, e.g. classes or modules, making exec and the like slightly leaner there.
  • Remove try nodes that did only re-raise exceptions.
  • The del of variables is now driven fully by C types and generates more compatible code.
  • Removed useless double exception exits annotated for expressions of conditions and added code that allows conditions to adapt themselves to the target shape bool during optimization.

New Features

  • Added support for using .egg files in PYTHONPATH, one of the more rare uses, where Nuitka wasn't yet compatible.
  • Output binaries in standalone mode with platform suffix, on non-Windows that means no suffix. In accelerated mode on non-Windows, use .bin as a suffix to avoid collision with files that have no suffix.
  • Windows: It's now possible to use clang-cl.exe for CC with Nuitka as a third compiler on Windows, but it requires an existing MSVC install to be used for resource compilation and linking.
  • Windows: Added support for using ccache.exe and clcache.exe, so that object files can now be cached for re-compilation.
  • For debug mode, report missing in-place helpers. These kinds of reports are to become more universal and are aimed at recognizing missed optimization chances in Nuitka. This feature is still in its infancy. Subsequent releases will add more like these.

Organizational

  • Disabled comments on the web site, we are going to use Twitter instead, once the site is migrated to an updated Nikola.
  • The static C code is now formatted with clang-format to make it easier for contributors to understand.
  • Moved the construct runner to top level binary and use it from there, with future changes coming that should make it generally useful outside of Nuitka.
  • Enhanced the issue template to tell people how to get the develop version of Nuitka to try it out.
  • Added documentation to the User Manual for how to use object caching on Windows.
  • Removed the included GUI, originally intended for debugging, but XML outputs are more powerful anyway, and it had been in disrepair for a long time.
  • Removed long deprecated options, e.g. --exe, which has long been the default and is no longer accepted.
  • Renamed options to include plugin files to --include-plugin-directory and --include-plugin-files for more clarity.
  • Renamed options for recursion control to e.g. --follow-imports to better express what they actually do.
  • Removed --python-version support for switching the version during compilation. This has only worked for very specific circumstances and has been deprecated for a while.
  • Removed --code-gen-no-statement-lines support for not having line numbers updated at run time. This has long been hidden and probably would never gain all that much, while causing a lot of incompatibility.

Cleanups

  • Moved command line arguments to dedicated module, adding checks was becoming too difficult.
  • Moved rich comparison helpers to a dedicated C file.
  • Dedicated binary and unary node bases for clearer distinction and more efficient memory usage of unary nodes. Unary operations also no longer have in-place operation as an issue.
  • Major cleanup of variable accesses, split up into multiple phases and all including module variables being performed through C types, with no special cases anymore.
  • Partial cleanups of C type classes with code duplications, there is much more to resolve though.
  • Windows: The way exec was performed is discouraged in the subprocess documentation, so use a variant that cannot block instead.
  • Code providing information about built-in names and values was using not very portable constructs, and is now written in a way that PyPy would also like.

Tests

  • Avoid using 2to3 for basic operators test, removing test of some Python2 only stuff, that is covered elsewhere.
  • Added ability to cache output of CPython when comparing to it. This is to allow CI tests to not execute the same code over and over, just to get the same value to compare with. This is not enabled yet.

Summary

This release marks a point from which on performance improvements are likely in every coming release. The C target types are a major milestone. More C target types are in the works, e.g. void is coming for expressions that are done but not used; that is scheduled for the next release.

Although there will be a need to also adapt optimization to take full advantage of it, progress should be quick from here. There is a lot of ground to cover, with more C types to come, and all of them needing specialized helpers. But as soon as e.g. int and str are covered, many more programs are going to benefit from this.

Nuitka this week #7

Nuitka Design Philosophy

Note

I wrote this as part of a discussion recently, and I think it makes sense to share my take on Nuitka and design. This is a lot of text though, feel free to skip forward.

The issue with Nuitka and design, for me, is mainly that the requirements for many parts were, and are, largely unknown to me until I actually start doing the work.

My goto generators approach worked out as originally designed, and that felt really cool for once, but the whole "C type" thing was a total unknown to me, until it all magically took form.

Rather, I know it will evolve further as I go from "bool" (complete and coming for 0.6.0) via "void" (should be complete already, but enabling will likely only happen for 0.6.1) to "int"; I am not sure how long that will take.

I really think Nuitka, unlike other software that I have designed, is more of a prototype project that gradually turns more and more into the real thing.

I have literally spent years injecting proper design in steps into the optimization phase, what I call SSA and value tracing, and it is very much there now. I am probably going to spend a similar amount of time executing on applying the type inference results to code generation.

So for the goto generators I turned code generation from something working with code strings into something working with variable declaration objects that know their type, aiming at C types generally, all the while carrying the full weight of passing every compatibility test there is.

Then, e.g., suddenly cleaning up module variables to no longer have their special branch, but a pseudo C type that makes them like everything else. Great. But when I first introduced the new thing, I postponed that, because I could apply its benefits to some things sooner and gain experience from it.

While doing partial solutions, the design sometimes horribly degrades, but only until some features can carry the full weight, and/or have been explored to have their final form.

Making a whole Nuitka design up front and then executing it would instead give a very high probability of failing in the real world. I am therefore applying the more agile approach, where I make things work first, and then they keep working while I clean them up.

For every feature I add, I actively go out and change the thing that made it hard or even fail. Always. I think Nuitka is largely developed by cleanups and refactoring. Goto generators were a fine example of that: solving many of the issues by injecting variable declaration objects into code generation made it easy to indicate storage (heap, object, or stack) right there.

That is not to say that Nuitka didn't have the typical compiler design: parsing inputs, optimizing a tree internally, producing outputs. But that grand top-level design only tells you the obvious things really, and is stolen anyway from knowing similar projects like gcc.

There were of course always obvious designs for Nuitka, but those were never what anybody would consider to make a Python compiler hard. For actual compatibility with CPython, so many details were going to require examination, with no solutions known ahead of time.

I guess I am an extreme programmer, or agile, or whatever they call it these days. At least for Nuitka. In my professional life, I have designed software for ATC on the drawing board, then on paper, and then in code; the design just worked and became operational right after completion, which is rare, I can tell you.

But maybe that is what keeps me excited about Nuitka: how I need to go beyond my abilities and beyond stable ground to achieve it.

But the complexity of Nuitka is so dramatically higher than anything I ever did. It is doing complicated, i.e. detail-rich work, and it is also doing hard jobs where many things have to play together. And the wish to have something working before it is completed, if it ever is, makes things very different from the projects I typically did.

So the first version of Nuitka already had a use, and when I first showed it publicly, it was capable of handling most complex programs, and the desire was to evolve gradually.

I think I have described this elsewhere, but for large parts of the well or badly designed solutions of Nuitka, there are reliable ways of demonstrating that they work correctly, far better than I have ever encountered elsewhere. I believe the main reason I managed to get this off the ground is that: having a test "oracle", i.e. comparing to existing implementations, is what makes Nuitka special.

It is like a calculator that can be tested by comparing it to one of the many already perfect ones out there. That again makes Nuitka relatively easy despite the many details to get right; there is often an easy way to tell correct from wrong.

So for me, Nuitka is, on the design level, something that goes through many iterations, discovery, and prototyping, and is actually really exciting in that regard.

Compilers typically are boring. But for Nuitka that is totally not the case, because Python is not made for it. Well, that's technically untrue; let's say not made for optimizing compilers, not for type inference, etc.

UI rework

Following up on discussion on the mailing list, the user interface of Nuitka will become more clear with --include-* options and --[no]follow-import* options that better express what is going to happen.

Also, the default for following into extension modules is now precisely what you say, as going beyond what you intend to deliver makes no sense in the normal case.

Goto Generators

Now released as 0.5.33, and there have been few regressions so far, but the fix for the one found is only in the pre-release of 0.6.0, so use that instead if you encounter a C compilation error.

Benchmarks

The performance regressions fixed for 0.6.0 impact PyStone by a lot; loops were slower, and so were subscripts with constant integer indexes. It is a pity these were introduced in previous releases during refactorings without anyone noticing.

We should strive to have benchmarks with trends. Right now the Nuitka speedcenter cannot do that. Focus should definitely go to this. Like I said, after the 0.6.0 release this will be a priority, to make the benchmarks more useful.

Twitter

I continue to be active there. I just put out a poll about the comment system, and with Disqus comments disabled as a result, I will focus on Twitter for website comments too now.

Follow @kayhayen

And let's not forget, having followers makes me happy. So do re-tweets.

Help Wanted

If you are interested, I am tagging issues as help wanted, and there is a bunch of them, very likely including at least one you can help with.

Nuitka definitely needs more people to work on it.

Egg files in PYTHONPATH

This is a relatively old issue that has now been addressed. Basically, these should be usable as sources for compilation. Nuitka now unpacks them to a cache folder so it can read source code from them, so this apparently rare use case works now, yet again improving compatibility.

Will be there for 0.6.0 release.

Certifi

It seems the requests module sometimes uses that. Nuitka now includes that data file, starting with the 0.6.0 release.

Compatibility with pkg_resources

It seems that getting "distributions" and taking versions from there is really a thing, and Nuitka fails pkg_resources requirement checks, in standalone mode at least, and that is of course sad.

I am currently researching how to fix that, not yet sure how to do it. But some forms of Python installs are apparently very affected by it. I am trying to look into its data gathering; maybe compiled modules can be registered there too. It seems to be based on file system scans of its own making, but a monkey patch is always possible to make it better.

Plans

Still working on the 0.6.0 release, only tying up loose ends. Release tests seem to be looking pretty good. The UI changes and such are good to do now, but they delay things, and there are a bunch of small things that are low-hanging fruit while I wait for test results.

But since it fixes so many performance things, it really ought to be out any day now.

Also the in-place operations stuff: I added it to 0.6.0 too, just because it feels very nice and improves some operations by a lot. Initially I had made the cut for 0.6.1 already, but that is no more.

Donations

If you want to help but cannot spend the time, please consider donating to Nuitka; go here:

Donate to Nuitka