Saturday 16 August 2014

Still here, still going

I'm back, after another long absence.
 
I went on vacation, which meant the weeks before were spent on the pre-vacation rush, where you work like crazy to leave things in a state that can then be managed by the rest of the team.
 
After a few days of winding down, I've started coding again, mainly tackling code puzzles in C++. I've also been refactoring (and redesigning) the small program I was going to use for the "next post" I promised in my last post (no, it's not forgotten). And what else?
 

Building GCC

I'll kick this off by saying: Hats off to the folks behind GCC! Well done! I've built it a couple of times, with different configurations, on a CentOS VM, and it was totally effortless, just fire-and-forget. This time, instead of downloading and building each prerequisite separately, I decided to use every bit of automation available. So, I reread the docs more carefully, and found this (which I had missed the first time around):
Alternatively, after extracting the GCC source archive, simply run the ./contrib/download_prerequisites script in the GCC source directory. That will download the support libraries and create symlinks, causing them to be built automatically as part of the GCC build process. Set GRAPHITE_LOOP_OPT=yes in the script if you want to build GCC with the Graphite loop optimizations.

GRAPHITE_LOOP_OPT=yes is the default, but it doesn't hurt to check it before running the script.
 
I've also noted that it is now possible for all prerequisites to be built together with GCC. A few months back, when I first did it, this was possible only for GMP, MPFR, and MPC (or so the docs said, at the time).
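 
For reference, here's roughly what the whole process boils down to. This is just a sketch: the GCC version, install prefix, and configure flags are examples I picked for illustration, so adjust them to whatever you're actually building.

# Extract the sources and fetch the prerequisites.
tar xjf gcc-4.9.1.tar.bz2
cd gcc-4.9.1
./contrib/download_prerequisites

# GCC recommends building outside the source tree.
cd ..
mkdir gcc-build
cd gcc-build
../gcc-4.9.1/configure --prefix=$HOME/opt/gcc-4.9.1 --enable-languages=c,c++ --disable-multilib
make
make install

With the symlinks in place, make takes care of the support libraries before building GCC itself; there's no separate configure/make/install round for each of them.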
 

Linux distros

After playing around with a CentOS VM, I've decided I needed something else.
 
What did I need?
 
Something more "bleeding edge" that gave me simple access to more up-to-date software. Simple as in "no need to set up every repo known to man" and, at the same time, as in "no need to manually configure every damn piece of software on the system". After some research, I chose Fedora 20.
 
Why did I need it?
 
I'm going to start taking a closer look at some open-source software and, while I've become quite comfortable with building OSS on Windows, I'd rather just install it from a repository.
 
Couldn't I get by with CentOS?
 
Not really. I've decided to use FileZilla as a pilot for this, and build it. Even after adding some unofficial repositories on CentOS (e.g., EPEL), it was still a pain to get all the necessary packages in the required versions. On Fedora? I was running make in a matter of minutes.
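 
If you want to try the same route, here's a sketch of what I mean, assuming the packaged filezilla's build dependencies are close enough to the source version you're building (if they aren't, this should still get you most of the way there):

# yum-builddep (from the yum-utils package) installs the build
# dependencies of a program that exists in the repos.
sudo yum install yum-utils
sudo yum-builddep filezilla

# Then, from the extracted FileZilla source tree:
./configure
make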
 
I may have to go for a memory upgrade, since I didn't design my system specs with virtualization in mind. But, for now, I'll make do without it.
 
I did have to install a different desktop. Not only did I find GNOME Shell (Fedora's default) unintuitive, it was also resource-hungry: Fedora responded a lot slower than CentOS, even though both VMs had the same specs. I switched to MATE, and it's much more responsive.
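 
For anyone wanting to do the same, this is the general idea on Fedora 20. I'm quoting the group name from memory, so check yum grouplist if it doesn't match.

# Install the MATE desktop group; afterwards, pick MATE from the
# session menu on the login screen.
sudo yum groupinstall "MATE Desktop"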
 
I understand the need for a unified desktop experience across all devices, and I accept that it brings advantages to both the (average) user and the developer. However, not only am I perfectly capable of dealing with different paradigms on different devices, I actually prefer it that way. AFAIC, it makes sense that different devices require different experiences. On a desktop, GNOME Shell doesn't make sense for me; just like Unity, on Ubuntu, didn't; same for Windows 8 (although, to its credit, MS is making corrections). But with Linux we can, at least, switch desktops.
 
Anyway...
 
I expect to be absent again for a few weeks, since I'm going to enter the post-vacation rush, where you work like crazy to pick up everything that was left behind while on vacation.
 
