echo $RANDOM

## Category: Fedora

### Fedora 20 Scientific Released

Fedora 20 is now released, which means the newest release of Fedora Scientific, along with the other spins, is also available.

Download

You can download the spin images from here.

What's new in Fedora Scientific

The notable additions in this release are:

- Sage, along with the Sage notebook
- SymPy, the Python library for symbolic mathematics
- The Python 3 versions of the scipy, numpy and matplotlib libraries and of IPython (including the IPython notebook)
- Commons Math, a Java library for numerical computing

The Fedora 20 release notes are here.

Fedora Scientific Documentation

I started work on some documentation for Fedora Scientific about a month or so back. It is far from what I want it to be, but you can see the current version here. My first goal is to document all the major scientific tools and libraries shipped with Fedora Scientific; by "document", I mean links to the official project resources and guides. The second goal is to add original content and turn this into a guide book for Fedora Scientific, which may be used as an entry point to open source scientific computing. Once the guide has taken some shape, an RPM package can be created and distributed with Fedora Scientific so that the entire documentation is available for offline perusal.

Contributing

The Fedora Scientific documentation is an excellent starting point if you are looking to make a contribution to Fedora Scientific. You can view the project here. If you have some toy/throwaway scripts that make use of one of the libraries/tools, you may want to contribute them to the "tests" here. They will help sanity-check these libraries and tools during the development of upcoming Fedora Scientific releases.

Discussion and support

Please join the Fedora scitech mailing list. Suggestions and ideas? Please leave a comment.
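The sanity checks for the "tests" mentioned above can be as simple as import checks. Here is a minimal sketch; the module names listed are standard-library stand-ins for illustration, since on a Fedora Scientific install the list would include sympy, numpy, scipy, matplotlib and so on:

```python
import importlib

# Modules to sanity-check; on Fedora Scientific this list would name
# the shipped scientific libraries (sympy, numpy, scipy, ...).
MODULES = ["json", "sqlite3", "math"]

def sanity_check(names):
    """Return the subset of `names` that fail to import cleanly."""
    failures = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            failures.append(name)
    return failures

print(sanity_check(MODULES))
```

An import failure of this kind is exactly the sort of thing that catches a missing sub-package or shared library dependency before a release.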
### poweroff, halt, reboot and systemctl

On Fedora (and perhaps other Linux distributions using systemd) you will see that the poweroff, reboot and halt commands are all symlinks to systemctl:

```
> ls -l /sbin/poweroff /sbin/halt /sbin/reboot
lrwxrwxrwx. 1 root root 16 Oct  1 11:04 /sbin/halt -> ../bin/systemctl
lrwxrwxrwx. 1 root root 16 Oct  1 11:04 /sbin/poweroff -> ../bin/systemctl
lrwxrwxrwx. 1 root root 16 Oct  1 11:04 /sbin/reboot -> ../bin/systemctl
```

So, how does it all work? The answer lies in this code block from systemctl.c:

```c
if (program_invocation_short_name) {

        if (strstr(program_invocation_short_name, "halt")) {
                arg_action = ACTION_HALT;
                return halt_parse_argv(argc, argv);
        } else if (strstr(program_invocation_short_name, "poweroff")) {
                arg_action = ACTION_POWEROFF;
                return halt_parse_argv(argc, argv);
        } else if (strstr(program_invocation_short_name, "reboot")) {
                if (kexec_loaded())
...
```

program_invocation_short_name

program_invocation_short_name is a variable (a GNU extension) which contains the name used to invoke a program. The "short" indicates that if you invoke your program as /bin/myprogram, it is set to 'myprogram'. There is also a program_invocation_name variable containing the entire path. Here is a demo:

```c
/* myprogram.c */
#include <stdio.h>

extern char *program_invocation_short_name;
extern char *program_invocation_name;

int main(int argc, char **argv)
{
    printf("%s\n", program_invocation_short_name);
    printf("%s\n", program_invocation_name);
    return 0;
}
```

Assuming the executable for the above program is created as myprogram, execute it from a directory one level up from where it resides. For example, in my case, myprogram is in $HOME/work and I am executing it from $HOME:

```
> ./work/myprogram
myprogram
./work/myprogram
```

You can see the difference between the values of the two variables. Note that any command-line arguments passed are not included in either variable.
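The same invocation-name dispatch pattern can be sketched in Python, with sys.argv[0] standing in for program_invocation_name and os.path.basename() giving the short name. The action functions here are hypothetical placeholders, and, like systemctl's strstr() checks, the matching is deliberately a substring match:

```python
# dispatch.py -- a sketch of systemctl-style dispatch on the invocation name.
# Symlinking this script as "poweroff", "halt" or "reboot" (or even
# "mypoweroff") would select the matching action.
import os
import sys

def do_halt():
    return "halting"

def do_poweroff():
    return "powering off"

def do_reboot():
    return "rebooting"

def dispatch(invoked_as):
    """Pick an action based on the invocation name, systemctl-style.
    Note: substring matching, mirroring systemctl's use of strstr()."""
    short_name = os.path.basename(invoked_as)
    if "halt" in short_name:
        return do_halt()
    elif "poweroff" in short_name:
        return do_poweroff()
    elif "reboot" in short_name:
        return do_reboot()
    return "usage: ..."

if __name__ == "__main__":
    print(dispatch(sys.argv[0]))
```

Because the match is a substring check, dispatch("./mypoweroff") selects the poweroff action, which is exactly the "fooling around" behaviour described below for systemctl itself.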
Back to systemctl

Okay, so now we know that when we execute the poweroff command (for example), program_invocation_short_name is set to poweroff and this check matches:

```c
if (strstr(program_invocation_short_name, "poweroff")) {
...
```

and then the actual action of powering down the system takes place. Also note how the halt_parse_argv function is called with the parameters argc and argv, so that when you invoke the poweroff command with a switch such as --help, it is passed appropriately to halt_parse_argv:

```c
static const struct option options[] = {
        { "help", no_argument, NULL, ARG_HELP },
...
        case ARG_HELP:
                return halt_help();
```

Fooling around

Considering that systemctl uses strstr to match the command it was invoked as, it allows for some fooling around. Create a symlink mypoweroff to /bin/systemctl and then execute it as follows:

```
> ln -s /bin/systemctl mypoweroff
> ./mypoweroff --help
mypoweroff [OPTIONS...]

Power off the system.

     --help      Show this help
     --halt      Halt the machine
  -p --poweroff  Switch off the machine
     --reboot    Reboot the machine
  -f --force     Force immediate halt/power-off/reboot
  -w --wtmp-only Don't halt/power-off/reboot, just write wtmp record
  -d --no-wtmp   Don't write wtmp record
     --no-wall   Don't send wall message before halt/power-off/reboot
```

This symlink is for all purposes going to act like the poweroff command, since systemctl basically checks whether 'poweroff' is a substring of the invoked command. To learn more, see systemctl.c.

Related

A few months back, I demoed similar behaviour, where a program behaves differently based on how you invoke it using argv[0], here. I didn't know of the GNU extensions back then.

### Managing IPython notebook server via systemd: Part-I

If you are using IPython notebook on a Linux distribution which uses systemd as its process manager (such as Fedora Linux or Arch Linux), you may find this post useful.
I will describe a fairly basic configuration to manage (start/stop/restart) an IPython notebook server using systemd.

Creating the systemd unit file

First, we will create the systemd unit file. As the root user, create a new file /usr/lib/systemd/system/ipython-notebook.service and copy the following contents into it:

```
[Unit]
Description=IPython notebook

[Service]
Type=simple
PIDFile=/var/run/ipython-notebook.pid
ExecStart=/usr/bin/ipython notebook --no-browser --pylab=inline
User=ipynb
Group=ipynb
WorkingDirectory=/home/ipynb/notebooks

[Install]
WantedBy=multi-user.target
```

Note that due to the naming of our unit file, the service will run as ipython-notebook. To completely understand the above unit file, you will need to read up a little on the topic. You may find my earlier post useful, which also has links to systemd resources. Three things deserve explanation though:

- The line ExecStart=/usr/bin/ipython notebook --no-browser --pylab=inline specifies the command to start the IPython notebook server. This should be familiar to anyone who uses it.
- The lines User=ipynb and Group=ipynb specify that we are going to run this process as user/group ipynb (we create them in the next step).
- The line WorkingDirectory=/home/ipynb/notebooks specifies that the notebooks will be stored in, and served from, /home/ipynb/notebooks.

Setting up the user

As root, create the user ipynb:

```
# useradd ipynb
```

Next, as ipynb, create a sub-directory, notebooks:

```
# su - ipynb
[ipynb@localhost ~]$ mkdir notebooks
[ipynb@localhost ~]$ exit
```

Starting IPython notebook

We are all set now to start IPython notebook.
As the root user, reload all the systemd unit files, enable the ipython-notebook service so that it starts on boot, and then start the service:

```
# systemctl daemon-reload
# systemctl enable ipython-notebook
ln -s '/usr/lib/systemd/system/ipython-notebook.service' '/etc/systemd/system/multi-user.target.wants/ipython-notebook.service'
# systemctl start ipython-notebook
```

If you check the status of the service, it should show the following:

```
# systemctl status ipython-notebook
ipython-notebook.service - IPython notebook
   Loaded: loaded (/usr/lib/systemd/system/ipython-notebook.service; enabled)
   Active: active (running) since Sun 2013-09-22 22:39:59 EST; 23min ago
 Main PID: 3671 (ipython)
   CGroup: name=systemd:/system/ipython-notebook.service
           ├─3671 /usr/bin/python /usr/bin/ipython notebook --no-browser --pylab=inline
           └─3695 /usr/bin/python -c from IPython.zmq.ipkernel import main; main() -f /home/ipynb/.ipython/profile_default/security/kernel-6dd8b338-e779-4e67-bf25-1cd238...

Sep 22 22:39:59 localhost ipython[3671]: [NotebookApp] Serving notebooks from /home/ipynb/notebooks
Sep 22 22:39:59 localhost ipython[3671]: [NotebookApp] The IPython Notebook is running at: http://127.0.0.1:8888/
Sep 22 22:39:59 localhost ipython[3671]: [NotebookApp] Use Control-C to stop this server and shut down all kernels.
Sep 22 22:40:21 localhost ipython[3671]: [NotebookApp] Using MathJax from CDN: http://cdn.mathjax.org/mathjax/latest/MathJax.js
Sep 22 22:40:22 localhost ipython[3671]: [NotebookApp] Kernel started: 6dd8b338-e779-4e67-bf25-1cd23884cf5a
Sep 22 22:40:22 localhost ipython[3671]: [NotebookApp] Connecting to: tcp://127.0.0.1:51666
Sep 22 22:40:22 localhost ipython[3671]: [NotebookApp] Connecting to: tcp://127.0.0.1:52244
Sep 22 22:40:22 localhost ipython[3671]: [NotebookApp] Connecting to: tcp://127.0.0.1:44667
Sep 22 22:40:22 localhost ipython[3671]: [IPKernelApp] To connect another client to this kernel, use:
Sep 22 22:40:22 localhost ipython[3671]: [IPKernelApp] --existing kernel-6dd8b338-e779-4e67-bf25-1cd23884cf5a.json
```

You should now be able to access IPython notebook as you normally would. Finally, you can stop the server as follows:

```
# systemctl stop ipython-notebook
```

The logs are redirected to /var/log/messages:

```
Sep 22 22:39:59 localhost ipython[3671]: [NotebookApp] Created profile dir: u'/home/ipynb/.ipython/profile_default'
Sep 22 22:39:59 localhost ipython[3671]: [NotebookApp] Serving notebooks from /home/ipynb/notebooks
Sep 22 22:39:59 localhost ipython[3671]: [NotebookApp] The IPython Notebook is running at: http://127.0.0.1:8888/
Sep 22 22:39:59 localhost ipython[3671]: [NotebookApp] Use Control-C to stop this server and shut down all kernels.
Sep 22 22:40:21 localhost ipython[3671]: [NotebookApp] Using MathJax from CDN: http://cdn.mathjax.org/mathjax/latest/MathJax.js
Sep 22 22:40:22 localhost ipython[3671]: [NotebookApp] Kernel started: 6dd8b338-e779-4e67-bf25-1cd23884cf5a
Sep 22 22:40:22 localhost ipython[3671]: [NotebookApp] Connecting to: tcp://127.0.0.1:51666
Sep 22 22:40:22 localhost ipython[3671]: [NotebookApp] Connecting to: tcp://127.0.0.1:52244
Sep 22 22:40:22 localhost ipython[3671]: [NotebookApp] Connecting to: tcp://127.0.0.1:44667
Sep 22 23:05:35 localhost ipython[3671]: [NotebookApp] received signal 15, stopping
Sep 22 23:05:35 localhost ipython[3671]: [NotebookApp] Shutting down kernels
Sep 22 23:05:35 localhost ipython[3671]: [NotebookApp] Kernel shutdown: 6dd8b338-e779-4e67-bf25-1cd23884cf5a
```

For me, the biggest reason to do this is that I do not have to manually start the IPython notebook server on every system startup; I know it will be running when I need it. I plan to explore managing custom profiles next, and also to think about a few other things.

### Fedora Scientific Spin Update

The Fedora Scientific 20 spin will have a number of new packages: notably, it will now include the Python 3 tool chain for the Python scientific/numerical computing libraries. If you are interested in checking it out, download a nightly from here.

Testing the applications/libraries

It took me three releases to figure this out, or rather, to sit down and do it. Anyway, here it is now. I have created a wiki page where I want to list scripts and other ways to sanity-test the various packages/applications being shipped. I believe it will help in two ways:

- Often, the entire functionality of a tool/library is split across more than one package, so pulling in only the main package is no guarantee that the tool/library/application will work.
- We may also be able to catch genuine bugs/faults in the packages being shipped.
(Think of things like a missing shared library dependency, etc.) So, please add whatever you can to the wiki page: just simple ways to see if the application/library actually works. My plan is to collect whatever I/we gather there into a git repository somewhere and run it prior to and during releases, to check that everything is working as expected.

Links

- Download a nightly compose from here (the usual warning against testing in-progress Fedora releases applies).
- Add scripts/tests to the wiki page here.

If you find a problem, leave a comment here, or add it to the wiki page.

Note

You may see that the nightlies are failing, but this should be fixed soon (see this bug). You can still download a TC5 build from here, which has all the packages that are going to be shipped, except for sagemath. Once again, thanks are due to the packagers actually packaging all this software that makes Fedora Scientific possible.

### Get started with Beaker on Fedora

Beaker 0.14 was released recently; if you are an existing user of Beaker, you may want to see the What's new page here. If, however, you do not know what Beaker is, the Architecture guide is a good start, and if things look interesting, with this release there is also documentation on setting up a Beaker "test bed" using two virtual machines (via libvirt).

### Notes on writing systemd unit files for Beaker's daemon processes

Recently, I had a chance to write systemd unit files for the daemon processes that run as part of Beaker: beakerd, the scheduling daemon running on the server, and the four daemons running on the lab controller: beaker-proxy, beaker-provision, beaker-watchdog and beaker-transfer. This post may be of interest to you if you are using python-daemon to write programs capable of running as daemon processes and you want to write systemd unit files for them.

beakerd's unit file

Here is the systemd unit file for beakerd, which I will use to illustrate the core points of this post.
The other unit files are similar, so I will explain only where they differ from this one:

```
[Unit]
Description=Beaker scheduler
After=mysqld.service

[Service]
Type=forking
PIDFile=/var/run/beaker/beakerd.pid
ExecStart=/usr/bin/beakerd
User=apache
Group=apache

[Install]
WantedBy=multi-user.target
```

The [Unit] section has a description of the service (the Description option) and specifies, via the After option, that it should start after mysqld.service has started. beakerd needs to communicate with a MySQL server before it can start successfully, and it can work with either a local or a remote MySQL server. Hence, After sets up an ordering: if there is a local MySQL server, wait for it to start before starting beakerd. Requires is not suitable here, to accommodate the possibility that beakerd may be configured to use a remote MySQL server.

In the [Service] section, Type is set to forking. This is because beakerd uses python-daemon, which forks (detaches) itself during daemonization. However, you must ensure that when creating the DaemonContext() object, you specify detach_process=True. If python-daemon detects that it is running under an init manager, it doesn't detach itself unless this keyword is explicitly set to True (you can see the code in daemon.py). Hence, although not setting the keyword works under SysV init, it doesn't work under systemd (with Type=forking), since the daemon doesn't fork at all while systemd expects it to fork (and eventually kills it).

PIDFile specifies where the process ID is written by beakerd (this is set up while creating the DaemonContext object), and ExecStart specifies the path to the binary to be started. The beakerd process is run as the apache user and group, specified by the User and Group options.
In the [Install] section, the WantedBy option specifies when the beakerd process should be started (similar to the concept of "run levels" in SysV init). systemd defines several targets, and here we specify that we want beakerd to start as part of the multi-user setup. That's all for beakerd's unit file.

beaker-provision's unit file

beaker-provision and the other daemons running on the lab controller have similar unit files:

```
[Unit]
Description=Beaker provisioning daemon
After=httpd.service

[Service]
Type=forking
PIDFile=/var/run/beaker-lab-controller/beaker-provision.pid
ExecStart=/usr/bin/beaker-provision
User=root
Group=root

[Install]
WantedBy=multi-user.target
```

All four lab controller daemons need to communicate with the Beaker web application (which can be local or remote), and hence the After option specifies the dependency on httpd.service. This particular daemon runs as the root user/group, which is specified by the User and Group options. Everything else is similar to beakerd's unit file, and to the other lab controller daemons.

Shipping SysV init files and systemd unit files in the same package

The Beaker packages now ship both SysV init files and systemd unit files, so that systemd is used when available, with a fallback to SysV init otherwise. This commit can give you some idea of how to go about it.

systemd resources

These links proved helpful in learning more about systemd, including how to package unit files for Fedora:

### /proc/cpuinfo on various architectures

The /proc/cpuinfo file contains runtime information about the processors on your Linux computer (including your Android phone). For example, here is what it looks like on my phone:

```
u0_a123@android:/ $ cat /proc/cpuinfo
Processor       : ARMv7 Processor rev 1 (v7l)
processor       : 0
BogoMIPS        : 1592.52

processor       : 1
BogoMIPS        : 2388.78

Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x2
CPU part        : 0xc09
CPU revision    : 1

Hardware        : SMDK4210
Revision        : 000e
Serial          : 304d19f36a02309e
```
Depending on what you are looking for, this is all useful information. Since it is a plain text file, you can write shell scripts or use another programming language (see my earlier article on this topic using CPython) to parse it and mine the data you are looking for. This information is useful, and for projects such as lshw and Beaker, quite vital too. However, one problem with this file is that the information varies across hardware architectures, both in presentation format and in the fields available. If you compare the contents on your Intel/AMD desktop or laptop with the above, you will see what I am talking about. Hence, any tool or script that reads data from this file and hopes to work across architectures must take these differences into account.
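As a starting point for such a parser, here is a minimal Python sketch. It parses the "key : value" format shown above without assuming any particular set of fields, which is one way of tolerating the per-architecture differences; the sample data is taken from the phone output earlier in the post:

```python
# A minimal sketch of parsing /proc/cpuinfo-style "key : value" data.
# Unknown keys are simply collected rather than assumed, so the same
# code works on architectures with different field sets.
def parse_cpuinfo(text):
    """Split the contents into per-block dicts; blocks are separated
    by blank lines, as in /proc/cpuinfo."""
    blocks = []
    current = {}
    for line in text.splitlines():
        if not line.strip():
            if current:
                blocks.append(current)
                current = {}
            continue
        if ":" in line:
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:
        blocks.append(current)
    return blocks

sample = """\
Processor\t: ARMv7 Processor rev 1 (v7l)
processor\t: 0
BogoMIPS\t: 1592.52

Hardware\t: SMDK4210
"""
print(parse_cpuinfo(sample))
```

On a real system you would read the text with open("/proc/cpuinfo").read(); which keys each block contains is then up to the architecture, which is exactly the point.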

I won't attempt to guess why they are different. However, I will share how to find out what information this file contains on different architectures. The post is admittedly half-baked and may not answer all your questions, but I think I am on the right track.

Get the Kernel sources

Download the Linux kernel sources (a tarball from http://kernel.org, or clone them from https://github.com/torvalds/linux/). The arch/ sub-directory contains the architecture-specific code; in all, you will see 31 subdirectories there: alpha, arc, arm, arm64, and others. The links in the rest of this post are cross-referenced links, so you may not need to download the sources.

Definition of cpuinfo_op

One file per architecture defines a cpuinfo_op variable of type seq_operations. For example, for the arm64 architecture, this variable is defined in arm64/kernel/setup.c and looks like this:

```c
const struct seq_operations cpuinfo_op = {
        .start  = c_start,
        .next   = c_next,
        .stop   = c_stop,
        .show   = c_show
};
```


The key member assignment for our purpose here is the .show attribute, a function pointer pointing to the c_show() function. This is the function that produces the information you see in the contents of /proc/cpuinfo. So, for example, the c_show() function for arm64 is here, and you can see the fields shown earlier in this post. (I can't see "Serial" there, and I am not sure why yet; I still have to figure out whether it's even the right architecture, but you get the idea, I hope.)

You can search for cpuinfo_op to find the file for each architecture where it is defined. The function that the .show member points to produces the information shown in /proc/cpuinfo. Note that the function name can differ; for example, it is show_cpuinfo() for s390x.

Examples

For an example of how the architecture-specific information can be handled in a C/C++ program or tool using architecture-specific macros, see lshw's cpuinfo.cc file. For shell scripts or a Python program, using uname (via os.uname() in CPython) may be a possible approach.
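The os.uname() approach can be sketched as follows. The per-architecture field lists here are illustrative (drawn from the c_show() functions discussed above), not exhaustive:

```python
# Sketch: use os.uname() to decide which /proc/cpuinfo fields to expect
# on the current architecture. The field lists below are illustrative
# examples only, not a complete catalogue.
import os

EXPECTED_FIELDS = {
    "x86_64": ["model name", "cpu MHz", "flags"],
    "armv7l": ["Processor", "BogoMIPS", "Features", "Hardware"],
}

machine = os.uname().machine   # e.g. 'x86_64' or 'armv7l'
fields = EXPECTED_FIELDS.get(machine, [])
print(machine, fields)
```

A script structured this way degrades gracefully on an architecture it has not been taught about, instead of crashing on a missing field.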

### Creating a Fedora 19 Scientific ISO

In my last post, I explained how you could upgrade your Fedora 18 Scientific installation to Fedora 19 Scientific. However, if you are looking to perform a fresh installation, you will need an ISO image. I will now show you how to create an ISO image yourself.

Please note that you will need a Fedora 19 installation to create the Fedora 19 Scientific image. Also, keep in mind that the architecture of the ISO you build will be the same as the architecture of the system you build it on.

The steps are as follows:

Install the necessary tools

We will be using the livecd-creator program to create the ISO; it is installed by the livecd-tools package. Install it using:

```
# yum install livecd-tools
```

Clone the kickstarts repository

The Fedora Scientific spin, along with the other spins and images, is created from a set of kickstart files maintained in the spin-kickstarts.git repository. Clone this repository (we will directly clone the 'f19' branch, since we are generating a Fedora 19 image):

```
# git clone -b f19 git://git.fedorahosted.org/spin-kickstarts.git
```

Build the image

Before you can build the image, set SELinux to permissive mode:

```
# setenforce 0
```

Now, change your current working directory to the spin-kickstarts directory and invoke the livecd-creator program as follows:

```
# livecd-creator fedora-livedvd-scientific-kde.ks
```

The entire process is fairly time-consuming, pulls in a lot of packages from the repositories (consuming network data), and is disk-space intensive, so make sure you have enough time, bandwidth and disk space. Once the build completes, you should have a .iso file in the same directory (the file name will be something like livecd-fedora-livedvd-scientific-kde-xxyyzz.iso). You can now proceed with installing it in a virtual machine, or burning it to a USB stick and installing it on a real machine.

Resources

### Upgrading Fedora 18 Scientific to Fedora 19 Scientific

Fedora 19 was released on July 2, 2013. The Fedora Scientific spin is unfortunately not available for this release: I failed to remove a couple of missing packages (scite and netbeans) from the kickstart file, and hence the build was failing at the cut-off time. My apologies to the existing users who were looking forward to a fresh install of Fedora 19 Scientific, and to the new users who wanted to try it out.

In this post I will describe, in a few steps, how you can upgrade your existing Fedora 18 Scientific installation to Fedora 19. We will use the FedUp utility for this purpose. Since this is an upgrade, the usual warnings apply: things may go wrong, so please take backups of your data and configuration files if they are important to you. It is a good idea to read the FedUp page before proceeding, to get a general idea of what's involved.

Here are the steps (standard for any Fedora upgrade adopted from the FedUp guide):

First and foremost, update your system (# yum update) and reboot into the latest kernel.

Install FedUp

Install the latest fedup package (# yum install fedup --enablerepo=updates-testing)

If you look at the FedUp guide, you will see that there is more than one way of preparing for the upgrade. I used the recommended network method. Here is what you have to do:

```
# fedup-cli --network 19 --instrepo http://dl.fedoraproject.org/pub/fedora/linux/releases/19/Fedora/x86_64/os/ --addrepo Everything=http://dl.fedoraproject.org/pub/fedora/linux/releases/19/Everything/x86_64/os/
```

Substitute the instrepo URL with a mirror closer to you. I also found that I needed to add the Everything repo to get all the packages that Fedora Scientific installs. If this step completes without any errors, reboot your system.

You should see a System Upgrade menu item in your GRUB menu; hit ENTER on it, and your system should show the Plymouth screen. You can hit the ESC key to see what is happening: you should see the packages being upgraded. Once it completes, the system should reboot, and you can now boot into Fedora 19.

You should then have Fedora 19 with all the Scientific spin packages updated to their latest releases.

Next steps

Apart from one thing, I didn't need to do anything else: I needed to run # yum distro-sync to remove some of the F18 packages and get the F19 ones (emacs, for example).

However, I performed all of this in a virtual machine, and I also did not upgrade my GRUB (see here for the instructions). These steps worked for me; if they don't work for you, leave a comment and I will try to address it.

New Installation of Fedora 19 Scientific

If you want to perform a fresh install of Fedora 19 Scientific, it is easy to build your own live image. Here is the post describing how: http://echorand.me/2013/07/04/creating-a-fedora-19-scientific-iso/

### Raspberry Pi: Personal Perspectives

I have been playing around with the Raspberry Pi (Pi) for two months now. I started by installing Raspbian on a spare 4 GB memory card. I also had a USB WiFi dongle and a wireless keyboard-mouse lying around. I plugged the WiFi dongle and the keyboard-mouse dongle into the two USB ports of the Pi, and when I plugged my spare mobile charger into the Pi's power input, the Pi booted. I felt lucky to see the display on the TV without any hassle. So far, an out-of-the-box experience, and all I had bought was the Pi itself; I already had the other stuff lying around. It was fun to see something on the TV that I am used to seeing on a computer monitor. The next thing I did was fire up the Python interpreter and tweet the output of os.uname().

The next thing I tried was what I had seen most people do: turn the Pi into a media centre. A few other things, like setting up an audio server, also came to mind. But then I thought to myself: I don't have a huge media collection, I don't play games, and I don't sit back and watch stuff a lot. I am better off doing something else with the Pi; it's Linux in a more constrained environment, after all. I fiddled around a bit with the Pi after that. Then, finally, a couple of weeks back, I installed the Fedora 17 ARM remix on the Pi. That was the point where I really started using the Pi.

Fedora 17 on the Pi

One of the first things I liked about the Fedora remix was that the SSH server was installed and enabled out of the box. Once I was done with firstboot, I unplugged the TV jack and worked on the Pi by SSH-ing into it (the Pi is connected to my home network via the Ethernet port).

I have been documenting my experiments with Fedora on the Pi here [1]. As you can see, most of the programming languages, frameworks and tools there (BASH, Python, Ruby, Flask, you name it) are the typical things you would use on any Linux system. If it worked on Linux on a big machine, it was there on the Pi, and everything was so close to the same that it started to seem like a chore: more an exercise in documenting what I was doing, in the hope that it might be useful to someone, and that maybe I could use it as a base for a book or article in the future. Therein lies the essence of the awesome work done by the Fedora ARM team, which made things "a chore". Thank you, folks! And of course thanks to all the Python programmers who have made their packages available via PyPI; pip-python was indispensable.

No more a chore

Things changed a bit yesterday, when I fiddled around with the GPIO pins for the first time. Using WiringPi's gpio command, it was as easy as it gets. I then used the Python package RPi to interact with the Pi's pins from Python, and it was quite dandy. As you can see from the documentation [2], of the two packages, one was already installed and the other was installed from PyPI. Finally, that was something I can't do so easily on my big Linux machines. It also meant I do not need an Arduino to fiddle around with basic, hobbyist electronics. One of these days, I might hook up the Arduino to the Pi.

Role of Abstraction Layers

The most basic operating systems course introduces the OS as an interface between the user and the hardware: an abstraction separating software from the hardware. I think the Pi is a great example for educators to show exactly what they mean. Have a traditional computer and a Pi side by side, write the same program, run it on both devices, and show them. That should be quite intuitive. The fact that I started finding things a chore was precisely because the operating system was abstracting the hardware away from me.

Experimenting with C

The good thing (and the bad thing) about Python and Ruby, for example, is that you are one layer higher than C in the abstraction hierarchy. C is one layer closer to the bare metal, and hence it should be an educational exercise to play with it on the Pi and compare it with an Intel or AMD computer. That should help illustrate the differences you need to ponder when you are programming in C (or C++) and intend your program to also work on other architectures.

I touched on this in an article titled "Compilation and Interpretation in C and CPython".

Conclusion

The experiments will continue.