Tech posts by aclassifier
Also see Designer's note page

Last edit: 4Sept2015
Started move: September 2012

Wednesday, December 29, 2010

021 - The problems with threads

Updated 8Jan11


At the turn of the year 2010/11 it may be late to comment on a paper I found in a pile of IEEE Computer magazines. The article dates back to 2006, and is available on the net as a report from the University of California at Berkeley:
The Problem with Threads 
Edward A. Lee 
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2006-1 (pdf)
January 10, 2006
IEEE Computer 39(5):33-42, May 2006
That IEEE Computer magazine was themed "Future Operating Systems (and) Programming Paradigms" and the article was a cover article.
I advise you to read the well-written article. It's outspoken and fun reading!
The quotes I use here are from the report (IEEE Computer has edited it a little).


This note's title uses the plural "problems", so as not to make the titles identical - and the note is a mix of a review and my own personal views and experience.

Over the years I have published along the lines of that article, and I also have some posts here about the same theme.

"It is widely acknowledged that concurrent programming is difficult" (Lee)

The magazine article says "Concurrent programming is difficult,..". This is what caught my interest, in addition to the title. But this first sentence seems to contradict the title. Why doesn't he say "..that programming with threads is difficult"?

I have attended the WoTUG - and now CPA - series of conferences for quite some years. What I have learned over the years is the opposite:

"Contrary to the belief out there, concurrent programming is easy" (WoTUG/CPA)

Here is the semantics of that statement: "Programming with threads is difficult, but programming with processes is easy". Later in his paper, I think this is also what Lee says. But he is pessimistic about the future, while the conferences (and Lee's paper) try to do something about it.

When I programmed in occam 2 for some ten years, concurrent programming was indeed easy. I had no interrupts, only processes, so asynchronous I/O was tamed before it could overwrite common data. I had fully encapsulated processes, with parallel usage rules checked by the compiler. Processes communicated only over zero-buffered synchronous unidirectional named channels. Buffering had to be coded explicitly. Functions were side effect free, as they were not allowed to use channels. And the compiler checked for aliasing errors (so linked lists were not possible..). Occam 2 has been developed further since it died as a commercial language, and now has a plethora of new primitives.
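Go's unbuffered channels behave like the occam channels just described: a send blocks until a receiver is ready, so nothing is overwritten behind anyone's back. A minimal sketch of my own (in Go, the CSP-family language discussed at the end of this note):

```go
package main

import "fmt"

// producer talks to the outside world only through its channel
// parameter, like a fully encapsulated occam process.
func producer(out chan<- int) {
	for i := 1; i <= 3; i++ {
		out <- i // rendezvous: blocks until the receiver is ready
	}
	close(out)
}

func main() {
	ch := make(chan int) // unbuffered: zero-buffered and synchronous, as in occam
	go producer(ch)
	for v := range ch {
		fmt.Println(v)
	}
}
```

If buffering is wanted, it has to be asked for explicitly (`make(chan int, n)`), just as an occam buffer had to be coded as its own process.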

Yes, I did have to learn the art of the trade. Some have called it Process Oriented Programming.

"The problem is that we have chosen concurrent abstractions that do not even vaguely resemble the concurrency of the physical world." (Lee)

Even if "not even vaguely" is the phrase that makes Lee's claim falsifiable, I would disagree.

The physical world is one axis, concretizations of abstract descriptions another. If my concurrent abstraction can be given a name so that I understand what it basically does, and it may be seen as a cognitive entity, then I'm ok. I think this is what Wittgenstein proposed in his 1921 Tractatus.

"Channel", "rendezvous", "port", "protocol", "process" and "alternation" are taken from the daily vocabulary. These are good terms that I grasp, but they fit more or less well with their physical (if any) counterparts. These are "vague" terms, really - so Lee might have his words right.

The idea that "object" as used in "object orientation" (OO) should resemble physical world objects is not needed for me. It has been used to describe OO ("object car has wheels"), but beyond those first examples it breaks down: "window has buttons" or worse, "window has drop-down menu", makes little sense for real windows.

By the way: "thread" is perhaps a good term! I have seen the result when a cat has played with a ball of yarn - one long single thread. Maybe they should have used better weaving terms? Weaving could be seen as quite formal! And if the terms chosen had been better, maybe the solution would have been better too, recursing into Lee's statement?

Process model

In my opinion the CSP "process model" is much easier to understand than the "object model". The definition of an object these days emphasizes encapsulation more than before. Some years ago, inheritance and polymorphism were the more "modern" aspects to include in the object model.

"Nondeterminism of threads" (Freely from Lee)

I'll fill in with an example. The libcsp library by R. Beton furnished POSIX lightweight-thread programmers with a clean CSP API. However, the struct CSP_Channel_t contained three pthread_mutex_t and two pthread_cond_t, in addition to a CSP_Alt_t that contained one of each. Reading it out aloud:
one channel needed at most 4 mutexes and 3 condition variables (*)
This, just to attempt to tame the "nondeterminism of threads" as Lee calls it.

(*) I don't know how many of these are there to handle asynchronous I/O (like interrupt) or preemption.

You could argue that it's the channel construct that's wrong, but then bear in mind that it is possible to build a "naked" CSP library without any mutexes, condition variables or critical regions. Again, asynchronous I/O of course needs to be handled. However, preemption has not been needed in any CSP-type embedded system I have seen, but I have learnt that it is necessary for an experimental language called Toc which handles deadline scheduling; see this paper by Korsgaard and Hendseth.

Back to nondeterminism. I learned something from Lee's description. I have always thought of nondeterminism as what (should) happen in the occam ALT construct (as opposed to PRI ALT), or what happens with Promela's :: operator. These nondeterminisms are wanted, and would explicitly read "if more than one choice is ready, select any one of them". If that behaviour is not what I want, I instead use the deterministic construct.

But here Lee points out that it's unwanted altogether in the threaded world. (Later on he goes on to describe "judicious nondeterminism" - which is the one I am used to.)
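In Go this wanted, explicit nondeterminism is spelled select. A small sketch of my own (the pick helper is hypothetical, ALT in miniature):

```go
package main

import "fmt"

// pick is an ALT in miniature: if more than one channel is ready,
// one of them is selected arbitrarily - explicit, "judicious"
// nondeterminism, as opposed to the unwanted kind threads give for free.
func pick(a, b <-chan string) string {
	select { // both may be ready: either branch can fire
	case m := <-a:
		return m
	case m := <-b:
		return m
	}
}

func main() {
	a, b := make(chan string, 1), make(chan string, 1)
	a <- "from a"
	b <- "from b"
	fmt.Println(pick(a, b)) // either "from a" or "from b"
}
```

Which message is printed is deliberately unspecified - that is the point. If I want priority instead, I code it explicitly (occam's PRI ALT; in Go, a select with a default or nested selects).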

WYSIWYG semantics (Welch)
"In fact, we have to know about all other threads that might execute (something that may not itself be well defined), and we would have to analyze all possible interleavings. We conclude that with threads, there is no useful theory of equivalence."
"Threads, on the other hand, are wildly nondeterministic. The job of the programmer is to prune away that nondeterminism. We have, of course, developed tools to assist in the pruning. Semaphores, monitors, and more modern overlays on threads (discussed in the following section) offer the programmer ever more effective pruning. But pruning a wild mass of brambles rarely yields a satisfactory hedge." (Lee)
Welch has introduced the term WYSIWYG semantics. Lee in effect shows that threading does not have this type of semantics.

I have discussed this in note [007 - Synchronous and asynchronous]. Here is a quote from Welch
"One crucial benefit of CSP is that its thread semantics are compositional (i.e. WYSIWYG), whereas monitor thread semantics are context-sensitive (i.e. non-WYSIWYG and that's why they hurt!). Example: to write and understand one synchronized method in a (Java) class, we need to write and understand all the synchronized methods in that class at the same time -- we can't knock them off one-by-one! This does not scale!! We have a combinatorial explosion of complexity!!!"
(Letter to Edward A. Parrish, The Editor, IEEE Computer. Peter Welch (University of Kent, UK) et al. (1997))
Observe that in note 007 I also discuss this relative to asynchronous "send and forget" message-based systems.
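The CSP alternative to a monitor composes because each request is handled in exactly one place: a server process owns the state outright and serializes access over channels. A sketch of my own in Go (the account names are mine, not from Welch):

```go
package main

import "fmt"

// account is served by one goroutine that owns the balance. There are
// no synchronized methods to understand "all at the same time": each
// request channel is handled in exactly one select branch.
type account struct {
	deposit chan int
	query   chan chan int
}

func newAccount() *account {
	a := &account{deposit: make(chan int), query: make(chan chan int)}
	go func() {
		balance := 0 // owned by this goroutine only
		for {
			select {
			case v := <-a.deposit:
				balance += v
			case reply := <-a.query:
				reply <- balance
			}
		}
	}()
	return a
}

func (a *account) Balance() int {
	reply := make(chan int)
	a.query <- reply
	return <-reply
}

func main() {
	a := newAccount()
	a.deposit <- 10
	a.deposit <- 32
	fmt.Println(a.Balance()) // 42
}
```

Adding a new operation means adding one channel and one select branch - the existing branches need no re-reading, which is the compositional (WYSIWYG) property Welch argues for.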

"This pattern can be made to work robustly in Java" (Lee)

Lee shows how easy it is to make erroneous multithreaded code in Java. The observer pattern for one thread is shown, and then the obvious "thread safe" version. Only, the "thread safe" version is just an attempt at it, as Lee shows. There is a potential deadlock lurking.

A couple of memories spring to my mind.

First, before the WoTUG-19 conference I thought it might be a good idea to present the commstime program (a very simple but effective benchmark for concurrency) in the new Java language. I did, but concluded that the "Java 1.0's PipedInput/OutputStream classes seem impractical to use because of an implementation error at Sun". I had my fingers burnt on trial one.
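Commstime itself is easy to express in a CSP-style language. Here is a sketch of my own of its process network (prefix, delta, succ) in Go, generating the natural numbers one channel communication at a time:

```go
package main

import "fmt"

// prefix outputs an initial value, then copies input to output.
func prefix(n int, in <-chan int, out chan<- int) {
	out <- n
	for v := range in {
		out <- v
	}
}

// delta copies each input value to both outputs.
func delta(in <-chan int, out1, out2 chan<- int) {
	for v := range in {
		out1 <- v
		out2 <- v
	}
}

// succ increments each value.
func succ(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v + 1
	}
}

func main() {
	a, b, c, d := make(chan int), make(chan int), make(chan int), make(chan int)
	go prefix(0, c, a) // prefix -> delta -> succ -> back to prefix,
	go delta(a, b, d)  // with delta also feeding the consumer on d
	go succ(b, c)
	for i := 0; i < 5; i++ {
		fmt.Println(<-d) // 0 1 2 3 4
	}
}
```

The benchmark measures the cost of those channel communications; the network itself is four trivially understandable processes.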

But here was a language with some built-in mechanisms for concurrency, so the community started building CSP libraries in Java. Commercial occam was dying, and so was the transputer. There was a void. So JCSP (Communicating Sequential Processes for Java) was developed at the University of Kent at Canterbury. JCSP has been updated and has become a viable concurrency platform for Java; I note that the last public release is from August 2010 (this is written in Dec. 2010).

There were all these experts, learning all the intricacies of synchronized and Java's monitor. By 1998 all seemed to work fine. Until a student, some two years and who knows how many uses later, found a simple case where it deadlocked. The result was that parts of JCSP were formally verified and then fixed. Read about this exciting adventure here ("Formal Analysis of Concurrent Java Systems" by Peter H. Welch and Jeremy M.R. Martin). The error was subtle, but the fix was simple.

Second, at about the same time, Per Brinch Hansen wrote a letter that floated around on the internet, later to become seminal. It is very interesting reading, because what he's basically saying is that the Java designers at Sun hadn't done their homework. Here's his abstract:
"The author examines the synchronization features of Java and finds that they are insecure variants of his earliest ideas in parallel programming published in 1972–73. The claim that Java supports monitors is shown to be false. The author concludes that Java ignores the last twenty-five years of research in parallel programming languages."
(Hansen, Per Brinch, "Java's insecure parallelism" in ACM SIGPLAN Notices, V.34(4) pp.38-45, April 1999.)
Per Brinch Hansen then goes on to describe exactly what's wrong with Java. Observe that Brinch Hansen in 1993 invented the SuperPascal language, whose concurrency features are based on a subset of occam 2!-)

"Fixing threads by more aggressive pruning" (Lee)

Lee describes how they in the Ptolemy project ("an ongoing project aimed at modeling, simulating, and designing concurrent, real-time, embedded systems") used several established software engineering processes like a new "code maturity rating system", design reviews, code reviews, nightly builds, regression tests, and automated code coverage metrics.
"The strategy was to use Java threads with monitors." (Lee)
I wonder if they had access to Per Brinch Hansen's paper, or the Welch and Martin paper. Surely they must have been familiar with the problems.

Even with all this going on for them, Lee describes that "No problems were observed until the code deadlocked on April 26, 2004, four years later." Lee goes on to say that:
"Regrettably, I have to conclude that testing may never reveal all the problems in nontrivial multithreaded code" (Lee)
When a new version of some software is released, there has to be thorough testing. Case closed. But are we sometimes testing too much? Relying on testing where other means would have been better?

Lee points out that avoiding deadlocks in a system based on semaphores is seemingly just a matter of following some well-known rules:
"Of course, there are tantalizingly simple rules for avoiding deadlock. For example, always acquire locks in the same order [32]. However, this rule is very difficult to apply in practice because no method signature in any widely used programming language indicates what locks the method acquires. You need to examine the source code of all methods that you call, and all methods that those methods call, in order to confidently invoke a method. Even if we fix this language problem by making locks part of the method signature, this rule makes it extremely difficult to implement symmetric accesses (where interactions can originate from either end). And no such fix gets around the problem that reasoning about mutual exclusion locks is extremely difficult. If programmers cannot understand their code, then the code will not be reliable" (Lee, italics by me)
In other words semaphores don't have WYSIWYG semantics!

Use of software patterns

Lee also discusses patterns. I have myself used patterns like the "The 'knock-come' deadlock free pattern", described in Note 007. But Lee has a point when he states that:
"Programmers will make errors, and there are no scalable techniques for automatically checking compliance of implementations to patterns." (Lee)
This is very true. But the shortest programs that function well with a pattern may become part of a repository for it. Real coding undermines patterns! Lee goes on:
"More importantly, the patterns can be difficult to combine. Their properties are not typically composable, and hence nontrivial programs that require use of more than one pattern are unlikely to be understandable" (Lee)
This is less universally true, I guess. If a pattern is based on composable components with WYSIWYG semantics, then combining these patterns should be ok semantically. Sometimes it's desirable to swap the Master and Slave processes in knock-come. Doing this has no side effect other than the one wanted. In a real-time system we need to know the timing constraints. If any process decides not to listen on a channel for some time, say 100 ms, then anyone sending on that channel would have to block for 100 ms. This blocking is no worse than an asynchronous message not being handled by some state for 100 ms. In fact it's better: the blocking doesn't fiddle with the gears, since the receiver would not even see a message that it would otherwise have to schedule for itself some time later.

WYSIWYG semantics and timing constraints

This takes me to a point that I am not so sure about. Will WYSIWYG semantics not include timing constraints? In such a system, does the protocol between processes describe all I need to know? No, I don't think so. I think I need to describe (as a comment or whatever) what the maximum blocking time is. Or really, the maximum time between any send and response (in any message-based system).

Usually one just dismisses this problem by stating that an event-action takes zero time. For most systems this is ok. But if we may block for 100 ms, this needs to be propagated, just like in the Toc language by Korsgaard and Hendseth mentioned above.

If a protocol is decorated with timing constraints and those are propagated, then You Can See What You Get. But only then.

Time messes up any good argument!

Real-time Java

Around the turn of the century my company (Navia/Autronica) was a member of the J Consortium - Real-Time Java™ Working Group. It was set up by NIST and backed by HP, Microsoft, Newmonics and others. These days the J Consortium honours its ancestry, but now it's about "real-time medical applications".

I was the one doing the work for Navia/Autronica, in the Real-Time Java Working Group (RTJWG).

I remember the endless discussions about how to handle "asynchronous I/O". I had been working in occam for 10 years then, and that problem, as a problem, was non-existent. As mentioned earlier, interrupt handlers were proper occam processes, where the interrupt vector was mapped as a channel, possible to use in an ALT (to be able to receive outputs as well). Results were preferably sent on a channel that would never block in steady state (by use of an "overflow buffer" process (two processes in series)). This was so nice, and there was no new thinking needed for an interrupt process versus any other process. They were born equal. The process model stayed.
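The "overflow buffer" idea can be sketched in Go. Here the two occam processes in series are collapsed into one goroutine for brevity: it is always willing to take input from the interrupt side (dropping values on overflow), so the sender is never blocked in steady state:

```go
package main

import "fmt"

// overflowBuffer sits between an "interrupt" process and its consumer.
// Whenever it holds data it offers both to receive more and to deliver,
// so the interrupt side never waits for the consumer.
func overflowBuffer(in <-chan int, out chan<- int, max int) {
	var q []int
	for {
		if len(q) == 0 {
			v, ok := <-in
			if !ok {
				close(out)
				return
			}
			q = append(q, v)
			continue
		}
		select {
		case v, ok := <-in: // always ready for the interrupt side
			if !ok {
				for _, w := range q { // flush what is left
					out <- w
				}
				close(out)
				return
			}
			if len(q) < max {
				q = append(q, v)
			} // else: overflow, value dropped
		case out <- q[0]: // consumer takes the oldest value
			q = q[1:]
		}
	}
}

func main() {
	irq, data := make(chan int), make(chan int)
	go overflowBuffer(irq, data, 8)
	go func() {
		for i := 1; i <= 5; i++ {
			irq <- i // the "interrupt" never blocks for long
		}
		close(irq)
	}()
	for v := range data {
		fmt.Println(v)
	}
}
```

The interrupt goroutine here stands in for the hardware channel; the point is that it is an ordinary process, born equal to the rest.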

"Alternatives to threads" (Lee)

Lee discusses several alternatives, but for me the most interesting is the fact that the Ptolemy II project also has a "rendezvous domain CSP" API. Here is their class hierarchy. They state that it's based on C.A.R. Hoare's 1978 version of CSP. A process in the 1978 version sends to a named process, whereas in the 1985 version a process sends to a named channel, as in occam. I believe Hoare changed it after the Ada experience, in which his 1978 solution hindered the building of precompiled (?) libraries. (Please send me a mail if Ptolemy II really sends on a named channel, so that I can fix this.)

Using a CSP class library is A Quite Good Thing. But this criticism applies to Ptolemy as well as to JCSP (mentioned earlier): if a channel (or whatever well thought out concurrency communication primitive..) is not a "first class citizen" of the language, then CSP (or..) will "never" succeed - if a library is all that's supplied in the future.

In a way it's "pruning of nondeterministic" cognitive thinking, where each time something is read it has to be understood through a class hierarchy. Every time! That is not a nice thing to say about OO, which has its benefits and uses! But a compiler would not know what a process is, so it can't do parallel-usage checks. At best that would be an add-on tool.

"Coordination languages" (Lee)

Finally Lee talks about coordination languages, which would sit on top of a more standard language (with new keywords?). The Ptolemy II rendezvous CSP "framework is properly viewed as a coordination language". There nondeterminism is explicit when wanted; otherwise it's deterministic.

After Lee's 2006 article: "The Go programming language" (Google)

But A Good Thing, invented after Lee's paper, is Google's Go programming language. Its concurrency is based on the CSP model, and it uses channels for interprocess communication. Even if it "does not provide any built-in notion of safe or verifiable concurrency", it is a step in the right direction. I let the Go people have the final words in this note:
"Why build concurrency on the ideas of CSP?
Concurrency and multi-threaded programming have a reputation for difficulty. We believe the problem is due partly to complex designs such as pthreads and partly to overemphasis on low-level details such as mutexes, condition variables, and even memory barriers. Higher-level interfaces enable much simpler code, even if there are still mutexes and such under the covers.
One of the most successful models for providing high-level linguistic support for concurrency comes from Hoare's Communicating Sequential Processes, or CSP. Occam and Erlang are two well known languages that stem from CSP. Go's concurrency primitives derive from a different part of the family tree whose main contribution is the powerful notion of channels as first class objects."
(Go programming language faq)

Sunday, September 19, 2010

020 - Mac OS X network disk partially seen

This Network disk is connected via USB to an AirPort Extreme (7.4.2, and also the better 7.4.1 (*)). Mac OS X 10.6.4 Snow Leopard (but I have also noticed this on earlier versions). (Not fixed with "Apple Addresses AFP Vulnerability With Security Update 2010-006" in Sept. 2010.) Also Tiger 10.4.11 on an iBook G4. The disk is a Western Digital WD My Book 1028, formatted as "Mac OS Extended (Journaled)".

  1. New: follow on 004.6 - Airport Extreme version juggling
  2. I am following this situation after the AirPort Extreme update from 7.4.1 to 7.5.2 in Dec. 2010. I wonder why Apple did such a version quantum leap. 
  3. I am also following the situation after the update of the Snow Leopard version of the AirPort Tool to 5.5.2 in Dec. 2010. This is the client sw that lets me configure the AirPort Extreme. I do not think that an update of it would have any effect on the situation described here, unless by a secondary effect coming from any error imposed by the AirPort Tool wrongly configuring the AirPort Extreme and its file server. (On Tiger 10.4.11 the AirPort Tool is 5.4.2, opening the door to a configuration clash on the Airport Extreme. Maybe I should only use the newest version? How does Apple avoid any configuration clash?). Reported to

6Jan11 New: the disk may have been in need of repair. See 004.5 - Time Machine backup on Airport Extreme's USB connected disk
  1. The Network disk was visible and "mountable" in Finder. Disk and contents. Fine!
  2. But when I tried "Save as" from any application I could see the network disk, but not the files and directories
  3. However, during the same "Save as" I could make a New directory and save the file in it, even if I could not see the rest
  4. Running Spotlight on the Network disk showed me the hits, but double-clicking failed to open it, but gave an error message about "broken alias"
  5. At the same Spotlight result window, the file path at the bottom was not shown. I guess that since the alias was wrong, it could not build any path either
Fix (permanent?)

I tried to find a fix on the net, but failed. However, I reasoned that there must be at least two ways to look at the disk, and the system seems to pick the wrong one sometimes. So I tried this (disclaimer: I have Mac OS X in Norwegian, so the English texts could be wrong):
  1. Open Finder
  2. Go, Connect to server
  3. There were two previous servers in the "Last used servers" list
  4. I copy-pasted both server addresses into a .txt-file, for "backup"
  5. I tried first the one and then the other
  6. I noted the one that removed all the symptoms above
  7. Then I "Emptied" the list of last used servers (command in Finder:Go:Connect to server:look at the list)
  8. Finally I copy-pasted back the one that worked
  9. Now the system has one to look for, and since it's the only reasonable and correct - fine!
  10. I think I have found out that using Go, Connect to server and then use the smb (even if that is the only alternative) seems to work (always?)
It was the "afp://...." that caused my problems, and the "smb://...." that was correct. Please tell me why!

Reported to


When I tried this again the other day, there were still two alternatives. The smb is the (right, top) icon with the three persons, which always works - and the AirPort-type (right, bottom) icon is the afp, which does not work. It's all cabled; the AirPort wireless is not on. When I tried a "Save as", it knew that the afp didn't work - see, it was "grayed out", compare with the larger non-grayed-out alternatives. (I have garbled my disk names.) Please teach me! At least, I now know which to use and which not to!

I can see that Mac OS X displays both icons for the same disk! The one in the window and the one in the window's title are one of each! I am lost. But point 10 above is my rope now.

(8April11) See further down (search for "smb") to find a point about a synchronization problem.

My iTunes library and Time Machine on this network disk

After I had changed to smb (above), iTunes could not find the iTunes library. It's on a network disk. To rectify this I pointed it to the smb disk with the three-person icon. All fine, also after I stopped iTunes and started it again. Same for Time Machine!

Another observation with iTunes. It also seems to use several mount paths to the network disk. I can start iTunes, and it shows almost everything, but it displays an exclamation mark in front of the music. Then, the first time I want to play it, it opens a shared network disk dialogue box. When I type the password, music plays. I doubt that iTunes has temporary files on the machine's disk, because it complains immediately if it can't find any network disk. 

Tiger - no problem!

Observe that I have no problem whatsoever accessing the same network disk from Mac OS X 10.4.7 Tiger. It seems to always pick afp AppleShare protocol, and the icon is always a globe.

Another problem - a directory's contents invisible on Snow Leopard and not on Tiger

I noticed that synching my pictures to the iPhone from a Mac OS X Tiger machine (iTunes 9.x) all of a sudden showed a picture folder on the iPhone (3G, iOS 4.1) that wasn't on the network disk that I was synching against. It was called ".... orig" or something along that line. Both folders showed the exact same 100 pictures, one real and one ghost.

I could see the original network disk directory from the Tiger machine, with contents.

Then I moved synching to the Snow Leopard machine, with iTunes 10. I had to point it to the picture folder of folders on the network disk. After synching the particular folder had gone on the iPhone!

I deleted the iTunes picture data base, and thousands of pictures were rebuilt - but that particular folder was still invisible, both from the Snow Leopard machine and iPhone.

I noticed that both the smb and afp mounts from above showed me the picture folder on the network disk - but it was always empty. I checked access rights all over the place but couldn't find anything. I could not move the directory around; it failed with error (-43) because it could not find the files.

But from Tiger I could move it! (Remember, all the other folders were visible on both machines, with any mount.) But I could not delete anything on Tiger; I got error (-50). I tried to restart the disk server on the AirPort Extreme. Nothing helped.

Until I made a copy of the folder from Tiger. That copy was fully visible also on Snow Leopard. Then I deleted the original from Tiger, which I now was allowed to! Then I renamed the copy back to the original name. 

Then all was visible, and I synched the iPhone from Snow Leopard - and the folder appeared again! 

HW connected: see top of note.

Related problem - iPhone synching picture fails on 3G, not 3GS

I just synched my wife's and my own iPhone. She has a 3GS (iOS 4.2.1) and I have a 3G (also iOS 4.2.1). We have separate picture folders on a network disk, connected to an AirPort Extreme (with 7.4.1, see blog 004, post 5 and 6).

I had created two directories from the iBook running Tiger, one in her sector and one in mine. I synched from a Mac Mini running Snow Leopard, using iTunes 10.2. All this was done on 6Mar11 with all sw up to date.

The picture folders were visible on the 3GS. But not on mine, even after an iPhone restart. 

I tried an "old trick" by now: from the Mini I duplicated the directory in my "sector", deleted the original and renamed the copy back to the same name. Now it worked! Now I also have the "47 Feb 11" album!

I noticed that iTunes reported that it deleted pictures and then synched some after my "duplicate, delete and rename" exercise! It could look like iTunes saw the pictures on the 3G, but the 3G didn't?

I shouldn't create new directories on a network disk used by iTunes on another machine!

Update: an experiment that went fine

I accessed the network disk that iTunes uses from all three machines: the two with Tiger (iBook G4 and lamp iMac) and the one with Snow Leopard (Mac Mini). I made a directory with one file from each machine. The files were also originally created on those machines, from Grab. Then I duplicated the three album directories into three unit directories, one for each unit: the iPhone 3G, iPhone 3GS and iPad 2 picture-synch directories. After synching, all three album directories were visible on all three units.

This was with iTunes 10.2.1, and it exposed the rather peculiar behavior of having all the units' pictures redone (thousands, times three). (Update 1: the next day it scaled and synchronized all the pictures again! For both iPhones and the iPad. Remember, they do have separate picture directories, and nothing had changed since yesterday. Not even the log-in to the network drive. (6April11)) (Update 2: I think that since I use a network disk, the unnecessary re-synching came because the same disk was mounted both with smb and afp. See further up in this post. When I removed the afp mount, iTunes again only synched what it should, and not everything.) (Update 3: Was this fixed with iTunes 10.2.2? "Resolves an issue which may cause syncing photos with iPhone, iPad, or iPod touch to take longer than necessary".)

This blurs the original point: that the picture directory had to be created from the Snow Leopard machine to be visible on the 3G. I observed this again two days ago when I synched with the exact same versions as today. (5April11)


Saturday, August 21, 2010

019 - Play (with) network-disk based iTunes library?

This note is updated at my new blog space only, in a blog note. Welcome there!

Updated 5Nov2011

This post is a follow-up to my post 018 - From concrete CDs to abstract iTunes files. And now the old amp broke. Things have now become somewhat clearer for me. I now seem to know the questions - and some answers!

Fig. 1 - Obsoleted physical architecture. See it here.

Fig. 2 - Present architecture.
Before Nov. 2011:

Present architecture

July 2012: I have added Apple TV and thrown out JackOSX because it did not seem stable on Lion. (I am not able to update the old Mini to Mountain Lion.) So, the figure above is not updated, but see a new and simpler figure at note 027 "Experiencing Apple AirPlay" - and some points about Apple TV.

Previous architecture

Now all iTunes files are located on a network disk, and the Media center (Mac Mini) runs iTunes as Master while the other machines run iTunes as Slaves. I dropped the pseudo multi-client architecture described in 003 - Shared iTunes library on a network disk when iTunes went on to version 10.x while iTunes on the Tiger machines stopped at version 9.x (Sept. 2010). There is a small description of the new architecture in blog 003 referenced above (blue text there).

The stereo now is a Denon Ceol RCDN-7 with AirPlay, new in June 2011. See blog 027 Experiencing Apple AirPlay.

I also see that Apple's solution with the Remote app works like a dream! The whole library in the hand, with music graphics and all. I sure have learned since I, half a year ago, deemed that solution not practical!

But I had to (or have to?) quarrel with firmware updates to get there! See 004.5 and 004.6.

In Nov. 2011 I gave up on the Airport Extreme with a USB-connected NAS disk, for an Apple Time Capsule with an internal disk instead. See 004 - Living with a network disk in the house.

How I got here

The goal is to be able to play our music (and view the CD covers) on all the units above: the Media center, the Laptop and Desktop, the Stereo and the Portable (sitting permanently on the fridge). The existing local net should be used as much as possible. There should be as little direct cabling as possible, like analogue, USB or FireWire or opto-coupled TOSLINK (S/PDIF).

Also, the Stereo should be controllable from the Media center and Laptop. And the Stereo should be able to stream "sound out" from those two units, meaning that we could have better sound while viewing TV or running iTunes on the Laptop - since they are all in the same room.

A goal is that only the Airport Extreme and the Network disk need to be constantly running. So the Media center should not have to run for us to be able to listen on the Portable or the Stereo. And I hope not to have to install a server on the Media center to convert from iTunes file reading to data streams. (Update: the iTunes version conflict (above) and JackOSX (below) have already changed this for me!)

The users would be me and my wife. The children are satisfied with the wifi when they are visiting. And the grandchildren are satisfied with what we are satisfied with, for now. If wishes coincide, then one of us must at best yield to earphones, since the iTunes files are readable from several sources, but the iTunes client is not multi-client. However, radio, internet radio, TV, DVD or a book would be acceptable alternatives, as well as the basement shop - or a good walk. Please observe that a "5.1 home cinema system" or a "wireless multiroom music system" is not what we're after! However, we might end up using a component from one of these.

Units in need of replacement
  • Stereo
  • Portable
  • The audio switch
Then existing units, still in good working condition:
  • Media center computer ("Mini"): Intel Mac Mini with Mac OS X Snow Leopard (10.6.4) (Also controlled from the other machines with a VNC client). This may be replaced later with a newer Mini, one that plays full size AVCHD films
  • Laptop: G4 iBook with Mac OS X Tiger (10.4.11)
  • Desktop: G4 «Lamp» iMac with Mac OS X Tiger (10.4.11)
  • Airport Extreme router and disk server: firmware 7.4.1 (7.4.2 is unstable for this usage)
  • The Screen: a full LCD TV with the tuner taken out of use since the shut-down of analogue airborne TV in Norway in 2009. But it has an HDMI and a DVI input which fit the Decoder and the Mini exactly
  • TV sound: is terrible, as with most LCD TVs, and a pair of Logitech active PC speakers with a subwoofer unit is still small enough and good enough for a tenfold enhancement by all measures. The Stereo is much better, but too old these days
  • Other units, as seen in the figure
  • CDs are already obsolete, as they have all been imported to iTunes. But they are still their own best backup! However, music is downloaded more and more from the iTunes Store
  • DVDs are still kept and played on the Media center
Some existing problems
  • 50 Hz hum. The Mac Mini has a TOSLINK opto-coupled and an analogue audio output. Since the audio switch does not have a TOSLINK input, I use the analogue connection. Since the switch is also connected to the Screen, I get hum, barely audible at normal distance. And the switch is not really a switch, it's a Logitech volume control box with a 3.5 mm aux input, which does not disconnect the other source (TV) when the input jack is inserted. I need a better solution! (This could have been solved with a Logitech Z-5500 Digital 5.1 Speaker System, which has an SPDIF/TOSLINK input - but it's complete overkill)
  • When I want to play music in iTunes on the Laptop and want some more power, I use a cable from it to the existing Stereo's line input. Surprisingly there is no audible hum. But for a new Stereo I'd need a better solution!
  • There is no way at the moment to play TV sound on the Stereo. Wiring it is not a viable alternative.
  • Likewise, there is no way to play the Mini's sound on the Stereo
Questions and answers
  • Yes, I use iTunes for my music handling. I am not afraid of the lock-in, as I believe iTunes and Facebook both lock us in, to count two of them (ref. "The Web Is Dead")
  • Streaming of "my?" music with a Spotify client I seldom do. If all I needed resided "in the cloud", some problems would go and some come - and some stay. The architecture I describe already streams a lot, internally and externally - like internet radio stations. So, when iTunes comes with streaming services, so be it! I would still not delete the music library on the network disk, and still wonder if I ever find time to import all the legacy 33 rpm record albums.
  • No, I have no plan to switch to Windows, Windows Media Player or Winamp. I know they're good and that people are satisfied with them. Fine!
I am studying
  • It supports Mac OS X "Core Audio" (ref), which I learned about in note 018.
  • It references "Jack OS X - a Jack audio connection kit implementation for Mac OS X. Connecting audio from any OS X application to any OS X application"
I also sent a letter to one of the JackOSX authors, who forwarded it to the Yahoo! jackosx group. I am now (again) registered as aclassifier there. I certainly hope there will be lots of help! I am also aclassifier at Sonos.

I have now downloaded JackOSX onto the Media center (Mini).
  1. I now don't have to use the "switch" any more. Hum is gone
  2. The TV sound now comes into the Media center (Mini), and is routed by JAR (Jack Router) to the line (headphone) output to the Amp (unlike with Rogue Amoeba's Airfoil there is no noticeable delay, so it's actually useful for TV listening)
  3. Setting up routing with the JackOSX Connection Manager is nice but has to be learned: push one end and then double-click the other end to connect. And stereo means connection '1' and '2' for "left" and "right"
  4. The Mini has to run all the time, since it's not practical to take the Mini out of idle to switch on the TV. I wish I didn't have to. The Mini is very silent, i.e. I can't hear the fan with the present power
  5. The JackRouter does not have a volume control. But the Mac OS X Sound config dialogue box shows that Line-in and Headphone-out (used as "Line-out") both have one. Line-in volume is set in that window only. Headphone-out volume may be set in that dialogue box, in the top menu line, or with the Remote control. Perhaps this is called the Master or Main volume control? Then, if I connect Line-in to Headphone-out via JackServer (in the JackOSX Connection Manager), the Remote control works for iTunes, DVD and TV alike! Of course the TV may also be controlled with the Tuner's remote control, the Screen's remote control or Logitech's main control, which is a nice physical knob that always works! (See figure below, where I assume the Norwegian text should be little problem; Apple's graphics do most of the talking)
  6. TV and iTunes can sound simultaneously. It's not enough to switch; one has to turn something on and something off. However, switching the Screen to the Media center disconnects the TV sound, since it comes through the HDMI to the "TV"
  7. Another thing: it's possible to run this sw invisibly. Stopping the JackPilot stops the Connection manager and the Pilot window but not the JackOSX server. Nice!

This is nice but rather complex. The next problem is to find a way to export a data stream to the Stereo and the Portable. The JackOSX network module is not in the download, since the authors are still(?) working on it. It's got to be more complex. Hmm

Yazsoft is not it
  • I am looking at the Yazsoft media server (which probably reads CoreAudio from JackOSX server?) and exports UPnP data streams..
  • ..which may be read by a Stereo, such as an Arcam Solo unit.
  • I cannot see that the Sonos ZonePlayer ZP120 is able to read UPnP datastreams?
  • The best of both worlds would be to find something that sees both the network-disk iTunes library and UPnP datastreams?
Darwin Streaming Server is not it

The Darwin Streaming Server does not seem to solve my needs. I found this from reading Comparison of streaming media systems at Wikipedia [21]. Here is a quote from macosforge's documentation:
The streaming server supports QuickTime Movie (MOV), MPEG-4 (MP4), and 3GPP (3GP) "hinted" files. Hinting is a post-process that you apply to your movies to make them RTSP-streamable. You can hint them with QuickTime Pro or the hinting tool available in the MPEG4IP package. If you don't hint your .mov's or .mp4's they will still be HTTP-downloadable, but it will take them some seconds to start playing. You won't need a streaming server for this, just use good old Apache.
"Good old Apache"

Ok, maybe? Entering a url to a datastream plays that datastream. Starting an Apache server is a matter of ticking in a dialogue box. But then, that's not all I want.

Apple AirPlay

Update July 2012: I added an Apple TV unit. See my new note 027 "Experiencing Apple AirPlay".

Apple has already had the AirTunes streaming protocol for some time, somewhat discussed in my previous note. I discarded it as pretty uninteresting there.

However, the follow-up protocol AirPlay, introduced by Steve Jobs in Sept. 2010, looks more promising. (Maybe he saw the despair in the notes..).

Apple informs about AirPlay at their page. There's also a wiki-page, of course. Read those now.

Apple says that there will be "featured partners" like "Denon, Marantz, B&W Bowers & Wilkins, JBL and iHome" who will produce units that understand the AirPlay protocol. (Most of these companies are "so international" that they present top-level pages pointing to countries, where I found no AirPlay info (7Sept10). But the ones in red should be interesting.)

A company called Frontier Silicon develops solutions for modern radios, like the Pinell Supersound II (blog 18). I would be surprised not to see AirPlay appear from those sources. Not many (any?) present systems will support AirPlay, so I'll have to wait to see it included. It might be worth it. As one of the major vendors answered my question in a closed User group: "Will nn support AirPlay in any of your products?" with the reply "We don't know yet.. with our current line of products it's not possible."

Please see fig.1 and the following (above).

So, why do I think this is for me?

Before I answer, I have a general worry here. See "Rogue Amoeba's Airfoil" (below). It routes AirPlay sound. However, it needs to insert a delay in the sound so that it can be heard in phase on all AirPlay units across the home's wired or wireless network. In other words, the TV should also have delayed the video, like Rogue Amoeba's built-in Airfoil Video Player does. So, in order to distribute TV, I would have to wait for a TV with built-in AirPlay? So, my general wish to have only one audio routing sw may not be easy to fulfil for a while?
  1. I can play iTunes on any machine and treat my new Stereo (with AirPlay!?) as an external speaker, just like with Airport Express. Nice!
  2. It would work over WiFi and Ethernet. So, the Portable (with AirPlay) could equally be an external speaker!
  3. I would be able to stream also from iPhone to any of these units!
And what would I certainly hope for?
  1. I would hope that I could also take the sound from my TV, pick it up (via the JackOSX server?) and route that sound to the Stereo and Portable as well. In other words: it would also stream non-iTunes streams. I can't see why this should be a problem; I can already play radio URLs in iTunes. (Update 3Jan11: see above, AirPlay probably delays sound like Rogue Amoeba's Airfoil, so this is not really possible?)
  2. I would hope that the Stereo and the Portable, which do not have iTunes (but support AirPlay) would also be able to serve as a client, i.e. I would be able to browse the iTunes library, with artwork and info and all - without running iTunes on any machine. Like how the Sonos ZonePlayer ZP120 would (I have assumed) be able to read my network disk based iTunes library
  3. I would certainly hope that the AirPlay sources would be available under macosforge. That way, any programmer like myself or the JackOSX people could get it going as well! Update: forget it! This most probably needs an AirPlay chip to run.

Interesting reading is "Forget Apple TV. AirPlay Is Apple's Sneak Attack On Television" at Gizmodo, which tells that there's a hw chip needed(?) for AirPlay - and asks if it will coexist with UPnP, DLNA, and Windows 7 streaming. That article points to this BridgeCo blog. Read on!

There is also an interesting post, "AirTunes v2 UDP streaming protocol".

Rogue Amoeba's Airfoil (for Mac) seems to be an AirPlay companion

Conclusion: since I must also run the JackRouter, I don't need Airfoil until we also buy AirPlay hw. And then I would need to run both JackRouter (for TV) and Airfoil (for sound only, or for video played with the built-in Airfoil Video Player).

It seems like I could "Send any audio from your Mac to AirPort Express units, Apple TVs, iPhones and iPods Touch, and even other Macs and PCs, all in sync!" with Rogue Amoeba's Airfoil for Mac. I will test it. It may even out-function the JackRouter. (The new version 4 of late 2010 seems more interesting than the version of Aug. 2010, when I first discovered it - partly because my target has moved..)

However, "Airfoil recognizes only AirPlay (formerly AirTunes) devices" (reply to support mail). Therefore Airfoil sw will not see or be seen by Sonos devices, since Sonos doesn't support AirPlay with their present range of hw (as of late 2010).

Rogue Amoeba does not have any white paper describing AirPlay's sw architecture. "As for a white paper, no, sorry" (reply to support mail).

In my architecture the TV sound comes in on the sound system input. There is no point in taking "live" sound from that input! Simply because Airfoil delays all inputs by some 1-2 seconds. They need to do it like this so that they can play the music at different locations over the wired or wifi local network with all speakers in phase. Even if AirPlay may be designed for real-time response, since they run on top of IP or wireless protocols I don't see how they could ensure unnoticeable delay. Given that I'd like only one sw sound component, this worries me! Maybe I'll have to run JackRouter for TV sound and Airfoil for iTunes in the future? (This is the reason why they have a separate Airfoil Video Player: so that they can delay the video playback as much as the sound playback.)

Saturday, April 24, 2010

018 - From concrete CDs to abstract iTunes files. And now the old amp broke

This note is updated at my new blog space only. Welcome there!

Latest (Aug.2010): This post is a discussion with myself, and has mostly been closed. Read 019 - Play (with) network-disk based iTunes library? as the next and more mature chapter.

My excuse for keeping this blog somewhat unorganized is that it is being written now, while I try to get organized. I am writing it to help me get organized. And perhaps you could recognise your own situation?

Intro: how? to play my music

What can help, now that my LP vinyl collection has been stacked away for almost 20 years, and I have reached "9.5 days" of playing time with CD import to iTunes, and still have a "day or two" left to import? In other words: I don't have to get more CD shelf space; it's the opposite: I need less and less. In practice this also means that the old Denon CD player DCD-680 will be out of use pretty soon. And it probably won't even be kept as a spare, as the old 33 rpm LP player has been. That one was bought when I acquired the CD player, to make it possible to play LPs indefinitely. It still works. However, since CDs can be played (or imported) on any PC or Mac, I am uncertain about the short-term fate of the CD player.

And - for some time now the old Denon DRA-335R radio and amplifier has been getting rusty in its voice, i.e. broken for any real-world listening.

The times they are a'changin, sings Dylan on one of the CDs. Yes, they still are. But how do I cope?

The new backbone: a network disk with iTunes library

My new backbone is not the shelf with plastic, but a network disk with iTunes files, and a good backup scheme. See blogs [1] and [2]. It's available over wifi, but I have also wired all necessary spots. We use the wires most: it's faster, and we swim in fewer radio waves (the wifi is set to 10% power).

But I have to move on with the blog. I'll start off with an example, and then perhaps later on try to set up a specification.


Sonos have a great box called ZonePlayer ZP120 [3]. What's in it for me?

When I say "me" I really mean "us": my wife and me, with no children in the house any more, but they come along visiting, some with grandchildren. How should our new "stereo" be like?

First, we're not talking about a home movie theatre, and no 5.1 surround sound. Not in our living room, and not in any other room either. We go to the cinemas, watch TV, and we watch internet TV and DVDs using a Mac Mini with the TV as a screen. The centre here is a multimedia rack I made a while ago [4].

However, we are talking about iPhones with playlists. They play home made movies and music in their iPod applications. Quite nice. There is a subtle integration here, with the living room. Even the car. Perhaps also with my workshop in the basement. But how should I do this?

Scenario 1: ZonePlayer ZP120?
This unit could easily replace the old amplifier. It's got a nice Class D amplifier [5] that I'd really like to own, and more than enough power. It has power outputs for stereo speakers and a line-level subwoofer output. It connects to our home's wired network, which is nice - and reads files as well as radio stations out there. It also has wifi, but only over a proprietary SonosNet protocol.

There is a remote control for it, but it costs more than half of the unit. However, a free iPhone App also controls it, and that would help. Especially since we have a first-generation iPod touch lying around.

Sonos would read iTunes playlists, once it's told by a Mac/Windows application called "Sonos Desktop Controller" where the iTunes library is located (that is, where the file "iTunes Music Library.xml" is located). Even if it's on the home network disk. So, none of my machines need to run while Sonos streams music from the iTunes-loaded network disk. I like it. So, this software does set-up, and it is also a remote player client or control system for the ZonePlayer. They even say that it shows CD covers, also while playing iTunes [17]. The sw seems to have changed name since I first started this blog. Now (Aug.'10) it's called "Sonos Controller for Mac or PC".
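As an aside, the "iTunes Music Library.xml" file that such controllers point at is a plain XML property list, so reading it programmatically is straightforward. A minimal, illustrative sketch (the `Tracks` layout is the standard iTunes one; the function name is my own):

```python
import plistlib

def load_tracks(library_path):
    """Return (artist, name) pairs from an iTunes XML library file.
    Assumes the standard top-level 'Tracks' dictionary of
    'iTunes Music Library.xml'."""
    with open(library_path, "rb") as fh:
        library = plistlib.load(fh)
    tracks = library.get("Tracks", {})
    return [(t.get("Artist", "?"), t.get("Name", "?"))
            for t in tracks.values()]
```

Anything that can parse a plist - a Sonos-like controller, or a home-grown script - can list the library this way without iTunes running.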

However, it doesn't play DRM-protected and WMA lossless files. So, some of my songs would not be seen. I have not studied what percentage I lose there.

On the other hand, Apple has this Airport Express which appears as an iTunes output channel [6]. Apple's protocol (AirTunes, in Sept. 2010 replaced by AirPlay, see note 19) is probably not any more standard than SonosNet. iTunes would stream to that unit, and I could connect the 3.5 mm jack output to the ZonePlayer's input. The Airport Express also functions as a TOSLINK optical digital connector over the 3.5 mm jack [7]. This is beautiful for avoiding hum - but the ZonePlayer only has RCA inputs, so I would need an optical PCM-to-analogue converter to use it [7]. This is a substantial extra cost. Also, a limitation of the Airport Express is that I can only stream from iTunes wirelessly, not via cable. However, the good thing is that the Airport Express would be available from any machine that runs iTunes and sees the network disk.

So, how about connecting the output from the Mac Mini (which also has TOSLINK) to the ZonePlayer? With 10 m of analogue wire, I would rather not. I already have a little hum from not using the TOSLINK output to the little Logitech Z4 that replaces the sound from the Samsung TV. When I switch it to the Mac Mini, that's when the hum comes - since I have not used the supplied TOSLINK. The hum is small, but I may hear it on a bad day. But, as mentioned, I could use 10 m of optical cable and an optical PCM-to-analogue converter box [7] here. The cost is about 40% of the price of the ZonePlayer ZP120.

I am confused. Buy a ZonePlayer just for the amp? Buy another little amp? Use two user interfaces, one for the ZonePlayer and one for iTunes? Drop iTunes? How big is the Sonos lock-in effect? Now that the iPhones already lock us in, I don't want more!

(Any system would probably try to lock a user in, ref. Gödel and Russell - and the laws of capitalism. So, trying to be outside one could easily be locked out, which is just another kind of lock-in? Standards or lack of standards also cause lock-in.)

I would suggest you also read David Pogue's article in NYTimes [9].

Scenario 2: Just an amplifier: Denon DRA-F107
This unit costs half of the ZP120. But if I add 10m of optical cable and the optical PCM to analogue converter [7] the only gain is a little lower price and the simplicity. As you understand, there is no TOSLINK input here, no digital audio input. DRA-F107 has a phono input from my still kept record player, to play LPs. It also has an FM radio, and DAB+ may be ordered. It also has a remote controller, somewhat easier than the AppStore remote app. ZP120 has internet radio, but no phono input.

This unit is smaller than the two old Denon 43 cm width units, but a little wider than ZP120.

I want to keep the small speakers we have: Bang & Olufsen Beovox CX50 [8]. I bought new elements for them some years ago, and they play quite well. But I would like a subwoofer in addition, since the CX50 are only 2.5 litres internally.

How easy should it be to play a song?

A. Before
  1. Hmm, I'd like to play a record
  2. Where is it, I think I placed it here. Darn, it's too dark, I can't find the CD.
  3. There it is!
  4. Switch on the units
  5. Push in the CD
  6. Find the song and play!
B.1 iTunes with help from external amplifier (before Post 019)
  1. Hmm, I'd like to play a record
  2. Start the Mac and iTunes (I never switch it off, so I only need to wake it up)
  3. Switch TV to become Mac mini screen
  4. Since I played iTunes on another machine before, I must wait until iTunes library has been updated
  5. Select speakers: the local PC/TV speakers or the nicer across the 10m optical cable
  6. Switch that system on
  7. Play
B.2 iTunes after Post 019, much easier (everything already runs: machine and iTunes):

  1. Hmm, I'd like to play a record
  2. Start Apple's "Remote" app your own iPhone (or iPod)
  3. Or use the remote control that came with the Mac Mini to continue playing. Observe I haven't switched on the Mac Mini's tv screen
  4. Play (it may take some seconds before the network disk spins)

C. iTunes files only with ZP120
  1. Hmm, I'd like to play a record
  2. Switch on Sonos ZP120
  3. Find the iPod and start the remote control App
  4. Have it connect to the ZP120
  5. Wait until the network disk is seen by ZP120
  6. Find the song in the list
  7. Play
I don't know which is fastest, but I suspect A. So, why not keep the old CDs? Because I cannot, and do not want to, stop the world alone. I see that with a ZP120 I also have the choice between B and C. But with the DRA-F107 I only have B.

Longevity of living-room music playing instruments

So, how technical should it be? It is good that a two-year-old grandchild would never be able to start her favourite music alone (B and C, above). And I, being a computer programmer, in a way don't like B and C. They are not elegant in use - just technically elegant!

But maybe they are, after all, two steps ahead and only one back. And by going two ahead and one back I won't end up halfway - I'll end up at another place. In iTunes I will be able to see the full collection at once, with front covers. I can search and group and play across albums with playlists. I can easily move music to iPhones etc.

But will iTunes be here in five or ten years? I do keep two network disks: one connected and in the house, the other in another house, which I carry over and mirror a few times per year. I also keep a copy on a machine in the house. How do I know that my backup is a valid backup? Of the CDs there was no backup: lost is lost. Hmm.

And how is ZP120 updated? For how long?

Any system that needs backup and centralized storage needs an "administrator". That's me in this house! But will I survive my wife, and if so, admin myself? We're still only 60 years young, but age is volatile. Hmm.

Ok, I need a switch: Presonus Firestudio Mobile or Apogee Duet or iMic?
I want to play iTunes directly from the external disk without any machine. And I want the same speakers to play music from any (?) machine playing iTunes, or any sound from any machine! And I want no hum from them. And I don't want to stream the music over WiFi. Seems like I'm starting to understand what I need?

Here are some products that could help. I could place one of these units beside the Sonos player, and any one of them could be of great help. The Presonus unit (left in the picture) takes both FireWire and S/PDIF (TOSLINK) inputs. I don't think it shows up as an external speaker on my Mac, though; I suspect I need Presonus sw to be able to stream music out on that FireWire. However, the Apogee Duet will turn up as an external sound unit, because it supports Mac OS X Core Audio.

Thinking about exactly that last point: I have a now very old iMic USB Audio Interface from Griffin Technology. I have enjoyed it for 8 years, sitting next to a Lamp Mac that I love. It connects to USB and has 3.5 mm stereo input as well as output. So it too must support Core Audio, and has since Mac OS X 10.3, I think it was! The iMic is still a product!

However, none of the products function as a USB, FireWire or optical switch. And I would have to have wires from each machine up to the ZP120. Am I getting closer, or am I just learning?

Another thing: FireWire or USB from a remote machine, and analogue out into ZP120 - I have a feeling I would still get some hum! 

I probably either want an optical switch or to stream music from a sound "http driver" directly into ZP120? That would be nice! One url from each machine! I think I will continue my search. Maybe I'm only a driver away! No extra hw!
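The "one url from each machine" idea is, at its simplest, just a small HTTP file server on every machine. A minimal, illustrative sketch using only Python's standard library (the port, the `serve_audio` name and the WAV MIME hint are my own assumptions, not a real "http driver"):

```python
import http.server
import socketserver

class AudioHandler(http.server.SimpleHTTPRequestHandler):
    """Serves files from the current directory over HTTP,
    with an explicit MIME hint for WAV audio."""

    def guess_type(self, path):
        # make sure network players treat .wav files as audio
        if str(path).endswith(".wav"):
            return "audio/wav"
        return super().guess_type(path)

def serve_audio(port=8000):
    # Blocks forever. Every machine could run one of these and expose
    # its music as plain URLs, e.g. http://<host>:8000/music.wav
    with socketserver.TCPServer(("", port), AudioHandler) as httpd:
        httpd.serve_forever()
```

A player that accepts a URL to a datastream could then fetch each machine's sound files directly, with no amplifier wiring at all.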

New knowledge: UPnP - Universal Plug and Play

Maybe the UPnP - Universal Plug and Play [11] (for Mac, for me) comes to the rescue?

A. C. Ryan has a box that seems to read UPnP servers [12]. There is a lot about iTunes and the like in the discussion thread [13], "iTunes streaming". It points to Allegro [14] and Yazsoft [15] Media Servers for Mac.

And I must find out if the Sonos ZP120 reads UPnP.

More new knowledge: DLNA - Digital Living Network Alliance

This is an alliance that defines and looks after protocol usage(?) [16]. There is a connection between UPnP and DLNA that I'll have to find out about. More later, since "knowledge" in the headings is very exaggerated. I'll have to learn this!
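One concrete piece of this I have picked up already: UPnP devices find each other with SSDP, where a control point multicasts an M-SEARCH request over UDP and listens for replies. A hedged sketch (the MediaServer search target is the standard UPnP one; `discover` needs a live LAN with UPnP devices to return anything):

```python
import socket

SSDP_ADDR = ""
SSDP_PORT = 1900

def build_msearch(search_target="urn:schemas-upnp-org:device:MediaServer:1",
                  mx=2):
    """Build an SSDP M-SEARCH request, the discovery message a UPnP
    control point multicasts to find devices such as media servers."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",
        f"ST: {search_target}",
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout=3.0):
    """Multicast the request and collect raw replies."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            replies.append((addr[0], data))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return replies
```

Each reply carries a LOCATION header pointing at the device's XML description, which is where DLNA-style capability information lives.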

Sonos Controller for Mac or PC

This is a client to control the ZonePlayer (see above). It is not a server that helps me stream music from another machine, which is what I am discussing at this stage.

Jack Audio Connection Kit

Just for the record for myself, this is very interesting, and I will certainly look into this more in my next note. Here is a quote from [18]:
What is JACK?
Have you ever wanted to take the audio output of one piece of software and send it to another? How about taking the output of that same program and send it to two others, then record the result in the first program? Or maybe you're a programmer who writes real-time audio and music applications and who is looking for a cross-platform API that enables not only device sharing but also inter-application audio routing, and is incredibly easy to learn and use? If so, JACK may be what you've been looking for.
JACK is a system for handling real-time, low latency audio (and MIDI). It runs on GNU/Linux, Solaris, FreeBSD, OS X and Windows (and can be ported to other POSIX-conformant platforms). It can connect a number of different applications to an audio device, as well as allowing them to share audio between themselves. Its clients can run in their own processes (ie. as normal applications), or they can run within the JACK server (ie. as a "plugin"). JACK also has support for distributing audio processing across a network, both fast & reliable LANs as well as slower, less reliable WANs.
Logitech Squeezebox™ Touch

Closing this name-dropping, here is the last piece of homework I need for my next note: the Logitech Squeezebox™ Touch. It takes physical connections and also works as a streaming client. See [19]. There's also the Logitech Squeezebox Radio, which I will need to look more into.

An "on the fridge" type radio: Pinell Supersound II

I'll watch out for this in my next note. It's a Norwegian radio, but it looks interesting. See [20]. It also reads UPnP streams. It turns out that Pinell uses a "turnkey solution" from Frontier Silicon [22] for their units.


I will try to connect all this in the next note 019 - Play (with) network-disk based iTunes library?


[1] - Blog post: 003 - Shared iTunes library on a network disk

[2] - Blog post: 004 - Living with a network disk in the house

[3] - Sonos:

[4] - Furniture design and building: Multimedia rack

[5] - Class D amplifier:

[6] - Apple Airport Express and AirTunes. In Sept. 2010 replaced or expanded into the AirPlay protocol, see note 19

[7] - TOSLINK optical connection:
PCM decoding from Mac Mini: S/PDIF:
PCM adaptor:
Box to convert optical S/PDIF or TOSLINK digital audio to L/R analog audio:
Gefen Digital Audio Decoder DD D/A-converter:

[8] - Bang & Olufsen Beovox CX50

[9] - David Pogue (The New York Times), "No ‘System,’ but Music Housewide" -

[10] - Denon DRA-F107:

[11] - UPnP - Universal Plug and Play:

[12] - A.C. Ryan - PlayOn! Media player:

[13] - Thread "iTunes streaming" -

[14] - Allegro Media Server for Mac:

[15] - Yazsoft Media Server for Mac:

[16] - DLNA - Digital Living Network Alliance:

[17] - Sonos Controller for Mac or PC -

[18] - Jack Audio Connection Kit -

[19] - Logitech Squeezebox™ Touch -

[20] - Pinell Supersound II -

[21] - Comparison of streaming media systems:

[22] - Frontier Silicon: - used by "practically all major consumer electronics brands, including Bang & Olufsen, Bose, Bush, Cyrus, Denon, Goodmans, Grundig, Harman/Kardon, Hitachi, JVC, Magic Box, Ministry of Sound, Onkyo, Panasonic, Philips, Pioneer, Pure, Revo, Roberts, Samsung, Sanyo, Sharp, Sony, TEAC and Yamaha".


Tuesday, April 13, 2010

017 - Safari and Opera Mini on iPhone/iPad

This blog post will show how a table-intensive page (my home page) is rendered on the iPhone by Safari and Opera Mini. Naturally, for 10 years the page has been my private browser "acid" test. Since there are billions of other pages to compare with, I'll stick to this one.

Observe that the iPhone (original, 3G, 3GS, 4) has a 3/2 screen (480/320 or 960/640), while the iPad (1, 2) has a 4/3 screen (1024/768). My index file uses table layout with percentages of the screen, not pixel counts. Except for the icons, which have to be pixel-defined. H, T, P and F are also graphics. Opera tries to use some heuristics for the small screens, which are basically not successful. Safari does what it's told and does not attempt to be smart, which is super.

1 - Initial version of Opera Mini
  • Safari on iPhone 3G 3.1.3 (7E18)
  • Opera Mini Web browser, v5.0 
  • 13 April 2010
Verdict: Presently I will use Opera if I need to search for text in a page, need to read it off-line, or have to download a large page with per-MB cost over my phone connection (good speed and also optionally no pictures). This means that I'll mostly stick to Safari, since it's also an iPhone Application, not a port. One "star" for this version.
    1.1 - Safari has always been super


    1.2 - Opera Mini on iPhone was bad

    2 - iOS 4 with Opera didn't help as Opera was the same
    • Safari on iPhone 3G 4.0.1 (8A306)
    • Opera Mini Web browser, v5.0 (5.0.0119802 2010-05-06)
    • 26 July 2010
    Safari is new, but the layout is 100% identical.
    Also no change in Opera Mini's layout. It's the same version, so that's hardly a surprise.

    3 - Opera Mini 6 on iPhone is getting better
    • iPhone 3G, iOS 4.2.1
    • Opera Mini
    • 22 June 2011 (the file has modified contents, of no importance)
    Opera Opera

    4 - Opera Mini 6 on iPhone 4 oops! and getting better
    • iPhone 4, iOS 4.3.3
    • Opera Mini
    • 24 June 2011
    • Showing two standing (portrait) pictures here. Same iPhone
    • Left screen cut reported to
    Opera Opera

    5 - Opera Mini 6 on iPad is super
    • iPad2, iOS 4.3.3
    • Opera Mini
    • 23 June 2011
    • The pixel real estate available is about the same as on the iPhone 4, yet layout here is super. Do Opera Software use another algorithm, on board iPad or on their layout server?
    Opera Opera

    Wednesday, March 17, 2010

    016 - Cooperative scheduling in ANSI-C and process body software quality metrics

    Updated 5June2016 (linked from


    The software metric STPTH ("static path count") sometimes yields an unacceptably worrying figure. The higher, the worse: 6000, for example, is really bad - signifying high code complexity. However, is it the code or the formula that's unacceptable? Let's try to find out.

    The fairy tale of "Hansel and Gretel"

    In this fairy tale [9], when Hansel and Gretel are left alone deep in the woods, Hansel has left a trail of white pebbles (stones), which their parents did not notice. To find their way home again, Hansel and Gretel just followed this trail. But the next day they cannot find home, because this time Hansel made the trail of bread crumbs. The birds ate the bread, and the woods became dangerous. They were lost.

    The first day the woods were not dangerous; the second day they were.

    Even in a fairy tale the context or trail is important. A child can understand it. So, why should a high STPTH number always be bad? You tell me!

    STPTH - what is it?

    STPTH seems to be defined in the now superseded ISO/IEC 9126 Quality Attribute Model [1]. The Wikipedia article for [1] does not mention the sub-characteristic attribute tree maintainability - complexity - static path count, which [2] does. The figure is a screen clip from [2].
    I have not been able to find the formula, but the number of branches is in it. However, in order to clarify, let's look at the difference between STPTH ("static path count") and STCYC ("cyclomatic complexity"). Quoting from a mail, where I had queried about these matters:
    ..It is very easy to generate code with a large value of STPTH because the algorithm for computing STPTH is simplistic. For example, the addition of a switch construct with 5 branches will add 5 to the value of STCYC but will multiply the value of STPTH by 5.
    In other words: if-then-else and switch-case code is expensive STPTH-wise. But watch out: there is at least one case where STPTH is not very meaningful, where a high number is ok. Even worse: in that case the normal figure is high!

    Recently I queried the English company Programming Research and they replied:
    This is similar to Nejmeh’s (1988) NPATH statistic and gives an upper bound on the number of possible paths in the control flow of the function.
    I found a reference and explanation in [3]. Npath is the name of a tool computing the NPATH measure and other measures. Quoting from [3]:
    The NPATH metric computes the number of possible execution paths through a function. It takes into account the nesting of conditional statements and multi-part boolean expressions (e.g., A && B, C || D, etc.). Nejmeh says that his group had an informal NPATH limit of 200 on individual routines; functions that exceeded this value were candidates for further decomposition - or at least a closer look.
    Also, [3] references the real source of it all, namely an article by Brian Nejmeh from 1988 [4].

    An example

    Programming Research gave me this example, knowing that I was writing this blog, so I assume it's ok to quote here:
    The following code example has a static path count of 26:
    Each condition is treated as disjoint. In other words no conclusions are drawn about a condition which is being tested more than once. The true path count through a function may therefore be lower than the estimated static path count but will never be less than the Cyclomatic Complexity (STCYC).
    The true path count through a function usually obeys the inequality:
    Cyclomatic Complexity <= true path count <= Static Path Count
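    The vendor's original listing is not reproduced in this post. As a substitute, here is a hypothetical C fragment (all names invented) illustrating the rules quoted above: the static path count multiplies over sequential constructs, while cyclomatic complexity only adds.

    ```c
    #include <stdio.h>

    /* Hypothetical fragment (not the vendor's original example).
       NPATH/STPTH multiplies over sequential constructs, while
       cyclomatic complexity (STCYC) only adds. */
    int classify(int a, int b, int mode)
    {
        int score = 0;
        if (a > 0) {                 /* 2 paths                   */
            score += 1;
        }
        if (a > 10 && b > 10) {      /* 3 paths: the && adds one  */
            score += 10;
        }
        switch (mode) {              /* 4 paths: 3 cases + default */
        case 0:  score += 100; break;
        case 1:  score += 200; break;
        case 2:  score += 300; break;
        default: break;
        }
        /* Static path count: 2 * 3 * 4 = 24.
           Cyclomatic complexity: only 1 + 1 + 2 + 3 = 7. */
        return score;
    }

    int main(void)
    {
        printf("%d\n", classify(20, 20, 1)); /* prints 211 */
        return 0;
    }
    ```

    Adding one more 5-branch switch after the existing one would add 4 to the cyclomatic complexity but multiply the path count by 5, to 120, which is exactly the effect described in the mail quoted earlier.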
    Our code example

    The code is discussed in [5]. Here is the start of it:

    There are 3 visible synchronization points in this code: lines 08, 17 and 21. The channel based scheduler is one return away from the code you see here. The PROCTOR_PREFIX in line 04 is an invisible branch table to invisible labels hidden inside the macros of the synchronization points.

    When a process is first on a channel (or on one of a set of channels in the ALT), there is a return to the scheduler, to implement the blocking semantics of a synchronous channel based scheduler. This scheduling is called cooperative scheduling, and all the code you see is plain ANSI-C with zero tricks - other than the fact that the branch proctor table is made with a script which searches for the labels. "The channels are the scheduling" with these systems, except for the asynchronous no-data signal channel we have implemented. The Wikipedia article [6] is not very relevant though, since it does not explain cooperative multitasking in use for a CSP (Communicating Sequential Processes) based system (March '10). Ada could have used that scheme; occam [7] does, as does our system. These systems could be, and ours is, non-preemptive. Therefore no assembler or tricks, since I don't think it is possible to write a preemptive scheduler in ANSI C without resorting to assembly. I cannot find a suitable Wikipedia article for this.
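    The actual macros are not shown in the post. As a rough sketch of the general technique (with invented names, not the real PROCTOR_PREFIX machinery), a synchronization point saves a resume id and returns to the scheduler; on the next call, a switch at the top of the process body jumps straight back to the saved label, all in plain ANSI-C:

    ```c
    #include <assert.h>

    /* Sketch with invented names (not the real scheduler macros).
       A synchronization point stores a resume id and returns to the
       scheduler. On re-entry, the switch acts as the branch table,
       jumping back to the label hidden inside the macro. */
    typedef struct {
        int resume;   /* where to continue on next call */
        int count;    /* stand-in for "useful work" done */
    } Process;

    #define SYNC_POINT(p, id) \
        do { (p)->resume = (id); return; case id:; } while (0)

    void proc_body(Process *p)
    {
        switch (p->resume) {
        case 0:
            for (;;) {
                p->count++;          /* do some work ...            */
                SYNC_POINT(p, 1);    /* ... then "block", e.g. on a
                                        channel input, by returning */
            }
        }
    }
    ```

    The case label inside the macro is legal C because case labels may appear anywhere within the switch body, which is exactly why this only works at the top level of the process body: a SYNC_POINT buried in a called function would have no enclosing switch to jump back through.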

    The problem with this code is that it is much more complex than it looks.

    Some of the channel macros are in-line code with conditionals. (I don't like macros, but with ANSI-C I tend to love them.) And when we receive something on a channel, it's not handled inside the ALT structure (as occam or Ada would do it), but in the nice switch (g_ThisChannelId) case block below. The same goes for the fold at line 20, which would contain 5 case branches. Inside these cases one would usually check the received protocol at a level or two, and then, innermost, we could send off a CHAN_OUT.

    This inner CHAN_OUT cannot be done from a function! There is no other way, with the non-preemptive cooperative scheduler we use.
    Aside: it's not like it was with SPoC (the Southampton Portable occam Compiler, [8]), which generated ANSI-C from occam. Occam allows communication on channels from any PROC (except from its side-effect free functions), so SPoC had to allow it. It used the same return to the scheduler (that's where we learned this). The trick was to "flatten out" any PROC by not calling it, but starting it as a special type of PROCess, only to push the return down (any number of levels) to one above the scheduler. Alas, this is too complex for us to do in hand-written ANSI-C!
    Still I would argue that the code above is as simple as it could be. In a code example at work, the STPTH value started at 13000, but I was able to reduce it to 6000. Then my willingness to fine-tune any further came to a halt.

    The fact that I made the effort to reduce it by 50% showed that STPTH had some value!

    A suggestion

    Perhaps the tool that calculates the STPTH for us could see these inner synchronization points that cannot be put into called functions and "excuse" them? We could decorate that inner point with an exception rule that could enter into the formula. That way we could get STPTH down to some value that we won't have to explain.

    In the reports we get an overview of all exceptions, so it won't be forgotten.

    I would certainly like comments on this. How should the exception modify the STPTH formula?

    One solution

    After some lamenting to the tool vendors and some pondering, we decided to use conditional compilation (#ifdef __STPTH_TOOL__) to remove the PROCTOR_PREFIX's automatically generated jump (goto, scheduling) table and also the synchronization points (channel input and output, gALT_END etc.). Nowhere to jump from and nowhere to jump to! In other words, we hide the scheduling from the tool.
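    A minimal sketch of the idea (CHAN_IN is an invented stand-in for the real channel macros; only __STPTH_TOOL__ is from the text): the scheduler's header presents two views of the same synchronization point, and a build with -D__STPTH_TOOL__ makes it dissolve, so the metrics tool sees none of the extra branches.

    ```c
    #include <stdio.h>

    /* CHAN_IN is a hypothetical stand-in for the real channel macros.
       Compiled normally it does the (here much simplified) channel
       communication; compiled for the metrics tool it expands to
       nothing, hiding the scheduling and its jump targets. */
    #ifdef __STPTH_TOOL__
      #define CHAN_IN(ch, var)                        /* invisible to the tool */
    #else
      #define CHAN_IN(ch, var) do { (var) = (ch); } while (0)
    #endif

    int main(void)
    {
        int ch = 42;   /* pretend a value is waiting on a channel */
        int v  = 0;
        CHAN_IN(ch, v);
        printf("v = %d\n", v);  /* 42 in a normal build, 0 in a tool build */
        return 0;
    }
    ```

    The appeal of this approach is that the #ifdef lives only in the one header where the macros are defined, so none of the process files need to change.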

    Here is the "Code Review Document" with a process containing the jumps to synchronization points:

    And here it is after we made the scheduling and synchronization linking invisible to the tool:

    The STPTH went from 33 million to 220 thousand! The file has 2000 lines of code. This is a process file built around a process context structure that we never pass whole as a parameter in an external call; we pass elements of it as parameters, of course. The file is folded, so I only ever relate to "single screens" of it anyhow. This is rather good, even if the purists would say the STPTH number is still way too high! But look at the structure now: in my opinion it is rather well organised.

    We only placed the #ifdefs in the scheduler's .h file, where all the macros we use are defined. Not a single change in the process files. But we had to rewrite the PROCTOR_PREFIX table generator program to also include this #ifdef in the generated file.


    [1] - ISO 9126 international standard for the evaluation of software quality

    [2] - Improving Software Quality through Static Analysis, in (Pugh) and (Foster, Hicks, Pugh)

    [3] - npath - C Source Complexity Measures, see

    [4] - Brian Nejmeh: NPATH: A Measure of Execution Path Complexity and its Applications, in Communications of the ACM, February 1988.

    [5] - New ALT for Application Timers and Synchronisation Point Scheduling. Two excerpts from a small channel based scheduler. Øyvind Teig and Per Johan Vannebo, Autronica Fire and Security (AFS) (A UTC Fire and Security company). In Communicating Process Architectures 2009 (WoTUG-32)
    Peter Welch, Herman W. Roebbers, Jan F. Broenink, Frederick R.M. Barnes, Carl G. Ritson, Adam T. Sampson, Gardiner S. Stiles and Brian Vinter (Eds.) IOS Press, 2009 (135-144) ISBN 978-1-60750-065-0 © 2009 The authors and IOS Press. All rights reserved. See

    [6] - Cooperative multitasking -

    [7] - Occam -

    [8] - Southampton's Portable Occam Compiler (SPOC) (1994) by Mark Debbage, Mark Hill, Sean Wyke, Denis Nicole. Just search for this on the net. I also have several hands on papers with it, see

    [9] - The fairy tale "Hansel and Gretel" -

    Øyvind Teig, Trondheim, Norway