The horse clinic is over, my book review has finally been submitted, and suddenly there's
nothing left to do except process the photos that Yvonne took. Despite our expectations, she didn't quite make 2,000 of them, only 1,816:
I don't
think I have ever taken that many in such a short space of time. In addition, many were
videos, so the total space was considerably more than 50 GB:
The new disk for lagoon arrived today. I suppose it's a sign of the times that a 1
TB disk is now pretty much the lower size limit, and this one was only half the thickness of
the one it replaced.
What partitions? I've been recommending a two file system approach since the first edition
of “Installing and Running FreeBSD” nearly 20
years ago. In 2003, I changed from / and /usr to / and /home,
implicitly leaving /usr in the root file system. That makes a lot of sense: despite
the name, /usr now contains mainly system files, while user files are
in /home. That makes it easier to upgrade: the /home file system stays the
same, and when you upgrade there's little you need to carry over to the new root file
system.
But how big? In 2003 I recommended a root file system of 4 to 6 GB. Over the years I've
increased that size considerably. Soon it was 10 GB, and on eureka and teevee
it was 20 GB. On stable I raised it to 40 GB, and I did the same for the latest
iteration of lagoon.
Strangely, things still fill up. Here's a comparison of the machines, basically df
output:

System   Installed        Filesystem     1048576-blocks    Used   Avail  Capacity  Mounted on
eureka   7 October 2008   /dev/ada0s1a           19,832  17,017   1,228       93%  /
teevee   17 May 2013      /dev/ada0p4            19,832   9,591   8,654       53%  /
stable   2013             /dev/ada0p2            39,662  32,215   4,274       88%  /
Where did all the space go? /usr/local can get enormous. eureka has over 9
GB, more than half the total file system. stable has over 10 GB. The reason that
its root file system is so much bigger than eureka's is system
builds: /usr/obj is now also nearly 10 GB in size.
So what do I do on lagoon? It was also 40 GB in size, and 30 GB of that was in use.
But we have so much space—Yvonne only uses about 20
GB. So for the fun of it I created a 100 GB root file system. I now have:
/destdir is a second root file system for upgrades. The intention is to install a
new version in that partition, then swap the partitions /dev/ada0p2
and /dev/ada0p4. That's why it's currently empty. And the usage in /home is
mainly a backup of the Microsoft disk /dev/ada1 (dischord).
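The swap itself need be no more than exchanging the two root entries. A hypothetical /etc/fstab fragment for the state after an upgrade (device names as above; other entries omitted, and the exact fields are illustrative only):

```
/dev/ada0p4    /          ufs    rw    1  1    # the freshly installed system
/dev/ada0p2    /destdir   ufs    rw    2  2    # the old root, ready for the next upgrade
```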
In passing, it's interesting to note how dependent the M.Zuiko digital
ED 14-42mm f3.5-5.6 EZ is on postprocessing. At 14 mm it has
impressive chromatic
aberration and distortion. On the left is the original image, on the right the version
processed by DxO Optics “Pro” with the standard conversion palette. Run the cursor over
either image to compare with the partner:
Clearly the new lens cap is useless. Time to initiate a return. To make it easier, it
makes sense to send a couple of photos, roughly like the ones above. Fought my way through
this emetic eBay form, climbed down the directory
trees, and tried to upload the first image. No go:
Well, a change of language helps, doesn't it? Additional retries alternated between English
and German, and there was no way to break out of the loop. In the end I shut the tab and
started again, this time with a smaller image (the original was full size). That worked.
So it seems that eBay's way of saying “file too large” is to ask you, alternating between
two languages, to retry forever.
It's a known problem with this unit that you need to tilt the head down for close-up shots,
but I had already done that. Sometimes I wonder what on-camera flash is good for.
Yesterday's image comparisons showed that image conversion software has a lot to do with
modern lenses. DxO
Optics “Pro” does a good job, but it's not the only game in town. There's also
Olympus' own Viewer 3, and once upon a time I had used UFRaw. But there's other free stuff out there too.
Went looking for Bibble, but
that has gone away. There is, however, RawTherapee. Tried each of them, but didn't come to a conclusion. In the meantime I
have:
UFRaw still doesn't have ICC
profiles, and I don't know where to get them for my current cameras. Without them,
the images look terrible.
RawTherapee behaves strangely. On starting it, I get:
=== grog@stable (/dev/pts/1) /usr/ports/graphics 17 -> rawtherapee
(rawtherapee:46054): GLib-GObject-WARNING **: The property GtkWindow:allow-shrink is deprecated and shouldn't be used anymore. It will be removed in a future version.
cannot create directory monitor: Unable to find default local directory monitor type
(the last message many times)
rawtherapee: Fatal IO error 35 (Resource temporarily unavailable) on X server eureka:0.1.
What's a local directory monitor type?
After some time it produces a screen with lots of buttons and things, but they don't
behave quite the way I expect. I'm not sure whether it's just extremely slow (though
I am used to DxO), or whether my intuition is defective. The home page gives the
impression that there is only a Wiki as documentation, but it seems that there's a
format-on-demand book, which I've downloaded and will look at. In any case, it
looks good enough to warrant further investigation, so for today there are no tangible
results.
One of the side effects of Yvonne's photo spree of the last
few days is that my contact print scripts can't handle the sheer number of images. They're
written as PHP web pages, and we were
getting:
Request-URI Too Large
The requested URL's length exceeds the capacity limit for this server.
Took a look at the code, and discovered:
function docontacts ($desc, $dirdate, $imagelist)
{
  $method = "get";              /* transfer method.  Set to get for debugging */
  global $me;                   /* name of this script */
  ...
OK, that's easy enough. But there were a number of other annoying buglets, and I spent a
lot of time trying to fix them, but didn't manage them all. It looks like I might need a
restructure.
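The GET failure is easy to quantify: every image name lands in the query string, and web servers cap the length of the request line (Apache's default LimitRequestLine is 8190 bytes). A back-of-envelope sketch, assuming roughly 40 bytes per image parameter (both numbers illustrative, not measured):

```shell
# Estimate the query string length for a contact sheet of 1,816 images.
images=1816
bytes_per_image=40             # assumed size of one image parameter
url_len=$((images * bytes_per_image))
limit=8190                     # Apache's default LimitRequestLine
echo "query string: $url_len bytes, limit: $limit"
if [ "$url_len" -gt "$limit" ]; then
    echo "GET overflows; use POST instead"
fi
```

On those assumptions the request line is nearly nine times the limit, which matches the error above.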
While unpacking cartons, found a couple
more S-100 memory boards. One is the
fourth Econoram board, similar to the ones I had already photographed, but I'm surprised
that I missed the other one last time round, in particular because of the modifications on it:
What's that for? Do I still have the circuit diagrams somewhere? This is a 64 kB memory
board, so it covers the entire address space of
the Z80. My best bet is that I have carved a
hole in the address space to allow inserting
a PROM board.
A few weeks back ALDI had a “quadcopter” (a
camera drone) on special offer. I bought one and had intended to test it, but either I'm
incapable or it is. According to the packaging (an important part of the documentation;
it's the only place where they mention the resolution of the camera) it comes with propeller
guards:
The buttons to the left and right of the screen are just fakes. The main switches are
really old-fashioned slide switches between and below the joysticks.
I tried several times to use the thing. It's almost impossible, and it almost invariably
landed on one of the propellers. The guards are really needed. It'll go back, but it makes
me wonder if the more expensive devices are worth the trouble.
More playing around with my PHP processing
scripts today, and I think I have fixed all the bugs. Yvonne is still backed up to 26 September, and it looks like it will take her the rest of the week
to process her thousands of photos and videos.
And of course she's complaining about DxO Optics “Pro” (“Pushing
the limits of your patience” rather than of your camera). In the past I have noticed big differences
in processing speed, and sure enough, it was slow today too. It looked as if there were
lots of “hard” page faults (which I presume means causing disk I/O). That's not so
surprising given the relatively small memory of the machine.
I was “logged in” too, if that's what you call sequential access to the machine. I had my
own instance of DxO running, also Olympus Viewer 3 and
a couple of other things. Since the login is frozen, they shouldn't make any difference:
swap out to disk and wait for the next login.
But where did the memory usage come from? Logged out, and the 3.8 GB memory usage dropped
to 2 GB. So although I wasn't active, and hadn't been for hours, “Windows” clogged up 1.5 GB of memory with the processes.
Why? I don't know of any operating system that handles virtual memory that inefficiently.
Things still weren't good, though. So I tried the normal Microsoft trick: reboot. And how
about that, after rebooting the system “only” used about 800 MB, and with DxO it was still
only about 2 GB. It's hard to say how much the difference was due to Microsoft and how much
due to memory leaks in DxO, but the result was a very significant increase in
responsiveness.
As it says, it's from ARP Networks. The
offer doesn't look bad, though nothing to entice me away from RootBSD, but it's nice to see other companies offering
FreeBSD VPS.
Olympus has announced release
4.0 of the firmware for the OM-D E-M1, to be delivered in
late November. It's by no means just cosmetic: it provides focus bracketing (multiple shots
with slightly different focus settings), so that you can process them to a blended image,
and focus stacking, which clearly means that the whole processing is done in-camera.
What a good idea! But the details look less exciting. In focus bracketing mode you can
take up to 999 shots (clearly a round number) in steps of “1” to “9”. What does that
mean? 1 is the smallest step, 9 is the largest. This article gives some examples, but also
shows that the steps are not directly related to the depth of field. Halfway down, near an
image of a frog on a particularly green background, the author writes:
With the camera stabilised on a tripod, and the aperture set to f/5.6, I took 50 shots
each at focus steps 1 and 2. Although 50 shots were not enough on focus step 1, 50 shots
on step 2 proved sufficient for a beautiful composite photo.
The camera knows the focal length of the lens, and it can calculate the depth of field if it
knows the exact distance. Does it? Even if it didn't, it could step until the image focus
changes. Putting the settings in the hands of the user suggests that they don't quite have
their act together yet.
In focus stacking mode, you take exactly 8 shots, merging them into a single image. What
focus step? They don't say. At first it sounds like a very good idea, since the memory
card will become the bottleneck. Unfortunately, you're limited to 8 images, and if you read
the fine print you discover that 7% is cut off each corner for some reason, and it only
works with three lenses. Fortunately, one is the m.Zuiko 12-40 mm
f/2.8 “Pro” that I also have, which also looks like the best of the three for the
purpose.
But are 8 images enough? In general, almost certainly “no”. One of the images on the page
was put together from 150 shots. How long does it take to write all that to disk? A raw
image is around 15 MB in size, so this is 2.25 GB of data. I don't know exact figures for
the write speed of the E-M1, and I don't even have the fastest
available SD Card (that will have to
change), but I'd guess that you wouldn't get more than about 3 images per second, so a 150
image sequence would take about a minute. Why the limit of 8 images for focus stacking? My
guess is the amount of memory available in the camera, so we won't see an improvement here
until a new camera is released.
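The estimate above can be sketched in shell arithmetic; the 15 MB image size and the 3 images per second write rate are the guesses from the text, not measurements:

```shell
# Back-of-envelope for writing a 150-shot focus bracket to card.
images=150
mb_per_image=15                # raw image size, roughly
total_mb=$((images * mb_per_image))
rate=3                         # guessed sustained images per second
seconds=$((images / rate))
echo "${total_mb} MB to write, about ${seconds} s"
```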
That's not all the new firmware offers, just the most exciting. Others of interest are
“silent mode” using an electronic shutter (up to 1/16000 s). There are also a whole lot of
other features, many for video, and some of which I don't really understand (“MF Clutch and
Snapshot Focus Disable”, for example). Hopefully it will make more sense when it's
installed.
And why the vapourware announcement? I suppose it's to make the cameras more attractive for
the Christmas season. It also suggests that it'll be a while before we see a successor to
the E-M1.
While walking the dogs this afternoon, came across no less than three plants that I can't
identify. This one is the plant from which we took cuttings the other day:
Took a cutting of that one; we'll see how it progresses.
The two Grevillea
rosmarinifolia bushes that I've commented about before are still flowering noticeably
differently. Clearly it's not a question of season:
So I tried again, looking through and out of my office. My light meter tells me that the
exposure at the bottom, behind the desk, is EV 3.1. Outside the house, through the window,
it's 13.0—a range of 10 EV. Not surprisingly, a normal photo shows almost no detail behind
the desk, and outside is burnt out (first image). Putting it through DxO Optics “Pro”'s
“Artistic HDR” profile improves things significantly (second image):
The in-camera HDR functionality shows a little improvement over an untweaked image, but HDR1
(first image) does not compensate as well as DxO's processing. HDR2 does marginally better,
but the outside is only barely recognizable:
So why not process the HDR1 image with DxO? Because all you get is
a JPEG image, and DxO wants raw images.
On the other hand, manually processing a 5 exposure bracket with 3 EV between the images
does much better. The first photo was made from only the 0 EV, -3 EV and +3 EV images,
while the second was made with all 5 (-6 EV, -3 EV, 0 EV, +3 EV, +6 EV). Now we have both
detail in the shadows and also outside:
I'm still not happy with these last two. I suspect that I could get the outside to look
better. But I can play with these images, and I might be able to improve it. With the
in-camera HDR, what you get is all you have.
But Martin's images look good. What am I doing wrong? It looks as if the HDR functionality
is deliberately very limited. But in that case, why does it need 4 images? I can get much
better results with 3 images.
We've established that putting a Microsoft login session on ice isn't enough to free up the
memory, so I'm currently logging off. Microsoft doesn't make it easy:
On one of my rare excursions into Facebook-land I was asked to participate in a survey.
Opinionated as I am, I accepted. But all I got was a collection of postings, most as
unrelated as these two:
One's in Malay, though I can't
understand enough to be sure of the topic: the Malays use a jargon that I can't decipher;
neither can Google Translate. The other is in German about the cost
of data retention. Do you prefer
a fish or a bicycle?
It's been over 4 years since
I last compared raw image converters. I've learnt a lot since then, and on the whole I'm
happy with DxO Optics
“Pro”. But 3 days
ago I had reason to examine things, and it took a while.
I have now read the documentation for RawTherapee. The first discovery was that it doesn't use lensfun, but instead profiles from
Adobe Camera Raw, and you have to install them manually. In addition, the Adobe page
states that only preliminary support is available for the Olympus OM-D E-M1, and newer
models aren't mentioned at all. In general, the list looks about 2 years out of date. I
didn't bother with this step. I was, however, able to use RawTherapee to convert images.
I now have output from DxO, both with my standard settings and without any correction at
all, and also UFRaw, RawTherapee and
Olympus
Viewer 3, and unlike 4 years ago, I can now compare the output directly.
But how do you do a good comparison? I can compare each with its neighbour, or each with
the uncorrected version, but that doesn't necessarily help. I've decided to compare each
with the uncorrected version first. Here goes. In sequence, they're converted by Viewer 3, DxO,
UFRaw and RawTherapee. Run the cursor over an image to compare it with the uncorrected
version, and click to enlarge.
It's interesting to see how differently Olympus and DxO handle distortion, and it's also
difficult to compare. Here's an alternation between the two, Viewer on the left and DxO on
the right:
The difference in field of view is particularly noticeable.
UFRaw uses lensfun, but the results aren't very convincing (despite the pincushion
framing of the resultant image). And RawTherapee manages to do some distortion correction
although it says it can't. Here RawTherapee on the left, DxO on the right:
Surprise, surprise: Olympus doesn't
correct chromatic aberration
correctly! Neither does UFRaw, though it's still better than Olympus. RawTherapee does.
Here is an excessively enlarged view of the up pipe:
I'm particularly impressed by the results from RawTherapee. Yes, they're not as good as
from DxO, but without lens profiles, that's not surprising. And I really can't understand
why Olympus can't do better. Looking at the controls, it seems that it doesn't do automatic
aberration correction:
Why not? I'm sure it happens in-camera. This makes Viewer even less desirable than I had
thought it to be.
Apart from the chromatic aberration, most of the converters seem to misinterpret the shape
of the tank: it looks like this (two more tanks of the same kind):
But there's a step in all the conversions except for RawTherapee. It's most obvious with
Viewer, but it's there with UFRaw and even DxO.
The most interesting conclusion here is that RawTherapee is definitely worth a look. The
results weren't as good as with DxO, but this was out of the box, and I really don't
understand how it managed any lens correction at all, since I didn't download the modules.
One of the results of Yvonne's photo orgy is an incredible
amount of processing. I store the photos on disk on eureka, and Yvonne accesses them
by NFS. That's not ideal:
some things, like making contact prints of video clips, require a lot of I/O, and over the
net it's particularly slow.
So why not log in on eureka? The simple answer is because my X configuration doesn't
do it right. The fvwm2 menus look like
this:
That's an ssh started from the window manager, so it needs an ssh key to be
loaded. Otherwise a particularly emetic bug in X causes the entire X session to
hang.
OK, do what I do and load your ssh key before starting X. A quick change in the
config files and all is well. In the process found this gem in .bashrc:
# Not sure what good this is any more, but it can't harm, and it'll
# help not to forget it.
PATH=$PATH:/src/Samba/tivo/vplay/i386
How long has that been there? It clearly refers to hacks
that tridge made round 15 years
ago. It would have been on my TiVo, to which I
last referred 11 years ago. blame tells me it came in in revision 1.26
on 23 May 2008, but I suspect that is just when I incorporated it
from another file. One way or another, it's clearly long past its use-by date.
Somehow everything went wrong. After over an hour of frantic hacking of .bashrc
and .xinitrc, got to the stage where xterm and some other programs could no
longer open the X server. In the end gave up and installed eureka's config files
on lagoon (they're supposed to be the same, and have conditional code dependent on
the system) and rebooted. And it Just Worked. What did I do wrong?
The return of my automatic lens cap has been agreed, though they want me to send it back (at their
cost, admittedly). That cost is about 60% of what I paid for it. But what is the problem?
One option would be dust. OK, how about an experiment: wash it. It's all plastic, so a bit
of kitchen detergent shouldn't harm. Tried that and—it worked! Not only did it open
completely, it did it with a click that had never been there before. That's good to know
for two reasons: first, that the thing works after all, and secondly that if it gets dirty
again, we can clean it.
It's only a little over a month
since Ballarat's coldest September night
on record, but temperatures are now unseasonably high. We measured 31.8° today before a
sudden cool change in the middle of the afternoon. And there were high winds—ideal bushfire
weather. Time to look at the bushfire web
site.
It would be wrong to say “no reaction”. The site redirected
from http://osom.dse.vic.gov.au/ to a different URL on the same site, at least
once. But then nothing for a long while, and then:
The connection to osom.dse.vic.gov.au was interrupted.
Error 101 (net::ERR_CONNECTION_RESET): The connection was reset.
What's wrong with the site? Going to the CFA web
site gave me the chance to search a busy page for the map, which proved to be at
http://warnings.cfa.vic.gov.au/#map. A new map, still hard to use,
still needing multiple pages to display the information you're looking for. So of course
there's no need for the old site any more. Nobody would have saved a link to it. It's not
even worth removing the DNS entry: just leave it there to do nothing and time out.
And sure enough, we did have a bushfire, once again in the Ferrers Road area (or Ferrars Road, as
the report put it). I'm glad we live on this side of the main road.
We really need to get the garden sorted out. But for that we need to get the sprinkler
system installed, and for that we need to repair the underground piping that Brett Chiltern
damaged months ago. I've
given up trying to contact Brett, so Warrick Pitcher will have to come and do it, and I'll
try to find Brett to serve the bill.
In the meantime, Gage from round the corner in Progress Road came along and did a bit of
weeding and mulching. We really need to spray the weeds, but it's been far too windy lately.
Yvonne has been working on her photo processing all week.
Today she finished the work on the photos for 28 September, all 618 of them. Then she threw them away.
Why? Not deliberately, of course. It's a misfeature in one of my scripts. I use two
scripts to make life more bearable with DxO Optics “Pro” and
Microsoft. The first, fordxo, links the images I want to process into a static
directory, /Photos/00-grog or /Photos/00-yvonne. When processing is done, I
use another script to move those images back that haven't already been processed.
Why 00-<name>? It gets displayed at the top of the directory tree, so it's
easier to find.
But on this occasion I had already processed all images for Yvonne so that she only needed
to process the ones that required special treatment. Normally fordxo checks if the
image has already been processed, and if so doesn't link the source image to the 00
directory. And fromdxo doesn't copy any images that have already been processed. To
override it, I added an -f option to the scripts.
All that works well. And as it happened, Yvonne reprocessed almost every image. Then she
ran fromdxo without the -f flag, copied no images, and then moved on to the
next day, overwriting the contents of 00-yvonne. What a pain! Fortunately, it looks
as if a backup I did during the processing might have saved the day. And clearly I need to
get fromdxo to count the images and complain if nothing gets copied.
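The check I have in mind could look something like this. A sketch only: fromdxo itself and the directory layout are as described above, and the function name is hypothetical:

```shell
# Complain instead of silently succeeding when nothing was copied back.
check_copied ()
{
    count=$1                   # number of images fromdxo copied
    if [ "$count" -eq 0 ]; then
        echo "fromdxo: no images copied; did you forget -f?" >&2
        return 1
    fi
    echo "fromdxo: $count images copied"
}

check_copied 0 || echo "warned, nothing overwritten"
```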
Yvonne spent most of another day processing her photos from
last week. In the process she managed to trip over many misfeatures of my processing system
that I had never seen before. Some of it has to do with the change of user, but mainly with
the change of approach. It brings home how important it is to get other people to test the
software that you write. I blame it all on her German-layout keyboard, which I can barely
use.
Our cleaning lady is looking for information about
an ancestor called John Doyle—not exactly an uncommon name. She thought he had lived
in Dereel in the 19th century, so I went
off looking for web sites about Dereel. There's history.dereel.com.au, of course, but the Dereel
Historical Society doesn't update it, and I could no longer find the other site that I had
seen on Facebook. Is it gone, or is it just
my “Facebook syndrome”? I can almost never find what I'm looking for.
But then I came back to Trove, the National Library of Australia's online database. It has
grown since I last visited it, and now it has over 9,000 articles about Dereel. The
oldest dates to 11 March 1804, long before the area round Dereel had been explored. It's also a
false positive; the text doesn't occur there. Still, the article is interesting because
it's the most recent document I have seen in English that still uses the long S (not Castle,
but Caſtle).
Government Ihntse, Sydney, Saturday, Gth April, 111 15 IS EXCELLENCY the GOVERNOR having
deemed it expedient to erect SLAUGHTER HOUSE in COCKLE BAY, near to Dawes's Point, as well
for the Convenience and Accommoda- tion of the Public as for the necessary Purposes of
Government itself, He is pleased to order and di- rect, lliat in futuie all Animals, of
whatever De- scription, ulrich may be intended fur the Go-ein ment Stores, shall be
slaughtered at this Place; an«! the Storekeepers whose particular Duty it n:.tv he to
receive Fiesh Meat on the Account of Go veinmeut, a;e ¡>l rielly 01 dereel and direct eel
not to
What's that? Poor optical character recognition. Trove is full of it.
But finally, on 19 January 1849, there's a report of a
land lease, describing the boundaries entirely with text:
George and David Aitcheson
Name of run—Kuruck Kuruck
Estimated area—34,140 acres
Estimated
grazing capability — 5,000 head of cattle or 15,000 sheep
Commencing at a point on the river Wardy Yallock 1 mile 72 chains north from crossing
place over said river at the Frenchman's Inn ; and bounded on the south by a line bearing
285 ° 10 chains 90 links ; thence by a line bearing 273 ° 40 chains, thence by a line
bearing 279 ° 2 miles 2 chains ; these last three lines dividing said run from that
occupied by Messrs Williamson and Blow ; thence on the west by a line bearing 347 ° 52
chains dividing said run from that occupied by Messrs Williamson and Blow, thence by a
line bearing 43 ° 1 mile 66 chains to the Wardy Yallock river, thence by a back water of
the said river 21 chains 50 links, thence by a line bearing 47 ° 2 miles 51 chains, thence
by a line bearing 1° 1 mile 36 chains ; these last three lines and back-water dividing
said run from that of Glenfine occupied by Thos. Downie ; thence by a line bearing 96 ° 1
mile 47 chains to Kuruck Kuruck creek, thence by the said creek bearing from point to
point 24 ° 30' 3 miles to the confluence of the Corindap Creek, thence by the said
Corindap Creek, bearing 354 ° 4 miles 13 chains 50 links, the said Corindap Creek and
Kuruck Kuruck Creek and last line dividing said run from that of Commeralghip occupied by
Messrs M'Millan and Wilson ; thence on the north by a line bearing 91 ° 1 mile 60 chains
to the White Hill gully, dividing said run from that of Dereel occupied by Messrs
M'Millan and Wil- son ; thence on the west by the said White Hill gully, bearing 135 ° 30'
34 chains 50 links, thence by a gully bear- ing 227 ° 29 chains, thence by a line bearing
200 ° 70 chains, thence by a line bearing 132 ° 1 mile 39 chains to White Hill gully,
thence by a line bear- ing 155 ° 30' 1 mile 1 chain to Kuruck Kuruck Creek, thence by a
continuation of the same line 45 chains to the point of the stony rises, thence by a line
bearing 195 ° 5 miles 19 chains, thence by a line bearing 145 ° 57 chains; these last six
lines and gully dividing said run from that occupied by Compton Ferrars, thence by a
continuation of the last line 39 chains, thence by a line bearing 236 ° 1 mile 60 chains,
thence by a line bearing 216 ° 2 miles 60 chains ; these last three lines dividing said
run from that occupied by Mr. James Austin, and thence on the south by a line bearing 282
° 2 miles 42 chains, and thence by the river Wardy Yallock, bearing 2S2 ° 16 chains to the
commencing point,
That's a genuine reference; names like Corindap
(now Corindhap) and Wardy Yallock
(now Woady Yaloak) clearly refer to
the area. And it's much earlier than any other reference I have found (starting more like
1860). But how difficult it is to guess the boundaries of this run! Surprisingly,
references like the Frenchman's Inn can still be followed: it was
in Cressy, and only
closed in 2008. And it seems that at the time of the document, the inn had been there
for over 10 years.
There's much more there, including reports on the amount of gold dug in Dereel.
Spent quite some time today looking through the National Library of Australia archive sites, and came up with more information
about Dereel. I had already established that Dereel was
already mentioned on 19 January 1849, but what was it at the time? A sheep run? It wasn't
until a quarter of a century later that it became official, in a publication in the State
Government Gazette (held by the State Library of Victoria), whose web site appears to have
been infiltrated by people who think they know better than the style guides. On “Friday,
July 02nd 1875” the Town of Dereel was proclaimed. God save the Queen!
And back at the NLA there are 7
maps relating to Dereel. Only two are online, and one, dated 1925, is only available at minuscule scale. The other, however, is much more interesting. It
was prepared by Ferdinand M. Krausé, and dates from 1889. It is guarded by a JavaScript
application that gives you zoom and display sizes between 400×400 and 1200×1200.
Fortunately, however, the map is out of copyright, and the JavaScript communicates via the
GET method, so it
was relatively trivial to ask for http://www.nla.gov.au/apps/cdview/?pi=nla.map-rm2337-5-sd&rgn=&width=5120 and
get an image 5120×5120:
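Since everything travels in the URL, the request is easy to script. This sketch just rebuilds the viewer's GET request from its parameters, with the width raised well past the viewer's own 1200 limit; fetching the result with fetch(1) or curl is then a one-liner:

```shell
# Rebuild the map viewer's GET request with a larger width.
base='http://www.nla.gov.au/apps/cdview/'
pi='nla.map-rm2337-5-sd'
width=5120                     # the viewer itself stops at 1200
url="${base}?pi=${pi}&rgn=&width=${width}"
echo "$url"
```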
This map is amazing for a number of reasons. Firstly, the number of properties, which far
exceeds the number there are today. And they bear little relationship to today's
properties. About the only one I recognize is our old house at 47 Kleins Road. Here the
map excerpt and a copy from the title:
The numbers along the edges show the compass direction and length in links (7.92").
Both are difficult to read, but I interpret:
Side     Length (Map)   Length (Title)   Direction (Map)   Direction (Title)
N        1197           1197             S 89° 51 E         90° 09'
E         951            951             S  0° 12 E        179° 48'
SW (1)    622            621             (?) N 29° 53 E    209° 53'
SW (2)    833            833             N 44° 49 E        224° 49'
S         299            299             S 88° 49 E        268° 49'
W        2091           2090             S  0°  8 E        359° 52'
The dimensions aren't quite the same, but it took some analysis to understand the
difference. The compass directions in the title only make sense if you start at top left
and go round clockwise. Clearly 179° 48' in one direction is 0° 12' in the other
direction. But the map uses a different convention: east or west of north or south, and
starting from the north or west. Thus the northern and southern boundaries, which are
almost parallel, are S 88° 51 E and S 88° 49 E on the map—clearly showing that they're
off by 2'—but on the title they're 90° 09' and 359° 52'.
So in the end the only difference is 1 link (roughly 20 cm) in two dimensions. How did they
arrive at this difference? It's unlikely they would have re-surveyed it; if they had, they
would have taken account of magnetic deviation. As Google Maps show, and as I have
discovered in the past, Kleins Road does not run 359° 52' to the south; it's more like
20°/200°, as I estimated nearly 4
years ago.
Looking at the names, J.J. Klein also owned the other side of the road, now subdivided into
4 blocks, and also the one down the end. No wonder it was called Kleins Road.
But things are missing here too: Dereel-Rokewood Junction Road now joins the two roads at
the top of this extract. How did people get through in those days? There are a number of
missing roads. And there are an amazing number of roads on the map for which there is now
no evidence that they ever existed. Further north, where we now live, there was almost
nothing—quite contrary to what we had been told:
The road to the east is now the
main Ballarat—Colac
road, but that's about the only one that's easy to recognize. Here's what Google Maps makes of the same area today:
Almost none of the roads correspond. About the only one is the section of road running
north/south, now part of Harrisons Road. On the other hand, there are roads which have now
completely disappeared, like the one coming from the south-east corner. Looking at the
satellite image there's still some evidence of that road to the east of the bend in
Harrisons Road.
Why did people change the roads so much? A road isn't an easy thing to build, and we're
talking about a time frame of only 126 years. It's really puzzling.
Coming closer to home, this is the area round Stones Road:
The road north/south in the centre is Harrisons Road again; the east-west one is then Bliss
Road (note that a J Bliss lived down Harrisons Road, in what is now a paddock), and the road
on the east side of the school is Stones Road. Our house would be roughly south of the
second i in Diggings.
Other things of interest are E Speary, in what is now Speary's Road, and the post office on
J. Smith's property (now a sheep paddock). Somehow it's surprising to find a school and a
post office in such a sparsely populated area. Possibly the area marked “Diggings” was
really something like a shanty town.
Then there are the diggings. The crosses represent mine shafts, and as far as I can tell,
they correspond well with the shafts we find today.
Looking further afield, to the west of Dereel it looked like this:
It's also interesting that the names of the property owners are included in the map, but not
the names of the roads. And there are surprisingly few surnames, including ones that we
know from road names today, like Cahill, Judge and Farley. Somehow the map asks more
questions than it answers.
Yvonne wanted to go to a garage sale this morning,
conveniently set for the first day of
the Dereel Spring Fair. I went with her
with the intention of visiting the fair on the way home.
Surprisingly, we found a number of interesting things: an ironing board, a bedside lamp, a
couple of Pastis glasses and a couple of
apparently brand-new cookbooks—all for a total of $10! My guess is that the prices for all
the items together wouldn't have been more than $300. And for that four people stay at home
all weekend.
The fair was much as the Dereel fairs always are, a combination of sales stall, exhibition
and random stuff:
She prefers it to the other. My understanding is that it'll come home soon to be attacked
by the animals.
After I took some photos, one of the people watching pointed out that I wasn't supposed to
take photos without the artist's permission. So we asked the artist, Julie, and she gave
it.
While our new garden struggles, the one in Kleins Road is getting by well without us. We
dropped in on the way back to get
some Clematis shoots which I hope to
propagate. There's lots of stuff going on there, notably
the Paulownia kawakamii and
the Echium candicans:
We have planted a sucker of the Paulownia in Stones Road, but the dogs (we think) bit off
most of the stem, and until a week or so ago we weren't sure that it had survived. But it
seems it did, though it hardly looks comparable to the parent tree:
The cookbooks that we bought at the garage sale were both sort of Indian, and both, it
proved, by the same author, Mridula Baljekar: The
Food & Cooking of India and “150 Curries”, apparently too old to be worthy
of mention on the web site. They're particularly English in outlook, and some of the
recipes look downright strange, more Chinese than Indian, such as numerous applications
of five spice powder.
It's almost as if I have discovered a parallel universe, but then a closer examination
reveals that the five spices are panch
phoron—not the only case where I've been confused by her terminology. In a completely
unrelated part of the book (the general introduction), she confirms this, but spells the
word “panchforon”. Other strange spellings include karuvepillai for kariveppila
(curry leaf; note that doubled letters
are pronounced doubled).
Other details are also interesting. This map shows Madras
(called Chennai since 1996) in the middle
of Andhra Pradesh state. That's a
strange place for the capital of Tamil
Nadu:
In any case, the books are nicely presented, well photographed, and for $1 each you can
hardly go wrong.
Interestingly, one of the photos in the first book shows (from afar) the Vishvanatha temple
at Khajuraho. But
clearly close-ups like this one aren't appropriate in a cookbook:
Isn't that a good idea? I suppose it is if you like your recipes standing up. But with a
normal bench height that means you have to bend down to read it, whereas if it's flat on the
surface you don't. In addition, the ring binding makes it particularly difficult to turn
the pages:
Spring is here, and I'm still discovering new plants. Within 300 metres of the house I've
found three different plants that I have only just identified. This one is almost certainly
a Bossiaea prostrata:
Decided to try out one of our new cookbooks this evening, and we settled on a prawn pillau (or is that pilau? that's how it's spelt in
the book) from The
Food & Cooking of India. In retrospect not a good idea; it's too similar
to the paella that we cooked yesterday.
On the whole, not a bad dish, but once again I get the feeling that the dishes that the
photographers get are not cooked to the same recipe. Here are the photo from the book and our
version:
I have connections with people round the world, and I've had the same email address for 18
years (before that my domain name was lemis.de, which didn't export well). So it's
not surprising that I'm bombarded with spam. It probably doesn't help that I keep telling
Facebook different things about my past.
Currently I claim to have been born
in Kandahar, studied at Новосибирский государственный университет (Novosibirsk State
University), and to live in Харків
(Kharkiv). So it's not that surprising that a lot of my spam is in Russian and Ukrainian.
But that's only part of it:
Click on the image to make it readable. Apart from
Russian (I don't recognize any Ukrainian), I have Portuguese, Spanish, French, Greek,
German, Chinese, Japanese, and at least one language written in Arabic script. And the
German subject lines are either deliberately or accidentally written in broken German.
Yesterday's prawn pillau was quite a
success, and though we were supposed to be eating the rest of the paella today, I decided to try another recipe from
the new cookbooks. This time I tried the other, “150 Curries”.
One thing is clear: the title is wrong. My guess is that not more than half of the recipes
could be called curries. There are noodle dishes, nasi goreng, sambals and all sorts of other dishes. They don't look bad, but they're
not curries.
The big issue is the ingredients, of course. In the end we drummed up 8 chicken drumsticks
and made a murgh dopiaza, chicken with lots
of onions:
Once again, Yvonne was happy with the dish, though the timing
in the recipe was clearly intended for chicken breast and not drumsticks.
After the meal, I went looking for other mentions of the dish—a good time to do so. And in
Wikipedia I found:
As many other Hyderabadi dishes, the addition of a sour agent is a key part of
dopiaza. Most often, raw mangoes are used; however, lemon juice or cranberries can be used
as well.
There's nothing sour in the recipe, nor in any other recipe I found on a quick trawl through
the web. Who's wrong?
CJ Ellis has trouble with his phone again! Once again he can make calls out, but calls in get automatically diverted to
voice mail. He asked me for help. I confirmed the behaviour, and suggested that he got
MyNetFone to contact me for problem
resolution, since he has difficulty understanding the people. Sometimes I do too: they
asked him what kind of modem he had. Modem? What's that in a National Broadband Network system? All he has is
an ATA.
Sure enough, within minutes I got a call from Akbar of MyNetFone support, asking what the
problem was with my phone (which he referred to as a “landline”). I explained the situation,
but it didn't seem to sink in. Round the second or third time it did, though. He checked
CJ's configuration and found nothing wrong. That, at least, was different from last time,
where their system claimed he wasn't registered, though he could make calls out.
So he tried another call, and said everything was OK: after a few rings it diverted to voice
mail. What kind of confirmation is that? It all depends on what “a few rings” is. Tried
it myself. Sure enough, it diverted after exactly 20 seconds, as configured. So the
problem was no longer there. But how difficult it is to establish these things. They still
haven't got the ring tones right, probably because they don't understand the problem.
After Saturday's garage sale, Yvonne decided that she would like to buy some dining
chairs there. They were still there, three of them for $1 apiece, so she went off and
picked them up. On the way she was given a dog toy, which Sasha really loved. So does Nikolai, who normally doesn't play with stuffed toys:
Why this one? It's much bigger than most stuffed toys, and maybe that's the attraction.
We're trying to decide whether it's supposed to represent a bear or a monkey.
We made the conscious decision not to move the greenhouse here. It was more pain than it
was worth. And we've now discovered that we have sufficient space in the dining room for a
kind of winter garden:
That has an additional aspect: when insects attack the plants, we find out about it, though
not necessarily as fast as I would like. Today I found these insects on the hibiscus buds,
though they looked more active before I gave them a dose
of pyrethrum:
Taking the photos of the hibiscus buds was difficult. I really wanted to come closer, but
past experience with the Zuiko Digital ED 50 mm
F2.0 Macro suggests that it focuses too slowly for handheld shots with extension
tubes. So what about the M.Zuiko 12-40 mm
f/2.8 “Pro”? Put it on with the 11 mm extension tube and... I couldn't focus At All.
In the end I had to take the photos without the tube.
Later I tried it under more controlled conditions. Yes, I could focus manually, but
autofocus failed completely. It worked with the 50 mm lens, though. Is there something
about the autofocus system that causes the problem? Do the original Olympus tubes do it
better?
20 years ago today I had a visit from Jack Velte and
friends of Walnut Creek CDROM. After dinner we did a bit of quick hacking and came up with
what was to become “The Complete FreeBSD”.
The book went through 5 editions, but it's completely out of date now. How times have
changed! And how many things haven't!
Warrick and Mari are coming on Friday to fix the damage that Brett Chiltern caused,
involving trenches in the garden. The garden is a mess! And we're not really motivated to
do anything about it until we have the water. But I could at least mow the lawn—I thought.
Despite my misgivings, I had left the layout to Yvonne, and
getting the ride-on mower between the plants is really difficult. A good thing we have a
push mower and people prepared to push it.
Today I had to do something with despair, my Microsoft “Windows” 7 box. A popup: software updates installed, rebooting
in 3 minutes.
Why that? I had explicitly told it not to install anything by itself. But now I had a
problem. Yes, “remind me later” buys me time, but it seems not much. And currently I
didn't have a display on despair, nor even a cable to connect it to
the KVM.
Out into the shed to look for a cable. I really needed a second one for swamp, one
of my test boxes. For some reason, I have hundreds of Ethernet cables,
even AUI, but after
much searching and reshuffling of moving cartons, I only found
one VGA cable.
Back into the office to hear the
local UPS quietly
buzzing to itself. Clearly it was running on battery. Back into the shed to find that my box
reshuffling had disconnected the output cable for the main UPS. Plugged it back in
again... it went on battery. Into the garage. Yes, the input circuit breaker had tripped
yet again. Thanks Jim. So far, since we've lived here, the UPS has been behind more
problems than it has solved.
Back into the house to tidy up the resultant mess. Having eureka, my main box, on
the second UPS has shown itself to be a good idea. But lagoon came up without
network connectivity. So did stable. So did cvr2. So did
even despair.
Clearly there was something wrong with eureka. But what? Switch? The switch
wouldn't explain why stable, on the same switch, could access other machines.
Restarted dhclient, which was definitely not doing its job, and natd for good
measure. Nothing. Firewall config? A good choice. Re-initialized the firewall. Still
nothing. Started a tcpdump to see if that could give me any insight. Yes, clearly
an ICMP echo
request from stable to eureka, and an ICMP echo reply from eureka
to stable...
Wait a minute. It wasn't doing that 5 minutes earlier. Sure enough, everything was working
normally again. What went wrong? It came good too much later than the firewall
reinitialization for it to be that, but I still have no idea what it was. Looking at the
firewall stats later, the only rejections I saw were:
00040 3678 204152 unreach filter-prohib tcp from not 192.109.197.0/24 to any setup
Gradually I'm running out of excuses not to upgrade eureka to the latest and greatest
FreeBSD. But there's still
one: kgames, some card games
that Keith Packard wrote decades
ago. The code seems to have rotted, and I can't find any version that will build in a
modern environment.
OK, that's a question of porting, and when it comes to porting, I wrote the book. But the kind of porting described there is
almost as old as the code. Still, got off to a start.
First I need a Makefile. That's easy: run imake:
=== grog@eureka (/dev/pts/6) /home/ports/x11/kgames/kgames-1.0 264 -> imake
Imakefile.c:16: error: Imake.tmpl: No such file or directory
imake: Exit code 1.
Where's my Imake.tmpl? locate tells me that it's
in /usr/local/lib/X11/config. OK, imake -I/usr/local/lib/X11/config did it.
But why didn't the imake in /usr/local/bin know that?
Next: varargs is dead and gone, long live stdarg. How do you convert them? I
haven't found a good set of instructions, but here are the diffs which I ended up with:
--- Xkw/Message.c 1996/03/13 15:46:31 1.1
+++ Xkw/Message.c 2015/10/15 02:25:26
@@ -30,7 +30,8 @@
# include <X11/Xaw/Cardinals.h>
# include "Cards.h"
# include <X11/Xutil.h>
-# include <varargs.h>
+# include <stdarg.h>
+# include <stdio.h>
Then there were a number of minor gripes, like using NULL for 0, or not casting
function arguments. The next serious one was in kklondike/kklondike.c:
=== grog@eureka (/dev/pts/6) /home/ports/x11/kgames/kgames-1.0/kklondike 272 -> make
cc -O2 -pipe -ansi -pedantic -Wno-system-headers -Dasm=__asm -Wall -Wpointer-arith -Wundef -I.. -I. -I./exports/include -DCSRG_BASED -DFUNCPROTO=15 -DNARROWPROTO -c klondike.c
klondike.c:26:28: error: X11/Intrinsic.h: No such file or directory
klondike.c:27:29: error: X11/StringDefs.h: No such file or directory
(etc)
Why doesn't the Makefile include /usr/local/X11/include?
Round about here I have a decision to make: do I just fix it without the help
of imake, or do I fix it so that imake still works? Does anybody
use imake any more? I wanted to quote from Software Portability with imake,
but I can no longer find the book. Maybe I did the right thing and threw it out. It's now
20 years old, and times have changed since then. I have a recollection of a biography
including the statement that the authors had been so brain-damaged by contact
with imake that they were only fit for travelling carnivals, but that doesn't sound
like Paul DuBois for a couple of
reasons. Maybe I was confusing it with a different book.
A couple of weeks ago I made a mistake measuring the amount of flour for baking bread (too
much), and ended up having to bake two partial loaves. That didn't work out as badly as I
had feared, so when I started a new batch yesterday, I planned for two full-sized loaves:
one in the newest pan, and one in the second-newest pan, which is only half the size.
The starter handled the 50% more flour well—too well, in fact: it overflowed the container.
A considerable mess, a lot of calculation, but in the end things worked out well. Next time
I'm going to have to create less intermediate starter and see if things still work out.
Mari Hendriks and Warrick Pitcher along today to repair some of the damage that Brett
Chiltern and mate did months back: the punctured underground irrigation line.
But how was the stuff laid originally? At the time, nearly a year
ago, I took lots of photos. Except, of course, the ones I needed. Still, Mari found it.
They started in the corner near the dog run, and looking in that direction we found the
white stormwater pipe on the left, the red control cable close to it, and the green-striped
black irrigation line off to the right:
While they were there, got them to put some pipes under the driveway. This one is a drain
for the ground water and also some low-density poly irrigation hose in a rural poly sleeve:
Letting Mari on to the property didn't go as smoothly as hoped. Nikolai and Leonid were in the front garden, and when I opened the gate, they shot off down
Grassy Gully Road. Niko didn't have a collar, so I went inside to get one, and then headed
off looking for them. It didn't take long before they came running back, Niko with
something in his mouth:
Things have changed since then, and today there was only one other dog there, about the size
of Sasha's head. I didn't take any photos, only videos, and I've left it to Yvonne to process them.
What do I need that for? I already have a Zuiko Digital
ED 50 mm F2.0 Macro, a better lens. But its focal length is frequently a problem, and
it has a maximum magnification of 1:2 (area 37.6×26.0 mm), while the 35 mm macro has a
maximum magnification of 1:1 (area 17.3×13.0 mm). Even with both extension tubes, I can't
get that ratio with the 50 mm lens. Both make it convenient for a number of applications,
including taking photos while walking the dogs.
Conveniently, the lens was located
in Ballarat, so I was able to pick it
up. The seller has an amazing amount of equipment. I thought I was a hoarder, but he has
me beat. I counted at least 2 OM-D E-M1s, 2 OM-D E-M5s (I think), an
E-PL7, and he also
had two E-330s and
an E-300, admittedly the latter
four all for sale. He also had a Sigma 30 mm f/1.4 lens for sale, which I had decided against because it's not
supported by DxO
Optics “Pro”, but I took a look at it anyway. Yvonne's
camera (the E-PM2)
could hardly focus with it at all. Mine would have, but why buy something like that when
there are lenses like the Leica DG Summilux 25
mm f/1.4 available?
Chris also had a M.Zuiko Digital
ED 14-42mm f3.5-5.6 EZ with an original Olympus automatic lens cap. He's not happy
with it: it's smaller, and it doesn't keep dust out of the mechanism. I'm not sure ours
does either, but here's the comparison. The lens on (Yvonne's) camera has the aftermarket
cap, and the other has the Olympus cap:
It's the middle of spring, but looking at the garden you wouldn't know it. The recent hot
spell and the relative lack of water mean that things aren't growing the way they should.
As a result, there aren't many flowers.
One exception is the wildflowers that I had noticed last year, and which I have mainly been
able to identify. This is without doubt
a Burchardia umbellata:
Indeed, I can't recall having received a message
in Azerbaijani before. But
this one wasn't one either: it was an invoice from Citylink, the operator
of Melbourne's
tollways (which they
call freeways, presumably because the
term “road toll” has a completely different meaning
here. Certainly they're anything but free).
But Citylink enjoys games. Instead of just sending you your invoice, like every other
company, they hide it on the web and get you to come and look for it. The whole thing takes
multiple clicks on a very slow web site, with response times as long as 40 seconds, but then
you can send a PDF formatted invoice to your
email. Why don't they do it automatically?
So somehow it's funny that gmail gets confused
by their messages. I wonder what makes them think it's Azerbaijani.
I have two UPSs in
series. The big one (3 kVA) is in the shed. It runs all the low-power stuff in the house,
mainly electronics and lighting. It also feeds the small one (1 kVA), which is there only
for eureka, my main machine. I have never got round to installing any monitoring
software: if the power fails for any length of time, there's currently not much we can do
about it.
Today, however, the second UPS was not feeling happy. On several occasions it beeped once
and then stopped again. Why? Nothing else had any issues. The display went on, but it's
on the floor, and by the time I looked down there was nothing to be seen. About the only
thing of interest was that the input voltage was 245 V, relatively high. Could it be that
the first UPS was doing something to keep its output voltage in range? Put a mirror in to
be able to see it more quickly:
We receive a daily newsletter from Cuisine et vins de France. They're usually not overly interesting, but a week or so
ago we found one for filet mignon au
bacon, which we tried today. Bacon is, of course, not a French ingredient. The filet
was stuffed and then wrapped in the bacon:
Juha (or is that Matti?) Kupiainen went for a motorbike ride today with his mate Glenn.
What does that have to do with me? He took some photos and published them—as a
directory with the original images. To look at them, you need to select each image
separately.
OK, this isn't Juha's fault: he can do it like that, which simply requires linking the
directory into the web server, or he can put them on Flickr, like he did 8 years ago.
But that requires lots of mouse-pushing, and you still don't have much control over the
display format.
So for the fun of it I created a quick and
dirty web page that would display all the images. But Juha wasn't happy: two of the
images were still on their side (the way he took them), and it didn't display
the EXIF data. For that they need to be
local.
So I downloaded the images, and for good measure put them through DxO Optics “Pro” and
Ashampoo Photo Optimizer. To my surprise,
DxO knows the device (an iPhone 4S), and has optical corrections for it which show that, in
fact, the lens isn't badly corrected. I can't see
any chromatic aberration,
and there's only minimal barrel distortion.
But the image quality! The normal size (“thumbnail”) images are OK:
What are those marks? It could be rain, but one way or another it looks really terrible.
Somehow smart phones are the lowest common denominator of technology. Yes, they work, and
20 years ago they would have been science fiction. But nowadays I think people have
forgotten that we can do much better.
My UPS issues
continue. There's increasing evidence that the UPS in my office is reacting
to something on its input; on one occasion the lights in the hallway dimmed slightly
when it beeped. Hopefully it's not the upstream UPS. Time to install some monitoring
software, something that I haven't needed in over 20 years of using UPSs.
My network reliability with the National Broadband
Network is still not what you'd expect of a modern network. Another short dropout
today while I was in the office, so I was able to confirm that the ODU LED was red.
That's supposed to mean something wrong with the Outdoor Unit (NBNese for “antenna”), but in
the cases I've seen it had
nothing to do with the antenna. Power cycled
the NTD, taking the opportunity to
connect it to the
office UPS, and
watched it gradually come up again. When all status lights were OK, tried a ping to the
world. Nothing. ifconfig showed that the interface didn't have an IP address.
Restart dhclient. Couldn't get an address, backgrounding. Restart dhclient.
Got an address immediately.
What's the cause? I don't really trust dhclient to do the right thing, but the most
likely explanation is that there really was one of these typical short outages, and nothing
I did really made much difference: it was just a matter of time.
I wish the NBN could be held accountable for this sort of thing. Every time (I think 3
times in 7 years) my external web server goes down, I get a month's fees credited to the
account. If the NBN had to do something like that, it would certainly change a lot.
More food from last week's cookbooks today, this time from “150 Curries”: Masala channa, not exactly a very specific title. This
one was different because it
included tamarind:
I've finally found a use for measuring spoons! Mridula Baljekar uses only volumetric measures in her recipes. How do I
convert them to weights? Mainly guesswork, since it seems that I have thrown out all my old
measuring spoons.
After my porting attempts on Thursday, Callum Gibson reminded me that there are multiple ways to
invoke imake. I've been there before, but it was 20 years ago:
You don't normally run imake directly, since it needs a couple of pathname
parameters: instead you have two possibilities:
Run xmkmf, which is a one-line script that supplies the parameters to
imake.
Run make Makefile. This assumes that some kind of functional
Makefile is already present in the package.
Strangely, make Makefile is the recommended way to create a new
Makefile. I don't agree: one of the most frequent reasons to make a new
Makefile is because the old one doesn't work, or because it just plain
isn't there. If your imake configuration is messed up, you can easily
remove all traces of a functional Makefile and have to restore the
original version from tape. xmkmf always works, and anyway, it's less
effort to type.
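From the shell, the two routes in the quote come down to something like this; the config path is the one locate turned up earlier, and -DUseInstalled is what xmkmf traditionally passes, which I'm assuming still holds for modern versions:

```shell
# Route 1: let xmkmf supply the imake parameters itself
xmkmf

# Route 2: roughly what xmkmf does under the hood
# (config path and -DUseInstalled are assumptions; check your xmkmf script)
imake -DUseInstalled -I/usr/local/lib/X11/config
```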
I suppose it's a sign of the times that xmkmf is now 80 lines long. I tried it, and
sure enough, it worked. And after fixing a lot of errors (mainly functions
returning void but not declared as such), managed to build kklondike. Run
it? It didn't crash. It didn't do anything else, either: it just hung.
What next? This was on eureka, running FreeBSD 9.2-STABLE. How about a modern system? Tried again on stable,
running a very recent 10.2-STABLE, and things looked completely different. First, I
couldn't compile the output of lex: it claimed multiple definitions
for input(). And sure enough, Xkw/laylex.l contained the line
#define input() (*yysource)++
But lex defines its own input() function—always? In any case, removed
the define and it compiled. This is somewhat puzzling: gcc seemed to be
happy enough with the redefinition. What am I missing?
There were quite a number of functions that specified no return type (and thus defaulted
to int), but which really didn't return a value. Went through them one at a time.
But then there was another strange message:
extern.c:19:6: error: redefinition of 'expl' as different kind of symbol
extern.c:19:6: note: previous definition is here
Huh? How can a definition contradict itself? The line in question didn't have any
strangenesses about it:
char expl[128]; /* explanation */
This was a tiny source file, and I ran the thing through the preprocessor (-C -dD
-E still works) and found no other definition of expl. But it seems it's now
a function:
SYNOPSIS
#include <math.h>
...
long double
expl(long double x);
...
DESCRIPTION
The exp(), expf(), and expl() functions compute the base e exponential
value of the given argument x.
It also seems that the compilers somehow know these functions even if, as in this case, the
header file hasn't been read in. Is that an advantage? I couldn't find a way to get rid of
it, so in the end I had to change the name of the variable. And it's clear that clang could greatly improve its error messages.
Another case in point:
In file included from io.c:12:
/usr/include/varargs.h:34:2: error: "<varargs.h> is obsolete with this version of GCC."
A strange version of “GCC” indeed. But that's in the header files.
In the process, it's interesting to note the antiquity of this code, over 30 years old:
That predates X11 (15 September 1987) and is less
than a year before the original
X. This file (kcribbage/crib.c) also contains a Berkeley copyright dating to
1980, which sounds dubious.
OK, set CDEBUGFLAGS accordingly. No change. Put it explicitly in
the Imakefile. No change. I can put it in the individual Makefiles, of
course, but there must be a cleaner way. Some other time.
Spam is illegal in Australia. They passed a law against it about 15 years ago. But who
cares? Nobody pays attention to it. As if to prove a number of points, today I got this spam:
Date: Tue, 20 Oct 2015 00:00:03 -0600
From: "Herald Sun" <HeraldSun@e.newsdigitalmedia.com.au>
To: <CITYLINKED@LEMIS.COM>
Subject: Dame Edna has a very special offer for you
X-Spam-Status: No, score=0.0 required=5.0 tests=T_DKIM_INVALID autolearn=ham
Subscribe to get the paper with added pizzazz and moreWeb version
Get a subscription with added pizzazz for half the price!
Possum, right now you can get all the news and all the sport on all your clever little devices for half the normal price for the first 12 weeks. That's only $3.50 per week.
As well as unrestricted digital access you can also sit back, relax and get your weekend papers home delivered at no extra cost.
There's so much wrong with this:
Citylink has violated my trust by
giving to a newspaper the email address that I created especially for Citylink.
Herald Sun has deliberately broken the
law to send me this spam.
My spam detector claims that the DKIM
information is invalid (it isn't), but fails to see any evidence of spam. So it's
pretty useless too.
By comparison, their inappropriate time zone (US Central time) is typical for this kind
of operation.
Is this the other side of the false positive I received a few days ago?
Why am I trying to resurrect Keith
Packard's games? There are alternatives, such as xpat2. A much more
important issue is to ensure that Hugin will still work when I update the system.
OK, I have the latest port installed on stable. Tried it out. It ran, but the
result of the alignment stage showed almost complete lack of alignment. What went wrong
there?
Report the problem? Hardly. The latest FreeBSD port is of Hugin 2013.0.0, released on 27 October 2013, nearly 2 years ago.
There have been two releases since then, and potentially they fix the problem. So: back to
upgrading Hugin, something that I put on the “too hard” queue 4 months ago. That was due to
strangenesses in vigra that seem to have since Gone Away. Instead I had relatively simple problems with libpano13, which GNU autotools managed to blow out of all
proportion. But finally it was done, and I could try to update Hugin. Of course, it
didn't work, but at least I'm not at such a dead end as I was with vigra in June.
In the afternoon Yvonne told me that her system had hung. In
to find the X server displaying only a small white
square on a black background. What's that? On the console these messages:
What an incomprehensible mess. Instead of ###!!! and lots of square brackets, how
about something that a script can parse easily, and information that tells you what produced
it?
Stopped and restarted the X server and things worked again. Looking through the web, it
seems to be related to
Adobe flash. And this report has a “solution”: disable hardware acceleration.
People, when will you understand the difference between solutions and workarounds?
While trying to port Hugin this
morning, was presented with this error message:
/usr/bin/sed -i.bak 's/-pthread;-D_THREAD_SAFE/-pthread -D_THREAD_SAFE/g' /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/base_wx/CMakeFiles/huginbasewx.dir/flags.make /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/hugin/CMakeFiles/hugin.dir/flags.make /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/hugin/CMakeFiles/hugin.dir/link.txt /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/nona_gui/CMakeFiles/nona_gui.dir/flags.make /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/nona_gui/CMakeFiles/nona_gui.dir/link.txt /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/ptbatcher/CMakeFiles/PTBatcher.dir/flags.make /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/ptbatcher/CMakeFiles/PTBatcher.dir/link.txt /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/ptbatcher/CMakeFiles/PTBatcherGUI.dir/flags.make /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/ptbatcher/CMakeFiles/PTBatcherGUI.dir/link.txt /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/stitch_project/CMakeFiles/hugin_stitch_project.dir/flags.make /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/stitch_project/CMakeFiles/hugin_stitch_project.dir/link.txt
sed: /eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/nona_gui/CMakeFiles/nona_gui.dir/flags.make:
No such file or directory
The original looks even worse, of course, with no line breaks whatsoever. What does that
mean? Isn't it so much easier to read when the spaces are replaced with newline characters?
/usr/bin/sed
-i.bak
's/-pthread;-D_THREAD_SAFE/-pthread
-D_THREAD_SAFE/g'
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/base_wx/CMakeFiles/huginbasewx.dir/flags.make
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/hugin/CMakeFiles/hugin.dir/flags.make
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/hugin/CMakeFiles/hugin.dir/link.txt
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/nona_gui/CMakeFiles/nona_gui.dir/flags.make
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/nona_gui/CMakeFiles/nona_gui.dir/link.txt
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/ptbatcher/CMakeFiles/PTBatcher.dir/flags.make
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/ptbatcher/CMakeFiles/PTBatcher.dir/link.txt
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/ptbatcher/CMakeFiles/PTBatcherGUI.dir/flags.make
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/ptbatcher/CMakeFiles/PTBatcherGUI.dir/link.txt
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/stitch_project/CMakeFiles/hugin_stitch_project.dir/flags.make
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/stitch_project/CMakeFiles/hugin_stitch_project.dir/link.txt
sed:
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/hugin1/nona_gui/CMakeFiles/nona_gui.dir/flags.make:
No such file or directory
The error itself was simple: this was a list from the old port Makefile,
and nona_gui has been removed. But it's so much easier, for me at any rate, to
understand the second version, though I suppose the web generation is used to 230-character
lines.
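Getting the readable form from such a monster line is a one-command job; a minimal sketch
with tr(1), using a shortened stand-in for the real 1,600-character line:

```shell
# Replace every space with a newline: one argument per line.  Spaces
# inside quoted sed scripts get split too, but for scanning long file
# lists that hardly matters.
echo "/usr/bin/sed -i.bak flags.make link.txt" | tr ' ' '\n'
```

Piping the offending line through the same filter produces the list above.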
25 years ago I wrote a B-tree storage system called Monkey
in C++. At the time I saw it as being the
logical development
of C, as long as you
ignored some of the more bizarre features.
Since then I have returned to programming in C, mainly because that's what the environment
required. 11 years ago I was required
to backport Monkey to C. In the process I discovered that C++ had become even more
bizarre, and the backporting brought insights that were hidden when I wrote in C++. The C
version was slightly more verbose, but much clearer in intention.
Today I had less pleasant experiences with C++. Much of Hugin is written in it, and while trying to
port it I got:
[ 45%] Building CXX object src/tools/CMakeFiles/autooptimiser.dir/autooptimiser.cpp.o
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/tools/align_image_stack.cpp:196:38: error: reference to 'lock' is ambiguous
hugin_omp::ScopedLock sl(lock);
^
/eureka/home/src/FreeBSD/svn/ports/graphics/hugin-2015.0.0/work/hugin-2015.0.0/src/tools/align_image_stack.cpp:124:24: note: candidate found by name lookup is 'lock'
static hugin_omp::Lock lock;
^
/usr/include/c++/v1/mutex:424:1: note: candidate found by name lookup is 'std::__1::lock'
lock(_L0& __l0, _L1& __l1, _L2& __l2, _L3& ...__l3)
^
/usr/include/c++/v1/mutex:350:1: note: candidate found by name lookup is 'std::__1::lock'
lock(_L0& __l0, _L1& __l1)
^
What's that? Took a look in /usr/include/c++/v1/mutex (an unlikely but correct name)
which, not surprisingly, proved to contain definitions for some mutex implementation,
including two different templates for lock. Is that what I want? No, the code in question
simply used the word lock as a variable, defined on line 124:
static hugin_omp::Lock lock;
What kind of header file hell has got me into this situation? There's one way to find out:
take the compiler invocation, strip the -o align_image_stack.o and replace it with
a -C -dD -E, which will display the entire preprocessor output. And make
always shows the invocation—doesn't it? Not, it seems, with cmake.
RTFM time. It seems there's a debug
option l to make: make -d l shows the invocations even if they have been
suppressed with an @ or other “quiet” flags. And sure enough, it
worked—for the main make. Subordinate makes, including the one that
interested me, reverted to the standard behaviour.
Spent quite a bit of time trying to find out how to display the invocation, unfortunately
without success. This all takes far longer than it should.
Finally, as my time was running out, I was reminded of a little trick I did decades
ago. Tandem
Computers' TAL compiler had an irritating habit of not outputting the number of
errors it detected. So I wrote a little wrapper procedure that ran the compiler and then printed the error count and a snide comment.
Paraphrasing,
if error^count > 50 then !Very rude remark
print (" This program is the work of a subhuman and bears no relationship to the TAL language");
...
elif error^count then !Rude remark
...
else
print (" Although no syntactical errors were discovered, it is unlikely that this program will run");
It's easier to do these things now. The compiler has 7 different names, but there's a good
chance that it's invoked as c++. That's in /usr/bin/c++, and
since /usr/local/bin is in front of /usr/bin in my PATH, all I needed
was a tiny script in /usr/local/bin/c++:
#!/bin/sh
# Log the invocation, then run the real compiler.  "$@" preserves
# arguments containing spaces, which a bare $* would mangle.
echo c++ "$@"
exec /usr/bin/c++ "$@"
Unfortunately, that didn't work: cmake is clever enough to put full pathnames into
the Makefile, so it still invoked the original. It looks as if I'll need to rename
the compiler just to get the invocation line.
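The rename trick can be sketched with a harmless stand-in under /tmp rather than the real
/usr/bin/c++ (the paths and file names here are made up for illustration): move the real
binary aside, and put the logging wrapper at the full pathname that cmake recorded.

```shell
# cmake records the compiler's full path, so a wrapper earlier in $PATH
# is bypassed.  Instead, move the real binary aside and put the wrapper
# at the recorded path.  Demonstrated with a stand-in "compiler" in /tmp.
mkdir -p /tmp/wrapdemo
printf '#!/bin/sh\necho "real compiler: $@"\n' > /tmp/wrapdemo/c++.real
chmod +x /tmp/wrapdemo/c++.real
cat > /tmp/wrapdemo/c++ <<'EOF'
#!/bin/sh
echo c++ "$@"                          # log the invocation line
exec "$(dirname "$0")/c++.real" "$@"   # then run the renamed original
EOF
chmod +x /tmp/wrapdemo/c++
/tmp/wrapdemo/c++ -O2 -c foo.cpp
```

The wrapper prints the invocation line and then hands all arguments, unchanged, to the
renamed original.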
Haven't we come a long way in the 20 years since PUS?
At university I discovered this wonderful new
language, Algol. It was so much
better than FORTRAN, and it was so easy
to program. And then I discovered that the version we were using was Algol 60, codified (in
the pre-Y2K days) in 1960. There was also
an Algol 68, and although this was in
1970, no compiler was available for it for our university computer
(an ICL System 4/50,
a Spectra 70 copy).
Why? Over the years I investigated the language, and looked for compilers for the computers
with which I worked. None came. The language was too complicated.
How times have changed! C++ runs rings
around Algol 68 in terms of complexity. So-called
“Object orientation” allows
individual programmers to build their
own Tower of Babel. And lately
I've found that, more and more, modern languages are polluting their name spaces. The
original Algol made a very clear distinction between basic symbols and variables, so that I
really did once write:
for for:=start step step until until do begin real z;
z:=poly (a,for,m);
write (30,format((-sn.8d@-nd)),for);
end end
But in the last few days I'm running into more and more needless namespace pollution. Why
do modern compilers implicitly know expl(), even though the vast majority of programs
don't use exponentiation? Why am I
getting mutex implementations interfering
with simple programs that, long before any mutex implementation existed, chose a simple and obvious
variable name? O tempora! O mores!
Once upon a time advertising served a useful purpose. How long ago was that? Now there's a
whole industry of people who create buzzwords and improbable claims.
One of the less obnoxious things are the labels the manufacturers put on their items. These
containers, for example:
What are they? In fact, the small print tells you:
Fluoriguard® Toothpaste. Well, it
tells you that it's a kind of toothpaste. What's Fluoriguard®? It seems to be a kind of
mouthwash.
But that's not what the designers consider important: it has a Great Regular Flavour (what's
an irregular flavour like?) and
LIQUID CALCIUM (quite an achievement at
room temperature). Yes it has
a Calcium carbonate base, but
that's neither liquid nor something that is worth stressing. By comparison, “Strengthens
Teeth Freshens Breath” sounds perfectly plausible.
Then of course there's a seal that you have to break to use the product. What's written on
it? “Do not use if this seal is broken”. How else can I use it?
But the big difference is between the two tubes. The one on the left is empty, having
offered “Maximum Cavity Protection” for several months. The new one isn't as ambitious:
just “Cavity Protection”. Why is it degraded? The real information, in more small print,
shows that the composition hasn't changed, and it doesn't mention calcium, liquid or
otherwise. My best guess is that the designers wanted to emphasize “Cavity”.
I wonder what advertising will be like in 50 years' time.
Yvonne told me that Facebook is full of the fact that today is the date in the “future” when Marty McFly
and Doc Brown arrive in the future in Back to the Future Part II. Well, yesterday, but it's always yesterday in the USA. So we watched part I tonight.
The whole thing started 30 years ago, in October 1985, when we were at the Beach of Passionate Love in Kota Bharu. And Marty travels back
in time to 5 November 1955, when I was in... Kota Bharu. It's really difficult to think
that these times are 30 and 60 years ago. And some of the details are interesting, like
having the 1955 incarnation of Doc viewing the content of a 1985 video camera on a 1955 TV.
But sure enough, in those 30 years the TV standards didn't change that much.
Spent a bit more time trying to understand the Hugin compilation problems. Finally got my wrapper scripts to output useful
invocation lines, which, as I expected, were long:
How do you fight your way through that? There was a way to show which header files were
included, and from where. Spent some time looking before I found that this is
the -H flag:
That's 2,476 header files! For a single source file of 40 kB and 1,146 lines, more than 2
header files per source line, and a total output nearly 300 times the source file. The
header files are wagging the dog.
Looking through the include sequence didn't show anything of much use:
None of these #includes were conditional. It seems inevitable that once you
include fstream, you end up including mutex along with its namespace-polluting
templates. What a mess C++ is!
Could this be a compiler dependency? There's nothing obvious in the Makefile from
the previous version, apart from the note
OK, let's try that. Configuration fails, but not before telling me that it was
using gcc. But config.log showed that it was using clang. Somehow I
have a feeling that everything is going crazy.
Surely somebody else has run into this problem before. Sent a message out to
the Hugin forum asking for help.
For my first home-made computer I built a “console” or “control panel” with which I could
single-step the machine and monitor its activity. Older computers had these as a matter of
course, but I don't know of any other for a Z80:
During my time with Tandem
Computers we had some reason to examine the Tandem/16 processor in more detail. By that
time, production had stopped, and I was given one of the three control panels that
manufacturing had built for the three factories
(Cupertino
CA, Reston VA,
and Neufahrn BY). Mine
came from Neufahrn, and I'm pretty sure it's the only one left in existence:
What did it do? I have a vague recollection, but only a vague one. I can't even recall if
it was necessary to connect all the side connectors on the back to the CPU:
That's quite possible. The T/16 CPU had multiple boards (CPU, MEMPPU and memory), and the
connectors between them looked similar. Here a photo I took of a “Twin Mini” T/16 machine
in Reston in August 1992, after I had left Tandem, and clearly after the machine had ceased
being used:
The next part selected specific processor registers (register stack and environment). The
important environment registers were E, which was something like a processor status word, P,
the program counter, and two stack pointers S and L, which correspond to something like a
stack pointer and frame pointer on modern processors. MASK could have been the interrupt
mask. I have no idea what A, B and C were.
The register stack was an 8 word wraparound stack numbered R0 to R7, though the exact
allocation depended on the register stack pointer in the bottom 3 bits of the E register.
It's not clear what X1, X2 and X3 meant, and I'm left wondering if it wasn't a hangover from
when the micromachine may have used these three registers as index registers. The legend on
the PCB shows that it's part number 51072, revision B, and dates from 1976. That's round
the time Tandem shipped its very first machines, so it's possible that this is really a
hangover from pre-production models.
I really no longer have any idea what the Microfunction and Interrupt selections meant.
Again, I'm guessing at a lot of this. The machine had four “maps”, 128 kB memory segments:
system data, system code, user data and user code, and that's what's shown on the right hand
side of the rotary switch selections (“memory”). But what's the left hand side? Setting
the base address for the maps, maybe?
It would be nice if somebody could find the micromachine description.
Somehow I wasn't feeling my best today. Nothing obviously wrong, apart from listlessness
and lack of appetite. The last couple of nights I ate far too much, so that in itself isn't
significant, but I didn't eat any lunch and only a little in the evening. To be observed.
It's been unseasonably warm this month, which of course means lots of sunshine. How much do
we need to get our hot water only from sunshine? Last weekend I turned off the electric
boost, and until today we had no problems with hot water. But things have become marginally
cooler. In the last two days the maximum temperature was more like the monthly average, and
now we note that the water is not hot enough. So that electric boost is necessary,
even in spring.
For some reason my message to the Hugin forum didn't arrive. More attachment stupidity? But it's run by the
Google behemoth, so it should be able to accept
mail from Gmail.
What a pain Gmail is! It's good for filtering spam, but the user interface! After I
enlarged the tiny window, I got:
Juha Kupiainen is thinking of buying a smaller camera for use when riding his bike. Given
the quality his mobile phone
delivers, that's understandable.
But what camera? That's his decision, but it gave me time to think (and talk) about how the
small Micro four-thirds cameras
fit. Here's an example with three of my old compact cameras, my old Pentax Z1 and the Olympus E-PM2 with the
M.Zuiko Digital ED 14-42mm f3.5-5.6 EZ lens. Here all images with lens retracted,
where applicable:
My office UPS is
still beeping at irregular intervals. Does it have something to do with it being in series
with the shed UPS? Connected it to a non-UPS power socket. So far it hasn't beeped again.
Yvonne off to sell some horses this afternoon, so I had to
take the dogs for a walk by myself. Not an easy thing at the best of times. But today we
saw a dog—looks like a Border
Collie—who wanted to play.
How do you stop three dogs on leads from running around in all directions when a fourth dog
is taunting them? With great difficulty. It proved that I had put Leonid's lead on his collar, not his harness, and he
quickly got rid of that. Sasha wasn't
quite sure of the whole matter and ran around in circles, tying himself up with his lead.
And a little later Nikolai managed to open
his harness. I'm not sure if it's damaged, but it happened a couple of times, and despite
the fact that it's tied round his left foreleg, he managed to lose it completely.
Dragged them back home as best I could, losing Niko about 50 metres before the house gate.
Got the other two in, shut the gate, untangled them sufficiently to let them go, and then
paid my attention to Niko. Got him in as well, with the other dog outside.
Turned around, and they were all inside the gate! Smaller dogs can get between the planks
around the gate. So I ended up having to start all over again and put our dogs in the area
around the garden, which is fenced off well enough to keep smaller dogs out.
Who does he belong to? He's friendly enough. Put out a post on Facebook, but got no replies. Yvonne returned and
found, along with his registration number, initials and a phone number: C.A.R. 03 9706 3187. Surely
not my old school mate C.A.R. Hoare?
Called the number. Central Animal Records.
We can't take your call right now, but if you leave a message, we'll call back. The message
menu was strange enough that I timed out, but I was left with an assurance that they'd call
back. They didn't.
We left him in front of the house with water but no food. Later we checked; he was gone,
hopefully home.
It's been over a week since the middle of the month, when I normally do my garden photos, and I'm still not done. Somehow
things keep changing so fast. Last week the roses were just coming out, and
the Dicksonia antarctica was
showing its first new fronds. Now already things look completely different:
My Olympus OM-D E-M1 has various support for HDR imaging: exposure
bracketing and even in-camera HDR images. There are clear strangenesses about both of them.
There are two different forms of exposure bracketing. One is intended for HDR, insists on
an uneven number of images, and offers brackets of 3, 5 or 7 exposures at 2
or 3 EV intervals; the other is
intended for unclear exposure situations and offers 2, 3, 5 or 7 exposures at 0.3, 0.7 or
1.0 EV intervals. Why the difference? Clearly the design considered the application rather
than the functionality.
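For scale: each EV step is a doubling of exposure, so an n-EV spread covers a luminance
ratio of 2ⁿ; a quick check:

```shell
# Each EV step doubles the exposure, so a bracket spanning n EV covers a
# 2^n luminance ratio: 3 exposures at 2 EV intervals span 4 EV, a 16x range.
for ev in 1 2 3 4 5; do
    printf '%d EV = %dx\n' "$ev" "$((1 << ev))"
done
```

So even the modest 0.3 EV brackets cover a meaningful range, while the HDR brackets at 2 or
3 EV intervals span enormous luminance ratios.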
I tried the in-camera HDR shortly
after buying the camera and rejected it as being useless. But a few weeks ago I
found a good out-of-camera image. I tried myself and got results as poor as the ones I had
two years ago. In the discussion on the German Olympus forum Reinhard “Olympus can do no wrong” Wagner
explained the errors of my ways and pointed to his
blog, in which he demonstrated his own photos:
These aren't the sequence he showed in his blog, but I think they're in the sequence least
to most dynamic range. And clearly the big difference is between the first three and the
last one.
The sequence is, in fact, “HDR1”, normal, “HDR2” and processed by an old version of “HDR
Project Platin” (5 exposures in 2 EV steps). He also comments that the newer version
produces better results.
So for me, this sequence confirms my opinion that the in-camera “HDR” is useless. The first
impression is that “HDR1” produces worse results out of 4 images than the camera does with a
perfectly normal non-HDR image. In fact, there are differences: clearly the normal image
has more sun in the church, which improves things, and the “HDR1” has better detail in the
outside. Still nothing to be happy about. But Reinhard? “I think most people will agree
that the blokes at Olympus have done a pretty good job”.
Reinhard is no fool. On the contrary, he's an expert in his field. Why is he happy with
these results? I've been puzzling about this for two weeks. One thing he said was that I
was trying sequences with too much contrast (why? That's what HDR is for, and the in-camera
HDR takes 4 images where mine get by with 3). So today, while taking the garden photos, I
tried a simpler subject: the shade area in the garden, which with sunlight outside is about
5 EV darker than the surroundings. That's not beyond the range of a normal photo:
And clearly postprocessing can improve that. Here's what DxO Optics “Pro” and
Ashampoo Photo Optimizer do with that
result. First the original, secondly DxO's “HDR” processing, thirdly both DxO and Ashampoo,
and fourthly just Ashampoo. Run the cursor over any
image to compare with the next:
Certainly the DxO processing improves the skies, and together they improve the dark area.
But it's not so spectacular. The result with enfuse is far more convincing. Here the original single image for comparison,
then a three-exposure bracket processed with enfuse, then that image optimized with
Ashampoo:
It's all washed out! Yes, “HDR2” shows the shadow details better, but the grass looks
burnt-out. I can't really see any advantage in this kind of image. Why did Olympus do it,
and why does Reinhard think it's even acceptable, let alone a “good job”?
Are they any better than the photos I took last year?
Doubtful.
The real problem is being able to see what you want to take. Yes, you have a viewfinder
display, but it shows what the camera is pointing at, and not necessarily what you want to
see.
How can I change that? The big issue with macros at this size (roughly 7 cm from the
subject, depth of field at f/22 of 0.7 mm) is just positioning the camera correctly.
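Those numbers are consistent with the usual close-up approximation DoF ≈ 2·N·c·(m+1)/m².
The circle of confusion c = 0.015 mm (a common Micro Four Thirds value) and magnification
m = 1.5 below are my illustrative assumptions, not measured values:

```shell
# Close-up depth of field, thin-lens approximation: 2*N*c*(m+1)/m^2.
# Assumed values: f/22, circle of confusion 0.015 mm (Micro Four Thirds),
# magnification 1.5; these are illustrative, not measured.
awk 'BEGIN { N = 22; c = 0.015; m = 1.5
             printf "DoF = %.2f mm\n", 2 * N * c * (m + 1) / (m * m) }'
```

With those assumptions the formula gives about 0.7 mm, matching the figure above, and it
makes clear why positioning the camera to within a fraction of a millimetre is the hard
part.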