Articles

Mercury’s crust revisited

Airy vs. Pratt isostasy

Figure 1: Pratt (left) vs. Airy (right) isostasy. There are two main ideas of how mountain masses are supported. In Pratt’s theory (left), the density varies and less dense crustal blocks “float” higher, whereas denser blocks form basins. In Airy’s theory (right), the density is constant, but the crustal blocks have different thicknesses. Higher mountains have deeper “roots” reaching into the denser material below. Image credit: Shih-Arng Pan

In today’s volume of the “Earth and Planetary Science Letters”, Michael M. Sori from the “Lunar and Planetary Laboratory” of the University of Arizona (US) writes about how he used data obtained with the MESSENGER (Mercury Surface, Space Environment, Geochemistry and Ranging) orbiter to re-measure the crustal thickness of Mercury. Crustal thickness is an important geophysical parameter, which allows us to further constrain terrestrial planet formation scenarios. And since Mercury is always good for a surprise, the new calculations show that Mercury’s crust is only 26±11 km thick, i.e. much thinner (and also denser) than previously thought.

First estimates of Mercury’s crustal thickness were published by Anderson et al. (1996). Their estimates were based on data obtained with the Mariner 10 spacecraft, and they concluded that the crust is 100–300 km thick. Almost two decades later, with a wealth of new instruments on board MESSENGER to create gravity and topography maps, Padovan et al. (2015) concluded that Mercury’s crustal thickness is on average 35±18 km. The authors assumed that topography is predominantly compensated by Airy isostasy, with columns containing equal masses. This equal-mass approach has now been shown to overestimate the thickness of Mercury’s crust; instead, an equal-pressure approach (first described by Hemingway and Matsuyama 2017) should be used. The following paragraphs explain the meaning of isostasy and the difference between the equal-mass and equal-pressure approaches.

Airy vs. Pratt isostasy and the “equal mass” vs. “equal pressure” assumptions

Mercury grain density

Figure 2: Grain density measurements on top of a MESSENGER image of Mercury. Image Credit: Michael M. Sori (2018)

Mercury grain density vs. elevation

Figure 3: The data shows that Mercury is inconsistent with Pratt isostasy (red dashed line), because no correlation between density and elevation is observed. Image Credit: Michael M. Sori (2018).

Isostasy is a fundamental concept in geology, meaning that lighter crust floats on the denser underlying mantle. It thus explains why mountains and valleys are stable over long timescales. This is called isostatic equilibrium (an equilibrium that can be disturbed by erosion or volcanic activity). There are two main ideas of how mountain masses are supported (see Figure 1). In Pratt’s theory, the density varies across the surface and less dense crustal blocks “float” higher, whereas denser blocks form basins. In Airy’s theory, on the other hand, the density is constant, but the crustal blocks have different thicknesses. Higher mountains have deeper “roots” reaching into the denser material below. Thus, in the case of Pratt isostasy, one would expect a correlation between density and elevation across the surface of a planet, with mountains having lower densities.

In the study, Sori (2018) presents grain density measurements across several regions of Mercury (see Figure 2). Using MESSENGER’s topography maps, the author could then look for a correlation between density and elevation. As shown in Figure 3, no such correlation exists. Thus, Airy isostasy can be assumed to be the better description of Mercury’s topography.
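
To make the test concrete, here is a minimal Python sketch of such a correlation check. It is not the analysis from the paper, and the density and elevation values are purely hypothetical placeholders:

```python
# A minimal sketch (not the paper's analysis) of the Pratt test: if Pratt isostasy
# held, grain density and elevation should be anti-correlated. The values below are
# hypothetical placeholders, not the measurements shown in Figures 2 and 3.
import numpy as np
from scipy.stats import pearsonr

grain_density = np.array([2890.0, 2950.0, 2920.0, 3010.0, 2880.0, 2970.0])  # kg/m^3
elevation = np.array([1.2, -0.4, 0.8, 0.3, -1.1, 0.6])                      # km

r, p = pearsonr(grain_density, elevation)
print(f"Pearson r = {r:+.2f} (p = {p:.2f})")
# r close to zero (as found for Mercury) argues against Pratt compensation,
# leaving Airy isostasy as the preferred description.
```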

Now we come back to the meaning of the “equal mass” and “equal pressure” approaches. The latter was used by the author of the study, and this is the crucial difference that finally led to the new, lower value for the crustal thickness. First, it is important to know that the gravitational potential is typically not constant along topographic lines (= lines of constant altitude) of a planet. This is due to variations in density. However, lines of constant gravitational potential (equipotential lines) can still be calculated. One such surface of constant gravitational potential is the zero level (on Earth roughly the sea level) and is called the geoid. The quantity called the geoid-to-topography ratio (GTR) thus reflects variations in density, and the GTR is finally used to calculate the thickness of the crust. The main question is how equipotential surfaces are calculated. As shown by Douglas J. Hemingway and Isamu Matsuyama (2017), the spherical geometry of the problem must be taken into account when calculating equipotential surfaces (which affects the crustal thickness calculation). And here is the problem: previous publications have assumed a constant width of the crustal blocks (in Cartesian coordinates). This is what is called the “equal mass” approach, but in fact one needs to take the spherical geometry (polar coordinates) into account, and thus cone-shaped blocks that put different “pressure” on the underlying surface (compare Figure 1 and Figure 5). This is why the newly calculated thickness is roughly 25% lower than previous results. Note that the same issue also affects previous calculations for other objects in the solar system. However, since the difference is larger for smaller bodies, Mercury, as the smallest planet in the solar system, is the most affected of all the planets.
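
To give a feeling for the numbers, the sketch below inverts the classical Cartesian (“equal mass”) first-order Airy relation, GTR ≈ 2πGρ_c·T_c/g, for the crustal thickness. This is only a back-of-the-envelope illustration, not Sori’s method: the assumed crustal density and GTR value are illustrative, and the spherical “equal pressure” treatment of Hemingway and Matsuyama (2017) yields systematically smaller thicknesses for the same GTR (compare Figure 4).

```python
# Back-of-the-envelope sketch: invert the first-order Cartesian ("equal mass")
# Airy relation GTR ~ 2*pi*G*rho_c*T_c/g for the crustal thickness T_c.
# rho_crust and the GTR below are illustrative assumptions, not the paper's values.
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
g_mercury = 3.70     # surface gravity of Mercury [m s^-2]
rho_crust = 2900.0   # assumed crustal density [kg m^-3]

def airy_thickness_equal_mass(gtr):
    """Crustal thickness [m] from a dimensionless geoid-to-topography ratio."""
    return g_mercury * gtr / (2.0 * np.pi * G * rho_crust)

gtr_example = 0.009  # i.e. 9 m of geoid per km of topography (hypothetical value)
print(f"equal-mass Airy thickness: {airy_thickness_equal_mass(gtr_example) / 1e3:.0f} km")
```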

Mercury Crust Thickness

Figure 4: Geoid-to-topography ratios (GTRs) as a function of crustal thickness (for Airy isostasy). The “equal mass” (red) and “equal pressure” (blue) approaches are compared, showing that the equal-pressure approach reduces the derived crustal thickness to the published value of 26 km. Image credit: Michael M. Sori (2018)

Airy vs Pratt in polar coordinates

Figure 5: Airy vs Pratt in polar coordinates. This is the same as Figure 1, but showing the crustal blocks in polar coordinates. It can be seen that the crustal blocks are not constant in width, but cone-shaped. The bottom of the cone is the area where pressure is put on the underlying surface. An equipotential surface is then found along lines of “equal pressure” rather than “equal mass”. Image credit: Johannes Puschnig

As explained, the equal-pressure approach is the better representation of a state of equilibrium. This is also supported by the fact that the new average crustal thickness of 26±11 km agrees well with other MESSENGER-based models and observations, e.g. with Mercury’s crust being of magmatic origin or with the excavation of mantle material onto the surface, as proposed by Padovan et al. (2015).

With this publication, another open question about Mercury could be resolved, but much remains unknown and Mercury still keeps scientists busy. The next large step forward is likely to come when BepiColombo finally orbits Mercury in 2025.

Spotting the zodiacal light in spring

The zodiacal light is a nocturnal phenomenon that is revealed only to those who dare to escape the city lights. In spring, after sunset and once twilight fades away into a dark and moonless night, a gentle luminous band opens up when looking towards the west. Its majestic cone then seems to stand high above the horizon, as if it were trying to guide the observer. In fact, the zodiacal light points us back to the very beginning of the solar system, roughly 4.5 billion years ago, when our Earth and the other planets were formed from and within a circumsolar dust disk. Although the solar wind steadily sweeps away dust, new dust grains are supplied by outgassing comets and collisions between minor planets. Most of these objects orbit the Sun in a relatively narrow, well-defined plane close to the ecliptic, i.e. the plane of the Earth’s orbit. As a result, the ecliptic is continuously fed with fresh dust, which reflects and scatters sunlight; this redirected light is then captured as zodiacal light by enthusiasts on Earth. Although the zodiacal light can be seen all year round, spring and autumn are best suited for observations from mid-latitudes, because the ecliptic (and thus the path of the sun) then crosses the horizon at a steep angle, lifting the zodiacal light cone high above the horizon and keeping the twilight period short.

zodiacal light

Zodiacal light observed from the Roque de los Muchachos Observatory, La Palma, Canary Islands, Spain, in April 2016.

SQM night sky brightness measurements at 26 locations in Eastern Austria

I am glad to announce that our recent light pollution paper entitled “Systematic measurements of the night sky brightness at 26 locations in Eastern Austria” will soon be published in JQSRT. In the article, we show that a correlation between light pollution and air pollution (particulate matter) exists. We examine the circalunar periodicity of the night sky brightness, seasonal variations as well as long-term trends. Novel ways to plot and analyze large long-term SQM (‘Sky Quality Meter’) datasets, such as histograms, circalunar, annual (‘hourglass’) and cumulative (‘jellyfish’) plots, are presented (see examples below).

Hourglass plot

Hourglass plots. The x-axis is a time axis containing the months of one full year. The y-axis is a time axis as well, but covering the hours (and fractions of hours) of the individual nights. A colour scale is used to denote the measured night sky brightness in units of mag arcsec⁻² at each time of the night and of the year. The circalunar periodicity, or the lack of it, is easily recognized in these plots. Other features also emerge, e.g. the natural variation of the night lengths, which creates the ‘hourglass’ shape.
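
For readers who want to try this with their own SQM logs, here is a minimal Python/matplotlib sketch of an hourglass-style plot. It is not the plotting code used for the paper; the file name and the column names (“timestamp”, “nsb”) are assumptions about a generic SQM export:

```python
# Minimal hourglass-style plot from a generic SQM log (assumed columns: timestamp, nsb).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sqm_log.csv", parse_dates=["timestamp"])  # hypothetical file name

# Express each measurement as hours from local midnight (-12 .. +12) and assign it
# to the night it belongs to, so every night becomes one column of the grid.
hour = df["timestamp"].dt.hour + df["timestamp"].dt.minute / 60.0
df["night_hour"] = np.where(hour >= 12, hour - 24.0, hour)
df["night"] = (df["timestamp"] - pd.Timedelta(hours=12)).dt.date
df["bin"] = np.round(df["night_hour"] * 6).astype(int)      # 10-minute bins

grid = df.pivot_table(index="bin", columns="night", values="nsb", aggfunc="mean")

plt.imshow(grid.values, origin="lower", aspect="auto", cmap="viridis",
           extent=[0, grid.shape[1], grid.index.min() / 6.0, grid.index.max() / 6.0])
plt.colorbar(label="NSB [mag arcsec$^{-2}$]")
plt.xlabel("night of the year")
plt.ylabel("hours from midnight")
plt.show()
```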

Jellyfish plot

Jellyfish plots. The x-axis is a time axis indicating hours, the y-axis is the night sky brightness in units of mag arcsec⁻². These plots show measurements throughout one full year (here: 2016), and the colour indicates the number density of measurements in the (hour, brightness) plane. Here we show urban, light-polluted sites, which are characterized by two clustered regions that have little to do with the lunar phases: they correspond to clear nights with moderate skyglow on the one hand and overcast nights with strongly enhanced scattering of the city lights on the other.
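
A jellyfish-style plot is essentially a two-dimensional histogram of night sky brightness versus hour of night. The sketch below (again not the paper’s code, and using the same assumed file and columns as above) shows the idea:

```python
# Minimal jellyfish-style plot: 2-D histogram of brightness vs. hour of night.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sqm_log.csv", parse_dates=["timestamp"])  # hypothetical file name
hour = df["timestamp"].dt.hour + df["timestamp"].dt.minute / 60.0
night_hour = np.where(hour >= 12, hour - 24.0, hour)

plt.hist2d(night_hour, df["nsb"].to_numpy(), bins=(96, 80), cmap="inferno", cmin=1)
plt.colorbar(label="number of measurements")
plt.gca().invert_yaxis()   # optional: plot brighter skies (smaller mag values) upwards
plt.xlabel("hours from midnight")
plt.ylabel("NSB [mag arcsec$^{-2}$]")
plt.show()
```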

The Lyman Continuum Escape and ISM properties in Tololo 1247-232 – New Insights from HST and VLA

As of April 19, 2017, our paper entitled “The Lyman Continuum Escape and ISM properties in Tololo 1247-232 – New Insights from HST and VLA” is accepted for publication in Monthly Notices of the Royal Astronomical Society (MNRAS). In the paper, we report on our work based on data from the Hubble Space Telescope (HST) and the Karl G. Jansky Very Large Array (VLA). Using an advanced data reduction procedure for our COS (Cosmic Origins Spectrograph) spectra, we confirm weak Lyman continuum (LyC) flux emerging from the central region of the galaxy, corresponding to an escape fraction of less than two percent, i.e. the lowest escape fraction reported for the galaxy so far. We further study far-ultraviolet absorption lines of Si II and Si IV, as well as the 21 cm hydrogen radiation, and place them in the context of the physical processes that drive the LyC escape in the galaxy.

Nikon D90 astromod VS. Nikon DF unmodified

It is a fact that Nikon’s DF is among the most sensitive cameras available on the market today. Its FX-format CMOS chip offers 16.2 million pixels. The corresponding pixel size of 7.3 μm is thus large compared to most other state-of-the-art cameras (with typical pixel sizes of less than 5 μm). As a result, the Nikon DF has much better low-light, high-ISO performance.
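
The quoted pixel pitch follows directly from the sensor dimensions and the pixel count; a quick Python check, assuming square pixels:

```python
# Quick check of the quoted pixel pitch: 16.2 Mpx on a full-frame (36 mm x 24 mm)
# sensor, assuming square pixels.
sensor_area_mm2 = 36.0 * 24.0
pixel_pitch_um = (sensor_area_mm2 / 16.2e6) ** 0.5 * 1000.0
print(f"pixel pitch = {pixel_pitch_um:.1f} um")  # ~7.3 um
```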

However, as with all unmodified cameras, the DF’s CMOS detector is covered by an infrared (IR) blocking filter. This is unsatisfactory for astrophotography, in particular when imaging nearby star-forming regions. The reason is that young, massive stars emit hard UV radiation that ionizes the surrounding hydrogen. Subsequent recombination of free electrons with ions then produces strong emission lines such as the Hα line at approx. 656 nm (in the red part of the spectrum). Unfortunately, this wavelength is already largely blocked by the IR filter found in almost all digital single-lens reflex (DSLR) cameras.

For that reason, some companies such as DSLR Astro Tec in Germany have recently specialized in modifying DSLRs. Different modifications exist; the one for astrophotography is basically a replacement of the IR-blocking filter with a clear-glass filter. This modification drastically increases the sensitivity of the camera at the wavelength of the Hα emission line. It comes at the cost of the camera’s automatic white balance, which then needs to be set manually. However, for astrophotography this does not matter anyway.

Since I own an unmodified Nikon DF and a modified Nikon D90, I was wondering how these two cameras would compare to each other when imaging star-forming regions such as M8, the Lagoon Nebula. To perform the test, I used my Nikkor AF-S VR 200-400mm 1:4 lens, operated at 400mm f/4, and took images of the nebula with both cameras. In both setups the exposure time was set to 30 seconds at ISO 800. The result is shown below. Both images were taken in raw format and only brightness and contrast were adjusted, in the same way for both. The result makes clear that an astro-modified D90 clearly outperforms even Nikon’s low-light market leader, the Nikon DF.

Nikon D90 astromod vs Nikon DF unmodified

Testing Nikon TC-17E II and TC-20E III with Nikkor AF-S 70-200 1:2.8 ED VR and Nikkor 200-400 1:4 ED VR

For more than one year now I have been carrying Nikon’s 2x teleconverter TC-20E III in my camera bag. I bought it from a local store in good used condition, with the intent to get more reach with my Nikon D300 (which has an APS-C sized sensor) and the Nikon AF-S 70-200mm f/2.8 VR lens. Since this lens is very fast and its image quality superb, the 2x teleconverter would still allow for high shutter speeds at f/5.6 on bright summer days when doing wildlife, e.g. bird, photography.

So far the theory, but after taking my first shots with the 2x converter attached to Nikon’s 70-200mm f/2.8 VR, I was really disappointed with the results. Images taken at the widest aperture through the TC are of poor quality and very soft, not sharp at all. Stopping down improves the quality, but still not to a level I would be satisfied with.

Now comes the surprise! Just recently, I got hold of a very nice and sharp Nikkor AF-S 200-400mm f/4 ED VR lens, which came together with the TC-17E II teleconverter, both in very good used condition. When using the 1.7x teleconverter on that lens for the first time, I was really “shocked”, because the image quality was only slightly degraded and still very sharp. Next, I attached the 2x teleconverter TC-20E III to the Nikkor AF-S 200-400mm f/4 ED VR as well and was likewise astonished by the image quality, which was still good and reasonably sharp.

Remark: Teleconverters and Autofocus performance

Autofocus gets much slower with the TCs attached. However, although the D300 is not explicitly mentioned on Nikon’s TC compatibility chart, the camera apparently supports f/8 autofocus, and the Nikkor AF-S 200-400mm f/4 ED VR will autofocus with either of the TCs under consideration attached.

TC Image Quality Comparison using SpyderLensCal

In order to make a fair comparison, I decided to set up a typical lens calibration session with SpyderLensCal (the distance was 5 m, so that enough focus travel was left on both lenses). That way, I would get a fair image quality comparison of the lenses and the TCs, and would at the same time calibrate all my camera+lens+TC combinations. Both SpyderLensCal and my camera were mounted on a tripod. Shots were taken with my D300 using different AF finetuning settings. Vibration Reduction (VR) was turned off, ISO was set to 200 and the largest aperture was chosen using aperture-priority mode. The resulting shutter speed was always faster than 1/500 s. SpyderLensCal and my D300 were brought onto the same optical axis by leveling SpyderLensCal with its integrated bullseye bubble level and the camera with a common spirit level placed on the hot shoe.

Results

The distance between the camera chip and the calibration device was always 5 m, but since the focal length changed with each lens+TC combination, I decided to scale each frame down to a common 200 mm focal-length scale, then make equal crops around SpyderLensCal’s ruler and save a JPG file. That way, all images can be compared on a pixel-by-pixel basis and are more easily displayed here. Down-scaling and cropping do not have any effect on the results, and all images shown below are very good representations of the original RAW images I took.
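
For illustration, a minimal Python/Pillow sketch of such a rescale-and-crop step is given below. It is not the exact workflow used here; the file names, focal lengths and crop box are hypothetical:

```python
# Sketch (not the exact workflow used): rescale frames taken at different focal
# lengths to a common 200 mm scale, then crop a fixed box around the image centre.
# File names, focal lengths and crop size are hypothetical.
from PIL import Image

frames = {
    "70-200_TC17_340mm.jpg": 340,
    "70-200_TC20_400mm.jpg": 400,
    "200-400_bare_400mm.jpg": 400,
}

for name, focal_length_mm in frames.items():
    img = Image.open(name)
    scale = 200.0 / focal_length_mm                    # bring all frames to 200 mm scale
    w, h = img.size
    img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    cx, cy = img.size[0] // 2, img.size[1] // 2        # crop around the (centred) ruler
    img.crop((cx - 300, cy - 200, cx + 300, cy + 200)).save("crop_" + name, quality=95)
```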

Nikkor 70-200mm f/2.8 ED VR @ 200mm f/2.8

Nikon_70-200_VR_05_AFp10

AF +10

Nikon_70-200_VR_04_AFp05

AF +5

Nikon_70-200_VR_03_AF000

AF 0

Nikon_70-200_VR_02_AFm05

AF -5

Nikon_70-200_VR_01_AFm10

AF -10


This basic setup of camera and lens already gives good results, even without AF finetuning. However, slight frontfocus can be identified, and an AF correction of +5 seems to give the sharpest results.

Nikkor 70-200mm f/2.8 ED VR + TC-17E II @ 340mm f/4.8

Nikon_70-200_17x_05_AFp10

AF +10

Nikon_70-200_17x_04_AFp05

AF +5

Nikon_70-200_17x_03_AF000

AF 0

Nikon_70-200_17x_02_AFm05

AF -5

Nikon_70-200_17x_01_AFm10

AF -10


With the 1.7x teleconverter attached, the image quality decreases and the frontfocus issue seems to get worse than without the TC. Moreover, the overall softness makes it hard to pick the best setting. However, an AF finetuning value of +10 gives good results.

Nikkor 70-200mm f/2.8 ED VR + TC-20E III @ 400mm f/5.6

Nikon_70-200_20x_05_AFp10

AF +10

Nikon_70-200_20x_04_AFp05

AF +5

Nikon_70-200_20x_03_AF000

AF 0

Nikon_70-200_20x_02_AFm05

AF -5

Nikon_70-200_20x_01_AFm10

AF -10


With the 2.0x teleconverter attached, the image quality decreases quite drastically and strong frontfocus can be identified. The tendency of how AF finetuning changes the results is clearly seen in the images above. The total focal-plane shift is so large that my final best result was found with an AF finetuning value of +20, which is not shown above. However, in the real world I would not consider using this combination, since the image quality is very poor.

Nikkor 200-400mm f/4 ED VR @ 400mm f/4

Nikon_200-400_VR_04_AFp10

AF +10

Nikon_200-400_VR_03_AFp05

AF +5

Nikon_200-400_VR_02_AF000

AF 0

Nikon_200-400_VR_01_AFm05

AF -5


This lens is really great and extremely sharp out of the box. However, here too a slight correction for frontfocus was needed; an AF finetuning value of +3 was found to give the best results.

Nikkor 200-400mm f/4 ED VR + TC-17E II @ 680mm f/6.8

Nikon_200-400_17_04_AFp10

AF +10

Nikon_200-400_17_03_AFp05

AF +5

Nikon_200-400_17_02_AF000

AF 0

Nikon_200-400_17_01_AFm05

AF -5


In contrast to the poor performance of the TC-17E II in combination with the Nikkor 70-200mm f/2.8 VR lens, the image quality here is reasonably good, in particular after applying an AF finetuning value of +7.

Nikkor 200-400mm f/4 ED VR + TC-20E III @ 800mm f/8

Nikon_200-400_20_04_AFp10

AF +10

Nikon_200-400_20_03_AFp05

AF +5

Nikon_200-400_20_02_AF000

AF 0

Nikon_200-400_20_01_AFm05

AF -5


In contrast to the extremely bad performance of the TC-20E III in combination with the Nikkor 70-200mm f/2.8 VR lens, the image quality here is still reasonably good, in particular after applying an AF finetuning value of +7.

Conclusion

Teleconverters decrease AF speed, in particular in low-light, low-contrast situations. However, when using a Nikon body which allows autofocus at f/8, AF still works reasonably well, even with the Nikkor AF-S VR 200-400mm f/4 lens. The image quality achieved with teleconverters can change drastically between lenses. In the case presented here, both teleconverters, the TC-17E II and the TC-20E III, performed very badly on the Nikkor AF-S VR 70-200mm f/2.8 lens, producing very soft images. On the other hand, when attached to the Nikkor AF-S VR 200-400mm f/4 lens, the image quality was only slightly decreased (the 1.7x converter in particular performs very well), with images that are reasonably sharp. However, the loss of light is then significant, and such combinations presumably only work in environments that provide a sufficient amount of light.


SVT Local News Report: Light Pollution in Stockholm

The Swedish public broadcaster SVT recently made a short report about light pollution in Stockholm, for which I was interviewed (German and Swedish).

Swedish readers might find this story about light pollution in Stockholm interesting. Jonatan Loxdal, a reporter from the Swedish news website “kit.se”, interviewed me this week about the recent results of our night sky brightness measurements.

Nikkor Lens Comparison for Astrophotography AF 80-200 f2.8 ED vs. AF-S VR 70-200 f2.8 ED

I bought two used Nikon lenses, both very similar in their specifications, which is not surprising as the Nikkor AF 80-200 f2.8 ED is a precursor of the Nikkor AF-S VR 70-200 f2.8 ED (see Ken Rockwell’s history page). With the highest bid, I got the 80-200mm for 280 EUR + 20 EUR shipping from the big bay, while for the 70-200mm a private seller in Sweden asked 8200 SEK, equivalent to 870 EUR. I then asked myself whether it is really worth spending 570 EUR more on the newer lens, especially for astrophotography, where vibration reduction (VR) and fast autofocus are not needed.

To answer this question, a simple startest was performed. Both lenses were mounted on a Nikon DF (FX-format chip: 36mm x 24mm). The camera was then fixed on a tripod without any further star tracking. The scenery was a strongly light-polluted sky. At ISO 1600 an exposure time of 4 seconds was chosen and the camera was pointed towards NE. The startest was performed at apertures of f/2.8 and f/4.0. The resulting images are shown further below for reference. The cutouts (center, top right, top left) are 100% crops when the images are viewed at their original size.

Conclusion

This Nikkor AF 80-200mm f/2.8 ED I got from eBay is bad for astrophotography, no, it’s terrible! The star images at focal lengths of more than 80mm are frustrating, showing very strong coma more or less all over the frame, and even stopping down to f/4 does not make a big difference. Interestingly, at 80mm the quality is OK and the coma disappears in most parts of the image. I am really surprised by this result, because the lens performs really well under daylight conditions when sufficient light is available. This is proven by the test shot below, which was taken at 200mm f/8 with a shutter speed of 1/800s at ISO 800.
On the other hand, the Nikkor AF-S 70-200mm f/2.8 VR is superb. It shows only little coma at all focal lengths, with slightly better results when stopped down to f/4.0. Note that the elongated stars are due to the rotation of the Earth during the 4-second exposure and not necessarily to coma.

Although really bad for astrophotography, the Nikkor AF 80-200mm f/2.8 ED is really good under daylight conditions; here a shot at 200mm f/8 with a shutter speed of 1/800s and ISO 800. The image shown is a cutout without further processing of the full frame image.



Startest at f/2.8

Startest of Nikkor AF 80-200mm f2.8 ED

Testing Nikkor AF 80-200mm f2.8 ED at 80mm f2.8

Startest of  Nikkor AF-S VR 70-200mm f2.8 ED

Testing Nikkor AF-S VR 70-200mm f2.8 ED at 70mm f2.8


Startest of Nikkor AF 80-200mm f2.8 ED

Testing Nikkor AF 80-200mm f2.8 ED at 135mm f2.8

Startest of  Nikkor AF-S VR 70-200mm f2.8 ED

Testing Nikkor AF-S VR 70-200mm f2.8 ED at 135mm f2.8


Startest of Nikkor AF 80-200mm f2.8 ED

Testing Nikkor AF 80-200mm f2.8 ED at 200mm f2.8

Startest of  Nikkor AF-S VR 70-200mm f2.8 ED

Testing Nikkor AF-S VR 70-200mm f2.8 ED at 200mm f2.8


Startest at f/4.0

Startest of Nikkor AF 80-200mm f2.8 ED

Testing Nikkor AF 80-200mm f2.8 ED at 80mm f4.0

Startest of  Nikkor AF-S VR 70-200mm f2.8 ED

Testing Nikkor AF-S VR 70-200mm f2.8 ED at 70mm f4.0


Startest of Nikkor AF 80-200mm f2.8 ED

Testing Nikkor AF 80-200mm f2.8 ED at 135mm f4.0

Startest of  Nikkor AF-S VR 70-200mm f2.8 ED

Testing Nikkor AF-S VR 70-200mm f2.8 ED at 135mm f4.0


Startest of Nikkor AF 80-200mm f2.8 ED

Testing Nikkor AF 80-200mm f2.8 ED at 200mm f4.0

Startest of  Nikkor AF-S VR 70-200mm f2.8 ED

Testing Nikkor AF-S VR 70-200mm f2.8 ED at 200mm f4.0


Observing Comet C/2013 US10 (Catalina)

Comet C/2013 US10 (Catalina) was discovered by the Catalina Sky Survey on October 31, 2013. It originates from the Oort cloud, a vast spherical reservoir of comets far beyond Neptune. By chance, gravitational perturbations can push Oort cloud objects into the inner solar system, where they are eventually discovered. In some cases, comets become bright enough to be observed with the naked eye or with small amateur telescopes or binoculars. The latter is true for Comet C/2013 US10 (Catalina).

On Jan. 17, 2016, the comet passed its closest point to Earth at a distance of 110 million km. Using my 10-inch Newtonian telescope, I imaged the comet that day from a suburban location. The result shown below is a stack of 17 frames with an exposure time of 120 sec each. The inverted version at the bottom clearly shows the two tails of the comet.
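
For reference, the final averaging step can be sketched in a few lines of Python, assuming the 17 frames have already been calibrated and aligned on the comet and saved as FITS files (this is not the actual processing pipeline, and the file names are hypothetical):

```python
# Mean-stack the calibrated, comet-aligned frames (file names are hypothetical).
import numpy as np
from astropy.io import fits

frames = [fits.getdata(f"catalina_{i:02d}.fits").astype(np.float32) for i in range(1, 18)]
stack = np.mean(frames, axis=0)   # average of the 17 x 120 s exposures
fits.writeto("catalina_stack.fits", stack, overwrite=True)
```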

Comet C/2013 US10 (Catalina)

Comet C/2013 US 10 (Catalina) imaged in L with a GSO 254mm f/5 Newtonian telescope and an ATIK 383L+ mono CCD camera

Comet C/2013 US10 (Catalina)

Comet C/2013 US 10 (Catalina) imaged in L with a GSO 254mm f/5 Newtonian telescope and an ATIK 383L+ mono CCD camera (inverted version)