I am happy to announce that the first online version of the Dense Gas Toolbox (DGT) is now available. DGT allows users to derive gas densities and temperatures from observed molecular lines. The toolbox contains a novel set of radiative transfer models which take into account that observed molecular line intensities usually arise from multi-density gas rather than from a single zone. Hence, the models allow for more realistic interpretations (in terms of gas density and temperature) of observed molecular lines.
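The single-zone vs. multi-density distinction can be illustrated with a toy model (everything below is an assumed illustration, not DGT's actual emissivity law or density distribution): averaging a density-dependent emissivity over a lognormal density distribution yields a different line strength than evaluating it at one representative density.

```python
import math

# Toy sketch of the multi-density idea (assumed toy emissivity and numbers,
# not DGT's actual models): a line from a lognormal density distribution
# versus a single-zone model evaluated at the median density.
n_crit = 1e5                                   # assumed critical density [cm^-3]

def emissivity(n):
    """Toy per-particle excitation law: rises with n, saturates above n_crit."""
    return n / (1.0 + n_crit / n)

def lognormal_weights(median_n, width_dex, num=201):
    """Normalized mass weights on a grid of log10(n) for a lognormal PDF."""
    mu = math.log10(median_n)
    xs = [mu + (i / (num - 1) - 0.5) * 8 * width_dex for i in range(num)]
    ws = [math.exp(-0.5 * ((x - mu) / width_dex) ** 2) for x in xs]
    total = sum(ws)
    return [(10.0 ** x, w / total) for x, w in zip(xs, ws)]

median_n = 1e3                                 # assumed median density [cm^-3]
multi = sum(w * emissivity(n) for n, w in lognormal_weights(median_n, 0.8))
single = emissivity(median_n)
# the dense tail of the distribution boosts the line relative to the
# single-zone prediction, changing the densities one would infer
```

Because the toy emissivity is convex in density, the multi-density average always exceeds the single-zone value, which is the qualitative reason single-zone fits can misjudge the gas density.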
The models were calibrated using data from the EMPIRE survey, i.e. the abundances and line optical depths are fixed based on observations of local star-forming disk galaxies. In the current version (v1.2) of DGT, the following rotational transitions are implemented, covering the frequency range between ~88 and ~345 GHz:
Circa-monthly (circalunar) activity governed by moonlight is observed in many species on Earth. However, due to the steadily increasing amount of artificial light at night, this periodicity is progressively being disrupted. In a recently published paper, we investigate for the first time in a quantitative way the relationship between light pollution and the recognizability of the circalunar variation.
Our main result (Figure 9 of the aforementioned paper) is a linear relationship between the mean zenithal night sky brightness (<NSB>) and the amplitude of the circalunar variation (see the figure below).
In the talk, I will first cover the history of artificial lighting, from prehistoric lamps to state-of-the-art LED technology. As an astrophysicist, I will repeatedly relate the topic to astronomy. The focus of the talk, however, lies on the effects of artificial lighting on humans and the environment, and at the end I will show some examples from practice.
Figure 1: Pratt (left) vs. Airy (right) isostasy. There are two main ideas of how mountain masses are supported. In Pratt’s theory (left), the density varies and less dense crustal blocks “float” higher, whereas denser blocks form basins. In Airy’s theory (right), the density is constant, but the crustal blocks have different thicknesses; higher mountains have deeper “roots” reaching into the denser material below. Image credit: Shih-Arng Pan
In today’s volume of “Earth and Planetary Science Letters”, Michael M. Sori from the “Lunar and Planetary Laboratory” of the University of Arizona (US) writes about how he used data obtained with the MESSENGER (Mercury Surface, Space Environment, Geochemistry and Ranging) orbiter to re-measure the crustal thickness of Mercury. Crustal thickness is an important geophysical parameter that helps to further constrain terrestrial planet formation scenarios. And since Mercury is always good for a surprise, the new calculations show that Mercury’s crust is only 26±11 km thick, i.e. much thinner (and also denser) than previously thought.
First estimates of Mercury’s crustal thickness were published by Anderson et al. (1996), based on data obtained with the Mariner 10 spacecraft; they concluded that the crust is 100–300 km thick. Almost twenty years later, with a wealth of new instruments on board MESSENGER to create gravity and topography maps, Padovan et al. (2015) concluded that Mercury’s crustal thickness is on average 35±18 km. The authors assumed that topography is predominantly compensated by Airy isostasy, with columns containing equal masses. The equal-mass approach has now been shown to overestimate the thickness of Mercury’s crust; instead, an equal-pressure approach (first described by Hemingway and Matsuyama 2017) should be used. The following paragraphs explain the meaning of isostasy and the difference between the equal-mass and equal-pressure approaches.
Airy vs. Pratt isostasy and the “equal mass” vs. “equal pressure” assumptions
Figure 2: Grain density measurements on top of a MESSENGER image of Mercury. Image Credit: Michael M. Sori (2018)
Figure 3: The data show that Mercury is inconsistent with Pratt isostasy (red dashed line), because no correlation between density and elevation is observed. Image Credit: Michael M. Sori (2018).
Isostasy is a fundamental concept in geology: lighter crust floats on the denser underlying mantle. It thus explains why mountains and valleys are stable over long timescales. This state is called isostatic equilibrium (it can be disturbed by erosion or volcanic activity). There are two main ideas of how mountain masses are supported (see Figure 1). In Pratt’s theory, the density varies across the surface and less dense crustal blocks “float” higher, whereas denser blocks form basins. In Airy’s theory, on the other hand, the density is constant, but the crustal blocks have different thicknesses; higher mountains have deeper “roots” reaching into the denser material below. Thus, in the case of Pratt isostasy, one would expect a correlation between density and elevation across the surface of a planet, with mountains having lower densities.
In the study, Sori (2018) presents grain density measurements across several regions of Mercury (see Figure 2). Using MESSENGER’s topography maps, the author could then look for a correlation between density and elevation. As shown in Figure 3, no such correlation exists. Thus, Airy isostasy can be assumed to be the better description of Mercury’s topography.
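The test behind Figure 3 amounts to a simple correlation check between grain density and elevation. A minimal sketch with made-up numbers (hypothetical values, not the paper's measurements) might look like this:

```python
import math
import random

# Sketch of the Pratt test: under Pratt isostasy, density should
# anti-correlate with elevation; under Airy it should not. The numbers
# below are made up for illustration -- they are not the paper's data.
random.seed(1)
density = [2900 + random.gauss(0, 100) for _ in range(200)]    # kg m^-3
elevation = [random.gauss(0, 1000) for _ in range(200)]        # m, independent

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(density, elevation)
# |r| close to zero: no density-elevation correlation, i.e. Pratt disfavoured
```

A strongly negative r would have favoured Pratt isostasy (low-density blocks riding high); an r consistent with zero, as in Mercury's data, points to Airy instead.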
Now we come back to the meaning of the “equal mass” and “equal pressure” approaches; the latter was used by the author of the study, and this is the crucial difference that finally led to the new, lower value for the crustal thickness. First, it is important to know that the gravitational potential is typically not constant along topographic lines (lines of constant altitude) of a planet, due to variations in density. However, lines of constant gravitational potential (equipotential lines) can still be calculated. One such equipotential surface is the zero-level (on Earth, roughly the sea level), called the geoid. The geoid-topography ratio (GTR) thus reflects variations in density, and the GTR is finally used to calculate the thickness of the crust.

The main question is how equipotential surfaces are calculated. As shown by Douglas J. Hemingway and Isamu Matsuyama (2017), the spherical geometry of the problem must be taken into account when calculating equipotential surfaces, and this affects the crust thickness calculation. And here lies the problem: previous publications assumed crustal blocks of constant width (in Cartesian coordinates). This is the “equal mass” approach. In fact, one needs to take the spherical geometry (polar coordinates) into account, and thus cone-shaped blocks that exert different “pressure” on the underlying material (compare Figure 1 and Figure 5). This is why the newly calculated thickness is roughly 25% lower than previous results. Note that the same issue also affects previous calculations for other objects in the solar system. However, since the difference is larger for smaller objects, Mercury, the smallest planet in the solar system, is affected most of all planets.
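The difference between the two conventions can be made concrete with a toy calculation. The sketch below (all numbers are assumptions for illustration; it uses constant gravity and a two-layer density model, far cruder than the actual GTR analysis) solves for the Airy root of an elevated column under both assumptions:

```python
# Toy comparison of the "equal mass" (cartesian) and "equal pressure"
# (spherical) Airy root depths; all numbers are illustrative assumptions,
# not the values used by Sori (2018).
R = 2440e3                        # Mercury's radius [m]
T = 35e3                          # reference crustal thickness [m]
rho_c, rho_m = 2900.0, 3300.0     # crust / mantle densities [kg m^-3]
h = 2e3                           # topographic elevation [m]

# classical cartesian Airy root: equal mass per unit area
root_mass = rho_c * h / (rho_m - rho_c)

def shell(rho, r1, r2):
    """Weight of a cone segment per unit solid angle: integral of rho*r^2 dr
    (the common factor g / base_radius^2 cancels when comparing columns)."""
    return rho * (r2**3 - r1**3) / 3.0

def pressure_diff(root):
    """Pressure excess of the elevated column over the reference column,
    both evaluated at the common base radius Rb = R - T - root."""
    Rb = R - T - root
    ref = shell(rho_m, Rb, R - T) + shell(rho_c, R - T, R)
    elev = shell(rho_c, Rb, R + h)
    return elev - ref

# bisection: pressure_diff is positive for a too-shallow root and
# negative for a too-deep one
lo, hi = 1.0, 200e3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if pressure_diff(mid) > 0 else (lo, mid)
root_press = 0.5 * (lo + hi)
# the cone geometry weights shallow material by r^2, so the spherical
# correction grows for smaller bodies -- which is why Mercury is affected most
```

With these toy numbers the equal-pressure root comes out a few percent deeper than the cartesian one; the full spherical GTR treatment of Hemingway and Matsuyama (2017) is what turns this kind of correction into the roughly 25% lower crustal thickness.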
Figure 4: Geoid-topography ratios (GTRs) as a function of crustal thickness (assuming Airy isostasy). The “equal mass” (red) and “equal pressure” (blue) approaches are compared, showing that equal pressure reduces the derived crustal thickness to the published value of 26 km. Image credit: Michael M. Sori (2018)
Figure 5: Airy vs. Pratt isostasy in polar coordinates. This is the same as Figure 1, but with the crustal blocks shown in polar coordinates. The crustal blocks are not of constant width but cone-shaped; the bottom of each cone is the area through which pressure is exerted on the underlying material. An equipotential surface is then found along lines of “equal pressure” rather than “equal mass”. Image credit: Johannes Puschnig
As explained, the equal pressure approach is a better representation of a state of equilibrium. This is also supported by the fact that the new average crustal thickness of 26±11 km agrees well with other MESSENGER-based models and observations, e.g. with Mercury’s crust being of magmatic origin, or with the excavation of mantle material onto the surface proposed by Padovan et al. (2015).
With this publication, another of Mercury’s puzzles could be resolved, but much remains unknown and Mercury still keeps scientists busy. The next large step forward is likely to come when BepiColombo finally orbits Mercury in 2025.
The zodiacal light is a nocturnal phenomenon that reveals itself only to those who dare to escape the city lights. In spring, after sunset and once twilight has faded into a dark and moonless night, a gentle luminous band opens up towards the west. Its majestic cone then seems to stand high above the horizon, as if it were trying to guide the observer. In fact, the zodiacal light directs us to the very beginning of the solar system, roughly 4.5 billion years ago, when our Earth and the other planets formed from and within a circumsolar dust disk. Although the solar wind steadily sweeps away dust, new dust grains are produced by outgassing comets and collisions between minor planets. Most of these objects orbit the sun in a relatively well-defined and narrow plane, the ecliptic, i.e. the plane of the Earth’s orbit. As a result, the ecliptic is continuously fed with fresh dust and gas, which reflects and scatters sunlight; this redirected light is then captured as zodiacal light by enthusiasts on Earth. Although the zodiacal light can be seen all year round, spring and autumn are best suited for observations from mid-latitudes, because then the path of the sun crosses the horizon at a steep angle, keeping the twilight zone short.
Zodiacal light observed from Roque de los Muchachos Observatory, La Palma, Canary islands, Spain in April 2016.
I am glad to announce that our recent light pollution paper, entitled Systematic measurements of the night sky brightness at 26 locations in Eastern Austria, will soon be published in JQSRT. In the article, we show that a correlation between light pollution and air pollution (particulate matter) exists. We examine the circalunar periodicity of the night sky brightness, seasonal variations, and long-term trends. We also present novel ways to plot and analyze large long-term SQM (‘Sky Quality Meter’) datasets, such as histograms and circalunar, annual (‘hourglass’) and cumulative (‘jellyfish’) plots (see the examples below).
Hourglass plots. The x-axis is a time axis containing the months of one full year. The y-axis is also a time axis, covering the hours (and fractions of hours) of the individual nights. A colour scale denotes the measured night sky brightness in units of mag arcsec⁻² at each time of the night and of the year. The circalunar periodicity, or the lack of it, is easily recognized in these plots. Other features emerge as well, e.g. the natural variation of the night length, which creates the ‘hourglass’ shape.
Jellyfish plots. The x-axis is a time axis indicating hours, and the y-axis is the night sky brightness in units of mag arcsec⁻². These plots show measurements throughout one full year (here: 2016), and the colour indicates the number density of measurements in the (hour, brightness) plane. Here we show urban, light-polluted sites, which are characterized by two clustered regions that have little to do with the lunar phases: clear nights with moderate skyglow on the one hand, and overcast nights with strongly enhanced scattering of the city lights on the other.
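Both plot types boil down to 2-D binning of the (time, brightness) samples. A minimal numpy sketch with synthetic SQM data (the lunar model, the fixed night window, and all numbers are assumptions for illustration, not real measurements) could look as follows:

```python
import numpy as np

# Synthetic one-minute SQM samples over one year (toy model, not real data):
# dark-sky level 21.5 mag/arcsec^2, brightened by up to 2 mag around full moon
minutes = np.arange(365 * 24 * 60)
days = minutes / 1440.0                      # days since Jan 1, 00:00
hour_of_day = (minutes % 1440) / 60.0

# keep only night-time samples (assumed fixed 20:00-04:00 window for
# simplicity; real data would follow the seasonally varying night length)
night = (hour_of_day >= 20) | (hour_of_day < 4)
days, hours = days[night], hour_of_day[night]
lunar = np.cos(2 * np.pi * days / 29.53)     # +1 at the (assumed) full moon
nsb = 21.5 - 2.0 * np.clip(lunar, 0, None)   # moonlight lowers the mag value

# hourglass grid: x = day of year, y = hour of night, colour = mean NSB
hours_shifted = np.where(hours >= 20, hours - 24, hours)   # -4 .. +4 h
x_edges = np.arange(0, 366)
y_edges = np.arange(-4, 4.25, 0.25)
wsum, _, _ = np.histogram2d(days, hours_shifted, [x_edges, y_edges], weights=nsb)
cnt, _, _ = np.histogram2d(days, hours_shifted, [x_edges, y_edges])
hourglass = wsum / np.where(cnt > 0, cnt, 1)

# jellyfish grid: x = hour of night, y = NSB, colour = number of samples
b_edges = np.arange(19.0, 22.01, 0.05)
jellyfish, _, _ = np.histogram2d(hours_shifted, nsb, [y_edges, b_edges])
# both arrays can then be displayed, e.g. with matplotlib's pcolormesh
```

In real datasets the per-bin statistic (mean, median, counts) and the night window would of course come from the measurements themselves; the binning logic stays the same.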
As of April 19, 2017, our paper entitled “The Lyman Continuum Escape and ISM properties in Tololo 1247-232 – New Insights from HST and VLA” has been accepted for publication in Monthly Notices of the Royal Astronomical Society (MNRAS). In the paper, we report on our work based on data from the Hubble Space Telescope (HST) and the Karl G. Jansky Very Large Array (VLA). Using an advanced data reduction procedure for our COS (Cosmic Origins Spectrograph) spectra, we confirm weak LyC flux emerging from the central region of the galaxy, corresponding to an escape fraction of less than two percent, i.e. the lowest escape fraction reported for this galaxy so far. We further study far-ultraviolet absorption lines of Si II and Si IV, as well as the 21cm hydrogen radiation, and put them into the context of the physical processes that drive LyC escape in the galaxy.
It is a fact that Nikon’s DF is among the most sensitive cameras available on the market today. Its FX-format CMOS chip offers 16.2 million pixels. The corresponding pixel size of 7.3μm is thus large compared to most other state-of-the-art cameras (with typical pixel sizes of less than 5μm). As a result, the Nikon DF has much better low-light, high-ISO performance.
However, as in all unmodified cameras, the DF’s CMOS detector is covered by an infrared (IR) blocking filter. This is unsatisfactory for astrophotography, in particular when imaging nearby star-forming regions. The reason is that young, massive stars emit hard UV radiation that ionizes the surrounding hydrogen. Subsequent recombination of free electrons with ions then produces strong emission lines such as the Hα line at approximately 656nm (in the red part of the spectrum). Unfortunately, this wavelength is already blocked by the IR filter found in almost all digital single-lens reflex (DSLR) cameras.
For that reason, some companies, such as DSLR Astro Tec in Germany, have recently specialized in modifying DSLRs. Different modifications exist; the one for astrophotography is basically the replacement of the IR-blocking filter with a clear-glass filter, which drastically increases the camera’s sensitivity at the wavelength of the Hα emission line. The modification comes at the cost of the camera’s white balance, which then needs to be set manually; for astrophotography, however, this does not matter anyway.
Since I own an unmodified Nikon DF and a modified Nikon D90, I was wondering how these two cameras would compare when imaging star-forming regions such as M8, the Lagoon Nebula. For the test, I used my Nikkor AF-S VR 200-400mm 1:4 lens, operated at 400mm f/4, and took images of the nebula with both cameras. In both setups, the exposure time was set to 30 seconds at ISO 800. The result is shown below. Both images were taken in raw format, and only brightness and contrast were adjusted, in the same way for both. The result makes clear that an astro-modified D90 clearly outperforms even Nikon’s low-light market leader, the Nikon DF.
For more than a year now, I have been carrying Nikon’s 2x teleconverter TC-20E III in my camera bag. I bought it from a local store in good used condition, with the intent to get more reach with my Nikon D300 (which has an APS-C sized sensor) and the Nikon AF-S 70-200mm f/2.8 VR lens. Since this lens is very fast and its image quality superb, the 2x teleconverter would still allow for high shutter speeds at f/5.6 on bright summer days when doing wildlife (e.g. bird) photography.
So much for the theory. After taking my first shots with the 2x converter attached to Nikon’s 70-200mm f/2.8 VR, I was really disappointed with the results. Images taken at the widest aperture through the TC are of poor quality: very smooth, not sharp at all. Stopping down improves the quality, but still not to a level I would be satisfied with.
Now comes the surprise! Just recently, I got hold of a very nice and sharp Nikkor AF-S 200-400mm f/4 ED VR lens, which came together with the teleconverter TC-17E II, both in very good used condition. When using the 1.7x teleconverter on that lens for the first time, I was really “shocked”: the image quality was only slightly degraded and still very sharp. Next, I attached the 2x teleconverter TC-20E III to the Nikkor AF-S 200-400mm f/4 ED VR as well and was likewise astonished by the image quality, which was still good and reasonably sharp.
Remark: Teleconverters and Autofocus performance
Autofocus becomes much slower with the TCs attached. However, although the D300 is not explicitly mentioned on Nikon’s TC compatibility chart, the camera apparently supports f/8 autofocus, and the Nikkor AF-S 200-400mm f/4 ED VR will autofocus with either of the TCs under consideration attached.
TC Image Quality Comparison using SpyderLensCal
In order to make a fair comparison, I decided to set up a typical lens calibration session with SpyderLensCal (at a distance of 5m, so that enough focus travel was left on both lenses). That way, I would get a fair image-quality comparison of the lenses and TCs, and would at the same time calibrate all my camera+lens+TC combinations. Both the SpyderLensCal and my camera were mounted on tripods. Shots were taken with my D300 using different AF finetuning settings. Vibration Reduction (VR) was turned off, ISO was set to 200, and the largest aperture was chosen using aperture-priority mode; the resulting shutter speed was always faster than 1/500s. The SpyderLensCal and my D300 were brought onto the same optical axis by leveling the SpyderLensCal with its integrated bullseye bubble level and the camera with a common level meter placed on the hot shoe.
The distance between the camera chip and the calibration device was always 5m, but since the focal length changed with each lens+TC combination, I decided to scale each frame down to a 200mm-equivalent field of view, make equal crops around SpyderLensCal’s ruler, and save them as JPG files. That way, all images can be compared pixel by pixel and are more easily displayed here. Down-scaling and cropping do not affect the results, and all images shown below are faithful representations of the original RAW files.
Nikkor 70-200mm f/2.8 ED VR @ 200mm f/2.8
This basic setup of camera and lens already gives good results, even without AF finetuning. However, a slight frontfocus can be identified, and an AF correction of +5 seems to give the sharpest results.
Nikkor 70-200mm f/2.8 ED VR + TC-17E II @ 340mm f/4.8
With the 1.7x teleconverter attached, the image quality decreases, and the frontfocus issue seems to get worse than without the TC. Moreover, the overall smoothness makes it hard to find the best setting. However, an AF finetuning value of +10 gives good results.
Nikkor 70-200mm f/2.8 ED VR + TC-20E III @ 400mm f/5.6
With the 2.0x teleconverter attached, the image quality decreases quite drastically and strong frontfocus can be identified. The tendency of how AF finetuning changes the results is clearly seen in the images above. The total focal-plane shift is so large that my final best result was found with an AF finetuning value of +20, which is not shown above. However, in the real world I would not consider using this combination, since the image quality is very poor.
Nikkor 200-400mm f/4 ED VR @ 400mm f/4
This lens is really great and extremely sharp out of the box. However, here too a slight correction for frontfocus, i.e. an AF finetuning value of +3, was found to give the best results.
Nikkor 200-400mm f/4 ED VR + TC-17E II @ 680mm f/6.8
In contrast to the poor performance of the TC-17E II in combination with the Nikkor 70-200mm f/2.8 VR lens, the image quality here is reasonably good, in particular after applying an AF finetuning value of +7.
Nikkor 200-400mm f/4 ED VR + TC-20E III @ 800mm f/8
In contrast to the extremely bad performance of the TC-20E III in combination with the Nikkor 70-200mm f/2.8 VR lens, the image quality here is still reasonably good, in particular after applying an AF finetuning value of +7.
Teleconverters decrease AF speed, in particular in low-light, low-contrast situations. However, when using a Nikon body that allows autofocus at f/8, AF still works considerably well, even with the Nikkor AF-S VR 200-400mm f/4 lens. The image quality achieved with teleconverters can change drastically between lenses. In the case presented here, both teleconverters, the TC-17E II and the TC-20E III, performed very badly on the Nikkor AF-S VR 70-200mm f/2.8 lens, producing very smooth images. On the other hand, when attached to the Nikkor AF-S VR 200-400mm f/4 lens, the image quality was only slightly decreased (the 1.7x converter in particular performs very well), with images that are reasonably sharp. However, the loss of light is then significant, and such combinations presumably only work in environments that provide a sufficient amount of light.
Swedish readers might find this story about light pollution in Stockholm interesting. Jonatan Loxdal, a reporter from the Swedish news website “kit.se”, interviewed me this week about our recent results based on our night sky brightness measurements.