Category Archives: Astrophotography

The Joy of LRGB (M81)

[Finder chart: M81]

Source: FreeStarCharts.com


Bode’s Nebula (M81) is a spiral galaxy around 12 million light years from us in the constellation of Ursa Major. It is a target I have tried previously with a DSLR, albeit at a much wider field of view than my newer ATIK camera gives me, and had mixed results.

My first effort can be seen below in the same frame as M82, The Cigar Galaxy. It’s a fairly wide view, as expected with a large-frame DSLR. It was processed using basic GIMP tools after stacking with DSS.

A nice pair of subjects, but I ran out of time with the imaging and didn’t take enough calibration frames. Still, not a bad first effort.

It’s not the greatest image in the world, but I was very happy with it at the time, especially considering my lack of experience, the relatively basic equipment and the relatively short subs of around 3 minutes each.

As I have since upgraded some of that kit, I have wanted to have another go, getting “closer” to the target. The much smaller sensor in my ATIK camera reduces the field of view available to my equipment. This is a mixed blessing: although it gives a good narrow view of some Deep Sky Objects (DSOs), it struggles with the larger ones; M31, the Andromeda Galaxy, will never be “doable” with this camera/scope combination, at least outside of mosaic imaging, and I am not quite at that stage yet!

So, with my mono camera and ED80 set up on the NEQ6, polar alignment complete and not a cloud to be seen, I ran off 33 x 10 minute Luminance subs of M81. Over later nights I added about 2 hours each of RED, GREEN and BLUE subs, binned 2×2 so I could get away with less time per filter. Binning combines groups of pixels into “superpixels” to speed up data collection: binned 2×2, each group of four pixels becomes one, so the exposure time needed for a given signal is roughly quartered. Although this reduces the resolution of the binned frames, it is not overly detrimental to the final image, as the detail is provided by the Luminance frames, which are not binned. Under these cloudy and unpredictable skies, it is much faster to collect the RGB data binned. For the image above, with just under 6 hours of Luminance, I would otherwise want around 6 hours each of R, G and B; binned 2×2, roughly 1.5 hours of each is enough.
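
To make the binning idea concrete, here is a minimal software sketch of 2×2 binning in Python/numpy. It is purely my own illustration; the camera bins for real at capture time, this just shows the geometry and the signal gain:

```python
import numpy as np

def bin2x2(frame):
    """Software 2x2 binning: sum each 2x2 block of pixels into one "superpixel".
    A CCD does this on-chip before readout; this just illustrates the geometry
    and the ~4x signal gain per output pixel."""
    h, w = frame.shape
    h, w = h - h % 2, w - w % 2                      # trim any odd edge rows/columns
    f = frame[:h, :w]
    return f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]

# Each binned pixel collects ~4x the signal, which is why roughly a quarter of
# the exposure time gives a comparable colour signal.
demo = np.random.poisson(100, size=(1000, 1200)).astype(np.float64)
print(demo.shape, bin2x2(demo).shape)                # (1000, 1200) -> (500, 600)
```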

After taking the Light Frames, I took to a darkened room and sorted out the relevant DARKS, BIAS and FLAT frames for each filter. I ended up with hundreds of individual frames that needed calibrating, aligning and integrating before I could get any sort of image out of them.
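
For anyone new to the jargon, the arithmetic behind calibration is simple even if the bookkeeping isn’t. Here is a rough sketch of the idea (my own simplification, not PixInsight’s actual pipeline; it assumes the DARKS match the LIGHTS in exposure and temperature, so they already include the bias level):

```python
import numpy as np

def make_master(frames):
    """Median-combine a stack of calibration frames into a master (rejects outliers)."""
    return np.median(np.stack(frames), axis=0)

def calibrate(light, master_dark, master_bias, master_flat):
    """Basic frame calibration: remove the dark signal, then divide by a
    normalised flat to correct vignetting and dust shadows."""
    flat = master_flat - master_bias          # bias-subtract the flat
    flat = flat / flat.mean()                 # normalise so division preserves flux
    return (light - master_dark) / flat       # dark includes bias at matched exposure
```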

I used PIXINSIGHT and Warren A. Keller’s Inside Pixinsight book (highly recommended) to work through the process of calibrating all these frames and combining them to form a MASTER LUMINANCE and MASTER R, G and B Frames. I won’t go into the details of the workflow here as I am refining it as I go and will perhaps post a more detailed account of LRGB processing when I have a better handle on it myself!

Once prepared, the LUMINANCE and R, G and B MASTERS are combined to produce the colour image, before some final processing is applied. From PIXINSIGHT I moved on to PHOTOSHOP for a few final tweaks and ended up with the image below.
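
As an aside, the core idea of LRGB combination can be sketched in a few lines. This is not PixInsight’s LRGBCombination tool, just a rough illustration (using scikit-image, my choice) of replacing the lightness of the colour image with the luminance data:

```python
import numpy as np
from skimage import color

def lrgb_combine(lum, rgb):
    """Rough LRGB idea: convert the RGB image to CIELAB, swap its lightness
    channel for the (stretched) luminance master, then convert back to RGB.
    lum is a 2-D float array in [0, 1]; rgb is an HxWx3 float array in [0, 1]."""
    lab = color.rgb2lab(rgb)
    lab[..., 0] = lum * 100.0                 # the L channel in Lab runs 0..100
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```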

[Image: first LRGB combination of M81]

I was quite happy with this as a first “proper” LRGB image, but it did feel a bit garish and a little too saturated colour-wise; it didn’t look natural and I felt I may have “overdone” the processing, getting a bit carried away with trying to push PIXINSIGHT to get as much out of the image as possible. I wanted the fainter nebulosity in the background to stand out but, in doing so, pushed the rest of the image too far. Still, that might be something I CAN achieve when I develop more skill in processing. One step at a time and all that!

Because of this, I went back to the MASTERS and combined them a second time, working through the processing tutorials in the book, but holding back a little bit and not pushing it so much. Being a bit more restrained led to the following second effort.

[Image: second, more restrained LRGB combination of M81]

Some of the fainter nebulosity is still visible, but only just; next time I may need to try some longer subs to get more signal into the image. While still not perfect (there is some ringing around some of the brighter stars), it felt a lot more natural and I was a lot happier with the image.

Finally.

I started this image back in November last year and got to this point halfway through January. Processing astronomical images is not for the impatient, but it is very rewarding. Perhaps one of the most important skills to learn is patience, especially with the processing side of things. Although it does take hours and hours to collect the data required for these images, it can take just as long to process them carefully. If you don’t take your time and process with care and a light touch, you can very quickly ruin the data you have spent so long collecting. I’m getting better…but have a long way to go!

Widening the net by going Narrow!

[Image: the Moon]


As wonderful as the moon is to look at, when it is full, it pretty much acts like a spoilt child, taking all your attention away from the fainter objects in the sky that you might want to be looking at. While I can cut through the light pollution (to an extent) with my camera and light pollution filter, wideband imaging (using LRGB filters) does suffer greatly when the moon is bright.

To counter this, narrowband imaging comes into play. Using filters that each pass only a narrow band of wavelengths, such as Hydrogen-Alpha (Ha), Oxygen III (OIII) and Sulphur II (SII), among others, we can continue to image objects that are harder to capture with LRGB filters, even when the moon is pushing towards full. They are particularly useful when imaging emission nebulae, which emit most of their light at these narrow wavelengths.

(NB, this is my layman’s explanation – the link above will give you more information if you wanna get technical!)

As I often find myself in the situation where the sky is clear, but the moon is bright, I decided to get myself a Ha filter to try my hand at Narrowband imaging.

On its own the Ha filter provides a mono image, and some imagers use it in conjunction with their LRGB imaging to add fine detail. Normally, though, the Ha, OIII and SII captures are used to produce a tri-colour image, with the narrowband channels mapped onto the traditional RGB channels in processing software. But I’m not quite ready for that yet – I’m starting simply.
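
For the curious, here is a minimal sketch of what that mapping looks like, using the popular “Hubble palette” style assignment of SII, Ha and OIII to red, green and blue. It is my own illustration, not any package’s implementation, and the weights and scaling are very much a matter of taste:

```python
import numpy as np

# Approximate emission lines the filters isolate (nm): Ha ~656.3, OIII ~500.7, SII ~672.

def map_sho_to_rgb(sii, ha, oiii):
    """"Hubble palette" style tri-colour mapping: SII -> red, Ha -> green, OIII -> blue.
    Each input is a 2-D float array, already stretched to roughly [0, 1]."""
    return np.clip(np.dstack([sii, ha, oiii]), 0.0, 1.0)
```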

So, last week, with a bright moon in the sky, I set up the scope and camera as usual, but added the Ha filter to the mix and trained my sights on the Pacman Nebula (NGC 281).

Around 5 hours and 30 minutes later (if you ignore the cursing and swearing involved in actually setting up) I had a collection of 10 minute, guided Ha subs, which were subsequently stacked in DSS and tweaked in Photoshop. I cheated slightly, using one of the actions from Noel Carboni’s Astronomy Tools set to add some false colour to the Ha channel, and came up with the following image:

[Image: Pacman Nebula (NGC 281) in Ha, with false colour]

And I was rather happy with the result. A minimal amount of processing was required, showing just how much “easier” it is to work with a dedicated, cooled camera and, of course, with a much larger number and duration of subs than I usually have to play with. I might have a fiddle again later to try masking the stars when adding the colour, as the brighter ones are a little unnaturally pink!

A fellow astronomer from the Stargazers Lounge helped tweak it a little further to bring out some more detail as you can see in the next photo.

[Image: Pacman Nebula, reprocessed version]

I am still not sure which I prefer. Certainly it is great to see the detail in the core, but it just highlights the difficult nature of post-processing astro images and the subjective nature of the results. Either way, it was a great first foray into narrowband imaging.

A couple of nights later I had a crack at the Bubble Nebula (mono only) and, although I only had half the data and the moon was completely full at the time, I got further assurance that narrowband will become a common part of my toolbox! These are images I don’t think I could have come anywhere near achieving with just my LRGB filters (although those will still come in handy for galaxy imaging).

[Image: Bubble Nebula in Ha]

So, if you are daunted by the price of a full set of narrowband filters (and I’ve seen some VERY high prices) but want to get into narrowband imaging, why not start off with an Ha filter that you can use for mono imaging and, potentially, to enhance your RGB imaging?

Sunspot Imaging – 22nd July 2016

[Image: SOHO MDI sunspot map]

SOURCE: SOHO – Solar and Heliospheric Observatory


It’s been so long since I have had the telescope out, I have been getting withdrawal symptoms so, while the kids were busy roller-skating, I thought I would take advantage of the bright, clear sky and try my hand at some solar imaging. I have taken a few snaps before, mostly around the eclipse last year, but haven’t tried capturing detail in sunspots, so that was the plan.

I didn’t have much time, so I just did a basic alignment of the mount and scope, centered on SOL and set the mount to solar tracking rate – which seemed to keep things central so that was good enough for me.

***NB – The images were taken with the addition of a home-made Baader white-light solar filter. DO NOT try imaging or viewing the Sun without a proper filter in place!***

I used my ZWO ASI120MM camera on the telescope to take a number of 60 second videos of sunspots 2565 and 2567. I also used a 2.25x Barlow with the camera for some additional videos to get a bit closer. The videos gave me over 4,000 frames to process, which is probably a bit much, but better to have too many than too few. Next time I might drop the length of the video though as it does take a bit longer to process larger numbers of frames. SHARPCAP was used to collect the images, using a fairly low gain and very short exposure times.

The videos were initially processed in PIPP using the Solar Close-Up and Surface Feature settings, and I kept only the best 10% of frames, leaving me with around 400 to stack. Stacking took place in AUTOSTAKKERT2 and the resulting stacked image was tweaked in REGISTAX, using the wavelets feature, before a quick trim and tidy in Photoshop. If anyone has any questions about this workflow, let me know. I should write a brief tutorial when I get more time.
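
The “keep only the best frames” step is the heart of lucky imaging. PIPP has its own quality metrics; the sketch below is just my own illustration of the idea, scoring each frame by sharpness (variance of the Laplacian, via OpenCV) and keeping the top 10%:

```python
import cv2
import numpy as np

def best_frames(video_path, keep_fraction=0.10):
    """Score every frame of a video by sharpness and return the best fraction.
    (Not PIPP's actual algorithm, just the general lucky-imaging idea.)"""
    cap = cv2.VideoCapture(video_path)
    frames, scores = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(gray)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())   # higher = sharper
    cap.release()
    keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[::-1][:keep]
    return [frames[i] for i in best]
```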

Anyway, the first image below is without the Barlow in place.

[Image: sunspots 2565 and 2567, without the Barlow]

and the second with the Barlow

[Image: sunspots 2565 and 2567, with the 2.25x Barlow]

Considering that I was rushing to take advantage of a spare hour, and that I couldn’t see the Laptop screen without holding the remains of a cardboard box over my head, I am very happy with the results and look forward to having another go when there are some more sunspots to look at.

Images are also posted in the Gallery.

M13 – Great Cluster in Hercules – And an experiment in LRGB

M13_M92_Finder_Chart

Image sourced from FreeStarCharts.Com


I have, this week, been experimenting with LRGB photography, using a mono CCD camera and coloured filters. This explains the lack of a writing post this week and, indeed, the lack of any writing. Clear skies are as rare as hen’s teeth, though, so I have been taking advantage of them while I can.

As I am new to this form of imaging I thought I would try for a relatively “easy” target – i.e. one that was in a good position and relatively bright. The spring skies are getting darker later and later, and the time available for imaging is limited, so I didn’t want to go for something too faint.

M13, the Great Globular Cluster in Hercules, is one of the most popular clusters in the night sky and a good target for testing out a new camera. I really just wanted to make sure I could get everything to work and then process the images into one LRGB colour image, so I wasn’t planning a long session. I decided to go for 1 hour of Luminance and 15 minutes each of the Red, Green and Blue channels.

The Luminance filter lets all visible wavelengths through and provides the majority of the data and detail for the main image – essentially it produces a normal mono image. The Red, Green and Blue channels can be “binned”, which means several pixels on the sensor are combined into one “superpixel” covering a larger area of the target. This allows the colour data to be captured much faster. It does reduce the resolution of the colour channels but, as the Luminance image carries the detail, this is not a major problem, unless your pixels are very big in the first place. Binning 2×2 combines the data from 4 pixels (3×3 combines 9, and so on) and reduces the time required by a factor of 4, so for 60 minutes of Luminance we only need 15 minutes each of Red, Green and Blue – an hour and 45 minutes in total. Higher binning levels reduce the time further, but also greatly reduce the data quality, and are probably not suitable for my camera.
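
As a quick sanity check of that time budget, here is the arithmetic in a few lines of Python (my own sketch; the 1/N² factor is the usual rule of thumb rather than a law):

```python
def lrgb_time_budget(luminance_minutes, bin_factor=2):
    """Binning NxN combines N*N pixels, so each colour channel needs roughly
    1/(N*N) of the luminance exposure for a comparable signal level."""
    per_channel = luminance_minutes / (bin_factor * bin_factor)
    return {"L": luminance_minutes,
            "R": per_channel, "G": per_channel, "B": per_channel,
            "total": luminance_minutes + 3 * per_channel}

print(lrgb_time_budget(60))   # 60 min L + 3 x 15 min RGB = 105 min (1h 45m)
```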

Once set up and aligned, I found M13, got focused and guiding, and then set the camera to take 20 x 180 second images, followed by several BIAS and FLAT images for later calibration. I did not take the usual DARKS on this run, as the new camera is actively cooled, meaning I can hold the sensor at up to 30 degrees below ambient temperature. DARKS are designed to help calibrate your images and account for hot pixels and other problems caused by a warm sensor. With the on-camera cooling, DARKS are, theoretically, not required. I may well experiment with them in the future but, on this occasion, I decided not to. In any case, with set-point cooling the DARKS can now be taken at any time before or after the imaging session, as the cooling allows me to easily replicate the temperature of the imaging run – something not so easy to do with a non-cooled DSLR!

After the Luminance run was complete, I did the same for the Red, Green and Blue channels, selecting the appropriate filter as I went along. I have a manual filter wheel that holds 5 filters; only 4 slots are currently filled, with the LRGB filters, but it allows me to easily select the filter I want before each run.

At least that was the plan.

Somehow I managed to forget to change the filter wheel from Green to Blue and ended up with two sets of Green images, just as M13 disappeared behind the house! Very annoying… so I had to wait a couple of nights for another clear sky to get the final 15 minutes of Blue images, along with the relevant calibration frames. Unfortunately the night wasn’t as clear as forecast and many of the images were “tainted” by cloud. Although the cluster was still visible, the cloud diffused the light in the Blue channel, which affects the final picture, as you will see below.

Anyway, I then had 4 sets of data for my LRGB channels, along with the required calibration frames. Each set was calibrated, stacked and processed in PIXINSIGHT, in as similar a way as possible to keep the channels consistent. The process was fairly basic: a CROP to get rid of the dodgy edges left over from stacking, DBE to remove gradients and vignetting, a quick HISTOGRAM stretch to bring the data out, and then HDR Multiscale Transformation to bring out some detail in the core of the cluster. The colour channels also had to be resized to match the Luminance channel, as binning reduces the size of the image.
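
That resizing step is just an upscale of the binned colour masters to the luminance dimensions. A rough illustration (PixInsight has its own resample tools; this uses OpenCV and only shows the idea):

```python
import cv2

def match_to_luminance(colour_master, luminance_master):
    """Upscale a 2x2-binned colour master to the luminance master's pixel dimensions."""
    h, w = luminance_master.shape[:2]
    return cv2.resize(colour_master, (w, h), interpolation=cv2.INTER_CUBIC)
```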

I processed the Luminance channel (mono) first, before I had the Blue images, just to see what the data was like and, for just an hour of exposure, I was suitably impressed. I never got anywhere near this with a DSLR.

[Image: M13, Luminance only]

The next step was to combine the RGB channels using CHANNEL COMBINATION in Pixinsight, and I then applied this to the Luminance channel using the LRGB COMBINATION process. Finally I took the image from Pixinsight and tweaked the levels and curves in Photoshop.

As I mentioned above, the clouds affected the quality of the blue channel, meaning the light was diffused around the brighter stars and you can see this in the final image below. However, I was still happy with the result as I was really just trying to see if I could “do it” and get four sets of images from a Mono camera and combine them into a colour image.

[Image: M13 in LRGB]

There is definitely colour in the image…. maybe not all the right colours in the right places with the right weights, but there is colour! As mentioned above, you can clearly see the effect of the clouds on the two stars with blue fringing which shouldn’t be there. Next time out I just need to make sure there aren’t any clouds about!

Which should be easy!

Larger images are in the Gallery!

Transit of Mercury – 9th May 2016

Mercury_Globe-MESSENGER_mosaic_centered_at_0degN-0degE

MERCURY – (Wikipedia) image taken by MESSENGER (2008)


We’ve all heard of Mercury, haven’t we? Planet closest to the Sun? You may even have seen it at some point in the sky, although its tendency to appear in the lighter morning or evening skies can make it difficult to pick out.

On the 9th of May, Mercury will be making a Transit of the Sun, meaning it will pass between us and the Sun, so we will be able to see it as it passes across the bright solar disc. This is a great opportunity to view the planet and see it in context with the Sun.

If you have suitable equipment, you will be able to observe the Transit as it occurs from around lunchtime on the 9th into the evening (UK). It isn’t a “blink and you’ll miss it” event, so there is plenty of time, but it is a fairly rare one, having last happened in 2006. Transits of Mercury only occur in May and November, and the May ones offer the better view, given the relative positions of the Sun and Mercury. This is the last May transit for 33 years.

I’m planning to have a look and see if I can’t get some images of the sun using my white light filters and video camera. I would love to have some dedicated Ha Filters or a PST telescope – but they don’t come cheap, so that is something for the future – but a white light filter is still enough to see the action as it happens.

****A GENTLE REMINDER THAT VIEWING THE SUN WITHOUT TAKING THE PROPER PRECAUTIONS CAN LEAD TO PERMANENT BLINDNESS, SO PLEASE BE SENSIBLE IF YOU PLAN TO LOOK FOR MERCURY AND ENSURE YOU HAVE THE RIGHT EQUIPMENT****

(I am not taking responsibility for anyone not taking heed of the above)

Right, disclaimers out of the way.

As you can see from the tutorial HERE, it is fairly easy to make a white-light solar filter with some Baader film, which is easily available (there is a link in the tutorial). The tutorial shows a method for an 8 inch Dobsonian, but the principle applies to any aperture, as long as you make sure the filter is sealed, that you can secure it to your scope without it falling off, and that there are no pin-holes in the film. You can even make them for binoculars, and finder scopes should also be covered with filters or caps, or removed from the scope, to prevent any accidental viewing of the Sun.

If you are unsure what you are doing, you really shouldn’t be doing it. Check out the Internet instead, where I am sure there will be plenty of websites offering live coverage and images of the transit as it happens.

So, barring lots of cloud or equipment failure, I will hopefully be able to post a picture or two of the event and record it for anyone unable to see it.

If you are going to have a look for yourself (or even attempt imaging) it would be great to hear how you get on, so let us know in the comments and post your pictures!

Have fun, but be careful with the Sun!

M51 – The Whirlpool Galaxy – Take 2

[Image: M51, first attempt from last year]


The above image is an attempt I made at M51 – The Whirlpool Galaxy just under a year ago. It is a combination of 29 x 120s LIGHT frames and the requisite BIAS/FLATS and DARKS. It was stacked in DSS and then processed in Photoshop.

I have been waiting to have another go at this target as it is a nice looking galaxy and I was quite chuffed with my original effort. It is easy to find, floating just off the “handle” of the Plough/Ursa Major, roughly opposite to M101, the subject of another recent post.

M51_M63_M94_M101_M106_Finder_Chart

Star Chart from FreeStarCharts.com


The Plough itself is nice and easy to find in the northern skies, revolving around the Pole Star or Polaris. By the time the night is dark enough at this time of year, the handle is high in the sky and, hence, so is M51, hopefully cutting down on the amount of atmosphere that needs to be “seen” through to image the target.

So, on Thursday last week, I set up the scope and equipment and aimed for M51, firing off an 8 minute (480 sec) test sub first to check for trailing. Luckily, the image came back free of trails, suggesting the polar-alignment and guiding were working nicely together. I thought I would give 8 minute subs a go partly because I hadn’t gone that long before and wanted to test the mount but, also, because the Moon is rising late and waning at the moment, so was out of the way for this session.

By about 2 a.m. I had finished up with 3 hours’ worth of 480 second LIGHT frames, plus BIAS and FLAT frames. I packed up and got warm again, before starting another run for the DARK subs with the equipment inside.

Then the fun started.

Over the next couple of days I used PIXINSIGHT to process the images before a final tweak in Photoshop. I may get around to writing up some tutorials for Pixinsight sometime in the future but, at the moment, I am still a complete novice, lapping up other imagers’ tutorials, so will stick to the basic workflow for now – but feel free to ask any questions as you read it.

  1. BATCH PREPROCESSING Script to register and calibrate the LIGHT/DARK/BIAS/FLAT Frames
  2. SCREEN TRANSFER FUNCTION to apply an automatic stretch to “see” the data (a rough sketch of this kind of midtones stretch follows the list)
  3. IMAGE INTEGRATION tool to align/stack the calibrated frames
  4. DYNAMIC CROP
  5. DYNAMIC BACKGROUND EXTRACTION
  6. COLOUR CALIBRATION Tool
  7. SCNR to reduce green tinge
  8. Clear STF Screen Stretch to return to LINEAR DATA
  9. HISTOGRAM TRANSFORMATION Tool (applied twice)
  10. HDR MULTISCALE TRANSFORMATION – to bring out details
  11. ACDNR – noise reduction
  12. MASKED SATURATION BOOST – boost colour
  13. Saved as 16bit TIFF files and opened in Photoshop.
  14. Simple LEVELS and CURVES adjustments in Photoshop to finish image and a final tweak of the colour saturation.
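
For anyone wondering what the stretches in steps 2 and 9 actually do to the data, here is a minimal sketch of a midtones transfer function, the kind of non-linear curve these tools apply. The midtones value below is just an illustrative number, not what the auto-stretch would pick:

```python
import numpy as np

def midtones_stretch(image, midtones=0.25):
    """Midtones transfer function: maps 0 -> 0, 1 -> 1 and `midtones` -> 0.5,
    lifting faint detail without clipping the ends. Assumes linear data scaled
    to [0, 1] and midtones != 0.5."""
    x = np.clip(image, 0.0, 1.0)
    m = midtones
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)
```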

I suspect I have overdone some of the tools and adjustments, but I’m still learning, so I have an excuse. Noise is also still a bit of a problem, an expected side-effect of un-cooled DSLR imaging. I was dithering between subs on this run, which does seem to have helped keep the noise down, and I will have a go at processing the image again without the DARK frames in the stack. Many more experienced imagers suggest that DSLR DARK frames can introduce more noise than they eliminate, so it will be interesting to go through the workflow again and see how it compares without the darks.

One of the other drawbacks of the DSLR is the size of the sensor. Compared to many dedicated CCD cameras, the DSLR sensor is quite large. This is fine for widefield imaging of large galaxies (M31) or large areas of nebulosity but, for smaller targets like M51, it means a lot of cropping of the original frame is required, so the target ends up covering relatively few pixels and the enlarged crop can look soft, degrading the quality of the final image. This was the case for my M51, which was very small on the original LIGHT frames, as you can see below. Cropping to that extent also shows up any failings in focusing, as in my final image below – but practice makes perfect.

[Single 480 second LIGHT frame at ISO 800, before cropping]


Anyway, after a few days of tweaking, cropping and tweaking some more, I finally got an image I was happy with. Hope you like it and you can see the clear difference that having longer subs on a dark night can make – and an extra year’s practice in processing! You can also see it in the Gallery!

[Image: M51, finished version]

Any questions or thoughts, let me know below!

Thanks for reading!

Clear Skies!

The Moon and Jupiter (and more moons…).


I didn’t get around to a writing post this week; it just didn’t work out, with other priorities and things to do. However, on the 17th there was a nice clear night, so I did manage to get the scope out. The moon is approaching full, so deep sky photography is out, but I thought I would have another go at Jupiter and maybe try my hand at a Moon mosaic for the first time. I’ve seen other people’s and I’m really impressed with the detail, so thought I’d give it a go.

First off, I pointed the scope at the Moon, which was in a better position and higher in the sky than Jupiter, so I left the planet to rise a bit higher and, hopefully, minimise the effects of the atmosphere on the view.

The ZWO ASI120MM camera I currently use has a much smaller sensor than my DSLR, so gives a much narrower field of view. It can’t fit the whole of the Moon in one go, so a mosaic was definitely the way to go. I found the top left of the Moon in the scope view and took 3 x 60s AVI videos working down the left side, then 3 more going back up the right side of the Moon.

By the time this was finished, Jupiter was a bit higher, so I turned to it and took a 60s AVI using the ZWO camera, but with the addition of a 2.25x Barlow to increase the size of Jupiter slightly. As luck would have it, I managed to time it right to coincide with the transit of Callisto (one of Jupiter’s moons) across the face of the planet, so I caught the moon’s shadow as well.

I also took another 60s AVI of Jupiter, but over-exposed it so that Jupiter was a bright blob, but 4 visible moons showed up in the image – more about that later!

Then I packed up and withdrew to the warmth of the house and a chance to play with the images.

I started with Jupiter as that was probably easier.

Each video was processed in PIPP to get rid of the dodgy frames (I kept the best 10%), then went into AUTOSTAKKERT to stack the remaining frames. Finally, I put the images into REGISTAX to play around with the wavelet function and try to pull out some detail.

I then had two images – one of Jupiter and the shadow of Callisto, and one of the moons. After a bit of trial and error in Photoshop, I managed to get the two layers to work together and merged them. I used the magic eraser function to bring Jupiter up into the image with the moons, replacing the over-exposed Jupiter with the more detailed planet.
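
I did this merge by hand in Photoshop, but the same idea can be sketched in code: wherever the over-exposed frame (which shows the moons) is blown out, substitute the properly exposed planet. The arrays and the threshold below are hypothetical, and the two frames would need to be aligned first:

```python
import numpy as np

def composite_planet(overexposed, detailed, saturation_level=0.98):
    """Replace the blown-out planet disc in the over-exposed frame with the
    properly exposed, detailed image. Both inputs are aligned floats in [0, 1]."""
    mask = overexposed >= saturation_level      # the saturated planet disc
    result = overexposed.copy()
    result[mask] = detailed[mask]
    return result
```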

I was pretty happy with the result, which included Jupiter, the four moons and the clear shadow of Callisto on the bottom of Jupiter (or it might be the top… it’s space, who knows….).

Jupiter, Callisto Transit - 17th March 2016

It still bakes my noodle that something around 700,000,000 km away can be seen through such a small piece of kit.

Then it was on to processing the Moon. Each AVI was run through the same process as the Jupiter AVI but, after sharpening in Registax, I cropped the images to give them clean edges before putting them together in Microsoft ICE, a free bit of software that stitches panoramas together and works pretty well with lunar mosaics as well. It is pretty easy to use, but you do need to make sure your images overlap, so it can line them up. However, you also need clean edges, otherwise it all goes a bit awry. Other people recommended iMerge and HUGIN, but I didn’t get on with them so well, and ICE worked, so I will use that until it finds something it can’t cope with.
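
For the programmatically inclined, the stitching step can also be done with OpenCV’s stitcher in “scans” mode, which suits flat mosaics like this. This is not what Microsoft ICE does internally, just a rough alternative sketch; the file paths would be your own panels:

```python
import cv2

def stitch_mosaic(panel_paths):
    """Stitch overlapping lunar panels into one mosaic using OpenCV's stitcher.
    SCANS mode avoids the spherical warping used for ordinary panoramas."""
    panels = [cv2.imread(p) for p in panel_paths]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, mosaic = stitcher.stitch(panels)
    if status != cv2.Stitcher_OK:
        raise RuntimeError("Stitching failed with status {}".format(status))
    return mosaic
```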

Finally, when it was all stitched together, I popped it into Photoshop for a quick tweak and ended up with a nice, detailed mosaic that I am very happy with as a first effort.

Moon - 6 Panel Mosaic - 17th March 2016

Of course, now I’m thinking about scopes with longer focal lengths to get closer to the target…

The full size images are in the Gallery, if you want to take a look and let me know if you have any questions!

Clear Skies!

M101 – The Pinwheel Galaxy – The Power of Processing

Last week, on Tuesday evening, there was a rare clear night that coincided with a moonless sky. The forecast was good with no cloud expected until 5am so I thought I would grab my chance and get the kit out for a session of astrophotography.

I got everything set up in the back garden and pointed the scope towards the handle of The Saucepan, Plough, Great Bear, Ursa Major – whatever you want to call it, and the very faint Pinwheel Galaxy or M101.


M40_M97_M108_M109_Finder_Chart

Star Chart from FreeStarCharts.com


The galaxy is probably a bit faint for my skies and equipment, but the night was favourable, so I thought, “why the hell not?”

Well, as it turns out, because the forecast was wrong and the clouds came in several hours ahead of schedule, cutting short what I had hoped would be 3 to 4 hours of imaging. In the end I only got 16 subs of 420 seconds each, a total of just under two hours. Overall, not long enough to get all the detail out of the galaxy, but enough to have a go at a final image.

I also didn’t take any DARK subs, as I was running out of time and getting a little cold, so the stack was calibrated with just LIGHTS, FLATS and BIAS sub-frames. I was hopeful that, as I was dithering between subs, the need for darks would be diminished. The use of DARK frames seems fairly well debated anyway, so I wasn’t too worried.

Anyway, the next day, I stuck the images through Deep Sky Stacker as usual and put the resulting frame into Photoshop for processing. I am still not sure I fully know what I am doing in Photoshop and still struggle to get the colour right in my images, so I was happy that I got the galaxy, but not too happy with the resulting image, which was rather purple. I was struggling in Photoshop to get colour out of the image without introducing too much noise.


M101 - 8th February 2016 - PS

M101 – 8th February 2016 – 16 x 420s LIGHTS. Stacked in DSS with 30xFLATS and 30xBIAS frames. Processed in Photoshop

Despite that, the image is not too bad for something that is 20 million-odd light years away and a damn sight better than my first effort from last year, which I think may have been deleted somewhere along the line… or at least is on some back-up drive in a dark, dusty cupboard somewhere.

The focus seems good and there is some detail in the galaxy, so not too disappointed. But not 100% happy with the colour.

So I gave it another go, bit the bullet and had a crack at it with PIXINSIGHT which I am currently trialing to see how I get on with it. It has a reputation for having a MASSIVE learning curve but, then again, astrophotography isn’t exactly a walk in the park.

Anyway, I followed some tutorials at Harry’s Astroshed and reprocessed the same sub-frames to come up with a different final image. At this point I can heartily recommend Harry’s tutorials. There clearly is a steep learning curve with PIXINSIGHT, but they certainly seem to be a good place to start.


M101 - 8th February 2016 - Pixinsight

M101 – 8th February 2016 – 16 x 420s LIGHTS. Registered and stacked in PIXINSIGHT with 30xFLATS and 30xBIAS frames. Processed in PIXINSIGHT.

 

Both images are from the same set of LIGHT frames, but have slightly different crops, so the first looks slightly bigger than the second – but they are from exactly the same source. Unfortunately, because of the size of the DSLR sensor, the images are heavily cropped, which increases the visibility of noise. PIXINSIGHT has done a good job of minimising this, as its version is a lot less noisy than my first try.

To me, it is clear that the second image has much better colour balance and is, overall, a more pleasing image, and probably not far off the best I could hope for with a relatively small number of subs.

PIXINSIGHT is clearly a much more powerful (and dedicated) astrophotography processing tool than PhotoShop and I like what I have seen so far. That’s not to say I couldn’t achieve the same in Photoshop but PIXINSIGHT just seemed to make more sense. I feel that I learned more about Star Masks in 6 minutes of tutorial with Harry than I have learned in the last year – so I am looking forward to good things going forward!

Hope you like the photos, and if you have any thoughts on PIXINSIGHT or capturing/processing M101, please feel free to share them below!

Cheers!

 

M101 Data Table (courtesy of Free Star Charts)

Designations: Messier 101, NGC 5457
Name: Pinwheel Galaxy
Object Type: Spiral galaxy
Classification: SAB(rs)cd
Constellation: Ursa Major
Distance (kly): 22,000
Apparent Mag.: 7.9
RA (J2000): 14h 03m 12s
DEC (J2000): +54d 20m 55s
Apparent Size (arcmins): 28.8 x 26.9
Radius (light years): 90,000
Number of Stars: 1 trillion

The Moon

[Image: the Moon]

It has been a while since I posted anything about astrophotography, simply because it has been a while since I have been able to do anything. The terrible weather has been a big contributing factor, and the last time I did get out, my own haste and failure to set everything up correctly meant I came in empty-handed, although those hands were very cold. All this means I have only been out a couple of times in the last few months.

The forecast was good for Thursday night, so I planned to get out no matter what. Well, actually, there was the small issue of the Moon. Not quite full at this point, but big enough and bright enough to be a real pain for imaging any deep sky objects like galaxies and nebulae with a DSLR.

But I wanted to do something. So I had a look at the Moon for the first time in ages and remembered just how interesting it is… and how little I really know it. That was when I thought it was about time I had a go at imaging the Moon in more detail using my guide camera, which also doubles up as a handy planetary camera. I just hadn’t got around to it before, largely because it was something new, and it was easier not to try it than to try.

Actually, as it turns out, it is easier to set up for planetary imaging than for DSOs, so I was up and running after a quick polar alignment, sticking to manual slewing with lunar tracking rates. I stuck the ZWO camera in the ED80 and away I went.

The images start as video captures of the target: 10,000 frames for each video, processed in PIPP to assess the quality of the frames and keep the best 25. Those remaining frames were analysed and stacked in AUTOSTAKKERT2 before I finished up the processing in REGISTAX, playing around with the wavelets to effectively sharpen the image. A final tweak in Photoshop and the images were finished.

The general seeing wasn’t great, so it was hard to get accurate focus, and the atmospherics made the target “wobble”, looking like something seen through a heat-haze. The purpose of taking a video is to capture thousands of frames in a short space of time, hopefully ending up with a usable number of shots where the atmosphere steadies and the target comes into focus.

I also had a quick go at Jupiter. Unfortunately, as it was lower in the sky, I was shooting through a thicker cross-section of atmosphere, so the issues with seeing were worse. However, I got a better outcome than my first effort which you can see in the gallery, and the detail is getting better, with the Great Red Spot clearly visible. I shall look forward to having another go when Jupiter is in a more favourable position. Oh, and just before anyone asks, the ZWO Camera is a MONO camera, which is why the GRS is not red!

Anyway, I hope you like the images and, if you have any question about how I took them and finished them, let me know. I am very new to planetary, so I’m just figuring this out myself.

[And anyone who knows about my other interests, will probably be able to guess why Tycho was one of my first targets….]

Tycho Crater

Tycho Crater 2

Unknown Crater

Mare Imbrium/Plato Crater

Copernicus Crater

Copernicus Zoom

Jupiter with GRS

Andromeda Galaxy Revisited – 31st October 2015

Something strange happened at the weekend… there was a clear night in the UK. So, in the dark, I set about getting the telescope and cameras out to have a go at something… anything really. I hadn’t been out since 3rd October, when nothing worthwhile resulted. Because it was ahead of the still large and bright moon, I plumped for the Andromeda Galaxy. As a reminder, Andromeda (M31) is our nearest major galaxy at 2.5 million light years and is visible with the naked eye on dark, clear nights… if you know where to look!

I wanted to see if I could improve on the effort I posted last year.

An early attempt at Andromeda that suffers from the limitations of unguided subs on an EQ3-2 mount. There is some detail coming through in the dust lanes and some colour, but longer subs are going to be needed to improve on this.

While the galaxy is visible, it still lacks detail, and the stars suffer from my pushing the colour a little too far in post-processing. This was also in the days before I had a field flattener on the scope to prevent the visible “stretching” of the stars in the corners of the image. The image was also taken on my older mount and without guiding assistance, so I was limited to 90 second exposures.

With my newer set-up I pushed the individual exposures up to 210 seconds, just over double what I managed before, and also guided the scope while it was tracking to improve the quality of the image. There is still a long way to go. As I get to grips with guiding the imaging scope I will be able to increase the exposure length and, if the timing is right and the moon is out of the way, I should be able to work with 5 minute subs or even longer.

Processing, to be honest, is the hard part, and I am still learning how to use Photoshop to my best advantage. There is rather a lot to learn!

Anyway, here is the first effort, which is a result of 2 minutes short of 3 hours of imaging, so at least 3 times the amount of data I had for the picture above.

M31 - Andromeda 31st October#1

Clearly there is a lot more data and detail in the image, so I am happy with how that turned out. But the core is blown out, so I thought I would have another go at processing the image and came up with something that is a little softer.

M31 - Andromeda 31st October 2015#2

I think I prefer the second version, but then how the images are processed is fairly subjective, and they can be tweaked ad infinitum if you are not careful. It is often the case that you have to get to a point where you are happy and then leave it. The core is still a little “blown”, but I will need to incorporate two sets of images taken at different exposure lengths to counter that… one step at a time!

The images still have a fair bit of background noise, and I will need to work on reducing that in future images, but I am very happy with the improvement over the course of the year.

Hope you like them!

The only way is up!

As ever the images have also been uploaded to the gallery!
