Skylab1 those are good numbers

Keep in mind that the native periodic error of my RST-135 mount is about 70 arc seconds peak-to-peak. Since it is mostly sinusoidal (with a small third harmonic), this translates to an unguided error of a whopping 25" RMS; you can't even take short unguided exposures with a fisheye lens! This is probably the highest periodic error of any mount sold over $1000.
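As a sanity check on the peak-to-peak to RMS conversion: a pure sinusoid with peak-to-peak P has amplitude P/2 and RMS of (P/2)/√2, so 70" peak-to-peak works out to roughly 25" RMS (the small third harmonic changes this only slightly):

```python
import math

def sinusoid_rms(peak_to_peak):
    """RMS of a pure sinusoid, given its peak-to-peak amplitude."""
    return (peak_to_peak / 2) / math.sqrt(2)

print(round(sinusoid_rms(70.0), 1))  # ~24.7 arcsec, i.e. roughly 25" RMS
```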

The Harmonic Drive (strain wave gearing) has all the nice properties (lightweight, lots of torque and thus requires no counterweights, virtually no backlash), but it also comes with a large periodic error. It has been used mainly for robotic arms, and even to drive the wheels of the original Mars Rover: applications where torque is king and small periodic errors are not required.

The darn thing is machined from a block of aluminum, and not a cast part. Even the serial number is laser etched. I have had mine outdoors (covered just with dry bags when not in use) 24/7 for the past year where it had sat outside through rain and snow. It is built like a tank, except it does not weigh like a tank.

On paper, it should be easy to guide the error out, as long as one uses updates fast enough to track the time derivative of the periodic error. It needs an exposure time of 0.5 seconds or less.
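To put a number on "fast enough": for a sinusoidal error of amplitude A and gear period T, the maximum drift rate is 2πA/T, reached at the zero crossings. The RST-135's gear period is not stated in this thread, so the T below is purely illustrative:

```python
import math

# Peak-to-peak periodic error of the mount (from the post) and an
# ASSUMED strain wave gear period; the actual RST-135 period is not
# given here, so T is illustrative only.
P2P = 70.0   # arcsec, peak-to-peak
T = 430.0    # seconds, hypothetical gear period

amplitude = P2P / 2.0
max_rate = 2 * math.pi * amplitude / T   # arcsec/s, slope at zero crossing
print(f"max drift rate: {max_rate:.2f} arcsec/s")
print(f"drift during a 0.5 s update: {0.5 * max_rate:.2f} arcsec")
```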

Therein lies the rub -- under typical seeing conditions, the centroid of a single star jitters so much that one typically needs exposures of 2 or more seconds to average the turbulence out. But a 2 second update time (the fastest you can get with a 2 second exposure) is too long to get the guided error of the RST-135 below 1" or so.

Multi-star guiding changes all that by allowing 0.5 second exposures, since averaging over multiple stars reduces the centroid variance without the need for longer exposures.
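The statistical argument can be sketched with a quick simulation: averaging N independent centroid estimates shrinks the noise by √N, the same factor an N-times-longer exposure would buy (assuming the per-star seeing errors are roughly independent, which is only approximately true across a real field):

```python
import random
import statistics

random.seed(42)

def centroid_noise(n_stars, trials=20000, sigma=1.0):
    """Std dev of the mean of n_stars independent centroid estimates."""
    means = [statistics.fmean(random.gauss(0, sigma) for _ in range(n_stars))
             for _ in range(trials)]
    return statistics.pstdev(means)

one = centroid_noise(1)
twelve = centroid_noise(12)
print(f"1 star: {one:.3f}, 12 stars: {twelve:.3f}")  # ratio ~ sqrt(12) ~ 3.46
```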

The result appears to show that the mount indeed works quite well, even with the experimental multi-star autoguiding in ASIAIR, reducing the 25" native error down to 0.5".

Your mount's problem may not come from needing a short guide exposure time, or even from lousy centroid estimates. That is, the centroid error could perhaps have been reduced by using longer exposures instead of more stars (which, presumably, you have already tried), or diagnosed by inspecting the PHD2 Guiding Assistant numbers.

Perhaps I should repeat what multi-star guiding does and does not do: what it does is achieve equivalent (or even better) centroid estimates without requiring a long exposure time. It does not do anything else, although some other problems may stop rearing their heads once there are no extraneous bogus guide pulses caused by poor centroid estimates.

If the PHD2 Guiding Assistant recommends a MinMo smaller than 0.2 pixels, then v1.6 will have something for you, independent of whether multi-star guiding is of any use for your mount.

Chen

    w7ay so I checked the PHD2 log from the last time I ran it; the MinMo setting was at 0.2, while the log from AAP showed 0.1.

    Confirmed with the developers; it will be released to the public by the end of this month.

    @asiair@zwo#45358 Thank you, boy end of the month is still 3 weeks away.

    Can't believe you're able to add the Canon EOS R, but not the EOS RP. They are practically identical in firmware!! Come on!


    Hello. I am testing the Canon 250D in the beta version. Everything is going well, but there is a big problem: it can only take captures in L format, not in RAW format. I hope they correct this bug. Greetings and thank you.

      How does one get on the pre-public beta list 🙂 I'm ready to test!

      Using multiple stars should also reduce "star lost" cases, since seeing will not affect all the stars within the same exposure; the probability of losing all 12 stars at the same time is like winning a lottery.
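For a rough feel of those "lottery" odds: if each star were lost independently with probability p per frame (an idealization, since seeing is correlated across the field), the chance of losing all n at once is p^n:

```python
# Hypothetical per-star loss probability; the independence assumption
# is an idealization, since seeing is correlated across the field.
p = 0.1
n = 12
print(f"probability of losing all {n} stars at once: {p**n:.0e}")
```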

      Well, I'll be a monkey's uncle. I won the Star Lost lottery last night!

      Seeing was not very good. The star image in the top left corner of the autoguide window was mutating into all kinds of shapes, very rapidly.

      When that happens, one usually packs up for the night. But I wanted to see how much multi-star autoguiding helps.

      Seeing was so bad that even the calibration stage produced slightly non-orthogonal red and blue vectors. The two vectors in the Calibration Data popup showed up to 15 degrees of non-orthogonality. I recalibrated a few times to get better orthogonality and settled for some 5º off. So there will be several percent of the RA correction applied by mistake to the declination motor, and vice versa.
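The cross-coupling fraction from a non-orthogonal calibration is the sine of the error angle, which is easy to tabulate (the angles below are the ones mentioned above):

```python
import math

# Fraction of an RA correction that leaks into Dec (and vice versa)
# when the calibration axes are off-orthogonal by eps degrees.
for eps in (5, 15):
    leak = math.sin(math.radians(eps))
    print(f"{eps} deg off-orthogonal -> {leak:.1%} cross-coupling")
```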

      Multi-star picked a good number of stars (sometimes up to 12, but sometimes as low as only 7 stars).

      Then the terrible thing happened.

      Slowly, over the course of a few minutes, the green circles disappeared one by one!

      You expect to lose a star or two when seeing is poor at their locations, but they should come back once the turbulence moves off the star after a guide frame or two. In the case of ASIAIR, those dropped guide stars never came back.

      Eventually, all the stars dropped out and I lost guiding completely :-). They never came back, even when conditions at each star improved. I have already written ZWO about this, so by the time there is a public beta, it might be fixed.

      When I restarted guiding after the stars were lost, all the guide stars magically reappeared. This should be a simple algorithmic bug to fix.

      I had also suggested in my beta feedback that they initially identify more candidate stars, even if the Raspberry Pi can only handle, say, 12 of them (that seems to be the number they chose at the moment) while autoguiding is in progress. That way, when a star is temporarily lost, the alternates can be picked up and guiding continues with the maximum number of stars.
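That candidate-pool suggestion could look something like the following sketch; the function name, the pool size, and the promotion rule are all hypothetical, not ZWO's actual code:

```python
# Hypothetical sketch: keep a larger candidate list than the number of
# stars actively guided, and promote a spare when an active star drops
# out. All names and limits here are illustrative.
MAX_ACTIVE = 12

def refill(active, candidates, lost):
    """Drop lost stars, then promote spare candidates to keep the set full."""
    active = [s for s in active if s not in lost]
    spares = [s for s in candidates if s not in active and s not in lost]
    while len(active) < MAX_ACTIVE and spares:
        active.append(spares.pop(0))
    return active

candidates = list(range(20))          # 20 detected stars
active = candidates[:MAX_ACTIVE]      # guiding with 12 of them
active = refill(active, candidates, lost={3, 7})
print(len(active), active[-2:])       # back to 12, spares 12 and 13 promoted
```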

      While the guide stars were slowly disappearing, I could see the Total RMS error climb -- so multi-star is indeed working; fewer stars, bigger error. And even with 4 or 5 stars available, multi-star seemed to be doing better than single-star.

      The other thing is why stars should drop out to begin with -- the answer I got from them is that they are using HFD to detect hot pixels. Whenever seeing jitters the star shape so that the HFD falls below a threshold, the star is marked as a hot pixel, is no longer identified as a star, and hence is lost! (I have submitted a suggestion for a more robust way to find bad pixels during the calibration phase that won't cause a real star to be misidentified as a hot pixel while guiding is ongoing; this might fix the single-star "Star Lost" problem too. We shall see.)
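The failure mode described above can be illustrated with a toy example; the threshold and the HFD values are made up, since ZWO's actual numbers are not public:

```python
# Toy illustration: if a star's half-flux diameter (HFD) momentarily
# dips below the hot-pixel threshold, it is reclassified and dropped.
# Both the threshold and the HFD series are hypothetical.
HOT_PIXEL_HFD = 1.5   # pixels, hypothetical threshold

# HFD of one star across successive guide frames, jittering with seeing;
# frame 4 briefly collapses under a moment of sharp turbulence.
hfd_series = [3.2, 2.9, 3.4, 1.3, 3.1]

flagged = [h < HOT_PIXEL_HFD for h in hfd_series]
print(flagged)  # frame 4 is flagged as a "hot pixel" and the star is lost
```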

      By the way, the smaller MinMo seems to help with my mount (harmonic drive gearing). Even with poorer seeing, I seem to get better guide numbers than I get from ASIAIR v1.5.3 (1" last night vs 1.3" typically with v1.5.3). Still not as good as v1.3 days, but seeing was also horrid last night.

      With hindsight, it is good that ZWO has not yet pushed the beta to everybody. A beta is supposed to be for people to test and feed back problems; it is not for the casual user to get a new feature -- for most companies, that comes in the GR, the general release.

      By the way, ZWO does not appear to follow the Golden Master (GM) paradigm -- a final beta before the full release. I have seen the build number skip a few updates between the last available beta and the general release. This means that there were some changes that never went through external beta. That's like playing Russian Roulette.

      The problems that I have encountered so far in Multistar guiding require simple fixes (seems to be algorithmic and mathematical in nature, so far), but they will probably mislead the casual user into thinking multi-star autoguiding does not work on ASIAIR (remember my earlier comment about scaring horses and children). Beta feedback is also merely feedback -- there is no guarantee that ZWO will fix the problems that you submit, or accept a single recommendation.

      You probably need to be a shareholder like Warren Buffett to get them to listen more seriously to every bug report :-) :-).

      Chen

        Chen, why do you think guiding changed so drastically since v1.3? When I used v1.4, guiding was never a problem. v1.5 is a disaster. I know it is the ASIAIR, because I can switch between PHD2 and the Air on the fly (I have a USB switch that controls the mount and guide scope, connected to both laptop and Air, so I can go back and forth). Guiding is 0.5" with PHD2, 1.5" and above with the Air -- same equipment, same night.

        • w7ay replied to this.

          billx Chen, why do you think guiding changed so drastically since v1.3.

          That is the $64,000 question. I have no idea (otherwise I would be richer, ha ha).

          Believe me, I have personally emailed them multiple times and asked what they changed in the code. The answer that came back had always been "Nothing was changed."

          My error used to average around 0.6" to 0.7" total RMS (I have kept the v1.3 logs). With v1.5.3, I averaged closer to 1.3".

          Yep, I agree it is something ASIAIR changed, not something in the original PHD2, nor even how PHD2 was used in v1.3. And since PHD2 guides fine, it is not the camera hardware, either.

          Chen

            billx I know it is the Asiair, because I can switch between PHD2 and the Air on the fly (I have a USB switch that controls mount and guide scope

            Hmmm, do you mind a simple (everything is simple when I don't have to do it myself :-) experiment? Switching between them but using the ST-4 guide port of the mount? See if you get similar disparity between RMS errors.

            This will isolate whether the problem is in how the images are processed, or at the other end -- in how the correction signals are sent to the mount. Both are part of the feedback loop.

            Chen

              w7ay Chen, why doesn't ZWO simply use the full version of PHD2 instead of trying to do their own custom version, which they screwed up at some point and won't admit to? Unless the processor on the Air isn't powerful enough to run the full PHD2, it would be much simpler just to use the already-made version, so they don't need to spend manpower trying to maintain and perfect their own.

              • w7ay replied to this.

                I already wrote: I suspect the image acquisition. In 1.5, something changed when they added the video acquisition capability.

                • w7ay replied to this.

                  Skylab1 Chen, why doesn't ZWO simply use the full version of the PHD2 instead trying to do their custom version, screwed it up at some point and not admit to it?

                  You would have to ask them, but have you tried writing real-time imaging code with a real-time feedback loop on the toy Raspberry Pi?

                  Chen

                    stevesp In 1.5 something changed as they added the video acquisition capability

                    I think there just might be a way to figure out if they are actually using video mode now and not before.

                    The two schemes (async/frame-by-frame and sync/video) should look something like this:

                    The red x's are frames that should not be used for autoguiding. You get into trouble if they are not discarded.

                    For the async case (top diagram), because of the varying lengths of the guide pulses (the "Guide" boxes), the time between guide frames in the PHD2 log will be more erratic, but it is quite constant when no guide pulses are issued.

                    In the synchronous (video mode) case (bottom), the frames occur close to integer multiples of the video frame period. I.e., the first Find Centroid is right after the 1st Take Image, and the second Find Centroid is close to the end of the 4th Take Image; in video mode, each Take Image should take a constant amount of time (the reciprocal of the FPS), assuming the video stream is not stopped and restarted along the way (which would negate the advantage of video mode).
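One way to check this against a guide log, sketched below with fabricated timestamps: compute the gaps between successive guide frames and look at their spread. Video mode should give near-constant gaps, while async acquisition should vary with the guide pulse lengths:

```python
import statistics

# Sketch: infer the acquisition mode from inter-frame intervals in a
# guide log. The timestamps below are fabricated for illustration, not
# taken from a real ASIAIR or PHD2 log.

def interval_stats(timestamps):
    """Mean and spread of the gaps between successive frame timestamps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.fmean(gaps), statistics.pstdev(gaps)

video  = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]       # constant 0.5 s frame period
async_ = [0.0, 0.62, 1.10, 1.85, 2.31, 3.04]  # varies with pulse lengths

for name, ts in (("video", video), ("async", async_)):
    mean, sd = interval_stats(ts)
    print(f"{name}: mean gap {mean:.2f} s, spread {sd:.3f} s")
```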

                    You have old PHD2 logs from ASIAIR, don't you, Steve?

                    Chen