Author Topic: Stellar age issue [KNOWN]  (Read 9693 times)

cavok84

  • Posts: 18
Stellar age issue [KNOWN]
« on: February 15, 2017, 03:12:48 AM »
First off I'd like to say, this simulation is amazing! I don't know how I managed to go this long without hearing about it.

My issue: When I attempt to age a star, for example the Sun in the climate simulation, the age increases yet no other parameters (e.g. surface temp or radius) change.

This prevents me from inducing a nova or expansion to a giant unless I adjust temp directly. Not sure if this has been mentioned before...

This issue holds for all created and stock simulations...
Also a few questions:
1.) Are there any plans to limit supernovae to stars that are actually massive enough, and to differentiate between a nova and a supernova?

2.) Is the stellar remnant or nova remnant supposed to be a white dwarf? Under Composition, both the stars and the stellar remnants are 100 percent hydrogen.

3.) Does the game take magnetics into account (say, around a pulsar) with regard to nearby planetary heating? Will the magnetics of a pulsar contribute to evaporating the atmospheres of planets in the sim?

Thanks,

cavok
« Last Edit: February 23, 2017, 11:20:13 AM by cavok84 »

cavok84

  • Posts: 18
Re: Stellar age issue
« Reply #1 on: February 16, 2017, 09:45:14 PM »
Just a quick bit of additional information about the issue:

If I cycle 'radius by composition', the star responds to 'age' as well as to the effects of being a 'nova remnant'.

I must cycle this continuously, however, as the changes only appear when I toggle 'radius by composition' on and then off, over and over.

I've seen some on this forum say they cycle the setting once at the beginning of a simulation and the star behaves appropriately. This is not the case for me.

Also, as I stated before, the remnants of my Type II supernovae seem to be identical copies of the original star, unless the star is massive enough to turn into a black hole.

In addition to these odd issues, I'm having sim crashes whenever I end a simulation and try to load a new one. This appears to occur randomly.
« Last Edit: February 16, 2017, 09:55:30 PM by cavok84 »

Jar

  • Developer
  • Posts: 732
Re: Stellar age issue
« Reply #2 on: February 17, 2017, 04:21:26 PM »
Sorry for the issues, cavok84. We are aware of a number of issues with stellar evolution right now; the system is a little broken. Rather than putting band-aids on each specific issue, though, we're working on a complete rewrite of the stellar evolution code. The new model should address most, if not all, of the issues you mentioned. Unfortunately this means we'll have to wait a bit to see them fixed. We're hoping to have the new model ready for Alpha 20, our next big update.

Regarding magnetics: no, only planets' internal magnetic fields have any effect right now, which is to reduce mass-loss erosion from the solar wind. Hopefully we'll improve magnetic simulation in the future.

As for those crashes, I see that you sent in a log, thanks. There doesn't appear to be anything unusual there. Can you send us the log after you experience a crash? Once you run Universe Sandbox ² again, it overwrites the log, so you'll need to send it to us after a crash and before you run it again.

Thanks!

cavok84

  • Posts: 18
Re: Stellar age issue
« Reply #3 on: February 17, 2017, 09:28:31 PM »
Thanks for the reply, I'm glad to know it's being worked on. I'll send another log as soon as it happens.

One more quick question, and this is something I've looked up extensively and without any conclusive answer.

I'm running a Skylake 6700K overclocked to 4.4 GHz. GPU: R9 390 8 GB. RAM: 32 GB @ 3000 MHz. OS: Windows 10 Professional 64-bit.

I've noticed something perplexing regarding in-game Vsync, simulation tolerance/error, and CPU usage.


For example: the default solar system simulation.

When Vsync is OFF (150-300 FPS): simulation error appears to be at its lowest compared to Vsync ON. The error converges with what I observe with Vsync ON as I reduce the time step to very slow (below 1 sec/sec). Once I'm below 1 sec/sec, Vsync ON is as accurate as, if not more accurate than, Vsync OFF.

However, at normal to very high time steps, Vsync OFF appears to provide far better accuracy than ON.

When Vsync is ON (set at 60 FPS): as stated above, error increases except at very low time step values. The PHYS rate (FPS viewer) decreases to about half of what it is when Vsync is OFF. At very high time steps, orbits appear non-circular (i.e. hexagonal or some other geometric shape; I assume this is due to some error and FPS issue).

Vsync OFF gives an average CPU usage (at reasonable time steps) of 30-50 percent; Vsync ON gives an average CPU usage of 10-20 percent.

Actual Data:

Solar System Sim (default)
Default time step of 15 days/second

Vsync OFF: FPS ~240; PHYS ~930, 1 step
Simulation error: 0.05-0.25 m/s
Tolerance: 11.4 million km

Vsync ON: FPS 60; PHYS 623, 1 step
Simulation error: 0.9-4.0 m/s
Tolerance: same as above

Do I understand this correctly? Is it more accurate, at a given time step (say the solar system default of 15 days/sec), to have Vsync OFF?

If I'm running for accuracy on a long-term sim, it appears I'd never want Vsync ON.

Jar

  • Developer
  • Posts: 732
Re: Stellar age issue
« Reply #4 on: February 23, 2017, 10:39:40 AM »
Sorry for the delay. I asked the team about Vsync and accuracy, and you are correct. When the FPS is higher with Vsync off (if it is then higher than 60 FPS), the physics calculations can take smaller steps, and smaller steps mean a lower simulation error. Essentially, enabling Vsync limits the CPU load, while disabling it lets the CPU run more physics calculations. We try to maintain a low error even when the CPU load is lower, but if your aim is utmost accuracy, then disabling Vsync is your best bet right now. We've thought about ways to improve this in the future. You can also manually adjust accuracy and tolerance by clicking Sim > Show More in the bottom bar.
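
For a rough sense of scale using your numbers above: at the default 15 days/sec, ~240 FPS with Vsync off means each frame covers about 15/240 ≈ 0.06 days (~1.5 hours) of simulated time, while at 60 FPS each frame must cover 0.25 days (~6 hours), so each chunk handed to the physics is four times larger.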

"At very high time steps orbits appear non-circular" -- this is because at higher time steps there are fewer positions calculated. The trails you see show the path from position to position, so for example, a tight orbit with only six calculated positions per orbit will appear as a hexagon. If you enable Orbits instead of Trails, you will see a circular orbit.

Hope that helps explain what you're seeing!


cavok84

  • **
  • Posts: 18
Re: Stellar age issue
« Reply #5 on: February 23, 2017, 11:15:57 AM »
Thanks, Jar, that helped a lot.

It was just a bit odd that something like Vsync would have such a noticeable effect on error; typically I would assume the GPU would handle limiting FPS while the CPU could still provide high PHYS update rates, or additional steps, to increase sim accuracy.

Like you said, increasing accuracy manually does have a noticeable effect on the values, but only if I manually enter a very low tolerance; the sliding 'accuracy' bar doesn't change anything, and I suspect this is because my sim accuracy is already high enough at a given time step to not warrant a change.

I'm currently using 'Native cpu' rather than 'managed'. I'm guessing this is still optimal?

Is there anything we can do to better utilize our GPUs in computations, especially those of us with AMD cards with high floating-point throughput?

Greenleaf

  • Thomas Grønneløv
  • Development Team
  • Posts: 211
Re: Stellar age issue [KNOWN]
« Reply #6 on: February 28, 2017, 01:04:43 PM »
While the n-body code runs asynchronously and is not strongly tied to the render loop, they do synchronize. The timing is such that the user can specify a desired speed scalar, such as 1000 times faster than realtime, and every frame the n-body code is asked to advance the simulation by (1/fps) * 1000 seconds.
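
In sketch form (a toy with invented names, not our actual code), the per-frame hand-off looks roughly like this:

```cpp
#include <cstdio>

// Illustrative stand-in for the n-body integrator; the names here are
// invented for the example, not taken from Universe Sandbox.
struct NBody {
    double simulatedTime = 0.0;                   // simulated seconds elapsed
    void Advance(double dt) { simulatedTime += dt; /* integrate bodies */ }
};

int main() {
    NBody nbody;
    const double speedScalar = 1000.0;  // user request: 1000x realtime
    const double fps = 60.0;            // render rate (e.g. Vsync ON)

    // Each rendered frame asks the integrator to advance by
    // (1/fps) * speedScalar simulated seconds: ~16.7 s per frame here.
    // At 240 FPS (Vsync OFF) the same speed is met with steps a quarter
    // of that size, which are easier to keep accurate.
    for (int frame = 0; frame < 60; ++frame)      // one second of wall clock
        nbody.Advance((1.0 / fps) * speedScalar);

    std::printf("simulated %.0f seconds in 1 s of wall clock\n",
                nbody.simulatedTime);
    return 0;
}
```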


The system is really designed for situations where the n-body step is slower than the render step, not the other way around, so currently a new n-body step is only launched once every n frames. In situations where the n-body step takes multiple frames, as is commonly the case, this works well, but when the n-body step takes much less time, it is not optimal.
This is why it can run faster with Vsync off: since frames are rendered more frequently, n-body stepping is also launched more frequently.
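
A rough sketch of that launch scheme (again with invented names, nothing from the actual codebase):

```cpp
#include <chrono>
#include <cstdio>
#include <future>

// Stand-in for one asynchronous n-body step over dt simulated seconds.
double StepNBody(double dt) { /* integrate all bodies */ return dt; }

int main() {
    std::future<double> pending;                  // the in-flight n-body step
    const double dtPerFrame = (1.0 / 60) * 1000;  // simulated seconds per frame

    for (int frame = 0; frame < 120; ++frame) {   // the render loop
        // A new step is only launched from here, so stepping frequency is
        // capped by the frame rate. If StepNBody finishes faster than a
        // frame (a cheap sim), it sits idle until the next frame; higher
        // FPS with Vsync off therefore means more steps per second.
        if (!pending.valid() ||
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            if (pending.valid()) pending.get();   // collect the finished step
            pending = std::async(std::launch::async, StepNBody, dtPerFrame);
        }
        // ... render the frame ...
    }
    if (pending.valid()) pending.get();
    std::printf("render loop finished\n");
    return 0;
}
```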


Generally, a very low tolerance will force the n-body code to take multiple smaller substeps internally, which makes it more accurate, and also slower, thus using the computation time better. The question is then whether that is what you want, since there is such a thing as "accurate enough", and you may not always want "even more accurate" at the cost of 100% CPU utilization.
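
Very roughly, the mechanism is like this (a self-contained toy with a made-up 1-D system and invented names, not our implementation): keep halving the step until the estimated per-step error fits within the tolerance.

```cpp
#include <cmath>
#include <cstdio>

// Toy 1-D state to keep the sketch self-contained; a real integrator
// would step every body's position and velocity.
struct State { double x; };

// Invented stand-ins: one Euler step of size dt, and a per-step error
// estimate obtained by comparing one full step against two half steps.
State Step(State s, double dt) { s.x += dt * std::cos(s.x); return s; }
double EstimateError(State s, double dt) {
    State full = Step(s, dt);
    State half = Step(Step(s, dt / 2), dt / 2);
    return std::fabs(full.x - half.x);  // difference approximates the step error
}

// Advance by dt total, recursively splitting into smaller substeps until
// each substep's estimated error is within tolerance. Lower tolerance ->
// more substeps -> more accurate, and slower.
State AdvanceWithTolerance(State s, double dt, double tolerance) {
    if (EstimateError(s, dt) <= tolerance) return Step(s, dt);
    s = AdvanceWithTolerance(s, dt / 2, tolerance);
    return AdvanceWithTolerance(s, dt / 2, tolerance);
}

int main() {
    State s{0.5};
    s = AdvanceWithTolerance(s, 1.0, 1e-9);
    std::printf("x = %.9f\n", s.x);
    return 0;
}
```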


As to native vs. managed: managed means the C# implementation of the n-body code, while native is the C++ implementation (not user-friendly names, I know), which still has managed collision-resolution parts, though. Native is generally some 3-5 times faster and should be the default mode. The n-body code is currently being re-re-rewritten to be even more purely C++, with still better performance, while dropping managed mode entirely.


Currently we are not using the GPU for computations. A long time ago we did use OpenCL for the core of the n-body code, but with support for multiple platforms, and now even mobile, the pure CPU implementation won out. This is likely to change eventually, but we will not provide GPU computation any time soon.


I hope some of the above made sense. If not, don't hesitate to ask again :-)




cavok84

  • Posts: 18
Re: Stellar age issue [KNOWN]
« Reply #7 on: March 01, 2017, 09:53:19 PM »
Thanks for the detailed explanation! Your reply really cleared up some confusion I had about the relationship between FPS and internal physics updates/steps.

My curiosity came as a result of considering the accuracy of long term simulations. For example, let's say that I'm running a basic solar system simulation where I've introduced a rogue planet that enters an elliptical orbit around the sun. My goal is to view long term perturbations to the system.

If I'm running Vsync ON, with an average error rate of 4.0 m/s and 10 percent CPU usage: after letting my sim run for a week (real time) at a fairly high time step... will that 4.0 m/s eventually become a cumulative error that renders my simulation erroneous? This is in contrast to Vsync OFF with an error rate of 0.25 m/s (40 percent CPU usage). To clarify: are there internal physics steps that correct previous ambiguity or error so that, given a reasonable time step, "accurate enough" is just as good as "most accurate"?

In other words, for the simulation to set a reasonable tolerance in the 'auto' settings, it would seem that it would need to know how long the user intends to simulate. Am I off about this?

As to your explanation, I'm very appreciative of your taking the time to fully articulate the specifics. I won't pretend to know anything about coding or what you and your team are actually doing, but the info you provided clarified the majority of my confusion.

The lack of GPU computation also explains my low GPU utilization and temps. I came across some old info describing GPU computing and the types of GPUs best suited to the sim; it seems this has completely changed now, which might offer another reason to upgrade to the new AMD Ryzen processors!

Thanks!

cavok
« Last Edit: March 01, 2017, 09:57:24 PM by cavok84 »

Greenleaf

  • Thomas Grønneløv
  • Development Team
  • Posts: 211
Re: Stellar age issue [KNOWN]
« Reply #8 on: March 02, 2017, 05:11:01 AM »
The concept of error in the simulation is perhaps a bit misleading.
Given the chaotic nature of an n-body simulation, a calculation error of 1 m in one step can cause an error (from the true solution) of 1 km by the next step, while the error from the already-diverged path is perhaps again only 1 m.


It is like the magnetic pendulum in
https://www.youtube.com/watch?v=hTe6tRPROjI&t=23s , where any ever-so-tiny offset in the initial position can cause entirely different motion.


The error we report is the error for _one_ step, which can be estimated reasonably well. The next step, though, starts from that slightly wrong position, which has already diverged from the "true" motion, so the error reported the next time is measured from the already-diverged path. In other words, you cannot say that after ten seconds at an error of 5 m/s you certainly have no more than 50 m of error. The only thing you can say is that if the simulation is mostly non-chaotic, then that estimate holds; otherwise the error is truly per step and is only helpful for setting a reasonable upper limit, one you find reasonable in light of the other scales in the simulation, mostly orbital scales.
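
To see why the per-step number cannot simply be multiplied out, here is a toy demonstration using the logistic map as a stand-in for chaotic dynamics (nothing to do with our code; purely an illustration):

```cpp
#include <cmath>
#include <cstdio>

// Logistic map in its chaotic regime (r = 3.9) -- a stand-in for chaotic
// n-body dynamics.
double Step(double x) { return 3.9 * x * (1.0 - x); }

int main() {
    double a = 0.500000000;  // the "true" trajectory
    double b = 0.500000001;  // same start, plus a tiny one-step-sized error

    for (int step = 1; step <= 40; ++step) {
        a = Step(a);
        b = Step(b);
        if (step % 10 == 0)
            std::printf("step %2d: divergence = %.2e\n", step, std::fabs(a - b));
    }
    // The gap grows roughly exponentially, not linearly: a small one-step
    // error soon dominates, so "5 m/s error for 10 s" does not mean "50 m off".
    return 0;
}
```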


I hope that made some kind of sense. If not, I will try to give a better example.
I should probably consider writing something up about this to make it more clear what guarantees we can and cannot give related to error.



cavok84

  • Posts: 18
Re: Stellar age issue [KNOWN]
« Reply #9 on: March 03, 2017, 05:07:31 AM »
That helps a lot. Thanks for the explanation; as I'm sure you suspected, I had a different concept of what the error reporting system was telling me.

I imagine that when simulating objects at orbital distances, this is, as you stated, close enough.

Thanks

Cavok
