
Author Topic: Coding  (Read 263906 times)

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #690 on: January 19, 2018, 07:53:44 PM »
in each of these images, one row was generated by a neural network and the other one is human generated

Jorster

  • ****
  • Posts: 37
  • karma chameleon
Re: Coding
« Reply #691 on: January 21, 2018, 09:47:56 PM »
I made a Darvince IRC Markov chain bot
Code: [Select]
import json
import markovify
import socket
import random
import time
       
server = "irc.freenode.net" # Server
channel = "##universesandbox" # Channel
botnick = "BotVince" # Your bots nick
print("Opening Brainfile")
corpus = open("darvincelines.txt").read()
print("Converting Brainfile to json")
text_model = markovify.Text(corpus, state_size=3)
model_json = text_model.to_json()
print("Success")
reconstituted_model = markovify.Text.from_json(model_json)

def ping(): # Respond to server pings
  ircsock.send("PONG :pingis\n")

def joinchan(chan): # Join a channel
  ircsock.send("JOIN "+ chan +"\n")

ircsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ircsock.connect((server, 6667)) # Here we connect to the server using the port 6667
ircsock.send("USER "+ botnick +" "+ botnick +" "+ botnick +" :Jister - A bot made by Jorster on USF\n") # user authentication
ircsock.send("NICK "+ botnick +"\n") # here we actually assign the nick to the bot

joinchan(channel) # Join the channel using the functions we previously defined

while 1: # Be careful with these! it might send you to an infinite loop
  ircmsg = ircsock.recv(2048) # receive data from the server
  ircmsg = ircmsg.strip('\n\r') # removing any unnecessary linebreaks.
  ircmsg = ircmsg.lower() # Converting to lowercase for easier finding of kol
  print(ircmsg) # Here we print what's coming from the server
  if ircmsg.find("ping :") != -1: # If the server pings us then we've got to respond!
    ping()
  if ircmsg.find("##universesandbox") != -1: # Make sure we're getting a message from the irc channel
    if ircmsg.find("darvince") != -1:
      if random.randint(1,10) == 1:
        try:
          markov = str(reconstituted_model.make_short_sentence(340))
        except:
          try:
            markov = str(reconstituted_model.make_short_sentence(340))
          except:
            try:
              markov = str(reconstituted_model.make_short_sentence(340))
            except:
              markov = "markov failed yell at jorster"
        ircsock.send("PRIVMSG " + channel + " :" + markov + "\n")
    if ircmsg.find("!dar") != -1:
      print("!dar found")
      try:
        markov = str(reconstituted_model.make_short_sentence(340))
      except:
        try:
          markov = str(reconstituted_model.make_short_sentence(340))
        except:
          try:
            markov = str(reconstituted_model.make_short_sentence(340))
          except:
            markov = "markov failed yell at jorster"
      print(markov)
      ircsock.send("PRIVMSG " + channel + " :" + markov + "\n")
It was a fun experiment in markov chains and irc bots

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #692 on: January 22, 2018, 06:19:04 PM »
some thoughts on a next gen irc bot

gen 0: old old ubv, using some weird-ass bespoke key word search i made up. could repeat previously said text. actually was quite coherent

gen 1: trigram model (old ubv and current jister). attention span of 2 words max.

gen 2: standard lstm model with single line attention span. would also expect much lower perplexity than trigrams

gen 3: both a line and paragraph lstm model, where the paragraph level lstm model produces some context vector which is fed in to the line lstm. this would allow bot to have a long attention span

gen 4: gen 3 + pointer networks attention mechanism, which would be very useful for irc, due to the limited amount of training data.

gen 5: gen 4 + gan loss, cause why not.

references
thought vectors: https://arxiv.org/abs/1506.06726
pointer networks: https://arxiv.org/abs/1506.03134
seqgan: https://arxiv.org/abs/1609.05473
« Last Edit: January 22, 2018, 06:25:11 PM by vh »

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #693 on: March 04, 2018, 07:04:31 PM »


x-axis: log2(hours i think some task will take)
y-axis: log2(hours it actually took)

performed gaussian process regression. inner blue region is 1 std, outer blue region is 2 stds

red points are actual tasks from my collected dataset-- slightly jittered so you can see overlapping points. note that the majority of tasks i predict as taking 0.5, 1, or 2 hours. note the incredible spread of times taken for "1 hour" tasks -- predicting time is hard.

although it's not visible from this plot -- gaussian processes can model heteroskedastic trends -- so if i became really accurate at predicting tasks which took less than 1 hour but really bad at tasks which took more, then the predictive intervals would vary in thickness along the horizontal axis
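the setup above can be sketched with scikit-learn -- toy data standing in for the real task log, and the kernel choice is an illustrative assumption, not the original code:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# toy stand-in for the task log: log2(predicted hours) -> log2(actual hours)
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 0.0, 1.0], size=40)   # mostly "0.5, 1, or 2 hour" predictions
y = x + rng.normal(0, 0.8, size=40)         # noisy actual times

# RBF kernel for the trend, WhiteKernel to model observation noise
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(x.reshape(-1, 1), y)

grid = np.linspace(-2, 3, 50).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)  # 1-std / 2-std bands come from std
```

plotting mean +/- std and mean +/- 2*std over the grid gives the inner and outer blue regions.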


blotz

  • Formerly 'bong'
  • *****
  • Posts: 813
  • op pls
Re: Coding
« Reply #694 on: March 07, 2018, 01:21:20 PM »
welp
i might've mostly used someone else's code as a template and this picture may have been really easy and i might've only had 3 letters to pick from and i might've used the simplest machine learning algo in existence but:

Darvince

  • *****
  • Posts: 1842
  • 差不多
Re: Coding
« Reply #695 on: March 07, 2018, 02:09:16 PM »
in each of these images, one row was generated by a neural network and the other one is human generated
i wonder how it would look if you asked the human brain to generate the numbers instead of the human hand

atomic7732

  • Global Moderator
  • *****
  • Posts: 3849
  • caught in the river turning blue
    • Paladin of Storms
Re: Coding
« Reply #696 on: March 15, 2018, 12:27:30 AM »
(a quick note: paladinofstorms.net/cyclone == tropicalcyclonedata.net, the folder is literally the same)

dae have any idea what's going on

[07:17]   Kalassak   http://paladinofstorms.net/cyclone/adv/view.php
[07:17]   Jister   title: Tropical Cyclone Tracking Data: page 1
[07:17]   Kalassak   http://tropicalcyclonedata.net/adv/view.php
[07:17]   Jister   title: Tropical Cyclone Tracking Data: page 1
[07:20]   Darvince   the second one has no CSS
[07:20]   Darvince   for some reason
[07:20]   Kalassak   yeah
[07:21]   Kalassak   i can't figure that one out
[07:21]   Kalassak   interesting
[07:21]   Kalassak   if i link to http://paladinofstorms.net/cyclone/adv/tables.css
[07:21]   Jister   type: text/css, size: 624 bytes
[07:21]   Kalassak   it works
[07:21]   Kalassak   but
[07:22]   Kalassak   why can't i just link to tables.css
[07:22]   Kalassak   it's in the same
[07:22]   Kalassak   fucking
[07:22]   Kalassak   folder
[07:23]   Kalassak   yeah
[07:23]   Kalassak   the link
[07:23]   Kalassak   <a href="search.php">
[07:23]   Kalassak   takes you to search.php
[07:23]   Kalassak   which is in the same folder
[07:23]   Kalassak   that works
[07:23]   Kalassak   why doesn't <link rel="stylesheet" type="text/css" href="tables.css" />
[07:23]   Kalassak   take you to tables.css
[07:23]   Kalassak   which is in the same folder
[07:24]   Kalassak   https://i.gyazo.com/0dd62a22a8f61600400bc2297ed38aa3.png
[07:24]   Kalassak   wow look at that
[07:24]   Kalassak   they're in the same folder
[07:25]   Kalassak   <link rel="stylesheet" type="text/css" href="http://tropicalcyclonedata.net/adv/tables.css" />
[07:25]   Kalassak   doesn't work
[07:25]   Kalassak   http://tropicalcyclonedata.net/adv/tables.css look what's here
[07:25]   Jister   type: text/css, size: 624 bytes
[07:25]   Kalassak   the fucking file
[07:26]   Kalassak   i guess we'll use the way that works
[07:26]   Kalassak   since it works
[07:26]   Kalassak   by linking to paladinofstorms
[07:26]   Kalassak   which makes no fucking sense

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #697 on: March 17, 2018, 11:04:17 AM »

this time, with proper h e t e r o s k e d a s t i c i t y
and no, it's not a linear trend, it just looks like it (cause it's pretty close)


vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #698 on: March 21, 2018, 11:43:33 AM »
it's good to have a simple "dump this object to disk" and "read this file as an object" function in every programming language, so here's mine for java

Code: [Select]
import java.io.*;

public class Database <T extends Serializable> {
   
    private final String filename;
   
    public Database(String filename) {
        this.filename = filename;
    }
   
    public void save(T obj) {
        try {
            FileOutputStream fos = new FileOutputStream(filename);
            ObjectOutputStream oos = new ObjectOutputStream(fos);
              oos.writeObject(obj);
              oos.close();
              fos.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
   
    public T read() {
        File f = new File(filename);
        if(!f.exists() || f.isDirectory()) {
            return null;
        }
       
        T obj = null;
        try {
            FileInputStream fis = new FileInputStream(filename);
            ObjectInputStream ois = new ObjectInputStream(fis);
           
            @SuppressWarnings("unchecked")
            T tmp = (T) ois.readObject();
            obj = tmp;
           
            ois.close();
            fis.close();
        } catch (IOException e) {
             e.printStackTrace();
        } catch(ClassNotFoundException e) {
             e.printStackTrace();
        }
        return obj;
    }
}

in python pickling is builtin. why can't you be more like python, java
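for comparison, the same "dump object to disk / read file as object" pair in python's built-in pickle (a sketch; filenames are illustrative):

```python
import os
import pickle

def save(filename, obj):
    # dump this object to disk
    with open(filename, "wb") as f:
        pickle.dump(obj, f)

def read(filename):
    # read this file as an object; mirror the java null-on-missing-file behaviour
    if not os.path.isfile(filename):
        return None
    with open(filename, "rb") as f:
        return pickle.load(f)

save("db.bin", {"a": 1})
print(read("db.bin"))  # -> {'a': 1}
```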
« Last Edit: March 21, 2018, 11:53:38 AM by vh »

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #699 on: March 22, 2018, 08:57:36 AM »
here's my famously complex backup script which i run once whenever i remember and upload to dropbox

Code: [Select]
cat /dev/sda3 | gpg --symmetric --output backup.gpg
i should probably back up the entire /dev/sda while i'm at it, since my boot sector is in /dev/sda4. it's not technically important but it would be a pain to have to set that up again

i did consider compression before encryption but note that
1. there are attacks that exploit compressing data before you encrypt it -- so you have to be very careful
2. most of my drive contains either many small files which take up little space and are not worth compressing, or a small number of large files -- usually binary data, images, or videos -- which have either already been compressed using image/video compression algs that do better than any general purpose compression, or, in the case of binary data files, probably can't be compressed much, if at all. so i suspect compressing wouldn't actually shrink the size of the backup
« Last Edit: March 22, 2018, 09:02:27 AM by vh »

Gurren Lagann TSS

  • *****
  • Posts: 120
Re: Coding
« Reply #700 on: March 22, 2018, 09:31:49 AM »
Code: [Select]
public class Database <T extends Serializable> {
   
    private final String filename;
   
    public Database(String filename) {
        this.filename = filename;
    }
   
    public void save(T obj) {
        try {
            FileOutputStream fos = new FileOutputStream(filename);
            ObjectOutputStream oos = new ObjectOutputStream(fos);
              oos.writeObject(obj);
              oos.close();
              fos.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
   
    public T read() {
        File f = new File(filename);
        if(!f.exists() || f.isDirectory()) {
            return null;
        }
       
        T obj = null;
        try {
            FileInputStream fis = new FileInputStream(filename);
            ObjectInputStream ois = new ObjectInputStream(fis);
           
            @SuppressWarnings("unchecked")
            T tmp = (T) ois.readObject();
            obj = tmp;
           
            ois.close();
            fis.close();
        } catch (IOException e) {
             e.printStackTrace();
        } catch(ClassNotFoundException e) {
             e.printStackTrace();
        }
        return obj;
    }
}

Code: [Select]
cat /dev/sda3 | gpg --symmetric --output backup.gpg
wut

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #701 on: April 11, 2018, 02:21:18 AM »
thinking about how resources should be allocated on a cluster. specifically gpu resources

proposal: users use an artificial currency to buy or sell gpu time on an open market.

1. every user gets a baseline number of gpu-hours (GPUH) per hour/day. the reason we hand out GPUH directly instead of handing out credits is because it avoids the need for the sysadmins to act as a market maker.

2. by default, users have a recurring sell order which sells their GPUH for dirt cheap on an open exchange. users can remove this order and configure their own buy/sell orders at will.

3. obviously you need credits to buy GPUH on the market. the nth user starts off by being given 1/n of the credits in the market, which the admins collect using some form of taxation (maybe tax stored credits at 1%/day, or tax transactions with a 1% fee). but the overall monetary supply is constant.

4. whenever the computing resources increase (a new node with gpus added to the cluster), all users get a bonus paycheck to spend

5. in times of little activity, everyone will be selling their GPUH on the market, and the price will drop. before a paper deadline, when everyone wants GPUH, the prices will skyrocket. this is reasonable and efficient behaviour, rewarding users who make the effort to help everyone out by using GPUs when they are least needed.

6. of course, users are not obligated to participate in the market at all, since they may simply use their own GPU hours as they would otherwise.

7. want to do some long term planning? need to run an experiment for 1 week and don't want to be surprised by GPUH price spikes? no problem. introducing the GPUH futures markets, where you can lock in GPUH at a fixed price ahead of time.

The Major Issue

despite the fact that GPUH is an asset which allows gpu access on the cluster, it can't possibly guarantee access on the cluster (consider, for example, the case where all 50 users decide to spend their GPUH at once on a cluster with only 40 gpus).

there seem to be a lot of ad hoc tricks you can do here (perhaps the admins can buy back GPUH at a high price to prevent the server from being oversubscribed), or maybe something like first-come-first-serve could work, but all of those seem like sloppy, fragile fixes, stemming from the fact that these GPUH are not *pinned* to a time slot

----------------

8. Pinned GPU hours (PGPUH)
in this scenario, instead of there just being one asset (GPUH), a new type of asset is created hourly. so if there are 100 users and 10 gpus, then a user will get 0.1 of a march-18th-5-am GPU hour, 0.1 of a march-18th-6-am GPU hour, and so on.

obviously users will want to sell their GPU unless they want to run an experiment, in which case they'd spend enough money to buy one full PGPUH (out of 10 available). because each time slot is a different asset, you don't run into oversubscription problems

9. if you want to run a 48h experiment, it's pretty bad to have to scramble to buy 1 full PGPUH every hour, or risk getting your experiment terminated. when running an experiment, you'll want to buy a block of GPU allocations. this can be fairly easily automated

10. the GPU hogging problem.
suppose 90% of your users are inactive, but also disabled the autosell order. how do you cope with the fact that 90% of your PGPUH are being distributed to users who aren't participating in the market and no one else can use their PGPUH?

one answer: automatically sell PGPUH after 3 days of inactivity or something
another answer: raise the amount of PGPUH distributed by the server so that GPUs may be oversubscribed in theory, but rarely so in practice

11. the user interface
ok all this market nonsense is pretty complicated to the end user, who usually wants nothing more than the ability to compute things without trouble. so we've got to automate it somehow...

first, let's say that PGPUH are distributed maybe 2 months ahead of time, with the assumption that no one wants to plan their experiments down to the hour more than 2 months ahead of time.

second, to allow people to plan ahead, an automatic sell order for all PGPUHs is issued by every user. so for example, if PGPUH on april 3 is worth 3 credits historically, a sell order will be automatically issued for that.

third, users can buy blocks of PGPUH ahead of time. so if you plan on running experiments in 3 days, you can run a script which will automatically issue an order for that.

finally, a user who doesn't want to participate in any of this nonsense (mainly point 3), can just run their GPUs at any time, and purchase unused PGPUH at runtime at the current market value. this is basically how the current system works, except instead of being able to do this forever, you have to be able to afford it of course, which is more fair.

12. if everything is taken care of automatically, how will accurate pricing be done? for example, if everyone wants to use GPUs simultaneously, and 80 of 100 users have sold their 0.1 PGPUH for the hour, leaving 2 free GPUs, will the pricing actually go up? what if only 1 out of 100 users want to use a gpu for the hour. will the price of the next PGPUH fall to near 0, as expected of an efficient system? of course this is true in a real market with traders, but we are trying to automate most of it so researchers don't have to waste their time.

answer: each user will have a customizable hard-sell price, where if GPU prices reach a certain level, they are willing to kill their currently running experiments and sell their GPUH to another user. this will make it so that you can almost always get a gpu by paying enough

answer: a reasonable hard-coded auto-sell order based on remaining gpu capacity and historical data will further result in reasonable prices. users can customize this too

answer: as an hour starts, the value of a PGPUH gradually decreases to 0. so the auto-seller takes care of things, by adjusting the auto-sell price to be able to consistently sell off unused PGPUH a good fraction of the time. this should converge to a level where if only one user wants a GPU, all the auto-sellers would be willing to sell for pretty cheap.


vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #702 on: April 11, 2018, 02:46:45 AM »
what if 3 users want to share 2 gpus -- a worked example

suppose there are 3 users, A, B, and C, each starts with 10 credits, and owns 0.7, 0.7, and 0.6 of the next PGPUh. and suppose they all want to run a job at 5 pm today. what happens?

well first suppose the historical price is 1 credit per PGPUH, which means there are already 3 autosell orders on the market for 0.7, 0.7, and 0.6 credits. furthermore, suppose A, B, and C have their hard-sell price at 2, 5, and 8 credits resp, so they're willing to sell their PGPUH no matter what for 1.4, 3.5, and 4.8 credits resp.

then, suppose A, B, and C are willing to pay 1, 1.5, and 2 credits to run the job in that one hour block.

so here's the order book, ordered by the issue date

1. A: sell @ 2, all
2. B: sell @ 5, all
3. C: sell @ 8, all
4. A: sell @ 1, amount 0.7
5. B: sell @ 1, amount 0.7
6. C: sell @ 1, amount 0.6
-- (below: didn't happen yet) --
7. A: buy @ 1, amount 0.3
8. B: buy @ 1.5, amount 0.3
9. C: buy @ 2, amount 0.4

when order 7 comes out, that cancels order 4, and suppose A buys from B, so the book looks like this

1. A: sell @ 2, all
2. B: sell @ 5, all
3. C: sell @ 8, all
5. B: sell @ 1, amount 0.4
6. C: sell @ 1, amount 0.6
-- (below: didn't happen yet) --
8. B: buy @ 1.5, amount 0.6
9. C: buy @ 2, amount 0.4

now A has 1 PGPUH in the bank and 9.7 credits, and B has 0.4 PGPUH and 10.3. B adjusts order amount up from 0.3 to 0.6, cause it needs to buy more now.

next B executes. i don't know how exchanges actually do this, but i'll go with the convention that the trade price is determined by the first order, so C sells at 1 instead of 1.5, leaving us with

1. A: sell @ 2, all
2. B: sell @ 5, all
3. C: sell @ 8, all
-- (below: didn't happen yet) --
9. C: buy @ 2, amount 1

B has now 1 PGPUH and 9.4 credits as well, and C has 10.6 credits

note that orders 3 and 6 were cancelled, since C can't sell anymore, and C adjusted its order amount

next, C buys from A, which basically clears the book.

2. B: sell @ 5, all
3. C: sell @ 8, all

Now A has 0 PGPUH and 11.7 credits, B has 1 PGPUH and 10.3 credits, C has 1 PGPUH and 8.6 credits, so GPUs were properly allocated.

but wait, what if C's order went first. let's rewind a bit...

1. A: sell @ 2, all
2. B: sell @ 5, all
3. C: sell @ 8, all
5. B: sell @ 1, amount 0.4
6. C: sell @ 1, amount 0.6
-- (below: didn't happen yet) --
8. C: buy @ 2, amount 0.4
9. B: buy @ 1.5, amount 0.6

C fills B's order

1. A: sell @ 2, all
2. B: sell @ 5, all
3. C: sell @ 8, all
-- (below: didn't happen yet) --
9. B: buy @ 1.5, amount 1

C ends up with 1 PGPUH and spends 0.4, B now has 10.7 credits

Now A has 1 PGPUH and 9.7 credits, B has 0 PGPUH and 10.7 credits, C has 1 PGPUH and 9.6 credits, again we have success
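the matching in the walk-through above can be sketched as a tiny order book for one time slot's PGPUH. this is my own simplification, not part of the proposal: hard-sell orders are omitted, and a trade executes at the resting order's price, per the convention used above:

```python
# minimal order book for a single asset; orders are [user, price, amount] in issue order
sells = []

def place_sell(user, price, amount):
    sells.append([user, price, amount])

def place_buy(buyer, limit, amount):
    # issuing a buy cancels the buyer's own resting sells (order 7 cancelling order 4)
    for order in sells:
        if order[0] == buyer:
            order[2] = 0.0
    spent = 0.0
    for order in sells:
        if amount <= 0:
            break
        user, price, avail = order
        if price > limit or avail <= 0:
            continue
        take = min(amount, avail)   # fill at the resting order's price
        order[2] -= take
        amount -= take
        spent += take * price
    return spent

# the autosell orders from the example: 0.7, 0.7, 0.6 PGPUH at 1 credit each
place_sell("A", 1.0, 0.7)
place_sell("B", 1.0, 0.7)
place_sell("C", 1.0, 0.6)

cost_a = place_buy("A", 1.0, 0.3)   # A tops up to a full PGPUH, buying 0.3 from B
cost_b = place_buy("B", 1.5, 0.6)   # B then buys C's remaining 0.6 at the resting price of 1
```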


vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #703 on: April 15, 2018, 10:08:07 AM »
huh, organizing classes in OOP is actually pretty similar to solving a k-cut problem, using semantics as a heuristic

https://en.wikipedia.org/wiki/Minimum_k-cut

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #704 on: April 23, 2018, 08:41:41 AM »
so i had the sine and cosine of an angle, and i wanted to perturb that angle by some delta value. so i looked up some sin / cos addition identities on wikipedia. here's the code.
Code: [Select]
sin_delta = tf.sin(delta)
cos_delta = tf.cos(delta)
#from wikipedia
#sin(α + β) = sin α cos β + cos α sin β
#cos(α + β) = cos α cos β - sin α sin β.
sin = sin * cos_delta + cos * sin_delta
cos = cos * cos_delta - sin * sin_delta

can you spot the bug?

atomic7732

  • Global Moderator
  • *****
  • Posts: 3849
  • caught in the river turning blue
    • Paladin of Storms
Re: Coding
« Reply #705 on: April 23, 2018, 10:24:05 AM »
you are recalculating sine before calculating the cosine

mess

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #706 on: April 23, 2018, 10:42:12 AM »
gj. i was printing out sin^2 + cos^2 and getting confused about why it wasn't 1
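for the record, the fix is to compute both new values from the old ones before overwriting either -- numpy here in place of tf, same identities:

```python
import numpy as np

a = 1.2       # some angle
delta = 0.3   # the perturbation
sin, cos = np.sin(a), np.cos(a)

sin_delta, cos_delta = np.sin(delta), np.cos(delta)
# compute into temporaries so the updated sin isn't used to compute cos
new_sin = sin * cos_delta + cos * sin_delta
new_cos = cos * cos_delta - sin * sin_delta
sin, cos = new_sin, new_cos

print(sin**2 + cos**2)  # ≈ 1.0, as it should be
```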

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #707 on: April 26, 2018, 03:26:04 PM »
decided to test out some numerical integrators here. using a very simple spring motion a(x) = -x

first compare euler to "semi-implicit euler" to leapfrog. euler and semi-implicit euler are first order while leapfrog is second order (supposedly better). semi-implicit and leapfrog are symplectic (good) and euler isn't symplectic.

so first, euler vs si-euler vs leapfrog, time steps of 0.1:


so euler goes crazy pretty fast and the other two are so close to each other you almost can't tell the difference and you definitely can't tell which one is better.

so then i unironically named a function "make_god" which basically modifies an integrator into a god integrator which secretly does 100 steps at 1/100th the delta t but only reports the results from the last step. so there's a god euler (really god si euler) and a god leapfrog integrator. i made two gods just in case dividing the time-step by 100 didn't actually make them sufficiently accurate and there would still be errors. i also ramped the time-steps up to 0.5 from 0.1



so just starting out, both the gods are in such close agreement with each other you can't even see the orange line. it looks like in terms of the amplitude of the oscillations, leapfrog wins over si-euler.



very interestingly, the error accumulates in the form of precession for both si-euler and leapfrog, and the amount of precession is the same for both. after doing some quick google searches, this is supposed to be the mode of failure for symplectic integrators: they conserve a quantity which is almost energy, so obviously the error comes from elsewhere (aka phase error)

conclusion: hypothesis was confirmed that leapfrog > si euler > euler

note that the popular RK4 integrator is not symplectic and would probably blow up like the euler integrator over sufficiently long time-spans, so i didn't bother trying to implement it, which is a pain anyway

question: wait a second, i just read the wikipedia page and si euler is identical to leapfrog and the only difference is that one of them has the velocity offset by half a time-step. so are they the same integrator?

answer: as far as i can tell, pretty much. the initial starting configuration i used was x = 1, v = 1, so the si euler integrator is actually equivalent to the leapfrog integrator with starting configuration x = 1, v(-dt/2) = 1. notice this has slightly less total energy, since at time 0, velocity would be slightly less than 1, corresponding to the lesser amplitude.

so yeah, if i just initialized the si-euler configuration with slightly more energy it'd be identical to the leapfrog method, which makes me wonder why leapfrog is second order and si euler is first order.

code attached
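the attachment isn't reproduced here, but a minimal sketch of the three integrators on the spring a(x) = -x (my own version, not the attached code) looks like:

```python
def a(x):
    # simple linear spring force
    return -x

def euler(x, v, dt):
    # plain euler: both updates use the old state; not symplectic, energy grows
    return x + v * dt, v + a(x) * dt

def si_euler(x, v, dt):
    # semi-implicit euler: update v first, then x with the *new* v; symplectic
    v = v + a(x) * dt
    return x + v * dt, v

def leapfrog(x, v, dt):
    # kick-drift-kick leapfrog; symplectic and second order
    v = v + a(x) * dt / 2
    x = x + v * dt
    v = v + a(x) * dt / 2
    return x, v

def energy(x, v):
    return 0.5 * v**2 + 0.5 * x**2

x, v = 1.0, 1.0
for _ in range(10000):
    x, v = leapfrog(x, v, 0.1)
# symplectic: energy stays bounded near the initial value instead of blowing up
```

swapping in `euler` for `leapfrog` in the loop shows the blow-up: its energy multiplies by (1 + dt^2) every step on this problem.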
« Last Edit: April 26, 2018, 03:40:38 PM by vh »

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #708 on: April 26, 2018, 03:53:13 PM »
hm, but then i came up with another devious scheme to compare leapfrog and si euler. this time the spring force is nonlinear: a(x) = -sign(x) * x**2 (basically the typical -x but quadratic)

this is apparently a substantially more difficult problem and i had to turn the time steps back down to 0.1 or else even the gods started diverging.



ok so clearly si euler is messing up. let's zoom in



veeery interesting. this definitively shows the superiority of leapfrog over si euler

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #709 on: April 26, 2018, 08:38:16 PM »
based on playing around with that integrator i found some bugs in my old gravity sim code

so here it is with everything fixed: a one million body gravity simulation, where each point mass is an entire galaxy, played back at 5 quadrillion times real time

https://youtu.be/pUGAGZvvMpk

edit: eww youtube compression artifacts

Darvince

  • *****
  • Posts: 1842
  • 差不多
Re: Coding
« Reply #710 on: April 26, 2018, 11:04:33 PM »
when is now?

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #711 on: April 27, 2018, 02:03:33 AM »
near the end, seeing as the simulation starts near the beginning of the universe and runs for 16 billion years

the area of the simulation is about 6 millionths of the observable universe (assuming it was 2d)
« Last Edit: April 27, 2018, 02:10:16 AM by vh »

Bla

  • Global Moderator
  • *****
  • Posts: 1013
  • The stars died so you can live.
Re: Coding
« Reply #712 on: April 27, 2018, 02:17:53 AM »
Wow

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #713 on: May 07, 2018, 06:19:38 PM »
playing around with sounds -- pretty messy code i whipped up in 5 minutes just to try it out. sample attached.

atomic7732

  • Global Moderator
  • *****
  • Posts: 3849
  • caught in the river turning blue
    • Paladin of Storms
Re: Coding
« Reply #714 on: May 07, 2018, 06:23:05 PM »
is that a univision entry in the making i hear?

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #715 on: May 07, 2018, 06:25:56 PM »
lmao maybe. technically if i put enough work into it, i could generate sounds as varied as anything you can get out of professional software. i was thinking of doing some xenakis type stuff

Darvince

  • *****
  • Posts: 1842
  • 差不多
Re: Coding
« Reply #716 on: May 07, 2018, 08:12:58 PM »
wow is ponyreiter going to be making new avant garde music

Gurren Lagann TSS

  • *****
  • Posts: 120
Re: Coding
« Reply #717 on: May 08, 2018, 02:34:31 PM »
NERD

vh

  • formerly mudkipz
  • *****
  • Posts: 1140
  • "giving heat meaning"
Re: Coding
« Reply #718 on: May 08, 2018, 02:43:00 PM »
i ain't no nerd i'm a MUI nerd

Darvince

  • *****
  • Posts: 1842
  • 差不多
Re: Coding
« Reply #719 on: May 08, 2018, 05:08:43 PM »
>calling someone a nerd on an astronomical simulation program forum