ComicOstrich Forum

Technobabble!
forum.comicostrich.com Forum Index -> PodWarp 1999
dph_of_rules
Ostrich


Joined: 20 Dec 2007
Posts: 359
Location: theoretically, and only theoretically, somewhere in this universe

Posted: Sun Jul 06, 2008 11:27 pm

Arioch - The same process works for both. Put things into scientific notation, x.yyyy times 10 to whatever power, whether positive or negative. Be careful about what you say has a 0% chance of happening: there's a big difference between a 0% and a 10^-10 % chance. Repeat that second experiment enough times (say 10^9 trials) and the chance of the non-zero-probability event happening at least once climbs to about .1%.
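
A quick sketch of that arithmetic, assuming independent trials (the 10^-12 per-trial chance, i.e. a 10^-10 % probability, and the 10^9 repetitions are illustrative values):

Code:
import math

p = 1e-12     # per-trial chance, i.e. 10^-10 percent
n = 10**9     # number of repetitions

# P(at least one occurrence) = 1 - (1 - p)^n,
# computed via log1p/expm1 to dodge floating-point round-off
prob = -math.expm1(n * math.log1p(-p))
print(prob)   # ~9.995e-04, about 0.1%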
_________________
Whatever happened to simplicity?
tbowl
Hatching


Joined: 09 May 2007
Posts: 57

Posted: Mon Jul 07, 2008 11:30 am

Yeah, I can see how one path of this argument goes down the .9999(repeating) = 1.0 debate.

I guess I work in quality, though, and even we work in tenths of thousandths, because among the suppliers we do business with, 99.98% quality is still pretty bad, as stupid as that sounds... we're in the truck-building business, and lots of parts go into a truck. Sad You don't hear us saying, "Their quality is so good, it's just too hard to calculate!" Hehe. I might start saying that now, though. Razz
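
To put a number on why 99.98% still hurts when lots of parts go into a truck, a rough sketch (the 3,000-part count is a made-up figure, purely for illustration):

Code:
per_part_quality = 0.9998    # 99.98% of parts are good
parts_per_truck = 3000       # assumed part count, illustrative only

# chance that every single part in a given truck is good
p_clean_truck = per_part_quality ** parts_per_truck
print(f"{p_clean_truck:.1%}")   # ~54.9% of trucks with zero defective parts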
Adam_Y
Egg


Joined: 02 Jun 2008
Posts: 32

Posted: Wed Jul 09, 2008 7:48 pm

dph_of_rules wrote:
The problem isn't that you can't calculate those large numbers, but rather how much accuracy you can give.

Ooh, really? How many bits is a million!? It's 1,000,000 * 999,999 * 999,998 * 999,997 * 999,996 * and so on until you reach 1. I don't know what that number is, but you sure as hell will overflow whatever processor you are using. It's just really odd, but getting a number as large as one million factorial is within the realm of statistical calculations.
PS:
The exclamation point is mathematical notation, just in case you were wondering what the hell that random ! was for.
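
For the curious, the size of 1,000,000! can be estimated without multiplying anything out, via the log-gamma function (a sketch; lgamma(n + 1) = ln(n!)):

Code:
import math

# ln(1,000,000!) = lgamma(1,000,001); divide by ln(2) to get bits
bits = math.lgamma(1_000_000 + 1) / math.log(2)
print(f"about {bits:,.0f} bits")   # ~18.5 million bits, i.e. ~2.2 MB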
Quote:

I guess I work in quality, though, and even we work in tenths of thousandths, because among the suppliers we do business with, 99.98% quality is still pretty bad, as stupid as that sounds...

Well, it depends on what 99.98% quality means. I think the best statistic is the odds that Earth will be consumed by a man-made black hole.
Chaos
Egg


Joined: 12 May 2008
Posts: 17

Posted: Wed Jul 09, 2008 8:48 pm

It seems a few people are worried about overflows. This is only really a major problem if you use the native fixed-width integer or double-precision floating-point types. You can bypass these entirely and write a set of algorithms for manually crunching massively large or tiny numbers. This moves the limitation from CPU registers (usually 64 bits) to main memory (34,359,738,368 bits, or 32 gigabits, given 4GB of unallocated RAM).

To give some idea of scale, a 64-bit integer can represent 18,446,744,073,709,551,616 distinct values (2^64). Every extra bit you have available doubles the size of the number you can represent. The number of atoms in the universe (around 4x10^80) is a barely noticeable fluctuation in a 32-gigabit number. To be honest, I can't even comprehend it.

Still... Some smart-alec will always come up with some number that's too small/large to work with accurately... Until we have the infinite improbability drive we're stuck with these clunky old finite calculations. Smile
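
As a concrete illustration, Python's built-in integers already do this register-bypassing, memory-limited arithmetic, so this is a sketch of the idea rather than a hand-rolled bignum library:

Code:
import math

n = math.factorial(100_000)   # computed exactly, no overflow
print(n.bit_length())         # ~1,516,705 bits, far beyond any CPU register
print(n % 10**20)             # last 20 digits, still exact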
Adam_Y
Egg


Joined: 02 Jun 2008
Posts: 32

Posted: Thu Jul 10, 2008 4:02 pm

Chaos wrote:
It seems a few people are worried about overflows. This is only really a major problem if you use the native fixed-width integer or double-precision floating-point types. You can bypass these entirely and write a set of algorithms for manually crunching massively large or tiny numbers. This moves the limitation from CPU registers (usually 64 bits) to main memory (34,359,738,368 bits, or 32 gigabits, given 4GB of unallocated RAM).

It doesn't counteract the fact that the person who said this probably spends a significant amount of time doing these calculations and probably knew this already.
Quote:
Still... Some smart-alec will always come up with some number that's too small/large to work with accurately... Until we have the infinite improbability drive we're stuck with these clunky old finite calculations. Smile

Finite calculations won't work either, though. And it isn't smart-alecky, either. Think of it this way: how many bits are being sent between your computer and a server? It's in the range of a lot.
Chaos
Egg


Joined: 12 May 2008
Posts: 17

Posted: Fri Jul 11, 2008 12:28 am

Adam_Y wrote:
Chaos wrote:
It seems a few people are worried about overflows. This is only really a major problem if you use the native fixed-width integer or double-precision floating-point types. You can bypass these entirely and write a set of algorithms for manually crunching massively large or tiny numbers. This moves the limitation from CPU registers (usually 64 bits) to main memory (34,359,738,368 bits, or 32 gigabits, given 4GB of unallocated RAM).

That's really nice. It doesn't counteract the fact that the person who said this probably spends a significant amount of time doing these calculations and probably knew this already.

I said the above. It is not a quote or paraphrase of anything I can recall reading. I'm not offended or surprised that you thought otherwise. I do take mild offence at your tone, however.

Your second sentence boggles my mind. By giving out my knowledge I'm trying to stop other people who already know from knowing?

Look, all I really intended to convey was this: there is effectively no limit to the size or precision of the finite numbers you can work with; it's simply a question of how much time (or processing power) you've got to spend and how much storage space you have to work with. I simply doubted that anyone would accept such a statement without some form of evidence. Apparently all I get in return is attempted sarcasm, though. EDIT: Which I now see that you've edited.

Adam_Y wrote:
Finite calculations won't work either though.

As opposed to...?

Adam_Y wrote:
Think of it this way how many bits are being sent between your computer and a server. It's on the range of a lot.

Do you mean counting the bits? If so, that's not really a very good example. By my calculation it'd take well over 500 years to overflow a single standard 64 bit unsigned integer (assuming a direct gigabit ethernet connection at maximum theoretical utilisation with 100% uptime and no packet loss).
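
The arithmetic behind that figure, for anyone who wants to check (taking gigabit as 10^9 bits per second):

Code:
bits = 2**64                   # capacity of a 64-bit unsigned counter
rate = 1e9                     # gigabit ethernet, bits per second
years = bits / rate / (3600 * 24 * 365.25)
print(f"{years:.0f} years")    # ~585 years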

Counting every bit on every piece of hardware in the world would give a much larger number. However, it would still be an insignificant number relative to things on a universal scale.

If per chance your intention was to compare gigabit ethernet (for example) with my 32-gigabit-wide integer example... that is invalid as a direct comparison. You are confusing a bit stream with a bit string. I think the only real conclusion you can draw is that it would take 32 seconds to transfer a single such number from computer to computer (at absolute best).

Quote:
I think the best statistic is the odds that Earth will be consumed by a man-made black hole.

This is perhaps a good example of odds that are impossible to calculate with any degree of accuracy. The above statement cannot be defined mathematically at the present time without making enormous assumptions which render any answer you might attain almost (if not) entirely meaningless.
dph_of_rules
Ostrich


Joined: 20 Dec 2007
Posts: 359
Location: theoretically, and only theoretically, somewhere in this universe

Posted: Fri Jul 11, 2008 3:48 am

Here's a statistic which should be difficult. We all know that the Earth is constantly being bombarded with particles from the sun. How long would it take for that bombardment of particles to have any significant impact on the mass of the Earth? Let's define significant as within measuring capacity, not necessarily even a .01% increase in mass.

Or a deeper sci-fi question: what would be the required volume to hold .001 milligrams of light?
_________________
Whatever happened to simplicity?
Arioch
Egg


Joined: 05 May 2008
Posts: 43

Posted: Fri Jul 11, 2008 9:48 am

dph_of_rules wrote:
Here's a statistic which should be difficult. We all know that the Earth is constantly being bombarded with particles from the sun. How long would it take for that bombardment of particles to have any significant impact on the mass of the Earth? Let's define significant as within measuring capacity, not necessarily even a .01% increase in mass.

In order to calculate this answer, one only needs to know the rate at which Earth is gaining mass from the solar wind. One source puts the solar wind output at 6.7 billion tons per hour, but I haven't found any figures on how much of that lands on Earth. You could try to make an estimate based on the diameter of Earth and its distance from the sun, but this would be an overestimate, since Earth's magnetic field deflects a significant percentage of the charged particles in the solar wind... still, it gives an upper limit.

Using an Earth diameter of 12,756 km, and orbital radius of 149,597,871 km, if we assume that solar wind radiates evenly in all directions (which it doesn't), then the disc of the Earth should catch .0000000004544 of the solar wind. Assuming the 6.7 billion tons per hour figure is correct and in metric tons, and ignoring deflection by the Earth's magnetic field, then a maximum of 3.0446 tons per hour of solar wind could strike the Earth.

.01% of Earth's mass = (5.9742×10^24 kg × .0001) = 5.9742×10^20 kg. At 3.0446 tons (3,044.6 kg) per hour, accumulating that takes about 2.24×10^13 years.

So your minimum is roughly 22 trillion years. And since the Sun is expected to burn out in about 4-6 billion years, this threshold will never be reached.
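
A sketch of that calculation using the figures above (the 6.7-billion-tons-per-hour output is the cited source's number, and magnetic deflection is ignored):

Code:
import math

r_earth = 12_756e3 / 2        # Earth's radius in metres
r_orbit = 149_597_871e3       # orbital radius in metres

# fraction of an isotropic solar wind intercepted by Earth's disc
frac = (math.pi * r_earth**2) / (4 * math.pi * r_orbit**2)   # ~4.544e-10

capture = 6.7e9 * 1000 * frac       # ~3,045 kg of solar wind per hour
target = 5.9742e24 * 1e-4           # 0.01% of Earth's mass, in kg
years = target / capture / (24 * 365.25)
print(f"{years:.3e} years")         # ~2.24e13, i.e. about 22 trillion years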

dph_of_rules wrote:
Or a deeper sci-fi question: would the required volume to hold .001 milligrams of light?

Since photons don't have a volume (or, for that matter, a rest mass), this is a meaningless question.
_________________
Jim Francis


Last edited by Arioch on Fri Jul 11, 2008 5:57 pm; edited 5 times in total
tbowl
Hatching


Joined: 09 May 2007
Posts: 57

Posted: Fri Jul 11, 2008 11:37 am

Quote:
Here's a statistic which should be difficult. We all know that the Earth is constantly being bombarded with particles from the sun. How long would it take for that bombardment of particles to have any significant impact on the mass of the Earth? Let's define significant as within measuring capacity, not necessarily even a .01% increase in mass.


Hm. When the Earth grows in mass from space particles, is it adding to the atmosphere or to the ground? Assuming it isn't a giant state-sized meteor that lands out in the Pacific... hmm.

Quote:
Or a deeper sci-fi question: what would be the required volume to hold .001 milligrams of light?


I have noooooooooo clue about photons. I can't even begin to understand that. I would probably put some air in something flat and shrink it down to where I can't pass light through it anymore except a tiny pinpoint, and measure it that way? Would that even work? Hehehe.
dph_of_rules
Ostrich


Joined: 20 Dec 2007
Posts: 359
Location: theoretically, and only theoretically, somewhere in this universe

Posted: Fri Jul 11, 2008 5:53 pm

OK, I guess you're right that photons have no mass. http://imagine.gsfc.nasa.gov/docs/ask_astro/answers/960731.html
_________________
Whatever happened to simplicity?
Adam_Y
Egg


Joined: 02 Jun 2008
Posts: 32

Posted: Sat Jul 12, 2008 2:06 pm

Chaos wrote:

Look, all I really intended to convey was this: there is effectively no limit to the size or precision of the finite numbers you can work with; it's simply a question of how much time (or processing power) you've got to spend and how much storage space you have to work with.

That's wrong, though. You will always introduce errors when converting from analog to digital and back again. Mathematically speaking, you need an infinite number of bits to reduce the quantization error to zero, which is physically impossible.
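
A minimal sketch of that point: for a unit full-scale signal the worst-case quantization error is half of one step, so each extra bit halves the error but never zeroes it:

Code:
# worst-case quantization error for a unit full-scale signal:
# half of one step, i.e. 1 / 2^(bits + 1)
for bits in (8, 16, 24, 32):
    print(bits, 1 / 2**(bits + 1))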
Quote:
Do you mean counting the bits? If so, that's not really a very good example. By my calculation it'd take well over 500 years to overflow a single standard 64 bit unsigned integer (assuming a direct gigabit ethernet connection at maximum theoretical utilisation with 100% uptime and no packet loss).

Nope. Try taking the factorial of a billion bits. Mathematically, that means multiplying every single number between 1 and a billion. The problem I'm thinking of is the probability that a few of the bits that were sent carry the wrong information.
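
A sketch of that probability under the usual independence assumption (the 10^-12 per-bit error rate is illustrative only); keeping the binomial coefficient's factorials in log space via lgamma sidesteps the overflow entirely:

Code:
import math

n = 10**9    # bits sent
p = 1e-12    # assumed per-bit error rate, illustrative only
k = 3        # exactly three corrupted bits

# log C(n, k) = lgamma(n+1) - lgamma(k+1) - lgamma(n-k+1)
log_c = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
log_prob = log_c + k * math.log(p) + (n - k) * math.log1p(-p)
print(math.exp(log_prob))   # ~1.66e-10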
Quote:
As opposed to...?

Stirling's approximation.
Chaos
Egg


Joined: 12 May 2008
Posts: 17

Posted: Sat Jul 12, 2008 10:17 pm

Adam_Y wrote:
Chaos wrote:
Look, all I really intended to convey was this: there is effectively no limit to the size or precision of the finite numbers you can work with; it's simply a question of how much time (or processing power) you've got to spend and how much storage space you have to work with.

That's wrong, though. You will always introduce errors when converting from analog to digital and back again. Mathematically speaking, you need an infinite number of bits to reduce the quantization error to zero, which is physically impossible.

"Wrong"? That doesn't refute my statement at all! Confused

Of course we can't completely solve problems which are asymptotic, uncertain or immeasurable in some way.

You might like to note that I never stated otherwise.

Adam_Y wrote:
Quote:
Do you mean counting the bits? If so, that's not really a very good example. By my calculation it'd take well over 500 years to overflow a single standard 64 bit unsigned integer (assuming a direct gigabit ethernet connection at maximum theoretical utilisation with 100% uptime and no packet loss).

Nope. Try taking the factorial of a billion bits. Mathematically, that means multiplying every single number between 1 and a billion. The problem I'm thinking of is the probability that a few of the bits that were sent carry the wrong information.

Right. Well, for a start, factorials are used to calculate permutations. Taking the factorial of a billion elements gives you the number of possible orderings of those elements. Totally inappropriate for your problem! You need more information to attain an accurate probability (like the factors which potentially induce this "wrong information" you speak of), but given the right information it is a very simple problem. Without that information you might as well be trying to solve your "man-made black hole" problem for all the useful information you'd gain.

FYI: It'd be very, very difficult to work at bit-level anyway, due to the nature of current network hardware and software stacks. You could take measurements of loss/error at a packet level at best. Or make your own hardware/software (essentially just a glorified cable-tester).

Adam_Y wrote:
Quote:
As opposed to...?

Stirling's approximation.

And which part of Stirling's approximation isn't a finite calculation? Shocked
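
For reference, Stirling's approximation is itself a short, finite calculation; a sketch comparing it against the exact log-factorial:

Code:
import math

def stirling_ln_factorial(n):
    # ln(n!) ~ n*ln(n) - n + 0.5*ln(2*pi*n)
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

for n in (10, 1_000, 10**9):
    exact = math.lgamma(n + 1)   # ln(n!) computed directly
    print(n, stirling_ln_factorial(n), exact)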
tbowl
Hatching


Joined: 09 May 2007
Posts: 57

Posted: Tue Jul 15, 2008 9:37 am

Even with quantum computers we couldn't work at the 'bit' level. We'd have to have a computer that operated natively in states where whole chunks could be on and other chunks could be off, so it'd have something to do the math with, instead of using the bits through a program, and the program using code?

Am I closer to the conversation yet? Confused

I mean, it'd be okay to do the permutations in quantum, but if you wanted to work in whole numbers, like a 0, you'd have a bit that's off, then other bits reminding you that that is the bit that's supposed to be off. Then you'd want to add .000000000000000000000000000000001 to that bit...

Okay, yeah, I'm confused again. Sad
AaronLee
Egg


Joined: 27 May 2008
Posts: 25

Posted: Tue Jul 22, 2008 3:27 pm

So, in episode 21, the guys mentioned visual technobabble. They mostly covered visual interfaces (designed to look pretty, they concluded). But what happens to "no step" and "danger, hot exhaust" markings centuries in the future? Or, better yet, with alien cultures involved?

What I mean is: what sort of visual clutter will we see on vehicles, mechanisms, decals, intakes, engines... the visual technobabble of machinery has a voice all its own (just like the Banks of Blinking Lights, the table doohickey from Star Trek, and the stream of random techno-drivel that pops out of characters' mouths so regularly in science fiction shows).
Page 3 of 3