Tuesday, September 6, 2011

Beyond Metaphysics: The true human design

To all those who consider humans to be a specially equipped, ‘conscious’, soul-bearing race: this post might pinch.
I’m going to tell the story of the human being (you, me) and build an analogy as we go along, which will put forth my view on human ability and its repercussions.
The Birth :
When you were born, you were born clean: your mind ready to accept and understand facts, your memory ready to store the experiences you would have. In essence, you were born like an unprogrammed computer system, with a distinct hardware configuration but yet to be programmed.
The Experience :
The significant and insignificant experiences you have had since your birth were stored in your permanent memory as programs. Over the years, your hard drive (memory) began to store the various programs you had run in your mind (RAM), fed by input from the outside world, some consciously, some subconsciously, but every one of them invariably had an effect. At points your parents also told you how to react to certain stimuli and give the necessary output (or reaction) when the response wasn’t innate; this too was stored in your subconscious, irrespective of your acknowledgement of it.
Now these programs (or experiences) decide the kind of reaction you give to an input (stimulus) provided to you later in life: an affinity towards a certain kind of music or food, certain preferences, or other choices associated with free will. Since every significant or trivial experience is altered in many little ways for every person, each human being reacts differently to the same stimuli.
The Response :

How many varied programs you can run at a time, and how well your processor runs, is given by the hardware specification you received via the coding of your genes at birth: unchanging, though degrading slightly as you age.
Thus, even though you can optimize your software programs (thought processes) to reach a more suitable result, the limit of that optimization is determined by the hardware you were provided with.
But then you may always say: I’m CREATIVE, I have EMOTION. Yet I would stick to “you’re a processing machine”. To understand WHY, the construction of the human mind must be understood, for it is far superior to a computer’s hardware architecture.

[Image: Neuron electrical impulse flow]

The Physics Of Operation :
In terms of processing speed, the human brain is limited to roughly the megahertz range (10^6 calculations per second).
The common processor has reached GIGAHERTZ speeds (10^9 calculations per second) and higher, so we can’t match a machine processor in calculations per second. But the superiority lies in the fact that each neuron (the brain’s basic processing entity) has over a thousand times more interconnections than the transistors (the chip’s basic processing entity) on a processor.
Resulting in what we see here: if a processor runs an algorithm, each deciding comparison step in the transistors can branch into only two paths (say a YES = 1 and a NO = 0, i.e. one BIT). In the neuron, to put it simply, we have many more options per processing step, implying more paths: say YES = 1, NO = 0, MAYBE = 0.50, MAYBE YES = 0.75, MAYBE NOT = 0.25, and so on, with values assigned so as to increase the number of available BITS. Any algorithm the brain runs through the neurons can therefore branch out into billions more interconnections, as there are over 15 billion neurons.

[Image: Transistor electron flow simulation; select flow pathways]

The availability of numerous choices among a varied number of processing paths results in a superior processor. The neuron can therefore take a comparison step to a far more refined level than a bi-path processor can.


[Image: Innumerable interconnections (a complex decision step); select possible interconnections]
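The gap between two-way and many-way branching can be sketched numerically. This is a toy illustration, not neuroscience: the five-valued step and the depth of 20 are hypothetical choices, used only to show how quickly the number of distinct decision paths diverges when each step has more than two outcomes.

```python
# Toy comparison: number of distinct decision paths after `depth` steps
# when each step can branch k ways. A binary transistor-style step has
# k = 2; the post's hypothetical neuron step (yes/no/maybe/...) has k = 5.

def path_count(branches_per_step: int, depth: int) -> int:
    """Each of `depth` sequential decision steps picks one of
    `branches_per_step` outcomes, so the paths multiply."""
    return branches_per_step ** depth

depth = 20
binary_paths = path_count(2, depth)   # transistor-style branching: 2^20
neuron_paths = path_count(5, depth)   # five-valued branching:      5^20

print(f"binary paths after {depth} steps: {binary_paths:,}")
print(f"5-way  paths after {depth} steps: {neuron_paths:,}")
```

After only 20 steps the five-valued tree has roughly ninety million times as many paths as the binary one, which is the flavour of the argument above.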
The Creation :
When you find you have created something creative, the algorithm you’ve used has run one of the many thousands of available paths among the interconnections. Moreover, since there are over 15 billion neurons in the brain, the number of possible combinations is far from being exhausted; there seems to be no visible end to creativity, as the program data in the neurons keeps updating and adapting every second. If I were to define emotion on the same pretext, it would be in terms of a conflict in the decision process: one neuron generates a yes and another a no, and the minute conflicting comparison leads to a superposition of results. When this occurs, you have what you term ‘emotion’. Emotion can never be completely pure, a sure yes or a pure no, since many factors affect any one of your emotions.

The Termination :

A machine processor can be shut down as often as you like, whenever you like, by cutting the electric supply to it. A person’s mind never completely shuts down until death, unless otherwise DECLARED brain-dead. To clear or refresh the RAM you shut down the processor, but it runs the stored program again once switched back on from the ROM. In analogy, the human mind sleeps temporarily to refresh its RAM when it has saturated. Switching off the supply to the brain is something humans can never do consciously (fortunately). At most, if you fall into a coma, you hibernate.

[Image: A highly trained (coded) response]
The Vanity Of The Human Race :
Now every person’s experiences are different, and hence every stored program differs from one person to another, as mentioned before. Your distinct, special experiences give you the illusion of being different or distinct from another human. From here arises the well-layered illusion of ‘MY CONSCIOUSNESS’, ‘THE SELF’. There is no ‘self’, no separate ‘consciousness’: you are just a processor, varying due to the sum of your experiences and stored memories, which run your thought process differently for any given input as stimulus.
When the human finally shuts down, the RAM is completely cleared, just as in a processor, and the data is simply erased. We all know digital data is a form of virtual data and can be erased as easily as it is written; there is no death and no physical loss when this data is erased. Similarly, there is no ‘soul’ that can leave the body, as the ‘soul’ is a virtual creation born of this illusion of ‘self’, and it exists only as long as the program data persists in the storage medium, i.e. up until death.

And that in itself is the complete analogy behind the true human design.

Friday, August 26, 2011

Wave-Particle Duality in the 3rd Century BC?

Wave-particle duality leading to the advent of quantum mechanics in the 3rd century BCE: sounds absurd, does it not? But take a look at this picture, the ‘Yin-Yang’, and you could give it another thought.


The Yin-Yang is one of the most intriguing and influential symbols, first found in records from the Han Dynasty (206 BC to 220 AD). The sole reason behind the mysticism and intrigue surrounding it is the many possible interpretations it has in Chinese philosophy. In Taoist philosophy, the pairing of shade and light (☯), yin and yang, appears in the dàodéjīng. It is impossible to talk about yin or yang without some reference to its opposite, since yin and yang are bound together as parts of a mutual whole (i.e. you cannot have the back of a hand without the front)...
We all might know about the principles behind the whole symbol as it “represents the ancient Chinese understanding of how things work. The outer circle represents "everything", while the black and white shapes within the circle represent the interaction of two energies called "yin" (black) and "yang" (white), which cause everything to happen in a dynamic system...
While "yin" would be dark, passive, downward, cold, contracting, and weak, "yang" would be bright, active, upward, hot, expanding, and strong.
The shape of the yin and yang sections of the symbol, actually gives you a sense of the continual movement of these two energies, yin to yang and yang to yin, causing everything to happen: just as things expand and contract, and temperature changes from hot to cold.”
They therefore define how the polar opposites or seemingly contrary forces and energies are interconnected and interdependent in the natural world, and how they give rise to each other in turn. Opposites thus only exist in relation to each other.

This appears in the so-called yin-yang wuxing teaching, which was considered an “early Chinese attempt in the direction of working out metaphysics and a cosmology” (Chan 1963: 245).

Now, this yin-yang-favoured explanation claims to illuminate natural phenomena; to what extent is what surprises me. Perhaps everything from cosmology to quantum mechanics has its answers hidden within this unassumingly artful symbol?

So now that we understand the origin of the yin-yang and the whole purpose of its creation, we can digress a little further.

How many analogies are there ?

Wave-particle duality, in its essence, means that every particle exhibits both wave and particle nature in terms of its observable properties. It is the crux of the concepts behind the entire quantum mechanical theory.

Meanwhile, this duality also points out the inability of the classical concepts of ‘particle’ and ‘wave’ to fully describe the behavior of quantum-scale objects. There are many interpretations of this paradox, yet it is accepted as a fundamental property of the Universe.

[Image: Electron interference experiment: wave functions of electrons superimpose to give an interference pattern]
Present accepted scientific theory holds that all particles also have a wave nature (and vice versa). This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules; interference experiments have been carried out with fullerenes such as C-60 molecules, as shown in the figure above.
Since every particle has a wave nature, and every macro object consists of these quantum-scale objects, macro objects too should exhibit a wave nature. Louis-Victor de Broglie extrapolated exactly this in 1924, formulating the de Broglie hypothesis: all matter, and not just light, has a wave-like nature. He related wavelength (denoted λ) and momentum (denoted p) in the equation:
λ = h /p

This has been proved time and again through experiments.
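The equation is easy to evaluate numerically. The sketch below is illustrative: the electron speed and the cricket-ball mass and speed are my own example values, not anything from the post.

```python
# de Broglie wavelength λ = h / p, evaluated for two objects.

H_PLANCK = 6.626e-34    # Planck constant, J·s
M_ELECTRON = 9.109e-31  # electron rest mass, kg

def de_broglie_wavelength(mass_kg: float, speed_m_s: float) -> float:
    """Non-relativistic de Broglie wavelength: λ = h / (m · v)."""
    momentum = mass_kg * speed_m_s
    return H_PLANCK / momentum

# An electron at 1% the speed of light (safely non-relativistic):
lam_e = de_broglie_wavelength(M_ELECTRON, 3.0e6)
print(f"electron λ ≈ {lam_e:.2e} m")  # sub-nanometre: wave effects visible

# A 0.15 kg ball at 40 m/s: the wavelength is absurdly tiny, which is
# why macroscopic objects show no observable wave behaviour.
lam_ball = de_broglie_wavelength(0.15, 40.0)
print(f"ball     λ ≈ {lam_ball:.2e} m")
```

The electron's wavelength comes out around a few angstroms, comparable to atomic spacings, which is exactly why electron interference experiments work.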


One look at the yin-yang and you could see the corpuscular quantum ‘particle’ embedded in a wave-like character.
From this interpretation, a single look at the image shows that the particles, namely the white dot in the black region and the black dot in the white region, are clearly embedded in complementarily coloured waves. Each particle is represented by a wave of its own colour on the opposite side, while itself being stuck in the complementarily coloured wave. This is nothing but a distinctly clear representation of wave-particle duality.

To go a little further: we know that particles have wave functions, and a superposition of these wave functions can occur, just as it does in the interference experiments to give the interference patterns on the screen, as shown above. The symbol has a similar depiction of the superposition of the two complementarily coloured waves, where the two waves merge into each other.
How much the Chinese were aware of the wave-particle duality depicted in the yin-yang is difficult to justify or comprehend. It is mentioned that the Han dynasty chronicle Shiji (Records of the Historian) lists Zou Yan (305-240 BCE) “as a representative of the yin-yang school”, who possessed a profound knowledge of the theory of yin-yang and wrote about a hundred thousand words on it.

However, none of his works have survived...

[Image: The superposition of wave functions]

To remind people: Isaac Newton had not worked on his corpuscular theory of light until the 1660s AD, and it took a few centuries from there to the advent of quantum mechanics, when in the 1920s Schrödinger and Heisenberg worked on the unsolvable problems in classical mechanics and quantum mechanics (wave-particle duality) came to the rescue in solving them. This makes the symbol all the more fascinating and captivating.

For people with a little more knowledge of the workings of quantum mechanics, recollect: in the Born interpretation, there is a ghost field (Gespensterfeld), or wave, guiding the corpuscular quantum; it shows the path for the momentum-energy carrier, the light quantum. This is comparable, again, to the depiction of the particle being supported by the wave it is embedded in.
To bring Bohr into the equation, one of the pioneers of quantum theory as mentioned earlier: when awarded the Order of the Elephant by the Danish government, he designed his own “coat of arms”, which featured a “taijitu” (symbol of yin and yang) and the motto in Latin, contraria sunt complementa: "opposites are complementary". Why he included the yin-yang, other than to depict the complementarity associated with it in quantum mechanics, is hardly known.

[Image: The galaxy structure]
[Image: Representing the Sun's cycle]
For an understanding of the analogies behind the cosmological interpretation, I will leave you here with a few self-explanatory images of the entire celestial phenomenon: the cycle of the Sun, the four seasons, the 24-segment chi, and the foundation of the I-Ching and the Chinese calendar.
Is there more left to uncover, hidden within the ‘yin-yang’?

Thursday, August 4, 2011

How to make your own, 'Nth' dimension?

So we start off by making a 2-D figure we are all familiar with. Take two separate lines on a sheet of paper, not intersecting.
Now join the start points and the end points (vertices) of the two, and you have a ‘2-D’ figure.
Make the lines straight and you have a square.
Next, make two squares close to each other as shown, and join the respective vertices, each vertex of one square with the corresponding vertex on the same side of the other.

Now you've a 3-d figure (a cube) on paper, or a projection of a 3-d object on 2-d paper.
Now comes the interesting part: to make your fourth dimension and view it, find the point of origin of the cube, the hidden vertex so to say, the one you could not see if you had made a solid cube. This point is where all three of the mutually perpendicular axes originate on paper. Then take your paper and pierce a pencil (or a thin pen refill) right through the sheet, perpendicular to it.
 So now you can view the fourth dimension in the 3-d world.
How about taking it further? Make a 4-D cube, which is quite simple again. Like we did above, we joined vertices two by two to make a 2-D figure on paper, then four by four to make a 3-D figure on paper; now take two cubes and join the eight respective vertices of one with those of the other, and in effect you have a 4-D cube, also called a hypercube.
This is a projection of the 4-D cube on 2-D paper. This hypercube also has a point of origin from where you can draw all the axes. Now take your ‘refill’ again and pass it through this point on the paper.

In effect, you have the projection of a 5-D object in 3-D space. Now take a picture of the whole process and you have the projection of the 5-D object on 2-D space. Take a print of this....

Pass your pencil through the point of origin on the print and you have the projection of a 6-D object in 3-D space.
You can keep on doing this for 'N' dimensions.
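The take-two-copies-and-join-matching-vertices recipe above has a neat combinatorial form, which can be sketched in a few lines. This is a sketch of the standard construction, not anything from the post: vertices of an N-cube are N-bit tuples, and an edge joins two vertices that differ in exactly one coordinate.

```python
from itertools import product

# Build the vertex and edge sets of an n-dimensional cube. Joining two
# (n-1)-cubes vertex-by-vertex, as described above, is equivalent to
# prepending a 0 to one copy's coordinates and a 1 to the other's.

def n_cube(n: int):
    vertices = list(product((0, 1), repeat=n))
    edges = [
        (u, v)
        for i, u in enumerate(vertices)
        for v in vertices[i + 1:]
        if sum(a != b for a, b in zip(u, v)) == 1  # Hamming distance 1
    ]
    return vertices, edges

for n in (2, 3, 4):
    vs, es = n_cube(n)
    # An n-cube has 2^n vertices and n * 2^(n-1) edges.
    print(f"{n}-cube: {len(vs)} vertices, {len(es)} edges")
```

The counts double the way the paper construction does: the square has 4 vertices and 4 edges, the cube 8 and 12, and the hypercube 16 and 32.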

We realise that the origin is a point which we consider to have no dimension, but which in fact lies on all possible axes that can exist in an N-dimensional space. It is almost as if the N-dimensional space has stretched out or pulled apart the origin along all possible axes, keeping it 0-D yet lying on every axis, making the origin, the ‘point’, the most effective dimensional tool.
Here's my 5-dimensional object projected on a 2-D page (sample, not to scale).

Sunday, April 3, 2011

Time’s Illusion Of Existence....

Researchers all around are cracking open their own and each other’s heads in the quest to know the absolute nature of time. But why?
Foundational problems in QM (Quantum Mechanics) and GR (General Relativity) are one reason. In QM, time is assumed to be as it was classically interpreted: no changes are made to it, and it follows the same ‘flow of a river’ analogy, in which time eternally flows in one direction, from the past into the future. In General Relativity, by contrast, time is dynamical in nature (it changes with the geometry of spacetime). This leads to a problem in merging the two into a single unified theory of Quantum Gravity. We ask: how are two definitions of time possible, especially when their application leads to so many issues and varied interpretations?
Inspired by Julian Barbour’s thesis on the ‘Nature of Time’, I would like to probe towards an answer to this question.

Time as a dimension:
How is it that we accept time as a dimension? Suppose I want to locate you anywhere in space; I can, using three spatial coordinates. Proceeding from there, how would I specify any particular event that includes your interaction and involvement in its occurrence in that space? By using the dimension of time to specify its point of occurrence: the time coordinate, which eternally flows from the past to the future, from time ‘0’ to time 0 + ‘tick’ to infinity, where the ‘tick’ is the basic entity of time.

[Image: A. Star at actual position; B. Apparent position. Light bent by the gravity of the Sun. Can you use the speed of light as a standard to measure time/duration?]

Now let us set up a random activity, assuming two separate events occurring one after the other through cause and effect, where the interaction of the object with its surroundings is so minimal that the spatial coordinates can hardly tell you the difference between the two events involving the same object. Here the spatial coordinates become an invalid criterion for completely judging the outcome of the event over this instantaneous period, as the movement of the object is almost zero. So we introduce time as another criterion of measurement, to distinguish the separation between the two events within the experiment, conclusively explaining how one event happened at one time and the other at another.

In another example, take a stationary object. The spatial coordinates again become redundant for distinguishing between two separate events involving it. Yet the object is interacting with nature at all times, and these interactions constantly fluctuate; the measure of this change could be expressed as the time lapse between the two events, knowing the initial conditions. Hence the need for time as a dimension.

[Image: The need for a perfectly isolated system for measuring 'absolute' time]
Absolute time:
Every system for measuring and defining time, however isolated, interacts with its environment and surroundings in some way or other, at some point or another; hence even atomic clocks cannot measure absolute time, as they are affected by gravity. If you say we can use the speed of light as the basis of measurement, then I’m afraid that’s not possible either, for even light bends under gravity, as proved through the postulates of general relativity and through observations of gravitational lensing (the bending of light by heavy gravity). If light bends, some additional velocity component is added to its velocity. Gravity exists universally, hence it will interact with light at more or less every point, depending on the value of the gravitational acceleration and how much it changes from place to place. The universally present microwave background radiation, at about 2.7 kelvin, again ensures the incomplete isolation of any system that would perfectly measure time.

No perfectly isolated system can hence exist to measure the ‘ticks’ of time with absolute perfection. Therefore the notion of measuring the ‘ticks’ of an absolute time duration is wrong and unacceptable. Concluding from this: how can we let time be a dimension if its very basic entities are subject to change with position, place, and forces? Its inclusion in any equation would only allow more errors to creep in, which, as we see, is the current problem along all lines of research: struggling with different conceptions of time in order to include it in a theory. The gist of the above paragraph could then be summed up by asking: how is it possible to measure ‘absolute’ time and duration if there is no perfectly secluded, isolated system, permanently free of interaction with its surroundings, to measure it accurately?

The Nonexistence of time:  
Time, as we saw above, cannot be absolute; now we shall see how it is not even fundamental, but just a derived entity.

[Image: Gravity causes a warping of spacetime]

Time is just a result of various forces and objects interacting through cause and effect, leading to different events, and these events can be easily distinguished using a time coordinate. Suppose I were to distinguish two separate events. I could do this by listing out all the fundamental parameters, such as entropy, temperature, energy, mass, etc., for each separate event. Two separate events would give me two different lists of values, as these values keep fluctuating owing to the random, probability-based nature of the universe. If the two lists differ by extremely small changes in these parameters, displaying some kind of order in the change, we could conclude that these two events happened one after the other, or in whatever other order the lists suggest. This leads to the notion that one of the events happened after the first, and this is where time takes its roots. If, on the other hand, I didn’t have these instruments of measurement, I would declare the two events different by specifying the times at which they occurred (or the duration between them). We have been taking the easier way out so far, using time to understand the nature of two different events. The problem lies in the fact that time is not fundamental but derived; as shown above, a lot of parameters would affect its change.

Hence it makes no sense to ask how to measure ‘absolute time’, as time and duration do not exist in the first place; they are merely consequences of a number of events happening in a particular order, itself a sub-result of cause and effect. Conclusively, time is nothing but an abstraction of the human mind to make nature simpler to understand.

The Universe as a possible absolute Clock ? :
A question that would hover in people's minds is the existence of a perfectly isolated system. Can the whole universe, then, be this perfectly isolated system? The possibility is high, because of the law of conservation of energy: no energy could ever escape the Universe, making it perfectly isolated from whatever possibly lies beyond it (if there is any sense in asking that question). This leads to a thought wherein the entire universe's existence could be the fundamental unit of time, the duration of the Universe's existence being the 'tick'.

Concluding, then: how could you possibly resolve all the errors that enter QM, GR, and other fundamental theories of nature? By completely removing time from the equations and replacing it with an equation containing the relationship between the different parameters that affect time's change and nature. Proceeding along these lines of research could be advantageous, leading to new discoveries, and could help resolve many problematic issues in physics.

Sunday, March 6, 2011

Adapting Universal Constants...En route To A Single Law ?

[Image: Do these universal constants, with their intricate values, change over time?]
Gathering from the previous post: what could the variations in the gravitational law, depending upon the expansion of the universe and the dynamic nature of spacetime, mean? For one, the critical value of acceleration mentioned, C^2/R ('C' the speed of light, 'R' the scale of the universe), leads wherever it occurs to phenomena we generally wouldn't observe. An insight I gained from this could be the changing nature of constants; in other words, the evolution of the entire universe to how we see it today. The universe is undergoing an accelerated expansion, therefore the connected value of C^2/R changes over a significantly immense period of time, owing to the dynamic nature of 'R'. This is interconnected with the orbital accelerations of the stars, as shown in the previous post: the stars would fall out of orbit and the entire galaxy would collapse in on itself if this value were to change even slightly. But this happens on a time scale quite colossal compared even to the time scale of the creation of our solar system.

[Image: Rotation curve of a typical spiral galaxy: predicted (A) and observed (B). Do fundamental laws change?]

This leads to other consequences. If the value of the gravitational constant changes, being related to the expansion of spacetime, the change should be reflected in the properties of matter lying within spacetime, as the gravitational constant is associated with matter. If matter's properties change over this period of time, the remaining forces, laws, and constants governing their interaction, formation, and appearance must also change with time. Everything in the world hence should be, and in most observed cases is, relative, pointing to the fact that even these constants aren't absolutely constant, but only so within the conception of our (short) existence in the universe. This further leads to conclusions such as: if the constants are changing over time, the universe could have evolved into its present state, like the evolution of humans from the single-celled organism, through constant adaptation. The constants could have adjusted over time to attain the values we observe now, values attained so as to make sure the existence of the fundamental entity is not compromised. The entities of nature, matter, forces, fields, would have adjusted themselves to the environment provided by the dynamic spacetime fabric, attaining stability and gaining the necessary values to sustain themselves. Here I would like to mention the post ‘The Atomic Life Force’, where I concluded that the very existence of the basic entity could itself signify a fundamental law.
[Image: An evolving universe?]

This could be one of the answers as to why the roughly 20 constants in the Standard Model of particle physics are fine-tuned to their exact values: they have gradually adapted to these values over time so as to work in perfect synchrony. Once they work in sync, adjusting to values that attain a universal balance, neither causing the universe to collapse into nothingness nor letting it spin out of control, they secure their existence in the universe. Now, as expansion occurs, a slow stretching of the spacetime fabric's fundamental entities to allow for the expansion might be taking place; this could lead to a point where sufficient stretching reveals the true nature of spacetime. To probe ‘empty space’ and reveal its true relationship with matter is an ongoing quest, now caught in the web of string physics, which incidentally leaves the inquisitive dazed and puts up more questions than it answers!

Dark Matter – Dark Energy Congruence!

[Image: The energy density pie]
A revelation that comes from performing the calculation C^2/R, where ‘C’ is the speed of light and ‘R’ is the scale of the entire universe, is that it yields an acceleration. As discussed in the previous posts, this acceleration magically falls in place of the repulsive acceleration produced by the hypothesized Dark Energy, driving all galaxies apart, thereby also giving it the nature of the cosmological constant. The value of this acceleration is 1.2 x 10^-8 cm/sec^2.
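The calculation is worth doing explicitly. This is an order-of-magnitude sketch only: I have assumed R to be about 10 billion light years, a rough scale for the observable universe, so only the power of ten is meaningful, and the exact figure depends entirely on the choice of R.

```python
# Order-of-magnitude check of the critical acceleration C^2 / R,
# worked in CGS units to match the value quoted in the post.

C_CM_S = 3.0e10                  # speed of light, cm/s
LIGHT_YEAR_CM = 9.46e17          # one light year, cm
R_CM = 1.0e10 * LIGHT_YEAR_CM    # assumed scale: ~10 billion light years

a_critical = C_CM_S ** 2 / R_CM
print(f"C^2 / R ≈ {a_critical:.1e} cm/s^2")
# Comes out around 10^-7 cm/s^2, within an order of magnitude of the
# ~1.2e-8 cm/s^2 quoted above; a larger choice of R lowers the figure.
```

The point is not the precise digit but that a cosmological length scale and the speed of light combine to give an acceleration of roughly this tiny size.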

Apart from that, the universe is also hypothesized to contain Dark Matter, a startling 23% of the energy density of the universe, the nature of which was also discussed in the previous posts. The dark matter problem was established by measuring the accelerations of stars in orbit about the centers of their galaxies. The problem arises because, given the measured acceleration, astronomers can deduce from it the distribution of the galaxy's matter, and in almost all cases the calculation disagrees with the matter observed experimentally.
[Image: 3D map of the large-scale distribution of dark matter]
What is happening here? Two things are possible. Either the laws of gravitation as we know them are flawed and break down at the scale of stars orbiting galaxies, or there is some extra, unobserved matter contributing to this anomaly in the acceleration of the stars.
Another anomaly, observed right outside our solar system, was the Pioneer 10 and 11 spacecraft anomaly: accelerations that seemed to pop out of nowhere affected the trajectories of the spacecraft on their paths out past the sun. There is something common to all of these anomalies: an acceleration of unknown origin crawling out of nowhere, leading to altered trajectories and to results conflicting with those calculated from the known gravitational formulae. All of this could be attributed to Dark Matter.

Now let’s take a look at the stars actually orbiting a galaxy’s center. The problems of conforming to calculated values do not occur for stars in orbits closer to the center of the galaxy; no problems there. The acceleration in that region is the one caused by the visible matter, so within it the gravitational laws we know work fine. Go further out into the exterior orbits, and the problems start appearing. So where does the dividing line for this anomaly lie? Where exactly do the results start opposing the ones we know? The line is determined by the ‘acceleration’ of the stars in orbit around the center of the galaxy. What is this value of acceleration?
It’s the same as C^2/R, the critical value of acceleration associated with the universe’s expansion (dark energy). This should surprise you out of your wits: two completely unrelated phenomena are related at extreme scales. This was discovered by Mordehai Milgrom in the 1980s. Also worth mentioning here is that the anomalous acceleration deduced for the Pioneer 10 and 11 spacecraft is roughly equal to this value.
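The dividing line described above can be sketched with one formula: the centripetal acceleration a = v²/r of a circular orbit, compared against Milgrom's critical acceleration a₀ ≈ 1.2 × 10⁻⁸ cm/s². The orbital speed and radius below are illustrative, roughly Sun-like values, not numbers from the post.

```python
# Where does the rotation-curve anomaly set in? Compare the centripetal
# acceleration of a galactic orbit with Milgrom's acceleration scale.

A0_CM_S2 = 1.2e-8    # Milgrom's critical acceleration, cm/s^2
KPC_CM = 3.086e21    # one kiloparsec, cm

def orbital_acceleration(v_cm_s: float, r_cm: float) -> float:
    """Centripetal acceleration of a circular orbit: a = v^2 / r."""
    return v_cm_s ** 2 / r_cm

# A roughly Sun-like orbit: ~220 km/s at ~8 kpc from the galactic centre.
a_star = orbital_acceleration(2.2e7, 8 * KPC_CM)
print(f"a ≈ {a_star:.1e} cm/s^2  (vs a0 = {A0_CM_S2:.1e} cm/s^2)")
```

The two come out at the same order of magnitude, which is the striking coincidence the post is describing: orbits at this radius and beyond sit near the regime where the anomaly appears.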
The intriguing question is: why does the nature of expansion on the scale of the entire universe have anything to do with the trajectories of stars in orbit in any given galaxy? Universal expansion does not affect individual galaxies; the galaxies remain gravitationally bound by their matter, even though, agreed, they are moving apart from one another. Hence the next post deals with the implications that this newfound relation between Dark Matter, Dark Energy, and the extreme (lowest and highest) scales of the universe might have on the nature of physics.

Saturday, March 5, 2011

Breaking Down ‘Dark Energy’!

[Image: A source of light waves moving to the right; the frequency is higher on the right and lower on the left]

Heard of the Doppler shift?
Sure you have: when a car zooms past us, its sound undergoes a Doppler shift. Now imagine the car speeding away from us. The successive wave crests of sound emitted from the source car arrive later and later, taking more time to reach you (the fixed observer), because the car is constantly speeding away. The time taken for the sound crests to reach you increases, the frequency decreases, and the wave gets ‘REDSHIFTED’. Why red? Red light has a lower frequency than violet within the visible EM spectrum, hence the name. Observe that this happens because the sound travels in a medium, here ‘air’; the velocities of the observer and of the source are therefore RELATIVE TO THE MEDIUM in which the waves are transmitted.
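The receding-car case can be put into one line of arithmetic: for a source moving directly away from a fixed observer in air, the observed frequency is f_obs = f_src · v_sound / (v_sound + v_src). The horn pitch and car speed below are illustrative numbers of my own choosing.

```python
# Doppler shift for a sound source receding from a fixed observer in air.

V_SOUND = 343.0  # speed of sound in air at ~20 °C, m/s

def receding_frequency(f_source_hz: float, v_source_m_s: float) -> float:
    """Observed frequency when the source moves directly away:
    each crest is emitted from farther off, stretching the wavelength."""
    return f_source_hz * V_SOUND / (V_SOUND + v_source_m_s)

# A 440 Hz horn on a car receding at 30 m/s (~108 km/h):
f_heard = receding_frequency(440.0, 30.0)
print(f"heard: {f_heard:.1f} Hz, down from 440.0 Hz")
```

The pitch drops by about 35 Hz, the familiar downward glide you hear as a car passes and pulls away.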

This simple phenomenon of ‘REDSHIFT’ was, surprisingly, also observed for light, first during the 1910’s by Vesto Slipher for spiral galaxies and soon after for all galaxies. The implications were not clearly understood then. About a decade later Alexander Friedmann derived equations from general relativity showing that the universe might be expanding (or contracting, owing to the dynamic nature of spacetime), in contrast to the static universe model. Then, in keeping with these discoveries, Edwin Hubble empirically established the redshift of galaxies rushing away in all possible directions. It is important to keep in mind that this does not contradict special or general relativity: for light waves or gravitational waves, the shift in wavelength exactly compensates for the shift in frequency, keeping the speed of the waves constant at ‘C’. Another essential fact is that there is no medium: when light is observed, the calculation WON’T INCLUDE a MEDIUM of propagation as a reference point.
The implication was clear: the known universe is indeed expanding. Every other galaxy is not just moving away but ACCELERATING away from our galaxy. Acceleration implies a force, and the value of this repulsive acceleration is 1.2 x 10^-8 cm/sec^2. A force acting over a distance corresponds to energy, and according to relativity, energy is equivalent to mass as a source of gravity, which in general relativity means producing a curvature in spacetime. So it curves the universe over the scale ‘R’ of about 10^28 cm, or 10 billion light years. It is termed ‘DARK ENERGY’ because of its undefined origins, its little-known mechanism of action, and its lack of known force carriers. A very fascinating value, whose significance may well lie beyond our current understanding. It can plausibly be identified with the famous cosmological constant that Einstein introduced into the equations of General Relativity. He proposed changes to G.R to accommodate its value, so that the repulsive force from the cosmological constant would balance the gravitational attraction and give a STATIC (never changing) model of the universe, the belief held for millennia and very hard to shake off, even for Einstein’s insightful mind.
Dark Energy's Significance in the Evolution of the Universe

Fascinating features worth mentioning include its density, which appears exactly the same to all observers, with no preferred position, velocity, time, or direction of motion, extending uniformly throughout the universe. A positive value causes the universe to expand, a negative value causes it to contract, and a zero value leaves it static.
Energy as we know it is associated with matter; Dark Energy suggests something far stranger. Its value is constant irrespective of any variation in the measurement circumstances. It has no known force-carrying ‘BOSONS’ or massive ‘FERMIONS’. It comprises roughly 73% of the energy density of the universe, yet we remain unaware of its true nature. This should blow anyone away, and it accomplishes that every time.
A few other facts that kick-start some intuitive questions are given here.
The cosmological constant, now identified with dark energy, extends over the scale ‘R’ of the entire universe. Perform a few related calculations at this scale using universal constants, and you surprisingly obtain uncommon constants.
Take ‘C’, the speed of light: perform R/C and you get a time, roughly the present age of the universe.
Invert this value and you get a frequency, probably the lowest achievable oscillation: once per lifetime of the universe.
Next perform C^2/R, which gives an acceleration; the value turns out to be the familiar repulsive acceleration mentioned earlier,
1.2 x 10^-8 cm/sec^2. All of this gained simply from operations using the speed of light. Many more revelations from this data may be waiting to be discovered.
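These back-of-the-envelope operations can be checked numerically. The sketch below uses round order-of-magnitude inputs (R of order 10^28 cm, as implied by the 10-billion-light-year scale), so the outputs match the quoted values only to within an order of magnitude:

```python
c = 3.0e10         # speed of light, cm/s
R = 1.0e28         # assumed cosmological scale, cm (~10 billion light years)

t = R / c          # R/C gives a time
f = 1.0 / t        # its inverse gives a frequency
a = c**2 / R       # C^2/R gives an acceleration

print(t / 3.15e7)  # in years: ~1e10, roughly the age of the universe
print(f)           # ~3e-18 Hz: of order one oscillation per universe lifetime
print(a)           # ~1e-7 cm/s^2: within an order of magnitude of 1.2e-8
```

Changing R by a factor of a few moves all three outputs proportionally, which is why these coincidences should be read as statements about orders of magnitude rather than exact equalities.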

One of the most exciting prospects arising from experimental and theoretical observations is the relation between two very distinct phenomena, ‘Dark Matter’ and ‘Dark Energy’. The features they hold in common might reveal completely new physics at the ‘R’ scales, which could trigger a reformulation of many of the existing groundbreaking laws of physics. Even string theory does not predict the existence of either Dark Matter or Dark Energy, though this does not mean string theory is wrong; it can always be tweaked in ways that accommodate the ‘Dark Stuff’. This may also mean we have only seen the tip of the iceberg.
As we see every few centuries: the precision of experiments increases, we find anomalies that do not comply with existing data, and we propose theories to overcome those anomalies which also predict further phenomena; if confirmed, they graduate into established theories. The usually overlooked and enlightening relationship between Dark Energy and Dark Matter will be unraveled in the following posts.

Saturday, February 12, 2011

Electromagnetic Continuum Into Gravity

The E.M spectrum, continuous and infinite in both directions

The nature of gravity has been in question ever since Newton’s law of gravity was put forth. General Relativity (G.R) was introduced in 1915 by Einstein. It gives gravity a geometrical framework: gravity is explained by the curvature of spacetime, the curvature being proportional to the mass-energy density and momentum of matter and radiation. In a nutshell, spacetime tells matter how to move, and matter tells spacetime how to curve. Several experiments have confirmed that Einstein’s equations stand correct, but only within limitations.
Apparent violations of Einstein’s general relativity do exist, and ‘Dark Matter’ is the hypothesis invoked to explain one of them. When G.R is employed to calculate the orbits of stars in a galaxy, or the total mass and gravitational forces acting within various gravitational systems, anomalies in the results tend to occur: mass that is visibly missing yet physically present keeps the stars at the edges of galaxies from being flung out of orbit, and similar missing mass turns up for galaxies, clusters of galaxies, the mass of the universe, and so on. From observations confirmed by gravitational lensing we know there is more mass than meets the eye. This unexplained mass accounts for about 23% of the mass-energy density (a huge factor in galactic terms), visible mass being only about 4%. This shows that either there are unresolved issues with General Relativity and it does not give a complete explanation of the nature of gravity, or there is indeed dark matter. Since dark matter is hypothetical and does not interact with E.M radiation, detecting it is a challenging feat. Compare the case of Newton’s law of gravity, which seemed to explain gravitational systems for centuries, up until anomalies were observed; this provoked physicists to come up with a more stringent theory of gravity that could explain the anomalies and resolve the conflicts, and thus G.R was born. The analogy seems to suggest we may have gotten gravity wrong yet again: within reasonable ranges G.R works efficiently, but on larger galactic scales the experimental results do not tally with the theoretically deduced ones.
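The rotation-curve anomaly can be illustrated with a toy calculation. If the visible mass alone did the pulling, circular-orbit speeds would fall off as 1/sqrt(r); observed speeds instead stay roughly flat far from the center. The mass and radii below are round placeholder numbers, not measurements:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_visible = 1.0e41  # assumed visible galactic mass, kg, treated as central
kpc = 3.086e19      # metres per kiloparsec

def keplerian_speed(r):
    """Circular-orbit speed if only the visible mass acts: v = sqrt(G*M/r)."""
    return math.sqrt(G * M_visible / r)

# Predicted speeds fall as 1/sqrt(r)...
for r_kpc in (5, 10, 20, 40):
    print(r_kpc, round(keplerian_speed(r_kpc * kpc) / 1000), "km/s")
# ...whereas measured speeds of edge stars remain roughly constant with r,
# implying unseen ('dark') mass or a modification of the gravity law.
```

The gap between the falling predicted curve and the flat observed one is precisely the "visibly missing but physically present mass" described above.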
Gravitational 'Ripples' or 'Waves'

In recent times, through the observation of binary pulsars, the existence of ‘gravitational waves’ has been gaining support. Particle physics hypothesizes the force carrier of gravity to be the ‘graviton’, the gauge boson for gravity, and an analogy can be drawn with E.M waves, whose force carriers (gauge bosons) are ‘photons’. Recent measurements also indicate that the speed of these gravitational waves equals the speed of light (that of E.M waves). Another interesting fact is that the E.M spectrum has no fixed transition boundaries: one kind of E.M wave, say RADIO, merges smoothly into the frequency range of another, say MICROWAVE. There is no discreteness in the spectrum; it exists as a continuum. Nor are its limiting boundaries specified: the E.M spectrum is conventionally laid out only for those ranges of frequency and wavelength that have applications for human usage, and it extends beyond them.
E.M radiation interacts with matter in different ways depending on which part of the spectrum it belongs to; each type of radiation interacts in a specific way entirely different from another type within the spectrum. In principle the E.M spectrum is infinite and continuous, the long-wavelength limit being of the order of the estimated size of the universe, a yottametre (10^24 m), and the short-wavelength limit lying in the vicinity of the Planck length (1.616×10^-35 m). The main idea being conveyed here is that gravity could be just another form of the radiation that all matter emits irrespective of its properties and nature. All matter interacts with E.M waves and emits E.M waves at the appropriate energies of excitation, so gravity seems to have a familiar relationship with E.M forces: both electromagnetic and gravitational forces vary as the inverse square of distance without limit (to infinity). The wavelength of these gravitational waves would be of the order of the size of the universe, thus conveniently extending throughout the known universe, with a corresponding frequency of the order of attohertz (10^-18 Hz). This frequency corresponds to extremely low energies, of the order of the gravitational force (from E = hf, where ‘h’ is Planck’s constant and ‘f’ is frequency), and could explain why the gravitational force is so many orders of magnitude weaker than the E.M force and the weak and strong nuclear forces. Given the different interactions with matter across various parts of the spectrum, gravity’s nature would be defined by this part of the spectrum. Further analogy with E.M waves raises the question of attractive and repulsive gravity, akin to attractive and repulsive E.M forces.
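The energy argument can be made concrete with E = hf. The attohertz frequency is this post's speculative assumption, not an established figure; a visible-light photon is included only for comparison:

```python
h = 6.626e-34        # Planck's constant, J*s

f_grav = 1.0e-18     # assumed attohertz-scale frequency from the text, Hz
f_visible = 5.0e14   # a typical visible-light frequency for comparison, Hz

E_grav = h * f_grav        # ~6.6e-52 J per quantum
E_visible = h * f_visible  # ~3.3e-19 J per quantum

print(E_visible / E_grav)  # the quanta differ by over 30 orders of magnitude
```

Under the post's assumption, a quantum at such a frequency would carry more than thirty orders of magnitude less energy than a visible photon, which is the sense in which the speculation ties the low frequency to gravity's weakness.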
Throughout the universe we are familiar with the attractive form of gravity. We now also know that repulsive gravity exists: it is nothing but dark energy, resulting in the accelerated expansion of the universe, and it could even be a cause of the big bang (through negative pressure, a concept in itself).

Thus by incorporating gravity into the E.M spectrum (extending its boundary horizon) into very low orders of magnitude in frequency and very large orders of magnitude in wavelength, we could possibly have an explanation for dark energy and resolve the conflicts around dark matter. A more thorough explanation of how dark energy could be of this nature will be given in the following posts.

Sunday, January 23, 2011

The Big Bang “Infinitely Regressive?” – A Quantum Mechanical Answer

Following on from the last post, this gives me an opportunity to describe how quantum mechanics effectively explains the infinite range of choices the singularity has for arising out of nothing. Quantum mechanics is the fundamental science that incorporates within its solution every possible outcome (therefore an infinite number of choices) within a single experiment, making it the most mathematically apt and technically sound subject for the problem we posed in the last post. Since it is a clearly established subject with over 80 years of confirmed experiments, as I understand it, we shall use its equations directly.
Consider the simple equation associated with the most famous science dude of all time, Heisenberg, and his famous uncertainty principle. It reads, mathematically:
The Inflationary Model Showing Quantum Fluctuations 

dE · dt >= h/(2π)

dE = uncertainty in energy
dt = uncertainty in time
h = Planck’s constant, 6.626 × 10^-34 J·s

Infinitesimal (extremely small) amounts of energy dE can pop into existence for an extremely short period of time dt; this quantum phenomenon is called a “quantum fluctuation” of the vacuum. It allows the creation of particle-antiparticle pairs of virtual particles from the vacuum. We now know that the vacuum is not "empty": it is actually a seething mass of activity, with particle-antiparticle pairs being repeatedly produced out of the vacuum and immediately annihilating into nothingness again. The creation of a particle-antiparticle pair out of the vacuum violates the principle of conservation of energy, but the Heisenberg uncertainty principle allows this to occur for a very limited period of time dt. The phenomenon has observable and measurable consequences, which have been tested and confirmed; laboratory experiments show that quantum fluctuations occur everywhere, all the time. It is also responsible for the much-debated BLACK HOLE RADIATION.
Next, within the framework of the BIG BANG, a period of inflation occurs. INFLATION is the theorized extremely rapid exponential expansion of the early universe, by a factor of at least 10^78 in volume, driven by a negative-pressure vacuum energy density (the repulsive side of gravity arising from General Relativity). It lasted from 10^-36 seconds after the Big Bang to sometime between 10^-33 and 10^-32 seconds. Following the inflationary period, the universe continued to expand. Now consider the following steps in the above equation.
1. The value of h/(2π) = 1.0546 × 10^-34
2. For an infinitesimal value of dt = 1 × 10^-34
3. The resulting value of dE = 1.0546
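The three steps above amount to a one-line division, which can be verified directly:

```python
hbar = 1.0546e-34   # reduced Planck constant h/(2*pi), J*s
dt = 1.0e-34        # the infinitesimal time interval assumed in step 2, s

dE = hbar / dt      # largest energy the uncertainty relation permits, J
print(dE)           # ~1.05 J
```

Choosing a smaller dt would permit a proportionally larger dE, which is why the fluctuation's lifetime and its borrowed energy always trade off against each other.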
Now, gravitational energy is a negative quantity. You can see this by realizing that in order to separate two objects, one has to overcome the attractive gravitational force, which requires supplying positive energy from outside. Its counterpart, the positive energy, is represented by the matter we see around us.
“One can show that this negative gravitational energy exactly cancels the positive energy represented by the matter. So the total energy of the universe is zero.”
I quote STEPHEN HAWKING from his book  “A Brief History of Time”
Refinements in the above calculation would give dE approximately equal to zero, which leads to a simple conclusion: the entire universe is nothing but a quantum fluctuation whose total energy is zero, with an expansion by a factor of at least 10^78 in volume during the inflationary phase, the combined total still being zero. This satisfies the view that the entire universe is nothing but a random quantum fluctuation whose probability of expansion was higher than usual, resulting in the OBSERVABLE UNIVERSE arising out of quantum mechanics. These principles of QM apply only because of the infinitesimally small size of the singular entity that banged. Also, if within the equation we insert a value of energy approximately equal to zero, then to retain the value of h/(2π) (a constant) an interval of time of roughly the current age of the universe can be deduced.
Conclusively, no energy was required to "create" the universe. The zero total energy of the universe is an observational fact, within measured uncertainties. What is more, this is also a prediction of the inflationary phase of the Big Bang theory, which as we have seen is now strongly supported by observations. Thus we can safely say:
No violation of energy conservation occurred if the universe arose out of an initial vacuum of zero energy; the universe came from "nothing" and will probably return to "nothing".

Kindly leave comments on how clear the presented concept is.
Thanks in anticipation.