A blog about randomly interconnected things

Entropy, IQ, and overfitting

Why young minds often come up with paradigm-shifting ideas

I am going to make some statements without much explanation. They are explained in other essays I have written, so I won't rebuild them from first principles here. I'll eventually add the hyperlinks.

Entropy is a fundamental principle of the universe and the only known principle in physics that can distinguish between the past and the future. The second law of thermodynamics states that the entropy of any closed system can only increase, never decrease. This simple rule has embedded in it the mechanism for the emergence of all the complexity we experience, including the complexity of intelligent life, arguably the most fantastic byproduct of the universe. 

Mathematically, the entropy of a particular macrostate is a measure of probability: how many of all the possible microscopic configurations realize that macrostate. High entropy means that the probability of occurrence of that state is high, while low entropy means that the probability of occurrence is low. For example, if you toss 100 coins simultaneously, the probability (and thus the entropy) of the 50-heads, 50-tails outcome is highest, whereas that of 100 heads is the lowest. Low-entropy sequences are the same principle applied to sequences of events rather than static configurations.
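As a rough back-of-the-envelope sketch in Python (my shorthand here: the "entropy" of a macrostate is simply the log of the number of microstates that realize it), the coin example works out as follows:

```python
from math import comb, log2

N = 100  # coins tossed simultaneously

def multiplicity(k):
    """How many of the 2^N equally likely outcomes have exactly k heads."""
    return comb(N, k)

def probability(k):
    return multiplicity(k) / 2**N

def entropy_bits(k):
    """Entropy of the k-heads macrostate, taken as log2 of its multiplicity."""
    return log2(multiplicity(k))

for k in (100, 75, 50):
    print(f"{k:3d} heads: P = {probability(k):.3e}, entropy = {entropy_bits(k):5.1f} bits")

# 100 heads: P = 7.889e-31, entropy =   0.0 bits  (most ordered, least probable)
#  50 heads: P = 7.959e-02, entropy =  96.3 bits  (most disordered, most probable)
```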

(Figure credit: Hyperphysics)

Matter, life, fuel, chemicals, energy and almost everything we interact with spans a wide range of probabilities, but the most interesting, valuable and critical things are inherently low entropy. The wood you burn to make a fire starts as an ordered configuration of carbon molecules mixed with other atoms and loses most of its structure, leaving behind ash. The food we eat is likewise a low-entropy configuration. 

Entropy is so fundamental to the universe that the brain as an organ evolved to predict entropy. We don't need a clear definition of life; observing something at our scale is enough to distinguish between animate and inanimate objects. Thus the brain, in the long evolutionary run, acquired the ability to distinguish low-entropy events and sequences and store a mental representation of them as memory. 

Due to the complexity of our brain and the number of neurons our species has been endowed with, there are shared abilities common across all of us. These include the ability to use tools, learn and speak language, make sense of what we are looking at, understand speech, recognize faces, and so many more. These subconscious abilities are programmed through billions of years of evolution and form a repository of fundamental low-entropy events and sequences. The g-factor, or intelligence, is a measure of the brain's ability to sift through this low-entropy memory space and see similarities (what we call pattern recognition), and to build up the repository with new low-entropy sequences by learning and understanding new concepts. Learning and understanding amount to associating a new event that the brain registers as low entropy with others already in the repository.

Thus the g-factor is the ability to take our repository of low-entropy memories and apply it to a new event we haven't encountered yet. The repository helps us predict entropy and see the similarity to something already in memory. 

This brings us to fitting. In artificial intelligence, overfitting is when a neural network becomes so good at predicting the idiosyncrasies of the training data that its performance on the training set is very high. However, show the same network an example it has not seen, and it paradoxically struggles. The network is unable to generalize. Underfitting, strictly speaking, is the opposite failure, where the model captures only the broad, general characteristics of the training data and misses the finer detail. But a mildly underfit network is often a better place to be: because it has learned the general characteristics rather than the quirks, it can extract those same characteristics from a sample it has never encountered and offer a better prediction. 
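A quick, illustrative way to see this is to fit polynomials of different complexity to a small noisy dataset and compare the error on the training points with the error on held-out points; the function, noise level, and degrees below are arbitrary choices, not anything canonical:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # A smooth "true" relationship plus noise; purely illustrative.
    x = rng.uniform(-1, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)
    return x, y

x_train, y_train = make_data(20)    # small training set
x_test, y_test = make_data(200)     # held-out data the model never sees

for degree in (1, 3, 15):           # underfit, reasonable, overfit
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")

# Typical pattern: the degree-15 fit has near-zero training error but a much
# larger test error (overfitting); degree 1 misses the structure in both
# (underfitting); the middle degree generalizes best.
```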

Under- and overfitting find an analogy in knowledge work like science and entrepreneurship. Many groundbreaking ideas come from young minds who have a high g-factor but are learning the subject for the first time and hence aren't bogged down by the biases and idiosyncrasies of the past. They come at it with a "fresh pair of eyes." I think this is analogous to older professors overfitting to the data because of how much time they have spent with it, while the younger minds underfit and thus can generalize and pick the most interesting threads to pull, the ones that may unravel the problem. 

Notice I said young minds and not youngsters. By this, I mean those who are learning a subject anew and constantly adding to their repository. As long as you keep adding to your repository, you are young in your mind. When you decide you already know most of what you need to know, you start tending towards overfitting. The brain is so complex and magnificent that even when overfit you can still do amazing work, but the paradigm-shifting ideas will likely stay out of reach. 




Sense of beauty and state space reduction

I have been teaching myself reinforcement learning and have made some interesting connections that I want to share. In reinforcement learning (RL), you use games as a substrate to teach an AI how to learn the optimal strategy to win. You craft an algorithm with which the RL agent, given enough computing power, can theoretically converge to the optimal strategy for playing the game. 

This is the approach DeepMind used to train AlphaGo to beat a human player at the game of Go. A truly marvelous achievement. 

The way you do this is to teach the agent the value of each specific state the game can take. An RL algorithm has a few essential components: an environment, states, actions, a transition function, a value function, and a reward function. 

Essential concepts of a Reinforcement Learning Algorithm

Environment
  • In the context of an RL agent, this is the game you are playing. So if an agent has been trained on the game of chess, then the board, the pieces, and the rules governing their movement are the environment. Essentially, the game is where the agent gets to play and receive feedback on the moves it makes. 
  • Since RL algorithms are so deeply inspired by biological agents such as humans, there is a natural real-world analog: the universe and the earth we inhabit are the environments we act in, and they give us feedback on our actions. 

State 

  • Each distinct configuration of the environment is a state. So in the case of the chess board, a specific position of the pieces is a state. The total number of possible states in chess is humongous, but it also includes states that are impossible in an actual game (e.g., a state where the king is in the front row with a pawn behind it in the zeroth row is technically a configuration of pieces, but impossible given how the game starts and is played).
  • Even considering all possible legal states of a chess board, the number is humongous - it is on the order of 1E45.


Action

  • An action is one of the allowed moves the agent can take, and taking it transitions the environment from one state to another. For example, moving a pawn one step ahead takes the board from one state to the next. 

Transition function 

  • This is the function that takes as input the current state and an action, and outputs the resulting state of the environment, i.e., how the environment transitions from one state to another given an action (e.g., moving the pawn one step forward is the action, the current configuration is the input state, and the resulting configuration is the output). In games with known rules it is given by the rules; in other settings the agent may have to learn it.
  • Since many moves are possible, the transition function may have many entries for a particular state, but for the combination of state and action, there is only one possible mapping.

Value function 

  • This is the value of every state that has been encountered. The values are learned through the algorithm, and they are how the RL agent decides which action to take, given its current state. If two different actions are possible, transitioning into two different resulting states, and the value of the first resulting state is higher than that of the second, then the agent will choose the first action. 
  • The concept of value is an abstract measure of how useful a state is in attaining the final goal. In chess, the goal is to win the game, and not all intermediate states are equal in value. The ones where you control more of the board and have more of your pieces on the board are better states to be in. In comparison, having your opponent control most of the board with more pieces in play than you is a worse state to be in. Thus these intermediary states have differing abstract values in the pursuit of the win, and it is these values that the RL algorithm learns. 

Reward function

  • The reward for the action you take. In its purest definition, the only reward in a game of chess is the win itself. The complexity is in figuring out intermediate rewards; otherwise the agent would move randomly through the state space until it happens upon a win. This is akin to a group of monkeys randomly banging on typewriters in perpetuity: ultimately one of the random combinations would be a coherent sentence, and one would be Shakespeare's works. 
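To make these components concrete, here is a minimal, hedged sketch of tabular Q-learning (a standard textbook RL algorithm, used purely for illustration and far simpler than anything behind AlphaGo) on a toy five-state corridor. The environment, states, actions, transition function, reward function, and learned values all appear explicitly:

```python
import random

# Toy environment: a corridor of states 0..4; the agent starts at 0 and the
# goal (the only reward) is at state 4. Actions move left (-1) or right (+1).
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)

def transition(state, action):
    """Deterministic transition function: where the environment ends up."""
    return min(max(state + action, 0), N_STATES - 1)

def reward(next_state):
    """Reward function: +1 only for reaching the goal."""
    return 1.0 if next_state == GOAL else 0.0

# Value table: Q[s][a] estimates the value of taking action a in state s;
# the value of a state is the max over its actions.
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Mostly act greedily on current values, sometimes explore at random.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(Q[s], key=Q[s].get)
        s2 = transition(s, a)
        # Update toward the reward plus the discounted value of the next state.
        Q[s][a] += alpha * (reward(s2) + gamma * max(Q[s2].values()) - Q[s][a])
        s = s2

print({s: round(max(Q[s].values()), 3) for s in range(N_STATES)})
# States nearer the goal learn higher values, roughly
# {0: 0.729, 1: 0.81, 2: 0.9, 3: 1.0, 4: 0.0}.
```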

The explosion of the state space

Now we have a sense of the essential components of an RL algorithm. Starting with simple games, we can encounter every state and create an entry for it in the "value function" table. For example, in the game of tic-tac-toe, there are 3^9 (or 19,683) possible configurations, which a computer can easily work with. Note that this number includes illegal states, so the actual number is smaller. Make a minor upgrade to a game like [Connect-4](https://en.wikipedia.org/wiki/Connect_Four), where you drop coins and aim to line up a sequence of 4, and the number of states explodes to 3^42 (or about 1.09E20)! The total information humanity has produced so far is estimated to be around 1E22 bytes, a figure comparable to the state space of a simple children's game. It is a wonder we are able to play it at all. 
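These counts are easy to sanity-check; a few lines of Python reproduce the naive upper bounds used above (every cell empty or occupied by one of two players, illegal states included):

```python
tic_tac_toe = 3 ** 9     # each of 9 cells: empty, X, or O
connect_four = 3 ** 42   # each of 42 cells: empty, red, or yellow

print(f"tic-tac-toe upper bound: {tic_tac_toe:,}")            # 19,683
print(f"Connect-4 upper bound:   {connect_four:.3e}")         # ~1.094e+20
print(f"fraction of ~1e22 bytes: {connect_four / 1e22:.1%}")  # about 1%
```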

If you start conjuring up state spaces for games like chess and Go, the number of possible states is truly mind-boggling. Go, for example, has about 1E170 (1 followed by 170 zeros!) possible states. The total number of atoms in the observable universe is estimated to be 1E80. So you can appreciate the sheer size of the state space of Go. Yet we humans play it, and the experts play it profoundly well. 

Ultimately, the reason that RL works is that it is modeled after human intelligence. When we are born, we are not given 10,000 labeled images of a car; we see a car, we interact with it, we receive feedback from the environment, and quite quickly we store the concept of a car. This is even more visible in our actions as adults. If we decide to go to the supermarket, there are theoretically an infinite number of microsteps we could take to reach our goal. If a particular road is blocked, we find a way around it. Even if this means passing through an unfamiliar part of the environment, we do it, since our focus is on reaching the supermarket. A game is a "toy environment" created to exercise the same faculties. 

When we play a game, we do not map the entire state space and calculate the best move. In some intuitive way, we know which of the intermediate states are valuable and which positions are advantageous even if we haven't seen or experienced those states before. That is how we can comprehend a game like Go with its mind-boggling complexity and still play creatively. 

Beauty and state space reduction 

I posit that the abstract word we use, "beauty," encapsulates this valuation of the state space. It may go down as the most complex word to define mathematically, because if we could, we could program it into an AI. That AI might then be able to exercise creativity of the sort that is, even now (with all the advances in AI), firmly in the human realm. Mathematicians talk about elegance and beauty as a precept for the correctness of new mathematical work. The greatest equations are often transfixingly simple looking. Scientists who discovered new paradigms, like how atoms are structured, how electricity flows, and how DNA shapes life, knew they were right even before they had the proof. The ideas often seep into their minds years before they rigorously prove them. Einstein, as an example, called the moment he first thought of the equivalence between acceleration and gravity his happiest thought. It took him many more years to compose it into the general theory of relativity. 

If a simple-looking game like Go has so many possible states, imagine the game of life. The number of possible states is truly and completely infinite, in the fullest sense of the word. We still manage to move forward and, most interestingly, discover new science. We are able to because our innate sense of beauty gives us an intuitive way of valuing states. When you learn new science and truly understand it, you experience that sense of beauty. Over time you hone that sense and start recognizing new ideas that have the potential for greatness. 

An AI does not yet have that ability to distinguish beauty, or to feel that trembling sense of awe when you understand how sunlight is converted to glucose in chlorophyll and how that is responsible for the complex life on earth. It even applies to other humans: when we see beauty in people (both physical and beyond), it is that same sense at play. The truly deep experiencers of life have figured out a way to recognize this sense of beauty at levels deeper than our automatic, reflexive sense. 

We all have the mechanism to recognize beauty; it is on us to put it into practice. 


Why an over-reliance on system-thinking may be holding you back

Table of Contents: 

  • An analogy between an over-reliance on genetic gifts: brute-strength and logic-based intelligence 
  • Types of thinking - Accumulatory vs Exception based 
  • The bias in choosing the pattern 
  • Escaping the bias of low-depth pattern repositories 
  • Becoming multidimensional thinkers

Note: I use the word pattern interchangeably with first principles. First principles are a collection of axiomatic patterns, patterns whose veracity we do not internally question. The more of these you have, the more complex the patterns you are able to build, using first principles like Lego building blocks of logic.

An analogy between an over-reliance on genetic gifts: brute-strength and logic-based intelligence

We live in a period of unprecedented peace. After all, until about 80 years ago mankind was a warring species, and this becomes even more true the further back you go. Physical security was constantly at risk, and hence the males of a society may have come to be prized and preferred for their physical abilities. This was not the only reason, but in my view an important one. In those times, the highest achievements were associated with the physical capabilities of the body. Thanks to their genetics, people born with brute strength were able to easily overpower the average and the weaker. These people, you can imagine, did not have to work very hard to earn their keep. In any war, in excess of 90% of the soldiers are within two standard deviations of average. Those gifted with abilities three standard deviations out could conceivably fight many battles before coming upon someone who posed a threat. If this opponent was similarly gifted, then it was a battle of innate strengths; the more gifted one wins.

However, there was a different kind of opponent they could encounter: the ones who went beyond their strength (or lack thereof) and disciplined themselves into becoming the finest warriors they could be. From a young age, these individuals honed their physical capabilities to the limit, the difference being that they didn't rely on a single overpowering ability. Put simply, they didn't rely just on their strength. They developed the ability to strike at vital points so that they could immobilize a person most efficiently; even the strongest person is not much of a threat if their kneecap or legs are taken out. They practiced their swordsmanship until they could parry a blow before it came and see an opening in an attack that the attacker themselves wasn't even aware of. They could tell which way a sword was going to swing from the grip, the lean of the body weight and the movement of the eyes. These are subtle cues at the most granular level. You wouldn't think to consider such details if you were quite strong; you never needed to rely on these subtleties. Your strength was enough.

Fast forward to the present world: pure physical strength is not valued as much by society because of this unprecedented peace. Now we value mental prowess, what we generically refer to as intelligence. Howard Gardner posits the existence of at least 8 types of intelligence that go beyond the purely cognitive ones we like to default to. These include kinesthetic, linguistic, spatial, musical and logical intelligence (to name a few). Let us focus on logical intelligence, which is associated with the more common-sense meaning of the word. A particular aspect of it that is favored in technical jobs, academia, computer science applications, and scientific research is “systems thinking”.

Types of thinking - Accumulatory vs Exception based

Thinking in systems is a powerful ability: the ability to take a set of facts and glean the underlying pattern that connects them. You could say that most people are system thinkers by that definition, and that is generally true. However, the difference is at the edges. I will lay out two types of thinking, one I call accumulatory and the other exception-based. In accumulatory thinking, the brain takes a set of facts, constantly comes up with a hypothesis of the underlying pattern, and adopts that as a general rule until contrary evidence presents itself. When contrary evidence appears, the pattern is updated to account for the new fact.

For example, after seeing a few chairs you are able to generate a hypothesis pattern that chairs usually have 4 legs, a base, and a backrest. This pattern holds until a chair with 3 legs shows itself. You update the part about the number of legs and leave the rest, a stable, supported system with a base and backrest, unchanged. Then you come across an office chair with wheels, and you update the rule again. Then you come across a stool, and you realize this can also function as a chair, but it doesn't have a backrest, so you generalize the definition to something stable you can sit on. Then you come across a beanbag, which cannot be called a stable system, so you further refine your pattern of a chair. This process continues indefinitely; however, with each change, the probability of a subsequent change decreases.
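As a toy sketch of this accumulatory updating (the objects and properties below are stand-ins of my own, not a model of cognition), you can treat the working pattern as the set of properties shared by every example seen so far and let each counterexample force a more general hypothesis:

```python
# The working "pattern" is the set of properties shared by every example so
# far; an example that violates it forces the pattern to generalize.
examples = [
    {"name": "dining chair", "props": {"four legs", "base", "backrest", "stable"}},
    {"name": "tripod chair", "props": {"three legs", "base", "backrest", "stable"}},
    {"name": "office chair", "props": {"wheels", "base", "backrest", "stable"}},
    {"name": "stool",        "props": {"base", "stable"}},
    {"name": "beanbag",      "props": {"base"}},
]

pattern = set(examples[0]["props"])        # initial hypothesis from the first chair
for ex in examples[1:]:
    if not pattern <= ex["props"]:         # counterexample: keep only shared props
        pattern &= ex["props"]
        print(f"after the {ex['name']}: pattern = {sorted(pattern)}")

# Note that the office chair no longer triggers an update: as the pattern
# generalizes, fewer new examples contradict it.
```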

Since most of us do this without even trying, we can see that this is a general skill humans have. The difference is in our ability to use this skill in more complex domains. Science is the process of extracting these underlying patterns by running experiments and using those as the facts to build a hypothesis of a pattern on. Since we know not all of us can be scientists, there must be another way of thinking.

The exception-based way of thinking is one where the most important characteristic is not the pattern but the exception. If a pattern can explain 80% of what actually happens (which is pretty good explanatory power), the pattern is still not accepted and internalized because of the 20% it cannot. An example of this is the work of Noam Chomsky, who studied language extensively out of his own curiosity, noticed the similarity in the mechanics of language across cultures, geographies, and time, and posited the general pattern of a Universal Grammar. This is an appreciation of the fact that human brains seem to have a common underlying structure, one that holds a universal grammar system enabling us to learn something as complex as language so easily when we are young. While he calls it universal, people have found remote tribes in the Amazon that seem to have developed a simple language system deviating from the Universal Grammar rules that Chomsky laid out. In essence, the vast majority of languages seem to fit the pattern, but exception-based thinkers argue that we cannot give it as much importance because of the few deviations observed from it. Here the important thing is not the pattern, but rather the exception. That is what I mean by exception-based thinkers.

Exception-based thinkers do very well in operations, supply chain, logistics, crime analysis, and other fields where the most important things are the exceptions, while accumulators make good scientists, entrepreneurs and politicians.

The bias in choosing the pattern

Hence system-thinkers tend to be accumulators, and as accumulators the skill is in choosing the pattern. Recognizing novel patterns on the fly is extremely hard, especially in adulthood. When we are children we engage in some of the most sophisticated pattern recognition and accumulation that we will ever do in our lives. The reason is that a baby's brain is a hyperconnected collection of neurons: our brain is never more interconnected than it is in early development. Childhood and puberty are the sequential pruning of these connections, deciding which connections to keep and which ones to discard. This hyperconnected brain is an incredibly sophisticated pattern recognizer; no wonder something as complicated as language is understood so easily by it. This is also when we play games, form social connections, and learn love from our family. So many of our abstract patterns are learned at this stage. Our interests, proclivities, biases, and experiences shape this initial repository of patterns.

However, no matter how intelligent we may be, we are a reflection of the diversity of the universe we experience. As children, we experience only a small cross-section of the universe from a relatively safe vantage point (care of our parents). It is rarely representative of the realities of the world. However, we don’t know that, and to us the patterns we build seem fairly robust. So when a system-thinker identifies a pattern that explains the facts they witness, they are drawing from this repository no matter the depth of the repository.

This mirrors the analogy of the soldier gifted with brute strength. Somebody who has relied on their genetic predisposition and ability to system-think, and never really developed it, likewise relies on their given strength and rarely invests the time to develop the skill. If you personify the system-thinking ability, it develops a false sense of importance from always being relied upon. It is not unlike a merchant middleman who passes goods to the appropriate recipient: he doesn't add much value, but since all trade passes through him he grows wealthy and arrogant. He does not allow the development of other businesses that may pose even a remote threat to his hegemony, and in this way suppresses innovation, unless it is of the type that helps him grow his own influence and become even wealthier. The arrogance develops into selfishness. If the system-thinker is not able to system-think, it questions the data, even the very problem it is given to solve: if I cannot see the larger pattern here at first glance, then maybe the problem is wrong or unsolvable, or at minimum incomplete.

Escaping the bias of low-depth pattern repositories

Is there a way out, then? Let's recap the weakness inherent in always relying on system thinking. We know that system thinking allows you to quickly glean the pattern that explains the data. The individual is less worried about the relevance of the pattern than about the fact that they were able to see one. However, these patterns are typically drawn from a pattern repository, which can also be thought of as a collection of first principles. If you have very few patterns to pull from, then your system thinking is going to go around in circles or choose the same one repeatedly. What you then need to do is add to this repository. But isn't that hard in adulthood?

Yes, it is. There is still a way out, and it is easier because we live in an information age. If system thinking takes facts or experiences as input and outputs a pattern (or a series of patterns with a hierarchy of their own), then how about changing those two variables? Philosophy and logic are the study of the art of developing patterns, an art collectively honed in the minds of the extraordinary humans that came before us. The study of philosophy and logic helps you avoid making false equivalences, see when a pattern overfits or underfits, and look beyond your personal bias in building up patterns. The other way is the study of science, where people like Galileo, Newton, Darwin, Turing, Maxwell, Einstein, Planck, and Dirac, to name a few, spent their lives dedicated to the study of the universe and gleaned some of its most profound patterns. If philosophy is the meta-skill that aids pattern extraction, then science is an incredible repository of patterns gained from the study of the universe.

The universe is THE repository of applied patterns, where certain patterns find resonance and are elevated at different scales of reality. As an example, gravity is a pattern of particles gaining power in accumulation: the larger the accumulation, the larger the pull. This resonates at the level of atoms, stars, and planets, but also at the level of organizations in human societies. The bigger the organization, the bigger the pull.

Some of the simplest patterns are incredibly far-reaching, such as the emergence of evolution and entropy's descent into chaos. Quite likely evolution and entropy were the basis of yin and yang. The human body survives when homeostasis is maintained, balancing the calming (parasympathetic nervous system) and the excitatory (sympathetic nervous system). In our brain too, we have excitatory and inhibitory neurotransmitters. Evolution and entropy are far-reaching in their implications. These are but a few systems at different levels of scale that manifest the applied patterns.

Another way to improve your system-thinking is (ironically) by suppressing it, and allowing other skills to develop. The personification of system-thinking is quite apprehensive of this move. It does not like being relegated to a non-primary position where it is not the lead character running the show. So it protests and tries to create pandemonium. When the slightest hints of difficulty arise from deliberately focusing on other skills, system-thinking protests the loudest and screams - see, you need me, look at you unable to find the solution that I so obviously can see. Put me back in charge, and I’ll show you how it’s done. This is the trap laid by the loudest skill in your repository, the one that enjoyed the spoils of being the most experienced and adored skill. So it’s easy to fall back into that trap and seek the comfort of knowing rather than go through the discomfort of deliberately thinking.

Becoming multidimensional thinkers

The beauty is that system-thinking is a meta-skill, just as philosophy and logic are meta-skills, specifically the art of recognizing patterns. A meta-skill does not grow stronger by itself; it is only as strong as the number of hard skills it has access to. So when you sharpen the other skills, your system-thinking grows stronger. It also loses its hubris and realizes that it doesn't have sway over you like it once did, and that is for the best. Because now you truly are a multi-dimensional thinker, relying on a growing repertoire of skills.

The emergence of the entropic brain


Part 1: Emergence, Entropy and the Universe

Emergence is the phenomenon of properties of a system arising (or emerging) due to the interaction of parts in a wider whole. Think of these as abilities unlocked when large-scale network effects come into play. An example of this is consciousness. Today, we understand neurons pretty well, their structure, how they fire electrically and chemically, and how they connect to neighboring neurons. Despite this rich detail on a single neuron, we do not understand how consciousness emerges when 100 Billion of them interact in the structure we call a brain. 

An example more in our wheelhouse is computers. Transistors are miniaturized electric switches used to build binary logic gates, the most fundamental of them being AND, OR and NOT gates (see reference #1). NAND and NOR are universal gates because all other gates can be derived from them. From the absolute simplicity of a switch that has two states, "0" and "1," emerged ways to store data, process it algorithmically and distribute it over the large-scale networks of the internet. Recent advances have led to surreal applications of Artificial Intelligence and Machine Learning models that can discern language, drive autonomous vehicles, and help us stay in touch with people across the world. They can even go so far as to model two black holes crashing into each other (Two Black Holes Merge into One, Reference #2). From binary logic gates thus emerged human-wrought intelligence. 
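The universality claim is easy to demonstrate in a few lines: the sketch below builds NOT, AND and OR out of a single NAND function, which by composition is enough to construct any Boolean circuit:

```python
def nand(a: int, b: int) -> int:
    """The only primitive: output 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# Everything else built from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))   # De Morgan's law

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  NOT a={not_(a)}  a AND b={and_(a, b)}  a OR b={or_(a, b)}")
```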

There are other examples of emergence: 26 letters in the alphabet creating an infinitely extendable communication medium we call language. The four basic compounds called bases (abbreviated as A, G, C, and T) pair up in different combinations to form the double helix of DNA. From that aperiodic crystal emerges the complexity of life (base pairs of DNA, Reference #3). The four bases of DNA encode 20 amino acids that combine to form the inter- and intra-cellular machinery we broadly call proteins. A cascade of firing neurons leads to thought. There are many more examples of emergence, where deceptive simplicity leads to unfathomable complexity. 

The universe too, in all its complexity, emerged from the interaction of elementary particles constrained by fundamental universal laws. Theory of General Relativity is an example of such a law. One of the most fundamental of such laws is the one concerning entropy, which simplified states that the amount of disorder in a closed system always increases. Entropy is incredibly fundamental to the universe, with Einstein once remarking that 1000 years from now, our species may discover new laws that overwrite current ones but not the second law of thermodynamics. Stated more generally, the second law of thermodynamics states that the universe is inevitably heading towards states of higher disorder i.e. higher entropy.

(In this essay, the terms entropy and disorder are used interchangeably. Higher entropy and higher disorder mean the same thing.) 

Let's take a slight detour to understand entropy, because of how fundamental it is to the universe and, as we will see, to life and the mind. Entropy is a statistical representation of disorder. To explain this, let's take 100 standard coins with heads and tails, toss all of them at the same time, and record their configuration when they fall. The space of possible outcomes is every possibility from (100H, 0T) to (0H, 100T), with the middle case being (50H, 50T). Since each coin can be either an H (head) or a T (tail), i.e., 2 possible configurations, and there are 100 coins, the total number of possible configurations (counting each distinct sequence of heads and tails) is 2^100. 

The above picture which maps out the probability space of 3 coins hopefully gives you an intuition on how quickly the complexity grows.

Let's just take a pause and wrap our heads around this number 2^100, which is 1,267,650,600,228,229,401,496,703,205,376. This is a massive, massive number. To put it into better context, imagine you have a standard sheet of paper that you fold in half, then continue folding in half again for a total of 100 folds. How thick would the resulting paper stack be? The answer: a mind-bending ~13.7 billion light years, which is to the edge of the observable universe (see how this is so in reference #4). Fun fact: 42 folds of the paper get you to the moon; only 58 more get you to the edge of the observable universe, which is unreal. A human brain does not easily grasp exponential growth; watch this (provocatively titled) video to build an intuition: "The most important video you will ever see" (reference #5).
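A few lines of arithmetic reproduce the claim (assuming a sheet roughly 0.1 mm thick and the usual Earth-Moon and light-year distances):

```python
thickness_m = 0.1e-3        # a sheet of paper, roughly 0.1 mm
moon_m = 3.844e8            # average Earth-Moon distance in metres
light_year_m = 9.461e15     # metres in one light year

print(f"2^100 = {2**100:,}")
print(f"after 42 folds:  {thickness_m * 2**42 / moon_m:.2f} x the Earth-Moon distance")
print(f"after 100 folds: {thickness_m * 2**100 / light_year_m:.2e} light years")

# Roughly 1.1 times the distance to the moon after 42 folds, and about 1.3e10
# (over 13 billion) light years after 100 folds, on the scale of the
# observable universe.
```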

Coming back to our experiment: of the total 2^100 possible outcomes, only one branch in the outcome space corresponds to (100H, 0T). Similarly, only one outcome corresponds to (0H, 100T). Hence the probability of such an event randomly occurring is about 0.000000000000000000000000000000789 (7.89e-31), which is infinitesimally small. This is the most ordered state, and per the 2nd law, the configuration of this little universe of particles will tend to higher disorder, ultimately converging around the most probable of all outcomes, (50H, 50T). You would be very surprised if, while doing these random tosses, you saw an outcome of (100H, 0T) or (0H, 100T), but you would hardly bat an eyelid if some configuration around (50H, 50T) showed up. Your brain intuitively grasps the improbability of highly ordered configurations. 

Ordered configurations, in fact, are energy-rich, because building order takes energy, and breaking that order yields energy. In the above example, if you landed on a (100H, 0T) configuration, that is the most energy-rich configuration of those particles, and as you extract energy to do other things the configuration will progressively tend towards (50H, 50T), the least energetic configuration. So, in a simplistic essence, order is correlated with energy. How did order even emerge, then, if disorder is significantly more probable, and how do you explain stars giving off massive amounts of energy? This is a fair question, and one that physics helps answer. 

Without going too much into detail: when the Big Bang happened, fundamental particles exploded outward from a compressed, incredibly dense core, and clumps of particles started to form. When some of these clumps reached a critical mass, they started exerting gravitational pull on other particles, drawing them in and transforming dispersed clouds of gas into stars. Stars have a highly ordered core, tightly packed elemental fuel being fused together by the intense pressure of gravity. But the further away from the core you go, the more disordered the particles are, such that the 2nd law of thermodynamics still holds: the entropy decrease in the core is more than balanced by an increase in entropy towards the outer edge of the star. The sun too, if you observe, has a solar atmosphere at its edge that becomes visible during a total solar eclipse, which is the high-entropy part of the star (The Sun's Atmosphere, reference #6). Many years from now (~4 billion years), after a rollercoaster journey, the sun will burn out and settle into being a dead star, entropically equivalent to a (50H, 50T) state. 


Part 2: Entropy and Evolution, two sides of the same coin

What explains the emergence of life then? Life emerged from evolution, but why was evolution even allowed to occur? This is where things start moving from the metaphysical to the personal. Human beings are a byproduct of evolution; we are highly ordered cellular structures and we are blessed with the ability to reverse local entropies to our benefit. The phone you hold, the laptop I am typing this on, a chair, and almost everything we have invented is an ordered structure. Do we break the 2nd law of thermodynamics? No, nothing breaks the 2nd law and, as we will see, life feeds entropy's thirst for disorder. So much so that entropy is willing to let some order emerge so long as the net disorder is always on the rise. 

Evolution is in some sense the twin of entropy, the yin to its yang, the heads to its tails. Entropy allows evolution to occur, and in fact encourages it. Ordered life forms emerge because the universe wants to reach its entropic potential. In slightly more lay terms: you know when you say that one kid had great potential but did not apply themselves and squandered their gifts away, while that other kid worked really hard to achieve their full potential, and did? The universe is kind of like a kid with the potential to do great things, but it has to work hard to achieve that potential. Potential here is analogous to change in entropy, literally how much the entropy changed. If the universe stayed in the same configuration that the "Big Bang" wrought, then over vast swathes of time (10^100 years) the energy would eventually dissipate, but short of star or other mega-structure collisions, entropy doesn't change too much. Entropic potential is achieved when the energy in existing ordered systems is leveraged to impose order on unordered (random) configurations of particles. Since we know everything ultimately tends to disorder, these newly minted ordered structures too will dissipate into disorder. Additionally, to create a certain amount of order in the universe, you need to expend more order (energy) than you are creating, to satisfy the 2nd law. Thus the net entropic change increases, and the universe moves closer to achieving its entropic potential. In a sense, the universe is trying to maximize its entropic change.

Of all the processes that could enable a universe to achieve its full entropic potential, few work as consistently well as evolution, maybe only beat by evolution's own progeny, an intelligent and conscious species. How does evolution help? Evolution is a process that is enabled by the existence of favorable conditions, the most important ones being - the presence of water in liquid form, a specific temperature range, gravity to prevent the atmosphere from escaping into space which creates a greenhouse effect leading to warmer conditions and thus leads to a cascade of favorable conditions for evolution to start. These conditions are so ubiquitous in our understanding of life that we look for habitable planets using similar metrics (see reference #7 Goldilocks zone). 

The existence of these conditions (a statistical anomaly in themselves) and the presence of carbon and select heavy elements on earth led to particle interactions, producing agglomerations with a curious property: their motion could not be explained by the laws of physics alone; they seemed to be able to move non-randomly. This ability to move non-randomly and exert control over their own agglomerations is a characteristic feature of life. The earliest agglomerations were single-cell organisms that didn't move much (if at all) but were indirectly responsible for all of life as we know it. These early agglomerations, which we call cyanobacteria, used sunlight to fix the carbon dioxide dissolved in water and released oxygen (O2) into the atmosphere (reference #8, Cyanobacteria and the Formation of Oxygen). Through a slow process that took billions of years, these ordered agglomerations leveraged the energy in sunlight and, through complex metabolic pathways, made "food" for themselves. Food is a colloquial term for energy, which a living thing needs to sustain its ordered structure. The need for food to survive is the second core characteristic of living things. 

(Sidenote: If what cyanobacteria do sounds like what plants do, then you are prescient because cyanobacteria form the basis for photosynthesis. In the tree of evolution, photosynthesis was invented only once, every other organism that uses it has cyanobacteria to thank.)

Let's take a moment to see how entropy likes all of this. Even the simplest organism has an ordered core (the body) that is low in entropy but needs to consume energy. This energy is obtained by converting low entropy structures to high entropy, which releases energy, that is consumed as food to maintain its low entropy state (reference #9 Entropy and Biology Photosynthesis). In photosynthesis specifically, cyanobacteria are able to take advantage of the sun raining down energy that is perennially being produced in adherence to the 2nd law. Evolution by virtue of its ability to create ordered life forms, is appeasing entropy's endless thirst by utilizing order but compensating the universe with a much higher increase in disorder. 

This oxygen-rich environment (first the oceans and then the atmosphere) was critical to supporting the next level of complexity in life. The tree of life started bearing the fruit of complex multi-cellular organisms, one of those fruits being the human species. With the increase in cellular complexity came an increase in energy requirements, which, as we have established, are met from ordered configurations. So evolution started imparting species with the means to recognize these ordered configurations, because doing so was critical to survival. These newly minted structures endowed by evolution enabled living things to identify energy-rich sources (food) and to chart a path towards them. These new structures have a name of their own: "brains." 


Part 3: The emergence and evolution of brains

The most ordered structures out there to consume were, unsurprisingly, other life forms. Identifying life, however, is not an easy classification to make. Today, we have a good sense of what is edible and what isn't, but when the first organisms were starting out, they needed to be able to differentiate between life and non-life in order to classify things as edible or not. Let us stop to ponder the complexity of this question: how, without the benefit of hindsight, would a biological brain have been able to differentiate between things that live and those that don't? Entropy again, I argue, provides the foundation for that answer. 

Entropy is a descent into disorder, and as we covered before, an increase in entropy is a fundamental truth about the universe. (In fact, an accepted theory of time is that time IS the direction of entropy increase.) If everything is tending to disorder, then change towards disorder is the constant, whereas life is a structure that maintains its order. At scales observable to life forms, disorder tends to emerge from the interaction of particular configurations with the environment. Against this backdrop of disorder, the brain keeps track of life as the structures that have a few common properties: their structures don't disintegrate into disorder, and their particles seem to be able to engage in self-directed motion. Structures that fit this bill maintain their entropic state, through configurations that persist despite interaction with the environment, and they seem to move non-randomly. Dead life, on the other hand, has the entropic configuration of life, but is losing its order quickly, decomposing through interaction with the environment, and is also incapable of self-directed motion. 

Thus as an early step, brains evolved an ability to classify. Classification is the ability to distinguish an object from its background and identify the characteristics that demarcate it as such. Or in other words, in the vast web of complexity that comes from particles interacting with each other, classification is the ability to isolate particular configurations, based on common sub-particular configurations that we call properties. 

As an example: we can use the general properties of living things to classify a chimpanzee, a fish and a human as living things. But because we are able to distinguish properties, we realize that a human and a chimp are more similar, because they live on land and the sub-configuration of their limbs reveals a common architecture of two hind limbs and two forelimbs used for moving the body.

Brains, however, did not evolve in isolation. Most complex multicellular species evolved brains, and in order to preserve themselves they needed not only to identify low-entropy structures to consume as food, but also to recognize themselves as a source of such energy for animals higher up on the food chain. Evolution selected for brain characteristics that additionally optimized for self-preservation. Now brains needed to mobilize bodies to chart a path through three-dimensional space not just towards energy, but towards moving targets: food sources actively trying to preserve their own lives, cognizant of an approaching predator. 

This is incredibly complex when you stop to think about it, and huge gains in brain complexity were needed to support these battles of "eat or be eaten." (This may have been the first time differential calculus was unconsciously applied to close a gap, and integral calculus to anticipate the gap closing and increase the chances of survival.) This finds resonance in the human world too: incredible technological strides were made during wartime. In fact, in Guns, Germs and Steel, a seminal book, it is argued that Europe was able to progress technologically so rapidly because its nations shared borders on all sides, resulting in constant skirmishes. Battles have evolutionarily sharpened our brains, and we see the resonance of that throughout our history. 

However, brains were not the only way to gain an evolutionary advantage in the hunt for food. Size and strength, became important evolutionary advantages. If an organism had the advantage of size (as in the case of dinosaurs, blue whales, sharks) or strength (as in case of lions, sharks, crocodiles) they didn't need to think too hard about getting their food. Their size gave them a vantage point that enabled them to look far, and their strength meant they could take down any food source without too much trouble. Essentially, size and strength put the organism higher up on the food chain, which reduced their survival risks. Hence, lacking in natural enemies, they did not need to breed aplenty to keep their numbers, nor did they need to become ever better-thinking machines. (Reference #10 Brain to Body Mass Ratio)

Humans are at that size where they have several potential enemies that are larger and stronger than them, but are not so small that they were locked in a constant cycle of "being eaten." Human beings were a fairly reproductive species: up until a few millennia ago, youth mortality was quite high at 46.2%, as was infant mortality at 26.9% (Reference #11, Mortality Rates Of Children Over The Last Two Millennia), which meant the human species was a lot more genetically active than the other species higher up in the food chain. DNA is such that, during the life of an organism, information can only ever be translated from DNA to RNA to proteins, and never in the opposite direction (this is called the Central Dogma of Biology, Reference #12). This means that changes in DNA, which were the only way for a species to evolve, could only happen via more births. It is not possible for you to change your DNA during your life and pass those changes to your offspring.  

So humans were in a sweet spot, where they were sufficiently small to reproduce quickly enough that their genetic diversity was constantly changing in the early days and not so small that they were constantly under threat of extinction from having a natural enemy that marked them for food. In a sense, they were the "upper middle class" of the food chain, due to which they could as a species focus on self-actualizing pursuits like forming societies, growing their own food and inventing religion and nationhood to bind societies. I cannot help but notice how similar this is to Corona Virus, where it was only mildly deadly, but spread easily allowing it to genetically diversify. It too was the "upper middle class" of flu-like viruses.   

Humans made up for their size disadvantage with massive gains in the capacity of their brain. With nearly 86 billion neurons, we had more than we needed to just survive. Most of the brain of any organism is devoted to keeping its various life processes running. This, referred to as the subconscious, is what keeps you breathing, keeps your immune system functioning, and keeps your musculature able to control your body (to name a tiny subset of the functions your brain performs). Certain functions like the heartbeat are (interestingly) brain-independent and are carried out by a specific type of muscle unique to your heart called the myocardium (Reference #13, Cardiac Conduction). Most organisms' brain capacity asymptotes at the point where it is able to carry out all these base functions, keep the organism alive, and propagate its species. Hence most (if not all) species are highly "present," which means they live in the moment. If they get hungry, their brain figures out a path to fulfilling that need; if they reach sexual maturity, their brain figures out a path to reproducing; if they sense a threat from a predator, the brain figures out a way to save itself. This is why dogs are so sad to see you leave, and so happy when you come back: they don't have a consistent sense of elapsed time, so when you leave, unless habituated, they have no idea when or if you are coming back. 

Once the base processes were accounted for, the excess capacity of the brain was, for the first time in evolution, available for "luxury" purposes. Luxuries that led to superpowers like the ability to sense the passage of time, the twin miracle of speech and language, and perhaps the most powerful of them all: reflection. Our ability to reflect on our own thoughts (also called metacognition) is, I would argue, at the center of most of the technical progress we have made as individuals and as a society. It is that last superpower that is our window into entropy, and by extension our theoretical peephole allowing a glimpse into the fabric of the universe through such mental concoctions as quantum physics, string theory and general relativity. Reflection is also what allows us to observe and deconstruct our own biology.  


Part 4: Brains as a reflection of the universe

Our discussion of entropy touched on how the order of stars came to form from the "Big Bang" while respecting the 2nd law of thermodynamics (entropy always increases). Stars are enormous, mind-bendingly large hearths that not only transmit light and heat in the form of radiation that enables photosynthesis and thus life, but equally importantly, also act as a massive forge to combine the simplest element of Hydrogen into complex elements all the way up to Nickel.  

(The stars are the crucible that combines the basic emergent element of Hydrogen (1p (proton) and 1e (electron)) into more complex elements, starting with Helium (2p, 2n (neutrons) and 2e) and going all the way to Nickel (28p, 28n, 28e).)

Starting from the time of the Greeks in the west, we used to think of atoms as the smallest unit, indivisibility conferred on them in their very name. We have since come to understand that atoms are made of protons, neutrons and electrons, and we now know even those have more fundamental constituents called quarks. The universe at this layer of abstraction is made of electrons and quarks; quarks constitute protons and neutrons (and a proton and a neutron can transform into one another). 

(Sidenote: String theory is an even deeper layer of abstraction that posits that the most fundamental particles are not quarks and electrons, but actually infinitesimal strings in extremely high tension that generate all the subatomic particles and forces of the world through string vibrations)

So the universe started off as an agglomeration of electrons and quarks (also photons); quark agglomerations manifest as protons and neutrons; proton, electron, and neutron agglomerations manifest as atoms, atom agglomerations manifest as molecules, and carbon-based molecular agglomerations manifest as the molecular machinery that we call proteins, and proteins manifest life. At every step of the way, particles associate into larger structures, obeying the rules that inform their mutual interaction. But at every stage, what emerges is fantastically different from what came before it. We have almost no hope of predicting an emergent property.  

Try to visualize a simplified universe with only 25 atoms (formed from prior quark and electron agglomerations that we will ignore for simplicity) and 4 types of atoms that agglomerate into molecules. These molecules then combine amongst each other to form proteins, the proteins combine with an energy source in the form of a molecule, and together, in our simple 4-atom-type universe, they are the simplest life forms.

------------------ A simple 25 atom universe ------------------ 


Let's do something to simplify this universe even further. Let us hold constant the same entities, and represent them as one. So all the green molecules, we will represent as one, and draw lines from all the atoms that went into forming it. This way we keep track of individual atom starting points and destinations, however we obscure away the details of the exact path they took. We repeat this at every level and the simplified structure looks like this. 

------------------ A universe with similars grouped  ------------------ 

This looks suspiciously like a neural net, which is a collection of artificial neurons and forms the basis of how we program intelligence into machines. 

------------------ Picture of a neural net  ------------------ 

Our brains have two broad sub-types of neurons: one is called a pyramidal neuron (because of its shape) and another is called an interneuron (or an association neuron). Pyramidal neurons look remarkably like the path of one of these particles in the forward direction. Purkinje cells are another type, but they are in essence a pyramidal neuron with significantly higher dendritic density. 

------------------ Pyramidal neuron - courtesy Nature.com  ------------------ 

Interneurons, or association neurons, are what discover similarities and "associate" them all. When we held all similar molecules constant, we were able to do that because our association neurons noticed the similarity and allowed us to hold that feature of the system invariant, i.e. hold a feature of a system constant to simplify the mental representation of the complexity. We would be overwhelmed by the complexity of the world (emerging from entropy) if we lacked the ability to hold something constant. Association neurons are also vital to improve computational efficiency (see Appendix for an explanation and an analogy).

------------------ Interneuron - courtesy School of Biomedical Sciences  ------------------ 

Is it not remarkable how analogous to one another the universe and the brain are? The brain is quite literally a reflection of the universe, and maybe that is how we perceive the universe indirectly, first through the senses and then by recreating it in our brain. While most animals do the same, what makes us different is that a part of our brain is sectioned off to store a core pattern, our pattern of self. I talk more about it in another essay, but the core idea is that our identity comes from a set of memories and influences that shaped the brain. Memories are but a reflection of a slice of the universe that we call our environment. In recognition of the importance of these memories, the brain stores them separately and ties them together into a super pattern. Since our brain loves telling stories (the story of the universe, for example), these memories and influences, when linked together, lead to the emergence of a central story. This is the story that defines our sense of "self." Since any memory is of the real world, and the only thing capable of interacting with the real world is not our brain but our body, our bodily perceptions are a critical part of our "self." Our sense of self is the part of our brain held invariant, the part that is able to experience, witness, and influence the other parts that change and are free to change.


Part 5: The brain, language and entropy

This brings us to language, one of the most beautiful inventions of the human mind. It is impossible to know the exact origins of language, because as far as we can tell language originated in the human mind alone. Our records of what we can loosely call language started off as cave paintings, followed by hieroglyphics and many intermediate forms before formal writing came along. Some writing survived the ravages of time and helped us piece together a history of language's emergence. The word emergence is important here, and it reinforces a pattern we talked about at the very start of this essay: a property of a system arising from the interaction of parts in a wider whole. Here the interactants are human brains, and the emergent property of the network of brains is language.

While we can never be sure of language's exact story of emergence because of its inverse nature (Reference #14, Inverse Problem), we can piece together a story by studying its effects. We know that there is overlap in the areas of the brain responsible for language and for action (the motor areas) (Reference #15, Brain Mechanisms Linking Language and Action). This points to a causal relationship between the two, with hand gestures like grasping having come first, followed by a linguistic representation of that gesture. Our animated gestures when speaking are a resonance of that overlap.

How do humans converge towards a particular gesture, or a word that denotes it, given the infinite variations that are possible? This is where network effects come in. Since a language of one, i.e. a language that only you understand, is not very useful for communication, there needs to be a wider consensus. This is naturally achieved in societies through the wisdom of crowds, which comes close to optimal solutions. Let me give you an example to expand on this. In grad school, our behavioral economics professor showed our class of ~60 students a picture of a big cow and asked us to guess its weight. We had no additional information beyond that picture, and we all wrote our answers on a piece of paper and passed them to the TA, who collected and tallied them. What came out was memorable enough that I remember it to this day, many years later. The answers had a wide range, with comically low and absurdly high guesses, but curiously, when averaged out, the final answer was remarkably close to the actual weight of the cow, within ~2% of the actual. This is not a one-off result but a repeatable one: every class before us on which the professor had run this experiment came within ~5% of the actual weight. This is the underlying principle behind democracies; if people are informed and free to choose independently, they are likely to arrive at the optimal candidate.
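
A minimal sketch of that averaging effect, with the cow's weight, the class size, and the spread of guesses all invented for illustration:

    import random

    random.seed(42)
    true_weight = 560                      # hypothetical weight of the cow, in kg
    class_size = 60

    # Individual guesses are noisy and unreliable: some comically low, some absurdly high.
    guesses = [random.gauss(true_weight, 150) for _ in range(class_size)]

    crowd_estimate = sum(guesses) / len(guesses)
    error_pct = abs(crowd_estimate - true_weight) / true_weight * 100

    print(f"range of guesses: {min(guesses):.0f} to {max(guesses):.0f} kg")
    print(f"crowd average   : {crowd_estimate:.0f} kg ({error_pct:.1f}% off the actual)")

The individual errors largely cancel out in the average, which is the same mechanism the essay leans on for consensus in language and in democracies.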

Even in lab experiments where people were asked to choose gestures for a particular set of words without speaking, first individually and then collaboratively, within about 10 group iterations the gestures for the words became consistent both within and across subjects (Reference #16, Evolving artificial sign languages in the lab). You can think of language as having emerged from many such iterations, where the initial gestures and sounds denoting a particular action were random, but through every brain it inhabited, language was refined and grew closer to the most ordered representation of sounds strung together to represent something in the real world. The birth and death of languages helped more refined, finer-grained order to emerge.

------------------ A Language Tree - credit SSSScomic  ------------------ 
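
The convergence seen in that lab experiment can be sketched as a toy "naming game": each agent starts with a random gesture for the same action, and repeated pairwise interactions drive the group to a shared sign. The population size, gestures, and adoption rule below are my assumptions, not the cited study's protocol.

    import random

    random.seed(0)
    gestures = ["wave", "point", "clap", "nod", "tap"]
    agents = [random.choice(gestures) for _ in range(20)]   # everyone starts with a random sign

    interactions = 0
    while len(set(agents)) > 1:                 # until the whole group agrees
        interactions += 1
        speaker, hearer = random.sample(range(len(agents)), 2)
        agents[hearer] = agents[speaker]        # the hearer adopts the speaker's gesture

    print(f"consensus on '{agents[0]}' after {interactions} pairwise interactions")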

This is not to say that language is all structure; in fact, most of our word choices are completely arbitrary. Why the word banana for the fruit versus any other combination of sounds? Somebody must have come up with it, the word slowly became popular with others, and after a critical mass of people started using it, the group collectively agreed. Today, dictionary companies perform this function by accepting new words into the language every year. However, what collectively came out was a common syntactic structure to language, without which we wouldn't be able to understand it uniformly. Our understanding is also helped along by fixed words like pronouns (he, she, they, I, you, etc.) and determiners (the, a, an, etc.). These parts of language don't change every year; they are largely invariant. You can perhaps start seeing the similarity between evolution and language, where language is the emergence of information from symbols. A natural question to ask then is: what is information?

Part 5.1: Information, association and entropy

While it may sound tautological, information is the ordered configuration of symbols that refers to entropy in the universe, or, more specifically, to the aspects of the universe that reduce "referential entropy" or "interaction entropy." These are new terms, so let me explain what I mean.

Referential entropy is the name given to associations in the entropic space, like the ones we discussed previously. Associations of quarks and electrons are called atoms, associations of atoms are called molecules, some associations of carbon-based molecules are called life, and a chair, too, is an association of particles that we can comfortably sit on and lean our backs against.

Interaction entropy is the name given to change: the changes that cause a "referential entropy" to move in spacetime or to change its agglomeration structure. This is difficult to explain simply (probably because I too am not very clear), but think of it as the name given to the links that lead particle agglomerations to change their location or their configuration. Verbs, for example, represent interaction entropy, where move, push, walk, or breathe represent changes to the locations of agglomerations. Words like force, gravity, and heat are quantitative measures of interaction entropy, because they are able to explain how particles will interact without waiting for the interaction to play out. When something is decomposed, built up, or torn down (at the atomic and higher levels of agglomeration), that too is a form of interaction entropy, one that leads to new referential entropy.

------------------ Referential and Interaction Entropy  ------------------ 

I gave this somewhat involved explanation to arrive at the point that information is the notation given to the order in the disorder (or entropy) of the universe. The law of gravity is incredible because it can take almost any slice of the universe with objects of referential entropy (planets, stars, space shuttles, and every particle agglomeration past a certain size) and predict their entropic interaction, i.e. the interaction entropy. In fact, we have a mathematical law that reveals a fundamental force able to hold its own against entropy.
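
As an illustration of that predictive power, Newton's law of gravitation, F = G * m1 * m2 / r^2, predicts the pull between any two particle agglomerations without waiting for the interaction to play out. A quick sketch with rounded textbook values for the Earth and the Moon:

    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    m_earth = 5.97e24   # mass of the Earth, kg
    m_moon = 7.35e22    # mass of the Moon, kg
    r = 3.84e8          # average Earth-Moon distance, m

    force = G * m_earth * m_moon / r**2
    print(f"Earth-Moon gravitational pull: {force:.2e} N")   # roughly 2e20 newtons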

Similarly, relativity is magnificent because it identified a bound on the universe: the speed of any object with referential entropy cannot exceed the speed of light, which is an interaction entropy. It was essentially able to prune the tree of the universe by invalidating a whole set of interaction entropies, the configurations of particles moving at speeds greater than the speed of light. By reducing the entropic space of possibilities, Einstein was able to follow other branches and realize the curving of space. I do not mean to trivialize Einstein's achievement in discovering this. To be able to model the universe and its entropy on such a titanic scale and derive the central order from all of it is unparalleled in creativity and cognition, something only a human brain could do, since the brain is a reflection of the universe.

It makes sense now why language is so difficult for computers to understand: when we speak a sentence, every word plays a part in progressively reducing the entropic space of the universe in focus. We also call this attention. Language is also inherently recursive; we can write a sentence and then, in the middle of the sentence, choose to add detail about a specific part, kind of like I did with this fragment right now. The recursivity of language is an important feature, because it allows us to use language to talk about language. It is an excellent reflection of consciousness, central to which is our ability as thinking beings to look inward and reflect on our own thought. No wonder humans were the ones to come up with language: our brain reflected itself out into the real world in the form of language.
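
A toy illustration of how each successive word narrows the slice of the universe in focus; the tiny "universe" of objects and the phrase are invented for the sake of the example.

    universe = [
        {"color": "red",   "material": "wood",  "kind": "chair"},
        {"color": "red",   "material": "steel", "kind": "chair"},
        {"color": "blue",  "material": "wood",  "kind": "chair"},
        {"color": "red",   "material": "wood",  "kind": "table"},
        {"color": "green", "material": "rock",  "kind": "chair"},
    ]

    phrase = [("color", "red"), ("material", "wood"), ("kind", "chair")]

    candidates = universe
    print(f"before speaking : {len(candidates)} candidate objects")
    for attribute, word in phrase:
        # every spoken word discards the objects it cannot refer to
        candidates = [obj for obj in candidates if obj[attribute] == word]
        print(f"after '{word}' : {len(candidates)} candidates remain")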

Part 5.2: An aggregation of abstract entropy

This is an interesting avenue, so let's dig a little deeper by considering a trivial example. Take the word "chair" and its conventional definition: a place we can park our buttocks on and rest our backs. We associate it with comfort because we transferred the work our body was doing to keep itself upright into the external world, thus saving the energy expenditure that kept our muscles contracted. We recognize a wide spectrum of chairs, from human-engineered ones to flat rocks in specific formations that afford us the comfort of resting our muscles and leaning our back. This is a different form of referential entropy (or particular agglomeration), because there is no inherent uniformity. Up until now, when we considered objects as having referential entropy, we were looking at increasing levels of association, where atoms associate to form molecules, molecules to proteins, and proteins to life. The similarity among members of those groups is much higher than among the things we refer to as chairs. A chair could be made of wood, steel, rocks, branches, and so many other things, and it still fits our loose definition of a chair. How then is our brain associating such diversity into a single definition?

This is where the body comes in. Our brain inherited the capacity to form associations from the universe, but then we made it our own. We are not constrained to form only the types of associations that the universe most commonly experiences. We experience the universe through our body, and the experiences received through the body's sensory inputs are a rich source of potential associations. When Descartes formulated his theory of the mind, he proposed a dualist theory, one where the brain and body are separate, with the body being a sort of homunculus, dumb and stupid on its own but elevated when animated by the mind (Dualism, Stanford Encyclopedia of Philosophy, Reference #17). The mind was the center of consciousness, the body a necessary appendage to host it. From this rather reductive view of the body, we have philosophically evolved to a different model where the brain and the body are not separate but a single entity, neither meaningful without the other. This theory, called materialism, dispenses with the special treatment given to the mind and instead argues that there is no separate mind, only a body. Genes express and neurons fire in the body, and through that we receive and inhabit the universe.

Our brain, housed safely in our skull, is for all intents and purposes blind to the universe. It only receives the universe through the senses, then reconstructs it into a simulation and inhabits it. Our body, on the other hand, actually inhabits the world and thus plays a critical role in shaping language. Such a wide diversity of particle agglomerations gets called a chair because the body registers their importance through its interactions with them. So the referential entropy of the body has an interaction entropy of sitting on a wide variety of objects that are entropically quite different but share perceived attributes of importance. Here the attributes are of comfort to the muscles in the body, specifically your leg and back muscles.

Perceived attributes are quite important too; they have no entropic significance except that they are a set of abstract attributes that matter to the object interacting with them. The beauty is that if the object were lifeless, no perceived attribute would exist, and what you see is what you get. However, since life is self-mobilizing and endowed with the ability to act independently (through the consumption of energy), it gets to find its own meaning, or, stated more generally, attribute whatever importance it wants to. Since we are still, ultimately, pattern recognition engines, we don't attribute one-off qualities (or qualia); rather, we converge on attributes that hold common importance. This is an extension of the consensus and wisdom-of-crowds idea, where the attributes that matter evolve from survival of the fittest playing out on attributes. These most-fit attributes then associate into objects of abstract referential entropy. This type of referential entropy is different from what we previously discussed because particles are not actually agglomerating; rather, attributes are agglomerating into mental structures in the mind, vetted by social and cultural standards.

This idea of an abstract attribute that is only important to us humans doesn't stop there; as will not surprise you, it too agglomerates into higher-level associations. This is different from referential entropy because no actual particles are involved; rather, it is a coagulation of attributes. For example, the word home is a non-exhaustive combination of the attributes of security, comfort, shelter, privacy, and safety (most of which are abstract attributes themselves).

Another category of words is that of physiological needs and emotions, which sometimes go hand in hand. The feelings of hunger, thirst, doubt, sadness, anger, frustration, and jealousy (to name a few) are very important to us and come from the interplay of the body and the mind. Physiological needs like thirst and hunger are the body communicating its needs, while words like comfort and safety describe the body's response to the attributes of the part of the universe it inhabits (what we also call the environment).

Emotions are very interesting in this context, because they are the seat of so many things that make humans creative, thoughtful, and reflective beings. We discussed in an earlier part that our ability to reflect is one of our most powerful abilities and separates us from most of the living kingdom. We established that we perceive the world through our senses and then recreate those sensations into a representation of the universe, or what we call reality, and the brain perceives us inhabiting it. Thus we rarely receive the universe as is; we color it with our biases, born of our memories and other facets of our "selves" that were impressed upon us by the very universe we live in. So we simulate the universe as a personal reality and inhabit it. All of us thus inhabit our own slightly different realities, which is why our languages are slightly different, but close enough that we can communicate. I will use the word "truth" to refer to the uncolored universe, which I am not sure is even possible to receive mentally. We need a control function, then, to keep our reality simulation in check; otherwise, what is to stop us from creating fantastical universes, completely different from anything remotely real, and inhabiting them? We associate this with mental disease: a child who thinks they have superpowers and can fly, and attempts to jump off the roof. This is clearly very dangerous and needs evolutionary protection.

I propose that emotions evolved as this control function on universal simulation. Sadness is when our simulation of a universe fails the reality test, and happiness is when it passes the test. Thus, sadness is the simulation of a universe with referential and interaction entropy that is rejected by reality, while happiness, in a sense, is one that is accepted by reality. Trust is the interaction entropy between two "selves" that has over time proved itself to fit each other's simulation, while distrust is when reality has taught you of the failure of your prediction. Jealousy is when the autobiographical simulation of your universe is superseded by the reality of somebody else's. Humor is a unique interpretation of the laws that leads to a simulation of a universe that is, for all intents and purposes, possible but either knowingly impossible, only remotely possible, or playing up an abstract attribute. Many times humor involves a shared, almost telepathically communicated new parallel universal configuration with changed parameters that enable the impossible, like talking animals or the personification of inanimate entities. However, humor hits hardest not when it is completely nonsensical but when it is sensible per the laws of the parallel universe that was mutually agreed to in the shared consciousness of the humans participating in the humorous experience.
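
One way to make this proposed control function concrete is a toy "reality test" that compares a simulated prediction with the observed outcome; the thresholds and labels below are my own simplification of the idea, not an established model.

    def reality_test(predicted: float, observed: float, tolerance: float = 0.1) -> str:
        """Label the result of comparing a simulated prediction against reality."""
        if abs(observed - predicted) <= tolerance:
            return "happiness: the simulation passed the reality test"
        return "sadness: the simulation failed the reality test"

    print(reality_test(predicted=0.8, observed=0.75))   # the prediction roughly held
    print(reality_test(predicted=0.9, observed=0.20))   # the prediction was far off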


Part 6: Free will and our ability to shape reality

The universe plays itself out probabilistically, where no particular pathway in the entropic space of possibilities is favored; it is a cascade of probabilities. For the current world to have the form and structure we see, every past event had to happen in the exact order that it did, and every living thing in it had to behave exactly as it did. The slightest variation has the potential to create a substantially different universe that we may not recognize. This steps into the arena of philosophers who use this inevitability as an argument against free will. If everything that happened had to happen for the current reality to manifest, then are we but biological machines playing our fixed roles on the entropic canvas of the universe? Doesn't this presuppose the existence of a god or a higher entity that carved out our roles, or is it just that this entity set in motion the first few cascades, and all of reality is a follow-through of that cascade?

I will argue that we do have free will, but it is a choice we have to exercise intentionally. I will also argue that free will that can shape reality is a function of aggregating (or agglomerating) probabilities. Simple animals without the ability to execute complex coordination do not have the weight to pull off large-scale changes. This is not to say that they can have no impact. If a pack of wolves hunted every rabbit in their area, and if hypothetically the rabbits only existed in that one area, then given enough time the wolves would have whittled the population of rabbits down to such sparsity that they could not find each other to reproduce. In effect, this pack of wolves drove a species to extinction. This is not dissimilar to what human poachers do, driving populations of elephants to near extinction.

Human beings, when they are born, spend almost 18 years in development before they are considered capable enough to become independent adults. During this time of development, the brain is shaped by the immediate environment. We are born with near-blank-slate brains but equipped with incredible neuroplasticity to drink in our surroundings and start modeling them in the brain. Our parents, our upbringing, our friends, and later society all play a significant role in how we turn out. Each of them fills us up with their own biases and their idiosyncratic way of modeling their universe. Our brain has incredible machinery to distinguish features of the universe. We talked about our ability to classify in an earlier section, and this ability is something all of us are born with. However, we can only work with what we have been fed. So our universal simulation is an aggregate of the universal simulations of the people who influence us (with different weights). If we continue in this state, then we exit the interstate of universal possibility and step into the cul-de-sac of our social simulation. We slowly lose our ability to exercise free will.

However, there is a way out, and that is to feed the brain with experience, or what I pedantically call new slices of the universe that the brain could not simulate by itself. We use various language analogies to describe this, including "broadening one's horizons" and "expanding one's worldview." This is why a child who grew up in a rural village in India, who only went to school in the village and was taught by the same teachers who taught there a decade ago, will rarely make life choices outside the norm of that village. However, a child who constantly experiences new cultures, is surrounded by diversity in a bustling metro, spends time in a culture diametrically opposed to his, or is bilingual (since we made the connection between language and universe simulation) is able to calculate probabilities on a much wider scale. The neurons calculate probabilities within the universe that the brain as a whole is able to simulate. So a child in the rural village can imagine the possibility of starting a company and becoming rich, but the neurons calculate that probability to be zero, since they have never encountered it happening. Whereas the simple act of doing can expand your horizons.
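
The "probability of zero" intuition can be shown with a simple frequency estimate: an outcome never observed keeps an estimated probability of zero until a single lived example (or a small smoothing prior) unfreezes it. The counts below are invented; only the arithmetic matters.

    # Outcomes the village brain has actually observed (invented counts).
    observed = {
        "work the family farm": 180,
        "move to the nearest town": 15,
        "start a company and get rich": 0,    # never encountered
    }

    total = sum(observed.values())
    for outcome, count in observed.items():
        print(f"{outcome:30s} estimated probability {count / total:.3f}")

    # One lived example (the act of doing) moves the estimate off zero.
    observed["start a company and get rich"] += 1
    total = sum(observed.values())
    print(f"\nafter one lived example: "
          f"{observed['start a company and get rich'] / total:.3f}")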

The experience of doing something that your brain firmly decided was not possible for you, but that due to circumstances or "willpower" you powered through and emerged unscathed on the other side of, is powerful. The brain learns to place less emphasis on its own simulations and realizes that a lot more is possible. As an example, say you did not think you were capable of learning to program, but after being laid off you were forced to pick it up, and a few months later you end up with a programming job that you are much happier at. Now the brain realizes that a lot more is possible than you thought, and the next time you have a very low-probability thought, like starting your own company, your brain doesn't automatically shut it down as impossible.

The brain is specific about the types of experiences that can change it: almost always an active experience is required; rarely, if ever, does passive experience have the same impact. Watching a documentary about somebody building a robot versus actually building the robot yourself has a hugely different impact on the brain. Watching a series about a group of friends in New York City is very different from actually living in New York City and having friends, no matter how many times you watch it. Knowledge is another way the brain expands its space of probability. A physicist who understands how the big bang happened, and how the quarks and electrons that originated there interact among themselves to form planets and solar systems and life, is able to model much larger scales of the universe. A biologist who witnesses the incredibly subtle symphony of molecules and compounds that leads to life is able to model the universe in incredible depth. Having knowledge thus increases the simulated space of your universe in both breadth and depth, and thus increases the scope of possibilities you assign. You now get to exercise free will, because you are no longer bound by the probability space you were endowed with through your environment. You get to exercise the characteristic feature of life: non-random motion, not just in physical space but in entropic space. Which path you choose is a function of your emotions, which place a personal value on different paths, and you choose the one that is most meaningful to you.

Now that we understand how individuals gain free will, how is reality affected by individuals exercising it? Here again, the concepts of aggregation and association play a central role. A single individual exercising their free will rarely affects reality on their own. However, when a path in entropic space is seen as valuable by others, and people gather to support this individual or organize themselves to act as allies or as employees, then the free will of a single ambitious and audacious individual can change the course of reality. In fact, that is probably the only thing that can lead to changes on human timescales. Our objective reality is like a rope made of the fibers of all our individual realities, such that through the wisdom of crowds the biases are stripped and only the real parts persist. A small set of people usually act as stewards of reality and choose which direction humanity as a whole heads towards. We have had luminaries like Socrates, Galileo, Kepler, Newton, Maxwell, Fourier, Euler, Einstein, Schrödinger, Nietzsche, Krishnamoorthy, Bohm, and in modern times Steve Jobs, Elon Musk, Bill Gates, and Jeff Bezos. We have also had people like Hitler and Stalin do the same.

We need more people to step up to the role of stewards of humanity. However, for that to happen, more people need to think beyond selfish scales and start imagining on the scale of humanity. For this to happen, you need people who seek diversity and knowledge, who realize that they are a manifestation of a universal consciousness and that their duty lies towards all of humanity and all of nature.


---- Thank you for reading ----








Appendix:

Association neuron analogy

Another reason association neurons emerged is to support more complex functions in animals, such as movement. Doing something as simple as putting one leg in front of the other to take a step involves activating many neurons that allow fine-grained control over individual muscles to maintain balance and coordination. If the brain had to individually compute the contraction of every single muscle fiber, it would take massive computing capacity to do the simplest things. Association neurons increase computational efficiency by connecting groups of nearby muscle fibers to a single association neuron. Now, to move a leg, you activate the respective association neuron, and it automatically activates the muscle fiber groups it connects to. The marionette analogy below drives the point home.

Let's take the example of a marionette to drive home this point. If you start off with a simple marionette with 3 strings to control it, this is fairly doable and within the limit of the digits on your hands. As you keep increasing the number of strings, the complexity of managing the marionette increases. With training, you can manage 10, with one string per digit using both hands. Now imagine the marionette evolved in complexity and grew to 100 strings. With the extra strings the marionette is capable of more exact and precise movements, but there is no way you can handle 100 strings; you don't have the required number of digits. The only way to still work the marionette is to group strings and reduce the number to a maximum of 10. Grouping the strings that work the hand of the marionette will lead to much smoother animation than, say, grouping some of the hand strings, some of the leg strings, and some of the head strings together. These groups can then form a supergroup that is involved in a coordinated movement like walking the marionette, an act that involves the hands, the legs, and the neck.
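
A small sketch of the same idea in code, with the string counts and groupings invented for illustration: grouping 100 strings by body part, and composing groups into a coordinated movement, means the puppeteer issues only a handful of high-level commands.

    strings = [f"string_{i:03d}" for i in range(100)]

    # Group strings by the body part they work (one "association" per group).
    groups = {
        "left_hand":  strings[0:15],
        "right_hand": strings[15:30],
        "left_leg":   strings[30:50],
        "right_leg":  strings[50:70],
        "neck":       strings[70:80],
        "torso":      strings[80:100],
    }

    def pull(group_name):
        """One high-level command works every string in the group at once."""
        moved = groups[group_name]
        print(f"pulled {group_name:<10} -> {len(moved)} strings moved")

    # A supergroup composes groups into a coordinated movement, like walking.
    walk = ["left_leg", "right_leg", "neck"]
    for group in walk:
        pull(group)
    print(f"'walk' worked {sum(len(groups[g]) for g in walk)} strings "
          f"through just {len(walk)} commands")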


References: 

  1. Logic Gates - https://www.circuitbasics.com/what-is-digital-logic/
  2. Two Black Holes Merge into One - https://www.youtube.com/watch?v=I_88S8DWbcU
  3. Base Pairs of DNA Animation - https://www.genome.gov/genetics-glossary/Base-Pair
  4. 100 Paper Folds to the Edge of the Universe - https://www.freemars.org/jeff/2exp100/question.htm
  5. The Most Important Video You Will Ever See - https://www.youtube.com/watch?v=F-QA2rkpBSY
  6. The Sun's Atmosphere - https://scied.ucar.edu/learning-zone/sun-space-weather/solar-atmosphere
  7. Goldilocks Zone - https://exoplanets.nasa.gov/resources/323/goldilocks-zone/
  8. Cyanobacteria and the Formation of Oxygen - http://butane.chem.uiuc.edu/pshapley/environmental/l30/1.html
  9. Entropy and Biology: Photosynthesis - https://www.ecologycenter.us/population-dynamics-2/entropy-and-biology-photosynthesis.html
  10. Brain-to-Body Mass Ratio - https://en.wikipedia.org/wiki/Brain-to-body_mass_ratio#/media/File:Brain-body_mass_ratio_for_some_animals_diagram.svg
  11. Mortality Rates of Children Over the Last Two Millennia - https://ourworldindata.org/child-mortality-in-the-past
  12. The Central Dogma of Biology - https://www.yourgenome.org/facts/what-is-the-central-dogma
  13. Cardiac Conduction and Myocardium - https://www.cliffsnotes.com/study-guides/anatomy-and-physiology/the-cardiovascular-system/cardiac-conductionzoo
  14. Inverse Problem - https://en.wikipedia.org/wiki/Inverse_problem
  15. Brain Mechanisms Linking Language and Action - https://www.researchgate.net/publication/7784335_Brain_Mechanisms_Linking_Language_and_Action
  16. Evolving Artificial Sign Languages in the Lab: From Improvised Gesture to Systematic Sign - https://psyarxiv.com/be7qy/
  17. Dualism (The Stanford Encyclopedia of Philosophy) - https://plato.stanford.edu/entries/dualism/
    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1630067 2020-12-21T09:06:07Z 2022-05-11T18:22:27Z The mind, the self and the patterns in our recognition

    At the moment of writing this, I am sitting at an airport looking out onto the runway, at planes landing and taking off, at the harmony of airline crew members making sure that the aircraft are tended to and helped towards a timely departure (I sound like an in-flight announcement, don't I). I cannot help but marvel at how far human ingenuity has brought us. Not very long ago (a blink in time), we were hunter-gatherers living in forests, running around trying to gather food and keep ourselves safe. Long before that (a few billion years ago), we were single-cell organisms living in the ethereal soup that the early earth was. In those days of single-cell organisms, the first complex (eukaryotic) cell emerged from the symbiotic relationship between a cell that consumed other cells as nourishment and a smaller, simpler cell that used free oxygen to generate energy. In an act of defiance of their design, the larger cell, instead of consuming the other cell (as was the norm), absorbed it. So now the smaller one could continue to produce energy by consuming oxygen, while the larger one formed a protective membrane around it. This was the birth of the complex cell that all multicellular organisms are built from, and the descendant of that oxygen-consuming cell is found even today in every cell you possess. It is called the mitochondrion, and it is infinitely more complicated than that first partnership. Still, it does the same thing; it generates the energy that powers all the other cells in the body. 

    Our brain develops using similar pathways, and the cells that constitute it have a very specialized function - to "think" and to "do." The brain evolved to become the control center that coordinates "living" and "being." Other animals have brains too, but I have often wondered what makes us human beings so different. Let me try to think through that in this essay. 

    While other animals have brains, what makes us different is our greatly expanded pre-frontal cortex (PFC). In many ways, the PFC is a reverse evolution engine and equips us humans with the embedded learnings gleaned from billions of years of evolution. Think about it: evolution is a relentless struggle against chaos, the most definite way of making order in absolute disorder and reinforcing patterns that survive time. Realizing that my brain is akin to a machine learning model, trained on billions of years of the universe with survival as the control function, was pretty profound. Apart from pattern recognition, we are also capable of storing patterns in our memory (in a way, that is the only thing we store). Thinking of the brain as a pattern recognition engine and our memory as pattern storage opens us up to some interesting ideas. 

    When we are born as babies, the underlying architecture of the brain and the "evolution engine" is present within all of us. However, we are not born knowing things; we learn them (very, very quickly, I might add). So in a sense, the first few years of your life are the priming period of your brain. Your environment and your upbringing lead to your sense of self. Our sense of self is a stored pattern. Our caregivers and our environment shape our perception of ourselves. As we grow, we add to and subtract from that pattern. It is what ties your memories together, and how you are the "hero" (pattern) in your story. Your sense of self is a core pattern, not changed easily or without effort. If it changed too easily, then your brain couldn't tie memories together. This is why people's sense of self only changes after a struggle; if it were too easy, it wouldn't be a core pattern. In this model, "ego" is a necessary side effect, a sense of self where the pattern is based on extrinsic factors. As your brain primes itself, it needs external indicators to formulate its own control function. The desire for money, fame, power, and sex is your ego exerting itself, a pattern that your brain latched on to at an early age and integrated into the pattern of your "self." As you grow, you have to work on your ego; just because your brain latched on to societally desirable patterns does not make them right. Every human being should contend with their ego and shed the patterns that their older, wiser self knows are weighing them down. 

    Music is a pattern in sound, and hence so stimulating to our brain. We may not be able to create it, but a beautiful pattern does not go unnoticed. This inherent pattern in music is also why there is music theory, and why when an untrained pianist plays it sounds noisy, but when a trained one does, it is beautiful. The trained pianist knows how to generate a pattern through music, not noise. 

    A story is a pattern communicated through words, telling a start, a middle, and an end. The brain's episodic memory is concerned with connecting the cacophony of your sensory inputs into a cohesive story. We love a hero's story because our brain constructs this story for ourselves. We are the hero of our own story, where our sense of self is the main character and almost everybody else a supporting cast member. A good movie is an episode constructed by a director, where your brain doesn't need to do the heavy lifting of tying sensory inputs together. Being a good storyteller is hence a gift; you are able to turn words into patterns, and since our brain loves patterns, the coherence of the story can move us, spur us to action, bring down power structures, and mobilize masses of people (Mahatma Gandhi, MLK, Hitler).

    Knowledge is a pattern of patterns, which is why knowledge acquired through memorization doesn't stick around for too long. Real knowledge and learning come from attaching a learned pattern to an existing pattern in your brain. Unconnected patterns are lost to time. This is also the underlying theory of chunking, where information is held in your short-term memory for processing and vanishes pretty soon unless you see a pattern in the information and associate it with an existing one. When you do associate it, the information goes into your long-term store, where it persists. The more patterns you have in your arsenal, the more incoming information you succeed in transitioning to your long-term store. 

    Science is finding new patterns, and the scientific method is a time-tested system to ensure the veracity of new patterns. This is why I find science so beautiful; you are finding new patterns in the universe, thus reducing the information entropy. An engineer is concerned with taking these patterns and turning them into useful contraptions. For example, Bernoulli figured out a pattern in the way fluids behave: depending on a fluid's velocity, it exerts a different pressure on the material it is flowing past. Modern airplanes are engineered around that pattern, where the shape of the wing forces air (which is a fluid) to have different velocities above and below the wing. The different velocities generate a difference in pressure, which generates lift (that is why the wing has its bulbous shape).  
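
    A rough sketch of that pattern in numbers, applying Bernoulli's relation p + 0.5*rho*v^2 = constant above and below a wing; every figure here is an invented, rough value for illustration only.

        rho = 1.225          # air density at sea level, kg/m^3
        v_below = 70.0       # airspeed under the wing, m/s
        v_above = 80.0       # airspeed over the curved top of the wing, m/s
        wing_area = 120.0    # total wing area, m^2

        # Faster air above the wing exerts less pressure than slower air below it.
        delta_p = 0.5 * rho * (v_above**2 - v_below**2)   # pressure difference, Pa
        lift = delta_p * wing_area                        # approximate lift, N

        print(f"pressure difference: {delta_p:.0f} Pa")
        print(f"approximate lift   : {lift / 1000:.0f} kN "
              f"(supports roughly {lift / 9.81 / 1000:.0f} tonnes)")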

    Creativity is finding an imaginative way to connect unrelated patterns or see patterns where others don't. Einstein elegantly connecting the pattern of space and the pattern of time through relativity was a feat of scientific and creative daring. 

    Curiosity is a deep appreciation for patterns and a propensity to look for patterns everywhere. 

    Awe is when you receive a pattern so profoundly beautiful that you are transfixed. 

    Love is a resonance of patterns. 

    What a privilege it is to be alive. To be endowed with a neural engine that is a condensation of billions of years of evolution, and to get total and complete control over what to apply it to. Evolution is a beautiful recursion; it is through evolution that the universe is attempting to understand itself. We are gods, and our kryptonite is time. 

    Shine on, you crazy patterns. 


    Associations: 

    • Society of Minds by Marvin Minsky - https://www.amazon.com/dp/0671657135/
    • Molecular Biology of a Cell - https://www.amazon.com/Molecular-Biology-Cell-Bruce-Alberts/dp/0815345240
    • Bernoulli Theorem Explanation - http://hyperphysics.phy-astr.gsu.edu/hbase/pber.html





    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1622328 2020-12-02T23:02:52Z 2020-12-02T23:04:46Z Capitalism, benevolence and inheritance (part 1 of 2)

    A friend asked me what I think about the interplay between governments and big tech and whether governments should start regulating industries more. It was an interesting discussion, and I got the idea to jot down some thoughts and associations I made. Most of the associations drew on the economics of Adam Smith, a general understanding of Marx's theory and how it played out in Russia, and an underlying optimism that the pursuit of knowledge and the act of creation are our ultimate purposes in life. 

    Let's start with Capitalism. Capitalism is the form of market economics that incentivizes innovation through wealth creation and private ownership. Communism is the form where communities are the only owners; a central government represents the interests of the communities and owns everything. Communism and Capitalism were the two most subscribed-to branches of governance, and they clashed, nearly to the point of nuclear war, during the Cold War. Capitalism won out, not just because of the US's higher defense budget but also, I believe, because Capitalism was better able to align incentives. 

    Communism sounds awesome in theory: no poverty, and everybody is able to live a certain quality of life irrespective of their economic contributions. The jobs that existed were in service of the government, produce was traded to the government at a price set by the government, and the government took care of distribution. Individuals had no bargaining power; who else were they going to turn to and sell their products to if not the government? The government went so far as to make trading with anybody but itself an illegal, punishable offense. Let's take a farming example: say you were a farmer, a brilliant and observant one. You notice that the process of plowing the field is inefficient, and you come up with a design for a tractor, a mechanized automaton that would save you 50% in labor costs. However, the idea is one thing, execution a whole other thing. Would you as a farmer be incentivized to go out of your lane, invest in this (and seek investment), build and iteratively test it, and finally open it up so that others can benefit from it? 

    Well, not really, right? Even if you did all that and managed to build yourself a tractor (a big if), you benefit most from it if you keep the innovation to yourself. If you share it, the government will realize that farmers can now produce more at the same cost, and so it will uniformly reduce prices. All your savings vanish and you're back to where you started; the government still takes 90% of your produce and gives you the same total amount. Another related problem that shows up is a fixed mindset. As an example, imagine you came up with a groundbreaking innovation that could generate power much more cheaply but requires an upfront investment to play the idea out. In a communist empire, the government controls energy production. The people in positions of power in these institutions are incentivized to maintain the status quo, because the alternative could grow big enough to strip them of their authority. Once you've tasted power, it's really hard to give it up. 

    Let's play the above situations out with Capitalism, which, as we remember, is predicated on private ownership and subsequent wealth creation. If the farmer came up with the design of the tractor, he or she has all the incentive in the world to go out and raise money from willing investors who see the potential in the idea. This allows the inventor to build-test-iterate until a fully functioning and well-researched product is out in the market. The tractor achieves product-market fit, and soon every farmer in the US wants it; word spreads, and farmers in India and other countries want one too. In the process, the inventor is rewarded with wealth, status, and the ability to use that wealth to incentivize further innovation. It is more resilient too. In the case of the new, cheaper power generation product, even if the government is not interested in funding your pursuit, that tractor inventor with overflowing wealth sees the potential in it and decides to take a risk and back you. You now have the runway to run the build-test-iterate loop and, in the process, come up with a product that cuts the consumer's energy cost by 80%. The market loves your idea because you are saving people money, and as you hit critical mass, the government sees the benefits and contacts you to standardize your product more generally. 

    Big tech is a massive, massive beneficiary of Capitalism. Companies like Amazon were so innovative that they cornered massive markets and, in the process, generated enormous wealth, which they redistributed into more innovative profit-seeking endeavors. This is a close cousin of the idea of a perpetual motion machine of wealth creation. There are few industries worth disrupting that Amazon is not in (retail, space, films, healthcare, internet infrastructure, and gaming, to name just a few successful ones). I both admire and fear their scale. Amazon is also not the only one in this hallowed category; some of the others worth mentioning are Microsoft, Facebook, Apple, and Google in the US, while China has Alibaba, Baidu, Tencent, and (newly) Bytedance. Left unchecked, these institutions may grow to become more powerful than a country's government, reaching a point where the power imbalance means that the US needs Amazon more than Amazon needs the US government on its side.

    This is the part where benevolence comes in. An important point to remember is that tech-induced wealth creation was so tremendously rapid that Amazon has generated $200B+ of personal wealth for Jeff Bezos in just over 25 years (it started in 1994). These are all single-lifetime fortunes; most if not all of tech is first-generation wealth creation. The people who have stood these companies up and weathered the risk, the financial burden, and the market economics to become successful are uncommonly visionary (we can never have enough visionary people, but that's just me). They also seem to be rational humanists who intend to use their acquired power to advance the human species. Pretty fortunate that these people are benevolent and out to do good, right? Well, not exactly, and let me explain my rationale. 

    People who generate outsize successes, the kind that vaults them to the ranks of the most successful or the richest, are a self-selecting set. The market acts as a forcing function, selecting those individuals who solve a problem on a wide enough scale and do it for impact rather than monetary gain. The bad apples, the Bernie Madoffs, the Elizabeth Holmeses, and the Martin Shkrelis of the world, can sprint ahead, but reality usually catches up. It's like the "great filter" theory of evolution (technically abiogenesis) used to explain Fermi's Paradox, but for wealth. I do not know the specific filter that catches the no-gooders and the insufficiently visionary, but it makes associative sense. It's almost as if these people should be incentivized to run with their creativity and bring an opinionated world view into existence (see my post on The genius in world building). This is important for humanity's continued existence, or we may be wiped out by a great filter. So in a sense, the existence of these mega-companies with world-changing ambition is our attempt to create knowledge and stave off the great filter. 

    Ok so, the market chooses benevolent people to succeed; sounds like a win-win then? Well, no, not really, for a few reasons, but there are two main ones we will focus on. I'll write about those in the second and final part, which I'll finish up soon. 


    Association points: 

    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1620061 2020-11-22T22:10:55Z 2022-01-29T03:40:32Z A human sense of time

    Evolution is incredibly elegant in the way it reversed chaos and led to the ordered structures of atoms that we call life. We human beings are the pinnacle of that process, having been given a forebrain that is able to consciously exert its influence, as opposed to the subconscious way most (if not all) other organisms live. The conscious brain (CB) is truly beautiful; it condensed the knowledge gleaned from billions of years of evolution and turned it into a computation engine that almost every human being possesses in varying capacities. Evolution created patterns in atoms, and your conscious brain is a pattern recognition engine. 

    The subconscious brain (SB) is the ancient brain, the part imprinted by evolution and responsible for keeping you alive and functioning as a member of your species: things like breathing, digesting, sleeping, swimming (interestingly, we are all born knowing how to swim), and mating, to name a few. You have these subroutines programmed into you before being born; they aren't learned. The process of digesting food, as an example, is amazingly complex. The muscles in your stomach contract and release in a wavelike harmony to push food along the intestines; you generate chemicals to break down this food, convert it into energy (ATP), and excrete the waste. If this is how much needs to be coordinated for a single subroutine (digestion), you can start to appreciate how vast the computational capacity of the subconscious brain is. It is astonishingly more powerful than your conscious brain.  

    However, it is the interplay between the conscious and subconscious that differentiates human beings (and a few other species) from the vast majority of organisms that exist. Because, you will realize, the CB is able to actively add to the subconscious subroutines. Take walking as an example; you aren't born knowing how to walk, you learn it as a baby by falling many, many times before the CB finally gets it. Once it gets it, it sends the learned subroutine to the SB, where it is near-permanently stored. The number of muscles engaged when you walk is staggering. Imagine consciously having to select which muscles to engage when you put a step forward while keeping track of your center of gravity, incoming traffic, tripping hazards, and your walking speed. 

    This is the interesting interplay between the conscious and subconscious brain. Your CB can actively update the SB, and because you can choose what the CB focuses on, you can bring to weigh the incredible computational capacity of the SB on complex routines (like learning how to solve a Rubik's cube or play chess). However, your SB is not a thinker; it is a doer. What that means is when your SB is engaged, your sense of time falls away because the SB is just executing subroutines. 


    Think of the last time you drove (a learned subroutine in the SB) with a friend you were excitedly talking to. An hour later, do you remember many details about the drive itself? Probably not, because most of your attention was focused on your friend. As we get older, we stop relying on the CB because we have built a big store of SB subroutines. If time is what you want to fill, then there is a plethora of routine subroutines to choose from. Which I think is a shame, because you are at your most human when the CB is in the driver's seat.  

    When your CB is engaged, time slows down, you become more present, and your subroutines are not filling time up; rather, YOU are. 

    (to be continued in a later post)


    Interesting reading:

    https://www.quantamagazine.org/reasons-revealed-for-the-brains-elastic-sense-of-time-20200924/

    https://www.amazon.com/Biology-Belief-10th-Anniversary-Consciousness/dp/140195247X


    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1614785 2020-11-10T07:48:13Z 2020-11-10T07:56:39Z The genius in world building

    I have been thinking about people who are 3+ standard deviations out in their respective fields, the people quintessentially associated with the word genius. This includes people like Christopher Nolan (direction), Leonardo DiCaprio / Marlon Brando / Daniel Day-Lewis (acting), Elon Musk / Isaac Newton / Albert Einstein (science), Steve Jobs / Jeff Bezos (visionaries), Shakespeare and JRR Tolkien (writers), and Conan O'Brien (improv comedy). 

    These thoughts popped into my head after watching Tenet, Christopher Nolan's latest movie, which is set in a world where a device allows you to co-exist in a backward time loop. Objects with their time running backward can exist in the forward-flowing-time universe; this gives them the unique property of having their entropy reversed. Imagine a bullet being un-shot back into the gun. The world he sets up makes that possible. After watching the movie, I got a glimpse into Nolan's mind and his capacity to be a "world builder." 

    Somehow my mind kept coming back to world-building, and I started wondering whether that is a good measure of a person's multi-disciplinary genius. We all build worlds every day, unconsciously. It is easy to understand if you think about a robot, which is a combination of hardware and software. If it wants to move to achieve its goal of picking up an object at a distance, the sub-steps that happen are roughly these (a minimal sketch of the loop follows the list):

    • Take a snapshot of your world the robot inhabits right now - this means taking in your physical surroundings, the objects in it, how they interact with each other and with you. You also need to keep track of the forces acting on you and how they affect your center of mass.
    • Translate the goal to a final world state - what does the snapshot of the final state look like, hold it in memory and optimize decisions that get you closer to the goal world state
    • Update your current world state with every step you take until you reach your goal world-state
      • A further complication is moving an object to take advantage of it in attaining your goal. These steps change your current world state even if it means you haven't moved (e.g., moving an obstacle out of the way vs. jumping over it) 
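
    A minimal sketch of that loop, under simplifying assumptions (a grid world and a greedy step-toward-goal policy, both invented for illustration):

        robot = (0, 0)        # snapshot of the current world state: where the robot is
        goal = (3, 4)         # the goal world state: standing next to the object

        def step_toward(position, target):
            """Move one cell along whichever axis still differs from the target."""
            x, y = position
            if x != target[0]:
                return (x + (1 if target[0] > x else -1), y)
            return (x, y + (1 if target[1] > y else -1))

        steps = 0
        while robot != goal:                      # update the world state every step
            robot = step_toward(robot, goal)
            steps += 1
            print(f"step {steps}: robot now at {robot}")

        print("goal world state reached: the object is within grasp")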

    We do that every second we act, and a human maintaining balance while playing a sport or walking a tightrope is an advanced application of this. I believe we approach genius with our ability to use a similar muscle to create abstract worlds without a clear goal and interact with them. A director like Nolan imagines the world and creates it on the screen; an actor imagines it and inhabits it; a scientist creates many different versions and sees which obey reality most closely; a visionary sees it and builds towards it; a novelist imagines it and writes it; a comedian invents absurd ones and jokes about their absurdity. 

    What I believe differentiates genius is the complexity of the world they create or, in simpler terms, the number of real-world variables they account for in this world. JRR Tolkien's world-building in LOTR is legendary for how varied and detailed it was. Mahatma Gandhi's world took into account hundreds of millions of Indian citizens and a British colonial empire that relied on "divide and conquer" and violence, and from it he came up with a non-violent movement to achieve independence. Albert Einstein's world accounted for so many variables that he saw through the fabric of the universe itself.

    This is not an exhaustive, nor a revolutionary idea, but I thought it useful for recognizing genius: the multivariate nature of their world, and the subtle complexity in it.

    (e.g. Genius musicians like Beethoven may not fit, though a possible reach explanation is that they experience a world where their music exists and the emotional impact it has is so visceral that the music flows out through them).  

    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1611904 2020-11-03T17:30:55Z 2020-12-10T07:28:09Z An Ode to the eyes

    Everything visible is reflecting light, 

    Some manufacture their own, some borrow from the wild

    Nevertheless, you catch them all 

    All I need to do is point

    And you take care of the paint 

    Since I was 10 I needed help to paint,

    Colors were vivid, but each point corresponded to more than it needed to 

    So I used lenses, so I needn't squint nor strain

    What started off a window, evolved to become a shield 

    A separation between me and everything I saw 

    Today I try raising the shield

    See what the world looks like, unassisted 

    In doing so I open myself to the risk 

    Of the window being half open 

    or more worryingly, never quite see the same 

    I hope I read you shieldless, the other side 

    But if not, this was a risk you accepted, and one you will live with 

    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1611092 2020-11-01T22:43:02Z 2020-11-10T09:13:48Z Associative memory and quantum computing

    I have been making conscious early-stage efforts to wrap my head around quantum computing. As Shohini Ghose explains in her TED video, it is a future class of computing, about as different from current computing as a candle is from a light bulb. Quantum computing emerged because classical computing is reaching its physical limitations, and further miniaturization is not possible with the same technology. 

    A brief exposé of my understanding: quantum computing goes beyond the classical bit to the quantum behavior of particles themselves. Qubits (quantum bits) are two-state quantum systems, such as the spin of an electron, and they are what enable quantum computing. A qubit has two basis states (spin up and spin down, often written as +1/2 and -1/2) that are read out by a quantum measurement. These bits can sit in an eternal "Schrödinger's cat" state, i.e. a superposition of the two states. Unlike the typical bits we use to encode information, qubits need not be in just one of the basis states; they can be somewhere in between. Their position on that spectrum is revealed through probabilities, e.g. an 80% probability of measuring +1/2 and a 20% probability of measuring -1/2. Expected-value calculations don't mean much for a single measurement, but when you choose to "observe" a qubit, it picks a state to show itself in, and the probability distribution describes the frequency of those choices over many measurements. 

    The way qubits pick a state is often likened to Heisenberg's uncertainty principle, which says that you cannot know both the location and the momentum of a particle precisely at the same time, only one. For large objects both are, for all practical purposes, knowable at once, but at the quantum scale observing one facet essentially squashes the other. If you pin down location, you lose the information about momentum, and vice versa. Similarly, a qubit holds both of its states in superposition, but observing it forces it to manifest one of them. 

    In this 'jerk' of computational evolution, a single qubit no longer holds only a 0 or a 1 but a combination of both. It can encode much more information, though it does require specialized hardware to read the qubit and added redundancy to correct errors (which are more frequent in quantum computers). So 4 traditional bits hold exactly one of sixteen possible states (one of 0000 to 1111), while four qubits can exist in a superposition of all sixteen states at once and show up as just one of them when measured. 
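
    A minimal numerical sketch of the superposition idea, with the amplitudes chosen arbitrarily: a qubit is a vector of two amplitudes whose squared magnitudes are the measurement probabilities, and four qubits carry sixteen amplitudes that collapse to a single 4-bit outcome when observed.

        import numpy as np

        rng = np.random.default_rng(7)

        # One qubit with an 80% / 20% split between its two basis states.
        qubit = np.array([np.sqrt(0.8), np.sqrt(0.2)])
        print("single-qubit probabilities:", np.abs(qubit) ** 2)

        # Four qubits in a uniform superposition over all 16 basis states 0000..1111.
        n = 4
        state = np.ones(2 ** n) / np.sqrt(2 ** n)
        probabilities = np.abs(state) ** 2

        # "Observing" the register forces it to show up as just one of the 16 states.
        outcome = int(rng.choice(2 ** n, p=probabilities))
        print(f"measured state: {outcome:04b} (one of {2 ** n} equally likely outcomes)")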

    This seems curiously similar to how humans store information associatively rather than deterministically. We may forget a memory because it occurred 16 years ago, but a song, a walk through a familiar neighborhood, or a text from someone unlocks that memory and it becomes available. Memory champions use this to their advantage by constructing a quirky tapestry of a story to hold information, and they include sight, smell, sound, and sensation to aid retrieval. The more things you associate with the memory, the easier retrieval becomes. 

    Could associations act as probabilistic filters over a quantum-like memory? Maybe our memory is associative rather than deterministic because, at some level, it works at the quantum scale. If so, quantum computing could unlock huge advances in our understanding of human cognition. 


    Sources:

    Qubit diagram - https://www.austinchronicle.com/screens/2019-04-19/quantum-computing-101-a-beginners-guide-to-the-mind-bending-new-technology/

    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1606289 2020-10-20T06:01:08Z 2020-10-20T06:08:51Z Attentiveness, and the neuroscience of how the Brain enforces it

    The selectively focused, attention aware brain 

    The brain's ability to home in on a particular stimulus (or a group of them) amid a cacophony of stimuli is endlessly intriguing. For the longest time, neuroscientists thought that the brain's prefrontal cortex (hereafter PFC) shone a spotlight on the stimulus deemed essential, selectively ignoring what was considered extraneous. Francis Crick (of DNA fame) theorized that the thalamus, a more interior (and thus more ancient) part of the brain, is involved in receiving information and deciding which sensory inputs to pass along and which to gate. 

    More recently, researchers found that ancient regions such as the thalamus and basal ganglia are indeed involved, but the essential participant is the thalamic reticular nucleus (TRN). The TRN wraps around the thalamus and suppresses sensory inputs when a person is asleep. Similarly, it allows a person to focus on the task at hand by ignoring unnecessary sensory streams to the brain. It doesn't just turn sensory streams off as required; it also has the elegance and fine-tuned control to stream sensory data selectively. It can tune background noise out, allowing you to home in on the voice of the person you are speaking to. 

    Researchers at MIT verified this in mice by training them on a goal (running on a track) directed by their responses to specific audio and light signals. They found that if a task required the visual senses, turning those senses fully on negatively affected performance. More interestingly, the same intervention also affected the mice's ability to focus on the auditory stream. This is because the relevant neurons are being selectively silenced, not excited, as was conventionally thought.

    The TRN wraps the thalamus, which sits right next to the basal ganglia; both are interior parts of the brain and among its oldest components. Some of the oldest fish, which have retained their original brain structure through evolution, have basal ganglia that aid in attention. Attention, which can be thought of as making sure a set of activities happens in a certain order without you getting distracted by things you shouldn't be, is then not a byproduct of the PFC (the shiny new human-specific part of the brain) but an eons-old process that optimized your chances of survival while also preventing analysis paralysis. 

    This exciting finding could shed light on consciousness, what it is, and the mechanics of how we have it. Another assumption that has come into question is that the brain is a passive receiver of sensory input; it turns out the brain takes an active role in choosing which sensory data (information) to process. The flicker of an eye or the twitch of a finger may play a part in the active reconstruction of our surroundings. 

    It is also important to note that ancient structures like the basal ganglia probably didn't evolve filters to protect against social media and electronic notifications. Humanity has collectively developed something like ADHD through a dopamine-inducing stream of electronic data, synthesized in data-science labs to sneak past your hyper-vigilant TRN and basal ganglia and seize your attention. Aren't these interventions starting to sound uncannily like a virus?

    Source: https://www.quantamagazine.org/to-pay-attention-the-brain-uses-filters-not-a-spotlight-20190924

    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1604300 2020-10-14T06:22:34Z 2020-12-22T13:22:13Z [RMs] Meditation and its power in regulating the pre-frontal cortex

    My understanding of meditation and its impact from the perspective of a reformed skeptic 

    My mom has tried to get me to do yoga and meditate since my late teens. I was a mischievous, distracted, and disobedient kid. Call it teenage hormonal changes, immature maturity, or whatever else; according to my mom, I needed help. Since psychological interventions were out of the question, she enrolled me in a meditation and yoga class. I picked up some basics but never really took to it, at least not like my mom. 

    She started yoga and meditation to accompany me into picking up the habit (lol) but ended up finding deep meaning in it. She pursued yoga and meditation for many, many years to truly understand their nuances. She overcame severe stage fright and doubt stemming from her unfinished education and her fluency issues with English to become a yoga teacher. One of her biggest gripes has been that she failed to get either of her kids into yoga. Well, that changed a few months ago, at the height of the pandemic (at least for one of her kids). 

    Onset: 

    The pandemic has been a period of intense learning. I broke from my personal and professional schedules to spend time with myself. In my isolation, I reignited my passion for learning and creating. I immigrated to SF to ensconce myself in the lifeblood of global tech innovation, to work with the smallest startups taking on audacious challenges, and ultimately to build something of impact myself. The pandemic was my fodder, and I zeroed in on an idea that I loved, felt I could dedicate 10 years of my life to, and that answered the question "why me." I'll write about that in a separate post, but suffice to say I was excited. 

    Soon, though, anxiety joined along for the ride. I felt my passion being consumed by a rotten apple of anxiety. I was stuck questioning the idea: whether anybody would find it useful, whether it was too derivative, and whether VCs would ever invest. I was also a solo founder and slowly realized how incredibly lonely it is to start a company while also working a taxing full-time job. This all came to a head one evening when I had a full-blown panic attack. 

    I got scared. I thought I was dying, that I couldn't breathe, and that I needed to call 911 to help me out of my stroke or else I would die. Luckily a friend on the phone recognized my symptoms and guided me out of it. It still took an hour to run its course, and I was pretty shaken up on the other side. I knew I had to do something, and one of the things I picked up was meditation. Long story short, it has been incredibly helpful. I am going to talk about meditation from the perspective of a reformed skeptic. 


    Meditation - how to do:

    When they think of meditation, most people in the West imagine sitting in silence with their thoughts, taking deep breaths, and chanting "Aum." While that is a great starting point, real meditation is more involved, especially in the beginning. Meditation is the most complete exercise for a human. Yoga and pranayama are the respective preparations of your body and your breath to finally meditate. Pranayama is the process of exercising your internal organs and, most importantly, bringing your breath under your control. While not inordinately complex to teach, it takes diligent effort to become good at synchronizing your breathing to a count. Here are the things you permute with pranayama: both nostrils, left nostril, right nostril, mouth, breathing, and holding your breath (at full as well as at empty). You permute and combine these in various ways. E.g.:

    • Close your right nostril and inhale through your left, hold at the top, and exhale through your right 
    • Calmly inhale and forcibly exhale (almost making a grunt-like sound)  
    • Inhale through both nostrils, then alternate exhaling between the left and right nostril (holding for 6 seconds when your lungs are empty)

    There are many other variations of these, each with its own benefit. For example, "Bhastrika," as #2 above is called, is followed by a period in which your mind is completely silent. It is a beautiful feeling. 
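    For anyone who wants a metronome for the first pattern above, here is a minimal pacing sketch; the 4-4-6 counts and the number of rounds are my own placeholder choices, not a prescribed protocol, so adjust them to whatever your teacher recommends.

        import time

        # Cue-and-count pacer for one alternate-nostril pattern.
        PATTERN = [
            ("close right nostril, inhale through the left", 4),
            ("hold at the top", 4),
            ("exhale through the right", 6),
        ]

        def pace(rounds=3, pattern=PATTERN):
            for r in range(1, rounds + 1):
                print(f"round {r}")
                for cue, seconds in pattern:
                    print(f"  {cue} ({seconds}s)")
                    time.sleep(seconds)

        if __name__ == "__main__":
            pace()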


    From skeptic to a believer:

    I am interested in neuroscience and the human brain, so I read up on anxiety, trauma, and depression and their effects on our brain. Here's what I found and what I intuited (sorry, I'm not going to differentiate between the two; if anybody ends up reading this and wants more info, hmu). 

    The prefrontal cortex (PFC) is the newest part of our brain; it differentiates us from other animals and is responsible for our ability to see and decipher patterns. Being evolutionarily the most recent, it is influenced more by the environment than by genes. The cities we live in, the mega-structures we build, and the things we create to push humanity forward emerge from this area. It has given us so much, but it also has a dark side. For you see, the PFC likes pattern recognition so much that it defaults to it for everything. 

    The incessant need to compare yourself to others (helped by the social media bombardment) is seated in this area. Your PFC is not satisfied with the work you are doing; it needs to know whether it is good or bad. Since the PFC does not have the data points to judge, it defaults to comparing you to the people you see in the (social) media. Media portrayals overrepresent the already successful, the fake, and the loud - a difference your PFC does not see. Your PFC can't help but look at how far ahead these people are and compare your achievements and progress. It belittles your effort, your speed of execution, the validity of your idea, and your hopeless optimism. It is a perpetual motion machine fueled by doubt. 

    That's anxiety - being stuck in the future. Depression, in a similar vein, is the mind being stuck in the past, if not an even more debilitating condition. Your constant need to compare, your seeking of approval, and your diminished confidence can all be attributed to the PFC. The PFC constructs your ego, which, in a way, is you confining your sense of self to a psychological husk created out of your own shortcomings. You live, breathe, and operate in this world through that limiting husk. Doesn't that sound like a terrible way to live?

    Meditation is you taking time to be one with your breath. Breath is one of the essentials of life that you can control. You can't adjust your heartbeat, but your breath and the delivery of life-sustaining oxygen are within your regulatory control. By focusing on your breath, you observe the flow of chakra (prana, life source, energy). Your breath is a marvelous thing; so many parasympathetic responses are connected to it. Feeling nervous? Observe your breath; you will find that it is shallow. Practice bringing your breath under control, and you will see your nerves settle. 

    Meditation is you bringing your PFC under your control: asking it to stop trying to live in anything but the present, and deeply realizing that the past is immovable and the future a hallucination. Meditation helped me understand that the only real thing we all have is time. Consciousness is possibly our ability to perceive time and affect outcomes. You are not racing anybody; if you have a higher goal or a more profound interest, cherish that knowledge; not everybody knows theirs. The journey is all we have, and choosing the journey and not the goal is key to living a life free of regret. You become kinder to yourself (gosh darn, it took me a long time to understand what people meant when they said this). Your anxiety is replaced by a sense of subtle urgency. You know your time is running out. Life's greatest joy is the privilege of deciding what this unrestrained self of yours should focus its time on. 

    Journey along, and I hope to meet some of you on the way. 








    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1603906 2020-10-13T06:52:15Z 2020-10-13T06:52:15Z [RMh] 'Super Saiyan Ultra instinct' Goku and Taoism

    DragonballZ and Taoism 

    One of the central tenets of Taoism is the concept of flow: a state of unparalleled joy when all of your mental faculties come to focus on a task, your sense of time falls away, and you feel the joy that comes from being in love with what you are doing. It is in these states that your mind does some of its most unperturbed, creative, and efficient thinking. You are going with the "flow" - not planning ahead, not superimposing your fears and anxieties, being totally present, taking every moment in as it comes, and staying genuinely curious. You give in, and you do some of the best living of your short life. 

    Dragon Ball Z has had a hold on me since childhood. I grew up watching it, and it remains some of the most rewatched anime of my life (and I used to watch a lot). It is still on air, and the recent arcs have been awesome - some of my favorites of the long series. There is much more thought given to power now than the old trope of "screaming into the air" for an episode while the villain politely looks on and waits. It is deeper. The final form is called "Ultra Instinct": in this state the fighter gains ultimate speed by giving up on control and just letting the body be. It is muscles and senses taking over, reacting to the senses and not to the plannings of the mind. Forget the speed advantage from removing the brain from the loop; it is a whole new way of fighting, and even the gods were amazed. 

    I would not be surprised to find out that Akira Toriyama (the author) drew inspiration from Taoist principles.

    So yeah, this guy Goku is a deeply practicing Taoist, and I thought that was interesting enough to record for posterity.  

    (Image: Boiling Power Super Saiyan Goku - Dragon Ball Z Dokkan Battle Wiki, Fandom)

    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1603337 2020-10-11T23:48:22Z 2020-10-14T07:16:48Z [RMs] Quantum observation and the Monty Hall Problem
    My introduction to the Monty Hall Problem was not pretty. There were some logical inconsistencies that the younger me could not fully wrap my mind around. I recently explored quantum theory, particularly the observer effect on a waveform, and something clicked into place about the problem that I will try to explain here. 

    Monty Hall was a game show host, famous for the eponymous problem. It is a probabilistic puzzle masquerading as a game show; here's how it goes:

    • You are the participant in a game show where you can win a car or a goat. You want the car, NOT the goat. 
    • There are three closed doors in front of you; behind one of the closed doors is a car, while the other two hide a goat each. 
    • You are asked to guess which door the car is behind.
    • The host, who knows where the car is, opens one of the two doors you did not pick, revealing a goat behind it. So now you're left with two closed doors.
    • Do you want to stay with your choice or switch? 

    At first sight, it doesn't seem prudent to switch because the probabilities appear not to have changed - the initial likelihood of picking the door with the car is 1/3. However, the newly revealed data sets a new game in motion, and you have to update your probabilities. You were 2/3 likely to have picked a goat in the first round of guessing, and in that case, by revealing the other goat, Monty is effectively telling you where the car is. If you had chosen the door with the car, you would lose by switching, but the probability of that, carried over from the previous game, is lower, at 1/3. So if you approach the problem from a Bayesian perspective, where new data updates the decision model, you benefit by switching. 
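    A quick Monte Carlo sketch makes the 1/3 vs 2/3 split easy to see; this is just a sanity check of the argument above, with the trial count chosen arbitrarily.

        import random

        # Simulate one game; return True if the player ends up with the car.
        def play(switch, doors=3):
            car = random.randrange(doors)
            pick = random.randrange(doors)
            # The host opens a door that is neither the player's pick nor the car.
            opened = random.choice([d for d in range(doors) if d != pick and d != car])
            if switch:
                pick = next(d for d in range(doors) if d != pick and d != opened)
            return pick == car

        trials = 100_000
        for strategy, switch in [("stay", False), ("switch", True)]:
            wins = sum(play(switch) for _ in range(trials))
            print(f"{strategy}: {wins / trials:.3f}")   # ~0.333 vs ~0.667

    Raising the trial count only tightens the estimates around 1/3 and 2/3.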

    This was not intuitive to me for a long time; I couldn't see how the game changed if seemingly irrelevant data was revealed (the car door is never opened). More fundamentally, I did not see how a new game was set in motion. A basic understanding of quantum mechanics, notably of string theory, helped the concept click into place. 

    In the nano-scale world of quantum mechanics, the most fundamental particles are probabilistic waves passing down time-strings (approximation). It is the act of observation that determines their position, and in the process transforms the wave function. 

    In the visualization I watched, the wave-looking thing is a string vibration passing along, minding its own business, and the little clock is what we use to observe the particle. Notice how the wave was smooth, but the instant we observe and thus "determine" it, the wave function changes. Our observation changes the waveform, and if we are all waves, then that change affects the nearby vibrations. These changes propagate through adjoining strings and affect reality. 

    That's what happens when the host opens one of the doors: new information is revealed, and the change wrought by observing ripples through to start a new game. The impact of observation on creating new realities is profound when you attempt to wrap your head around it. It was interesting how my mind immediately went to the Monty Hall Problem and updated my understanding. In a way, my watching it changed me. 

    Many questions persist around why watching that had this impact on me and not, say, on my friend. What about the time-string configurations in my body led to that specific response?



    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1603078 2020-10-11T06:50:52Z 2020-10-11T06:50:52Z [RM] time strings and quantum probabilities

    Subject - time strings 

    (current status of a developing understanding of quantum mechanics)

    We, the universe, and everything that inhabits it are made of atoms, which we long considered the building blocks of matter. More recently, the substructures of atoms have become visible to human observation. Some sub-particles bestow our mass upon us - the famed god particle. Quarks, leptons, bosons, and other named particles grant other characteristics, including charge (or the lack thereof). 

    However, when observing particles at that scale, another curious characteristic shows up, one that has long confounded us about light. The smallest particles are waveforms, vibrations on a time string (or is time the vibration on a photonic string?). While we exist deterministically, the smallest particles only exist probabilistically. The act of observation determines them; this shows up on the waveform plot too. 

    These time strings seem able to affect each other, groups of them even gaining sentience - the ability to perceive the flow of time - and evolving into human beings, who, in addition to observing, can intentionally affect. Human beings evolved through this chaos to perceive the interactions of the time threads; our senses of sound, smell, sight, touch, and taste were maybe the most important dimensions of these interactions for us to perceive. 

    If, collectively, every human, animal, and organism that can observe time ceased their sensory inputs, would the universe in its infinitely large form revert to a wave state? Put another way, does the earth and all the activity on it exist deterministically because of one of these:

    • somebody is always observing, and that act of persistent observation determines reality
    • probabilities stack up from the cosmic intercoupling of time strings: all the particles in you are vibrating, but add the wave functions together and you are the transformed wave output, so your probability of being recorded is always 1, in one of two ways:
      • all the particles show themselves individually but together
      • this "togetherness" induces a single probability function across all the intercoupled strings, so they exist together or don't exist at all, together 

    More dimensions seem favorable for explaining:

    • how entangled pair-particles receive information about their twin's configuration at the point (and event) of capture, irrespective of the distance between them. If information is being passed, there is another dimension it is passing through; it doesn't 'miraculously' know  
    • the time strings interact with each other, and our senses perceive the magnitudes of these interactions as the 5 senses. What additional information do these interactions encode, and is there a theoretical limit? (Shannon's information theory?) 

    Meditators, gurus, and sadhus from early recorded history ask us to be one with the vibrations, to feel the vibrations and make peace with them. I would not be surprised if they intuited this somehow. The world is so fantastically beautiful that it is a privilege to exist and be given the greatest problem of all - a universe to explore and understand. 

    Have humans evolved with the chaotic goal of reducing chaos, of making sense of what we came from? Is our ultimate evolutionary goal to reduce entropy? 

    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1602715 2020-10-10T05:22:04Z 2020-10-10T05:22:04Z the internet is overwhelming, shouldn't we be able to tune it down

    I turned off my access to social media for the past 4-5 months, and seen from the outside in, the machinery that seduces our attention is horrifying. Our attention is at the behest of the internet giants - tech, media, and celebrities - and they 'earn' it by giving you a rush every time you interact with them. This is the synthetic kind, the manufactured rush of familiarity that is harmful in the long run but hard to recognize in the moment. These mini dopamine rushes keep your engine whirring while you do nothing, creating an anxiety cycle every time you realize you have been stationary. This is scarily similar to traditional drugs like cocaine, heroin, and meth, and even more to soma from Huxley's 'Brave New World.' 

    I found myself having more time, being more present, and starting to peek out from the shackles of the conformity loop, a happy side effect of stepping away from an internet we otherwise have no choice but to consume. As young apprentices looking out at the world, on a journey to find new highs at a pace that feels like 'flow,' we are fed and taught to seek out the stories of everybody we think is more successful than us or is doing more with their lives. Our brain likes structure: it imposes a pattern-induced model constructed to fit the evidence drawn from sensory data. We start biasing our data set towards the 90th percentile (the real and the fakers) and normalize unreasonable standards. The internet companies take advantage of that; they pull you into the web and use the billions of hours of behavioral data available to "increase retention" by sucking you in deeper. They are taking away our creative forces by feeding us depression. 

    I am scared by how much it had a hold on me.

    As somebody who has been on the inside of these startups, who knows backend architectures and how loose security at a growing company can be, you become aware of just how much data customers leave behind as they traverse the internet. These bits - a like here, a comment, a group, a link you shared, a song you listened to, a link that blew your mind, a book you posted a review about - it's all there, feeding computer algorithms. We should start by reclaiming that data and creating our own repositories of the 'internet we leave behind,' repositories you can add to, invite people into, and even create experiences through. 

    Over the past month or so, I have been thinking about the decentralized internet and how smaller self-contained pods could be the breakaway internets of the future. Power needs to be redistributed so we stop creating "company-empires" - benevolent so far, but far too powerful and unchecked. An internet you maintain for yourself, not to share with the world for likes, but because your brain told you this was valuable to hold on to. All the data would live on your device or in your personal (portable) cloud.
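    Purely as a hypothetical illustration of what such a personal repository could look like (every file name, field, and function here is my own invention, not a description of the MVP or any product), a capture log could be as simple as appending JSON lines to a file that lives on your own device:

        import json, time
        from pathlib import Path

        # One file on your own machine; each capture is a single JSON line.
        LOG = Path.home() / "my_internet.jsonl"

        def capture(url, note="", tags=()):
            entry = {
                "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "url": url,
                "note": note,
                "tags": list(tags),
            }
            with LOG.open("a", encoding="utf-8") as f:
                f.write(json.dumps(entry) + "\n")

        def search(tag):
            if not LOG.exists():
                return []
            with LOG.open(encoding="utf-8") as f:
                entries = [json.loads(line) for line in f]
            return [e for e in entries if tag in e["tags"]]

        capture("https://example.com/essay", note="blew my mind", tags=["essays", "keep"])
        print(search("essays"))

    The point of the sketch is only that the data stays local and portable; syncing it to a personal cloud or sharing it selectively would be layered on top.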

    I keep coming back to this idea of an internet reversal, where your behavior profile puts out jobs to complete and the internet finds the economic equilibrium cost of getting them fulfilled. A huge stock market; survival of the fittest. 

    A good entry point is to make capturing the internet easier, and that's what I am trying to solve with the MVP. We continue to consume the internet at breakneck speeds while really absorbing almost nothing. The cutting edge of content is emerging on personal websites as self-authored content, and we rarely remember to hold on to it. We need to start gathering, containing, and categorizing our traces of the internet. It is time we became foragers again, as we were built to be. 

    Anyway, some thoughts... I'm excited about the vision forming in my head but don't have it articulated clearly enough yet. 

    (The beginnings of a manifesto for the future of the internet) 

    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1602360 2020-10-09T08:48:39Z 2020-10-09T08:51:24Z Random Musings - productivity, and how to ring it in?

    Work, flow, and where magic happens

    There are more complete definitions of the state of "flow"; the aspects I connected with and felt to be most true are as follows:

    • your brain, in all its magnificence, is brought to focus on one task 
    • the easier the task, the easier it is to achieve this state of flow (e.g., people do some of their best thinking while washing dishes)
    • the true power of flow is when you bring it to bear on more complex tasks, specifically those that require multiple parts of your brain to collaborate (e.g., writing, drawing, building products, learning, and acts of creation and synthesis in general) 

    In certain ways, I liken it to your brain engaging its "deep learning mode." Athletes train extensively to be able to react to a baseball in less than 0.3 seconds. Your brain cannot analyze the ball, send signals to all the muscles to start the swing, and effectively connect in that time. Instincts honed over years of training kick in, and a pro player's muscles react before a normal person has even analyzed the ball. This is a form of deep learning: consistently training to do extremely well at that one thing. 
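    A back-of-envelope check of that number: assuming a roughly 95 mph pitch and the standard 60.5 ft from mound to plate (both figures are my assumptions, not from the post), the ball is in the air for well under half a second, and the decision window is only a fraction of that.

        # Back-of-envelope flight-time arithmetic; speed and distance are assumed.
        pitch_speed_mph = 95
        distance_ft = 60.5                               # mound to home plate
        speed_ft_per_s = pitch_speed_mph * 5280 / 3600   # convert mph to ft/s

        flight_time_s = distance_ft / speed_ft_per_s
        print(f"ball flight time: {flight_time_s:.2f} s")   # ~0.43 s

        # Subtract the time the swing itself takes, and the window left for any
        # conscious "analysis" is a small fraction of a second, which is the point.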

    This "deep learning" state, then, is a narrowing of perception, where you create the conditions in which the brain feels safe, engaged, and peaceful, so it can suspend its need to monitor "root threads" and bring all that bandwidth to the task (the user threads). 

    Analyzing my own flow:

    I find it hard to achieve flow when I am at home; it doesn't matter which room or how peaceful it is, my best work rarely happens in the comfort of my own home. Binaural beats help, but inconsistently. Instead, I find I do better work at any place that is not home. 

    Cafes were my favorite places to go to work. I like white noise (crowd chatter) that I can tune in and out at will, the access to coffee, seeing people work, and people-watching when taking a break. The pandemic fucked that up. 

    Replacement candidates and their review: 

    • Roof - I have a pretty cool view of the city from the roof of my Bernal Hill apartment. It does get windy up there, and wind is not easy to work around. At night, with all the lights off, with little specks of light shaping the scenery, a hot green tea, and a blanket to keep the chill at bay, it has been conducive to achieving something close to a flow state. (Disadvantage) I am only able to do this at around 11pm. 
    • Stairwell - literally the stairwell outside my apartment, because any place but my room. 
    • Walk-in closet - very small, just enough for me to sit and trick myself into disassociating from my room that lies less than 5 feet away. Thank god I am not claustrophobic. I've tried this a few times, and so far it has worked. 
    • A corner tea shop on the streets of India - I did some of my most important applications for school from a street stall, where I would sit on a chair by the sidewalk and work on my laptop while a steady stream of daily laborers came to drink tea, grab bites, and take breaks from their work. They would stare, but soon enough I became friends with the regulars. I did some of my best work there. 

    I think I associate home with distraction, the enjoyable kind - family, roommates, entertainment, reading, things that take me out of the moment. Meditation has been helping; it allows me to be present and makes getting into a flow state more achievable. I need to double down on that and, in the meantime, figure out the set of circumstances that can trigger flow at will. Wouldn't that be convenient?

    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1582284 2020-08-09T05:27:43Z 2020-08-09T05:32:38Z Ted style talks and what I took away from them

    Power foods for the brain - Neal Barnard 

    Context: Dr. Barnard's father passed away due to Alzheimer's, so he understands and explains how it strips a person of their identity. If you lose your memories, you lose yourself. Current wisdom holds that this disease has a clear genetic basis, with your chances of getting it increasing 3X to 15X depending on whether you got one or two copies of the responsible gene from your parents.

    Takeaways: 

    • Alzheimer's (henceforth Alz) happens because of the formation of plaques, whose constitution includes "beta-amyloid proteins" that accumulate in meatball-like structures among your neurons, along with iron and copper
    • Saturated fats are horrible, especially the kind that solidifies at room temperature (bacon grease). 
      • Dairy products are the highest source of saturated fat (meat is second) 
      • People who ate double the amount of bad fat had a roughly 3X higher risk of developing Alz
      • A similar finding holds for mild cognitive impairment (high functioning, but forgetting names and words), where they found the same pattern as with Alz
    • Iron and copper are important parts of our body, but:
      • Iron is a double-edged sword; too much is bad
      • If you are using a cast-iron pan, you are getting iron into your diet. The same goes for water carried through copper pipes; the free ions travel into your body and get assimilated
      • Vitamin E (mangoes, spinach, avocado) is a natural extinguisher - it knocks out the free radicals
      • You cannot get it from a store, because it occurs in many forms in nature. If you ingest artificial supplements, your body absorbs only that one version and shuts off the absorption of all the others. Hence get it from food; nature intended it that way
      • Don't eat nuts as a snack food; crumble them and use them in a salad
    • Colorful foods are awesome and very rich in antioxidants, e.g. blueberries, grapes, carrots, apples 
    • Staples - fruits, legumes, grains, and vegetables
    • Exercise helps all bodily functions, and memory benefits from it too (even something as simple as 10 minutes a week)
    • Genes are not destiny
    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1581655 2020-08-06T15:38:23Z 2020-08-06T15:38:23Z The dying art of conversation and how to revive it

    https://podcast.app/celeste-headlee-the-dying-art-of-conversation-e50721834/?utm_source=ios&utm_medium=share

    Takeaways / Observations / Musings:

    • Opera singers are trained listeners; they have to keep track of their own instrument, the conductor, the ebb and flow of the bass, and the changing meter. It is hard because listening doesn't come naturally to humans: we evolved as babies to scream for help, and we carry that through adulthood
    • types of listening
      • Evaluative - evaluate whether the person is right or wrong 
      • Interpretive - 
      • Transformative - when you learn and transform your own audience 
    • Conversation is almost universally neurologically beneficial. Only two situations where it isn't:
      • when we detect a negative or hostile tone
      • unsolicited advice - even if we walk away having learned something, we do not like unsolicited advice 
    • In written form - don't use 'that', keep ideas simple and brief
    • conversational narcissism - when we insert ourselves into the conversation
      • somebody says their dog died and you say "I know exactly how you feel" - that is the worst thing you could have said (even if your dog actually died). At the end of this conversation you end up happy because you spoke about yourself; the other person does NOT
      • another way out is to withhold attention - if you do, inadvertently the person will ask you for your experience 
    • An interview is a conversation moderated by the interviewer with a time limit 
    • in an interview nobody cares how smart the interviewer is; the listener is there to hear the interviewee speak, and the interviewer is there to ask the questions that facilitate the conversation 
    • Yes, and - Improv is a great tool to learn conversation 
    ]]>
    Anudeep Yegireddi
    tag:anudeep.posthaven.com,2013:Post/1581264 2020-08-05T07:37:30Z 2020-08-06T15:20:29Z The Hamming Problems (copied)

    I was introduced to 'The Hamming Problem' through Twitter, then books (and audiobooks), and so on. Sam Altman was the first to formally explain the nomenclature and share the source of his inspiration. This is a crossposting from his excellent blog post.

    Richard Hamming gave this talk in March of 1986. [1]  It's one of the best talks I've ever read and has long impacted how I think about spending my time.

    I mentioned it to a number of people this weekend who, to my surprise, had never heard of it.  So I thought I'd share it here:

    It's a pleasure to be here. I doubt if I can live up to the Introduction. The title of my talk is, ``You and Your Research.'' It is not about managing research, it is about how you individually do your research. I could give a talk on the other subject - but it's not, it's about you. I'm not talking about ordinary run-of-the-mill research; I'm talking about great research. And for the sake of describing great research I'll occasionally say Nobel-Prize type of work. It doesn't have to gain the Nobel Prize, but I mean those kinds of things which we perceive are significant things. Relativity, if you want, Shannon's information theory, any number of outstanding theories - that's the kind of thing I'm talking about.

    Now, how did I come to do this study? At Los Alamos I was brought in to run the computing machines which other people had got going, so those scientists and physicists could get back to business. I saw I was a stooge. I saw that although physically I was the same, they were different. And to put the thing bluntly, I was envious. I wanted to know why they were so different from me. I saw Feynman up close. I saw Fermi and Teller. I saw Oppenheimer. I saw Hans Bethe: he was my boss. I saw quite a few very capable people. I became very interested in the difference between those who do and those who might have done.

    When I came to Bell Labs, I came into a very productive department. Bode was the department head at the time; Shannon was there, and there were other people. I continued examining the questions, ``Why?'' and ``What is the difference?'' I continued subsequently by reading biographies, autobiographies, asking people questions such as: ``How did you come to do this?'' I tried to find out what are the differences. And that's what this talk is about.

    Now, why is this talk important? I think it is important because, as far as I know, each of you has one life to live. Even if you believe in reincarnation it doesn't do you any good from one life to the next! Why shouldn't you do significant things in this one life, however you define significant? I'm not going to define it - you know what I mean. I will talk mainly about science because that is what I have studied. But so far as I know, and I've been told by others, much of what I say applies to many fields. Outstanding work is characterized very much the same way in most fields, but I will confine myself to science.

    In order to get at you individually, I must talk in the first person. I have to get you to drop modesty and say to yourself, ``Yes, I would like to do first-class work.'' Our society frowns on people who set out to do really good work. You're not supposed to; luck is supposed to descend on you and you do great things by chance. Well, that's a kind of dumb thing to say. I say, why shouldn't you set out to do something significant. You don't have to tell other people, but shouldn't you say to yourself, ``Yes, I would like to do something significant.''

    In order to get to the second stage, I have to drop modesty and talk in the first person about what I've seen, what I've done, and what I've heard. I'm going to talk about people, some of whom you know, and I trust that when we leave, you won't quote me as saying some of the things I said.

    Let me start not logically, but psychologically. I find that the major objection is that people think great science is done by luck. It's all a matter of luck. Well, consider Einstein. Note how many different things he did that were good. Was it all luck? Wasn't it a little too repetitive? Consider Shannon. He didn't do just information theory. Several years before, he did some other good things and some which are still locked up in the security of cryptography. He did many good things.

    You see again and again, that it is more than one thing from a good person. Once in a while a person does only one thing in his whole life, and we'll talk about that later, but a lot of times there is repetition. I claim that luck will not cover everything. And I will cite Pasteur who said, ``Luck favors the prepared mind.'' And I think that says it the way I believe it. There is indeed an element of luck, and no, there isn't. The prepared mind sooner or later finds something important and does it. So yes, it is luck. The particular thing you do is luck, but that you do something is not.

    For example, when I came to Bell Labs, I shared an office for a while with Shannon. At the same time he was doing information theory, I was doing coding theory. It is suspicious that the two of us did it at the same place and at the same time - it was in the atmosphere. And you can say, ``Yes, it was luck.'' On the other hand you can say, ``But why of all the people in Bell Labs then were those the two who did it?'' Yes, it is partly luck, and partly it is the prepared mind; but `partly' is the other thing I'm going to talk about. So, although I'll come back several more times to luck, I want to dispose of this matter of luck as being the sole criterion whether you do great work or not. I claim you have some, but not total, control over it. And I will quote, finally, Newton on the matter. Newton said, ``If others would think as hard as I did, then they would get similar results.''

    One of the characteristics you see, and many people have it including great scientists, is that usually when they were young they had independent thoughts and had the courage to pursue them. For example, Einstein, somewhere around 12 or 14, asked himself the question, ``What would a light wave look like if I went with the velocity of light to look at it?'' Now he knew that electromagnetic theory says you cannot have a stationary local maximum. But if he moved along with the velocity of light, he would see a local maximum. He could see a contradiction at the age of 12, 14, or somewhere around there, that everything was not right and that the velocity of light had something peculiar. Is it luck that he finally created special relativity? Early on, he had laid down some of the pieces by thinking of the fragments. Now that's the necessary but not sufficient condition. All of these items I will talk about are both luck and not luck.

    How about having lots of `brains?' It sounds good. Most of you in this room probably have more than enough brains to do first-class work. But great work is something else than mere brains. Brains are measured in various ways. In mathematics, theoretical physics, astrophysics, typically brains correlates to a great extent with the ability to manipulate symbols. And so the typical IQ test is apt to score them fairly high. On the other hand, in other fields it is something different. For example, Bill Pfann, the fellow who did zone melting, came into my office one day. He had this idea dimly in his mind about what he wanted and he had some equations. It was pretty clear to me that this man didn't know much mathematics and he wasn't really articulate. His problem seemed interesting so I took it home and did a little work. I finally showed him how to run computers so he could compute his own answers. I gave him the power to compute. He went ahead, with negligible recognition from his own department, but ultimately he has collected all the prizes in the field. Once he got well started, his shyness, his awkwardness, his inarticulateness, fell away and he became much more productive in many other ways. Certainly he became much more articulate.

    And I can cite another person in the same way. I trust he isn't in the audience, i.e. a fellow named Clogston. I met him when I was working on a problem with John Pierce's group and I didn't think he had much. I asked my friends who had been with him at school, ``Was he like that in graduate school?'' ``Yes,'' they replied. Well I would have fired the fellow, but J. R. Pierce was smart and kept him on. Clogston finally did the Clogston cable. After that there was a steady stream of good ideas. One success brought him confidence and courage.

    One of the characteristics of successful scientists is having courage. Once you get your courage up and believe that you can do important problems, then you can. If you think you can't, almost surely you are not going to. Courage is one of the things that Shannon had supremely. You have only to think of his major theorem. He wants to create a method of coding, but he doesn't know what to do so he makes a random code. Then he is stuck. And then he asks the impossible question, ``What would the average random code do?'' He then proves that the average code is arbitrarily good, and that therefore there must be at least one good code. Who but a man of infinite courage could have dared to think those thoughts? That is the characteristic of great scientists; they have courage. They will go forward under incredible circumstances; they think and continue to think.

    Age is another factor which the physicists particularly worry about. They always are saying that you have got to do it when you are young or you will never do it. Einstein did things very early, and all the quantum mechanic fellows were disgustingly young when they did their best work. Most mathematicians, theoretical physicists, and astrophysicists do what we consider their best work when they are young. It is not that they don't do good work in their old age but what we value most is often what they did early. On the other hand, in music, politics and literature, often what we consider their best work was done late. I don't know how whatever field you are in fits this scale, but age has some effect.

    But let me say why age seems to have the effect it does. In the first place if you do some good work you will find yourself on all kinds of committees and unable to do any more work. You may find yourself as I saw Brattain when he got a Nobel Prize. The day the prize was announced we all assembled in Arnold Auditorium; all three winners got up and made speeches. The third one, Brattain, practically with tears in his eyes, said, ``I know about this Nobel-Prize effect and I am not going to let it affect me; I am going to remain good old Walter Brattain.'' Well I said to myself, ``That is nice.'' But in a few weeks I saw it was affecting him. Now he could only work on great problems.

    When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go. So that is another reason why you find that when you get early recognition it seems to sterilize you. In fact I will give you my favorite quotation of many years. The Institute for Advanced Study in Princeton, in my opinion, has ruined more good scientists than any institution has created, judged by what they did before they came and judged by what they did after. Not that they weren't good afterwards, but they were superb before they got there and were only good afterwards.

    This brings up the subject, out of order perhaps, of working conditions. What most people think are the best working conditions, are not. Very clearly they are not because people are often most productive when working conditions are bad. One of the better times of the Cambridge Physical Laboratories was when they had practically shacks - they did some of the best physics ever.

    I give you a story from my own private life. Early on it became evident to me that Bell Laboratories was not going to give me the conventional acre of programming people to program computing machines in absolute binary. It was clear they weren't going to. But that was the way everybody did it. I could go to the West Coast and get a job with the airplane companies without any trouble, but the exciting people were at Bell Labs and the fellows out there in the airplane companies were not. I thought for a long while about, ``Did I want to go or not?'' and I wondered how I could get the best of two possible worlds. I finally said to myself, ``Hamming, you think the machines can do practically everything. Why can't you make them write programs?'' What appeared at first to me as a defect forced me into automatic programming very early. What appears to be a fault, often, by a change of viewpoint, turns out to be one of the greatest assets you can have. But you are not likely to think that when you first look the thing and say, ``Gee, I'm never going to get enough programmers, so how can I ever do any great programming?''

    And there are many other stories of the same kind; Grace Hopper has similar ones. I think that if you look carefully you will see that often the great scientists, by turning the problem around a bit, changed a defect to an asset. For example, many scientists when they found they couldn't do a problem finally began to study why not. They then turned it around the other way and said, ``But of course, this is what it is'' and got an important result. So ideal working conditions are very strange. The ones you want aren't always the best ones for you.

    Now for the matter of drive. You observe that most great scientists have tremendous drive. I worked for ten years with John Tukey at Bell Labs. He had tremendous drive. One day about three or four years after I joined, I discovered that John Tukey was slightly younger than I was. John was a genius and I clearly was not. Well I went storming into Bode's office and said, ``How can anybody my age know as much as John Tukey does?'' He leaned back in his chair, put his hands behind his head, grinned slightly, and said, ``You would be surprised Hamming, how much you would know if you worked as hard as he did that many years.'' I simply slunk out of the office!

    What Bode was saying was this: ``Knowledge and productivity are like compound interest.'' Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity - it is very much like compound interest. I don't want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime. I took Bode's remark to heart; I spent a good deal more of my time for some years trying to work a bit harder and I found, in fact, I could get more work done. I don't like to say it in front of my wife, but I did sort of neglect her sometimes; I needed to study. You have to neglect things if you intend to get what you want done. There's no question about this.

    On this matter of drive Edison says, ``Genius is 99% perspiration and 1% inspiration.'' He may have been exaggerating, but the idea is that solid work, steadily applied, gets you surprisingly far. The steady application of effort with a little bit more work, intelligently applied is what does it. That's the trouble; drive, misapplied, doesn't get you anywhere. I've often wondered why so many of my good friends at Bell Labs who worked as hard or harder than I did, didn't have so much to show for it. The misapplication of effort is a very serious matter. Just hard work is not enough - it must be applied sensibly.

    There's another trait on the side which I want to talk about; that trait is ambiguity. It took me a while to discover its importance. Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you'll never notice the flaws; if you doubt too much you won't get started. It requires a lovely balance. But most great scientists are well aware of why their theories are true and they are also well aware of some slight misfits which don't quite fit and they don't forget it. Darwin writes in his autobiography that he found it necessary to write down every piece of evidence which appeared to contradict his beliefs because otherwise they would disappear from his mind. When you find apparent flaws you've got to be sensitive and keep track of those things, and keep an eye out for how they can be explained or how the theory can be changed to fit them. Those are often the great contributions. Great contributions are rarely done by adding another decimal place. It comes down to an emotional commitment. Most great scientists are completely committed to their problem. Those who don't become committed seldom produce outstanding, first-class work.

    Now again, emotional commitment is not enough. It is a necessary condition apparently. And I think I can tell you the reason why. Everybody who has studied creativity is driven finally to saying, ``creativity comes out of your subconscious.'' Somehow, suddenly, there it is. It just appears. Well, we know very little about the subconscious; but one thing you are pretty well aware of is that your dreams also come out of your subconscious. And you're aware your dreams are, to a fair extent, a reworking of the experiences of the day. If you are deeply immersed and committed to a topic, day after day after day, your subconscious has nothing to do but work on your problem. And so you wake up one morning, or on some afternoon, and there's the answer. For those who don't get committed to their current problem, the subconscious goofs off on other things and doesn't produce the big result. So the way to manage yourself is that when you have a real important problem you don't let anything else get the center of your attention - you keep your thoughts on the problem. Keep your subconscious starved so it has to work on your problem, so you can sleep peacefully and get the answer in the morning, free.

    Now Alan Chynoweth mentioned that I used to eat at the physics table. I had been eating with the mathematicians and I found out that I already knew a fair amount of mathematics; in fact, I wasn't learning much. The physics table was, as he said, an exciting place, but I think he exaggerated on how much I contributed. It was very interesting to listen to Shockley, Brattain, Bardeen, J. B. Johnson, Ken McKay and other people, and I was learning a lot. But unfortunately a Nobel Prize came, and a promotion came, and what was left was the dregs. Nobody wanted what was left. Well, there was no use eating with them!

    Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, ``Do you mind if I join you?'' They can't say no, so I started eating with them for a while. And I started asking, ``What are the important problems of your field?'' And after a week or so, ``What important problems are you working on?'' And after some more time I came in one day and said, ``If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?'' I wasn't welcomed after that; I had to find somebody else to eat with! That was in the spring.

    In the fall, Dave McCall stopped me in the hall and said, ``Hamming, that remark of yours got underneath my skin. I thought about it all summer, i.e. what were the important problems in my field. I haven't changed my research,'' he says, ``but I think it was well worthwhile.'' And I said, ``Thank you Dave,'' and went on. I noticed a couple of months later he was made the head of the department. I noticed the other day he was a Member of the National Academy of Engineering. I noticed he has succeeded. I have never heard the names of any of the other fellows at that table mentioned in science and scientific circles. They were unable to ask themselves, ``What are the important problems in my field?''

    If you do not work on an important problem, it's unlikely you'll do important work. It's perfectly obvious. Great scientists have thought through, in a careful way, a number of important problems in their field, and they keep an eye on wondering how to attack them. Let me warn you, `important problem' must be phrased carefully. The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important, it is that you have a reasonable attack. That is what makes a problem important. When I say that most scientists don't work on important problems, I mean it in that sense. The average scientist, so far as I can make out, spends almost all his time working on problems which he believes will not be important and he also doesn't believe that they will lead to important problems.

    I spoke earlier about planting acorns so that oaks will grow. You can't always know exactly where to be, but you can keep active in places where something might happen. And even if you believe that great science is a matter of luck, you can stand on a mountain top where lightning strikes; you don't have to hide in the valley where you're safe. But the average scientist does routine safe work almost all the time and so he (or she) doesn't produce much. It's that simple. If you want to do great work, you clearly must work on important problems, and you should have an idea.

    Along those lines at some urging from John Tukey and others, I finally adopted what I called ``Great Thoughts Time.'' When I went to lunch Friday noon, I would only discuss great thoughts after that. By great thoughts I mean ones like: ``What will be the role of computers in all of AT&T?'', ``How will computers change science?'' For example, I came up with the observation at that time that nine out of ten experiments were done in the lab and one in ten on the computer. I made a remark to the vice presidents one time, that it would be reversed, i.e. nine out of ten experiments would be done on the computer and one in ten in the lab. They knew I was a crazy mathematician and had no sense of reality. I knew they were wrong and they've been proved wrong while I have been proved right. They built laboratories when they didn't need them. I saw that computers were transforming science because I spent a lot of time asking ``What will be the impact of computers on science and how can I change it?'' I asked myself, ``How is it going to change Bell Labs?'' I remarked one time, in the same address, that more than one-half of the people at Bell Labs will be interacting closely with computing machines before I leave. Well, you all have terminals now. I thought hard about where was my field going, where were the opportunities, and what were the important things to do. Let me go there so there is a chance I can do important things.

    Most great scientists know many important problems. They have something between 10 and 20 important problems for which they are looking for an attack. And when they see a new idea come up, one hears them say ``Well that bears on this problem.'' They drop all the other things and get after it. Now I can tell you a horror story that was told to me but I can't vouch for the truth of it. I was sitting in an airport talking to a friend of mine from Los Alamos about how it was lucky that the fission experiment occurred over in Europe when it did because that got us working on the atomic bomb here in the US. He said ``No; at Berkeley we had gathered a bunch of data; we didn't get around to reducing it because we were building some more equipment, but if we had reduced that data we would have found fission.'' They had it in their hands and they didn't pursue it. They came in second!

    The great scientists, when an opportunity opens up, get after it and they pursue it. They drop all other things. They get rid of other things and they get after an idea because they had already thought the thing through. Their minds are prepared; they see the opportunity and they go after it. Now of course lots of times it doesn't work out, but you don't have to hit many of them to do some great science. It's kind of easy. One of the chief tricks is to live a long time!

    Another trait, it took me a while to notice. I noticed the following facts about people who work with the door open or the door closed. I notice that if you have the door to your office closed, you get more work done today and tomorrow, and you are more productive than most. But 10 years later somehow you don't quite know what problems are worth working on; all the hard work you do is sort of tangential in importance. He who works with the door open gets all kinds of interruptions, but he also occasionally gets clues as to what the world is and what might be important. Now I cannot prove the cause and effect sequence because you might say, ``The closed door is symbolic of a closed mind.'' I don't know. But I can say there is a pretty good correlation between those who work with the doors open and those who ultimately do important things, although people who work with doors closed often work harder. Somehow they seem to work on slightly the wrong thing - not much, but enough that they miss fame.

    I want to talk on another topic. It is based on the song which I think many of you know, ``It ain't what you do, it's the way that you do it.'' I'll start with an example of my own. I was conned into doing on a digital computer, in the absolute binary days, a problem which the best analog computers couldn't do. And I was getting an answer. When I thought carefully and said to myself, ``You know, Hamming, you're going to have to file a report on this military job; after you spend a lot of money you're going to have to account for it and every analog installation is going to want the report to see if they can't find flaws in it.'' I was doing the required integration by a rather crummy method, to say the least, but I was getting the answer. And I realized that in truth the problem was not just to get the answer; it was to demonstrate for the first time, and beyond question, that I could beat the analog computer on its own ground with a digital machine. I reworked the method of solution, created a theory which was nice and elegant, and changed the way we computed the answer; the results were no different. The published report had an elegant method which was later known for years as ``Hamming's Method of Integrating Differential Equations.'' It is somewhat obsolete now, but for a while it was a very good method. By changing the problem slightly, I did important work rather than trivial work.

    In the same way, when using the machine up in the attic in the early days, I was solving one problem after another after another; a fair number were successful and there were a few failures. I went home one Friday after finishing a problem, and curiously enough I wasn't happy; I was depressed. I could see life being a long sequence of one problem after another after another. After quite a while of thinking I decided, ``No, I should be in the mass production of a variable product. I should be concerned with all of next year's problems, not just the one in front of my face.'' By changing the question I still got the same kind of results or better, but I changed things and did important work. I attacked the major problem - How do I conquer machines and do all of next year's problems when I don't know what they are going to be? How do I prepare for it? How do I do this one so I'll be on top of it? How do I obey Newton's rule? He said, ``If I have seen further than others, it is because I've stood on the shoulders of giants.'' These days we stand on each other's feet!

    You should do your job in such a fashion that others can build on top of it, so they will indeed say, ``Yes, I've stood on so and so's shoulders and I saw further.'' The essence of science is cumulative. By changing a problem slightly you can often do great work rather than merely good work. Instead of attacking isolated problems, I made the resolution that I would never again solve an isolated problem except as characteristic of a class.

    Now if you are much of a mathematician you know that the effort to generalize often means that the solution is simple. Often by stopping and saying, ``This is the problem he wants but this is characteristic of so and so. Yes, I can attack the whole class with a far superior method than the particular one because I was earlier embedded in needless detail.'' The business of abstraction frequently makes things simple. Furthermore, I filed away the methods and prepared for the future problems.

    To end this part, I'll remind you, ``It is a poor workman who blames his tools - the good man gets on with the job, given what he's got, and gets the best answer he can.'' And I suggest that by altering the problem, by looking at the thing differently, you can make a great deal of difference in your final productivity because you can either do it in such a fashion that people can indeed build on what you've done, or you can do it in such a fashion that the next person has to essentially duplicate again what you've done. It isn't just a matter of the job, it's the way you write the report, the way you write the paper, the whole attitude. It's just as easy to do a broad, general job as one very special case. And it's much more satisfying and rewarding!

    I have now come down to a topic which is very distasteful; it is not sufficient to do a job, you have to sell it. `Selling' to a scientist is an awkward thing to do. It's very ugly; you shouldn't have to do it. The world is supposed to be waiting, and when you do something great, they should rush out and welcome it. But the fact is everyone is busy with their own work. You must present it so well that they will set aside what they are doing, look at what you've done, read it, and come back and say, ``Yes, that was good.'' I suggest that when you open a journal, as you turn the pages, you ask why you read some articles and not others. You had better write your report so when it is published in the Physical Review, or wherever else you want it, as the readers are turning the pages they won't just turn your pages but they will stop and read yours. If they don't stop and read it, you won't get credit.

    There are three things you have to do in selling. You have to learn to write clearly and well so that people will read it, you must learn to give reasonably formal talks, and you also must learn to give informal talks. We had a lot of so-called `back room scientists.' In a conference, they would keep quiet. Three weeks later after a decision was made they filed a report saying why you should do so and so. Well, it was too late. They would not stand up right in the middle of a hot conference, in the middle of activity, and say, ``We should do this for these reasons.'' You need to master that form of communication as well as prepared speeches.

    When I first started, I got practically physically ill while giving a speech, and I was very, very nervous. I realized I either had to learn to give speeches smoothly or I would essentially partially cripple my whole career. The first time IBM asked me to give a speech in New York one evening, I decided I was going to give a really good speech, a speech that was wanted, not a technical one but a broad one, and at the end if they liked it, I'd quietly say, ``Any time you want one I'll come in and give you one.'' As a result, I got a great deal of practice giving speeches to a limited audience and I got over being afraid. Furthermore, I could also then study what methods were effective and what were ineffective.

    While going to meetings I had already been studying why some papers are remembered and most are not. The technical person wants to give a highly limited technical talk. Most of the time the audience wants a broad general talk and wants much more survey and background than the speaker is willing to give. As a result, many talks are ineffective. The speaker names a topic and suddenly plunges into the details he's solved. Few people in the audience may follow. You should paint a general picture to say why it's important, and then slowly give a sketch of what was done. Then a larger number of people will say, ``Yes, Joe has done that,'' or ``Mary has done that; I really see where it is; yes, Mary really gave a good talk; I understand what Mary has done.'' The tendency is to give a highly restricted, safe talk; this is usually ineffective. Furthermore, many talks are filled with far too much information. So I say this idea of selling is obvious.

    Let me summarize. You've got to work on important problems. I deny that it is all luck, but I admit there is a fair element of luck. I subscribe to Pasteur's ``Luck favors the prepared mind.'' I favor heavily what I did. Friday afternoons for years - great thoughts only - means that I committed 10% of my time trying to understand the bigger problems in the field, i.e. what was and what was not important. I found in the early days I had believed `this' and yet had spent all week marching in `that' direction. It was kind of foolish. If I really believe the action is over there, why do I march in this direction? I either had to change my goal or change what I did. So I changed something I did and I marched in the direction I thought was important. It's that easy.

    Now you might tell me you haven't got control over what you have to work on. Well, when you first begin, you may not. But once you're moderately successful, there are more people asking for results than you can deliver and you have some power of choice, but not completely. I'll tell you a story about that, and it bears on the subject of educating your boss. I had a boss named Schelkunoff; he was, and still is, a very good friend of mine. Some military person came to me and demanded some answers by Friday. Well, I had already dedicated my computing resources to reducing data on the fly for a group of scientists; I was knee deep in short, small, important problems. This military person wanted me to solve his problem by the end of the day on Friday. I said, ``No, I'll give it to you Monday. I can work on it over the weekend. I'm not going to do it now.'' He goes down to my boss, Schelkunoff, and Schelkunoff says, ``You must run this for him; he's got to have it by Friday.'' I tell him, ``Why do I?''; he says, ``You have to.'' I said, ``Fine, Sergei, but you're sitting in your office Friday afternoon catching the late bus home to watch as this fellow walks out that door.'' I gave the military person the answers late Friday afternoon. I then went to Schelkunoff's office and sat down; as the man goes out I say, ``You see Schelkunoff, this fellow has nothing under his arm; but I gave him the answers.'' On Monday morning Schelkunoff called him up and said, ``Did you come in to work over the weekend?'' I could hear, as it were, a pause as the fellow ran through his mind of what was going to happen; but he knew he would have had to sign in, and he'd better not say he had when he hadn't, so he said he hadn't. Ever after that Schelkunoff said, ``You set your deadlines; you can change them.''

    One lesson was sufficient to educate my boss as to why I didn't want to do big jobs that displaced exploratory research and why I was justified in not doing crash jobs which absorb all the research computing facilities. I wanted instead to use the facilities to compute a large number of small problems. Again, in the early days, I was limited in computing capacity and it was clear, in my area, that a ``mathematician had no use for machines.'' But I needed more machine capacity. Every time I had to tell some scientist in some other area, ``No I can't; I haven't the machine capacity,'' he complained. I said ``Go tell your Vice President that Hamming needs more computing capacity.'' After a while I could see what was happening up there at the top; many people said to my Vice President, ``Your man needs more computing capacity.'' I got it!

    I also did a second thing. When I loaned what little programming power we had to help in the early days of computing, I said, ``We are not getting the recognition for our programmers that they deserve. When you publish a paper you will thank that programmer or you aren't getting any more help from me. That programmer is going to be thanked by name; she's worked hard.'' I waited a couple of years. I then went through a year of BSTJ articles and counted what fraction thanked some programmer. I took it into the boss and said, ``That's the central role computing is playing in Bell Labs; if the BSTJ is important, that's how important computing is.'' He had to give in. You can educate your bosses. It's a hard job. In this talk I'm only viewing from the bottom up; I'm not viewing from the top down. But I am telling you how you can get what you want in spite of top management. You have to sell your ideas there also.

    Well I now come down to the topic, ``Is the effort to be a great scientist worth it?'' To answer this, you must ask people. When you get beyond their modesty, most people will say, ``Yes, doing really first-class work, and knowing it, is as good as wine, women and song put together,'' or if it's a woman she says, ``It is as good as wine, men and song put together.'' And if you look at the bosses, they tend to come back or ask for reports, trying to participate in those moments of discovery. They're always in the way. So evidently those who have done it, want to do it again. But it is a limited survey. I have never dared to go out and ask those who didn't do great work how they felt about the matter. It's a biased sample, but I still think it is worth the struggle. I think it is very definitely worth the struggle to try and do first-class work because the truth is, the value is in the struggle more than it is in the result. The struggle to make something of yourself seems to be worthwhile in itself. The success and fame are sort of dividends, in my opinion.

    I've told you how to do it. It is so easy, so why do so many people, with all their talents, fail? For example, my opinion, to this day, is that there are in the mathematics department at Bell Labs quite a few people far more able and far better endowed than I, but they didn't produce as much. Some of them did produce more than I did; Shannon produced more than I did, and some others produced a lot, but I was highly productive against a lot of other fellows who were better equipped. Why is it so? What happened to them? Why do so many of the people who have great promise, fail?

    Well, one of the reasons is drive and commitment. The people who do great work with less ability but who are committed to it, get more done than those who have great skill and dabble in it, who work during the day and go home and do other things and come back and work the next day. They don't have the deep commitment that is apparently necessary for really first-class work. They turn out lots of good work, but we were talking, remember, about first-class work. There is a difference. Good people, very talented people, almost always turn out good work. We're talking about the outstanding work, the type of work that gets the Nobel Prize and gets recognition.

    The second thing is, I think, the problem of personality defects. Now I'll cite a fellow whom I met out in Irvine. He had been the head of a computing center and he was temporarily on assignment as a special assistant to the president of the university. It was obvious he had a job with a great future. He took me into his office one time and showed me his method of getting letters done and how he took care of his correspondence. He pointed out how inefficient the secretary was. He kept all his letters stacked around there; he knew where everything was. And he would, on his word processor, get the letter out. He was bragging how marvelous it was and how he could get so much more work done without the secretary's interference. Well, behind his back, I talked to the secretary. The secretary said, ``Of course I can't help him; I don't get his mail. He won't give me the stuff to log in; I don't know where he puts it on the floor. Of course I can't help him.'' So I went to him and said, ``Look, if you adopt the present method and do what you can do single-handedly, you can go just that far and no farther than you can do single-handedly. If you will learn to work with the system, you can go as far as the system will support you.'' And, he never went any further. He had his personality defect of wanting total control and was not willing to recognize that you need the support of the system.

    You find this happening again and again; good scientists will fight the system rather than learn to work with the system and take advantage of all the system has to offer. It has a lot, if you learn how to use it. It takes patience, but you can learn how to use the system pretty well, and you can learn how to get around it. After all, if you want a decision `No', you just go to your boss and get a `No' easy. If you want to do something, don't ask, do it. Present him with an accomplished fact. Don't give him a chance to tell you `No'. But if you want a `No', it's easy to get a `No'.

    Another personality defect is ego assertion and I'll speak in this case of my own experience. I came from Los Alamos and in the early days I was using a machine in New York at 590 Madison Avenue where we merely rented time. I was still dressing in western clothes, big slash pockets, a bolo and all those things. I vaguely noticed that I was not getting as good service as other people. So I set out to measure. You came in and you waited for your turn; I felt I was not getting a fair deal. I said to myself, ``Why? No Vice President at IBM said, `Give Hamming a bad time'. It is the secretaries at the bottom who are doing this. When a slot appears, they'll rush to find someone to slip in, but they go out and find somebody else. Now, why? I haven't mistreated them.'' Answer, I wasn't dressing the way they felt somebody in that situation should. It came down to just that - I wasn't dressing properly. I had to make the decision - was I going to assert my ego and dress the way I wanted to and have it steadily drain my effort from my professional life, or was I going to appear to conform better? I decided I would make an effort to appear to conform properly. The moment I did, I got much better service. And now, as an old colorful character, I get better service than other people.

    You should dress according to the expectations of the audience spoken to. If I am going to give an address at the MIT computer center, I dress with a bolo and an old corduroy jacket or something else. I know enough not to let my clothes, my appearance, my manners get in the way of what I care about. An enormous number of scientists feel they must assert their ego and do their thing their way. They have got to be able to do this, that, or the other thing, and they pay a steady price.

    John Tukey almost always dressed very casually. He would go into an important office and it would take a long time before the other fellow realized that this is a first-class man and he had better listen. For a long time John has had to overcome this kind of hostility. It's wasted effort! I didn't say you should conform; I said ``The appearance of conforming gets you a long way.'' If you choose to assert your ego in any number of ways, ``I am going to do it my way,'' you pay a small steady price throughout the whole of your professional career. And this, over a whole lifetime, adds up to an enormous amount of needless trouble.

    By taking the trouble to tell jokes to the secretaries and being a little friendly, I got superb secretarial help. For instance, one time for some idiot reason all the reproducing services at Murray Hill were tied up. Don't ask me how, but they were. I wanted something done. My secretary called up somebody at Holmdel, hopped the company car, made the hour-long trip down and got it reproduced, and then came back. It was a payoff for the times I had made an effort to cheer her up, tell her jokes and be friendly; it was that little extra work that later paid off for me. By realizing you have to use the system and studying how to get the system to do your work, you learn how to adapt the system to your desires. Or you can fight it steadily, as a small undeclared war, for the whole of your life.

    And I think John Tukey paid a terrible price needlessly. He was a genius anyhow, but I think it would have been far better, and far simpler, had he been willing to conform a little bit instead of ego asserting. He is going to dress the way he wants all of the time. It applies not only to dress but to a thousand other things; people will continue to fight the system. Not that you shouldn't occasionally!

    When they moved the library from the middle of Murray Hill to the far end, a friend of mine put in a request for a bicycle. Well, the organization was not dumb. They waited awhile and sent back a map of the grounds saying, ``Will you please indicate on this map what paths you are going to take so we can get an insurance policy covering you.'' A few more weeks went by. They then asked, ``Where are you going to store the bicycle and how will it be locked so we can do so and so.'' He finally realized that of course he was going to be red-taped to death so he gave in. He rose to be the President of Bell Laboratories.

    Barney Oliver was a good man. He wrote a letter one time to the IEEE. At that time the official shelf space at Bell Labs was a fixed size, and the IEEE Proceedings was taller than it; since you couldn't change the size of the official shelf space, he wrote this letter to the IEEE Publication person saying, ``Since so many IEEE members were at Bell Labs and since the official space was so high the journal size should be changed.'' He sent it for his boss's signature. Back came a carbon with his signature, but he still doesn't know whether the original was sent or not. I am not saying you shouldn't make gestures of reform. I am saying that my study of able people is that they don't get themselves committed to that kind of warfare. They play it a little bit and drop it and get on with their work.

    Many a second-rate fellow gets caught up in some little twitting of the system, and carries it through to warfare. He expends his energy in a foolish project. Now you are going to tell me that somebody has to change the system. I agree; somebody has to. Which do you want to be? The person who changes the system or the person who does first-class science? Which person is it that you want to be? Be clear, when you fight the system and struggle with it, what you are doing, how far to go out of amusement, and how much to waste your effort fighting the system. My advice is to let somebody else do it and you get on with becoming a first-class scientist. Very few of you have the ability to both reform the system and become a first-class scientist.

    On the other hand, we can't always give in. There are times when a certain amount of rebellion is sensible. I have observed almost all scientists enjoy a certain amount of twitting the system for the sheer love of it. What it comes down to basically is that you cannot be original in one area without having originality in others. Originality is being different. You can't be an original scientist without having some other original characteristics. But many a scientist has let his quirks in other places make him pay a far higher price than is necessary for the ego satisfaction he or she gets. I'm not against all ego assertion; I'm against some.

    Another fault is anger. Often a scientist becomes angry, and this is no way to handle things. Amusement, yes, anger, no. Anger is misdirected. You should follow and cooperate rather than struggle against the system all the time.

    Another thing you should look for is the positive side of things instead of the negative. I have already given you several examples, and there are many, many more; how, given the situation, by changing the way I looked at it, I converted what was apparently a defect to an asset. I'll give you another example. I am an egotistical person; there is no doubt about it. I knew that most people who took a sabbatical to write a book didn't finish it on time. So before I left, I told all my friends that when I come back, that book was going to be done! Yes, I would have it done - I'd have been ashamed to come back without it! I used my ego to make myself behave the way I wanted to. I bragged about something so I'd have to perform. I found out many times, like a cornered rat in a real trap, I was surprisingly capable. I have found that it paid to say, ``Oh yes, I'll get the answer for you Tuesday,'' not having any idea how to do it. By Sunday night I was thinking really hard about how I was going to deliver by Tuesday. I often put my pride on the line and sometimes I failed, but as I said, like a cornered rat I'm surprised how often I did a good job. I think you need to learn to use yourself. I think you need to know how to convert a situation from one view to another which would increase the chance of success.

    Now self-delusion in humans is very, very common. There are innumerable ways of changing a thing and kidding yourself and making it look some other way. When you ask, ``Why didn't you do such and such,'' the person has a thousand alibis. If you look at the history of science, usually these days there are 10 people right there ready, and we pay off for the person who is there first. The other nine fellows say, ``Well, I had the idea but I didn't do it and so on and so on.'' There are so many alibis. Why weren't you first? Why didn't you do it right? Don't try an alibi. Don't try and kid yourself. You can tell other people all the alibis you want. I don't mind. But to yourself try to be honest.

    If you really want to be a first-class scientist you need to know yourself, your weaknesses, your strengths, and your bad faults, like my egotism. How can you convert a fault to an asset? How can you convert a situation where you haven't got enough manpower to move into a direction when that's exactly what you need to do? I say again that I have seen, as I studied the history, the successful scientist changed the viewpoint and what was a defect became an asset.

    In summary, I claim that some of the reasons why so many people who have greatness within their grasp don't succeed are: they don't work on important problems, they don't become emotionally involved, they don't try and change what is difficult to some other situation which is easily done but is still important, and they keep giving themselves alibis why they don't. They keep saying that it is a matter of luck. I've told you how easy it is; furthermore I've told you how to reform. Therefore, go forth and become great scientists!

    DISCUSSION - QUESTIONS AND ANSWERS

    A. G. Chynoweth: Well that was 50 minutes of concentrated wisdom and observations accumulated over a fantastic career; I lost track of all the observations that were striking home. Some of them are very very timely. One was the plea for more computer capacity; I was hearing nothing but that this morning from several people, over and over again. So that was right on the mark today even though here we are 20 - 30 years after when you were making similar remarks, Dick. I can think of all sorts of lessons that all of us can draw from your talk. And for one, as I walk around the halls in the future I hope I won't see as many closed doors in Bellcore. That was one observation I thought was very intriguing.

    Thank you very, very much indeed Dick; that was a wonderful recollection. I'll now open it up for questions. I'm sure there are many people who would like to take up on some of the points that Dick was making.

    Hamming: First let me respond to Alan Chynoweth about computing. I had computing in research and for 10 years I kept telling my management, ``Get that !&@#% machine out of research. We are being forced to run problems all the time. We can't do research because we're too busy operating and running the computing machines.'' Finally the message got through. They were going to move computing out of research to someplace else. I was persona non grata to say the least and I was surprised that people didn't kick my shins because everybody was having their toy taken away from them. I went in to Ed David's office and said, ``Look Ed, you've got to give your researchers a machine. If you give them a great big machine, we'll be back in the same trouble we were before, so busy keeping it going we can't think. Give them the smallest machine you can because they are very able people. They will learn how to do things on a small machine instead of mass computing.'' As far as I'm concerned, that's how UNIX arose. We gave them a moderately small machine and they decided to make it do great things. They had to come up with a system to do it on. It is called UNIX!

    A. G. Chynoweth: I just have to pick up on that one. In our present environment, Dick, while we wrestle with some of the red tape attributed to, or required by, the regulators, there is one quote that one exasperated AVP came up with and I've used it over and over again. He growled that, ``UNIX was never a deliverable!''

    Question: What about personal stress? Does that seem to make a difference?

    Hamming: Yes, it does. If you don't get emotionally involved, it doesn't. I had incipient ulcers most of the years that I was at Bell Labs. I have since gone off to the Naval Postgraduate School and laid back somewhat, and now my health is much better. But if you want to be a great scientist you're going to have to put up with stress. You can lead a nice life; you can be a nice guy or you can be a great scientist. But nice guys finish last, as Leo Durocher said. If you want to lead a nice happy life with a lot of recreation and everything else, you'll lead a nice life.

    Question: The remarks about having courage, no one could argue with; but those of us who have gray hairs or who are well established don't have to worry too much. But what I sense among the young people these days is a real concern over the risk taking in a highly competitive environment. Do you have any words of wisdom on this?

    Hamming: I'll quote Ed David more. Ed David was concerned about the general loss of nerve in our society. It does seem to me that we've gone through various periods. Coming out of the war, coming out of Los Alamos where we built the bomb, coming out of building the radars and so on, there came into the mathematics department, and the research area, a group of people with a lot of guts. They've just seen things done; they've just won a war which was fantastic. We had reasons for having courage and therefore we did a great deal. I can't arrange that situation to do it again. I cannot blame the present generation for not having it, but I agree with what you say; I just cannot attach blame to it. It doesn't seem to me they have the desire for greatness; they lack the courage to do it. But we had, because we were in a favorable circumstance to have it; we just came through a tremendously successful war. In the war we were looking very, very bad for a long while; it was a very desperate struggle as you well know. And our success, I think, gave us courage and self confidence; that's why you see, beginning in the late forties through the fifties, a tremendous productivity at the labs which was stimulated from the earlier times. Because many of us were earlier forced to learn other things - we were forced to learn the things we didn't want to learn, we were forced to have an open door - and then we could exploit those things we learned. It is true, and I can't do anything about it; I cannot blame the present generation either. It's just a fact.

    Question: Is there something management could or should do?

    Hamming: Management can do very little. If you want to talk about managing research, that's a totally different talk. I'd take another hour doing that. This talk is about how the individual gets very successful research done in spite of anything the management does or in spite of any other opposition. And how do you do it? Just as I observe people doing it. It's just that simple and that hard!

    Question: Is brainstorming a daily process?

    Hamming: Once that was a very popular thing, but it seems not to have paid off. For myself I find it desirable to talk to other people; but a session of brainstorming is seldom worthwhile. I do go in to strictly talk to somebody and say, ``Look, I think there has to be something here. Here's what I think I see ...'' and then begin talking back and forth. But you want to pick capable people. To use another analogy, you know the idea called the `critical mass.' If you have enough stuff you have critical mass. There is also the idea I used to call `sound absorbers'. When you get too many sound absorbers, you give out an idea and they merely say, ``Yes, yes, yes.'' What you want to do is get that critical mass in action; ``Yes, that reminds me of so and so,'' or, ``Have you thought about that or this?'' When you talk to other people, you want to get rid of those sound absorbers who are nice people but merely say, ``Oh yes,'' and to find those who will stimulate you right back.

    For example, you couldn't talk to John Pierce without being stimulated very quickly. There was a group of other people I used to talk with. For example there was Ed Gilbert; I used to go down to his office regularly and ask him questions and listen and come back stimulated. I picked my people carefully with whom I did or whom I didn't brainstorm because the sound absorbers are a curse. They are just nice guys; they fill the whole space and they contribute nothing except they absorb ideas and the new ideas just die away instead of echoing on. Yes, I find it necessary to talk to people. I think people with closed doors fail to do this so they fail to get their ideas sharpened, such as ``Did you ever notice something over here?'' I never knew anything about it - I can go over and look. Somebody points the way. On my visit here, I have already found several books that I must read when I get home. I talk to people and ask questions when I think they can answer me and give me clues that I do not know about. I go out and look!

    Question: What kind of tradeoffs did you make in allocating your time for reading and writing and actually doing research?

    Hamming: I believed, in my early days, that you should spend at least as much time in the polish and presentation as you did in the original research. Now at least 50% of the time must go for the presentation. It's a big, big number.

    Question: How much effort should go into library work?

    Hamming: It depends upon the field. I will say this about it. There was a fellow at Bell Labs, a very, very smart guy. He was always in the library; he read everything. If you wanted references, you went to him and he gave you all kinds of references. But in the middle of forming these theories, I formed a proposition: there would be no effect named after him in the long run. He is now retired from Bell Labs and is an Adjunct Professor. He was very valuable; I'm not questioning that. He wrote some very good Physical Review articles; but there's no effect named after him because he read too much. If you read all the time what other people have done you will think the way they thought. If you want to think new thoughts that are different, then do what a lot of creative people do - get the problem reasonably clear and then refuse to look at any answers until you've thought the problem through carefully: how you would do it, how you could slightly change the problem to be the correct one. So yes, you need to keep up. You need to keep up more to find out what the problems are than to read to find the solutions. The reading is necessary to know what is going on and what is possible. But reading to get the solutions does not seem to be the way to do great research. So I'll give you two answers. You read; but it is not the amount, it is the way you read that counts.

    Question: How do you get your name attached to things?

    Hamming: By doing great work. I'll tell you the hamming window one. I had given Tukey a hard time, quite a few times, and I got a phone call from him from Princeton to me at Murray Hill. I knew that he was writing up power spectra and he asked me if I would mind if he called a certain window a ``Hamming window.'' And I said to him, ``Come on, John; you know perfectly well I did only a small part of the work but you also did a lot.'' He said, ``Yes, Hamming, but you contributed a lot of small things; you're entitled to some credit.'' So he called it the hamming window. Now, let me go on. I had twitted John frequently about true greatness. I said true greatness is when your name is like ampere, watt, and fourier - when it's spelled with a lower case letter. That's how the hamming window came about.

    Question: Dick, would you care to comment on the relative effectiveness between giving talks, writing papers, and writing books?

    Hamming: In the short-haul, papers are very important if you want to stimulate someone tomorrow. If you want to get recognition long-haul, it seems to me writing books is more of a contribution because most of us need orientation. In this day of practically infinite knowledge, we need orientation to find our way. Let me tell you what infinite knowledge is. From the time of Newton to now, we have come close to doubling knowledge every 17 years, more or less. And we cope with that, essentially, by specialization. In the next 340 years at that rate, there will be 20 doublings, i.e. a million, and there will be a million fields of specialty for every one field now. It isn't going to happen. The present growth of knowledge will choke itself off until we get different tools. I believe that books which try to digest, coordinate, get rid of the duplication, get rid of the less fruitful methods and present the underlying ideas clearly of what we know now, will be the things the future generations will value. Public talks are necessary; private talks are necessary; written papers are necessary. But I am inclined to believe that, in the long-haul, books which leave out what's not essential are more important than books which tell you everything because you don't want to know everything. ``I don't want to know that much about penguins'' is the usual reply. You just want to know the essence.

    Question: You mentioned the problem of the Nobel Prize and the subsequent notoriety of what was done to some of the careers. Isn't that kind of a much broader problem of fame? What can one do?

    Hamming: Some things you could do are the following. Somewhere around every seven years make a significant, if not complete, shift in your field. Thus, I shifted from numerical analysis, to hardware, to software, and so on, periodically, because you tend to use up your ideas. When you go to a new field, you have to start over as a baby. You are no longer the big mukity muk and you can start back there and you can start planting those acorns which will become the giant oaks. Shannon, I believe, ruined himself. In fact when he left Bell Labs, I said, ``That's the end of Shannon's scientific career.'' I received a lot of flak from my friends who said that Shannon was just as smart as ever. I said, ``Yes, he'll be just as smart, but that's the end of his scientific career,'' and I truly believe it was.

    You have to change. You get tired after a while; you use up your originality in one field. You need to get something nearby. I'm not saying that you shift from music to theoretical physics to English literature; I mean within your field you should shift areas so that you don't go stale. You couldn't get away with forcing a change every seven years, but if you could, I would require a condition for doing research, being that you will change your field of research every seven years with a reasonable definition of what it means, or at the end of 10 years, management has the right to compel you to change. I would insist on a change because I'm serious. What happens to the old fellows is that they get a technique going; they keep on using it. They were marching in that direction which was right then, but the world changes. There's the new direction; but the old fellows are still marching in their former direction.

    You need to get into a new field to get new viewpoints, and before you use up all the old ones. You can do something about this, but it takes effort and energy. It takes courage to say, ``Yes, I will give up my great reputation.'' For example, when error correcting codes were well launched, having these theories, I said, ``Hamming, you are going to quit reading papers in the field; you are going to ignore it completely; you are going to try and do something else other than coast on that.'' I deliberately refused to go on in that field. I wouldn't even read papers to try to force myself to have a chance to do something else. I managed myself, which is what I'm preaching in this whole talk. Knowing many of my own faults, I manage myself. I have a lot of faults, so I've got a lot of problems, i.e. a lot of possibilities of management.

    Question: Would you compare research and management?

    Hamming: If you want to be a great researcher, you won't make it being president of the company. If you want to be president of the company, that's another thing. I'm not against being president of the company. I just don't want to be. I think Ian Ross does a good job as President of Bell Labs. I'm not against it; but you have to be clear on what you want. Furthermore, when you're young, you may have picked wanting to be a great scientist, but as you live longer, you may change your mind. For instance, I went to my boss, Bode, one day and said, ``Why did you ever become department head? Why didn't you just be a good scientist?'' He said, ``Hamming, I had a vision of what mathematics should be in Bell Laboratories. And I saw if that vision was going to be realized, I had to make it happen; I had to be department head.'' When your vision of what you want to do is what you can do single-handedly, then you should pursue it. The day your vision, what you think needs to be done, is bigger than what you can do single-handedly, then you have to move toward management. And the bigger the vision is, the farther in management you have to go. If you have a vision of what the whole laboratory should be, or the whole Bell System, you have to get there to make it happen. You can't make it happen from the bottom very easily. It depends upon what goals and what desires you have. And as they change in life, you have to be prepared to change. I chose to avoid management because I preferred to do what I could do single-handedly. But that's the choice that I made, and it is biased. Each person is entitled to their choice. Keep an open mind. But when you do choose a path, for heaven's sake be aware of what you have done and the choice you have made. Don't try to do both sides.

    Question: How important is one's own expectation or how important is it to be in a group or surrounded by people who expect great work from you?

    Hamming: At Bell Labs everyone expected good work from me - it was a big help. Everybody expects you to do a good job, so you do, if you've got pride. I think it's very valuable to have first-class people around. I sought out the best people. The moment that physics table lost the best people, I left. The moment I saw that the same was true of the chemistry table, I left. I tried to go with people who had great ability so I could learn from them and who would expect great results out of me. By deliberately managing myself, I think I did much better than laissez faire.

    Question: You, at the outset of your talk, minimized or played down luck; but you seemed also to gloss over the circumstances that got you to Los Alamos, that got you to Chicago, that got you to Bell Laboratories.

    Hamming: There was some luck. On the other hand I don't know the alternate branches. Until you can say that the other branches would not have been equally or more successful, I can't say. Is it luck the particular thing you do? For example, when I met Feynman at Los Alamos, I knew he was going to get a Nobel Prize. I didn't know what for. But I knew darn well he was going to do great work. No matter what directions came up in the future, this man would do great work. And sure enough, he did do great work. It isn't that you only do a little great work in this circumstance and that was luck; there are many opportunities sooner or later. There are a whole pail full of opportunities, of which, if you're in this situation, you seize one and you're great over there instead of over here. There is an element of luck, yes and no. Luck favors a prepared mind; luck favors a prepared person. It is not guaranteed; I don't guarantee success as being absolutely certain. I'd say luck changes the odds, but there is some definite control on the part of the individual.

    Go forth, then, and do great work!

    Anudeep Yegireddi