The Geeks Daily

Thread: The Geeks Daily

  1. #1
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default The Geeks Daily

    This thread is meant for sharing interesting articles related to Technology and Geekism in general, the kind that doesn't fit in the Technology News section, the Random News section, or the OSS article thread.

    Now, the rules:
    1. Please don't copy-paste an entire article if the site's terms and conditions don't allow it. A link with a summary of the article in quotes is better.
    2. If the site doesn't have any "Terms and Conditions", or it allows the article to be republished in full (with a link back to the site), then you are free to paste the entire article.
    3. Keep in mind that a fully pasted article should go under SPOILER tags: [SPOILER][/SPOILER].
    4. Custom-written articles can be posted here too. Add a [Custom] tag to the titles of such posts.
    5. Please send trackbacks to the site whose article you are using in the post.
    6. Discussion of a posted article is allowed as long as it sticks to the topic.
    7. Off-topic posts and posts not following the corresponding site's T&C will be immediately reported to the mods.
    Last edited by sygeek; 07-06-2011 at 09:31 AM.
    [B]AMD FX 6300 | Asus M5A97 Evo R2.0 | Sapphire HD 7870 GHz Edition | Dell S2204L | WD Blue 1TB | Kingston HyperX Blu 4GB | Seasonic S12II 520W | Samsung 24x DVDRW | Corsair 200R | Dell Keyboard | Logitech MX518 | Edifier X600 | Microtek 1kVA UPS[/B]

  2. #2
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default CPU vs. The Human Brain


    Spoiler:

    The brain's waves drive computation, sort of, in a 5 million core, 9 Hz computer.

    Computer manufacturers have worked in recent years to wean us off the speed metric for their chips and systems. No longer do they scream out GHz values, but use chip brands like Atom, Core Duo, and quad core, or just give up altogether and sell on other features. They don't really have much to crow about, since chip speed increases have slowed with the increasing difficulty of cramming more elements and heat into ever smaller areas. The current state of the art is about 3 GHz (far below predictions from 2001) on four cores in one computer, meaning that computations are spread over four different processors, each running at roughly 0.33 nanoseconds per computation cycle.

    The division of CPUs into different cores hasn't been a matter of choice, and it hasn't been well supported by software, most of which continues to be conceived and written in linear fashion. Now that we typically have several things going on at once on our computers, the top-level system doles out whole programs to the different processors, and each program sends its instructions in linear order through one processor/core, in soda-straw fashion. Ever-higher clock speeds, allowing more rapid progress through the straw, still remain critical for getting more work done.

    Our brains take a rather different approach to cores, clock speeds, and parallel processing, however. They operate at variable clock speeds between 5 and 500 Hertz. No Giga here, or Mega or even Kilo. Brain waves, whose relationship to computation remains somewhat mysterious, are very slow, ranging from the delta (sleep) waves of 0-4 Hz through theta, alpha, beta, and gamma waves at 30-100+ Hz which are energetically most costly and may correlate with attention / consciousness.

    On the other hand, the brain has about 1e15 synapses, making it analogous to five million contemporary 200 million transistor chip "cores". Needless to say, the brain takes a massively parallel approach to computation. Signals run through millions of parallel nerve fibers from, say, the eye, (1.2 million in each optic nerve), through massive brain regions where each signal traverses only perhaps ten to twenty nerves in any serial path, while branching out in millions of directions as the data is sliced, diced, and re-assembled into vision. If you are interested in visual pathways, I would recommend Christof Koch's Quest for Consciousness, whose treatment of visual pathways is better than its treatment of other topics.

    Unlike transistors, neurons are intrinsically rhythmic to various degrees due to their ion channel complements that govern firing and refractory/recovery times. So external "clocking" is not always needed to make them run, though the present articles deal with one such case. Neurons can spontaneously generate synchrony in large numbers due to their intrinsic rhythmicity.

    Nor are neurons passive input-output integrators of whatever hits their dendrites, as early theories had them. Instead, they spontaneously generate cycles and noise, which enhances their sensitivity to external signals, and their ability to act collectively. They are also subject to many other influences like hormones and local non-neural glial cells. A great deal of integration happens at the synapse and regional multi-synapse levels, long before the cell body or axon is activated. This is why the synapse count is a better analog to transistor counts on chips than the neuron count. If you are interested in the topics of noise and rhythmicity, I would recommend the outstanding and advanced book by Gyorgy Buzsaki, Rhythms of the Brain. Without buying a book, you can read Buzsaki's take on consciousness.

    Two recent articles (Brandon et al., Koenig et al.) provide a small advance in this field of figuring out how brain rhythms connect with computation. Two groups seem to have had the same idea and did very similar experiments to show that a specific type of spatial computation in a brain area called the medial entorhinal cortex (mEC) near the hippocampus depends on theta rhythm clocking from a loosely connected area called the medial septum (MS). (In-depth essay on alcohol, blackouts, memory formation, the medial septum, and hippocampus, with a helpful anatomical drawing).

    Damage to the MS (situated just below the corpus callosum that connects the two brain hemispheres) was known to have a variety of effects on functions not located in the MS itself but in the hippocampus and mEC, like loss of spatial memory, slowed learning of simple aversive associations, and altered patterns of food and water intake.

    The hippocampus and allied areas like the mEC are among the best-investigated parts of the brain, along with the visual system. They mediate most short-term memory, especially spatial memory (e.g., rats running in mazes). The spatial system as understood so far has several types of cells:

    Head direction cells, which know which way the head is pointed (some of them fire when the head points at one angle, others fire at other angles).

    Grid cells, which are sensitive to an abstract grid in space covering the ambient environment. Some of these cells fire when the rat is on one of the grid boundaries. So we literally have a latitude/longitude-style map in our heads, which may be why map-making comes so naturally to humans.

    Border cells, which fire when the rat is close to a wall.

    Place cells, which respond to specific locations in the ambient space- not periodically like grid cells, but typically to one place only.

    Spatial view cells, which fire when the rat is looking at a particular location, rather than when it is in that location. They also respond, as do the other cells above, when a location is being recalled rather than experienced.

    Clearly, once these cells all network together, a rather detailed self-orientation system is possible, based on high-level input from various senses (vestibular, whiskers, vision, touch). The role of rhythm is complicated in this system. For instance, the phase relation of place cell firing versus the underlying theta rhythm, (leading or following it, in a sort of syncopation), indicates closely where the animal is within the place cell's region as movement occurs. Upon entry, firing begins at the peak of the theta wave, but then precesses to the trough of the theta wave as the animal reaches the exit. Combined over many adjacent and overlapping place fields, this could conceptually provide very high precision to the animal's sense of position.


    One rat's repeated tracks in a closed maze, mapped versus firing patterns of several of its place cells, each given a different color.

    We are eavesdropping here on the unconscious processes of an animal, which it could not itself really articulate even if it wished and had language to do so. The grid and place fields are not conscious at all, but enormously intricate mechanisms that underlie implicit mapping. The animal has a "sense" of its position, (projecting a bit from our own experience), which is critical to many of its further decisions, but the details don't necessarily reach consciousness.

    The current papers deal not with place cells, which still fire in a place-specific way without the theta rhythm, but with grid cells, whose "gridness" appears to depend strongly on the theta rhythm. The real-life fields of rat grid cells have a honeycomb-like hexagonal shape with diameters ranging from 40 to 90 cm, ordered in systematic fashion from top to bottom within the mEC anatomy. The theta rhythm frequency they respond to also varies along the same axis, from 10 to 4 Hz. These values stretch and vary with the environment the animal finds itself in.


    Field size of grid cells, plotted against anatomical depth in the mEC.

    The current papers ask a simple question: do the grid cells of the mEC depend on the theta rhythm supplied from the MS, as has long been suspected from work with mEC lesions, or do they work independently and generate their own rhythm(s)?

    This was investigated by the expedient of injecting anaesthetics into the MS to temporarily stop its theta wave generation, and then polling electrodes stuck into the mEC for their grid firing characteristics as the rats were freely moving around. The grid cells still fired, but lost their spatial coherence, firing without regard to where the rat was or where it was going physically (see bottom trajectory maps). Spatial mapping was lost when the clock-like rhythm was lost.


    One experimental sequence. Top is the schematic of what was done. Rate map shows the firing rate of the target grid cells in a sampled 3cm square, with m=mean rate, and p=peak rate. Spatial autocorrelation shows how spatially periodic the rate map data is, and at what interval. Gridness is an abstract metric of how spatially periodic the cells fire. Trajectory shows the rat's physical paths during free behavior, overlaid with the grid cell firing data.

    "These data support the hypothesized role of theta rhythm oscillations in the generation of grid cell spatial periodicity or at least a role of MS input. The loss of grid cell spatial periodicity could contribute to the spatial memory impairments caused by lesions or inactivation of the MS."

    This is somewhat reminiscent of an artificial computer system, where computation ceases (here it becomes chaotic) when clocking ceases. Brain systems are clearly much more robust, breaking down more gracefully and not being as heavily dependent on clocking of this kind, not to mention being capable of generating most rhythms endogenously. But a similar phenomenon happens more generally, of course, during anesthesia, where the controlled long-range chaos of the gamma oscillation ceases along with attention and consciousness.

    It might be worth adding that brain waves have no particular connection with rhythmic sensory inputs like sound waves, some of which come in the same frequency range, at least at the very low end. The transduction of sound through the cochlea into neural impulses encodes them in a much more sophisticated way than simply reproducing their frequency in electrical form, and leads to wonders of computational processing such as perfect pitch, speech interpretation, and echolocation.

    Clearly, these are still early days in the effort to know how computation takes place in the brain. There is a highly mysterious bundling of widely varying timing/clocking rhythms with messy anatomy and complex content flowing through. But we also understand a lot- far more with each successive decade of work and with advancing technologies. For a few systems, (vision, position, some forms of emotion), we can track much of the circuitry from sensation to high-level processing, such as the level of face recognition. Consciousness remains unexplained, but scientists are definitely knocking at the door.


    Spoiler:

    You'd think it'd be easy to reboot a PC, wouldn't you? But then you'd also think that it'd be straightforward to convince people that at least making some effort to be nice to each other would be a mutually beneficial proposal, and look how well that's worked for us.

    Linux has a bunch of different ways to reset an x86. Some of them are 32-bit only and so I'm just going to ignore them because honestly just what are you doing with your life. Also, they're horrible. So, that leaves us with five of them.

    • kbd - reboot via the keyboard controller. The original IBM PC had the CPU reset line tied to the keyboard controller. Writing the appropriate magic value pulses the line and the machine resets. This is all very straightforward, except for the fact that modern machines don't have keyboard controllers (they're actually part of the embedded controller) and even more modern machines don't even pretend to have a keyboard controller. Now, embedded controllers run software. And, as we all know, software is dreadful. But, worse, the software on the embedded controller has been written by BIOS authors. So clearly any pretence that this ever works is some kind of elaborate fiction. Some machines are very picky about hardware being in the exact state that Windows would program. Some machines work 9 times out of 10 and then lock up due to some odd timing issue. And others simply don't work at all. Hurrah!
    • triple - attempt to generate a triple fault. This is done by loading an empty interrupt descriptor table and then calling int(3). The interrupt fails (there's no IDT), the fault handler fails (there's no IDT) and the CPU enters a condition which should, in theory, then trigger a reset. Except there doesn't seem to be a requirement that this happen and it just doesn't work on a bunch of machines.
    • pci - not actually pci. Traditional PCI config space access is achieved by writing a 32 bit value to io port 0xcf8 to identify the bus, device, function and config register. Port 0xcfc then contains the register in question. But if you write the appropriate pair of magic values to 0xcf9, the machine will reboot. Spectacular! And not standardised in any way (certainly not part of the PCI spec), so different chipsets may have different requirements. Booo.
    • efi - EFI runtime services provide an entry point to reboot the machine. It usually even works! As long as EFI runtime services are working at all, which may be a stretch.
    • acpi - Recent versions of the ACPI spec let you provide an address (typically memory or system IO space) and a value to write there. The idea is that writing the value to the address resets the system. It turns out that doing so often fails. It's also impossible to represent the PCI reboot method via ACPI, because the PCI reboot method requires a pair of values and ACPI only gives you one.



    Now, I'll admit that this all sounds pretty depressing. But people clearly sell computers with the expectation that they'll reboot correctly, so what's going on here?

    A while back I did some tests with Windows running on top of qemu. This is a great way to evaluate OS behaviour, because you've got complete control of what's handed to the OS and what the OS tries to do to the hardware. And what I discovered was a little surprising. In the absence of an ACPI reboot vector, Windows will hit the keyboard controller, wait a while, hit it again and then give up. If an ACPI reboot vector is present, Windows will poke it, try the keyboard controller, poke the ACPI vector again and try the keyboard controller one more time.

    This turns out to be important. The first thing it means is that it generates two writes to the ACPI reboot vector. The second is that it leaves a gap between them while it's fiddling with the keyboard controller. And, shockingly, it turns out that on most systems the ACPI reboot vector points at 0xcf9 in system IO space. Even though most implementations nominally require two different values be written, it seems that this isn't a strict requirement and the ACPI method works.

    3.0 will ship with this behaviour by default. It makes various machines work (some Apples, for instance), improves things on some others (some Thinkpads seem to sit around for extended periods of time otherwise) and hopefully avoids the need to add any more machine-specific quirks to the reboot code. There's still some divergence between us and Windows (mostly in how often we write to the keyboard controller), which can be cleaned up if it turns out to make a difference anywhere.

    Now. Back to EFI bugs.
    Last edited by sygeek; 04-06-2011 at 06:18 AM.

  3. #3
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    Ten Oddities And Secrets About JavaScript
    Visit link for full article

    JavaScript. At once bizarre and yet beautiful, it is surely the programming language that Pablo Picasso would have invented. Null is apparently an object, an empty array is apparently equal to false, and functions are bandied around as though they were tennis balls.

    This article is aimed at intermediate developers who are curious about more advanced JavaScript. It is a collection of JavaScript’s oddities and well-kept secrets. Some sections will hopefully give you insight into how these curiosities can be useful to your code, while other sections are pure WTF material. So, let’s get started.

    Spoiler:
    1. Null is an Object
    2. NaN is a Number
    3. An Array With No Keys == False (About Truthy and Falsy)
    4. replace() Can Accept a Callback Function
    5. Regular Expressions: More Than Just Match and Replace
    6. You Can Fake Scope
    7. Functions Can Execute Themselves
    8. Firefox Reads and Returns Colors in RGB, Not Hex
    9. 0.1 + 0.2 !== 0.3
    10. Undefined Can Be Defined
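    A few of these are easy to check in under a minute. Here's a quick console sketch (my own illustration, not code from the linked article) covering items 1 to 4 and 7; paste it into a browser console or Node.js and the values noted in the comments come back:

    [CODE]
    // 1. Null is an Object
    console.log(typeof null);                 // "object"

    // 2. NaN is a Number (and, oddly, not equal to itself)
    console.log(typeof NaN);                  // "number"
    console.log(NaN === NaN);                 // false

    // 3. An empty array is loosely equal to false
    console.log([] == false);                 // true

    // 4. replace() can accept a callback function
    console.log("10 bottles".replace(/\d+/, function (match) {
        return match - 1;                     // the matched string "10" is coerced to a number
    }));                                      // "9 bottles"

    // 7. Functions can execute themselves (an immediately invoked function expression)
    (function () {
        console.log("self-executed");
    })();
    [/CODE]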

  4. #4
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    By James Somers

    Spoiler:


    When Colin Hughes was about eleven years old his parents brought home a rather strange toy. It wasn't colorful or cartoonish; it didn't seem to have any lasers or wheels or flashing lights; the box it came in was decorated, not with the bust of a supervillain or gleaming protagonist, but bulleted text and a picture of a QWERTY keyboard. It called itself the "ORIC-1 Micro Computer." The package included two cassette tapes, a few cords and a 130-page programming manual.

    On the whole it looked like a pretty crappy gift for a young boy. But his parents insisted he take it for a spin, not least because they had just bought the thing for more than £129. And so he did. And so, he says, "I was sucked into a hole from which I would never escape."

    It's not hard to see why. Although this was 1983, and the ORIC-1 had about the same raw computing power as a modern alarm clock, there was something oddly compelling about it. When you turned it on all you saw was the word "Ready," and beneath that, a blinking cursor. It was an open invitation: type something, see what happens.

    In less than an hour, the ORIC-1 manual took you from printing the word "hello" to writing short programs in BASIC -- the Beginner's All-Purpose Symbolic Instruction Code -- that played digital music and drew wildly interesting pictures on the screen. Just when you got the urge to try something more complicated, the manual showed you how.

    In a way, the ORIC-1 was so mesmerizing because it stripped computing down to its most basic form: you typed some instructions; it did something cool. This was the computer's essential magic laid bare. Somehow ten or twenty lines of code became shapes and sounds; somehow the machine breathed life into a block of text.

    No wonder Colin got hooked. The ORIC-1 wasn't really a toy, but a toy maker. All it asked for was a special kind of blueprint.

    Once he learned the language, it wasn't long before he was writing his own simple computer games, and, soon after, teaching himself trigonometry, calculus and Newtonian mechanics to make them better. He learned how to model gravity, friction and viscosity. He learned how to make intelligent enemies.

    More than all that, though, he learned how to teach. Without quite knowing it, Colin had absorbed from his early days with the ORIC-1 and other such microcomputers a sense for how the right mix of accessibility and complexity, of constraints and open-endedness, could take a student from total ignorance to near mastery quicker than anyone -- including his own teachers -- thought possible.

    It was a sense that would come in handy, years later, when he gave birth to Project Euler, a peculiar website that has trained tens of thousands of new programmers, and that is in its own modest way the emblem of a nascent revolution in education.


    * * *

    Sometime between middle and high school, in the early 2000s, I got a hankering to write code. It was very much a "monkey see, monkey do" sort of impulse. I had been watching a lot of TechTV -- an obscure but much-loved cable channel focused on computing, gadgets, gaming and the Web -- and Hackers, the 1995 cult classic starring Angelina Jolie in which teenaged computer whizzes, accused of cybercrimes they didn't commit, have to hack their way to the truth.

    I wanted in. So I did what you might expect an over-enthusiastic suburban nitwit to do, and asked my mom to drive me to the mall to buy Ivor Horton's 1,181-page, 4.6-pound Beginning Visual C++ 6. I imagined myself working montage-like through the book, smoothly accruing expertise one chapter at a time.

    What happened instead is that I burned out after a week. The text itself was dense and unsmiling; the exercises were difficult. It was quite possibly the least fun I've ever had with a book, or, for that matter, with anything at all. I dropped it as quickly as I had picked it up.

    Remarkably I went through this cycle several times: I saw people programming and thought it looked cool, resolved myself to learn, sought out a book and crashed the moment it got hard.

    For a while I thought I didn't have the right kind of brain for programming. Maybe I needed to be better at math. Maybe I needed to be smarter.

    But it turns out that the people trying to teach me were just doing a bad job. Those books that dragged me through a series of structured principles were just bad books. I should have ignored them. I should have just played.

    Nobody misses that fact more egregiously than the American College Board, the folks responsible for setting the AP Computer Science high school curriculum. The AP curriculum ought to be a model for how to teach people to program. Instead it's an example of how something intrinsically amusing can be made into a lifeless slog.


    I imagine that the College Board approached the problem from the top down. I imagine a group of people sat in a room somewhere and asked themselves, "What should students know by the time they finish this course?"; listed some concepts, vocabulary terms, snippets of code and provisional test questions; arranged them into "modules," swaths of exposition followed by exercises; then handed off the course, ready-made, to teachers who had no choice but to follow it to the letter.

    Whatever the process, the product is a nightmare described eloquently by Paul Lockhart, a high school mathematics teacher, in his short booklet, A Mathematician's Lament, about the sorry state of high school mathematics. His argument applies almost beat for beat to computer programming.

    Lockhart illustrates our system's sickness by imagining a fun problem, then showing how it might be gutted by educators trying to "cover" more "material."

    Take a look at this picture:

    It's sort of neat to wonder, How much of the box does the triangle take up? Two-thirds, maybe? Take a moment and try to figure it out.

    If you're having trouble, it could be because you don't have much training in real math, that is, in solving open-ended problems about simple shapes and objects. It's hard work. But it's also kind of fun -- it requires patience, creativity, an insight here and there. It feels more like working on a puzzle than one of those tedious drills at the back of a textbook.

    If you struggle for long enough you might strike upon the rather clever idea of chopping your rectangle into two pieces like so:


    Now you have two rectangles, each cut diagonally in half by a leg of the triangle. So there is exactly as much space inside the triangle as outside, which means the triangle must take up exactly half the box!
    This is what a piece of mathematics looks and feels like. That little narrative is an example of the mathematician's art: asking simple and elegant questions about our imaginary creations, and crafting satisfying and beautiful explanations. There is really nothing else quite like this realm of pure idea; it's fascinating, it's fun, and it's free!
    But this is not what math feels like in school. The creative process is inverted, vitiated:
    This is why it is so heartbreaking to see what is being done to mathematics in school. This rich and fascinating adventure of the imagination has been reduced to a sterile set of "facts" to be memorized and procedures to be followed. In place of a simple and natural question about shapes, and a creative and rewarding process of invention and discovery, students are treated to this:


    "The area of a triangle is equal to one-half its base times its height." Students are asked to memorize this formula and then "apply" it over and over in the "exercises." Gone is the thrill, the joy, even the pain and frustration of the creative act. There is not even a problem anymore. The question has been asked and answered at the same time -- there is nothing left for the student to do.
    * * *
    My struggle to become a hacker finally saw a breakthrough late in my freshman year of college, when I stumbled on a simple question:
    If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.

    Find the sum of all the multiples of 3 or 5 below 1000.
    This was the puzzle that turned me into a programmer. This was Project Euler problem #1, written in 2001 by a then much older Colin Hughes, that student of the ORIC-1 who had gone on to become a math teacher at a small British grammar school and, not long after, the unseen professor to tens of thousands of fledglings like myself.

    The problem itself is a lot like Lockhart's triangle question -- simple enough to entice the freshest beginner, sufficiently complicated to require some thought.

    What's especially neat about it is that someone who has never programmed -- someone who doesn't even know what a program is -- can learn to write code that solves this problem in less than three hours. I've seen it happen. All it takes is a little hunger. You just have to want the answer.

    That's the pedagogical ballgame: get your student to want to find something out. All that's left after that is to make yourself available for hints and questions. "That student is taught the best who is told the least."

    It's like sitting a kid down at the ORIC-1. Kids are naturally curious. They love blank slates: a sandbox, a bag of LEGOs. Once you show them a little of what the machine can do they'll clamor for more. They'll want to know how to make that circle a little smaller or how to make that song go a little faster. They'll imagine a game in their head and then relentlessly fight to build it.

    Along the way, of course, they'll start to pick up all the concepts you wanted to teach them in the first place. And those concepts will stick because they learned them not in a vacuum, but in the service of a problem they were itching to solve.

    Project Euler, named for the Swiss mathematician Leonhard Euler, is popular (more than 150,000 users have submitted 2,630,835 solutions) precisely because Colin Hughes -- and later, a team of eight or nine hand-picked helpers -- crafted problems that lots of people get the itch to solve. And it's an effective teacher because those problems are arranged like the programs in the ORIC-1's manual, in what Hughes calls an "inductive chain":

    The problems range in difficulty and for many the experience is inductive chain learning. That is, by solving one problem it will expose you to a new concept that allows you to undertake a previously inaccessible problem. So the determined participant will slowly but surely work his/her way through every problem.

    This is an idea that's long been familiar to video game designers, who know that players have the most fun when they're pushed always to the edge of their ability. The trick is to craft a ladder of increasingly difficult levels, each one building on the last. New skills are introduced with an easier version of a challenge -- a quick demonstration that's hard to screw up -- and certified with a harder version, the idea being to only let players move on when they've shown that they're ready. The result is a gradual ratcheting up of the learning curve.

    Project Euler is engaging in part because it's set up like a video game, with 340 fun, very carefully ordered problems. Each has its own page, like this one that asks you to discover the three most popular squares in a game of Monopoly played with 4-sided (instead of 6-sided) dice. At the bottom of the puzzle description is a box where you can enter your answer, usually just a whole number. The only "rule" is that the program you use to solve the problem should take no more than one minute of computer time to run.

    On top of this there is one brilliant feature: once you get the right answer you're given access to a forum where successful solvers share their approaches. It's the ideal time to pick up new ideas -- after you've wrapped your head around a problem enough to solve it.

    This is also why a lot of experienced programmers use Project Euler to learn a new language. Each problem's forum is a kind of Rosetta stone. For a single simple problem you might find annotated solutions in Python, C, Assembler, BASIC, Ruby, Java, J and FORTRAN.

    Even if you're not a programmer, it's worth solving a Project Euler problem just to see what happens in these forums. What you'll find there is something that educators, technologists and journalists have been talking about for decades. And for nine years it's been quietly thriving on this site. It's the global, distributed classroom, a nurturing community of self-motivated learners -- old, young, from more than two hundred countries -- all sharing in the pleasure of finding things out.

    * * *

    It's tempting to generalize: If programming is best learned in this playful, bottom-up way, why not everything else? Could there be a Project Euler for English or Biology?

    Maybe. But I think it helps to recognize that programming is actually a very unusual activity. Two features in particular stick out.

    The first is that it's naturally addictive. Computers are really fast; even in the '80s they were really fast. What that means is there is almost no time between changing your program and seeing the results. That short feedback loop is mentally very powerful. Every few minutes you get a little payoff -- perhaps a small hit of dopamine -- as you hack and tweak, hack and tweak, and see that your program is a little bit better, a little bit closer to what you had in mind.

    It's important because learning is all about solving hard problems, and solving hard problems is all about not giving up. So a machine that triggers hours-long bouts of frantic obsessive excitement is a pretty nifty learning tool.

    The second feature, by contrast, is something that at first glance looks totally immaterial. It's the simple fact that code is text.

    Let's say that your sink is broken, maybe clogged, and you're feeling bold -- instead of calling a plumber you decide to fix it yourself. It would be nice if you could take a picture of your pipes, plug it into Google, and instantly find a page where five or six other people explained in detail how they dealt with the same problem. It would be especially nice if once you found a solution you liked, you could somehow immediately apply it to your sink.

    Unfortunately that's not going to happen. You can't just copy and paste a Bob Vila video to fix your garage door.

    But the really crazy thing is that this is what programmers do all day, and the reason they can do it is because code is text.

    I think that goes a long way toward explaining why so many programmers are self-taught. Sharing solutions to programming problems is easy, perhaps easier than sharing solutions to anything else, because the medium of information exchange -- text -- is the medium of action. Code is its own description. There's no translation involved in making it go.

    Programmers take advantage of that fact every day. The Web is teeming with code because code is text and text is cheap, portable and searchable. Copying is encouraged, not frowned upon. The neophyte programmer never has to learn alone.

    * * *

    Garry Kasparov, a chess grandmaster who was famously bested by IBM's Deep Blue supercomputer, notes how machines have changed the way the game is learned:
    There have been many unintended consequences, both positive and negative, of the rapid proliferation of powerful chess software. Kids love computers and take to them naturally, so it's no surprise that the same is true of the combination of chess and computers. With the introduction of super-powerful software it became possible for a youngster to have a top-level opponent at home instead of needing a professional trainer from an early age. Countries with little by way of chess tradition and few available coaches can now produce prodigies.
    A student can now download a free program that plays better than any living human. He can use it as a sparring partner, a coach, an encyclopedia of important games and openings, or a highly technical analyst of individual positions. He can become an expert without ever leaving the house.

    Take that thought to its logical end. Imagine a future in which the best way to learn how to do something -- how to write prose, how to solve differential equations, how to fly a plane -- is to download software, not unlike today's chess engines, that takes you from zero to sixty by way of a delightfully addictive inductive chain.

    If the idea sounds far-fetched, consider that I was taught to program by a program whose programmer, more than twenty-five years earlier, was taught to program by a program.
    Last edited by sygeek; 04-06-2011 at 06:17 AM.

  5. #5
    Your Ad here nisargshah95's Avatar
    Join Date
    Feb 2010
    Location
    Goa, India
    Posts
    394

    Thumbs up Re: The Geeks Daily

    Quote Originally Posted by SyGeek View Post
    0.1 + 0.2 !== 0.3
    For those who want to know why it's like this, go to 14. Floating Point Arithmetic: Issues and Limitations — Python v2.7.1 documentation
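    As a quick illustration (my own sketch, not taken from the linked Python document, though the cause it describes is the same), here's the effect and the usual workaround in a JavaScript console:

    [CODE]
    // 0.1 and 0.2 have no exact binary floating-point representation,
    // so their sum is not exactly 0.3.
    console.log(0.1 + 0.2);                   // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);           // false

    // Usual workaround: compare within a small tolerance instead of testing exact equality.
    function nearlyEqual(a, b, eps) {
        return Math.abs(a - b) < (eps || 1e-9);
    }
    console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
    [/CODE]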

    Quote Originally Posted by SyGeek View Post
    By James Somers

    Great article, buddy. Keep posting! I guess we should start a thread where we discuss Euler's problems. What say?
    Last edited by nisargshah95; 04-06-2011 at 02:01 PM.
    "The nature of the Internet and the importance of net neutrality is that innovation can come from everyone."
    System: HP 15-r022TX { Intel i5 4210U || 8GB RAM || NVIDIA GeForce 820M || Ubuntu 14.04 Trusty Tahr LTS 64-bit (Primary) + Windows 8.1 Pro 64-bit || 1TB HDD + 1TB Seagate external HDD }
    Twitter - https://twitter.com/nisargshah95

  6. #6
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    Great article, buddy. Keep posting! I guess we should start a thread where we discuss Euler's problems. What say?
    Sure, but no one seems interested in it, so I didn't bother creating one. Also, the Project Euler forums already have a section dedicated to this, so it doesn't make much sense unless you guys want a more familiar community to discuss it in.
    Last edited by sygeek; 04-06-2011 at 03:06 PM.

  7. #7
    Your Ad here nisargshah95's Avatar
    Join Date
    Feb 2010
    Location
    Goa, India
    Posts
    394

    Thumbs up Re: The Geeks Daily

    Quote Originally Posted by SyGeek View Post
    Sure, but no one seems interested in it, so I didn't bother creating one. Also, the Project Euler forums already have a section dedicated to this, so it doesn't make much sense unless you guys want a more familiar community to discuss it in.
    Oh. Anyway, don't stop posting the articles. They're good.

    BTW, yay! I solved the first problem: "If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000." Did it using JavaScript (and the Python console for calculations).
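    For anyone who wants to try it, a minimal brute-force sketch in JavaScript (illustrative only, not the exact code used above):

    [CODE]
    // Project Euler problem 1: sum every natural number below 1000
    // that is a multiple of 3 or of 5.
    var sum = 0;
    for (var i = 1; i < 1000; i++) {
        if (i % 3 === 0 || i % 5 === 0) {
            sum += i;
        }
    }
    console.log(sum); // 233168
    [/CODE]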
    Last edited by nisargshah95; 04-06-2011 at 06:46 PM.
    "The nature of the Internet and the importance of net neutrality is that innovation can come from everyone."
    System: HP 15-r022TX { Intel i5 4210U || 8GB RAM || NVIDIA GeForce 820M || Ubuntu 14.04 Trusty Tahr LTS 64-bit (Primary) + Windows 8.1 Pro 64-bit || 1TB HDD + 1TB Seagate external HDD }
    Twitter - https://twitter.com/nisargshah95

  8. #8
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Hate Java? You’re fighting the wrong battle.


    Spoiler:
    One of the most interesting trends I've seen lately is the unpopularity of Java around blogs, DZone, and elsewhere. It seems some people are even offended, some on a personal level, by the suggestion that Java is superior in any way to their favorite web 2.0 language.

    Java has been widely successful for a number of reasons:
    • It’s widely accepted in the established companies.
    • It’s one of the fastest languages.
    • It’s one of the most secure languages.
    • Synchronization primitives are built into the language.
    • It’s platform independent.
    • Hotspot is open source.
    • Thousands of vendors exist for a multitude of Java products.
    • Thousands of open source libraries exist for Java.
    • Community governance via the JCP (pre-Oracle).

    This is quite a resume for any language, and it shows: Java has enjoyed a long streak as one of the most popular languages around.
    So why, in late 2010 and 2011, is Java suddenly the hated demon it is?
    • It's popular to hate Java.
    • C-like syntax is no longer popular.
    • Hate for Oracle is being leveraged to promote individual interests.
    • People have been exposed to really bad code, that’s been written in Java.
    • … insert next hundred reasons here.

    Java, the actual language and API, does have quite a few real problems… too many to list here (a mix of primitive and object types, an abundance of abandoned APIs, inconsistent use of checked exceptions). But I'm offering an olive branch… Let's discuss the real problem and not throw the baby out with the bath water.

    So what is the real problem in this industry? Java, with its faults, has completely conquered web application programming. On the sidelines, charging hard, new languages are being invented at a mind-blowing rate, also aiming to conquer web application programming. The two are pitted against each other, and we're left with what looks like a bunch of preppy mall kids battling for street territory by break dancing. And while everyone is bickering about whether PHP or Rails 3.1 runs faster and can serve more simultaneous requests, there lurks a silent elephant in the room, laughing quietly as we duke it out in childish arguments over syntax and runtimes.

    Tell me, what do the following have in common?
    • Paying with a credit card.
    • Going to the emergency room.
    • Adjusting your 401k.
    • Using your insurance card at the dentist.
    • Shopping around for the best car insurance.
    • A BNSF train pulling a Union Pacific coal car.
    • Transferring money between banks.
    • Filling a prescription.

    All the above industries are billion dollar players in our economy. All of the above industries write new COBOL and mainframe assembler programs. I’m not making this up, I work in the last industry, and I’ve interviewed and interned in the others.

    For god’s sake, people: COBOL, invented in 1959, is still being written today, for real! We’re not talking about maintaining a few lines here and there; we’re talking thousands of new lines, every day, to implement new functionality and new requirements. These industries haven’t even caught word that the breeze has shifted to the cloud. These industries are essential; they form the building blocks of our economy. Despite this, they do not innovate, and they carry massive expenses with their legacy technology. The costs of running these businesses are enormous, and a good percentage of those are IT costs.

    How expensive? Let’s talk about mainframe licensing, for instance. Let’s say you buy the Enterprise version of MongoDB and put it on a box. You then proceed to peg out the CPU doing transaction after transaction to the database… The next week, you go on vacation and leave MongoDB running without doing a thing. How much did MongoDB cost in both weeks? The same.

    Mainframe software is licensed much differently. Let’s say you buy your mainframe for a couple million and buy a database product for it. You then spend all week pegging the CPU(s) with database requests. You check your mail, and you now have a million-dollar bill from the database vendor. Wait, I bought the hardware, why am I paying another bill? The software on a mainframe is often billed by usage, or by how many CPU cycles you spend using it. If you spend 2,000,000 CPU cycles running the database, you will end up owing the vendor $2 million. Bizarre? Absolutely!
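    To make the contrast concrete, here is a tiny Python sketch of the two billing models described above; the flat license price is a made-up figure for illustration, and the per-cycle rate simply mirrors the 2,000,000-cycles-for-$2-million example, not any real vendor's pricing:

    Code:
# Toy comparison of flat licensing vs. mainframe-style usage billing (hypothetical numbers).

def flat_license_cost(cycles_used, license_price=40_000):
    """Conventional license: the price is fixed no matter how hard the software runs."""
    return license_price          # made-up figure; cycles_used deliberately ignored

def usage_billed_cost(cycles_used, rate_per_cycle=1.0):
    """Usage billing: every CPU cycle spent inside the product is charged."""
    return cycles_used * rate_per_cycle

print(flat_license_cost(2_000_000), flat_license_cost(0))   # same bill, busy week or idle week
print(usage_billed_cost(2_000_000), usage_billed_cost(0))   # 2000000.0 vs 0.0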

    These invisible industries you utilize every day are full of bloat, legacy systems, and high costs. Java set out to conquer many fronts, and while it thoroughly took over the web application arena, it fizzled out in centralized computing. These industries are ripe for reducing costs and becoming more efficient, but honestly, we’re embarrassing ourselves. These industries stick with their legacy systems because they don’t think Ruby, Python, Scala, Lua, PHP, or Java could possibly handle the ‘load’, scalability, or uptime requirements that their legacy systems provide. This is so far from the truth, but again, there has been zero innovation in these arenas in the last 15 years, despite web technology making galaxy-sized leaps of progress.

    So next week someone will invent another DSL that makes Twitter easier to use, but your bank will be writing new COBOL to more efficiently transfer funds to another bank. We’re embarrassing ourselves with our petty arguments. There is an entire economy that needs to see the benefits of distributed computing, but if the friendly fire continues, we’ll all lose. Let’s stop these ridiculous arguments, pass the torch peacefully, and conquer some of these behemoths!
    [B]AMD FX 6300 | Asus M5A97 Evo R2.0 | Saophire HD 7870 Ghz Edition | Dell S2204L | WD Blue 1TB | Kingston HyperX Blu 4GB | Seasonic S12ii 520W | Samsung 24x DVDRW | Corsair 200R | Dell Keyboard | Logitech MX518 | Edifer X600 | Microtek 1kVA UPS[/B]

  9. #9
    Broken In
    Join Date
    May 2009
    Location
    BANgalore
    Posts
    116

    Default Re: The Geeks Daily

    Thanks for posting the article "How I Failed, Failed, and Finally Succeeded at Learning How to Code". Been going through it.
    http://bugup.co.cc

  10. #10
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    By Alex Schiff, University of Michigan

    Spoiler:

    A month ago, I turned down a very good opportunity from a just-funded startup to continue my job for the rest of the summer. It was in an industry I was passionate about, I would have had a leadership position, and, having just received a raise, the pay would have been substantially higher than most jobs for 20-year-old college students. I had worked there for a year (full-time during last summer and part-time during the school year) and common sense should have pushed me to go back.

    But I didn’t.

    I’ve never been one to base my actions on others’ expectations. Just ask my dad, with whom I was having arguments about moral relativism by the time I was 13. That’s why I didn’t think twice about the implications of turning down an opportunity most people my age would kill for to start my own company. When you take a leap of faith of that magnitude, you can’t look back.

    That’s not how the rest of the world sees it, though. As a college student, I’m expected to spend my summers either gaining experience in an internship or working at some job (no matter how menial) to earn money. Every April, the “So where are you working this summer?” conversation descends on the University of Michigan campus like a storm cloud. When I told people I was foregoing a paycheck for at least the next several months to build a startup, the reactions were a mix of confusion and misinformed assumptions that I couldn’t land a “real job.”

    This sentiment surfaced recently in a conversation with a family member who asserted I needed to “pay my dues to society” by joining the workforce. And most adults I know tell me I need to get a real job first before starting my own company. One common thought is, “Most of the world has to wait until they’re at least 40 before they can even think about doing something like that. Why should you be any different?” It almost feels like people assume we have some sort of secular “original sin” that demands I work for someone else before I do what makes me happy. Even when I talk to peers who don’t understand entrepreneurship, their reaction can be subtle condescension and comments like, “Oh that’s cool, but you’re going to get a real job next summer or when you graduate, right?”

    This is my real job. Building startups is what I want to do with my life, preferably as a founder. I’m really bad at working for other people. I have no deference to authority figures and have never been shy to voice my opinions, oftentimes to my detriment. I also can’t stand waiting on people that are in higher positions than me. It makes me feel like I should be in their place and really gets under my skin. All this makes me terrible at learning things from other people and taking advice. I need to learn by doing things and figuring out how to solve problems by myself. I’ll ask questions later.

    As a first-time founder, I can’t escape admitting that starting fetchnotes is an immense learning experience. I’m under no illusion that I have any idea what I’m doing. I’m thankful I had a job where I learned a lot of core skills on the fly — recruiting, business development, management, a little sales and a lot about culture creation. But what I learned — and what most people learn in generalist, non-specialized jobs available to people our age — was the tip of the iceberg.

    When you start something from scratch, you gain a much deeper understanding of these skills. Instead of being told, “We need Drupal developers. Go find Drupal developers here, here and here,” you need to brainstorm the best technical implementation of your idea, figure out what skills that requires and then figure out how to reach those people. Instead of being told, “Go reach out to these people for partnerships to do X, Y and Z,” you need to figure out what types of people and entities you’ll need to grow and how to convince them to do what you need them to do. When you’re an employee, you learn the “what”; when you’re a founder, you learn the “how” and “why.” You need to learn how to rally and motivate people and create a culture in a way that just isn’t remotely the same for a later-hired manager. There are at least 50 orders of magnitude of difference between the strategic and innovative thinking required of a founder and that of even the most integral first employee.

    Besides, put yourself in an employer’s shoes. You’re interviewing two college graduates — one who started a company and can clearly articulate why it succeeded or failed, and one who had an internship from a “brand name” institution. If I’m interviewing with someone who chooses the latter candidate, they’re not a place I want to work for. It’s likely a “do what we tell you because you’re our employee” working environment. And if that sounds like someone you want to work for, this article is probably irrelevant to you anyway.

    That’s why I never understood the argument about needing to get a job or internship as a “learning experience” or to “pay your dues.” There’s no better learning experience than starting with nothing and figuring it out for yourself (or, thankfully for me, with a co-founder). And there’s no better time to start a company than as a student. When else will your bills, foregone wages and cost of failure be so low? If I fail right now, I’ll be out some money and some time. If I wait until I’m out of college, have a family to support and student loans to pay back, that cost could be being poor, hungry and homeless.

    Okay, maybe that’s a little bit of hyperbole, but you get my point. If you have a game-changing idea, don’t make yourself wait because society says you need an internship every summer to get ahead. To quote a former boss, “just **** it out.”

    Alex Schiff is a co-founder of The New Student Union.
    [B]AMD FX 6300 | Asus M5A97 Evo R2.0 | Saophire HD 7870 Ghz Edition | Dell S2204L | WD Blue 1TB | Kingston HyperX Blu 4GB | Seasonic S12ii 520W | Samsung 24x DVDRW | Corsair 200R | Dell Keyboard | Logitech MX518 | Edifer X600 | Microtek 1kVA UPS[/B]

  11. #11
    Your Ad here nisargshah95's Avatar
    Join Date
    Feb 2010
    Location
    Goa, India
    Posts
    394

    Default Re: The Geeks Daily

    Waiting for another article buddy...
    "The nature of the Internet and the importance of net neutrality is that innovation can come from everyone."
    System: HP 15-r022TX { Intel i5 4210U || 8GB RAM || NVIDIA GeForce 820M || Ubuntu 14.04 Trusty Tahr LTS 64-bit (Primary) + Windows 8.1 Pro 64-bit || 1TB HDD + 1TB Seagate external HDD }
    Twitter - https://twitter.com/nisargshah95

  12. #12
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily


    Spoiler:
    So the Sony saga continues. As if the whole thing about 77 million breached PlayStation Network accounts wasn’t bad enough, numerous other security breaches in other Sony services have followed in the ensuing weeks, most recently with SonyPictures.com.

    As bad guys often like to do, the culprits quickly stood up and put their handiwork on show. This time around it was a group going by the name of LulzSec. Here’s the interesting bit:
    Sony stored over 1,000,000 passwords of its customers in plaintext
    Well actually, the really interesting bit is that they created a torrent of some of the breached accounts so that anyone could go and grab a copy. Ouch. Remember these are innocent customers’ usernames and passwords so we’re talking pretty serious data here. There’s no need to delve into everything Sony did wrong here, that’s both mostly obvious and not the objective of this post.

    I thought it would be interesting to take a look at password practices from a real data source. I spend a bit of time writing about how people and software manage passwords and often talk about things like entropy and reuse, but are these really discussion-worthy topics? I mean, do people generally get passwords right anyway and regularly use long, random, unique strings? We’ve got the data – let’s find out.

    What’s in the torrent

    The Sony Pictures torrent contains a number of text files with breached information and a few instructions:


    The interesting bits are in the “Sony Pictures” folder and in particular, three files with a whole bunch of accounts in them:


    After a little bit of cleansing, de-duping and an import into SQL Server for analysis, we end up with a total of 37,608 accounts. The LulzSec post earlier on did mention this was only a subset of the million they managed to obtain but it should be sufficient for our purposes here today.
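    As a rough illustration of that cleansing/de-duping step (the original analysis used SQL Server), here is a minimal Python sketch; the file names and the one-"email:password"-per-line layout are assumptions, not the actual dump format:

    Code:
# Hypothetical cleanse/de-dupe pass over the dumped account files.
accounts = {}
for path in ["beauty.txt", "delboca.txt"]:            # file names are assumptions
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            line = line.strip()
            if ":" not in line:
                continue                              # skip junk/instruction rows
            email, password = line.split(":", 1)
            accounts.setdefault(email.lower(), password)   # de-dupe on e-mail address

print(len(accounts), "unique accounts after cleansing")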

    Analysis

    Here’s what I’m really interested in:
    1. Length
    2. Variety of character types
    3. Randomness
    4. Uniqueness

    These are pretty well accepted measures for password entropy and the more you have of each, the better. Preferably heaps of all of them.

    Length

    Firstly there’s length; the accepted principle is that as length increases, so does entropy. Longer password = stronger password (all other things being equal). How long is long enough? Well, part of the problem is that there’s no consensus and you end up with all sorts of opinions on the subject. Considering the usability versus security balance, around eight characters plus is a pretty generally accepted yardstick. Let’s see the Sony breakdown:


    We end up with 93% of accounts being between 6 and 10 characters long, which is pretty predictable. Bang on 50% of these are less than eight characters. It’s interesting that seven-character passwords are a bit of an outlier – odd-number discrimination, perhaps?

    I ended up grouping the instances of 20 or more characters together – there are literally only a small handful of them. In fact there’s really only a handful from the teens onwards, so what we’d consider a relatively secure length really just doesn’t feature.
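    A quick Python sketch of that length breakdown might look like the following; it assumes the cleansed passwords have been written out one per line (the file name is an assumption):

    Code:
from collections import Counter

with open("sony_passwords.txt", encoding="utf-8", errors="ignore") as f:
    passwords = [line.rstrip("\n") for line in f if line.strip()]

# Bucket anything of 20 or more characters together, as described above.
lengths = Counter(min(len(p), 20) for p in passwords)
for length in sorted(lengths):
    label = "20+" if length == 20 else str(length)
    share = lengths[length] / len(passwords)
    print(f"{label:>3}: {lengths[length]:6d}  ({share:.1%})")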

    Character types

    Length only gives us so much; what’s really important is the diversity within that length. Let’s take a look at character types, which we’ll categorise as follows:
    1. Numbers
    2. Uppercase
    3. Lowercase
    4. Everything else

    Again, we’ve got this issue of usability versus security to consider, but good practice would normally be considered to be three or more character types. Let’s see what we’ve got:


    Or put another way, only 4% of passwords had three or more character types. But it’s the spread of character types which is also interesting, particularly when only a single type is used:


    In short, half of the passwords had only one character type, and nine out of ten of those were all lowercase. But the really startling bit is the use of non-alphanumeric characters:


    Yep, less than 1% of passwords contained a non-alphanumeric character. Interestingly, this also reconciles with the analysis done on the Gawker database a little while back.
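    Classifying each password into those four character types is straightforward; here is a minimal Python sketch, run on a hand-picked sample rather than the real dump:

    Code:
import string

def char_classes(password):
    """Return which of the four character types a password uses."""
    classes = set()
    for ch in password:
        if ch.isdigit():
            classes.add("number")
        elif ch in string.ascii_uppercase:
            classes.add("uppercase")
        elif ch in string.ascii_lowercase:
            classes.add("lowercase")
        else:
            classes.add("other")   # punctuation, spaces, anything non-alphanumeric
    return classes

# Hand-picked sample in place of the real dump:
sample = ["seinfeld", "abc123", "1qazZAQ!", "PASSWORD", "purple"]
for p in sample:
    print(p, sorted(char_classes(p)))
print(sum(1 for p in sample if len(char_classes(p)) >= 3), "of", len(sample), "use 3+ character types")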

    Randomness

    So how about randomness? Well, one way to look at this is how many of the passwords are identical. The top 25 were:

    seinfeld, password, winner, 123456, purple, sweeps, contest, princess, maggie, 9452, peanut, shadow, ginger, michael, buster, sunshine, tigger, cookie, george, summer, taylor, bosco, abc123, ashley, bailey


    Many of the usual culprits are in there: “password”, “123456” and “abc123”. We saw all these back in the top 25 from the Gawker breach. We also see lots of passwords related to the fact this database was apparently related to a competition: “winner”, “sweeps” and “contest”. A few of these look very specific (“9452”, for example), but there may have been context to this in the signup process which led multiple people to choose the same password.

    However, in the grand scheme of things, there weren’t a whole lot of instances of multiple people choosing the same password; in fact, the 25 above boiled down to only 2.5%. Furthermore, 80% of passwords actually only occurred once, so whilst poor password entropy is looking rampant, most people are making these poor choices independently and achieving different results.

    Another way of assessing the randomness is to compare the passwords to a password dictionary. Now, this doesn’t necessarily mean an English dictionary in the way we know it; rather, it’s a collection of words which may be used as passwords, so you’ll get things like obfuscated characters and letter/number combinations. I’ll use [S]this one[/S], which has about 1.7 million entries. Let’s see how many of the Sony passwords are in there:


    So more than one third of passwords conform to a relatively predictable pattern. That’s not to say they’re not long enough or don’t contain sufficient character types; in fact, the passwords “1qazZAQ!” and “dallascowboys” were both matched, so you’ve got four character types (even with a special character) and a 13-character-long password respectively. The thing is that they’re simply not random – they’ve obviously made appearances in password databases before.
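    The dictionary comparison itself is just a set lookup. A rough Python sketch follows; both file names are assumptions, and folding everything to lower case before matching is a design choice here, not necessarily what the original analysis did:

    Code:
with open("password_dictionary.txt", encoding="utf-8", errors="ignore") as f:
    dictionary = {line.strip().lower() for line in f if line.strip()}

with open("sony_passwords.txt", encoding="utf-8", errors="ignore") as f:
    passwords = [line.strip() for line in f if line.strip()]

matched = sum(1 for p in passwords if p.lower() in dictionary)
print(f"{matched} of {len(passwords)} ({matched / len(passwords):.1%}) are straight dictionary hits")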

    Uniqueness

    This is the one that gets really interesting as it asks the question “are people creating unique passwords across multiple accounts?” The thing about this latest Sony exploit is that it included data from multiple apparently independent locations within the organisation and as we saw earlier on, the dump LulzSec provided consists of several different data sources.

    Of particular interest in those data sources are the “Beauty” and “Delboca” files as they contain almost all the accounts with a pretty even split between them. They also contain well over 2,000 accounts with the same email address, i.e. someone has registered on both databases.

    So how rampant is password reuse between these two systems? Let’s take a look:


    92% of passwords were reused across both systems. That’s a pretty damning indictment of the whole “unique password” mantra. Is the situation really this bad? Or are the figures skewed by folks perhaps thinking “Sony is Sony” and being a little relaxed with their reuse?

    Let’s make it really interesting and compare accounts against Gawker. The internet being what it is, there will always be the full Gawker database floating around out there, and a quick Google search easily discovers live torrents. Gnosis (the group behind the Gawker breach) was a bit more generous than LulzSec and provided over 188,000 accounts for us to take a look at.

    Although there were only 88 email addresses found in common with Sony (I had thought it might be a bit higher but then again, they’re pretty independent fields), the results are still very interesting:


    Two thirds of people with accounts at both Sony and Gawker reused their passwords. Now, I’m not sure how much crossover there was, timeframe-wise, in terms of when the Gawker accounts were created versus when the Sony ones were. It’s quite possible the Sony accounts came after the Gawker breach (remember, this was six months ago now), and people got a little wise to the non-unique risk. But whichever way you look at it, there’s an awful lot of reuse going on here.

    What really strikes me in this case is that between these two systems we have a couple of hundred thousand email addresses, usernames (the Gawker dump included these) and passwords. Based on the finding above, there’s a statistically good chance that the majority of them will work with other websites. How many Gmail or eBay or Facebook accounts are we holding the keys to here? And of course “we” is a bit misleading because anyone can grab these off the net right now. Scary stuff.
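    Checking reuse across two dumps boils down to joining them on e-mail address and comparing the passwords. A minimal Python sketch, with the "email:password" per-line layout and the file names both assumed:

    Code:
def load_accounts(path):
    """Map lower-cased e-mail address to password for one dump file."""
    accounts = {}
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            if ":" in line:
                email, password = line.strip().split(":", 1)
                accounts[email.lower()] = password
    return accounts

beauty = load_accounts("beauty.txt")
delboca = load_accounts("delboca.txt")

common = set(beauty) & set(delboca)
reused = sum(1 for email in common if beauty[email] == delboca[email])
if common:
    print(f"{len(common)} shared e-mail addresses, "
          f"{reused} ({reused / len(common):.0%}) reusing the same password")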

    Putting it in an exploit context

    When an entire database is compromised and all the passwords are just sitting there in plain text, the only thing saving customers of the service is their password uniqueness. Forget about rainbow tables and brute force – we’ll come back to that – the one thing which stops the problem becoming any worse for them is that it’s the only place those credentials appear. Of course we know that both from the findings above and many other online examples, password reuse is the norm rather than the exception.

    But what if the passwords in the database were hashed? Not even salted, just hashed? How vulnerable would the passwords have been to a garden variety rainbow attack? It’s pretty easy to get your hands on a rainbow table of hashed passwords containing between one and nine lowercase and numeric characters (RainbowCrack is a good place to start), so how many of the Sony passwords would easily fall?


    82% of passwords would easily fall to a basic rainbow table attack. Not good, but you can see why the rainbow table approach can be so effective, not so much because of its ability to make smart use of the time-memory trade-off scenario, but simply because it only needs to work against a narrow character set of very limited length to achieve a high success rate.

    And if the passwords were salted before the hash is applied? Well, more than a third of the passwords were easily found in a common dictionary so it’s just a matter of having the compute power to brute force them and repeat the salt plus hash process. It may not be a trivial exercise, but there’s a very high probability of a significant portion of the passwords being exposed.
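    A small Python sketch makes the difference concrete: an unsalted MD5 of a common password can simply be looked up in a precomputed table (a real rainbow table uses hash chains rather than a full lookup, but the effect is the same), whereas a per-account salt forces a separate brute-force run for every hash:

    Code:
import hashlib

def md5_hex(text):
    return hashlib.md5(text.encode("utf-8")).hexdigest()

# Conceptually, a precomputed lookup table for unsalted hashes:
precomputed = {md5_hex(p): p for p in ["password", "123456", "abc123", "seinfeld"]}

leaked_hash = md5_hex("seinfeld")        # what a hashed-but-unsalted dump would expose
print(precomputed.get(leaked_hash))      # -> 'seinfeld', recovered instantly

# With per-account salts, the same password no longer hashes to the same value,
# so a shared lookup table is useless and each account must be attacked separately.
print(md5_hex("salt1" + "seinfeld") == md5_hex("salt2" + "seinfeld"))  # False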

    Summary

    None of this is overly surprising, although it remains alarming. We know passwords are too short, too simple, too predictable and too much like the other ones the individual has created in other locations. The bit which did take me aback a bit was the extent to which passwords conformed to very predictable patterns, namely only using alphanumeric characters, being 10 characters or less and having a much better than average chance of being the same as other passwords the user has created on totally independent systems.

    Sony has clearly screwed up big time here, no doubt. The usual process with these exploits is to berate the responsible organisation for only using MD5 or because they didn’t salt the password before hashing, but to not even attempt to obfuscate passwords and simply store them in the clear? Wow.

    But the bigger story here, at least to my eye, is that users continue to apply lousy password practices. Sony’s breach is Sony’s fault, no doubt, but a whole bunch of people have made the situation far worse than it needs to be through reuse. Next week when another Sony database is exposed (it’s a pretty safe bet based on recent form), even if an attempt has been made to secure passwords, there’s a damn good chance a significant portion of them will be exposed anyway. And that is simply the fault of the end users.

    Conclusion? Well, I’ll simply draw back to a previous post and say it again: The only secure password is the one you can’t remember.


    There are loads of pending articles ATM, I'll publish them whenever I'm free.
    [B]AMD FX 6300 | Asus M5A97 Evo R2.0 | Saophire HD 7870 Ghz Edition | Dell S2204L | WD Blue 1TB | Kingston HyperX Blu 4GB | Seasonic S12ii 520W | Samsung 24x DVDRW | Corsair 200R | Dell Keyboard | Logitech MX518 | Edifer X600 | Microtek 1kVA UPS[/B]

  13. #13
    its a Race to Infinity... Vyom's Avatar
    Join Date
    May 2009
    Location
    New Delhi
    Posts
    4,414

    Default Re: The Geeks Daily


    Spoiler:


    Time flies when you're having fun. But you're at work, and work sucks. So how is it 5:00 already?

    When we talk about "losing time," we aren't referring to that great night out, or that week of wonderful vacation, or the three-hour film that honestly didn't feel like more than an hour. No, when we fret about not having enough time, or wonder where exactly all those hours went, we're talking about mundane things. The workday. A lazy, unremarkable Sunday. Days when we gave time no apparent reason to fly, and it flew anyway.

    Why does that happen? And where did all the time go? The secret lies in your brain's ticking clock—an elusive, inexact, and easily ignorable clock.

    First of all, yes

    In understanding any complex issue, especially a psychological one, intuition doesn't usually get us too far. As often as you can scrabble together a theory about how the mind works, a man in a lab coat will adjust his glasses, tilt forward his brow, and deliver a carefully intoned, "Actually..."

    But not today. Most of what you think you know about the perception of time is true.

    Read More...
    My Boring Website: http://vineetkumar.me | My not so boring FB Geek Page: www.fb.com/geeksworld
    Moto E Unboxing: http://youtu.be/132Jjx_tMs0

  14. #14
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    ^Nice post, though I already read it a few months ago.
    [B]AMD FX 6300 | Asus M5A97 Evo R2.0 | Saophire HD 7870 Ghz Edition | Dell S2204L | WD Blue 1TB | Kingston HyperX Blu 4GB | Seasonic S12ii 520W | Samsung 24x DVDRW | Corsair 200R | Dell Keyboard | Logitech MX518 | Edifer X600 | Microtek 1kVA UPS[/B]

  15. #15
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default The Internet Is My Religion



    Spoiler:
    Today, I was lucky enough to attend the second day of sessions at Personal Democracy Forum. I didn’t really know what I was getting myself into. As a social web / identity junkie, I was excited to see Vivek Kundra, Jay Rosen, Dan Gillmor, and Doc Searls. I hadn’t heard of many of the other presenters, including one whose talk would be the most inspiring I had ever seen on a live stage.

    As Jim Gilliam took the stage, his slightly nervous, ever-so-geeky sensibility betrayed no signs of the passion, earnestness, and magnificence with which he would deliver what can only be described as a modern epic: his life story.

    Watch it now:



    [Don't read on unless you have watched the video. The rest of this post probably won't make much sense.]


    Apologies for the long quote, but I find his closing words incredibly profound [my bolding]:
    As I was prepping for the surgery, I wasn’t thinking about Jesus, or whether my heart would start beating again after they stopped it, or whether I would go to heaven if it didn’t. I was thinking about all of the people who had gotten me here. I owed every moment of my life to countless people I would never meet. Tomorrow, that interconnectedness would be represented in my own physical body – three different DNAs: individually they were useless, but together, they would equal one functioning human. What an incredible debt to repay! I didn’t even know where to start.

    And that’s when I truly found God. God is just what happens when humanity is connected. Humanity connected is God. There was no way I would ever repay this debt. It was only by the grace of God – your grace, that I would be saved. The truth is we all have this same cross to bear. We all owe every moment of our lives to countless people we will never meet. Whether it’s the soldiers who give us freedom because they fight for our country, or the surgeons who give us the cures that keep us alive. We all owe every moment of our lives to each other. We are all connected. We are all in debt to each other.

    The Internet gives us the opportunity to repay just a small part of that debt. As a child, I believed in creationism, that the Universe was created in six days. Today, we are the creators. We each have our own unique skills and talents to contribute to create the Kingdom of God. We serve God best when we do what we love for the greatest cause we can imagine. What the people in this room do is spiritual – it is profound. We are the leaders of this new religion. We have faith that people connected can create a new world. Each one of us is a creator but together we are The Creator.

    All I know about the person whose lungs I now have is that he was 22 years old and six feet tall. I know nothing about who he was as a person, but I do know something about his family. I know that in the height of loss, when all anyone should have to do is grieve, as their son or their brother lay motionless on the bed, they were asked to give up to seven strangers a chance to live. And they said yes.

    Today, I breathe through someone else’s lungs while another’s blood flows through my veins. I have faith in people, I believe in God, and the Internet is my religion.
    The audience rose in a standing ovation, twice. A few of the reactions:
    You know it’s an amazing talk when everyone looks up from their computer and stops working to pay attention. #pdf11@katieharbath

    Standing ovation for @jgilliam at #PDF11, not a dry eye in the house – @doctorow
    As I walked back to the office from the Skirball Center this afternoon, I found myself thinking through what his message means to me, and why I was so moved by his words. Working at betaworks, I am confronted with and fascinated daily by the creative opportunities on the Web – opportunities to change the way that we connect, communicate, share, learn, discover, live, and grow. Technology is only as good as the people who wield it, so perhaps I’m a bit idealistic and naive in my boundless optimism, but I am consistently awestruck at the power of the Web as a creative force.

    I’m not a religious person, but I do believe there is something humbling about the act of creation – whether your form of creation is art, software, ideas, words, music – there is something about the act of creation that is worth striving for, worth sacrificing for, worth living for. Regardless of your view of her politics, Ayn Rand spoke to this notion beautifully:
    “Whether it’s a symphony or a coal mine, all work is an act of creating and comes from the same source: from an inviolate capacity to see through one’s own eyes . . . which means: the capacity to see, to connect and to make what had not been seen, connected and made before.” – Ch. II, The Utopia of Greed, Atlas Shrugged
    The Web – at its simplest, an open and generally accessible medium for two-way connectivity – bridges creative energy irrespective of geography, socioeconomic status, field of study, and language. It enables and even encourages the collision of ideas, problem statements, inspirations, and solutions. As Steven Johnson offers in his fantastic book, Where Good Ideas Come From, “good ideas are not conjured out of thin air; they are built out of a collection of existing parts, the composition of which expands (and, occasionally, contracts) over time.” He might as well be describing the Web.

    The Internet is a medium capable of unlocking and combining the creative energies of Earth’s seven billion in a way never before imaginable. Through the near-infinite scale with which it powers human connectivity, the Internet has shown in just a few short years its ability to enable anything from a collection of the world’s information, to a revolution, to, in the case of Jim Gilliam, life itself.

    I’m so excited to be a small part of what can only be called a movement. I’m excited to build, I’m excited to change, and, perhaps most critically, I’m excited to defend.
    Last edited by sygeek; 17-06-2011 at 01:49 PM.
    [B]AMD FX 6300 | Asus M5A97 Evo R2.0 | Saophire HD 7870 Ghz Edition | Dell S2204L | WD Blue 1TB | Kingston HyperX Blu 4GB | Seasonic S12ii 520W | Samsung 24x DVDRW | Corsair 200R | Dell Keyboard | Logitech MX518 | Edifer X600 | Microtek 1kVA UPS[/B]

  16. #16
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    Write

    Spoiler:
    Yesterday was my 49th birthday. By fortuitous circumstance, I spotted an item on Hacker News explaining that reputation on Stack Overflow seems to rise with age. I don’t have very much Stack Overflow reputation, but I do have a little Hacker News karma and over the years I’ve written a few articles that made it to the front page of reddit, programming.reddit.com, and Hacker News.

    Somebody suggested that age was my secret for garnering reputation and writing well. I don’t think so. Here’s my secret, here’s what I think I do to get reputation, and what I think may work for you:

    Write.

    That’s it. That’s everything. Just write. If you need more words, the secret to internet reputation is to write more. If you aren’t writing now, start writing. If you are writing now, write more.

    Now some of you want more exposition, so for entertainment purposes only, I’ll explain why I think this is the case. But even if I’m wrong about why it’s the case, I’m sure I’m right that it is the case. So write.

    Now here’s why I think writing more is the right strategy. The wrong strategy is to write less often but increase the quality.

    This is a wrong strategy because it is based on a wrong assumption, namely that there’s a big tradeoff between quality and quantity. I agree that given more time, I can polish an essay. I can fix typos, tighten things up, clarify things. That’s very true, and if you are talking about the difference between one essay a day done well and three done poorly, I’ll buy that you are already writing enough if you write one a day, and you are better off getting the spelling right than writing two more unpolished essays.

    But in quantities of less than one essay a day or one essay a week, the choice between writing more essays and writing higher quality essays is a false dichotomy. In fact, you may find that practice writing improves your writing, so writing more often leads to writing with higher quality. You also get nearly instantaneous feedback on the Internet, so the more you write, the more you learn about what works and what doesn’t work when you write.

    Now that I’ve explained why I think writing less often is the wrong strategy, I will explain how writing for the Internet rewards writing more often. Writing on the Internet is nothing like writing on dead trees. For various legacy reasons, writing on dead trees involves writing books. The entire industry is built around long feedback cycles. It’s very expensive to get things wrong up front, so the process is optimized around doing it right the first time, with editors and proof-readers and what-not all conspiring to delay publishing your words where people can read them.

    Worse, the feedback loop is appalling. What are you supposed to do with a bad review on Amazon.com? Incorporate it into the second edition of your masterpiece?

    Speaking of masterpieces, that’s the other problem. Since books are what sell, if you want to write on dead trees, you have to write books. A book is a Big Thing, involving a lot of Planning. And structure. And organization. It demands a quality approach. Books are the “Big Design Up Front” poster children for writing.

    Essays, rants, opinions… If writing a book is Big Design Up Front, blogging and commenting is Cowboy Coding. A book is a Cathedral, a blog is a Bazaar. And in a good way! You get feedback faster. It’s the ultimate in Release Early, Release Often. You have an idea, you write it, you get feedback, you edit.

    I am unapologetic about editing my comments and essays. Some criticize me for retracting my words when faced with a good argument. I say, **** You, this is not a debate, this is a process for generating and refining good ideas. I lie, of course, I have never said that. I actually say “Thank You!” Or I try. When I fail to be gracious in accepting criticism, that is my failing. The process of releasing ideas and refining them in the spotlight is one I value and think is a win for everyone.

    Another problem with a book is that it’s One Big Thing. Very few book reviews say “Chapter two is a gem, buy the book for this and ignore chapter six, the author is confused.” Most just say “He’s an idiot, chapter six is proof of that.”

    A blog is not One Big Thing. Many people say my blog is worth reading. They are probably wrong: I have had many popular essays. But for every “hit,” I have had an outrageous number of misses. If you read everything I wrote starting in 2004 to now, you’d be amazed I get any work in this industry. What people mean is, my good stuff is worth reading.

    That’s the magic of the Internet. Thanks to Twitter and Hacker News and whatever else, if you write a good thing, it gets judged on its own. You can write 99 failures for every success, but you are judged by your best work, not your worst.

    And let me tell you something about my Best Work: I often think I am writing something Important, something They’ll Remember Me For. And it sinks without a trace. A recent essay on the value of planning and the unimportance of plans comes to mind.

    And then a day later I’ll dash off a rant based on a simple idea or insight, and the next thing I know it’s #1 on Hacker News. If I was writing a book, I’d do a terrible job, because my nose for what people want is broken. When I write essays, I don’t care, I write everything and I let Hacker News and Twitter sort out the wheat from my chaff.

    If you have a good nose, a great instinct, maybe you can write less. But even if you don’t, you write more and you crowd-source the nose for you. And thanks to the fine granularity of essays and the willingness of the crowd to ignore your misses and celebrate your hits, your reputation grows inexorably whenever you sit down and simply write.

    So write.

    (discuss)

    p.s. Here's an interesting counter-point.
    [B]AMD FX 6300 | Asus M5A97 Evo R2.0 | Saophire HD 7870 Ghz Edition | Dell S2204L | WD Blue 1TB | Kingston HyperX Blu 4GB | Seasonic S12ii 520W | Samsung 24x DVDRW | Corsair 200R | Dell Keyboard | Logitech MX518 | Edifer X600 | Microtek 1kVA UPS[/B]

  17. #17
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily


    Spoiler:
    Some April morning last year I received a letter from the local police department, bureau of criminal investigation. “Whoops”, I thought. What could have happened there? Had I forgotten to pay a speeding ticket? I opened the letter. It said I was the main suspect in a case of “data destruction” and I was supposed to visit the police department as soon as possible to give a statement.

    Wait. What is “data destruction”? Well, I had to translate it, but, I am from Austria where there is a paragraph (§126a, StGB) that basically says the following: If you modify, delete or destroy data that is not yours, you may get a prison sentence of six months or a fine. There are probably similar laws in other countries.

    But how could I have done that? I wasn’t aware of any situation in which I could have deleted anyone’s data. I work as a sysadmin for a small consulting company, but it seemed implausible that they would charge me with the above-mentioned offense.

    What I supposedly did wrong

    So I went to the police department. I was terrified because I had absolutely no idea what I had done wrong. The police officer however was very friendly and asked me to take a seat. He wanted to know if I knew a person X from Tyrol. Of course I didn’t. That was more than 500 kilometers away. Turns out, I hacked their Facebook profile.

    Here’s the summary of what I was being charged with:
    • Creating a fake e-mail address impersonating the victim
    • Using this e-mail address to hack into their Facebook account
    • Deleting all data from the Facebook profile and then changing the e-mail address and password
    • Deleting the fake e-mail address

    All that had happened one Sunday evening. I recall being at home with my girlfriend, watching TV. I like to keep a detailed schedule in my calendar; that’s how I knew. And I knew I was absolutely innocent. But how did they come to think it was me?

    How I became suspect

    Well, at that time I had an iPhone. I also had a mobile broadband contract with a major telephone company, let’s call them Company X. The police officer told me that upon investigation, they positively identified the IP address under which the e-mail address was created. It was the IP address assigned to my iPhone that evening.

    That seemed impossible. There were several pieces of evidence supporting the fact that I could never have done this:
    • We have no 3G reception in our apartment.
    • The e-mail address was deleted five minutes after being created. Nobody is that quick on an iPhone.
    • The e-mail provider doesn’t offer the feature to register an address on their mobile sites.

    You can’t change Facebook account details on their mobile interface either. I know, I could have used the non-mobile site, but I wouldn’t have been that fast.

    I told the police officer all of that. He said he understood and jotted down some notes. They would contact me, and I shouldn’t worry. At least he was on my side. But there I was, the main suspect in a case I never wanted to be part of. The real offender was still out there.

    What did I do next? I called the telephone company.

    Contacting the Telco

    As is usually the case when you call your ISP/Telco, they don’t really care what you have to say. I probably talked to ten different people. Chances are you have more knowledge about computers and how the internet works than they do. That’s why it didn’t surprise me that I was told things like:

    • “That’s absolutely impossible”
    • “If they say it’s your IP, you’re guilty!”
    • “Let me get a supervisor” (hung up after a minute of elevator music)
    • “I really don’t know what this is all about”


    At that point I just gave up. I had already contacted a lawyer who would be prepared to go to court with me if necessary. As a student without legal expenses insurance, I had to pay him in advance just to get hold of the case files and take a look at them, which didn’t help. I waited and waited, and then I got a phone call.

    How everything sorted itself out

    It was the legal department of the Telco. A lady was calling, and the first thing she did was deeply apologize. She told me what had happened: Normally, when the prosecutor asks for an IP address and the corresponding owner, they have to fill out a form containing both pieces of information, which is then sent to the authorities. In my case they had gotten the IP address from the e-mail provider, and the employee’s job was to match it against their records. The flaw could not have been simpler: she had just swapped two digits in the IP address.

    As compensation, they said I’d no longer have to pay the base fee – how generous! Luckily, they also agreed to pay my lawyer’s costs, whose invoice I simply forwarded to them. I think they were just scared that I would take them to court for wrongfully turning me in.

    A few weeks later the police officer contacted me. He also confirmed that the real offender was X’s ex-boyfriend, who probably just knew the password and wanted some payback.

    What we can learn from this

    One can clearly see from such an example that there are still some holes in the security of current data retention policies. While governments have an understandable interest in storing communication data to allow effective criminal prosecution, the following should not be forgotten: No matter how perfect a system is, there is always the possibility of a weak implementation. Also, once the human factor comes into play, we can’t rely on the principles of an automated system anymore (even if it was flawless). To err is human, it seems. Luckily, things were cleared up in my case.

    So, should you ever get into a situation where you are wrongfully suspected, make sure to let people know that there is a possibility of an error, even if they tell you otherwise.
    [B]AMD FX 6300 | Asus M5A97 Evo R2.0 | Saophire HD 7870 Ghz Edition | Dell S2204L | WD Blue 1TB | Kingston HyperX Blu 4GB | Seasonic S12ii 520W | Samsung 24x DVDRW | Corsair 200R | Dell Keyboard | Logitech MX518 | Edifer X600 | Microtek 1kVA UPS[/B]

  18. #18
    Your Ad here nisargshah95's Avatar
    Join Date
    Feb 2010
    Location
    Goa, India
    Posts
    394

    Thumbs up Re: The Geeks Daily

    Quote Originally Posted by SyGeek View Post

    This one’s good.
    "The nature of the Internet and the importance of net neutrality is that innovation can come from everyone."
    System: HP 15-r022TX { Intel i5 4210U || 8GB RAM || NVIDIA GeForce 820M || Ubuntu 14.04 Trusty Tahr LTS 64-bit (Primary) + Windows 8.1 Pro 64-bit || 1TB HDD + 1TB Seagate external HDD }
    Twitter - https://twitter.com/nisargshah95

  19. #19
    Whompy Whomperson Nipun's Avatar
    Join Date
    Mar 2011
    Location
    New Delhi
    Posts
    1,467

    Default Re: The Geeks Daily

    Quote Originally Posted by SyGeek View Post
    This is very good... umm... article.


    I have not read it till the end. I will do it once I complete my homework.
    The beauty of Indian roads is that one needs to look on both sides while crossing a one way road!
    [B]▒▒ [URL="http://adf.ly/2148556/tf2comic"] ¯TF2 COMIC MAKER!_ ▒▒[/URL][/B]
    [url="http://adf.ly/2148556/roadsense"]Drive sensibly, please![/url]
    [URL="http://adf.ly/2148556/nocivicsense"]Educated Illiterates[/URL]

  20. #20
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    By David Eagleman

    Spoiler:


    Advances in brain science are calling into question the volition behind many criminal acts. A leading neuroscientist describes how the foundations of our criminal-justice system are beginning to crumble, and proposes a new way forward for law and order.

    On the steamy first day of August 1966, Charles Whitman took an elevator to the top floor of the University of Texas Tower in Austin. The 25-year-old climbed the stairs to the observation deck, lugging with him a footlocker full of guns and ammunition. At the top, he killed a receptionist with the butt of his rifle. Two families of tourists came up the stairwell; he shot at them at point-blank range. Then he began to fire indiscriminately from the deck at people below. The first woman he shot was pregnant. As her boyfriend knelt to help her, Whitman shot him as well. He shot pedestrians in the street and an ambulance driver who came to rescue them.

    The evening before, Whitman had sat at his typewriter and composed a suicide note:
    I don’t really understand myself these days. I am supposed to be an average reasonable and intelligent young man. However, lately (I can’t recall when it started) I have been a victim of many unusual and irrational thoughts.
    By the time the police shot him dead, Whitman had killed 13 people and wounded 32 more. The story of his rampage dominated national headlines the next day. And when police went to investigate his home for clues, the story became even stranger: in the early hours of the morning on the day of the shooting, he had murdered his mother and stabbed his wife to death in her sleep.
    It was after much thought that I decided to kill my wife, Kathy, tonight … I love her dearly, and she has been as fine a wife to me as any man could ever hope to have. I cannot rationa[l]ly pinpoint any specific reason for doing this …
    Along with the shock of the murders lay another, more hidden, surprise: the juxtaposition of his aberrant actions with his unremarkable personal life. Whitman was an Eagle Scout and a former marine, studied architectural engineering at the University of Texas, and briefly worked as a bank teller and volunteered as a scoutmaster for Austin’s Boy Scout Troop 5. As a child, he’d scored 138 on the Stanford-Binet IQ test, placing in the 99th percentile. So after his shooting spree from the University of Texas Tower, everyone wanted answers.

    For that matter, so did Whitman. He requested in his suicide note that an autopsy be performed to determine if something had changed in his brain—because he suspected it had.
    I talked with a Doctor once for about two hours and tried to convey to him my fears that I felt [overcome by] overwhelming violent impulses. After one session I never saw the Doctor again, and since then I have been fighting my mental turmoil alone, and seemingly to no avail.
    Whitman’s body was taken to the morgue, his skull was put under the bone saw, and the medical examiner lifted the brain from its vault. He discovered that Whitman’s brain harbored a tumor the diameter of a nickel. This tumor, called a glioblastoma, had blossomed from beneath a structure called the thalamus, impinged on the hypothalamus, and compressed a third region called the amygdala. The amygdala is involved in emotional regulation, especially of fear and aggression. By the late 1800s, researchers had discovered that damage to the amygdala caused emotional and social disturbances. In the 1930s, the researchers Heinrich Klüver and Paul Bucy demonstrated that damage to the amygdala in monkeys led to a constellation of symptoms, including lack of fear, blunting of emotion, and overreaction. Female monkeys with amygdala damage often neglected or physically abused their infants. In humans, activity in the amygdala increases when people are shown threatening faces, are put into frightening situations, or experience social phobias. Whitman’s intuition about himself—that something in his brain was changing his behavior—was spot-on.

    Stories like Whitman’s are not uncommon: legal cases involving brain damage crop up increasingly often. As we develop better technologies for probing the brain, we detect more problems, and link them more easily to aberrant behavior. Take the 2000 case of a 40-year-old man we’ll call Alex, whose sexual preferences suddenly began to transform. He developed an interest in child pornography—and not just a little interest, but an overwhelming one. He poured his time into child-pornography Web sites and magazines. He also solicited prostitution at a massage parlor, something he said he had never previously done. He reported later that he’d wanted to stop, but “the pleasure principle overrode” his restraint. He worked to hide his acts, but subtle sexual advances toward his prepubescent stepdaughter alarmed his wife, who soon discovered his collection of child pornography. He was removed from his house, found guilty of child molestation, and sentenced to rehabilitation in lieu of prison. In the rehabilitation program, he made inappropriate sexual advances toward the staff and other clients, and was expelled and routed toward prison.

    At the same time, Alex was complaining of worsening headaches. The night before he was to report for prison sentencing, he couldn’t stand the pain anymore, and took himself to the emergency room. He underwent a brain scan, which revealed a massive tumor in his orbitofrontal cortex. Neurosurgeons removed the tumor. Alex’s sexual appetite returned to normal.

    The year after the brain surgery, his pedophilic behavior began to return. The neuroradiologist discovered that a portion of the tumor had been missed in the surgery and was regrowing—and Alex went back under the knife. After the removal of the remaining tumor, his behavior again returned to normal.

    When your biology changes, so can your decision-making and your desires. The drives you take for granted (“I’m a heterosexual/homosexual,” “I’m attracted to children/adults,” “I’m aggressive/not aggressive,” and so on) depend on the intricate details of your neural machinery. Although acting on such drives is popularly thought to be a free choice, the most cursory examination of the evidence demonstrates the limits of that assumption.

    Alex’s sudden pedophilia illustrates that hidden drives and desires can lurk undetected behind the neural machinery of socialization. When the frontal lobes are compromised, people become disinhibited, and startling behaviors can emerge. Disinhibition is commonly seen in patients with frontotemporal dementia, a tragic disease in which the frontal and temporal lobes degenerate. With the loss of that brain tissue, patients lose the ability to control their hidden impulses. To the frustration of their loved ones, these patients violate social norms in endless ways: shoplifting in front of store managers, removing their clothes in public, running stop signs, breaking out in song at inappropriate times, eating food scraps found in public trash cans, being physically aggressive or sexually transgressive. Patients with frontotemporal dementia commonly end up in courtrooms, where their lawyers, doctors, and embarrassed adult children must explain to the judge that the violation was not the perpetrator’s fault, exactly: much of the brain has degenerated, and medicine offers no remedy. Fifty-seven percent of frontotemporal-dementia patients violate social norms, as compared with only 27 percent of Alzheimer’s patients.

    Changes in the balance of brain chemistry, even small ones, can also cause large and unexpected changes in behavior. Victims of Parkinson’s disease offer an example. In 2001, families and caretakers of Parkinson’s patients began to notice something strange. When patients were given a drug called pramipexole, some of them turned into gamblers. And not just casual gamblers, but pathological gamblers. These were people who had never gambled much before, and now they were flying off to Vegas. One 68-year-old man amassed losses of more than $200,000 in six months at a series of casinos. Some patients became consumed with Internet poker, racking up unpayable credit-card bills. For several, the new addiction reached beyond gambling, to compulsive eating, excessive alcohol consumption, and hypersexuality.

    What was going on? Parkinson’s involves the loss of brain cells that produce a neurotransmitter known as dopamine. Pramipexole works by impersonating dopamine. But it turns out that dopamine is a chemical doing double duty in the brain. Along with its role in motor commands, it also mediates the reward systems, guiding a person toward food, drink, mates, and other things useful for survival. Because of dopamine’s role in weighing the costs and benefits of decisions, imbalances in its levels can trigger gambling, overeating, and drug addiction—behaviors that result from a reward system gone awry. Physicians now watch for these behavioral changes as a possible side effect of drugs like pramipexole. Luckily, the negative effects of the drug are reversible—the physician simply lowers the dosage, and the compulsive gambling goes away.

    The lesson from all these stories is the same: human behavior cannot be separated from human biology. If we like to believe that people make free choices about their behavior (as in, “I don’t gamble, because I’m strong-willed”), cases like Alex the pedophile, the frontotemporal shoplifters, and the gambling Parkinson’s patients may encourage us to examine our views more carefully. Perhaps not everyone is equally “free” to make socially appropriate choices.

    Does the discovery of Charles Whitman’s brain tumor modify your feelings about the senseless murders he committed? Does it affect the sentence you would find appropriate for him, had he survived that day? Does the tumor change the degree to which you consider the killings “his fault”? Couldn’t you just as easily be unlucky enough to develop a tumor and lose control of your behavior?

    On the other hand, wouldn’t it be dangerous to conclude that people with a tumor are free of guilt, and that they should be let off the hook for their crimes?

    As our understanding of the human brain improves, juries are increasingly challenged with these sorts of questions. When a criminal stands in front of the judge’s bench today, the legal system wants to know whether he is blameworthy. Was it his fault, or his biology’s fault?

    I submit that this is the wrong question to be asking. The choices we make are inseparably yoked to our neural circuitry, and therefore we have no meaningful way to tease the two apart. The more we learn, the more the seemingly simple concept of blameworthiness becomes complicated, and the more the foundations of our legal system are strained.

    If I seem to be heading in an uncomfortable direction—toward letting criminals off the hook—please read on, because I’m going to show the logic of a new argument, piece by piece. The upshot is that we can build a legal system more deeply informed by science, in which we will continue to take criminals off the streets, but we will customize sentencing, leverage new opportunities for rehabilitation, and structure better incentives for good behavior. Discoveries in neuroscience suggest a new way forward for law and order—one that will lead to a more cost-effective, humane, and flexible system than the one we have today. When modern brain science is laid out clearly, it is difficult to justify how our legal system can continue to function without taking what we’ve learned into account.

    Many of us like to believe that all adults possess the same capacity to make sound choices. It’s a charitable idea, but demonstrably wrong. People’s brains are vastly different.

    Who you even have the possibility to be starts at conception. If you think genes don’t affect how people behave, consider this fact: if you are a carrier of a particular set of genes, the probability that you will commit a violent crime is four times as high as it would be if you lacked those genes. You’re three times as likely to commit robbery, five times as likely to commit aggravated assault, eight times as likely to be arrested for murder, and 13 times as likely to be arrested for a sexual offense. The overwhelming majority of prisoners carry these genes; 98.1 percent of death-row inmates do. These statistics alone indicate that we cannot presume that everyone is coming to the table equally equipped in terms of drives and behaviors.

    And this feeds into a larger lesson of biology: we are not the ones steering the boat of our behavior, at least not nearly as much as we believe. Who we are runs well below the surface of our conscious access, and the details reach back in time to before our birth, when the meeting of a sperm and an egg granted us certain attributes and not others. Who we can be starts with our molecular blueprints—a series of alien codes written in invisibly small strings of acids—well before we have anything to do with it. Each of us is, in part, a product of our inaccessible, microscopic history. By the way, as regards that dangerous set of genes, you’ve probably heard of them. They are summarized as the Y chromosome. If you’re a carrier, we call you a male.

    Genes are part of the story, but they’re not the whole story. We are likewise influenced by the environments in which we grow up. Substance abuse by a mother during pregnancy, maternal stress, and low birth weight all can influence how a baby will turn out as an adult. As a child grows, neglect, physical abuse, and head injury can impede mental development, as can the physical environment. (For example, the major public-health movement to eliminate lead-based paint grew out of an understanding that ingesting lead can cause brain damage, making children less intelligent and, in some cases, more impulsive and aggressive.) And every experience throughout our lives can modify genetic expression—activating certain genes or switching others off—which in turn can inaugurate new behaviors. In this way, genes and environments intertwine.

    When it comes to nature and nurture, the important point is that we choose neither one. We are each constructed from a genetic blueprint, and then born into a world of circumstances that we cannot control in our most-formative years. The complex interactions of genes and environment mean that all citizens—equal before the law—possess different perspectives, dissimilar personalities, and varied capacities for decision-making. The unique patterns of neurobiology inside each of our heads cannot qualify as choices; these are the cards we’re dealt.

    Because we did not choose the factors that affected the formation and structure of our brain, the concepts of free will and personal responsibility begin to sprout question marks. Is it meaningful to say that Alex made bad choices, even though his brain tumor was not his fault? Is it justifiable to say that the patients with frontotemporal dementia or Parkinson’s should be punished for their bad behavior?

    It is problematic to imagine yourself in the shoes of someone breaking the law and conclude, “Well, I wouldn’t have done that”—because if you weren’t exposed to in utero cocaine, lead poisoning, and physical abuse, and he was, then you and he are not directly comparable. You cannot walk a mile in his shoes.

    The legal system rests on the assumption that we are “practical reasoners,” a term of art that presumes, at bottom, the existence of free will. The idea is that we use conscious deliberation when deciding how to act—that is, in the absence of external duress, we make free decisions. This concept of the practical reasoner is intuitive but problematic.

    The existence of free will in human behavior is the subject of an ancient debate. Arguments in support of free will are typically based on direct subjective experience (“I feel like I made the decision to lift my finger just now”). But evaluating free will requires some nuance beyond our immediate intuitions.

    Consider a decision to move or speak. It feels as though free will leads you to stick out your tongue, or scrunch up your face, or call someone a name. But free will is not required to play any role in these acts. People with Tourette’s syndrome, for instance, suffer from involuntary movements and vocalizations. A typical Touretter may stick out his tongue, scrunch up his face, or call someone a name—all without choosing to do so.

    We immediately learn two things from the Tourette’s patient. First, actions can occur in the absence of free will. Second, the Tourette’s patient has no free won’t. He cannot use free will to override or control what subconscious parts of his brain have decided to do. What the lack of free will and the lack of free won’t have in common is the lack of “free.” Tourette’s syndrome provides a case in which the underlying neural machinery does its thing, and we all agree that the person is not responsible.

    This same phenomenon arises in people with a condition known as chorea, for whom actions of the hands, arms, legs, and face are involuntary, even though they certainly look voluntary: ask such a patient why she is moving her fingers up and down, and she will explain that she has no control over her hand. She cannot not do it. Similarly, some split-brain patients (who have had the two hemispheres of the brain surgically disconnected) develop alien-hand syndrome: while one hand buttons up a shirt, the other hand works to unbutton it. When one hand reaches for a pencil, the other bats it away. No matter how hard the patient tries, he cannot make his alien hand not do what it’s doing. The movements are not “his” to freely start or stop.

    Unconscious acts are not limited to unintended shouts or wayward hands; they can be surprisingly sophisticated. Consider Kenneth Parks, a 23-year-old Canadian with a wife, a five-month-old daughter, and a close relationship with his in-laws (his mother-in-law described him as a “gentle giant”). Suffering from financial difficulties, marital problems, and a gambling addiction, he made plans to go see his in-laws to talk about his troubles.

    In the wee hours of May 23, 1987, Kenneth arose from the couch on which he had fallen asleep, but he did not awaken. Sleepwalking, he climbed into his car and drove the 14 miles to his in-laws’ home. He broke in, stabbed his mother-in-law to death, and assaulted his father-in-law, who survived. Afterward, he drove himself to the police station. Once there, he said, “I think I have killed some people … My hands,” realizing for the first time that his own hands were severely cut.

    Over the next year, Kenneth’s testimony was remarkably consistent, even in the face of attempts to lead him astray: he remembered nothing of the incident. Moreover, while all parties agreed that Kenneth had undoubtedly committed the murder, they also agreed that he had no motive. His defense attorneys argued that this was a case of killing while sleepwalking, known as homicidal somnambulism.

    Although critics cried “Faker!,” sleepwalking is a verifiable phenomenon. On May 25, 1988, after lengthy consideration of electrical recordings from Kenneth’s brain, the jury concluded that his actions had indeed been involuntary, and declared him not guilty.

    As with Tourette’s sufferers, split-brain patients, and those with choreic movements, Kenneth’s case illustrates that high-level behaviors can take place in the absence of free will. Like your heartbeat, breathing, blinking, and swallowing, even your mental machinery can run on autopilot. The crux of the question is whether all of your actions are fundamentally on autopilot or whether some little bit of you is “free” to choose, independent of the rules of biology.

    This has always been the sticking point for philosophers and scientists alike. After all, there is no spot in the brain that is not densely interconnected with—and driven by—other brain parts. And that suggests that no part is independent and therefore “free.” In modern science, it is difficult to find the gap into which to slip free will—the uncaused causer—because there seems to be no part of the machinery that does not follow in a causal relationship from the other parts.

    Free will may exist (it may simply be beyond our current science), but one thing seems clear: if free will does exist, it has little room in which to operate. It can at best be a small factor riding on top of vast neural networks shaped by genes and environment. In fact, free will may end up being so small that we eventually think about bad decision-making in the same way we think about any physical process, such as diabetes or lung disease.

    The study of brains and behaviors is in the midst of a conceptual shift. Historically, clinicians and lawyers have agreed on an intuitive distinction between neurological disorders (“brain problems”) and psychiatric disorders (“mind problems”). As recently as a century ago, a common approach was to get psychiatric patients to “toughen up,” through deprivation, pleading, or torture. Not surprisingly, this approach was medically fruitless. After all, while psychiatric disorders tend to be the product of more-subtle forms of brain pathology, they, too, are based in the biological details of the brain.

    What accounts for the shift from blame to biology? Perhaps the largest driving force is the effectiveness of pharmaceutical treatments. No amount of threatening will chase away depression, but a little pill called fluoxetine often does the trick. Schizophrenic symptoms cannot be overcome by exorcism, but they can be controlled by risperidone. Mania responds not to talk or to ostracism, but to lithium. These successes, most of them introduced in the past 60 years, have underscored the idea that calling some disorders “brain problems” while consigning others to the ineffable realm of “the psychic” does not make sense. Instead, we have begun to approach mental problems in the same way we might approach a broken leg. The neuroscientist Robert Sapolsky invites us to contemplate this conceptual shift with a series of questions:
    Is a loved one, sunk in a depression so severe that she cannot function, a case of a disease whose biochemical basis is as “real” as is the biochemistry of, say, diabetes, or is she merely indulging herself? Is a child doing poorly at school because he is unmotivated and slow, or because there is a neurobiologically based learning disability? Is a friend, edging towards a serious problem with substance abuse, displaying a simple lack of discipline, or suffering from problems with the neurochemistry of reward?
    Acts cannot be understood separately from the biology of the actors—and this recognition has legal implications. Tom Bingham, Britain’s former senior law lord, once put it this way:
    In the past, the law has tended to base its approach … on a series of rather crude working assumptions: adults of competent mental capacity are free to choose whether they will act in one way or another; they are presumed to act rationally, and in what they conceive to be their own best interests; they are credited with such foresight of the consequences of their actions as reasonable people in their position could ordinarily be expected to have; they are generally taken to mean what they say.

    Whatever the merits or demerits of working assumptions such as these in the ordinary range of cases, it is evident that they do not provide a uniformly accurate guide to human behaviour.
    The more we discover about the circuitry of the brain, the more we tip away from accusations of indulgence, lack of motivation, and poor discipline—and toward the details of biology. The shift from blame to science reflects our modern understanding that our perceptions and behaviors are steered by deeply embedded neural programs.

    Imagine a spectrum of culpability. On one end, we find people like Alex the pedophile, or a patient with frontotemporal dementia who exposes himself in public. In the eyes of the judge and jury, these are people who suffered brain damage at the hands of fate and did not choose their neural situation. On the other end of the spectrum—the blameworthy side of the “fault” line—we find the common criminal, whose brain receives little study, and about whom our current technology might be able to say little anyway. The overwhelming majority of lawbreakers are on this side of the line, because they don’t have any obvious, measurable biological problems. They are simply thought of as freely choosing actors.

    Such a spectrum captures the common intuition that juries hold regarding blameworthiness. But there is a deep problem with this intuition. Technology will continue to improve, and as we grow better at measuring problems in the brain, the fault line will drift into the territory of people we currently hold fully accountable for their crimes. Problems that are now opaque will open up to examination by new techniques, and we may someday find that many types of bad behavior have a basic biological explanation—as has happened with schizophrenia, epilepsy, depression, and mania.

    Today, neuroimaging is a crude technology, unable to explain the details of individual behavior. We can detect only large-scale problems, but within the coming decades, we will be able to detect patterns at unimaginably small levels of the microcircuitry that correlate with behavioral problems. Neuroscience will be better able to say why people are predisposed to act the way they do. As we become more skilled at specifying how behavior results from the microscopic details of the brain, more defense lawyers will point to biological mitigators of guilt, and more juries will place defendants on the not-blameworthy side of the line.

    This puts us in a strange situation. After all, a just legal system cannot define culpability simply by the limitations of current technology. Expert medical testimony generally reflects only whether we yet have names and measurements for a problem, not whether a problem exists. A legal system that declares a person culpable at the beginning of a decade and not culpable at the end is one in which culpability carries no clear meaning.

    The crux of the problem is that it no longer makes sense to ask, “To what extent was it his biology, and to what extent was it him?,” because we now understand that there is no meaningful distinction between a person’s biology and his decision-making. They are inseparable.

    While our current style of punishment rests on a bedrock of personal volition and blame, our modern understanding of the brain suggests a different approach. Blameworthiness should be removed from the legal argot. It is a backward-looking concept that demands the impossible task of untangling the hopelessly complex web of genetics and environment that constructs the trajectory of a human life.

    Instead of debating culpability, we should focus on what to do, moving forward, with an accused lawbreaker. I suggest that the legal system has to become forward-looking, primarily because it can no longer hope to do otherwise. As science complicates the question of culpability, our legal and social policy will need to shift toward a different set of questions: How is a person likely to behave in the future? Are criminal actions likely to be repeated? Can this person be helped toward pro-social behavior? How can incentives be realistically structured to deter crime?

    The important change will be in the way we respond to the vast range of criminal acts. Biological explanation will not exculpate criminals; we will still remove from the streets lawbreakers who prove overaggressive, underempathetic, and poor at controlling their impulses. Consider, for example, that the majority of known serial killers were abused as children. Does this make them less blameworthy? Who cares? It’s the wrong question. The knowledge that they were abused encourages us to support social programs to prevent child abuse, but it does nothing to change the way we deal with the particular serial murderer standing in front of the bench. We still need to keep him off the streets, irrespective of his past misfortunes. The child abuse cannot serve as an excuse to let him go; the judge must keep society safe.

    Those who break social contracts need to be confined, but in this framework, the future is more important than the past. Deeper biological insight into behavior will foster a better understanding of recidivism—and this offers a basis for empirically based sentencing. Some people will need to be taken off the streets for a longer time (even a lifetime), because their likelihood of reoffense is high; others, because of differences in neural constitution, are less likely to recidivate, and so can be released sooner.

    The law is already forward-looking in some respects: consider the leniency afforded a crime of passion versus a premeditated murder. Those who commit the former are less likely to recidivate than those who commit the latter, and their sentences sensibly reflect that. Likewise, American law draws a bright line between criminal acts committed by minors and those by adults, punishing the latter more harshly. This approach may be crude, but the intuition behind it is sound: adolescents command lesser skills in decision-making and impulse control than do adults; a teenager’s brain is simply not like an adult’s brain. Lighter sentences are appropriate for those whose impulse control is likely to improve naturally as adolescence gives way to adulthood.

    Taking a more scientific approach to sentencing, case by case, could move us beyond these limited examples. For instance, important changes are happening in the sentencing of sex offenders. In the past, researchers have asked psychiatrists and parole-board members how likely specific sex offenders were to relapse when let out of prison. Both groups had experience with sex offenders, so predicting who was going straight and who was coming back seemed simple. But surprisingly, the expert guesses showed almost no correlation with the actual outcomes. The psychiatrists and parole-board members had only slightly better predictive accuracy than coin-flippers. This astounded the legal community.

    So researchers tried a more actuarial approach. They set about recording dozens of characteristics of some 23,000 released sex offenders: whether the offender had unstable employment, had been sexually abused as a child, was addicted to drugs, showed remorse, had deviant sexual interests, and so on. Researchers then tracked the offenders for an average of five years after release to see who wound up back in prison. At the end of the study, they computed which factors best explained the reoffense rates, and from these and later data they were able to build actuarial tables to be used in sentencing.

    Which factors mattered? Take, for instance, low remorse, denial of the crime, and sexual abuse as a child. You might guess that these factors would correlate with sex offenders’ recidivism. But you would be wrong: those factors offer no predictive power. How about antisocial personality disorder and failure to complete treatment? These offer somewhat more predictive power. But among the strongest predictors of recidivism are prior sexual offenses and sexual interest in children. When you compare the predictive power of the actuarial approach with that of the parole boards and psychiatrists, there is no contest: numbers beat intuition. In courtrooms across the nation, these actuarial tests are now used in presentencing to modulate the length of prison terms.
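
    As a purely illustrative sketch of how an actuarial instrument of this kind is applied, consider the short Python snippet below. The factor names, weights, and cut-offs are invented for illustration and are not taken from any real instrument; real tables derive their weights from recorded outcomes of released offenders, as in the study described above.

    Code:
    # Hypothetical actuarial table: factors and weights are invented for
    # illustration; real instruments fit them to historical reoffense data.
    WEIGHTS = {
        "prior_sexual_offenses": 3,        # strong predictor in the studies above
        "sexual_interest_in_children": 3,  # strong predictor
        "antisocial_personality": 1,       # somewhat predictive
        "failed_treatment": 1,             # somewhat predictive
        "low_remorse": 0,                  # essentially no predictive power
        "denial_of_crime": 0,              # essentially no predictive power
    }

    def risk_score(offender):
        """Add up the weights of the factors present for this offender."""
        return sum(w for factor, w in WEIGHTS.items() if offender.get(factor))

    def risk_band(score):
        """Map the raw score onto a coarse band (cut-offs invented too)."""
        if score >= 5:
            return "high"
        if score >= 2:
            return "moderate"
        return "low"

    offender = {"prior_sexual_offenses": True, "antisocial_personality": True}
    score = risk_score(offender)
    print(score, risk_band(score))   # -> 4 moderate

    The scoring step really is this mechanical; the statistical work lies in estimating the weights from outcome data, and the resulting number then informs the presentencing decision rather than replacing it.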

    We will never know with certainty what someone will do upon release from prison, because real life is complicated. But greater predictive power is hidden in the numbers than people generally expect. Statistically based sentencing is imperfect, but it nonetheless allows evidence to trump folk intuition, and it offers customization in place of the blunt guidelines that the legal system typically employs. The current actuarial approaches do not require a deep understanding of genes or brain chemistry, but as we introduce more science into these measures—for example, with neuroimaging studies—the predictive power will only improve. (To make such a system immune to government abuse, the data and equations that compose the sentencing guidelines must be transparent and available online for anyone to verify.)

    Beyond customized sentencing, a forward-thinking legal system informed by scientific insights into the brain will enable us to stop treating prison as a one-size-fits-all solution. To be clear, I’m not opposed to incarceration, and its purpose is not limited to the removal of dangerous people from the streets. The prospect of incarceration deters many crimes, and time actually spent in prison can steer some people away from further criminal acts upon their release. But that works only for those whose brains function normally. The problem is that prisons have become our de facto mental-health-care institutions—and inflicting punishment on the mentally ill usually has little influence on their future behavior. An encouraging trend is the establishment of mental-health courts around the nation: through such courts, people with mental illnesses can be helped while confined in a tailored environment. Cities such as Richmond, Virginia, are moving in this direction, for reasons of justice as well as cost-effectiveness. Sheriff C. T. Woody, who estimates that nearly 20 percent of Richmond’s prisoners are mentally ill, told CBS News, “The jail isn’t a place for them. They should be in a mental-health facility.” Similarly, many jurisdictions are opening drug courts and developing alternative sentences; they have realized that prisons are not as useful for solving addictions as are meaningful drug-rehabilitation programs.

    A forward-thinking legal system will also parlay biological understanding into customized rehabilitation, viewing criminal behavior the way we understand other medical conditions such as epilepsy, schizophrenia, and depression—conditions that now allow the seeking and giving of help. These and other brain disorders find themselves on the not-blameworthy side of the fault line, where they are now recognized as biological, not demonic, issues.

    Many people recognize the long-term cost-effectiveness of rehabilitating offenders instead of packing them into overcrowded prisons. The challenge has been the dearth of new ideas about how to rehabilitate them. A better understanding of the brain offers new ideas. For example, poor impulse control is characteristic of many prisoners. These people generally can express the difference between right and wrong actions, and they understand the disadvantages of punishment—but they are handicapped by poor control of their impulses. Whether as a result of anger or temptation, their actions override reasoned consideration of the future.

    If it seems difficult to empathize with people who have poor impulse control, just think of all the things you succumb to against your better judgment. Alcohol? Chocolate cake? Television? It’s not that we don’t know what’s best for us, it’s simply that the frontal-lobe circuits representing long-term considerations can’t always win against short-term desire when temptation is in front of us.

    With this understanding in mind, we can modify the justice system in several ways. One approach, advocated by Mark A. R. Kleiman, a professor of public policy at UCLA, is to ramp up the certainty and swiftness of punishment—for instance, by requiring drug offenders to undergo twice-weekly drug testing, with automatic, immediate consequences for failure—thereby not relying on distant abstraction alone. Similarly, economists have suggested that the drop in crime since the early 1990s has been due, in part, to the increased presence of police on the streets: their visibility shores up support for the parts of the brain that weigh long-term consequences.

    We may be on the cusp of finding new rehabilitative strategies as well, affording people better control of their behavior, even in the absence of external authority. To help a citizen reintegrate into society, the ethical goal is to change him as little as possible while bringing his behavior into line with society’s needs. My colleagues and I are proposing a new approach, one that grows from the understanding that the brain operates like a team of rivals, with different neural populations competing to control the single output channel of behavior. Because it’s a competition, the outcome can be tipped. I call the approach “the prefrontal workout.”

    The basic idea is to give the frontal lobes practice in squelching the short-term brain circuits. To this end, my colleagues Stephen LaConte and Pearl Chiu have begun providing real-time feedback to people during brain scanning. Imagine that you’d like to quit smoking cigarettes. In this experiment, you look at pictures of cigarettes during brain imaging, and the experimenters measure which regions of your brain are involved in the craving. Then they show you the activity in those networks, represented by a vertical bar on a computer screen, while you look at more cigarette pictures. The bar acts as a thermometer for your craving: if your craving networks are revving high, the bar is high; if you’re suppressing your craving, the bar is low. Your job is to make the bar go down. Perhaps you have insight into what you’re doing to resist the craving; perhaps the mechanism is inaccessible. In any case, you try out different mental avenues until the bar begins to slowly sink. When it goes all the way down, that means you’ve successfully recruited frontal circuitry to squelch the activity in the networks involved in impulsive craving. The goal is for the long term to trump the short term. Still looking at pictures of cigarettes, you practice making the bar go down over and over, until you’ve strengthened those frontal circuits. By this method, you’re able to visualize the activity in the parts of your brain that need modulation, and you can witness the effects of different mental approaches you might take.

    If this sounds like biofeedback from the 1970s, it is—but this time with vastly more sophistication, monitoring specific networks inside the head rather than a single electrode on the skin. This research is just beginning, so the method’s efficacy is not yet known—but if it works well, it will be a game changer. We will be able to take it to the incarcerated population, especially those approaching release, to try to help them avoid coming back through the revolving prison doors.
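
    The closed loop itself is simple to picture. The toy Python simulation below is not the real experiment - there is no scanner, and the "craving" signal and its response to suppression are made up - but it shows the shape of the feedback: measure, display a bar, let the subject try a strategy, and repeat until the bar stays below a threshold.

    Code:
    import random

    def sample_craving(suppression):
        """Fake 'measurement' of craving-network activity in the range 0..1."""
        noise = random.uniform(-0.05, 0.05)
        return max(0.0, min(1.0, 0.8 * (1.0 - suppression) + noise))

    def show_bar(level, width=30):
        """Draw the 'thermometer' bar the subject watches during the scan."""
        filled = int(level * width)
        print("[" + "#" * filled + "-" * (width - filled) + "] %.2f" % level)

    def prefrontal_workout(trials=15, threshold=0.3):
        suppression = 0.0                # the subject starts with no working strategy
        for t in range(1, trials + 1):
            level = sample_craving(suppression)
            show_bar(level)
            if level <= threshold:
                print("Bar held below threshold after", t, "trials")
                return
            suppression = min(1.0, suppression + 0.1)   # try a different mental avenue
        print("Threshold not reached in", trials, "trials")

    if __name__ == "__main__":
        random.seed(0)
        prefrontal_workout()

    In the real protocol the signal would come from real-time analysis of the imaging data rather than from a random-number generator; the point here is only the repeated measure-display-adjust cycle.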

    This prefrontal workout is designed to better balance the debate between the long- and short-term parties of the brain, giving the option of reflection before action to those who lack it. And really, that’s all maturation is. The main difference between teenage and adult brains is the development of the frontal lobes. The human prefrontal cortex does not fully develop until the early 20s, and this fact underlies the impulsive behavior of teenagers. The frontal lobes are sometimes called the organ of socialization, because becoming socialized largely involves developing the circuitry to squelch our first impulses.

    This explains why damage to the frontal lobes unmasks unsocialized behavior that we would never have thought was hidden inside us. Recall the patients with frontotemporal dementia who shoplift, expose themselves, and burst into song at inappropriate times. The networks for those behaviors have been lurking under the surface all along, but they’ve been masked by normally functioning frontal lobes. The same sort of unmasking happens in people who go out and get rip-roaring drunk on a Saturday night: they’re disinhibiting normal frontal-lobe function and letting more-impulsive networks climb onto the main stage. After training at the prefrontal gym, a person might still crave a cigarette, but he’ll know how to beat the craving instead of letting it win. It’s not that we don’t want to enjoy our impulsive thoughts (Mmm, cake), it’s merely that we want to endow the frontal cortex with some control over whether we act upon them (I’ll pass). Similarly, if a person thinks about committing a criminal act, that’s permissible as long as he doesn’t take action.

    For the pedophile, we cannot hope to control whether he is attracted to children. That he never acts on the attraction may be the best we can hope for, especially as a society that respects individual rights and freedom of thought. Social policy can hope only to prevent impulsive thoughts from tipping into behavior without reflection. The goal is to give more control to the neural populations that care about long-term consequences—to inhibit impulsivity, to encourage reflection. If a person thinks about long-term consequences and still decides to move forward with an illegal act, then we’ll respond accordingly. The prefrontal workout leaves the brain intact—no drugs or surgery—and uses the natural mechanisms of brain plasticity to help the brain help itself. It’s a tune-up rather than a product recall.

    We have hope that this approach represents the correct model: it is grounded simultaneously in biology and in libertarian ethics, allowing a person to help himself by improving his long-term decision-making. Like any scientific attempt, it could fail for any number of unforeseen reasons. But at least we have reached a point where we can develop new ideas rather than assuming that repeated incarceration is the single practical solution for deterring crime.

    Along any axis that we use to measure human beings, we discover a wide-ranging distribution, whether in empathy, intelligence, impulse control, or aggression. People are not created equal. Although this variability is often imagined to be best swept under the rug, it is in fact the engine of evolution. In each generation, nature tries out as many varieties as it can produce, along all available dimensions.

    Variation gives rise to lushly diverse societies—but it serves as a source of trouble for the legal system, which is largely built on the premise that humans are all equal before the law. This myth of human equality suggests that people are equally capable of controlling impulses, making decisions, and comprehending consequences. While admirable in spirit, the notion of neural equality is simply not true.

    As brain science improves, we will better understand that people exist along continua of capabilities, rather than in simplistic categories. And we will be better able to tailor sentencing and rehabilitation for the individual, rather than maintain the pretense that all brains respond identically to complex challenges and that all people therefore deserve the same punishments. Some people wonder whether it’s unfair to take a scientific approach to sentencing—after all, where’s the humanity in that? But what’s the alternative? As it stands now, ugly people receive longer sentences than attractive people; psychiatrists have no capacity to guess which sex offenders will reoffend; and our prisons are overcrowded with drug addicts and the mentally ill, both of whom could be better helped by rehabilitation. So is current sentencing really superior to a scientifically informed approach?

    Neuroscience is beginning to touch on questions that were once only in the domain of philosophers and psychologists, questions about how people make decisions and the degree to which those decisions are truly “free.” These are not idle questions. Ultimately, they will shape the future of legal theory and create a more biologically informed jurisprudence.

  21. #21 (sygeek)

    Default Re: The Geeks Daily


    Spoiler:
    I've noted before that we are witnessing a classic patent thicket in the realm of smartphones, with everyone and his or her dog suing everyone else (and their dog). But without doubt one of the more cynical applications of intellectual monopolies is Oracle's suit against Google. This smacked entirely of the lovely Larry Ellison spotting a chance to extract some money without needing to do much other than point his legal department in the right direction.

    If that sounds harsh, take a read of this document from the case that turned up recently. It's Google's response to an Oracle expert witness's estimate of how much the former should be paying the latter:

    Cockburn opines that Google, if found to infringe, would owe Oracle between 1.4 and 6.1 billion dollars -- a breathtaking figure that is out of proportion to any meaningful measure of the intellectual property at issue. Even the low end of Cockburn’s range is over 10 times the amount that Sun Microsystems, Inc. made each year for the entirety of its Java licensing program and 20 times what Sun made for Java-based mobile licensing. Cockburn’s theory is neatly tailored to enable Oracle to finance nearly all of its multi-billion dollar acquisition of Sun, even though the asserted patents and copyrights accounted for only a fraction of the value of Sun.
    It does, indeed, sound rather as if Ellison is trying to get his entire purchase price back in a single swoop.

    Now, I may be somewhat biased against this action, since it is causing all sorts of problems for the Linux-based Android, and I am certainly not a lawyer, but it does seem to me that the points of Google's lawyers are pretty spot on. For example:

    First, Cockburn has no basis for including all of Google’s revenue from Android phones into the base of his royalty calculation. The accused product here is the Android software platform, which Google does not sell (and Google does not receive any payment, fee, royalty, or other remuneration for its contributions to Android). Cockburn seems to be arguing that Google’s advertising revenue from, e.g., mobile searches on Android devices should be included in the royalty base as a convoyed sale, though he never articulates or supports this justification and ignores the applicable principles under Uniloc and other cases. In fact, the value of the Android software and of Google’s ads are entirely separate: the software allows for phones to function, whether or not the user is viewing ads; and Google’s ads are viewable on any software and are not uniquely enabled by Android. Cockburn’s analysis effectively seeks disgorgement of Google’s profits even though “[t]he determination of a reasonable royalty . . . is based not on the infringer’s profit, but on the royalty to which a willing licensor and a willing licensee would have agreed at the time the infringement began.”
    Oracle's expert seems to be adopting the old kitchen-sink approach, throwing in everything he can think of.
    Second, Cockburn includes Oracle’s “lost profits and opportunities” in his purported royalty base. This is an obvious ploy to avoid the more demanding test for recovery of lost profits that Oracle cannot meet. ... Most audaciously, Cockburn tries to import into his royalty base the alleged harm Sun and Oracle would have suffered from so-called “fragmentation” of Java into myriad competing standards, opining that Oracle’s damages from the Android software includes theoretical downstream harm to a wholly different Oracle product. This is not a cognizable patent damages theory, and is unsupported by any precedent or analytical reasoning.
    Even assuming that Google has willfully infringed on all the patents that Oracle claims - and that has still to be proved - it's hard to see how Oracle has really lost “opportunities” as a result. If anything, the huge success of Android, based as it is on Java, is likely to increase the demand for Java programmers, and generally make the entire Java ecosystem more valuable - greatly to Oracle's benefit.

    So, irrespective of any royalties that may or may not be due, Oracle has in any case already gained from Google's action, and will continue to benefit from the rise of Android as the leading smartphone operating system. Moreover, as Android is used in other areas - tablets, set-top boxes, TVs etc. - Oracle will again benefit from the vastly increased size of the Java ecosystem over which it has substantial control.

    Of course, I am totally unsurprised to find Oracle doing this. But to be fair to Larry Ellison and his company, this isn't just about Oracle; it also has to do with the inherent problems of software patents, which encourage this kind of behavior (not least by rewarding it handsomely, sometimes).

    Lest you think this is just my jaundiced viewpoint, let's turn to a recent paper from James Bessen, who is a Fellow of the Berkman Center for Internet and Society at Harvard, and Lecturer at the Boston University School of Law. I've mentioned Bessen several times in this blog, in connection with his book “Patent Failure”, which is a look at the US patent system in general. Here's the background to the current paper, entitled “A Generation of Software Patents”:

    In 1994, the Court of Appeals for the Federal Circuit decided in In re Alappat that an invention that had a novel software algorithm combined with a trivial physical step was eligible for patent protection. This ruling opened the way for a large scale increase in patenting of software. Alappat and his fellow inventors were granted patent 5,440,676, the patent at issue in the appeal, in 1995. That patent expired in 2008. In other words, we have now experienced a full generation of software patents.

    The Alappat decision was controversial, not least because the software industry had been highly innovative without patent protection. In fact, there had long been industry opposition to patenting software. Since the 1960s, computer companies opposed patents on software, first, in their input to a report by a presidential commission in 1966 and then in amici briefs to the Supreme Court in Gottschalk v. Benson in 1972 (they later changed their views). Major software firms opposed software patents through the mid-1990s. Perhaps more surprising, software developers themselves have mostly been opposed to patents on software.
    That's a useful reminder that the software industry was innovative before there were software patents, and didn't want them introduced. The key question that Bessen addresses in his paper is a good one: how have things panned out in the 15 or so years since software patents have been granted in the US?

    Here's what he says happened:

    To summarize the literature, in the 1990s, the number of software patents granted grew rapidly, but these were acquired primarily by firms outside the software industry and perhaps for reasons other than to protect innovations. Relatively few software firms obtained patents in the 1990s and so, it seems that most software firms did not benefit from software patents. More recently, the majority of venture-backed startups do seem to have obtained patents. The reasons for this, however, are not entirely clear and so it is hard to know whether these firms realized substantial positive incentives for investing in innovation from patents. On the other hand, software patents are distinctly implicated in the tripling of patent litigation since the early 1990s. This litigation implies that software patents imposed significant disincentives for investment in R&D for most industries including software.
    It is hard to conclude from the above findings that software patents significantly increased R&D incentives in the software industry.

    And yet this is one of the reasons that is often given to justify the existence of software patents despite their manifest problems.

    Bessen then goes on to look at how things have changed more recently:

    most software firms still do not patent, although the percentage has increased. And most software patents go to firms outside the software industry, despite the industry’s substantial role in software innovation. While the share of patents going to the software industry has increased, that increase is largely the result of patenting by a few large firms.
    Again, this gives the lie to the claim that software patents are crucial for smaller companies in order to protect their investments; instead, the evidence is that large companies are simply building up bigger and bigger patent portfolios, largely for defensive purposes, as Bessen notes in his concluding remarks:

    Has the patent system adapted to software patents so as to overcome initial problems of too little benefit for the software industry and too much litigation? The evidence makes it hard to conclude that these problems have been resolved. While more software firms now obtain patents, most still do not, hence most software firms do not directly benefit from software patents. Patenting in the software industry is largely the activity of a few large firms. These firms realize benefits from patents, but the incentives that patents provide them might well be limited because these firms likely have other ways of earning returns from their innovations, such as network effects and complementary services. Moreover, anecdotal evidence suggests that some of these firms patent for defensive reasons, rather than to realize rents on their innovations: Adobe, Oracle and others announced that patents were not necessary in order to promote innovation at USPTO hearings in 1994, yet they now patent heavily.

    On the other hand, the number of lawsuits involving software patents has more than tripled since 1999. This represents a substantial increase in litigation risk and hence a disincentive to invest in innovation. The silver lining is that the probability that a software patent is in a lawsuit has stopped increasing and might have begun a declining trend. This occurred perhaps in response to a new attitude in the courts and several Supreme Court decisions that have reined in some of the worst excesses related to software patents.
    These comments come from an academic who certainly has no animus against patents. They hardly represent a ringing endorsement, but emphasize, rather, that very little is gained by granting such intellectual monopolies. Careful academic work like this, taken together with the extraordinary circus we are witnessing in the smartphone arena, strengthens the case for calling a halt now to the failed experiment of software patents.

  22. #22 (sygeek)

    Default The PicoLisp Ticker


    Spoiler:
    Around the end of May, I was playing with an algorithm I had received from Bengt Grahn many years ago. It is a small program - it was even part of the PicoLisp distribution ("misc/crap.l") for many years - which, when given an arbitrary sample text in some language, produces an endless stream of pseudo-text that strongly resembles that language.
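
    The internals of that algorithm aren't shown in the post, so purely as an illustration of the general idea, here is a minimal word-level Markov-chain text generator in Python. The function names learn and crap only echo the PicoLisp ones, "sample.txt" is a placeholder for any plain-text sample, and the real "misc/crap.l" may well work quite differently.

    Code:
    import random
    from collections import defaultdict

    def learn(text, order=2):
        """Build a table mapping each pair of words to the words seen after it."""
        words = text.split()
        table = defaultdict(list)
        for i in range(len(words) - order):
            table[tuple(words[i:i + order])].append(words[i + order])
        return table

    def crap(table, n=50):
        """Emit roughly n words of pseudo-text resembling the training sample."""
        key = random.choice(list(table))
        out = list(key)
        while len(out) < n:
            followers = table.get(key)
            if not followers:                    # dead end: restart somewhere else
                key = random.choice(list(table))
                continue
            out.append(random.choice(followers))
            key = tuple(out[-2:])                # assumes order=2
        return " ".join(out)

    if __name__ == "__main__":
        with open("sample.txt") as f:            # any plain-text sample file
            print(crap(learn(f.read())))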

    It was fun, so I decided to set up a PicoLisp "Ticker" page, producing a stream of "news": http://ticker.picolisp.com

    The source for the server is simple:
    Code:
    (allowed ()
       *Page "!start" "@lib.css" "ticker.zip" )   # items clients may access

    (load "@lib/http.l" "@lib/xhtml.l")   # HTTP server and HTML helpers
    (load "misc/crap.l")                  # the pseudo-text generator

    (one *Page)                           # page counter starts at 1

    (de start ()
       (seed (time))                      # fresh random sequence per request
       (html 0 "PicoLisp Ticker" "@lib.css" NIL
          (<h2> NIL "Page " *Page)
          (<div> 'em50
             (do 3 (<p> NIL (crap 4)))    # three paragraphs of generated text
             (<spread>
                (<href> "Sources" "ticker.zip")
                (<this> '*Page (inc *Page) "Next page") ) ) ) )   # link to page N+1

    (de main ()
       (learn "misc/ticker.txt") )        # train the generator on the sample text

    (de go ()
       (server 21000 "!start") )          # serve HTTP on port 21000
    The sample text for the learning phase, "misc/ticker.txt", is a plain text version of the PicoLisp FAQ. The complete source, including the text generator, can be downloaded via the "Sources" link as "ticker.zip".

    Now look at the "Next page" link, appearing on the bottom right of the page. It always points to a page with a number one greater than the current page, providing an unlimited supply of ticker pages.

    I went ahead, and installed and started the server. To get some logging, I inserted the line
    Code:
       (out 2 (prinl (stamp) " {" *Url "} Page " *Page " [" *Adr "] " *Agent))

    at the beginning of the 'start' function.

    On June 18th I announced it on Twitter, and watched the log files. Immediately, within one or two seconds (!), Googlebot accessed it:
    Code:
       2011-06-18 11:22:04  Page 1  [66.249.71.139]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
    Wow, I thought, that was fast! Don't know if this was just by chance, or if Google always has such a close watch on Twitter.

    Anyway, I was curious about what the search engine would do with such nonsense text, and how it would handle the infinite number of pages. During the next seconds and minutes, other bots and possibly human users accessed the ticker:
    Code:
       2011-06-18 11:22:08  Page 1  [65.52.23.76]  Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)
       2011-06-18 11:22:10  Page 1  [65.52.4.133]  Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)
       2011-06-18 11:22:20  Page 1  [50.16.239.111]  Mozilla/5.0 (compatible; Birubot/1.0) Gecko/2009032608 Firefox/3.0.8
       2011-06-18 11:29:52  Page 1  [174.129.42.87]  Python-urllib/2.6
       2011-06-18 11:30:34  Page 1  [174.129.42.87]  Python-urllib/2.6
       2011-06-18 11:33:54  Page 1  [89.151.99.92]  Mozilla/5.0 (compatible; MSIE 6.0b; Windows NT 5.0) Gecko/2009011913 Firefox/3.0.6 TweetmemeBot
       2011-06-18 11:33:54  Page 1  [89.151.99.92]  Mozilla/5.0 (compatible; MSIE 6.0b; Windows NT 5.0) Gecko/2009011913 Firefox/3.0.6 TweetmemeBot
       2011-06-18 13:47:21  Page 1  [190.175.174.220]  Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.6.17-1.fc14 Firefox/3.6.17
       2011-06-18 13:49:13  Page 2  [190.175.174.220]  Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.6.17-1.fc14 Firefox/3.6.17
       2011-06-18 13:49:21  Page 3  [190.175.174.220]  Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.6.17-1.fc14 Firefox/3.6.17
       2011-06-18 19:43:36  Page 1  [24.167.162.218]  Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
       2011-06-18 19:43:54  Page 2  [24.167.162.218]  Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
       2011-06-18 19:44:11  Page 3  [24.167.162.218]  Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
       2011-06-18 19:44:13  Page 4  [24.167.162.218]  Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
       2011-06-18 19:44:16  Page 5  [24.167.162.218]  Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
       2011-06-18 19:44:18  Page 6  [24.167.162.218]  Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
       2011-06-18 19:44:20  Page 7  [24.167.162.218]  Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30
    Mr. Google came back the following day:
    Code:
       2011-06-19 00:25:57  Page 2  [66.249.67.197]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-06-19 01:03:13  Page 3  [66.249.67.197]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-06-19 01:35:57  Page 4  [66.249.67.197]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-06-19 02:39:19  Page 5  [66.249.67.197]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-06-19 03:43:39  Page 6  [66.249.67.197]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-06-19 04:17:02  Page 7  [66.249.67.197]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
    In between (not shown here) were also some accesses, probably by non-bots, who usually gave up after a few pages.

    Mr. Google, however, assiduously went through "all" pages. The page numbers increased sequentially, but he also re-visited page 1, going up again. Now there were several indexing threads, and by June 23rd the first one exceeded page 150.

    I felt sorry for poor Googlebot, and installed a "robots.txt" the same day, disallowing the ticker page for robots. I could see that several other bots fetched "robots.txt". But not Google. Instead, it kept following the pages of the ticker.
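
    The post doesn't reproduce the file, but a "robots.txt" that disallows everything for every crawler - which is what the log entry below reports - needs only two lines, along these lines:

    Code:
       User-agent: *
       Disallow: /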

    Then, finally, on July 5th, Googlebot looked at "robots.txt":
    Code:
       "robots.txt" 2011-07-05 07:03:05 Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) ticker.picolisp.com
       "robots.txt: disallowed all"
    The indexing, however, went on. Excerpt:
    Code:
       2011-07-05 04:27:46 {!start} Page 500  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-07-05 04:58:50 {!start} Page 501  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-07-05 05:30:24 {!start} Page 502  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-07-05 06:02:10 {!start} Page 503  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-07-05 06:32:14 {!start} Page 504  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-07-05 07:02:41 {!start} Page 505  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-07-05 08:02:31 {!start} Page 506  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-07-05 08:45:52 {!start} Page 507  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-07-05 09:20:06 {!start} Page 508  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
       2011-07-05 09:51:49 {!start} Page 509  [66.249.71.203]  Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
    Strange. I would have expected the indexing to stop after Page 505.

    In fact, all other robots seem to obey "robots.txt". Mr. Google, however, even started a new indexing thread five days later:
    PHP Code:
        2011-07-10 02:22:52 {!startPage 1  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
    I should feel flattered, if the PicoLisp news ticker is so interesting!

    How will that go on? As of today, we have reached
    PHP Code:
        2011-07-15 09:42:36 {!startPage 879  [66.249.71.203] Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
    I'll stay tuned ...
    [B]AMD FX 6300 | Asus M5A97 Evo R2.0 | Sapphire HD 7870 GHz Edition | Dell S2204L | WD Blue 1TB | Kingston HyperX Blu 4GB | Seasonic S12ii 520W | Samsung 24x DVDRW | Corsair 200R | Dell Keyboard | Logitech MX518 | Edifier X600 | Microtek 1kVA UPS[/B]

  23. #23
    Your Ad here nisargshah95's Avatar
    Join Date
    Feb 2010
    Location
    Goa, India
    Posts
    394

    Thumbs up Re: The Geeks Daily

    Articles worth a read, buddy. Keep it up...
    "The nature of the Internet and the importance of net neutrality is that innovation can come from everyone."
    System: HP 15-r022TX { Intel i5 4210U || 8GB RAM || NVIDIA GeForce 820M || Ubuntu 14.04 Trusty Tahr LTS 64-bit (Primary) + Windows 8.1 Pro 64-bit || 1TB HDD + 1TB Seagate external HDD }
    Twitter - https://twitter.com/nisargshah95

  24. #24
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    By Guido van Rossum
    Spoiler:
    This morning I had a chat with the students at Google's CAPE program. Since I wrote up what I wanted to say I figured I might as well blog it here. Warning: this is pretty unedited (or else it would never be published). I'm posting it in my "personal" blog instead of the "Python history" blog because it mostly touches on my career before Python. Here goes.

    Have you ever written a computer program? Using which language?
    • HTML
    • Javascript
    • Java
    • Python
    • C++
    • C
    • Other - which?

    [It turned out the students had used a mixture of Scratch, App Inventor, and Processing. A few students had also used Python or Java.]

    Have you ever invented a programming language?

    If you have programmed, you know some of the problems with programming languages. Have you ever thought about why programming isn't easier? Would it help if you could just talk to your computer? Have you tried speech recognition software? I have. It doesn't work very well yet.

    How do you think programmers will write software 10 years from now? Or 30? 50?

    Do you know how programmers worked 30 years ago?

    I do.

    I was born in Holland in 1956. Things were different.

    I didn't know what a computer was until I was 18. However, I tinkered with electronics. I built a digital clock. My dream was to build my own calculator.

    Then I went to university in Amsterdam to study mathematics and they had a computer that was free for students to use! (Not unlimited though. We were allowed to use something like one second of CPU time per day.)

    I had to learn how to use punch cards. There were machines to create them that had a keyboard. The machines were as big as a desk and made a terrible noise when you hit a key: a small hole was punched in the card with a huge force and great precision. If you made a mistake you had to start over.

    I didn't get to see the actual computer for several more years. What we had in the basement of the math department was just an end point for a network that ran across the city. There were card readers and line printers and operators who controlled them. But the actual computer was elsewhere.

    It was a huge, busy place, where programmers got together and discussed their problems, and I loved to hang out there. In fact, I loved it so much I nearly dropped out of university. But eventually I graduated.

    Aside: Punch cards weren't invented for computers; they were invented for sorting census data and the like before WW2. [UPDATE: actually much earlier, though the IBM 80-column format I used did originate in 1928.] There were large mechanical machines for sorting stacks of cards. But punch cards are the reason that some software still limits you (or just defaults) to 80 characters per line.

    My first program was a kind of "hello world" program written in Algol-60. That language was only popular in Europe, I believe. After another student gave me a few hints I learned the rest of the language straight from the official definition of the language, the "Revised Report on the Algorithmic Language Algol-60." That was not an easy report to read! The language was a bit cumbersome, but I didn't mind, I learned the basics of programming anyway: variables, expressions, functions, input/output.

    Then a professor mentioned that there was a new programming language named Pascal. There was a Pascal compiler on our mainframe so I decided to learn it. I borrowed the book on Pascal from the departmental library (there was only one book, and only one copy, and I couldn't afford my own). After skimming it, I decided that the only thing I really needed were the "railroad diagrams" at the end of the book that summarized the language's syntax. I made photocopies of those and returned the book to the library.

    Aside: Pascal really had only one new feature compared to Algol-60, pointers. These baffled me for the longest time. Eventually I learned assembly programming, which explained the memory model of a computer for the first time. I realized that a pointer was just an address. Then I finally understood them.
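
    A small illustration of that "a pointer is just an address" idea, using Python's standard ctypes module (an illustrative sketch, not from the original post):
    Code:
        import ctypes

        x = ctypes.c_int(42)
        p = ctypes.pointer(x)        # a typed pointer object wrapping the address of x
        addr = ctypes.addressof(x)   # ...and underneath it is just an integer address

        print(hex(addr))             # e.g. 0x7f...  - the address
        print(p.contents.value)      # 42            - dereferencing gives the value back

        p.contents.value = 99        # writing through the pointer...
        print(x.value)               # 99            - ...changes the original object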

    I guess this is how I got interested in programming languages. I learned the other languages of the day along the way: Fortran, Lisp, Basic, Cobol. With all this knowledge of programming, I managed to get a plum part-time job at the data center maintaining the mainframe's operating system. It was the most coveted job among programmers. It gave me access to unlimited computer time, the fastest terminals (still 80 x 24 though), and most important, a stimulating environment where I got to learn from other programmers. I also got access to a Unix system, learned C and shell programming, and at some point we had an Apple II (mostly remembered for hours of playing space invaders). I even got to implement a new (but very crummy) programming language!

    All this time, programming was one of the most fun things in my life. I thought of ideas for new programs to write all the time. But interestingly, I wasn't very interested in using computers for practical stuff! Nor even to solve mathematical puzzles (except that I invented a clever way of programming Conway's Game of Life that came from my understanding of using logic gates to build a binary addition circuit).

    What I liked most, though, was to write programs to make the life of programmers better. One of my early creations was a text editor that was better than the system's standard text editor (which wasn't very hard). I also wrote an archive program that helped conserve disk space; it was so popular and useful that the data center offered it to all its customers. I liked sharing programs, and my own principles for sharing were very similar to what later would become Open Source (except I didn't care about licenses -- still don't).

    As a term project I wrote a static analyzer for Pascal programs with another student. Looking back I think it was a horrible program, but our professor thought it was brilliant and we both got an A+. That's where I learned about parsers and such, and that you can do more with a parser than write a compiler.

    I combined pleasure with a good cause when I helped a small left-wing political party in Holland automate their membership database. This was until then maintained by hand as a collection of metal plates into which letters were stamped using an antiquated machine (not unlike a steam hammer). In the end the project was not a great success, but my contributions (including an emulation of Unix's venerable "ed" editor program written in Cobol) piqued the attention of another volunteer, whose day job was as a computer science researcher at the Mathematical Center. (Now CWI.)

    This was Lambert Meertens. It so happened that he was designing his own programming language, named B (later ABC), and when I graduated he offered me a job on his team of programmers who were implementing an interpreter for the language (what we would now call a virtual machine).

    The rest I have written up earlier in my Python history blog.

  25. #25
    Whompy Whomperson Nipun's Avatar
    Join Date
    Mar 2011
    Location
    New Delhi
    Posts
    1,467

    Default Re: The Geeks Daily

    Nice......
    The beauty of Indian roads is that one needs to look on both sides while crossing a one way road!
    [B]▒▒ [URL="http://adf.ly/2148556/tf2comic"] ¯TF2 COMIC MAKER!_ ▒▒[/URL][/B]
    [url="http://adf.ly/2148556/roadsense"]Drive sensibly, please![/url]
    [URL="http://adf.ly/2148556/nocivicsense"]Educated Illiterates[/URL]

  26. #26
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    When patents attack Android
    By David Drummond
    Spoiler:
    I have worked in the tech sector for over two decades. Microsoft and Apple have always been at each other’s throats, so when they get into bed together you have to start wondering what's going on. Here is what’s happening:

    Android is on fire. More than 550,000 Android devices are activated every day, through a network of 39 manufacturers and 231 carriers. Android and other platforms are competing hard against each other, and that’s yielding cool new devices and amazing mobile apps for consumers.

    But Android’s success has yielded something else: a hostile, organized campaign against Android by Microsoft, Oracle, Apple and other companies, waged through bogus patents.

    They’re doing this by banding together to acquire Novell’s old patents (the “CPTN” group including Microsoft and Apple) and Nortel’s old patents (the “Rockstar” group including Microsoft and Apple), to make sure Google didn’t get them; seeking $15 licensing fees for every Android device; attempting to make it more expensive for phone manufacturers to license Android (which we provide free of charge) than Windows Mobile; and even suing Barnes & Noble, HTC, Motorola, and Samsung. Patents were meant to encourage innovation, but lately they are being used as a weapon to stop it.

    A smartphone might involve as many as 250,000 (largely questionable) patent claims, and our competitors want to impose a “tax” for these dubious patents that makes Android devices more expensive for consumers. They want to make it harder for manufacturers to sell Android devices. Instead of competing by building new features or devices, they are fighting through litigation.

    This anti-competitive strategy is also escalating the cost of patents way beyond what they’re really worth. Microsoft and Apple’s winning $4.5 billion for Nortel’s patent portfolio was nearly five times larger than the pre-auction estimate of $1 billion. Fortunately, the law frowns on the accumulation of dubious patents for anti-competitive means — which means these deals are likely to draw regulatory scrutiny, and this patent bubble will pop.

    We’re not naive; technology is a tough and ever-changing industry and we work very hard to stay focused on our own business and make better products. But in this instance we thought it was important to speak out and make it clear that we’re determined to preserve Android as a competitive choice for consumers, by stopping those who are trying to strangle it.

    We’re looking intensely at a number of ways to do that. We’re encouraged that the Department of Justice forced the group I mentioned earlier to license the former Novell patents on fair terms, and that it’s looking into whether Microsoft and Apple acquired the Nortel patents for anti-competitive means. We’re also looking at other ways to reduce the anti-competitive threats against Android by strengthening our own patent portfolio. Unless we act, consumers could face rising costs for Android devices — and fewer choices for their next phone.

  27. #27
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    By Pablo Villalba
    Spoiler:
    My Romanian friend

    Romanian and Spanish are quite different languages. I'm Spanish, and Romanian is almost impossible for me to understand, since I have never studied it.

    Some years ago I met a group of Romanians who were in Spain for the first time. I was surprised to see that they could understand Spanish quite well, even if they could barely speak it. They had never studied Spanish before, and I was very puzzled. How could it be that they understood me while I couldn't understand them?

    My friend explained: as a child, she had spent long hours watching Latin American soap operas on TV. These shows had Spanish audio and Romanian subtitles. Because she was a child, like many others, she just picked the language up naturally from hearing it – but she never learned how to speak Spanish.

    She was raised by her environment and close family, but while she watched those shows she learned about a completely different language.

    This story is not about Romanian and Spanish, or a (perhaps rare) case of somebody learning a language by accident. I understood the meaning of this story much later, when I looked back at my first years with a computer.

    A kid with a computer

    Your parents and teachers are just some of your influences. You have also been raised, in a way, by Disney, Hollywood, and TV series. I was, like many others, raised by the Valley.

    As a kid I would play video games and enjoy the quirky humor from LucasArts. Then my father would set up QBasic for me and help me get started writing my own games. I'd hop onto IRC to learn and share with others. I'd play online and meet like-minded people, and learn their language and style by imitation. I'd read and learn about just about everything fun I could get my hands on. I'd read Slashdot and embrace the open-source, anti-Microsoft ideas. I'd learn web design and try to build something like others were building out there. And I'd dream of growing up and having a game development startup.

    Unknowingly, I was growing up into a culture that wasn't my immediate environment. And I felt incredibly at home. That doesn't mean I got disconnected from my surroundings; I just also had a connection with that new world, with its trends and stories and memes.

    The pilgrims

    I was 24 the first time I went to the Valley. As I walked through San Francisco and met people there, I couldn't help having a déjà vu feeling. It was like I had already been there a long time ago, like the city had been waiting for me all those years.

    As Nolan Bushnell gave me a ride in his car through the city, I was thinking about this: all the kids who grew up in their little towns hacking on their computers would someday make their pilgrimage and meet each other here. The Valley had raised us all, and we were finally coming back home.


  28. #28
    Wise Old Owl sygeek's Avatar
    Join Date
    Apr 2011
    Location
    Lucknow
    Posts
    1,845

    Default Re: The Geeks Daily

    By Scott Hanselman
    Spoiler:
    pho·ny also pho·ney (fō'nē) adj. pho·ni·er, pho·ni·est

    1.
    a. Not genuine or real; counterfeit: a phony credit card.
    b. False; spurious: a phony name.

    2. Not honest or truthful; deceptive: a phony excuse.

    3.
    a. Insincere or hypocritical.
    b. Giving a false impression of truth or authenticity; specious.


    Along with my regular job at Microsoft I also mentor a number of developers and program managers. I spoke to a young man recently who is extremely thoughtful and talented and he confessed he was having a crisis of confidence. He was getting stuck on things he didn't think he should be getting stuck on, not moving projects forward, and it was starting to seep into his regular life.

    He said:

    "Deep down know I’m ok. Programming since 13, graduated top of CS degree, got into Microsoft – but [I feel like I'm] an imposter."

    I told him, straight up, You Are Not Alone.

    For example, I've got 30 domains and I've only done something awesome with 3 of them. Sometimes when I log into my DNS manager I just see 27 failures. I think to myself, there's 27 potential businesses, 27 potential cool open source projects just languishing. If you knew anything you'd have made those happen. What a phony.

    I hit Zero Email a week ago, now I'm at 122 today in my Inbox and it's stressing me out. And I teach people how to manage their inboxes. What a phony.

    When I was 21 I was untouchable. I thought I was a gift to the world and you couldn't tell me anything. The older I get the more I realize that I'm just never going to get it all, and I don't think as fast as I used to. What a phony.

    I try to learn a new language each year and be a Polyglot Programmer but I can feel F# leaking out of my head as I type this and I still can't get my head around really poetic idiomatic Ruby. What a phony.

    I used to speak Spanish really well and I still study Zulu with my wife but I spoke to a native Spanish speaker today and realize I'm lucky if I can order a burrito. I've all but forgotten my years of Amharic. My Arabic, Hindi and Chinese have atrophied into catch phrases at this point. What a phony. (Clarification: This one is not intended as a humblebrag. I was a linguist and languages were part of my identity and I'm losing that and it makes me sad.)

    But here's the thing. We all feel like phonies sometimes. We are all phonies. That's how we grow. We get into situations that are just a little more than we can handle, or we get in a little over our heads. Then we can handle them, and we aren't phonies, and we move on to the next challenge.

    The idea of the Imposter Syndrome is not a new one.

    Despite external evidence of their competence, those with the syndrome remain convinced that they are frauds and do not deserve the success they have achieved. Proof of success is dismissed as luck, timing, or as a result of deceiving others into thinking they are more intelligent and competent than they believe themselves to be.

    The opposite of this is even more interesting, the Dunning-Kruger effect. You may have had a manager or two with this issue.

    The Dunning–Kruger effect is a cognitive bias in which unskilled people make poor decisions and reach erroneous conclusions, but their incompetence denies them the metacognitive ability to recognize their mistakes.

    It's a great read for a Wikipedia article, but here's the best line and the one you should remember.

    ...people with true ability tended to underestimate their relative competence.

    I got an email from a podcast listener a few years ago. I remembered it when writing this post, found it in the archives and I'm including some of it here with emphasis mine.

    I am a regular listener to your podcast and have great respect for you. With that in mind, I was quite shocked to hear you say on a recent podcast, "Everyone is lucky to have a job" and imply that you include yourself in this sentiment.

    I have heard developers much lesser than your stature indicate a much more healthy (and accurate) attitude that they feel they are good enough that they can get a job whenever they want and so it's not worth letting their current job cause them stress. Do you seriously think that you would have a hard time getting a job or for that matter starting your own business? If you do, you have a self-image problem that you should seriously get help with.

    But it's actually not you I'm really concerned about... it's your influence on your listeners. If they hear that you are worried about their job, they may be influenced to feel that surely they should be worried.


    I really appreciated what this listener said and emailed him so. Perhaps my attitude is a Western Cultural thing, or a uniquely American one. I'd be interested in what you think, Dear Non-US Reader. I maintain that most of us feel this way sometimes. Perhaps we're unable to admit it. When I see programmers with blog titles like "I'm a freaking ninja" or "bad ass world's greatest programmer" I honestly wonder if they are delusional or psychotic. Maybe they just aren't very humble.

    I stand by my original statement that I feel like a phony sometimes. Sometimes I joke, "Hey, it's a good day, my badge still works" or I answer "How are you?" with "I'm still working." I do that because it's true. I'm happy to have a job, while I could certainly work somewhere else. Do I need to work at Microsoft? Of course not. I could probably work anywhere if I put my mind to it, even the IT department at Little Debbie Snack Cakes. I use insecurity as a motivator to achieve and continue teaching.

    I asked some friends if they felt this way and here's some of what they said.

    • Totally! Not. I've worked hard to develop and hone my craft, I try to be innovative, and deliver results.
    • Plenty of times! Most recently I started a new job where I've been doing a lot of work in a language I'm rusty in and all the "Woot I've been doing 10 years worth of X language" doesn't mean jack. Very eye opening, very humbling, very refreshing
    • Quite often actually, especially on sites like stack overflow. It can be pretty intimidating and demotivating at times. Getting started in open source as well. I usually get over it and just tell myself that I just haven't encountered a particular topic before so I'm not an expert at it yet. I then dive in and learn all I can about it.
    • I always feel like a phony just biding my time until I'm found out. It definitely motivates me to excel further, hoping to outrun that sensation that I'm going to be called out for something I can't do
    • Phony? I don't. If anything, I wish I was doing more stuff on a grander scale. But I'm content with where I am now (entrepreneurship and teaching).
    • I think you are only a phony when you reflect your past work and don't feel comfortable about your own efforts and achievements.
    • Hell, no. I work my ass off. I own up to what I don't know, admit my mistakes, give credit freely to other when it's due and spend a lot of time always trying to learn more. I never feel like a phony.
    • Quite often. I don't truly think I'm a phony, but certainly there are crises of confidence that happen... particularly when I get stuck on something and start thrashing.


    There are some folks who totally have self-confidence. Of the comment sample above, there are three "I don't feel like a phony" comments. But check this out: two of those folks aren't in IT. Perhaps IT people are more likely to have low self-confidence?

    The important thing is to recognize this: If you are reading this or any blog, writing a blog of your own, or working in IT, you are probably in the top 1% of the wealth in the world. It may not feel like it, but you are very fortunate and likely very skilled. There are a thousand reasons why you are where you are and your self-confidence and ability are just one factor. It's OK to feel like a phony sometimes. It's healthy if it moves you forward.

    I'll leave you with this wonderful comment from Dave Ward:

    I think the more you know, the more you realize just how much you don't know. So paradoxically, the deeper down the rabbit hole you go, the more you might tend to fixate on the growing collection of unlearned peripheral concepts that you become conscious of along the way.

    That can manifest itself as feelings of fraudulence when people are calling you a "guru" or "expert" while you're internally overwhelmed by the ever-expanding volumes of things you're learning that you don't know.

    However, I think it's important to tamp those insecurities down and continue on with confidence enough to continue learning. After all, you've got the advantage of having this long list of things you know you don't know, whereas most people haven't even taken the time to uncover that treasure map yet. What's more, no one else has it all figured out either. We're all just fumbling around in the adjacent possible, grasping at whatever good ideas and understanding we can manage to wrap our heads around.


    Tell me your stories in the comments. We're also discussing this on this Google+ thread.

    And remember, "Fake it til' you make it."

  29. #29
    Guess Who's Back Who's Avatar
    Join Date
    May 2004
    Location
    Bangalore
    Posts
    352

    Default Re: The Geeks Daily

    I am sticking this thread as I personally feel the articles here are very good. I also request other people to contribute here. I would have moved it to the OSS & Programming section, but I think the articles fall into a broader category, so Community Discussion seems fine at the moment. Feel free to make any suggestions. Thank you.
    Stay Hungry. Stay Foolish.

  30. #30
    Whompy Whomperson Nipun's Avatar
    Join Date
    Mar 2011
    Location
    New Delhi
    Posts
    1,467

    Default Re: The Geeks Daily

    Quote Originally Posted by Who View Post
    I am sticking this thread as I personally feel the articles here are very good. I also request other people to contribute here. I would have moved it to the OSS & Programming section, but I think the articles fall into a broader category, so Community Discussion seems fine at the moment. Feel free to make any suggestions. Thank you.
    That's great!

    The articles are really very good, sygeek!
    Last edited by Nipun; 15-09-2011 at 09:25 AM.
